Twitter says open source tools prevent service disruptions

To prevent disruptions and scale up its service while keeping costs down, Twitter has had to drastically change its core infrastructure, adopting open source tools along the way.

Twitter processes about 6,000 messages a second, adding up to more than 500 million messages per day, or about 3.5 billion a week. At one peak, Twitter handled a record 143,000 messages in a single second during a Japanese television airing of the movie "Castle in the Sky" earlier this year, said Chris Aniszczyk, head of open source computing at Twitter, during LinuxCon Europe in Edinburgh on Monday.
Handling this number of messages has been challenging for the company, Aniszczyk said. Twitter started out in 2006 using a monolithic Ruby on Rails application rather than a distributed platform. That worked fine at the time because the service wasn't that busy, but the setup led to growing pains in 2008, when a lot of fail whales -- the term Twitter uses for service disruptions -- started happening.
Twitter's engineers were able to keep up by essentially applying Band-Aids, Aniszczyk said. Things got really problematic during the 2010 football World Cup, which was both a low point and a high point for Twitter: while the service broke 3,000 messages per second, it struggled to handle the volume of messages being sent.
"It was painful because from an engineering perspective it was all hands on deck," Aniszczyk said. Anytime anyone scored a goal or got red or a yellow card the site would be down, he said.
So things needed to change. After analyzing the situation, Twitter determined that the root problem was a single code base handling everything from managing raw database information to rendering the site graphically. "What we were essentially doing to keep things going was throwing a lot of machines at the problem. Not the best solution because that gets expensive," Aniszczyk said.
Rather than improving the system and rolling out new features, Twitter's engineers went on "whale hunting expeditions" to solve specific failures, which wasn't really what the company needed to do.
 
Twitter ultimately decided it was time to invest in new infrastructure and eventually doubled down on the JVM (Java Virtual Machine). That allowed the company to break the monolithic, single application into separate services, such as one that specifically handles messages, Aniszczyk said. Engineering is now organized into mostly self-contained teams that can operate independently.
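
The talk, as reported here, didn't name the specific JVM libraries involved, but Twitter's open source Finagle RPC library is one of the tools the company has released around this shift, and it illustrates the pattern. The sketch below is a minimal, hypothetical message-lookup service written with Finagle's HTTP API; the service name, port, and data are invented for illustration, not Twitter's actual code.

    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Request, Response, Status}
    import com.twitter.util.{Await, Future}

    // A tiny self-contained service that owns one responsibility --
    // looking up a message by id -- instead of one monolithic app
    // doing everything from database access to page rendering.
    object MessageService extends App {
      val messages = Map("1" -> "hello world")

      val service = new Service[Request, Response] {
        def apply(req: Request): Future[Response] = {
          val rep = Response(req.version, Status.Ok)
          rep.contentString = messages.getOrElse(req.getParam("id"), "not found")
          Future.value(rep)
        }
      }

      // Serve over HTTP; other teams' services call this endpoint
      // rather than reaching into a shared code base.
      val server = Http.serve(":8080", service)
      Await.ready(server)
    }

Because each service sits behind a network interface like this one, a team can deploy, scale, or rewrite it without coordinating a change to one giant application.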
 
To cut costs and reduce the number of machines it uses, Twitter also turned to Apache Mesos, which originally started as a research effort at the University of California, Berkeley. Mesos is a cluster manager that allows users to run multiple processes on the same machine so hardware can be used more efficiently, saving money, Aniszczyk said.
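
In practice, Mesos works by offering spare machine resources to framework schedulers rather than placing processes directly, so the toy sketch below is not the Mesos API. It only illustrates, under invented names and sizes, the economics a cluster manager exploits: packing several services onto one machine's spare CPU and memory instead of dedicating a box to each process.

    // Hypothetical illustration of why a cluster manager saves machines:
    // first-fit packing of tasks onto shared machines by CPU and memory.
    // This is NOT the Mesos API; Mesos offers resources to framework
    // schedulers, which decide where their own tasks run.
    case class Task(name: String, cpus: Double, memGb: Double)

    case class Machine(id: Int, var freeCpus: Double, var freeMemGb: Double) {
      def fits(t: Task): Boolean = t.cpus <= freeCpus && t.memGb <= freeMemGb
      def place(t: Task): Unit = { freeCpus -= t.cpus; freeMemGb -= t.memGb }
    }

    object PackingDemo extends App {
      // One process per machine would need four machines for these tasks.
      val tasks = Seq(
        Task("message-service", 2.0, 4.0),
        Task("timeline-service", 1.0, 2.0),
        Task("search-service", 1.0, 2.0),
        Task("analytics-job", 4.0, 8.0)
      )

      // With shared machines (8 CPUs, 16 GB each), first-fit packing
      // places all four tasks onto a single machine.
      val machines = scala.collection.mutable.ArrayBuffer.empty[Machine]
      for (t <- tasks) {
        machines.find(_.fits(t)) match {
          case Some(m) => m.place(t)
          case None =>
            val m = Machine(machines.size, 8.0, 16.0)
            m.place(t)
            machines += m
        }
      }
      println(s"${tasks.size} tasks packed onto ${machines.size} machine(s)")
    }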

