Using Redis at Pinterest for Billions of Relationships

Pinterest has been one of those amazing Silicon Valley stories—PC-based usage grew 1,047% in 2012, mobile usage grew 1,698%, and the site drew 53.3 million unique visitors in March of this year. People have followed billions of things on Pinterest—a complex engineering problem, since almost every screen of the user interface runs a query to check whether a board or user is already followed. This happens to be the wheelhouse for Redis.

Over the past couple of years, Pinterest has blossomed into one of the leading citizens of the media world, social or otherwise. If they haven’t gotten your attention yet, here are a few more facts:

  • They drive more referral traffic than Google+, YouTube, and LinkedIn combined.
  • They are considered the third most popular social network after Facebook and Twitter.
  • Shoppers referred by Pinterest are 10 percent more likely to follow through with a purchase than visitors from other social networking sites.

As you can imagine, we are talking about a high-scale site whose unique user experience places heavy demands on the IT infrastructure.


Optimizing the User Experience without a Cache

Recently, Abhi Khune, Engineering Manager at Pinterest, published a great article about the demands of the user experience and the use of Redis. Even savvy web application builders wouldn’t necessarily notice these features without analyzing the site in detail, but they are there. First, there is the previously mentioned check for followers on each screen. In addition, in many places the UI shows accurate counts and paginated lists of a user’s followers and follows, as well as a board’s followers. Performing queries like these on every mouse-click requires a high-performance architecture.

Pinterest’s software engineers and architects already had MySQL and Memcache in place, but the caching solution had reached its limits, and serving users well meant expanding the cache. In fact, the engineering team found that a cache really performed only if a user’s entire sub-graph was in it. So whoever was using the system needed to be in cache, and this ultimately led to caching the entire graph. There was a subtler problem too: for one of the most common queries, “does user A follow user B,” the answer is often no—but a plain cache treated that as a miss and fell through to the data store. Expanding the cache meant they needed a new approach.
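The key insight above is worth making concrete: once a user’s complete sub-graph is cached, a negative membership check is an authoritative “no” rather than a cache miss that falls through to the database. A minimal sketch (plain Python sets as a stand-in for the cache; names and data are invented for illustration):

```python
# In-memory stand-in for the cached graph: user_id -> set of followed user_ids.
# With the *complete* sub-graph cached, absence is a definitive answer.
follow_cache = {
    "user_a": {"user_b", "user_c"},
    "user_b": {"user_c"},
}

def follows(cache, follower, followee):
    """Answer 'does follower follow followee?' from the cache alone."""
    followed = cache.get(follower)
    if followed is None:
        # Only an *uncached user* forces a load; a missing edge does not.
        raise LookupError("user not in cache; load their sub-graph first")
    return followee in followed

assert follows(follow_cache, "user_a", "user_b") is True
assert follows(follow_cache, "user_a", "user_x") is False  # definitive "no", no DB hit
```

This is why a partial cache didn’t pay off: without the whole sub-graph, every “no” answer would still require a trip to MySQL.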

Ultimately, the team decided to store the entire graph in Redis to serve its many lists. Here Redis begins to show how it is truly different, acting almost like an in-memory operational data store.

Storing lots of Pinterest lists in Redis

Next time you log in to Pinterest, remember that Redis is running in the background and storing several types of lists for you as a user:

  • A list of users you follow
  • A list of boards (and their related users) you follow
  • A list of your followers
  • A list of people who follow your boards
  • A list of boards you follow
  • A list of boards you unfollowed after following a user
  • The followers and unfollowers of each board

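Lists like the ones above map naturally onto Redis sets, with one key per user per list. A sketch of how such a schema might look—the key names are invented for illustration, and a tiny in-memory stand-in implements just the Redis commands used so the sketch runs without a server (with redis-py you would pass a real `redis.Redis(...)` client instead):

```python
class FakeRedis:
    """Minimal stand-in supporting SADD / SREM / SISMEMBER / SCARD."""
    def __init__(self):
        self.store = {}
    def sadd(self, key, member):
        s = self.store.setdefault(key, set())
        added = member not in s
        s.add(member)
        return int(added)
    def srem(self, key, member):
        s = self.store.get(key, set())
        if member in s:
            s.remove(member)
            return 1
        return 0
    def sismember(self, key, member):
        return member in self.store.get(key, set())
    def scard(self, key):
        return len(self.store.get(key, set()))

def follow_user(r, follower_id, followee_id):
    # Maintain both directions so follower lists *and* following lists
    # can be read without scanning the whole graph.
    r.sadd(f"following:{follower_id}", followee_id)
    r.sadd(f"followers:{followee_id}", follower_id)

def unfollow_user(r, follower_id, followee_id):
    r.srem(f"following:{follower_id}", followee_id)
    r.srem(f"followers:{followee_id}", follower_id)

r = FakeRedis()
follow_user(r, 1, 2)
follow_user(r, 3, 2)
assert r.sismember("followers:2", 1)   # "does 1 follow 2?" via the reverse set
assert r.scard("followers:2") == 2     # accurate follower count for the UI
unfollow_user(r, 1, 2)
assert not r.sismember("followers:2", 1)
```

Keeping both directions of each relationship makes the two hottest UI queries—membership checks and follower counts—single Redis commands.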
Redis stores the above lists for each of its 70 million users—it essentially stores the entire follower graph, sharded by user ID. As the types of data above suggest, what looks like analytical summary information is stored and served more like a transactional system. Each of Pinterest’s 70 million users is limited to 100,000 likes—rough math shows that if the average person liked 25 boards, there would be 1.75 billion relationships between users and boards; if the average were 100 likes, there would be 7 billion. And this is a core feature—the relationships keep growing every day the system is used.

Redis Architecture and Operations at Pinterest

According to one of its founders, Pinterest started writing the application in Python with a modified Django, and continued that way to 18 million users and 410 terabytes of user data. While several data stores hold the full data set, the engineers at Pinterest store the lists above by splitting the user ID space into 8,192 virtual shards, where each shard maps to a Redis DB, multiple Redis DBs run per instance, and multiple Redis instances run per machine. They run multiple single-threaded Redis instances per machine to fully utilize the CPU cores.
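The shard layout described above is simple arithmetic: a user ID deterministically picks one of 8,192 virtual shards, and each shard lives in exactly one Redis DB. A back-of-the-envelope sketch—the 8,192 shard count comes from the article, but the DBs-per-instance and instances-per-machine numbers here are illustrative assumptions, not Pinterest’s published values:

```python
NUM_SHARDS = 8192          # fixed virtual shard count (from the article)
DBS_PER_INSTANCE = 32      # assumption, for illustration only
INSTANCES_PER_MACHINE = 8  # assumption: e.g. one instance per core

def locate(user_id):
    """Map a user ID to (machine, instance, db, shard)."""
    shard = user_id % NUM_SHARDS
    db = shard % DBS_PER_INSTANCE
    instance_global = shard // DBS_PER_INSTANCE
    machine = instance_global // INSTANCES_PER_MACHINE
    instance = instance_global % INSTANCES_PER_MACHINE
    return machine, instance, db, shard

# All of a user's keys land in one Redis DB, so a whole shard can be
# migrated to another machine without re-hashing any user IDs.
machine, instance, db, shard = locate(181116)
assert shard == 181116 % NUM_SHARDS
assert 0 <= db < DBS_PER_INSTANCE
assert 0 <= instance < INSTANCES_PER_MACHINE
```

Fixing the virtual shard count up front is what makes the later scaling moves cheap: capacity grows by relocating whole DBs, never by rehashing keys.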

While the entire data set lives in memory, Redis appends every write operation to a log on Amazon EBS, flushed to disk once per second. Scaling is accomplished in two ways: 1) at 50% utilization, half of the Redis instances on a machine are moved to a new machine by promoting a slave to master, or 2) nodes and shards are expanded. The overall Redis cluster runs in a master-slave configuration where the slaves are hot backups. If a master fails, the slave takes its place and a new slave is added, with ZooKeeper controlling the process. They also run an hourly BGSAVE—a Redis operation that snapshots the DB in the background—to more permanent storage on Amazon S3; Pinterest uses this data for MapReduce and analytics jobs.
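The persistence setup described above corresponds to standard Redis configuration: an append-only file synced every second, plus externally triggered background snapshots. A plausible redis.conf fragment (the specific values and paths are assumptions for illustration, not Pinterest’s published config):

```
# Append every write to the AOF; fsync to EBS once per second
appendonly yes
appendfsync everysec

# Disable automatic RDB save rules; snapshots are triggered hourly
# from outside (e.g. a cron job issuing `redis-cli BGSAVE`, then
# copying the resulting dump.rdb up to S3 for MapReduce jobs)
save ""
```

With `everysec`, a crash loses at most about one second of writes—a common trade-off between durability and write throughput.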

As you can see from this summary, caches and databases have limits, and Abhi’s full article provides much greater detail on the reasons for and approaches to using Redis to scale Pinterest. In the future, we plan to post about how Pinterest uses RabbitMQ!

Learn more about Redis:

  • There are Redis clients for ActionScript, C, C#, C++, Clojure, Common Lisp, D, Dart, Emacs Lisp, Erlang, Fancy, Go, Haskell, haXe, IO, Java, Lua, Node.js, Objective C, Perl, PHP, Pure Data, Python, Ruby, Scala, Scheme, Smalltalk, and Tcl.
  • How Redis is used at Viacom
  • How Redis is used at Twitter
  • How Redis is used at Superfeedr



5 comments on “Using Redis at Pinterest for Billions of Relationships”

  1. Martin Cozzi said:

    Nice article, however, Redis being an in memory data store, and Pinterest being hosted on EC2, this sounds like an expensive solution to the problem.

    Does every single user actually need to be in a warm cache? How about users who haven’t used their accounts in a while? Redis is difficult to scale horizontally—you mention having to promote slaves with more capacity, and having to shard Redis based on user ID.

    I’m curious to know whether they looked into a NoSQL solution for this; Cassandra sounds like a good fit. It abstracts the sharding, allows you to grow your cluster easily, lets the most-used keys (active users) be stored in Cassandra’s key cache, etc.

    Just curious to know what alternatives to Redis they looked at and why they picked it versus another solution.

  2. We store nearly 1 billion users’ 60 billion relationships here at weibo.com, just FYI.
