Player segmentation using bitmap data structures

At King, we need to group our users by different criteria in order to provide them with a better experience. For example, ‘all users that connect from France’ (so we can address them in their own language) or ‘all players that are on level 31’.

We use this information at runtime to make fast decisions about what experience to offer the user. We need this information already materialised in a form that can be queried hundreds of thousands of times a second. This is usually achieved by having many database nodes, each one storing the information for a particular subset of users, in what’s called user sharding.

But a database is a huge infrastructure for such a modest requirement as asking ‘is this user in the segment of U.S. users?’ Do we need foreign keys, primary keys, permissions, joins, subselects, execution plans, data types, locks, …?

In this post, we’ll describe the advantages of using bitmap indices instead of RDBMS or NoSQL databases when dealing with user segmentation. We’ll show how we can use the RoaringBitmaps implementation to segment users in a lightweight and intuitive way.

Bitmaps as an alternative

The bitmap, or bitset, is a data structure designed to answer exactly the kind of yes/no questions described in the introduction above.

For a given segment, we allocate a bit for every possible user ID and set it to ‘1’ whenever the user is part of the segment and to ‘0’ when they aren’t.

So if you had one billion possible user IDs, each segment would take 120MB of space (one billion bits divided by 8, then by 1024 and by 1024 again). Below, we see the difference in size between a Set and a Bitmap, depending on the number of users in them:
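The bit-per-ID layout above can be sketched in a few lines. This is a minimal, uncompressed version using a plain `bytearray`; the `Segment` class name is illustrative, not from King’s codebase, and a real deployment would use a compressed implementation such as RoaringBitmaps.

```python
class Segment:
    """One bit per possible user ID: 1 = member, 0 = not a member."""

    def __init__(self, max_id):
        # round up to whole bytes
        self.bits = bytearray((max_id + 7) // 8)

    def add(self, user_id):
        # byte index = id // 8, bit within the byte = id % 8
        self.bits[user_id // 8] |= 1 << (user_id % 8)

    def contains(self, user_id):
        return bool(self.bits[user_id // 8] & (1 << (user_id % 8)))

# one billion possible IDs -> roughly the 120MB quoted above
size_mb = (1_000_000_000 // 8) / 1024 / 1024
print(round(size_mb))  # 119 (i.e. ~120MB)

seg = Segment(1000)
seg.add(31)            # 'all players that are on level 31'
print(seg.contains(31), seg.contains(32))  # True False
```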

The chart above shows how storing IDs in a compressed bitmap is a good choice even when compared with a primitive-type, high-performance integer set like hppc’s IntSet. After 70 million IDs, the bitmap doesn’t take any extra memory space, as we’ve reached the maximum 120MB (for a range of 1 billion IDs).  The bitmap implementation used in this example is RoaringBitmaps.

Using RoaringBitmaps for segmentation has some other remarkable advantages as well:

  • The access time is faster than in a Map or Set, as membership only requires a division by 8 (to find the byte) and a check of one bit within it. For a 600M-user segment, we can check a user’s membership in 500ns. By using compressed bitmaps, we can store 3.8 billion user assignments to groups while requiring only 1.8 bytes per assignment. This is impressive when you take into account that a long ID by itself already takes 8 bytes.
  • If you have 10 different segments, you can very easily combine them with OR and AND operators. The CPU will combine millions of bits in a few µs, and you’ll be able to simulate that you have 2^1024 materialised segments for the same cost in disk and memory as the original 10 (2^1024 is the number of distinct Boolean functions of 10 variables).
  • We can check whether a user is a member of a segment and list all users in a segment (by iterating through all the bits in the bitmap).
    In an RDBMS or NoSQL database, you must store the information by segment ID or user ID. If you want both types of access, you’ll have to make a reverse index, actually storing the information twice.
  • Creating a snapshot of a 600M-user segment can be done in a couple of seconds. It’s just a matter of copying a file. If you want to try that in a database sharded by user, you’ll have to query for all the users (to retrieve the original information) and then run 600M updates (to create the snapshot).
    Having a snapshot of a segment is an important feature from the product point of view.
  • You can add more segmentation dimensions without having to re-deploy or touch a single line of code. Just add the segment through a web UI and you can start combining it with all the others.
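The OR/AND combination and the iteration advantage above can be illustrated with a toy example. Here Python’s arbitrary-precision ints stand in for bitmaps (this is a sketch; RoaringBitmaps exposes the same operations as `and()`/`or()` on real compressed bitmaps, and the segment contents are made up):

```python
def to_bitmap(ids):
    """Build a bitmap (as a big int) from a list of user IDs."""
    bm = 0
    for i in ids:
        bm |= 1 << i
    return bm

france = to_bitmap([1, 5, 9])     # 'users that connect from France'
level_31 = to_bitmap([5, 9, 42])  # 'players that are on level 31'

both = france & level_31          # AND: French players on level 31
either = france | level_31        # OR: members of at least one segment

def members(bm):
    """List all users in a segment by iterating through the set bits."""
    out, i = [], 0
    while bm:
        if bm & 1:
            out.append(i)
        bm >>= 1
        i += 1
    return out

print(members(both))    # [5, 9]
print(members(either))  # [1, 5, 9, 42]
```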

Are bitmaps widely used?

Yes. Bitmaps have been around for decades and, together with B-Tree indices, they are used extensively in databases. It’s just that we rarely use them directly; instead, we create indices on our data and the database uses bitmaps under the hood.

So why not index the data in the first place and do so where the original data resides?

  • The original datastore might not be particularly suited to answering hundreds of thousands of queries a second in a sub-millisecond time.
  • You have plenty of freedom to manipulate data in databases, but not so much when it comes to indices. You can require your data to be indexed, but you cannot give too many instructions on how you want that index to be configured.
  • By separating indices and data, you can index information coming from several different database sources without having to replicate all the data again in the new central database or force all other systems in the company to move to the new shared database.

What are the drawbacks?

The drawback of bitmaps is that they cannot store associated key information. But the goal we are pursuing here is not implementing a key-value store but a segmentation engine. We are not interested in complete and detailed user login records; we just want to know whether the user connected last week.

If you need detailed information about a user in real time, then you’re facing a different problem and you’d probably be better off googling ‘key-value store databases’. But keep in mind that if you want to look for all users in a particular segment, you also need to create an index for that and thus have the information stored twice (once as key-lookup data itself and then in the form of a reversed index).

Another important drawback is that you can only use bitmaps if your user IDs are numerical and in more or less limited ranges.

The IDs need to be numerical because if a user has the ID ‘2438954025’, you have to set the bit at exactly position ‘2438954025’ in the bitmap to ‘1’. But if the user ID is a non-numeric string, which bit are you going to activate!?

The user IDs must also be more or less in limited ranges or clusters because if they were randomly distributed in the 2^64 space, they would be so spread out that they could not be efficiently compressed.
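A toy run-length view makes the clustering argument concrete. This is not RoaringBitmaps’ actual format (it mixes arrays, bitmaps, and runs per container), just an illustration, with made-up ID lists, of why consecutive IDs compress well and randomly scattered ones don’t:

```python
def runs(sorted_ids):
    """Collapse a sorted list of IDs into (start, length) runs."""
    out = []
    for i in sorted_ids:
        if out and i == out[-1][0] + out[-1][1]:
            # extends the previous run by one
            out[-1] = (out[-1][0], out[-1][1] + 1)
        else:
            out.append((i, 1))
    return out

clustered = list(range(1000, 2000))               # 1000 consecutive IDs
scattered = [i * 7_919_033 for i in range(1000)]  # 1000 spread-out IDs

print(len(runs(clustered)))  # 1    -> one run covers all 1000 IDs
print(len(runs(scattered)))  # 1000 -> nothing to collapse
```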

Some figures

Our current bitmap indexing is implemented using RoaringBitmaps, King’s HttpClient, the Netty server, and Netflix’s Ribbon for client load balancing.

At the moment, we have 5 billion user assignments to segments. They occupy 9.2GB of space, so they can easily fit entirely in memory (1.86 bytes per assignment).

A 600M-user segment can be created or updated in 40 seconds (provided that you have the .csv file with the IDs). Once created, you can request it again at any point and stream its contents in 20 seconds.
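The ingestion step mentioned above can be sketched as follows. The one-ID-per-line .csv layout is an assumption, and the big-int bitmap is a stand-in for a real compressed one:

```python
import csv
import io

def load_segment(csv_file):
    """Build a segment bitmap from a .csv of user IDs, one per line."""
    bitmap = 0  # Python int as a simple stand-in for a compressed bitmap
    for row in csv.reader(csv_file):
        bitmap |= 1 << int(row[0])
    return bitmap

# simulate a small .csv file of member IDs
fake_csv = io.StringIO("17\n42\n1000000\n")
seg = load_segment(fake_csv)

print(bool(seg & (1 << 42)))  # True  -> user 42 is in the segment
print(bool(seg & (1 << 43)))  # False
```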

Our servers are handling 5500 req/sec, with a CPU load of under 2% and a mean service time of 340µs (from the caller’s clock). 99.99% of the requests are handled in under 5ms. The maximum throughput tested with a single server was 50k req/sec, with a CPU load of under 20% and a service time below one millisecond.

If memory becomes scarce (for example, if you have more than 20 or 40 billion user assignments in 64GB), there are several strategies available:

  • Vertical scaling. Just add more memory.
  • Memory map the segment files instead of loading them completely into physical memory. If you have more than 20 or 100 billion user assignments, it’s very unlikely that you are really using them all at the same time. Let the OS decide what to load in physical memory based on what memory pages are being requested. Just monitor the rate of page faults and free buffers regularly.
  • Add another group of replicas and shard by the segment ID. Spread the segments to several servers according to your chosen strategy (for example, by the ID hash, the server load, and so on). The sharding is already implemented and ready to be deployed when necessary.
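The memory-mapping strategy in the second bullet can be sketched with the standard `mmap` module: the OS pages in only the bytes actually touched by lookups, so the segment file never has to be fully loaded. The file layout (raw bit-per-ID bytes) and file name are assumptions for illustration:

```python
import mmap
import os
import tempfile

# write a tiny segment file in which user 1234 is a member
path = os.path.join(tempfile.mkdtemp(), "segment.bin")
bits = bytearray(1024)
bits[1234 // 8] |= 1 << (1234 % 8)
with open(path, "wb") as f:
    f.write(bits)

# map the file instead of reading it into memory
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

def contains(user_id):
    # same byte/bit arithmetic, but backed by OS page cache
    return bool(mm[user_id // 8] & (1 << (user_id % 8)))

is_member = contains(1234)
not_member = contains(1235)
print(is_member, not_member)  # True False
mm.close()
```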

This solution is very reliable. It only had 10 minutes of downtime last year. This is because what the server does is really simple (integer divisions and checking bits) and there’s very little room to screw it up.

Where to go from here

Currently, we are working on making the segments thread-safe. The implementation of roaring bitmaps is easy to parallelise because it divides the entire integer range into 65,536 different containers that are read and written independently. The performance difference between read-only access and mixed reads and writes is barely noticeable.

The difficult part is actually reflecting the changes on disk. We know which containers were updated and have to be re-written (dirty containers), but the problem is that some types of containers have a variable size. So if container X is updated and grows, either all the following containers have to be shifted or container X itself has to be moved.


User segmentation and data indexing are very closely related. By using the most recent and efficient compressed bitmap technologies, we can reduce the size of user segments (both on disk and in memory) and their access time. Segments stored as bitmaps are much easier to organise, store, transmit, query, and iterate.



About Galderic Punti

Galderic has been working at King for the last two years mainly as Backend Engineer for cross-promotions, A/B testing, and segmentation. Previously, he worked in the banking and online gaming industries. He is particularly interested in the profiling and optimisation of applications, machine learning, and network security. In his free time, he likes running for pleasure (no marathons please), watching good TV shows, and learning weird new science-related stuff.