Build Cache Latency: 3 Common Causes and How We Solve Them

In a talk at Droidcon San Francisco, Bitrise Solutions Engineer Ben Boral addressed three causes of latency that affect build cache efficiency and how Bitrise overcomes these bottlenecks. The following blog post summarizes the key takeaways from Ben’s talk.

While continuous integration (CI) offers many benefits, it’s common for long-running builds to become a resource sink. Long build durations lead to extended feedback loops, degraded developer experience, and cascading CI infrastructure costs.

For developers using build systems like Gradle or Bazel, implementing a remote build cache is one approach to reducing long build durations. However, simply connecting a build cache isn’t necessarily enough; the key lies in its efficiency.

If you’re new to build caching, then we suggest you first check out our guide to build caching. In summary, a build cache is designed to save time by storing build and test outputs. During subsequent builds, the previously saved outputs can be reused to avoid redundant processes. However, the effectiveness of a build cache hinges on overcoming several performance bottlenecks.
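For Gradle users, enabling a remote build cache is a small amount of configuration. Below is a minimal sketch in `settings.gradle.kts`; the URL and environment variable names are placeholders, not a real endpoint. (Bazel users would pass the analogous `--remote_cache` flag instead.)

```kotlin
// settings.gradle.kts — illustrative remote build cache configuration.
// The URL and credential variables below are placeholders.
buildCache {
    local {
        isEnabled = true
    }
    remote<HttpBuildCache> {
        url = uri("https://cache.example.com/cache/")
        // Typically only CI builds push new entries; local builds just read.
        isPush = System.getenv("CI") != null
        credentials {
            username = System.getenv("CACHE_USER")
            password = System.getenv("CACHE_PASSWORD")
        }
    }
}
```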

Geographic Distance

One of the leading bottlenecks impacting build cache performance is the physical distance between your cache’s storage location and where your builds are running. For CI builds, this refers to the distance between your CI runners’ location and the cache storage. For local builds, it pertains to the proximity of individual developers’ machines to the cache storage.

In a real-world case study, Gradle studied the effect of geographic distance and latency on the Cash App team’s build durations. With a build cache hosted in US West, Gradle found that West Coast Android developers realized an average saving of 3 minutes and 10 seconds per build. East Coast developers saw a marginal improvement over normal build durations, with an average saving of 38 seconds. Finally, developers in Australia faced a negative impact on build durations, showing that a poorly distributed cache can, in fact, harm productivity.

Developers building their apps on Bitrise CI with Bitrise Build Cache avoid this latency because our cache storage is physically colocated with the macOS and Linux runners that execute their builds. A ping’s round trip between continents takes roughly 150 ms; inside a data center, it takes only about 0.5 ms.

Teams using Bitrise Build Cache to build locally or with other CI providers, like CircleCI, benefit from our globally distributed cache storage. With cache replicas located in most major metropolitan areas, developers can trust that network latency will be kept to a minimum, ensuring that build caching is a benefit – not a detriment – to their builds.
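Those round-trip times compound quickly once a build fetches thousands of cache entries. The sketch below is a worst-case back-of-envelope calculation; the entry count and the assumption of one sequential round trip per lookup are illustrative (real cache clients issue many requests concurrently).

```kotlin
// Worst-case cumulative network overhead for N sequential cache lookups.
fun totalRttMs(lookups: Int, rttMs: Double): Double = lookups * rttMs

fun main() {
    val lookups = 2_000                                // hypothetical entries fetched in one build
    val intercontinental = totalRttMs(lookups, 150.0)  // ~150 ms RTT between continents
    val colocated = totalRttMs(lookups, 0.5)           // ~0.5 ms RTT inside a data center
    println("Intercontinental: ${intercontinental / 1000} s") // 300.0 s
    println("Colocated:        ${colocated / 1000} s")        // 1.0 s
}
```

Even allowing for heavy request concurrency, the gap between the two scenarios is large enough to decide whether caching helps or hurts.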

Cache Entry Read Speed

The second critical bottleneck in optimizing build cache performance is the read speed of cache entries. This determines how quickly your build process can access and use cached outputs. Slower read speeds, such as those from traditional hard drives, can negate the time-saving benefits expected from using a build cache.

To address this bottleneck, we have implemented high-performance storage solutions:

  1. In-memory storage for hot entries
    Recognizing that certain cache entries are accessed more frequently, we use in-memory storage for these hot entries. This approach allows for the fastest possible read speeds. In-memory storage is particularly effective because it bypasses the slower read speeds of disk-based storage systems, offering near-instantaneous access to cache entries.
  2. High-performance SSDs for other entries
    For less frequently accessed cache entries, we use high-performance Solid State Drives (SSDs). These drives offer significantly faster read speeds than traditional hard drives, improving the overall efficiency of the build cache. The use of SSDs ensures that even the less commonly accessed cache entries can be retrieved quickly.
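The two-tier read path above can be sketched as a bounded in-memory map backed by a slower store. Everything here is illustrative, not Bitrise’s actual implementation: the `TieredCache` name, capacity, and the plain map standing in for SSD reads are all assumptions.

```kotlin
// Sketch of a two-tier cache read path: a bounded, LRU-evicting in-memory map
// for "hot" entries, falling back to a slower backing store (standing in for SSD).
class TieredCache(
    private val hotCapacity: Int,
    private val backing: MutableMap<String, ByteArray>, // stand-in for SSD storage
) {
    // LinkedHashMap with accessOrder = true gives LRU iteration order.
    private val hot = object : LinkedHashMap<String, ByteArray>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<String, ByteArray>) =
            size > hotCapacity // evict the least recently used entry when over capacity
    }

    fun get(key: String): ByteArray? {
        hot[key]?.let { return it }              // fast path: in-memory hit
        val value = backing[key] ?: return null  // slow path: read from backing storage
        hot[key] = value                         // promote to the hot tier for next time
        return value
    }
}
```

A first `get` pays the slow-path cost; repeat reads of the same entry are served from memory.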

In addition to employing high-performance storage, optimizing data management is important. By compressing data at rest, we minimize the amount of data that needs to be read from storage, further accelerating access times.
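Compression at rest pays off because typical build outputs (class files, resource bundles, logs) are highly compressible. A minimal illustration using the JDK’s built-in GZIP support; the sample payload is an assumption chosen only to demonstrate the size reduction:

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

// Compress a cache entry before writing it to storage, so fewer bytes
// need to be read back on a cache hit.
fun gzip(data: ByteArray): ByteArray {
    val out = ByteArrayOutputStream()
    GZIPOutputStream(out).use { it.write(data) }
    return out.toByteArray()
}

fun main() {
    // Repetitive text stands in for a compressible build output.
    val entry = "public final class Example {}".repeat(1_000).toByteArray()
    val compressed = gzip(entry)
    println("raw=${entry.size} bytes, compressed=${compressed.size} bytes")
}
```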

Network Protocol Efficiency

The third critical bottleneck in optimizing build cache performance is the efficiency of the network protocols used. This impacts how quickly data is transferred between the build cache and the build environment. Inefficient protocols can add unnecessary latency and overhead, especially when handling the large number of small transactions common to build caching.

With Bitrise Build Cache, we leverage HTTP/2, which reduces this overhead by multiplexing many requests over a single TCP connection (fewer handshakes to slow things down) and by compressing request and response headers.
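As a small illustration, the JDK’s built-in HTTP client can be asked to prefer HTTP/2, letting many small cache requests share one multiplexed connection rather than opening a handshake per request. The function name and setup below are illustrative; Gradle and Bazel cache clients manage their own connections.

```kotlin
import java.net.http.HttpClient

// Build a client that negotiates HTTP/2 where the server supports it,
// falling back to HTTP/1.1 otherwise.
fun cacheHttpClient(): HttpClient =
    HttpClient.newBuilder()
        .version(HttpClient.Version.HTTP_2)
        .build()

fun main() {
    println(cacheHttpClient().version()) // HTTP_2
}
```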

Conclusion

By addressing these three major build caching bottlenecks, we deliver a highly performant caching platform for developers building their apps with Bitrise CI and other CI providers. To learn more about Bitrise Build Cache, explore our documentation and consider starting a free Bitrise Build Cache trial to test drive it for yourself.