Need to track hundreds of billions of data points? These Opower engineers’ Open Source software can help.
Last month, our colleague Greg Poirier wrote about Opower’s innovations with Open Source software, and specifically Wizardvan — an Opower open-source contribution that helps organize computational metrics on a large scale.
Today we’ll discuss a similar innovation that Opower developed and that everyone can now use, thanks to the nature of Open Source contributions. As our colleagues have previously mentioned, Open Source software is so valuable because the larger the community using it and working on it, the better it becomes.
THE CHALLENGE AND OPPORTUNITY OF LARGE DATA VOLUMES
Because Opower’s data volumes are scaling rapidly as we partner with more utilities and launch more programs, we have to take special care that our production systems and services always operate at optimal levels of performance. We measure that performance around the clock by tracking metrics on the type and amount of data flowing through our system.
Historically, we’ve utilized two traditional open-source systems to provide a backend for storing our metrics data: Graphite and OpenTSDB. Both claim to be scalable solutions for storing vast amounts of metrics data, but they take different approaches to the scalability challenge.
Schematics of two OSS approaches that store metrics data: Graphite and OpenTSDB
As the amount of metrics data in our systems has continued to grow, we’ve begun to approach the limits of what existing versions of Graphite can support. For example, Graphite runs on “whisper” — a fixed-size database where each individual node can only store so much data. In addition, whisper’s archiving and time-stamping features aren’t nimble or efficient enough to support ultra-high volumes of data.
Where Graphite and its fixed-size database structure fall short, OpenTSDB can step in. OpenTSDB’s advantages span a range of features, including linear scaling and time-efficient scans.
However, as we’ve started to rely more on OpenTSDB for storing our metrics, we’ve found the need to add more functionality to support specific scenarios that stem from processing large and ever-growing streams of energy-related data. So, we did what’s become the natural thing: we implemented new functionality ourselves and shared our improvements back to the Open Source software community.
STRENGTHENING OPEN SOURCE SOFTWARE, STRENGTHENING OPOWER’S DATA SYSTEMS
Here are two important Open Source contributions we recently developed that show how our day-to-day experience with high-volume data processing allows us to break new ground in Open Source software.
a) Metasync thread deadlocking
A benefit of having billions of rows of data in our system is that it can help us expose edge cases in Open Source software — especially edge cases that may not always be evident to the original maintainers of the Open Source repositories.
For example, in the case of OpenTSDB 2.0, we ran into a strange issue with running an operation called “metasync.” It would start properly and process data for about 5 minutes, then lock up suddenly and stop processing data. After some debugging work and looking at the code, we found the code block responsible:
A buggy block of code in a previous version of OpenTSDB software, which Opower engineers’ Open Source software contributions have helped improve
After 5 minutes, the OpenTSDB procedure would make a deferred call to reload some information (related to the new tree functionality, as shown in the code above). Unfortunately, this code block exited without releasing a mutual exclusion (mutex) lock. Since other worker threads must acquire that lock before they can proceed, everything deadlocked and no more data was processed.
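The bug follows a classic pattern, sketched below in simplified form. This is not the actual OpenTSDB source; the class and method names (`MetasyncSketch`, `processBuggy`, `processFixed`) are hypothetical, and the point is only the shape of the defect: an early-return path inside a locked section skips the unlock, so the next caller blocks forever. Wrapping the unlock in `finally` guarantees release on every exit path.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the deadlock pattern (not OpenTSDB's actual code).
class MetasyncSketch {
    private final ReentrantLock lock = new ReentrantLock();

    // Buggy shape: the early-return path never calls unlock(),
    // so any subsequent caller blocks forever on lock().
    boolean processBuggy(boolean needsTreeReload) {
        lock.lock();
        if (needsTreeReload) {
            return false;          // BUG: the lock is still held here
        }
        lock.unlock();
        return true;
    }

    // Fixed shape: try/finally releases the lock on every exit path.
    boolean processFixed(boolean needsTreeReload) {
        lock.lock();
        try {
            if (needsTreeReload) {
                return false;      // lock is released by finally
            }
            return true;
        } finally {
            lock.unlock();
        }
    }

    boolean lockHeld() {
        return lock.isLocked();
    }
}
```

Deadlocks like this rarely show up in small test runs; they surface only when the early-return branch is actually taken, which in our case required minutes of processing against a large dataset.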
In this case, we were able to identify and fix a serious bug because we had the amount of data required to run the “metasync” operation long enough to produce this edge case. This may not always be possible for the maintainers of the source repository because their testing datasets may be much smaller (or they may not have the time or resources to run extended tests). We submitted this bug fix to the OpenTSDB repository, and it’s included in the upcoming 2.0 release.
b) Support for open-ended queries
Here’s another case in which our large data volumes enabled us to identify and rectify a procedural limitation of OpenTSDB.
When running open-ended queries, it’s not always known beforehand how many data points any given metric will produce. That’s fine when a metric’s data points number in the tens or hundreds of thousands. However, some very common metrics, such as I/O performance and CPU utilization, are computed for all systems and have millions of data points.
Whenever we ran an open-ended query of this kind, OpenTSDB would keep trying to retrieve data until all of the worker threads were occupied by these gigantic queries and the process stalled.
Our solution was straightforward: add an option to OpenTSDB to abort querying for more data after a specified timeout was exceeded. This allowed us to bypass queries in a set amount of time (e.g., 60 seconds) instead of being stuck in an indefinite waiting pattern. An added benefit is that if people accidentally query for too much data, it will prevent them from crashing any given process in OpenTSDB. This feature is also beneficial to other OpenTSDB functionality because different portions of OpenTSDB share Input/Output worker capacity.
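The general shape of the behavior can be sketched as follows. This is a simplified illustration, not OpenTSDB's actual implementation (which is asynchronous); the class and method names (`QueryTimeoutSketch`, `runWithTimeout`) are hypothetical. The idea is simply that a query running on a worker is abandoned once a configured deadline passes, freeing the caller instead of letting it wait indefinitely.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of a query timeout (not OpenTSDB's actual code).
class QueryTimeoutSketch {
    static String runWithTimeout(Callable<String> query, long timeoutMs) {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        Future<String> future = worker.submit(query);
        try {
            // Wait at most timeoutMs for the result, then give up.
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);   // interrupt the oversized scan
            return "query aborted after " + timeoutMs + " ms";
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            worker.shutdownNow();
        }
    }
}
```

With a deadline in place, an oversized query costs at most the configured timeout of worker time rather than occupying a thread indefinitely.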
We felt that this feature was broadly useful, so we contributed it back to the upstream open-source OpenTSDB repository for the 2.1 release. It’s an entirely optional configuration parameter, so regular users with less data won’t need to worry about it, but it’s available for users who need it to keep their OpenTSDB nodes responsive during workloads that query large datasets. Additionally, users who do hit a timeout can retry the same query with a coarser downsampling rate.
By applying the above improvements to OpenTSDB, we’ve made our systems more stable and ensured we can continue to support an ever-growing amount of metrics data. In our day-to-day work of processing uniquely large utility data streams, we’ve been able to break new ground in using and refining powerful Open Source tools. By making improvements to OpenTSDB’s metasync operation, we’re helping large-scale data users around the world (including ourselves) reliably generate metadata about their metrics. And by building in new support for open-ended queries, we’ve provided the ability to time out and re-optimize queries for stall-prone scenarios where waiting forever is not an option.