Does Big Data Mean Better Data Quality?
Big Data is everywhere. Chances are you’ve used a big data solution today. However, are big data solutions delivering big data quality?
High Availability versus High Data Quality
Typically, Big Data solutions are designed to ensure high availability. High availability is based on the concept that it is more important to collect and store data transactions than it is to determine the uniqueness or accuracy of the transaction. Some common examples of big data / high availability solutions are Twitter and Facebook.
It is possible to configure a big data solution to validate uniqueness and accuracy, and I want to state that clearly. However, doing so requires sacrificing some aspects of high availability. So, in some regard, big data and data quality are at odds.
This is because one of the fundamental aspects of high availability is writing transactions to whichever node is available. In this model, consistency of transactional data is sacrificed in the name of data capture. Most often, consistency is achieved eventually, and it is enforced on data reads rather than on data writes.
In other words, at any given point in time you may not have consistency in a big data dataset. Even more troubling is the fact that most transactional conflicts are resolved based on timestamps: the most recently updated transaction is commonly regarded as the most accurate. This approach is, obviously, an issue that requires further examination.
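To make the timestamp approach concrete, here is a minimal Python sketch of "last write wins" conflict resolution. The record, values, and timestamps are all hypothetical, invented for illustration; the point is that the resolver picks the newest write regardless of whether it is actually the more accurate one.

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: str
    timestamp: float  # wall-clock time of the write, as recorded by the node

def last_write_wins(replicas):
    """Resolve conflicting replica versions by keeping the newest timestamp."""
    return max(replicas, key=lambda v: v.timestamp)

# Two nodes each accepted a write for the same record while partitioned.
replicas = [
    Version(value="jane@example.com", timestamp=1001.0),  # the correct, current address
    Version(value="jane@oldmail.com", timestamp=1005.0),  # a later replay of stale data
]

winner = last_write_wins(replicas)
print(winner.value)  # the newer timestamp wins, even though its data is less accurate
```

Nothing in this resolver examines the data itself; recency stands in for accuracy, which is exactly the gap between high availability and high data quality.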
Room for improvement
As we examine big data solutions and learn more about implementing them, it is important to design more robust conflict-resolution approaches so that big data also delivers big data quality.
More on that to come …
Thanks for taking the time to visit the weblog!