Another Relational vs. NoSQL Kinda Day

I came across this article on schema/data modelling in Cassandra vs. a traditional SQL counterpart. In my current job, we have an input stream comprising several files through the day, each containing the same structured data. Currently an ETL tool ingests these files and spits them out into different SQL Server tables, and further ETL jobs take this data and denormalize it. We have two issues at the database level. The first is performance, though I am not convinced those problems are due to this being a relational database; I think in most cases SQL features and techniques like indexing, denormalization, etc. could alleviate most of them. The second concern is size. Because the tables are so large, the indexes become huge over time. If your query is not optimized, one bad read on such a table can kill you, and a time-consuming job will lock up the table so that ingestion essentially has to stop. I have been thinking about whether a NoSQL option like Cassandra would help in this case. Cassandra gives you horizontal scaling and faster reads, and it also encourages (careful) data duplication and denormalization. The whole one-table-per-query way of thinking may help.
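To make the one-table-per-query idea concrete, here is a minimal Python sketch using plain dicts as stand-ins for Cassandra tables. All the names (file_id, account, etc.) are hypothetical, not from our actual schema:

```python
# Sketch of Cassandra-style "one table per query" modelling, with plain
# Python dicts standing in for tables. Each "table" holds the same rows,
# deliberately duplicated, but partitioned for one specific query shape.
from collections import defaultdict

rows_by_file = defaultdict(list)     # query: all rows from a given input file
rows_by_account = defaultdict(list)  # query: all rows for a given account

def ingest(row):
    """Write the row to every query-specific table (deliberate duplication)."""
    rows_by_file[row["file_id"]].append(row)
    rows_by_account[row["account"]].append(row)

ingest({"file_id": "2024-01-05-a", "account": "acct-1", "amount": 10})
ingest({"file_id": "2024-01-05-a", "account": "acct-2", "amount": 25})
ingest({"file_id": "2024-01-06-b", "account": "acct-1", "amount": 7})

# Each query is now a single partition lookup: no joins, no table scans.
assert len(rows_by_file["2024-01-05-a"]) == 2
assert [r["amount"] for r in rows_by_account["acct-1"]] == [10, 7]
```

The cost, of course, is that every write fans out to every table, which is exactly the duplication Cassandra asks you to accept up front.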

That’s all. No solutions for today. Just thoughts and questions.


When and when not to use NoSQL

I came across this great article in the Microsoft Azure docs on NoSQL vs. SQL. In the development world, new technologies come down like rain every day. It is easy to get caught up in the latest and biggest trend and to swap your current favorite technology (a hammer) for any problem (a nail) with a different technology (a different hammer). It is important not to lose sight of the true applications of any new technology, and of when to use it and when not to.

The Microsoft article gives a great example of a social site where a user makes a post with different media, which then gets comments and likes from other users. Modelled in a purely relational sense, you might end up creating separate tables for users, posts, media types, comments, etc., with one-to-many and many-to-many relationships going every which way. Doing something simple like showing a single user's post may then require joins across several of these tables. Definitely not great for performance.


In comparison, in a document-based NoSQL database, you could save an entire document with all the relevant information for a particular post, assigned to a user. Reading a post back is then a single fetch, which is far more performant than the multi-table, joins-all-over solution an RDBMS would offer.
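The difference between the two models can be sketched in a few lines of Python. The field names are invented for illustration; the point is that the relational version needs join logic at read time while the document version is one lookup:

```python
# Relational style: separate "tables" (lists of rows) joined at read time.
users = [{"id": 1, "name": "ada"}]
posts = [{"id": 10, "user_id": 1, "text": "hello"}]
comments = [{"post_id": 10, "text": "nice"}, {"post_id": 10, "text": "+1"}]

def render_post_relational(post_id):
    """Reassemble one post by joining three tables, as an RDBMS would."""
    post = next(p for p in posts if p["id"] == post_id)
    user = next(u for u in users if u["id"] == post["user_id"])
    post_comments = [c["text"] for c in comments if c["post_id"] == post_id]
    return {"author": user["name"], "text": post["text"], "comments": post_comments}

# Document style: one self-contained document per post, read in one fetch.
post_doc = {"author": "ada", "text": "hello", "comments": ["nice", "+1"]}

assert render_post_relational(10) == post_doc
```

Both reads produce the same answer; the document store simply paid the assembly cost at write time instead of on every read.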


There are things that relational databases are good at, for instance

  • Relational Queries
  • Defined and uniform table structure (all entries have same fields)
  • Well Defined Schema (though adding properties requires more work)
  • Structured Data
  • Vertical Scaling (More RAM, More Processing Power)

and there are things NoSQL storage is good at, for instance

  • Non-relational data (JSON, key-value pairs, etc.)
  • Ease of adding new properties
  • Unstructured data
  • Availability over Consistency (CAP Theorem)
  • Horizontal Scaling (Add Servers)
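Horizontal scaling boils down to deciding which server owns which key. A naive hash-based sketch in Python looks like this (real systems like Cassandra use consistent hashing and token rings so that adding a node does not reshuffle every key, which plain modulo does):

```python
import hashlib

def node_for(key, nodes):
    """Pick the node that owns a key by hashing it.
    A simple modulo sketch; production systems use consistent hashing."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
placement = {k: node_for(k, nodes) for k in ["user:1", "user:2", "user:3"]}

# Every key lands deterministically on exactly one configured node.
assert set(placement.values()) <= set(nodes)
assert node_for("user:1", nodes) == node_for("user:1", nodes)
```

Adding capacity is then just appending to the node list, which is what "add servers" means in practice.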

Google’s Spanner – Holy Grail of DBs or not?

Google just made its internal DB called Spanner available to the public via its cloud offerings a couple of days ago, and it is already being touted as somewhat of a game changer. But is it really?


CAP Theorem

So basically there is this term CAP, often referred to as the CAP Theorem, an acronym for Consistency, Availability and Partition Tolerance. Consistency refers to the idea that every node and cluster should see the same value for a given piece of data at a given point in time. Availability means 100% uptime for both reads and writes. And partition tolerance refers to whether the database continues to function correctly if communication between servers is interrupted for some reason. The CAP Theorem says you can have only two of the three and must sacrifice the third: you can have CA, CP or AP, but not all three simultaneously.
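The trade-off only bites when a partition actually happens. Here is a tiny Python sketch (entirely hypothetical, two in-memory replicas) of the choice a system has to make the moment its replicas cannot talk to each other:

```python
# Minimal sketch of the C-vs-A trade-off during a network partition.
class Replica:
    def __init__(self):
        self.value = "v0"

a, b = Replica(), Replica()
partitioned = True  # the link between the two replicas is down

def write_cp(replica, value):
    """Consistency first: refuse the write if it cannot replicate."""
    if partitioned:
        raise RuntimeError("unavailable: cannot reach peer")
    replica.value = value

def write_ap(replica, value):
    """Availability first: accept the write locally, reconcile later."""
    replica.value = value

try:
    write_cp(a, "v1")
except RuntimeError:
    pass
assert a.value == b.value == "v0"  # CP: still consistent, but the write failed

write_ap(a, "v1")
assert a.value != b.value  # AP: the write succeeded, replicas now disagree
```

Neither behaviour is wrong; they are just the two ends of the spectrum the theorem describes.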

It’s always been about A

Now, the person who initially coined the CAP Theorem was Eric Brewer of Google. He wrote an article just yesterday, on Valentine's Day (a true romantic), where he claims that it's always been about A: that 100% availability has always been the most important of the trinity. You can live with outdated data, as long as some data, even if it's not the most recent, returns successfully.

How Google Beat Time

In a truly distributed database, where you have data centers strewn across the world, achieving real-time or near-real-time consistency has always been an issue. The reason Spanner is making waves is basically the claim that Google has somehow been able to bend time. How have they done that? By developing an advanced and sophisticated timekeeping mechanism: it uses GPS receivers and atomic clocks to keep its own track of time rather than depending on NTP. Google calls this TrueTime. A key factor in achieving this hyper-accuracy is that Spanner runs on Google's private network. Google not only has a global footprint like no other company, but also runs and controls its own WAN.
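The essential trick, as I understand it, is that TrueTime returns an uncertainty interval rather than a single timestamp, and a write waits out that uncertainty before becoming visible ("commit wait"). Here is a rough Python sketch of the idea, with an invented uncertainty bound standing in for TrueTime's real one:

```python
import time

EPSILON = 0.002  # assumed clock uncertainty in seconds (illustrative only)

def truetime_now():
    """Return an interval (earliest, latest) bounding the true time,
    in the spirit of TrueTime: the local clock plus an uncertainty bound."""
    t = time.monotonic()
    return (t - EPSILON, t + EPSILON)

def assign_commit_timestamp():
    """Commit wait: stamp the write with the latest possible time, then
    wait until that timestamp is guaranteed to be in the past everywhere."""
    _, ts = truetime_now()
    while truetime_now()[0] <= ts:
        time.sleep(EPSILON / 4)
    return ts  # now safe to make the write visible with this timestamp

ts = assign_commit_timestamp()
assert truetime_now()[0] > ts  # the timestamp is unambiguously in the past
```

The smaller the uncertainty, the shorter the wait, which is why GPS receivers and atomic clocks (instead of NTP) matter so much here.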

RDBMS vs. NoSQL vs. Spanner

Typically, relational databases (RDBMS) like SQL Server, Oracle, MySQL, etc. scale up: you throw more RAM and processing power at them. The problem is that at some point you reach a limit. NoSQL databases get around this by scaling out, i.e. adding more servers or nodes. The problem then becomes synchronization and consistency, so NoSQL databases like Cassandra have specialized replication algorithms where nodes send each other updates to keep data fresh and synchronized. Spanner, essentially, brings the relational qualities of an RDBMS to the distributed architecture of a NoSQL database. In Brewer's own words:

Spanner is Google’s highly available, global SQL database. It manages replicated data at great scale, both in terms of size of data and volume of transactions. It assigns globally consistent real-time timestamps to every datum written to it, and clients can do globally consistent reads across the entire database without locking.

But is it really? Is it?

Strictly speaking, no: you cannot have 100% availability. What Spanner claims to offer is near-100% availability with near consistency, while operating over a wide area network. And that "near" may be just good enough. Google claims that Spanner offers five 9s of availability, meaning fewer than 1 failed request in 10^5. That is good enough for a lot of businesses.
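It is worth doing the back-of-the-envelope arithmetic on what five 9s actually means in wall-clock terms:

```python
# What does 99.999% ("five 9s") availability allow per year?
availability = 0.99999
seconds_per_year = 365 * 24 * 3600          # 31,536,000 seconds
downtime = (1 - availability) * seconds_per_year

# About 315 seconds, i.e. roughly 5.26 minutes of downtime per year.
assert round(downtime / 60, 2) == 5.26
```

Five minutes a year is a budget most businesses would happily sign up for.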

Is Spanner the DB Holy Grail?

I think that remains to be seen. What will make the difference is whether, now that near-CAP is possible, companies really need it. If you are a multinational running global operations, are you going to be OK with other NoSQL choices like MongoDB and Cassandra, or even with local scaled-up RDBMS instances cut up by regions and business units? Do businesses really need all three tenets of CAP, or is it just a cool bit of technology?

Further Readings
Inside Cloud Spanner and the CAP Theorem
Why Google’s Spanner Database Won’t Do As Well As Its Clone
Google Launches Cloud Spanner — A NewSQL Database For Enterprises
CAP Confusion: Problems with ‘partition tolerance’