I came across this article on schema/data modelling in Cassandra vs. a traditional SQL counterpart. In my current job, we have an input stream consisting of several files throughout the day, each containing the same structured data. Currently we have an ETL tool that ingests these files and spits them out into different SQL Server tables. Some further ETL jobs then take this data and denormalize it.

We have two issues at the database level. The first is performance, but I am not convinced it stems from this being a relational database; in most cases I think SQL features and techniques like indexing, denormalization, and so on could alleviate it. The second concern is size. Because the tables are so large, the indexes become huge over time. If your query is not optimized, one bad read on such a table can kill you. And if you need to run a time-consuming job, it locks up the table and ingestion essentially has to be stopped.

So I have been wondering whether a NoSQL option like Cassandra would help here. Cassandra gives you horizontal scaling and faster reads, and it encourages (careful) data duplication and denormalization. The whole “one table per query” way of thinking may help; a sketch of that modelling style follows below.
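To make that last idea concrete, here is a minimal sketch of what the modelling might look like, assuming the DataStax Python driver (cassandra-driver) against a single local node. None of this is our actual schema; the keyspace, table, and column names are invented for illustration.

```python
# A minimal sketch of "one table per query" modelling, assuming the
# DataStax Python driver (pip install cassandra-driver) and a single
# local Cassandra node. The keyspace, table, and column names are all
# invented for illustration.
from datetime import date, datetime

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS ingest
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# The table is shaped around exactly one query:
# "all readings for one source on one day, newest first".
session.execute("""
    CREATE TABLE IF NOT EXISTS ingest.readings_by_source_day (
        source_id text,
        day       date,
        ts        timestamp,
        payload   text,
        PRIMARY KEY ((source_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

# Ingestion writes straight into the query-shaped table...
session.execute(
    "INSERT INTO ingest.readings_by_source_day (source_id, day, ts, payload) "
    "VALUES (%s, %s, %s, %s)",
    ("feed-a", date.today(), datetime.utcnow(), "one structured row"),
)

# ...and the read hits a single partition, already sorted by time.
rows = session.execute(
    "SELECT ts, payload FROM ingest.readings_by_source_day "
    "WHERE source_id = %s AND day = %s",
    ("feed-a", date.today()),
)
for row in rows:
    print(row.ts, row.payload)
```

The point of the composite partition key ((source_id, day)) is that partitions stay bounded and each read touches exactly one of them. Answering a different question would mean duplicating the same data into a second table keyed for that query, which is exactly the kind of duplication Cassandra encourages.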
That’s all. No solutions for today. Just thoughts and questions.