This is the dawn of the unbundled database era

Thanks to the cloud, the volume of data generated and stored has exploded.

Every aspect of the enterprise is instrumented with data, and new operations are built on top of that data, making every business a data company.

One of the most profound and perhaps less obvious shifts driving this is the rise of the cloud database. Services like Amazon S3, Google BigQuery, Snowflake, and Databricks have solved the problem of computing over large amounts of data and made it easy to store data from every available source.

Companies want to stockpile everything they can in hopes of delivering better customer experiences and unlocking new market opportunities.

It’s a good time to be a database company

Database companies have raised more than $8.7 billion over the past 10 years, nearly half of that, $4.1 billion, in the past 24 months alone, according to CB Insights.

That’s not surprising given the skyrocketing valuations of Snowflake and Databricks. The market has doubled to nearly $90 billion in the past four years and is expected to double again in the next four years. It’s safe to say there’s a huge opportunity to chase.

See here for a solid list of database financings in 2021.

Database growth drives spending in the enterprise. Image Credits: Venrock

20 years ago you had one option: a relational database

Today, thanks to the cloud, microservices, distributed applications, global scale, real-time data, and deep learning, new database architectures have emerged to meet new performance demands.

We now have different systems for fast reads and for fast writes. There are also systems specific to ad hoc analytics and to data that is unstructured, semi-structured, transactional, relational, graph, or time series, as well as systems for caching, search, indexing, events, and more.

It may come as a surprise, but there are still billions of dollars' worth of Oracle instances powering critical apps today, and they're probably not going anywhere.

Each system comes with different performance characteristics and guarantees, including high availability, horizontal scaling, distributed consistency, failover protection, partition tolerance, and serverless, fully managed operation.

As a result, companies on average store data in seven or more different databases. For example, you might have Snowflake as your data warehouse, ClickHouse for ad hoc analytics, Timescale for time series data, Elastic for search, S3 for logs, Postgres for transactions, Redis for caching or application data, Cassandra for complex workloads, and Dgraph* for relationship data or dynamic schemas.
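To make that concrete, here is a minimal, purely illustrative sketch of what touching two of those stores from application code can look like: it writes an order to Postgres as the transactional store and caches it in Redis for fast reads. The connection strings, table, and key names are assumptions for the example, not a reference setup.

```python
# Hypothetical example: one application event touching two of the stores above.
# Assumes a local Postgres with an existing "orders" table and a local Redis;
# all names and credentials are illustrative.
import json

import psycopg2  # Postgres driver
import redis     # Redis client

pg = psycopg2.connect("dbname=shop user=app password=secret host=localhost")
cache = redis.Redis(host="localhost", port=6379, db=0)

def record_order(order_id: str, payload: dict) -> None:
    """Write the order transactionally, then cache it for fast reads."""
    with pg:  # commits on success, rolls back on error
        with pg.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (id, body) VALUES (%s, %s)",
                (order_id, json.dumps(payload)),
            )
    # Cache the order for five minutes so hot reads skip Postgres entirely.
    cache.setex(f"order:{order_id}", 300, json.dumps(payload))

record_order("o-123", {"sku": "widget", "qty": 2})
```

Multiply this pattern across seven or more stores and every new feature has to decide which systems it reads from and writes to.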

That’s all assuming you’re housed in a single cloud and you’ve built a modern data stack from scratch.

The performance and guarantees of these services and platforms are quite different from what we had five to ten years ago. At the same time, the proliferation and fragmentation of the database layer increasingly pose new challenges.

For example: synchronizing between different schemas and systems, writing new ETL tasks to bridge workloads across multiple databases, constant crosstalk and connectivity issues, the overhead of managing active-active clustering across so many different systems, and transferring data when new clusters or systems come online. Each of these has different requirements for scaling, branching, propagation, sharding, and resources.
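As a rough illustration of the glue work this creates, below is a hypothetical sketch of one such ETL task: copying the last hour of events out of Postgres into ClickHouse for ad hoc analysis. It assumes the psycopg2 and clickhouse-driver Python packages and that matching tables already exist on both sides; every table and column name is made up for the example.

```python
# Hypothetical ETL task: bridge recent transactional data from Postgres into
# ClickHouse for ad hoc analytics. Table and column names are illustrative.
import psycopg2
from clickhouse_driver import Client  # assumes the clickhouse-driver package

pg = psycopg2.connect("dbname=shop user=etl password=secret host=localhost")
ch = Client(host="localhost")

def sync_recent_events() -> int:
    """Copy the last hour of events from Postgres into ClickHouse."""
    with pg, pg.cursor() as cur:
        cur.execute(
            "SELECT id, user_id, event_type, created_at "
            "FROM events WHERE created_at > now() - interval '1 hour'"
        )
        rows = cur.fetchall()
    if rows:
        # clickhouse-driver sends the whole batch in a single insert.
        ch.execute(
            "INSERT INTO events_analytics (id, user_id, event_type, created_at) VALUES",
            rows,
        )
    return len(rows)

print(f"synced {sync_recent_events()} rows")
```

In practice, each task like this also needs scheduling, retries, and handling for schema drift, which is exactly the overhead described above.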

On top of that, new databases appear every month, each aiming to solve the next challenge of enterprise scale.

The new age database

So the question is: will the future of the database look the way it does today?
