What is a data lakehouse?

Organizations everywhere are searching for storage solutions to manage the volume, latency, resiliency, and data access requirements of big data. Initially, companies relied on their existing technology stacks: they tried to get their data lakes to deliver the same capabilities as a warehouse, adjusted their data warehouses to handle massive amounts of semi-structured data, or simply kept copies of the data in both systems. 

Ultimately, these approaches resulted in high costs, unhappy users, and duplication of data across the business. The data lakehouse has emerged as a new hybrid data architecture that aims to deliver all the best benefits of data warehouses and data lakes while eliminating the weaknesses of both systems. 

Google Cloud provides a cloud-native, highly scalable, and secure data lakehouse solution based on our low-cost storage, serverless compute engines, and powerful data management offerings. Learn more about how Google Cloud helps you build an open data lakehouse.

Data lakehouse defined

A data lakehouse is a data architecture that blends a data lake and data warehouse together. Data lakehouses enable machine learning, business intelligence, and predictive analytics, allowing organizations to leverage low-cost, flexible storage for all types of data—structured, unstructured, and semi-structured—while providing data structures and data management features.

What is a lakehouse?

A data lakehouse is a modern data architecture that creates a single platform by combining the key benefits of data lakes (large repositories of raw data in its original form) and data warehouses (organized sets of structured data). Specifically, data lakehouses enable organizations to use low-cost storage to store large amounts of raw data while providing structure and data management functions. 

Historically, data warehouses and data lakes had to be implemented as separate, siloed architectures to avoid overloading the underlying systems and creating contention for the same resources. Companies used data warehouses to store structured data for business intelligence (BI) and reporting, and data lakes to store unstructured and semi-structured data for machine learning (ML) workloads. But this approach required data to be regularly copied between the two systems whenever data from both needed to be processed together, creating complexity, higher costs, and issues with data freshness, duplication, and consistency. 

Data lakehouses aim to break down these silos and deliver the flexibility, scalability, and agility needed to ensure your data generates value for your business, rather than inefficiencies.

Data lakehouse vs. data lake vs. data warehouse

The term “data lakehouse” merges two types of existing data repositories: the data warehouse and the data lake. So, what exactly are the differences when it comes to a data lakehouse vs. data lake vs. data warehouse? 

Data warehouses

Data warehouses provide fast access to data and SQL compatibility for business users who need to generate reports and insights for decision-making. All data must pass through an ETL (extract, transform, load) phase, which optimizes it into a specific format, or schema, based on the use case before it is loaded, so that the warehouse can support high-performance queries and data integrity. However, this approach limits flexible access to the data and creates additional costs if data needs to be moved around for future use. 
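
A minimal schema-on-write ETL sketch in Python helps make this concrete. SQLite stands in for the warehouse here, and the file, table, and column names are hypothetical:

```python
# Schema-on-write: the table's structure is fixed before any data loads.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS orders (
           order_id   INTEGER NOT NULL,
           customer   TEXT    NOT NULL,
           amount_usd REAL    NOT NULL
       )"""
)

# Extract from a source file, transform rows into the declared types,
# and load them; malformed rows fail here, at ingest time.
with open("orders.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute(
            "INSERT INTO orders VALUES (?, ?, ?)",
            (int(row["order_id"]), row["customer"].strip(),
             float(row["amount_usd"])),
        )
conn.commit()
```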

Data lakes

Data lakes store large amounts of unstructured and structured data in its native format. Unlike in a data warehouse, data is processed, cleaned up, and transformed during analysis rather than at load time, which enables fast loading and makes data lakes ideal for big data processing, machine learning, or predictive analytics. However, data lakes require data science expertise, which limits the set of people who can use the data, and if they're not properly maintained, data quality can deteriorate over time. Data lakes also make real-time queries more challenging: because the data is unprocessed, it may still need to be cleaned, processed, ingested, and integrated before it can be used. 
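
By contrast, a data lake follows a schema-on-read pattern. Here is a minimal sketch with hypothetical file and field names: raw records are appended without validation, and structure is imposed only at analysis time.

```python
# Schema-on-read: ingest is fast because nothing is validated up front.
import json

# Ingest: append raw events to the lake as-is.
with open("events.jsonl", "a") as lake:
    lake.write(json.dumps({"user": "a1", "clicks": "3"}) + "\n")

# Analysis: parsing, cleaning, and typing happen only at query time.
total_clicks = 0
with open("events.jsonl") as lake:
    for line in lake:
        event = json.loads(line)
        if "clicks" in event:                     # cleaning happens here,
            total_clicks += int(event["clicks"])  # not at ingest
print(total_clicks)
```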

Data lakehouse

A data lakehouse merges these two approaches to create a single structure that allows you to access and leverage data for many different purposes, from BI to data science to machine learning. In other words, a data lakehouse captures all of your organization’s unstructured, structured, and semi-structured data and stores it on low-cost storage while providing the capabilities for all users to organize and explore data according to their needs. 

Data lakehouse features

The key data lakehouse features include: 

  • Single low-cost data store for all data types (structured, unstructured, and semi-structured) 
  • Data management features to apply schema, enforce data governance, and provide ETL processes and data cleansing
  • Transaction support for ACID (atomicity, consistency, isolation, and durability) properties to ensure data consistency when multiple users concurrently read and write data 
  • Standardized storage formats that can be used in multiple software programs (see the sketch after this list)
  • End-to-end streaming to support real-time ingestion of data and insight generation 
  • Separate compute and storage resources to ensure scalability for a diverse set of workloads
  • Direct access for BI apps to the source data in the lakehouse to reduce data duplication
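
To make the standardized-formats point concrete, here is a small sketch using the pyarrow library (one of several Parquet-capable libraries, and an assumption here) to write and read Apache Parquet, an open columnar format that many engines understand:

```python
# Writing and reading an open, standardized storage format (Parquet).
# Requires the pyarrow package; the table contents are hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "order_id": [1, 2, 3],
    "amount_usd": [9.99, 24.50, 3.25],
})
pq.write_table(table, "orders.parquet")  # columnar file on low-cost storage

# Any Parquet-aware engine (Spark, BigQuery, DuckDB, pandas, and so on)
# can read the same bytes, with no proprietary warehouse format involved.
print(pq.read_table("orders.parquet").to_pydict())
```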

How does a data lakehouse work?

When it comes to making a data lakehouse work, it’s important to consider what it’s trying to achieve. Data lakehouses aim to centralize disparate data sources and simplify engineering efforts so that everyone in your organization can be a data user. 

A data lakehouse uses the same low-cost cloud object storage of data lakes to provide on-demand storage for easy provisioning and scaling. Like a data lake, it can capture and store large volumes of all data types in raw form. The lakehouse integrates metadata layers over this store to provide warehouse-like capabilities, such as structured schemas, support for ACID transactions, data governance, and other data management and optimization features.
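
As a toy illustration of this metadata-layer idea (this is not any particular open table format such as Apache Iceberg or Delta Lake; all classes, fields, and paths below are hypothetical):

```python
# A toy metadata layer over object storage: the catalog, not the files,
# defines tables, schemas, and versions. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TableMetadata:
    name: str
    schema: dict[str, str]                                # column -> type
    data_files: list[str] = field(default_factory=list)   # objects in the lake
    version: int = 0                                      # snapshot number

catalog: dict[str, TableMetadata] = {}

# Register raw files already sitting in cheap object storage as a table.
catalog["orders"] = TableMetadata(
    name="orders",
    schema={"order_id": "INT64", "amount_usd": "FLOAT64"},
    data_files=["gs://my-bucket/orders/part-000.parquet"],
)

def commit(table: TableMetadata, new_files: list[str]) -> None:
    # A commit swaps the file list and bumps the version in one step,
    # which is (very roughly) how table formats provide ACID snapshots.
    table.data_files = new_files
    table.version += 1
```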

Benefits of using a data lakehouse

Simplified architecture

A data lakehouse removes the silos of two separate platforms, so you only have to focus on managing and maintaining a single data repository. Tools can also be connected directly to source data so you don’t have to extract or prepare data to be used in a data warehouse. 

Better data quality 

Data lakehouse architectures let you enforce schemas for structured data and apply data integrity checks, helping you ensure consistency. Plus, lakehouses reduce the time it takes to make new data available, ensuring fresher data.

Lower costs

Store huge volumes of data on low-cost storage and eliminate the need to maintain both a data warehouse and a data lake. Data lakehouses also help reduce costs from ETL processes and de-duplication. 

Increased reliability

Data lakehouses reduce ETL data transfers between multiple systems, reducing the chance of quality or technical issues that can occur with data movement. 

Improved data governance

Data and resources get consolidated in one place with data lakehouses, making it easier to implement, test, and deliver governance and security controls. 

Reduced data duplication

The more copies of data that exist in disparate systems, the more likely it is to be inconsistent and less trustworthy. With data lakehouses, you can achieve a single source of data that can be shared across the business to make decisions, preventing any inconsistencies and extra storage costs caused by data duplication.

Diverse workloads

You can connect multiple tools directly to the lakehouse to support analytics, SQL, machine learning, and data science workloads from the same repository.

High scalability

The low-cost cloud object storage of data lakehouses allows you to decouple compute from storage to provide nearly limitless and instantaneous scalability. You can scale computing power and storage separately according to your business needs.

Challenges of using a data lakehouse

The data lakehouse is still a relatively new architecture, so some of the biggest challenges stem from the fact that the approach is still evolving and best practices are still being defined by early adopters. 

In addition, data lakehouses are complex to build from the ground up. In most cases, you’ll need to either opt for an out-of-the-box data lakehouse solution or use a platform like Google Cloud that offers all the components needed to support an open lakehouse architecture.

Layers of data lakehouse architecture

A data lakehouse architecture consists of the following layers (a minimal end-to-end sketch follows the list): 

  • Storage layer: The storage layer is the data lake layer for all of your raw data, usually a low-cost object store for all your unstructured, structured, and semi-structured datasets. It’s decoupled from computing resources so compute can scale independently. 
  • Staging layer: The staging layer is the metadata layer that sits on top of your data lake layer. It provides a detailed catalog about all the data objects in storage, enabling you to apply data management features, such as schema enforcement, ACID properties, indexing, caching, and access control.
  • Semantic layer: The semantic layer, also known as the lakehouse layer, exposes all of your data for use; here, users can employ client applications and analytics tools to access and leverage data for experimentation and business intelligence. 
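
Here is a minimal end-to-end sketch of how the three layers might fit together, with hypothetical names and a deliberately simplified query path:

```python
# Storage, staging, and semantic layers in miniature; all names are
# hypothetical and the in-memory dicts stand in for real services.
import json

# Storage layer: raw records sit in low-cost object storage as-is.
storage = {
    "raw/clicks.jsonl": '{"page": "home", "ms": 120}\n'
                        '{"page": "docs", "ms": 340}\n',
}

# Staging layer: a metadata catalog describes and governs the raw objects.
catalog = {
    "clicks": {
        "files": ["raw/clicks.jsonl"],
        "schema": {"page": "STRING", "ms": "INT64"},
        "readers": {"analyst", "data_scientist"},  # access control
    },
}

# Semantic layer: users address tables by name, never by file path.
def read_table(table: str, user_role: str) -> list[dict]:
    meta = catalog[table]
    if user_role not in meta["readers"]:
        raise PermissionError(f"{user_role} may not read {table}")
    rows = []
    for path in meta["files"]:
        for line in storage[path].splitlines():
            rows.append(json.loads(line))  # schema applied on read
    return rows

print(read_table("clicks", "analyst"))
```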

Data lakehouse examples

There are several existing data lakehouse examples, including the Databricks Lakehouse Platform and Amazon Redshift Spectrum. However, as technologies have matured and data lakehouse adoption has increased, implementations have shifted away from coupling lakehouse components to a specific data lake. 

For example, the Google Cloud approach has been to unify the core capabilities of enterprise data operations, data lakes, and data warehouses. This implementation places BigQuery’s storage and compute power at the heart of the data lakehouse architecture. You can then apply a unified governance approach and other warehouse-like capabilities using Dataplex and Analytics Hub. 

BigQuery is not only integrated with the Google Cloud ecosystem, it also allows you to use partner and open source technologies to bring the best of lake and warehouse capabilities together in a single system.
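
As a small illustration, querying lakehouse data through BigQuery’s SQL interface from Python might look like the following. The google-cloud-bigquery client library must be installed and authenticated, and the project, dataset, table, and column names are hypothetical:

```python
# Querying lakehouse data through BigQuery SQL. Requires the
# google-cloud-bigquery package and authenticated credentials; all
# project, dataset, table, and column names here are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# The same SQL works whether the table lives in native BigQuery storage
# or is an external/BigLake table over files in Cloud Storage.
sql = """
    SELECT page, COUNT(*) AS views
    FROM `my-project.analytics.clicks`
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row["page"], row["views"])
```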

We are continuing to build on this approach with the release of BigLake, now in Preview, a unified storage engine that simplifies data access across data warehouses and data lakes. You can apply fine-grained access control and accelerate query performance across distributed data. 
