Explore and transform data in a lakehouse
Transform and load data
Most data requires transformation before it's loaded into tables. You might ingest raw data directly into a lakehouse and then transform and load it into tables. Regardless of your ETL design, you can transform and load data using the same tools you used to ingest it. Transformed data can then be loaded as a file or as a Delta table.
- Notebooks are favored by data engineers familiar with different programming languages, including PySpark, SQL, and Scala (see the sketch after this list).
- Dataflows Gen2 are excellent for developers familiar with Power BI or Excel, since they use the Power Query interface.
- Pipelines provide a visual interface for performing and orchestrating ETL processes, and can be as simple or as complex as you need.
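As an illustration of the notebook path, here's a minimal PySpark sketch that reads raw data from the lakehouse Files area, applies a simple transformation, and loads the result as a Delta table. It assumes data was already ingested as a CSV file; the path `Files/raw/sales.csv`, the `Quantity` column, and the `sales_cleaned` table name are hypothetical placeholders for your own data.

```python
from pyspark.sql import functions as F

# Read raw data previously ingested into the lakehouse Files area.
# In a Fabric notebook, the `spark` session is predefined.
# "Files/raw/sales.csv" is a hypothetical path; substitute your own.
df = spark.read.format("csv").option("header", "true").load("Files/raw/sales.csv")

# Apply simple transformations: cast a column and filter out bad rows.
df_clean = (
    df.withColumn("Quantity", F.col("Quantity").cast("int"))
      .filter(F.col("Quantity") > 0)
)

# Load the transformed data as a managed Delta table in the lakehouse.
df_clean.write.format("delta").mode("overwrite").saveAsTable("sales_cleaned")
```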
Analyze and visualize data in a lakehouse
After data is ingested, transformed, and loaded, it's ready for others to use. Fabric items provide the flexibility every organization needs, so you can use the tools that work for you.
- Data scientists can use notebooks or Data Wrangler to explore data and train machine learning models.
- Report developers can use the semantic model to create Power BI reports.
- Analysts can use the SQL analytics endpoint to query, filter, aggregate, and otherwise explore data in lakehouse tables (a sketch of an equivalent query follows this list).
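To illustrate the exploration side, here's a minimal sketch that queries, filters, and aggregates a lakehouse table with Spark SQL from a notebook; analysts would typically run a similar T-SQL query through the SQL analytics endpoint instead. The `sales_cleaned` table is carried over from the earlier sketch, and the `Region` column is hypothetical.

```python
# Query, filter, and aggregate a lakehouse table with Spark SQL.
# "sales_cleaned" is the hypothetical table from the earlier sketch;
# "Region" is a hypothetical column for illustration.
result = spark.sql("""
    SELECT Region, SUM(Quantity) AS TotalQuantity
    FROM sales_cleaned
    WHERE Quantity > 0
    GROUP BY Region
    ORDER BY TotalQuantity DESC
""")
result.show()
```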
By combining the data visualization capabilities of Power BI with the centralized storage and tabular schema of a data lakehouse, you can implement an end-to-end analytics solution on a single platform.