The aim of this project was to examine how data modeling influences Big Data processing and storage within modern data architectures. To achieve this, three data models with varying levels of schema normalization (3NF, Star Schema, and One Big Table) were designed and implemented in the context of a Data Lakehouse.
An ELT process was carried out, and the data was organized following the Medallion architecture design pattern. Finally, the impact of data modeling was analyzed through categorized queries, and the results were validated.
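As an illustration of this flow, below is a minimal PySpark sketch of the Bronze-to-Silver portion of a Medallion pipeline. The mount paths and the choice of the title.ratings file are assumptions made for the example, not the project's exact pipeline.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; this line keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Bronze: land one raw IMDb file as-is (path and file choice are illustrative).
bronze = spark.read.csv(
    "/mnt/landing/title.ratings.tsv.gz",
    sep="\t", header=True, nullValue="\\N",
)
bronze.write.format("delta").mode("overwrite").save("/mnt/bronze/title_ratings")

# Silver: enforce types and deduplicate the cleaned copy.
silver = (
    spark.read.format("delta").load("/mnt/bronze/title_ratings")
    .withColumn("averageRating", F.col("averageRating").cast("double"))
    .withColumn("numVotes", F.col("numVotes").cast("long"))
    .dropDuplicates(["tconst"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/title_ratings")
```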
Technologies
- Cloud environment: Microsoft Azure with Databricks
- Infrastructure provisioning: Terraform
- Programming language: Python
- Data processing: PySpark
- Basic data visualizations: Power BI
Data Source
The data selected for the experiments and system testing comes from the IMDb non-commercial datasets: https://datasets.imdbws.com/
This dataset is only a subset intended for personal, non-commercial use, so it does not meet Big Data requirements in terms of volume. Because the thesis is not a commercial project and the budget was very limited, cost reductions were necessary. However, the system architecture and the technology stack strictly target Big Data problems, so the dataset serves as a benchmark for the experiments.
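All files in this dataset share the same layout: gzipped TSV with a header row and \N as the null marker. A sketch of how they can be loaded into DataFrames is shown below; the landing path is an assumption for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The seven files published at https://datasets.imdbws.com/.
IMDB_FILES = [
    "name.basics", "title.akas", "title.basics", "title.crew",
    "title.episode", "title.principals", "title.ratings",
]

# Read every file with the options the IMDb TSV format requires
# (tab separator, header row, "\N" as null). The path is illustrative.
frames = {
    name: spark.read.csv(
        f"/mnt/landing/{name}.tsv.gz",
        sep="\t", header=True, nullValue="\\N",
    )
    for name in IMDB_FILES
}

frames["title.basics"].printSchema()
```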
Data Models
3NF: a fully normalized relational model that minimizes redundancy by splitting the data into many narrow tables.
Star Schema: a central fact table referencing denormalized dimension tables, trading some redundancy for simpler analytical joins.
One Big Table: a single, fully denormalized table holding all attributes together, which removes joins at query time at the cost of storage redundancy.
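To give a feel for how the models differ at query time, the sketch below phrases the same question (average rating per genre) against the Star Schema and the One Big Table. All table and column names are hypothetical, not the project's actual schemas.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Star Schema: dimensions are joined to the fact table at query time.
star = spark.sql("""
    SELECT d.genre, AVG(f.average_rating) AS avg_rating
    FROM gold.fact_title f
    JOIN gold.dim_genre d ON f.genre_key = d.genre_key
    GROUP BY d.genre
""")

# One Big Table: the join cost was paid once while modeling the table,
# so the same question becomes a single-table scan.
obt = spark.sql("""
    SELECT genre, AVG(average_rating) AS avg_rating
    FROM gold.one_big_table
    GROUP BY genre
""")
```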
Example Visualizations
