In this article, we will discuss the differences between transactional and analytical databases, the benefits and drawbacks of using MySQL for both transactional and analytical workloads, and the concept of dimensional modeling. The article also provides a step-by-step guide to creating a dimensional model in MySQL, illustrated with a practical example of modeling sales data.

Analytical Database vs Transactional Database

Transactional databases are designed for online transaction processing (OLTP) and are optimized for handling high volume, real-time transactions. These databases are characterized by normalized tables with complex relationships, where data is constantly being updated, inserted, or deleted.

Analytical databases, on the other hand, are designed for online analytical processing (OLAP) and are optimized for querying and analyzing large datasets. These databases are characterized by denormalized tables with simple relationships, where data is typically read-only or updated in batch processes.

Using MySQL as Both a Transactional and an Analytical Database

It is possible to use MySQL for both transactional and analytical purposes. However, it is important to keep in mind that transactional and analytical databases have different requirements and characteristics, and using the same database for both can have its pros and cons.

Pros:
  • No need for a separate data warehouse: Using MySQL for both transactional and analytical databases eliminates the need for a separate data warehouse, which can reduce costs and simplify the overall architecture.
  • Familiarity: If your team is already familiar with MySQL, using it for both transactional and analytical databases can reduce the learning curve and make it easier to maintain.
  • Real-time analytics: Since the analytical data is stored in the same database as the transactional data, it can be queried in real-time, allowing for near-instantaneous reporting and analysis.
Cons:
  • Performance: MySQL may not be optimized for analytical workloads, which can lead to slow query performance and decreased overall efficiency.
  • Limited scalability: As the amount of data grows, MySQL may not be able to scale horizontally as effectively as dedicated data warehouse solutions, which can lead to performance issues.
  • Complexity: Managing both transactional and analytical data in the same database can lead to increased complexity and potentially affect data quality if not done properly.

In conclusion, using MySQL for both transactional and analytical databases can be a viable solution for small to medium-sized organizations with limited data volumes and relatively simple analytical requirements. However, larger organizations with more complex analytical requirements may benefit from using dedicated data warehousing solutions that are optimized for large-scale analytics.

Dimensional Modeling with MySQL

Dimensional modeling is a data modeling technique designed to support analytical queries and reporting by organizing data into a structure that is optimized for fast, efficient queries.

There are two main components of a dimensional model: facts and dimensions.

Facts

Fact tables in dimensional modeling contain only numerical measures, foreign keys to dimension tables, and attributes used for mathematical computations. Textual data and other non-numerical details are not included in fact tables, as they do not add value to the analytical process.

The name “star schema” comes from the fact that when the table relationships are visualized, the fact table is in the middle, surrounded by its dimension tables; the connections to these tables are like the rays of a star.

Dimensions

Dimensions add context to business process events by providing information on who, what, where, when, how, and why. For instance, in the sales business process, the quarterly sales number fact is accompanied by dimensions such as customer names (who), location (where), product name (what), date (when), equipment and services (how), and reason (why). This contextual information helps to provide a comprehensive understanding of the business process event.

One way to generate facts and dimensions is to consider which numerical values (measures) your reports or charts require and which filters you will employ. For example, in traffic analytics, if you wish to report website traffic by location, date, and referrals, you will require a traffic fact table that includes a traffic column as well as foreign keys to the location, date, and referrals dimension tables.
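
As a rough sketch of that traffic example, the schema might look like the following. All table and column names here are illustrative assumptions, separate from the sales schema used later in this article:

CREATE TABLE location_dim (
  location_id INT NOT NULL PRIMARY KEY,
  country VARCHAR(255),
  city VARCHAR(255)
);

CREATE TABLE visit_date_dim (
  date_id INT NOT NULL PRIMARY KEY,
  full_date DATE
);

CREATE TABLE referral_dim (
  referral_id INT NOT NULL PRIMARY KEY,
  source VARCHAR(255)  -- e.g. 'search', 'social', 'direct'
);

-- The fact table holds the measure (traffic) plus foreign keys to each dimension.
CREATE TABLE traffic_fact (
  traffic INT,                 -- the measure: number of visits
  location_id INT NOT NULL,    -- where
  date_id INT NOT NULL,        -- when
  referral_id INT NOT NULL,    -- how the visitor arrived
  FOREIGN KEY (location_id) REFERENCES location_dim(location_id),
  FOREIGN KEY (date_id) REFERENCES visit_date_dim(date_id),
  FOREIGN KEY (referral_id) REFERENCES referral_dim(referral_id)
);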

Use Cases of Dimensional Modeling

Dimensional modeling offers several benefits over traditional relational database modeling. One of the main benefits is its ability to support complex queries and reporting with minimal performance impact. This is achieved through denormalization, which reduces the number of tables required to store data and allows for faster joins and aggregations.

Dimensional modeling is commonly used in data warehousing and business intelligence applications, such as sales analysis, financial reporting, and customer segmentation. It is also useful in scenarios where historical data is required for analysis, as it allows for efficient querying of large data sets.

A Practical Example: Applying Dimensional Modeling with MySQL, Step by Step

Following is an example that walks through the process of creating a star schema using MySQL based on the transactional database provided:

Transactional Database: Our transactional database consists of four tables: orders, order_details, products, and stores. Here’s a brief description of the tables and their columns:

  • orders (order_id, order_total, store_id, date)
  • order_details (id, order_id, product_id, quantity)
  • products (product_id, name, price)
  • stores (store_id, name, address)

We want to create a star schema that will allow us to analyze sales by date, product, and store.

    • Identify the business process and fact table: The business process in this example is sales, and the fact table is the sales fact table. In this case, the sales fact table will contain the measurements or facts related to sales, such as the sales amount for each product in an order.
    • Identify the dimensions: Dimensions provide the context around the facts. In this example, the dimensions are date, product, and store. These dimensions provide the when, what, and where context for the sales facts.
    • Create the dimension tables: For each dimension identified in step 2, create a dimension table. In this example, we need to create three dimension tables: Date, Product, and Store.
    • Date dimension table: The date dimension table will contain information about the date such as day, month, year, etc. The SQL code to create the date dimension table would look like the following:

CREATE TABLE date_dim (
  date_id INT NOT NULL PRIMARY KEY,
  full_date DATE,
  day INT,
  month INT,
  year INT,
  quarter INT,
  fiscal_quarter INT,
  fiscal_year INT
);

  • Product dimension table: The product dimension table will contain information about the products such as product name, price, etc. The SQL code to create the product dimension table would look like the following:

CREATE TABLE product_dim (
  product_id INT NOT NULL PRIMARY KEY,
  product_name VARCHAR(255),
  price DECIMAL(10, 2)
);

  • Store dimension table: The store dimension table will contain information about the stores such as store name, address, etc. The SQL code to create the store dimension table would look like the following:

CREATE TABLE store_dim (
  store_id INT NOT NULL PRIMARY KEY,
  store_name VARCHAR(255),
  address VARCHAR(255)
);

  • Create the fact table: Once the dimensions are created, we can create the fact table. The sales fact table will contain the sales measurements along with the foreign keys to the corresponding dimensions. The SQL code to create the sales fact table would look like the following:

CREATE TABLE sales_fact (
  order_id INT NOT NULL,
  product_id INT NOT NULL,
  date_id INT NOT NULL,
  store_id INT NOT NULL,
  quantity INT,
  sales_amount DECIMAL(10, 2),
  -- one row per product per order, so orders containing several products stay representable
  PRIMARY KEY (order_id, product_id),
  FOREIGN KEY (date_id) REFERENCES date_dim(date_id),
  FOREIGN KEY (product_id) REFERENCES product_dim(product_id),
  FOREIGN KEY (store_id) REFERENCES store_dim(store_id)
);

To appreciate the significance of dimensional modeling, consider a scenario where we need to analyze the sales trend of a specific product over several years. In a transactional database, we would be required to execute a complex and time-consuming query that involves joining multiple tables such as orders, order_details, and products to obtain the relevant information.


SELECT YEAR(o.date) AS year, SUM(od.quantity * p.price) AS sales
FROM orders o
JOIN order_details od ON o.order_id = od.order_id
JOIN products p ON od.product_id = p.product_id
WHERE p.name = 'Product X'
GROUP BY YEAR(o.date);

However, with the dimensional model, this query becomes much simpler:


SELECT d.year, SUM(s.sales_amount) AS sales
FROM sales_fact s
JOIN product_dim p ON s.product_id = p.product_id
JOIN date_dim d ON s.date_id = d.date_id
WHERE p.product_name = 'Product X'
GROUP BY d.year;

This is because the necessary information is already denormalized and stored in the fact and dimension tables. The query can now be executed much faster, and the results are easier to understand and interpret.

Populating the Dimensional Model

We’ll load the dimension tables with data from the original tables. For example, to populate the date dimension table, we need to extract the date information from the orders table and transform it into the required format. The SQL code to populate the date dimension table would look like the following:


INSERT INTO date_dim (date_id, full_date, day, month, year, quarter, fiscal_quarter, fiscal_year)
SELECT DISTINCT
  DATE_FORMAT(date, '%Y%m%d') AS date_id,
  date AS full_date,
  DAY(date) AS day,
  MONTH(date) AS month,
  YEAR(date) AS year,
  QUARTER(date) AS quarter,
  QUARTER(date) AS fiscal_quarter,  -- fiscal calendar assumed to align with the calendar year
  YEAR(date) AS fiscal_year
FROM orders;
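
The product and store dimensions and the sales fact table can be populated in the same way. The following minimal sketch is based on the transactional schema above; it relies on the quantity column of order_details and derives date_id with the same DATE_FORMAT convention used for date_dim:

INSERT INTO product_dim (product_id, product_name, price)
SELECT product_id, name, price
FROM products;

INSERT INTO store_dim (store_id, store_name, address)
SELECT store_id, name, address
FROM stores;

-- One fact row per product per order, with the sales amount computed per line
INSERT INTO sales_fact (order_id, product_id, date_id, store_id, quantity, sales_amount)
SELECT
  od.order_id,
  od.product_id,
  DATE_FORMAT(o.date, '%Y%m%d') AS date_id,
  o.store_id,
  od.quantity,
  od.quantity * p.price AS sales_amount
FROM order_details od
JOIN orders o ON od.order_id = o.order_id
JOIN products p ON od.product_id = p.product_id;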

To continuously feed the analytical tables from transactional tables, you can use a process called ETL (Extract, Transform, Load). The basic idea of ETL is to extract data from the transactional tables, transform it into a format suitable for the analytical tables, and then load it into the analytical tables. Here are the steps you can follow:

  1. Extract: Extract the data you need from the transactional tables. You can use MySQL’s SELECT statement to extract the data, and you can schedule this step to run periodically (e.g. every hour) using a cron job or similar scheduling tool.
  2. Transform: Transform the extracted data into a format that is suitable for the analytical tables. This may involve cleaning the data, aggregating it, or joining it with other data sources. You can use MySQL’s built-in functions or a scripting language like Python or Perl to perform the necessary transformations.
  3. Load: Load the transformed data into the analytical tables. You can use MySQL’s INSERT statement to load the data, and you can schedule this step to run periodically (e.g. every hour) using a cron job or similar scheduling tool. A minimal sketch of one such cycle follows this list.
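
As a rough illustration, one ETL cycle for the date dimension can be expressed entirely in SQL and scheduled with cron. INSERT IGNORE makes the load idempotent, so re-running the job does not duplicate existing dates; the one-day lookback window here is an arbitrary choice for the sketch:

-- Extract recent dates from orders, transform them, and load any new ones into date_dim.
-- INSERT IGNORE skips rows whose date_id already exists (date_id is the primary key).
INSERT IGNORE INTO date_dim (date_id, full_date, day, month, year, quarter, fiscal_quarter, fiscal_year)
SELECT DISTINCT
  DATE_FORMAT(date, '%Y%m%d'),
  date,
  DAY(date),
  MONTH(date),
  YEAR(date),
  QUARTER(date),
  QUARTER(date),
  YEAR(date)
FROM orders
WHERE date >= CURDATE() - INTERVAL 1 DAY;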

To ensure that the analytical tables are continuously updated with the latest data from the transactional tables, you can set up a pipeline that automates the ETL process. For example, you can use a tool like Apache Airflow to schedule and execute the ETL pipeline, and you can set up alerts or notifications to notify you if any part of the pipeline fails.

It’s worth noting that ETL can be a complex and time-consuming process, especially if you have large volumes of data or complex transformations. To optimize the ETL process, you may need to consider specialized tools like Apache Kafka.

ETL vs ELT

ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) are two common methods for moving data from source systems to target systems, such as data warehouses or analytical databases.

The main difference between ETL and ELT is the order in which data is transformed. In ETL, data is first extracted from source systems, then transformed into the desired format, and finally loaded into the target system. In ELT, data is first extracted from source systems and loaded into the target system, and then transformed into the desired format.
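
To make the difference concrete in MySQL terms: in an ELT flow, the raw rows are first loaded into a staging table as-is, and the transformation then runs inside the target database. A minimal sketch, using a hypothetical stores_staging table:

-- Load step: raw rows land in a staging table exactly as extracted from the source.
CREATE TABLE stores_staging (
  store_id INT,
  name VARCHAR(255),
  address VARCHAR(255)
);

-- Transform step: runs inside MySQL after loading, cleaning rows into store_dim.
INSERT IGNORE INTO store_dim (store_id, store_name, address)
SELECT store_id, TRIM(name), TRIM(address)
FROM stores_staging;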

Both ETL and ELT can be performed using either full or incremental loads:

  • Full Load: In a full load, all of the data in the source system is extracted and loaded into the target system, regardless of whether the data has changed since the last load. This is useful when you need to create a new data set from scratch or when you need to refresh the entire data set in the target system.
  • Incremental Load: In an incremental load, only the data that has changed since the last load is extracted and loaded into the target system. This is useful when you need to keep the target system up-to-date with changes in the source system and when you want to minimize the amount of data that needs to be transferred. A sketch of an incremental extract follows this list.
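
An incremental load needs a way to identify changed rows. A common approach is a last-modified timestamp on the source table; the sketch below assumes a hypothetical updated_at column on orders, which is not part of the schema used earlier:

-- Hypothetical: assumes orders has an updated_at TIMESTAMP column for change tracking.
-- Extract only the rows modified in the last hour, matching an hourly job schedule.
SELECT order_id, order_total, store_id, date
FROM orders
WHERE updated_at >= NOW() - INTERVAL 1 HOUR;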

The choice between full and incremental loads depends on the specific use case and the characteristics of the source and target systems. Full loads are typically used when the amount of data is small, the frequency of changes is low, or when the target system needs to be refreshed completely. Incremental loads are typically used when the amount of data is large, the frequency of changes is high, or when you need to keep the target system up-to-date in near-real-time.