Learn the most important data engineering concepts that data scientists should be aware of.

As the field of data science and machine learning continues to evolve, it is increasingly evident that data engineering cannot be separated from it. Gone are the days when organizations could rely on models trained and stored in notebooks without any structure, governance, or testing. In the current age of readily available deep learning models and easy model training, the most valuable data scientists are those who are able to focus on the stability and scalability of their models, rather than just their performance on a single machine. These skills are increasingly important as organizations look to deploy models at scale and ensure their ongoing reliability.

At DareData, we believe that data scientists should be responsible for the deployment and maintenance of their own models, or at least have a strong understanding of data engineering concepts to better contribute to the lifecycle of model projects.

In this post, we'll discuss some key data engineering concepts that data scientists should be familiar with in order to be more effective in their roles: data pipelines, data storage and retrieval, data orchestrators, and infrastructure as code.

Our goal is to help data scientists better manage their model deployments or work more effectively with their data engineering counterparts, ensuring their models are deployed and maintained in a robust and reliable way.

At the end of each section we'll leave a couple of resources to help you get familiar with these concepts and boost your skills!

Data Ingestion

Data ingestion refers to the process of importing data into a system or database for storage and analysis. This can involve extracting data from various sources, such as files, operational databases, APIs or IoT data, and transforming it into a format that is suitable for storage and analysis. This can definitely be a complex process, as it often involves dealing with large volumes of data, handling errors and exceptions.

Data ingestion is also critical to ensure a continuous delivery of data to operations and models, as most of these products/solutions require a significant amount of integration and standardization. A critical part of any data ingestion process is the data pipeline: a series of steps used to gather, transform, and store data. Pipelines can be simple or complex, and they can involve multiple steps, technologies, and formats such as CSV, tabular, or JSON.

For data scientists, these skills are extremely helpful when it comes to managing and building more optimized data transformation processes, helping models achieve better speed and reliability in production. Learning SQL / NoSQL and how the major orchestrators work will definitely narrow the gap between quality model training and model deployment.
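As a minimal sketch of the extract-transform-load pattern described above (the schema, column names, and data here are hypothetical, not from a real system):

```python
import csv
import io
import sqlite3

# Hypothetical raw CSV export from an operational system.
RAW_CSV = """user_id,signup_date,country
1,2023-01-15,pt
2,2023-02-03,us
3,,us
"""

def extract(raw: str) -> list:
    """Extract: parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Transform: drop rows with missing dates, normalize country codes."""
    return [
        (int(r["user_id"]), r["signup_date"], r["country"].upper())
        for r in rows
        if r["signup_date"]  # handle missing values explicitly
    ]

def load(records: list, conn: sqlite3.Connection) -> None:
    """Load: write the cleaned records into a target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER, signup_date TEXT, country TEXT)"
    )
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)", records)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # prints 2
```

Real pipelines add logging, retries, and schema validation around exactly these three steps, but the extract/transform/load split stays the same.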

Here are some resources to help you:

Data Orchestrators

A data orchestrator is a system or tool that is responsible for managing the loading and transformation of data between different systems or components within an organization. This can include tasks such as extracting data from various sources, transforming it into a desired format, and loading it into a target system or data store. Data orchestrators are often used in the context of data pipelines but are also relevant for calling machine learning pipelines that need to be executed on a batch basis.

They help organizations automate the process of gathering and preparing data for analysis, and can make it easier to integrate data from different sources and systems by using triggers and dependencies between processes.

Some common features of data orchestrators include scheduling, error handling, the ability to handle large volumes of data, data quality assertion and lineage automation.

Some examples are:

  1. Apache Airflow: An open-source data orchestrator that enables users to define, schedule, and monitor workflows. Airflow is written in Python and has a web-based user interface for managing and monitoring pipelines.
  2. AWS Glue: A fully managed data orchestrator service offered by Amazon Web Services (AWS).
  3. Talend Data Fabric: A comprehensive data management platform that includes a range of tools for data integration, data quality, and data governance.
  4. Azure Data Factory: A cloud-based data integration service offered by Microsoft. Data Factory can be used to create, schedule, and orchestrate data pipelines that move and transform data between different systems and data stores.
  5. Digdag: An open-source orchestrator for data engineering workflows. Digdag allows users to define workflows in simple YAML syntax.
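The dependency logic at the heart of all these tools can be sketched in a few lines of plain Python using the standard library (task names here are made up; real orchestrators add scheduling, retries, and monitoring on top):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task declares the tasks it depends on,
# mirroring how orchestrators like Airflow wire DAGs together.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "train_model": {"clean"},
    "publish_report": {"clean"},
}

def run_task(name: str) -> str:
    # Stand-in for real work (a query, a script, a model training run).
    return f"ran {name}"

# static_order() yields each task only after all its dependencies,
# which is exactly the guarantee an orchestrator provides.
order = list(TopologicalSorter(dag).static_order())
results = [run_task(t) for t in order]
print(order)  # 'extract' always comes before 'clean'
```

An orchestrator is essentially this topological execution plus a scheduler, error handling, and a UI to inspect what ran and when.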

Here are a couple of resources to learn more:

Data Storage

In the context of data engineering, data storage refers to the systems and technologies that are used to store and manage data within an organization. This can include both structured and unstructured data, and can encompass a wide range of data types, including transactional data, log files, and analytical data.

Choosing the right data storage solution is a decision that encompasses many variables:

  • Availability
  • Cost
  • Data history
  • Size and complexity

Some common data storage solutions used in data engineering include databases, file stores, and cloud storage services, with some common "standards" being:

  1. Relational databases: These are data storage systems that are designed to store and manage structured data using tables and relationships. Examples of relational databases include MySQL and Microsoft SQL Server.
  2. NoSQL databases: NoSQL databases are often used for applications that require high scalability and performance, such as real-time web applications. Examples of NoSQL databases include MongoDB and Cassandra.
  3. Data lakes: These are large-scale data storage systems that are designed to store and process large amounts of raw, unstructured data. Examples of technologies commonly used to build data lakes include Amazon S3 and Azure Data Lake Storage.
  4. Data warehouses: These are specialized data storage systems that are designed to store and manage large amounts of structured data for reporting and analysis. Some examples include Amazon Redshift, Azure SQL Data Warehouse, and Google BigQuery.
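To make the relational category concrete, here is a small sketch using Python's built-in SQLite driver (the tables and data are invented for illustration):

```python
import sqlite3

# Structured data in two tables linked by a relationship.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Ana'), (2, 'Bruno');
    INSERT INTO orders VALUES (1, 1, 9.90), (2, 1, 25.00), (3, 2, 12.50);
""")

# A join across the relationship: the core strength of relational storage.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # one aggregated row per customer
```

The same query against a data lake would first require imposing a schema on raw files, which is exactly the trade-off between lakes and relational stores.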

Some resources you can use to learn about these topics:

Infrastructure as Code

Infrastructure as code (IaC) is the practice of managing and provisioning computer infrastructure using machine-readable files rather than manual processes. IaC allows organizations to automate the process of deploying and managing their IT infrastructure, including servers, networks, and other resources.

IaC also enables organizations to more easily scale their infrastructure up or down as needed, and to make changes to their data infrastructure in a controlled and reversible manner. This can be useful in many scenarios where organizations may need to process data faster or want to scale down applications during certain periods to reduce costs.
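As an illustration of what "infrastructure in a machine-readable file" looks like, here is a minimal Terraform configuration that declares a storage bucket (the bucket name, region, and tags are placeholders, not from a real project):

```hcl
# Hypothetical example: declare an S3 bucket as code.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "data_lake" {
  bucket = "example-company-data-lake" # placeholder name

  tags = {
    environment = "dev"
    managed_by  = "terraform"
  }
}
```

Running `terraform plan` previews the changes before `terraform apply` makes them, which is what makes infrastructure changes controlled and reversible.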

Overall, IaC is a valuable tool for organizations that want to improve the efficiency and reliability of their IT infrastructure. Learning Terraform or Ansible can be a fun way for data scientists to get exposed to infrastructure concepts, something that can be extremely valuable for their careers and equips them to work in multiple areas of the data value chain.

In conclusion, data engineering plays a crucial role in the field of data science and machine learning. As organizations look to deploy models at scale and ensure their ongoing reliability, it's very important for data scientists to have a strong understanding of data engineering concepts, as that will definitely improve the value of their models.

Looking to work in different stages of data science and machine learning? Join us! We're looking for data scientists and engineers eager to create value from data.