Deploy ETL Data Pipelines in Amazon Web Services using Azure DevOps

In today’s data-driven world, the efficient extraction, transformation, and loading (ETL) of data is critical for organisations to derive actionable insights and drive informed decision-making. However, managing ETL pipelines can be complex, involving various tasks such as code deployment, testing, and monitoring. 

This blog post explores how we can streamline the deployment of ETL data pipelines in Amazon Web Services (AWS) using Azure DevOps. With its powerful suite of Continuous Integration and Continuous Deployment (CI/CD) tools, Azure DevOps offers a robust platform for automating the deployment process, ensuring reliability, scalability, and efficiency in managing ETL workflows. 

Throughout this post, we will delve into the fundamental concepts of ETL, the benefits of using Azure DevOps for CI/CD, and the steps involved in setting up and deploying ETL data pipelines in AWS. By the end, you will have a solid understanding of how to leverage Azure DevOps to deploy ETL data pipelines to the AWS Cloud. 

Section 1: Understanding ETL Data Pipelines

ETL, or Extract, Transform, Load, is a fundamental process in data management used to gather, prepare, and transfer data from various sources to a destination, typically a data warehouse or database. 

The Role of ETL in Data Processing Workflows:

  1. Extraction (E): This first phase involves retrieving data from external sources such as databases, files, APIs, or streaming platforms. The aim is to collect all relevant data needed for analysis or reporting purposes. Extraction methods may include querying databases, parsing files, or streaming real-time data feeds. 
  2. Transformation (T): Once extracted, the data is transformed to ensure it aligns with the requirements of the target system. This step involves cleansing the data to remove inconsistencies or errors, standardising formats, aggregating or summarising information, and enriching datasets with added context or calculations. Transformations aim to make the data consistent, reliable, and suitable for downstream processing.
  3. Loading (L): In the final phase, the transformed data is loaded into the target destination, such as a data warehouse or database. Loading involves inserting the prepared data into the destination system efficiently and securely. This step may include data validation to ensure accuracy, managing dependencies between datasets, and optimising the loading process for performance. 
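
To make the three phases concrete, here is a minimal, illustrative Python sketch of a single ETL step using pandas. The file paths and column names are hypothetical placeholders and are not part of the pipeline built later in this post.

import pandas as pd

# Extract: read raw records from a source system (a CSV file in this toy example).
raw = pd.read_csv("raw/orders.csv")

# Transform: cleanse, standardise, and enrich the data.
clean = (
    raw.dropna(subset=["order_id", "amount"])                               # remove incomplete rows
       .assign(order_date=lambda df: pd.to_datetime(df["order_date"]))      # standardise date formats
       .assign(amount_rounded=lambda df: df["amount"].round(2))             # example calculation/enrichment
)

# Load: write the prepared data to the target store (a local file here, a warehouse in practice).
clean.to_csv("processed/orders.csv", index=False)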

The significance of ETL in data processing lies in:

  • Data Integration: ETL is crucial in integrating data from disparate sources, enabling organisations to combine and analyse data from multiple systems. 
  • Data Quality: ETL processes often include data cleansing and validation steps to improve the quality and accuracy of the data. 
  • Decision Making: ETL transforms raw data into actionable insights, providing decision-makers with the information they need to make informed decisions. 
  • Business Intelligence: ETL feeds data into business intelligence tools and analytics platforms, enabling organisations to derive valuable insights and trends from their data. 
  • Regulatory Compliance: ETL processes can help ensure compliance with data governance and regulatory requirements by standardising and centralising data storage. 

Section 2: Introduction to Azure DevOps

The Choice between Azure DevOps and AWS Developer Tools:

The choice between Azure DevOps and AWS Developer Tools such as CodeCommit, CodeBuild, CodeDeploy, and CodePipeline depends on the organisation’s specific requirements, preferences, constraints, and considerations related to infrastructure, features, integration, cost, and team expertise.  

Here are some scenarios where you might choose Azure DevOps over AWS Developer Tools: 

  1. Azure Ecosystem: If your organisation primarily uses Microsoft Azure for cloud services, Azure DevOps may be a more natural fit due to its tight integration with Azure services. Azure DevOps provides seamless integration with Azure Repos, Azure Boards, Azure Artifacts, and Azure Test Plans, making it easier to manage your entire development lifecycle within the Azure ecosystem. 
  2. Active Directory (AD): If your organisation uses on-premises Active Directory, Azure DevOps provides flexible options for integrating both on-premises Active Directory and Azure Active Directory (now Microsoft Entra ID, an integrated cloud identity and access solution), allowing organisations to leverage their existing identity infrastructure while giving users a seamless authentication and authorisation experience.  
  3. Microsoft Technologies: If your development stack relies heavily on Microsoft technologies such as .NET, Visual Studio, or SQL Server, you may prefer Azure DevOps for its native support and deep integration with Microsoft tools and frameworks. Azure DevOps provides built-in support for .NET projects, Visual Studio integration, and Azure-specific features like Azure Functions and Azure App Service deployment. 
  4. Team Collaboration: Azure DevOps offers robust features for team collaboration and project management, including agile planning tools, Kanban boards, and customisable dashboards. If your team values comprehensive project management capabilities alongside CI/CD, Azure DevOps provides a more holistic solution for managing the entire software development lifecycle. 
  5. Built-in Artifacts and Test Plans: Azure DevOps includes built-in support for package management (Azure Artifacts) and test management (Azure Test Plans), seamlessly integrated with CI/CD pipelines. If your project requires comprehensive artifact management and test automation capabilities, Azure DevOps provides a unified platform for managing these aspects alongside CI/CD. 
  6. Extensive Marketplace: Azure DevOps Marketplace offers various third-party integrations and extensions, allowing you to customise and extend your CI/CD workflows with additional tools and services. If your project requires specific integrations or customisations, Azure DevOps’ extensive marketplace may offer the flexibility you need.

The choice of Azure DevOps over AWS Developer Tools depends on factors such as your organisation’s existing technology stack, cloud provider preference, team collaboration requirements, and project-specific needs. While both tools offer robust CI/CD capabilities, Azure DevOps may be preferred in scenarios where Microsoft technologies and Azure services are predominant or where comprehensive project management and collaboration features are essential. 

This blog uses Azure DevOps as the Continuous Integration/Continuous Deployment (CI/CD) tool. 

Overview of Azure DevOps:

Azure DevOps is a comprehensive platform offering software development and delivery tools, focusing on Continuous Integration/Continuous Deployment (CI/CD) practices.  

At the core of Azure DevOps are three key components: Azure Repos, Pipelines, and Artifacts.  

  • Azure Repos provides Git repositories for version control.  
  • Azure Pipelines enables teams to automate the build, test, and deployment of their applications, supporting a wide range of languages and platforms. With Pipelines, developers can define workflows that automatically trigger builds, run tests, and deploy applications to target environments. 
  • Azure Artifacts is a package management service that lets teams store and share artifacts such as binaries, packages, and dependencies across their CI/CD pipelines, providing a centralised repository that keeps deployments consistent and reliable. It maintains version history for packages, allowing teams to track changes, roll back to previous versions, and ensure reproducible builds. Within Azure DevOps CI/CD pipelines, Azure Artifacts plays a crucial role by providing a reliable, efficient way to manage and share software packages and dependencies, supporting the automation and scalability of software delivery. 

Section 3: High-Level Architecture

This high-level architecture outlines the end-to-end setup for deploying AWS ETL pipelines using Azure DevOps. 

AWS ETL Services:

In this blog, I have used the following AWS services to build ETL (Extract, Transform, Load) pipelines, taking advantage of the cloud’s scalability, flexibility, and reliability: 

Amazon S3 (Simple Storage Service): 

  1. Raw Data Bucket: Store raw data files in an S3 bucket. This bucket is the first landing zone for data ingested from various sources.
  2. Processed Data Bucket: Store the transformed data in another S3 bucket after data processing. This bucket holds the processed data ready for analysis or consumption by downstream applications.
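
As a rough illustration (in a real setup the buckets would more likely be provisioned by the CloudFormation template deployed in Section 4), the two buckets could be created with boto3 as follows; the bucket names and region are hypothetical.

import boto3

s3 = boto3.client("s3", region_name="ap-southeast-2")

# Hypothetical bucket names; S3 bucket names must be globally unique.
RAW_BUCKET = "demo-etl-raw-data"
PROCESSED_BUCKET = "demo-etl-processed-data"

# Create the landing zone for ingested files and the bucket for transformed output.
for bucket in (RAW_BUCKET, PROCESSED_BUCKET):
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": "ap-southeast-2"},
    )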

AWS Lambda: 

  1. Use AWS Lambda to perform lightweight data processing tasks, such as data validation, filtering, or enrichment.
  2. Lambda functions are triggered by events such as S3 object uploads to the raw data bucket or AWS Glue job completion notifications. 
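
A minimal sketch of such a Lambda function is shown below. It assumes an S3 object-created trigger on the raw data bucket, a hypothetical PROCESSED_BUCKET environment variable, and JSON input files; it performs a simple validation/filter step before writing the result to the processed bucket.

import json
import os

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # PROCESSED_BUCKET is a hypothetical environment variable configured on the function.
    processed_bucket = os.environ["PROCESSED_BUCKET"]

    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Extract: read the newly uploaded raw object.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        rows = json.loads(body)

        # Transform: keep only rows that pass a basic validation check.
        valid_rows = [row for row in rows if row.get("order_id") and row.get("amount")]

        # Load: write the validated data to the processed data bucket.
        s3.put_object(
            Bucket=processed_bucket,
            Key=f"validated/{key}",
            Body=json.dumps(valid_rows).encode("utf-8"),
        )

    return {"processed_objects": len(event["Records"])}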

AWS Glue: 

  1. AWS Glue is a fully managed ETL service that simplifies the process of building, managing, and running ETL workflows.
  2. Define Glue jobs to extract data from the raw data bucket, perform transformations using Apache Spark, and load the processed data into the processed data bucket.
  3. Glue crawlers automatically discover schema and metadata from data stored in S3, making it easier to work with diverse data formats. 
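
A skeleton of such a Glue (PySpark) job is sketched below; the job argument names and S3 paths are placeholders, and the transformation is deliberately trivial.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# RAW_PATH and PROCESSED_PATH are hypothetical job arguments (e.g., s3://... URIs).
args = getResolvedOptions(sys.argv, ["JOB_NAME", "RAW_PATH", "PROCESSED_PATH"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw files from the raw data bucket.
raw_df = spark.read.json(args["RAW_PATH"])

# Transform: a simple cleanse/standardise step using Spark.
processed_df = raw_df.dropDuplicates().na.drop(subset=["order_id"])

# Load: write the processed data to the processed data bucket in Parquet format.
processed_df.write.mode("overwrite").parquet(args["PROCESSED_PATH"])

job.commit()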

Amazon SNS (Simple Notification Service): 

  1. Use SNS to send notifications about job status updates, errors, or alerts related to ETL workflows.
  2. Subscribe to SNS topics to receive notifications via email, SMS, or other supported protocols. 
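
For example, a job status notification could be published to a topic with boto3 roughly as follows; the topic ARN, account id, and job name are placeholders.

import boto3

sns = boto3.client("sns", region_name="ap-southeast-2")

# Placeholder topic ARN; email/SMS subscribers to this topic receive the message.
TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:etl-pipeline-notifications"

sns.publish(
    TopicArn=TOPIC_ARN,
    Subject="ETL pipeline status",
    Message="Glue job 'etl-pipeline-job' completed successfully.",
)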

Amazon Athena: 

  1. Amazon Athena is an interactive query service that enables us to analyse data directly from S3 using SQL queries.
  2. Query the processed data stored in the processed data bucket using Athena to gain insights, perform ad-hoc analysis, or generate reports.
  3. Athena integrates seamlessly with Glue Catalog, allowing us to query data catalogued by Glue crawlers. 
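
An ad-hoc query against the processed data could be submitted with boto3 along these lines; the database, table, and results location are placeholders assumed to have been created by a Glue crawler and the S3 setup above.

import boto3

athena = boto3.client("athena", region_name="ap-southeast-2")

# Placeholder database/table catalogued by a Glue crawler, plus an S3 location for query results.
response = athena.start_query_execution(
    QueryString="SELECT order_id, amount FROM processed_orders LIMIT 10",
    QueryExecutionContext={"Database": "etl_demo_db"},
    ResultConfiguration={"OutputLocation": "s3://demo-etl-processed-data/athena-results/"},
)

print("Query execution id:", response["QueryExecutionId"])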

Section 4: Setting Up AWS ETL Pipelines in Azure DevOps

Setting up and configuring Azure DevOps YAML CI/CD pipelines for AWS services involves several steps.  

Here is a high-level overview of the process: 

Prerequisites: 

  1. An Azure DevOps account or an on-premises Azure DevOps Server instance.
  2. An Azure DevOps organisation and project set up.
  3. A repository in the project containing the build definition template (azure-pipelines.yml).
  4. Access to an AWS account with the necessary permissions to deploy AWS services.
  5. The AWS Toolkit for Azure DevOps installed and set up. More information is available in the AWS Toolkit for Azure DevOps documentation.

Define Pipeline Steps in YAML: 

  1. Use Azure DevOps YAML syntax to specify the stages, jobs, tasks, and triggers for the CI/CD pipeline.

     A sample azure-pipelines.yml file is as follows: 

# File: azure-pipelines.yml

# Run the pipeline on a Microsoft-hosted Ubuntu agent
pool:
  vmImage: ubuntu-latest

trigger:
  branches:
    include:
      - master
      - develop

parameters:
  - name: awsServiceConn
    displayName: Azure DevOps AWS Service connection name
    type: string
    default: ""

  - name: region
    displayName: AWS Region
    type: string
    default: ap-southeast-2

  - name: deploymentEnvironment
    displayName: Deployment Environment
    type: string
    default: development
    values:
      - development
      - testing
      - staging
      - production

stages:
  - stage: Quality
    jobs:
      - job: Build
        steps:
          - script: |
              # Run code static checks (e.g., linters, code analyzers)
              # Replace the placeholder commands with actual static analysis commands
              pylint src
              flake8 src
            displayName: Code Static Checks

          - script: |
              # Your build commands here
            displayName: Build

  - stage: SecurityScans
    dependsOn: Quality
    jobs:
      - job: Security
        steps:
          - script: |
              # Run vulnerability checks (e.g., Snyk, OWASP dependency-check)
              # Replace the placeholder commands with actual vulnerability scanning commands
              npm install -g snyk
              snyk test
            displayName: Vulnerability Checks

          - task: SonarQubePrepare@5
            inputs:
              SonarQube: YourSonarqubeServerEndpoint
              scannerMode: CLI
              configMode: manual
              cliProjectKey: YourProjectKey

          - task: SonarQubeAnalyze@5

          - task: SonarQubePublish@5
            inputs:
              pollingTimeoutSec: "300"

  - stage: Testing
    dependsOn: SecurityScans
    jobs:
      - job: RunTests
        steps:
          - script: |
              # Run tests (e.g., unit tests, integration tests)
              # Replace the placeholder commands with actual test commands
              pytest tests/
            displayName: Run Tests

  - stage: Publish
    dependsOn: Testing
    jobs:
      - job: Publish
        steps:
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: etl-pipeline-artefacts
            displayName: Publish Artifact

  - stage: DeployAWS
    dependsOn: Publish
    jobs:
      - job: DeployETLPipeline
        steps:
          - download: current
            artifact: etl-pipeline-artefacts
            displayName: Download Artifact

          - task: CloudFormationCreateOrUpdateStack@1
            displayName: "Create/Update Stack: Demo ETL DataPipeline Deployment"
            inputs:
              awsCredentials: ${{ parameters.awsServiceConn }}
              regionName: ${{ parameters.region }}
              stackName: etl-pipeline
              useChangeSet: true
              changeSetName: etl-pipeline-changeset
              templateFile: cfn-templates/etl-pipeline-template.yml
              templateParametersFile: ${{ parameters.deploymentEnvironment }}/etl-pipeline-parameters.json
              captureStackOutputs: asVariables
              captureAsSecuredVars: false
Commit the azure-pipelines.yml file: 

  1. Commit and push the YAML pipeline configuration to the Azure repository. 
  2. Merge the pull request into the master branch; the branch trigger defined in azure-pipelines.yml then starts the build and deployment automatically.

Monitor Pipeline Execution: 

  1. Monitor pipeline execution in the Azure DevOps portal. 
  2. Review builds and deployment logs for errors or warnings. 
  3. Debug and troubleshoot any issues encountered during pipeline execution.

Enabling Continuous Deployment: 

  1. Continuously iterate and improve the pipeline configuration based on feedback and evolving project requirements.   

Section 5: Conclusion

Deploying ETL (Extract, Transform, Load) data pipelines in AWS using Azure DevOps enables organisations to streamline deployment and management, providing: 

  1. Scalability: AWS provides scalable infrastructure for ETL pipelines, allowing us to handle large volumes of data efficiently. Azure DevOps enables seamless deployment and scaling of ETL pipeline resources based on demand. 
  2. Cost-effectiveness: With AWS’s pay-as-you-go model, we only pay for the resources we use. Azure DevOps supports cost-effective deployment by automating resource provisioning and management, optimising resource utilisation, and avoiding manual intervention. 
  3. Flexibility: AWS services for building and deploying ETL pipelines, such as AWS Glue, Amazon Redshift, and Amazon EMR, offer flexibility. Azure DevOps supports the integration and deployment of these services, providing flexibility in choosing the right tools for the ETL requirements. 
  4. Automation: Azure DevOps automates the deployment process, streamlining ETL pipeline deployment and reducing the risk of errors. It enables us to define deployment pipelines as code using YAML, ensuring consistency and reproducibility across environments. 
  5. Integration: Azure DevOps integrates seamlessly with AWS services, allowing us to use the full capabilities of AWS for ETL pipeline deployment. We can use Azure DevOps to automate the deployment of AWS resources, manage dependencies, and orchestrate complex workflows. 
  6. Security: AWS and Azure DevOps prioritise security, offering robust security features and compliance certifications. By deploying ETL pipelines in AWS using Azure DevOps, we benefit from a secure and compliant deployment environment with support for encryption, access controls, and auditing. 
  7. Monitoring and Logging: AWS provides monitoring and logging capabilities for ETL pipelines through services like Amazon CloudWatch and AWS CloudTrail. Azure DevOps integrates with these services, enabling us to monitor pipeline performance, track changes, and troubleshoot issues effectively. 
  8. Collaboration and Visibility: Azure DevOps promotes collaboration among development, operations, and data teams by providing a centralised platform for managing ETL pipeline deployments. It offers visibility into pipeline status, deployment history, and performance metrics, easing cross-team communication and collaboration. 

Section 6: ETL Data Pipeline Monitoring and Alerting

Monitoring and alerting are critical components of ETL (Extract, Transform, Load) data pipelines, ensuring their reliability, performance, and data integrity. This post does not cover configuring or setting up monitoring and alerting for the ETL data pipelines described in the sections above. For more information, please refer to the Monitor data pipelines in a serverless data lake blog.

Section 7: Additional Resources

Enjoyed this blog?

Share it with your network!
