dbt Core vs dbt Platform: Modern Data Transformation with dbt (Part 1)

dbt Core and the dbt Platform use the same transformation engine but differ in deployment, governance, and operational approach. This blog explores where dbt fits in the modern data stack, the strengths of each option, and how teams can choose the right solution based on team size, maturity, and workflow requirements. A short sketch of the shared engine follows below.
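To make "same engine, different wrapper" concrete, here is a minimal sketch using dbt Core's programmatic entry point (available from dbt-core 1.5); the "staging" selector and the error handling are illustrative assumptions, not examples from the post.

```python
# Minimal sketch: invoking the shared dbt engine directly from Python.
# Assumes dbt-core >= 1.5 and a configured dbt project and profile;
# the "staging" selector is a hypothetical placeholder.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# With dbt Core you own this invocation plus its scheduling, logging,
# and alerting; the dbt Platform wraps the same engine in managed tooling.
result: dbtRunnerResult = dbt.invoke(["build", "--select", "staging"])

if not result.success:
    raise SystemExit(f"dbt build failed: {result.exception}")
```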
On-Demand dbt Execution: Rethinking Analytics Engineering in Secure Cloud Environments

In secure enterprise cloud environments, traditional dbt deployment models can introduce unnecessary cost, security risk, and operational friction. This blog explores an on-demand, containerised dbt execution model that treats dbt as an ephemeral workload rather than a long-running service. Orchestrated with Amazon MWAA and backed by ECS on Fargate, the approach enables scalable, secure analytics transformations while improving cost efficiency, data quality observability, and CI/CD integration in modern enterprise data lakes.
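As a rough illustration of the pattern (a sketch under assumptions, not code from the post), an MWAA DAG can launch dbt as a short-lived Fargate task that exits when the run finishes; the cluster, task definition, container name, and subnet below are hypothetical placeholders.

```python
# Sketch: an Airflow (2.4+) DAG that runs dbt as an ephemeral ECS Fargate
# task, so no dbt service is left running between schedules. All resource
# names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.ecs import EcsRunTaskOperator

with DAG(
    dag_id="dbt_on_demand",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_dbt = EcsRunTaskOperator(
        task_id="run_dbt",
        cluster="dbt-cluster",
        task_definition="dbt-runner",
        launch_type="FARGATE",
        # Override the container command so one task definition can serve
        # multiple dbt invocations (build, test, snapshot, ...).
        overrides={
            "containerOverrides": [
                {"name": "dbt", "command": ["dbt", "build", "--target", "prod"]}
            ]
        },
        network_configuration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-private-1"],
                "assignPublicIp": "DISABLED",
            }
        },
    )
```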
Cevo’s adoption of the AWS EBA Framework for our clients

Digital transformation often stalls between strategy and execution. AWS Experience-Based Accelerators give organisations a safe, hands-on way to test cloud, modernisation and AI initiatives before committing to scale. Learn how Cevo delivers partner-led EBAs that turn experience into confidence, and confidence into action.
Enhance Kubernetes Cluster Performance and Optimise Cost with Karpenter – Part 3

Discover how Karpenter enhances Kubernetes cluster performance and optimises costs in EKS. Learn how to monitor Karpenter using logs, metrics, and dashboards to gain full observability, identify scaling bottlenecks, and fine-tune your workloads for reliable, cost-efficient operations.
Enhance Kubernetes Cluster Performance and Optimise Cost with Karpenter – Part 2

Learn how to migrate from Kubernetes Cluster Autoscaler to Karpenter on Amazon EKS to improve scaling speed, reduce infrastructure costs, and increase resource efficiency. This deep dive also explores EKS Auto Mode, showing how cloud-managed autoscaling can further reduce operational overhead while improving security and reliability.
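To give a flavour of what the migration involves (a sketch under assumptions, not the post's manifests): Karpenter replaces Cluster Autoscaler's node groups with NodePool resources that describe what nodes may look like. The example below registers a minimal NodePool via the Kubernetes Python client, assuming Karpenter v1 CRDs are installed and an EC2NodeClass named "default" already exists.

```python
# Sketch: create a minimal Karpenter v1 NodePool with the Kubernetes
# Python client. Assumes a working kubeconfig, Karpenter v1 CRDs, and an
# existing EC2NodeClass named "default" (all names are placeholders).
from kubernetes import client, config

config.load_kube_config()

node_pool = {
    "apiVersion": "karpenter.sh/v1",
    "kind": "NodePool",
    "metadata": {"name": "default"},
    "spec": {
        "template": {
            "spec": {
                # Let Karpenter choose spot first where workloads tolerate it.
                "requirements": [
                    {
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": ["spot", "on-demand"],
                    }
                ],
                "nodeClassRef": {
                    "group": "karpenter.k8s.aws",
                    "kind": "EC2NodeClass",
                    "name": "default",
                },
            }
        },
        # Cap total provisioned CPU so a runaway workload can't scale costs.
        "limits": {"cpu": "100"},
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools", body=node_pool
)
```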
Kafka ZooKeeper to KRaft: The Next Chapter in Apache Kafka (MSK and Kafka 3.9 Support)

Apache Kafka’s move from ZooKeeper to KRaft simplifies metadata management, boosts scalability, and accelerates failover. Learn how Kafka 3.9 on Amazon MSK leverages Raft consensus to streamline operations and support large-scale event streaming.
AWS re:Invent 2025 Recap

AWS re:Invent 2025 marked a major shift in Agentic AI, with tangible solutions like Bedrock AgentCore, AWS Security Agent, and DevOps Agent transforming how AI agents are orchestrated and deployed. Security Hub went GA, multimodal AI capabilities advanced, and hands-on workshops highlighted real-world applications. This recap covers the key announcements, innovations, and human connections that made re:Invent 2025 a landmark event for AWS builders and partners.
AWS re:Invent 2025 Key Announcements

AWS re:Invent 2025 delivered one of the most transformative years of innovation yet, signalling a major shift toward AI-native cloud platforms, frontier-scale infrastructure and streamlined developer experiences. From AgentCore and Nova 2 to Trainium3 UltraServers, S3 Vectors and multicloud networking with Google Cloud, AWS introduced capabilities that will fundamentally change how engineering teams build, automate and scale. In this blog, we break down the biggest announcements and what they mean for organisations looking to modernise, innovate and move faster with confidence.
Continuous Evolution: Building Adaptability in the Age of AI

In the age of AI, standing still is not an option. Continuous evolution and adaptability are essential for organisations to thrive, harness technology, and turn rapid change into a competitive advantage.
Transforming Data Engineering with DevOps on the Databricks Platform

The role of the Data Engineer is rapidly changing, from writing ETL scripts to engineering production-grade data products. On the Databricks Lakehouse Platform, this shift demands more than technical know-how; it requires a DevOps mindset. By embracing software engineering best practices, automated testing, and CI/CD pipelines, data teams can deliver scalable, reliable, and secure solutions. This blog explores how DevOps principles and tools like Git Folders and Databricks Asset Bundles are transforming data engineering into a discipline of continuous innovation and delivery.
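As one small, hedged example of that mindset (assuming a repo containing a databricks.yml bundle definition and the modern Databricks CLI on PATH; the "prod" target name is a placeholder), a CI step can validate and deploy a Databricks Asset Bundle from Python:

```python
# Sketch of a CI step that validates and deploys a Databricks Asset
# Bundle. Assumes the Databricks CLI with bundle support is installed
# and the repo contains a databricks.yml; "prod" is a hypothetical target.
import subprocess

for cmd in (
    ["databricks", "bundle", "validate"],
    ["databricks", "bundle", "deploy", "--target", "prod"],
):
    subprocess.run(cmd, check=True)  # fail the pipeline on any error
```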