A leading energy services provider teamed up with Zemoso Labs to tackle the fragmented systems problem in its carbon capture, utilization, and storage (CCUS) program. Siloed monitoring and compliance gaps were slowing progress. We built a unified digital platform that pulls sensor data from wells, fields, and plants into real-time dashboards. These dashboards bring risk management and alerting into the workflow, giving operators the clarity they need to act quickly.
The results: safer operations, less downtime, lower operating costs through automation, and a stronger compliance footing. All of this fed directly into the client’s net-zero commitments and opened the door to scaling CCUS programs with confidence.
For oil and gas companies, CCUS is no longer a side project. It’s a key part of the energy transition. Yet building CCUS at scale is easier said than done. The industry runs into the same set of roadblocks again and again:
When these issues pile up, operators pay the price. Costs climb, decision-making slows, and regulators begin to ask hard questions. What the industry needs is not just more data, but a way to connect it all into a real-time backbone that’s built for safety, compliance, and financial sustainability.
Our client’s CCUS teams were working with disconnected systems — raw CSV files from well, field, and plant sensors sat in silos with little processing. Existing tools couldn’t bring those signals together into a single dashboard, and alerting was inconsistent at best.
The ask was straightforward but far from simple:
On top of the technical hurdles, there was the human side—aligning storage, capture, and transport teams under a shared system, while keeping regulators confident and data secure.
Zemoso Labs built a digital backbone specifically for CCUS—secure, scalable, and able to bring together signals from across the operation. The real breakthrough was weaving subsurface-to-surface data into one system, while tying every alert to a risk framework. That meant engineers weren’t just flooded with alarms—they had traceable issues connected to hazards and controls.
1. Unified Data and Visibility
The platform brought together well, field, and plant sensor data into a single system powered by Databricks for ingestion and aggregation. Raw CSV data (pressure, temperature, flow, acoustic signals, and even microseismic activity) was processed into streams that could be easily queried. On the front end, Angular, Plotly, and Leaflet enabled world-map dashboards, KPI cards, and drill-down asset views. The platform could give both operators in the field and executives in the boardroom the same “single pane of glass,” eliminating the need to manually stitch together subsurface and surface data from multiple tools.
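To make the ingestion step concrete, here is a minimal sketch of how a raw CSV sensor row might be validated and parsed into a typed reading before being pushed into a stream. The column layout, field names, and units are hypothetical, not taken from the client's actual schema:

```typescript
// Hypothetical CSV layout: timestamp,assetId,pressureKpa,tempC,flowM3h
interface SensorReading {
  timestamp: string;
  assetId: string;
  pressureKpa: number;
  tempC: number;
  flowM3h: number;
}

function parseSensorRow(row: string): SensorReading | null {
  const parts = row.split(",").map((s) => s.trim());
  if (parts.length !== 5) return null; // skip malformed rows
  const [timestamp, assetId, pressure, temp, flow] = parts;
  const pressureKpa = Number(pressure);
  const tempC = Number(temp);
  const flowM3h = Number(flow);
  // Reject rows whose numeric fields fail to parse
  if ([pressureKpa, tempC, flowM3h].some(Number.isNaN)) return null;
  return { timestamp, assetId, pressureKpa, tempC, flowM3h };
}
```

Returning `null` for malformed rows (rather than throwing) lets a streaming pipeline quarantine bad records for later review instead of halting ingestion.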
2. Proactive Risk Detection and Management
A structured risk management framework was embedded into the platform, allowing users to create, categorize, and score risks tied to specific assets. Teams could track hazards, define mitigations, and measure pre- and post-mitigation effectiveness, while alerts were directly connected to these risks for traceability. Unlike conventional monitoring systems where alerts often pile up as noise, this approach turned each anomaly into an actionable event linked to a defined control. That meant engineers could respond with clarity, knowing what the issue was, how severe it might become, and which steps had already been planned to address it.
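The pre- and post-mitigation scoring described above can be sketched with a simple likelihood-by-severity matrix. This is an illustrative model only; the actual framework, scales, and field names in the platform are assumptions here:

```typescript
// Illustrative 5x5 risk matrix: score = likelihood x severity.
interface Risk {
  assetId: string;
  hazard: string;
  likelihood: number; // 1 (rare) .. 5 (almost certain)
  severity: number;   // 1 (negligible) .. 5 (catastrophic)
  mitigatedLikelihood?: number; // expected values after controls are applied
  mitigatedSeverity?: number;
}

function riskScore(likelihood: number, severity: number): number {
  return likelihood * severity;
}

// Fraction of the pre-mitigation score eliminated by planned controls.
function mitigationEffectiveness(risk: Risk): number {
  const pre = riskScore(risk.likelihood, risk.severity);
  const post = riskScore(
    risk.mitigatedLikelihood ?? risk.likelihood,
    risk.mitigatedSeverity ?? risk.severity,
  );
  return pre === 0 ? 0 : (pre - post) / pre;
}
```

Tying each alert to a `Risk` record like this is what gives the traceability described above: an anomaly arrives already linked to a hazard, a score, and a planned control.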
3. Compliance-first Framework
Workflows were designed from the ground up to support emissions reporting and regulatory requirements. Alerts could be tied to thresholds such as CO₂ injection pressures or flow anomalies, while data exports in CSV format were structured to meet audit needs. Plans for role-based access control further strengthened governance. Compliance wasn’t bolted on as an afterthought. It was baked directly into daily workflows, which cut down audit preparation time and reduced the risk of non-compliance penalties.
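A threshold check of the kind described, such as flagging CO₂ injection pressures outside a permitted band, might look like the sketch below. The band limits, units, and message format are hypothetical placeholders, not the client's actual regulatory thresholds:

```typescript
// Hypothetical permitted operating band for a compliance threshold.
interface Threshold {
  min: number;
  max: number;
  unit: string;
}

interface ComplianceAlert {
  assetId: string;
  message: string;
}

function checkInjectionPressure(
  assetId: string,
  pressure: number,
  limit: Threshold,
): ComplianceAlert | null {
  if (pressure < limit.min || pressure > limit.max) {
    return {
      assetId,
      message:
        `Injection pressure ${pressure} ${limit.unit} outside permitted band ` +
        `${limit.min}-${limit.max} ${limit.unit}`,
    };
  }
  return null; // within the permitted band: no alert
}
```

Because every alert carries the asset and the violated band, the same records that drive operator notifications can be exported as structured CSV for audit, which is how compliance stays "baked in" rather than bolted on.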
4. Scalable Cloud-native Architecture
The system was built on 18 microservices orchestrated with Kubernetes, supported by a backend stack of NodeJS, Express, and TypeScript. Redis caching accelerated high-frequency queries, Azure Blob Storage handled large sensor datasets, and RabbitMQ with Dapr ensured reliable messaging across services. CI/CD pipelines through Jenkins and GitHub kept deployments secure and frequent. This combination kept the system agile and prepared the platform to handle future CCUS expansion without costly overhauls.
5. Advanced Security and Reliability
Sensitive operational data was safeguarded through a multi-layered security model. Static and dynamic code testing (Checkmarx, DAST), cloud monitoring (Prisma), and vulnerability scanning (AVScan, Blackduck) worked alongside secure Docker images and encrypted data transmission to protect against threats. Rather than relying on a single defensive measure, this layered approach combined cyber protections with safeguards for operational integrity. For clients managing highly regulated CCUS projects, this offered the peace of mind that both regulators and investors demand.
This partnership turned CCUS monitoring from a maze of disconnected tools into a single, secure, and scalable platform. More than just a technology shift, it tied safety, compliance, and cost savings together in ways that built long-term confidence.
As CCUS expands worldwide, the client now has a foundation that supports regulatory resilience, operational continuity, and growth—making net-zero ambitions less of a promise and more of a plan.