Prescient
Publications
Conference Papers
-
Lei Wang, Yakov Krasnikov, Andy Wang, Kartik Chaudhari
Abstract (Excerpt)
Drilling rig equipment operates under highly variable load conditions, making traditional sensor-only Condition-Based Maintenance (CBM) systems prone to false alarms and limited in predictive capability. This paper presents a holistic, context-aware CBM framework that integrates multisensing technologies - ultrasound, vibration, and temperature - with operational context data such as rig states and Electronic Drilling Recorder (EDR) channels. The goal is to enable both early fault detection and long-term Remaining Useful Life (RUL) estimation through a scalable digital twin architecture.
Paper Number: SPE-228115-MS
https://doi.org/10.2118/228115-MS
Published: October 13, 2025
-
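As a purely illustrative sketch of the context-aware gating idea in the SPE-228115-MS abstract above (all names, rig states, and thresholds here are hypothetical, not taken from the paper), suppressing a sensor alarm during transient rig states might look like:

```python
from dataclasses import dataclass

# Hypothetical illustration of context-aware alarm gating: a raw vibration
# threshold alarm is raised only when the rig state indicates steady
# operation, suppressing false alarms during expected load transients.

TRANSIENT_STATES = {"tripping", "connection", "reaming"}  # assumed labels

@dataclass
class Reading:
    vibration_g: float   # vibration amplitude (g)
    rig_state: str       # rig-state channel, e.g. derived from the EDR

def gated_alarm(reading: Reading, threshold_g: float = 2.0) -> bool:
    """Alarm only if vibration exceeds the threshold while the rig is in
    a steady operating state; transient states are expected load spikes."""
    if reading.rig_state in TRANSIENT_STATES:
        return False  # context says this spike is expected
    return reading.vibration_g > threshold_g

# The same spike is suppressed during tripping but alarms while drilling.
print(gated_alarm(Reading(3.1, "tripping")))  # False
print(gated_alarm(Reading(3.1, "drilling")))  # True
```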
K. Chaudhari, R. Whitney, P. Acosta, A. Wang
Abstract
The advent of artificial intelligence (AI) and machine learning (ML) solutions enables detailed time-series data analysis previously thought impractical, if not impossible. This paper highlights the development and deployment of a Transformer-based Asset Life Model (ALM) that outperforms previously deployed classical machine-learning approaches for predicting drilling equipment component lifetimes. The use of advanced normalization methods enables consistent performance comparisons across a wide array of assets that may be deployed on different rig types and configurations, in different geologic basins, and under varying operational conditions, while offering a flexible framework to accommodate future growth in data volume and diversity. Furthermore, manufacturer attributes, such as make and model, can be efficiently incorporated and compared. These features form the cornerstone of insights that inform supply chain and operations optimization efforts, which have resulted in a 40% reduction in operating costs. This paper presents a novel Context-Conditioned Normalization (CCN) layer and the supporting technical stack - from data governance through hyperparameter optimization, ML Operations (MLOps), and field rollout - offering a blueprint for industrial-scale deep learning.
Paper Number: SPE-227949-MS
https://doi.org/10.2118/227949-MS
Published: October 13, 2025
-
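The SPE-227949-MS abstract above does not disclose the CCN layer's internals. A minimal sketch of one plausible reading (per-context affine parameters applied after standard feature-wise normalization; every name and detail here is hypothetical) is:

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation): a normalization
# layer whose scale and shift are selected by a categorical context such as
# rig type, so assets from different configurations map to a comparable scale.

class ContextConditionedNorm:
    def __init__(self, n_features: int, contexts: list[str]):
        # one (here identity-initialized) affine pair per context
        self.gamma = {c: np.ones(n_features) for c in contexts}
        self.beta = {c: np.zeros(n_features) for c in contexts}

    def __call__(self, x: np.ndarray, context: str, eps: float = 1e-5) -> np.ndarray:
        # standard feature-wise normalization over the batch...
        mu = x.mean(axis=0)
        sigma = x.std(axis=0)
        x_hat = (x - mu) / (sigma + eps)
        # ...followed by a context-selected affine transform
        return self.gamma[context] * x_hat + self.beta[context]

ccn = ContextConditionedNorm(n_features=3, contexts=["super_spec", "legacy"])
x = np.random.default_rng(0).normal(size=(8, 3))
out = ccn(x, "super_spec")
print(out.mean(axis=0).round(6))  # each feature normalized to ~0 mean
```

Conditioning the affine parameters on context, rather than fitting one model per rig type, is one way the consistent cross-asset comparisons described in the abstract could be achieved.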
A. Wang, P. Acosta, R. Whitney
Paper presented at the SPE Oklahoma City Oil and Gas Symposium, Oklahoma City, Oklahoma, USA, April 2025.
Abstract
This paper presents a novel distributed database architecture that supports a large-scale rig digital twin solution by processing over one billion real-time Electronic Drilling Recorder (EDR) data points and 50 million database queries per day. This architecture significantly improves scalability, performance, and fault tolerance while reducing compute cost compared with a conventional monolithic database architecture for real-time, high-data-volume applications.
Paper Number: SPE-224359-MS
https://doi.org/10.2118/224359-MS
Published: April 14, 2025
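The SPE-224359-MS abstract does not specify its partitioning scheme. As an illustrative sketch only (shard count and key layout assumed, not from the paper), hash-based routing of EDR streams across shards could look like:

```python
import hashlib

# Illustrative sketch: route each EDR channel to a database shard by hashing
# a (rig, channel) key, so the write load from a billion daily data points
# spreads across independent shard nodes instead of one monolithic database.

N_SHARDS = 8  # assumed shard count

def shard_for(rig_id: str, channel: str) -> int:
    """Deterministically map a rig/channel pair to one of N_SHARDS shards."""
    key = f"{rig_id}:{channel}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % N_SHARDS

# The same key always routes to the same shard, so reads for one channel
# touch a single node.
assert shard_for("rig-042", "hookload") == shard_for("rig-042", "hookload")
print(shard_for("rig-042", "hookload"))
```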
-
A. Wang, P. Acosta, R. Whitney
Abstract
The need for a distributed real-time data pipeline arises from the requirement to process disparate, asynchronous data streams emanating from edge devices deployed at remote field locations, such as drilling rigs, and in the cloud. These streams must be combined in real time to serve advanced analytics applications such as Artificial Intelligence (AI) and digital twins. Because of its distributed nature, such a pipeline must not only process data but also manage deployments, containers, and physical and virtual computing appliances. Due to this complexity, today's distributed real-time data pipelines are nearly all built as custom software, incurring high development costs and long development times and imposing an extensive maintenance burden. This paper presents a new framework that builds data pipelines as interconnected, graphical functional blocks in a single software user interface. Each functional block can be assigned to execute in multiple remote locations, and each can be parameterized to support the unique properties of each rig. This architecture makes it significantly simpler and faster to build, deploy, and scale distributed real-time data pipelines while accelerating data engineering and data science projects that require high-speed, high-volume, distributed real-time data.
Paper Number: SPE-220863-MS
https://doi.org/10.2118/220863-MS
Published: September 20, 2024
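As a rough illustration of the functional-block idea in the SPE-220863-MS abstract (the real framework is graphical and distributed; everything below, including the block names and the unit-conversion factor, is a hypothetical single-process sketch):

```python
from typing import Callable

# Illustrative sketch: a pipeline assembled from interconnected functional
# blocks, each parameterized per rig, echoing the block framework the
# abstract describes.

Block = Callable[[dict], dict]

def make_scale_block(channel: str, factor: float) -> Block:
    """A block that rescales one EDR channel (e.g. a per-rig unit conversion)."""
    def run(record: dict) -> dict:
        out = dict(record)
        out[channel] = record[channel] * factor
        return out
    return run

def make_threshold_block(channel: str, limit: float) -> Block:
    """A block that flags records exceeding a rig-specific limit."""
    def run(record: dict) -> dict:
        out = dict(record)
        out["alarm"] = record[channel] > limit
        return out
    return run

def pipeline(*blocks: Block) -> Block:
    """Chain blocks into one callable; in a real deployment each block could
    be assigned to a different execution location (edge or cloud)."""
    def run(record: dict) -> dict:
        for block in blocks:
            record = block(record)
        return record
    return run

# Per-rig parameterization: this rig needs a 0.25x conversion (hypothetical)
# and a 500-unit alarm limit; another rig would get different parameters.
rig_a = pipeline(make_scale_block("pressure", 0.25),
                 make_threshold_block("pressure", 500.0))
print(rig_a({"pressure": 4000.0}))  # {'pressure': 1000.0, 'alarm': True}
```

Because blocks are plain parameterized functions composed declaratively, the same pipeline definition can be reused across rigs by swapping parameters rather than rewriting custom code, which is the cost saving the abstract emphasizes.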