How pipeline engineering gets done today
Integrated data, automated workflow and management by exception.
Cloud computing and other software innovations are reshaping engineering desktop applications.
Technical Toolboxes, a supplier of pipeline engineering software, recently released the Pipeline HUB (HUBPL), which the company says furthers integration of pipeline data while facilitating users’ technical work. The HUBPL is being deployed as a desktop and cloud-based offering.
The platform automates integration and analyses for insights into infrastructure design and operational fitness. It connects a library of engineering standards and tools to users’ data across the pipeline lifecycle. Integrated maps support geospatial analysis, visual reconnaissance of existing databases, and the use of disparate geographic information system (GIS) data.
When the solution was first introduced, Drew Lafleur, CTO, Technical Toolboxes, said, “As the industry embraces digital transformation, our legacy applications are evolving from manual calculators into sophisticated, integrated holistic analysis tools. This will enable users to make efficient, accurate decisions.”
Oil & Gas Engineering recently spoke further with Lafleur. An edited version of that conversation follows:
Oil & Gas Engineering: What is your background? What brought you to Technical Toolboxes?
Lafleur: I’ve worked in a wide variety of positions on the upstream side of the oil & gas industry, especially pertaining to engineering and asset optimization, both here in the United States and internationally. I saw a lot of different kinds of projects but also some real commonalities.
When I was with ConocoPhillips, I became deeply involved in integrated operations, which pertains to how people, processes, and technology combine to drive efficiencies and enhance optimization efforts. The big questions today include deciding what work to computerize and what must remain manual. Integrated data environments, workflow automation, and surveillance by exception are proven key components of successful integrated-operations efforts. Implementing them in a way that quickly gains broad stakeholder adoption is the secret sauce.
My current role is an opportunity to apply the principles of integrated operations to a domain focused on piping infrastructure, as opposed to upstream engineering and operations.
This fit well with my previous experience. In many of the upstream fields I’ve worked, as well as in midstream, a top source of production gains is optimization of the piping infrastructure and of the rotating equipment that enables flow.
OGE: Once you became the CTO of Technical Toolboxes, what were your priorities?
Lafleur: When I first looked at the organization, I felt we needed a reorganization so that personnel and skills matched our teams’ functional needs. Once that was done, the next step was to bring in people to fill skills gaps, whether related to cybersecurity, IT governance, pipeline engineering, or otherwise. An overhaul of the help desk systems and processes ensued, followed by confirmation that we had the business intelligence needed for a sound understanding of how our products are used.
At that point we were ready to formulate a plan for re-platforming our solutions to address pain points related to scalability, supportability, and user experience. I joined Technical Toolboxes in November 2017, and we began developing the platform in March 2018. We’re now bringing that solution to market.
OGE: What are some of the defining characteristics of that platform?
Lafleur: It’s all about boosting engineering efficiency and productivity through analysis automation and by establishing, or leveraging, a single source of truth that’s trusted by those who use it. Our clients include owner-operators, engineering firms, and many others, with annual revenues ranging anywhere from $1 million to billions of dollars. Each of them has some means of managing data. The bigger companies will most likely have a centralized repository that is supposed to be definitive.
The truth is, however, that so many viewpoints are involved that it becomes a real struggle to be holistic and minimize latency. Therefore, the decision makers compile data from several viewpoints, including the official repository, and then decide what data inputs and what data outputs they trust. They construct their own version of the truth in a pragmatic way that delivers the results they need. It seems to me that even the largest companies are still working toward a mature vision or version of the data repository needed. Others are too small to have it on their radar.
We can take the client’s asset database and use it to populate the database associated with our solution, then apply quality control and add more detail as a byproduct of using our calculators and analyses.
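What that quality-control pass might look like in practice is sketched below in Python: asset records with missing or implausible attributes are flagged before they feed any calculation. The column names and thresholds are illustrative assumptions, not the platform’s actual schema.

```python
import pandas as pd

# Hypothetical client asset export; column names and values are illustrative.
assets = pd.DataFrame({
    "segment_id": ["SEG-001", "SEG-002", "SEG-003"],
    "diameter_in": [12.75, 12.75, None],         # outside diameter, inches
    "wall_thickness_in": [0.375, 0.025, 0.250],  # wall thickness, inches
})

def qc_flags(row):
    """Flag records that need review before they feed any calculation."""
    flags = []
    if pd.isna(row["diameter_in"]):
        flags.append("missing diameter")
    # Plausibility band for line-pipe wall thickness (illustrative bounds).
    if not 0.1 <= row["wall_thickness_in"] <= 2.0:
        flags.append("wall thickness out of range")
    return "; ".join(flags) or "ok"

assets["qc_status"] = assets.apply(qc_flags, axis=1)
print(assets[["segment_id", "qc_status"]])
```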
OGE: How did company veterans react to the change in focus at Technical Toolboxes?
Lafleur: There was some initial apprehension from the sales force. It’s a change going from talking about calculations to talking about integrated operations. But the user base was ready for the change, and the sales force is embracing the solution. Technical Toolboxes has entered a new phase in its development.
Our user base is global but predominantly North American: operators, service providers, inspectors, and engineering consultants. We recognize that a large portion of our user base consists of highly qualified, very experienced engineers approaching retirement. They in turn mentor millennials and others in the client organization who may have different expectations for what software can, or should, do.
OGE: Can you give us an example of how it works?
Lafleur: One very popular module is the “crossings” module, which evaluates the effects of outside forces on buried pipelines and is used, for example, when a heavy piece of equipment must cross where a pipe is buried.
These calculations are often done in the field, sometimes without all the relevant information. Details of depth of cover and soil type may not be readily available, for example, and pipe material and strength may be stored as paper files in a truck or a file cabinet. Our collaborative environment, on the other hand, continually captures and aggregates relevant information about the piping system as people go about their business, so that this knowledge is shared with others.
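The article doesn’t detail the module’s method, but as a rough illustration of the physics such a check involves, the classic Boussinesq point-load solution estimates the added vertical soil stress a surface wheel load imposes at pipe depth. The values below are hypothetical, and a full crossing evaluation (e.g., per API RP 1102) also accounts for load geometry, impact factors, pavement, and cyclic stresses.

```python
import math

def boussinesq_vertical_stress(P_lbf, z_ft, r_ft=0.0):
    """Added vertical soil stress (psf) at depth z below a surface point load P.

    Boussinesq elastic half-space solution:
        sigma_z = 3 * P * z**3 / (2 * pi * R**5), with R = sqrt(r**2 + z**2)
    """
    R = math.hypot(r_ft, z_ft)
    return 3.0 * P_lbf * z_ft**3 / (2.0 * math.pi * R**5)

# Hypothetical crossing: a 20,000 lbf wheel load directly above a pipe
# buried under 4 ft of cover.
print(f"{boussinesq_vertical_stress(P_lbf=20000, z_ft=4.0):.0f} psf added at pipe depth")
```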
On the operational level, the doers are often compiling personal databases that are unknown to or inaccessible by others. These can be in various formats, from shapefiles and geodatabases to spreadsheets and relational databases. We want to continue to aggregate those personal databases over time into a single source of truth that can be leveraged by others on the same team and used to automate, or enhance, routine analyses.
Not only is it possible to load all these different types of data into our platform, but we have also automated the steps involved in assimilating it, removing repeated effort. We can consume data directly or transform it into PODS (Pipeline Open Data Standard) or UPDM (Utility and Pipeline Data Model); most people are going to be using one or the other. Once we have the data, the relevant calculations can be tied to the assets and we can auto-populate inputs; users can review and then simply run calculations as needed.
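A hedged sketch of that assimilation step follows, assuming two sources with differing column names being mapped into one shared, PODS-like segment table. The field names are placeholders and far simpler than the real PODS or UPDM schemas.

```python
import pandas as pd

# In practice these frames would come from calls like geopandas.read_file("survey.shp")
# or pd.read_excel("pipe_book.xlsx"); small in-memory stand-ins keep the sketch runnable.
gis_source = pd.DataFrame({"SEG_ID": ["SEG-001"], "DIAM": [12.75], "WT": [0.375]})
office_source = pd.DataFrame({"Segment": ["SEG-002"], "OD (in)": [8.625], "Wall (in)": [0.322]})

# Map each source's column names onto one shared, PODS-like vocabulary.
# These mappings are illustrative; a real PODS or UPDM load is far richer.
unified = pd.concat(
    [
        gis_source.rename(columns={"SEG_ID": "segment_id", "DIAM": "diameter_in",
                                   "WT": "wall_thickness_in"}),
        office_source.rename(columns={"Segment": "segment_id", "OD (in)": "diameter_in",
                                      "Wall (in)": "wall_thickness_in"}),
    ],
    ignore_index=True,
)

# With a shared schema, calculator inputs can be auto-populated per asset
# instead of being retyped for every analysis.
print(unified.set_index("segment_id").loc["SEG-002"].to_dict())
```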
Data management must be a means to an end, which is getting engineers to do more engineering work and less data gathering. What you don’t want is engineers typing the same input data three or four times.
What you do want is field and design engineers designing and implementing better solutions. Engineers need to assimilate a complete view so they’re not forced to make gut calls.
OGE: Hasn’t responsibility for performing calculations already changed?
Lafleur: Computers have largely taken over responsibility for running calculations. But how engineers interact with the computational resources performing those calculations has changed dramatically from just a few years ago.
The way it’s evolving is based on how an engineering career unfolds. An engineer attains a degree in one discipline, but once in the working world he or she is put into scenarios that introduce new areas. This frequently continues even at the mid- to senior level. The resulting organizational churn creates a perpetual need for knowledge retention and transfer to people new to an asset or function.
There are tools for most of the analyses involved. What the engineer must do is get the data, load it, and have enough understanding of the theory and science to be able to say whether 1) the analysis method or formula is reasonable, 2) the input data is reasonable, including finding or “guesstimating” unknown values that are required, and 3) the resulting output data is reasonable.
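Those three judgments lend themselves to automated guardrails. The sketch below wraps a textbook calculation, Barlow’s hoop-stress formula, in applicability, input, and output checks; the specific bounds are invented for illustration and are not from the platform.

```python
def barlow_hoop_stress(pressure_psi, diameter_in, wall_in):
    """Barlow's formula: hoop stress S = P * D / (2 * t)."""
    return pressure_psi * diameter_in / (2.0 * wall_in)

def checked_hoop_stress(pressure_psi, diameter_in, wall_in, smys_psi):
    # 1) Is the method reasonable here? Barlow assumes thin-wall pipe.
    if diameter_in / wall_in < 10:
        raise ValueError("thin-wall assumption questionable: D/t < 10")
    # 2) Are the inputs reasonable? (Illustrative plausibility bounds.)
    if not (0 < pressure_psi < 5000 and 0 < wall_in < 2.0):
        raise ValueError("input outside plausible range; verify source data")
    # 3) Is the output reasonable? Flag hoop stress above yield.
    stress = barlow_hoop_stress(pressure_psi, diameter_in, wall_in)
    if stress > smys_psi:
        raise ValueError(f"hoop stress {stress:.0f} psi exceeds SMYS; review inputs")
    return stress

print(checked_hoop_stress(900, 12.75, 0.375, smys_psi=52000))  # 15300.0 psi
```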
In my experience, I’ve frequently seen top-performing engineers who are basically working as data scientists. They’re motivated to learn how to find, quality control, and integrate data from various sources quickly and evaluate the soundness of the inputs and outputs. They also learn new processes and disciplines quickly. This makes them very productive, and other people often leverage their skills to ride in the wake of those efforts.
Technical Toolboxes has automated a significant portion of what that top performer is good at doing, which can make an average engineer more productive and sophisticated. Our platform integrates inputs and outputs of different systems that previously weren’t integrated; there is a single, integrated database for all applications. Before, you’d have three to five applications open at a time, with integration happening in spreadsheets or elsewhere. Additionally, we continue to evolve the concept of engineering software as a knowledge-transfer tool, helping engineers choose the correct analysis for a given situation.
OGE: What’s the impact on how users interact with the system on a day-to-day basis?
Lafleur: Integrated operations reduce unwanted duplication of effort. Too often an engineer takes on a task only to discover that someone else performed it weeks or months ago. It’s important to ensure that engineers focus on innovation.
Automating workflows allows engineers to push the boundaries. We have roughly 240 applications in the platform, each of which addresses some portion of the pipeline lifecycle. We have a case study from a large operating company: after a seven-person team that had been performing the same analyses manually adopted our software, six of those people were refocused on new tasks. That’s roughly an 85% efficiency gain and about 240 hours per week of engineering time freed up.
OGE: What are some of your favorite platform calculations?
Lafleur: My background makes me lean toward drilling and integrity as favorites, including horizontal directional drilling, AC mitigation, and corrosion protection.
We have a tool that automates many aspects of AC mitigation, and clients report saving weeks of effort on a single powerline analysis. We also have a “remaining strength” analysis that takes information from smart pigging devices and gives a readout of critical areas affected by corrosion across several parameters. It greatly reduces the number of times a company undertakes a costly excavation to assess integrity, only to find there is nothing that needs repair.
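The vendor’s implementation isn’t spelled out here, but remaining-strength methods for corroded pipe are well documented. As one illustration, the sketch below applies the original ASME B31G Level-1 screening to estimate a failure pressure from a pig-reported defect depth and length; in practice the result is compared against operating pressure, with safety factors, to decide whether an excavation is warranted.

```python
import math

def b31g_failure_pressure(smys_psi, D_in, t_in, d_in, L_in):
    """Estimated failure pressure (psi) per the original ASME B31G screening.

    D: outside diameter, t: wall thickness, d: max defect depth,
    L: axial defect length (all inches). Valid roughly for d/t up to 0.8.
    """
    flow_stress = 1.1 * smys_psi
    z = L_in**2 / (D_in * t_in)
    if z <= 20.0:
        M = math.sqrt(1.0 + 0.8 * z)           # Folias bulging factor, B31G form
        sf = flow_stress * (1 - (2/3) * (d_in / t_in)) / (1 - (2/3) * (d_in / t_in) / M)
    else:
        sf = flow_stress * (1 - d_in / t_in)   # long-defect limit
    return sf * 2.0 * t_in / D_in              # Barlow converts stress to pressure

# Hypothetical ILI call: 0.15 in deep, 3 in long defect on 12.75 in x 0.375 in X52 pipe.
print(f"{b31g_failure_pressure(52000, 12.75, 0.375, 0.15, 3.0):.0f} psi")
```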
OGE: What’s next for the platform?
Lafleur: As we move forward, we intend to include more automation, advanced analyses, and potentially machine learning. We are also incorporating more analyses from partnerships with different IP sources to deliver a more holistic resource for supporting and fostering successful engineers across the pipeline lifecycle.