Dr. Ajith Parlikad leads research activities at the Institute for Manufacturing in Cambridge and is Head of the Asset Management Group. His particular focus is examining how asset information can be used to improve asset performance through effective decision-making. He actively engages with industry through research and consulting projects. In this interview he provides insights into using effectively collected data to drive decision-making in asset management.
Interview By: Lance Bisinger
Dr. Parlikad, what is the goal of the Asset Management Group?
The Asset Management Group sits at the intersection of technology and strategy and policy. The group’s aim is to develop tools, methodologies, and techniques that allow organizations to maximize the value they get from their asset systems. The key term in that statement is value. We focus on how assets can deliver value to an organization, and how they help organizations enhance that value capture. This could be through better maintenance decisions, effective renewal of the asset systems, or simply managing the data systems better and exploiting data to make better decisions. It’s the link between data and decisions that is the key enabler for delivering the goal of maximizing value.
Based on your research why do you think data is often either not available or not accurate? And what measures help address those issues?
Companies have different levels of maturity in relation to curating and managing data. At one end, we have companies that have absolutely no clue what data they need to hold or how to identify what their data requirements are. Then on the other end, we are working with companies that collect vast amounts of data but don’t know how to generate value from that data.
There are a number of companies I’ve worked with that are not mature in their data approaches but have realized the potential of data to drive their decisions; not so much from an intelligent machine learning or data analytics point of view, but from a purely strategic asset management point of view. Data gives them the ability to make evidence-based decisions instead of subjective or experience-based decisions.
How does the ISO 55000 standard factor into your research and work?
We help companies understand what ISO means to them and assist in developing tools that allow companies to actually implement ISO 55000. For example, asset management is about the management of value, and less about the management of physical assets.
All business decisions need to consider the effective balance of cost, risk, and performance. It says this in ISO 55000, but companies struggle to understand what that actually means to their business and their assets. Given a problem, how do they apply these principles? For example, we did a project on value based asset management which essentially brought together a number of tools to map out the value that a particular asset delivers to the operation of the business. Those tools identify who the stakeholders are, what the stakeholders need from the system, and how to quantify those needs into specific metrics.
Then we take a systems approach and ask how a particular tunnel or pump affects each of those value metrics. It’s not just ISO 55000. There’s Building Information Modeling (BIM) and a number of standards in the infrastructure space. The problem with these standards is that they tell people what they “should do.” However, they often fail to explain how to do it.
Our work is focused on bringing that clarity so that asset managers are provided with an executable solution. For example, there’s a misconception in a lot of companies that to get on the digital data journey you need to start deploying lots of sensors and incorporate the internet of things, but it’s amazing what can be done with the data these companies already have.
Even with simple data, like capturing failures and failure modes in a systematic way. These are things people can do without having huge investments in IT systems or technologies. It’s more of a cultural barrier than a technological barrier that prevents companies from getting on that journey.
While data integration can be a challenge, we’re seeing more acceptance of BIM standardization from the capital projects perspective. How is the Asset Management Group assisting the changeover?
For capital projects within the BIM 1192 standards, the first step is to identify the organizational information requirements and from there derive the asset information requirements. One of my Ph.D. students has developed a systematic technique that is workshop based, consensus based, and discussion based. It brings together asset managers, financial managers, and higher-level managers to help them define their organizational information requirements, with a stepwise approach for drilling down from the organizational requirements to the asset information requirements. It has become an eye-opener for companies, who realize they can use this technique to develop their own requirements.
Are there occasions where consultants engage you and your team to support various asset management projects that have some unique and complicated challenges?
Consulting companies will come to us with general problems their clients are facing. We work with the consulting companies to develop tools and techniques that we apply to their client base. We see this as quite an exciting prospect, because it ensures that our work is not just applicable to one particular case in a company but is generally applicable, at least across the sector. Also, because we are working with the consulting company, they are able to pick up the outputs of our research and apply them, which helps our impact score, an important metric here at Cambridge.
On one hand we have to have publications, on the other we have to make sure our work is having real impact out there in the world.
Many of our clients now invest in digital twins to maximize their efficiency. What are some of the challenges and possibilities you’re seeing in the area of digital twins?
I came into this space about 15 years ago, without calling it digital twins, right at the time the Internet of Things came into the picture. The research group I am based in was part of the Auto-ID Center that was kicked off at MIT, which was really the start of the IoT revolution. Our activities mainly revolved around manufacturing processes, but also around developing what we call intelligence on our manufacturing assets. It was not just about IoT for us; it was not just about putting on an RFID tag and making it visible across the internet, but also about using tools such as software agents. Thereby we could make assets and machines intelligent in the sense that they could make decisions on their own, communicate with other machines, and organize and optimize the manufacturing process automatically.
Over the last couple of years, as computing technology has become cheaper and more powerful, this terminology of digital twins has become really popular. Increasingly, industry is struggling to innovate and find ways to improve their profit margins and cut costs.
Now from a maintenance and prognostics point of view, I saw a real opportunity to bring in some of those ideas into the space. How do we use the idea of intelligent products, or digital twins with intelligence, to communicate and collaborate with other machines to improve their prediction process and improve the way in which a system can perform?
For example, one of our projects is looking at how machines can communicate and share their load with each other based on their deterioration, so that their lifelong performance can be optimized. If the load on a machine is affecting its deterioration, we can adjust the load. When machines share the load, the system performance reaches the optimal level.
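The load-sharing idea described above can be sketched very simply. The allocation rule below is a hypothetical illustration, not the project's actual model: a fixed total load is split across machines in inverse proportion to their current deterioration, so the most worn machine works the least.

```python
def share_load(total_load, deterioration):
    """Split total_load across machines.

    deterioration is a list of wear levels in (0, 1); healthier
    machines (lower wear) receive a larger share of the load.
    """
    health = [1.0 - d for d in deterioration]
    total_health = sum(health)
    return [total_load * h / total_health for h in health]

# Three machines with increasing wear: the least-worn takes the most load.
loads = share_load(100.0, [0.2, 0.5, 0.8])
```

A real deterioration model would also feed load back into future wear, so the allocation would be re-optimized over time rather than computed once.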
It’s common for equipment to be operated outside of and beyond its design capabilities. How can elements of digital manufacturing help to remedy this problem?
The load-sharing work was done twice: once with a manufacturing company and once with a petrochemical company, on desalination units in a refinery. In that case, it was an old system, and they didn’t have a clear idea of how the assets deteriorated. We had to make some assumptions about the deterioration model. We also took best-case and worst-case scenarios and built a risk profile based on them, so we had a bit of uncertainty incorporated into the model.
As we were saying, oftentimes data is either not available or inaccurate. One of the things we need to make sure of when we develop decision models is that they can cope with a bit of uncertainty in the data, that they are robust. That is something we explicitly model into our decision models: the level of uncertainty is incorporated. That often changes the decisions as well, because you’re faced with a much higher risk when you have uncertain data coming in. If you assume all incoming data is perfect, you’re actually taking a risk by using these decision models. You’d be better off using your gut feeling and instinct if you are dealing with poor-quality data.
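The effect of information risk on a decision can be illustrated with a toy expected-cost model. All the numbers and the baseline failure probability below are illustrative assumptions, not taken from Dr. Parlikad's work: we compare "repair now" against "wait" when the condition reading itself may be wrong with probability `p_error`.

```python
def expected_cost(action, p_fail_if_reading_true, p_error,
                  repair_cost=10.0, failure_cost=100.0):
    # If the reading is wrong, fall back to an assumed baseline
    # failure probability of 5% (an arbitrary illustrative figure).
    p_fail = (1 - p_error) * p_fail_if_reading_true + p_error * 0.05
    if action == "repair":
        return repair_cost
    return p_fail * failure_cost

def best_action(p_fail_if_reading_true, p_error):
    costs = {a: expected_cost(a, p_fail_if_reading_true, p_error)
             for a in ("repair", "wait")}
    return min(costs, key=costs.get)
```

With trustworthy data a 50% failure reading justifies an immediate repair, but if the reading is unreliable enough, the expected cost of waiting drops below the repair cost and the decision flips, which is exactly how data quality changes the decision.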
We call that information risk. I’ve written a book on this topic as well, about the risk arising from your data.
Can you tell us more about the Natural Language Learning tool your team developed, and where you see the natural application of it in the future?
There are two projects where we’ve used natural language processing. A challenge that almost every company faces is that the vast majority of maintenance records are written in natural language. When engineers go out on site and swap parts, they make a record of it in a logbook and write it out the same way they would speak to someone about it.
If you want to actually digitize those records, for a lot of companies that means actually scanning it into PDF form.
There are a lot of natural language processing tools out there, but the challenge is that they often rely on domain-specific dictionaries and libraries. If we can get that to work, I see a tremendous opportunity to extract knowledge from the existing bits and pieces of paper that almost every company holds in the form of maintenance records.
We have done some work, for example, with gas and power companies, trawling through their maintenance records to identify the most common failure modes and the most commonly replaced parts, and even linking them to sensor data. These failures are often recorded only in the maintenance records. We then correlate them with the sensor data to identify trends that can be used to predict failures.
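The first step of that kind of record mining can be sketched as a keyword lookup over free-text logs. A real project would use a proper NLP pipeline with a curated domain dictionary; the tiny lexicon, log entries, and failure-mode names below are all hypothetical stand-ins.

```python
from collections import Counter

# Hypothetical domain lexicon mapping log words to standard failure modes.
FAILURE_LEXICON = {
    "leak": "seal leak", "leaking": "seal leak",
    "vibration": "bearing wear", "vibrating": "bearing wear",
    "overheat": "overheating", "overheating": "overheating",
}

def count_failure_modes(records):
    """Tally standardized failure modes mentioned in free-text records."""
    counts = Counter()
    for record in records:
        for word in record.lower().split():
            mode = FAILURE_LEXICON.get(word.strip(".,"))
            if mode:
                counts[mode] += 1
    return counts

logs = [
    "Pump 3 leaking again, replaced seal.",
    "Noticed vibration on motor bearing.",
    "Unit overheating, cleaned cooling fins.",
    "Seal leak reported by operator.",
]
modes = count_failure_modes(logs)
```

Even this crude tally surfaces the most common failure mode, which is the ranking that then gets correlated against sensor trends.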
From the perspective of the asset manager, they typically have poor visibility over the condition of their assets if the assets are not properly equipped with sensors. They typically have a time-based inspection plan where someone goes out and inspects the assets, writes a report and comes back.
How can digital manufacturing resolve the issue of large time gaps between collected data points?
Our project was actually about putting 2D barcodes on critical assets, allowing users to simply scan those tags with their smartphones and write in a comment that is sent to the facility or asset manager. Now, there’s a twofold challenge here. First, these comments are provided in natural language and need to be converted into a standardized form before any analysis can be done. Second, because everyone can send in a report, asset managers are suddenly under a flood of reports.
There has to be a way to effectively prioritize the problems based on the criticality of the issue being raised. So we use a mix of natural language processing to put them into a standard form and a machine learning system to rank the priority of the problem.
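That two-stage pipeline, standardize then rank, can be sketched as follows. The categories, keyword lists, and criticality scores are invented for illustration, and the real system would use trained NLP and ML models rather than keyword rules.

```python
# Hypothetical mapping from keywords to standard report categories.
CATEGORY_KEYWORDS = {
    "structural damage": ("crack", "collapse"),
    "water ingress": ("leak", "damp", "flood"),
    "cosmetic": ("paint", "scratch"),
}
# Assumed criticality scores (higher = more urgent).
CRITICALITY = {"structural damage": 3, "water ingress": 2, "cosmetic": 1}

def standardize(report):
    """Map a free-text report to a standard category."""
    text = report.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "uncategorized"

def prioritize(reports):
    """Return (category, report) pairs, most critical first."""
    tagged = [(standardize(r), r) for r in reports]
    return sorted(tagged, key=lambda t: -CRITICALITY.get(t[0], 0))

ranked = prioritize([
    "scratched paint near door",
    "visible crack in support beam",
    "water leaking through ceiling",
])
```

The asset manager then works from the top of the ranked list instead of wading through the raw flood of reports.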
So what’s next in digital manufacturing for your researchers at Cambridge?
Our focus for next year is data-driven decision making. One of the challenges is that machine learning requires a lot of data. For a lot of our assets, especially reliable equipment, we don’t have enough failure data. That is a good thing: you don’t have enough failures, which is why you don’t have enough failure data. So from a research point of view, one of the big focuses is to develop techniques for predictive maintenance where there is what we call imbalanced data. There is a lot of non-failure data. The question is, how can you use such data to predict failures? Secondly, just predicting failures is not enough; we need to turn that into an action. Simply saying that an asset is going to fail on day three does not tell the asset manager when exactly they need to take what action.
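One common way to exploit abundant non-failure data, sketched below with made-up numbers, is to model only the "healthy" readings and flag large deviations as potential failures. This is a generic anomaly-detection idea, not necessarily the technique the Cambridge group is developing.

```python
import statistics

def fit_normal_model(healthy_readings):
    """Summarize healthy behavior; no failure labels are needed."""
    return statistics.mean(healthy_readings), statistics.stdev(healthy_readings)

def is_anomalous(reading, mean, stdev, z_threshold=3.0):
    """Flag readings far outside the healthy distribution."""
    return abs(reading - mean) > z_threshold * stdev

# Abundant healthy vibration readings (illustrative data).
healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]
mean, stdev = fit_normal_model(healthy)
```

The point is that the model is trained entirely on the plentiful non-failure class; the scarce failure examples are only needed to validate the threshold, which is how the imbalance problem is sidestepped.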
Because of this, we are moving from descriptive analytics to prescriptive analytics. The complexity arises when you have complex systems like a refinery, which might have several components with predictive maintenance capability. A lot of these components are going to tell the asset manager when they need to get what done. Putting all of this together is the challenge we will be focusing on in the coming years.
I want to leave you with a tagline coined by one of my PhD students: “Data is like radioactive gold; it’s valuable, but if you don’t handle it carefully it can be dangerous as well.”