On knowledge management in industry
(and what it could be) [Jul 2024]
TLDR: Industrial sites have knowledge bases (non-numerical stuff: docs and drawings), databases (numerical process data readings) and formal models of the process. These three things are poorly connected. Taking a change in one of them and propagating it to the others is very manual, error-prone and often overlooked. Industry does not have the software tooling it deserves, and there is huge value in providing it.
The problem
I have previously written about the improvement process and how the fundamental problem is one of knowledge management. This post covers the same ground from a different perspective. In short, industry1 has a knowledge management problem that makes improving industrial processes very manual, laborious and error-prone. Here's why:
1) Key drawings and documents are often out of date, untrustworthy/contradictory or simply non-existent. There is poor visibility into, and weak incentives for, creating and updating docs when it would be valuable, for example after a process change. Knowledge creation is perpetually neglected, to the detriment of the process, the engineering team and the company2.
2) The non-quantified knowledge base (docs and drawings) is not intrinsically or rigorously tied to a physical model of the site. Today, physical models for analysing live data or for process design have to be built manually in software3. No software package can ingest docs and drawings and automatically create a model of the process for knowledge management or process-improvement insights.
3) Data is siloed, untrustworthy, and lacks context. Industrial processes today generally have decent data coverage, but the entirety of the data rarely comes back to a holistic system, and when it does it never comes with the context of the real process4. Today, process data is transmitted with only an ID and a name; it does not carry the context required to maximise insight generation (see the sketch after this list). This is why so called ‘advanced’ data platforms have underwhelming automated logic, anomaly detection and forecasting.
4) Today there is too much tacit knowledge held solely in the minds of engineers. Practically, this is because engineers lack the time, tools and incentive to formalise knowledge into documentation. Some knowledge will always be ill-suited to formalisation, but today even basic facts about the process, knowledge base, control system, company etc. are held only in the minds of the site team. The key repercussion is that truly understanding the process and generating insights requires the time and attention of the site experts, and the personnel with the most insight to share are those with the least time to share it. It is very common to find one or two personnel on each site who could massively expand the working intelligence of the entire team/company if only they had the time to do so5. This bottlenecks progress and makes coordinating engineers' contact/communication time very important, when in an ideal world individuals would have the basic information at hand to work independently. Today, engineer-to-engineer contact time is far too often basic fact-finding instead of real collaboration; these interactions are caused by poor knowledge management, not by any lack of desire to collaborate.
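To make point 3 concrete, here is a minimal sketch of the difference context makes. Everything in it is hypothetical (the tag names, the schema, the fields are mine, not any vendor's): today a historian tag is essentially an ID and a name, whereas a context-rich tag could carry units, equipment and connectivity along with the reading.

```python
from dataclasses import dataclass, field

# How most process data arrives today: an ID and a name, nothing else.
@dataclass
class BareTag:
    tag_id: str
    name: str

# A hypothetical context-rich tag: units, equipment and connectivity travel
# with the reading, so a platform can reason about it without manual setup.
@dataclass
class ContextualTag:
    tag_id: str
    name: str
    unit: str                                   # e.g. "kg/s"
    equipment: str                              # e.g. "Boiler B-101"
    quantity: str                               # e.g. "steam output flow"
    connected_to: list[str] = field(default_factory=list)  # related tag IDs

steam_out = ContextualTag(
    tag_id="FT-102",
    name="B-101 steam flow",
    unit="kg/s",
    equipment="Boiler B-101",
    quantity="steam output flow",
    connected_to=["FT-101"],  # the boiler's feedwater flow meter
)
```

With that context attached, rules like the boiler mass balance in footnote 4 stop being something an engineer hand-codes and become something the platform could derive.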
The downstream implications of all this are that it's hard to take a holistic approach and optimise a process, hard to model third-party solutions, and hard to develop truly reliable cost-benefit analyses for improvement opportunities. There is lots of room for improvement in industry.
So, what should industry look like in the future?
There should be an intelligent connection between a site's knowledge base, the physical model of the process, and all of the site's live data. Changes to the model lead to automatic knowledge creation, and vice versa. All live data is analysed holistically, tied to a thermodynamic model of the process. Engineers share a ground-truth model of the process, but can fork, update and merge changes individually. Engineers should be able to converse with the knowledge base and update the model of the process in natural language, and there should be a digital assistant with access to the knowledge base and the ability to model the process rigorously and carry out reliable calculations. The digital assistant should accumulate tacit knowledge and be the go-to source for basic facts and data. The platform should monitor and forecast performance, detect anomalies, and find process-improvement opportunities. It would look like an IDE for real-world engineers.
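As a toy sketch of what "live data tied to a physical model" could mean in practice, again using the boiler mass balance from footnote 4 (the connectivity graph, tag names and numbers are all made up): once the model knows what feeds what, checks like this could be generated rather than hand-built.

```python
# Toy connectivity graph: which measured flows enter and leave each unit.
# Names and numbers are illustrative, not from any real platform.
PROCESS_MODEL = {
    "Boiler B-101": {"inputs": ["FT-101"], "outputs": ["FT-102"]},
}

def mass_balance_ok(equipment: str, readings: dict[str, float],
                    tolerance: float = 0.02) -> bool:
    """A unit's total output flow should not exceed its total input flow."""
    node = PROCESS_MODEL[equipment]
    total_in = sum(readings[tag] for tag in node["inputs"])
    total_out = sum(readings[tag] for tag in node["outputs"])
    return total_out <= total_in * (1 + tolerance)

# Live readings in kg/s: steam out exceeds feedwater in, so flag an anomaly.
readings = {"FT-101": 11.8, "FT-102": 12.6}
if not mass_balance_ok("Boiler B-101", readings):
    print("Anomaly: Boiler B-101 steam output exceeds feedwater input")
```

The point is not the dozen lines of Python; it is that today even this much has to be built by hand, per site, per unit, instead of falling out of a shared model.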
Someone should build this…
Footnotes:
1 By industry I’m referring to all fixed process and manufacturing plants.
2 Third-party consultants and solution providers almost always need direct time with engineers to tease out the workings of the process. The drawings and docs are invariably inadequate, and importantly are less comprehensive than they quite easily could be. Another angle is to consider how much a new recruit could learn about the process purely from documentation. The usual answer is very little.
3 This is important not just because you could generate a model of the site from drawings/docs, but also because you could do the reverse: generate drawings/docs from the model.
4 For example, the data platform never knows out of the box that a ‘boiler steam output flow’ can at most equal the ‘boiler water input flow’. Today this basic logic always has to be built out manually, before or after the data is sent to the system.
5 I’m not sure it’s a new problem, but there is a very real issue of expert personnel retiring/leaving and causing an almost irreversible knowledge drain.