As an engineer-manager in manufacturing you have been exposed to the concepts of Industry 4.0. You are now aware of the huge benefits that data from connected machines can bring for you and your customers. You have now decided to get started with an IoT project to see for yourself.
All IoT solutions are, at their core, data analytics solutions. However excited we may get about sensor technologies, edge computing, data communication protocols and cloud platforms, in the end it is the meaning you make from the data that drives value.
To make meaning from the data you need to analyse it. So, ideally, even before you jump into a full-scale IoT POC, consider doing an analytics project if you have already collected data related to the problem you are trying to solve.
That said, it is important to understand that there are some fundamentals that you need to get right if you are to increase the success rate of your analytics projects. (Yes, there is no guarantee that all projects will be successful!)
Let’s look at some of the most important factors that will help you to succeed.
Do not pick a project just because it looks like a good thing to do, or because it seems an interesting problem to solve. Use techniques such as Pareto analysis (the 'vital few, trivial many' rule) to identify a problem that is a good candidate for solving.
The most frequently occurring problems, however, are not necessarily the best candidates. Apply other criteria, such as impact on customer delight and/or ROI. Factors such as your budget, or an objective to score quick gains by picking the low-hanging fruit, can also play an important part in selecting the project.
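A Pareto screen of this kind is easy to run on whatever historical records you have. The sketch below is a minimal illustration using Python's standard library; the downtime log and the causes in it are hypothetical, and real records would of course come from your maintenance system.

```python
from collections import Counter

# Hypothetical downtime log: each entry records the cause of one stoppage.
stoppages = (["bearing wear"] * 42 + ["misalignment"] * 18 +
             ["power surge"] * 7 + ["operator error"] * 5 + ["other"] * 3)

def pareto(events, threshold=0.8):
    """Return the 'vital few' causes that together account for at least
    `threshold` of all occurrences, in descending order of frequency."""
    counts = Counter(events).most_common()
    total = sum(n for _, n in counts)
    vital, cumulative = [], 0
    for cause, n in counts:
        vital.append((cause, n))
        cumulative += n
        if cumulative / total >= threshold:
            break
    return vital

print(pareto(stoppages))  # → [('bearing wear', 42), ('misalignment', 18)]
```

In this made-up log, two causes account for 80% of all stoppages, which is exactly the 'vital few' the Pareto rule asks you to focus on.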
All things being almost the same, I recommend that you pick the project that can have the most beneficial effect on your customer.
In short, do a data-based analysis of your historical data and potential gains before you zero in on your first data analytics project. Ironic, isn't it?
We see people choosing a project and then diving straight into what they would like to analyse. They are ready to create a nice table of all the measurements they would like to capture. But first things first: once you have decided which problem to solve, be clear about what your goals are. Are you trying to reduce scrap, increase efficiency, reduce cost or improve reliability?
Depending on your goals, your measurement parameters will also vary. The key is coming up with the right KPIs for improvement; this was discussed in an earlier article. Note that KPIs are often output parameters, dependent on one or more input parameters that need to be measured. So, once the KPIs are determined, expert knowledge needs to be used to decide which input parameters for the KPIs are to be measured and captured.
For example, as a maintenance engineer your goal may be to increase the mean time to failure (MTTF) of your motors by 20%. MTTF is your KPI and a 20% increase is your targeted goal. Instead of routine preventive maintenance, you would now want to use predictive maintenance to achieve this goal, because you know it is likely to be more effective.
Now, what are the input parameters for which you would like to collect data? Your expertise and experience may suggest parameters such as vibration, power surges, temperature and bearing wear as the measurements required to predict failure and thereby increase the MTTF. Missing the right parameters, or selecting the wrong ones, can result in poor outcomes, loss of valuable time and loss of confidence.
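Baselining the KPI itself is usually the first calculation. The sketch below, with an entirely hypothetical failure log, computes mean time between failures (MTBF, the repairable-equipment cousin of MTTF) from the operating hours at which each failure occurred, and derives the 20% improvement target mentioned above.

```python
# Hypothetical failure log: cumulative operating hours at each motor failure.
failure_hours = [720, 1510, 2290, 3150, 3890]

def mtbf(failures):
    """Mean time between failures: the average gap between consecutive failures."""
    gaps = [b - a for a, b in zip(failures, failures[1:])]
    return sum(gaps) / len(gaps)

baseline = mtbf(failure_hours)
target = baseline * 1.2   # the 20% improvement goal from the text
print(f"baseline MTBF = {baseline} h, target = {target} h")
```

With a baseline in hand, "increase by 20%" becomes a concrete number of hours that the predictive-maintenance programme has to deliver.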
Ideally, your team should include the process expert, the process data analyst, the data analytics expert and the information technology expert. Depending on the size and complexity of the problem there could be more than one person for each of the roles.
The process expert in the above motor related example is the maintenance engineer with experience in maintaining motors.
The job of the process data analyst is to ask the right questions and ensure that the inputs from the process experts are converted into meaningful decisions on what to measure and what to leave out, the frequency and method of measurement, and so on. This person has a reasonable knowledge of the process (in this case, the maintenance of motors) and at the same time has a data-oriented background, e.g. in statistical techniques or SPC. Note that a process expert with a background in statistical techniques can also double up as the process data analyst.
The data analytics expert (or, as some would call this role, the data scientist) is responsible for all the activities necessary to extract the desired information from the collected data. At the end of the day, all you need is actionable information on which you can base decisions.
The job of the data scientist is to enable you to do that without your needing to know how he/she does it, which can be quite complicated. This expert is trained to identify the right statistical tools to analyse the data, to test hypotheses, to apply techniques such as modelling, clustering and decision trees, and to identify patterns and draw conclusions. To help with the process, he/she uses appropriate data analytics software, since it is virtually impossible to do all of this manually.
Finally, your IT expert is the one who helps you with the right hardware, network and software infrastructure to support the data analytics process.
Without the requisite expertise for each of the above roles a data analytics project can be doomed right from the beginning.
The general assumption is that you need to have a huge amount of data to draw the correct conclusions. This train of thought can actually become a roadblock to even getting started. (“We do not have enough data and it is difficult to generate enough data within such a short period”.)
Now, how much is 'enough'? Actually, it depends. But you really do not need tons and tons of data to get started. Even as few as 20-30 data points per parameter can be good enough to begin with, and I would imagine it is not difficult to generate three to four times that much under real working conditions. The data analyst is, of course, the right person to consult on this. Bottom line: do not use insufficiency of data as a reason not to get started.
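To see why a few dozen points are already useful, consider the uncertainty of a mean estimated from 30 readings. The sketch below simulates 30 hypothetical vibration readings (stand-ins for real measurements, with a made-up true level of 2.5 mm/s) and computes a rough 95% confidence interval for the mean.

```python
import math
import random
import statistics

# Hypothetical: 30 vibration readings (mm/s RMS), simulated here purely
# for illustration; in practice these come from your sensors.
random.seed(7)
readings = [round(random.gauss(2.5, 0.3), 2) for _ in range(30)]

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / math.sqrt(len(readings))
# Rough 95% confidence interval (z ≈ 2 is good enough for a first look).
low, high = mean - 2 * sem, mean + 2 * sem
print(f"mean = {mean:.2f} mm/s, 95% CI ≈ ({low:.2f}, {high:.2f})")
```

Even with only 30 points, the interval around the mean is already fairly tight; that is usually plenty to decide whether a parameter is worth instrumenting properly.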
The quality of data is, of course, a very important factor; the old adage 'garbage in, garbage out' applies here as well. The process data analyst and the data scientist can help you identify the pitfalls in data collection and how to go about ensuring data quality. For example, when you collect data you have to eliminate bias caused by, say, an abnormal running condition being interpreted as normal. Or, in the case of manual measurements, there can be measuring-operator bias. Or there could be a problem with the measuring instrument or gauge. Using the right statistical tools, your data scientist should be able to help you avoid data quality problems.
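A first, crude screen for such problems is simply flagging readings that sit implausibly far from the rest. The sketch below, with a hypothetical temperature log, flags points more than two sample standard deviations from the mean; it is a quick sanity check, not a substitute for the formal methods your data scientist would apply.

```python
import statistics

def flag_outliers(values, k=2.0):
    """Flag points more than k sample standard deviations from the mean --
    a quick first screen, not a substitute for formal statistical tests."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [v for v in values if abs(v - mu) > k * sigma]

# Hypothetical temperature log (°C) with one obvious sensor glitch.
temps = [61.2, 60.8, 61.5, 60.9, 61.1, 60.7, 61.3, 250.0]
print(flag_outliers(temps))  # → [250.0]
```

Note that a single glitch this large also skews the mean and standard deviation themselves, which is one reason analysts often prefer median-based screens; the point here is only that obvious garbage can be caught before it poisons the analysis.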
Do not wait until the end of the project to find out that you have not obtained the correct results. Data analytics can be tricky. In spite of all the initial planning and study of the requirements, the method selected to arrive at the conclusions can be wrong. You may have chosen an incorrect technique or the wrong data model, or may not have defined criteria such as accuracy or precision correctly. In some cases the data model may be good, but it needs to be adjusted in terms of what is acceptable data and what is not.
To avoid this problem, review the outputs of the analytics model periodically as it analyses your data and produces results, to ensure that you are on the right track. Periodic sampling of the results will help you course-correct early, which gives you a much higher chance of success in the end.
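This periodic review can be as simple as scoring the model's predictions against actual outcomes in consecutive batches, so a sudden drop shows up long before the project ends. The sketch below is a minimal illustration with made-up predictions; the function name and batch size are assumptions, not a prescribed method.

```python
def batch_accuracy(predicted, actual, batch_size=20):
    """Yield the accuracy of each consecutive batch of predictions, so a
    sudden drop flags the need to course-correct the model early."""
    for i in range(0, len(predicted), batch_size):
        p = predicted[i:i + batch_size]
        a = actual[i:i + batch_size]
        yield sum(1 for x, y in zip(p, a) if x == y) / len(p)

# Hypothetical: 100 failure/no-failure predictions vs. what actually happened.
preds = [1, 0, 0, 1] * 25
truth = [1, 0, 1, 1] * 25
print(list(batch_accuracy(preds, truth)))  # → [0.75, 0.75, 0.75, 0.75, 0.75]
```

A steady 75% per batch, as in this toy data, says the model is stable; a batch that suddenly drops to, say, 50% is the early warning this section is about.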
You may have a separate list of factors that influence the success of a data analytics project. I will be happy to learn from you. Please share your valuable inputs in the comments box below.