Unlocking the Power of Self-Serve Analytics

It was May 2020, and a mid-sized pulp and paper company was struggling to navigate the demands the global pandemic placed on businesses. With supply chains disrupted, limited access to the spare parts needed to keep equipment up and running, and reduced capacity due to employees working from home, the company was fighting to stay afloat.

To stay in business, their CEO, Helen, turned to a self-serve analytics platform to help her quickly adapt to the changing market conditions. With her data accessible in the cloud, she was able to collaborate remotely with Jeffrey, a data scientist, and Michael, the maintenance and reliability business function lead. Their data-driven decisions helped the company survive and even thrive during the pandemic.

Helen created a dashboard that let her adjust production schedules while considering the current overall equipment effectiveness, asset health, and spare parts inventory. She also analyzed sales data and customer feedback to identify new product opportunities and adjust their marketing strategy accordingly.

Additionally, Nadine, the leader of their people team, used self-serve analytics to bring in data from manually updated spreadsheets that tracked the health and safety of their employees, ensuring that they had the proper protective equipment and that workspaces were properly sanitized.

By using self-serve analytics to adapt to the changing market conditions, the company was able to stay in business during the pandemic and continue serving their customers. The platform allowed them to make data-driven decisions quickly and collaborate remotely, which was critical when in-person meetings and interactions were limited.

Today, companies need to respond quickly if they are going to remain competitive. We use data to make decisions, and traditionally that has required a fair amount of analysis by data scientists and IT personnel whose sole job is generating reports and insights. That process can be slow and inefficient. This is where self-serve analytics comes in. Self-serve analytics empowers users to access and analyze data independently, without needing specialized personnel or software. In this blog post, I'll explore the benefits of self-serve analytics for the industrial process control industry, including improved decision-making, increased efficiency, reduced costs, and more.

The benefits of self-serve analytics

Here are some of the many advantages of self-serve analytics:

  1. Faster decision-making: Self-serve analytics empowers users to access data and insights on their own, without needing to rely on data analysts or IT personnel. While I was working with one of our manufacturing customers, a senior manager was able to pull up detailed information about his equipment uptime and, from that data, accurately understand his production capacity. This information was critical for transforming his business from an inventory-based model to producing products on demand to fulfill customer orders.

  2. Improved data quality: With self-serve analytics, users can take a more hands-on approach to data analysis, which can help them identify and correct errors or inconsistencies in the data. This can lead to improved data quality and more accurate insights.

    "The highest quality data is data that is validated at the source" -Dr. Dave Shook, Chief Data Officer, Uptake Fusion

  3. Enhanced collaboration: Self-serve analytics can enable greater collaboration between different teams and departments, as users can easily share their results with one another. This can help improve communication and lead to more effective decision-making. When I talk to customers, one of their significant challenges is removing silos within their organizations. Better decisions are made when you are able to collaborate effectively with experts.

  4. Better scalability: As organizations grow and generate more data, self-serve analytics can help ensure that users have access to the data they need without overwhelming IT resources. A traditional workflow for getting access to data is to put in multiple requests to people in IT. By having data readily available, we have more people looking for solutions and fewer people waiting in a helpdesk queue.

  5. Improved agility: With self-serve analytics, users can quickly respond to changes in the market or their business environment, as they have access to real-time data and insights. This can help organizations stay ahead of the competition and adapt to changing conditions more effectively.

Be careful - there are pitfalls to watch out for

While self-serve analytics can offer many benefits, there are also some pitfalls to be aware of. Here are a few potential drawbacks:

  1. Lack of expertise: Self-serve analytics tools are often designed to be user-friendly and accessible to non-experts. However, this can also mean that users may not have the same level of expertise as trained data scientists, which could lead to errors or incorrect interpretations of the data. It is essential to validate your findings with experts, especially when first starting out.

  2. Incomplete data: Self-serve analytics tools may not provide access to all the data that is available. Trained data scientists have the knowledge and expertise to identify and access data sources that may not be immediately apparent to other users.

  3. Limited capabilities: Self-serve analytics tools may not offer the same level of capabilities as specialized data analysis software used by trained data scientists. This can limit the complexity of analysis that can be performed and could lead to a shallow understanding of the data.

  4. Data quality issues: While self-serve analytics can improve data quality when they are used to help validate the data at the source, these same tools can be prone to data quality issues if users are not familiar with data cleaning and preparation techniques. Trained data scientists have the knowledge and expertise to identify and address data quality issues. When in doubt, ask for help!

It's important to weigh these potential pitfalls when deciding whether to use self-serve analytics instead of having trained data scientists perform the analysis. While self-serve analytics can offer many benefits, organizations should balance these drawbacks against their specific needs and resources.

How to get started

Of course, I'm going to show you how to get started using my product, Uptake Fusion. Even so, many of the things I'm going to talk about should be applicable regardless of what analytics tools you are using.

Step 1 - Find the data that you are interested in

This might seem like a fairly trivial activity. You've got a nice big dump of data from IT, or maybe you've found a spreadsheet with all the data in it. If you aren't lucky enough to have all the data already, let's grab some from our single source of truth.

With Uptake Fusion, I can browse the model of my plant. This model was automatically generated from my Rockwell FactoryTalk system. I can either search for or browse to the piece of equipment I'm interested in, find all the sensor data available, and add it to a trend.

Step 2 - Validate the Data

Being able to visualize your data right away is an important step in making sure the data is good. Trending the data lets you spot gaps where data may be missing, see obvious outliers, spikes, or anomalies, and compare the values for an asset that is performing poorly against one that is known to be working well.
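
If you would rather sanity-check the data with a query than with a trend, a quick gap check is one option. The sketch below is only an illustration, assuming a hypothetical Telemetry table with TagId, Timestamp, and Value columns rather than Fusion's actual schema; it bins one tag's samples into hourly buckets so that any gaps show up as zeros.

// Hedged sketch: hourly sample counts for one tag, so missing data shows up as zero-count buckets.
// Telemetry, TagId, Timestamp, and Value are assumed names, not Fusion's actual schema.
Telemetry
| where TagId == "fusion-rockwell/31867" and Timestamp > ago(7d)
| make-series SampleCount = count() default = 0 on Timestamp from ago(7d) to now() step 1h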

Step 3 - Extract the data for analysis

With Uptake Fusion, I can easily get access to data in a format that is ready to import into an analysis tool. I can export the values from the trender straight into a CSV or, better yet, prepare a query that I can use in any off-the-shelf analysis tool I want, such as Microsoft Power BI or Tableau.

Step 4 - Paste the data into your favorite analysis tool

I'm going to add my data directly into Power BI. The Export KQL tool inside Fusion lets me create a custom query. I just need to copy and paste the query, along with the server and database information, into Power BI, and my data appears almost magically.
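
To give a feel for what actually gets pasted, here is a rough sketch of the kind of KQL that might land in Power BI's Azure Data Explorer connector alongside the cluster URL and database name. The table and column names are assumptions made for illustration, not the query that Fusion's Export KQL tool generates.

// Hedged sketch of a query pasted into Power BI's Azure Data Explorer connector.
// Telemetry, TagId, Timestamp, and Value are illustrative names only.
Telemetry
| where TagId in ("fusion-rockwell/31867", "fusion-rockwell/31869")
| where Timestamp > ago(30d)
| project Timestamp, TagId, Value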

Making your analysis even more powerful

That was the most basic use case, but if you are willing to go a couple of steps further, and possibly learn a little bit about the query language, you can do so much more with your data preparation.

Fusion leverages KQL, the Kusto Query Language used by Microsoft Azure Data Explorer. Instead of having to browse a model, you can take advantage of the powerful search functionality within Microsoft ADX.

The query I used in the screenshot above started off with:

let AnalogTags = dynamic(["fusion-rockwell/31867", 
"fusion-rockwell/31869", "fusion-rockwell/31871"]);
let DiscreteTags = dynamic(["fusion-rockwell/31863"]);

These are unique tag identifiers that the Fusion Trender was able to provide. But instead of getting my list of tags from the trender, I might consider searching the data model based on conditions.

GetLatestDataModel()
| search "Pump" and "Flow Rate"

If you spend a little bit of time exploring the query language, you can really unleash the power of a good model. Filter your results to only include assets that are producing a certain product. Look for sensor data that has exceeded an alarm limit more than 10 times in a single day.
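
As one hedged example of that last idea, a query along these lines could count how often each tag crossed its alarm limit each day. The Telemetry table and the AlarmHigh column are assumptions made for this sketch, not Fusion's actual model.

// Hedged sketch: tags that exceeded an assumed high alarm limit more than 10 times in a single day.
// Telemetry, TagId, Timestamp, Value, and AlarmHigh are illustrative names only.
Telemetry
| where Timestamp > ago(30d) and Value > AlarmHigh
| summarize Exceedances = count() by TagId, Day = bin(Timestamp, 1d)
| where Exceedances > 10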

Extending the possibilities by leveraging machine learning

The last step is to use the self-serve analytics tools themselves to help analyze and zero in on the data you are looking for. Learning a new query language can be challenging, but that's why tools like Microsoft Power BI can leverage machine learning to let you ask the questions you want answered in plain language.

Wrapping it all up

Self-serve analytics can offer many benefits for the industrial process control industry as long as we keep in mind the potential pitfalls associated with them. Organizations should carefully consider their specific needs and resources when deciding whether to use self-serve analytics over having trained data scientists perform the analysis. While self-serve analytics can be a powerful tool, it's vital to ensure that users have the necessary skills and expertise to interpret the data accurately and effectively.

Ultimately, a combination of self-serve analytics and trained data scientists may be the best approach for many organizations. By leveraging the strengths of both techniques, organizations can maximize the benefits of data analysis and make more informed decisions to drive their business forward.

Fusion could be a great first step toward unlocking the power of these analytics.
