We look at the most overused business intelligence buzzwords of the year. Some are emerging technologies, and others have been around for years but refuse to die. We explain what these terms mean, how they're often misused, and what expectations are appropriate before deploying them in your organization.
We should also preface this list with an acknowledgement that there is a place for many of the upcoming and breakthrough technologies that we mention. Many of the themes are valid and point towards the future of business intelligence and data analytics in the workplace.
That being said, many of these technologies are misunderstood and incorrectly scoped. It's important to understand them so you can apply them appropriately within your organization.
Let’s look at the standout business intelligence buzzwords you should be aware of.
8.) Democratization of Data
The term democratization of data gets thrown around a lot. We’re honestly not even sure what exactly it means in practice. The core idea is that everyone within an organization should have access to organizational data, and then also be able to extract, transform, and understand it.
Conceptually it makes a lot of sense. After all, why should only a select few people have access to all of the good data? However, in practice it’s a terrible idea.
Most job roles within a company only need a specific subset of data to perform their daily tasks. Giving employees access to an ever-increasing amount of data makes it harder to find the information they actually need. Employees can spend countless hours trying to surface important information that a central team, working with those subsets of the data every day, could surface in a matter of moments.
Working with data is an increasingly important skillset for people across organizations, but there is still a lot of value in having people trained and experienced in working with it.
Appropriate uses:

- Improving overall business intelligence
- Cultivating a data-driven culture

Common misuses:

- Providing data access without appropriate security measures
- Overwhelming employees with data they cannot interpret
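The alternative to handing everyone everything is scoping access by role. Here is a minimal sketch in Python (the roles, columns, and records are all hypothetical) of how each role might see only the subset it needs:

```python
# Hypothetical role-based view of a shared dataset: each role sees only
# the columns it needs, instead of the full "democratized" firehose.
RECORDS = [
    {"order_id": 1, "customer": "Acme", "revenue": 1200, "cost": 800, "region": "East"},
    {"order_id": 2, "customer": "Globex", "revenue": 950, "cost": 700, "region": "West"},
]

# Column subsets per role (illustrative only).
ROLE_COLUMNS = {
    "sales": {"order_id", "customer", "revenue", "region"},
    "finance": {"order_id", "revenue", "cost"},
}

def view_for(role, records):
    """Return only the columns a given role is allowed to see."""
    allowed = ROLE_COLUMNS[role]
    return [{k: v for k, v in row.items() if k in allowed} for row in records]

sales_view = view_for("sales", RECORDS)      # no cost column
finance_view = view_for("finance", RECORDS)  # no customer or region columns
```

Each team gets a smaller, more relevant view, which is usually easier to work with than the full dataset.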
7.) Self-Service Analytics
The concept of self-service analytics has been around for a while, but we see many organizations that want to turn every person within an accounting department into a business analyst. Why track the numbers when you can build insights? A wave of low-code tools has made it easier for non-technical employees to achieve things that would otherwise be very difficult or almost impossible in Excel.
While moving towards self-service analytics is a great goal, there are a number of challenges in executing it well. There will always be a part of an organization that does not want to learn new tools and does not want to perform analysis. After all, this is not the role they went to school for, nor one they have any desire to perform.
The bigger challenge is that the more analytics are distributed across an organization, the higher the probability that each team gets slightly different results, or at best duplicates the work of another group. Some centralization is necessary to maintain data governance and avoid duplicating effort across departments.
Appropriate uses:

- Enabling faster decision-making
- Reducing bottlenecks in data analysis
- Encouraging data literacy among employees

Common misuses:

- Assuming all employees have the interest or skillset to use analytical tools
- Implementing self-service analytics without proper data governance
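The duplication problem is easy to demonstrate. In this hypothetical sketch, two teams compute "monthly revenue" from the same orders, but one excludes refunds and the other doesn't, so their self-service reports quietly disagree:

```python
# Two teams compute "monthly revenue" from the same data with slightly
# different definitions -- a common outcome of ungoverned self-service analytics.
orders = [
    {"amount": 100, "status": "paid"},
    {"amount": 250, "status": "paid"},
    {"amount": 80,  "status": "refunded"},
]

# Team A counts every order.
team_a_revenue = sum(o["amount"] for o in orders)

# Team B excludes refunded orders.
team_b_revenue = sum(o["amount"] for o in orders if o["status"] != "refunded")

# 430 vs. 350: same data, two different versions of "the" number.
```

Neither team is wrong, exactly; without a governed, shared definition of the metric, both numbers end up in different decks.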
6.) Does it Export to Excel?
Large organizations will spend millions of dollars implementing cutting edge systems to store and present large volumes of information. Business Intelligence Analysts can spend weeks and even months prepping data, building out data models, and determining which visualizations will be the most impactful for managers to make better decisions.
Once they've put in all of this effort, inevitably someone will ask if the report can be exported to Excel and sent as an email attachment.
In these scenarios an organization can waste massive amounts of time and effort, either by not scoping the project correctly or by acquiescing to the demands of a few people who don't want to put in the effort to open a reporting system.
Appropriate uses:

- Quick data manipulation
- Familiar interface for less tech-savvy users

Common misuses:

- Eliminating the many benefits of an advanced reporting system
- Ignoring the limitations of Excel's data handling capabilities
- Using Excel as the primary tool for visual analytics
5.) This Software Will Replace Your Spreadsheets
The phrase “This software will replace your spreadsheets” is enough to trigger concern across accounting departments. Excel, a venerable tool released in 1985, has achieved widespread adoption, boasting over 300 million active users. Replacing Excel is not just a technical challenge, but also a cultural one, given the years employees have invested in mastering it.
Companies that want to replace spreadsheets will find it incredibly challenging to find a tool that's as well documented and easy to use. Many deployments of spreadsheet replacement software go underutilized as a cohort of employees resist the change, ignore the new tool, and continue to use spreadsheets anyway.
Software to replace Excel should be re-scoped to augment Excel processes and drive organizational change. As people find that new software makes their job easier, others will be curious and want to utilize it. It’s a long process and spreadsheets will likely never be fully replaced.
Appropriate uses:

- Handling data sizes that Excel is limited by
- Financial modeling
- Workflow automation

Common misuses:

- Using Excel for large-scale data storage, which it isn't designed for
- Relying solely on Excel for complex, collaborative tasks, leading to versioning issues
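On the data-size point: an Excel worksheet tops out at 1,048,576 rows, while a streaming script has no such ceiling because it never holds the whole file in memory. A minimal sketch using only Python's standard library (the tiny in-memory CSV stands in for a file far too large to open in Excel):

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for a multi-gigabyte CSV that would exceed Excel's
# 1,048,576-row worksheet limit.
raw = io.StringIO("region,amount\nEast,100\nWest,40\nEast,60\n")

# Stream the file one row at a time and aggregate as we go --
# memory use stays flat no matter how many rows the file has.
totals = defaultdict(float)
for row in csv.DictReader(raw):
    totals[row["region"]] += float(row["amount"])

# totals -> {"East": 160.0, "West": 40.0}
```

Tools like this are best positioned as augmenting the Excel workflow (pre-aggregating data down to a size Excel handles comfortably), which matches the re-scoping advice above.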
4.) We Need a Data Lake
The IT and data world is slowly shifting to the use of data lakes. The technology has been around since the 2010s and has grown in popularity as companies work with larger and larger datasets. The architecture is a great fit for extremely large datasets: data is typically stored on inexpensive cloud blob storage and converted into SQL tables on demand.
Two of the largest players, Databricks and Microsoft Azure Synapse, continue making strides to make the technology easier to implement and use, but it still requires a team with specialized skillsets. A typical data engineer setting up a data lake would be expected to know SQL, data architecture, Scala/Spark, and Python, among other skills depending on the implementation.
Because of these specialized skillsets and the difficulty of finding qualified data engineers, any savings on data storage are quickly surpassed by labor costs. For the time being, data lakes are best left to very large organizations that need to handle massive datasets and can afford a dedicated team to build and maintain them.
Appropriate uses:

- Storing massive datasets
- Unifying structured and unstructured data
- Scalable analytics

Common misuses:

- Deploying a data lake when a database or data warehouse will fit an organization's needs
- Adopting a data lake without adequately skilled personnel
- Underestimating the complexity of setting up a proper data lake architecture
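The "raw files in cheap storage, SQL tables on demand" pattern can be imitated at toy scale with Python's built-in sqlite3 standing in for a real lake engine like Spark; the file contents and table below are invented purely for illustration:

```python
import csv
import io
import sqlite3

# Toy stand-in for the lake pattern: raw files sit in cheap storage and
# are only turned into queryable SQL tables when someone asks a question.
# (sqlite3 replaces a real engine like Spark here purely for illustration.)
raw_file = io.StringIO("device,reading\nsensor_a,21.5\nsensor_b,19.0\nsensor_a,22.1\n")

# "On demand": parse the raw file into a SQL table only at query time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, reading REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [(r["device"], float(r["reading"])) for r in csv.DictReader(raw_file)],
)

avg_by_device = dict(
    conn.execute("SELECT device, AVG(reading) FROM readings GROUP BY device")
)
# avg_by_device -> {"sensor_a": 21.8, "sensor_b": 19.0}
```

At real scale the same idea involves distributed compute, schema management, and file formats like Parquet, which is exactly where the specialized skillsets come in.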
3.) Predictive Analytics
Fairly often, you will find sales pitches for software that promise to move your organization from Excel spreadsheets to advanced analytics utilizing the latest in machine learning and artificial intelligence. These systems wrap common Auto-ML functionality in easy-to-use interfaces. With a few clicks of a button, the computer can spit out a predictive model to show you the future.
Where predictive analytics often falls flat is that it's essentially a black box for users. Even with a guided Auto-ML wizard, it's difficult to understand the key drivers behind the software's predictions. It's also difficult to prep data to the point that the predictive analysis can be applied correctly.
Once the prediction is complete, you are also left with an output whose underlying assumptions are difficult to explain. It's very easy to build an Excel model that says we expect sales to increase by 3% next year because of factors X, Y, and Z. It's much more difficult when you're not sure what the model is doing.
Because of these difficulties we think the current place of predictive analytics is in the hands of data scientists and business intelligence analysts that have spent the time to understand the nuances of each model and can appropriately articulate what the model is doing.
Appropriate uses:

- Advanced analytics and generating new insights
- Sales forecasting, customer churn prediction, anomaly detection
- A specialized tool for a specialized skillset

Common misuses:

- Overestimating the capabilities of predictive models
- Failing to understand the model's assumptions and limitations
2.) Natural Language Processing
Natural Language Processing is the technology behind ChatGPT. It's used to synthesize text responses based on user input typed into a chat box. We are gigantic fans of ChatGPT and are generally amazed at how far it has come in a short period of time, yet almost every software company on the planet is now rushing to add natural language processing to its software.
The problem is that Natural Language Processing relies on two things.
- The ability for a user to succinctly and accurately articulate their process
- Having enough training data so the ML model can appropriately determine a response
If you’ve ever worked on an RPA project or tried to automate an existing process you will quickly find that users don’t know their processes very well and struggle to describe them in detail.
We think that training data will improve over time, and NLP will shift from giving advice to taking action, but it's going to be a long road.
Appropriate uses:

- A replacement for Google and a quick reference guide

Common misuses:

- Performing tasks when there is little training data available to pull from
- Integrating NLP into software that has a poor use case for the interaction format
- Expecting people to use NLP to describe processes that are difficult for humans to articulate
1.) Artificial Intelligence
Artificial Intelligence has been everywhere over the last year. Every software company, and even non-software companies, are rushing to the AI gold mine by adding AI-powered features to their products. Demand is sky high, and corporate executives are drooling at the idea of reducing labor costs by using AI to autonomously complete tasks. AI promises to usher in a new era of productivity.
AI holds a lot of promise, and we're excited to see how tools like ChatGPT evolve over time as AI begins to answer more complex questions and perform more complex tasks.
Unfortunately, most of the products that have rushed to market in the business intelligence space are quasi-useful at best and completely useless at worst. It's almost as if features are driven more by the ability to issue an AI press release that pumps the stock price than by any desire to add value to the end user.
Appropriate uses:

- A quick reference guide to answer technical questions, complementing results on Google
- Natural language processing for quasi-useful text commands that complete actions
- Assisting in writing text and ad copy

Common misuses:

- Implementing AI without a clear business case
- Misunderstanding the scope and limitations of AI
- Putting too much faith in AI results when the information provided is inaccurate
This year, more than anything, will be remembered as the year of AI. Artificial intelligence took the world by storm with the explosion of interest in ChatGPT. Stock valuations for AI companies are sky high, and it reminds us a little of the .com bubble.
We wouldn't discount any of the technologies on this list, but we would urge our readers to take them with a grain of salt and truly understand the value proposition of each one. Companies that rush to implement cutting-edge technologies run the risk of spending hundreds of thousands or millions of dollars on products that are ill-suited to their current needs or needs in the near future.