New radical empiricism and interpretability in decision making

7 September 2023

The rise of big data and deep learning has transformed the way we understand and process information. This transformation has ushered in what many refer to as a new age of 'Radical Empiricism', i.e. taking advantage of insights without fully understanding how they are derived. The challenge in a business context is to understand the trade-off between optimised "black-box" non-linear recommendations and traditional linear models whose results are easily understood and interpreted.

Traditionally, empiricism emphasized observation and experimentation as the primary sources of knowledge. In the context of data science, this would mean understanding patterns and relationships in data through explanatory models that are interpretable. The new radical empiricism, on the other hand, is about letting the data speak for itself without being constrained by our prior theories or biases.
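To make the contrast concrete, here is a minimal sketch of what "interpretable" means in the traditional setting. The data is invented for illustration (sales driven by price and advertising spend with assumed true effects of -4 and +2.5); the point is that an ordinary least squares fit returns coefficients a decision-maker can read directly.

```python
import numpy as np

# Illustrative data: units sold as a function of price and advertising spend.
# The "true" effects (-4.0 per unit of price, +2.5 per unit of ad spend) are
# assumptions chosen for this example.
rng = np.random.default_rng(0)
price = rng.uniform(5, 15, 200)
ad_spend = rng.uniform(0, 10, 200)
sales = 100 - 4.0 * price + 2.5 * ad_spend + rng.normal(0, 1, 200)

# Ordinary least squares: each fitted coefficient is directly readable
# as "effect on sales per unit change of the input".
X = np.column_stack([np.ones(200), price, ad_spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

print(f"baseline sales:    {coef[0]:.1f}")  # ~100
print(f"price effect:      {coef[1]:.1f}")  # ~-4.0: each unit of price costs ~4 sales
print(f"ad spend effect:   {coef[2]:.1f}")  # ~+2.5: each unit of spend adds ~2.5 sales
```

A deep network fitted to the same data might predict just as well or better, but it offers no equivalent of these three readable numbers.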

Deep learning and other advanced machine learning methods fit snugly within this new paradigm. They can identify complex patterns in vast amounts of data, often with little or no explicit instruction, uncovering relationships that traditional models would have missed. The power of these models lies in their flexibility and capacity to fit non-linear and intricate relationships.

Non-linear data models, including deep learning networks, have the advantage of modeling real-world complexities much more closely than linear counterparts. They can capture intricate interactions, hierarchies, and other complex relationships inherent in many datasets. This makes them exceptionally good at prediction tasks, where raw performance is the primary criterion.

However, this exceptional predictive power comes with a trade-off: deep learning models lack transparency in their decision-making processes. This makes it very challenging to understand why a model made a particular prediction, or how different variables relate within it.

For businesses, predictions without explanations can be problematic. Decision-makers often need to understand the underlying reasons behind a model's output to make informed choices. If a machine predicts a sudden drop in sales, for instance, merely knowing the prediction isn't enough; businesses need to understand why to take appropriate countermeasures.

Deep learning models can sometimes overfit to the training data, especially when they are overly complex or when the data is noisy. While they might exhibit stellar performance on the training dataset, they might falter when exposed to new, unseen data. For businesses, a model's real-world generalizability is crucial, often more so than its performance on historical data.
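The overfitting risk can be demonstrated without any deep learning machinery at all. The sketch below stands in for a deep network with a deliberately over-flexible polynomial (the degrees, sample size and noise level are arbitrary choices for illustration): the complex model achieves near-perfect training error but a worse gap on held-out points.

```python
import numpy as np

# Noisy observations of a simple underlying signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)

# Hold out every third point to simulate "new, unseen data".
train = np.ones(x.size, bool)
train[::3] = False
xt, yt = x[train], y[train]
xv, yv = x[~train], y[~train]

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coefs = np.polyfit(xt, yt, degree)
    train_mse = np.mean((np.polyval(coefs, xt) - yt) ** 2)
    test_mse = np.mean((np.polyval(coefs, xv) - yv) ** 2)
    return train_mse, test_mse

simple_train, simple_test = fit_and_score(3)    # modest capacity
complex_train, complex_test = fit_and_score(15)  # over-flexible, chases the noise

# The complex model "wins" on training data but its held-out error
# reveals the generalisation gap.
print(f"simple:  train {simple_train:.3f}, test {simple_test:.3f}")
print(f"complex: train {complex_train:.3f}, test {complex_test:.3f}")
```

Evaluating on held-out data, as above, is the standard safeguard: a model chosen on training performance alone can be exactly the wrong one to deploy.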

In light of these challenges, it's essential to strike a balance. Non-linear data models and deep learning have undeniably transformed our capabilities in data analytics, but relying solely on their predictive prowess without understanding the underlying patterns and relationships can be risky for businesses.

Hybrid approaches, which combine the power of deep learning with the interpretability of simpler models, are gaining traction. Techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are examples of methodologies designed to shed light on the workings of complex models.
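The core idea behind LIME can be sketched in a few lines: sample points near the instance you want to explain, query the black-box model on them, and fit a weighted linear surrogate whose coefficients describe the model's local behaviour. The `black_box` function, the instance `x0`, and the kernel width below are all made up for illustration; real usage would call the `shap` or `lime` libraries against a trained model.

```python
import numpy as np

# Hypothetical black-box model: we can query predictions
# but, as with a deep network, cannot inspect its internals.
def black_box(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(42)
x0 = np.array([0.2, 1.0])  # the instance whose prediction we want to explain

# 1. Sample perturbations around x0 and query the black box.
Z = x0 + rng.normal(0, 0.1, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to x0 (Gaussian kernel),
#    so the surrogate only has to be faithful locally.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.1 ** 2))

# 3. Fit a weighted linear surrogate via least squares.
A = np.column_stack([np.ones(len(Z)), Z - x0])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

intercept, coef = beta[0], beta[1:]
# coef now approximates the black box's local sensitivities at x0:
# a readable, linear explanation of a non-linear model's behaviour.
print(f"local effect of feature 1: {coef[0]:.2f}")
print(f"local effect of feature 2: {coef[1]:.2f}")
```

The surrogate's coefficients play the same role as the coefficients of a traditional linear model, but only in the neighbourhood of the single prediction being explained, which is precisely the compromise these hybrid techniques offer.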

While the era of new radical empiricism, powered by deep learning and non-linear data models, offers unprecedented opportunities in data analytics, it also poses challenges in terms of interpretability and generalizability. For businesses to truly harness the power of these advanced models, they need to approach them judiciously, emphasizing not just prediction accuracy, but also model transparency and understanding.

Despite the 'science' label, data science involves a fair degree of art as well as science. Experience, an understanding of the business context, and a feel for the balance between different approaches and models can save a great deal of time when exploring data relationships and model features.

Algospark are experts in applied AI. Get in touch to discuss getting the most out of your applied AI opportunities.
