How to do Machine Learning without Learning Data Science


For all that TensorFlow is the current darling of the Machine Learning (ML) crowd, at its core it’s a library that represents a series of computations as a graph. Because each node in the graph is an operation and each edge carries a value (a tensor) flowing between operations, TensorFlow lends itself well to ML tasks.
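To make the graph idea concrete, here is a minimal sketch assuming TensorFlow 2.x; the function and values are purely illustrative, and tf.function simply traces the Python code into a graph of operations:

    import tensorflow as tf

    @tf.function  # trace this Python function into a TensorFlow graph
    def affine(x, w, b):
        # Roughly: a MatMul node feeding an Add node, with tensors flowing along the edges
        return tf.matmul(x, w) + b

    x = tf.constant([[1.0, 2.0]])
    w = tf.constant([[3.0], [4.0]])
    b = tf.constant([0.5])

    print(affine(x, w, b))  # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)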
Keras, on the other hand, is a high-level front end for TensorFlow that gives you access to most of its power while making it far simpler to use. Want to create a neural network that does image classification? With Keras, that’s a handful of lines of code: stack the layers you want, and Keras builds the model for you.
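For instance, a basic digit classifier might look like the sketch below. This is a hedged example assuming TensorFlow 2.x with its bundled Keras API and the MNIST dataset it ships with; the layer sizes are arbitrary illustrations, not a recommendation:

    from tensorflow import keras

    # Load the MNIST handwritten-digit dataset that ships with Keras
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

    # Stack the layers you want; Keras wires the model together for you
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

Every call here maps onto an underlying TensorFlow graph, but you never have to build that graph by hand.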

Technology Democratization

And here’s the rub: as technology becomes more accessible, it gets used by a wider range of people. But it also means that far more people are using technology without a good grasp of the principles and theory behind it. For example, almost everyone owns a computing device of some kind (a phone, laptop, or tablet), yet most of us couldn’t care less how it works. For the majority of us, computing devices are black boxes that we’re content to use as long as they continue to function. And when something does go wrong, we have computing experts to fall back on. ML, however, is different.


Interested in exploring ML? Download our O’Reilly book on how ML is changing the rules of business.



ML offers us the quintessential black box: it takes in data and spits out conclusions while providing no insight into how that output was reached. On the one hand, ML models can deal with complexity and relationships in data that even experts can’t identify. On the other hand, these black boxes present two major issues:

  • Recognizing problems: how do you know if the model is cheating? In other words, how do you know whether you can trust the results? For example, an image classifier designed to recognize horses was found to be cheating by learning to recognize a copyright tag that appeared on the horse pictures in its training data.
  • Diagnosing problems: if the results are incorrect, how do I diagnose the problem? In other words, if I don’t know what’s causing the error, how can I troubleshoot it? Complex models may have hundreds to thousands of inputs, and mapping that volume of inputs to outputs is extremely difficult, if not impossible, for humans. One basic sanity check is sketched after this list.
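One simple, model-agnostic sanity check for both issues is to measure how much each input actually drives the model’s predictions, for example with permutation importance. The sketch below uses scikit-learn on synthetic data; the dataset and feature indices are illustrative placeholders, not part of any real pipeline:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data; in practice this would be your own validation set
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the validation score drops
    result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1][:5]:
        print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")

If a feature that shouldn’t matter (the equivalent of the copyright tag) turns out to dominate the score, that’s a strong hint the model is cheating and the training data needs another look.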

Explainable AI

One of the proposed solutions to these problems is Explainable Artificial Intelligence (XAI), which seeks to create a version of ML that can explain in human terms how “black boxes” arrived at their decisions. The goal is to help users better understand whether to trust the results and to provide insight into how a conclusion was reached.
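As a rough illustration of what that kind of explanation can look like in practice, the sketch below uses the open-source shap library (one popular explainability tool, assumed here to be installed and not part of the discussion above) to attribute a single prediction to its input features. The dataset and model are placeholders, and the exact shap API can vary between versions:

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder data and model; any tree-based model would do for this sketch
    data = load_diabetes()
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

    # Attribute one prediction to the individual input features
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data[:1])

    # Show how much each feature pushed this particular prediction up or down
    for name, contribution in zip(data.feature_names, shap_values[0]):
        print(f"{name}: {contribution:+.2f}")

The output is a per-feature contribution for one specific decision, which is the kind of human-readable account of a black box that XAI is after.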
XAI is of particular interest to regulated industries such as housing, finance, and healthcare, where companies are required to explain how high-stakes decisions are made. Additionally, since it’s possible for data science teams to inadvertently build their own biases into their ML models, fields like HR want assurance that those biases don’t carry through when, for example, an ML model screens hiring candidates.
While there are some companies hoping to make XAI a reality, not everyone is convinced it’s possible in all cases. What do you think? Have your say by joining the discussion below.


Just getting started in ML? Read our 4-chapter Executive Guide to Machine Learning.

