Will A.I. take our jobs? Computers can now build smarter computers!

Were you thinking about getting into machine learning? ML is a quickly growing IT sector with constantly increasing demand for specialists. Sound good? Well, you may want to rethink the idea if your main motivation is job security. If Google gets its way, ML expertise may soon share the fate of RadioShack. It would be an ironic twist if the technology branded as a “job taker” ended up taking the jobs of its own creators, wouldn’t it? Let’s see if there is any remote chance of that actually happening.

In this article you will learn:

  • How AutoML enables step-by-step, wizard-like neural network building.
  • How “Neural Architecture Search” improves on classical AutoML by adding deep learning components to the search itself.
  • Which libraries and ready-made SaaS solutions you can use to leverage AutoML today.
  • What expertise will still likely be required even in the post-AutoML world.

The problem of complexity

Disclaimer: this may get a bit scary as I describe some of the complexity, but after that we will see how AutoML lets us bypass most of it.

First, let’s consider the important parts of designing a machine learning model. As always, data is key, but assuming you have well-prepared training and evaluation data, you then must come up with an architecture for the network model. On top of the architecture, you must pick values for all the available hyperparameters, which tweak the inner workings of the ML algorithm.

There is plenty to choose from among the architecture components. You have all the prebuilt layers to pick from. For these layers, and maybe some custom-built ones, you will have to decide the order in which they are linked together (perhaps you will throw some recurrence into the mix too). Each layer takes a bunch of parameters with some crazy acronyms. If this does not scare you, have a look at the Neural Network Zoo, which describes the branches of neural network topologies you should likely be familiar with.
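
To get a feel for how quickly these choices compound, here is a back-of-the-envelope sketch. The layer types, widths, and activations below are arbitrary examples picked for illustration, not a real library’s search space:

```python
# A back-of-the-envelope count of how quickly architecture choices multiply.
# The options below are arbitrary examples, not an exhaustive search space.
layer_types = ["dense", "conv", "lstm"]     # layer type at each position
widths = [32, 64, 128, 256]                 # units/filters per layer
activations = ["relu", "tanh", "sigmoid"]   # activation per layer
depth = 4                                   # number of layers in the stack

choices_per_layer = len(layer_types) * len(widths) * len(activations)
total_architectures = choices_per_layer ** depth

print(choices_per_layer)    # 36
print(total_architectures)  # 1679616
```

Even this toy space, with just four layers and no hyperparameters beyond the layer itself, contains over 1.6 million candidate architectures, which is exactly the kind of space no human can explore by hand.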

Scary? I think it should at the very least generate some respect for the field of machine learning. All of this is also why it’s so hard to start the journey with ML. In the beginning, a lot of the work comes down to guessing parameters and configurations in order to gain that ever-praised “intuition” about what works and what does not. And even once you do gain this intuition, you cannot be expected to have experience with every method and topology. ML is a complex and time-consuming field to learn and keep up with.

So, what is this AutoML and how can it help?

Believe it or not, AutoML is a solution for most of the problems I mentioned earlier. It is a great tool that is now available both as SaaS solutions like Google’s AutoML and as libraries you can run yourself, such as AutoKeras, TPOT, or AdaNet.

What you can expect from these solutions, given enough computing power, are auto-generated models that match, and in many cases beat, existing human-built state-of-the-art models.

Before jumping into coding and picking one of them at random, I recommend reading through some documentation and perhaps this benchmark to understand the advantages and limitations of each. There are subtle differences, but at their core they provide a lot of similar functionality, so you can expect to see features like:

  • Automated architecture discovery (both autonomous and guided).
  • Automated hyperparameters tuning.
  • Feature-level work like data selection and preprocessing.
  • Model compression – a series of optimization algorithms tweaking things like scaling factors in convolutional neural networks.
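
To make the second bullet concrete, here is a minimal sketch of what automated hyperparameter tuning boils down to at its core: sample configurations from a search space, score each one, and keep the best. The search space and the scoring function below are made up for illustration; a real library would train and validate an actual model at each step:

```python
import random

# Toy search space; real AutoML tools define these per model family.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
    "num_layers": [2, 3, 4],
}

def score(config):
    # Stand-in for "train a model and return validation accuracy".
    # Here we simply reward one combination so the search has an optimum.
    return (config["learning_rate"] == 0.01) + (config["num_layers"] == 3)

random.seed(0)
best_config, best_score = None, float("-inf")
for _ in range(50):  # 50 random trials
    config = {name: random.choice(values) for name, values in search_space.items()}
    s = score(config)
    if s > best_score:
        best_config, best_score = config, s

print(best_config, best_score)
```

Random search like this is the simplest strategy; the libraries above layer smarter search (Bayesian optimization, evolution, reinforcement learning) on top of the same loop.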

AutoML libraries will (in most scenarios) need some configuration; for instance, you will have to tell them how much time they should spend trialing a single model. You may also pass in a description of a high-level architecture idea, but that’s usually optional. You may simply let your AutoML tool come up with everything on its own. You could, for example, say:

  • Here I have 2 GB of labeled image samples.
  • You will be solving a problem of image classification.
  • Find me the fastest and most accurate model predicting whether someone is fighting a penguin in the picture.
  • You can work on my 2 PCs and a laptop at the same time.
  • Do not spend more than 1 hour on a single model.
  • I need results in 20 hours at the latest.
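
As an illustrative toy (not a real library’s API; real tools expose such limits as configuration options), the per-model and overall budgets from the list above could be enforced around a search loop like this, with shrunken budgets and a busy-wait standing in for training:

```python
import random
import time

PER_TRIAL_BUDGET = 0.01   # stand-in for "1 hour per model" (seconds here)
TOTAL_BUDGET = 0.05       # stand-in for "results in 20 hours"

def train_candidate(deadline):
    # Stand-in for training one candidate model: work until the
    # per-trial deadline, then return a fake validation score.
    while time.monotonic() < deadline:
        pass  # "training"
    return random.random()

random.seed(42)
start = time.monotonic()
best, trials = 0.0, 0
while time.monotonic() - start < TOTAL_BUDGET:
    # Each trial gets its own budget, but never past the overall deadline.
    trial_deadline = min(time.monotonic() + PER_TRIAL_BUDGET,
                         start + TOTAL_BUDGET)
    best = max(best, train_candidate(trial_deadline))
    trials += 1

print(trials, best)
```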

At a high level, AdaNet would handle this by iteratively growing an ensemble of candidate subnetworks, keeping each new subnetwork only if it improves the result.

And as for a minimalistic code example, here is what searching for an image classification model would look like in AutoKeras:

import autokeras as ak

# Search for an image classification model on the training data,
# then use the best model found to label the test set.
classifier = ak.ImageClassifier()
classifier.fit(x_train, y_train)
results = classifier.predict(x_test)

There is not much to say here; of course, this is a hello-world example, and as we know, complexity grows as we move away from it. In the case of AutoML, however, even some very complex concepts are not that hard to grasp. The reason is that this technology is built and marketed as a way for non-data-scientists to do machine learning.

What is this Neural Architecture Search then?

It’s a new and particularly popular branch of AutoML. It aims to solve the same problems, but with a different underlying technology, and rest assured, most of the libraries mentioned above utilize some form of Neural Architecture Search. NAS employs deep neural networks and techniques like reinforcement learning or evolutionary algorithms to boost its model-building capabilities even further. In recent years, deep neural networks have been outperforming their classical counterparts in many fields, like language and voice processing, as well as complex games like Go, Mario, or even StarCraft. To put it simply, deep networks are a costly but extremely effective method of solving problems.
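
As a toy illustration of the evolutionary flavor of NAS, the sketch below encodes architectures as lists of layer widths, mutates a population, and keeps the fittest candidates. The fitness function is made up for illustration; in a real NAS system it would be the validation score of a trained network:

```python
import random

WIDTH_CHOICES = [16, 32, 64, 128]

def fitness(arch):
    # Fake objective: pretend ~3 layers with an average width near 64
    # work best. A real NAS system would train and validate a network here.
    return -abs(len(arch) - 3) - abs(sum(arch) / len(arch) - 64) / 64

def mutate(arch):
    # Apply one random edit: change a layer's width, add a layer, or remove one.
    child = list(arch)
    op = random.choice(["widen", "add", "remove"])
    if op == "widen":
        child[random.randrange(len(child))] = random.choice(WIDTH_CHOICES)
    elif op == "add":
        child.append(random.choice(WIDTH_CHOICES))
    elif op == "remove" and len(child) > 1:
        child.pop(random.randrange(len(child)))
    return child

random.seed(1)
population = [[random.choice(WIDTH_CHOICES)] for _ in range(10)]
for generation in range(30):
    children = [mutate(random.choice(population)) for _ in range(10)]
    # Survival of the fittest: keep the 10 best of parents + children.
    population = sorted(population + children, key=fitness, reverse=True)[:10]

best = population[0]
print(best)
```

The loop is crude, but it captures the core NAS idea: the architecture itself is the thing being searched, and the expensive inner evaluation is what makes the compute bills so large.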

You can read more about Neural Architecture Search here.

Is it a silver bullet? And where is the human in all of this?

No, it is not a silver bullet. I can, however, imagine ways it could become one in the future, although there are still some challenges ahead. For instance, there is a lot more to data science than preparing and configuring a model. We still need to understand the context and the core problem we are solving, as well as prepare the data extraction pipelines. All of this will still require us humans.

It appears AutoML has a lot going for it, especially with strong advocates like Jeff Dean (lead of Google A.I.). Check out his keynote at the TensorFlow Dev Summit, where he suggests that, given 100 times the computational power, AutoML could perhaps replace the need for machine learning expertise entirely.

If you are an ML expert, I don’t think you should lose much sleep over this just yet. For now, just familiarize yourself with the AutoML technology and use it as the tool it is meant to be. I hope you found this article interesting, and as always, please leave your thoughts and experience with AutoML in the comments box below.

Aspire Blog Team

Aspire Systems is a global technology services firm serving as a trusted technology partner for our customers. We work with some of the world's most innovative enterprises and independent software vendors, helping them leverage technology and outsourcing in our specific areas of expertise. Our services include Product Engineering, Enterprise Solutions, Independent Testing Services and IT Infrastructure Support services. Our core philosophy of "Attention. Always." communicates our belief in lavishing care and attention on our customers and employees.