Do you have a future?

Elon Musk is afraid of AI. Bill Gates is afraid of AI. Stephen Hawking is afraid of AI. Artificial intelligence is dangerous.

To be precise, the world-famous physicist sees the election results that led to Donald Trump becoming President of the US, and Britain preparing for Brexit, as embodiments of the widening financial inequality caused by artificial intelligence. Three of the world's ten largest employers are already replacing their workers with robots, and AI will continue to destroy millions of jobs in the future, Hawking predicted at the World Economic Forum in December 2016.

In his opinion piece in the Guardian, titled This is the most dangerous time for our planet, Hawking highlights that large groups of people are seeing their standard of living collapse and eventually slip out of their reach, for good. Digitization reorganizes wealth and prosperity, handing it to some and wiping it away from others. And of all the security threats of our time, Hawking states, this is undeniably the biggest threat to security on Earth.

I don't claim to be Stephen Hawking, but I do know a thing or two about digitization, safety and the ongoing transformation of work. I also believe that safety is, among other things, a feeling, and can thus be improved by knowledge and understanding. Here, I will try to explain what will happen to work, roles and appreciation.

This article could just as well be titled: what kind of know-how is required in the future?

Most of us accept a doctor as an unquestionable authority. We also accept that a qualified doctor, in comparison to, say, a witch doctor, first examines, then makes a diagnosis and finally prescribes the medicine. Experienced doctors have seen so many patients that their level of knowledge is beyond our googling skills. These attributes, by definition, make a reliable doctor.

The plot twist of 2017 is that a machine does all of this better. Computers are faster and more accurate at making diagnoses and choosing the appropriate treatment, and their experience database is beyond any mortal being. This might sound utopian, but it is an actual service that is already in production. Dr. A.I., as they call it, has the knowledge of more than 100,000 doctors and the experience of millions of patients.

Let's take an example that even a child can understand. Quick, Draw! is a robot that tries to guess what you are drawing, and the more you draw, the better it becomes at guessing. Since its launch in November 2016 it has become incredibly good at identifying your terrible doodles.

This is my perception of a beard. It took three seconds for me and the AI to understand each other.

The AI behind the drawing game has been taught simply by random people who have entertained themselves by playing with the robot. Amusement for human beings is training for AI. Gamification is a revolutionary phenomenon, and in the conclusions I will return to the impact of this role reversal.

Most of us lack the ability to imagine the influence that AI will have on humankind.

To truly understand the scale of the coming change, one must first identify the existing solutions and the logic behind them. Only then can one start detecting weak signals and the unknown possibilities they can lead to. This topic was recently discussed in public by those who have the latest information on AI and its potential.

IBM CEO Ginni Rometty, MIT Media Lab director Joichi Ito, HealthTap CEO Ron Gutman and Microsoft CEO Satya Nadella talked about AI, and the fourth industrial revolution it will bring, at the World Economic Forum Annual Meeting in mid-January.

To get an idea of just how common AI already is, imagine this: one of the most recognized cognitive technologies, IBM Watson, will touch approximately one billion people this year through its business solutions. Recently it read through all of Bob Dylan's lyrics and analyzed them.

Microsoft bought Skype for $8.5 billion in late 2011. It was a huge investment in what was, although dominant, a relatively simple VoIP product. Five years later it can translate ten different languages for 100 participants simultaneously.

Cognitive means that a system pursues human-like thinking. It doesn't just repeat predetermined algorithms; it can understand, reason, learn and interact. Take these capabilities, combine them with what a machine can do today, and then imagine what it can do tomorrow.

As an overview, all the Annual Meeting panelists at the World Economic Forum recognized the change AI will cause in the number of jobs. But they also brought up the change in the availability of services. The democratization of AI will not only reduce jobs, but also provide services for those who could not otherwise afford them.

In Kenya, there is a startup selling the means to illuminate your home while creating credit histories for a large number of customers, some with a monthly income as low as one dollar, who previously had no access to formal financial services. You start with a simple $35 kit with a solar panel, a multi-device charger, lights, a radio and a pay-as-you-go SIM card. You take a loan, pay the deposit, get the starter kit and then pay the loan back through mobile money transfers that can be as small as 50 cents each.

Again, this is possible because AI can make a loan decision within microseconds. Low-income customers wouldn't be profitable if they were served by a human. But when you have a machine that can reliably handle tens of thousands of loan applications in a day, you suddenly have a huge mass of low-risk customers, and therefore a profitable market.
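To make the economics concrete, a minimal sketch of such an automated decision follows. The fields, thresholds and scoring rules below are invented purely for illustration; a real credit model would be learned from data, not hand-written.

```python
def loan_decision(applicant):
    """Toy automated credit decision.
    The scoring rules are invented for illustration only."""
    score = 0
    # Reward a track record of on-time mobile-money payments.
    if applicant["payments_on_time"] >= 10:
        score += 2
    # Any verifiable income at all counts for something.
    if applicant["monthly_income_usd"] >= 1:
        score += 1
    # Past defaults weigh heavily against approval.
    if applicant["defaults"] > 0:
        score -= 3
    return "approve" if score >= 2 else "decline"

applications = [
    {"payments_on_time": 12, "monthly_income_usd": 1, "defaults": 0},
    {"payments_on_time": 2, "monthly_income_usd": 30, "defaults": 1},
]
print([loan_decision(a) for a in applications])  # ['approve', 'decline']
```

Even a trivial loop like this evaluates tens of thousands of applications per second; the point is that the marginal cost of one more decision is effectively zero, which is what makes one-dollar-a-month customers a viable market.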

And this is where the key to understanding the future of work lies.

Just as the ability to use accumulated data is a competitive asset for any organization, it is a competitive asset for job applicants, be they human or machine. Computers are already better than humans in many areas, and will be even more so in the future.

Information overflow will make it impossible for humans to compete on knowledge. That is the single most obvious competitive advantage computers have over the limited computing power of a human and our constant susceptibility to error. Computers have practically infinite capacity and, when properly programmed, they don't miscalculate.

AI also enables moving from reactive to proactive healthcare. Wearables, sensors and big data provide the means to identify possible diseases even before they occur. Illnesses that are sensitive topics, for example mental illness or substance abuse, could be easier to bring up based on data. When someone tells you that you might have a problem, you are likely to feel judged, put up your defence mechanisms and deny the problem. It might be easier to stay neutral if you instead get a message saying "Based on your recent behaviour, you belong to a risk group. Learn more?"
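At its simplest, such a proactive alert can be nothing more than a threshold check over aggregated sensor data. The metrics and cut-off values in this sketch are invented examples, not medical guidance:

```python
def risk_message(week):
    """Toy proactive-health check over one week of wearable data.
    Metrics and thresholds are invented for illustration only."""
    flags = []
    if week["avg_sleep_hours"] < 5:
        flags.append("sleep")
    if week["resting_heart_rate"] > 90:
        flags.append("heart rate")
    if flags:
        return ("Based on your recent behaviour ({}), you belong to "
                "a risk group. Learn more?".format(", ".join(flags)))
    return None  # nothing to flag, stay silent

print(risk_message({"avg_sleep_hours": 4.5, "resting_heart_rate": 95}))
```

The design choice matters more than the code: the message names the data it is based on and ends with a question, so the system informs rather than judges.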

The fact is that digitization is changing, and will change, the concept of work.

This causes anxiety, mostly because the future is not ours to see. We can't make exact predictions about what is to come, but luckily we can reason about some parts of it.

First of all, we can assume that not all the skills needed in the post-revolution era are high value skills. The earlier gamification example, the drawing game, gives a simplified picture of the easiest jobs that need to be done, but can only be done by a human.

AI needs to be taught to understand humans, and this only happens with high volumes of different kinds of users. You don't need an education to show AI how you would intuitively use a service. If you tap wrong, the AI will learn from it, and the next time someone taps the same button in the same user interface, the AI will understand what the user wanted to do.
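The mis-tap example can be sketched as a simple frequency learner: record what each user actually ended up doing after tapping a given button, and predict the majority intent for the next user. The button and intent names here are hypothetical:

```python
from collections import Counter, defaultdict

class TapLearner:
    """Toy interface learner: remembers what users actually wanted
    after tapping each button, and predicts the majority intent."""
    def __init__(self):
        self.intents = defaultdict(Counter)

    def observe(self, button, actual_intent):
        # Even a "wrong" tap is a lesson: the user tapped `button`
        # but then navigated to `actual_intent`.
        self.intents[button][actual_intent] += 1

    def predict(self, button):
        seen = self.intents[button]
        return seen.most_common(1)[0][0] if seen else None

ui = TapLearner()
ui.observe("hamburger_menu", "search")    # tapped the menu, then searched
ui.observe("hamburger_menu", "search")
ui.observe("hamburger_menu", "settings")
print(ui.predict("hamburger_menu"))  # search
```

No credentials are needed to produce this training data; simply using the service, mistakes included, is the work.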

This, of course, is not high value work. It is just a modern version of the proletariat. Instead of driving trucks, emptying garbage cans and swabbing floors, some of us will train the AI to serve other people.

There will probably be a middle class in the future too, but it will differ from today's. Because anyone can communicate directly with AI, there is less need for service jobs. Service will no longer be necessary the way it is today; human service will eventually become something extra, a luxury, sold from middle class to middle class. But the middle class will no longer consist of doctors or loan officers, as demonstrated above. The typical middle class job of the future is a luxury servant. Which leads us to the topic:

What is the high value work of tomorrow?

Understanding the logic behind computers won't be what high productivity work requires in the future. What it will require is the ability to turn abstract matters into something tangible that machines can then implement.

I claim that there will be three categories of high value work that not everyone, and certainly no machine, will be able to do. The first category is about creativity, the second about transparency and the third about morals.

Creativity, it goes without saying, is human. One can argue about the true nature of art or philosophy, but they are by definition something human. I know this is a huge topic that deserves a blog post of its own, so let's take this as a given and agree to refine the terminology later.

The creative category also includes the kind of jobs where you design new kinds of services. This is not to be confused with designing user interfaces or user experience, because that too can be automated. Designing new services requires imagination, because it is about things that don't yet exist.

Of course AI can create something new by making random combinations, but this is limited to mixing things the AI is already familiar with. AI can produce scenarios, but it doesn't come up with the genuinely new. And, most importantly, AI can't recognize the value of an idea if the idea is not based on familiar matters. Ideas can't be valued if they are not understood.

The second category, transparency, is not equally obvious. In order for new technologies to break through, they need to be accepted. The CEOs of the biggest companies in AI development say that people only trust what they understand.

Luckily we don't need to understand everything; intuitive knowledge is enough. Few of us know exactly how a TV works, but we have an idea. Now imagine taking a TV to the Middle Ages and trying to convince people that it's not sorcery.

Transparency means that if we are to adopt an enormous number of new services, we need to understand them at some level. This blog post is an example of transparency: I'm telling you that AI aims for augmentation instead of replacement. The objective is to create AI whose purpose is to help humans do what they do, better.

Open communication is a must if we are supposed to trust a self-driving car or let AI go through our messages and health records. We who understand the technologies must explain the gains of this transformation.

This leads us to the last and most challenging job there will ever be.

There must be regulation. Even though I firmly believe in market-driven research and development, it does not work with AI. Let's take a simple example.

When the AI for self-driving cars is created, someone must make decisions. Imagine a scenario where a school bus full of elementary school pupils, on a narrow mountain road, is about to collide fatally with a passenger car carrying two people. If the pupils could be saved by the passenger car driving off the road to certain death, should the car do it? Most people say yes. But nobody would buy that car.

Morals are the tricky part that can't be left to the AI to decide. There must be both research and regulation, first to make the hard decisions and then to implement them. We will need morals, ethics and rules that are not set by the markets or the AI, since neither of the two has morals.

Do you have the talent needed in the future?

As described above, there will be low value jobs and middle class jobs in the future. But if you aim at a high productivity position, reflect the three categories (creativity, transparency and morals) against your work. Do you possess a useful talent? Should you start working now to acquire a skill set that will be relevant in the near future?

As humankind we also have to consider these questions from a wider perspective. What does the change in the required skill set mean? We have to move from teaching skills and knowledge to teaching learning. We should stop teaching things that a computer can do and focus instead on how to be creative.

A true winning combination, and truly high value work, consists of all three categories. First, you must be creative to produce a new idea. Then, you must be transparent in order to win people's support for the change and to be able to implement the idea.

Eventually, you have to have your heart in the right place. This article started from a safety point of view, and safety draws it together. Morals in the AI context mean that we must design AI that exists to help people, and ensure that the solutions are safe.

Whoever masters these three abstract areas of expertise will never have to look for a job.