“Will AI take our jobs?”
People tend to talk about AI as an autonomous agent. We anthropomorphize AI with human verbs such as write, see, draw, listen, converse. I don’t think these verbs are incorrect, but they leave another verb to the imagination: intend. If we believe AI has its own intent, separate from our own, we’re misstepping.
AI is narrow and fragile. It doesn't function well outside the scope it was set up for, and it can only pursue simple objective functions. It is really us, the humans, using our own intelligence to apply it effectively enough that a job may be automated.
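To make this concrete, here is a minimal sketch of my own (a toy logistic-regression classifier in Python; the data and numbers are invented for illustration) of just how narrow a system's "goal" really is: a single scalar objective that a human chose, applied to data a human chose.

```python
# Illustrative sketch: a narrow AI's entire "intent" is one
# human-chosen objective function, nothing more.
import numpy as np

rng = np.random.default_rng(0)

# The human picks the task: separate two clusters of points.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # model's predictions
    # The system's whole "goal": minimize log-loss on this data.
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

# Outside the narrow scope it was set up for, the model is fragile:
far_away = np.array([[25.0, -40.0]])          # nothing like the training data
print(1 / (1 + np.exp(-(far_away @ w + b))))  # still answers, confidently
```

The point of the sketch is that nothing in the loop resembles intent; the objective, the data, and the decision to deploy are all human choices.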
Even so, a job must consist of highly repeatable tasks for us to automate it away completely. (If you want to think about the jobs of the future, think about the non-repeatable, nuanced parts of a job and see how you can scale those up.)
We are the source
AI is not something alien. We are building it through our collective action, which is fully capable of producing something we don't want. There is a false sense that once a system is set up, you can leave it to run on its own and it will take care of everything by itself; the reality is anything but.
Until two years ago, the entire field was just trying to exist. It has shifted so quickly: from barely working, to working nicely, to being genuinely effective on important tasks that everyone now wants to roll out. The potential impact is so great that it is certain to affect the entire economy and even our social fabric.
The main pitfall we face is completely within our control: it’s that we think it’s not in our control. I think assigning AI its own intent is rooted in this erroneous thinking. Spreading a clear understanding of what this technology is, and what it isn’t, will be critical to its healthy development.
More importantly, we will have to recognize the immense power AI represents for implementing human intent, and the side effects it can have once at scale. This, not the prospect of losing control, is the biggest threat.
Look to climate change activists
I was just at the Aspen Institute for a roundtable on the healthy development of AI and the future of personal autonomy. I was amazed to hear the stories of how overwhelmed many agencies and institutions have become in the last year trying to cope with the speed and impact of change. Just getting everyone on the same page about what the real problems are is a huge challenge. In considering a way forward, we looked to how climate change activists have communicated their cause.
Climate change is a tricky problem to fight. It affects everyone, but by the time most people notice the impacts on their own lives, it will be too late. The challenge is getting people to see the effects now, which are only really noticeable through scientific observation. Activists have therefore needed to explain several fundamental concepts (emissions, greenhouse gases, weather vs. climate, etc.) to bring the population up to speed and get them to sign on to certain solutions.
For AI, we face a similar challenge of coming from a highly technical field. Some of the fundamental concepts people need to understand are: data governance, bias, privacy, machine learning, the distinction between information, data, and intelligence, and intellectual property. We need populations to understand these concepts, or symbolic versions of them, to help reshape our social contract and demand effective regulation of the technology.
Regulating a powerful yet simplistic AI
When we discuss regulation, the focus should be on keeping organizations from pushing simplistic automation so far that it becomes unsafe. However, rewriting regulation to cover all the affected domains is simply too big a task within the time government has to catch up with the technology.
In the U.S., the Federal Trade Commission is talking about design principles for a new high-level framework with which to judge the current law. Our own approach to AI-First Design is similar; we are engaging with other leaders in the field of experience design to determine a guiding philosophy and principles so that practitioners can then work out their own domain-specific rules.
This is actually much better than just rewriting the regulation, because it flattens society's response: common philosophies last because rigid rules aren't being poorly applied where they don't fit. It empowers individuals, and as practitioners ourselves, we can be more engaged in those high-level discussions of philosophy.
But, back to what the philosophy even is. What characteristics of a narrow AI should we expect to see in order to trust it in production? We are having these conversations now, and they start with understanding the fundamental concepts of AI technology and its related impacts.
I’ve started using a name for being able to see the world with a clear understanding of these concepts: The AI-First Mindset. This mindset means seeing the world with AI underlying everything, much as we see electricity or the internet. It is taking shape and helping us form new principles for designing in different domains: organizations, policy, products, and humanitarian programs.
I think these principles will themselves become a part of the mindset, and make it accessible to a broader and broader group as it develops. The first concept to remember is that we, humans, are the source of all of this and have the option of controlling it. To control it we all need to understand it.
People who understand it are already taking collective action to demand new rules and set clear expectations. My co-founder, Yoshua Bengio, just signed an open letter with 115 other experts calling for the UN to ban lethal autonomous weapons.
Understanding our intentions
Tristan Harris, formerly a design ethicist at Google, talks about Facebook’s algorithm for grabbing and holding attention. What that algorithm “discovered” is that outrage is a powerful tool for winning the attention economy. Now we have an outrage machine that over two billion people are using (to be fair, the other discovery was that really, really cute kittens are also a powerful pull on attention; what we’re seeing are the extremes). Is that the outcome we as a society want? We can’t just tell Facebook to stop optimizing for attention (ads) if that’s the game they are playing to win. The incentives, and thus the intentions, have to change.
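Here is a toy sketch (entirely hypothetical, not Facebook's actual system; the `outrage` and `cuteness` features and the weights are invented for illustration) of why optimizing purely for attention surfaces extremes. The ranker has no intent of its own; it just sorts by whatever metric its operators chose.

```python
# Hypothetical feed ranker: the machine's apparent "intent" is just
# whatever objective its human operators picked.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float   # assumed proxy features; in reality these would be learned
    cuteness: float

def expected_engagement(post: Post) -> float:
    # Assumption for illustration: emotional extremes drive clicks and shares.
    return 1.0 + 3.0 * post.outrage + 2.5 * post.cuteness

feed = [
    Post("Nuanced policy analysis", outrage=0.1, cuteness=0.0),
    Post("You won't BELIEVE what they did", outrage=0.9, cuteness=0.0),
    Post("Really, really cute kittens", outrage=0.0, cuteness=0.9),
]

# "Winning" the attention economy is just this sort. Change the objective,
# and the behavior of the whole machine changes with it.
for post in sorted(feed, key=expected_engagement, reverse=True):
    print(round(expected_engagement(post), 2), post.text)
```

Swap `expected_engagement` for a different objective and the same machinery promotes different content; the intent lives in the metric, not the machine.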
Right now, the primary measure of a country’s well-being is GDP. If the incentive is simply to drive GDP, then yes, we will automate away jobs and concentrate even more of the world’s wealth in the hands of a few. I don’t think that is where we want to go. GDP does not capture everything, so what should we be optimizing for? We need to rethink our own intent.
I also post these blogs on Medium, and usually send them out first via the subscriber list for this site.