For people to pay attention to a problem, much less act on it, something has to really go wrong. As noise is turned into information, I think we’ve had some fundamental shifts in our social contract that modern society is only beginning to pay attention to. In 2010, society was hardly aware of the kind of information organizations had on people. Now, at the advent of AI’s widespread integration into our lives, a growing number of events in the digital world (Equifax, Snowden) are forcing us to think hard about the implications of the technology we use every day.
Many loud, influential voices (perhaps most notably Elon Musk) are wary of the next 40 years of technology, framing it figuratively and literally as its own autonomous being. AI is at the core of these discussions, and there are some related applications that deserve concern. AI is already showing how easily pseudo-science can sneak back into our institutions, and how much power just a few companies have over what we think.
What can we do?
To start, practitioners could use a brush-up on Data Science 101, especially concepts like “correlation does not imply causation” and “informed consent”.
“(w) ‘Correlation does not imply causation’ is a phrase used in science and statistics to emphasize that a correlation between two variables does not necessarily imply that one causes the other.
...
(yy) ‘Informed consent’ denotes the agreement by a person to a proposed course of conduct after the data scientist has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct.”
- From the Data Science Association’s Code of Conduct
Fudging these standards of science has directed questioning eyes at the tech industry. Blood is in the water when it comes to antitrust regulation. Leaders inside and outside the tech industry are also calling for laws and principles of safe technology, again with a central focus on AI. These range from Oren Etzioni’s update on Isaac Asimov’s famous Three Laws of Robotics, to the research community’s Asilomar AI Principles, to other executives’ own rules. [We have a full literature survey of principles, ethics, and other relevant rules for AI in society here.]
Most of these rules are about far-fetched AGI or high-level moral imperatives. No one really disagrees on the need for virtues, but what are they really changing? The reality of the industry is that AI is still very narrow and splintered. Not only are we far away from tying those capabilities together into AGI, we also lack common standards, and the ideas put forth are not very actionable. We do have a few players coming together on missions like the Partnership on AI, but we need to do more to set high standards of quality and security, and to lay the foundations for even being capable of meeting the moral imperatives set by these philosophers and futurists.
These rules are not just about how we translate our human values into machine outcomes, but also how machine outcomes impact our values. In developing our AI-First Design methodology at Element AI, we saw that as designers we can’t ignore that feedback loop and need to include it in our overall design process. It is time to stop treating AI like a black box and be willing to shine a light on what the technology is really doing, so that we renovate our social contract consciously rather than automatically.

From our AI-First Design (AI1D) Methodology
Self-regulation will be as important as governmental regulation. For one, legislation will take some time to get up to speed, but tougher rules are coming thanks to the growing recognition of the power big tech holds with its data.
There are also those calling for regulation to give themselves a chance to catch up. They base those calls on reasonable claims: consumers need assurance about how their data is used, and they need clarity and confidence in digital technologies and services.
When government acts, hopefully it will turn into something positive, but the industry should also show some leadership and help frame this debate. We should fight for the industry to be transparent, accountable, and good for humanity, so that people don’t turn against this technology in a backlash.
4 steps to good narrow AI
Transparency is the hard part. The enforceability of regulation and the accountability of practitioners both hinge on transparency. This is a real can of worms for our industry, because at first glance it goes directly against many business models.
But we have a three-legged stool problem. For us to maximize the benefit of AI, we need to balance the benefits to the user, society, and the industry. If one leg is too long, or if one leg is broken or damaged (say due to unsafe AI), the whole thing threatens to topple over. That is why having clear, well-planned rules is important: to keep AI fair and working for good.
As the creators of AI systems, we are closest to ensuring the proper setup for keeping the stool balanced, and have a vested interest in leading the healthy development of an industry that can be regulated from without and from within.
In order for our industry to start being accountable, I think we should follow four steps with the systems we are building:
- Make it Predictable - What is the purpose? Have you stated how you intend to make use of it?
- Make it Explainable - Is it clear that you are achieving that intent? Can the user ascertain why a result happened?
- Make it Secure - Is the stated purpose stable? Have you tested it with shock tests for corruptibility?
- Make it Transparent - Have you hit publish or made this information auditable?
1. Predictable - What is the purpose? Have you stated how you intend to make use of it?
In laying out their ethics for narrow AI, Nick Bostrom and Eliezer Yudkowsky wrote, “[These are] all criteria that apply to humans performing social functions; all criteria that must be considered in an algorithm intended to replace human judgment of social functions.” When you meet someone for an exchange, you want to understand their intent. The digital world has tricked us into ignoring that, and I think we have gotten to a point where we can no longer make a strong claim of “informed consent.”
We need to be clear that machines do not have their own intent. Right now we have many algorithms that seem to do the same thing, like image recognition, but their purposes are different. One may look at the clothes, pose, and background, while another may look solely at the permanent features of someone’s face.
It is also important to be clear about what we do with these tools, our intent. Why are you identifying faces? What are you doing with that output (or who are you selling it to)?
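As a concrete illustration, a stated purpose can be as small as a machine-readable declaration that ships with the system. The sketch below is only that, a sketch: the PurposeStatement class and its fields are hypothetical, not an existing standard or part of any published methodology.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PurposeStatement:
    """A hypothetical, machine-readable statement of intent for a narrow AI system."""
    system_name: str
    purpose: str                  # what the model is built to do
    intended_use: str             # how the operator intends to use the output
    inputs_considered: List[str]  # the signals the model actually looks at
    outputs_shared_with: List[str] = field(default_factory=list)  # who receives the output

# Two "image recognition" systems that look superficially similar but have very different purposes
catalogue_tagger = PurposeStatement(
    system_name="catalogue-tagger",
    purpose="Identify clothing items, poses, and backgrounds in product photos",
    intended_use="Auto-tag images so shoppers can search the catalogue",
    inputs_considered=["clothing", "pose", "background"],
)

face_matcher = PurposeStatement(
    system_name="door-access",
    purpose="Match permanent facial features against an access list",
    intended_use="Unlock the office door for enrolled employees",
    inputs_considered=["facial geometry"],
    outputs_shared_with=["building security log"],
)
```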
2. Explainable - Is it clear that you are achieving that intent? Can the user ascertain why a result happened?
Until recently, the UI of software exposed everything that was in the software: you could query it and get access to the database. Now software runs in the cloud and on various devices, running all sorts of services in the back end the user would never know about. Sometimes it’s optimized for the user, but it doesn’t necessarily have their best interests in mind. That’s OK if they know what those motives are, but I think most people are unknowingly being served experiences designed purely to capture their attention and serve them ads. That relationship is opaque, and in my opinion unethical.
AI is making software even more of a black box. For it to be explainable, it should provide the inputs it takes into account, the purpose of the software, what feedback it is gathering, and where that feedback is being used. This is where we can get back to achieving “informed consent”, and contrary to popular opinion, this is quite doable if it is done from the beginning of a project.
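One way to make a result explainable is to report, alongside each prediction, which inputs were taken into account and how much each one pushed the outcome. The sketch below does this for a simple linear scoring rule; the feature names, weights, and the loan-approval framing are invented for illustration, and a real system would use an attribution method appropriate to its own model.

```python
import math

# Hypothetical weights for a linear "loan approval" score, for illustration only
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_at_job": 0.3}
BIAS = -0.1

def predict_with_explanation(features: dict) -> dict:
    """Return the decision plus the contribution of each input to the score."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))
    return {
        "approved": probability >= 0.5,
        "probability": round(probability, 3),
        # Which inputs mattered, ordered by how strongly they influenced the result
        "inputs_considered": sorted(contributions, key=lambda k: -abs(contributions[k])),
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
    }

print(predict_with_explanation({"income": 1.2, "debt_ratio": 0.9, "years_at_job": 4.0}))
```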
3. Secure - Is the stated purpose stable? Have you tested it with shock tests for corruptibility?
Just as we stress-test banks to check their resilience against financial shocks, so should we test our algorithms against corruptive agents and data anomalies. Is the system robust to false signals or to the introduction of bias? Is it incorruptible against bots, trolls, and other manipulations?
After all of this work clarifying the purpose of the machine and how it achieves it, it’s critical to show that the purpose won’t change; otherwise the other principles are undermined. In fact, the algorithms can become our canaries in the coal mine, alerting us when it is time to take back control of the wheel.
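A shock test can be as simple as perturbing the inputs a system sees and measuring how often its decision flips. The sketch below is a minimal, illustrative version: predict stands in for whatever model is under test, and the noise level, trial count, and pass threshold are arbitrary assumptions rather than recommended values.

```python
import random

def predict(features: dict) -> bool:
    """Stand-in for the model under test: a fixed linear rule, for illustration only."""
    score = 0.8 * features["income"] - 1.2 * features["debt_ratio"]
    return score > 0

def shock_test(baseline: dict, noise_scale: float = 0.1, trials: int = 1000, seed: int = 42) -> float:
    """Perturb each input with random noise and report how often the decision flips."""
    rng = random.Random(seed)
    original = predict(baseline)
    flips = 0
    for _ in range(trials):
        shocked = {k: v + rng.gauss(0, noise_scale * abs(v)) for k, v in baseline.items()}
        if predict(shocked) != original:
            flips += 1
    return flips / trials

flip_rate = shock_test({"income": 2.0, "debt_ratio": 0.9})
print(f"Decision flipped in {flip_rate:.1%} of shocked trials")
assert flip_rate < 0.05, "Too sensitive to small input shocks to claim a stable purpose"
```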
4. Transparent - Have you hit publish or made this information auditable?
If we do this as an industry, we have an opportunity to be accountable. The principles others have put forth are highly subjective, so these things need to be transparent to everyone so that our society’s collective values can be applied, not a single company’s (or its board members’) interpretations.
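Hitting publish can mean writing the same information out in a machine-readable, auditable form. The sketch below bundles a stated purpose and shock-test results into a hash-stamped JSON record; every field value here is hypothetical, and the format itself is an assumption, not an established schema.

```python
import datetime
import hashlib
import json

# Hypothetical audit record combining the stated purpose, the inputs considered,
# the feedback collected, and the latest shock-test results
audit_record = {
    "system": "door-access",
    "published": datetime.date.today().isoformat(),
    "stated_purpose": "Match permanent facial features against an access list",
    "inputs_considered": ["facial geometry"],
    "feedback_collected": ["entry granted/denied events"],
    "shock_test": {"noise_scale": 0.1, "trials": 1000, "flip_rate": 0.004},
}

# A content hash makes later, silent edits to the published record detectable
payload = json.dumps(audit_record, sort_keys=True).encode()
audit_record["sha256"] = hashlib.sha256(payload).hexdigest()

with open("audit_record.json", "w") as f:
    json.dump(audit_record, f, indent=2)
```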
Every stakeholder that wants AI to be for good should get moving. Users will have their consumer groups, society its policy makers, and industry its ethics boards. The key will be having regulation and consumer groups strong enough to paralyze those who are not acting transparently.
We need to enforce transparency about what’s in software because it impacts society. If you’re a food company that believes in healthy eating, not just in offering healthy options, you’re going to ask for better regulation of the industry as a whole, and at the same time invest in preparing yourself not only to meet the standards of healthy nutrition, but also to be transparent about meeting those standards.
Just by taking action (beyond talk), we can create a powerful economic incentive for companies to enforce their own standards of transparency, so that they can clear the transparency hurdle early without disrupting their businesses.
I realize this proposal sounds like it’s blowing up business models as we know them. I think it is to an extent, but right now we face a few realities that I believe necessitate this.
- We need the trust of society to carry forward and innovate
- That trust is beginning to wane as externalities become apparent (to all of us)
- A lot of regulation is a blank slate and can change quickly, for better or for worse
- It will be for the best if we participate as an industry to enforce transparent standards
I am not proposing that companies lay bare everything, but with the many splintered, narrow applications of AI, we all need to participate as we create the foundations for this fledgling industry. If you can’t prove you’re playing by the rules, should you be allowed to play at all? For AI to be for good, those building it have to be accountable for it, and for them to be accountable they have to be transparent.
I look forward to feedback and discussion on these steps. I also post these blogs on Medium, and usually send them out first via the subscriber list for this site.
You can also see me speaking about this in more detail, with Q&A, at the Web Summit in Lisbon, November 6-9.