Without agreed standards, innovation will grind to a halt in the courts.

Laws govern the conduct of humans, and sometimes the machines humans use, such as cars. But what happens when those machines become autonomous, as when artificial intelligence drives the car? Who is responsible when the AI violates the law?

An AI is, by design, artificial, and thus ideas such as liability or a jury of peers appear meaningless. The law will need to adapt through governance of applications, to avoid an untenable future in which AI is held responsible for its own actions.

This consortium aims to work with government policy makers and the American Civil Liberties Union to ensure accordance with all applicable domestic and international law and to provide the greatest possible transparency to the public.

Our Solution

"A dog isn’t held liable for biting."

We don’t regulate non-human behavior. Given the limits of the court system and the impact of AI on today’s society, the world will need to adopt a standard for AI under which manufacturers and developers agree to abide by general ethical guidelines, transparency requirements, and technical standards set by AI experts and government regulators. This ensures that manufacturers and developers are liable when it is foreseeable that their algorithms and data can cause harm.

Requirements include holding developers and corporations to a standard that makes liability clear for harm resulting from the use of artificial intelligence.
The AI Consortium also uses machine-learning techniques to spot similarities with previously approved AI applications, providing a point of reference for new submissions.
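For illustration, one minimal way to surface such similarities is a bag-of-words cosine-similarity search over the text summaries of previously approved applications. This is a hypothetical sketch, not the consortium's actual pipeline; the example corpus and function names are invented for this example.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words term counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def most_similar(new_summary: str, approved: dict[str, str]) -> list[tuple[str, float]]:
    """Rank previously approved application summaries by similarity to a new one."""
    scores = [(name, cosine_similarity(new_summary, text))
              for name, text in approved.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Hypothetical corpus of previously approved application summaries.
approved = {
    "lane-assist": "computer vision model that keeps a vehicle within lane markings",
    "check-deposit": "image recognition model that reads handwritten amounts on checks",
}
ranking = most_similar("vision model that detects lane boundaries for a vehicle", approved)
```

A production system would more likely use learned embeddings than raw term counts, but the principle is the same: reviewers see which prior approvals a new submission most resembles.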

At minimum, to ensure standards, you must:

  • Correctly identify any harmful impacts of your artificial intelligence application.
  • Provide appropriate means of redress.
  • Clearly define accountability measures.
  • Clearly describe the substantial benefits your artificial intelligence application offers society.

Who watches the watchers?

This consortium works with government regulators, industry partners, and academic professionals to serve as:

  • An ombudsman supporting citizen challenges to organizations using machine learning.
  • An ethical review board capable of assessing the potential harms and benefits to society of particular applications and combinations of artificial intelligence.

The mission is to ask and answer whether the application achieves the end it sets out to achieve, and whether that end is desirable.

Transparency in Innovation

AI systems are starting to have transformational impacts on everyday life: from driverless cars, to bank apps that let us deposit checks with a picture, to decision-making tools that tailor products for sales and advertising. Such breakthroughs raise a host of questions for society, including ethical issues about the transparency of AI decision-making as well as privacy, safety, and overall governance.

By ensuring transparency, we provide an avenue for public debate on applications of AI, an important factor in governance. The consortium is mindful of privacy issues and able to adapt to new uses and more advanced forms of artificial intelligence.

This consortium ensures that only AI models that are properly trained and working as expected make it to market.

Leveraging Existing Initiatives

The AI Now initiative researches the social impacts of artificial intelligence, while the Partnership on AI studies and formulates best practices for AI technologies, advances the public’s understanding of AI, and serves as an open platform for discussion and engagement about AI and its influence on people and society.

While these initiatives are key to the work at AIGC, they take an academic approach. AIGC adopts a more practical approach in which applications are reviewed before they are released to the public. Much as the United States Patent and Trademark Office issues patents to inventors and businesses for their inventions and registers trademarks for products, AIGC reviews AI applications for the safety of the public, the legal benefit of inventors, and an avenue for clearly defined accountability measures.

By combining efforts with new and existing initiatives, we promote best practices across AI development and work with policy makers to ensure legislation that fosters innovation.

Simplicity in Process

The simplicity of our process promotes innovation in the field of artificial intelligence and provides an avenue for collaborative research into public issues, knowledge sharing, and governance.
We encourage the creation of many more organizations focused on AI governance and public debate, and we intend to form partnerships for broader reach, greater impact, and a thorough review and approval process for AI applications.

The consortium is flexible, able to adapt to new and more advanced forms of artificial intelligence applications.

Contact

An opportunity to build on this project with many more contributors would pave the way for major impact.

Connect with Felix

Want to learn more about artificial intelligence and its governance?

Sponsored by The Berkman Klein Center & MIT Media Lab


Who am I?

I am an experienced product manager with proven results delivering successful products for companies like IBM on a global scale. I run the Felix AI Lab for Social Assistive Robots, with over 3,900 members.