Few still wonder whether AI is going to be fantastically profitable. But many are wondering what we can do to prevent it from being fantastically destructive to our ecologies, economies, communities, identities, and bodies.
Software for managing AI models directly is necessary but insufficient for realizing a more just, sustainable future. The problems were never really the datasets. They were the people and processes around the datasets. Always have been.
The fundamental challenge of deploying AI Responsibility at scale is not a problem of technology, but one of hegemony and change management. Of incentivizing and aligning capital and management with a better way of doing things, in service of a stakeholder-oriented world.
That's why we built The AIRL.
We created The AIRL to serve leaders who want to use AI Systems responsibly. In ways that are de-risked, profitable, and sustainable.
We partner with organizations that appreciate the incentives they’re under to nurture better relationships with the stakeholders around them. Who want to simultaneously improve the quality of their AI Systems and decrease their negative impact. Who want practical, state-of-the-art tools and guidance for what they can do to ensure that their people, processes, technologies, and lines of business are safer from AI Risk.
We believe that the sustainable stewardship of Artificial Intelligence is all of our responsibility. Case in point: its poor stewardship has already become everyone's problem. As of this writing, AI systems have already exacerbated our intrinsic biases, behaved inexplicably, and been used to undermine human rights; and they pose a growing and poorly documented threat to our ecology.
We're working on this, today, because we are increasingly convinced of two things:
1.) Failure to address this problem with the gravity it deserves will fundamentally compromise the human condition. Therefore we ought to act.
2.) The window for doing this right is not only vanishingly small; it's already upon us. Therefore we ought to act, now.
We see three driving factors for why we need to act now:
One) The global response to COVID-19 accelerated all technological timelines.
Companies that had previously planned to backburner AI investment until 2040 were suddenly confronted with unprecedented pressure to modernize and adopt AI Systems, and to do so with dizzying speed. Companies were compelled to undergo not only the Digital Transformation, but the AI Transformation as well. The Fourth Industrial Revolution, writ large, was set into motion. Often, these AI Systems were implemented with little regard for the externalities they generate. In this sense, AI is here, it's not leaving, and many implementations were cobbled together during a crisis.
Two) Technocapital change is accelerating, and it strongly favors the widespread deployment of AI Systems. The Schumpeterian force of Creative Destruction isn't necessarily goal-oriented in what it wants to creatively destroy, per se. But man, does it like destroying things and replacing them with automation. This isn't an indictment of AI, technocapital society, change, or capitalism writ large. It's simply helpful to identify the underlying forces at play in why we're not turning back from AI: it's what Capital and Technology want. In this sense, we ought to expect more AI and better AI, deployed at an increasing pace.
Three) Recent advancements in large language transformer models are about to do two things. They're about to commoditize (systems don't like remaining proprietary for long), and they're about to disrupt nearly every symbolic-analyst occupation. Can they do the job you do 100% as well as you, 100% of the time? No, not today. But in 2024, will they be able to do your job 75% as well as you, 75% of the time? Likely. And they'll do it 24/7, for a SaaS license fee of under $1k/month. It will be hard for many employers to justify keeping their current human labor force. Expect to confront this before the end of the Biden Administration.
To us, the confluence of these factors says that we need to move immediately, or we're going to be in serious trouble.
Moving immediately means developing foundational memes and engaging software, and getting both into the hands of the teams that can have the greatest impact. It means building partnerships and solidarity within the greater AI Responsibility community. It means speaking truth to power, and enabling power to be a part of the solution and the change.
It will be a process defined by optimism, bravery, and a commitment to our virtues. It will ultimately be a process towards a world defined by greater human quality of life, and a new era of intelligence on earth.
Getting this right looks like widespread incentive realignment, analogous to what the climate crisis demands. It will look like firms taking proactive steps towards making decisions that favor sustainable development over asymmetrical value extraction. That favor stakeholders over shareholders.
Getting this right also means doing it on time, within the 2021-2025 window. Getting this right is not a nice-to-have. It's not a consumer watchdog play. It's not a "could be a problem" issue. It's the second-largest looming threat to organized human life.
In this sense, we have one shot to get this right.
We're the Artificial Intelligence Responsibility Laboratory. We're here to help.
-The AIRL Team, 2021