📣 Risk Takers and Price Payers: Agency and Accountability in AI

ChatGPT’s debut ignited debate over AI’s impact, weighing the rewards of business adoption against concerns about accountability. Ethical guidelines are crucial for responsible AI use.


On November 30, 2022, the quietly simmering cauldron of interest in Artificial Intelligence (AI) exploded with the release of ChatGPT by OpenAI. While some consider advancement in generative AI a great boon to civilization, others perceive a turning point towards destruction. Businesses are leaping to reap the rewards in cost savings, revenue generation, and fundraising, but deployment of AI comes with significant risks. The question is: who makes the decisions about how to use AI, who reaps the rewards, and who bears the costs?

AI Adoption and its Implications
The first thing I asked ChatGPT to generate was a poem, which it did with demoralizing speed. It spat out multiple stanzas immediately, having been “trained on vast amounts of data from the internet written by humans,”¹ including reams of award-winning published poetry. We could quibble about the literary value of the result, but by any measure it offered superb value on a variable-cost basis. And AI models don’t just produce text; there are models for images, audio, video, robotics, research, analytics, and automation. It appears that any digital output a human can create could be produced more cheaply and quickly with AI assistance, and AI is extending into real-world activities such as self-driving cars, scientific testing, and manufacturing.

Since forever we have used technology to replace human effort, and since the 19th century we have been improving it at a breathless rate to reduce the cost of labor in almost every field. What feels different about this new era of generative AI is that it can automate tasks we didn’t think could be automated, including the output of highly skilled workers and creative artists.

The use of AI in existing companies requires staff who understand how it works and what could go wrong when it is deployed. Unfortunately, most companies don’t have those people on staff. AI jobs as a share of all job postings quadrupled from 2014 to 2022, yet still amounted to only 2.1%². According to LinkedIn, conversations about AI increased 70% in the year ending November 2023³, as thousands of companies doubled down on a skill that very few people possess. AI researcher positions can command shockingly high salaries, with some postings offering $900,000 per year⁴.

Technologists are quick to point out that an improperly trained and tested AI system can be far worse than no AI system at all, which explains why many AI-enabled products have been pulled from the market soon after launch.

Ethical Considerations on Accountability
Cautionary tales abound. In a well-publicized federal case in May 2023, an attorney for the plaintiff submitted an affidavit stating that they had used ChatGPT for research and that the bot had made up cases, provided bogus sources, and falsely assured them of their accuracy and reliability⁵. Scientific American reported on a study published in the journal Science that found a healthcare risk-prediction algorithm, meant to help insurance companies allocate access to high-risk care, was using an easy-to-obtain metric of prior spending rather than a more accurate metric of need⁶. That shortcut made the algorithm biased, because “even when Black and White patients spent the same amount, they did not have the same level of need.”
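To make that mechanism concrete, here is a minimal illustrative sketch in Python, using entirely hypothetical patients and numbers (not the study’s actual model or data), of how ranking by a spending proxy can hide unequal need:

```python
# Purely illustrative: hypothetical patients and numbers, not the study's model.
# Two patients with identical prior spending but different underlying need.
patients = [
    {"id": "patient_a", "prior_spend": 5000, "true_need": 3},
    {"id": "patient_b", "prior_spend": 5000, "true_need": 8},  # higher need,
    # equal spend (e.g., because of unequal access to care)
]

# A proxy-based risk score that considers only spending...
def risk_score(patient):
    return patient["prior_spend"]

# ...assigns both patients the same score, so the higher-need patient
# gets no extra priority for the high-risk care program.
for p in sorted(patients, key=risk_score, reverse=True):
    print(p["id"], "score:", risk_score(p), "true need:", p["true_need"])
```

The point of the sketch is only that a score built on spending cannot distinguish these two patients, which is precisely the gap the study identified.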

In 2021, online real estate firm Zillow used AI-generated estimates of home values to “iBuy” houses it judged undervalued, intending to resell them quickly at a higher price. This AI innovation let CEO Rich Barton compete for a slice of the nearly one billion dollars then being raised in the online real estate funding market. When it was discovered, soon after launch, that the error rate on those estimated valuations was significantly higher than expected, the result was a cash hemorrhage. Barton, who liked to say that a big goal or “BHAG”⁷ doesn’t have to succeed to be considered a success⁸, laid off the 2,000 employees who had done what they were asked to do, despite the difficulty of that ask. While Barton presented himself as a big thinker willing to take big risks, the actual cost of failure was paid by employees who found themselves applying for unemployment during a pandemic. Refusing to take responsibility for his own failure of leadership, Barton blamed the debacle on an unpredictable housing market with unstable pricing, a shameless shunting of responsibility belied by the absence of similar failures at other iBuying startups at the time.

Automation versus Safety
In October 2023, the AI-enabled driverless taxi service Cruise, a GM subsidiary, suspended operations due to “safety concerns,” a euphemism covering multiple traffic problems and a crash in which a pedestrian, already struck by another car, was hit again by a Cruise vehicle and dragged 20 feet as it pulled over to the side of the road. Cruise management, in a cold-blooded attempt at risk-shifting, insisted that the party at fault was the human driver in the first collision⁹.

It’s probable that we will eventually have automated taxis, and that they will be safer than human-driven cars. The technology is not inherently unsafe, but the Cruise episode shows that humans, particularly CEOs, are so eager to move fast and capture rewards that they ignore or minimize risks while simultaneously attempting to transfer those risks, even the risk of death, to others. When we consider how many systems are run by software, from elevators to HVAC systems to security protocols to airplanes to electrical grids to battlefields, the potential cost of an AI-enabled software mistake grows every day.

From 2019 to September 2023, equity investment in generative AI grew by a factor of ten through a combination of more deals and higher price tags¹⁰, setting off a feeding frenzy. In 2024, big tech firms continue to lay off thousands of traditionally skilled workers while spending heavily on faster chips and AI/ML-experienced staff. The bubble is growing, and we can look forward to the inevitable pop. It is all systems go for those promising rewards, while those who try to focus on real and present risks are chided for lacking imagination and big-picture vision.

The Specter of Sentience: Imagined Risks vs. Real-world Harms
Much attention has been paid to the amorphous dread professed by Sam Altman, Elon Musk, and others over the possibility of systems becoming sentient and outwitting humans, as in Kubrick’s 2001: A Space Odyssey. These salacious pseudo-risks are headline-grabbing and possible, but also avoidable: we should not allow AI to control sensitive systems without “off” switches, firewalls, and back-door controls. Perhaps overblown, hyperbolic fears like these are to be expected from people who don’t have to worry about ordinary risks like the painfully high cost of housing, medical care, and food. Tech billionaires can spend vast amounts of time and money obsessing over unlikely doomsday scenarios derived from science fiction while competing to build their own escape hatches, “billionaire bunkers”¹¹ and housing on Mars. It is telling that Elon Musk abandoned OpenAI and the dangers he perceived there in order to create his own AI company carrying the same inherent risks. His lips speak of concern while his actions show a determination to move forward, as long as he can capture the rewards on his own terms.

AI systems may become sentient, and those systems may one day outwit their human creators, but in the meantime millions of real people will be affected by actual failures. The potential uncontrollability of AI is a red herring that distracts from the clear and present need to address real problems: ordinary people swept up in AI-driven upheaval because their livelihoods are threatened, or because they must deal with companies wringing out the benefits of AI without taking responsibility for harms done. These more humdrum harms are rife in the collection, review, and labeling of training data, and in the talent churn that follows as companies jump on the AI bandwagon and neglect their core technology.

Balancing Innovation with Ethics
There is no bright line between responsible and irresponsible use of technology. What is troubling is that we put that technology in the hands of people who celebrate speculation and creative destruction as long as they are not the ones who will pay the costs. We find ourselves beholden to business leaders vying for funding supremacy and using the technology to chase the greatest possible rewards in the shortest possible time.

Responsible leaders seek the same prizes in more cautious ways, but they are overshadowed by the aggressive and self-aggrandizing, who capture the attention, funding, and adulation. We have seen this before, many times, including in the dramatic dotcom and housing bubbles. We don’t need to wait through every stage of the bubble, from hype to peak, through the burst and into the subsequent crash, to know who is receiving the rewards and who is absorbing the costs. We can ensure, through a combination of regulation, the establishment of norms, and peer pressure, that those in power who make the decisions and reap the rewards are not able to push the costs onto others.

Edited by: Lee Howard


[1]: OpenAI. https://help.openai.com/en/articles/6783457-what-is-chatgpt. Accessed 01/05/2024.

[2]: Our World in Data, from Lightcast via the AI Index Report (2023). https://ourworldindata.org/grapher/share-artificial-intelligence-job-postings. Accessed 01/06/2024.

[3]: LinkedIn. https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/PDF/future-of-work-report-ai-november-2023.pdf. Accessed 01/06/2024.

[4]: The Wall Street Journal. https://www.wsj.com/articles/artificial-intelligence-jobs-pay-netflix-walmart-230fc3cb.

[5]: Affidavit filed 05/25/2023. https://www.documentcloud.org/documents/23826751-mata-v-avianca-airlines-affidavit-in-opposition-to-motion.

[6]: Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/.

[7]: Big Hairy Audacious Goal.

[8]: Financial Review. https://www.afr.com/world/north-america/zillow-vowed-to-revolutionise-real-estate-then-it-all-fell-apart-20220510-p5ak0w. Accessed 01/06/2024.

[9]: The New York Times. https://www.nytimes.com/2023/10/24/technology/cruise-driverless-san-francisco-suspended.html. Accessed 01/06/2024.

[10]: CB Insights. https://www.cbinsights.com/research/generative-ai-startups-funding-customer-satisfaction/. Accessed 01/05/2024.

[11]: Business Insider. https://www.businessinsider.com/billionaire-bunker-openai-sam-altman-joked-ai-apocalypse-2023-10. Accessed 01/06/2024.
