🌐 Global Roundup - July 15, 2024

A biweekly roundup of curated stories and opinions about ethical tech and innovation.

Sarah Bitamazire, author of the Sweden-based Lumiera Loop newsletter, helps companies implement responsible AI. She lays out her take on the key issues here, as part of TechCrunch's spotlight on women in AI. TechCrunch | Women in AI Making a Difference | Lumiera Loop Newsletter

The first edition of the Global Responsible AI Index is out. Released on June 13, 2024, it is full of insights, analysis, and recommendations for policymakers, industry leaders, and civil society on how to collaborate in shaping AI's future. It covers 19 thematic areas across three dimensions: Human Rights and AI, Responsible AI Governance, and Responsible AI Capacities. Lumiera summarizes the Index here. Lumiera

UC Berkeley Professor Hany Farid, a media forensics expert, launches GetReal Labs. Aimed at threats ranging from AI-generated social media profiles to voice-scam targeting to a photoshopped image of a CEO in handcuffs, GetReal offers a suite of forensic solutions for multimedia content as well as real-time detection of malicious deepfake voice and video. Learn more. PR Newswire | GetReal Labs

The European Commission sends preliminary findings to Meta over its “Pay or Consent” model, which the Commission views as a breach of the Digital Markets Act. In its preliminary view, Meta's binary choice fails to provide users with a less personalized but equivalent version of Meta's social networks. The Commission will conclude its investigation within 12 months of the opening of proceedings on March 25, 2024. The linked press release contains a downloadable PDF of the Digital Markets Act for your reference. European Commission

A group of over 30 civil society organizations, including the leading European Consumer Organization, has cast doubt on the independence of the national authorities tasked with enforcing the AI Act, calling on the European Commission for clarification in an open letter sent in June. “If the General Purpose AI Codes of Practice drafting process is not multi-stakeholder, including civil society, academics and independent experts, this could mean an industry-led process; essentially Big Tech writing their own rules,” one person from civil society, who declined to be named given the evolving situation, told Euractiv. Euractiv

Palestinians living abroad have accused Microsoft of closing their email accounts without warning, cutting them off from crucial online services. Microsoft says they violated its terms of service - a claim they dispute. Read more. BBC

Imagine a board meeting where an AI ‘robo-director’ offers insights alongside its human counterparts. While it sounds like science fiction, this article claims it's starting to happen now. These AI models aim to deliver unbiased input by examining vast datasets, although their legal status and liability remain uncertain. Governance Intelligence

Fast Company claims that "The Wild West of online political advertising needs transparency tools ahead of the election." And with wars raging in multiple global hot spots and platforms like X and Facebook struggling to monitor and report on political ads effectively, the author states, the risks of electoral interference and voter manipulation are significant. Fast Company

Ina Fried at Axios provides context around Microsoft giving up its observer seat on OpenAI's board, including what it means for Apple executive Phil Schiller, who had reportedly been expected to join as another observer. Axios

Federal officials step down from the Coalition for Health AI's board. The resignations come shortly after CHAI received pushback from Republican lawmakers concerned about the FDA’s participation in the group. The legislators sent a letter to the agency last month arguing the partnership could create conflicts of interest in the government, noting that tech giants like Microsoft and Google, as well as health systems that use AI, also take part in the coalition. Health Care Dive

An impassioned opinion piece by journalist Kate Graham-Shaw in Scientific American argues that "We Cannot Cede Control of Weapons to Artificial Intelligence." Explore her deeply human-centered analysis in her own words. Scientific American

Republicans vow to repeal President Biden's executive order on AI. Nevertheless, according to a poll conducted in late June by the AI Policy Institute (AIPI), 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.” Meanwhile, NetChoice, which counts Google, Meta, and Amazon among its members, has joined a number of think tanks and tech lobbyists in railing against the executive order since its introduction, arguing it could stifle innovation. Explore both sides of the argument here. Time | Executive Order on AI | 2024 GOP Platform, Annotated by CNN

Microsoft and the United Arab Emirates are getting caught in the crossfire of the U.S.-China tech war. American spy agencies have warned U.S. officials that connections between the Emirates' leading AI firm, G42, and large Chinese companies like Huawei could make it a conduit for the Chinese government to access sensitive information, according to a recent investigation by The New York Times. Microsoft could not get support for the deal from the Biden administration until G42 agreed to sever its Chinese business links. Rest of World | New York Times

Contributor: Sudeshna Mukherjee
