Global Initiative for AI Safety: New Proposals Unveiled

A new initiative aimed at establishing international bans on the most dangerous applications of artificial intelligence was presented at the UN General Assembly.

During a session at the UN General Assembly, a group of politicians, scientists, and activists introduced a new initiative called "Global Call for AI Red Lines". The primary objective is to create international prohibitions on the most dangerous applications of artificial intelligence by the end of 2026.

The document has already garnered support from more than 200 people, including notable figures such as former Irish President Mary Robinson and former Colombian President Juan Manuel Santos. The signatories also include leading scientists such as Geoffrey Hinton and Yoshua Bengio, widely regarded as "godfathers" of AI.

Participants are urging governments to agree on fundamental restrictions that would eliminate "categorically unacceptable risks". While the document does not spell out specific rules, it offers examples of potential "red lines", including bans on using AI to launch nuclear weapons, to conduct mass surveillance of citizens, or to create systems that cannot be shut down.

The organizers suggest that any future agreements should be based on three core principles:

  • a clear list of prohibited practices;
  • independent verification and auditing mechanisms;
  • the establishment of an international body to oversee the implementation of agreements.

At the same time, the final determination of boundaries and procedures is left to the discretion of individual states. The initiators recommend convening dedicated summits and working groups to align positions.

The United States has already pledged "not to allow AI to control nuclear weapons", a commitment made during the Biden administration. However, some representatives of the Trump administration have expressed concern that AI companies are not permitting their technologies to be used for domestic surveillance, a stance that could complicate the adoption of the global safety agreements AI experts anticipate.