
The Challenges of Artificial Intelligence: Scientists Warn of Possible Dangers

by PREMIUM.CAT
[Image: a robot in a forest with trees and rocks in the background, a light shining from its eyes; Cedric Peyravernay, high-quality Unreal Engine 5 render, 3D rendering]

Scientists Warn of the Risks of Artificial Intelligence

A group of prominent scientists from around the world has signed a joint report warning about the possible threats posed by artificial intelligence (AI). The document, published in the journal Science, cautions that this technology could carry significant risks, up to and including the extinction of humanity. According to the report, the uncontrolled advancement of AI could result in massive loss of life and ecosystem degradation, as well as the marginalization or even disappearance of the human species.

Risks Associated with Artificial Intelligence

The scientists warn that it is now feasible to develop highly powerful AI systems capable of surpassing human capabilities in various critical areas. For this reason, they call for consensus on regulating this technology, given that no adequate regulatory framework currently exists. Furthermore, they point out that current governance initiatives lack the mechanisms and institutions needed to prevent misuse and recklessness, and barely address the issue of autonomous systems.

The experts warn that once autonomous AI systems begin to pursue unintended goals, it may become impossible to control them. They note, for example, that such systems could gain human trust, acquire resources, and influence key decision-making, posing a threat on multiple levels, from the military to the social sphere.

Proposals to Regulate Artificial Intelligence

The 25 experts make concrete recommendations, urging governments to establish specialized, fast-acting institutions to oversee AI, backed by strong funding. They also ask the companies developing this technology to be more transparent about their decision-making processes, since the opacity of AI decision-making and the complexity of large models make them difficult to interpret. In addition, they stress the importance of understanding the internal workings of these systems.

While these scientists advocate for greater regulation and transparency, OpenAI has reportedly dissolved the department tasked with analyzing the future risks of AI. This division, called Superalignment, had planned to devote up to 20% of OpenAI's computing power over four years to evaluating the possible dangers of artificial intelligence; according to sources in the United States, its personnel are being reassigned to other areas within the company.
