Experts call for a halt to AI advancements.

In recent years, we have seen several technological advances such as GPT-4, the language model behind the new version of ChatGPT from OpenAI. Powered by artificial intelligence (AI), it learns quickly and answers a myriad of questions with a surprising level of quality. But while these capabilities are undeniably useful in many ways, advances in AI raise serious concerns about the future of civilization.

In an open letter published on the website futureoflife.org, hundreds of experts propose a six-month suspension of research on systems more powerful than GPT-4, the model from the creator of ChatGPT, and warn about the risk that such technological advances may pose to humanity. The same group also calls for the creation of regulatory bodies and for accountability for damage caused by AI.

According to the experts, advanced AI could represent a profound shift in the history of life on Earth and should be planned for and managed with proportionate care and resources. That level of planning and management is not happening; instead, recent months have seen AI labs locked in a headlong rush to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict, or control.

In the letter, the group raises several questions such as:

  • To what extent should we allow machines to flood our information channels with propaganda and falsehoods? 
  • Should we automate all jobs, including the enjoyable ones? 
  • Should we develop non-human minds that might eventually outnumber us, outsmart us, make us obsolete, and replace us? Should we risk losing control of our civilization?

READ ALSO: "Marketing through Artificial Intelligence"

READ ALSO: "10 examples of AI's presence in everyday life"

Experts urge that powerful AI systems should be developed only when there is confidence that their effects will be positive and their risks manageable. In a recent statement, OpenAI itself said that at some point it may be important to obtain an independent review before starting to train future systems, and, for the most advanced efforts, to agree to limit the rate of growth of the computing power used to create new models.

Therefore, the experts request in the letter that all AI laboratories immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and should include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. They also ask that AI labs and independent experts use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

Should laboratories really put a pause on AI advancements?

SEE ALSO: "WhatsApp feature blocks conversations with a password"

SEE ALSO: “Chatbot and ChatGPT: How are they related?”
