From an exciting innovation to an ominous threat: tech leaders call for a halt in AI development
By: L. Pereira
While recent advances in artificial intelligence have mesmerized the technology world, many of the field's own pioneers have begun expressing deep concern.
Following the release of OpenAI's GPT-4, more than 1,000 technology leaders and researchers, including Elon Musk, urged artificial intelligence labs to halt development of their most advanced systems. In March of this year, they published an open letter stating that AI tools present a "profound risk to society and humanity." The letter claims developers are "locked in an out-of-control race" to build systems that they cannot "understand, predict, or reliably control," underscoring how dangerous the technology has become. Other signatories include Steve Wozniak, co-founder of Apple; Andrew Yang, entrepreneur and 2020 US presidential candidate; and Rachel Bronson, president of the Bulletin of the Atomic Scientists, the organization behind the Doomsday Clock.
The push to develop more powerful chatbots has set off a race within the technology community to determine who the next pioneers will be. But even at this early stage, the developers behind these systems have faced sharp criticism from established and respected figures. Beyond the fear that artificial intelligence could surpass human control and displace jobs, the programs themselves remain faulty and have been criticized for spreading misinformation.
Overall, the open letter called for a pause in the development of AI systems more powerful than GPT-4. The pause, it argues, would provide time to introduce "shared safety protocols" and prevent the programs from causing real harm to humanity. The letter also explicitly states that artificial intelligence should be fully explored only once we are aware of the risks, know how to control them, and understand what allows these systems to "learn."