Because of the massive resources required, only a few companies will be able to pioneer the training of AI models, Sam Altman said at a hearing in the US Senate in Washington on Tuesday. Those companies, he said, would have to be under strict supervision.
Altman’s OpenAI triggered the current AI hype with the text machine ChatGPT and with software that can generate images from text descriptions. ChatGPT formulates text by estimating, word by word, the most likely continuation of a sentence. One consequence of this procedure is that the software produces not only accurate information but also entirely fabricated information, with no way for the user to tell the difference. This has raised fears that such systems could be used, for example, to produce and spread misinformation. Altman voiced this concern at the hearing as well.
Government agency to test AI models
Altman proposed creating a new government agency that could put AI models to the test. Artificial intelligence systems would have to pass a series of safety tests, for example on whether they could spread autonomously. Companies that fail to comply with prescribed standards should have their licenses revoked. The AI systems should also be open to review by independent experts.
Altman acknowledged that AI technology could eliminate some jobs through automation in the future. At the same time, he argued, it has the potential to create “much better jobs”.
During the hearing before a Senate subcommittee, Altman did not rule out the possibility that OpenAI programs could one day be offered with advertising instead of, as is currently the case, by subscription.