Google CEO Sundar Pichai Advocates For AI Regulation: "Consequences are a Must"
The CEO of Google, Sundar Pichai, has expressed his belief that artificial intelligence (AI) is one of our most significant technological advancements. In a recent “60 Minutes” interview, Pichai explained that AI is becoming increasingly crucial to Google’s business. The company recently released an AI chatbot called Bard and developed a prototype named “Project Starline” to create a more lifelike video conferencing experience.
Pichai emphasized that there must be consequences for creating deepfake videos that harm society, and he acknowledged the need for government regulation of AI, particularly given the emergence of deepfakes. He suggested that Google’s approach to the technology would be “no different” from how it tackled spam in Gmail: using better algorithms to detect and manage potential risks.
This call for regulation is not new. In March, an open letter signed by tech leaders including Elon Musk and Steve Wozniak called for a six-month pause on AI development to assess potential risks. The letter has since gained over 26,000 signatures.
Pichai’s comments highlight the mixed reactions AI technology is receiving, with some seeing it as a revolutionary and necessary advancement and others expressing concern about its risks and the need for regulation. While AI has shown immense potential across industries, the ethical implications of its use and development remain essential to consider.