Deepfakes, morphed content made using AI, have emerged as a major problem. In the last few weeks, deepfake videos of several Indian celebrities have been doing the rounds on social media. Google is investing heavily in AI-related technology and is aware of the dangers it can pose. In a blog post, a senior Google executive said that the company is testing for a wide range of safety and security risks, including the rise of new forms of AI-generated, photo-realistic, synthetic audio or video content known as "synthetic media".
Michaela Browning, vice president, government affairs and public policy, Google Asia Pacific, said that there is "no silver bullet" to combat deepfakes and AI-generated misinformation. "It requires a collaborative effort, one that involves open communication, rigorous risk assessment, and proactive mitigation strategies," she said in the blog post. Citing the example of YouTube, Browning said that Google uses a combination of people and machine learning technologies to enforce its Community Guidelines, with reviewers operating around the world. "In our systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is helping to continuously increase both the speed and accuracy of our content moderation systems," she said.
Collaboration is the way forward
In India, Google is working with policymakers, researchers, and experts to develop effective solutions. Google has invested US$1 million in grants to the Indian Institute of Technology, Madras, to establish a first-of-its-kind multidisciplinary center for Responsible AI. "This center will foster collective effort — involving not just researchers, but domain experts, developers, community members, policy makers and more — in getting AI right, and localizing it to the Indian context," she said.
Furthermore, the tech giant is also collaborating with the Indian government on the issue. "A multi-stakeholder discussion aligns with our commitment to addressing this challenge together and ensuring a responsible approach to AI," Browning said. According to Browning, a multi-stakeholder approach, combined with fostering responsible AI development, can ensure that AI is used for more good than bad.