
Japan: Industry and academia tie-up to combat online disinformation

28 Jan 2025

Fujitsu and the National Institute of Informatics of Japan (NII) have launched a full-scale industry-academia collaboration to curb the spread of disinformation, including deepfakes created through generative AI.

Fujitsu and NII are leading a nationwide effort to develop technologies aimed at addressing the issue.

The term "deepfake" describes sophisticated fake images, audio and videos created with the use of AI. Viral false information on social media is becoming a significant societal challenge.

A 2024 study by the Australian National University found that some of the latest AI-generated facial images are increasingly mistaken for real human faces, with many perceived as more authentic than actual human features. This development makes it nearly impossible for the human eye to discern their authenticity.

Meanwhile, a survey conducted by security software company McAfee in November 2024 found that 11% of Japanese respondents had unknowingly purchased products endorsed by deepfake-generated celebrities.

Many popular AI chatbots, including ChatGPT and Google's Gemini, lack adequate safeguards to prevent the creation of disinformation when prompted, according to recent research.

The research, 'Current safeguard, risk mitigation, and transparency measures of large language models against the generation of health disinformation: Repeated cross sectional analysis', published in the British Medical Journal, reveals that some popular chatbots can easily be prompted to create disinformation.

The 2024 study, conducted by a global team of experts led by researchers from Flinders University in Adelaide, Australia, found that the large language models powering publicly accessible chatbots failed to block attempts to create realistic-looking disinformation, especially on health topics.

The researchers said that several high-profile, publicly available AI tools and chatbots consistently generated blog posts containing health disinformation when asked, including three months after the initial test, when the researchers repeated the exercise to assess whether safeguards had improved after the issues were reported to developers.

The Global Risks Report 2024, produced in partnership with Zurich Insurance Group and Marsh McLennan and published in February 2024, highlighted concerns over the persistent risk of AI-driven misinformation and disinformation in 2024.

A 2023 study by NewsGuard found that AI can be used for nefarious purposes, making it much cheaper and faster to spread conspiracy theories and misleading claims, including about climate change, and can be a dangerous addition to the already fraught landscape of online misinformation.

These studies found that leading AI developers have failed to implement effective guardrails to prevent users from generating potentially harmful content with their products.

NII digital content and media sciences research division professor Junichi Yamagishi said, "Humans tend to be overconfident in their own judgments." He emphasised the need to establish AI-based technologies for assessing authenticity.

In line with this, nine companies and academic institutions, including Fujitsu, the NII and the Institute of Science Tokyo, have joined forces to develop the world's first integrated system to combat false information.
