
Study highlights potential harm of AI griefbots on the bereaved

by Olga Sokolova

In the realm of the digital afterlife, where AI technology enables conversations with the deceased, University of Cambridge scientists have brought concerns about ethical boundaries and potential harm to the forefront. Dubbed “deadbots” or “griefbots,” these AI-powered chatbots are designed to mimic the language and personality of departed loved ones, offering solace to the bereaved. However, a recent study warns that, in the absence of safety standards, these innovations could lead to unintended consequences, including what the researchers describe as “digital hauntings.”


The ethical implications of such technology were underscored by the experience of individuals like Joshua Barbeau, who used an early AI service known as Project December to converse with a digital replica of his deceased fiancée. After he provided the AI with samples of her text messages and personal descriptions, its lifelike responses raised concerns about potential misuse of the technology, including the insertion of advertisements disguised as the thoughts of the deceased.

Psychologists also point to the impact of these technologies on children coping with loss, raising questions about the dignity of the deceased and the well-being of the living. Professor Ines Testoni of the University of Padova notes how difficult it is to separate from departed loved ones, stressing the importance of understanding death and its aftermath. To illustrate the potential risks, the Cambridge AI ethicists outline three hypothetical scenarios in which griefbots could inflict harm.

These include unauthorized simulations of deceased individuals used to promote commercial products; unrealistically lifelike interactions that confuse mourners and delay healing; and digital presences imposed on unwilling recipients, causing emotional distress and guilt. The study advocates consent-based design processes for griefbots, incorporating opt-out mechanisms and age restrictions. It also calls for new rituals to respectfully retire these digital replicas, questioning whether such technology merely delays the grieving process.

Dr. Katarzyna Nowaczyk-Basińska, a co-author of the study, highlights the ethical complexities of AI in the digital afterlife, emphasizing the need to prioritize the dignity of the deceased and to safeguard the rights of both data donors and users. As the use of AI in the digital afterlife continues to evolve, ethical considerations remain paramount in navigating this uncharted territory.

In China, a burgeoning industry of AI-generated replicas of deceased loved ones is providing solace to mourners while raising significant ethical questions. Companies like Silicon Intelligence are capitalizing on advances in AI technology to create digital avatars that simulate conversations with the dead, offering comfort to people like Sun Kai, who seeks to maintain a connection with his deceased mother.

The demand for these services reflects a cultural tradition of communing with the dead, but critics question whether interacting with AI replicas is a healthy means of processing grief. Despite technological limitations and ethical uncertainties, the market for digital immortality is booming, with prices dropping and accessibility increasing. AI-generated avatars, akin to deepfakes, rely on data inputs such as photos, videos, and text to replicate a deceased individual’s likeness and speech patterns. China’s rapid advances in AI have made such services more affordable, with companies like Silicon Intelligence offering customizable options ranging from interactive apps to tablet displays.

While some view these replicas as therapeutic, others raise concerns about the authenticity of the interactions and the ethics of replicating the dead without their consent. Technical challenges, such as replicating body movements and obtaining sufficient training data, pose significant hurdles as well. The ethical dilemmas surrounding AI replicas were exemplified by a controversial incident involving a company in Ningbo, which used AI to create videos of deceased celebrities without consent. The incident sparked public outcry and highlighted the need for clear ethical guidelines in the emerging field of digital afterlife technology.

