In a recent development in the world of artificial intelligence (AI), American psychologist Martin Seligman has been replicated as an AI chatbot named "Ask Martin." The creation is part of a growing trend of AI chatbots modeled after real individuals. The virtual Seligman was developed by a team of researchers in Beijing and Wuhan, initially without Seligman's knowledge or permission. They built the chatbot by feeding Seligman's written works into generative AI software, producing a virtual counterpart that can dispense advice and wisdom grounded in his ideas.
Seligman shared the chatbot with friends and family, and the response was overwhelmingly positive. Still, creating AI replicas of living individuals without their consent raises ethical concerns. Both Seligman and Belgian celebrity psychotherapist Esther Perel, who was similarly replicated, ultimately chose to accept the bots rather than challenge their existence. The question of how to shut down such digital replicas remains unanswered, however, since the legality of training AI on copyrighted works is itself unsettled.
The rise of AI-generated digital replicas has prompted some members of Congress to propose legislation regulating their use. The NO FAKES Act, currently circulating in the Senate Judiciary Committee, would require makers of AI-generated replicas to obtain a license from the person being replicated, allowing individuals to authorize and profit from uses of their likeness. Yet even if the bill passed, enforcing its rules on a global scale would remain difficult.
The issue of AI replicas also raises concerns about intellectual property rights and unauthorized use. American lawmakers are particularly worried about how much Big Tech and China stand to gain from the work of American creators. The jurisdictional challenges posed by AI's global reach, however, make these concerns difficult to address effectively.
Furthermore, the use of AI replicas in China raises additional concerns due to the government's pervasive electronic surveillance policies. Chinese citizens using the virtual Seligman may unwittingly share their thoughts with authorities, potentially leading to repercussions in a country where false diagnoses of mental illness have been used to target dissidents.
As AI-generated replicas become more prevalent, policymakers are facing the urgent task of establishing rules and regulations surrounding their use. Seligman, who has experienced the co-opting of his work in the past, believes that the virtual Seligman will ultimately do more good than harm. He sees it as a way for his ideas to continue benefiting people long after he is gone, describing it as a form of scientific immortality.