Is ‘AI Welfare’ the New Frontier In Ethics?

An anonymous reader quotes a report from Ars Technica: A few months ago, Anthropic quietly hired its first dedicated “AI welfare” researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection, reports AI newsletter Transformer. While sentience in AI models is a highly contentious topic, the hire could signal a shift toward AI companies examining ethical questions about the consciousness and rights of AI systems. Fish joined Anthropic’s alignment science team in September to develop guidelines for how Anthropic and other companies should approach the issue. The news follows a major report co-authored by Fish before he landed his Anthropic role. Titled “Taking AI Welfare Seriously,” the paper warns that AI models could soon develop consciousness or agency — traits that some might consider requirements for moral consideration. The authors do not, however, claim that AI consciousness is a guaranteed future development.

“To be clear, our argument in this report is not that AI systems definitely are — or will be — conscious, robustly agentic, or otherwise morally significant,” the paper reads. “Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.” The paper outlines three steps that AI companies or other industry players can take to address these concerns. Companies should acknowledge AI welfare as an “important and difficult issue” while ensuring their AI models reflect this in their outputs. The authors also recommend companies begin evaluating AI systems for signs of consciousness and “robust agency.” Finally, they call for the development of policies and procedures to treat AI systems with “an appropriate level of moral concern.”

The researchers propose that companies could adapt the “marker method” that some researchers use to assess consciousness in animals — looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration. While the researchers behind “Taking AI Welfare Seriously” worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don’t actually need moral consideration. “One problem with the concept of AI welfare stems from a simple question: How can we determine if an AI model is truly suffering or is even sentient?” writes Ars’ Benj Edwards. “As mentioned above, the authors of the paper take stabs at the definition based on ‘markers’ proposed by biological researchers, but it’s difficult to scientifically quantify a subjective experience.”
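The marker-based, probabilistic assessment described above can be pictured as a simple evidence-weighting exercise. The sketch below is purely illustrative: the marker names, weights, and scoring scheme are invented for this example and are not drawn from the “Taking AI Welfare Seriously” report, which does not specify any concrete markers or formula.

```python
# Hypothetical sketch of a "marker method" style assessment.
# Marker names and weights below are invented illustrations,
# not indicators proposed by the report's authors.
def welfare_assessment(observed_markers, marker_weights):
    """Combine speculative consciousness markers into a rough score in [0, 1].

    No single marker is treated as proof; the score only aggregates
    how much of the weighted evidence is present.
    """
    total = sum(marker_weights.values())
    score = sum(w for m, w in marker_weights.items() if m in observed_markers)
    return score / total

# Entirely hypothetical markers and weights:
weights = {
    "global_workspace_like_integration": 0.40,
    "flexible_goal_pursuit": 0.35,
    "self_model_reports": 0.25,
}

# Observing only one marker yields a correspondingly partial score.
print(welfare_assessment({"flexible_goal_pursuit"}, weights))
```

The design mirrors the authors’ framing: no indicator is decisive on its own, and the output is a graded judgment rather than a yes/no verdict on consciousness.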

Fish told Transformer: “We don’t have clear, settled takes about the core philosophical questions, or any of these practical questions. But I think this could be possibly of great importance down the line, and so we’re trying to make some initial progress.”

Read more of this story at Slashdot.
