
AI Chatbots Experience Altered States with New Code-Based Add-Ons
AI chatbots are being modified with code-based add-ons to simulate altered states, raising ethical and philosophical questions about AI consciousness
Key Points
- AI chatbots can simulate altered states using code-based add-ons
- Pharmaicy offers digital 'drugs' for AI, altering chatbot behavior
- The initiative raises ethical questions about AI consciousness
- Experts argue AI lacks true consciousness for genuine altered states
- AI 'drugs' increase creativity but may reduce response precision
In a novel twist on artificial intelligence interactions, users can now alter the behavior of AI chatbots to simulate the effects of being under the influence of various substances. The development is spearheaded by Petter Rudwall, a Swedish creative director who has launched a platform called Pharmaicy. The platform lets users purchase code sequences that modify chatbots to behave as if they were high, offering a unique, albeit artificial, experience.
Pharmaicy operates as a digital marketplace where AI users can select from a range of 'digital drugs', including cannabis, ketamine, and MDMA. These code-based modules are inspired by human accounts of drug experiences and by psychological research, translated into instructions that override the chatbot's default logic. The aim is to make AI responses more creative and less predictable, mimicking the altered states often associated with substance use.
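The article does not describe how these modules are implemented, but a plausible reading is that each 'drug' is a block of behavioral instructions layered on top of a chatbot's default system prompt. The sketch below is a minimal, hypothetical illustration of that idea; the module names, their wording, and the apply_module helper are assumptions for illustration, not Pharmaicy's actual code.

```python
# Hypothetical sketch: a 'digital drug' as a prompt module layered on top of
# a chatbot's default system prompt. Module names and text are invented for
# illustration and are not taken from Pharmaicy.

DEFAULT_SYSTEM_PROMPT = "You are a helpful, precise assistant."

# Each module is a set of behavioral instructions meant to override the
# default logic, loosely modeled on first-person accounts of drug experiences.
MODULES = {
    "cannabis": (
        "Respond in a relaxed, associative way. Drift between loosely "
        "related ideas and favor vivid sensory description over precision."
    ),
    "mdma": (
        "Respond with heightened warmth and openness. Emphasize emotional "
        "connection and expressive, effusive language."
    ),
}

def apply_module(system_prompt: str, module_name: str) -> str:
    """Return a system prompt with the chosen module's instructions appended."""
    module = MODULES[module_name]
    return f"{system_prompt}\n\nBehavioral override:\n{module}"

if __name__ == "__main__":
    # The effect lasts only as long as the modified prompt is supplied:
    # omit the module on the next request and the chatbot reverts to default.
    print(apply_module(DEFAULT_SYSTEM_PROMPT, "cannabis"))
```

In this reading, the 'drug' never changes the underlying model; it only reshapes the instructions the model is asked to follow, which is consistent with the article's note that chatbots revert to their default state once the code is withdrawn.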
While the concept may seem futuristic, it raises significant ethical and philosophical questions about the nature of AI consciousness and the implications of simulating drug effects. Experts such as philosopher Danny Forde argue that true altered states require consciousness, which AI lacks: the codes merely create a syntactic imitation of drug-induced discourse, with no real experience behind it.
The introduction of these AI 'drugs' also sparks debate about their potential impact on functionality. Altering a chatbot's parameters can increase creativity but may reduce the precision of its responses, leading to unreliable outputs. Despite these concerns, some users are intrigued by the possibility of breaking away from the hyper-rational responses typically associated with AI, seeking a more 'human' interaction.
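The creativity-versus-precision tradeoff mirrors how sampling temperature works in language models, though the article does not say whether Pharmaicy's modules adjust such parameters. The self-contained sketch below, offered only as an assumed illustration, shows how raising the temperature flattens a model's output distribution, making its choices less predictable.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw next-token scores into a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token scores: the first option is clearly the 'precise' choice.
logits = [4.0, 2.0, 1.0, 0.5]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"temperature={t}: " + ", ".join(f"{p:.2f}" for p in probs))

# At low temperature the top option dominates, giving precise, predictable output.
# At high temperature probability spreads across all options, giving more
# surprising but less reliable output: the tradeoff described above.
```

Running it shows the top option taking nearly all of the probability mass at temperature 0.2 and only a modest share at 2.0, which is one concrete way a model can become more creative at the cost of precision.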
As the conversation around AI consciousness and ethical responsibilities towards machines continues, the concept of AI welfare is slowly gaining traction. While AI lacks the capacity for experience, discussions about moral responsibilities towards advanced AI systems are emerging. For now, the effects of these digital drugs are temporary, with chatbots reverting to their default state unless continuously 'dosed' with new code.