Users of the popular conversational AI platform ChatGPT recently discovered that the chatbot halts mid-response, or ends the conversation with an error, when asked about specific names, including David Mayer, Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza. Investigation suggests that these individuals may have requested that their information be restricted or “forgotten” online, and that safeguards honoring those requests trigger the AI’s unusual behavior.
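One widely discussed explanation is a hard-coded denylist that aborts generation whenever a restricted name appears in the output. OpenAI has not published how its guardrail actually works, so the following is only a minimal sketch of that hypothesis; the filter mechanism, the name list, and the error message are all assumptions for illustration.

```python
# Hypothetical sketch of a hard-coded name filter that aborts a streaming
# response. This is NOT OpenAI's implementation; the names and mechanism
# are assumptions based on public reporting of the observed behavior.

BLOCKED_NAMES = {
    "david mayer", "brian hood", "jonathan turley",
    "jonathan zittrain", "david faber", "guido scorza",
}

def generate_response(tokens):
    """Yield tokens one by one, aborting if a blocked name appears
    in the text emitted so far (case-insensitive substring match)."""
    emitted = []
    for token in tokens:
        emitted.append(token)
        text = "".join(emitted).lower()
        if any(name in text for name in BLOCKED_NAMES):
            # Mimics the abrupt halt users observed mid-response.
            raise RuntimeError("I'm unable to produce a response.")
        yield token

# A stream that never mentions a blocked name passes through unchanged:
print("".join(generate_response(["The ", "weather ", "is ", "fine."])))
# → The weather is fine.

# A stream containing a blocked name halts partway through:
try:
    "".join(generate_response(["Tell me about ", "David Mayer", "."]))
except RuntimeError as err:
    print(err)
# → I'm unable to produce a response.
```

A post-hoc filter like this, sitting outside the model itself, would explain why the halt is abrupt and absolute rather than a polite refusal: the model never gets a chance to finish the sentence.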
Forecast for 6 months: As more users discover and probe ChatGPT’s limitations, the platform’s developers may release updates that handle such restrictions more gracefully, reducing instances of the AI freezing or crashing on restricted names.
Forecast for 1 year: The incident may lead to increased scrutiny and regulation of AI models, particularly with regard to their handling of personal data and online presence. This could result in more stringent guidelines and standards for AI development, potentially shaping the future of conversational AI platforms.
Forecast for 5 years: As AI technology advances, we can expect to see more sophisticated and nuanced handling of sensitive information, including the ability to distinguish between public figures and individuals who have requested their information to be restricted. This could lead to a more balanced and respectful approach to online presence and data management.
Forecast for 10 years: The incident may mark a turning point in the development of AI, as researchers and developers prioritize transparency, accountability, and user control in AI decision-making processes. This could usher in a generation of AI systems that are more respectful of individual privacy and better aligned with human values.