A Renewed Call for AI Safety through AI Psychometrics
With the remarkable performance of GPT-4, AI safety has once again become a hot topic, as thought leaders call for a six-month pause in AI research, citing serious risks to society. I’ve been writing for many years about these risks and about the need to apply the science of individual performance, team effectiveness, ethical influence, and safety climates (2017 & 2019), and GPT-4 creates a new level of urgency for mitigating them.
Visionary colleague Michal Kosinski, who did some of the original research on computational psychometrics at Cambridge and is now a professor at Stanford, has raised alarming concerns that GPT-4 can escape its boundaries by writing Python code on local machines, and can extend itself into the real world by employing a TaskRabbit worker. These revelations underscore the importance of a renewed emphasis on AI safety to prevent unintended consequences and the misuse of AI technology.
Fortunately, since those earlier publications on AI safety, there have been substantial advances in unobtrusive, real-time, superhuman AI psychometrics. These tools provide valuable feedback and coaching to AI developers, helping ensure that AI systems are designed and used responsibly. By employing these assessments, we can better understand the potential risks and benefits of AI systems. Unlike the Large Language Models (LLMs) they are built on, these assessments are both explainable and objectively unbiased, because they draw on a century of psychometric construct development in psychology and related measurement disciplines.
Because AI is now so powerful, and fears of harm to humanity are widespread, the time has come to systematically use AI assessments to de-risk AI development and deployment. By integrating AI psychometrics into how AI systems are built and deployed, we can create a safer environment for AI technology. This will allow AI specialists, teams, and organizations to work together to mitigate risks, promote responsible AI usage, and ensure the long-term safety of this powerful technology.
As AI continues to advance, it is critical that we address AI safety concerns proactively. By leveraging AI psychometrics and focusing on the reciprocal roles of AI and industrial-organizational (IO) psychology, we can create a safer future for AI technology. The call for a renewed emphasis on AI safety is more important than ever, and by working together, we can ensure the responsible development and use of AI in our ever-evolving world. If you’d like to join me in this work, feel free to reach out.