Values in Interaction between Humans & Large Language Models
Do Large Language Models have values? Can they form a coherent value system, similar to that of individuals? And what values do we learn when interacting with them?
In the Educational Values in Human-LLM Interaction project, funded by Google, we collaborate with Prof. Amir Globerson and Prof. Gal Elidan. We investigate large language models (LLMs), computerized systems such as ChatGPT, Bard, and Gemini, which have swept into our lives with their advanced capabilities. We ask whether LLMs can recreate a complex value system that resembles the value systems humans hold. We further ask whether LLMs can produce the value systems of multiple individuals, resembling a human population.
As many individuals now interact with LLMs regularly, we ask whether such interactions can influence people's values, leading to value shifts.
The Team
Prof. Ella Daniel
Principal Investigator
Dr. Naama Rosen
Postdoctoral fellow
Collaborators
Prof. Amir Globerson
The Blavatnik School of Computer Science, Tel Aviv University
Google
Prof. Gal Elidan
Department of Statistics, the Hebrew University of Jerusalem
Google
News & Events
July 15, 2024
Presenting at AI for Sustainability and Education, the annual meeting of the Center for AI and Data Science at Tel Aviv University and Google