
Values in Interaction between Humans & Large Language Models

Do Large Language Models have Values? Can they form a coherent value system, similar to the values of individuals? And what values do we learn when interacting with them?


In the Educational Values in Human-LLM Interaction project, funded by Google, we collaborate with Prof. Amir Globerson and Prof. Gal Elidan. We investigate large language models (LLMs), computerized systems such as ChatGPT, Bard, and Gemini. These systems have stormed into our lives with their advanced capabilities. We ask whether LLMs can recreate a complex value system that resembles the value systems humans hold. We further ask whether LLMs can produce the value systems of multiple individuals, resembling a human population.

As many individuals now interact with LLMs often, we ask whether such interactions can influence people's values, leading to value shifts.

The Team

Prof. Ella Daniel

Principal Investigator

Dr. Naama Rozen

Postdoctoral fellow

Collaborators

Prof. Amir Globerson

The Blavatnik School of Computer Science, Tel Aviv University
Google

Prof. Gal Elidan

Department of Statistics, the Hebrew University of Jerusalem
Google

Photo by Ben Wicks on Unsplash

Publications

Rozen, N., Elidan, G., Globerson, A., & Daniel, E. (2023)

Do LLMs have consistent values?

Preprint

News & Events

Presenting at the AI for Sustainability and Education: Annual meeting of the Center for AI and Data Science at Tel Aviv University and Google

July 15, 2024

Presenting at the AI for Sustainability and Education: Annual meeting of the Center for AI and Data Science at Tel Aviv University and Google
