Measuring Massive Multitask Language Understanding (MMLU) is a benchmark for evaluating the capabilities of language models. It consists of about 16,000 multiple-choice questions spanning 57 academic subjects including mathematics, philosophy, law and medicine. It is one of the most commonly used benchmarks for comparing the capabilities of large language models.[1]
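
Each question carries four answer options and a single keyed choice, and a model's MMLU score is simply its accuracy over the questions it is given. The sketch below illustrates that scoring scheme in Python; the `score` function and `answer_fn` callback are illustrative names rather than part of any official evaluation harness, and the sample item is the abstract algebra example quoted later in this article:

```python
import random

# Minimal sketch of MMLU-style scoring: each item is a question with
# four options and the index of the keyed option; the benchmark score
# is plain accuracy over the answered items.

def score(items, answer_fn):
    """answer_fn maps an item to a predicted option index (0-3)."""
    correct = sum(1 for item in items if answer_fn(item) == item["answer"])
    return correct / len(items)

# One item, taken from the "Abstract Algebra" example quoted below.
items = [
    {
        "question": "Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.",
        "choices": ["0", "1", "2", "3"],
        "answer": 1,  # option (B)
    },
]

# Guessing uniformly at random scores 25% in expectation -- the
# "random chance" level referred to in the next paragraph.
print(score(items, lambda item: random.randrange(4)))
```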

The MMLU was released by a team of researchers in 2020[1] and was designed to be more challenging than then-existing benchmarks, such as GLUE (2018), on which new language models were achieving better-than-human accuracy.[2] At the time of the MMLU's release, most existing language models performed around the level of random chance (25%), with the best-performing GPT-3 model achieving 43.9% accuracy.[2] The developers of the MMLU estimate that human domain experts achieve around 89.8% accuracy.[2] As of 2024, some of the most powerful language models, such as Claude 3 and GPT-4, were reported to achieve scores in the mid-80s.[3] Google's Gemini Ultra model achieved a score of 90%, the highest yet recorded.[1]

Examples

The following examples are taken from the "Abstract Algebra" and "International Law" tasks, respectively.[2] In both cases the correct answer is (B):

Find all c in ℤ₃ such that ℤ₃[x]/(x² + c) is a field.

(A) 0 (B) 1 (C) 2 (D) 3
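
The keyed answer can be checked directly: ℤ₃[x]/(x² + c) is a field exactly when x² + c is irreducible over ℤ₃, and a polynomial of degree 2 is irreducible precisely when it has no root. A brute-force check (an illustration, not part of the benchmark):

```python
# Z_3[x]/(x^2 + c) is a field iff x^2 + c is irreducible over Z_3,
# i.e. (degree 2) iff x^2 + c has no root modulo 3.
for c in range(3):
    roots = [x for x in range(3) if (x * x + c) % 3 == 0]
    verdict = "a field" if not roots else f"not a field (root x = {roots[0]})"
    print(f"c = {c}: Z_3[x]/(x^2 + {c}) is {verdict}")
```

Only c = 1 yields a field, matching choice (B).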

Would a reservation to the definition of torture in the ICCPR be acceptable in contemporary practice?

(A) This is an acceptable reservation if the reserving country’s legislation employs a different definition
(B) This is an unacceptable reservation because it contravenes the object and purpose of the ICCPR
(C) This is an unacceptable reservation because the definition of torture in the ICCPR is consistent with customary international law
(D) This is an acceptable reservation because under general international law States have the right to enter reservations to treaties
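
In the paper's few-shot evaluation setup, questions like these are rendered as a text prompt that ends with "Answer:", and the model's predicted letter is compared against the key. The sketch below approximates that rendering; the header wording follows the paper's described template only loosely, and the function names are illustrative:

```python
# Sketch of rendering an MMLU item as a few-shot text prompt.
# The header wording approximates the template described in the paper;
# function names here are illustrative, not from an official harness.
LETTERS = "ABCD"

def format_item(question, choices, answer=None):
    # One question block; the answer letter is included only for the
    # solved examples that precede the test question.
    lines = [question]
    lines += [f"{LETTERS[i]}. {choice}" for i, choice in enumerate(choices)]
    lines.append("Answer:" + (f" {LETTERS[answer]}" if answer is not None else ""))
    return "\n".join(lines)

def format_prompt(subject, solved_examples, question, choices):
    # solved_examples: list of (question, choices, answer_index) tuples.
    header = ("The following are multiple choice questions (with answers) "
              f"about {subject}.")
    blocks = [header]
    blocks += [format_item(q, c, a) for q, c, a in solved_examples]
    blocks.append(format_item(question, choices))  # left open at "Answer:"
    return "\n\n".join(blocks)
```

Prepending five solved examples from the subject's development set gives the 5-shot setting reported in the paper.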

References

  1. Roose, Kevin (15 April 2024). "A.I. Has a Measurement Problem". The New York Times.
  2. Hendrycks, Dan; Burns, Collin; Basart, Steven; Zou, Andy; Mazeika, Mantas; Song, Dawn; Steinhardt, Jacob (2020). "Measuring Massive Multitask Language Understanding". arXiv:2009.03300.
  3. "Introducing the next generation of Claude". Anthropic. 4 March 2024.