Measuring Determinism in Large Language Models for Software Code Review

Date: 28.02.2025


Eugene Klishevich, Yegor Denisov-Blanch, Simon Obstbaum, Igor Ciobanu, Michal Kosinski

Large Language Models (LLMs) promise to streamline software code reviews, but their ability to produce consistent assessments remains an open question. In this study, we tested four leading LLMs -- GPT-4o mini, GPT-4o, Claude 3.5 Sonnet, and LLaMA 3.2 90B Vision -- on 70 Java commits from both private and public repositories. By setting each model's temperature to zero, clearing the context between requests, and repeating the exact same prompts five times, we measured how consistently each model generated code-review assessments. Our results show that even with temperature minimized, responses varied to different degrees across models. These findings underscore the inherently limited consistency (test-retest reliability) of LLMs, even at temperature zero, and the need for caution when using LLM-generated code reviews to inform real-world decisions.
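A minimal sketch of the measurement protocol described above, assuming the OpenAI Python client: one commit diff is submitted several times with an identical prompt, temperature 0, and a fresh context per request, and the responses are compared pairwise. The model name, prompt wording, and exact-match agreement metric are illustrative assumptions, not the harness used in the study.

```python
# Sketch only: repeat an identical code-review request at temperature 0
# and measure how often the responses agree. Assumes the OpenAI Python
# client; model, prompt, and metric are illustrative, not the study's setup.
from itertools import combinations
from openai import OpenAI

client = OpenAI()

def review_commit(diff_text: str, model: str = "gpt-4o-mini", runs: int = 5) -> list[str]:
    """Send the same code-review prompt `runs` times with identical settings."""
    prompt = f"Review the following Java commit and assess its quality:\n\n{diff_text}"
    outputs = []
    for _ in range(runs):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,  # minimize sampling randomness
            messages=[{"role": "user", "content": prompt}],  # fresh context each call
        )
        outputs.append(resp.choices[0].message.content)
    return outputs

def exact_match_rate(outputs: list[str]) -> float:
    """Fraction of response pairs that are character-identical."""
    pairs = list(combinations(outputs, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Usage: five identical requests for one commit, then check pairwise agreement.
# responses = review_commit(open("commit.diff").read())
# print(f"Pairwise exact-match rate: {exact_match_rate(responses):.2f}")
```

In practice, agreement can also be measured on structured fields (e.g., an extracted numeric rating) rather than raw text, which separates wording variation from variation in the assessment itself.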
