New research from Google's DeepMind unit has found that artificial intelligence systems can outperform human fact-checkers when evaluating the accuracy of information produced by large language models.
The paper, titled "Long-form factuality in large language models" and published on the preprint server arXiv, introduces a technique called Search-Augmented Factuality Evaluator (SAFE). SAFE uses a large language model to break generated text down into individual facts, then uses Google Search results to determine the accuracy of each claim.
"SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results," the authors explained.
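In other words, the pipeline has three stages: split the response into claims, retrieve evidence for each claim, and ask the model for a verdict. As a rough illustration only, the sketch below shows what such a pipeline could look like in Python. The `call_llm` and `google_search` helpers are hypothetical stand-ins for a model API and a search API, and the single query-per-fact loop is a simplification; this is not DeepMind's actual implementation, which is open-sourced (see below).

```python
# Illustrative sketch of a SAFE-style pipeline (not DeepMind's actual code).
# call_llm and google_search are hypothetical stand-ins for real APIs.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model API."""
    raise NotImplementedError


def google_search(query: str, num_results: int = 3) -> list[str]:
    """Hypothetical search call; replace with a real search API."""
    raise NotImplementedError


@dataclass
class FactVerdict:
    fact: str
    supported: bool
    evidence: list[str]


def split_into_facts(response: str) -> list[str]:
    # Step 1: have the LLM decompose the long-form response into
    # self-contained, individually checkable claims.
    prompt = (
        "Break the following text into individual, self-contained "
        "factual claims, one per line:\n\n" + response
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]


def check_fact(fact: str) -> FactVerdict:
    # Step 2: have the LLM write a search query for this claim,
    # then retrieve evidence.
    query = call_llm(f"Write a Google search query to verify: {fact}")
    evidence = google_search(query)
    # Step 3: have the LLM reason over the results and issue a verdict.
    verdict = call_llm(
        "Claim: " + fact + "\nSearch results:\n" + "\n".join(evidence)
        + "\nIs the claim supported by the results? "
        "Answer SUPPORTED or NOT_SUPPORTED."
    )
    return FactVerdict(fact, verdict.strip().startswith("SUPPORTED"), evidence)


def rate_response(response: str) -> list[FactVerdict]:
    return [check_fact(f) for f in split_into_facts(response)]
```

Per the authors' description, the real system repeats the query-and-reason loop over multiple steps per fact; the sketch collapses that into a single round for brevity.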
'Superhuman' performance sparks debate
The researchers compared SAFE to human annotators on a dataset of roughly 16,000 facts and found that SAFE's ratings matched the human ratings 72% of the time. More remarkably, in a sample of 100 disagreements between SAFE and the human raters, SAFE's judgment was found to be correct in 76% of cases.
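To see why the researchers read these figures favorably, a back-of-envelope calculation helps. The snippet below rests on a strong assumption the paper itself does not make, namely that ratings on which SAFE and the humans agree are correct; it is purely illustrative.

```python
# Back-of-envelope reading of the reported figures (illustrative only).
# Strong assumption, NOT made by the paper: ratings on which SAFE and
# the human annotators agree are treated as correct.

agreement = 0.72   # SAFE matched human ratings 72% of the time
safe_wins = 0.76   # SAFE judged correct in 76% of sampled disagreements

# Implied overall accuracy of SAFE under that assumption:
implied_safe_accuracy = agreement + (1 - agreement) * safe_wins
print(f"Implied SAFE accuracy: {implied_safe_accuracy:.1%}")  # ~93.3%
```

Extrapolations like this are exactly what the criticism below targets: the human side of the comparison is crowdworkers, so an implied gap says little about how SAFE would fare against expert fact-checkers.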
Although the paper claims that "LLM agents can achieve superhuman rating performance," some experts have questioned what "superhuman" actually means here.
Gary Marcus, a prominent AI researcher and frequent critic of overhyped claims, suggested on Twitter that "superhuman" in this case may simply mean "better than an underpaid crowd worker, rather than a true human fact checker."
"This characterization is therefore misleading," he said. "That's like saying chess software in 1985 was superhuman."
Marcus raises a valid point. To truly demonstrate superhuman performance, SAFE would need to be benchmarked against expert human fact-checkers, not just crowdworkers. The specifics of the human raters, such as their qualifications, compensation, and fact-checking process, are critical to properly contextualizing the results.
Cost reduction and benchmarking of top models
One clear advantage of SAFE is cost: the researchers found that using the AI system was about 20 times cheaper than employing human fact-checkers. As the volume of information generated by language models continues to explode, an economical and scalable way to verify claims will become increasingly important.
The DeepMind team used SAFE to evaluate the factual accuracy of 13 top language models across four families (Gemini, GPT, Claude, and PaLM-2) on a new benchmark called LongFact. The results suggest that larger models generally commit fewer factual errors.
However, even the best-performing models generated a significant number of false claims, underscoring the risk of relying too heavily on language models that can fluently express inaccurate information. Automated fact-checking tools like SAFE could play an important role in mitigating those risks.
Transparency and human standards matter
The SAFE code and the LongFact dataset have been open-sourced on GitHub, enabling other researchers to review and build on the work. However, more transparency is still needed around the human baselines used in the study: understanding the crowdworkers' backgrounds and fact-checking processes is essential to assessing SAFE's capabilities in the proper context.
As tech giants race to develop ever more powerful language models for applications ranging from search to virtual assistants, the ability to automatically fact-check the output of these systems could prove crucial. Tools like SAFE are an important step toward building new layers of trust and accountability.
However, it is important that the development of such consequential technology happens in the open, with input from a broad range of stakeholders beyond the walls of any single company. Rigorous, transparent benchmarking against human experts, not just crowdworkers, will be essential to measure true progress. Only then can we gauge the real-world impact of automated fact-checking on the fight against misinformation.