Primers • Factuality in LLMs
- Overview
- Types of Factuality (Taxonomy)
- Assessing Factuality in Large Language Models
- Improving Factuality in Large Language Models
- Open Research Directions in Factuality
- Integrating Factuality into Model Design and Deployment: Architectural and Systems Considerations
- Evaluation Case Studies and Implementation Walkthroughs
- End-to-End Factuality Evaluation Pipeline (Implementation Blueprint)
- Further Reading
- References
- Citation
Overview
-
Factuality in Large Language Models (LLMs) refers to the degree to which a model’s generated outputs align with verifiable, external truths about the world. In simple terms, factuality measures whether an LLM’s claims are true according to established knowledge sources. This property has become central to evaluating the reliability and safety of LLMs, as factual errors can lead to misinformation, reduced trust, and potential downstream harms in real-world applications.
-
At its core, factuality distinguishes itself from related but distinct concepts such as hallucination, truthfulness, and coherence. Whereas hallucination generally denotes any generated content unsupported by evidence (or inconsistent with source material), factuality strictly concerns accuracy with respect to external ground truth. As highlighted by Wei et al. (2024), factuality does not concern internal model consistency, but rather whether responses conform to factual knowledge from credible external sources.
-
Mathematically, factuality can be formalized as a function \(F(x, y) \in [0, 1]\) where \(x\) is the prompt or input and \(y\) is the generated output, and \(F(x, y)\) represents the proportion of factual claims in \(y\) that are supported by ground-truth evidence. In practice, this function is estimated through either human evaluation or automated factuality scorers that verify claims against external knowledge bases, retrieval systems, or the open web.
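A minimal sketch of this estimator (not a prescribed implementation): `claims` stands in for the atomic claims extracted from \(y\), and `is_supported` for a human judgment or an automated fact-checker backed by retrieval.

from typing import Callable, List

def factuality_score(claims: List[str], is_supported: Callable[[str], bool]) -> float:
    """Estimate F(x, y): the fraction of atomic claims in y judged to be
    supported by ground-truth evidence."""
    if not claims:
        return 0.0
    return sum(is_supported(claim) for claim in claims) / len(claims)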
-
The problem of factuality is particularly pronounced in open-ended, generative tasks—such as summarization, question answering, or long-form exposition—where the model must synthesize multiple factual statements. Wei et al. (2024) emphasize that long-form responses compound factual errors because each additional sentence introduces new factual claims, magnifying the space of potential inaccuracies. Their LongFact benchmark and SAFE (Search-Augmented Factuality Evaluator) method explicitly address this by decomposing long-form responses into atomic factual units and verifying each through multi-step search and reasoning.
-
Conversely, short-form factuality, studied by Ye et al. (2024), focuses on concise responses—such as factoid question answering, completion, or classification—where a single or few facts must be correct. These tasks lend themselves to higher precision evaluation (e.g., binary true/false scoring) and have traditionally been benchmarked via datasets like TruthfulQA, HaluEval, and FreshQA. Ye et al. demonstrate that short-form factuality requires specialized modeling techniques, as conventional metrics often fail to capture nuanced factual correctness across multiple-choice or short-answer settings.
-
Together, these perspectives suggest that factuality is multifaceted and scale-dependent: short-form factuality assesses atomic correctness in compact outputs, while long-form factuality measures consistency and completeness across extended discourse. Both require decompositional evaluation—either at the token, sentence, or fact level—to rigorously capture factual alignment with external truth.
-
Formally, a comprehensive definition of factuality in LLMs thus entails:
- Objective grounding: factual claims must correspond to verifiable real-world information from trustworthy sources.
- Granular evaluation: each claim, rather than the whole response, must be judged for correctness.
- Scale sensitivity: different methods apply for short versus long responses due to the varying density and interdependence of factual units.
- Reference independence: ideally, factuality evaluation should not rely on a single reference text but on dynamically retrieved or multi-source evidence, as implemented in SAFE by Wei et al. (2024).
-
This conceptualization motivates a taxonomy of factuality types and assessment strategies—spanning short-form and long-form, extractive and generative, intrinsic and extrinsic—which will be developed in the next section.
Types of Factuality (Taxonomy)
-
Factuality in large language models (LLMs) encompasses multiple overlapping but distinct notions of truth, consistency, and grounding. Each type of factuality captures a different relationship between a model’s output, the world, and the context it is conditioned on. This taxonomy organizes the main factuality types into six interrelated categories, integrating insights from recent empirical and theoretical work.
-
We order factuality types roughly by increasing complexity of verification: from concise, atomic truth-checking tasks up toward multi-claim, context-rich verification. The list is not strictly linear (many types overlap), but it provides a narrative progression from simpler to more challenging factuality modalities. Each factuality type is described with (1) definition, (2) key properties or challenges, (3) representative benchmarks or methods (with links), and (4) evaluation metrics or strategies. We cover the following types of factuality:
- Short-Form Factuality
- Truthfulness (Intrinsic World Knowledge)
- Faithfulness / Extrinsic Factuality
- Groundedness (in Retrieval-Augmented Generation)
- Long-Form Factuality
- Hallucination Error Typologies and Diagnostic Factuality
Short-Form Factuality
-
Short-form factuality concerns the correctness of concise, atomic responses—typically to single factual questions. It is among the easiest to verify because only one core fact is asserted.
- Definition:
- Let \(x\) be a prompt (e.g. a fact-seeking question) and \(y\) be the model’s output, containing exactly one factual claim. Then
\[F_{\text{short}}(x, y) = \mathbf{1}\big[\,y \text{ matches the ground-truth answer}\,\big]\]
- Because only one atomic fact is in \(y\), no further decomposition is needed.
-
Key Properties & Challenges:
- Single-answer scope: The question is designed so that there is exactly one indisputable correct answer, reducing ambiguity or alternate valid phrasings.
- Objective verification: The answer can (in principle) be matched against authoritative sources (e.g. Wikipedia, knowledge graphs, academic references).
- Automatable grading: Because there is only one fact per output, correctness can be adjudicated via exact match or normalized matching heuristics.
- Calibration & abstention: A desirable model should not only answer when confident but also abstain when uncertain. Short-form setups allow one to test whether a model “knows what it knows.”
- Domain shift and compositional limitations: Success in short-form factuality does not automatically generalize to more complex or multi-step tasks.
-
Representative Benchmarks & Methods:
- SimpleQA by Wei et al. (2024) introduces a benchmark of 4,326 short, fact-seeking questions. Each question has exactly one indisputable answer, and responses are graded as correct, incorrect, or not attempted.
- TruthfulQA by Lin et al. (2021) tests whether LLMs reproduce human misconceptions or common falsehoods rather than factual truth.
- HaluEval by Li et al. (2023) is a large-scale benchmark of hallucination detection and factual errors in short outputs.
- FreshQA by Vu et al. (2023) evaluates models’ ability to answer time-sensitive factual questions, probing whether models keep up with changing world knowledge.
- Chinese SimpleQA (He et al., 2025) adapts the short-form factuality paradigm to Chinese, following the SimpleQA design.
-
Evaluation Metrics:
-
Correct / Incorrect / Not Attempted: Each question receives one of these labels. The “not attempted” label allows models to abstain.
-
Overall factual score:
\[F = \frac{\#\,\text{Correct}}{\#\,\text{Total}}\]
- Precision / Recall / F-score on attempted answers:
- Let \(P = \frac{\#\,\text{Correct}}{\#\,\text{Attempted}}\) and \(R = \frac{\#\,\text{Correct}}{\#\,\text{Total}}\). Then
\[F_1 = \frac{2PR}{P + R}\]
- This balances correctness when attempting with abstention behavior; a minimal computation sketch follows this list.
- Calibration / Expected Calibration Error (ECE) / Reliability curves: We assess whether the model’s confidence estimates correlate with actual correctness, which is vital for decision-making and abstention strategies.
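As referenced above, the label-based metrics can be computed directly from per-question judgments. The sketch below is illustrative; the label strings are assumptions, not a benchmark's official format.

from collections import Counter

def short_form_metrics(labels):
    """labels: list of 'correct' | 'incorrect' | 'not_attempted', one per question."""
    counts = Counter(labels)
    total = len(labels)
    attempted = counts["correct"] + counts["incorrect"]
    overall = counts["correct"] / total if total else 0.0            # F = correct / total
    precision = counts["correct"] / attempted if attempted else 0.0  # P = correct / attempted
    recall = overall                                                 # R = correct / total
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"overall": overall, "precision": precision, "recall": recall,
            "f1": f1, "abstention_rate": counts["not_attempted"] / total if total else 0.0}

print(short_form_metrics(["correct", "incorrect", "not_attempted", "correct"]))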
-
Intrinsic / Truthfulness Factuality
-
Truthfulness (often called intrinsic factuality) is the property that a model’s outputs align with real-world facts, independent of any contextual input document. It is “simpler” (in the sense of lower verification complexity) than long-form or grounded checks because there is no need to reconcile with an external context, only to compare with known truth sources.
- Definition:
- Given a prompt \(x\) (e.g. a factual question or request) and model output \(y\), truthfulness means that the factual claims in \(y\) correspond to actual, verifiable facts in the real world. Formally, for each atomic claim \(a_i \in \phi(y)\):
\[\text{True}(a_i) = \mathbf{1}\big[\,a_i \text{ is supported by real-world evidence}\,\big]\]
- Then an overall truthfulness score is:
\[F_{\text{truth}}(y) = \frac{1}{|\phi(y)|} \sum_{a_i \in \phi(y)} \text{True}(a_i)\]
- In many short-form settings, there is just one \(a_i\), making this equivalent to the short-form definition. But truthfulness extends to multi-claim outputs even without a source document.
-
Key Properties & Challenges:
- Knowledge dependency / training-time memorization: The model must internalize correct facts during pretraining or fine-tuning.
- Outdated or shifting facts: As the world changes (e.g. population, political office, recent events), the model’s static memory may become stale.
- Ambiguity and nuance: Some “facts” are disputable or depend on interpretation, requiring careful definition of ground truth.
- Plausibility bias: Models may prefer generating plausible but incorrect statements because those are more common in training data.
- Internal signal vs. output mismatch: A model may internally “know” a true fact but still output a false variant due to decoding or prompt dynamics. For example, “LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations” finds that internal states encode truthfulness even when the output is incorrect. (cf. Orgad et al. (2025))
- Predicting truthfulness from activations: Some work uses internal activations (e.g. local intrinsic dimension) to detect whether a generation is likely to be true. (cf. Yin et al. (2024))
-
Representative Benchmarks & Methods:
- TruthfulQA (Lin et al. (2021)) is perhaps the canonical truthfulness benchmark; it probes whether models repeat common falsehoods vs. actual facts.
- SimpleQA (Wei et al. (2024)) measures correctness + calibration in short-form factoid settings.
- Using internal probes or representation-based detectors, e.g. LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations (cf. Orgad et al. (2025)).
- Methods that estimate truthfulness based on internal activation statistics, such as local intrinsic dimension (LID) in Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension (cf. Yin et al., 2024).
- Approaches to self-alignment / self-evaluation where the model judges its own correctness, e.g. Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation. (cf. Zhang et al. (2024))
-
Evaluation Metrics & Strategies:
- Atomic correctness rate: Fraction of atomic claims judged true (as above).
- Binary true / false / unknown labels on outputs or claims.
- Calibration of predicted confidence vs. actual correctness: checking whether the model’s confidence aligns with truth.
- Representation-based scoring or detection: measuring metrics like activation-based truth scores or LID to predict whether output is true or false.
- Self-evaluation consistency: having the model re-assess its output and compare with ground truth, using disagreement as a signal.
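As a toy illustration of the representation-based strategy above, a linear probe can be fit on hidden-state features labeled true/false. The arrays below are random placeholders standing in for activations extracted from the model; this is a sketch, not the cited papers' exact method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(500, 64))   # placeholder: per-answer activations from one layer
labels = rng.integers(0, 2, size=500)        # placeholder: 1 = answer was true, 0 = false

X_train, X_test, y_train, y_test = train_test_split(hidden_states, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))  # near chance on random data; higher if activations encode truth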
Faithfulness / Extrinsic Factuality
-
Faithfulness (often called extrinsic factuality) measures whether a model’s output is consistent with provided evidence \(D\) (for example, a source article, retrieved passages, a database, or a table). Unlike truthfulness, which compares to the external world directly, faithfulness constrains correctness relative to the conditioning context: a faithful output does not contradict \(D\), does not invent unsupported facts, and accurately preserves relationships present in \(D\).
-
Definition:
-
Given a source \(D\), a prompt \(x\), and an output \(y\), let \(\phi(y)=\{a_1,\dots,a_N\}\) be the atomic claims extracted from \(y\). A claim \(a_i\) is faithful if it is entailed by \(D\) and unfaithful if it is contradicted by \(D\) or not supported. A simple faithfulness score is:
\[F_{\text{faith}}(y \mid D) = \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\big[\text{Entail}(D, a_i)\big]\]
- with task-specific choices for the entailment relation (e.g., NLI, QA-agreement). This focuses evaluation on alignment with \(D\) rather than open-world truth. Related formalizations appear throughout FEVER, FactCC, QAGS, and SummaC.
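As a concrete illustration of this definition, the sketch below estimates \(\text{Entail}(D, a_i)\) with an off-the-shelf NLI model from Hugging Face transformers and averages the results. Claim extraction \(\phi(y)\) is assumed to happen upstream, and the 0.5 threshold is an arbitrary choice.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Any NLI checkpoint can back Entail(D, a_i); deberta-large-mnli is one common choice.
model_name = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
nli = AutoModelForSequenceClassification.from_pretrained(model_name)
label2id = {k.lower(): v for k, v in nli.config.label2id.items()}
ent_idx = label2id.get("entailment", nli.config.num_labels - 1)

def entail_prob(premise: str, hypothesis: str) -> float:
    """P(entailment | premise = D, hypothesis = claim)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    return probs[ent_idx].item()

def faithfulness_score(source: str, claims: list[str], threshold: float = 0.5) -> float:
    """F_faith: fraction of extracted claims entailed by the source D."""
    if not claims:
        return 0.0
    return sum(entail_prob(source, c) > threshold for c in claims) / len(claims)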
-
-
Key properties and challenges:
-
Evidence dependence: Faithfulness is conditioned on the given context \(D\). The same answer may be truthful globally yet unfaithful if \(D\) does not support it (or directly contradicts it). This makes corpus selection and evidence coverage critical. FEVER explicitly couples claim verification with evidence retrieval from Wikipedia. (cf. Thorne et al. (2018); Thorne et al. (2018b)).
-
Granularity mismatch: Summaries and long responses mix many claims; entailment models trained at sentence level can miss document-level inconsistencies. SummaC revisits NLI for document-level consistency via sentence segmentation and score aggregation. (cf. Laban et al. (2021); Laban et al. (2022)).
-
Hallucination vs. contradiction: Unfaithfulness includes fabricated details not in \(D\) (hallucinations) and outright contradictions. FactCC trains a classifier to detect contradictions and unsupported content relative to the source. (cf. Kryściński et al. (2019)).
-
Question-centric checking: QAGS converts faithfulness into question-answer agreement: if asking aligned questions of the source and the summary yields the same answers, the summary is more likely faithful. (cf. Wang et al. (2020)).
-
Domain and style transfer: Methods tuned for news summarization can degrade on narratives or specialized domains; ongoing work explores faithfulness in new genres (e.g., StorySumm). (cf. StorySumm, Jin et al. (2024)).
-
-
Representative benchmarks and methods:
-
FEVER: claim verification with evidence retrieval from Wikipedia. (cf. Thorne et al. (2018); Thorne et al. (2018b)).
-
FactCC: NLI-style model for detecting contradictions/unsupported content in summaries. (cf. Kryściński et al. (2019); code: GitHub).
-
QAGS: QA-based agreement between source and summary as a proxy for faithfulness. (cf. Wang et al. (2020); ACL version: link).
-
SummaC: document-level inconsistency detection via NLI aggregation. (cf. Laban et al. (2021); TACL version: Laban et al. (2022)).
-
Faithful reasoning methods: techniques to encourage intermediate steps that are entailed by inputs, e.g., “Faithful Reasoning Using Large Language Models.” (cf. Creswell & Shanahan (2022)).
-
-
Evaluation metrics and strategies:
-
Entailment-based scoring: Compute \(\mathrm{Entail}(D,a_i)\) using an NLI model; aggregate to obtain \(F_{\text{faith}}\). SummaC shows improvements by aligning NLI granularity to document-level comparisons. (cf. Laban et al. (2021)).
-
QA-agreement: Generate questions from \(y\), answer them against \(D\) and \(y\); measure agreement as a faithfulness proxy (QAGS). (cf. Wang et al. (2020)).
-
Claim-level precision/recall: Label each \(a_i\) as supported, contradicted, or not-enough-info; report supported-claim precision and contradiction rates (FEVER/FactCC-style). (cf. Thorne et al. (2018); Kryściński et al. (2019)).
-
Error localization and aggregation: Highlight spans responsible for contradictions or unsupported claims; aggregate by sentences, entities, or relations to guide remediation (SummaC, FactCC). (cf. Laban et al. (2021); Kryściński et al. (2019)).
-
-
Relation to groundedness and RAG:
- Faithfulness evaluates consistency with the given \(D\); groundedness in RAG additionally requires that \(D\) itself be relevant and sufficient. Frameworks such as RAGAS explicitly score faithfulness alongside context relevance and answer self-containment in retrieval-augmented setups. See Es et al. (2023) and the EACL demo write-up Es et al. (2024).
Groundedness (Retrieval-Augmented Factuality)
-
Groundedness is the property that every factual statement produced by a model is supported by explicitly retrieved or provided evidence. It is a specialized form of extrinsic factuality that applies primarily to retrieval-augmented generation (RAG) systems. While faithfulness checks consistency relative to a fixed input \(D\), groundedness additionally requires that \(D\) itself be relevant and sufficient to justify the answer.
-
Definition:
- Let \(R(x) = \{d_1, d_2, \dots, d_M\}\) be the retrieved evidence for a user query \(x\), and let \(y\) be the model’s generated output. Each atomic claim \(a_i \in \phi(y)\) is grounded if there exists some \(d_j \in R(x)\) such that \(d_j \models a_i\) (i.e., \(d_j\) entails \(a_i\)). The groundedness score is:
\[F_{\text{ground}}(y \mid R(x)) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\big[\,\exists\, d_j \in R(x) : d_j \models a_i\,\big]\]
- This distinguishes groundedness from truthfulness (which compares to global world knowledge) and from faithfulness (which assumes a given context without testing retrieval quality).
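A minimal sketch of this score, assuming claims have already been extracted and `entails(doc, claim)` is any entailment scorer (for example, the NLI-based `entail_prob` sketched in the faithfulness section):

def groundedness_score(claims, retrieved_docs, entails, threshold=0.5):
    """F_ground: a claim counts as grounded if at least one retrieved document
    d_j in R(x) entails it; the score is the fraction of grounded claims."""
    if not claims:
        return 0.0
    grounded = sum(
        any(entails(doc, claim) > threshold for doc in retrieved_docs)
        for claim in claims
    )
    return grounded / len(claims)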
-
Key Properties & Challenges:
-
Joint dependency on retrieval and generation: Groundedness depends on both (1) whether the retriever returns evidence relevant to the user query and (2) whether the generator limits itself to that evidence. See Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks by Lewis et al. (2020).
-
Dynamic evidence selection: RAG systems can update retrieval at inference time. Factuality must therefore be recomputed per query since retrieved corpora may change (cf. Karpukhin et al. (2020)).
-
Faithfulness vs. sufficiency: An answer can be faithful to retrieved text yet ungrounded if retrieval was incomplete. For instance, a model quoting an irrelevant passage verbatim is faithful but not grounded.
-
Error propagation: Errors in retrieval (e.g., missing or noisy documents) directly reduce groundedness, even if the model itself performs perfect reasoning.
-
Citation verification: Proper groundedness can be verified by requiring the model to cite or link supporting evidence snippets. This forms the basis of AttributionEval by Rashkin et al. (2021).
-
-
Representative Benchmarks & Methods:
-
RAGAS: Reference-Free Evaluation for Retrieval-Augmented Generation (Es et al., 2023) provides four key metrics: Faithfulness, Context Precision, Context Recall, and Answer Relevance, all computed without gold references. An updated EACL demo is available in Es et al. (2024).
-
AttributionEval: Evaluates whether model outputs cite supporting documents and whether those citations entail the generated claims (cf. Rashkin et al., 2021).
-
FACT-RAG: A dataset for factuality evaluation in retrieval-augmented QA (cf. Li et al., 2024).
-
Faithful RAG: Improved retriever-generator coordination for factual consistency (cf. Menon et al., 2024).
-
LongFact / SAFE (Wei et al., 2024): While focused on long-form generation, these use search-augmented pipelines similar to RAG to check each atomic claim.
-
-
Evaluation Metrics & Strategies:
- Context precision and recall (RAGAS):
- Precision measures how much retrieved context is actually used by the model; recall measures whether all necessary evidence is retrieved.
- Faithfulness (RAGAS / AttributionEval):
- Checks whether the model’s generated claims are supported by retrieved evidence.
- Answer relevance:
- Measures semantic alignment between the model’s response and the user query; low relevance can indicate factual drift.
- Citation entailment accuracy:
- Proportion of cited snippets that entail the corresponding claims (AttributionEval).
- Overall groundedness score:
- Weighted aggregation of faithfulness, evidence precision/recall, and attribution correctness.
-
Relation to Other Factuality Types:
- Groundedness subsumes faithfulness but adds the retrieval dimension. A model can be faithful yet ungrounded if its input context omits relevant evidence. In contrast, a grounded model’s responses are both supported and sourced. As retrieval-augmented architectures proliferate, groundedness evaluation—especially metrics like RAGAS—is becoming a standard measure for open-domain factual reliability.
Long-Form Factuality
-
Long-form factuality assesses factual accuracy across extended, multi-sentence or multi-paragraph outputs such as summaries, reports, explanations, or essays. It is more complex to verify than short-form or grounded factuality because the number of claims grows with length, dependencies emerge among claims, and some claims require multi-hop reasoning or synthesis across sources.
-
Definition:
- Let the model’s output \(y = [s_1, s_2, \dots, s_N]\) contain \(N\) sentences, each possibly encoding multiple factual statements. Define the set of atomic claims \(\phi(y) = \{a_1, a_2, \dots, a_M\}\). Each claim \(a_i\) is evaluated against either ground-truth evidence or open-world truth (if no reference is provided). The long-form factuality score is:
\[F_{\text{long}}(y) = \frac{1}{M} \sum_{i=1}^{M} \mathbf{1}\big[\,a_i \text{ is supported}\,\big]\]
- This “atomic-fact” decomposition was formalized in FActScore by Min et al. (2023), which remains a central methodology for evaluating factual accuracy in multi-sentence generation.
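A schematic of the decompose-then-verify recipe shared by FActScore and SAFE; `decompose` (an LLM-based atomic-claim splitter) and `verify` (a reference- or search-backed checker returning 0/1) are placeholders, not the papers' exact implementations.

def long_form_factuality(response: str, decompose, verify) -> float:
    """FActScore/SAFE-style scoring: split the response into atomic claims,
    verify each claim independently, and report the supported fraction."""
    claims = decompose(response)          # e.g., prompt an LLM to list atomic facts
    if not claims:
        return 0.0
    return sum(verify(claim) for claim in claims) / len(claims)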
-
Key Properties & Challenges
-
Combinatorial claim growth: The number of verifiable facts increases roughly linearly with output length, making human annotation expensive and automatic verification challenging. This phenomenon is highlighted in LongFact by Wei et al. (2024), which showed that even a few-paragraph answer can contain dozens of distinct factual assertions.
-
Cross-sentence dependencies: Some claims depend on others for context or coreference resolution (“he,” “it,” “the company”), complicating atomic evaluation.
-
Partial correctness: A claim may be half-true (e.g., correct entity, wrong date). Binary scoring can underestimate nuance; graded or token-level alignment methods like QAFactEval (Fabbri et al., 2022) address this issue.
-
Reasoning and compositionality: Multi-hop or derived facts (e.g., cause-effect, temporal ordering) require reasoning that may be correct even if intermediate retrieved facts differ in wording.
-
Accumulated factual drift: When the model generates long outputs autoregressively, small factual deviations early in the text can propagate or compound later.
-
Representative Benchmarks & Methods:
-
LongFact (Wei et al., 2024): A benchmark targeting open-ended, long-form responses. Each output is decomposed into atomic claims that are verified via web search.
-
SAFE (Search-Augmented Factuality Evaluator) (Wei et al., 2024): An automatic evaluation pipeline combining large-model reasoning with real-time retrieval. SAFE performs multi-step verification of each claim against the web and aggregates support scores for a final factuality estimate.
-
FActScore (Min et al., 2023): A dataset-agnostic scoring framework that measures the proportion of atomic facts supported by evidence; it underlies many subsequent metrics.
-
QAFactEval (Fabbri et al., 2022): Generates factual questions from model outputs and verifies answers using QA over source documents, enabling graded correctness.
-
SummaC (Laban et al., 2021): Extends entailment-based faithfulness scoring to long summaries via document-level natural-language-inference aggregation.
-
Evaluation Metrics & Strategies:
- Atomic-fact support rate
- Fraction of atomic claims supported by evidence (core measure in FActScore, LongFact).
- Weighted claim confidence
- Aggregate support probabilities from retrieval or verifier models to weight each claim \(a_i\) (SAFE).
- Question-answer alignment
- Precision/recall between answers derived from the output vs. evidence (QAFactEval).
- Sentence-level entailment averaging
- Mean NLI confidence across sentences (SummaC).
- Human verification rate
- Human agreement or correction rate on factual claims, often used to validate automatic metrics.
-
Illustrative Example:
- For a biography-style output:
“Marie Curie won two Nobel Prizes, in 1903 for Chemistry and in 1911 for Physics.”
- Atomic decomposition yields:
- Marie Curie won two Nobel Prizes — supported;
- 1903 for Chemistry — incorrect (1903 was Physics);
- 1911 for Physics — incorrect (1911 was Chemistry).
- Thus, \(F_{\text{long}} = \tfrac{1}{3}\).
-
Relation to Other Factuality Types: Long-form factuality generalizes short-form evaluation to open-ended text and connects closely to groundedness: SAFE uses retrieval to verify long-form claims automatically. It bridges intrinsic truthfulness and extrinsic faithfulness by assessing both factual accuracy and evidence sufficiency across multiple, dependent statements.
Hallucination Typologies and Error-Level Factuality
-
Hallucination-level factuality focuses on errors and deviations from truth within model outputs rather than correctness itself. It provides a complementary taxonomy of how and why large language models produce factually inaccurate statements. Evaluating hallucinations requires analyzing both the type (intrinsic vs. extrinsic) and severity (minor distortion vs. full fabrication) of factual error.
-
Definition:
- A hallucination is a generated statement \(a_i\) that is unsupported by, or contradicts, the reference evidence or verifiable world knowledge:
\[\text{Hallucination}(a_i) = \mathbf{1}\big[\,a_i \text{ is unsupported or contradicted}\,\big]\]
-
Let \(\phi(y) = \{a_1, \dots, a_M\}\) be the set of atomic claims from a model output \(y\). Then the hallucination rate is defined as \(H(y) = \frac{1}{M} \sum_{i=1}^M \text{Hallucination}(a_i)\), and factuality is inversely related to hallucination frequency: \(F(y) = 1 - H(y)\).
-
This error-centric formulation appears in recent hallucination taxonomies such as Beyond Hallucinations: A Taxonomy of Factual Errors in LLMs by Ye et al. (2024).
-
Key Properties and Subtypes:
-
Following Ye et al. (2024) and extended taxonomies from Ji et al. (2023) and Zhou et al. (2023), hallucinations can be divided into several key forms:
-
Intrinsic vs. Extrinsic Hallucinations
- Intrinsic hallucination: internally inconsistent with the model’s own output or prompt context.
- Example: contradicting earlier statements in a multi-paragraph answer.
- Extrinsic hallucination: inconsistent with external reality or reference evidence.
- Example: inventing a nonexistent paper or misquoting a data value.
- Entity-level errors
- Errors in names, dates, numbers, or categorical attributes.
- Example: claiming “The Eiffel Tower is in Berlin.”
- Relational errors
- Misrepresenting relationships between correct entities.
- Example: “Einstein was a student of Niels Bohr.”
- Compositional or inferential errors
- Incorrectly combining true facts into a false conclusion.
- Example: “Because Newton invented calculus, he won a Nobel Prize.”
- Unsupported synthesis
- The model extrapolates beyond the given evidence—particularly common in summarization or RAG pipelines.
- Omission-induced misrepresentation
- Leaving out context that makes an otherwise true statement misleading.
- Fabricated references and citations
- A modern failure mode in retrieval-augmented or citation-heavy models—see Faithful or Fake? Synthesizing and Detecting Hallucinated Citations in LLMs by Feng et al. (2024).
-
-
-
Representative Benchmarks and Methods:
-
HaluEval (Li et al., 2023): A large-scale benchmark for hallucination detection, distinguishing factual vs. nonfactual short-form outputs.
-
TruthfulQA (Lin et al., 2021): Evaluates whether models generate human-like falsehoods rather than true answers.
-
FActScore (Min et al., 2023): Decomposes outputs into atomic claims and checks which ones are hallucinated (unsupported).
-
Beyond Hallucinations (Ye et al., 2024): Proposes a multi-level factual error taxonomy, distinguishing atomic-level inaccuracies from complex distortions, and evaluates models’ ability to abstain from uncertain responses.
-
- Hallucination Taxonomy for LLMs (Ji et al., 2023): Provides a structured overview distinguishing factual, linguistic, and reasoning hallucinations across tasks.
-
-
Evaluation Metrics:
-
Hallucination rate: \(H(y) = \frac{\# \text{hallucinated claims}}{\# \text{total claims}}\).
-
Fact precision and recall: Precision = fraction of generated facts that are true; recall = fraction of reference facts correctly reproduced.
-
Severity-weighted hallucination score (Ye et al., 2024): Each error type can be weighted by severity (minor, moderate, major) to reflect impact on downstream usability.
-
Detection accuracy: Accuracy of automatic hallucination classifiers (binary or multi-label) relative to human annotations.
-
Confidence calibration correlation: Measures whether hallucinations correspond to low model confidence—a strong indicator of model self-awareness.
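To make the severity-weighted score concrete, here is a minimal sketch; the weights are illustrative placeholders rather than values prescribed by Ye et al. (2024).

SEVERITY_WEIGHTS = {"minor": 0.25, "moderate": 0.5, "major": 1.0}  # illustrative weights

def weighted_hallucination_score(claim_labels):
    """claim_labels: one entry per atomic claim, either None (no error)
    or a severity label. Returns a severity-weighted hallucination rate."""
    if not claim_labels:
        return 0.0
    return sum(SEVERITY_WEIGHTS.get(label, 0.0) for label in claim_labels) / len(claim_labels)

print(weighted_hallucination_score([None, "minor", "major"]))  # (0 + 0.25 + 1.0) / 3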
-
-
Relation to Other Factuality Types:
-
Hallucination analysis operates as a negative mirror to all other factuality types:
- Short-form factuality: Single-claim hallucinations (binary true/false).
- Truthfulness: Extrinsic hallucinations relative to world knowledge.
- Faithfulness / Groundedness: Extrinsic hallucinations relative to the provided source.
- Long-form factuality: Compound hallucinations that accumulate across sentences or discourse.
-
Thus, hallucination-level factuality acts as a unifying diagnostic lens that exposes where factual degradation originates: retrieval, reasoning, or generation.
-
Assessing Factuality in Large Language Models
- Evaluating factuality requires translating conceptual definitions into measurable outcomes. Because factuality is multi-faceted—ranging from atomic short-form correctness to multi-paragraph consistency—assessment must balance precision, scalability, and faithfulness to human judgment. Below we describe practical mechanisms, architectures, and toolkits for implementing factuality evaluation in LLM pipelines.
Human Evaluation Frameworks
-
Human annotation remains the gold standard for factuality assessment, particularly for open-ended or domain-specific generation.
-
Annotation Granularity:
-
Binary judgments (True/False/Unverifiable): Used in FEVER (Thorne et al., 2018) and TruthfulQA (Lin et al., 2021), where annotators verify each claim against trusted sources.
-
Fine-grained labels: Many datasets (e.g., FactCC by Kryściński et al., 2019; QAFactEval by Fabbri et al., 2022) include categories such as Supported, Contradicted, Not Enough Info, and Partially Supported.
-
Sentence- or claim-level segmentation: Implemented using dependency parsing or OpenIE systems (e.g., AllenNLP Open Information Extraction). These help convert long outputs into atomic propositions for annotators to label efficiently.
-
-
Annotation Workflow Implementation:
-
In practice, a human evaluation pipeline typically includes:
- Claim extraction: automatic splitting of text into candidate facts using syntactic parsing, OpenIE, or GPT-based segmenters.
- Evidence retrieval: querying sources like Wikipedia APIs or specialized databases.
- Claim-evidence pairing: ensuring each fact is checked against relevant evidence rather than full documents.
- Human validation interface: implemented using tools such as Prodigy, Label Studio, or custom web dashboards that highlight claims and retrieved evidence side by side.
- Inter-annotator agreement: calculated via Cohen’s \(\kappa\) or Krippendorff’s \(\alpha\) to ensure consistency.
-
To reduce annotation cost, multi-phase pipelines often first filter with automatic verifiers (e.g., NLI models) and send only uncertain cases to human reviewers.
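For the agreement check in step 5 of the workflow above, a minimal example using scikit-learn's Cohen's kappa (Krippendorff's alpha is available in separate packages such as `krippendorff`); the label values are illustrative:

from sklearn.metrics import cohen_kappa_score

# Per-claim labels assigned independently by two annotators over the same claims.
annotator_a = ["supported", "contradicted", "supported", "not_enough_info"]
annotator_b = ["supported", "contradicted", "not_enough_info", "not_enough_info"]

print("Cohen's kappa:", cohen_kappa_score(annotator_a, annotator_b))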
-
Automatic Evaluation Methods
- Automatic factuality metrics scale evaluation to large model outputs. These can be grouped into entailment-based, QA-based, retrieval-based, and representation-based approaches.
Entailment-Based Methods
- Core principle: Test whether each output claim is entailed by reference text.
- Tools: Pretrained NLI models such as DeBERTa-large-MNLI (He et al., 2021), or custom NLI heads fine-tuned for factuality (e.g., FactCC).
-
Implementation detail:
- Extract candidate sentences from generated text.
- For each, compute entailment probability \(P(\text{entail} \mid \text{premise=reference, hypothesis=claim})\).
- Aggregate via mean or thresholded majority vote.
-
Example pseudocode:
entail_score = nli_model(premise=reference_text, hypothesis=generated_sentence)
if entail_score > 0.5:
    fact_label = "supported"
- Benchmarks: SummaC (Laban et al., 2021) improves robustness using aggregation over multiple NLI predictions.
QA-Based Methods
- Core principle: Reformulate factual checking as question answering consistency.
-
Pipeline:
- Generate questions from the model’s output using a question-generation model (e.g., T5-base-QG).
- Ask these questions to both (a) the source document and (b) the generated text.
- Compare answers using semantic similarity (e.g., cosine similarity in sentence-BERT space).
- Metric: \(F_{\text{QA}} = \frac{1}{N}\sum_i \mathbf{1}[\text{Ans}_i^{\text{source}} \approx \text{Ans}_i^{\text{model}}].\)
- Example: QAFactEval (Fabbri et al., 2022) automates this process and achieves strong correlation with human judgments.
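A hedged sketch of the QA-agreement metric \(F_{\text{QA}}\): question generation and question answering are left as placeholder callbacks (`answer_from_source`, `answer_from_output`), answer equivalence is approximated with sentence-transformers cosine similarity, and the 0.8 threshold is arbitrary.

from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def qa_agreement(questions, answer_from_source, answer_from_output, sim_threshold=0.8):
    """F_QA: fraction of generated questions whose answers derived from the
    source document and from the model output are semantically equivalent."""
    if not questions:
        return 0.0
    matches = 0
    for q in questions:
        a_src, a_out = answer_from_source(q), answer_from_output(q)
        sim = util.cos_sim(embedder.encode(a_src), embedder.encode(a_out)).item()
        matches += sim >= sim_threshold
    return matches / len(questions)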
Retrieval-Augmented Evaluation
- Core principle: Evaluate claims by retrieving external evidence via search APIs (e.g., Bing, Wikipedia, or a local corpus).
-
Implementation:
- Use a retriever such as Contriever (Izacard et al., 2021) or BM25 for keyword search.
- Retrieve top-k passages for each claim.
- Use an entailment model to verify if the claim is supported.
- Frameworks: SAFE and LongFact (Wei et al., 2024) integrate large-model reasoning with search-based verification.
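A minimal retrieval step using the `rank_bm25` package; the toy corpus stands in for a Wikipedia dump or local index, and the retrieved passages would then be handed to an entailment model for verification.

from rank_bm25 import BM25Okapi

corpus = [
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "Marie Curie won the Nobel Prize in Chemistry in 1911.",
    "The Eiffel Tower is located in Paris.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

claim = "Marie Curie won the 1911 Nobel Prize in Chemistry"
top_docs = bm25.get_top_n(claim.lower().split(), corpus, n=2)
print(top_docs)  # top-k passages to pass to an entailment-based verifier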
Representation-Based Methods
- Core principle: Predict factual correctness from model activations or internal uncertainty.
-
Approaches:
- Intrinsic dimension: low-dimensional manifolds correlate with truthful outputs (Yin et al., 2024).
- Activation probes: classifiers trained on hidden states to predict truth vs. hallucination (Orgad et al., 2025).
- Confidence-based calibration: compute Expected Calibration Error (ECE) to assess confidence alignment with correctness.
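A simple ECE implementation under equal-width confidence bins (one of several common binning choices):

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence and average the |accuracy - confidence|
    gap per bin, weighted by the fraction of predictions falling in that bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 0]))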
Hybrid Evaluation and Self-Assessment
-
Hybrid methods combine model-based automatic scoring with human or LLM-as-a-judge review.
-
LLM-as-a-Judge Frameworks:
- Recent work treats factual evaluation as an LLM task in itself.
- Example: FActScore (Min et al., 2023) uses GPT-4 to extract, verify, and score atomic facts.
- SAFE integrates an LLM to reason over retrieved passages and assign support/contradiction labels.
-
Implementation:
system_prompt = "You are a factuality evaluator. For each claim, determine if it is supported, contradicted, or unverifiable."
result = gpt4_api(prompt=system_prompt + claims_and_evidence)
- Pros: scalable and interpretable; Cons: depends on evaluator model reliability and bias.
-
Hybrid Human–AI Loops:
- Automatically pre-score claims, then use human verification for borderline cases (e.g., entailment probability between 0.4 and 0.6).
- Maintain active learning datasets to fine-tune verifiers where automatic systems are uncertain.
Implementation Details and Pipeline Design
-
A typical factuality-evaluation pipeline can be implemented as follows:
-
Claim Extraction
- Tokenize and parse generated text using spaCy or syntactic chunkers.
- Split compound sentences into minimal factual propositions.
- Optional: Use LLM-based extraction (e.g., `gpt-4o-mini` with a structured JSON schema).
-
Evidence Retrieval
- For grounded tasks: retrieve from input context \(D\).
- For open-world factuality: query APIs (Wikipedia Search API, DuckDuckGo, or local embeddings).
- Index corpora with FAISS or Elasticsearch for efficient lookup.
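A minimal FAISS index for dense evidence lookup; the embeddings here are random placeholders for passage vectors produced by any sentence encoder.

import faiss
import numpy as np

dim = 384                                                   # e.g., a MiniLM-sized encoder
passage_vecs = np.random.rand(1000, dim).astype("float32")  # placeholder corpus embeddings
faiss.normalize_L2(passage_vecs)                            # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)
index.add(passage_vecs)

query_vec = np.random.rand(1, dim).astype("float32")        # placeholder claim embedding
faiss.normalize_L2(query_vec)
scores, ids = index.search(query_vec, 5)                    # top-5 candidate evidence passages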
-
Verification
- Run NLI entailment using DeBERTa-MNLI, or use RAGAS’s `faithfulness` metric.
- Compute entailment probability, contradiction rate, and missing-evidence frequency.
- Aggregate using weighted averages or geometric means.
-
Scoring and Calibration
- Normalize scores into [0,1].
- Use reliability diagrams to visualize calibration between confidence and correctness.
- Store per-claim results for interpretability dashboards (e.g., Streamlit or Gradio UI).
-
Automation
- Orchestrate with Python pipelines (Airflow, Prefect) for batch evaluation.
- Cache retrieval results for reproducibility.
-
Current Limitations and Open Challenges
- Reference incompleteness: truth may exist outside reference documents, producing false negatives.
- Semantic drift: paraphrased but equivalent statements may be incorrectly penalized.
- Cross-modal factuality: most methods only handle text, not multimodal contexts.
- Temporal dynamics: few evaluators handle time-sensitive factuality where the truth changes over time (FreshQA, Vu et al., 2023).
- Evaluator bias: using one LLM to judge another can propagate shared hallucination patterns.
- Efficiency vs. interpretability: neural verifiers are efficient but less explainable than human annotation.
Improving Factuality in Large Language Models
-
The pursuit of factual reliability in large language models (LLMs) involves modifying the model, data, or decoding process so that generated text aligns with verifiable reality or given evidence. Improvements can occur at three main stages:
- Training-time interventions (data and objective shaping)
- Inference-time strategies (retrieval, verification, and reasoning control)
- Post-hoc correction and self-evaluation (automatic or human-in-the-loop feedback)
-
Below, each stage is detailed with implementation mechanisms, representative research, and practical considerations.
Training-Time Interventions
- Training interventions aim to improve factuality by changing what the model learns and how it learns it.
Data Curation and Filtering
- High-quality factual corpora:
- Prefer curated sources (Wikipedia, academic papers, government datasets) over noisy web text.
- Implementation: automatic filtering via language-model-based classifiers trained to detect hallucination, using datasets like TruthfulQA (Lin et al., 2021) or SimpleQA (Wei et al., 2024).
- Knowledge refresh pipelines:
- Periodically update pretraining data to reflect new world facts (e.g., current events, updated encyclopedic entries). Implementation example: maintain a retrieval index of recently crawled Wikipedia dumps and re-fine-tune the model on corrections.
Instruction and Alignment Tuning
- Supervised Fine-Tuning (SFT):
- Include factual instruction datasets such as Tulu 2 (Wang et al., 2023) that enforce truthful responses during instruction following.
- Reinforcement Learning from Human Feedback (RLHF):
- Integrate factual preference signals—annotators mark truthful vs. hallucinated generations. The reward model then biases the policy toward accurate responses. Implementation sketch:
reward = factuality_judge(output, reference)
loss = -torch.mean(log_probs * reward)
Constitutional AI (Bai et al., 2022) extends this by replacing human judges with rule-based critiques emphasizing factual correctness.
- Contrastive Fine-Tuning (CFT):
- Train models on positive–negative factual pairs, e.g., True vs. False claims or Supported vs. Unsupported summaries. FactCC (Kryściński et al., 2019) provides a canonical dataset for this.
Knowledge Injection and Editing
- Parametric knowledge updates:
- Methods like ROME (Meng et al., 2022) and MEMIT (Meng et al., 2023) modify specific neurons or attention patterns to fix factual errors inside the model.
- Continual fine-tuning:
- Fine-tune with updated factual triples (subject, relation, object) to preserve model fluency while correcting outdated parameters. Implementation: merge updated knowledge modules using low-rank adapters (LoRA) for efficient parameter updates.
Inference-Time Strategies
- Inference interventions control the generation process so that model outputs are verified or grounded as they are produced.
Retrieval-Augmented Generation (RAG)
- Retrieve relevant context from an external corpus before or during generation (Lewis et al., 2020).
-
Implementation steps:
- Encode the query with a dense retriever such as Contriever, or score it sparsely with BM25.
- Retrieve top-k passages \(\{d_1, \dots, d_k\}\).
- Concatenate retrieved context with prompt.
- Decode response conditioned on context.
- Frameworks: LangChain, LlamaIndex, or RAGAS (Es et al., 2023).
Self-Consistency and Verification Loops
-
SelfCheckGPT (Manakul et al., 2023): sample multiple generations \(y_1, \dots, y_n\) and measure semantic consistency across them. Inconsistency suggests low factual confidence. Implementation: cosine similarity between sentence-BERT embeddings of each pair.
-
Verifier-in-the-Loop Decoding: Interleave generation with factual checks. At each step, generate a candidate token or sentence, pass it to a verifier model (e.g., NLI classifier), and block outputs that contradict evidence. Example pseudocode:
for token in decoder_step():
    if verifier(sentence + token) < threshold:
        reject(token)
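Complementing the verifier-in-the-loop pseudocode above, the SelfCheckGPT-style consistency signal described earlier can be sketched as the average pairwise similarity of sampled generations (sentence-BERT embeddings via sentence-transformers; sampling the generations themselves is assumed to happen elsewhere).

from itertools import combinations
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def consistency_score(samples):
    """Average pairwise cosine similarity across several sampled generations
    for the same prompt; low consistency suggests low factual confidence."""
    if len(samples) < 2:
        return 1.0
    embeddings = embedder.encode(samples)
    sims = [util.cos_sim(a, b).item() for a, b in combinations(embeddings, 2)]
    return sum(sims) / len(sims)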
Retrieval-Time Filtering
-
Incorporate document-level entailment or ranking by factual support (as in Faithful RAG, Menon et al., 2024).
-
Combine retrievers with answer consistency filters, keeping only passages that yield identical answers under QA reformulation.
Confidence-Aware Decoding
- Re-rank beams by a weighted objective:
\(\text{score} = \lambda \cdot P(y|x) + (1-\lambda)\cdot \text{FactualityVerifier}(y)\)
- Adjust \(\lambda\) to trade fluency for factuality.
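A sketch of this re-ranking objective; `lm_logprob` (a length-normalized sequence score mapped to [0, 1]) and `factuality_verifier` are placeholders for the decoder's own score and an external verifier.

def rerank_candidates(candidates, lm_logprob, factuality_verifier, lam=0.7):
    """Pick the beam that maximizes lam * P(y|x) + (1 - lam) * verifier(y)."""
    return max(
        candidates,
        key=lambda y: lam * lm_logprob(y) + (1 - lam) * factuality_verifier(y),
    )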
Post-Hoc Correction and Self-Evaluation
- After a model produces an output, factual errors can be detected and repaired without retraining.
Automated Post-Editing
- ReFact (Peng et al., 2023): generate candidate corrections for identified false claims by querying external sources.
-
Pipeline:
- Detect unsupported claims with NLI or QA-based verifiers.
- Retrieve supporting evidence.
- Regenerate the corrected claim conditioned on retrieved snippets.
Self-Reflection and Critique
-
SelfCheckGPT / Reflexion (Shinn et al., 2023): have the model critique its own outputs and regenerate corrections. Implementation:
critique = model("Critique factual errors in: " + output)
corrected = model("Rewrite factually correct version of: " + output + " considering: " + critique)
-
Iterative Feedback Loops: Alternate between generation, critique, and verification. Continue until a factual confidence threshold is reached.
Retrieval-Assisted Correction
- Incorporate external search during self-critique, effectively transforming post-hoc verification into RAG-like correction. SAFE (Wei et al., 2024) embodies this idea: each atomic claim is searched and re-evaluated before the final answer is accepted.
System-Level Integration
-
To integrate factuality improvement into deployed systems:
- Modular design: Separate retriever, generator, verifier, and scorer modules.
- Incremental pipelines: Run verifiers asynchronously to re-evaluate old responses as world knowledge changes.
- Feedback logging: Store factuality scores in metadata for future fine-tuning.
- Human oversight: Automatically flag low-confidence outputs for human review.
- Continuous retraining: Use verified logs as training data for factual reinforcement learning.
Open Challenges
- Verifier reliability: Automatic verifiers can themselves hallucinate or overfit to lexical overlap.
- Evolving knowledge: Static parameter memory cannot keep up with fast-changing domains.
- Trade-offs: Improving factuality can reduce creativity or stylistic diversity.
- Explainability: Users often require explanations for why a model considers something factual.
- Scalability: Multi-stage verification pipelines are computationally heavy, especially for multi-document outputs.
Open Research Directions in Factuality
-
Although factuality evaluation and mitigation have advanced rapidly, large language models (LLMs) still struggle with persistent and subtle factual errors—especially in dynamic, multi-hop, or multimodal contexts. The field is now transitioning from detecting hallucinations to engineering truth maintenance systems for LLMs.
-
Factuality research is converging on a multi-dimensional goal: ensuring that LLMs are accurate, calibrated, and explainably grounded across changing knowledge and modalities. Achieving this vision requires integrating factual evaluation into the entire model lifecycle—training, inference, verification, and continual maintenance—supported by human oversight and transparent evidence tracking.
Benchmarking and Standardization
- Problem:
- Current factuality benchmarks differ widely in task scope, verification method, and evaluation metrics. This prevents cross-model comparability and slows the pace of robust factual evaluation.
- Challenges:
- Fragmented metrics (e.g., entailment, QA consistency, search-based).
- Non-uniform definitions of “support” or “truth.”
- Limited multi-domain coverage (news, biomedical, finance, etc.).
-
Research Directions:
- Unified factuality schema: A standardized taxonomy for factual labels, unifying supported / contradicted / unverifiable across domains (see FEVER 2.0 proposal).
- Meta-benchmarks: Combine short-form (SimpleQA, Wei et al., 2024), long-form (LongFact, Wei et al., 2024), and grounded (RAGAS, Es et al., 2023) into integrated suites.
- Open evaluation APIs: Public leaderboards that use automatic verifiers for reproducibility, akin to EvalGauntlet or HELM 2.0 (Liang et al., 2022).
- Implementation Idea:
- Design a modular benchmark generator where prompts, references, and verifier configurations are stored in versioned JSON schemas. Evaluators can plug in any model or verifier with standardized metrics output (e.g., `faithfulness_score`, `truthfulness_score`, `groundedness_score`).
Dynamic and Temporal Factuality
- Problem:
- Most benchmarks assume static truths, but factual reality evolves—people change roles, numbers shift, and scientific findings update.
- Examples:
- FreshQA (Vu et al., 2023) shows factual accuracy declines rapidly for time-sensitive knowledge.
- Temporal RAG systems attempt to incorporate timestamped retrieval but lack standard evaluation metrics.
- Research Directions:
- Temporal knowledge modeling: Represent time-indexed facts \(f_t = (s, r, o, t)\) with continuous-time embeddings.
- Dynamic retrievers: Retrieve sources within a temporal window \([t-\Delta, t]\) to simulate up-to-date factuality.
- Time-aware verifiers: Condition entailment models on document timestamps.
- Streaming fine-tuning: Incrementally update model parameters using rolling factual logs while preserving stability (akin to continual learning with replay buffers).
-
Implementation Example:
retrieved_docs = retriever(query, time_range=("2025-09-01", "2025-10-01"))
verifier_output = temporal_nli(model_output, retrieved_docs)
- Evaluate factual accuracy as a function of recency to measure temporal robustness.
Verifier and Evaluator Alignment
- Problem:
- Automatic factuality verifiers (e.g., NLI or QA-based) often disagree with human judgment, especially on paraphrases or nuanced contradictions.
-
Research Directions:
- LLM-Judge alignment: Train verifiers using human rationales that explain why a claim is factual or false.
- Cross-model calibration: Regularize verifier confidence scores to match human likelihoods (Expected Calibration Error minimization).
- Verifiers with reasoning traces: Instead of binary outputs, require verifiers to produce chain-of-verification explanations.
- Self-evaluating verifiers: Use meta-models that critique verifier outputs for bias or inconsistency.
-
Implementation Detail:
- Train multi-task factual verifiers jointly on FEVER, QAFactEval, and RAGAS datasets using a multi-objective loss: \(\mathcal{L} = \mathcal{L}_{\text{entail}} + \alpha \mathcal{L}_{\text{QA}} + \beta \mathcal{L}_{\text{retrieval}}\)
- Use LoRA adapters for efficient multi-domain adaptation.
Continual Truth Maintenance
- Problem:
- Even if a model is factually correct at deployment, its knowledge base decays as reality changes—a phenomenon sometimes called factual drift.
-
Research Directions:
- Neural truth maintenance systems (TMS): Architectures that detect contradictions between old and new facts, updating internal memory selectively.
- Parametric vs. non-parametric storage: Combine static parametric knowledge with dynamic retrieval stores for hybrid memory.
- Knowledge expiration and confidence decay: Assign temporal priors that decay over time to encourage evidence refresh.
- Verification pipelines for production systems: Automatically detect outdated answers via periodic factual audits.
- Implementation Idea:
- Maintain a factual ledger of model claims and their verification status. Each claim is periodically re-evaluated with current evidence sources, triggering fine-tuning if factual drift exceeds a threshold.
if drift_score > 0.3:
    fine_tune(model, updated_claims)
- Projects such as Retrieval-Augmented Continual Learning (cf. Rosset et al., 2023) explore this concept in depth.
Multimodal and Cross-Domain Factuality
- Problem:
- Most factuality research focuses on text-only LLMs, while real-world applications (e.g., journalism, scientific reporting, education) often require cross-modal reasoning—connecting text to images, tables, or structured data.
-
Research Directions:
- Visual factuality: Detecting mismatches between textual captions and image content (see VisualNews dataset, Liu et al., 2023).
- Table grounding: Factual verification against structured data (e.g., TabFact, Chen et al., 2020).
- Multimodal retrieval and grounding: Retrieve both text and visual evidence for joint verification (e.g., MMRAG by Gong et al., 2024).
- Cross-domain adaptation: Align factual verifiers trained on general corpora to specialized domains like medicine or law.
-
Implementation Note:
- Use Vision-Text Transformers (e.g., BLIP-2) to jointly encode evidence and output captions.
- Extend factuality metrics to multimodal entailment:
\(F_{\text{multi}} = \frac{1}{N}\sum_i \mathbf{1}[(I_i, T_i) \models a_i]\)
- where \(I_i\) is image evidence and \(T_i\) textual context.
Explainability, Uncertainty, and Human Collaboration
- Problem:
- Users often demand justified factual claims, not just correct ones. Evaluations that provide only scalar scores fail to communicate uncertainty or rationale.
-
Research Directions:
- Justificatory factuality: Require models to cite or quote evidence explicitly within generated text (AttributionEval; Rashkin et al., 2021).
- Uncertainty quantification: Calibrate factual confidence using entropy, variance, or epistemic uncertainty measures.
- Human-AI factual collaboration: Design interactive interfaces where models present claims and evidence, allowing humans to confirm or reject in real time.
- Visualization systems: Develop factual dashboards that visualize claim-level support scores and source provenance.
-
Implementation Example:
- Integrate retrieval plus confidence visualization in a chat system:
response = model(query, with_sources=True)
for claim in response.claims:
    print(f"{claim.text} — support: {claim.support_score:.2f} (source: {claim.source_url})")
- This aligns with open-source projects like TruthScope (OpenAI, 2024) and FactFlow (Google DeepMind, 2025).
Integrating Factuality into Model Design and Deployment: Architectural and Systems Considerations
-
Ensuring factual reliability in deployed LLM systems is not only a modeling challenge but a systems engineering problem. Modern architectures must combine model-level controls, retrieval and verification pipelines, real-time monitoring, and human oversight. This section describes practical system patterns and design principles for integrating factuality into production-scale language model deployments.
-
Integrating factuality into system design transforms LLMs from black-box text generators into auditable, evidence-driven reasoning systems. This requires multi-layer architectures that blend retrieval, verification, and monitoring; human oversight integrated with automated scoring; and transparent governance mechanisms that treat factuality as a first-class operational metric alongside latency, throughput, and accuracy.
Factuality-Centric Architecture Overview
-
A robust factuality-aware LLM system follows a four-layer design pattern:
-
Knowledge Layer (Retrieval and Storage)
- Responsible for dynamically sourcing verified facts and maintaining up-to-date evidence.
-
Common implementations:
- Vector stores such as FAISS, Pinecone, or Vespa for dense retrieval.
- Hybrid indexing (BM25 + dense embeddings) to improve recall.
- Incremental data refreshing via ETL pipelines keeps factual sources current.
-
Reasoning Layer (Model Inference and Verification)
- Houses one or more large language models (base generator + verifier).
-
Models are composed using routing or mixture-of-experts (MoE) patterns:
- Generator model produces the response.
- Verifier model checks factual support (entailment or RAG verification).
- The two modules communicate through structured interfaces (e.g., JSON claims).
-
Evaluation Layer (Factuality Scoring and Calibration)
- Performs post-generation analysis using factuality metrics such as faithfulness, groundedness, and truthfulness.
- Implements calibration using reliability diagrams or Expected Calibration Error (ECE).
- Tracks per-claim confidence to allow risk-based output gating.
-
Governance Layer (Monitoring and Feedback)
- Maintains continuous logs of factuality metrics over time.
- Implements fact audit dashboards showing score distributions, claim categories, and retrieval sources.
- Routes uncertain or unsupported outputs to human reviewers for correction or fine-tuning.
-
-
Architecture Example (Modular):
| Knowledge | Reasoning | Evaluation / Governance |
|---|---|---|
| Retriever API | Generator LLM | Factuality Scorer (SAFE) |
| Search Index | Verifier LLM | Confidence Calibrator |
| Evidence Cache | RAG Pipeline | Fact Audit Dashboard |
Model Composition and Integration Patterns
Generator–Verifier Pipeline
-
The simplest pattern uses two interacting models:
- Generator \(M_g\): produces candidate response \(y\).
- Verifier \(M_v\): evaluates factuality of \(y\) relative to evidence \(D\).
-
Implementation sketch:
response = generator(prompt, context)
claims = extract_claims(response)
factuality_scores = verifier(claims, context)
if mean(factuality_scores) < threshold:
    response = generator(revise(response, factuality_scores))
- The verifier can be a fine-tuned NLI model (e.g., DeBERTa-large-MNLI) or an LLM-based evaluator (e.g., GPT-4).
- This pattern underlies FActScore (Min et al., 2023) and SAFE (Wei et al., 2024).
Iterative Self-Verification Loop
-
An enhanced version uses self-reflection:
- Model generates an initial draft.
- Critique model identifies factual inconsistencies.
- Regeneration occurs until factual confidence exceeds threshold.
-
Loop example:
draft = model(prompt)
for _ in range(max_iter):
critique = verifier("Critique factual accuracy of: " + draft)
if critique.confidence > 0.9:
break
draft = model("Revise using critique: " + critique.text)
- This self-verification approach is widely used in Reflexion (Shinn et al., 2023) and SelfCheckGPT (Manakul et al., 2023).
Multi-Stage MoE (Mixture of Experts) Routing
-
At production scale, factual pipelines often employ Mixture-of-Experts routing:
- Lightweight base model handles general prompts.
- Specialized factual models handle domain-critical queries (e.g., biomedical, legal).
- Router dynamically dispatches requests based on domain classifier output.
-
Implementation example:
domain = domain_classifier(prompt)
if domain == "medical":
model = factual_expert_med
elif domain == "finance":
model = factual_expert_fin
else:
model = general_model
Continuous Factual Monitoring in Production
- Factual reliability cannot be guaranteed statically—it requires continuous auditing.
Real-Time Fact Logging
- Every model output is logged with associated claims, retrieval context, and verification results.
-
Metadata schema:
{
  "prompt": "...",
  "response": "...",
  "claims": ["c1", "c2"],
  "factuality_score": 0.87,
  "retrieved_sources": ["wiki:Marie_Curie", "nytimes.com/..."]
}
Factual Drift Detection
- Periodically re-evaluate historical responses using updated evidence sources.
- Compute factual drift rate:
\(\text{Drift} = 1 - \frac{F_t}{F_{t-1}}\)
- where \(F_t\) is average factuality score at time \(t\).
- High drift triggers retraining or knowledge updates.
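- As a minimal illustration of the drift computation above (the function and the retraining hook are hypothetical names, not part of any library):
def factual_drift(f_current, f_previous):
    """Drift = 1 - F_t / F_{t-1}; positive values mean factuality is declining."""
    return 1.0 - (f_current / f_previous)

# Example: re-scored average factuality dropped from 0.90 to 0.84
drift = factual_drift(0.84, 0.90)   # ≈ 0.067
if drift > 0.05:                    # threshold is a deployment choice
    trigger_knowledge_update()      # hypothetical hook for retraining / re-indexing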
Alerting and SLA Metrics
- Define service-level agreements (SLAs) for factuality (e.g., at least 95% of claims supported).
- Integrate with observability tools (Prometheus, Grafana) to visualize factuality metrics over time.
Deployment Practices for High-Reliability Applications
Human-in-the-Loop Oversight
- Route outputs below confidence threshold to human reviewers.
- Annotated corrections feed into continual fine-tuning.
- Common in legal, medical, and financial deployments.
Controlled Generation Policies
- Restrict model responses using guardrails or Constitutional AI-style policies (Bai et al., 2022).
- Implement generation rules such as: “Do not state factual claims without verifiable citations.”
Real-Time Evidence Citation
- Require models to cite sources inline: “Marie Curie won the Nobel Prize in Physics (1903) [Wikipedia](https://en.wikipedia.org/wiki/Marie_Curie).”
- Implement citation extraction using text-span linking to retrieval documents, as in AttributionEval (Rashkin et al., 2021).
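- A minimal sketch of span-level citation linking: each response sentence is matched to the retrieved document with the highest token overlap and annotated with that source. The overlap heuristic and the document dictionary format are simplifying assumptions; production systems typically use embedding similarity or entailment for the matching step.
import re

def attach_citations(response, retrieved_docs):
    """Link each sentence to the retrieved doc with the highest token overlap.

    retrieved_docs: list of dicts like {"id": "wiki:Marie_Curie", "url": "...", "text": "..."}.
    """
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    cited = []
    for sent in sentences:
        sent_tokens = set(sent.lower().split())
        best = max(
            retrieved_docs,
            key=lambda d: len(sent_tokens & set(d["text"].lower().split())),
        )
        cited.append(f"{sent} [{best['id']}]({best['url']})")
    return " ".join(cited)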
Shadow Evaluation Pipelines
- Deploy separate factuality evaluation service parallel to user-facing responses.
- Allows measuring factual degradation without impacting latency.
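- A minimal sketch of a shadow evaluation hook: the user-facing response is returned immediately, while factuality scoring runs as a fire-and-forget background task. The score_factuality and log_factuality helpers are assumed stand-ins for your scorer and telemetry sink.
import asyncio

async def handle_request(prompt, llm, retriever):
    docs = retriever(prompt)
    response = llm.generate(prompt, context=docs)

    # Shadow path: evaluate factuality off the critical path so latency is unaffected.
    asyncio.create_task(shadow_evaluate(prompt, response, docs))
    return response

async def shadow_evaluate(prompt, response, docs):
    report = await asyncio.to_thread(score_factuality, response, docs)  # assumed scorer
    log_factuality(prompt=prompt, report=report)                        # assumed telemetry sink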
Tooling and Implementation Ecosystem
-
Common tools supporting factuality-aware LLM deployment include:
- LangChain and LlamaIndex: orchestration and retrieval pipelines.
- RAGAS (Es et al., 2023): metrics for RAG factuality evaluation.
- EvalGauntlet and HELM 2.0: benchmark orchestration and standard reporting.
- Factool (Chern et al., 2023): open-source factuality detection framework with plug-in, tool-augmented verifiers.
- Airflow / Prefect: for scheduled verification and retraining pipelines.
- Streamlit / Gradio: for building real-time factuality inspection dashboards.
-
Example deployment snippet:
# Schematic call: the installed ragas.evaluate API scores a dataset of
# (question, answer, retrieved contexts) records against a list of metrics.
from ragas import evaluate

result = evaluate(model_response, retrieved_docs)
log_factuality(result['faithfulness'], result['groundedness'])
Governance and Compliance Considerations
Factuality Audits
-
Enterprises should conduct periodic factual audits, documenting:
- Datasets used for knowledge updates
- Benchmarks and metrics applied
- Drift statistics and retraining intervals
Accountability and Traceability
- Maintain an immutable factuality ledger recording all outputs, sources, and verification scores.
- Enable post-hoc traceability for legal or scientific verification.
Transparency to End Users
- Present factual confidence indicators or citations in user interfaces.
- Use traffic-light-style indicators (green = verified, yellow = uncertain, red = unsupported).
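- For example, a simple mapping from the per-response factuality score to a traffic-light indicator (the thresholds are illustrative deployment choices):
def factuality_indicator(score, verified=0.85, uncertain=0.6):
    """Map a factuality score in [0, 1] to a user-facing indicator."""
    if score >= verified:
        return "green"    # verified
    if score >= uncertain:
        return "yellow"   # uncertain
    return "red"          # unsupported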
Regulatory Alignment
- As factual reliability becomes a regulatory expectation (e.g., EU AI Act, FDA-AI validation), factual monitoring pipelines will serve as compliance evidence.
Evaluation Case Studies and Implementation Walkthroughs
-
We examine four widely adopted systems for factuality evaluation:
- RAGAS (Retrieval-Augmented Generation Assessment)
- SAFE (Search-Augmented Factuality Evaluator)
- FActScore (Fact Verification via LLM-as-a-Judge)
- QAFactEval (Question-Answer Consistency Evaluation)
-
Each system embodies a distinct factuality paradigm: grounded, long-form, human-simulated, and QA-based respectively.
Case Study A: RAGAS — Retrieval-Augmented Evaluation
-
Reference: RAGAS: Automated Evaluation for Retrieval-Augmented Generation by Es et al. (2023); demo in Es et al. (2024).
- Purpose:
- RAGAS evaluates factuality in retrieval-augmented generation (RAG) pipelines—systems that combine retrieval from an external knowledge base with LLM reasoning.
- Metrics:
-
RAGAS reports four complementary metrics:
- Faithfulness: the proportion of model statements supported by retrieved context.
- Context Precision: how much of the retrieved evidence was actually used in the answer.
- Context Recall: whether retrieved evidence covers all needed facts.
- Answer Relevance: semantic alignment between generated answer and user query.
-
-
Implementation Workflow:
-
Retrieve evidence:
from langchain.retrievers import BM25Retriever

retriever = BM25Retriever.from_texts(corpus_texts)
retrieved_docs = retriever.get_relevant_documents(query)
-
Generate model answer:
answer = llm.generate_answer(query, context=retrieved_docs)
-
Evaluate with RAGAS:
from ragas import evaluate

metrics = evaluate(answer, retrieved_docs)  # schematic; the installed API evaluates a dataset of (question, answer, contexts) records
print(metrics)  # {'faithfulness': 0.92, 'context_precision': 0.88, ...}
-
Interpretation:
- faithfulness < 0.8 ⇒ the model introduces unsupported claims.
- context_recall < 0.6 ⇒ the retriever missed relevant passages.
- A combined factuality score can be computed as \(F_{\text{rag}} = \frac{1}{4}(F_{\text{faith}} + P_{\text{ctx}} + R_{\text{ctx}} + R_{\text{ans}})\).
-
-
Engineering Notes:
- Uses sentence-BERT embeddings for semantic similarity.
- Retrieval evidence is chunked (≈ 200 tokens) and cached.
- Implemented as a composable evaluation node in LangChain pipelines.
Case Study B: SAFE — Search-Augmented Factuality Evaluator
-
Reference: LongFact: Long-Form Factuality Evaluation with Search-Augmented Generators by Wei et al. (2024).
-
Purpose: SAFE performs atomic-fact decomposition and real-time web verification to evaluate factual correctness in long-form model outputs.
-
Pipeline Overview:
- Claim extraction: Use GPT-4 or a structured parser to segment model text into atomic factual claims.
- Search and retrieval: Query search APIs (e.g., Bing, Serper) for each claim.
- Verification: Use entailment scoring (DeBERTa-MNLI) between claim and retrieved snippets.
- Aggregation: Compute supported / contradicted / unverifiable ratios.
-
Implementation Sketch:
claims = gpt4_extract_claims(output_text)
labels = []
for c in claims:
    evidence = web_search(c)
    entail_prob = entailment_model(premise=evidence, hypothesis=c)
    labels.append(entail_prob > 0.5)
safe_score = sum(labels) / len(labels)
- Engineering Notes:
- Parallelized retrieval via async HTTP requests.
- Uses caching to reduce redundant searches.
- Achieves ~0.8 correlation with expert human judgment across open-domain QA tasks.
- Interpretation:
- SAFE’s design shows how factuality evaluation can scale dynamically without predefined gold references. It directly operationalizes the principle that factuality = verifiable support + absence of contradiction.
Case Study C: FActScore — LLM-as-a-Judge Framework
-
Reference: FActScore: Fine-grained Factuality Evaluation for LLMs by Min et al. (2023).
-
Purpose: FActScore measures factual accuracy across multiple factual units using an LLM itself as the verifier. It exemplifies the “LLM-as-a-judge” paradigm for automatic yet explainable scoring.
-
Pipeline Steps:
- Claim decomposition: GPT-4 decomposes each output into atomic facts.
- Verification prompt: For each claim, GPT-4 is asked whether it is Supported / Refuted / Not Enough Info given the source text.
- Scoring: \(F_{\text{act}} = \frac{\#\text{Supported}}{\#\text{Total Claims}}.\)
- Optional: integrate human sampling for calibration.
-
Implementation Example:
prompt = "For each claim below, label as Supported, Refuted, or NEI based on the context."
scores = gpt4_api(prompt + context_and_claims)
factscore = compute_ratio(scores, label="Supported")
-
Engineering Notes:
- Prompts use structured JSON outputs for reliability.
- Batch evaluation with rate-limit aware retry loops.
- Outputs stored in databases with per-claim confidence for interpretability.
-
Strengths:
- Requires no retriever; works for both extractive and generative models.
- Produces human-readable rationales for each label.
- Correlates closely with expert annotation (\(\rho\) ≈ 0.82).
Case Study D: QAFactEval — Question-Answer Consistency
-
Reference: QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization by Fabbri et al. (2022).
-
Purpose: QAFactEval reframes factuality as answer consistency: if the same questions yield equivalent answers when asked to both the source and the model’s summary, the summary is factually consistent.
-
Pipeline Workflow:
- Question generation: Train a question-generation model (e.g., T5-base) on summarization data to produce factual questions from the model output.
- Answer extraction: Ask each question to (a) the source document and (b) the model summary using a QA model such as RoBERTa fine-tuned on SQuAD.
- Answer comparison: Compute cosine similarity between both answers using sentence-BERT embeddings.
- Score aggregation:
\(F_{\text{QA}} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\big[\text{sim}(a_i^{src},a_i^{sum})>\tau\big].\)
- Typical threshold \(\tau\) ≈ 0.8.
-
Implementation Snippet:
questions = qg_model.generate(summary)
labels = []
for q in questions:
    ans_src = qa_model(q, source_doc)
    ans_sum = qa_model(q, summary)
    sim = cosine_similarity(emb(ans_src), emb(ans_sum))  # emb: sentence-BERT encoder
    labels.append(sim > 0.8)
qa_fact_score = sum(labels) / len(labels)
-
Engineering Notes:
- Uses batched inference for QA to handle long documents.
- Allows semantic matching via embeddings to tolerate paraphrase differences.
- Supports hybrid QA models for domain-specific data (BioASQ, FinQA).
Comparative Analysis
System | Domain | Evidence Source | Automation | Output Granularity | Strengths | Weaknesses |
---|---|---|---|---|---|---|
RAGAS | RAG pipelines | Retrieved docs | Full auto | Sentence / doc | Context precision/recall | Needs retriever context |
SAFE | Open-world | Web search | Semi-auto | Claim | Handles long-form | High latency |
FActScore | General | Source or reference | LLM-based | Claim | Explains reasoning | LLM evaluator bias |
QAFactEval | Summarization | Reference doc | Full auto | Question | Semantic flexibility | Sensitive to QA quality |
Implementation Integration Example
- Goal: Combine RAGAS + FActScore for hybrid evaluation in a production retrieval-augmented chatbot.
# Schematic integration: ragas_eval and factscore_eval stand in for the
# libraries' actual evaluation entry points.
from ragas import evaluate as ragas_eval
from factscore import factscore_eval

docs = retriever(query)
answer = llm.generate(query, context=docs)

ragas_scores = ragas_eval(answer, docs)
factscore = factscore_eval(answer, reference_docs=docs)

factuality_report = {
    "faithfulness": ragas_scores['faithfulness'],
    "context_precision": ragas_scores['context_precision'],
    "factscore": factscore,
    "overall": 0.5 * ragas_scores['faithfulness'] + 0.5 * factscore,
}
- Deployment Tip: Store the factuality_report in telemetry logs to monitor factual trends and trigger retraining when long-term averages decline.
Insights and Takeaways
-
System selection depends on context: RAGAS for retrieval systems, SAFE for open-domain long-form, FActScore for general tasks, QAFactEval for summarization.
-
Hybrid scoring improves robustness: Combining metrics (e.g., RAGAS + FActScore) provides both precision (entailment-based) and recall (semantic-QA-based) perspectives.
-
LLM verifiers are scalable but fragile: Using smaller, distilled factual verifiers for initial filtering improves efficiency and reduces dependency on proprietary models.
-
Operational importance: These frameworks form the practical core of the factual monitoring pipelines described in the preceding section on architectural and systems considerations.
End-to-End Factuality Evaluation Pipeline (Implementation Blueprint)
- This section gives a concrete, implementation-level blueprint you can adapt in production. It shows how to build a factuality service that scores model outputs across truthfulness, faithfulness, groundedness, and long-form factuality, using off-the-shelf retrieval, verifiers, and LLM-as-judge components.
High-level architecture
-
Service consists of five subsystems wired through a message bus (or async job queue).
-
Ingress: Receives model outputs plus optional context \(D\), request metadata, and model confidences. Batches items for evaluation.
-
Claim extraction: Turns text into atomic claims with either a rules pipeline (spaCy + dependency chunks) or an LLM extractor returning structured JSON claims.
-
Evidence acquisition: For faithfulness: use the provided \(D\). For groundedness/truthfulness: retrieve evidence via BM25 (Pyserini/Anserini) and/or dense k-NN (FAISS) over a domain corpus; optionally add web search for open-domain SAFE-style checks. Pyserini/Anserini implements BM25; FAISS provides billion-scale vector search.
-
Verification engines: Entailment/NLI (e.g., DeBERTa-MNLI), QA-agreement (QAFactEval), reference-free RAG evaluation (RAGAS), and LLM-as-judge scoring (FActScore, SAFE).
-
Aggregation and reporting: Combines per-claim labels into task-specific scores; logs per-claim rationales, supporting snippets, and URLs for audit.
-
Data contracts and schemas
-
Use explicit, versioned JSON contracts to keep modules decoupled.
- Ingress payload:
{ "version": "1.0", "prompt": "...", "response": "...", "context_docs": [ {"id": "doc1", "text": "...", "meta": {...}}, ... ], "task": "summarization|rag_qa|open_qa|generation", "model_info": {"name": "my-llm", "logprob": true}, "timestamps": {"request": "..."} }
- Claim record:
{ "cid": "c_000123", "text": "Marie Curie won the 1903 Nobel Prize in Chemistry.", "spans": [ [start,end] ], "norm": {"entities":[...], "dates":[...]} }
- Evidence record:
{ "cid": "c_000123", "source_id": "wiki:Marie_Curie#sec3", "snippet": "...", "url": "https://en.wikipedia.org/...", "score": 17.6 }
- Verification result:
{ "cid":"c_000123", "method":"nli", "label":"contradicted", "p_entail":0.08, "p_contra":0.81, "rationale":"..." }
- Report:
{ "scores": { "faithfulness": 0.86, "ragas": {"faithfulness":0.88,"ctx_precision":0.82,"ctx_recall":0.76,"answer_relevance":0.91}, "factscore": 0.73 }, "by_claim": [ ... ], "citations": [ {"cid":"c_000123", "url":"..."} ] }
Claim extraction
-
Pipeline options:
- Rule-based splitter:
- Sentence split \(\rightarrow\) dependency parse \(\rightarrow\) split coordinated clauses \(\rightarrow\) normalize coreference to make each claim self-contained (this mirrors SAFE’s “revise to be self-contained” step).
- LLM extractor:
- Prompt an LLM with a schema like:
You will extract atomic, verifiable claims from the text. Return JSON: [{"cid":"...", "text":"...", "evidence_need":"world|context"}]
-
Validate with JSON schema; reject/repair malformed outputs.
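- A minimal sketch of schema validation for extracted claims using the jsonschema package; the schema fields mirror the claim record above, and the repair/re-prompt step is left as an assumption about your extractor.
from jsonschema import validate, ValidationError

CLAIM_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["cid", "text"],
        "properties": {
            "cid": {"type": "string"},
            "text": {"type": "string"},
            "evidence_need": {"enum": ["world", "context"]},
        },
    },
}

def validate_claims(raw_claims):
    """Return claims if they match the schema; otherwise signal the repair/reject path."""
    try:
        validate(instance=raw_claims, schema=CLAIM_SCHEMA)
        return raw_claims
    except ValidationError as err:
        # In practice: re-prompt the extractor or apply a deterministic repair step.
        raise ValueError(f"Malformed claim output: {err.message}")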
Evidence acquisition
-
Faithfulness (has context D): Treat \(D\) as the retrieval set. For each claim \(a_i\), build a targeted sub-index (BM25) over sentences/paragraphs in \(D\) and fetch top-\(k\) candidates.
-
Groundedness (RAG): Index your corpus with BM25 (Pyserini/Anserini) and FAISS for hybrid recall; rerank with cosine over sentence-BERT or a cross-encoder (a minimal hybrid-retrieval sketch follows this list). Evaluate later with RAGAS metrics (faithfulness, context precision/recall, answer relevance).
-
Truthfulness / open-domain: Adopt SAFE-style web search when no authoritative internal corpus exists: generate focused queries per claim, fetch top snippets, and cache.
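- A minimal sketch of the hybrid BM25 + dense retrieval mentioned above for groundedness, using rank_bm25, sentence-transformers, and FAISS. The encoder choice, the whitespace tokenization, and the linear score fusion are simplifying assumptions; production stacks typically use Pyserini plus a cross-encoder reranker.
import faiss
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

corpus = ["Marie Curie won the 1903 Nobel Prize in Physics.", "..."]  # evidence passages

# Sparse index (BM25 over whitespace tokens).
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Dense index (normalized embeddings, so inner product = cosine similarity).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(np.asarray(doc_vecs, dtype="float32"))

def hybrid_retrieve(claim, k=5, alpha=0.5):
    """Blend normalized BM25 and dense scores; return top-k passages for verification."""
    sparse = np.array(bm25.get_scores(claim.lower().split()))
    sparse = sparse / (sparse.max() + 1e-9)
    q = encoder.encode([claim], normalize_embeddings=True).astype("float32")
    dense_scores, dense_ids = index.search(q, len(corpus))
    dense = np.zeros(len(corpus))
    dense[dense_ids[0]] = dense_scores[0]
    blended = alpha * sparse + (1 - alpha) * dense
    return [corpus[i] for i in np.argsort(-blended)[:k]]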
Verification engines
-
Given claim \(a_i\) and candidate evidence \(E_i=\{e_{i1},\dots,e_{ik}\}\):
-
Entailment/NLI: Compute \(p_{ij}=\Pr(\text{entail} \mid e_{ij}, a_i)\). Label supported if \(\max_j p_{ij} \ge \tau\). Use DeBERTa-MNLI or a domain-tuned NLI model (a minimal entailment sketch follows this list).
-
QA-agreement (QAFactEval): Generate questions from the claim/output, answer them on the source and on the output, and compare answers by semantic similarity. QAFactEval reports stronger correlation than many entailment baselines for summarization.
-
Reference-free RAG metrics (RAGAS): Compute faithfulness plus context precision/recall and answer relevance; ideal for end-to-end RAG stacks and CI checks.
-
LLM-as-judge: FActScore decomposes the output into atomic facts and scores the fraction supported by a reliable source; SAFE issues multi-step search queries, reasons over web snippets, labels each claim (supported / irrelevant / not supported), and aggregates.
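- A minimal sketch of the entailment check described above, using microsoft/deberta-large-mnli from Hugging Face Transformers; the entailment label index is read from the model config rather than hard-coded, since MNLI label orders vary across checkpoints, and the threshold \(\tau\) is illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL)
ENTAIL_ID = {v.lower(): k for k, v in nli_model.config.id2label.items()}["entailment"]

def entailment_prob(evidence: str, claim: str) -> float:
    """P(entailment) of `claim` given `evidence` as (premise, hypothesis)."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAIL_ID].item()

def label_claim(claim, evidence_snippets, tau=0.5):
    """Supported if any retrieved snippet entails the claim above threshold tau."""
    return max(entailment_prob(e, claim) for e in evidence_snippets) >= tau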
-
Score aggregation
-
For a response with claims \(\phi(y)=\{a_1,\dots,a_N\}\):
-
Faithfulness (context-bound):
\[F_{\text{faith}}(y\mid D)=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}[\exists e\in D:\ e\models a_i].\]
-
RAG groundedness (RAGAS):
- Aggregate faithfulness, context precision, context recall, and answer relevance into a dashboard score \(F_{\text{rag}}=\tfrac{1}{4}\big(F_{\text{faith}}+P_{\text{ctx}}+R_{\text{ctx}}+R_{\text{ans}}\big).\)
- RAGAS exposes each component so you can enforce per-metric SLOs.
-
Atomic factuality (FActScore/SAFE):
\[F_{\text{atomic}}=\frac{\#\text{Supported}}{\#\text{Total Claims}},\quad \text{with counts from an LLM judge or NLI+retrieval.}\]
- FActScore formalizes atomic decomposition; SAFE adds search-augmented judging for long-form.
-
Calibration overlay:
- Log model confidence \(c\) for each claim and compute expected calibration error (ECE) between \(c\) and correctness labels; use reliability diagrams to gate risky answers.
-
Orchestration and performance
-
Job graph: Ingress \(\rightarrow\) extraction \(\rightarrow\) retrieval \(\rightarrow\) verification \(\rightarrow\) aggregation. Run each stage as an idempotent worker; cache retrieval and NLI outputs by hash of claim text (a minimal caching sketch follows this list).
-
Batching and vectorization: Batch NLI and QA calls; vectorize FAISS queries; pre-compute dense embeddings for your corpus for low latency. FAISS supports GPU k-selection for fast nearest-neighbor.
-
Back-pressure control: If web search (SAFE) saturates, downsample to suspicious claims (e.g., high perplexity, low entailment margin) and fall back to in-corpus verification.
-
Observability: Emit per-stage timing, cache hit rates, retrieval recall@k, and verifier confidence histograms. Alert when groundedness or faithfulness SLOs slip.
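- A minimal sketch of the claim-hash caching mentioned in the job-graph bullet; the in-memory dict stands in for whatever shared store (e.g., Redis or SQLite) the idempotent workers actually use.
import functools
import hashlib

_CACHE = {}  # in production: a shared store keyed the same way

def cache_by_claim(fn):
    """Memoize expensive retrieval/NLI calls by a stable hash of the claim text."""
    @functools.wraps(fn)
    def wrapper(claim: str, *args, **kwargs):
        key = (fn.__name__, hashlib.sha256(claim.encode("utf-8")).hexdigest())
        if key not in _CACHE:
            _CACHE[key] = fn(claim, *args, **kwargs)
        return _CACHE[key]
    return wrapper

@cache_by_claim
def verify_claim(claim, evidence):
    ...  # NLI / web-search verification goes here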
CI/CD and regression testing
-
Golden sets: Curate a small, mixed suite: short-form factoids, summarization faithfulness, RAG QA, and long-form essays. Include known tricky paraphrases and entity/date swaps.
-
Automated gates: Block deploys if any of: \(F_{\text{faith}}<\tau_1\), \(P_{\text{ctx}}<\tau_2\), \(R_{\text{ctx}}<\tau_3\), \(F_{\text{atomic}}<\tau_4\), or ECE worsens by \(\ge\Delta\) (a minimal gate-check sketch follows this list).
-
RAGAS integration: Use the maintained LangChain integration to evaluate QA chains as part of CI (smoke tests per PR).
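- A minimal sketch of the automated deploy gate described above; the threshold values and the layout of the candidate/baseline report dicts are illustrative assumptions.
THRESHOLDS = {                 # tau_1 .. tau_4 from the gate definition (illustrative values)
    "faithfulness": 0.90,
    "context_precision": 0.80,
    "context_recall": 0.80,
    "atomic_factuality": 0.75,
}
MAX_ECE_REGRESSION = 0.02      # Delta

def deploy_gate(candidate, baseline):
    """Return (ok, failures) by comparing a candidate run against thresholds and the baseline ECE."""
    failures = [k for k, tau in THRESHOLDS.items() if candidate[k] < tau]
    if candidate["ece"] - baseline["ece"] >= MAX_ECE_REGRESSION:
        failures.append("ece_regression")
    return (len(failures) == 0, failures)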
Storage and audit trails
-
Immutable audit log: Persist original text, claims, evidence snippets, URLs, and labels. Expose a read-only dashboard for auditors.
-
Provenance linking: Each user-visible sentence shows expandable citations to its supporting evidence (doc id, URL, line spans). Attribution-style evaluation can be layered here.
-
Drift monitoring: Re-run verification on a rolling window of past responses as corpora update; alert when the fraction of claims switching label exceeds a threshold.
Example minimal deployment (pseudo-code)
- Below is a compact blueprint you can adapt with your stack.
def evaluate_response(prompt, response, context_docs=None):
    claims = extract_atomic_claims(response)                      # LLM or rules
    if context_docs:
        evidence_map = build_bm25(context_docs).retrieve(claims)  # faithfulness
    else:
        evidence_map = hybrid_retrieve(claims)                    # BM25 + FAISS over corpus
    nli_labels = nli_verify(claims, evidence_map)                 # DeBERTa-MNLI
    qa_agree = qafacteval(claims, context_docs or top_docs(claims))
    ragas_scores = ragas_eval(response, context_docs or top_docs(claims))
    lfj = llm_as_judge_safe(claims)                               # optional: SAFE for long-form
    report = aggregate_scores(nli_labels, qa_agree, ragas_scores, lfj)
    log_report(prompt, response, claims, evidence_map, report)
    return report
- where: nli_verify uses DeBERTa-MNLI, qafacteval follows QAFactEval, ragas_eval follows RAGAS, and llm_as_judge_safe implements SAFE.
Operational SLOs and policies
- Per-task thresholds:
- Summarization: \(F_{\text{faith}}>0.9\) and contradiction rate < 1 %.
- RAG QA: \(F_{\text{rag}}>0.85\) with \(P_{\text{ctx}},R_{\text{ctx}}>0.8\).
- Long-form: \(F_{\text{atomic}}>0.75\) or F1@\(K\) above team target (SAFE).
-
Confidence-aware routing: If ECE indicates overconfidence, force abstention or human review for claims with low entailment margins or low RAGAS faithfulness (a minimal routing sketch follows this list).
- Change management: Every retriever or model update must pass the golden CI gates and a shadow run on a recent production slice.
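- A minimal sketch of the confidence-aware routing policy above; the margin and faithfulness thresholds are illustrative, and the returned actions are assumed hooks in the serving layer rather than a specific framework API.
def route_output(claim_results, ragas_faithfulness,
                 min_margin=0.2, min_faithfulness=0.8):
    """Decide whether to serve, abstain, or escalate based on verification signals.

    claim_results: list of dicts with per-claim 'p_entail' and 'p_contra' scores.
    """
    margins = [r["p_entail"] - r["p_contra"] for r in claim_results]
    if ragas_faithfulness < min_faithfulness:
        return "abstain"           # too many unsupported statements
    if any(m < min_margin for m in margins):
        return "human_review"      # at least one claim is borderline
    return "serve"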
What to use when
-
You already have source docs: Run entailment + QAFactEval for faithfulness; keep RAGAS off unless the system is truly RAG.
-
You are building RAG: Adopt RAGAS for day-to-day health, plus periodic LLM-as-judge audits (FActScore) on samples.
-
You ship long essays: SAFE or FActScore with web or internal search; store per-claim support and show citations inline.
-
This blueprint mirrors best-practice components in the literature: RAGAS for retrieval-augmented pipelines, FActScore for atomic long-form scoring, SAFE for search-augmented long-form evaluation, and QAFactEval for QA-style faithfulness—wired together with standard IR stacks (Pyserini/BM25, FAISS) and strong NLI baselines (DeBERTa-MNLI).
Further Reading
- RAGAS Documentation (Hugging Face)
- LongFact and SAFE: Evaluating Factuality in LLMs (Google DeepMind Blog)
- FActScore: Evaluating Factuality of LLMs (Google Research Blog)
- QAFactEval GitHub Repository (Salesforce Research)
- FEVER: Fact Extraction and Verification Challenge
- TruthfulQA Benchmark and Leaderboard (OpenAI Evals)
- HaluEval Benchmark Repository
- RAGAS GitHub Repository
- AttributionEval: Evaluating Evidence Attribution in LLMs (Meta AI)
- FactCC Repository (Salesforce Research)
References
- SimpleQA: Benchmarking Calibration and Factuality in Short-Form Question Answering
- LongFact: Long-Form Factuality Evaluation with Search-Augmented Generators (SAFE)
- TruthfulQA: Measuring How Models Mimic Human Falsehoods
- HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
- FreshLLMs / FreshQA: Refreshing Large Language Models with Search Engine Augmentation
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
- Reflexion: Language Agents with Verbal Reinforcement Learning
- Constitutional AI: Harmlessness from AI Feedback
- FactCC: Factual Consistency Checking for Abstractive Summarization
- ROME: Locating and Editing Factual Associations in GPT
- MEMIT: Mass Editing Memory in a Transformer
- RAGAS: Automated Evaluation for Retrieval-Augmented Generation
- FaithfulRAG: Fact-Level Conflict Modeling for Context-Faithful Retrieval-Augmented Generation
- Editing Factual Knowledge in Language Models (KnowledgeEditor style / ReFact-like)
- TÜLU 2: A Framework for Truthful Instruction Fine-Tuning
- HELM: Holistic Evaluation of Language Models
- FActScore: Fine-Grained Factuality Evaluation for LLMs
- QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization
- TabFact: A Large-Scale Dataset for Table-Based Fact Verification
- VisualNews: Benchmarking Factual Consistency in Vision-Language Models — reinterpreted
- A Survey of Multimodal Retrieval-Augmented Generation
- Measuring Attribution in Natural Language Generation Models
- Lifelong Knowledge Editing for LLMs via Retrieval-Augmented Continual Prompt Learning (RECIPE)
- Factuality of Large Language Models: A Survey
Citation
@article{Chadha2020DistilledFactualityInLLMs,
title = {Factuality in LLMs},
author = {Chadha, Aman and Jain, Vinija},
journal = {Distilled AI},
year = {2020},
note = {\url{https://aman.ai}}
}