
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/">
  <dc:identifier>https://unilib.phaidrabg.rs/o:8175</dc:identifier>
  <dc:identifier>doi:10.5281/zenodo.13863828</dc:identifier>
  <dc:identifier>ISSN: 2957-4935</dc:identifier>
  <dc:identifier>ISBN: 978-92-9083-669-8</dc:identifier>
  <dc:creator id="https://orcid.org/0000-0001-7326-059X">Ljajić, Adela</dc:creator>
  <dc:creator id="https://orcid.org/0000-0001-6902-3639">Košprdić, Miloš</dc:creator>
  <dc:creator id="https://orcid.org/0000-0002-7679-1676">Bašaragin, Bojana</dc:creator>
  <dc:creator id="https://orcid.org/0000-0002-4180-0050">Medvecki, Darija</dc:creator>
  <dc:creator>Cassano, Lorenzo</dc:creator>
  <dc:creator id="https://orcid.org/0000-0003-2706-9676">Milošević, Nikola</dc:creator>
  <dc:publisher>Leibniz Supercomputing Centre LRZ</dc:publisher>
  <dc:description xml:lang="eng">In this paper, we introduce the Verif.ai project, a pioneering open-source scientific question-answering system designed to provide answers that are not only referenced but also automatically vetted and verifiable. The system comprises (1) an Information Retrieval module combining semantic and lexical search techniques over scientific papers (PubMed); (2) a Retrieval-Augmented Generation (RAG) module that uses a fine-tuned generative model (Mistral 7B) and the retrieved articles to generate claims with references to the articles from which they were derived; and (3) a Verification engine based on DeBERTa and XLM-RoBERTa models fine-tuned for the Natural Language Inference task on the SciFact dataset. The Verification engine cross-checks each generated claim against the article from which it was derived, verifying whether any hallucinations occurred during claim generation. By leveraging the Information Retrieval and RAG modules, Verif.ai excels at generating factual information from a vast array of scientific sources, while the Verification engine rigorously double-checks this output, ensuring its accuracy and reliability. This dual-stage process plays a crucial role in acquiring and confirming factual information, significantly enhancing the information landscape. Our methodology could significantly enhance scientists' productivity while fostering trust in the application of generative language models within scientific domains, where hallucinations and misinformation are unacceptable.</dc:description>
  <dc:title xml:lang="eng">Scientific QA system with verifiable answers</dc:title>
  <dc:type>info:eu-repo/semantics/conferenceProceedings</dc:type>
  <dc:language>eng</dc:language>
  <dc:source>Proceedings of the 6th International Open Search Symposium #ossym2024</dc:source>
  <dc:source>startpage: 59</dc:source>
  <dc:source>endpage: 64</dc:source>
  <dc:rights>http://creativecommons.org/licenses/by-nd/4.0/legalcode</dc:rights>
  <dc:date>2024</dc:date>
  <dc:format>application/pdf</dc:format>
  <dc:format>8991896 bytes</dc:format>
</oai_dc:dc>
