Question Answering on a Knowledge Graph

Haystack allows storing and querying knowledge graphs with the help of pre-trained models that translate text queries to SPARQL queries. This tutorial demonstrates how to load an existing knowledge graph into Haystack, load a pre-trained retriever, and execute text queries on the knowledge graph. Training models that translate text queries into SPARQL queries is currently not supported.
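
At its core, the tutorial boils down to three steps: connect to a knowledge graph, load the text-to-SPARQL model, and ask questions. As a quick preview, here is a minimal sketch of that flow using the same classes that are set up step by step below (the model path is a placeholder):

# Minimal preview of the flow built up in the rest of this tutorial (the model path is a placeholder)
from haystack.document_stores import GraphDBKnowledgeGraph
from haystack.nodes import Text2SparqlRetriever
kg = GraphDBKnowledgeGraph(index="tutorial_10_index")  # knowledge graph backed by a running GraphDB instance
kgqa_retriever = Text2SparqlRetriever(knowledge_graph=kg, model_name_or_path="path/to/text2sparql_model")
print(kgqa_retriever.retrieve(query="In which house is Harry Potter?"))  # text query -> SPARQL -> answer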

# Install the latest release of Haystack in your own environment
#! pip install farm-haystack

# Install the latest master of Haystack
!pip install --upgrade pip
!pip install git+https://github.com/deepset-ai/haystack.git#egg=farm-haystack[colab,graphdb]

# Here are some imports that we'll need
import subprocess
import time
from pathlib import Path

from haystack.nodes import Text2SparqlRetriever
from haystack.document_stores import GraphDBKnowledgeGraph
from haystack.utils import fetch_archive_from_http

Downloading Knowledge Graph and Model

# Let's first fetch some triples that we want to store in our knowledge graph
# Here: exemplary triples from the wizarding world
graph_dir = "data/tutorial10"
s3_url = "https://fandom-qa.s3-eu-west-1.amazonaws.com/triples_and_config.zip"
fetch_archive_from_http(url=s3_url, output_dir=graph_dir)

# Fetch a pre-trained BART model that translates text queries to SPARQL queries
model_dir = "../saved_models/tutorial10_knowledge_graph/"
s3_url = "https://fandom-qa.s3-eu-west-1.amazonaws.com/saved_models/hp_v3.4.zip"
fetch_archive_from_http(url=s3_url, output_dir=model_dir)

Launching a GraphDB instance

# Unfortunately, there seems to be no good way to run GraphDB in colab environments
# In your local environment, you could start a GraphDB server with docker
# Feel free to check GraphDB's website for the free version https://www.ontotext.com/products/graphdb/graphdb-free/
print("Starting GraphDB ...")
status = subprocess.run(
    "docker run -d -p 7200:7200 --name graphdb-instance-tutorial docker-registry.ontotext.com/graphdb-free:9.4.1-adoptopenjdk11",
    shell=True,
)
if status.returncode:
    raise Exception(
        "Failed to launch GraphDB. Maybe it is already running, or a container with that name already exists and could be started instead?"
    )
time.sleep(5)
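
The fixed time.sleep(5) above is only a rough heuristic; on a slow machine GraphDB might not be ready yet when the next cells run. As an optional alternative, you could poll GraphDB until it answers HTTP requests. This is a sketch under the assumption that the container started above exposes GraphDB's Workbench REST API on http://localhost:7200 and that the /rest/repositories endpoint responds once the server is up:

# Optional: poll GraphDB instead of sleeping for a fixed amount of time
# Assumes the Workbench REST API is reachable on http://localhost:7200 once the server has started
import requests
for _ in range(30):
    try:
        if requests.get("http://localhost:7200/rest/repositories", timeout=2).status_code == 200:
            print("GraphDB is up and running.")
            break
    except requests.exceptions.RequestException:
        pass
    time.sleep(2)
else:
    raise Exception("GraphDB did not become reachable on http://localhost:7200")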

Creating a new GraphDB repository (also known as index in Haystack's document stores)

# Initialize a knowledge graph connected to GraphDB and use "tutorial_10_index" as the name of the index
kg = GraphDBKnowledgeGraph(index="tutorial_10_index")

# Delete the index as it might have been already created in previous runs
kg.delete_index()

# Create the index based on a configuration file
kg.create_index(config_path=Path(graph_dir) / "repo-config.ttl")

# Import triples of subject, predicate, and object statements from a ttl file
kg.import_from_ttl_file(index="tutorial_10_index", path=Path(graph_dir) / "triples.ttl")
all_triples = kg.get_all_triples()
print(f"The last triple stored in the knowledge graph is: {all_triples[-1]}")
print(f"There are {len(all_triples)} triples stored in the knowledge graph.")
# Define prefixes for names of resources so that we can use shorter resource names in queries
prefixes = """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX hp: <https://deepset.ai/harry_potter/>
"""
kg.prefixes = prefixes

# Load a pre-trained model that translates text queries to SPARQL queries
kgqa_retriever = Text2SparqlRetriever(knowledge_graph=kg, model_name_or_path=model_dir + "hp_v3.4")

Query Execution

We can now ask questions that will be answered by our knowledge graph! One limitation though: our pre-trained model can only answer questions about resources it has seen during training. Otherwise, it cannot translate the name of a resource to the identifier used in the knowledge graph, e.g. "Harry" -> "hp:Harry_potter".
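
If you are unsure which identifier the knowledge graph uses for an entity, you can look it up with a hand-written SPARQL query before asking a question. The sketch below reuses the retriever's _query_kg helper exactly as the following cells do; the FILTER/CONTAINS pattern is plain SPARQL 1.1, and the substring "granger" is just an illustrative example:

# Optional: inspect which resource identifiers the graph contains for a given name
result = kgqa_retriever._query_kg(
    sparql_query='select distinct ?sbj where { ?sbj ?rel ?obj . FILTER(CONTAINS(LCASE(STR(?sbj)), "granger")) }'
)
print(result)  # expected to surface an identifier such as hp:Hermione_granger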

query = "In which house is Harry Potter?"
print(f'Translating the text query "{query}" to a SPARQL query and executing it on the knowledge graph...')
result = kgqa_retriever.retrieve(query=query)
print(result)
# Correct SPARQL query: select ?a { hp:Harry_potter hp:house ?a . }
# Correct answer: Gryffindor

print("Executing a SPARQL query with prefixed names of resources...")
result = kgqa_retriever._query_kg(
    sparql_query="select distinct ?sbj where { ?sbj hp:job hp:Keeper_of_keys_and_grounds . }"
)
print(result)
# Paraphrased question: Who is the keeper of keys and grounds?
# Correct answer: Rubeus Hagrid

print("Executing a SPARQL query with full names of resources...")
result = kgqa_retriever._query_kg(
    sparql_query="select distinct ?obj where { <https://deepset.ai/harry_potter/Hermione_granger> <https://deepset.ai/harry_potter/patronus> ?obj . }"
)
print(result)
# Paraphrased question: What is the patronus of Hermione?
# Correct answer: Otter

About us

This Haystack notebook was made with love by deepset in Berlin, Germany

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Get in touch: Twitter | LinkedIn | Slack | GitHub Discussions | Website

By the way: we're hiring!