Getting Started with QuePasa

Welcome to the QuePasa Developer Documentation. QuePasa offers RAG (Retrieval-Augmented Generation) as a Service, letting you integrate advanced search and answer-generation capabilities directly into your applications. This guide walks you through the essential steps of integrating QuePasa's APIs, from obtaining your API credentials to implementing the search and RAG functionality.

Getting started with QuePasa involves three major steps:

  1. Obtaining access credentials.

  2. Uploading the data sets you want to search or generate answers from, using our document upload API.

  3. Using the Search API to retrieve ranked documents, or the RAG API to generate coherent, contextually relevant natural-language answers based on your data.

Obtaining API Credentials

To access QuePasa's APIs, you need an authorization token. It combines a client identifier and a secret in the form <CLIENT_ID>:<TOKEN>. This credential is required to authenticate your API requests.
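For example, the credential is sent as a Bearer token in the Authorization header. A minimal sketch with placeholder values:

```python
# Placeholder credentials; substitute the values you received.
client_id = "my-client-id"
token = "my-secret-token"

# The full credential is "<CLIENT_ID>:<TOKEN>", passed as a Bearer token.
auth_header = {"Authorization": f"Bearer {client_id}:{token}"}
print(auth_header["Authorization"])  # Bearer my-client-id:my-secret-token
```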

Contact us via hello@quepasa.ai or Calendly, or get a token via our Discord bot.

Steps to Obtain Credentials via the Discord Bot

  1. Join our Discord server: https://discord.gg/M9RB4cRDAt

  2. Upon joining, the QuePasa Bot will send you a direct message with instructions.

  3. Use the /start command to sign up and receive your API credentials.

  4. Note: You can also use the QuePasa Bot as a GUI for the API, and use the Discord server for support (the #support channel).

Quick start

Upload file

curl -X POST https://api.quepasa.ai/api/v1/upload/data/files/default \
  -H "Content-Type: multipart/form-data" \
  -H "Authorization: Bearer $YOUR_SECRET_TOKEN" \
  -F "file=@./TimeTravel101ForBeginners.pdf"


Retrieve answer

curl -X POST 'https://api.quepasa.ai/api/v1/retrieve/answer' \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $YOUR_SECRET_TOKEN" \
  -d '{
    "question": "Can I un-eat yesterday's burrito?"
  }'
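To pull just the answer text out of the JSON response, you can decode it in Python. This sketch assumes the answer lives under data.markdown, as in the Python examples further down; here a canned response body stands in for the real API output:

```python
import json

# Canned response body standing in for the real API output.
body = '{"data": {"markdown": "Sadly, digestion is a one-way trip."}}'
answer = json.loads(body)["data"]["markdown"]
print(answer)
```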

Upsert document

import requests

response = requests.post(
    f"https://api.quepasa.ai/api/v1/upload/data/documents/{DOMAIN}",
    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {YOUR_SECRET_TOKEN}',
    },
    json = [
        {
            # Required fields
            'id': "llm", # string
            'url': "https://en.wikipedia.org/wiki/Large_language_model",

            'title': "Large language model",
            'language': "en", # two-char language code in lowercase
            'text': """
A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks. As language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a self-supervised and semi-supervised training process.

The largest and most capable LLMs, as of August 2024, are artificial neural networks built with a decoder-only transformer-based architecture, which enables efficient processing and generation of large-scale text data. Modern models can be fine-tuned for specific tasks or can be guided by prompt engineering.
These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.

Some notable LLMs are OpenAI's GPT series of models (e.g., GPT-3.5, GPT-4 and GPT-4o; used in ChatGPT and Microsoft Copilot), Google's Gemini (the latter of which is currently used in the chatbot of the same name), Meta's LLaMA family of models, IBM's Granite models initially released with Watsonx, Anthropic's Claude models, and Mistral AI's models.
""".strip(),
            # Instead of 'text', you can send one of:
            # 'html': "", # raw HTML
            # 'markdown': "", # markdown

            # Optional fields:
            # - 'keywords': document keywords, string, by default empty
            # - 'created_at': "2024-05-20T07:26:06Z", # document creation datetime, by default datetime of first creation of this document via API
            # - 'updated_at': "2024-05-20T07:26:06Z", # document last update datetime, by default datetime of last update of this document via API
        },
    ],
)

print( response )
response.json()
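Note that requests does not raise on HTTP error statuses by itself; a small generic check (a hypothetical helper, not part of the QuePasa API) can make failures visible:

```python
import json

def parse_api_response(status_code: int, body_text: str) -> dict:
    # Treat any non-2xx status as an error before decoding the JSON body.
    if not 200 <= status_code < 300:
        raise RuntimeError(f"API request failed with HTTP {status_code}: {body_text}")
    return json.loads(body_text)
```

For example, call parse_api_response(response.status_code, response.text) after the request above.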

Upload file

import requests

file_path = "TTTMWeb.pdf"

# Open the file in binary mode
with open( file_path, 'rb' ) as f:
    # Send the POST request with the file
    response = requests.post(
        f"https://api.quepasa.ai/api/v1/upload/data/files/{DOMAIN}",
        headers = {
            'Authorization': f'Bearer {YOUR_SECRET_TOKEN}',
        },
        data = {
            'language': "en", # Optional: two-character language code, e.g. 'en'
        },
        files = {
            'file': f,
        },
    )


Retrieve answer

import requests

response = requests.post(
    "https://api.quepasa.ai/api/v1/retrieve/answer",
    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {YOUR_SECRET_TOKEN}',
    },
    json = {
        'question': "What is LLM?"

        # Optional:
        # 'domain': DOMAIN,
        # 'user_info': {
        #     'id': 'replace-with-some-user-id'
        # }
    },
)

response_json_full = response.json()
print( response_json_full['data']['markdown'] )
Install the Python SDK first:

pip install quepasa

Upsert document, check batch and retrieve answer

import os
import time
from pprint import pprint

import quepasa
from quepasa.rest import ApiException

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure Bearer authorization (Opaque): bearerAuth
configuration = quepasa.Configuration(
    access_token = os.environ["BEARER_TOKEN"]
)


# Enter a context with an instance of the API client
with quepasa.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    client = quepasa.DefaultApi(api_client)

    domain = "default" # The name of a group of documents. Defaults to "default".
    doc_id = "llm"

    documents = [
        {
            # Required fields
            'id': doc_id, # string
            'url': "https://en.wikipedia.org/wiki/Large_language_model",

            'title': "Large language model",
            'language': "en", # two-char language code in lowercase
            'text': """
A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks. As language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a self-supervised and semi-supervised training process.

The largest and most capable LLMs, as of August 2024, are artificial neural networks built with a decoder-only transformer-based architecture, which enables efficient processing and generation of large-scale text data. Modern models can be fine-tuned for specific tasks or can be guided by prompt engineering.
These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.

Some notable LLMs are OpenAI's GPT series of models (e.g., GPT-3.5, GPT-4 and GPT-4o; used in ChatGPT and Microsoft Copilot), Google's Gemini (the latter of which is currently used in the chatbot of the same name), Meta's LLaMA family of models, IBM's Granite models initially released with Watsonx, Anthropic's Claude models, and Mistral AI's models.
            """.strip(),
            # 'html': "", # or send HTML instead of 'text'
            # 'markdown': "", # or send markdown instead of 'text'

            # Optional fields:
            # - 'keywords': document keywords, string, by default empty
            # - 'created_at': "2024-05-20T07:26:06Z", # document creation datetime, by default datetime of first creation of this document via API
            # - 'updated_at': "2024-05-20T07:26:06Z", # document last update datetime, by default datetime of last update of this document via API
        },
    ]


    # Upsert document
    print("The response of client.replace_documents:")
    response = client.replace_documents(domain, documents)
    pprint(response)

    batch_id = response.data.batch_id


    # Wait until indexing finishes
    while batch_id is not None:
        print("The response of client.get_batch_status:")
        response = client.get_batch_status(batch_id)
        pprint(response)

        if response.status == 'Batch state: done':
            break
        time.sleep(10)


    print("The response of client.retrieve_answer:")
    response = client.retrieve_answer({
        'question': "What is LLM?",
    })
    pprint(response)
    print(response.data.markdown)
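The polling loop above retries indefinitely; a timeout-aware variant (a hypothetical helper, not part of the SDK) can wrap the same status check:

```python
import time

def wait_for_batch(get_status, timeout=300.0, interval=10.0):
    # get_status is any zero-argument callable returning the batch status
    # string, e.g. lambda: client.get_batch_status(batch_id).status.
    # Returns True once the batch is done, False if the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == 'Batch state: done':
            return True
        time.sleep(interval)
    return False
```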

Upload file, check batch and retrieve answer

import os
import time
from pprint import pprint

import quepasa
from quepasa.rest import ApiException


# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure Bearer authorization (Opaque): bearerAuth
configuration = quepasa.Configuration(
    access_token = os.environ["BEARER_TOKEN"]
)


# Enter a context with an instance of the API client
with quepasa.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    client = quepasa.DefaultApi(api_client)

    domain = "default" # The name of a group of documents. Defaults to "default".
    filename = "TimeTravel101ForBeginners.pdf"

    # Upload file
    print("The response of client.upsert_files:")
    response = client.upsert_files(domain, filename)
    pprint(response)

    batch_id = response.data.batch_id


    # Wait until indexing finishes
    while batch_id is not None:
        print("The response of client.get_batch_status:")
        response = client.get_batch_status(batch_id)
        pprint(response)

        if response.status == 'Batch state: done':
            break
        time.sleep(10)


    print("The response of client.retrieve_answer:")
    response = client.retrieve_answer({
        'question': "Can I un-eat yesterday's burrito?",
    })
    pprint(response)
    print(response.data.markdown)

REST API

See docs: https://docs.quepasa.ai/reference

Google Colab

For your convenience, the complete code is available as a Colab notebook.

https://colab.research.google.com/drive/1SvkS7821Q5HJR5qoqZWGqMoGW-jHKTdB?usp=sharing

Make sure to fill in quepasa_token in the Secrets tab.

Python SDK

pip install quepasa

See docs:  https://github.com/askrobot-io/quepasa-python