Rule: Huggingface Question Answering (Huggingface)
Base data:
Summary:
The Huggingface Question Answering rule takes a text/context and a question, runs one of the many Question Answering models on it, and stores the answer in a long text field of your choice.
You can choose between using the free inference API (where available) and hosting a dedicated endpoint via their system.
If you run the model frequently or need to have it readily available, the dedicated endpoint is the only way to go.
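For orientation, the sketch below shows roughly the kind of request the rule makes for you when using the free inference API. It is a minimal illustration, not the module's actual code; the model namespace, URL pattern, and token placeholder are assumptions.

```python
import requests

# Assumptions for illustration: the free inference API URL pattern and a
# public QA model namespace; substitute your own Huggingface access token.
MODEL = "deepset/roberta-base-squad2"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL}"
headers = {"Authorization": "Bearer hf_..."}  # your access token

# Question Answering models take a question plus a context to search in.
payload = {
    "inputs": {
        "question": "Where is the Eiffel Tower?",
        "context": "The Eiffel Tower is a landmark in Paris, France.",
    }
}
response = requests.post(API_URL, headers=headers, json=payload)
result = response.json()
# Typical response shape: {"answer": ..., "score": ..., "start": ..., "end": ...}
print(result["answer"], result["score"])
```

A dedicated endpoint accepts the same payload; only the URL and authorization change (see Extra Advanced Settings below).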
Module needed:
Huggingface
Field types to populate:
- Text (plain)
- Text (plain, long)
- Text (formatted)
- Text (formatted, long)
Base Fields types to use as context:
- Text (plain)
- Text (plain, long)
- Text (formatted)
- Text (formatted, long)
- Text (formatted, long, with summary)
Extra Requirements:
You need a Huggingface account and, if you use it in production, an endpoint set up via their dedicated endpoint API.
Prompting tips:
Just put the actual text to use as context here; no prompt or command is needed.
Extra Settings:
None
Extra Advanced Settings:
Type of Inference
Choose the type of inference to use: either the free API or the dedicated endpoint.
Huggingface Model
If you use the free API, you have to give the namespace of a Question Answering model that is available via the free inference API.
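If you are unsure whether a given namespace points to a Question Answering model, a quick check like the following can help (this assumes the huggingface_hub Python package; the namespace shown is just an example):

```python
from huggingface_hub import model_info

# Any model tagged "question-answering" should work with this rule.
info = model_info("deepset/roberta-base-squad2")
print(info.pipeline_tag)  # expect "question-answering"
```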
Huggingface Endpoint URL
If you use the dedicated endpoint API, you have to give the URL of an endpoint that hosts a Question Answering model.
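A dedicated endpoint takes the same question/context payload as the free API; only the URL changes. A minimal sketch, with a hypothetical endpoint URL (copy the real one from your Inference Endpoints dashboard):

```python
import requests

# Hypothetical URL; yours comes from the Inference Endpoints dashboard.
ENDPOINT_URL = "https://my-qa-endpoint.aws.endpoints.huggingface.cloud"
headers = {"Authorization": "Bearer hf_..."}  # your access token

payload = {
    "inputs": {
        "question": "Where is the Eiffel Tower?",
        "context": "The Eiffel Tower is a landmark in Paris, France.",
    }
}
print(requests.post(ENDPOINT_URL, headers=headers, json=payload).json())
```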
Question
The question to ask. Currently hardcoded, but a future version will be able to take tokens.
Threshold
The certainty score an answer has to reach before the question is considered answered.
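QA models typically return a confidence score between 0 and 1 alongside the answer, and the threshold is compared against that score. A rough sketch of the filtering logic (the response shape follows the common QA format; the variable names are assumptions):

```python
THRESHOLD = 0.5  # example value

# Typical QA response: answer text plus a 0-1 confidence score.
result = {"answer": "Paris, France", "score": 0.92, "start": 35, "end": 48}

if result["score"] >= THRESHOLD:
    field_value = result["answer"]  # stored in the chosen text field
else:
    field_value = None  # below threshold: the question counts as unanswered
```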
Possible example use cases:
- Answer questions about a text.
- Answer complex questions in some niche field that GPT/Gemini can't handle, using a fine-tuned model.