steamship.agents.tools.question_answering package#

Submodules#

steamship.agents.tools.question_answering.prompt_database_question_answerer module#

class steamship.agents.tools.question_answering.prompt_database_question_answerer.PromptDatabaseQATool(facts: Optional[List[str]] = None, *, name: str = 'PromptDatabaseQATool', agent_description: str = 'Used to answer questions about the number of subway stations in US cities. The input is the question about subway stations. The output is the answer as a sentence.', human_description: str = 'Answers questions about the number of subway stations in US cities.', rewrite_prompt: str = 'Instructions:\nPlease rewrite the following passage to be incredibly polite, to a fault.\nPassage:\n{input}\nRewritten Passage:', question_answering_prompt: Optional[str] = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {{input}}\n\nHelpful Answer:")[source]#

Bases: TextRewritingTool

Example tool to illustrate how one can create a tool with a mini database embedded in a prompt.

To use:

tool = PromptDatabaseQATool(
    facts=[
        "Sentence with fact 1",
        "Sentence with fact 2",
    ],
    agent_description=(
        "Used to answer questions about SPECIFIC_THING. "
        "The input is the question and the output is the answer."
    ),
)

facts: List[str]#
question_answering_prompt: Optional[str]#
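
A minimal runnable sketch of trying this tool interactively, assuming a configured Steamship API key and that the ToolREPL helper is available at steamship.utils.repl in your SDK version; the facts shown are illustrative placeholders, not data from this module:

# Minimal sketch: exercise the tool in an interactive REPL. The facts are
# illustrative placeholders (numbers not verified); substitute your own data.
# The default agent_description already covers the subway-station use case,
# so it is not overridden here.
from steamship.agents.tools.question_answering import PromptDatabaseQATool
from steamship.utils.repl import ToolREPL  # assumed helper location; may vary by SDK version

tool = PromptDatabaseQATool(
    facts=[
        "New York City has 472 subway stations.",
        "Boston has 153 subway stations.",
    ]
)

if __name__ == "__main__":
    # ToolREPL is assumed to handle workspace and AgentContext setup, then
    # prompt for questions answered from the facts embedded in the prompt.
    ToolREPL(tool).run()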

steamship.agents.tools.question_answering.vector_search_qa_tool module#

Answers questions with the assistance of a VectorSearch plugin.

class steamship.agents.tools.question_answering.vector_search_qa_tool.VectorSearchQATool(*, name: str = 'VectorSearchQATool', agent_description: str = ('Used to answer questions. ', 'The input should be a plain text question. ', 'The output is a plain text answer'), human_description: str = 'Answers questions with help from a Vector Database.', embedding_index_handle: Optional[str] = 'embedding-index', embedding_index_version: Optional[str] = None, question_answering_prompt: Optional[str] = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {question}\n\nHelpful Answer:", source_document_prompt: Optional[str] = 'Source Document: {text}', embedding_index_config: Optional[dict] = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance-handle': 'text-embedding-ada-002'}}, load_docs_count: int = 2, embedding_index_instance_handle: str = 'default-embedding-index')[source]#

Bases: Tool

Tool to answer questions with the assistance of a vector search plugin.

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

answer_question(question: str, context: AgentContext) List[Block][source]#
embedding_index_config: Optional[dict]#
embedding_index_handle: Optional[str]#
embedding_index_instance_handle: str#
embedding_index_version: Optional[str]#
get_embedding_index(client: Steamship) EmbeddingIndexPluginInstance[source]#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

load_docs_count: int#
name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

question_answering_prompt: Optional[str]#
run(tool_input: List[Block], context: AgentContext) Union[List[Block], Task[Any]][source]#

Answers questions with the assistance of an Embedding Index plugin.

Inputs#

tool_input: List[Block]

A list of text-containing blocks, each holding a question to answer.

context: AgentContext

The active AgentContext.

Output#

output: List[Block]

A list of blocks containing the answers.

source_document_prompt: Optional[str]#
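
A minimal end-to-end sketch, assuming a configured Steamship API key, an existing workspace whose embedding index already contains your documents, and that the AgentContext/LLM wiring shown (AgentContext.get_or_create, with_llm, OpenAI) matches your SDK version; names such as "my-docs-index", "my-qa-workspace", and "qa-demo" are hypothetical:

# Minimal sketch: answer a question against a pre-populated embedding index.
# Assumes STEAMSHIP_API_KEY is configured and that documents were previously
# embedded into the index named below.
from steamship import Block, Steamship
from steamship.agents.llms import OpenAI
from steamship.agents.schema import AgentContext
from steamship.agents.tools.question_answering import VectorSearchQATool
from steamship.agents.utils import with_llm

tool = VectorSearchQATool(
    embedding_index_instance_handle="my-docs-index",  # hypothetical handle
    load_docs_count=3,  # fetch the top 3 source documents per question
)

client = Steamship(workspace="my-qa-workspace")  # hypothetical workspace handle

# get_embedding_index fetches (or creates) the index instance this tool
# queries; it can also be used to inspect or populate the index beforehand.
index = tool.get_embedding_index(client)

# Context setup is an assumption; exact AgentContext/LLM wiring may differ
# by SDK version.
context = AgentContext.get_or_create(client, context_keys={"id": "qa-demo"})
context = with_llm(context, OpenAI(client=client))

# run() may return a List[Block] of answers (or a Task, per its signature).
answers = tool.run([Block(text="What does the handbook say about vacation policy?")], context)
for block in answers:
    print(block.text)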

Module contents#

class steamship.agents.tools.question_answering.PromptDatabaseQATool(facts: Optional[List[str]] = None, *, name: str = 'PromptDatabaseQATool', agent_description: str = 'Used to answer questions about the number of subway stations in US cities. The input is the question about subway stations. The output is the answer as a sentence.', human_description: str = 'Answers questions about the number of subway stations in US cities.', rewrite_prompt: str = 'Instructions:\nPlease rewrite the following passage to be incredibly polite, to a fault.\nPassage:\n{input}\nRewritten Passage:', question_answering_prompt: Optional[str] = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {{input}}\n\nHelpful Answer:")[source]#

Bases: TextRewritingTool

Example tool to illustrate how one can create a tool with a mini database embedded in a prompt.

To use:

tool = PromptDatabaseQATool(
    facts=[
        "Sentence with fact 1",
        "Sentence with fact 2",
    ],
    agent_description=(
        "Used to answer questions about SPECIFIC_THING. "
        "The input is the question and the output is the answer."
    ),
)

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

facts: List[str]#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

question_answering_prompt: Optional[str]#
rewrite_prompt: str#
class steamship.agents.tools.question_answering.VectorSearchQATool(*, name: str = 'VectorSearchQATool', agent_description: str = ('Used to answer questions. ', 'The input should be a plain text question. ', 'The output is a plain text answer'), human_description: str = 'Answers questions with help from a Vector Database.', embedding_index_handle: Optional[str] = 'embedding-index', embedding_index_version: Optional[str] = None, question_answering_prompt: Optional[str] = "Use the following pieces of memory to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{source_text}\n\nQuestion: {question}\n\nHelpful Answer:", source_document_prompt: Optional[str] = 'Source Document: {text}', embedding_index_config: Optional[dict] = {'embedder': {'config': {'dimensionality': 1536, 'model': 'text-embedding-ada-002'}, 'fetch_if_exists': True, 'plugin_handle': 'openai-embedder', 'plugin_instance-handle': 'text-embedding-ada-002'}}, load_docs_count: int = 2, embedding_index_instance_handle: str = 'default-embedding-index')[source]#

Bases: Tool

Tool to answer questions with the assistance of a vector search plugin.

agent_description: str#

Description for use in an agent in order to enable Action selection. It should include a short summary of what the Tool does, what the inputs to the Tool should be, and what the outputs of the tool are.

answer_question(question: str, context: AgentContext) List[Block][source]#
embedding_index_config: Optional[dict]#
embedding_index_handle: Optional[str]#
embedding_index_instance_handle: str#
embedding_index_version: Optional[str]#
get_embedding_index(client: Steamship) EmbeddingIndexPluginInstance[source]#
human_description: str#

Human-friendly description. Used for logging, tool indices, etc.

load_docs_count: int#
name: str#

The short name for the tool. This will be used by Agents to refer to this tool during action selection.

question_answering_prompt: Optional[str]#
run(tool_input: List[Block], context: AgentContext) Union[List[Block], Task[Any]][source]#

Answers questions with the assistance of an Embedding Index plugin.

Inputs#

tool_input: List[Block]

A list of text-containing blocks, each holding a question to answer.

context: AgentContext

The active AgentContext.

Output#

output: List[Block]

A list of blocks containing the answers.

source_document_prompt: Optional[str]#