HR Demo
Overview
This use case demonstrates how you can use the GenAI Builder Snaps to create an HR Q&A example that answers questions from an employee handbook.
Problem
Organizations need an efficient way to answer staff questions about company policies, benefits, and related matters. Employee handbooks and HR documents already contain the answers, so generating responses from them automatically is an opportunity to streamline this process, but building such a system raises integration challenges that affect the accuracy and accessibility of the data.
Solution
An AI-driven system that integrates with existing employee handbooks and HR documents can automatically answer staff inquiries about company policies, benefits, and related matters, using those documents as the source of the answers.
Understanding the Solution
This use case demonstrates how to use the GenAI Builder Snaps to create an HR Q&A pipeline integration with a chatbot that answers questions from an employee handbook. This example consists of two pipelines:
- Indexer: Loads data from the source document. This pipeline extracts text from a PDF file of an employee handbook, breaks it into chunks, generates embeddings for each chunk using Azure OpenAI, and indexes the embeddings in Pinecone for search.
- Retriever: Retrieves data to answer questions. This pipeline connects to an AI chatbot using Azure OpenAI services. It takes in a question, generates embeddings using the Azure OpenAI Embedder Snap, queries similar embeddings in Pinecone, constructs a context for the Azure OpenAI Prompt Generator Snap, and passes it to the Azure OpenAI Chat Completions Snap to generate a response.
This example requires:
- Access to a Pinecone instance (sign up for a free account at https://www.pinecone.io) with an existing index
- Valid Azure OpenAI API Key Account or OpenAI API Key Account to access the resources from Azure OpenAI or OpenAI
- A source document to upload, such as an employee handbook
- Download the patterns from the SnapLogic Public Pattern Library
If you cannot find the patterns in the Pattern Library, you can download them from here:
- Indexer pipeline: Download this pipeline.
- Retriever pipeline: Download this pipeline.
Indexer pipeline
In this example, we load data into a vector database using the Indexer pipeline, which populates the Pinecone index. A conceptual Python sketch of the chunk, embed, and upsert steps appears after this procedure.
- Import the Indexer pattern pipeline. (Download this pipeline)
- Configure the File Reader Snap to read a source document that will provide the answers to questions from users. In the File field, upload the source document. In this example, we uploaded an employee handbook.
- Leave the PDF Parser Snap with default settings.
- Configure the Chunker Snap with the settings in the following screenshot.
- Open the Azure OpenAI Embedder Snap and click the Account tab to create a new account.
- Enter a meaningful name for the account and replace myendpoint with the relevant name for your Azure OpenAI endpoint.
- Validate and save your new account. If the account setup is successful, return to the Settings tab of the Snap.
- Open the Azure OpenAI Embedder Snap and configure the settings as follows.
Note: Alternatively, you can use the OpenAI Embedder Snap, but you might need to replace the Embedder in the Pattern or import the appropriate one for which you have an account.
- Configure the Mapper Snap to map the output of the Embedder Snap to the Pinecone Upsert Snap.
- Map the $embedding object from the Embedder Snap to the $values object in Pinecone.
- Map the metadata object in Pinecone by mapping $original.chunk to $metadata.chunk.
- Map $metadata.source to Employee Handbook.pdf. This allows the Retriever pipeline to return the source file used to answer a question.
- Open the Pinecone Upsert Snap and click the Account tab to create a new account with your Pinecone API Key. Validate it to make sure it works.
- Save your account and return to the Settings tab.
- Configure the Pinecone Upsert Snap to retrieve existing indexes in Pinecone.
Note: You cannot create an index on the fly, so make sure it already exists in Pinecone (or create one with the Pinecone web interface).
- Optional. Set your Namespace. Namespaces in Pinecone create a logical separation between vectors within an index and can be created on the fly during pipeline execution.
- Validate the pipeline to verify the data in the output preview. Execute the pipeline when you are ready to commit your data to Pinecone.
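The sketch below is a rough Python equivalent of what the Chunker, Azure OpenAI Embedder, Mapper, and Pinecone Upsert Snaps do in this pipeline. It is not part of the SnapLogic pipeline itself; it assumes the openai and pinecone Python packages, and the endpoint, API keys, embedding deployment ID, index name (hr-handbook), and namespace (hr-demo) are placeholders rather than values from this demo.

```python
# Conceptual sketch only -- the real work is done by the SnapLogic Snaps above.
# Assumes the `openai` and `pinecone` Python packages; all credentials, the
# deployment ID, the index name, and the namespace below are placeholders.
from openai import AzureOpenAI
from pinecone import Pinecone

client = AzureOpenAI(
    api_key="<azure-openai-api-key>",
    api_version="2024-02-01",
    azure_endpoint="https://myendpoint.openai.azure.com",
)
pc = Pinecone(api_key="<pinecone-api-key>")
index = pc.Index("hr-handbook")  # the index must already exist in Pinecone


def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split the extracted handbook text into overlapping chunks (Chunker Snap)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


handbook_text = "...text extracted from Employee Handbook.pdf by the PDF Parser..."
vectors = []
for i, chunk in enumerate(chunk_text(handbook_text)):
    # Azure OpenAI Embedder Snap: one embedding vector per chunk.
    embedding = client.embeddings.create(
        model="text-embedding-ada-002",  # your embedding deployment ID
        input=chunk,
    ).data[0].embedding
    # Mapper Snap: $embedding -> $values, $original.chunk -> $metadata.chunk,
    # plus a fixed $metadata.source so the Retriever can cite the file.
    vectors.append({
        "id": f"chunk-{i}",
        "values": embedding,
        "metadata": {"chunk": chunk, "source": "Employee Handbook.pdf"},
    })

# Pinecone Upsert Snap: write the vectors, optionally into a namespace.
index.upsert(vectors=vectors, namespace="hr-demo")
```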
Retriever pipeline
This example pipeline demonstrates how to answer questions using the data loaded into Pinecone by the Indexer pipeline. A conceptual Python sketch of the query, prompt, and chat-completion steps appears after this procedure.
- Import the Retriever pattern pipeline (Download the pipeline). Initially, the pipeline looks like this:
- This pattern is part of a series, so it includes a Snap that is unnecessary for this example. Remove the first Snap from the pipeline:
- Right-click on the HTTP Router Snap and choose Disable Snap.
- Click the circle between the HTTP Router and the Azure OpenAI Embedder Snap to disconnect them.
- Move the HTTP Router Snap out of the way on the canvas. Your pipeline should now look like this:
- In the Asset Palette on the left, search for the JSON Generator Snap, drag it onto the canvas, and connect it to the Azure OpenAI Embedder Snap.
- Configure the JSON Generator Snap as follows:
- Click on the Edit JSON button in the Settings tab.
- Highlight and delete all the text from the template.
- Replace "Your question here." with a question from the document you uploaded in your Indexer pipeline. For instance, if you uploaded an employee handbook, you could ask, "When do I get paid?"
- Open the Azure OpenAI Embedder Snap.
- Click the Account tab and select the account you created in the Indexer example.
- Click the suggestion icon for the Deployment ID and choose the Deployment ID you previously chose in the Indexer example.
- Set the Text to embed field to $prompt.
- Open the Mapper Snap connected to the Azure OpenAI Embedder Snap and configure it as shown below:
- Open the Pinecone Query Snap and click the Account tab to create a new account with your Pinecone API Key. Validate it to make sure it works.
- Configure the Pinecone Query Snap as shown below.
Tip: Optional. In the Namespace field, select your existing namespace if you have created one.
- Configure the Mapper Snap connected to the Pinecone Query Snap as shown below.
Note:
- The $original JSON key displays when an upstream Snap has an implicit Pass through (in the previous Mapper Snap, we explicitly enabled Pass through). This lets you access the original JSON document from the upstream Snap.
- If you are validating your pipeline along the way or using Dynamic Validation, the Target Schema does not display in the Mapper Snap until you complete the upcoming steps for the Azure OpenAI Prompt Generator Snap.
- Click the Azure OpenAI Prompt Generator Snap to set up your prompt.
- Click Edit prompt and ensure your default prompt appears as shown below.
- On lines 4-6 there is mustache templating with the values {{#context}} {{source}} {{/context}}, which is equivalent to the jsonPath($, "$context[*].source") expression from the Mapper Snap that we just configured. You can customize the prompt and include data in this manner for your other pipelines going forward.
- Open the Azure OpenAI Chat Completions Snap.
- Click the Account tab and select the account you created in the Indexer example.
- Click the suggestion icon for the Deployment ID and choose a Deployment ID. This ID might differ from the one you chose in the previous Azure OpenAI Snap because you now need to select an LLM model instead of an embedding model.
- Set the Prompt field to $prompt.
- Validate the pipeline.
- Click the output preview of the Azure OpenAI Chat Completions Snap. The answer to the prompt is displayed under $choices[0].message.content, as shown below.
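The sketch below is a rough Python equivalent of the Retriever pipeline: it embeds the question, queries Pinecone for similar chunks, builds a context-grounded prompt (the role the mustache template plays in the Prompt Generator Snap), and requests a chat completion. It is not part of the SnapLogic pipeline; the credentials, deployment IDs, index name, and namespace are the same placeholders used in the Indexer sketch, and the prompt text is illustrative rather than the pattern's exact default prompt.

```python
# Conceptual sketch only -- the real work is done by the SnapLogic Snaps above.
# Assumes the `openai` and `pinecone` Python packages; credentials, deployment
# IDs, the index name, and the namespace are placeholders.
from openai import AzureOpenAI
from pinecone import Pinecone

client = AzureOpenAI(
    api_key="<azure-openai-api-key>",
    api_version="2024-02-01",
    azure_endpoint="https://myendpoint.openai.azure.com",
)
index = Pinecone(api_key="<pinecone-api-key>").Index("hr-handbook")

question = "When do I get paid?"  # the $prompt produced by the JSON Generator Snap

# Azure OpenAI Embedder Snap: embed the question with the same deployment
# that the Indexer pipeline used.
query_vector = client.embeddings.create(
    model="text-embedding-ada-002",
    input=question,
).data[0].embedding

# Pinecone Query Snap: retrieve the most similar handbook chunks.
results = index.query(
    vector=query_vector,
    top_k=5,
    include_metadata=True,
    namespace="hr-demo",
)

# Mapper + Azure OpenAI Prompt Generator Snaps: the {{#context}} ... {{/context}}
# mustache section iterates over the matches, inlining each chunk and its source.
context = "\n".join(
    f'{m.metadata["chunk"]} (source: {m.metadata["source"]})' for m in results.matches
)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)

# Azure OpenAI Chat Completions Snap: the answer appears under
# $choices[0].message.content in the Snap's output preview.
response = client.chat.completions.create(
    model="gpt-35-turbo",  # your chat (LLM) deployment ID, not the embedding one
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```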