Use Case: HR Demo

Overview

This use case demonstrates how you can use the GenAI Builder Snaps to create an HR Q&A example that answers questions from an employee handbook.

Problem

Organizations face the challenge of efficiently answering staff questions about company policies, benefits, and related matters. Employee handbooks and HR documents already contain the answers, and using them to respond automatically can streamline the process, but building such a system raises integration challenges that can affect the accuracy and accessibility of the information.

Solution

Developing an AI-driven system that integrates with existing employee handbooks and HR documents addresses this challenge by providing automatic answers to staff inquiries about company policies, benefits, and more.

Understanding the Solution

This use case demonstrates how to use the GenAI Builder Snaps to create an HR Q&A pipeline integration with a chatbot that answers questions from an employee handbook. This example consists of two pipelines:

  • Indexer: Loads data from the source document. This pipeline extracts text from a PDF file of an employee handbook, breaks it into chunks, generates embeddings for each chunk using Azure OpenAI, and indexes the embeddings in Pinecone for search.
  • Retriever: Retrieves data to answer questions. This pipeline connects to an AI chatbot using Azure OpenAI services. It takes in a question, generates embeddings using the Azure OpenAI Embedder Snap, queries similar embeddings in Pinecone, constructs a context for the Azure OpenAI Prompt Generator Snap, and passes it to the Azure OpenAI Chat Completions Snap to generate a response.
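
Both pipelines rely on the same underlying idea: text is converted into embedding vectors, and the handbook chunks whose vectors are most similar to the question's vector supply the context for the answer. The following Python sketch only illustrates that idea with made-up three-dimensional vectors; real embeddings have many more dimensions, and Pinecone performs the similarity search for you.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity score in [-1, 1]; higher means the texts are more related."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings; in the pipelines they come from the Azure OpenAI Embedder Snap.
    question_vec = np.array([0.12, 0.98, 0.05])
    chunk_vecs = {
        "payroll policy chunk": np.array([0.10, 0.95, 0.07]),
        "dress code chunk": np.array([0.90, 0.10, 0.30]),
    }

    # Rank handbook chunks by similarity to the question, most relevant first.
    ranked = sorted(chunk_vecs, key=lambda k: cosine_similarity(question_vec, chunk_vecs[k]), reverse=True)
    print(ranked)  # ['payroll policy chunk', 'dress code chunk']
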
Note:

If you cannot find the patterns in the Pattern Library, you can download them from the links in the steps below.

Indexer pipeline

In this example, the Indexer pipeline loads data into the vector database by populating the Pinecone index.
Note: Loading data into the vector database is necessary only when updates to the source files could impact the responses to queries. For example, in this HR example, you should replace the uploaded employee handbook source document if there are policy changes so that the answers to questions are accurate.

Indexer pipeline

Note: The following steps assume you have already imported the pipelines or are building them as you go through the steps.
  1. Import the Indexer pattern pipeline. (Download this pipeline)
  2. Configure the File Reader Snap to read the source document that will provide the answers to users' questions. In the File field, upload the source document. In this example, we uploaded an employee handbook.

    File Reader Snap Configuration

  3. Leave the PDF Parser Snap with default settings.
  4. Configure the Chunker Snap with the settings in the following screenshot.
    Chunker Snap Configuration

  5. Open the Azure OpenAI Embedder Snap and click on the Account tab to create a new account.
    1. Enter a meaningful name for the account and replace myendpoint with the relevant name for your Azure OpenAI endpoint.
    2. Validate and save your new account. If the account setup is successful, return to the Settings tab of the Snap.

      Azure OpenAI API Key Account Configuration

  6. Open the Azure OpenAI Embedder Snap and configure the settings as follows.

    Azure OpenAI Embedder Snap Configuration

    Note: Alternatively, you can use the OpenAI Embedder Snap, but you might need to replace the Embedder Snap in the pattern or import the pattern appropriate for the account you have.
  7. Configure the Mapper Snap to map the output of the embedder Snap to the Pinecone Upsert Snap (the sketch after these steps shows the resulting vector record in code).
    1. Map the $embedding object from the Embedder Snap to the $values object in Pinecone.
    2. Map the metadata object in Pinecone by mapping $original.chunk to $metadata.chunk.
    3. Set $metadata.source to Employee Handbook.pdf so that the Retriever pipeline can return the source file used to answer a question.
      Mapper Snap Configuration

  8. Open the Pinecone Upsert Snap and click the Account tab to create a new account with your Pinecone API Key. Validate it to make sure it works.
    1. Save your account and return to the Settings tab.
    2. Configure the Pinecone Upsert Snap to retrieve your existing indexes in Pinecone and select the one to use.
      Note: You cannot create an index on the fly, so make sure it already exists in Pinecone (or create one with the Pinecone web interface).
    3. Optional. Set your Namespace. Namespaces in Pinecone create a logical separation between vectors within an index and can be created on the fly during pipeline execution.
      Pinecone Upsert Snap Configuration

  9. Validate the pipeline to verify the data in the output preview. Execute the pipeline when you are ready to commit your data to Pinecone.
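
For reference, the following Python sketch approximates what the Indexer steps above accomplish. It is illustrative only, not the Snaps' implementation: the endpoint, API keys, embedding deployment ID, index name, and namespace are placeholders, and the chunk size and overlap are arbitrary.

    from pypdf import PdfReader      # roughly what the File Reader + PDF Parser Snaps do
    from openai import AzureOpenAI   # roughly what the Azure OpenAI Embedder Snap does
    from pinecone import Pinecone    # roughly what the Pinecone Upsert Snap does

    SOURCE = "Employee Handbook.pdf"

    # Read and parse the source document.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(SOURCE).pages)

    # Break the text into overlapping chunks (the Chunker Snap exposes these settings in its UI).
    chunk_size, overlap = 1000, 100
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size - overlap)]

    # Generate one embedding per chunk with an Azure OpenAI embedding deployment.
    aoai = AzureOpenAI(
        azure_endpoint="https://myendpoint.openai.azure.com",  # your endpoint
        api_key="<azure-openai-api-key>",
        api_version="2024-02-01",
    )
    embeddings = aoai.embeddings.create(model="<embedding-deployment-id>", input=chunks).data

    # Upsert vectors plus metadata, mirroring the Mapper Snap mappings:
    # $embedding -> values, $original.chunk -> metadata.chunk, file name -> metadata.source.
    pc = Pinecone(api_key="<pinecone-api-key>")
    index = pc.Index("<existing-index-name>")  # the index must already exist in Pinecone
    index.upsert(
        vectors=[
            {
                "id": f"{SOURCE}-{i}",
                "values": e.embedding,
                "metadata": {"chunk": chunks[i], "source": SOURCE},
            }
            for i, e in enumerate(embeddings)
        ],
        namespace="<optional-namespace>",
    )
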

Retriever pipeline

This example pipeline demonstrates how to answer questions using the data loaded into Pinecone from the Indexer pipeline.

  1. Import the Retriever pattern pipeline (Download the pipeline). Initially, the pipeline looks like this:

  2. This pattern is part of a series and includes a Snap that is unnecessary for this example, so remove the first Snap from the flow.
    1. Right-click on the HTTP Router Snap and choose Disable Snap.
    2. Click the circle between the HTTP Router and the Azure OpenAI Embedder Snap to disconnect them.
    3. Move the HTTP Router Snap out of the way on the canvas. Your pipeline should now look like this:


  3. In the Asset Palette on the left, search for the JSON Generator Snap, drag it onto the canvas, and connect it to the Azure OpenAI Embedder Snap.
  4. Configure the JSON Generator Snap as follows:
    1. Click on the Edit JSON button in the Settings tab.
    2. Highlight and delete all the text from the template.
    3. Replace "Your question here." with a question from the document you uploaded in your Indexer Pipeline. For instance, if you uploaded an employee handbook, you could ask, "When do I get paid?"


  5. Open the Azure OpenAI Embedder Snap.
    1. Click the Account tab and select the account you created in the Indexer example.
    2. Click the suggestion icon for the Deployment ID and choose the Deployment ID you used in the Indexer example.
    3. Set the Text to embed field to $prompt.


  6. Open the Mapper Snap connected to the Azure OpenAI Embedder Snap and configure it as shown below:

  7. Open the Pinecone Query Snap and click the Account tab to create a new account with your Pinecone API Key. Validate it to make sure it works.
  8. Configure the Pinecone Query Snap as shown below.


    Tip: Optional. In the Namespace field, select your existing namespace if you have created one.
  9. Configure the Mapper Snap connected to the Pinecone Query Snap as shown below.
    Note:
    • The $original JSON key appears when an upstream Snap has an implicit Pass through (in the previous Mapper Snap, we explicitly enabled Pass through). This lets you access the original JSON document from the upstream Snap.
    • If you are validating your pipeline along the way or using Dynamic Validation, the Target Schema does not display in the Mapper Snap until you complete the upcoming steps for the Azure OpenAI Prompt Generator Snap.


  10. Click on the Azure OpenAI Prompt Generator Snap to set up your prompt.
    1. Click the Edit prompt button and ensure the default prompt appears as shown below.


    2. Lines 4-6 use mustache templating, {{#context}} {{source}} {{/context}}, which renders the same data that jsonPath($, "$context[*].source") returns in the Mapper Snap we just configured. You can customize the prompt and include data in this way in your other pipelines (the sketch after these steps shows an equivalent prompt construction in code).
  11. Open the Azure OpenAI Chat Completions Snap.
    1. Click the Account tab and select the account you created in the Indexer example.
    2. Click the suggestion icon for the Deployment ID and choose a Deployment ID. This ID might differ from the one you chose in the previous Azure OpenAI Snap because you now need an LLM model instead of an embedding model.
    3. Set the Prompt field to $prompt.


  12. Validate the pipeline.
  13. Click the output preview of the Azure OpenAI Chat Completions Snap. The answer to the prompt is displayed under $choices[0].message.content as shown below.
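
For reference, the following Python sketch approximates what the Retriever steps above accomplish. As with the Indexer sketch, it is illustrative only: the deployment IDs, index name, and namespace are placeholders, and the prompt wording is a stand-in for the Prompt Generator Snap's mustache template.

    from openai import AzureOpenAI
    from pinecone import Pinecone

    aoai = AzureOpenAI(
        azure_endpoint="https://myendpoint.openai.azure.com",  # your endpoint
        api_key="<azure-openai-api-key>",
        api_version="2024-02-01",
    )
    pc = Pinecone(api_key="<pinecone-api-key>")
    index = pc.Index("<existing-index-name>")

    question = "When do I get paid?"  # the value supplied by the JSON Generator Snap

    # Embed the question (Azure OpenAI Embedder Snap).
    q_vec = aoai.embeddings.create(model="<embedding-deployment-id>", input=question).data[0].embedding

    # Retrieve the most similar handbook chunks (Pinecone Query Snap).
    matches = index.query(
        vector=q_vec, top_k=3, include_metadata=True, namespace="<optional-namespace>"
    ).matches

    # Build the context, similar to what the {{#context}} {{source}} {{/context}}
    # mustache section does in the Azure OpenAI Prompt Generator Snap.
    context = "\n".join(
        f"{m.metadata['chunk']} (source: {m.metadata['source']})" for m in matches
    )
    prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {question}"

    # Generate the answer (Azure OpenAI Chat Completions Snap).
    completion = aoai.chat.completions.create(
        model="<chat-deployment-id>",  # an LLM deployment, not the embedding deployment
        messages=[{"role": "user", "content": prompt}],
    )
    print(completion.choices[0].message.content)  # same path shown in the output preview
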


After configuring the Indexer and Retriever pipelines, the next step is to integrate them with your chatbot UI for functional deployment. This integration enables the chatbot to leverage the indexed and retrieved data, enhancing its conversational capabilities.