GenAI App Builder FAQ

Frequently Asked Questions

Question: Which chunkers are available in the Chunker Snap and which levels of chunking (character, phrase, or document) are supported?

Answer: We currently support a paragraph-based chunker (implementation in progress) that can split on a character limit or a token limit, and any other chunking strategy can be built on our platform through pipeline design.
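
For illustration, here is a minimal sketch in plain Python of paragraph-based chunking with a character limit. It is not the Chunker Snap itself, only an outline of the approach; the function name and the limit value are illustrative.

```python
def chunk_by_paragraph(text: str, max_chars: int = 1000) -> list[str]:
    """Split text on blank lines, then pack paragraphs into chunks
    no longer than max_chars characters each."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would exceed the limit.
        # (A single paragraph longer than max_chars still becomes its own chunk.)
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


if __name__ == "__main__":
    sample = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
    for i, chunk in enumerate(chunk_by_paragraph(sample, max_chars=40)):
        print(i, repr(chunk))
```

A token-limit variant works the same way, except the length check counts tokens (for example via a tokenizer for your embedding model) instead of characters.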

Question: Is there a choice among the embedding models provided by the LLM?

Answer: We support all OpenAI embedding models, and you can also use the Titan embedding models through the Amazon Bedrock Snap Pack.
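
For illustration, a minimal sketch of calling both embedding providers directly in Python, outside the platform. The model names shown (text-embedding-3-small, amazon.titan-embed-text-v1) are examples, and the code assumes OpenAI and AWS credentials are already configured in the environment.

```python
import json

import boto3                # pip install boto3
from openai import OpenAI   # pip install openai


def embed_with_openai(text: str, model: str = "text-embedding-3-small") -> list[float]:
    """Return an embedding vector for `text` using an OpenAI embedding model."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.embeddings.create(model=model, input=text)
    return response.data[0].embedding


def embed_with_titan(text: str, model_id: str = "amazon.titan-embed-text-v1") -> list[float]:
    """Return an embedding vector for `text` using an Amazon Titan model via Bedrock."""
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId=model_id,
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]
```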

Question: Since the search in the vector database can use different algorithms (such as cosine similarity or kNN), can these be chosen or are they implemented by default?

Answer: This depends on your vector database. Some allow you to change the search algorithm at the database level, while others do not.
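
For illustration, here is what the choice of search algorithm looks like when you control the index directly, using FAISS as a stand-in vector store; in a managed vector database the equivalent setting, if it is exposed at all, is typically chosen when the index or collection is created. The dimensionality and data below are placeholders.

```python
import numpy as np
import faiss  # pip install faiss-cpu

dim = 768  # embedding dimensionality (example value)

# Exact kNN with Euclidean (L2) distance.
l2_index = faiss.IndexFlatL2(dim)

# Exact kNN with inner product; normalize vectors first to get cosine similarity.
cosine_index = faiss.IndexFlatIP(dim)

# Approximate kNN via an HNSW graph (trades a little recall for speed).
hnsw_index = faiss.IndexHNSWFlat(dim, 32)

vectors = np.random.rand(1000, dim).astype("float32")
faiss.normalize_L2(vectors)  # required for cosine similarity with IndexFlatIP
cosine_index.add(vectors)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = cosine_index.search(query, 5)  # top-5 nearest neighbours
print(ids[0], scores[0])
```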

Question: When uploading source files (such as a PDF), is there a possibility to retrieve only the modified files and update the database?

Answer: Yes, but this depends on the implementation of your RAG approach. You can build a RAG-style ingestion pipeline that supports this, provided you store the data in a way that allows it to be updated or upserted in your vector database.
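
For illustration, a minimal sketch of detecting modified source files by content hash so that only changed files are re-chunked, re-embedded, and upserted. The manifest file name, the directory layout, and the upsert step are assumptions about a hypothetical setup, not platform behavior.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("ingest_manifest.json")  # remembers the hash of each file already ingested


def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def changed_files(source_dir: Path) -> list[Path]:
    """Return only the files that are new or modified since the last run."""
    seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    changed = []
    for path in sorted(source_dir.glob("**/*.pdf")):
        digest = file_hash(path)
        if seen.get(str(path)) != digest:
            changed.append(path)
            seen[str(path)] = digest
    MANIFEST.write_text(json.dumps(seen, indent=2))
    return changed


if __name__ == "__main__":
    for path in changed_files(Path("./sources")):
        # For each changed file: re-chunk, re-embed, and upsert into the vector
        # database using a stable ID (e.g. f"{path}:{chunk_index}") so existing
        # entries are overwritten instead of duplicated.
        print("needs re-ingestion:", path)
```

Keying each chunk by a stable ID (such as the file path plus chunk index) is what makes the upsert safe to repeat: unchanged files are skipped entirely, and changed files overwrite their previous entries instead of adding duplicates.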