Facilitates text information extraction and retrieval using advanced embedding models for similarity searches.
The ebd_tool node is designed to extract and retrieve relevant information from a given text file based on a specific query. It leverages embedding models and vector stores to perform similarity searches, making it effective for tasks that require finding contextually relevant passages in large text datasets. By generating embeddings with HuggingFace models and storing them in a FAISS index, ebd_tool retrieves information accurately and efficiently. It is particularly useful for AI artists who need to quickly find and use specific information from extensive text files without manually sifting through the content.
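Functionally, the node behaves like a small retrieval pipeline: split the file into overlapping chunks, embed the chunks, index them in FAISS, and return the chunks most similar to the query. The sketch below illustrates that flow with LangChain's wrappers; it is not the node's actual source, and the helper name query_file and the exact classes used are assumptions.

```python
# Illustrative sketch of the ebd_tool flow (assumed LangChain-style APIs).
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

def query_file(file_content: str, query: str, model_path: str,
               k: int = 5, chunk_size: int = 200, chunk_overlap: int = 50) -> str:
    # Split the raw file text into overlapping chunks.
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    chunks = splitter.split_text(file_content)

    # Embed each chunk with a HuggingFace model and index the vectors in FAISS.
    embeddings = HuggingFaceEmbeddings(model_name=model_path)
    store = FAISS.from_texts(chunks, embeddings)

    # Retrieve the k chunks most similar to the query and join them.
    docs = store.similarity_search(query, k=k)
    return "\n\n".join(doc.page_content for doc in docs)
```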
path: This parameter specifies the path to the embedding model used to generate embeddings. The model path is crucial because it determines the quality and type of embeddings produced, which directly affects the accuracy of the similarity search. The default value is None.
This parameter controls whether the node is enabled or disabled. If set to disable, the node performs no operations, which is useful for temporarily turning it off without removing it from the workflow. The options are enable and disable, with the default being enable.
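In code terms, such a switch usually amounts to an early return before any embedding work happens. The sketch below is hypothetical, including the parameter name is_enable:

```python
def run(is_enable: str = "enable", **inputs):
    # Hypothetical guard: a disabled node skips all work and returns
    # an empty response so the rest of the workflow keeps running.
    if is_enable == "disable":
        return ("",)
    # ... embedding and retrieval would happen here ...
```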
file_content: This parameter contains the text content of the file to be processed. It is a required input; the node cannot function without it. The content is split into chunks that form the knowledge base for similarity searches.
k: This parameter defines the number of top similar documents to retrieve during the similarity search. A higher value of k returns more documents, which can be useful for comprehensive searches but may also include less relevant results. The default value is 5.
device: This parameter specifies the device used for computation. The options are auto, cuda, mps, and cpu. The auto option automatically selects the best available device. The default value is auto.
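The documentation does not say how auto is resolved to a concrete backend; a common pattern, offered here as an assumption, is a PyTorch availability check:

```python
import torch

def resolve_device(device: str = "auto") -> str:
    # Assumed resolution order: explicit choice, then CUDA, then MPS, then CPU.
    if device != "auto":
        return device
    if torch.cuda.is_available():           # NVIDIA GPU
        return "cuda"
    if torch.backends.mps.is_available():   # Apple Silicon GPU
        return "mps"
    return "cpu"
```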
chunk_size: This parameter determines the size of the text chunks into which the file content is split. Smaller chunks make searches more granular but may increase processing time. The default value is 200.
chunk_overlap: This parameter specifies the overlap between consecutive text chunks. Overlapping chunks help capture context that spans chunk boundaries, improving the accuracy of the similarity search. The default value is 50.
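To see how the two settings interact, the snippet below splits a sample string with the defaults (chunk_size=200, chunk_overlap=50) using the LangChain splitter assumed earlier; consecutive chunks share up to 50 characters, so a sentence that straddles a boundary survives intact in at least one chunk.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_text = " ".join(f"Sentence number {i} of the source file." for i in range(100))

splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=50)
chunks = splitter.split_text(long_text)

for i, chunk in enumerate(chunks[:3]):
    # Each chunk is at most 200 characters; neighbors overlap by up to 50.
    print(i, len(chunk), repr(chunk))
```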
This output parameter contains the response generated by the node, which includes the relevant information extracted from the file based on the input query. The response is formatted as a string and provides a summary of the most relevant content found in the file.
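Continuing the query_file sketch from above, a call might look like this; the file name, query, and model identifier are placeholders:

```python
response = query_file(
    file_content=open("notes.txt", encoding="utf-8").read(),  # placeholder file
    query="Which sampler settings were used for the portrait series?",
    model_path="sentence-transformers/all-MiniLM-L6-v2",      # example model, assumed
    k=5,
)
print(response)  # the 5 most relevant chunks, joined into one string
```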
Usage tips:
- Ensure the path parameter points to a valid and appropriate embedding model to achieve the best results.
- Adjust the chunk_size and chunk_overlap parameters to balance processing time against search granularity for the needs of your task.
- Use the k parameter to control the number of similar documents retrieved, making the results more or less comprehensive as needed.
- Select the appropriate device parameter to leverage available hardware acceleration, such as GPUs, for faster processing.

Troubleshooting:
- If the embedding model cannot be loaded, check the path parameter and ensure it points to a valid embedding model file.
- If the file_content parameter is empty or missing, the node cannot function; provide the text to be processed in the file_content parameter.
- If computation fails on the selected hardware, check the device parameter and ensure it is set to a supported option (auto, cuda, mps, or cpu).
- For other failures, review the chunk_size, chunk_overlap, and path parameters to ensure they are correctly configured and compatible with the input data.