Automatically detect and filter NSFW content from images using advanced machine learning models for a clean image processing pipeline.
The DetectorForNSFW node is designed to help you automatically detect and filter out Not Safe For Work (NSFW) content from images. It leverages machine learning models to identify explicit content, keeping your image processing pipeline clean and appropriate for all audiences. By integrating this node into your workflow, you can automatically replace detected NSFW content with a placeholder image or simply flag it for review, maintaining a safe and professional environment. This node is particularly useful for AI artists and developers who manage large volumes of images and need their content to adhere to community guidelines and standards.
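Conceptually, the node's behavior can be pictured as: run a detector over each image in the batch, compare the scores against per-class thresholds, and swap in a placeholder wherever anything exceeds them. The sketch below illustrates this flow; the function name and the detector callable are hypothetical, not the node's actual internals.

```python
import torch

def filter_batch(images, detector, thresholds, placeholder):
    """images: [B, H, W, C] float tensor in 0..1 (ComfyUI convention)."""
    out = images.clone()
    results = []
    for i in range(images.shape[0]):
        # detector is assumed to return [{"label": str, "score": float}, ...]
        detections = detector(images[i])
        flagged = [d for d in detections
                   if d["score"] >= thresholds.get(d["label"], 0.5)]
        if flagged:
            out[i] = placeholder  # replace the offending frame
        results.append({"nsfw": bool(flagged), "detections": flagged})
    return out, results
```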
The image input accepts the image, or a batch of images, that you want to analyze for NSFW content. Images should be in tensor format, the multi-dimensional array structure commonly used in machine learning. The quality and resolution of the input images can affect detection accuracy.
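For reference, ComfyUI image tensors are float32 arrays shaped [batch, height, width, channels] with values in 0..1. A quick way to build one from a file (the filename here is just an example):

```python
import numpy as np
import torch
from PIL import Image

img = Image.open("input.png").convert("RGB")
arr = np.asarray(img).astype(np.float32) / 255.0  # H x W x C, values 0..1
image = torch.from_numpy(arr).unsqueeze(0)        # 1 x H x W x C (batch of one)
```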
This optional parameter specifies the name of the model to be used for NSFW detection. If not provided, a default model will be used. The model name should correspond to a pre-trained NSFW detection model available in your system. The choice of model can affect the detection accuracy and performance.
The detect_size parameter defines the resolution at which images are processed for detection. The default value is 320, but you can adjust it to suit your needs: higher resolutions may yield more accurate results but require more computational resources.
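One way to picture what detect_size does is a resize step applied before inference, along these lines (an illustrative sketch, not the node's actual preprocessing):

```python
import torch.nn.functional as F

def resize_for_detection(images, detect_size=320):
    # interpolate expects [B, C, H, W], so permute in and back out
    x = images.permute(0, 3, 1, 2)
    x = F.interpolate(x, size=(detect_size, detect_size),
                      mode="bilinear", align_corners=False)
    return x.permute(0, 2, 3, 1)
```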
The provider parameter specifies the computational provider used to run the model. The default is "CPU", but you can select other providers such as "GPU" if available; using a GPU can significantly speed up detection.
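If the model runs through ONNX Runtime (an assumption; the docs do not name the inference backend), the "CPU"/"GPU" choice would map onto execution providers roughly like this:

```python
import onnxruntime as ort

def make_session(model_path, provider="CPU"):
    providers = (["CUDAExecutionProvider", "CPUExecutionProvider"]
                 if provider == "GPU"
                 else ["CPUExecutionProvider"])
    return ort.InferenceSession(model_path, providers=providers)
```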
The optional alternative_image parameter lets you specify a replacement image for any detected NSFW content. If not provided, a default placeholder (a plain white image) is used. This is useful for maintaining the visual integrity of your content while filtering out inappropriate material.
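The default white placeholder described above could be constructed like this; the helper name is illustrative:

```python
import torch

def default_placeholder(images):
    # images: [B, H, W, C]; return one all-white frame of matching size
    _, h, w, c = images.shape
    return torch.ones((h, w, c), dtype=images.dtype)
```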
Additional keyword arguments let you fine-tune the detection process. For example, you can set per-class confidence thresholds, such as female_genitalia_exposed=0.9, to control the sensitivity of the detection.
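Combining this with the hypothetical filter_batch sketch from the overview, per-class thresholds might be wired up like so:

```python
thresholds = {
    "female_genitalia_exposed": 0.9,  # only very confident detections count
}
# filtered, results = filter_batch(images, detector, thresholds, placeholder)
```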
This output parameter provides the processed images in tensor format. If NSFW content is detected, the corresponding images will be replaced with the alternative image or the default placeholder image. This allows you to seamlessly integrate the filtered images into your workflow.
This output parameter provides detailed information about the detection results in JSON format. It includes the detection results for each image, indicating whether NSFW content was found and providing confidence scores for the detected content. This information can be used for logging, auditing, or further processing.
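For example, the JSON output could be consumed for logging along these lines; the schema shown (per-image entries with an "nsfw" flag and scored detections) is an assumption based on the description above:

```python
import json

# detect_result: the node's JSON output string
for i, entry in enumerate(json.loads(detect_result)):
    status = "NSFW detected" if entry.get("nsfw") else "clean"
    print(f"image {i}: {status}", entry.get("detections", []))
```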
Usage tips:
- Adjust the detect_size parameter based on your computational resources and the required accuracy.
- Choose the provider that optimizes detection speed, especially if you have access to a GPU.
- Supply an alternative_image to maintain the visual consistency of your content when NSFW content is detected.
- If you run into memory or performance issues, set the detect_size parameter to a lower value that your system can handle.