Upscale latent images with various scaling methods for high-quality AI image generation.
The LatentPixelScale node is designed to upscale latent images, the intermediate representations used in AI image generation. It raises the resolution of these latents using a choice of scaling methods while preserving quality and detail. By combining traditional interpolation with optional model-based upscaling, the node gives you flexibility and control over the upscaling process, which is particularly useful when you want higher-resolution results from generated images without sacrificing quality. It also supports a tiled VAE (Variational Autoencoder) for memory-efficient processing of large images.
samples: This parameter holds the latent images you want to upscale. It is the core input of the node and is required for the upscaling process.
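For orientation, latents in ComfyUI are conventionally passed as a dictionary whose "samples" entry is a tensor; the exact shape shown below (batch, 4 channels, height/8, width/8 for a 512x512 SD-style image) is an assumption for illustration, not a guarantee of what this node receives.

```python
import torch

# Assumed latent layout: a dict with a "samples" tensor of shape
# [batch, channels, height/8, width/8]; 1x4x64x64 corresponds to a
# 512x512 SD-style image. Illustrative only.
latent = {"samples": torch.zeros(1, 4, 64, 64)}
print(latent["samples"].shape)  # torch.Size([1, 4, 64, 64])
```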
scale_method: This parameter selects the method used to upscale the latent images. The available options are nearest-exact, bilinear, lanczos, and area. Each method trades quality against computational cost: nearest-exact is fast but can produce blocky results, while lanczos gives high-quality results at greater computational expense.
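As a rough illustration of how these methods differ in practice (this is not the node's actual implementation), the first three correspond directly to torch interpolation modes, while lanczos typically requires a separate implementation such as PIL's resampling:

```python
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)   # hypothetical latent tensor
new_size = (96, 96)                  # target latent size after scaling

# nearest-exact, bilinear, and area map directly onto torch modes.
for mode in ("nearest-exact", "bilinear", "area"):
    kwargs = {"align_corners": False} if mode == "bilinear" else {}
    out = F.interpolate(latent, size=new_size, mode=mode, **kwargs)
    print(f"{mode:>13}: {tuple(out.shape)}")

# lanczos has no torch.nn.functional.interpolate mode; implementations
# usually fall back to something like PIL's Image.LANCZOS resampling.
```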
scale_factor: This parameter defines the factor by which the latent images are upscaled. It is a floating-point value with a default of 1.5, a minimum of 0.1, and a maximum of 10000. Adjusting it controls the degree of upscaling applied to the latent images.
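A quick worked example of what the factor means for sizes; the rounding shown here is an assumption, since the node's exact rounding rule is not documented in this section:

```python
def scaled_size(width, height, scale_factor):
    # Rounding to whole latent cells is an assumption; the node's exact
    # rounding behaviour may differ.
    return round(width * scale_factor), round(height * scale_factor)

# A 64x64 latent (a 512x512 image after the VAE's 8x downsampling) scaled
# by the default factor of 1.5 becomes a 96x96 latent, i.e. roughly a
# 768x768 image once decoded.
print(scaled_size(64, 64, 1.5))   # (96, 96)
print(scaled_size(64, 64, 0.1))   # (6, 6), the minimum allowed factor
```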
vae: This parameter specifies the Variational Autoencoder (VAE) model used for encoding and decoding the latent images. The VAE is needed to convert latent representations into pixel space and back.
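A minimal sketch of why the VAE matters for pixel-space upscaling, assuming a generic VAE wrapper with decode and encode methods operating on [batch, channels, height, width] tensors; the real interface may differ:

```python
import torch.nn.functional as F

def upscale_via_pixels(latent, vae, scale_factor):
    """Decode -> resize -> re-encode. Assumes vae.decode() returns a
    [batch, channels, height, width] image tensor and vae.encode() accepts
    one; both names are illustrative, not the node's actual API."""
    pixels = vae.decode(latent)                          # latent -> pixel space
    h, w = pixels.shape[-2], pixels.shape[-1]
    target = (round(h * scale_factor), round(w * scale_factor))
    pixels = F.interpolate(pixels, size=target, mode="bilinear",
                           align_corners=False)
    return vae.encode(pixels)                            # pixel -> latent space
```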
use_tiled_vae: This boolean parameter determines whether a tiled VAE is used for processing. When enabled, large images are handled efficiently by decoding and encoding them in smaller tiles. The default value is False, with the labels enabled and disabled indicating its state.
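The idea behind tiled processing can be sketched as follows; this is a simplified illustration with a stand-in decode_fn, and it omits the tile overlap and blending a production tiled VAE would use to hide seams:

```python
import torch

def decode_in_tiles(latent, decode_fn, tile=64, upscale=8):
    """Decode a latent tensor tile by tile to keep peak memory low.
    decode_fn is a hypothetical stand-in that maps a latent tile to a pixel
    tile `upscale` times larger in each dimension."""
    b, c, h, w = latent.shape
    out = torch.zeros(b, 3, h * upscale, w * upscale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = latent[:, :, y:y + tile, x:x + tile]
            ph, pw = patch.shape[-2], patch.shape[-1]
            out[:, :, y * upscale:(y + ph) * upscale,
                      x * upscale:(x + pw) * upscale] = decode_fn(patch)
    return out
```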
upscale_model_opt: This optional parameter lets you supply an upscale model for model-based upscaling. When provided, the node uses this model to enhance the resolution of the latent images, potentially producing better quality than traditional interpolation methods.
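A hedged sketch of the model-based path, assuming upscale_model is a callable network (for example an ESRGAN-style model with a fixed 2x or 4x ratio) and vae exposes decode and encode; all names and signatures here are illustrative rather than the node's actual code:

```python
import torch.nn.functional as F

def model_based_latent_upscale(latent, vae, upscale_model, scale_factor):
    """Decode to pixels, let the upscale network do the heavy lifting, snap
    the result to the exact target size, then re-encode to latent space."""
    pixels = vae.decode(latent)
    target = (round(pixels.shape[-2] * scale_factor),
              round(pixels.shape[-1] * scale_factor))
    pixels = upscale_model(pixels)                    # model-based enlargement
    if tuple(pixels.shape[-2:]) != target:            # fix up fixed-ratio models
        pixels = F.interpolate(pixels, size=target, mode="bilinear",
                               align_corners=False)
    return vae.encode(pixels)
```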
Latent output: The upscaled latent images. This is the primary result of the upscaling process and can be processed further or decoded into pixel space.
Image output: The upscaled images in pixel space. This is useful for previewing the final result and confirming that the upscaling achieved the desired quality and resolution.
Usage tips:
- Experiment with the different scale_method options to find the best balance between quality and performance for your use case.
- Adjust the scale_factor parameter to control the degree of upscaling. Higher values increase resolution but also require more computational resources.
- Enable use_tiled_vae for efficient processing of large images, especially when working with high-resolution outputs.
- Use the upscale_model_opt parameter to leverage model-based upscaling for potentially better quality results.

Common errors and solutions:
- ValueError: Invalid scale factor. Raised when the scale_factor parameter is set to a value outside the allowed range (0.1 to 10000). Ensure that scale_factor is within this range and adjust it accordingly.
- TypeError: Missing required parameter 'samples'. Raised when the samples parameter, which is required for the upscaling process, is not provided. Provide the samples parameter.
- RuntimeError: VAE model not specified. Raised when the vae parameter, which is necessary for encoding and decoding the latent images, is not supplied. Specify the vae parameter.
- AttributeError: 'NoneType' object has no attribute 'upscale'. Raised when upscale_model_opt is left as None but the node attempts model-based upscaling. Provide a valid upscale_model_opt or configure the node to use a traditional upscaling method.
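If you are driving this node programmatically, a few defensive checks in the spirit of the errors above can catch problems early; this is an illustrative sketch, not the node's real validation logic:

```python
def validate_inputs(samples=None, vae=None, scale_factor=1.5):
    """Mirror the error conditions listed above before running the node;
    purely illustrative, the actual node may validate differently."""
    if samples is None:
        raise TypeError("Missing required parameter 'samples'")
    if vae is None:
        raise RuntimeError("VAE model not specified")
    if not 0.1 <= scale_factor <= 10000:
        raise ValueError("Invalid scale factor")
    # The AttributeError listed above typically appears when code calls
    # upscale_model_opt.upscale(...) while upscale_model_opt is still None,
    # so check that input for None before enabling model-based upscaling.
```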