The Remove Tool uses generative AI to eliminate unwanted objects from your images. By analyzing the surrounding context, it not only erases the object but also generates a background that blends with the rest of the image.
You can find the Remove feature here:
Currently, the Remove Tool can only be applied to the final result of all the other filters. Once the Remove Tool has been applied, the other filters are locked; to change them, you must first discard the Remove Tool's edits. Switching images in a batch also requires discarding those edits.
Due to the resource-intensive nature of this feature, it is currently not supported on Macs with AMD GPUs. It also does not work on macOS 12 (Monterey) and earlier.
The Remove Tool must be enabled the first time it's used on a Mac after installation or an update. A popup will appear, and initialization takes 3-5 minutes; after that, the tool is ready to use. No setup is required on Windows.
The Remove Tool uses largely the same brush settings as the Sharpen filter. The settings in the panel let you add to or subtract from the mask, or change the brush size. The bracket keys [ ] also adjust the brush size.
The mask controls restore or clear masks between iterations. Restoring the mask reapplies the previous mask used for processing.
Speed - Quality Slider
The Remove Tool's performance depends on the computer's system profile and the selected editing priority. Prioritizing speed conserves resources and reduces processing steps on weaker computers, while prioritizing quality yields optimal results when hardware performance is not a constraint.
Padding defines the mask margin for context. Max padding is ideal for removing large objects in complex scenes, such as removing a person from a crowd, as it gives the AI model enough context to replace them appropriately. On the other hand, use no padding for small objects on uniform backgrounds, like removing a pen from a white tabletop, where the model can extrapolate surrounding colors for a seamless result.
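Conceptually, padding expands the area around the mask that the model sees as context. The sketch below is purely illustrative (the function name and bounding-box representation are assumptions, not part of the product) and shows how a padded context region might be computed and clamped to the image bounds:

```python
def context_region(mask_bbox, image_size, padding):
    """Expand a mask bounding box by `padding` pixels on each side,
    clamped to the image bounds, to define the context area.
    mask_bbox = (x0, y0, x1, y1); image_size = (width, height).
    Hypothetical helper for illustration only."""
    x0, y0, x1, y1 = mask_bbox
    w, h = image_size
    return (max(0, x0 - padding), max(0, y0 - padding),
            min(w, x1 + padding), min(h, y1 + padding))

# Larger padding pulls in more of the surrounding scene:
print(context_region((400, 300, 600, 500), (1920, 1080), 256))
# → (144, 44, 856, 756)
```

With no padding, the context region is just the mask's own bounding box, which is why that setting suits small objects on uniform backgrounds.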
In the "Erase Area" mode (default) the AI model pre-fills the masked region with colors from the surrounding pixels. When you opt for the "Keep Area" mode, the masked region remains unedited before processing and the outcome is determined solely by our training data. This choice can have an impact on the colors of the final result.
If results from the first try are not ideal, reprocessing generally improves the quality. This may not be intuitive, but the generative model will produce different and often improved options. Also experiment with different padding and guidance settings; this alters the image information the model receives and thus generates different results.
Size & Distance
Keep mask sizes below ¼ of the total image size. Process each object individually when dealing with multiple objects requiring separate masks. Ensure that the maximum distance between masked objects does not exceed 2000 pixels.
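As a rough illustration, the rules of thumb above can be expressed as simple checks. The helper below is hypothetical (its name and the mask representation are assumptions, not part of the product):

```python
import math

def check_mask_guidelines(masks, image_area, max_distance=2000):
    """Each mask is (area_in_pixels, (cx, cy)) with cx, cy the mask center.
    Returns warnings for masks that break the rules of thumb:
    keep each mask under 1/4 of the image area, and keep masked
    objects within `max_distance` pixels of each other."""
    warnings = []
    for area, _ in masks:
        if area > image_area / 4:
            warnings.append("a mask covers more than 1/4 of the image")
    centers = [c for _, c in masks]
    for i, (x1, y1) in enumerate(centers):
        for (x2, y2) in centers[i + 1:]:
            if math.hypot(x2 - x1, y2 - y1) > max_distance:
                warnings.append(f"masked objects more than {max_distance}px apart")
    return warnings

# Two small masks 3000px apart trigger the distance warning:
print(check_mask_guidelines([(100, (0, 0)), (100, (3000, 0))], 4_000_000))
# → ['masked objects more than 2000px apart']
```

When a warning would fire, split the work: process each object individually with its own mask rather than masking everything in one pass.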
For best results, mask the entire object rather than a partial section, and include the object's shadows and reflections. If those are not removed, especially where they touch the object, the AI model may attempt to replace the object with something similar.