Starting with iOS 17, SwiftUI lets you customize the rendering of UI elements using shader modifiers. A background in Metal and an understanding of its principles will help you use them more effectively. In this episode, I’m going to show how SwiftUI renders UI, how to use these modifiers (with examples), and what limitations they have.
First of all, to get a more complete picture, we need to slightly extend Core Animation’s scheme of UI rendering:

SwiftUI provides the following modifiers to customize its render pipeline:
- .drawingGroup bakes the grouped view into an image and treats it as a single layer.
- .colorEffect applies a color-modifying function to a layer’s subhierarchy.
- .distortionEffect applies a coordinate-modifying function to a layer’s subhierarchy.
- .layerEffect applies a function (with access to the entire previous layer’s state) to a layer’s subhierarchy.

.drawingGroup

Firstly, let’s look at an example where we have a complex view applying some distortion effects to its children, as well as a layer effect to the entire view:
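The exact layout from the screenshot isn’t listed in the article; a minimal sketch of such a hierarchy might look like this (the wave and layerShaderEmpty shader names match the ones used later in the text):

struct DistortedStack: View {
    var body: some View {
        VStack {
            ForEach(0..<3) { index in
                Button("Simple button \(index)", action: {})
                    .buttonStyle(.bordered)
                    .distortionEffect( // each child is distorted separately
                        ShaderLibrary.wave(.float(0), .float(10)),
                        maxSampleOffset: .zero)
            }
        }
        .layerEffect( // while the layer effect covers the whole stack
            ShaderLibrary.layerShaderEmpty(),
            maxSampleOffset: .zero)
    }
}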

What we can see here is that effects are applied to each element in the hierarchy separately (though coordinates are expressed in the space of the whole subhierarchy). However, if we group certain elements, they’re treated as one image. So, the .drawingGroup modifier bakes the subhierarchy into a single image, or layer, and any future modifications are applied to this single image rather than to each element of the subhierarchy.
Let’s examine a piece of code for a closer look at this particular modifier, where one button is processed as a piece of the subhierarchy while the other is prebaked into an image:
HStack {
    Button("Simple button", action: {})
        .buttonStyle(.bordered)
        .distortionEffect(
            ShaderLibrary.wave(.float(0), .float(10)),
            maxSampleOffset: .zero)
    Button("Simple button", action: {})
        .buttonStyle(.bordered)
        .drawingGroup()
        .distortionEffect(
            ShaderLibrary.wave(.float(0), .float(10)),
            maxSampleOffset: .zero)
}

The .drawingGroup modifier has two parameters:
- opaque: Bool = false indicates whether the baked image is opaque or transparent. An opaque image can be composited more quickly since it doesn’t require blending.
- colorMode: ColorRenderingMode = .nonLinear determines the color mode of the resulting image.

When to use:
- when subsequent effects should treat the subhierarchy as a single image rather than as separate elements;
- when a complex, mostly static subhierarchy can be rendered once and reused.

When NOT to use:
- on subhierarchies containing UIKit- or AppKit-backed views, which can’t be rendered into the flattened layer (see the limitations below);
- after background effects such as .ultraThinMaterial, since baking separates the view from the layout behind it.
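For example, if the grouped content is known to have no transparent pixels, you can hint that to the renderer. A minimal sketch (the solid background here is an assumption that makes opaque: true valid):

Button("Simple button", action: {})
    .buttonStyle(.bordered)
    .background(Color.white) // no transparency, so the baked image can be opaque
    .drawingGroup(opaque: true, colorMode: .linear)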
.colorEffect

This modifier applies a color effect to your elements: based on the input color and position, it computes the resulting color.
Let’s start with a simple Swift example:
Button("Simple button", action: {})
.buttonStyle(.bordered)
.visualEffect { content, geoProxy in // (1)
content // (2)
.colorEffect(ShaderLibrary.colorShaderA( // (3)
.float2(geoProxy.size.width, geoProxy.size.height))) // (4)
}
- .visualEffect lets us get the element’s size without using GeometryReader.
- colorShaderA is the name of our function in the .metal file.

Now we can go deeper into the Metal function (don't forget to include <SwiftUI/SwiftUI_Metal.h>):
#include <SwiftUI/SwiftUI_Metal.h>

[[ stitchable ]] // (1)
half4 colorShaderA( // (2)
    float2 position, // (3)
    half4 currentColor, // (4)
    float2 size // (5)
) {
    if (position.x < 0 || position.x > size.x) { // (6)
        return half4(0, 0, 1, 1);
    }
    return half4(half2(floor(position / 10) * 10 / size), 0, 1); // (7)
}
- position is the pixel’s location in subhierarchy space (this value might be modified by a distortion effect; see the Pipeline section below). This parameter is mandatory.
- currentColor is the pixel’s current color, also mandatory.
- size is our custom parameter. You can introduce as many custom parameters as you need, as shown below.
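On the Swift side, such parameters are passed through Shader.Argument wrappers like .float, .float2, and .color. A sketch with a hypothetical tintShader that takes an intensity and a tint color:

Button("Simple button", action: {})
    .buttonStyle(.bordered)
    .colorEffect(ShaderLibrary.tintShader( // hypothetical shader name
        .float(0.5),   // custom scalar parameter (e.g. an intensity)
        .color(.red))) // delivered to the shader as a half4

On the Metal side, the custom arguments arrive in the same order, after the mandatory position and currentColor parameters.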
.distortionEffect

This modifier changes the position from which the pixel value is sampled. As the value might be picked from outside the image, the modifier has an additional parameter:
Button("Simple button", action: {})
.buttonStyle(.bordered)
.drawingGroup() // (1)
.distortionEffect(ShaderLibrary.distortionShader( // (2)
.float(10)), // (3)
maxSampleOffset: CGSize(width: 10, height: 0) // (4)
)
- distortionEffect, which is implemented in our Metal function distortionShader.

#include <SwiftUI/SwiftUI_Metal.h>
[[ stitchable ]] // (1)
float2 distortionShader( // (2)
    float2 position, // (3)
    float shift // (4)
) {
    auto offset = float2(shift, 0.0) * sin(position.y * 0.25);
    return position + offset; // (5)
}
- The [[ stitchable ]] attribute, as before.
- The float2 return value: the new position from which the sampler should read the pixel. The function name must be the same as it is on the Swift side.
- position is the original location of the pixel.
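To animate a distortion, you can feed a time value into a custom parameter. A sketch using TimelineView (reusing shift as a time-varying value is my assumption, not the article’s):

struct AnimatedWave: View {
    private let startDate = Date()

    var body: some View {
        TimelineView(.animation) { context in // redraws every frame
            Button("Simple button", action: {})
                .buttonStyle(.bordered)
                .distortionEffect(
                    ShaderLibrary.distortionShader(
                        .float(context.date.timeIntervalSince(startDate))),
                    maxSampleOffset: CGSize(width: 10, height: 0))
        }
    }
}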
.layerEffect

You can think of this modifier as an extended combination of the previous two: it has access to the entire image of the elements and can sample pixels directly.
Image("test").resizable().scaledToFit()
.layerEffect(ShaderLibrary.layerShader(), // (1)
maxSampleOffset: .zero) // (2)
- layerEffect, implemented in our Metal function layerShader. This time it doesn’t have any custom parameters.

#include <SwiftUI/SwiftUI_Metal.h>
[[ stitchable ]] // (1)
half4 layerShader( // (2)
    float2 position, // (3)
    SwiftUI::Layer layer // (4)
    // ... // (5)
) {
    auto curPosition = position;
    auto delta = position - float2(200, 200);
    auto curDist = length(delta);
    if (curDist < 100) {
        curPosition += delta * (curDist / 100);
    }
    auto value = layer.sample(curPosition); // (6)
    return value; // (7)
}
- The [[ stitchable ]] attribute.
- The function returns a half4 color value.
- position is the pixel’s coordinates (a mandatory parameter).
- layer is the Core Animation layer from which we read pixel data.
- We sample the layer at curPosition.
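The bulge center is hard-coded to (200, 200) here. A sketch of making it interactive, assuming a hypothetical layerShaderWithCenter variant that takes the center as an extra float2 parameter:

struct InteractiveBulge: View {
    @State private var center = CGPoint(x: 200, y: 200)

    var body: some View {
        Image("test").resizable().scaledToFit()
            .layerEffect(
                ShaderLibrary.layerShaderWithCenter( // hypothetical variant
                    .float2(center.x, center.y)),
                maxSampleOffset: .zero)
            .gesture(DragGesture(minimumDistance: 0)
                .onChanged { center = $0.location }) // points, not pixels; fine for a sketch
    }
}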
Now that we’ve tried all these modifiers, we can perform several experiments to gain a better understanding of how everything works under the hood. First, we need to prepare a few Metal functions:
#include <metal_stdlib>
#include <SwiftUI/SwiftUI_Metal.h>

using namespace metal;

[[ stitchable ]] half4 colorShaderA(float2 position, half4 currentColor, float2 size) {
    if (position.x < 0 || position.x > size.x) {
        return half4(0, 0, 1, 1);
    }
    return half4(half2(floor(position / 100) * 100 / size), 0, 1);
}

[[ stitchable ]] half4 colorShaderB(float2 position, half4 currentColor) {
    return currentColor.bgra;
}

[[ stitchable ]] float2 distortionShader(float2 position) {
    return position + float2(20, 0.0) * sin(position.y * 0.05);
}

[[ stitchable ]] half4 layerShader(float2 position, SwiftUI::Layer layer) {
    auto curPosition = position;
    auto delta = position - float2(200, 200);
    auto curDist = length(delta);
    if (curDist < 100) {
        curPosition += delta * (curDist / 100);
    }
    auto value = layer.sample(curPosition);
    return value;
}

[[ stitchable ]] half4 layerShaderEmpty(float2 position, SwiftUI::Layer layer) {
    return layer.sample(position);
}
So now, by combining them in different orders, we can see how things work under the hood. Let’s start with this:
Image("test").resizable().scaledToFit()
.visualEffect { content, proxy in
content
.colorEffect(
ShaderLibrary.colorShaderA(
.float2(proxy.size.width,
proxy.size.height)))
.colorEffect(
ShaderLibrary.colorShaderB())
.distortionEffect(
ShaderLibrary.distortionShader(),
maxSampleOffset: .zero)
}

Here, we can see that color effects are combined with the distortion effect. Otherwise, we wouldn’t see these red areas:
- In colorShaderA, blue areas appear when the position is out of bounds.
- These blue areas then become red in colorShaderB, which swaps the color channels.

If we bake the image with an empty layer effect, we get the result as it would look if the color and distortion effects were not combined:
Image("test").resizable().scaledToFit()
.visualEffect { content, proxy in
content
.colorEffect(
ShaderLibrary.colorShaderA(
.float2(proxy.size.width,
proxy.size.height)))
.colorEffect(
ShaderLibrary.colorShaderB())
.layerEffect(
ShaderLibrary.layerShaderEmpty(),
maxSampleOffset: .zero)
.distortionEffect(
ShaderLibrary.distortionShader(),
maxSampleOffset: .zero)
}

If we try to combine distortion effects as shown below, we find that they aren’t stacked; after applying each, the result is baked:
Image("test").resizable().scaledToFit()
.visualEffect { content, proxy in
content
.colorEffect(
ShaderLibrary.colorShaderA(
.float2(proxy.size.width,
proxy.size.height)))
.colorEffect(
ShaderLibrary.colorShaderB())
.distortionEffect(
ShaderLibrary.distortionShader(),
maxSampleOffset: .zero)
.distortionEffect(
ShaderLibrary.distortionShader(),
maxSampleOffset: .zero)
}

From these observations, we can draw the following conclusions:
- consecutive color effects and a following distortion effect are combined into a single render pass;
- a layer effect (even an empty one) bakes the current result, so effects before and after it are not combined;
- distortion effects are not combined with each other; after each one, the result is baked.
So, in the code below we have three render passes:

And the color + distortion pipeline should look like this:

Like everything in SwiftUI, these modifiers are limited not only by the iOS version but also in several other ways. You should be aware of these limitations and understand what causes them so they don’t take you by surprise.
You may be wondering about the yellow stripes marked with 🚫. The documentation states:
Views backed by AppKit or UIKit views may not render into the filtered layer. Instead, they log a warning and display a placeholder image to highlight the error.
...
Unable to render flattened version of PlatformViewRepresentableAdaptor<BridgedColorPicker>.
Unable to render flattened version of PlatformViewRepresentableAdaptor<Switch>.
Unable to render flattened version of PlatformViewRepresentableAdaptor<SystemSlider>.
Unable to render flattened version of PlatformViewRepresentableAdaptor<CircularUIKitProgressView>.
...
So, you can’t apply these modifiers to UIKit-based elements.
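For example, a Toggle (presumably the Switch from the log above) placed under one of these effects should render as a placeholder rather than a real switch:

// Expected to log "Unable to render flattened version of ..." and show
// a placeholder instead of the switch.
Toggle("Metal effects", isOn: .constant(true))
    .colorEffect(ShaderLibrary.colorShaderB())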
If you create a huge layout (for instance, by using a very long scroll view) and then apply .layerEffect to the scroll view’s parent, you may end up with an empty screen and console messages similar to:
RBLayer: unable to create texture: BGRA8Unorm, [19977, 2484]
This occurs because .layerEffect requires the entire layout to be prebaked, effectively treating the view it’s applied to as a single image. Consequently, if your layout exceeds the maximum available texture size (16384×16384), Metal cannot create a texture and throws this error.
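One possible workaround (my suggestion, not from the article) is to apply the effect per row rather than to the whole scroll content, so no single texture has to cover the entire layout:

ScrollView {
    LazyVStack {
        ForEach(0..<1000) { index in
            Text("Row \(index)")
                .frame(maxWidth: .infinity)
                .layerEffect( // each row is baked as its own small texture
                    ShaderLibrary.layerShaderEmpty(),
                    maxSampleOffset: .zero)
        }
    }
}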

.drawingGroup

As we already know, these effect modifiers are applied to each layer in the subhierarchy separately, which means you need to apply .drawingGroup first in some cases. You can see that in the example below:

But be aware that if you apply .drawingGroup to a view after a special background effect (for example, .ultraThinMaterial), the view will be baked separately from the whole layout and, as a result, you won’t get the beautiful blur. So bake your view before background effects (if you really need to bake it at all).
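A minimal sketch of both orderings:

// Baking first keeps the material outside the group, so the blur still
// samples the content behind the view.
Text("Hello")
    .padding()
    .drawingGroup()
    .background(.ultraThinMaterial)

// Baking after the material flattens the view separately from the layout,
// and the blur is lost.
Text("Hello")
    .padding()
    .background(.ultraThinMaterial)
    .drawingGroup()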

maxSampleOffset

Layer and distortion effects can extend your view’s image beyond its original bounds. Though they have a maxSampleOffset parameter that allows extending the visible area while keeping the view’s original position, the view might still be cropped by its parent, because the image goes beyond the parent’s borders as well. Therefore, a solution can be to apply padding before grouping instead of using maxSampleOffset (just pass .zero there):
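A sketch of this approach, reusing the wave distortion from earlier (the padding amount is an arbitrary choice):

Button("Simple button", action: {})
    .buttonStyle(.bordered)
    .padding(20)    // reserve room for the displaced pixels
    .drawingGroup() // bake the padded area into the image
    .distortionEffect(
        ShaderLibrary.wave(.float(0), .float(10)),
        maxSampleOffset: .zero) // no extra sampling area needed now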

- Don’t forget to include <SwiftUI/SwiftUI_Metal.h> when you use types like SwiftUI::Layer in your shaders.
- These effects are conceptually similar to CIFilters, so if you have any experience with them, it’ll be easier to pick up these new features.