
15 UI pt.2: SwiftUI

Starting from iOS 17, SwiftUI lets you customize the rendering of UI elements using modifiers. Having a Metal background and understanding its principles lets you use them more effectively. In this episode, I’m going to show how SwiftUI renders UI, how to use these modifiers with examples, and what limitations they have.

Overview

First of all, to have a more complete picture, we need to slightly extend CoreAnimation’s scheme of UI rendering:

  • Though SwiftUI can render some of its components directly using CoreAnimation, some still rely on UIKit (which uses CoreAnimation under the hood too).
  • CoreAnimation uses Metal, CoreGraphics, or CoreText, depending on a layer’s type, to “bake” it.
  • Then, the prebaked elements are composed into the full layout (we’ll see how this works a bit later).

SwiftUI provides the following modifiers to customize its render pipeline:

  • .drawingGroup bakes the grouped view into an image and treats it as a single layer.
  • .colorEffect applies a color-modifying function to a layer’s subhierarchy.
  • .distortionEffect applies a coordinate-modifying function to a layer’s subhierarchy.
  • .layerEffect applies a function (with access to the entire previous layer’s state) to a layer’s subhierarchy.

Modifiers

.drawingGroup

Firstly, let’s look at an example where we have a complex view applying some distortion effects to its children, as well as a layer effect to the entire view:

What we can see here is that effects are applied to each element in the hierarchy separately (though coordinates are used in a space of the whole subhierarchy). However, if we group certain elements, they’re treated as one image. So, the .drawingGroup modifier bakes the subhierarchy into a single image or layer, and any future modifications are applied to this single image rather than to each element of the subhierarchy.

Let’s examine a piece of code for a closer look at this particular modifier, where one button is processed as a piece of the subhierarchy while the other is prebaked into an image:

HStack {
    Button("Simple button", action: {})
        .buttonStyle(.bordered)
        .distortionEffect(
            ShaderLibrary.wave(.float(0), .float(10)),
            maxSampleOffset: .zero)

    Button("Simple button", action: {})
        .buttonStyle(.bordered)
        .drawingGroup()
        .distortionEffect(
            ShaderLibrary.wave(.float(0), .float(10)),
            maxSampleOffset: .zero)
}

The .drawingGroup modifier has two parameters (a short usage sketch follows the list):

  • opaque: Bool = false indicates whether the baked image is opaque or transparent. An opaque image can be composited more quickly since it doesn’t require blending.
  • colorMode: ColorRenderingMode = .nonLinear determines the color mode of the resulting image.
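
For example, a view with a fully opaque background could be baked as an opaque image in linear color space (a minimal sketch; .linear is one of the ColorRenderingMode cases, alongside .nonLinear and .extendedLinear):

Text("Opaque group")
    .padding()
    .background(Color.white)       // fully opaque content
    .drawingGroup(
        opaque: true,              // skip blending during compositing
        colorMode: .linear)        // bake in a linear color space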

When to use:

  • Your subhierarchy can be animated as a single piece, making the baked animation perform more efficiently.
  • You need to apply a distortion or layer effect to the entire subhierarchy rather than each individual element.

When NOT to use:

  • You have interactive or rapidly changing elements (because baking takes time).
  • You’re working with UIKit-based elements (they aren’t supported).
  • Your subhierarchy can’t be effectively presented as a single image.

.colorEffect

This modifier applies a color effect to your elements: based on the input color and position, it computes the resulting color.

Let’s start with a simple Swift example:

Button("Simple button", action: {})
    .buttonStyle(.bordered)
    .visualEffect { content, geoProxy in                                // (1)
        content                                                         // (2)
            .colorEffect(ShaderLibrary.colorShaderA(                    // (3)
                .float2(geoProxy.size.width, geoProxy.size.height)))    // (4)
    }
  1. I use .visualEffect to get the element’s size without using GeometryReader (a GeometryReader alternative is sketched after this list).
  2. We apply our modifiers to that proxy content (though we could apply them directly to the view, we need its size here).
  3. We apply the color effect, where colorShaderA is the name of our function in the .metal file.
  4. We only pass our custom parameters (the view size); standard parameters are passed by Metal itself (see below).
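
For contrast, here’s a hypothetical GeometryReader version. Note that geo.size is the size proposed to the GeometryReader, not the button’s own size; reading the element’s actual size would require extra plumbing, which is exactly what .visualEffect avoids:

GeometryReader { geo in
    Button("Simple button", action: {})
        .buttonStyle(.bordered)
        .colorEffect(ShaderLibrary.colorShaderA(
            .float2(geo.size.width, geo.size.height)))   // container size, not the button's
}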

Now we can go deeper into the Metal function (don't forget to include <SwiftUI/SwiftUI_Metal.h>):

#include <SwiftUI/SwiftUI_Metal.h>

[[ stitchable ]]                                                    // (1)
half4 colorShaderA(                                                 // (2)
    float2 position,                                                // (3)
    half4 currentColor,                                             // (4)
    float2 size                                                     // (5)
) {
    if (position.x < 0 || position.x > size.x) {                    // (6)
        return half4(0, 0, 1, 1);
    }
    return half4(half2(floor(position / 10) * 10 / size), 0, 1);    // (7)
}
  1. The function is marked [[ stitchable ]], so Metal can link it into the element’s rendering pipeline at runtime.
  2. It must return half4, and its name should match what you use on the Swift side.
  3. position is the pixel’s location in subhierarchy space (this value might be modified by a distortion effect — see below in the Pipeline section). This parameter is mandatory.
  4. currentColor is the pixel’s current color, also mandatory.
  5. size is our custom parameter. You can introduce as many custom parameters as you need.
  6. We mark out-of-bounds pixels with a blue color.
  7. In bounds, we have a simple red-green grid.

.distortionEffect

This modifier changes the position from which the pixel value is sampled. As the value might be picked from outside the image, the modifier has an additional parameter:

Button("Simple button", action: {})
    .buttonStyle(.bordered)
    .drawingGroup()                                     // (1)
    .distortionEffect(ShaderLibrary.distortionShader(   // (2)
        .float(10)),                                    // (3)
        maxSampleOffset: CGSize(width: 10, height: 0)   // (4)
    )
  1. We bake the button before distorting so that we deal with a single image instead of two (text and background).
  2. We apply distortionEffect, which is implemented in our Metal function distortionShader.
  3. We pass our custom parameter (the shift amplitude). Standard parameters are passed by Metal, just like in the color effect.
  4. We specify the maximum extension of the resulting view (extending by 10 along the x-axis — see the Metal function below).
#include <SwiftUI/SwiftUI_Metal.h>

[[ stitchable ]]                // (1)
float2 distortionShader(        // (2)
    float2 position,            // (3)
    float shift                 // (4)
) {
    auto offset = float2(shift, 0.0) * sin(position.y * 0.25);
    return position + offset;   // (5)
}
  1. All effects must be marked [[ stitchable ]].
  2. The distortion effect must return a float2 value — the new position from which the sampler should read the pixel. The function name must be the same as it is on the Swift side.
  3. This is the mandatory parameter representing the original position of the pixel.
  4. This is our custom parameter.
  5. We return the new position with a shifted x-coordinate.

.layerEffect

You can think of this modifier as an extended combination of the previous two: it has access to the entire image of the elements and can sample pixels directly.

Image("test").resizable().scaledToFit()
    .layerEffect(ShaderLibrary.layerShader(),   // (1)
                 maxSampleOffset: .zero)        // (2)
  1. We apply the layerEffect implemented in our Metal function layerShader. This time it doesn’t have any custom parameters.
  2. We could extend the resulting image, but here we choose to keep it within its original bounds.
#include <SwiftUI/SwiftUI_Metal.h>

[[ stitchable ]]                                    // (1)
half4 layerShader(                                  // (2)
    float2 position,                                // (3)
    SwiftUI::Layer layer                            // (4)
    // ...                                          // (5)
) {
    auto curPosition = position;
    auto delta = position - float2(200, 200);
    auto curDist = length(delta);

    if (curDist < 100) {
        curPosition += delta * (curDist / 100);
    }

    auto value = layer.sample(curPosition);         // (6)

    return value;                                   // (7)
}
  1. Like the other effects, the function must be marked [[ stitchable ]].
  2. The layer effect returns a half4 color value.
  3. position is the pixel’s coordinates (a mandatory parameter).
  4. layer is the CoreAnimation layer from which we read pixel data.
  5. You can define custom parameters here, but in this example, there are none.
  6. We retrieve the pixel’s value from the given layer at curPosition.
  7. Finally, we return the new color value.

Pipeline

Now that we’ve tried all these modifiers, we can run a few experiments to gain a better understanding of how everything works under the hood. First, we need to prepare a few Metal functions:

#include <metal_stdlib>
#include <SwiftUI/SwiftUI_Metal.h>
using namespace metal;



[[ stitchable ]] half4 colorShaderA(float2 position, half4 currentColor, float2 size) {
    if (position.x < 0 || position.x > size.x) {
        return half4(0, 0, 1, 1);
    }
    return half4(half2(floor(position / 100) * 100 / size), 0, 1);
}

[[ stitchable ]] half4 colorShaderB(float2 position, half4 currentColor) {
    return currentColor.bgra;
}


[[ stitchable ]] float2 distortionShader(float2 position) {
    return position + float2(20, 0.0) * sin(position.y * 0.05);
}


[[ stitchable ]] half4 layerShader(float2 position, SwiftUI::Layer layer) {
    auto curPosition = position;
    auto delta = position - float2(200, 200);
    auto curDist = length(delta);

    if (curDist < 100) {
        curPosition += delta * (curDist / 100);
    }

    auto value = layer.sample(curPosition);

    return value;
}


[[ stitchable ]] half4 layerShaderEmpty(float2 position, SwiftUI::Layer layer) {
    return layer.sample(position);
}

So now, by combining them in different orders, we can work out how things operate under the hood. Let’s start with this:

Image("test").resizable().scaledToFit()
    .visualEffect { content, proxy in
        content
            .colorEffect(
                ShaderLibrary.colorShaderA(
                    .float2(proxy.size.width,
                            proxy.size.height)))
            .colorEffect(
                ShaderLibrary.colorShaderB())
            .distortionEffect(
                ShaderLibrary.distortionShader(),
                maxSampleOffset: .zero)
    }

Here, we can see that color effects are combined with the distortion effect. Otherwise, we wouldn’t see these red areas:

  • In colorShaderA, blue areas appear where the position is out of bounds.
  • Then colorShaderB swaps the channels (.bgra), so those blue areas turn red.

If we bake the image with an empty layer effect, we get the result as it would look if the color and distortion effects were not combined:

Image("test").resizable().scaledToFit()
    .visualEffect { content, proxy in
        content
            .colorEffect(
                ShaderLibrary.colorShaderA(
                    .float2(proxy.size.width,
                            proxy.size.height)))
            .colorEffect(
                ShaderLibrary.colorShaderB())
            .layerEffect(
                ShaderLibrary.layerShaderEmpty(),
                maxSampleOffset: .zero)
            .distortionEffect(
                ShaderLibrary.distortionShader(),
                maxSampleOffset: .zero)
    }

If we try to combine distortion effects as shown below, we find that they aren’t stacked; after applying each, the result is baked:

Image("test").resizable().scaledToFit()
    .visualEffect { content, proxy in
        content
            .colorEffect(
                ShaderLibrary.colorShaderA(
                    .float2(proxy.size.width,
                            proxy.size.height)))
            .colorEffect(
                ShaderLibrary.colorShaderB())
            .distortionEffect(
                ShaderLibrary.distortionShader(),
                maxSampleOffset: .zero)
            .distortionEffect(
                ShaderLibrary.distortionShader(),
                maxSampleOffset: .zero)
    }

From these observations, we can draw the following conclusions:

  • Sequential color effects can be stacked in a single render pass.
  • Color effects that occur before a distortion effect can be combined with it as well.
  • After applying a layer effect or a distortion effect, the result is baked.

So, by these rules, a chain like the sketch below takes three render passes:
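
Image("test").resizable().scaledToFit()
    .visualEffect { content, proxy in
        content
            // Pass 1: both color effects fuse with the first
            // distortion effect, then the result is baked.
            .colorEffect(
                ShaderLibrary.colorShaderA(
                    .float2(proxy.size.width,
                            proxy.size.height)))
            .colorEffect(
                ShaderLibrary.colorShaderB())
            .distortionEffect(
                ShaderLibrary.distortionShader(),
                maxSampleOffset: .zero)
            // Pass 2: distortion effects don't stack, so the
            // baked result of pass 1 is distorted again.
            .distortionEffect(
                ShaderLibrary.distortionShader(),
                maxSampleOffset: .zero)
            // Pass 3: the layer effect samples the baked
            // result of pass 2.
            .layerEffect(
                ShaderLibrary.layerShader(),
                maxSampleOffset: .zero)
    }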

And the color + distortion pipeline should look like this:

Limitations and possible problems

Like everything in SwiftUI, these modifiers are limited not only by iOS version but in several other ways as well. You should be aware of these limitations and understand what causes them so they don’t catch you by surprise.

UIKit-based elements

You may be wondering about the yellow stripes marked with 🚫. The documentation states:

Views backed by AppKit or UIKit views may not render into the filtered layer. Instead, they log a warning and display a placeholder image to highlight the error.

...
Unable to render flattened version of PlatformViewRepresentableAdaptor<BridgedColorPicker>.
Unable to render flattened version of PlatformViewRepresentableAdaptor<Switch>.
Unable to render flattened version of PlatformViewRepresentableAdaptor<SystemSlider>.
Unable to render flattened version of PlatformViewRepresentableAdaptor<CircularUIKitProgressView>.
...

So, you can’t apply these modifiers to UIKit-based elements.
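
For example, Toggle is backed by UIKit’s switch on iOS (note the PlatformViewRepresentableAdaptor<Switch> entry in the log above), so a sketch like this logs the warning and renders a placeholder instead of the filtered switch:

// Toggle is UIKit-backed on iOS, so the layer effect can't
// flatten it into a texture and falls back to a placeholder.
Toggle("UIKit-backed switch", isOn: .constant(true))
    .layerEffect(
        ShaderLibrary.layerShaderEmpty(),
        maxSampleOffset: .zero)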

Layout size

If you create a huge layout (for instance, by using a very long scroll view) and then apply .layerEffect to the scroll view’s parent, you may end up with an empty screen and console messages similar to:

RBLayer: unable to create texture: BGRA8Unorm, [19977, 2484]

This occurs because .layerEffect requires the entire layout to be prebaked, effectively treating the view it’s applied to as a single image. Consequently, if your layout exceeds the maximum available texture size (16384×16384), Metal cannot create a texture and throws this error.
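
One possible workaround (a sketch assuming a hypothetical RowView and items collection) is to apply the effect to each row rather than to the scroll view’s parent, so every baked texture stays well under the limit:

ScrollView {
    LazyVStack {
        ForEach(items) { item in       // `items`: hypothetical Identifiable models
            RowView(item: item)        // hypothetical row view
                .layerEffect(
                    ShaderLibrary.layerShaderEmpty(),
                    maxSampleOffset: .zero)  // each row bakes into its own small texture
        }
    }
}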

Be aware of .drawingGroup

As we already know, these effect modifiers are applied to each layer in the subhierarchy separately, which means that in some cases you need to apply .drawingGroup first. You can see this in the example below:

But be aware that if you apply .drawingGroup to a view after a special background effect (for example, .ultraThinMaterial), the view will be baked separately from the rest of the layout and, as a result, you won’t get the beautiful blur. So bake your view before applying background effects (if you really need to).
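
A minimal sketch of both orderings:

// Blur works: the material is applied outside the baked group,
// so it can still blend with whatever is behind the view.
Text("Simple text")
    .drawingGroup()
    .background(.ultraThinMaterial)

// Blur breaks: the group is baked separately from the layout
// behind it, so the material has nothing to blur.
Text("Simple text")
    .background(.ultraThinMaterial)
    .drawingGroup()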

Padding vs maxSampleOffset

Layer and distortion effects can extend your view’s image beyond its original bounds. Though they have a maxSampleOffset parameter that allows extending the visible area while keeping the view’s original position, the view might be cropped by its parent because the image goes beyond the parent’s borders as well. Therefore, a solution can be to apply padding before grouping instead of using maxSampleOffset (just pass .zero there):
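
A sketch of that approach, reusing the wave shader from the first example:

Button("Simple button", action: {})
    .buttonStyle(.bordered)
    .padding(10)         // reserve room for the distortion up front
    .drawingGroup()      // bake the padded area into the image
    .distortionEffect(
        ShaderLibrary.wave(.float(0), .float(10)),
        maxSampleOffset: .zero)  // no extra extension needed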

Conclusion

  • SwiftUI is rendered with Metal under the hood.
  • Metal-based effects in SwiftUI are a powerful tool.
  • Don’t forget to include <SwiftUI/SwiftUI_Metal.h> in your shader files.
  • These tools have some limitations and nuances you should be aware of.
  • SwiftUI effects are very similar to CIFilters, so if you have any experience with those, it’ll be easier to pick up these new features.
  • Now you have a better understanding of how these effects work and can use them more optimally.
