Main thread rendering is the best example of why we need to protect multiple types with mutually exclusive access. You may have a massive collection of UIViewControllers, UIViews, or SwiftUI views running in parallel, but in the end, you should update your user interface on the main thread.
If you are unfamiliar with the actor concept, look at my dedicated “Thread safety in Swift with actors” post.
That’s why Swift provides us with @MainActor. Any UIViewController or UIView you create inherits @MainActor isolation from its definition. SwiftUI’s View protocol also defines its body property with @MainActor. This means your view’s body, view, or controller always runs on the main thread and protects you from accidentally updating the user interface from a background thread.
To fully understand the idea of the global actors, let’s inspect the @MainActor type a bit further.
@globalActor actor MainActor : GlobalActor {
static let shared: MainActor
}
As you can see in the code example above, the MainActor type is defined with the actor keyword and conforms to the GlobalActor protocol. It also has the @globalActor attribute. The GlobalActor protocol requires you to specify the shared property to create a shared, also called global, instance of the actor.
@Observable @MainActor final class Store {
// ...
}
Now, we can easily mark any type we need with the @MainActor attribute to isolate it to the main actor. This means all the work in the particular type runs exclusively on the main actor.
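For instance, calling into such a main-actor-isolated type from a nonisolated context requires an await. Here is a minimal sketch (the CounterStore type and its increment method are hypothetical illustrations):

```swift
import Observation

@Observable @MainActor final class CounterStore {
    private(set) var count = 0

    func increment() {
        count += 1 // always runs on the main actor
    }
}

// From a nonisolated context, every call hops to the main actor.
func refresh(_ store: CounterStore) async {
    await store.increment() // suspends until the main actor is available
}
```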
Let’s move forward and build our own global actor. Assume that you have a set of types accessing local storage, and you want to keep files on disk conflict-free by running them exclusively.
@globalActor actor StorageActor: GlobalActor {
static let shared = StorageActor()
}
As you can see in the example above, we define the StorageActor type conforming to the GlobalActor protocol using the actor keyword. The @globalActor attribute allows us to mark any type, function, or property with the @StorageActor attribute.
@StorageActor final class Cache {
let folder: URL
init(folder: URL) {
self.folder = folder
}
func get(_ key: String) -> Data? {
// ...
}
func set(data: Data, for key: String) {
// ...
}
}
@StorageActor final class Database<Value> {
let folder: URL
init(folder: URL) {
self.folder = folder
}
func search(matching query: String) -> [Value] {
// ...
}
}
Here, we create the Cache and Database types using the @StorageActor attribute. It allows us to run them on a shared, mutually exclusive actor managed by the StorageActor we created before.
Why do we use global actors rather than defining the Cache and Database types as actors? We could define Cache and Database as actors, but in that case, every instance of the Cache or Database types would run on its own independent actor and protect only its own access. By marking our types with @StorageActor, we bind them to a single, mutually exclusive, shared instance of the StorageActor.
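For contrast, here is a sketch of the per-instance alternative: every instance of a plain actor forms its own isolation domain, so two caches could touch the disk concurrently (the PerInstanceCache name is illustrative).

```swift
import Foundation

// A plain actor: each instance protects only its own state.
actor PerInstanceCache {
    let folder: URL

    init(folder: URL) {
        self.folder = folder
    }

    func set(data: Data, for key: String) {
        // Isolated to *this* instance only; another PerInstanceCache
        // may run its own file operations at the same time.
    }
}
```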
@Observable final class Store {
private(set) var data: Data?
@StorageActor func load() async {
let path: String = "some path"
let content = FileManager.default.contents(atPath: path)
await MainActor.run {
self.data = content
}
}
}
Remember that you can mark not only types but also functions or properties of any type with the @StorageActor attribute.
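As a sketch, assuming the StorageActor defined above, a single static property and a free function can be isolated to it without annotating a whole type (the names here are hypothetical):

```swift
import Foundation

// A property isolated to the global actor.
enum LogStorage {
    @StorageActor static var logFile: URL?
}

// A free function isolated to the global actor.
@StorageActor func wipeStorage(at folder: URL) {
    try? FileManager.default.removeItem(at: folder)
}

// Accessing them from outside the actor requires await.
func maintenance(folder: URL) async {
    await wipeStorage(at: folder)
}
```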
Today, we learned why and how to use global actors in Swift. You don’t need to use global actors often in your apps. However, they become handy in particular cases, such as main thread rendering. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
The Swift Async Algorithms package is another package that Apple maintains and provides us with. You can always become a part of this great community by contributing to the package on GitHub.
The Swift Async Algorithms package offers a set of functions allowing us to combine two or three async sequences into a single sequence. For example, you can merge two async sequences in a single one and observe values from the resulting sequence.
@Observable final class CalendarStore {
private(set) var events: [Event] = []
func observeEvents() async {
let dayChanges = NotificationCenter.default.notifications(named: .NSCalendarDayChanged)
let timezoneChanges = NotificationCenter.default.notifications(named: .NSSystemTimeZoneDidChange)
for await change in merge(dayChanges, timezoneChanges) {
await fetchEvents()
}
}
func fetchEvents() async {
// ...
}
}
As you can see in the example above, we use the merge function that allows us to create a single sequence and observe day and timezone changes at once. The Swift Async Algorithms package provides not only the merge function but also combineLatest, zip, chain, and join.
@Observable final class CalendarStore {
private(set) var events: [Event] = []
func observeEvents() async {
let dayChanges = NotificationCenter.default.notifications(named: .NSCalendarDayChanged)
let timezoneChanges = NotificationCenter.default.notifications(named: .NSSystemTimeZoneDidChange)
for await change in zip(dayChanges, timezoneChanges) {
await fetchEvents()
}
}
func fetchEvents() async {
// ...
}
}
The Swift Async Algorithms package also adapts grouping and filtering operators from the Swift Algorithms package to async sequences, such as compacted for filtering out nil values, chunked for grouping, and removeDuplicates.
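As a sketch, assuming an async sequence of optional readings, the compacted and removeDuplicates operators compose like this:

```swift
import AsyncAlgorithms

let readings = AsyncChannel<Int?>()

Task {
    // Drops nil values, then drops consecutive duplicates.
    for await value in readings.compacted().removeDuplicates() {
        print(value)
    }
}
```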
To learn more about the Swift Algorithms package, take a look at my “Discovering Swift Algorithms package” post.
The Swift Async Algorithms package introduces a few operators allowing us to manipulate a sequence over time, similar to the Combine framework. For example, you can debounce and throttle async sequences.
@Observable final class CalendarStore {
private(set) var events: [Event] = []
func observeEvents() async {
let dayChanges = NotificationCenter.default.notifications(named: .NSCalendarDayChanged)
let timezoneChanges = NotificationCenter.default.notifications(named: .NSSystemTimeZoneDidChange)
for await change in merge(dayChanges, timezoneChanges).debounce(for: .seconds(1)) {
await fetchEvents()
}
}
func fetchEvents() async {
// ...
}
}
As you can see in the example above, we use the debounce function to wait for a particular period of time before emitting a value. Another helpful type in the Swift Async Algorithms package is AsyncTimerSequence. It emits the current date at a given interval.
@Observable final class CalendarStore {
private(set) var events: [Event] = []
func observeEvents() async {
let dayChanges = NotificationCenter.default.notifications(named: .NSCalendarDayChanged)
let timezoneChanges = NotificationCenter.default.notifications(named: .NSSystemTimeZoneDidChange)
let timer = AsyncTimerSequence(interval: .seconds(5), clock: .suspending)
for await interval in timer {
await fetchEvents(in: Date.now)
}
}
func fetchEvents(in date: Date) async {
// ...
}
}
The AsyncChannel type allows us to replace passthrough subjects from the Combine framework. It is a great way to bridge the part of the code that doesn’t support async context with the async context in your app.
let channel = AsyncChannel<UUID>()
Task {
for await id in channel {
print(id)
}
}
await channel.send(UUID())
await channel.send(UUID())
channel.finish()
As you can see in the example above, we use the send function on an instance of the AsyncChannel type to emit values. Conversely, the AsyncChannel conforms to the AsyncSequence protocol to support for-each loop with the await keyword. Remember to call the finish function on the channel to close the sequence.
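For example, here is a minimal sketch of bridging a callback-based API into async context (the delegate-style type and method names are hypothetical):

```swift
import AsyncAlgorithms

final class SpeedBridge {
    let speeds = AsyncChannel<Double>()

    // Called from non-async code, e.g. a delegate method.
    func didUpdateSpeed(_ speed: Double) {
        Task { await speeds.send(speed) }
    }
}
```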
let channel = AsyncThrowingChannel<UUID, Error>()
Task {
for try await id in channel {
print(id)
}
}
await channel.send(UUID())
await channel.fail(SomeError())
There is also the AsyncThrowingChannel type with similar functionality that supports failing with errors. Whenever you need to close the channel with an error, you can use the fail function on an instance of the AsyncThrowingChannel type.
Today we discovered the Swift Async Algorithms package, allowing us to move completely from the Combine framework to the Swift Concurrency feature. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
The Swift Collections package contains a few collection types that may help you improve the performance of your apps if you apply them where needed instead of using the generic Array, Dictionary, and Set types. The Swift Collections package lives on GitHub, where you can find it and add it to your project.
The Dictionary and Set types that the Swift language provides us store values in a single flat hash table that you copy on every write or mutation. The Swift Collections package introduces the TreeDictionary and TreeSet types implementing Compressed Hash-Array Mapped Prefix Trees. In other words, the TreeDictionary and TreeSet types hold values in a tree-based structure, allowing the efficient updating of only the needed branches.
Imagine a calendar app where you store an event array per date and use the standard Dictionary type. You might need to implement paging and load events per visible month and store them in an instance of the Dictionary type. While the user scrolls through months, your app loads a bunch of events and copies the whole dictionary on every load, even when previously loaded events didn’t change.
@Observable final class CalendarStore {
typealias Fetch = (DateInterval) async -> [Event]
private(set) var events: TreeDictionary<Date, [Event]> = [:]
private let fetch: Fetch
init(fetch: @escaping Fetch) {
self.fetch = fetch
}
func fetchEvents(inside interval: DateInterval) async {
let newEvents = await fetch(interval)
let groupedByDate = TreeDictionary(grouping: newEvents, by: \.date)
events.merge(groupedByDate) { $1 }
}
}
For this case, the Swift Collections package introduces the TreeDictionary and TreeSet types that link the unchanged parts with the changed branches without copying the whole dictionary in memory. The TreeDictionary type provides us with the very same APIs that the Dictionary type has and optimizes memory for us under the hood.
The TreeDictionary is still a struct, but the implementation uses the UnsafeMutablePointer type to access memory and mutate it directly without copying on write. Another benefit of the TreeDictionary and TreeSet types is optimized comparison because of their tree-based nature. Usually, they handle this operation in constant time.
let oldEvents: TreeDictionary<Date, [Event]> = //...
let newEvents: TreeDictionary<Date, [Event]> = //...
newEvents.keys.subtracting(oldEvents.keys)
Another tree-based structure that the Swift Collections package provides us is the Heap type. The Heap type stores comparable elements and allows you to query for the minimal or maximal element quickly.
struct Event: Identifiable, Comparable {
static func < (lhs: Event, rhs: Event) -> Bool {
lhs.priority < rhs.priority
}
let id = UUID()
let date: Date
let priority: Int
}
@Observable final class EventStore {
typealias Fetch = () async -> [Event]
private(set) var events: Heap<Event> = []
private let fetch: Fetch
init(fetch: @escaping Fetch) {
self.fetch = fetch
}
var nextEvent: Event? { events.max }
func fetchEvents() async {
let allEvents = await fetch()
events.insert(contentsOf: allEvents)
}
}
As you can see in the example above, we fetch the calendar events and populate the heap with them. The Event type conforms to the Comparable protocol and allows us to get the minimal and maximal elements depending on the event priority.
@Observable final class EventStore {
private(set) var events: Heap<Event> = []
func printEvents() {
for event in events.unordered {
print(event)
}
}
}
You can access the unordered read-only array of elements stored in the Heap type whenever needed. Remember that you can’t access the sorted collection of items from the heap. It is, after all, a heap.
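While you can’t iterate the heap in sorted order directly, you can drain it in priority order with popMax or popMin, as this small sketch shows:

```swift
import Collections

var numbers: Heap<Int> = [3, 1, 4, 1, 5]

// Removing elements one by one yields them in descending order.
while let value = numbers.popMax() {
    print(value) // 5, 4, 3, 1, 1
}
```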
How often do you need to access values in a set or dictionary in the order you added them? Unfortunately, the flat hash table that the Dictionary and Set types use doesn’t preserve the insertion order of elements. The Swift Collections package introduces the OrderedSet and OrderedDictionary types to solve this issue.
let letters: OrderedSet = ["a", "b", "c"]
for element in letters {
print(element)
}
print(letters[0])
print(letters.contains("b"))
print(letters.isSuperset(of: ["a", "b", "c", "d"]))
The OrderedSet type allows us to access the element by index like the Array type but keeps elements unique.
printArray(letters.elements) // Array
printSet(letters.unordered) // Set
Whenever you need to pass the elements of the OrderedSet as an Array, you can use the elements property, or you can use the unordered property whenever you want to work with the elements as a plain set. Remember, the OrderedSet type implements most of the functions from the SetAlgebra protocol but doesn’t conform to it; that’s why it provides the unordered property.
let lettersAndNumbers: OrderedDictionary = [
"a": 1,
"b": 2,
"c": 3
]
print(lettersAndNumbers["a"])
print(lettersAndNumbers.elements[0])
The OrderedDictionary behaves very similarly to the OrderedSet type and allows you to access the dictionary both by key and index.
Deque is another collection type that the Swift Collections package provides us. Deque is almost identical to the Array type, except it offers efficient insertion and removal at both ends of the collection.
var deque: Deque = [1, 2, 3, 4]
deque.prepend(0)
deque.append(5)
deque.popFirst()
deque.popLast()
deque[0]
The Deque type implements a double-ended queue, allowing us to insert and remove elements from the ends of the collection at O(1) complexity, which may become very handy when you build any queue functionality in your app.
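As a quick sketch, a FIFO queue built on Deque avoids the O(n) cost of Array.removeFirst (the FIFOQueue type is an illustrative wrapper, not part of the package):

```swift
import Collections

struct FIFOQueue<Element> {
    private var storage: Deque<Element> = []

    mutating func enqueue(_ element: Element) {
        storage.append(element) // O(1) at the back
    }

    mutating func dequeue() -> Element? {
        storage.popFirst() // O(1) at the front
    }
}

var queue = FIFOQueue<Int>()
queue.enqueue(1)
queue.enqueue(2)
// dequeue() returns elements in insertion order, starting with 1
```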
Today, we discovered another great Swift package provided by Apple. The community constantly works on the package and adds more value to it. So, check the documentation and find the valuable collection types that may improve your apps. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
You could write every algorithm from the Swift Algorithms package yourself, but then you must maintain and test it. On the other hand, you can depend on the ready-to-use package that you can include in every app you work on without code duplication, and be sure that the community has well-tested the package.
Remember, you can always become a part of this great community by contributing to this package.
First, you must add the package to the project in the Xcode project settings screen. The “Package Dependencies” tab allows you to add or remove any package you need. The Swift Algorithms package lives on GitHub, where you can easily follow updates, browse pull requests, and monitor issues.
The Swift Algorithms package contains tons of valuable collection and sequence algorithms. It is nearly impossible to cover them all in a single post, but I will cover my favorite ones.
I mainly work on health-related apps, and my users’ privacy is crucial to me. That’s why I do all of my calculations on the device. Searching a massive array of health-related data for a particular item is a common task.
As you may know, the binary search is the best way to find an item in the sorted data array. Usually, we query HealthKit for sorted data, allowing us to use binary search efficiently. Binary search requires your data to be sorted by the key you seek.
let someDate = Date.now
let index = heartRates.partitioningIndex { $0.startDate >= someDate }
guard
index != heartRates.endIndex,
heartRates[index].startDate == someDate
else {
return nil
}
return heartRates[index]
The Swift Algorithms package provides the partitioningIndex function, the generalized version of the binary search. It uses the same logic as the binary search.
Still, instead of returning an item, it returns the index of the first item, dividing your collection into two parts where any item from the first part returns false for your predicate, and any item from the second part always returns true for the same predicate.
We must wrap its results with an additional guard statement verifying them. Whenever the partitioningIndex function can’t find the relevant index, it returns the end of the collection.
We also verify that the resulting index divides the array into two partitions, and an item for the index also equals the item we are looking for. There might be a case where you can find an index dividing the array using your predicate, but the array doesn’t contain the value you want.
Dividing a collection into chunks is another common task in my apps. You might need to divide the collection into chunks of any count or by any additional logic. The Swift Algorithms package provides us with chunking API for this particular case.
let numbers = [1, 2, 3, 4, 5, 6]
print(numbers.chunks(ofCount: 2))
// [1, 2]
// [3, 4]
// [5, 6]
The Swift Algorithms package provides the chunks function, taking a count of items in a single chunk as the parameter and returning an array of subsequences.
My app has a more interesting situation where the particular logic should drive chunking. In my case, I need the chunks where items have time intervals between them no longer than one hour.
sleepSamples.chunked { $1.startDate.timeIntervalSince($0.endDate) < 3600 }
As you can see in the example above, we use the chunked function with a predicate, where we can compare two adjacent elements of the collection and decide whether we want to put them into the same chunk.
Almost every app has a situation where you have a collection with optional values, and you need to keep only non-nil values. For this case, the Swift Algorithms package introduces the compacted function.
let array: [Int?] = [10, nil, 30, nil, 2, 3, nil, 5]
let withNoNils = array.compacted()
// Array(withNoNils) == [10, 30, 2, 3, 5]
Another common task is to remove the duplicates from a collection of elements, and you can easily do it with the help of the uniqued function.
let numbers = [1, 2, 3, 3, 2, 3, 3, 2, 2, 2, 1]
let unique = numbers.uniqued()
// Array(unique) == [1, 2, 3]
Another situation I came across in my apps is extracting some number of minimal or maximal elements from the collection. You can easily do that with the Swift Algorithms package’s min, max, or minAndMax functions.
let numbers = [7, 1, 6, 2, 8, 3, 9]
let smallestThree = numbers.min(count: 3)
// [1, 2, 3]
let numbers = [7, 1, 6, 2, 8, 3, 9]
let largestThree = numbers.max(count: 3)
// [7, 8, 9]
How often do you need to get the particular count of the random elements from the collection? The Swift Algorithms package has the randomSample function, taking the count as a single parameter and returning an array of the random elements.
let numbers = [7, 1, 6, 2, 8, 3, 9]
let randomNumbers = numbers.randomSample(count: 3)
The Swift Algorithms package provides us with the combinations function, allowing us to generate every combination of the collection’s elements.
let colors = ["fuchsia", "cyan", "mauve", "magenta"]
for combo in colors.combinations(ofCount: 3) {
print(combo.joined(separator: ", "))
}
// fuchsia, cyan, mauve
// fuchsia, cyan, magenta
// fuchsia, mauve, magenta
// cyan, mauve, magenta
As you can see in the example above, the combinations function takes only one parameter, defining the number of elements that it should use per combination.
Today, we discovered only the visible part of Swift Algorithms iceberg. There are many things to learn, and I encourage you to check its documentation and replace your custom implementation with it. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
In visionOS, an ornament presents controls and information related to a window without crowding or obscuring the window’s contents. visionOS uses ornaments to display toolbars, tab bars, etc. But you can build your custom ornaments too. Let’s start by creating our first ornament using old but gold TabView.
struct ExampleView1: View {
var body: some View {
TabView {
Text("List")
.tabItem {
Label("List", systemImage: "checklist")
}
Text("Favorites")
.tabItem {
Label("Favorites", systemImage: "star")
}
}
}
}
As you can see in the example above, we don’t do anything special. We use the TabView that the SwiftUI framework has offered us from the very first version. This is another place where the magic of the declarative framework works seamlessly. SwiftUI automatically adapts to its environment and creates an ornament to display the tab bar. We can’t control the ornament that the system creates, and it provides us with default behavior while we hover over it.
TabView automatically creates an ornament to provide us access to navigation in a very native way. We can also use the Toolbar API that the SwiftUI framework provides to build another type of ornament.
struct ExampleView2: View {
var body: some View {
Text("Hello World")
.toolbar {
ToolbarItem(placement: .bottomOrnament) {
Button("New", systemImage: "pencil") {
// new action
}
}
ToolbarItem(placement: .bottomOrnament) {
Button("Save", systemImage: "square.and.arrow.down") {
// save action
}
}
}
}
}
In the example above, we use the Toolbar API to place action controls in an ornament. The ToolbarPlacement type provides us the bottomOrnament property, allowing us to place controls in the ornament below the window. While adapting your app to visionOS, you might need different toolbar placements depending on the platform. In this case, you can use conditional compilation directives to provide the particular placement.
struct ExampleView2: View {
var body: some View {
Text("Hello World")
.toolbar {
ToolbarItem(placement: placement) {
Button("New", systemImage: "pencil") {
// new action
}
}
ToolbarItem(placement: placement) {
Button("Save", systemImage: "square.and.arrow.down") {
// save action
}
}
}
}
private var placement: ToolbarItemPlacement {
#if os(visionOS)
return .bottomOrnament
#else
return .primaryAction
#endif
}
}
We learned how easily SwiftUI creates ornaments to adapt the look and feel of visionOS. But we can go further and create custom ornaments to control their position, look, and feel. SwiftUI provides the ornament view modifier, allowing us to build fully custom ornaments.
struct ContentView: View {
var body: some View {
Text("Hello World!")
.ornament(
visibility: .visible,
attachmentAnchor: .scene(.bottomTrailing),
contentAlignment: .bottom
) {
VStack {
Button("New", systemImage: "pencil") {
// new action
}
Button("Save", systemImage: "square.and.arrow.down") {
// save action
}
}
.labelStyle(.iconOnly)
.padding(.vertical)
.glassBackgroundEffect()
}
}
}
As you can see in the example above, we use the ornament view modifier. The ornament view modifier takes a set of parameters. The visibility parameter allows us to control when the framework displays the ornament. We can hide it by passing the hidden value.
The attachmentAnchor parameter allows us to control the position of the ornament. It will enable us to define the point of the scene where we want to attach the ornament.
The contentAlignment parameter allows us to define which point of the ornament the framework should use while calculating its attachment point in conjunction with the attachmentAnchor parameter.
The last parameter of the ornament view modifier is the ViewBuilder closure, which allows us to provide the content of the ornament. As you can see, we also use the glassBackgroundEffect view modifier to add the visionOS-styled background to our content.
Today, we learned how to use the SwiftUI framework to improve the user experience of our apps on visionOS by using the new ornament concept. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
What I love about SwiftUI is how it automatically adapts to the platform. You don’t need to do anything to run your app written in SwiftUI on visionOS. It works out of the box. But you can always improve the user experience by going further and adopting platform features.
struct ContentView: View {
var body: some View {
NavigationSplitView {
List {
// list content
}
.navigationTitle("Models")
.toolbar {
ToolbarItem(placement: .bottomOrnament) {
Button("open", systemImage: "doc.badge.plus") {
}
}
ToolbarItem(placement: .bottomOrnament) {
Button("open", systemImage: "link.badge.plus") {
}
}
}
} detail: {
Text("Choose something from the sidebar")
}
}
}
To learn more about building apps supporting multiple windows, take a look at my dedicated “Window management in SwiftUI” post.
In the example above, we use the new toolbar placement called bottomOrnament. An ornament in visionOS is an area outside the window that presents controls connected to the window. You can also create ornaments manually by using the new ornament view modifier.
struct ContentView: View {
var body: some View {
NavigationSplitView {
List {
// list content
}
.navigationTitle("Models")
.ornament(attachmentAnchor: .scene(.leading)) {
// Place your views here
}
} detail: {
Text("Choose something from the sidebar")
}
}
}
The new ornament view modifier allows us to create an ornament with a particular anchor point for the window it is connected to. Another option to adapt your app content to the immersive experience that visionOS provides is to use the transform3DEffect and rotation3DEffect view modifiers to incorporate depth effects.
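For example, here is a minimal sketch of adding depth to a view with the rotation3DEffect view modifier (the angle value is purely illustrative):

```swift
import SwiftUI

struct DepthCardView: View {
    var body: some View {
        Text("Hello visionOS")
            .padding()
            // Tilts the view around the x-axis to emphasize depth.
            .rotation3DEffect(.degrees(15), axis: (x: 1, y: 0, z: 0))
    }
}
```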
Your apps can display 2D and 3D content side by side in the same scene on visionOS. We can use the RealityKit framework to present 3D content in this case. For example, RealityKit provides us with the Model3D SwiftUI view, allowing us to display 3D models from the USDZ or reality files.
struct ContentView: View {
var body: some View {
NavigationSplitView {
List(Model.all) { model in
NavigationLink {
Model3D(named: model.name)
} label: {
Text(verbatim: model.name)
}
}
.navigationTitle("Models")
} detail: {
Model3D(named: "robot")
}
}
}
The Model3D view works similarly to the AsyncImage view and loads the model asynchronously. You can also use another variant of the Model3D initializer, which allows you to customize the model configuration and add a placeholder view.
struct ContentView: View {
var body: some View {
NavigationSplitView {
List(Model.all) { model in
NavigationLink {
Model3D(
url: Bundle.main.url(
forResource: model.name,
withExtension: "usdz"
)!
) { resolved in
resolved
.resizable()
.aspectRatio(contentMode: .fit)
} placeholder: {
ProgressView()
}
} label: {
Text(verbatim: model.name)
}
}
.navigationTitle("Models")
} detail: {
Model3D(named: "robot")
}
}
}
While presenting 3D content in your app, you can use the windowStyle modifier to enable volumetric display of your content. The volumetric style allows your content to grow in the third dimension to match the model’s size.
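A minimal sketch of a volumetric scene might look like this (the model name is hypothetical):

```swift
import SwiftUI
import RealityKit

@main
struct ModelViewerApp: App {
    var body: some Scene {
        WindowGroup {
            Model3D(named: "robot")
        }
        // Lets the window grow in the third dimension to fit the model.
        .windowStyle(.volumetric)
    }
}
```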
For more complex 3D scenes, we can use the RealityView and populate it with 3D content.
struct ContentView: View {
var body: some View {
NavigationSplitView {
List(Model.all) { model in
NavigationLink {
RealityView { content in
// load the content and add to the scene
}
} label: {
Text(verbatim: model.name)
}
}
.navigationTitle("Models")
} detail: {
Text("Choose something from the sidebar")
}
}
}
The third option on visionOS is the fully immersive experience, allowing us to dive into a 3D scene that hides everything around it and keeps the focus on your scene.
@main
struct MyApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}
ImmersiveSpace(id: "solar-system") {
SolarSystemView()
}
}
}
As you can see in the example above, we define a scene by using the ImmersiveSpace type. We can then present it by using the openImmersiveSpace environment value.
struct MyMenuView: View {
@Environment(\.openImmersiveSpace) private var openImmersiveSpace
var body: some View {
Button("Enjoy immersive space") {
Task {
await openImmersiveSpace(id: "solar-system")
}
}
}
}
We can also use the dismissImmersiveSpace environment value to dismiss the immersive space. Remember that you can only display one immersive space at a time.
struct SolarSystemView: View {
@Environment(\.dismissImmersiveSpace) private var dismiss
var body: some View {
// Immersive experience
Button("Dismiss") {
Task {
await dismiss()
}
}
}
}
Today, we learned the basics of the SwiftUI framework for the brand new visionOS platform. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
The StoreKitTest framework allows us to write tests for in-app product purchasing, refunding, and restoring features. You can cover almost every aspect of the in-app purchase with tests using the StoreKitTest framework. Before starting, you should create a StoreKit Configuration File.
To learn more about the basics of the StoreKit 2, take a look at my “Mastering StoreKit 2” post.
The StoreKitTest framework provides us with the SKTestSession type. Using an instance of the SKTestSession type, we can purchase in-app products, manage transactions, refund and expire subscriptions, etc.
Let’s start by creating a test case for our StoreKit-related features. I usually have a type called SettingsStore, which defines user configuration and handles in-app purchases. We will cover the in-app purchase management part of the SettingsStore with tests by using the StoreKitTest framework.
@MainActor final class StoreKitTests: XCTestCase {
func testProductPurchase() async throws {
let session = try SKTestSession(configurationFileNamed: "SugarBot Food Calorie Counter")
session.disableDialogs = true
session.clearTransactions()
}
}
As you can see in the example above, we initialize an instance of the SKTestSession type. Then, we call the clearTransactions function to remove all the transactions we may have stored from the previous launches. We also turn off dialogs to automate the purchase confirmation flow easily.
Now, we can use our SettingsStore type to purchase products and process subscription status. The SKTestSession type also allows us to simulate a purchase made outside the app. For example, it might be a purchased product with family sharing enabled.
@MainActor final class StoreKitTests: XCTestCase {
var store: SettingsStore!
override func setUp() {
store = SettingsStore()
}
func testProductPurchase() async throws {
let session = try SKTestSession(configurationFileNamed: "SugarBot Food Calorie Counter")
session.disableDialogs = true
session.clearTransactions()
try await session.buyProduct(identifier: "annual")
guard let product = try await Product.products(for: ["annual"]).first else {
return XCTFail("Can't load products...")
}
let status = try await product.subscription?.status ?? []
await store.processSubscriptionStatus(status)
XCTAssertFalse(store.activeSubscriptions.isEmpty)
}
}
As you can see in the example above, we use the buyProduct function on an instance of the SKTestSession type to simulate a purchase. We can also use the expireSubscription function of the SKTestSession type to expire ongoing subscriptions and verify how our app processes this data.
@MainActor final class StoreKitTests: XCTestCase {
    var store: SettingsStore!

    override func setUp() {
        store = SettingsStore()
    }

    func testExpiredProduct() async throws {
        let session = try SKTestSession(configurationFileNamed: "SugarBot Food Calorie Counter")
        session.disableDialogs = true
        session.clearTransactions()

        try await session.buyProduct(identifier: "annual")

        let activeProducts = try await Product.products(for: ["annual"])
        let activeStatus = try await activeProducts.first?.subscription?.status ?? []
        await store.processSubscriptionStatus(activeStatus)

        XCTAssertFalse(store.activeSubscriptions.isEmpty)

        try session.expireSubscription(productIdentifier: "annual")

        let expiredProducts = try await Product.products(for: ["annual"])
        let expiredStatus = try await expiredProducts.first?.subscription?.status ?? []
        await store.processSubscriptionStatus(expiredStatus)

        XCTAssertTrue(store.activeSubscriptions.isEmpty)
    }
}
The SKTestSession type also allows us to simulate product refunds using the refundTransaction function. Another exciting option is to test how the app reacts to transaction updates.
let transaction = try await session.buyProduct(identifier: "annual")
// verify purchase ...
try session.refundTransaction(identifier: UInt(transaction.id))
// verify refund ...
You can also use the askToBuyEnabled property to enable the ask-to-buy feature and then use the approveAskToBuyTransaction or declineAskToBuyTransaction functions to approve or decline purchases. In this case, the transaction should change from pending to successful.
session.askToBuyEnabled = true

await store.purchase("annual")
// verify purchase ...

let declined = store.pendingTransactions.first?.id ?? 0
try session.declineAskToBuyTransaction(identifier: UInt(declined))
// verify decline ...

await store.purchase("annual")
// verify purchase ...

let approved = store.pendingTransactions.first?.id ?? 0
try session.approveAskToBuyTransaction(identifier: UInt(approved))
// verify approval ...
As you can see in the example above, we use an instance of the SKTestSession type to simulate ask-to-buy and verify the behavior of our app while the purchase is approved or declined.
This week, we learned how to use the StoreKitTest framework to verify how our app handles in-app purchases and user flows like refunds, ask-to-buy, and subscription expiration. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
In the previous post, we discussed the map view’s camera position. Let me refresh your memory with a quick code example.
struct ContentView: View {
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 10_000_000)
    )

    var body: some View {
        Map(position: $position) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
            Marker("Seattle", monogram: Text("SE"), coordinate: .seattle)
            Marker("San Francisco", monogram: Text("SF"), coordinate: .sanFrancisco)
        }
        .onChange(of: position) {
            print(position.camera?.centerCoordinate)
            print(position.positionedByUser)
        }
    }
}
As you can see in the example above, we use the onChange view modifier to track changes in the two-way binding of the camera position. Unfortunately, we can’t read the camera position directly from the binding while the user drags the map. For this particular case, the MapKit API introduces the onMapCameraChange view modifier.
struct ContentView: View {
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 10_000_000)
    )

    var body: some View {
        Map(position: $position) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
            Marker("Seattle", monogram: Text("SE"), coordinate: .seattle)
            Marker("San Francisco", monogram: Text("SF"), coordinate: .sanFrancisco)
        }
        .onMapCameraChange(frequency: .continuous) { context in
            print(context.camera)
            print(context.region)
            print(context.rect)
        }
    }
}
In the example above, we use the onMapCameraChange view modifier to track camera changes as soon as the camera position changes. The MapKit API allows us to set the frequency of the onMapCameraChange listener by passing an instance of the MapCameraUpdateFrequency type.
The MapCameraUpdateFrequency enum provides us with two options: continuous and onEnd. The first delivers nearly real-time changes in the camera position. The second fires only when the camera movement ends, for example when the user finishes dragging.
struct ContentView: View {
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 10_000_000)
    )

    var body: some View {
        Map(position: $position) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
            Marker("Seattle", monogram: Text("SE"), coordinate: .seattle)
            Marker("San Francisco", monogram: Text("SF"), coordinate: .sanFrancisco)
        }
        .onMapCameraChange(frequency: .onEnd) { context in
            print(context.camera)
            print(context.region)
            print(context.rect)
        }
    }
}
The second parameter of the onMapCameraChange view modifier is the action closure, which can handle camera position updates. The action closure provides us with an instance of the MapCameraUpdateContext type defining the current map camera, rectangle, and region.
The new MapKit API also introduces the mapCameraKeyframeAnimator view modifier, allowing us to animate the map camera using a keyframe animator.
struct ContentView: View {
    @State private var trigger = false
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 10_000_000)
    )

    var body: some View {
        Map(position: $position) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
            Marker("Seattle", monogram: Text("SE"), coordinate: .seattle)
            Marker("San Francisco", monogram: Text("SF"), coordinate: .sanFrancisco)
        }
        .mapCameraKeyframeAnimator(trigger: trigger) { camera in
            KeyframeTrack(\MapCamera.centerCoordinate) {
                LinearKeyframe(.newYork, duration: 2)
                LinearKeyframe(.seattle, duration: 2)
                LinearKeyframe(.sanFrancisco, duration: 2)
            }

            KeyframeTrack(\MapCamera.distance) {
                LinearKeyframe(camera.distance, duration: 2)
                LinearKeyframe(camera.distance * 2, duration: 2)
                LinearKeyframe(camera.distance, duration: 2)
            }
        }
        .task {
            trigger.toggle()
        }
    }
}
As you can see in the example above, we use the mapCameraKeyframeAnimator view modifier with a trigger value. The trigger value allows us to animate the map camera whenever it changes.
The second parameter of the mapCameraKeyframeAnimator view modifier is the KeyframesBuilder closure, which allows us to define a set of keyframe tracks. Inside these tracks, we describe the transition states to iterate our animation.
As you can see, we can animate all the properties of the MapCamera type. In our example, we animate the map camera’s center location and distance. The KeyframesBuilder closure also provides us with the initial value of the map camera, allowing us to read the value of the map camera before animation.
The last topic to cover is the map selection feature. The Map view provides an initializer with a selection parameter, allowing us to pass a two-way binding for map content selection.
struct ContentView: View {
    @State private var selection: Int?
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 10_000_000)
    )

    var body: some View {
        Map(position: $position, selection: $selection) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
                .tag(1)
            Marker("Seattle", monogram: Text("SE"), coordinate: .seattle)
                .tag(2)
            Marker("San Francisco", monogram: Text("SF"), coordinate: .sanFrancisco)
                .tag(3)
        }
        .onChange(of: selection) {
            print("selection changed:", selection)
        }
    }
}
In the example above, we define a state property to store the currently selected value of the map. We also annotate our markers using the tag view modifier. Remember that the type of the selection property must match the type of the tag values you provide to the map content.
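To illustrate that requirement, here is a minimal sketch where switching the tags to String values forces the selection property to become String? as well. The tag values themselves are arbitrary labels chosen for illustration.

```swift
struct StringSelectionExample: View {
    // The selection type matches the tag type, String in this case
    @State private var selection: String?
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 10_000_000)
    )

    var body: some View {
        Map(position: $position, selection: $selection) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
                .tag("new-york")
            Marker("Seattle", monogram: Text("SE"), coordinate: .seattle)
                .tag("seattle")
        }
    }
}
```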
Today, we learned how to handle interactions on the map using the set of new view modifiers that are part of the rich new MapKit API in SwiftUI. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
The new MapKit API introduces the MapCameraBounds type, allowing us to limit the bounds of the map view. The MapCameraBounds type has a few initializers that we can use to create camera bounds from an instance of MKMapRect or MKCoordinateRegion.
extension CLLocationCoordinate2D {
    static let newYork: Self = .init(
        latitude: 40.730610,
        longitude: -73.935242
    )
}

let rect = MKMapRect(
    origin: MKMapPoint(.newYork),
    size: MKMapSize(width: 1, height: 1)
)

struct ContentView: View {
    var body: some View {
        Map(bounds: MapCameraBounds(centerCoordinateBounds: rect)) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
        }
    }
}
As you can see in the example above, we use the MKMapRect type to define the visible bounds of the map, which the user can’t leave through any interaction.
To create an instance of the MKMapRect type, we should call the initializer with origin and size parameters. We can use any instance of the CLLocationCoordinate2D type to define an origin point. The second parameter must be an instance of the MKMapSize, representing the width and height in map points.
Now, we can use an instance of the MKMapRect type to pass into the initializer of the MapCameraBounds type to limit our map to a particular rectangle. We can also allow users to zoom in or out to a limited amount of meters using maximumDistance and minimumDistance parameters of the MapCameraBounds initializer.
struct ContentView: View {
    var body: some View {
        Map(
            bounds: MapCameraBounds(
                centerCoordinateBounds: rect,
                minimumDistance: 10,
                maximumDistance: 100
            )
        ) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
        }
    }
}
You may have a set of coordinates you want to zoom in and limit to the rectangle displaying these markers. In this case, you can create an instance of the MKMapRect type per coordinate and use the union function on the MKMapRect type to create a rectangle including all the coordinates.
let coordinates: [CLLocationCoordinate2D] = [.newYork, .sanFrancisco, .seattle]

let rect = coordinates
    .map { MKMapRect(origin: .init($0), size: .init(width: 1, height: 1)) }
    .reduce(MKMapRect.null) { $0.union($1) }
We discussed how to use the MKMapRect type in tandem with the MapCameraBounds type to limit our map to a particular rectangle. The MKMapRect type uses map points to represent a rectangle. MKMapPoint uses the 2D projection of the map onto a flat surface to calculate x and y on the map. You can use the x, y, and coordinate properties of the MKMapPoint type to convert coordinates to map points and back.
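As a quick sketch of this conversion (reusing the .newYork coordinate defined earlier), the round trip between coordinates and map points goes through the MKMapPoint initializer and its coordinate property:

```swift
import MapKit

// Project a coordinate onto the flat 2D map surface
let point = MKMapPoint(.newYork)
print(point.x, point.y)

// Convert the map point back to latitude and longitude
let coordinate = point.coordinate
print(coordinate.latitude, coordinate.longitude)
```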
Whenever you want to use latitude and longitude deltas instead of map points, you can use the MKCoordinateRegion type. It provides functionality similar to MKMapRect but operates in different units.
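For example, a minimal sketch of camera bounds built from an MKCoordinateRegion might look like the following. It reuses the .newYork coordinate defined earlier, and the one-degree span is an arbitrary choice for illustration.

```swift
import MapKit
import SwiftUI

// A region spanning roughly one degree of latitude and longitude
let region = MKCoordinateRegion(
    center: .newYork,
    span: MKCoordinateSpan(latitudeDelta: 1, longitudeDelta: 1)
)

struct RegionBoundsExample: View {
    var body: some View {
        // MapCameraBounds also accepts a region instead of a map rect
        Map(bounds: MapCameraBounds(centerCoordinateBounds: region)) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
        }
    }
}
```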
MapKit provides the MapCameraPosition type that we can use for two-way binding of the currently visible camera position. We can create an instance of the MapCameraPosition type by passing an MKMapRect, MKCoordinateRegion, MKMapItem, CLLocationCoordinate2D, etc.
struct ContentView: View {
    @State private var position: MapCameraPosition = .camera(
        .init(centerCoordinate: .newYork, distance: 0)
    )

    var body: some View {
        Map(
            position: $position,
            bounds: MapCameraBounds(
                centerCoordinateBounds: rect,
                minimumDistance: 10,
                maximumDistance: 100
            )
        ) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
        }
        .onAppear {
            position = .camera(.init(centerCoordinate: .sanFrancisco, distance: 0))
        }
    }
}
We can also use the MapCameraPosition type to ask for a map view to follow the user location.
struct ContentView: View {
    @State private var position: MapCameraPosition = .userLocation(
        followsHeading: true,
        fallback: .rect(rect)
    )

    var body: some View {
        Map(
            position: $position,
            bounds: MapCameraBounds(
                centerCoordinateBounds: rect,
                minimumDistance: 10,
                maximumDistance: 100
            )
        ) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
        }
    }
}
As I said before, we can use MapCameraPosition for two-way binding, which means we can query an instance of the MapCameraPosition type to read some data.
struct ContentView: View {
    @State private var position: MapCameraPosition = .rect(
        MKMapRect(
            origin: MKMapPoint(.newYork),
            size: MKMapSize(width: 1, height: 1)
        )
    )

    var body: some View {
        Map(
            position: $position,
            bounds: MapCameraBounds(
                centerCoordinateBounds: rect,
                minimumDistance: 10,
                maximumDistance: 100
            )
        ) {
            Marker("New York", monogram: Text("NY"), coordinate: .newYork)
        }
        .onChange(of: position) {
            print(position.positionedByUser)
            print(position.camera)
            print(position.region)
            print(position.rect)
        }
        .onAppear {
            position = .camera(.init(centerCoordinate: .newYork, distance: 0))
        }
    }
}
As you can see in the example above, we use an instance of the MapCameraPosition type to access the recent camera, region, rectangle, etc., of the map. All of the mentioned properties are optional and only contain non-nil values when the corresponding kind of MapCameraPosition is in use.
There is also the positionedByUser property, a boolean value indicating whether the camera was positioned by the user or programmatically by the developer.
Today, we learned how to manage the map camera position using the new MapCameraPosition type, part of the new rich MapKit API. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!
The new MapKit API introduces the mapStyle view modifier, allowing us to customize the style of the data presented on the map.
struct ContentView: View {
    var body: some View {
        Map {
            // ...
        }
        .mapStyle(.imagery(elevation: .realistic))
    }
}
As you can see in the example above, we use the mapStyle view modifier with the imagery style and realistic elevation. Another option for the elevation parameter of the imagery style is flat.
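For completeness, the flat variant might look like this; the map content is elided as in the surrounding examples.

```swift
struct FlatImageryExample: View {
    var body: some View {
        Map {
            // ...
        }
        // Satellite imagery rendered without 3D elevation
        .mapStyle(.imagery(elevation: .flat))
    }
}
```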
SwiftUI provides us with a set of predefined and configurable map styles. In the previous example, we used a style called imagery. By default, the SwiftUI framework uses the standard style. The standard style allows us to configure the elevation of the map, the points of interest we want to include or exclude from the map, and whether to show traffic.
struct ContentView: View {
    var body: some View {
        Map {
            // ...
        }
        .mapStyle(
            .standard(
                elevation: .flat,
                pointsOfInterest: .excluding([.store]),
                showsTraffic: false
            )
        )
    }
}
Another option is the hybrid style, allowing us to display imagery, roads, and road names on the map. The hybrid style also configures elevation, traffic, and points of interest.
struct ContentView: View {
    var body: some View {
        Map {
            // ...
        }
        .mapStyle(
            .hybrid(
                elevation: .flat,
                pointsOfInterest: .including([.airport]),
                showsTraffic: true
            )
        )
    }
}
MapKit supports different types of interactions with the map. You can zoom, pan, pitch, and rotate the content on the map. By default, SwiftUI activates all of the available gestures. But you can easily limit available interactions to the list of the preferred ones.
struct ContentView: View {
    var body: some View {
        Map(interactionModes: [.pan, .pitch]) {
            // ...
        }
    }
}
Whenever you import MapKit alongside SwiftUI, you get access to particular SwiftUI views that you can use as map controls. These include the MapScaleView, MapCompass, MapPitchToggle, MapUserLocationButton, and MapZoomStepper views.
struct ContentView: View {
    var body: some View {
        Map {
            // ...
        }
        .mapControls {
            MapScaleView()
            MapCompass()
        }
    }
}
You can use these views in tandem with the mapControls view modifier to specify controls for any map instance sharing the same environment in the SwiftUI view hierarchy.
Whenever you place MapScaleView or MapCompass views inside the mapControls view modifier, you allow SwiftUI to control the placement of the map controls. In this case, SwiftUI decides the placement of the control depending on the platform running the app.
As you may have noticed, MapScaleView and other map controls are simple SwiftUI views, which means you can use them outside of the mapControls view modifier anywhere you want. In this case, to bind a map control to a particular map instance, you should use the mapScope view modifier.
struct MapScopeExample: View {
    @Namespace private var favoritesMap

    var body: some View {
        VStack {
            Map(scope: favoritesMap) {
                // favorite pins
            }

            HStack {
                MapScaleView(scope: favoritesMap)
                MapCompass(scope: favoritesMap)
            }
        }
        .mapScope(favoritesMap)
    }
}
As you can see in the example above, we use the Namespace property wrapper to generate a map identifier that binds controls to the map instance. You can also use the mapControlVisibility view modifier when you need to change the automatic visibility configuration to always visible or hidden.
struct MapScopeExample: View {
    @Namespace private var favoritesMap

    var body: some View {
        VStack {
            Map(scope: favoritesMap) {
                // favorite pins
            }

            HStack {
                MapScaleView(scope: favoritesMap)
                MapCompass(scope: favoritesMap)
                    .mapControlVisibility(.hidden)
            }
        }
        .mapScope(favoritesMap)
    }
}
Today, we learned how to customize map presentation in SwiftUI. SwiftUI provides a flexible and easy-to-use API for configuring map controls, styles, and interaction modes. I hope you enjoy the post. Feel free to follow me on Twitter and ask your questions related to this post. Thanks for reading, and see you next week!