
There’s a moment every motorsport fan knows. The camera cuts away from the race leader, zooms into the pit lane, and suddenly you’re watching a completely different operation: four people changing tyres, someone adjusting the front wing, a fuelling rig disconnecting in a blur of motion, all in under three seconds. Then the car rejoins the track, and the driver carries on as if nothing happened.
That’s what side effects feel like in UI development.
The user sees a smooth screen transition, a list that populates, a dialog that appears at just the right moment. What they don’t see is the coroutine that launched in the background, the listener that was cleaned up on exit, the scope that held everything together just long enough to be useful. All that happens off-screen. In the pit lane.
In Kotlin Multiplatform, there’s an extra layer to this. Your business logic lives in commonMain — one engine powering two cars. But the cockpit looks different depending on which car you’re driving. Jetpack Compose and SwiftUI both need to manage those off-screen moments, and they do it in ways that are sometimes equivalent, sometimes parallel, and sometimes just… different.
This article is about those differences. Specifically, what your Compose and SwiftUI UIs are each doing to talk to the same shared ViewModel — and why the controls in each cockpit are shaped the way they are.
## The scenario
To keep things concrete, we’ll follow a single screen through four moments:
- The screen appears — time to load some data
- The screen disappears — time to clean up
- The user taps “Refresh” — time to reload on demand
- Something goes wrong — time to show a dialog
Same screen, same shared ViewModel, two platforms. Let’s start with the engine.
## The shared ViewModel
Both UIs will consume this ViewModel, written once in commonMain:
```kotlin
// commonMain
class ItemListViewModel(
    private val repository: ItemRepository, // assumed injected; interface elided
) : ViewModel() {

    private val _uiState = MutableStateFlow<UiState>(UiState.Loading)
    val uiState: StateFlow<UiState> = _uiState.asStateFlow()

    suspend fun loadItems() {
        _uiState.value = UiState.Loading
        try {
            val items = repository.getItems()
            _uiState.value = UiState.Success(items)
        } catch (e: Exception) {
            _uiState.value = UiState.Error(e.message ?: "Unknown error")
        }
    }

    fun onErrorDismissed() {
        _uiState.value = UiState.Loading
    }
}

sealed class UiState {
    object Loading : UiState()
    data class Success(val items: List<String>) : UiState()
    data class Error(val message: String) : UiState()
}
```
The ViewModel does the work. It manages state, handles errors, exposes a single StateFlow that both platforms can observe. The UI layer just needs to know when to start, when to stop, and when to react.
That’s the job of side effects.
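That reactive contract can be sketched without any UI framework at all: rendering is just an exhaustive mapping over the sealed `UiState`. The `statusLine` helper below is hypothetical, purely for illustration; `UiState` is repeated from above so the snippet stands on its own.

```kotlin
// UiState repeated from the shared ViewModel so this compiles standalone.
sealed class UiState {
    object Loading : UiState()
    data class Success(val items: List<String>) : UiState()
    data class Error(val message: String) : UiState()
}

// Hypothetical helper: both Compose and SwiftUI are, at bottom,
// doing exactly this kind of exhaustive state-to-view mapping.
fun statusLine(state: UiState): String = when (state) {
    is UiState.Loading -> "Loading…"
    is UiState.Success -> "${state.items.size} items loaded"
    is UiState.Error -> "Error: ${state.message}"
}

fun main() {
    println(statusLine(UiState.Success(listOf("one", "two"))))
}
```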
A quick note before we start the engine: in a real KMP project, you’d typically call loadItems() inside the ViewModel’s init block, so data starts loading the moment the ViewModel is created — no UI trigger needed. We’re calling it from the UI here purely to demonstrate how side effects work. Don’t let the example drive your architecture. You’ll also notice loadItems() is declared as suspend — that’s intentional for this demo, since we’re calling it from within LaunchedEffect and .task, both of which provide a coroutine scope. In a production ViewModel triggered from init, you’d use viewModelScope.launch internally and keep the function non-suspending.
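For reference, the production shape described above might look something like this sketch, assuming the multiplatform androidx.lifecycle ViewModel and its viewModelScope (the repository dependency is hypothetical):

```kotlin
// Sketch of the production variant: loading starts in init, inside
// viewModelScope, and loadItems() is no longer a suspend function.
class ItemListViewModel(
    private val repository: ItemRepository, // hypothetical injected dependency
) : ViewModel() {

    private val _uiState = MutableStateFlow<UiState>(UiState.Loading)
    val uiState: StateFlow<UiState> = _uiState.asStateFlow()

    init {
        loadItems() // kicks off as soon as the ViewModel exists, no UI trigger
    }

    fun loadItems() {
        viewModelScope.launch {
            _uiState.value = UiState.Loading
            _uiState.value = try {
                UiState.Success(repository.getItems())
            } catch (e: Exception) {
                UiState.Error(e.message ?: "Unknown error")
            }
        }
    }
}
```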
## Moment 1: green light — screen appears
The race starts. The screen enters composition on Android, or appears in the SwiftUI view hierarchy on iOS. Both UIs need to tell the ViewModel: go.
### Compose — LaunchedEffect

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()

    LaunchedEffect(Unit) {
        viewModel.loadItems()
    }

    // ... rest of UI
}
```
LaunchedEffect launches a coroutine tied to the composable’s lifecycle. The Unit key means it runs exactly once when the composable enters composition. When the composable leaves, the coroutine is automatically cancelled — you don’t manage that yourself.
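Worth noting: the key doesn’t have to be Unit. Pass a value instead, and LaunchedEffect cancels the running coroutine and restarts whenever that value changes. A sketch, assuming a hypothetical query parameter:

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel, query: String) {
    // Cancels and restarts every time `query` changes;
    // with Unit as the key it would run exactly once instead.
    LaunchedEffect(query) {
        viewModel.loadItems() // hypothetically: load items matching `query`
    }

    // ... rest of UI
}
```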
### SwiftUI — .task

```swift
struct ItemListView: View {
    @StateObject var viewModel: ItemListViewModelWrapper

    var body: some View {
        // ... rest of UI
        .task {
            await viewModel.loadItems()
        }
    }
}
```
.task is SwiftUI’s equivalent, introduced in iOS 15. It launches an async task tied to the view’s lifetime, starts when the view appears, and is cancelled automatically when it disappears. The symmetry with LaunchedEffect is real.
A word on the iOS side: SwiftUI can’t consume a commonMain ViewModel directly. You’ll need a thin wrapper that bridges the shared ViewModel to SwiftUI’s observation system, typically conforming to ObservableObject and republishing the StateFlow as @Published properties. The wrapper is intentionally omitted here to keep the focus on side effects — but it’s a topic worthy of its own article. What matters for this discussion is that both UIs are reacting to the same underlying state, however it arrives.
### Under the hood
This one maps cleanly. Both are lifecycle-aware async launchers that handle their own cancellation. The meaningful difference is in what they’re tied to: LaunchedEffect is scoped to Compose’s composition, .task to SwiftUI’s view lifecycle. In a KMP context, that distinction rarely matters — both fire at the right moment, both clean up after themselves.
Verdict: clean equivalent. Different cockpit, same pedal.
## Moment 2: safety car — screen leaves
The user navigates away. Maybe a sensor has been registered, a listener attached, a callback hooked up. Time to clean house.
In a well-architected KMP project, the ViewModel handles most of its own cleanup — cancelling coroutines, closing flows — so you may not need heavy teardown on the UI side. But there are still cases where the UI itself registers something and needs to unregister it: analytics observers, platform-specific listeners, logging hooks.
### Compose — DisposableEffect

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    DisposableEffect(Unit) {
        val analyticsObserver = AnalyticsObserver()
        analyticsObserver.start()

        onDispose {
            analyticsObserver.stop()
        }
    }

    // ... rest of UI
}
```
DisposableEffect keeps setup and teardown together in one block. The onDispose lambda runs when the composable leaves composition. If you register something, you unregister it in the same place — hard to forget one without the other.
### SwiftUI — .onAppear / .onDisappear

```swift
struct ItemListView: View {
    @StateObject var viewModel: ItemListViewModelWrapper
    private let analyticsObserver = AnalyticsObserver()

    var body: some View {
        // ... rest of UI
        .onAppear {
            analyticsObserver.start()
        }
        .onDisappear {
            analyticsObserver.stop()
        }
    }
}
```
SwiftUI has no single equivalent to DisposableEffect. You get two separate modifiers: .onAppear and .onDisappear. Functionally they achieve the same result, but setup and teardown are physically separated. Nothing in the language enforces that they stay in sync.
### Under the hood
This is the first genuine gap. DisposableEffect is designed around the idea that if you open something, you close it in the same breath. SwiftUI trusts you to keep .onAppear and .onDisappear consistent yourself.
In practice, for most KMP screens, this gap is narrower than it looks. If your ViewModel is well-behaved — and it should be — the heavy lifting of cancellation happens in commonMain, not in the UI. The UI teardown is a secondary concern.
Verdict: partial equivalent. SwiftUI needs two modifiers where Compose needs one. In KMP, the ViewModel often closes the gap anyway.
## Moment 3: manual override — user triggers a refresh
Lifecycle-driven side effects are one thing. But what about user-driven actions? The user taps “Refresh.” You need to launch a coroutine from a button callback outside of any composable’s natural lifecycle entry point.
### Compose — rememberCoroutineScope

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    val scope = rememberCoroutineScope()

    Button(onClick = {
        scope.launch {
            viewModel.loadItems()
        }
    }) {
        Text("Refresh")
    }
}
```
rememberCoroutineScope gives you a CoroutineScope tied to the composable’s lifecycle. You hold it, you decide when to launch, and it cancels automatically when the composable leaves composition — so if the user taps “Refresh” and immediately navigates away, the coroutine won’t outlive the screen.
### SwiftUI — Task { } in a button action

```swift
struct ItemListView: View {
    @StateObject var viewModel: ItemListViewModelWrapper

    var body: some View {
        Button("Refresh") {
            Task {
                await viewModel.loadItems()
            }
        }
    }
}
```
SwiftUI doesn’t give you an explicit scope. Instead, you wrap the async call in Task, which creates an unstructured task. Unlike .task, this isn’t tied to the view lifecycle. If needed, you can keep a reference to the task and cancel it manually — but for simple user-triggered actions, the syntax stays concise without requiring you to manage a scope object.
### Under the hood
Both arrive at the same outcome: an async call launched from a user action. But the philosophies differ. Compose hands you the scope explicitly — you’re aware of it, you use it, you know it exists. SwiftUI wraps the async call in Task and keeps the mechanics out of sight. This is consistent with SwiftUI’s general design: prioritise brevity, accept some opacity in exchange.
Neither is wrong. Compose’s explicitness makes the mechanics visible. SwiftUI’s ergonomics keep the code shorter, at the cost of some transparency.
Verdict: functional equivalent. Compose gives you the throttle; SwiftUI keeps it mostly hidden.
## Moment 4: red flag — something goes wrong
The ViewModel emits an error state. Both UIs need to react: show a dialog, let the user dismiss it, notify the ViewModel.
This is the most instructive comparison in the article, because SwiftUI’s approach looks fundamentally different from Compose’s — and it isn’t.
### Compose — remember + AlertDialog

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()
    var errorMessage by remember { mutableStateOf<String?>(null) }

    LaunchedEffect(uiState) {
        if (uiState is UiState.Error) {
            errorMessage = (uiState as UiState.Error).message
        }
    }

    errorMessage?.let { message ->
        AlertDialog(
            onDismissRequest = {
                errorMessage = null
                viewModel.onErrorDismissed()
            },
            confirmButton = {
                TextButton(
                    onClick = {
                        errorMessage = null
                        viewModel.onErrorDismissed()
                    }
                ) {
                    Text("OK")
                }
            },
            title = { Text("Something went wrong") },
            text = { Text(message) }
        )
    }

    // ... rest of UI
}
```
The machinery is in plain sight. You observe state, derive a condition from it, and render the dialog conditionally. When errorMessage is non-null, the dialog appears. When onErrorDismissed() flips the state back, the dialog disappears. You can trace every step.
For clarity: this example intentionally mirrors the SwiftUI approach by introducing local UI state. In a typical Compose implementation you might derive the dialog condition directly from uiState instead of copying it into remember. The goal here is to demonstrate how local state works in Compose, not to prescribe an architecture.
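For comparison, that more direct variant — deriving the dialog straight from uiState, with no local copy — might look like this sketch:

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()

    // No local state: the dialog exists exactly while uiState is Error,
    // and onErrorDismissed() flipping the state is what dismisses it.
    (uiState as? UiState.Error)?.let { error ->
        AlertDialog(
            onDismissRequest = { viewModel.onErrorDismissed() },
            confirmButton = {
                TextButton(onClick = { viewModel.onErrorDismissed() }) {
                    Text("OK")
                }
            },
            title = { Text("Something went wrong") },
            text = { Text(error.message) }
        )
    }

    // ... rest of UI
}
```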
### SwiftUI — .alert(isPresented:)

```swift
struct ItemListView: View {
    @StateObject var viewModel: ItemListViewModelWrapper
    @State private var showError = false
    @State private var errorMessage = ""

    var body: some View {
        // ... rest of UI
        .onChange(of: viewModel.uiState) { state in
            if case .error(let message) = state {
                errorMessage = message
                showError = true
            }
        }
        .alert("Something went wrong", isPresented: $showError) {
            Button("OK") {
                viewModel.onErrorDismissed()
                showError = false
            }
        } message: {
            Text(errorMessage)
        }
    }
}
```
.alert(isPresented:) feels like magic. But look at what’s actually happening: .onChange watches the ViewModel’s state, sets a local boolean when an error arrives, and .alert observes that boolean and re-renders when it flips. That’s the same mechanism as Compose — state change triggers a UI update — wrapped in a modifier that hides the conditional render.
SwiftUI made the dialog a first-class modifier. Compose made the dialog a first-class composable. The underlying model is identical.
### Under the hood
Both platforms are doing the same thing: a piece of shared state drives a conditional UI element. The ViewModel, living in commonMain and unaware of either platform, emits an error. Both UIs observe it, derive a boolean from it, and render a dialog in response.
Compose shows you every gear change. SwiftUI hands you a steering wheel with fewer visible controls. Same engine. Different dashboard.
Verdict: same mechanism, different abstraction level. The magic in SwiftUI’s .alert is Compose’s explicit state flow, gift-wrapped.
## The pit wall view
Here’s how the four moments map across both platforms:
| Moment | Compose | SwiftUI |
|---|---|---|
| Screen appears | LaunchedEffect | .task |
| Screen leaves | DisposableEffect | .onAppear + .onDisappear |
| User action | rememberCoroutineScope | Task { } in button |
| Error dialog | collectAsState + AlertDialog | .onChange + .alert |
## What’s not here — and why
You might have noticed that SideEffect and produceState didn’t make the race.
SideEffect runs on every successful recomposition — it’s designed for syncing Compose state with non-Compose systems. It has no meaningful SwiftUI parallel, and in a KMP project where the ViewModel manages shared state, you’ll rarely reach for it.
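For completeness, a sketch of what SideEffect is for: publishing the latest composed value to a non-Compose system. The Analytics sink here is hypothetical.

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()

    // Runs after every successful recomposition: no coroutine, no keys.
    SideEffect {
        Analytics.lastRenderedState = uiState::class.simpleName // hypothetical sink
    }

    // ... rest of UI
}
```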
produceState converts non-Compose state into Compose state. In a pure Compose app it has a role, but in KMP your ViewModel is already producing state via StateFlow. Using produceState on top of that would be a detour.
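And this is what the produceState detour would look like if you took it: hand-rolling the collection that collectAsStateWithLifecycle already provides.

```kotlin
@Composable
fun ItemListScreen(viewModel: ItemListViewModel) {
    // Redundant in KMP: this manually converts the ViewModel's StateFlow
    // into Compose state, which collectAsStateWithLifecycle does for you.
    val uiState by produceState<UiState>(initialValue = UiState.Loading) {
        viewModel.uiState.collect { value = it }
    }

    // ... rest of UI
}
```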
Both are worth knowing. Neither belongs in this race.
## Same finish line
KMP doesn’t make the platforms the same — it never claimed to. What it does is move the business logic above the platform boundary, so the engine is shared even when the cockpit isn’t.
Your ItemListViewModel in commonMain never changed. It emitted state, responded to function calls, and let the UI figure out the controls. Compose and SwiftUI have different side effect APIs, different levels of abstraction, different philosophies about how much machinery to show you. But they’re reacting to the same events and serving the same ViewModel.
Same track. Different cockpits.
The KMP Bits app is available on App Store and Google Play — built entirely with KMP.