visionOS Migration: What Every iOS Developer Gets Wrong

Sebastian Kotarski

January 15, 2025
3 min read

The visionOS Migration Trap

Every iOS developer I talk to thinks migrating their ARKit app to visionOS will take 2-3 weeks. In my experience, the actual timeline is 2-3 months — unless you know exactly where the pitfalls are.

Having migrated several complex ARKit apps to visionOS, including one that was featured by Apple at launch, I've compiled the five critical differences that catch every developer off guard.

1. There Is No ARSession

This is the biggest mental shift. On iOS, everything revolves around ARSession — you configure it, start it, receive delegate callbacks, and process frames.

On visionOS, there is no ARSession. The system handles all world tracking automatically. You interact with the world through ARKitSession and its data providers, which work fundamentally differently:

// iOS: You control the session
let session = ARSession()
let config = ARWorldTrackingConfiguration()
session.run(config)

// visionOS: You request capabilities
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

This means all your delegate-based code needs to be rewritten to use async/await patterns.
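That shift can be sketched in a few lines — a minimal loop over the provider's anchor-update stream, assuming a session like the one above is already running (handleAnchor is a hypothetical placeholder for your own logic):

```swift
import ARKit

// Sketch: consuming world-anchor updates as an async sequence
// instead of ARSessionDelegate callbacks.
func observeAnchors(_ worldTracking: WorldTrackingProvider) async {
    for await update in worldTracking.anchorUpdates {
        switch update.event {
        case .added, .updated:
            handleAnchor(update.anchor)  // react to new or moved anchors
        case .removed:
            break                        // clean up any attached content
        }
    }
}
```

Each delegate method you had on iOS typically becomes one of these long-running async loops, usually launched from a task when your immersive content appears.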

2. Hand Tracking Replaces Touch

On visionOS, there's no touch screen. Users interact with your app through gaze, hand gestures, and voice. Every UITapGestureRecognizer and touchesBegan handler is irrelevant.

You need to implement spatial gestures:

RealityView { content in
    content.add(modelEntity)
}
.gesture(
    SpatialTapGesture()
        .targetedToEntity(modelEntity)
        .onEnded { event in
            handleTap(on: event.entity)
        }
)

3. Windows, Volumes, and Spaces

visionOS has three container types that have no iOS equivalent:

  • Windows: 2D content (like regular SwiftUI views)
  • Volumes: 3D content in a bounded box
  • Immersive Spaces: Full 3D content in the user's environment

Your AR experience likely needs an Immersive Space, but your UI should live in Windows. This architectural split requires rethinking your entire view hierarchy.
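One way to express that split is in your App's scene declaration — a minimal sketch, where ControlPanelView and ImmersiveARView are hypothetical stand-ins for your own views:

```swift
import SwiftUI

@main
struct MigratedApp: App {
    var body: some Scene {
        // 2D UI lives in a regular window
        WindowGroup {
            ControlPanelView()
        }
        // The former ARKit scene moves into an immersive space,
        // opened on demand via the openImmersiveSpace environment action
        ImmersiveSpace(id: "ar") {
            ImmersiveARView()
        }
    }
}
```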

4. Performance Constraints Are Different

Vision Pro renders a stereo pair at 90fps, so every frame is drawn twice, once per eye. The thermal budget is also much tighter than an iPhone's. Your performance optimization strategies need to change:

  • Texture sizes must be smaller (the headset has less GPU memory available for your app)
  • Draw calls matter more (aim for under 100 per frame)
  • Physics simulations need to be lighter
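As one illustration of keeping draw calls down, reusing a single mesh and material across many entities lets RealityKit render them far more cheaply than unique mesh/material pairs — a sketch, assuming the entities can share an appearance:

```swift
import RealityKit

// Sketch: 50 boxes sharing one mesh and one material, which RealityKit
// can batch, versus 50 separately-allocated resources.
let sharedMesh = MeshResource.generateBox(size: 0.1)
let sharedMaterial = SimpleMaterial(color: .gray, roughness: 0.5, isMetallic: false)
let boxes = (0..<50).map { _ in
    ModelEntity(mesh: sharedMesh, materials: [sharedMaterial])
}
```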

5. Testing Is Fundamentally Different

The visionOS simulator is useful for layout and basic logic, but it cannot simulate:

  • Real hand tracking accuracy
  • Actual device thermal limits
  • True spatial audio behavior
  • Real-world lighting estimation

You need device time. And Vision Pro developer kits are expensive and limited. Plan accordingly.

My Recommendation

If you're considering a visionOS migration, start with a thorough architecture review. Identify which parts of your iOS app can be reused (business logic, networking, data models) and which need complete rewrites (UI, AR session management, interaction handling).

The companies that succeed are the ones that plan the migration properly, rather than trying to port code file by file.

Need help planning your migration? Let's discuss your project.

Tags

visionOS
ARKit
spatial computing
migration
