Sleep Cycle SDK - iOS Documentation

Overview

The Sleep Cycle SDK for iOS enables developers to integrate advanced sleep analysis capabilities into their applications. The SDK provides real-time sleep tracking using audio and motion sensors, delivering detailed sleep insights, stage transitions, and detected events throughout the night.

System Requirements

Minimum OS Versions:
  • iOS: 16.0+
  • macOS: 13.0+
Swift:
  • Swift 5.9+
  • The SDK is written in Swift and provides a Swift-native API with async/await support

Installation

Swift Package Manager

Add the Sleep Cycle SDK to your project using Swift Package Manager:
  1. In Xcode, select File ▸ Add Package Dependencies
  2. Enter the package URL: https://github.com/MDLabs/sleepcycle-sdk-swift
  3. Select the version rule you want to use (the SDK follows Semantic Versioning)
  4. Add the package to your target
Alternatively, add it to your Package.swift:
dependencies: [
    .package(url: "https://github.com/MDLabs/sleepcycle-sdk-swift", from: "1.0.0")
]
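If you declare the dependency manually, also add the library product to your target. A minimal sketch, assuming the product name matches the SleepCycleSDK module imported throughout this guide:
targets: [
    .target(
        name: "MyApp",  // placeholder target name
        dependencies: [
            .product(name: "SleepCycleSDK", package: "sleepcycle-sdk-swift")
        ]
    )
]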
The SDK requires an API key for authorization. Contact Sleep Cycle to obtain credentials.

Prerequisites

Permissions

The SDK requires microphone access for audio-based sleep analysis. Add the following to your Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>We need access to the microphone to analyze your sleep patterns and detect snoring.</string>
For motion-based analysis, you may also need:
<key>NSMotionUsageDescription</key>
<string>We use motion data to track your sleep movements.</string>

Background modes

To ensure continuous sleep analysis throughout the night, enable the appropriate background modes in your app’s capabilities:
  1. Open your project in Xcode
  2. Select your app target
  3. Go to “Signing & Capabilities”
  4. Add “Background Modes” capability
  5. Enable “Audio”
Alternatively, add this to your Info.plist:
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
The SDK uses the audio background mode to maintain continuous audio processing during sleep analysis. Your app should also implement proper session management to prevent iOS from suspending the analysis process.
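For example, you might configure a recording-capable audio session before starting analysis. This is a minimal sketch; the exact category and options depend on your app, and the SDK may manage the session itself:
import AVFoundation

func configureAudioSession() throws {
    // A playAndRecord session keeps the microphone available in the
    // background; .mixWithOthers avoids interrupting other apps' audio.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .measurement,
                            options: [.mixWithOthers])
    try session.setActive(true)
}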

General

Call initialize() once during application startup, typically in your app delegate or in a SwiftUI App struct, so the SDK is ready before your views need it (see the sketch below). The SDK maintains its state across the app lifecycle, is thread-safe, and uses Swift concurrency (async/await) for all asynchronous operations.
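A minimal sketch of startup initialization in a SwiftUI App struct (MyApp and ContentView are placeholder names):
import SwiftUI
import SleepCycleSDK

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .task {
                    // Initialize once at launch so the SDK is ready
                    // before any view starts a session.
                    do {
                        _ = try await SleepCycleSdk.initialize(
                            logLevel: .info,
                            apiKey: "your-api-key-here"
                        )
                    } catch {
                        print("SDK initialization failed: \(error)")
                    }
                }
        }
    }
}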

Initialize the SDK

The SDK requires authentication before use. The initialization process validates your credentials and determines available features.
import SleepCycleSDK

Task {
    do {
        let features = try await SleepCycleSdk.initialize(
            logLevel: .info,
            apiKey: "your-api-key-here"
        )
        print("Authorized with features: \(features)")
    } catch {
        print("Initialization error: \(error)")
    }
}
Parameters:
  • logLevel: Controls SDK logging verbosity (.error, .info, .debug)
  • logger: Optional custom logger
  • apiKey: API key used for authorization
Return value:
  • Returns SleepAnalysisFeatures on success and transitions the SDK to SdkState.initialized.
  • Throws on authorization failure.
Access feature flags at runtime:
if SleepCycleSdk.isFeatureEnabled(\.audioEvents) {
    // Present snore/talk event UI
}

Start a sleep analysis session

Once initialized, you can start a sleep analysis session:
import SleepCycleSDK

Task {
    do {
        try await SleepCycleSdk.startAnalysis(
            config: SleepAnalysisConfig(
                useAudio: true,
                useAccelerometer: true
            ),
            at: Date(),
            using: nil
        )
    } catch {
        print("Failed to start analysis: \(error)")
    }
}
Parameters:
  • config: Configuration object specifying which sensors to use
  • at: The start time for the analysis (defaults to current time)
  • using: Optional DataSource (e.g., live or file replay)

Stop a session

To stop an active analysis session and retrieve the results:
Task {
    do {
        let result = try await SleepCycleSdk.stopAnalysis(at: Date())

        let statistics = result.statistics
        print("Total sleep duration: \(statistics.totalSleepDuration ?? 0)")
        print("Sleep efficiency: \(statistics.sleepEfficiency ?? 0)")

        if let snoreSessions = statistics.snoreSessions {
            for session in snoreSessions {
                print("Snoring session: \(session.interval)")
            }
        }

        if let events = result.events?[.snoring] {
            for event in events {
                print("Snoring detected: \(event.interval), p=\(event.probability)")
            }
        }
    } catch {
        print("Failed to stop analysis: \(error)")
    }
}

Real-time events

The SDK provides real-time updates via AsyncStream sequences:
Task {
    await withTaskGroup(of: Void.self) { group in
        group.addTask {
            for await events in SleepCycleSdk.eventStream {
                for event in events {
                    print("Event: \(event.type) from \(event.source) with p=\(event.probability)")
                }
            }
        }

        group.addTask {
            for await state in SleepCycleSdk.stateStream {
                print("SDK state: \(state)")
            }
        }
    }
}

Audio clips

The SDK can capture short audio recordings when specific sleep events are detected, such as snoring, sleep talking, or coughing. Create an AudioEventListener using AudioClipsConfig and AudioClipsReceiver:
import SleepCycleSDK

let listener = SleepCycleSdk.createAudioClipsProducer(
    audioClipsConfig: AudioClipsConfig(
        activeTypes: [
            .snoring: EventTypeConfig(minDuration: 0.5),
            .talking: EventTypeConfig(minDuration: 0.5)
        ],
        clipLength: 5.0
    ),
    receiver: MyAudioHandler()
)

try await SleepCycleSdk.startAnalysis(
    config: SleepAnalysisConfig(useAudio: true),
    eventListeners: [listener]
)
Each AudioClip contains audio samples ([Float]), metadata, and event context. Use custom listeners (conforming to AudioEventListener) for advanced audio workflows.
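MyAudioHandler above is a placeholder. Below is a hypothetical sketch of a receiver, assuming AudioClipsReceiver declares a single clip-delivery requirement; the actual protocol shape and AudioClip property names may differ, so check the SDK's generated interface:
import SleepCycleSDK

// Hypothetical conformance: the method name and AudioClip properties
// here are assumptions, not confirmed SDK API.
final class MyAudioHandler: AudioClipsReceiver {
    func receive(_ clip: AudioClip) {
        // Per the docs, each clip carries [Float] samples plus
        // metadata and event context.
        print("Received clip with \(clip.samples.count) samples")
    }
}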

Monitor SDK state

Monitor SDK state changes using the AsyncStream:
import SleepCycleSDK

Task {
    for await state in SleepCycleSdk.stateStream {
        switch state {
        case .uninitialized:
            print("SDK not initialized")
        case .initialized:
            print("SDK ready")
        case .running:
            print("Analysis in progress")
        }
    }
}
Get the current state synchronously:
let currentState = SleepCycleSdk.currentState
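For example, you might gate session start on the current state. A minimal sketch, relying on the documented defaults for startAnalysis's at: and using: parameters:
import SleepCycleSDK

Task {
    // Start analysis only once the SDK has finished initializing.
    if case .initialized = SleepCycleSdk.currentState {
        do {
            try await SleepCycleSdk.startAnalysis(
                config: SleepAnalysisConfig(useAudio: true, useAccelerometer: true)
            )
        } catch {
            print("Could not start analysis: \(error)")
        }
    }
}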

Public data types

  • SleepAnalysisConfig: Configures sensors used during analysis.
    • useAudio: Bool
    • useAccelerometer: Bool
  • SleepAnalysisFeatures: Feature flags returned on successful initialization.
    • sleepStaging: Bool, smartAlarm: Bool, audioEvents: Bool, snoringDetection: Bool
  • AnalysisResult: Returned by stopAnalysis(at:).
    • startDate: Date, endDate: Date, statistics: SleepStatistics, events: [EventType: [Event]]?
  • SleepStatistics: Aggregated KPIs computed for the session.
    • sessionInterval: DateInterval
    • totalSleepDuration: TimeInterval?, sleepOnsetLatency: TimeInterval?, sleepEfficiency: Double?
    • finalWakeTime: Date?, numberOfAwakenings: Int?
    • sleepStages: [(SleepStage, DateInterval)]?, sleepStageDurations: [SleepStage: TimeInterval]?
    • snoreSessions: [SnoreSession]?, snoreTime: TimeInterval?
  • EventType: The kind of detected event (e.g., .snoring, .talking, .movement).
  • Event: A detected event with timing and confidence.
    • type: EventType, interval: DateInterval, probability: Double, source: EventSource
  • SdkState: SDK lifecycle state.
    • .uninitialized, .initialized, .running
  • SleepCycleSdkError: Error cases thrown by SDK APIs.
  • DataSource: Protocol for providing custom audio/motion input to the SDK.
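To illustrate how these types compose, a short sketch (using only the fields listed above) that summarizes a finished session:
import SleepCycleSDK

func summarize(_ result: AnalysisResult) {
    let statistics = result.statistics

    // Per-stage time, from the optional [(SleepStage, DateInterval)] list.
    if let stages = statistics.sleepStages {
        for (stage, interval) in stages {
            print("\(stage): \(Int(interval.duration / 60)) min")
        }
    }

    if let awakenings = statistics.numberOfAwakenings {
        print("Awakenings: \(awakenings)")
    }
}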