Sleep Cycle SDK - iOS Documentation
Overview
The Sleep Cycle SDK for iOS enables developers to integrate advanced sleep analysis capabilities into their applications. The SDK provides real-time sleep tracking using audio and motion sensors, delivering detailed sleep insights, stage transitions, and detected events throughout the night.
System Requirements
Minimum iOS Version:
Swift:
- Swift Version: 5.9+
- The SDK is written in Swift and provides a Swift-native API with async/await support
Installation
Swift Package Manager
Add the Sleep Cycle SDK to your project using Swift Package Manager:
- In Xcode, select File → Add Package Dependencies
- Enter the package URL:
https://github.com/MDLabs/sleepcycle-sdk-swift
- Select the version rule you want to use (the SDK follows Semantic Versioning)
- Add the package to your target
Alternatively, add it to your Package.swift:
dependencies: [
.package(url: "https://github.com/MDLabs/sleepcycle-sdk-swift", from: "1.0.0")
]
The SDK requires an API key for authorization. Contact Sleep Cycle to obtain credentials.
Prerequisites
Permissions
The SDK requires microphone access for audio-based sleep analysis. Add the following to your Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>We need access to the microphone to analyze your sleep patterns and detect snoring.</string>
For motion-based analysis, you may also need:
<key>NSMotionUsageDescription</key>
<string>We use motion data to track your sleep movements.</string>
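Beyond the Info.plist entries, the microphone permission must also be granted at runtime before analysis can use audio. A minimal sketch using Apple's AVAudioSession API (this helper is not part of the SDK; on iOS 17+ you may prefer AVAudioApplication.requestRecordPermission):

```swift
import AVFoundation

// Ask the user for microphone access before starting a sleep
// analysis session. The completion handler runs on the main queue.
func requestMicrophoneAccess(completion: @escaping (Bool) -> Void) {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        DispatchQueue.main.async {
            completion(granted)
        }
    }
}
```

Call this (and check the result) before starting an audio-based session; starting analysis without the permission will produce no audio data.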
Background modes
To ensure continuous sleep analysis throughout the night, enable the appropriate background modes in your app’s capabilities:
- Open your project in Xcode
- Select your app target
- Go to “Signing & Capabilities”
- Add “Background Modes” capability
- Enable “Audio”
Alternatively, add this to your Info.plist:
<key>UIBackgroundModes</key>
<array>
<string>audio</string>
</array>
The SDK uses the audio background mode to maintain continuous audio processing during sleep analysis. Your app should also configure and activate its audio session appropriately so that iOS does not suspend the analysis while the app is in the background.
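One common way to keep capture alive in the background is to configure and activate an AVAudioSession before starting analysis. A sketch (the category and options shown are assumptions to adapt to your app's audio needs, not SDK requirements):

```swift
import AVFoundation

// Configure the shared audio session for background recording.
// .mixWithOthers lets other apps' audio keep playing during analysis.
func configureAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.mixWithOthers])
    try session.setActive(true)
}
```

Activate the session before starting analysis and deactivate it after stopping, so the system can reclaim audio resources.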
General
It’s typical to call initialize() during application startup — for example, in your AppDelegate or in a SwiftUI App struct’s initializer. This ensures the SDK is ready when your views need it. The SDK maintains its state across the app lifecycle.
The SDK is thread-safe and uses Swift concurrency (async/await) for all asynchronous operations.
Initialize the SDK
The SDK requires authentication before use. The initialization process validates your credentials and determines available features.
import SleepCycleSDK
Task {
do {
let features = try await SleepCycleSdk.initialize(
logLevel: .info,
apiKey: "your-api-key-here"
)
print("Authorized with features: \(features)")
} catch {
print("Initialization error: \(error)")
}
}
Parameters:
logLevel: Controls SDK logging verbosity (.error, .info, .debug)
logger: Optional custom logger
apiKey: API key used for authorization
Return value:
- Returns SleepAnalysisFeatures on success and transitions to SdkState.initialized.
- Throws on authorization failure.
Access feature flags at runtime:
if SleepCycleSdk.isFeatureEnabled(\.audioEvents) {
// Present snore/talk event UI
}
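Tying the startup guidance from the General section together, a SwiftUI app might run initialization from its App struct so the SDK is ready before tracking begins (ContentView and the hard-coded key are placeholders):

```swift
import SwiftUI
import SleepCycleSDK

@main
struct SleepApp: App {
    init() {
        // Fire-and-forget initialization; views can observe
        // SleepCycleSdk.stateStream to learn when the SDK is ready.
        Task {
            do {
                _ = try await SleepCycleSdk.initialize(
                    logLevel: .error,
                    apiKey: "your-api-key-here" // placeholder
                )
            } catch {
                print("SDK initialization failed: \(error)")
            }
        }
    }

    var body: some Scene {
        WindowGroup { ContentView() }
    }
}
```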
Start a sleep analysis session
Once initialized, you can start a sleep analysis session:
import SleepCycleSDK
Task {
do {
try await SleepCycleSdk.startAnalysis(
config: SleepAnalysisConfig(
useAudio: true,
useAccelerometer: true
),
at: Date(),
using: nil
)
} catch {
print("Failed to start analysis: \(error)")
}
}
Parameters:
config: Configuration object specifying which sensors to use
at: The start time for the analysis (defaults to current time)
using: Optional DataSource (e.g., live or file replay)
eventListeners: Optional array of AudioEventListener instances that receive callbacks during audio analysis. Use this to capture audio samples and events in real-time
Resume a session
The SDK supports resuming a previously started analysis session. This is useful when your app restarts or the background task is terminated by the system.
if SleepCycleSdk.isResumePossible() {
try await SleepCycleSdk.resumeAnalysis()
}
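For example, an app could attempt to resume automatically whenever it becomes active again. A sketch (the scene-phase wiring is an assumption; the SDK calls are as documented above):

```swift
import SwiftUI
import SleepCycleSDK

struct SleepTrackingView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Tracking sleep…")
            .onChange(of: scenePhase) { phase in
                guard phase == .active else { return }
                Task {
                    // Pick up a session interrupted by a relaunch or
                    // background termination, if one exists.
                    if SleepCycleSdk.isResumePossible() {
                        try? await SleepCycleSdk.resumeAnalysis()
                    }
                }
            }
    }
}
```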
Stop a session
To stop an active analysis session and retrieve the results:
Task {
do {
let result = try await SleepCycleSdk.stopAnalysis(at: Date())
let statistics = result.statistics
print("Total sleep duration: \(statistics.totalSleepDuration ?? 0)")
print("Sleep efficiency: \(statistics.sleepEfficiency ?? 0)")
if let snoreSessions = statistics.snoreSessions {
for session in snoreSessions {
print("Snoring session: \(session.interval)")
}
}
if let events = result.events?[.snoring] {
for event in events {
print("Snoring detected: \(event.interval), p=\(event.probability)")
}
}
for breathingRate in result.breathingRates {
print("Breathing rate: \(breathingRate.bpm) bpm at \(breathingRate.timestamp)")
}
} catch {
print("Failed to stop analysis: \(error)")
}
}
Real-time events
The SDK provides real-time updates via AsyncStream publishers:
Task {
await withTaskGroup(of: Void.self) { group in
group.addTask {
for await events in SleepCycleSdk.eventStream {
for event in events {
print("Event: \(event.type) from \(event.source) with p=\(event.probability)")
}
}
}
group.addTask {
for await state in SleepCycleSdk.stateStream {
print("SDK state: \(state)")
}
}
}
}
Real-time breathing rate
The SDK provides real-time breathing rate measurements during analysis:
Task {
for await breathingRate in SleepCycleSdk.breathingRateStream {
print("Breathing rate: \(breathingRate.bpm) bpm (confidence: \(breathingRate.confidence))")
}
}
The stream emits BreathingRate objects. Each measurement includes:
timestamp: Date - The time when the measurement was recorded
bpm: Double - Breathing rate in breaths per minute
confidence: Double - Confidence score of the measurement (0.0 to 1.0)
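Low-confidence readings can be noisy, so apps often smooth them before display. One simple approach is a confidence-weighted average over the fields listed above (this helper is an illustration, not an SDK API):

```swift
// Confidence-weighted average of breathing-rate samples.
// Each sample is (bpm, confidence) as described above.
func weightedAverageBPM(_ samples: [(bpm: Double, confidence: Double)]) -> Double? {
    let totalWeight = samples.reduce(0) { $0 + $1.confidence }
    guard totalWeight > 0 else { return nil }
    let weightedSum = samples.reduce(0) { $0 + $1.bpm * $1.confidence }
    return weightedSum / totalWeight
}

// Two confident readings near 14 bpm outweigh one noisy outlier.
let nightAverage = weightedAverageBPM([(14.0, 0.9), (14.4, 0.8), (30.0, 0.1)])
// nightAverage ≈ 15.07
```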
Real-time sleep staging (Experimental)
This feature is experimental and may change in future releases. The API and behavior are subject to modification without notice.
The SDK can provide real-time sleep stage predictions during analysis. This feature requires the realTimeSleepStaging capability to be enabled for your API key.
Task {
for await (interval, stage) in SleepCycleSdk.sleepStageStream {
switch stage {
case .awake:
print("Awake: \(interval)")
case .light:
print("Light sleep: \(interval)")
case .deep:
print("Deep sleep: \(interval)")
case .rem:
print("REM sleep: \(interval)")
}
}
}
The stream emits (DateInterval, SleepStage) tuples representing the time interval and predicted sleep stage. Sleep stages are emitted approximately every 30 seconds during analysis, providing near real-time feedback on sleep state transitions.
SleepStage values:
.awake - User is awake
.light - Light sleep (N1/N2)
.deep - Deep sleep (N3/slow-wave sleep)
.rem - REM (rapid eye movement) sleep
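The emitted tuples can be folded into total time per stage, for example to build a hypnogram summary. A sketch that accumulates durations from (DateInterval, SleepStage) pairs (the SleepStage enum here is a stand-in matching the cases above):

```swift
import Foundation

// Stand-in for the SDK's SleepStage enum, matching the cases above.
enum SleepStage: Hashable { case awake, light, deep, rem }

// Sum the time spent in each stage from emitted interval/stage pairs.
func stageDurations(_ entries: [(DateInterval, SleepStage)]) -> [SleepStage: TimeInterval] {
    entries.reduce(into: [:]) { totals, entry in
        totals[entry.1, default: 0] += entry.0.duration
    }
}
```

Since stages arrive roughly every 30 seconds, collecting the stream's tuples into an array and passing them through this helper yields per-stage totals for the session so far.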
Event signatures
For snoring events, the Event.signature property contains a 16-dimensional feature vector that represents unique characteristics of the detected snore. Snore events from the same person are grouped close to each other in the signature space, allowing clustering of events by person.
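Because signatures from the same person lie close together, a standard vector distance such as cosine similarity can be used to group snore events by person. A sketch (the 0.9 threshold is an illustrative choice, not an SDK constant):

```swift
// Cosine similarity between two signature vectors, e.g. the
// 16-dimensional Event.signature values described above.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "Signatures must have equal dimensions")
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let magA = a.reduce(0) { $0 + $1 * $1 }.squareRoot()
    let magB = b.reduce(0) { $0 + $1 * $1 }.squareRoot()
    guard magA > 0, magB > 0 else { return 0 }
    return dot / (magA * magB)
}

// Treat two snores as coming from the same person when their
// signatures are sufficiently similar.
func sameSnorer(_ a: [Float], _ b: [Float], threshold: Float = 0.9) -> Bool {
    cosineSimilarity(a, b) > threshold
}
```

Pairwise similarity like this can also seed a clustering algorithm (e.g. agglomerative clustering) when more than two people share the bedroom.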
Audio event listener
The AudioEventListener protocol allows you to receive real-time audio analysis updates during a session. Implement this protocol to access raw audio samples, event detection, and volume information as analysis progresses.
class MyAudioListener: AudioEventListener {
func onAudioAnalysisBatchCompleted(
audioSamples: [Float],
audioSampleRate: Int,
audioStartTime: Date,
audioEndTime: Date,
eventsStarted: [EventStartedInfo],
eventsEnded: [EventEndedInfo],
rms: [Float]
) {
// Process audio samples and events
}
}
let listener = MyAudioListener()
try await SleepCycleSdk.startAnalysis(
config: SleepAnalysisConfig(useAudio: true),
eventListeners: [listener]
)
The audioSamples parameter contains all processed audio data in sequence, without any gaps or overlap between batches. Each batch continues exactly where the previous batch ended, ensuring complete coverage of all analyzed audio.
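Because batches are contiguous and gap-free, appending them in arrival order reconstructs the full audio timeline. A sketch of a listener that does this (buffering a whole night in memory is for illustration only; a real app would write to disk or process incrementally):

```swift
import Foundation
import SleepCycleSDK

// Accumulates contiguous audio batches into a single buffer.
final class RecordingListener: AudioEventListener {
    private(set) var allSamples: [Float] = []
    private(set) var sampleRate: Int = 0

    func onAudioAnalysisBatchCompleted(
        audioSamples: [Float],
        audioSampleRate: Int,
        audioStartTime: Date,
        audioEndTime: Date,
        eventsStarted: [EventStartedInfo],
        eventsEnded: [EventEndedInfo],
        rms: [Float]
    ) {
        sampleRate = audioSampleRate
        // Batches arrive in order with no gaps or overlap, so
        // appending preserves the complete audio timeline.
        allSamples.append(contentsOf: audioSamples)
    }
}
```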
Audio clips
The SDK can capture short audio recordings when specific sleep events are detected, such as snoring, sleep talking, or coughing.
Create an AudioEventListener using AudioClipsConfig and AudioClipsReceiver:
import SleepCycleSDK
let listener = SleepCycleSdk.createAudioClipsProducer(
audioClipsConfig: AudioClipsConfig(
activeTypes: [
.snoring: EventTypeConfig(minDuration: 0.5),
.talking: EventTypeConfig(minDuration: 0.5)
],
clipLength: 5.0
),
receiver: MyAudioHandler()
)
try await SleepCycleSdk.startAnalysis(
config: SleepAnalysisConfig(useAudio: true),
eventListeners: [listener]
)
Each AudioClip contains audio samples ([Float]), metadata, and event context. Use custom listeners (conforming to AudioEventListener) for advanced audio workflows.
Monitor SDK state
Monitor SDK state changes using the AsyncStream:
import SleepCycleSDK
Task {
for await state in SleepCycleSdk.stateStream {
switch state {
case .uninitialized:
print("SDK not initialized")
case .initialized:
print("SDK ready")
case .running:
print("Analysis in progress")
}
}
}
Get the current state synchronously:
let currentState = SleepCycleSdk.currentState
Public data types
- SleepAnalysisConfig: Configures sensors used during analysis.
  useAudio: Bool, useAccelerometer: Bool
- SleepAnalysisFeatures: Feature flags returned on successful initialization.
  sleepStaging: Bool, smartAlarm: Bool, audioEvents: Bool, snoringDetection: Bool, realTimeSleepStaging: Bool, apneaRiskDetection: Bool
- AnalysisResult: Returned by stopAnalysis(at:).
  startDate: Date, endDate: Date, statistics: SleepStatistics, events: [EventType: [Event]]?, breathingRates: [BreathingRate]
- SleepStatistics: Aggregated KPIs computed for the session.
  sessionInterval: DateInterval
  totalSleepDuration: TimeInterval?, sleepOnsetLatency: TimeInterval?, sleepEfficiency: Double?
  finalWakeTime: Date?, numberOfAwakenings: Int?
  sleepStages: [(SleepStage, DateInterval)]?, sleepStageDurations: [SleepStage: TimeInterval]?
  snoreSessions: [SnoreSession]?, snoreTime: TimeInterval?
- EventType: The kind of detected event (e.g., .snoring, .talking, .movement).
- Event: A detected event with timing and confidence.
  type: EventType, interval: DateInterval, probability: Double, source: EventSource
- BreathingRate: A breathing rate measurement captured during sleep analysis.
  timestamp: Date, bpm: Double, confidence: Double
- SdkState: SDK lifecycle state.
  .uninitialized, .initialized, .running
- SleepCycleSdkError: Error cases thrown by SDK APIs.
- DataSource: Protocol for providing custom audio/motion input to the SDK.