Sleep Cycle SDK - iOS Documentation
Overview
The Sleep Cycle SDK for iOS enables developers to integrate advanced sleep analysis capabilities into their applications. The SDK provides real-time sleep tracking using audio and motion sensors, delivering detailed sleep insights, stage transitions, and detected events throughout the night.
System Requirements
Minimum iOS Version:
Swift:
- Swift Version: 5.9+
- The SDK is written in Swift and provides a Swift-native API with async/await support
Installation
Swift Package Manager
Add the Sleep Cycle SDK to your project using Swift Package Manager:
- In Xcode, select File → Add Package Dependencies
- Enter the package URL:
https://github.com/MDLabs/sleepcycle-sdk-swift
- Select the version rule you want to use (the SDK follows Semantic Versioning)
- Add the package to your target
Alternatively, add it to your Package.swift:
dependencies: [
    .package(url: "https://github.com/MDLabs/sleepcycle-sdk-swift", from: "1.1.0")
]
The SDK requires an API key for authorization. Contact Sleep Cycle to obtain credentials.
Prerequisites
Permissions
The SDK requires microphone access for audio-based sleep analysis. Add the following to your Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>We need access to the microphone to analyze your sleep patterns and detect snoring.</string>
For motion-based analysis, you may also need:
<key>NSMotionUsageDescription</key>
<string>We use motion data to track your sleep movements.</string>
Background modes
To ensure continuous sleep analysis throughout the night, enable the appropriate background modes in your app’s capabilities:
- Open your project in Xcode
- Select your app target
- Go to “Signing & Capabilities”
- Add “Background Modes” capability
- Enable “Audio”
Alternatively, add this to your Info.plist:
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
The SDK uses the audio background mode to maintain continuous audio processing during sleep analysis. Your app should also configure and keep active an appropriate AVAudioSession so that iOS does not suspend the analysis process overnight.
General
The SDK is thread-safe and uses Swift concurrency (async/await) for all asynchronous operations.
Initialize the SDK
The SDK requires authentication before use. The initialization process validates your credentials and determines available features.
import SleepCycleSDK
Task {
    do {
        let features = try await SleepCycleSdk.initialize(
            apiKey: "your-api-key-here"
        )
        print("Authorized with features: \(features)")
    } catch {
        print("Initialization error: \(error)")
    }
}
The returned SleepAnalysisFeatures indicates which capabilities are available for your API key:
sleepStaging Bool - Sleep staging analysis
smartAlarm Bool - Smart alarm functionality
audioEvents Bool - Audio event detection
snoringDetection Bool - Snoring detection
realTimeSleepStaging Bool - Real-time sleep staging
multiChannelAnalysis Bool - Multi-channel analysis (stereo, two channels)
Access feature flags at runtime:
if SleepCycleSdk.isFeatureEnabled(\.audioEvents) {
    // Present snore/talk event UI
}
Get the SDK state
Monitor SDK state changes using the AsyncStream:
import SleepCycleSDK
Task {
    for await state in SleepCycleSdk.stateStream {
        switch state {
        case .uninitialized:
            print("SDK not initialized")
        case .initialized:
            print("SDK ready")
        case .running:
            print("Analysis in progress")
        }
    }
}
Get the current state synchronously:
let currentState = SleepCycleSdk.currentState
Start a sleep analysis session
Once initialized, you can start a sleep analysis session. The method returns a UUID that identifies the session.
import SleepCycleSDK
Task {
    do {
        let sessionId: UUID = try await SleepCycleSdk.startAnalysis(
            config: SleepAnalysisConfig(
                useAudio: true,
                useAccelerometer: true
            )
        )
        print("Analysis started with session ID: \(sessionId)")
    } catch {
        print("Failed to start analysis: \(error)")
    }
}
Parameters:
config SleepAnalysisConfig - Configuration object specifying which sensors to use
at Date - The start time for the analysis (defaults to current time)
using DataSource? - Optional data source (e.g., live or file replay)
eventListeners [AudioEventListener] - Optional array of listeners that receive callbacks during audio analysis. Use this to capture audio samples and events in real-time
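Putting the parameter list together, a call that supplies every argument might look like the sketch below. The argument labels are taken from the list above; `myListener` stands in for any AudioEventListener implementation of your own (a hypothetical placeholder here).

```swift
import SleepCycleSDK

Task {
    do {
        let sessionId = try await SleepCycleSdk.startAnalysis(
            config: SleepAnalysisConfig(useAudio: true, useAccelerometer: true),
            at: Date(),                   // start time; defaults to the current time
            using: nil,                   // nil selects the live data source
            eventListeners: [myListener]  // hypothetical AudioEventListener instance
        )
        print("Started session \(sessionId)")
    } catch {
        print("Failed to start analysis: \(error)")
    }
}
```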
Resume a session
The SDK supports resuming a previously started analysis session. This is useful when your app restarts or the background task is terminated by the system.
Task {
    do {
        if SleepCycleSdk.isResumePossible() {
            try await SleepCycleSdk.resumeAnalysis()
        }
    } catch {
        print("Failed to resume analysis: \(error)")
    }
}
Stop a session
To stop an active analysis session and retrieve the results:
Task {
    do {
        let result = try await SleepCycleSdk.stopAnalysis()
        print("Session ID: \(result.sessionId)")
        if let statistics = result.statistics {
            print("Total sleep duration: \(statistics.totalSleepDuration ?? 0)")
            print("Sleep efficiency: \(statistics.sleepEfficiency ?? 0)")
            if let snoreSessions = statistics.snoreSessions {
                for session in snoreSessions {
                    print("Snoring session: \(session.interval)")
                }
            }
        }
        for event in result.events {
            print("\(event.type) detected: \(event.interval), p=\(event.probability)")
        }
        for breathingRate in result.breathingRates {
            print("Breathing rate: \(breathingRate.bpm) bpm at \(breathingRate.timestamp)")
        }
        for stageInterval in result.sleepStageIntervals {
            print("\(stageInterval.stage): \(stageInterval.interval)")
        }
        if let audioStats = result.audioStatistics {
            for interval in audioStats.healthIntervals {
                print("Audio \(interval.status): \(interval.interval)")
            }
        }
    } catch {
        print("Failed to stop analysis: \(error)")
    }
}
Analysis result
The AnalysisResult contains the complete output of a sleep analysis session:
sessionId UUID - Unique session identifier
startTime Date - Session start time
endTime Date - Session end time
events [Event] - Detected sleep events
breathingRates [BreathingRate] - Breathing rate measurements
sleepStageIntervals [SleepStageInterval] - Sleep stage data
statistics SleepStatistics? - Aggregated sleep statistics (optional)
audioStatistics AudioStatistics? - Audio health statistics (optional)
SleepStatistics
When available, statistics contains aggregated metrics about the sleep session:
totalSleepDuration Double? - Total time spent sleeping
sleepOnsetLatency Double? - Time to fall asleep
sleepEfficiency Double? - Ratio of sleep to time in bed (0.0 to 1.0)
finalWakeTime Date? - Time of final awakening
numberOfAwakenings Int? - Number of awakenings during the night
snoreTime Double? - Total time spent snoring
snoreSessions [SnoreSession]? - Individual snoring sessions
sleepStageDurations [SleepStage: Double]? - Duration per sleep stage
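As a sketch of how these optional fields might be consumed after stopping a session (field names as listed above; the durations are assumed to be in seconds, which is not stated explicitly here):

```swift
// Hypothetical helper that formats the optional SleepStatistics fields.
func summarize(_ statistics: SleepStatistics) {
    if let total = statistics.totalSleepDuration {
        let hours = Int(total) / 3600
        let minutes = (Int(total) % 3600) / 60
        print("Slept \(hours)h \(minutes)m")
    }
    if let efficiency = statistics.sleepEfficiency {
        print("Sleep efficiency: \(Int(efficiency * 100))%")
    }
    if let durations = statistics.sleepStageDurations {
        for (stage, seconds) in durations {
            print("\(stage): \(Int(seconds) / 60) min")
        }
    }
}
```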
AudioStatistics
When available, audioStatistics contains information about audio input health throughout the session.
Real-time events
The SDK provides real-time event updates during analysis through an AsyncStream API:
Task {
    for await events in SleepCycleSdk.eventStream {
        for event in events {
            switch event.type {
            case .movement:
                handleMovement(event)
            case .snoring:
                handleSnoring(event)
            case .talking:
                handleTalking(event)
            case .coughing:
                handleCoughing(event)
            }
        }
    }
}
Each Event contains:
type EventType - The type of event
interval DateInterval - Time interval of the event
probability Double - Confidence score (0.0 to 1.0)
source EventSource - Source of detection
sessionId UUID - The session this event belongs to
signature [Float]? - Optional feature vector (for snoring events)
Real-time breathing rate
The SDK provides real-time breathing rate measurements during analysis:
Task {
    for await breathingRate in SleepCycleSdk.breathingRateStream {
        print("Breathing rate: \(breathingRate.bpm) bpm (confidence: \(breathingRate.confidence))")
    }
}
Each BreathingRate contains:
timestamp Date - The time when the measurement was recorded
bpm Double - Breathing rate in breaths per minute
confidence Double - Confidence score of the measurement (0.0 to 1.0)
sessionId UUID - The session this measurement belongs to
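If you surface breathing rate in your UI, you may want to ignore low-confidence measurements. A minimal sketch; the 0.8 threshold is an arbitrary example value, not an SDK recommendation, and the UI update function is a hypothetical hook:

```swift
Task {
    for await breathingRate in SleepCycleSdk.breathingRateStream {
        // Skip measurements below an example confidence threshold.
        guard breathingRate.confidence >= 0.8 else { continue }
        updateBreathingRateLabel(bpm: breathingRate.bpm)  // hypothetical UI hook
    }
}
```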
Real-time sleep staging (Experimental)
This feature is experimental and may change in future releases. The API and behavior are subject to modification without notice.
The SDK can provide real-time sleep stage predictions during analysis. This feature requires the realTimeSleepStaging capability to be enabled for your API key.
Task {
    for await stageInterval in SleepCycleSdk.sleepStageStream {
        switch stageInterval.stage {
        case .awake:
            print("Awake: \(stageInterval.interval)")
        case .light:
            print("Light sleep: \(stageInterval.interval)")
        case .deep:
            print("Deep sleep: \(stageInterval.interval)")
        case .rem:
            print("REM sleep: \(stageInterval.interval)")
        }
    }
}
The stream emits SleepStageInterval objects approximately every 30 seconds during analysis, providing near real-time feedback on sleep state transitions.
Real-time audio health
The SDK monitors the health of the audio input during analysis and emits status updates when the audio state changes:
Task {
    for await update in SleepCycleSdk.audioHealthStream {
        switch update.status {
        case .healthy:
            print("Audio input healthy")
        case .flatline:
            print("Audio flatline detected")
        case .missingInput:
            print("Audio input missing")
        }
    }
}
AudioHealthStatus values:
.healthy - Audio input contains a varying signal
.flatline - Constant value detected (non-functional microphone or muted input)
.missingInput - No audio input received for an extended period
Event signatures
For snoring events, the Event.signature property contains a 16-dimensional feature vector that represents unique characteristics of the detected snore. Snore events from the same person are grouped close to each other in the signature space, allowing clustering of events by person.
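The SDK does not prescribe a clustering method. As one illustration, you could compare signatures by Euclidean distance and treat events whose signatures lie close together as coming from the same snorer. The distance function below is plain Swift; the 0.5 threshold is an arbitrary example, not a calibrated value.

```swift
// Euclidean distance between two 16-dimensional snore signatures.
func distance(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count)
    return zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
}

// Treat two snore events as the same snorer if their signatures are close.
func matchesSameSnorer(_ event: Event, reference: Event, threshold: Float = 0.5) -> Bool {
    guard let a = event.signature, let b = reference.signature else { return false }
    return distance(a, b) < threshold
}
```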
Audio event listener
The AudioEventListener protocol allows you to receive real-time audio analysis updates during a session. Implement this protocol to access raw audio samples, event detection, and volume information as analysis progresses.
class MyAudioListener: AudioEventListener {
    func onAudioAnalysisBatchCompleted(
        sessionId: UUID,
        audioSamples: [Float],
        audioSampleRate: Int,
        audioStartTime: Date,
        audioEndTime: Date,
        eventsStarted: [EventStartedInfo],
        eventsEnded: [EventEndedInfo],
        rms: [Float]
    ) {
        // Process audio samples and events
    }
}

let listener = MyAudioListener()
try await SleepCycleSdk.startAnalysis(
    config: SleepAnalysisConfig(useAudio: true),
    eventListeners: [listener]
)
The audioSamples parameter contains all processed audio data in sequence, without any gaps or overlap between batches. Each batch continues exactly where the previous batch ended, ensuring complete coverage of all analyzed audio.
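Because batches are contiguous and non-overlapping, a listener can reconstruct the full recording simply by appending each batch. A sketch (unbounded memory growth over a full night is ignored here for brevity):

```swift
// Hypothetical listener that stitches contiguous batches into one buffer.
final class RecordingListener: AudioEventListener {
    private(set) var allSamples: [Float] = []
    private(set) var sampleRate: Int = 0

    func onAudioAnalysisBatchCompleted(
        sessionId: UUID,
        audioSamples: [Float],
        audioSampleRate: Int,
        audioStartTime: Date,
        audioEndTime: Date,
        eventsStarted: [EventStartedInfo],
        eventsEnded: [EventEndedInfo],
        rms: [Float]
    ) {
        sampleRate = audioSampleRate
        // Batches are gap-free and in order, so appending preserves continuity.
        allSamples.append(contentsOf: audioSamples)
    }
}
```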
Audio clips
The SDK can capture short audio recordings when specific sleep events are detected, such as snoring, sleep talking, or coughing.
Create an AudioEventListener using AudioClipsConfig and AudioClipsReceiver:
import SleepCycleSDK
let listener = SleepCycleSdk.createAudioClipsProducer(
    audioClipsConfig: AudioClipsConfig(
        activeTypes: [
            .snoring: EventTypeConfig(minDuration: 0.5),
            .talking: EventTypeConfig(minDuration: 0.5)
        ],
        clipLength: 5.0
    ),
    receiver: MyAudioHandler()
)

try await SleepCycleSdk.startAnalysis(
    config: SleepAnalysisConfig(useAudio: true),
    eventListeners: [listener]
)
Each AudioClip contains:
startTime Date - Start timestamp
type EventType - The event type that triggered the capture
samples [Float] - Raw audio samples
sampleRate Int - Sample rate in Hz
sessionId UUID - The session this clip belongs to
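The MyAudioHandler used above is your own AudioClipsReceiver implementation. Its exact protocol requirement is not documented here; assuming a single callback that delivers each captured AudioClip, an implementation might look like this (the method name is an assumption, so check the SDK's AudioClipsReceiver definition for the real signature):

```swift
// Hypothetical receiver; `onAudioClipCaptured` is an assumed requirement.
final class MyAudioHandler: AudioClipsReceiver {
    func onAudioClipCaptured(_ clip: AudioClip) {
        print("Captured \(clip.type) clip: \(clip.samples.count) samples at \(clip.sampleRate) Hz")
        // e.g. persist clip.samples for playback in a morning summary
    }
}
```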
Multi-channel analysis
The SDK supports analyzing two channels simultaneously using a stereo audio source. Multi-channel analysis separates the data source lifecycle from individual session lifecycles, allowing you to start and stop sessions on each channel independently.
The stereo stream is expected to come from two separate mono microphones combined into a single stereo stream, with one microphone per channel.
This requires the multiChannelAnalysis feature to be enabled for your API key.
Channel separation
When using stereo input, ChannelSeparationConfig controls how audio events are assigned to channels. Built-in presets:
.bedSideMics — bedside microphones placed apart (default)
.centeredMicArray — closely spaced microphone array
.detectionStrengthOnly() — no spatial filtering, uses only detection confidence
All parameters (mic distance, ambiguous zone, confidence threshold, per-event-type settings) can be tuned individually to fit your specific hardware setup and use case.
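The initializer labels for a custom configuration are not documented here; as a hedged sketch, assuming an initializer that mirrors the tunables listed above (all labels and values below are assumptions, so consult the SDK's ChannelSeparationConfig definition for the actual API):

```swift
// All labels and values are illustrative assumptions, not confirmed API.
let customConfig = ChannelSeparationConfig(
    micDistance: 1.2,         // assumed: meters between the two microphones
    ambiguousZone: 0.2,       // assumed: zone where events stay unassigned
    confidenceThreshold: 0.6  // assumed: minimum detection confidence
)
```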
Data source lifecycle
Start the data source with a stereo audio configuration before starting individual sessions:
import SleepCycleSDK
let dataSource = SleepCycleSdk.createLiveDataSource()
try await SleepCycleSdk.startDataSource(
    using: dataSource,
    channelSeparationConfig: .bedSideMics
)
Starting sessions on each channel
Once the data source is running, start a session on each channel:
let primarySessionId: UUID = try await SleepCycleSdk.startAnalysis(
    channel: .primary,
    config: SleepAnalysisConfig(useAudio: true, useAccelerometer: true)
)
let secondarySessionId: UUID = try await SleepCycleSdk.startAnalysis(
    channel: .secondary,
    config: SleepAnalysisConfig(useAudio: true, useAccelerometer: false)
)
AnalysisChannel values:
.primary - First audio channel (or mono)
.secondary - Second audio channel in stereo
Stopping sessions independently
Each session can be stopped independently to retrieve its result:
let primaryResult = try await SleepCycleSdk.stopAnalysis(channel: .primary)
let secondaryResult = try await SleepCycleSdk.stopAnalysis(channel: .secondary)
Stopping the data source
After all sessions have been stopped, stop the data source:
await SleepCycleSdk.stopDataSource()
Calling stopDataSource() while sessions are still active will force-stop them and discard their results. To retrieve results, stop each session first.
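Putting the multi-channel pieces together, a complete lifecycle using only the calls shown above looks like this. Note the ordering: sessions are stopped before the data source so their results are not discarded.

```swift
import SleepCycleSDK

Task {
    do {
        // 1. Start the shared stereo data source.
        let dataSource = SleepCycleSdk.createLiveDataSource()
        try await SleepCycleSdk.startDataSource(
            using: dataSource,
            channelSeparationConfig: .bedSideMics
        )

        // 2. Start one session per channel.
        _ = try await SleepCycleSdk.startAnalysis(
            channel: .primary,
            config: SleepAnalysisConfig(useAudio: true, useAccelerometer: true)
        )
        _ = try await SleepCycleSdk.startAnalysis(
            channel: .secondary,
            config: SleepAnalysisConfig(useAudio: true, useAccelerometer: false)
        )

        // ... analysis runs overnight ...

        // 3. Stop each session first to retrieve its result.
        let primaryResult = try await SleepCycleSdk.stopAnalysis(channel: .primary)
        let secondaryResult = try await SleepCycleSdk.stopAnalysis(channel: .secondary)
        print(primaryResult.sessionId, secondaryResult.sessionId)

        // 4. Only then stop the shared data source.
        await SleepCycleSdk.stopDataSource()
    } catch {
        print("Multi-channel analysis failed: \(error)")
    }
}
```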