Documentation Index
Fetch the complete documentation index at: https://sdk.sleepcycle.com/llms.txt
Use this file to discover all available pages before exploring further.
Sleep Cycle SDK - iOS Documentation
Overview
The Sleep Cycle SDK for iOS enables developers to integrate advanced sleep analysis capabilities into their applications. The SDK provides real-time sleep tracking using audio and motion sensors, delivering detailed sleep insights, stage transitions, and detected events throughout the night.
System Requirements
Minimum versions:
- iOS: 16.0+
- macOS: 13.0+
- Swift Version: 5.9+
- The SDK is written in Swift and provides a Swift-native API with async/await support
Installation
Swift Package Manager
Add the Sleep Cycle SDK to your project using Swift Package Manager:
- In Xcode, select File → Add Package Dependencies
- Enter the package URL:
https://github.com/MDLabs/sleepcycle-sdk-swift
- Select the version you want to use (the SDK follows Semantic Versioning)
- Add the package to your target
Package.swift:
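Alternatively, you can declare the dependency directly in your manifest. The sketch below is illustrative only: the product name "SleepCycleSDK" and the version are assumptions inferred from the repository URL, so check the package manifest for the real names.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v16), .macOS(.v13)],
    dependencies: [
        // Pin to the release you were given; "1.0.0" is a placeholder.
        .package(url: "https://github.com/MDLabs/sleepcycle-sdk-swift", from: "1.0.0")
    ],
    targets: [
        .target(
            name: "MyApp",
            // Product name "SleepCycleSDK" is an assumption, not confirmed.
            dependencies: [.product(name: "SleepCycleSDK", package: "sleepcycle-sdk-swift")]
        )
    ]
)
```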
The SDK requires an API key for authorization. Contact Sleep Cycle to obtain credentials.
Prerequisites
Permissions
The SDK requires microphone access for audio-based sleep analysis. Add the following to your Info.plist:
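NSMicrophoneUsageDescription is the standard iOS key for microphone access; the usage string below is only an example and should describe your own app's use:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is used to analyze sounds during sleep.</string>
```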
Background modes
To ensure continuous sleep analysis throughout the night, enable the appropriate background modes in your app’s capabilities:
- Open your project in Xcode
- Select your app target
- Go to “Signing & Capabilities”
- Add “Background Modes” capability
- Enable “Audio”
Info.plist:
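Enabling the capability in Xcode adds the equivalent entry to your Info.plist:

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```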
General
The SDK is thread-safe and uses Swift concurrency (async/await) for all asynchronous operations.
Initialize the SDK
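As a rough sketch of what initialization could look like: the SleepCycleSDK entry point, the initialize(apiKey:) signature, and the returned features value are all assumptions here, not confirmed API; consult the API reference for the exact names.

```swift
import SleepCycleSDK

// Hypothetical initialization sketch: validate credentials and inspect
// which features the API key unlocks. All names are assumed.
func setUpSDK() async throws {
    let features = try await SleepCycleSDK.initialize(apiKey: "<YOUR_API_KEY>")
    if features.snoringDetection {
        // Snoring detection is available for this key.
    }
}
```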
The SDK requires authentication before use. The initialization process validates your credentials and determines available features.
SleepAnalysisFeatures indicates which capabilities are available for your API key:
- sleepStaging Bool - Sleep staging analysis
- smartAlarm Bool - Smart alarm functionality
- audioEvents Bool - Audio event detection
- snoringDetection Bool - Snoring detection
- realTimeSleepStaging Bool - Real-time sleep staging
- multiChannelAnalysis Bool - Multi-channel analysis (stereo, two channels)
Get the SDK state
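One possible shape for consuming the state stream, assuming a shared instance with a stateUpdates AsyncStream property (both names hypothetical):

```swift
// Hypothetical: iterate the SDK's AsyncStream of state changes.
Task {
    for await state in SleepCycleSDK.shared.stateUpdates {
        print("SDK state changed: \(state)")
    }
}
```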
Monitor SDK state changes using the AsyncStream.
Start a sleep analysis session
Once initialized, you can start a sleep analysis session. The method returns a UUID that identifies the session.
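A hedged sketch of starting a session with the parameters listed below (the startAnalysis method name and the SleepAnalysisConfig initializer are assumptions):

```swift
// Hypothetical call; configuration fields and signature are assumed.
let config = SleepAnalysisConfig(useAudio: true, useMotion: true)
let sessionId: UUID = try await SleepCycleSDK.shared.startAnalysis(
    config: config,
    at: Date(),
    eventListeners: []
)
```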
Parameters:
- config SleepAnalysisConfig - Configuration object specifying which sensors to use
- at Date - The start time for the analysis (defaults to the current time)
- using DataSource? - Optional data source (e.g., live or file replay)
- eventListeners [AudioEventListener] - Optional array of listeners that receive callbacks during audio analysis. Use this to capture audio samples and events in real time
Resume a session
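Resuming might look like the following sketch; a resumeAnalysis(sessionId:) method is assumed here, and in practice the UUID would come from persistent storage rather than being created inline:

```swift
// Hypothetical: restore a session after an app restart, using the UUID
// returned when the session was originally started.
let savedSessionId = UUID() // placeholder: the id you persisted at start
let restoredId = try await SleepCycleSDK.shared.resumeAnalysis(sessionId: savedSessionId)
```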
The SDK supports resuming a previously started analysis session. This is useful when your app restarts or the background task is terminated by the system.
Stop a session
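Stopping could be sketched as follows, assuming a stopAnalysis(sessionId:) method that returns the AnalysisResult described below (both assumptions):

```swift
// Hypothetical: stop the session and receive the final AnalysisResult.
let sessionId = UUID() // placeholder: the id returned by startAnalysis
let result = try await SleepCycleSDK.shared.stopAnalysis(sessionId: sessionId)
print("Slept \(result.startTime)–\(result.endTime), \(result.events.count) events")
```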
Stop an active analysis session to retrieve its results.
Analysis result
The AnalysisResult contains the complete output of a sleep analysis session:
- sessionId UUID - Unique session identifier
- startTime Date - Session start time
- endTime Date - Session end time
- events [Event] - Detected sleep events
- breathingRates [BreathingRate] - Breathing rate measurements
- sleepStageIntervals [SleepStageInterval] - Sleep stage data
- statistics SleepStatistics? - Aggregated sleep statistics (optional)
- audioStatistics AudioStatistics? - Audio health statistics (optional)
SleepStatistics
When available, statistics contains aggregated metrics about the sleep session:
- totalSleepDuration Double? - Total time spent sleeping
- sleepOnsetLatency Double? - Time to fall asleep
- sleepEfficiency Double? - Ratio of sleep to time in bed (0.0 to 1.0)
- finalWakeTime Date? - Time of final awakening
- numberOfAwakenings Int? - Number of awakenings during the night
- snoreTime Double? - Total time spent snoring
- snoreSessions [SnoreSession]? - Individual snoring sessions
- sleepStageDurations [SleepStage: Double]? - Duration per sleep stage
AudioStatistics
When available, audioStatistics contains information about audio input health throughout the session.
Real-time events
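One possible shape for consuming the event stream (the events(for:) accessor is an assumption; only the AsyncStream delivery and the Event fields below are stated by the doc):

```swift
// Hypothetical: iterate detected events as they arrive during analysis.
let sessionId = UUID() // placeholder: the id returned by startAnalysis
Task {
    for await event in SleepCycleSDK.shared.events(for: sessionId) {
        print("\(event.type) at \(event.interval.start), p=\(event.probability)")
    }
}
```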
The SDK provides real-time event updates during analysis through an AsyncStream API. Event contains:
- type EventType - The type of event
- interval DateInterval - Time interval of the event
- probability Double - Confidence score (0.0 to 1.0)
- source EventSource - Source of detection
- sessionId UUID - The session this event belongs to
- signature [Float]? - Optional feature vector (for snoring events)
Real-time breathing rate
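A sketch of observing live measurements, assuming a breathingRates(for:) stream accessor (hypothetical name; the BreathingRate fields are from the list below):

```swift
// Hypothetical: observe live breathing rate measurements, skipping
// low-confidence readings.
let sessionId = UUID() // placeholder: the id returned by startAnalysis
Task {
    for await rate in SleepCycleSDK.shared.breathingRates(for: sessionId) {
        guard rate.confidence > 0.5 else { continue }
        print("\(rate.bpm) bpm at \(rate.timestamp)")
    }
}
```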
The SDK provides real-time breathing rate measurements during analysis. BreathingRate contains:
- timestamp Date - The time when the measurement was recorded
- bpm Double - Breathing rate in breaths per minute
- confidence Double - Confidence score of the measurement (0.0 to 1.0)
- sessionId UUID - The session this measurement belongs to
Real-time sleep staging (Experimental)
The SDK can provide real-time sleep stage predictions during analysis. This feature requires the realTimeSleepStaging capability to be enabled for your API key. The SDK emits SleepStageInterval objects approximately every 30 seconds during analysis, providing near real-time feedback on sleep state transitions.
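Consuming the staging stream might look like this sketch; the sleepStageIntervals(for:) accessor and the SleepStageInterval field names are assumptions:

```swift
// Hypothetical: observe ~30-second sleep stage updates (experimental).
let sessionId = UUID() // placeholder: the id returned by startAnalysis
Task {
    for await interval in SleepCycleSDK.shared.sleepStageIntervals(for: sessionId) {
        print("Stage \(interval.stage) from \(interval.interval.start)")
    }
}
```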
Real-time audio health
The SDK monitors the health of the audio input during analysis and emits status updates when the audio state changes. AudioHealthStatus values:
- .healthy - Audio input contains a varying signal
- .flatline - Constant value detected (non-functional microphone or muted input)
- .missingInput - No audio input received for an extended period
Event signatures
For snoring events, the Event.signature property contains a 16-dimensional feature vector that represents unique characteristics of the detected snore. Snore events from the same person are grouped close to each other in the signature space, allowing clustering of events by person.
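Since signatures live in a shared feature space, a simple way to compare two snore events is a distance metric; smaller distances suggest the same sleeper. A minimal self-contained sketch using Euclidean distance (the choice of metric is an assumption, not stated by the SDK):

```swift
// Euclidean distance between two 16-dimensional snore signatures.
// Smaller distances suggest the snores come from the same person.
func signatureDistance(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "Signatures must have the same dimension")
    return zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
}
```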
Audio event listener
The AudioEventListener protocol allows you to receive real-time audio analysis updates during a session. Implement this protocol to access raw audio samples, event detection, and volume information as analysis progresses.
The audioSamples parameter contains all processed audio data in sequence, without any gaps or overlap between batches. Each batch continues exactly where the previous batch ended, ensuring complete coverage of all analyzed audio.
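A conformance sketch follows; the protocol's actual requirements are not shown in this page, so the method name and parameters here are assumptions. The doc guarantees only that batches arrive gap-free and in order, which is what the running count relies on:

```swift
// Hypothetical conformance: requirement names and types are assumed.
final class SampleCollector: AudioEventListener {
    private var sampleCount = 0

    func didReceiveAudioSamples(_ audioSamples: [Float], sampleRate: Int) {
        // Batches are contiguous, so a running count tracks total coverage.
        sampleCount += audioSamples.count
    }
}
```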
Audio clips
The SDK can capture short audio recordings when specific sleep events are detected, such as snoring, sleep talking, or coughing. Create an AudioEventListener using AudioClipsConfig and AudioClipsReceiver.
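For illustration, wiring a clips listener might look like this; the AudioClipsConfig initializer, the closure-based receiver, and the factory function are all hypothetical names:

```swift
// Hypothetical: capture clips for snoring events and handle them on arrival.
let clipsConfig = AudioClipsConfig(eventTypes: [.snoring])
let receiver = AudioClipsReceiver { clip in
    print("Clip of \(clip.type) at \(clip.startTime), \(clip.samples.count) samples")
}
// makeAudioClipsListener is a placeholder for however the SDK combines the two.
let listener = makeAudioClipsListener(config: clipsConfig, receiver: receiver)
```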
AudioClip contains:
- startTime Date - Start timestamp
- type EventType - The event type that triggered the capture
- samples [Float] - Raw audio samples
- sampleRate Int - Sample rate in Hz
- sessionId UUID - The session this clip belongs to
Multi-channel analysis
The SDK supports analyzing two channels simultaneously using a stereo audio source. Multi-channel analysis separates the data source lifecycle from individual session lifecycles, allowing you to start and stop sessions on each channel independently. The stereo stream is expected to come from two separate mono microphones combined into a single stereo stream, with one microphone per channel. This requires the multiChannelAnalysis feature to be enabled for your API key.
Channel separation
When using stereo input, ChannelSeparationConfig controls how audio events are assigned to channels. Built-in presets:
- .bedSideMics - bedside microphones placed apart (default)
- .centeredMicArray - closely spaced microphone array
- .detectionStrengthOnly() - no spatial filtering; uses only detection confidence
Data source lifecycle
Start the data source with a stereo audio configuration before starting individual sessions.
Starting sessions on each channel
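The two-step flow can be sketched as follows; startDataSource, its stereo configuration value, and the per-channel startAnalysis overload are assumptions:

```swift
// Hypothetical multi-channel flow: start the shared stereo data source
// first, then start an independent session per channel.
try await SleepCycleSDK.shared.startDataSource(.liveAudio(channels: .stereo))

let leftSession = try await SleepCycleSDK.shared.startAnalysis(
    config: SleepAnalysisConfig(useAudio: true, useMotion: false),
    channel: .primary
)
let rightSession = try await SleepCycleSDK.shared.startAnalysis(
    config: SleepAnalysisConfig(useAudio: true, useMotion: false),
    channel: .secondary
)
```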
Once the data source is running, start a session on each channel. AnalysisChannel values:
- .primary - First audio channel (or mono)
- .secondary - Second audio channel in stereo
Stopping sessions independently
Each session can be stopped independently to retrieve its result.
Stopping the data source
After all sessions have been stopped, stop the data source. Calling stopDataSource() while sessions are still active will force-stop them and discard their results. To retrieve results, stop each session first.
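Putting the teardown order together (same assumed method names as the sketches above): collect each session's result first, then stop the shared source.

```swift
// Hypothetical teardown: collect results before stopping the data source,
// since stopping the source first would discard still-active sessions.
let leftSession = UUID()  // placeholder: ids returned when the
let rightSession = UUID() // per-channel sessions were started
let primaryResult = try await SleepCycleSDK.shared.stopAnalysis(sessionId: leftSession)
let secondaryResult = try await SleepCycleSDK.shared.stopAnalysis(sessionId: rightSession)
try await SleepCycleSDK.shared.stopDataSource()
```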