Sleep Cycle SDK - Android Documentation

Overview

The Sleep Cycle SDK for Android enables developers to integrate advanced sleep analysis capabilities into their applications. The SDK provides real-time sleep tracking using audio and motion sensors, delivering detailed sleep insights and events throughout the night.

System Requirements

Minimum Android API Level:
  • Min SDK: API level 28 (Android 9.0 Pie)
  • Compile SDK: API level 35
Kotlin:
  • Kotlin Version: 1.9+ (JVM target 11)
  • The SDK is written in Kotlin and provides a Kotlin-first API

Installation

Find the latest version on Maven Central.

Groovy DSL

Add the Sleep Cycle SDK dependency to your build.gradle:
dependencies {
    implementation "com.sleepcycle.sdk:sdk-android:<latest-version>"
}

Kotlin DSL

Add the Sleep Cycle SDK dependency to your build.gradle.kts:
dependencies {
    implementation("com.sleepcycle.sdk:sdk-android:<latest-version>")
}

Prerequisites

Permissions

The SDK requires microphone access for audio-based sleep analysis:
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
The SDK automatically includes the WAKE_LOCK permission in its manifest to keep the device awake during analysis:
<uses-permission android:name="android.permission.WAKE_LOCK" />

Keeping the analysis active using a foreground service

To ensure continuous sleep analysis throughout the night, you must run the analysis inside a foreground service; otherwise Android may terminate the process during extended background periods. It is up to the host app to start the foreground service correctly. The service must declare the appropriate foreground service types in its manifest to specify which system resources it needs. For sleep analysis you will typically need the health or microphone service types, which grant access to health sensors and the microphone, respectively.
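A minimal service sketch is shown below. The class name, notification helper, and notification id are placeholders for the host app's own implementation; only the foreground-service promotion pattern is the point here.

```kotlin
import android.app.Service
import android.content.Intent
import android.content.pm.ServiceInfo
import android.os.IBinder
import androidx.core.app.ServiceCompat

// Illustrative sketch only: SleepAnalysisService and buildNotification()
// are hypothetical names belonging to the host app, not the SDK.
class SleepAnalysisService : Service() {

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        // Promote to foreground before long-running analysis begins.
        // The manifest <service> entry must declare a matching
        // android:foregroundServiceType (e.g. "microphone" and/or "health").
        ServiceCompat.startForeground(
            this,
            NOTIFICATION_ID,
            buildNotification(), // host-app notification, not shown here
            ServiceInfo.FOREGROUND_SERVICE_TYPE_MICROPHONE
        )
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null

    companion object {
        private const val NOTIFICATION_ID = 1
    }
}
```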

General

The SDK is thread safe and can be called from any thread.

Initialize the SDK

The SDK requires authentication before use. The initialization process validates your credentials and determines available features.
import com.sleepcycle.sdk.SleepCycleSdk

try {
    val features = SleepCycleSdk.initialize(
        context = applicationContext,
        apiKey = "your-api-key-here"
    )
    Log.d("SDK", "Authorized with features: $features")
} catch (e: Exception) {
    Log.e("SDK", "Authorization failed: ${e.message}")
}
The returned SleepAnalysisFeatures indicates which capabilities are available for your API key:
sleepStaging Boolean - Sleep staging analysis
smartAlarm Boolean - Smart alarm functionality
audioEvents Boolean - Audio event detection
snoringDetection Boolean - Snoring detection
realTimeSleepStaging Boolean - Real-time sleep staging
multiChannelAnalysis Boolean - Multi-channel analysis (stereo, two channels)
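Since not every capability is enabled for every API key, it is worth gating optional functionality on the returned flags. A sketch, assuming the initialize call shown above:

```kotlin
val features = SleepCycleSdk.initialize(
    context = applicationContext,
    apiKey = "your-api-key-here"
)

if (features.realTimeSleepStaging) {
    // Safe to collect sleepStageFlow (see "Real-time sleep staging").
}
if (!features.snoringDetection) {
    // Hide snoring-related UI rather than showing empty results.
}
```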

Get the SDK state

Monitor SDK state changes using the StateFlow:
import com.sleepcycle.sdk.SdkState

val sdkStateFlow: StateFlow<SdkState> = SleepCycleSdk.sdkStateFlow
Get the current state:
val currentState: SdkState = SleepCycleSdk.getState()
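For continuous monitoring, the flow can be collected from a coroutine scope. A sketch (the concrete SdkState values are not listed in this section):

```kotlin
lifecycleScope.launch {
    SleepCycleSdk.sdkStateFlow.collect { state: SdkState ->
        Log.d("SDK", "SDK state changed: $state")
        // e.g. enable the "Start analysis" button only once the SDK is ready
    }
}
```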

Start a sleep analysis session

Once initialized, you can start a sleep analysis session. The method returns a UUID that identifies the session.
import com.sleepcycle.sdk.SleepAnalysisConfig

try {
    val sessionId: UUID = SleepCycleSdk.startAnalysis(
        config = SleepAnalysisConfig(
            useAudio = true,
            useAccelerometer = true
        )
    )
    Log.d("SDK", "Analysis started with session ID: $sessionId")
} catch (e: Exception) {
    Log.e("SDK", "Failed to start analysis: ${e.message}")
}
Parameters:
config SleepAnalysisConfig - Configuration object specifying which sensors to use
startMillisUtc Long - Analysis start time in UTC milliseconds (defaults to current time)
dataSource DataSource? - Optional custom data source. When null, the SDK uses live device sensors
audioEventListeners List<AudioEventListener> - Optional list of listeners that receive callbacks during audio analysis
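A call supplying all parameters might look as follows; this is a sketch based on the parameter list above, using named arguments so the exact positional order does not matter:

```kotlin
val sessionId: UUID = SleepCycleSdk.startAnalysis(
    config = SleepAnalysisConfig(useAudio = true, useAccelerometer = true),
    startMillisUtc = System.currentTimeMillis(), // default: current time
    dataSource = null,                           // null = live device sensors
    audioEventListeners = emptyList()
)
```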

Resume a session

The SDK supports resuming a previously started analysis session. This is useful when your app restarts or the foreground service is terminated by the system.
try {
    if (SleepCycleSdk.isResumePossible()) {
        SleepCycleSdk.resumeAnalysis()
    }
} catch (e: Exception) {
    Log.e("SDK", "Failed to resume analysis: ${e.message}")
}

Stop a session

To stop an active analysis session and retrieve the results:
try {
    val result: AnalysisResult? = SleepCycleSdk.stopAnalysis()

    result?.let { analysisResult ->
        val events = analysisResult.events
        val breathingRates = analysisResult.breathingRates
        val sleepStageIntervals = analysisResult.sleepStageIntervals

        analysisResult.statistics?.let { statistics ->
            Log.d("SDK", "Sleep duration: ${statistics.totalSleepDurationSeconds}")
        }

        events.forEach { event ->
            Log.d("SDK", "${event.type} from ${event.startTime} to ${event.endTime}")
        }

        breathingRates.forEach { breathingRate ->
            Log.d("SDK", "Breathing rate: ${breathingRate.bpm} bpm at ${breathingRate.timestampSecondsUtc}")
        }

        analysisResult.audioStatistics?.let { audioStats ->
            audioStats.healthIntervals.forEach { interval ->
                Log.d("SDK", "Audio ${interval.status}: ${interval.interval}")
            }
        }
    }
} catch (e: Exception) {
    Log.e("SDK", "Failed to stop analysis: ${e.message}")
}

Analysis result

The AnalysisResult contains the complete output of a sleep analysis session:
sessionId UUID - Unique session identifier
startSecondsUtc Double - Session start time in UTC seconds
endSecondsUtc Double - Session end time in UTC seconds
events List<Event> - Detected sleep events
breathingRates List<BreathingRate> - Breathing rate measurements
sleepStageIntervals List<SleepStageInterval> - Sleep stage data
statistics SleepStatistics? - Aggregated sleep statistics (nullable)
audioStatistics AudioStatistics? - Audio health statistics (nullable)

SleepStatistics

When available, statistics contains aggregated metrics about the sleep session:
totalSleepDurationSeconds Double - Total time spent sleeping
sleepOnsetLatencySeconds Double? - Time to fall asleep
sleepEfficiency Double - Ratio of sleep to time in bed (0.0 to 1.0)
finalWakeTimeSecondsUtc Double? - Time of final awakening (UTC seconds)
numberOfAwakenings Int - Number of awakenings during the night
snoreTimeSeconds Double - Total time spent snoring
snoreSessions List<SnoreSession> - Individual snoring sessions
sleepStageDurationsSeconds Map<SleepStage, Double> - Duration per sleep stage
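For display, the raw values usually need formatting. A small self-contained helper (it takes primitives rather than the SDK's SleepStatistics type, so it compiles without the SDK; the wording of the summary string is of course up to the host app):

```kotlin
// Format raw SleepStatistics values for display.
fun formatSleepSummary(
    totalSleepDurationSeconds: Double,
    sleepEfficiency: Double,
    numberOfAwakenings: Int
): String {
    val hours = (totalSleepDurationSeconds / 3600).toInt()
    val minutes = ((totalSleepDurationSeconds % 3600) / 60).toInt()
    val efficiencyPercent = (sleepEfficiency * 100).toInt()
    return "Slept ${hours}h ${minutes}m, $efficiencyPercent% efficiency, $numberOfAwakenings awakenings"
}
```

For example, formatSleepSummary(27000.0, 0.92, 3) yields "Slept 7h 30m, 92% efficiency, 3 awakenings".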

AudioStatistics

When available, audioStatistics contains information about audio input health throughout the session.

Real-time events

The SDK provides real-time event updates during analysis through a Flow API:
import com.sleepcycle.sdk.Event
import com.sleepcycle.sdk.EventType

lifecycleScope.launch {
    SleepCycleSdk.eventFlow.collect { events: List<Event> ->
        events.forEach { event ->
            when (event.type) {
                EventType.MOVEMENT -> handleMovement(event)
                EventType.SNORING -> handleSnoring(event)
                EventType.TALKING -> handleTalking(event)
                EventType.COUGHING -> handleCoughing(event)
            }
        }
    }
}
Each Event contains:
type EventType - The type of event
startTime Double - Start timestamp in UTC seconds
endTime Double - End timestamp in UTC seconds
probability Float - Confidence score (0.0 to 1.0)
source EventSource - Source of detection
sessionId UUID - The session this event belongs to
signature FloatArray? - Optional feature vector (for snoring events)

Real-time breathing rate

The SDK provides real-time breathing rate measurements during analysis:
import com.sleepcycle.sdk.BreathingRate

lifecycleScope.launch {
    SleepCycleSdk.breathingRateFlow.collect { breathingRate: BreathingRate ->
        Log.d("SDK", "Breathing rate: ${breathingRate.bpm} bpm (confidence: ${breathingRate.confidence})")
    }
}
Each BreathingRate contains:
timestampSecondsUtc Double - Time of measurement in seconds since Unix epoch
bpm Float - Breathing rate in breaths per minute
confidence Float - Confidence level of the measurement (0.0 to 1.0)
sessionId UUID - The session this measurement belongs to

Real-time sleep staging (Experimental)

This feature is experimental and may change in future releases. The API and behavior are subject to modification without notice.
The SDK can provide real-time sleep stage predictions during analysis. This feature requires the realTimeSleepStaging capability to be enabled for your API key.
import com.sleepcycle.sdk.SleepStage
import com.sleepcycle.sdk.SleepStageInterval

lifecycleScope.launch {
    SleepCycleSdk.sleepStageFlow.collect { stageInterval: SleepStageInterval ->
        when (stageInterval.stage) {
            SleepStage.AWAKE -> Log.d("SDK", "Awake: ${stageInterval.interval}")
            SleepStage.LIGHT -> Log.d("SDK", "Light sleep: ${stageInterval.interval}")
            SleepStage.DEEP -> Log.d("SDK", "Deep sleep: ${stageInterval.interval}")
            SleepStage.REM -> Log.d("SDK", "REM sleep: ${stageInterval.interval}")
        }
    }
}
The flow emits SleepStageInterval objects approximately every 30 seconds during analysis, providing near real-time feedback on sleep state transitions.

Real-time audio health

The SDK monitors the health of the audio input during analysis and emits status updates when the audio state changes:
import com.sleepcycle.sdk.AudioHealthUpdate
import com.sleepcycle.sdk.AudioHealthStatus

lifecycleScope.launch {
    SleepCycleSdk.audioHealthFlow.collect { update: AudioHealthUpdate ->
        when (update.status) {
            AudioHealthStatus.HEALTHY -> Log.d("SDK", "Audio input healthy")
            AudioHealthStatus.FLATLINE -> Log.w("SDK", "Audio flatline detected")
            AudioHealthStatus.MISSING_INPUT -> Log.w("SDK", "Audio input missing")
        }
    }
}
AudioHealthStatus values:
HEALTHY - Audio input contains a varying signal
FLATLINE - Constant value detected (non-functional microphone or muted input)
MISSING_INPUT - No audio input received for an extended period

Event signatures

For snoring events, the Event.signature property contains a 16-dimensional feature vector that represents unique characteristics of the detected snore. Snore events from the same person are grouped close to each other in the signature space, allowing clustering of events by person.
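Because signatures from the same person lie close together, even a simple greedy distance-based grouping can separate two sleepers. The sketch below is self-contained and illustrative; the distance threshold is an example value, not an SDK constant, and a production app may prefer a proper clustering algorithm.

```kotlin
import kotlin.math.sqrt

// Euclidean distance between two signature vectors of equal length.
fun euclideanDistance(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "Signatures must have the same dimension" }
    var sum = 0.0f
    for (i in a.indices) {
        val d = a[i] - b[i]
        sum += d * d
    }
    return sqrt(sum)
}

// Greedy clustering: assign each signature to the first cluster whose
// representative (first member) is within the threshold, else open a new one.
fun clusterSignatures(
    signatures: List<FloatArray>,
    threshold: Float
): List<List<FloatArray>> {
    val clusters = mutableListOf<MutableList<FloatArray>>()
    for (sig in signatures) {
        val cluster = clusters.firstOrNull { euclideanDistance(it.first(), sig) < threshold }
        if (cluster != null) cluster.add(sig) else clusters.add(mutableListOf(sig))
    }
    return clusters
}
```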

Audio event listener

The AudioEventListener interface allows you to receive real-time audio analysis updates during a session. Implement this interface to access raw audio samples, event detection, and volume information as analysis progresses.
val audioEventListener = object : AudioEventListener {
    override fun onAudioAnalysisBatchCompleted(
        audioSamples: FloatArray,
        audioSampleRate: Int,
        audioStartTime: Double,
        audioEndTime: Double,
        eventsStarted: List<EventStartedInfo>,
        eventsEnded: List<EventEndedInfo>,
        rms: FloatArray,
        sessionId: UUID
    ) {
        // Process audio samples and events
    }
}

try {
    SleepCycleSdk.startAnalysis(
        config = SleepAnalysisConfig(useAudio = true),
        audioEventListeners = listOf(audioEventListener)
    )
} catch (e: Exception) {
    Log.e("SDK", "Failed to start analysis: ${e.message}")
}
The audioSamples parameter contains all processed audio data in sequence, without any gaps or overlap between batches. Each batch continues exactly where the previous batch ended, ensuring complete coverage of all analyzed audio.
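Because batches are gapless and non-overlapping, appending them in arrival order reconstructs the full analyzed audio stream. A minimal self-contained accumulator sketch that a listener implementation could write into:

```kotlin
// Accumulate contiguous audio batches into one continuous recording.
class AudioAccumulator {
    private val chunks = mutableListOf<FloatArray>()

    var totalSamples: Int = 0
        private set

    // Call once per onAudioAnalysisBatchCompleted with the batch's samples.
    fun append(batch: FloatArray) {
        chunks.add(batch.copyOf())
        totalSamples += batch.size
    }

    // Concatenate all batches into a single contiguous buffer.
    fun toFloatArray(): FloatArray {
        val out = FloatArray(totalSamples)
        var offset = 0
        for (chunk in chunks) {
            chunk.copyInto(out, offset)
            offset += chunk.size
        }
        return out
    }
}
```

Note that a full night of audio can be large; a real implementation would more likely stream batches to disk than hold them in memory.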

Audio clips

The SDK can capture short audio recordings when specific sleep events are detected, such as snoring, sleep talking, or coughing. To use audio clips, create an audio clips producer and pass it to startAnalysis:
import com.sleepcycle.sdk.*

// Configure which events trigger audio clips
val audioClipsConfig = AudioClipsConfig(
    activeTypes = hashMapOf(
        EventType.SNORING to EventTypeConfig(minDuration = 0.5),
        EventType.TALKING to EventTypeConfig(minDuration = 0.5)
    ),
    clipLength = 10.0  // Clip duration in seconds
)

// Implement receiver to handle captured clips
val audioClipsReceiver = object : AudioClipsReceiver {
    override fun onAudioClipReceived(audioClip: AudioClip) {
        // Process or store the audio clip
    }
}

try {
    // Create audio clips producer
    val audioClipsProducer = SleepCycleSdk.createAudioClipsProducer(
        config = audioClipsConfig,
        receiver = audioClipsReceiver
    )

    // Pass to startAnalysis
    SleepCycleSdk.startAnalysis(
        config = SleepAnalysisConfig(useAudio = true),
        audioEventListeners = listOf(audioClipsProducer)
    )
} catch (e: Exception) {
    Log.e("SDK", "Failed to start analysis with audio clips: ${e.message}")
}
Each AudioClip contains:
startTime Double - Start timestamp in seconds
type EventType - The event type that triggered the capture
samples FloatArray - Raw audio samples
sampleRate Int - Sample rate in Hz
sessionId UUID - The session this clip belongs to
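To persist a clip or feed it to an audio encoder, the float samples typically need converting to 16-bit PCM. The sketch below assumes the samples are normalized to [-1.0, 1.0], which this section does not explicitly state; verify against the API reference before relying on it.

```kotlin
// Convert normalized float samples ([-1.0, 1.0] assumed) to
// 16-bit little-endian PCM bytes.
fun floatsToPcm16(samples: FloatArray): ByteArray {
    val bytes = ByteArray(samples.size * 2)
    for (i in samples.indices) {
        val clamped = samples[i].coerceIn(-1.0f, 1.0f)
        val value = (clamped * Short.MAX_VALUE).toInt()
        bytes[i * 2] = (value and 0xFF).toByte()          // low byte
        bytes[i * 2 + 1] = ((value shr 8) and 0xFF).toByte() // high byte
    }
    return bytes
}
```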

Multi-channel analysis

The SDK supports analyzing two channels simultaneously using a stereo audio source. Multi-channel analysis separates the data source lifecycle from individual session lifecycles, allowing you to start and stop sessions on each channel independently. The stereo stream is expected to combine two separate mono microphones, one microphone per channel. This requires the multiChannelAnalysis feature to be enabled for your API key.

Channel separation

When using stereo input, ChannelSeparationConfig controls how audio events are assigned to channels. Built-in presets:
  • BED_SIDE_MICS — bedside microphones placed apart (default)
  • CENTERED_MIC_ARRAY — closely spaced microphone array
  • DETECTION_STRENGTH_ONLY — no spatial filtering, uses only detection confidence
All parameters (mic distance, ambiguous zone, confidence threshold, per-event-type settings) can be tuned individually to fit your specific hardware setup and use case.
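In the simplest case you select a preset to match the physical microphone arrangement; a sketch, where MicSetup is a hypothetical host-app enum (the constructor parameters for fully custom tuning are not shown in this section, so consult the API reference for those):

```kotlin
// Pick a preset matching the physical microphone arrangement.
val separationConfig = when (micSetup) {
    MicSetup.TWO_BEDSIDE -> ChannelSeparationConfig.BED_SIDE_MICS
    MicSetup.CENTERED_ARRAY -> ChannelSeparationConfig.CENTERED_MIC_ARRAY
    else -> ChannelSeparationConfig.DETECTION_STRENGTH_ONLY
}
```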

Data source lifecycle

Start the data source with a stereo audio configuration before starting individual sessions:
try {
    val dataSource = SleepCycleSdk.createLiveDataSource(
        audioFormat = DataSource.AudioFormat.STEREO
    )

    SleepCycleSdk.startDataSource(
        dataSource = dataSource,
        channelSeparationConfig = ChannelSeparationConfig.BED_SIDE_MICS
    )
} catch (e: Exception) {
    Log.e("SDK", "Failed to start data source: ${e.message}")
}

Starting sessions on each channel

Once the data source is running, start a session on each channel:
import com.sleepcycle.sdk.AnalysisChannel

try {
    val primarySessionId: UUID = SleepCycleSdk.startMultiChannelAnalysis(
        channel = AnalysisChannel.PRIMARY,
        config = SleepAnalysisConfig(useAudio = true, useAccelerometer = true)
    )

    val secondarySessionId: UUID = SleepCycleSdk.startMultiChannelAnalysis(
        channel = AnalysisChannel.SECONDARY,
        config = SleepAnalysisConfig(useAudio = true, useAccelerometer = false)
    )
} catch (e: Exception) {
    Log.e("SDK", "Failed to start multi-channel analysis: ${e.message}")
}
AnalysisChannel values:
PRIMARY - First audio channel (or mono)
SECONDARY - Second audio channel in stereo

Stopping sessions independently

Each session can be stopped independently to retrieve its result:
try {
    val primaryResult: AnalysisResult? = SleepCycleSdk.stopAnalysis(
        sessionId = primarySessionId
    )

    val secondaryResult: AnalysisResult? = SleepCycleSdk.stopAnalysis(
        sessionId = secondarySessionId
    )
} catch (e: Exception) {
    Log.e("SDK", "Failed to stop analysis: ${e.message}")
}

Stopping the data source

After all sessions have been stopped, stop the data source:
try {
    SleepCycleSdk.stopDataSource()
} catch (e: Exception) {
    Log.e("SDK", "Failed to stop data source: ${e.message}")
}
Calling stopDataSource() while sessions are still active will force-stop them and discard their results. To retrieve results, stop each session first.