Conversation
- Add StreamingInfo and DownloadInfo dataclasses
- Add STREAMING status to JobStatus enum
- Add streaming progress update methods to JobService
- Add tts_cache_key and generation_attempt for retry support

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Implement HLSService with segment upload, playlist generation
- Add pre-signed URL generation for secure access
- Add TTS caching for idempotent regeneration
- Add cleanup methods for HLS artifacts
- Add combine_voice_and_music_hls method for streaming output
- Implement progressive segment upload during generation
- Maintain backward compatibility with existing MP3 output
- Add _prepare_mixed_audio helper for shared audio mixing logic
- Implement segment concatenation via FFmpeg
- Add pre-signed URL generation for downloads
- Include idempotency check to avoid regeneration
- Add cleanup of temp files after generation
- Add HLS mode for meditation processing with streaming output
- Implement download endpoint for MP3 generation
- Add ENABLE_HLS_STREAMING feature flag
- Include TTS caching and retry logic for resilience
- Maintain backward compatibility with base64 mode
- Add POST /job/{job_id}/download endpoint
- Add GET method to CORS allowed methods
- Add ENABLE_HLS_STREAMING environment variable
- Add automatic expiration for HLS streaming artifacts (7 days)
- Add automatic expiration for download artifacts (7 days)
- Add automatic expiration for job metadata (30 days)
- Add TestHLSIntegration for service-level tests
- Add TestFeatureFlag for env var parsing tests
- Add TestEndToEndFlow for handler integration tests
- Cover job creation, status transitions, download, and cleanup
- Add hls.js for web HLS playback
- Add react-native-webview for mobile HLS player
- Implement HLS.js integration for streaming audio
- Handle Safari native HLS fallback
- Add playback controls and state management
- Add native mobile stub (actual mobile playback via WebView)
- Create WebView-based HLS player for iOS/Android
- Add platform-specific implementations (mobile WebView, web hook)
- Handle bidirectional postMessage communication
- Include HTML template with HLS.js CDN integration
- Add streaming job response handling
- Return playlist URL when streaming begins
- Add download URL fetch capability
- Add BackendMeditationCallStreaming function
- Maintain backward compatibility with base64 mode
- Add HLSPlayer integration for streaming mode
- Support both streaming and legacy base64 modes
- Add streaming event callbacks (complete, error)
- Maintain backward compatibility with existing API
- Add DownloadButton component with presigned URL support
- Handle native file download via expo-file-system
- Handle web download via window.open
- Add download progress tracking
- Add retry functionality on failure
- Update useMeditation hook for streaming support
- Add playlist URL and streaming state management
- Integrate DownloadButton component
- Handle streaming status updates during polling
- Add ErrorBoundary for catching unexpected errors
- Add StreamingError with user-friendly error messages
- Add MeditationLoading with streaming-aware states
- Add integration test suite for HLS streaming flow
- Mark all verification items as complete
- All tasks implemented and tested
- Remove unused imports (Linking, Platform, time, json, os)
- Fix React hooks violations:
  - Remove setState calls in effect bodies
  - Avoid accessing refs during render
  - Use queueMicrotask for deferred state updates
- Fix unused variables (pressed, localFilePath, hasContent)
- Update tests to match new HLS player behavior:
  - isLoading starts as false until HLS.js callbacks fire
  - Play/Pause button text depends on player state
- Auto-fix import sorting with ruff
Walkthrough

Adds end-to-end HLS streaming: new HLSService and DownloadService, an FFmpeg HLS audio path, a job/status streaming lifecycle and download endpoint, Lambda routing and a feature flag, plus a frontend HLS player/hook, download UI, types, tests, and S3 lifecycle policy updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Lambda
    participant AudioSvc as FFmpegAudioService
    participant HLSSvc as HLSService
    participant S3
    participant DownloadSvc
    Client->>Lambda: POST /meditation (enable_streaming=true)
    Lambda->>Lambda: create job (STREAMING)
    Lambda-->>Client: job_id (and streaming metadata if available)
    Lambda->>AudioSvc: combine_voice_and_music_hls(...)
    AudioSvc->>AudioSvc: prepare mixed audio
    loop per segment
        AudioSvc->>HLSSvc: upload_segment(segment_data)
        HLSSvc->>S3: PUT segment_*.ts
        HLSSvc->>S3: PUT/UPDATE playlist.m3u8
        AudioSvc->>Lambda: progress_callback(segments_completed)
        Lambda->>Lambda: update_streaming_progress()
    end
    AudioSvc-->>Lambda: segments complete
    Lambda->>Lambda: mark_streaming_complete()
    Client->>Lambda: GET /job/{job_id}
    Lambda-->>Client: JobStatus (streaming/completed) + playlist_url
    Client->>Lambda: POST /job/{job_id}/download
    Lambda->>DownloadSvc: generate_mp3_and_get_url()
    DownloadSvc->>HLSSvc: list_segments()
    DownloadSvc->>S3: GET segments -> FFmpeg concat -> PUT downloads/{job}.mp3
    DownloadSvc-->>Lambda: presigned MP3 URL
    Lambda-->>Client: download_url
```
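The `DownloadSvc` concat step in the diagram can be sketched as follows. This is illustrative only: the flags and the helper name are assumptions, not the project's actual `DownloadService.generate_mp3` invocation.

```python
def build_ffmpeg_concat_command(list_file: str, output_path: str) -> list:
    """Build (but do not run) an ffmpeg invocation that stitches HLS .ts
    segments into one MP3, using the concat demuxer. `list_file` is a text
    file with one `file 'segment_000.ts'` line per segment. Flags are
    illustrative, not the project's real invocation."""
    return [
        "ffmpeg", "-y",                # overwrite output if it exists
        "-f", "concat", "-safe", "0",  # concat demuxer, allow arbitrary paths
        "-i", list_file,
        "-vn",                         # audio only; drop any video streams
        "-acodec", "libmp3lame",
        "-b:a", "128k",
        output_path,
    ]
```

The resulting list would typically be passed to `subprocess.run(cmd, check=True, capture_output=True, timeout=...)`.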
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 12
🧹 Nitpick comments (30)
frontend/components/ScreenComponents/MeditationLoading.tsx (2)
23-36: Optional: Consider removing the unreachable default case.

Since `LoadingState` is an exhaustive union type, the `default` case at line 34 will never be reached. While this provides defensive coding, you could either:

- Remove the default case to let TypeScript enforce exhaustiveness, or
- Keep it for runtime safety if the type could be widened in the future.

```diff
 function getLoadingMessage(state: LoadingState): string {
   switch (state) {
     case 'starting':
       return 'Starting meditation...';
     case 'preparing':
       return 'Preparing audio...';
     case 'ready':
       return 'Ready to play';
     case 'streaming':
       return 'Streaming...';
-    default:
-      return 'Loading...';
   }
 }
```
49-49: Simplify the `hasTotal` condition.

The condition checks `!== undefined && !== null && > 0`, but since you're already checking `!== null` explicitly and TypeScript handles the type narrowing, you can simplify this.

```diff
- const hasTotal = segmentsTotal !== undefined && segmentsTotal !== null && segmentsTotal > 0;
+ const hasTotal = segmentsTotal != null && segmentsTotal > 0;
```

Note: Using `!=` (loose equality) checks both `null` and `undefined` in a single comparison.

frontend/components/ErrorBoundary.tsx (1)
12-80: ErrorBoundary implementation looks solid; consider future-proofing the custom fallback API.

The lifecycle usage and default UI are correct and idiomatic. If you expect callers to need access to the error/reset logic in custom fallbacks, you might later want to support a render-prop signature (e.g., `fallback?: (error: Error, onReset: () => void) => ReactNode`) instead of just a `ReactNode`, but that's purely optional for now.

frontend/hooks/useHLSPlayer.ts (1)
7-47: Mobile HLS stub is fine; minor type/ergonomics nits.

The stub does what it needs to for native platforms. Two small, optional tweaks:

- Match the `seek` signature to the interface for clarity: `seek: (_time: number) => void`.
- If callers might ever mutate the state object, consider returning a shallow copy (`{ ...noopState }`) to avoid a shared reference across hook consumers.

Nothing blocking here.

docs/plans/Phase-2.md (2)
77-88: Add languages to fenced code blocks to satisfy markdownlint.

Several commit-message code fences use plain ``` without a language (e.g., around the "Commit Message Template" sections). To clear MD040, annotate them as `text`:

````diff
-**Commit Message Template**:
-```
+**Commit Message Template**:
+```text
 Author & Commiter : HatmanStack
 ...
-```
+```
````

Apply this pattern to all similar blocks (Tasks 1–10).
522-623: Align plan details with the actual DownloadButton implementation.

The plan here still references `frontend/components/DownloadButton.tsx` and `DownloadButtonProps` with `jobId`/`userId`, but the code under test uses `frontend/components/DownloadButton/DownloadButton.tsx` and props like `downloadAvailable`, `onGetDownloadUrl`, `fileName`, etc. It's worth updating the doc so future work/maintenance isn't confused by the older API description.

backend/tests/unit/test_download_service.py (1)
94-191: DownloadService tests have strong coverage; optional extra failure case.

These tests exercise initialization, key generation, existence checks, happy-path MP3 generation, FFmpeg failure, segment download failure, and URL generation. If you want to be exhaustive, you could add a small test for the "upload MP3 to S3 fails" branch in `generate_mp3`, but what's here is already solid.

tests/frontend/unit/HLSPlayer-test.tsx (1)
9-39: WebView mock and tests look good; `storedRef` is unused.

The custom WebView mock and message plumbing are well set up for exercising the HLSPlayer contract. The `storedRef` variable is never read, so you can safely drop it (and its assignments) to keep the test helper lean.

docs/plans/Phase-1.md (1)
1-690: LGTM! Comprehensive Phase 1 HLS implementation documentation.

The documentation provides:
- Clear phase goals and success criteria
- Detailed task breakdown with prerequisites
- Implementation steps for each component
- Verification checklists and testing instructions
- Commit message templates for consistency
The plan effectively guides implementation of:
- HLS streaming infrastructure
- Job model extensions
- Segment/playlist management
- MP3 concatenation service
- Lambda handler integration
- S3 lifecycle rules
Note: Static analysis flagged missing language specifiers on fenced code blocks (lines 92, 199, 293, 385, 483, 542, 598, 679). While these are minor markdown formatting issues, adding language specifiers (e.g., `bash`, `text`) would improve syntax highlighting.

tests/frontend/unit/useHLSPlayer-test.tsx (2)
167-187: Consider verifying state changes, not just callback registration.

These tests only verify that event listeners are registered. For more robust coverage, invoke the callbacks and assert the resulting state changes.

```diff
 it('should track playing state via audio events', () => {
-  renderHook(() => useHLSPlayer('https://example.com/playlist.m3u8'));
+  const { result } = renderHook(() => useHLSPlayer('https://example.com/playlist.m3u8'));

   // Find the play event listener
   const playCallback = mockAudioElement.addEventListener.mock.calls.find(
     (call: [string, () => void]) => call[0] === 'play'
   )?.[1];

   expect(playCallback).toBeDefined();
+
+  act(() => {
+    playCallback?.();
+  });
+
+  const [state] = result.current;
+  expect(state.isPlaying).toBe(true);
 });
```
319-333: Safari fallback test could be enhanced.

The test verifies `canPlayType` is called but doesn't assert that the native path is taken (e.g., `mockHlsInstance.loadSource` should NOT be called when native HLS is supported). Consider adding a negative assertion.

```diff
 it('should use native HLS when supported', () => {
   // Mock canPlayType to return support
   mockAudioElement.canPlayType.mockReturnValue('maybe');

   renderHook(() => useHLSPlayer('https://example.com/playlist.m3u8'));

-  // Should set src directly instead of using HLS.js
-  // Note: This test may not work exactly as expected due to hook implementation
-  // but it verifies the canPlayType is called
   expect(mockAudioElement.canPlayType).toHaveBeenCalledWith(
     'application/vnd.apple.mpegurl'
   );
+  // Native path sets src directly, HLS.js should not be used
+  expect(mockAudioElement.src).toBe('https://example.com/playlist.m3u8');
+  expect(mockHlsInstance.loadSource).not.toHaveBeenCalled();
 });
```

backend/tests/integration/test_hls_integration.py (2)
13-21: Fixture `mock_aws_services` is defined but not used.

The `mock_aws_services` fixture patches environment variables but is not referenced by any test. Either use it (add to test parameters) or remove it.

```python
@pytest.fixture
def mock_aws_services():
    """Mock AWS services for integration tests."""
    with patch.dict(os.environ, {
        "ENABLE_HLS_STREAMING": "true",
        "AWS_S3_BUCKET": "test-bucket",
        "AWS_AUDIO_BUCKET": "test-audio-bucket",
    }):
        yield
```

If this fixture should apply to all tests in `TestHLSIntegration`, consider using `@pytest.mark.usefixtures("mock_aws_services")` on the class.
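Related to the module-reload concern also raised for this file: a minimal sketch of reading the flag at call time (the helper name is hypothetical, not the project's actual code), which lets `patch.dict(os.environ, ...)` take effect without `importlib.reload`:

```python
import os

def hls_streaming_enabled() -> bool:
    """Read ENABLE_HLS_STREAMING on every call instead of once at module
    import, so tests can flip it via patch.dict without reloading the
    handler module. Helper name and accepted values are illustrative."""
    return os.environ.get("ENABLE_HLS_STREAMING", "false").lower() in ("1", "true", "yes")
```

Call sites would then use `if hls_streaming_enabled():` instead of a module-level constant.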
209-242: Module reload pattern is fragile; refactor feature flag to be read at runtime.

The tests currently rely on `importlib.reload(lambda_handler)` to pick up environment variable changes. While this works because the module has minimal side effects, module reload is inherently fragile and depends on implementation details. If the module gains side effects (e.g., service initialization), these tests will break. Refactor `ENABLE_HLS_STREAMING` to be read at runtime (e.g., in method calls or via a property) rather than evaluated once at module load, making the feature flag dynamic without requiring module reloads.

frontend/components/ScreenComponents/MeditationControls.tsx (1)
158-162: Retry clears error before verifying playback succeeds.

Setting `hlsError` to `null` before calling `play()` means if the retry fails immediately, the error state briefly disappears then reappears, causing a UI flash. Consider using a loading state or waiting for playback confirmation.

frontend/hooks/useHLSPlayer.web.ts (1)
188-202: HLS configuration could be extracted to a constant.

The same HLS configuration is duplicated in the retry function (lines 329-334). Extract to a shared constant for maintainability.

```diff
+const HLS_CONFIG = {
+  liveSyncDuration: 3,
+  liveMaxLatencyDuration: 10,
+  liveDurationInfinity: true,
+  lowLatencyMode: true,
+  manifestLoadingMaxRetry: 4,
+  manifestLoadingRetryDelay: 1000,
+  levelLoadingMaxRetry: 4,
+  levelLoadingRetryDelay: 1000,
+  fragLoadingMaxRetry: 6,
+  fragLoadingRetryDelay: 1000,
+};

 // In the main effect:
-const hls = new Hls({
-  liveSyncDuration: 3,
-  liveMaxLatencyDuration: 10,
-  ...
-});
+const hls = new Hls(HLS_CONFIG);

 // In retry:
-const hls = new Hls({
-  liveSyncDuration: 3,
-  ...
-});
+const hls = new Hls(HLS_CONFIG);
```

docs/plans/Phase-0.md (1)
155-161: Add language specifiers to fenced code blocks.

Static analysis flagged missing language identifiers. This improves syntax highlighting and tooling support.

Apply these changes:

Line 155-161: Add `text` or leave as plain:

````diff
-```
+```text
 s3://float-cust-data/
````

Line 164-175: Add `text` or leave as plain:

````diff
-```
+```text
 s3://float-cust-data/
````

Line 359-364: Add `text`:

````diff
-```
+```text
 type(scope): brief description
````

Also applies to: 164-175, 359-364
frontend/components/HLSPlayer/HLSPlayer.tsx (1)
69-83: Consider extracting the delay constant.

The `100ms` delay for auto-play initialization appears in two places (lines 76 and 97). Consider extracting to a named constant for clarity and maintainability.

```diff
+const AUTOPLAY_DELAY_MS = 100;
+
 // Load playlist when URL changes
 useEffect(() => {
   if (playlistUrl) {
     if (isReadyRef.current) {
       sendCommand('load', { url: playlistUrl });
       if (autoPlay) {
-        setTimeout(() => sendCommand('play'), 100);
+        setTimeout(() => sendCommand('play'), AUTOPLAY_DELAY_MS);
       }
```

backend/src/services/download_service.py (1)
32-39: Overly broad exception handling.

Catching `Exception` in `check_mp3_exists` masks the difference between "object doesn't exist" and other errors (network issues, permissions, etc.). Consider catching the specific S3 exception.

```diff
+from botocore.exceptions import ClientError
+
 def check_mp3_exists(self, user_id: str, job_id: str) -> bool:
     """Check if MP3 has already been generated for this job."""
     key = self.get_download_key(user_id, job_id)
     try:
         self.storage_service.s3_client.head_object(Bucket=self.bucket, Key=key)
         return True
-    except Exception:
-        return False
+    except ClientError as e:
+        if e.response['Error']['Code'] == '404':
+            return False
+        raise  # Re-raise unexpected errors
```

frontend/components/HLSPlayer/hlsPlayerHtml.ts (1)
150-168: Consider adding retry limiting for error recovery.

The `NETWORK_ERROR` case calls `hls.startLoad()` without tracking retry attempts. If the network is persistently down, this could create an infinite retry loop. HLS.js has built-in retry limits per-request, but consider adding application-level tracking for repeated fatal errors.

This is a low-risk concern since HLS.js internal retries will eventually exhaust, but for robustness:

```javascript
let networkErrorCount = 0;
const MAX_NETWORK_RETRIES = 3;

// In error handler:
case Hls.ErrorTypes.NETWORK_ERROR:
  if (networkErrorCount++ < MAX_NETWORK_RETRIES) {
    hls.startLoad();
  } else {
    sendMessage('error', { message: 'Network error after retries', fatal: true });
  }
  break;
```

frontend/types/api.ts (1)
9-13: Naming convention inconsistency between API and client types.

`StreamingInfo` uses `snake_case` (`playlist_url`, `segments_completed`) matching the API, while `MeditationResult` uses `camelCase` (`playlistUrl`, `segmentsCompleted`). This is intentional (API vs client types), but consider adding a note or organizing into separate namespaces for clarity.

Consider grouping types by purpose:

```typescript
// API Response types (match backend snake_case)
export namespace Api {
  export interface StreamingInfo { ... }
  export interface JobStatusResponse { ... }
}

// Client-side types (camelCase)
export interface MeditationResult { ... }
```

Or add a comment explaining the convention.
Also applies to: 61-74
backend/src/services/ffmpeg_audio_service.py (2)
307-405: Significant code duplication with `combine_voice_and_music`.

This method duplicates most of the logic from `combine_voice_and_music` (lines 79-187). Consider refactoring `combine_voice_and_music` to use `_prepare_mixed_audio` internally, reducing duplication.

```diff
 def combine_voice_and_music(
     self, voice_path: str, music_list: List[str], timestamp: str, output_path: str
 ) -> List[str]:
     """Combine voice and music into a single MP3 file."""
-    print("Combining Audio")
-    # ... 100+ lines of duplicated code ...
+    print("Combining Audio")
+    mixed_path, updated_music = self._prepare_mixed_audio(voice_path, music_list, timestamp)
+    try:
+        shutil.move(mixed_path, output_path)
+        return updated_music
+    finally:
+        if os.path.exists(mixed_path):
+            os.remove(mixed_path)
```

This would require updating `_prepare_mixed_audio` to return the music list as noted in the previous comment.
336-397: Consider adding timeouts to FFmpeg subprocess calls.

Multiple `subprocess.run` calls in this method lack timeouts. While individual operations are smaller, a stalled FFmpeg process could still cause issues. Consider adding consistent timeouts across all subprocess calls.

```diff
+FFMPEG_STEP_TIMEOUT = 120  # 2 minutes per step

 subprocess.run(
     [...],
     check=True,
     capture_output=True,
+    timeout=FFMPEG_STEP_TIMEOUT,
 )
```

backend/src/services/job_service.py (2)
139-148: Add logging for missing job in `mark_streaming_started`.

Unlike `update_streaming_progress` (lines 102-106), this method silently returns when the job is not found, which may complicate debugging.

```diff
 def mark_streaming_started(
     self,
     user_id: str,
     job_id: str,
     playlist_url: str,
 ):
     """Mark job as streaming and set initial playlist URL."""
     job_data = self.get_job(user_id, job_id)
     if not job_data:
+        logger.warning(
+            "Cannot mark streaming started: job not found",
+            extra={"data": {"job_id": job_id}}
+        )
         return
```
108-116: Consider extracting default streaming/download dict initialization.

The same default streaming dict structure is duplicated in `update_streaming_progress`, `mark_streaming_started`, `mark_streaming_complete`, and `mark_download_ready`. A helper method would reduce repetition.

Also applies to: 153-160
frontend/app/(tabs)/explore.tsx (2)
119-126: Avoid swallowing all errors silently.

The empty catch block swallows all exceptions, including network errors or unexpected failures, making issues harder to debug. Consider logging at debug level.

```diff
 // Check if download is already available
 try {
   const completionResult = await response.waitForCompletion();
   if (completionResult.downloadAvailable) {
     setDownloadAvailable(true);
   }
-} catch {
+} catch (err) {
   // Silent fail - download might not be available yet
+  console.debug('waitForCompletion check failed:', err);
 }
```
156-159: Consider exposing stream errors to the UI.

Currently `handleStreamError` only logs the error. Consider surfacing this to users via a toast or error state for better UX.

backend/src/handlers/lambda_handler.py (1)
596-610: Minor: Duplicate job lookup in download request flow.

`_handle_download_request` fetches the job at line 597 for authorization, then `handle_download_request` fetches it again at line 370. Consider passing `job_data` to avoid the redundant S3 read.

frontend/components/BackendMeditationCall.tsx (2)
204-223: Add validation for download URL in response.

`fetchDownloadUrl` assumes `data.download_url` exists but the backend can return an error object instead. Consider validating the response structure.

```diff
 const data: DownloadResponse = await response.json();
+if (!data.download_url) {
+  throw new Error(data.error?.message || 'No download URL returned');
+}
 return data.download_url;
```
104-199: Consider consolidating polling functions.

`pollJobStatusForStreaming` and `pollUntilComplete` share significant logic. A parameterized helper could reduce duplication:

```typescript
async function pollJobStatus(
  jobId: string,
  userId: string,
  lambdaUrl: string,
  returnOnStreaming: boolean = false,
  onStatusUpdate?: (status: JobStatusResponse) => void
): Promise<JobStatusResponse>
```

backend/src/services/hls_service.py (1)
41-56: Abstraction leak: Direct `s3_client` access bypasses StorageService interface.

`HLSService` accesses `storage_service.s3_client` directly throughout (lines 45, 72, 96, 118, 200, 219, 233, 283), bypassing the `StorageService` abstraction. This couples the service to the S3 implementation.

Consider extending `StorageService` with methods like `generate_presigned_url()` and `upload_bytes()` if you need to support alternative storage backends in the future.
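A sketch of what that extended interface could look like, with an in-memory fake standing in for S3. The method names follow the review's suggestion; the real `StorageService` API may differ.

```python
from typing import Dict, Tuple

class InMemoryStorage:
    """Toy StorageService-style backend: stores objects in a dict and
    fabricates 'presigned' URLs, so callers like HLSService never need
    to touch s3_client directly. Illustrative only."""

    def __init__(self) -> None:
        self._objects: Dict[str, Tuple[bytes, str]] = {}

    def upload_bytes(self, key: str, data: bytes, content_type: str) -> None:
        self._objects[key] = (data, content_type)

    def generate_presigned_url(self, key: str, expires_in: int = 3600) -> str:
        if key not in self._objects:
            raise KeyError(key)
        return f"https://storage.example/{key}?expires={expires_in}"
```

With such an interface in place, a fake like this can back `HLSService` in unit tests without mocking boto3.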
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (40)
- `backend/src/handlers/lambda_handler.py` (8 hunks)
- `backend/src/models/domain.py` (1 hunks)
- `backend/src/services/download_service.py` (1 hunks)
- `backend/src/services/ffmpeg_audio_service.py` (3 hunks)
- `backend/src/services/hls_service.py` (1 hunks)
- `backend/src/services/job_service.py` (4 hunks)
- `backend/template.yaml` (4 hunks)
- `backend/tests/integration/test_hls_integration.py` (1 hunks)
- `backend/tests/unit/test_download_service.py` (1 hunks)
- `backend/tests/unit/test_hls_service.py` (1 hunks)
- `backend/tests/unit/test_services.py` (1 hunks)
- `docs/plans/Phase-0.md` (1 hunks)
- `docs/plans/Phase-1.md` (1 hunks)
- `docs/plans/Phase-2.md` (1 hunks)
- `docs/plans/README.md` (1 hunks)
- `frontend/app/(tabs)/explore.tsx` (7 hunks)
- `frontend/components/BackendMeditationCall.tsx` (6 hunks)
- `frontend/components/DownloadButton/DownloadButton.tsx` (1 hunks)
- `frontend/components/DownloadButton/index.ts` (1 hunks)
- `frontend/components/ErrorBoundary.tsx` (1 hunks)
- `frontend/components/HLSPlayer/HLSPlayer.tsx` (1 hunks)
- `frontend/components/HLSPlayer/HLSPlayer.web.tsx` (1 hunks)
- `frontend/components/HLSPlayer/hlsPlayerHtml.ts` (1 hunks)
- `frontend/components/HLSPlayer/index.ts` (1 hunks)
- `frontend/components/ScreenComponents/MeditationControls.tsx` (6 hunks)
- `frontend/components/ScreenComponents/MeditationLoading.tsx` (1 hunks)
- `frontend/components/StreamingError.tsx` (1 hunks)
- `frontend/hooks/index.ts` (1 hunks)
- `frontend/hooks/useHLSPlayer.ts` (1 hunks)
- `frontend/hooks/useHLSPlayer.web.ts` (1 hunks)
- `frontend/package.json` (1 hunks)
- `frontend/types/api.ts` (1 hunks)
- `tests/frontend/integration/hls-streaming-test.tsx` (1 hunks)
- `tests/frontend/unit/BackendMeditationCall-test.tsx` (2 hunks)
- `tests/frontend/unit/DownloadButton-test.tsx` (1 hunks)
- `tests/frontend/unit/HLSPlayer-test.tsx` (1 hunks)
- `tests/frontend/unit/MeditationControls-test.tsx` (2 hunks)
- `tests/frontend/unit/MeditationLoading-test.tsx` (1 hunks)
- `tests/frontend/unit/StreamingError-test.tsx` (1 hunks)
- `tests/frontend/unit/useHLSPlayer-test.tsx` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (21)
frontend/components/DownloadButton/DownloadButton.tsx (4)
- frontend/components/DownloadButton/index.ts (1): `DownloadButtonProps` (6-6)
- frontend/constants/Colors.ts (1): `Colors` (4-3888)
- frontend/components/ThemedView.tsx (1): `ThemedView` (17-26)
- frontend/components/ThemedText.tsx (1): `ThemedText` (27-116)
frontend/components/ScreenComponents/MeditationLoading.tsx (3)
- frontend/components/ThemedView.tsx (1): `ThemedView` (17-26)
- frontend/constants/Colors.ts (1): `Colors` (4-3888)
- frontend/components/ThemedText.tsx (1): `ThemedText` (27-116)
tests/frontend/unit/StreamingError-test.tsx (1)
- frontend/components/StreamingError.tsx (1): `getErrorMessage` (99-99)

backend/tests/unit/test_hls_service.py (1)
- backend/src/services/hls_service.py (18): `get_hls_prefix` (22-24), `get_segment_key` (26-29), `get_playlist_key` (31-34), `get_tts_cache_key` (36-39), `generate_presigned_url` (41-56), `generate_playlist_url` (58-61), `generate_segment_url` (63-66), `upload_segment` (68-88), `upload_segment_from_file` (90-112), `upload_playlist` (114-131), `generate_live_playlist` (133-181), `finalize_playlist` (183-194), `upload_tts_cache` (196-213), `download_tts_cache` (215-227), `tts_cache_exists` (229-236), `list_segments` (238-249), `cleanup_hls_artifacts` (251-277), `download_segment` (279-290)
frontend/components/HLSPlayer/HLSPlayer.web.tsx (2)
- frontend/components/HLSPlayer/HLSPlayer.tsx (2): `HLSPlayerProps` (11-20), `HLSPlayerRef` (22-26)
- frontend/hooks/useHLSPlayer.web.ts (1): `useHLSPlayer` (34-368)
tests/frontend/unit/BackendMeditationCall-test.tsx (2)
- tests/frontend/unit/utils/setup.ts (1): `MOCK_LAMBDA_URL` (24-24)
- frontend/components/BackendMeditationCall.tsx (1): `BackendMeditationCallStreaming` (291-378)
tests/frontend/unit/MeditationControls-test.tsx (2)
- tests/frontend/integration/test-utils.tsx (4): `screen` (331-331), `fireEvent` (330-330), `waitFor` (329-329), `act` (333-333)
- tests/frontend/unit/utils/testUtils.tsx (1): `waitFor` (194-196)
frontend/components/HLSPlayer/HLSPlayer.tsx (2)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (2): `HLSPlayerProps` (9-18), `HLSPlayerRef` (20-24)
- frontend/components/HLSPlayer/hlsPlayerHtml.ts (1): `hlsPlayerHtml` (7-230)
tests/frontend/unit/HLSPlayer-test.tsx (3)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (1): `HLSPlayerRef` (20-24)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1): `HLSPlayerRef` (22-26)
- frontend/components/HLSPlayer/index.ts (1): `HLSPlayerRef` (7-7)
frontend/components/ErrorBoundary.tsx (3)
- frontend/components/ThemedView.tsx (1): `ThemedView` (17-26)
- frontend/components/ThemedText.tsx (1): `ThemedText` (27-116)
- frontend/constants/Colors.ts (1): `Colors` (4-3888)
tests/frontend/integration/hls-streaming-test.tsx (1)
- frontend/components/ErrorBoundary.tsx (1): `render` (48-79)
backend/tests/unit/test_services.py (2)
- backend/src/services/job_service.py (11): `JobService` (29-357), `create_job` (36-70), `JobStatus` (21-26), `update_streaming_progress` (91-137), `mark_streaming_started` (139-166), `mark_streaming_complete` (168-199), `mark_download_ready` (201-223), `mark_download_completed` (225-235), `set_tts_cache_key` (237-245), `increment_generation_attempt` (247-257), `get_job` (259-280)
- backend/src/services/ffmpeg_audio_service.py (2): `FFmpegAudioService` (26-455), `combine_voice_and_music_hls` (189-305)
frontend/types/api.ts (1)
- frontend/components/BackendMeditationCall.tsx (3): `StreamingInfo` (467-467), `JobStatusResponse` (467-467), `MeditationResult` (467-467)
tests/frontend/unit/useHLSPlayer-test.tsx (2)
- frontend/hooks/useHLSPlayer.web.ts (1): `useHLSPlayer` (34-368)
- frontend/hooks/index.ts (1): `useHLSPlayer` (7-7)
frontend/app/(tabs)/explore.tsx (1)
- frontend/components/BackendMeditationCall.tsx (2): `StreamingMeditationResponse` (43-52), `BackendMeditationCallStreaming` (291-378)
backend/src/services/ffmpeg_audio_service.py (4)
- backend/src/utils/logging_utils.py (1): `get_logger` (85-110)
- backend/src/services/audio_service.py (3): `AudioService` (5-18), `get_audio_duration` (8-8), `select_background_music` (16-18)
- backend/src/services/storage_service.py (1): `StorageService` (5-14)
- backend/src/services/hls_service.py (5): `HLSService` (15-290), `upload_segment_from_file` (90-112), `generate_live_playlist` (133-181), `upload_playlist` (114-131), `finalize_playlist` (183-194)
backend/tests/integration/test_hls_integration.py (6)
- backend/src/services/hls_service.py (7): `generate_presigned_url` (41-56), `HLSService` (15-290), `upload_segment` (68-88), `generate_live_playlist` (133-181), `tts_cache_exists` (229-236), `upload_tts_cache` (196-213), `cleanup_hls_artifacts` (251-277)
- backend/src/services/s3_storage_service.py (1): `download_json` (31-46)
- backend/src/services/job_service.py (7): `JobService` (29-357), `create_job` (36-70), `JobStatus` (21-26), `update_job_status` (72-89), `mark_streaming_started` (139-166), `update_streaming_progress` (91-137), `mark_streaming_complete` (168-199)
- backend/src/services/download_service.py (2): `DownloadService` (19-192), `generate_mp3` (58-173)
- backend/src/handlers/lambda_handler.py (5): `lambda_handler` (486-515), `LambdaHandler` (50-473), `handle_meditation_request` (112-145), `handle_job_status` (354-366), `handle_download_request` (368-409)
- backend/src/config/constants.py (1): `InferenceType` (4-6)
frontend/components/ScreenComponents/MeditationControls.tsx (4)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (1): `HLSPlayerRef` (20-24)
- frontend/components/ThemedView.tsx (1): `ThemedView` (17-26)
- frontend/constants/Colors.ts (1): `Colors` (4-3888)
- frontend/components/ThemedText.tsx (1): `ThemedText` (27-116)
backend/tests/unit/test_download_service.py (1)
- backend/src/services/download_service.py (5): `get_download_key` (28-30), `check_mp3_exists` (32-39), `get_download_url` (41-56), `generate_mp3` (58-173), `generate_mp3_and_get_url` (175-192)
tests/frontend/unit/MeditationLoading-test.tsx (2)
- tests/frontend/integration/test-utils.tsx (1): `screen` (331-331)
- frontend/components/ScreenComponents/MeditationLoading.tsx (1): `getLoadingMessage` (95-95)
backend/src/services/hls_service.py (3)
- backend/src/utils/logging_utils.py (1): `get_logger` (85-110)
- backend/src/services/storage_service.py (1): `StorageService` (5-14)
- backend/src/services/s3_storage_service.py (1): `delete_object` (66-80)
🪛 LanguageTool
docs/plans/Phase-1.md
[uncategorized] ~12-~12: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...download endpoint concatenates segments on demand - Idempotent regeneration works (cached...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
docs/plans/Phase-2.md
[style] ~386-~386: Consider using “incomplete” to avoid wordiness.
Context: ...s_in: number }` - Handle errors (job not complete, generation failed) 6. Maintain backwa...
(NOT_ABLE_PREMIUM)
🪛 markdownlint-cli2 (0.18.1)
docs/plans/Phase-1.md
92-92: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
199-199: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
293-293: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
385-385: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
483-483: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
542-542: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
598-598: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
679-679: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/plans/Phase-0.md
155-155: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
164-164: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
359-359: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/plans/Phase-2.md
78-78: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
193-193: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
305-305: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
421-421: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
509-509: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
612-612: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
694-694: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
787-787: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
862-862: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
945-945: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (85)
frontend/package.json (2)
32-32: No action needed on the react-native-webview dependency. The `react-native-webview` package is essential for the mobile HLS player implementation in `HLSPlayer.tsx`, where it embeds HLS.js within a WebView for audio streaming on iOS and Android. The web platform uses the npm `hls.js` package directly via `useHLSPlayer.web.ts`. Both dependencies are correctly specified and actively used. `react-native-webview` ^13.10.5 is fully compatible with Expo ~52.0.40 and does not cause conflicts or duplication with Expo's WebView support. Likely an incorrect or invalid review comment.
32-32: The caret pinning for hls.js is appropriate and aligns with the project's versioning strategy. v1.6.15 is the latest stable release of hls.js as of December 2025, with no documented breaking changes in any subsequent releases (v1.7.0 exists only as a pre-release alpha). The caret prefix (^1.6.15) is consistent with how other dependencies in the project are pinned (e.g., expo-av, expo-file-system, react-native-webview), so no adjustment is needed.
frontend/components/ScreenComponents/MeditationLoading.tsx (1)
1-18: LGTM! Clean type definitions. The LoadingState union type and MeditationLoadingProps interface are well-designed. The explicit `null` allowance for `segmentsTotal` properly supports both determinate and indeterminate progress states.
tests/frontend/unit/MeditationControls-test.tsx (3)
25-38: LGTM! Well-structured HLSPlayer mock. The mock correctly implements the imperative ref API with `play`, `pause`, and `seek` methods as jest functions. Returning `null` from the component is appropriate for unit testing the control logic without rendering concerns.
48-248: LGTM! Comprehensive test coverage for both modes. The test suite effectively covers:
- Legacy base64 mode with existing playback scenarios
- New HLS streaming mode with playlist URL handling
- Mode preference (streaming takes precedence)
- Boundary cases (null playlist URL, isStreaming false)
The comment at line 244 appropriately acknowledges the mock limitation for testing callbacks. Consider adding integration tests to verify the actual HLS player callback behavior end-to-end.
250-265: LGTM! Loading state priority test is valuable. This test confirms that the `isCalling` loading state correctly takes precedence over streaming props, ensuring users see appropriate feedback during API calls regardless of the playback mode.
backend/template.yaml (3)
55-66: LGTM! HLS streaming flag properly configured. The `ENABLE_HLS_STREAMING: 'true'` environment variable is correctly placed in the Globals section, ensuring all Lambda functions have access to the feature flag. This aligns with the HLS streaming implementation described in the PR.
80-84: LGTM! CORS updated for GET requests. Adding the GET method to `AllowMethods` appropriately supports the new job status and download endpoints while maintaining existing POST and OPTIONS support.
151-162: LGTM! Download endpoints follow consistent patterns. The new `/job/{job_id}/download` endpoints (POST for triggering, OPTIONS for CORS) follow the established API design pattern and integrate cleanly with the existing job status endpoints.
tests/frontend/unit/MeditationLoading-test.tsx (1)
1-109: LGTM! Excellent and comprehensive test coverage. The test suite thoroughly covers:
- All four loading states with correct messages
- Progress display variants (determinate, indeterminate, hidden)
- Edge cases (undefined segments, null total)
- Direct testing of the `getLoadingMessage` helper
The test structure is clear, uses appropriate testIDs, and validates both positive and negative cases.
frontend/components/HLSPlayer/index.ts (1)
1-7: LGTM! Clean barrel export with helpful documentation. The index file provides a clean public API surface for the HLSPlayer component and its types. The comment explaining Metro's platform-specific resolution (`.web.tsx` for web builds) is valuable context for developers.
frontend/components/DownloadButton/index.ts (1)
1-6: LGTM! Consistent barrel export pattern. The DownloadButton index follows the same clean barrel export pattern as HLSPlayer, providing a consistent public API surface across components.
tests/frontend/unit/StreamingError-test.tsx (2)
10-62: LGTM! Solid rendering and interaction tests. The test suite properly validates:
- Error message display
- Conditional retry button rendering based on the `canRetry` prop
- Default retry button behavior (shown by default)
- Retry callback invocation
The setup includes proper mock cleanup in `beforeEach`.
64-114: LGTM! Comprehensive error message mapping tests. The `getErrorMessage` test suite thoroughly validates error categorization and user-facing message selection across:
- Network errors (Network, Fetch, Connection keywords)
- Playback errors (HLS, manifest, playlist keywords)
- Generation errors (Generation, timeout keywords)
- Download errors (Download keyword)
- Unknown/default errors
This ensures users receive appropriate, actionable feedback for different failure scenarios.
docs/plans/README.md (1)
1-43: LGTM! Excellent implementation plan documentation. This README provides:
- Clear business value (2 minutes → 4 seconds perceived wait time)
- Comprehensive architectural overview (HLS segments, progressive playlist, fault tolerance)
- Detailed prerequisites (tools, environment, dependencies)
- Structured phase breakdown with token estimates
- Navigation to detailed phase documents
The documentation will help developers understand the scope and approach of the HLS streaming feature.
frontend/hooks/index.ts (1)
5-8: Hook barrel looks correct. Exports are minimal and consistent with the hook/modules layout; this is a clean central import surface for HLS and theme hooks.
tests/frontend/integration/hls-streaming-test.tsx (6)
10-69: LGTM! Mock setup is comprehensive and correct. The mock implementations properly simulate:
- expo-av Audio.Sound API
- expo-file-system file operations
- HLSPlayer component with ref forwarding and imperative handle
- Auto-play behavior triggered via useEffect
The mock structure effectively supports the integration test scenarios.
82-144: LGTM! Complete streaming flow tests are well-structured. The test suite effectively validates:
- HLS player rendering in streaming mode
- Fallback to legacy mode when not streaming
- Generate button display when no content available
- Loading indicator during generation
Good use of testID for reliable element queries.
146-189: LGTM! Download flow tests cover key scenarios. The test suite validates:
- Download button rendering when available
- Proper hiding when download unavailable
- Callback invocation on button press
Test assertions correctly verify the expected behavior.
191-216: LGTM! Error recovery tests validate retry behavior. The test suite correctly verifies:
- StreamingError component rendering
- Retry button presence and functionality
- Callback invocation on retry
218-265: LGTM! ErrorBoundary tests properly handle error scenarios. The test suite correctly:
- Creates a component that throws to test error catching
- Suppresses console.error to avoid test output noise
- Validates fallback UI rendering and retry functionality
- Verifies onError callback invocation
267-366: LGTM! Remaining test suites provide comprehensive coverage. The test suites effectively validate:
- Streaming to download transition with onStreamComplete callback
- Base64 fallback mode behavior and mode prioritization
- Platform-specific control rendering and interaction
The tests cover important edge cases and mode transitions.
backend/tests/unit/test_services.py (5)
794-830: LGTM! Job creation tests validate streaming field initialization. The tests correctly verify:
- Job creation without streaming doesn't include streaming/download fields
- Job creation with streaming includes all required HLS metadata fields
- Proper initialization values for streaming fields
832-840: LGTM! JobStatus enum test confirms the STREAMING state exists. The test validates all job status values including the new STREAMING state, ensuring the enum is properly defined for the HLS workflow.
842-900: LGTM! Streaming progress update tests are comprehensive. The tests correctly verify:
- Progress updates persist correctly
- started_at timestamp is set on first segment completion
- All streaming metadata fields are updated appropriately
901-1065: LGTM! Remaining JobService HLS tests provide complete coverage. The test suite validates:
- Streaming lifecycle methods (mark_streaming_started, mark_streaming_complete)
- Download readiness tracking (mark_download_ready, mark_download_completed)
- TTS cache key management (set_tts_cache_key)
- Generation attempt tracking (increment_generation_attempt)
- Backward compatibility with jobs lacking streaming fields
All tests use appropriate mocks and verify expected data persistence.
1067-1099: LGTM! FFmpeg HLS service tests validate service injection. The tests correctly verify:
- HLS service is optional in constructor (backward compatibility)
- HLS service can be injected when needed
- combine_voice_and_music_hls raises ValueError when HLS service is missing
- HLS_SEGMENT_DURATION constant is configured correctly (5 seconds)
frontend/components/HLSPlayer/HLSPlayer.web.tsx (4)
9-24: LGTM! Interface definitions are clear and complete. The HLSPlayerProps and HLSPlayerRef interfaces provide:
- Comprehensive callback hooks for playback events
- Proper typing for playlist URL and control methods
- Optional autoPlay flag for flexible initialization
30-51: LGTM! Component structure and ref exposure are correct. The component properly:
- Uses forwardRef to accept a ref parameter
- Exposes play, pause, and seek methods via useImperativeHandle
- Correctly depends on controls from the hook
53-98: LGTM! State tracking and callback logic are well-implemented. The component correctly:
- Tracks previous state values using refs to avoid duplicate callbacks
- Fires callbacks only on actual state transitions
- Handles auto-play when playlist loads
- Provides time updates continuously via onTimeUpdate
- Properly handles error, loading, playing, and complete states
The use of refs for previous state tracking is appropriate to avoid stale closures in useEffect dependencies.
99-105: LGTM! Component correctly returns null for an invisible player. The component is designed to be invisible (audio-only), with playback handled entirely by the hook. This is appropriate for the HLS streaming use case.
tests/frontend/unit/BackendMeditationCall-test.tsx (5)
17-27: LGTM! Import and test setup updated correctly. The file now imports BackendMeditationCallStreaming alongside the existing BackendMeditationCall, and adds a mockPlaylistUrl constant for HLS testing.
259-311: LGTM! Streaming start test validates playlist URL return. The test correctly:
- Mocks job submission and streaming status polling
- Verifies jobId, playlistUrl, and isStreaming fields
- Confirms waitForCompletion and getDownloadUrl functions are exposed
313-346: LGTM! Base64 fallback test validates the non-streaming path. The test correctly verifies that when a job completes without streaming, the function:
- Detects isStreaming as false
- Saves base64 data to file system
- Returns responseMeditationURI with file path
348-448: LGTM! Status polling and completion tests are comprehensive. The tests correctly verify:
- onStatusUpdate callback invocation during polling
- waitForCompletion progression to completed state
- Final result includes download availability and segment completion data
450-562: LGTM! Download and error handling tests are thorough. The test suite validates:
- getDownloadUrl calls the correct download endpoint
- Download URL is returned and verified
- Failed job status throws appropriate errors
- Structured error messages are handled correctly
All mock sequences and assertions are correct.
frontend/components/DownloadButton/DownloadButton.tsx (4)
14-42: LGTM! Interface and state management are well-defined. The component properly defines:
- DownloadButtonProps with all necessary callbacks and configuration
- DownloadState type covering all possible states
- State tracking for download progress
43-107: LGTM! Download handler logic is comprehensive and platform-aware. The handleDownload function correctly:
- Prevents concurrent downloads
- Fetches presigned download URL
- Handles web downloads via window.open
- Implements native downloads with progress tracking via expo-file-system
- Provides error handling with retry option
- Shows appropriate user alerts
The logic properly handles both platforms and error scenarios.
109-127: LGTM! Button text logic reflects download state correctly. The getButtonText function provides appropriate text for each state:
- 'Preparing...' during URL fetching
- 'Downloading X%' with progress
- 'Downloaded' on completion
- 'Retry Download' on error
- 'Download' for idle state
129-162: LGTM! Button rendering and UI logic are correct. The component properly:
- Hides when download is not available
- Disables during fetching and downloading
- Shows activity indicator during active operations
- Applies correct styling based on pressed/disabled states
- Uses testID for testing
backend/tests/unit/test_hls_service.py (7)
7-28: LGTM! Test fixture and service initialization are correct. The mock_storage_service fixture properly mocks:
- S3 client for AWS operations
- list_objects and delete_object methods with default return values
The initialization test validates service setup.
29-66: LGTM! Key generation tests validate correct path construction. The tests correctly verify:
- HLS prefix follows the pattern `{user_id}/hls/{job_id}/`
- Segment keys use zero-padded 3-digit indices
- Playlist key uses the `playlist.m3u8` filename
- TTS cache key uses the `voice.mp3` filename
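The key layout those tests describe can be sketched as simple helpers. This is a hypothetical reconstruction for illustration; the actual HLSService implementation may differ in detail:

```python
# Illustrative sketch of the S3 key layout described by the tests;
# not the real HLSService code.

def hls_prefix(user_id: str, job_id: str) -> str:
    """All HLS artifacts for a job live under {user_id}/hls/{job_id}/."""
    return f"{user_id}/hls/{job_id}/"

def segment_key(user_id: str, job_id: str, index: int) -> str:
    """Segments use zero-padded 3-digit indices, e.g. segment_007.ts."""
    return f"{hls_prefix(user_id, job_id)}segment_{index:03d}.ts"

def playlist_key(user_id: str, job_id: str) -> str:
    return f"{hls_prefix(user_id, job_id)}playlist.m3u8"

def tts_cache_key(user_id: str, job_id: str) -> str:
    return f"{hls_prefix(user_id, job_id)}voice.mp3"
```

Keeping everything under one prefix is what makes the later `cleanup_hls_artifacts` a single prefix-scoped delete.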
67-117: LGTM! Presigned URL generation tests cover success and failure. The tests validate:
- generate_presigned_url returns correct URL on success
- Returns None when S3 operation fails
- generate_playlist_url and generate_segment_url delegate correctly
118-163: LGTM! Upload tests verify content type and error handling. The tests correctly validate:
- Segment upload with correct ContentType "video/MP2T"
- Segment upload from file
- Playlist upload with ContentType "application/vnd.apple.mpegurl"
- Error handling returns False on failure
164-232: LGTM! Playlist generation tests are comprehensive. The tests validate:
- Basic live playlist includes required HLS tags
- ENDLIST is excluded for live playlists (is_complete=False)
- ENDLIST is included when is_complete=True
- Custom segment durations are properly formatted
- finalize_playlist generates complete playlist with ENDLIST
233-284: LGTM! TTS cache tests validate lifecycle operations. The tests correctly verify:
- upload_tts_cache calls upload_file
- download_tts_cache calls download_file
- download returns False on error
- tts_cache_exists calls head_object and returns True/False appropriately
285-358: LGTM! Segment management tests validate cleanup and configuration. The tests correctly verify:
- list_segments filters for .ts files only
- cleanup_hls_artifacts deletes all objects under HLS prefix
- cleanup handles individual delete errors gracefully (best-effort)
- download_segment calls download_file
- URL_EXPIRY and SEGMENT_DURATION constants are configured correctly
backend/src/models/domain.py (3)
70-99: LGTM! The StreamingInfo dataclass is well-designed. The StreamingInfo class properly:
- Defines all necessary HLS streaming metadata fields
- Provides to_dict for serialization
- Implements from_dict class method with proper defaults
- Handles optional data gracefully
101-124: LGTM! The DownloadInfo dataclass follows a consistent pattern. The DownloadInfo class correctly:
- Defines download metadata fields (available, url, downloaded)
- Provides to_dict and from_dict methods matching StreamingInfo pattern
- Handles optional data with proper defaults
126-149: LGTM! ProcessingJob extensions support the HLS workflow. The ProcessingJob updates correctly:
- Add 'streaming' to status field documentation
- Add streaming and download Optional fields
- Add tts_cache_key for retry capability
- Add generation_attempt counter with default value 1
The extensions properly support the new HLS streaming and download functionality while maintaining backward compatibility.
tests/frontend/unit/useHLSPlayer-test.tsx (2)
1-62: Well-structured mock setup for HLS.js testing. The mock configuration correctly mirrors the HLS.js API including static properties (`Events`, `ErrorTypes`, `ErrorDetails`) and instance methods. The pattern of using `Object.assign(instance, mockHlsInstance)` in the constructor allows tests to access the mock instance for assertions.
335-365: Good coverage of URL change handling. The tests correctly verify that changing the URL triggers HLS destruction and reinitialization with the new source, and that setting the URL to null resets state appropriately.
backend/tests/integration/test_hls_integration.py (2)
249-277: Test correctly validates the streaming-enabled job creation flow. The test verifies that a meditation request with HLS enabled returns a job with proper streaming fields. Good coverage of the async job creation path.
308-348: Good validation of download request error states.These tests verify that the download endpoint properly validates job completion status and download availability before proceeding. The error codes (
JOB_NOT_COMPLETED,DOWNLOAD_NOT_AVAILABLE) match the expected contract.frontend/components/ScreenComponents/MeditationControls.tsx (2)
177-213: Streaming mode UI is well-structured. The component correctly wires HLSPlayer callbacks, provides play/pause controls with appropriate visual feedback, and shows completion state. Good use of `testID` attributes for testability.
215-253: Legacy mode preserved for backwards compatibility. The component correctly falls back to expo-av Audio playback when streaming is not active, maintaining compatibility with the existing base64 flow.
frontend/hooks/useHLSPlayer.web.ts (1)
162-173: Safari native HLS path looks correct. The implementation properly sets the audio source directly and uses a one-time `canplay` listener for autoplay. The generic audio error handler (lines 112-119) will catch playback issues.
1-6: Comprehensive foundation document for HLS implementation. This document provides clear architecture decisions with rationale, detailed technical specifications, and practical guidance for implementation. The ADR format with a consequences section is particularly valuable for future reference.
45-65: Implementation correctly follows ADR-003. The ADR decision is properly implemented. On web, `useHLSPlayer.web.ts` uses HLS.js directly with HTMLAudioElement (appropriate for web browsers). On mobile, the native stub in `useHLSPlayer.ts` intentionally returns no-op values because mobile playback delegates to the separate `HLSPlayer.tsx` component, which uses `react-native-webview` to embed HLS.js in a WebView, exactly as the ADR specifies. The `.web.ts` suffix indicates a web-specific implementation, not a discrepancy with the ADR. Likely an incorrect or invalid review comment.
frontend/components/HLSPlayer/HLSPlayer.tsx (4)
22-35: LGTM! The `HLSPlayerRef` interface matches the web implementation, and `WebViewMessage` provides proper typing for WebView communication.
55-67: LGTM! The `sendCommand` helper properly guards against sending commands before the WebView is ready, and the imperative handle cleanly exposes the player controls.
85-144: LGTM! Comprehensive message handling with proper error boundaries. The switch statement covers all expected message types, and the nullish coalescing provides safe defaults for optional fields.
146-166: Verify if `allowFileAccess` is necessary. Since the WebView loads inline HTML (`source={{ html: hlsPlayerHtml }}`), `allowFileAccess={true}` may not be required. This setting allows the WebView to access local files, which could be a security consideration on some platforms. If file access is only needed for the HLS.js CDN script, the current `originWhitelist={['*']}` should be sufficient. Consider removing `allowFileAccess` unless there's a specific requirement.
backend/src/services/download_service.py (3)
41-56: LGTM! Clean presigned URL generation with appropriate error logging. The 1-hour expiry aligns with the constant defined at the top.
109-115: Path escaping is adequate for typical temp paths. The single-quote escaping works for shell-style quoting. Since paths are generated by `tempfile.mkdtemp()`, they're unlikely to contain special characters. The current approach is acceptable.
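A more robust alternative to hand-rolled single-quote escaping is the standard library's `shlex.quote`, which handles arbitrary paths. This is a general illustration of the technique, not the service's actual code; the FFmpeg flags shown are the standard concat-demuxer invocation, and `ffmpeg_concat_command` is a hypothetical helper name:

```python
import shlex

def ffmpeg_concat_command(list_file: str, output_path: str) -> str:
    # shlex.quote wraps each path in single quotes and escapes embedded
    # quotes, so even paths with spaces or metacharacters stay intact
    # when the command is run through a shell.
    return (
        "ffmpeg -f concat -safe 0 -i "
        f"{shlex.quote(list_file)} -c copy {shlex.quote(output_path)}"
    )
```

Passing an argument list to `subprocess.run` without `shell=True` avoids the quoting question entirely and is generally preferable.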
170-173: LGTM! Proper cleanup in the `finally` block ensures temp files are removed even on errors. Using `ignore_errors=True` is appropriate here to avoid masking original exceptions.
frontend/components/HLSPlayer/hlsPlayerHtml.ts (2)
121-132: LGTM! The HLS.js configuration is appropriate for low-latency streaming with reasonable retry parameters. The retry delays (1000ms) and max retries (4-6) provide good resilience without excessive waiting.
180-226: LGTM! Command handling is comprehensive with proper error handling for the `play()` promise. The dual message listeners (document and window) correctly handle React Native WebView's platform-specific message delivery.
36-47: LGTM! The `JobStatusResponse` interface comprehensively covers all job states. The union type for `error` (`string | ApiError`) provides flexibility for both simple error messages and structured errors.
17-20: LGTM! Good use of `TYPE_CHECKING` to avoid circular imports while still providing type hints for `HLSService`. The logger is properly configured.
36-70: Approve streaming-aware job creation. The `enable_streaming` parameter and conditional initialization of streaming/download fields cleanly extends the existing job structure while maintaining backward compatibility.
247-257: LGTM on `increment_generation_attempt`. Returning 1 when the job is not found is a sensible default that allows retry logic to proceed.
frontend/app/(tabs)/explore.tsx (3)
72-77: Approve new streaming state variables. The streaming state structure (`playlistUrl`, `isStreaming`, `streamingResponse`, `downloadAvailable`) cleanly separates streaming concerns from legacy base64 mode.
82-98: Approve streaming status callback integration. The status update callback properly handles streaming progress updates and download availability during polling.
305-319: Approve MeditationControls and DownloadButton integration. The streaming-related props are correctly wired to both components, enabling HLS playback and download functionality.
backend/src/handlers/lambda_handler.py (3)
43-47: Approve feature flag and retry configuration. `ENABLE_HLS_STREAMING` allows gradual rollout, and `MAX_GENERATION_ATTEMPTS = 3` provides reasonable retry limits.
249-322: Approve HLS meditation processing flow. The `_process_meditation_hls` method correctly implements:
- TTS caching for retry scenarios
- Progress callbacks during segment generation
- Proper state transitions (PROCESSING → STREAMING → COMPLETED)
354-366: Approve fresh presigned URL generation in status checks. Regenerating the playlist URL ensures clients always receive valid presigned URLs, which is necessary since they expire.
frontend/components/BackendMeditationCall.tsx (3)
54-59: Approve polling configuration constants. The exponential backoff with jitter (50% increase, 20% jitter, 30s cap, 5-minute timeout) is well-tuned for reducing server load while maintaining responsiveness.
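The polling schedule described above (50% growth per attempt, ±20% jitter, 30s cap) can be expressed as a pure delay function. The names and exact formula here are illustrative, not the component's literal code:

```typescript
// Illustrative backoff: grow the delay 50% per attempt, cap at 30s,
// then apply up to ±20% random jitter to avoid synchronized polling.
const INITIAL_DELAY_MS = 1000;
const GROWTH_FACTOR = 1.5;
const MAX_DELAY_MS = 30_000;
const JITTER_RATIO = 0.2;

function pollDelayMs(attempt: number, rand: () => number = Math.random): number {
  const base = Math.min(
    INITIAL_DELAY_MS * Math.pow(GROWTH_FACTOR, attempt),
    MAX_DELAY_MS,
  );
  // rand() in [0,1) is mapped to a jitter factor in [-20%, +20%].
  const jitter = base * JITTER_RATIO * (rand() * 2 - 1);
  return Math.round(base + jitter);
}
```

Injecting `rand` keeps the function deterministic under test while using `Math.random` in production.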
340-354: Approve continuation function closures. `waitForCompletion` and `getDownloadUrl` correctly capture initial state and poll as needed. The closures maintain proper references to job context.
291-378: Approve `BackendMeditationCallStreaming` implementation. The two-phase flow (submit → poll for streaming start → return with continuations) enables immediate playback while background generation continues. Fallback to base64 mode ensures backward compatibility.
backend/src/services/hls_service.py (4)
10-12: Approve configuration constants. `SEGMENT_DURATION = 5` seconds is appropriate for HLS (the common range is 2-10s). `URL_EXPIRY = 3600` (1 hour) provides a reasonable access window.
133-181: Approve live playlist generation. The HLS playlist follows the spec correctly:
- `EXT-X-PLAYLIST-TYPE:EVENT` allows segment additions during streaming
- `TARGETDURATION` rounds up per spec requirements
- `ENDLIST` is added only when complete
Note: Presigned segment URLs are embedded directly. Since playlists are regenerated on each status poll (lambda_handler.py line 362), this should work, but long-running streams could hit URL expiry if clients cache the playlist.
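For reference, a minimal EVENT-style playlist builder covering the tags above might look like the following. This is an illustrative sketch under the reviewed behavior (EVENT type, rounded-up TARGETDURATION, conditional ENDLIST), not the service's exact implementation:

```python
import math

def build_live_playlist(segment_urls, segment_duration=5.0, is_complete=False):
    """Build an HLS EVENT playlist; ENDLIST is appended only when complete."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-PLAYLIST-TYPE:EVENT",  # segments may still be appended
        # Per RFC 8216, TARGETDURATION must be >= every segment duration,
        # so fractional durations are rounded up.
        f"#EXT-X-TARGETDURATION:{math.ceil(segment_duration)}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for url in segment_urls:
        lines.append(f"#EXTINF:{segment_duration:.3f},")
        lines.append(url)
    if is_complete:
        lines.append("#EXT-X-ENDLIST")  # signals no further segments
    return "\n".join(lines) + "\n"
```

Because the EVENT playlist only ever grows, a client that re-fetches it sees new segments appended while earlier entries stay stable.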
251-277: Approve HLS cleanup with partial failure tolerance. The method correctly continues cleanup even if individual deletions fail, logging warnings for troubleshooting. Returning `True` on partial success is reasonable for non-critical cleanup operations.
196-236: Approve TTS cache management methods.The cache upload/download/exists methods enable retry capability by persisting expensive TTS audio for failed jobs. Error handling appropriately differentiates between expected misses (debug log) and failures (error log).
```typescript
// HLS player state
const hlsPlayerRef = useRef<HLSPlayerRef>(null);
const [isHLSPlaying, setIsHLSPlaying] = useState(false);
const [hlsError, setHlsError] = useState<Error | null>(null);
const [streamEnded, setStreamEnded] = useState(false);

// HLS playback controls
const handleHLSPlay = useCallback(() => {
  if (isHLSPlaying) {
    hlsPlayerRef.current?.pause();
    setIsHLSPlaying(false);
  } else {
    hlsPlayerRef.current?.play();
    setIsHLSPlaying(true);
  }
}, [isHLSPlaying]);
```
HLS play/pause state may desync from the actual player state.
`handleHLSPlay` toggles `isHLSPlaying` optimistically before the player action completes. If `hlsPlayerRef.current?.play()` fails or is blocked (e.g., autoplay policy), the state will be incorrect. Consider relying on the `handlePlaybackStart` callback from the HLSPlayer instead.
```diff
 const handleHLSPlay = useCallback(() => {
   if (isHLSPlaying) {
     hlsPlayerRef.current?.pause();
-    setIsHLSPlaying(false);
+    // State will be updated by handlePlaybackComplete/onPause callback
   } else {
     hlsPlayerRef.current?.play();
-    setIsHLSPlaying(true);
+    // State will be updated by handlePlaybackStart callback
   }
 }, [isHLSPlaying]);
```
Alternatively, add error handling if the player methods return promises.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In frontend/components/ScreenComponents/MeditationControls.tsx around lines 96
to 111, the handleHLSPlay function optimistically toggles isHLSPlaying before
the HLS player's actual play/pause completes which can desync state if play() is
blocked or fails; update the handler to not set local state immediately but
instead call the player's play()/pause() and set isHLSPlaying only after the
player confirms the change (either by awaiting and handling the promise returned
by play() and catching errors to keep state consistent, or by removing the
optimistic toggle and relying on the HLSPlayer's
handlePlaybackStart/handlePlaybackPause callbacks to set isHLSPlaying and
setHlsError appropriately).
```typescript
hls.on(Events.ERROR, (_, data) => {
  if (data.fatal) {
    switch (data.type) {
      case ErrorTypes.NETWORK_ERROR:
        // Try to recover from network errors
        hls.startLoad();
        break;
      case ErrorTypes.MEDIA_ERROR:
        // Try to recover from media errors
        hls.recoverMediaError();
        break;
      default:
        // Cannot recover
        setState(prev => ({
          ...prev,
          isLoading: false,
          error: new Error(`HLS fatal error: ${data.details}`),
        }));
        hls.destroy();
        break;
    }
  } else {
    // Non-fatal errors - HLS.js will handle retry
    if (data.details === ErrorDetails.BUFFER_STALLED_ERROR) {
      setState(prev => ({ ...prev, isLoading: true }));
    }
  }
});
```
🧩 Analysis chain
🌐 Web query:
HLS.js fatal NETWORK_ERROR recovery retry handling best practices
💡 Result:
Quick best-practice checklist for handling fatal NETWORK_ERRORs in hls.js
- Know which errors are marked fatal and how to recover: manifest/level load errors (MANIFEST_LOAD_ERROR, LEVEL_LOAD_ERROR / TIMEOUT / PARSING) are treated as NETWORK_ERROR and flagged fatal by hls.js; a call to hls.startLoad() is the documented recovery action. [1][2]
- Use hls.js retry configuration (don't roll your own naive retry): configure RetryConfig (maxNumRetry, retryDelayMs, maxRetryDelayMs, backoff) and implement shouldRetry to decide retries per response (status, timeout, retryCount). Prefer exponential backoff and a reasonable maximum number of attempts. [2]
- Handle ERROR events defensively: listen for Hls.Events.ERROR, inspect data.type / data.details / data.response, and only trigger retries or startLoad when appropriate (e.g., manifest/level failures). For media decoding errors use hls.recoverMediaError(), not startLoad. Avoid indiscriminate recover calls that cause visual glitches. [1][4]
- Defer and suspend network activity when offline: check navigator.onLine and subscribe to online/offline events; pause retries while offline and resume with backoff when online. Recent hls.js fixes defer segment requests when the network is lost; use stopLoad/startLoad accordingly. [3]
- Remove or mark repeatedly failing renditions: if a level/URL consistently fails, call hls.removeLevel(...) or remove that URL from your manifest list to avoid repeated failures and poor UX. [1]
- Instrument and surface user feedback: track retry counts, HTTP status codes (0, 404, 5xx, 429), and timeouts; show a transient "Reconnecting…" UI and a permanent error message after max retries. Use shouldRetry to limit retries for non-transient errors (404, 410).
- Practical defaults (starter guidance): maxNumRetry 3-6, retryDelayMs 1000-3000 ms, maxRetryDelayMs 30-60 s, exponential backoff. For manifest/playlist loads prefer slightly higher retries than segment loads (the playlist is critical); use fewer retries for 4xx responses. [2][1]
References
- hls.js error detection & handling / recovery notes. [1]
- hls.js RetryConfig (maxNumRetry, retryDelayMs, backoff, shouldRetry). [2]
- hls.js changelog: defer segment requests when network connection is lost (network-recovery improvement). [3]
- hls.js Error types / ErrorDetails (NETWORK_ERROR, MANIFEST_LOAD_ERROR, LEVEL_LOAD_ERROR). [4]
If you want, I can paste a minimal code example showing: ERROR event handling + RetryConfig + shouldRetry + online/offline handling.
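A minimal sketch of that decision logic follows. It is plain TypeScript mirroring the checklist; the field names (maxNumRetry, retryDelayMs, maxRetryDelayMs) follow the RetryConfig keys named above, but the helper functions themselves are illustrative, not hls.js APIs.

```typescript
// Illustrative retry policy mirroring hls.js RetryConfig field names.
interface RetryPolicy {
  maxNumRetry: number;
  retryDelayMs: number;
  maxRetryDelayMs: number;
}

// Playlist loads get more attempts than segments (the playlist is critical).
const playlistPolicy: RetryPolicy = { maxNumRetry: 6, retryDelayMs: 1000, maxRetryDelayMs: 60000 };
const segmentPolicy: RetryPolicy = { maxNumRetry: 3, retryDelayMs: 1000, maxRetryDelayMs: 30000 };

// Statuses that won't heal on retry (per the checklist: 404, 410).
const NON_TRANSIENT = new Set([404, 410]);

function shouldRetryLoad(policy: RetryPolicy, retryCount: number, httpStatus: number): boolean {
  if (NON_TRANSIENT.has(httpStatus)) return false;
  return retryCount < policy.maxNumRetry;
}

// Capped exponential backoff, matching "backoff: exponential".
function retryDelayMs(policy: RetryPolicy, retryCount: number): number {
  return Math.min(policy.retryDelayMs * 2 ** retryCount, policy.maxRetryDelayMs);
}
```

Wired into hls.js, a helper like shouldRetryLoad would be consulted from the RetryConfig's shouldRetry hook and from the Hls.Events.ERROR handler before invoking hls.startLoad().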
🏁 Script executed (repository: HatmanStack/float):

# Check the file context to understand the full implementation
cd frontend/hooks && head -n 260 useHLSPlayer.web.ts | tail -n 40

🏁 Script executed:

# Search for HLS configuration in the file
rg "RetryConfig|maxNumRetry|retryDelay|shouldRetry" frontend/hooks/useHLSPlayer.web.ts

🏁 Script executed:

# Look at the full file to see HLS initialization
cd frontend/hooks && wc -l useHLSPlayer.web.ts

🏁 Script executed:

# View HLS initialization code
cd frontend/hooks && head -n 150 useHLSPlayer.web.ts | tail -n 80

🏁 Script executed:

# Search for where Hls instance is created
rg "new Hls|new HLS" frontend/hooks/useHLSPlayer.web.ts -A 10

🏁 Script executed:

# Get more context around both HLS initializations
cd frontend/hooks && sed -n '120,220p' useHLSPlayer.web.ts

🏁 Script executed:

# Look for the second Hls initialization in context
cd frontend/hooks && sed -n '280,330p' useHLSPlayer.web.ts

🏁 Script executed:

# View the complete ERROR handler and surrounding context to understand the flow
cd frontend/hooks && sed -n '220,270p' useHLSPlayer.web.ts
Check error details before recovering from network errors.
The code calls hls.startLoad() on all fatal NETWORK_ERROR events without inspecting data.details. While the configured retry limits (manifestLoadingMaxRetry: 4, levelLoadingMaxRetry: 4) prevent infinite loops, calling startLoad() indiscriminately bypasses HLS.js best practices. Manifest and level load failures warrant recovery via startLoad(), but calling it for all network error types can mask issues. Consider checking data.details to confirm the error is a manifest or level failure before attempting recovery. Additionally, implement a shouldRetry callback to handle non-transient HTTP errors (404, 410) appropriately, and ensure the HLS instance created in the retry() callback includes the same retry configuration.
🤖 Prompt for AI Agents
In frontend/hooks/useHLSPlayer.web.ts around lines 229 to 256, the handler
unconditionally calls hls.startLoad() for all fatal NETWORK_ERRORs; update it to
inspect data.details and only call startLoad() for manifest/level load failures
(e.g., ErrorDetails.MANIFEST_LOAD_ERROR, ErrorDetails.LEVEL_LOAD_ERROR),
otherwise fall through to the default error handling; add a shouldRetry callback
(or equivalent logic in the network/XHR loader config) to detect non-transient
HTTP statuses (404, 410) and prevent retries for those codes; and when
recreating the Hls instance in any retry() callback, ensure the new instance is
initialized with the same retry configuration (manifestLoadingMaxRetry,
levelLoadingMaxRetry and any XHR hooks) and reattach the same event
handlers/state so retry behavior and limits remain consistent.
// Small delay then re-set URL to trigger effect
setTimeout(() => {
  playlistUrlRef.current = url;

  if (supportsNativeHLS()) {
    audio.src = url;
    audio.load();
    audio.play().catch(console.warn);
    return;
  }

  if (Hls.isSupported()) {
    const hls = new Hls({
      liveSyncDuration: 3,
      liveMaxLatencyDuration: 10,
      liveDurationInfinity: true,
      lowLatencyMode: true,
    });
    hlsRef.current = hls;

    hls.on(Events.MANIFEST_PARSED, () => {
      setState(prev => ({ ...prev, isLoading: false }));
      audio.play().catch(console.warn);
    });

    hls.on(Events.ERROR, (_, data) => {
      if (data.fatal) {
        setState(prev => ({
          ...prev,
          isLoading: false,
          error: new Error(`HLS error: ${data.details}`),
        }));
      }
    });

    hls.loadSource(url);
    hls.attachMedia(audio);
  }
}, 100);
Retry logic misses event handlers present in main initialization.
The retry path registers only MANIFEST_PARSED and ERROR handlers, but the main initialization also handles LEVEL_LOADED for duration updates and stream completion detection. This inconsistency could cause missing state updates after retry.
Additionally, the setTimeout callback doesn't check if the component is still mounted, which could cause state updates on an unmounted component.
Consider:
- Reusing the same event handler setup logic
- Adding a mounted check or cleanup for the timeout
const retry = useCallback(() => {
+ let cancelled = false;
// Reset state
setState({ ... });
// ... cleanup code ...
setTimeout(() => {
+ if (cancelled) return;
// ... retry logic ...
}, 100);
+
+ return () => { cancelled = true; };
}, [supportsNativeHLS]);

Committable suggestion skipped: line range outside the PR's diff.
it('should show preparing state while fetching URL', async () => {
  // Make getDownloadUrl slow
  const slowGetDownloadUrl = jest.fn().mockImplementation(
    () => new Promise(resolve => setTimeout(() => resolve('https://example.com/download.mp3'), 100))
  );

  render(
    <DownloadButton
      downloadAvailable={true}
      onGetDownloadUrl={slowGetDownloadUrl}
    />
  );

  fireEvent.press(screen.getByTestId('download-button'));

  await waitFor(() => {
    expect(screen.getByText('Preparing...')).toBeTruthy();
  });
});

it('should show downloading state during download', async () => {
  // Make download slow
  const slowDownloadAsync = jest.fn().mockImplementation(
    () => new Promise(resolve => setTimeout(() => resolve({ uri: 'mock-directory/meditation.mp3' }), 100))
  );
  mockDownloadResumable.downloadAsync = slowDownloadAsync;

  render(
    <DownloadButton
      downloadAvailable={true}
      onGetDownloadUrl={mockOnGetDownloadUrl}
    />
  );

  fireEvent.press(screen.getByTestId('download-button'));

  await waitFor(() => {
    expect(screen.getByText(/Downloading/)).toBeTruthy();
  });
});
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
# First, find the test file
find . -name "DownloadButton-test.tsx" -type f

🏁 Script executed:

#!/bin/bash
# Read the test file to verify the implementation
cat -n ./tests/frontend/unit/DownloadButton-test.tsx

🏁 Script executed:

#!/bin/bash
# Find and read the DownloadButton component to understand state management
find . -name "DownloadButton.tsx" -o -name "DownloadButton.ts" -o -name "DownloadButton.jsx" | head -5

🏁 Script executed:

#!/bin/bash
# Read the DownloadButton component
cat -n ./frontend/components/DownloadButton/DownloadButton.tsx

🏁 Script executed:

#!/bin/bash
# Check if jest.useFakeTimers is used elsewhere in the test suite
rg "useFakeTimers|fake.*timer" tests/ --type tsx --type ts -i

🏁 Script executed:

#!/bin/bash
# Check if jest.useFakeTimers is used elsewhere in the test suite
rg "useFakeTimers|fake.*timer" tests/ -i

🏁 Script executed:

#!/bin/bash
# Look at one of the example usages to confirm the pattern
sed -n '1,50p' tests/frontend/unit/HLSPlayer-test.tsx | head -30
The "Preparing" and "Downloading" state tests rely on real timers; align with project patterns by using jest.useFakeTimers()
These tests use setTimeout-based delays to simulate slow async operations, which is fragile and inconsistent with other test files in the project (e.g., HLSPlayer-test.tsx, IncidentColoring-test.tsx both use jest.useFakeTimers()). Switch to fake timers for these state assertion tests to make them deterministic and follow the established testing pattern:
beforeEach(() => {
  jest.useFakeTimers();
  jest.clearAllMocks();
  setPlatform('ios');
});

afterEach(() => {
  jest.runOnlyPendingTimers();
  jest.useRealTimers();
});

Then remove the setTimeout calls from the mock implementations and let the component state changes be triggered synchronously during act() blocks or within waitFor().
🤖 Prompt for AI Agents
In tests/frontend/unit/DownloadButton-test.tsx around lines 122-161, the two
tests use setTimeout-based mocks which rely on real timers; switch to the
project's fake-timer pattern: add a beforeEach that calls jest.useFakeTimers(),
jest.clearAllMocks(), and setPlatform('ios') and an afterEach that calls
jest.runOnlyPendingTimers() and jest.useRealTimers(); replace the mock
implementations that wrap setTimeout with immediately-resolved Promises (e.g.,
Promise.resolve(...)) and, within the tests, drive the fake timers forward
inside act()/waitFor() using jest.advanceTimersByTime(...) so the component
transitions to "Preparing" and "Downloading" deterministically.
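An alternative that removes timers from these two tests entirely: replace the setTimeout-wrapped mocks with a manually resolved "deferred" promise, assert the intermediate state while it is pending, then resolve it explicitly. This is a sketch; createDeferred is a hypothetical helper, not an existing project API.

```typescript
// A "deferred": a promise whose resolve/reject are exposed to the caller,
// so a test can hold a mock in the pending state and settle it on demand.
interface Deferred<T> {
  promise: Promise<T>;
  resolve: (value: T) => void;
  reject: (reason?: unknown) => void;
}

function createDeferred<T>(): Deferred<T> {
  let resolve!: (value: T) => void;
  let reject!: (reason?: unknown) => void;
  const promise = new Promise<T>((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}
```

In the test, `const d = createDeferred<string>(); const slowGetDownloadUrl = jest.fn(() => d.promise);` keeps the mock pending while waitFor asserts "Preparing...", then `d.resolve('https://example.com/download.mp3')` lets the flow complete with no real or fake timers involved.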
Actionable comments posted: 3
♻️ Duplicate comments (3)
frontend/components/HLSPlayer/HLSPlayer.tsx (1)

14-23: Interface definition looks correct. The HLSPlayerProps interface is well-defined with appropriate optional callbacks. Note that a past review flagged an interface mismatch with HLSPlayer.web.tsx, where onPlaybackComplete was incorrectly typed as void instead of () => void. Ensure the web version has been updated to match this correct signature.

frontend/hooks/useHLSPlayer.web.ts (2)
232-259: Network error recovery could be more selective. The handler calls hls.startLoad() for all fatal NETWORK_ERROR events without inspecting data.details. While the configured retry limits prevent infinite loops, best practice is to check data.details (e.g., MANIFEST_LOAD_ERROR, LEVEL_LOAD_ERROR) before attempting recovery. Non-recoverable errors like 404s should be handled differently.

 case ErrorTypes.NETWORK_ERROR:
-  // Try to recover from network errors
-  hls.startLoad();
+  // Only recover from manifest/level load errors
+  if (data.details === ErrorDetails.MANIFEST_LOAD_ERROR ||
+      data.details === ErrorDetails.LEVEL_LOAD_ERROR ||
+      data.details === ErrorDetails.LEVEL_LOAD_TIMEOUT) {
+    hls.startLoad();
+  } else {
+    setState(prev => ({
+      ...prev,
+      isLoading: false,
+      error: new Error(`HLS network error: ${data.details}`),
+    }));
+    hls.destroy();
+  }
   break;
320-366: Retry timeout lacks unmount protection. The setTimeout at line 321 doesn't check whether the component is still mounted, which could cause state updates on an unmounted component. The retry's ERROR handler also doesn't attempt recovery like the main initialization.

 const retry = useCallback(() => {
+  let cancelled = false;
   // Reset state
   setState({...});
   // ... cleanup code ...
   setTimeout(() => {
+    if (cancelled) return;
     // ... retry logic ...
   }, 100);
+
+  // Note: this cleanup won't work as-is since retry returns void.
+  // Consider using a ref to track cancellation state instead.
 }, [supportsNativeHLS]);

Consider using a cancelledRef pattern or moving the async initialization into a separate effect that can be properly cleaned up.
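One way to express that cancelledRef idea is a small helper (a sketch with hypothetical names, not an existing project API): the token lives in a ref, the timeout callback is wrapped, and cleanup calls cancel().

```typescript
// Cancellation token: callbacks wrapped with guard() become no-ops once
// cancel() has been called, so late timers cannot touch unmounted state.
class CancellationToken {
  private cancelled = false;

  cancel(): void {
    this.cancelled = true;
  }

  get isCancelled(): boolean {
    return this.cancelled;
  }

  // Wrap a callback so it silently does nothing after cancel().
  guard<A extends unknown[]>(fn: (...args: A) => void): (...args: A) => void {
    return (...args: A) => {
      if (!this.cancelled) fn(...args);
    };
  }
}
```

In retry(), store a `new CancellationToken()` in a ref, pass `token.guard(() => { /* retry logic */ })` to setTimeout, and call `token.cancel()` from the unmount cleanup (and at the start of the next retry()).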
🧹 Nitpick comments (14)
docs/HLS_STREAMING.md (1)
39-49: Add language specifier to fenced code block. The fenced code block lacks a language specifier, which can cause linting warnings. Since this shows a directory structure, consider using text or plaintext.

-```
+```text
 s3://float-cust-data/

frontend/components/HLSPlayer/hlsPlayerHtml.ts (2)
150-168: Consider adding retry limits for fatal error recovery. The NETWORK_ERROR and MEDIA_ERROR recovery handlers (hls.startLoad() and hls.recoverMediaError()) could potentially cause infinite retry loops if the error persists. HLS.js may re-emit fatal errors after failed recovery attempts.

Consider tracking recovery attempts and falling back to a fatal error message after a threshold:

+ let networkRecoveryAttempts = 0;
+ let mediaRecoveryAttempts = 0;
+ const MAX_RECOVERY_ATTEMPTS = 3;
+
  hls.on(Hls.Events.ERROR, (event, data) => {
    console.error('HLS Error:', data);
    if (data.fatal) {
      switch (data.type) {
        case Hls.ErrorTypes.NETWORK_ERROR:
-         hls.startLoad();
+         if (networkRecoveryAttempts < MAX_RECOVERY_ATTEMPTS) {
+           networkRecoveryAttempts++;
+           hls.startLoad();
+         } else {
+           sendMessage('error', {
+             message: 'Network error: max retries exceeded',
+             fatal: true
+           });
+         }
          break;
        case Hls.ErrorTypes.MEDIA_ERROR:
-         hls.recoverMediaError();
+         if (mediaRecoveryAttempts < MAX_RECOVERY_ATTEMPTS) {
+           mediaRecoveryAttempts++;
+           hls.recoverMediaError();
+         } else {
+           sendMessage('error', {
+             message: 'Media error: max retries exceeded',
+             fatal: true
+           });
+         }
          break;
98-111: Early return on same URL may prevent intentional reloads. The check if (currentUrl === url) return; prevents reloading the same source. If the user wants to retry the same URL after an error, this check blocks it. Consider clearing currentUrl on error or adding a force parameter.

-function loadSource(url) {
-  if (currentUrl === url) return;
+function loadSource(url, force = false) {
+  if (currentUrl === url && !force) return;
   currentUrl = url;

Then in error handling, you could call loadSource(currentUrl, true) for retries.

backend/src/services/job_service.py (2)
172-203: Silent failures should log warnings for debugging. mark_streaming_complete returns silently when the job is not found (lines 180-181), unlike mark_streaming_started, which logs a warning. For consistency and debuggability, add warning logs.

 def mark_streaming_complete(
     self,
     user_id: str,
     job_id: str,
     segments_total: int,
 ):
     """Mark streaming as complete, set status to COMPLETED, enable download."""
     job_data = self.get_job(user_id, job_id)
     if not job_data:
+        logger.warning(
+            "Cannot mark streaming complete: job not found",
+            extra={"data": {"job_id": job_id, "user_id": user_id}}
+        )
         return

The same pattern should apply to mark_download_ready (lines 213-214), mark_download_completed (lines 232-233), and set_tts_cache_key (lines 244-245).
251-261: Return value of 1 when job not found may mask issues. increment_generation_attempt returns 1 when the job doesn't exist, which could mask underlying problems and make debugging harder. Consider returning None or raising an exception.

 def increment_generation_attempt(self, user_id: str, job_id: str) -> int:
     """Increment and return the generation attempt count."""
     job_data = self.get_job(user_id, job_id)
     if not job_data:
+        logger.warning(
+            "Cannot increment generation attempt: job not found",
+            extra={"data": {"job_id": job_id, "user_id": user_id}}
+        )
         return 1

At minimum, add a warning log so this case is observable.
backend/src/services/hls_service.py (1)
41-56: Direct s3_client access bypasses the StorageService abstraction. Multiple methods access self.storage_service.s3_client directly (lines 45, 72, 96, 118, 200, 219, 233, 283) rather than going through the StorageService interface. This creates tight coupling and makes it harder to test or swap storage implementations.

Consider extending StorageService with the needed methods (e.g., generate_presigned_url, put_object, upload_file, download_file, head_object) or creating a specialized HLSStorageService.

frontend/components/ScreenComponents/MeditationControls.tsx (1)
141-151: Extract inline styles for consistency. Multiple inline styles ({ padding: 50 }, { padding: 20 }, etc.) are used throughout the component. Consider extracting these to the styles object for consistency and maintainability.

+// In your styles definition:
+loadingContainer: {
+  padding: 50,
+},
+streamingContainer: {
+  padding: 20,
+},

 // In component:
-<ThemedView style={{ padding: 50 }}>
+<ThemedView style={styles.loadingContainer}>

frontend/components/HLSPlayer/HLSPlayer.tsx (3)
66-70: Commands called before the WebView is ready are silently dropped. The imperative play(), pause(), and seek() methods will silently do nothing if called before the WebView signals ready. Consider either queuing these commands or documenting this behavior so callers know to wait for readiness.
72-85: Missing cleanup for setTimeout calls. The setTimeout calls at lines 78 and 99 are not cleaned up if the component unmounts or playlistUrl changes before the timeout fires. This could cause state updates on unmounted components.

+ const autoPlayTimeoutRef = useRef<NodeJS.Timeout | null>(null);
+
  // Load playlist when URL changes
  useEffect(() => {
+   // Clear any pending autoplay timeout
+   if (autoPlayTimeoutRef.current) {
+     clearTimeout(autoPlayTimeoutRef.current);
+     autoPlayTimeoutRef.current = null;
+   }
+
    if (playlistUrl) {
      if (isReadyRef.current) {
        sendCommand('load', { url: playlistUrl });
        if (autoPlay) {
-         setTimeout(() => sendCommand('play'), AUTOPLAY_DELAY_MS);
+         autoPlayTimeoutRef.current = setTimeout(() => sendCommand('play'), AUTOPLAY_DELAY_MS);
        }
      } else {
        // Store URL to load when WebView is ready
        pendingUrlRef.current = playlistUrl;
      }
    }
+
+   return () => {
+     if (autoPlayTimeoutRef.current) {
+       clearTimeout(autoPlayTimeoutRef.current);
+     }
+   };
  }, [playlistUrl, sendCommand, autoPlay]);
143-145: Consider propagating JSON parse errors to the parent. Currently, JSON parse errors are only logged to the console. If onError is provided, consider calling it to notify the parent of communication failures with the WebView.

 } catch (e) {
   console.error('Error parsing WebView message:', e);
+  onError?.(new Error('Failed to parse WebView message'));
 }

frontend/hooks/useHLSPlayer.web.ts (1)
179-190: Native HLS path looks good, but the listener could fire after unmount. The { once: true } option is good practice, but the listener could still fire after the component unmounts. Consider storing a reference and removing it in the cleanup function if this causes issues in practice.

backend/src/services/ffmpeg_audio_service.py (1)
115-178: Original combine_voice_and_music lacks subprocess timeouts. The legacy combine_voice_and_music method still has no timeout on its subprocess.run calls (lines 116, 128, 142, 154, 166), while the new _prepare_mixed_audio properly includes timeout=FFMPEG_STEP_TIMEOUT. Consider adding timeouts to maintain consistency and prevent hangs.

 subprocess.run(
     [
         self.ffmpeg_executable,
         "-i", music_path,
         "-filter:a", f"volume={DEFAULT_MUSIC_VOLUME_REDUCTION}dB",
         music_volume_reduced_path,
     ],
     check=True,
+    capture_output=True,
+    timeout=FFMPEG_STEP_TIMEOUT,
 )

Apply similar changes to the other subprocess.run calls in this method.

backend/src/handlers/lambda_handler.py (2)
335-344: Retry mechanism is now functional via self-invocation. The retry logic addresses the previous review concern by explicitly calling _invoke_async_meditation instead of re-raising the exception. This properly triggers a new async Lambda invocation for retry.

Consider adding exponential backoff between retries to avoid overwhelming resources during transient failures.
581-588: Path parsing for the download endpoint is functional but fragile. The logic to extract job_id by finding "download" and taking the previous path part works, but is sensitive to path-structure changes. Consider extracting a shared path-parsing utility for consistency.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- backend/src/handlers/lambda_handler.py (8 hunks)
- backend/src/services/download_service.py (1 hunk)
- backend/src/services/ffmpeg_audio_service.py (3 hunks)
- backend/src/services/hls_service.py (1 hunk)
- backend/src/services/job_service.py (7 hunks)
- backend/template.yaml (4 hunks)
- docs/HLS_STREAMING.md (2 hunks)
- frontend/components/BackendMeditationCall.tsx (6 hunks)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1 hunk)
- frontend/components/HLSPlayer/hlsPlayerHtml.ts (1 hunk)
- frontend/components/ScreenComponents/MeditationControls.tsx (6 hunks)
- frontend/components/StreamingError.tsx (1 hunk)
- frontend/hooks/useHLSPlayer.web.ts (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (3)
- backend/template.yaml
- frontend/components/StreamingError.tsx
- frontend/components/BackendMeditationCall.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
frontend/components/ScreenComponents/MeditationControls.tsx (2)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1): HLSPlayerRef (25-29)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (1): HLSPlayerRef (20-24)

frontend/hooks/useHLSPlayer.web.ts (2)
- frontend/hooks/useHLSPlayer.ts (3): HLSPlayerState (7-14), HLSPlayerControls (16-21), useHLSPlayer (45-47)
- frontend/hooks/index.ts (3): HLSPlayerState (8), HLSPlayerControls (8), useHLSPlayer (7)

backend/src/services/hls_service.py (3)
- backend/src/utils/logging_utils.py (1): get_logger (85-110)
- backend/src/services/storage_service.py (1): StorageService (5-14)
- backend/src/services/s3_storage_service.py (1): delete_object (66-80)
🪛 GitHub Actions: CI
backend/src/services/download_service.py
[error] 41-41: S3 head_object key path mismatch: expected user_id/downloads/{job_id}.mp3, but computed downloads/user123/job456.mp3 (Not found exception in tests).
backend/src/services/hls_service.py
[error] 1-1: HLS key prefix generation is incorrect. Tests expect 'user123/hls/job456/...' but code produces 'hls/user123/job456/...'.
🪛 LanguageTool
docs/HLS_STREAMING.md
[grammar] ~78-~78: Use a hyphen to join words.
Context: ...S3 4. Returns pre-signed download URL (1 hour expiry)
(QB_NEW_EN_HYPHEN)
🪛 markdownlint-cli2 (0.18.1)
docs/HLS_STREAMING.md
39-39: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (21)
frontend/components/HLSPlayer/hlsPlayerHtml.ts (1)
31-31: LGTM! SRI attribute properly added. The HLS.js CDN script now includes the integrity and crossorigin attributes for subresource integrity verification.

backend/src/services/job_service.py (1)
36-70: LGTM! Clean job creation with streaming support. The conditional initialization of streaming fields only for meditation jobs with enable_streaming=True is well-structured, avoiding unnecessary fields for non-streaming jobs.

backend/src/services/hls_service.py (2)
133-181: LGTM! Well-structured HLS playlist generation. The live playlist generation correctly implements HLS spec requirements, including proper tags (#EXTM3U, #EXT-X-VERSION, #EXT-X-TARGETDURATION, #EXT-X-MEDIA-SEQUENCE, #EXT-X-PLAYLIST-TYPE:EVENT), and handles both live and finalized states with #EXT-X-ENDLIST.
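For reference, an EVENT-style playlist of the shape described might look like this after finalization (segment names and durations here are illustrative, not taken from the implementation):

```text
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:EVENT
#EXTINF:10.0,
segment_000.ts
#EXTINF:10.0,
segment_001.ts
#EXT-X-ENDLIST
```

While the stream is still live, the same playlist is served without the trailing #EXT-X-ENDLIST and grows as segments are appended.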
251-277: Cleanup continues despite individual delete failures. The cleanup_hls_artifacts method returns True even if some artifacts failed to delete (lines 261-265 catch and log but continue). This is acceptable for best-effort cleanup, but the caller should be aware that True doesn't guarantee all artifacts were deleted. Consider returning the count of failed deletions or logging a summary.

frontend/components/ScreenComponents/MeditationControls.tsx (2)
105-113: Play/pause state handling is now asymmetric but reasonable. The current approach, where pause updates state immediately but play relies on the handlePlaybackStart callback, is an improvement over the previous fully optimistic approach. However, if the user rapidly taps play/pause, there could be brief state inconsistencies.

This is acceptable for now, but for a fully robust solution, consider making both operations callback-driven or handling the play() promise rejection to revert state.
179-215: LGTM! Clean HLS streaming mode implementation. The streaming mode branch properly integrates the HLSPlayer component with appropriate callbacks for playback events, error handling, and stream completion. The UI provides clear feedback, including the "Meditation complete" message when streaming ends.
frontend/components/HLSPlayer/HLSPlayer.tsx (1)
148-169: LGTM - WebView configuration is appropriate for off-screen audio playback. The off-screen rendering pattern (1x1 px, opacity 0, pointerEvents none) is a valid approach for audio-only HLS playback. The permissive originWhitelist is acceptable since the HTML source is local/embedded.

frontend/hooks/useHLSPlayer.web.ts (2)
27-41: Interface definitions match the mobile stub. The HLSPlayerState and HLSPlayerControls interfaces correctly match the types exported from frontend/hooks/useHLSPlayer.ts, ensuring a consistent API between the web and mobile implementations.

72-87: Audio element lifecycle is properly managed. The creation and cleanup of the audio element is well-structured, with SSR safety checks and proper cleanup on unmount.
backend/src/services/download_service.py (4)
37-47: Existence check handles errors correctly. The check_mp3_exists method properly distinguishes between "not found" (404) and unexpected errors, re-raising the latter for appropriate handling upstream.
144-156: FFmpeg timeout handling is properly implemented. The subprocess timeout and TimeoutExpired exception handling address the previous review feedback. The 5-minute timeout is reasonable for concatenation tasks.
191-208: Orchestration method is clean and correct. The generate_mp3_and_get_url method correctly combines generation and URL retrieval with proper null propagation on failure.
94-95: The segment key naming convention in the HLS service uses the segment_{segment_index:03d}.ts format (line 29 of hls_service.py), which ensures proper lexicographic ordering. The sorted(segment_keys) call is safe and appropriate for this naming pattern.

backend/src/services/ffmpeg_audio_service.py (4)
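(On the segment-key ordering point above: zero-padding is what makes a plain string sort safe, which a quick check demonstrates. The snippet mirrors the segment_{segment_index:03d}.ts pattern but is otherwise illustrative.)

```typescript
// Zero-padded keys sort numerically even under plain string comparison.
const segmentKey = (i: number): string => `segment_${String(i).padStart(3, '0')}.ts`;

const padded = [10, 2, 1].map(segmentKey).sort();
// → ['segment_001.ts', 'segment_002.ts', 'segment_010.ts']

// Without padding, string order diverges from numeric order:
const unpadded = [10, 2, 1].map(i => `segment_${i}.ts`).sort();
// → ['segment_1.ts', 'segment_10.ts', 'segment_2.ts']
```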
32-40: Constructor properly accepts optional HLS service. The constructor correctly accepts an optional HLSService dependency, allowing the service to operate in both HLS and legacy modes.
257-264: FFmpeg HLS timeout handling is properly implemented. The subprocess call includes a timeout and properly catches TimeoutExpired, converting it to a descriptive exception. This addresses the previous review feedback.
318-421: Helper method correctly returns the updated music list. The _prepare_mixed_audio method addresses the previous review feedback by returning a tuple containing both the mixed audio path and the updated music list. All subprocess calls properly include timeouts.
271-300: Segment upload loop handles failures appropriately. The loop properly raises an exception on upload failure, preventing partial/inconsistent HLS streams. The progress callback provides good visibility into generation status.
backend/src/handlers/lambda_handler.py (4)
52-65: Service initialization properly wires HLS dependencies. The constructor correctly initializes HLSService and DownloadService, and passes hls_service to FFmpegAudioService, enabling the new streaming functionality.
369-413: Download request handler is well-structured. The method properly validates job state, checks download availability, and returns appropriate error responses. The optional job_data parameter optimization avoids duplicate lookups when the caller already has the data.
415-427: Download completion handler properly triggers cleanup. The method correctly marks the download as completed and triggers HLS artifact cleanup, ensuring S3 resources are freed after the user has downloaded their file.
265-289: TTS caching for retry scenarios is well-implemented. The caching pattern correctly avoids regenerating expensive TTS audio on retries. The cache is populated on first generation and retrieved on subsequent attempts, optimizing retry performance.
def get_download_key(self, user_id: str, job_id: str) -> str:
    """Get S3 key for the downloadable MP3."""
    return f"downloads/{user_id}/{job_id}.mp3"
🧩 Analysis chain
🏁 Script executed:

# Search for references to get_download_key and S3 key patterns
rg -n "get_download_key|downloads.*job_id|user_id.*downloads" --type=py -C 3

🏁 Script executed:

# Look for test files related to download_service
fd -e py test | head -20

🏁 Script executed:

# Check for S3 key patterns in general
rg -n "downloads/|/downloads/" --type=py -C 2 | head -100

🏁 Script executed:

# Search for pipeline/CI configuration files
fd -e yml -e yaml -e json | grep -E "(pipeline|ci|workflow)" | head -10
Fix S3 key path format mismatch.
The implementation at line 35 returns downloads/{user_id}/{job_id}.mp3, but the tests expect {user_id}/downloads/{job_id}.mp3. This causes test failures at lines 47, 102, and 140 in backend/tests/unit/test_download_service.py.
 def get_download_key(self, user_id: str, job_id: str) -> str:
     """Get S3 key for the downloadable MP3."""
-    return f"downloads/{user_id}/{job_id}.mp3"
+    return f"{user_id}/downloads/{job_id}.mp3"

🤖 Prompt for AI Agents
In backend/src/services/download_service.py around lines 33 to 35, the S3 key
format is incorrect: it currently returns "downloads/{user_id}/{job_id}.mp3" but
tests expect "{user_id}/downloads/{job_id}.mp3". Update the get_download_key
implementation to build the path with user_id first followed by "downloads" and
the job_id filename (i.e., "{user_id}/downloads/{job_id}.mp3") so the returned
key matches test expectations.
| def get_hls_prefix(self, user_id: str, job_id: str) -> str: | ||
| """Get S3 key prefix for HLS artifacts.""" | ||
| return f"hls/{user_id}/{job_id}/" |
🧩 Analysis chain
🏁 Script executed:

# Search for test files that reference HLS prefix or key format
find . -type f -name "*test*.py" | head -20

🏁 Script executed:

# Search for HLS-related test files and their expectations
rg -l "hls.*prefix|get_hls_prefix" --type py

🏁 Script executed:

# Search documentation files
find . -type f -name "*HLS*" -o -name "*hls*" | grep -E "\.(md|txt)$"

🏁 Script executed:

# Check the actual implementation in hls_service.py to see context around line 24 and the s3_client accesses
rg -n "def get_hls_prefix|s3_client" backend/src/services/hls_service.py -A 3

🏁 Script executed:

cat -n backend/tests/unit/test_hls_service.py

🏁 Script executed:

cat -n docs/HLS_STREAMING.md
Critical: HLS key prefix format does not match test expectations.
The tests explicitly expect the format user123/hls/job456/... (line 36 of backend/tests/unit/test_hls_service.py), but the current implementation produces hls/user123/job456/.... This is causing test failures across multiple test cases.
 def get_hls_prefix(self, user_id: str, job_id: str) -> str:
     """Get S3 key prefix for HLS artifacts."""
-    return f"hls/{user_id}/{job_id}/"
+    return f"{user_id}/hls/{job_id}/"

Update the prefix format to match the test expectations, then update docs/HLS_STREAMING.md line 41 to reflect the correct format.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def get_hls_prefix(self, user_id: str, job_id: str) -> str:
    """Get S3 key prefix for HLS artifacts."""
    return f"{user_id}/hls/{job_id}/"
🤖 Prompt for AI Agents
In backend/src/services/hls_service.py around lines 22 to 24, the returned S3
HLS key prefix is currently "hls/{user_id}/{job_id}/" but tests expect
"{user_id}/hls/{job_id}/"; change the return to f"{user_id}/hls/{job_id}/". Also
update docs/HLS_STREAMING.md at line 41 to reflect the new prefix format
"{user_id}/hls/{job_id}/" so documentation matches implementation and tests.
2. `POST /job/{job_id}/download` triggers MP3 generation
3. Backend concatenates segments to single MP3, uploads to S3
4. Returns pre-signed download URL (1 hour expiry)
Minor: Use hyphenated compound adjective.
For grammatical correctness, "1 hour" should be hyphenated when used as a compound adjective modifying "expiry".
-4. Returns pre-signed download URL (1 hour expiry)
+4. Returns pre-signed download URL (1-hour expiry)
🧰 Tools
🪛 LanguageTool
[grammar] ~78-~78: Use a hyphen to join words.
Context: ...S3 4. Returns pre-signed download URL (1 hour expiry)
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
In docs/HLS_STREAMING.md around lines 76 to 78, the phrase "1 hour expiry" is
used; change it to the hyphenated compound adjective "1-hour expiry" to be
grammatically correct (and update any similar occurrences in the file to use the
hyphenated form).
Backend:
- Fix retry mechanism: self-invoke Lambda on failure instead of re-raise
- Restructure S3 keys: hls/{user_id}/{job_id} for lifecycle rule matching
- Fix S3 lifecycle prefixes: remove leading slashes
- Add FFmpeg subprocess timeouts (2min/step, 5min total)
- Fix check_mp3_exists: catch specific ClientError for 404
- Fix duplicate job lookup in download flow: pass job_data
- Add logging for missing job in mark_streaming_started
- Fix music list return: _prepare_mixed_audio returns updated list
Frontend:
- Add download URL validation in fetchDownloadUrl
- Fix HLS play/pause: don't set state optimistically for play
- Fix error category ordering: check download before generation
- Add SRI hash to HLS.js CDN script
- Extract HLS_CONFIG constant to avoid duplication
- Fix retry: include LEVEL_LOADED handler for duration/completion
- Extract AUTOPLAY_DELAY_MS constant
Tests:
- Update S3 key expectations for new structure (hls/, downloads/, jobs/)
- Use ClientError with 404 code instead of generic Exception
Docs:
- Delete outdated docs/plans directory
- Update HLS_STREAMING.md with current architecture
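The "Add FFmpeg subprocess timeouts" item above can be sketched as a per-step wrapper. This is a minimal illustration, not the project's actual code; the helper name and the 2-minute budget are assumptions taken from the changelog:

```python
import subprocess
import sys

STEP_TIMEOUT_SECONDS = 120  # assumed per-step budget ("2min/step" in the changelog)

def run_step(args, timeout=STEP_TIMEOUT_SECONDS):
    """Run one subprocess step; a hang or a non-zero exit is reported as failure."""
    try:
        subprocess.run(args, check=True, capture_output=True, timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child before raising, so nothing is left running
        return False
    except subprocess.CalledProcessError:
        return False
```

In a service like this, each FFmpeg invocation (mixing, trimming, HLS segmenting) would go through a wrapper of this shape so a wedged ffmpeg process cannot consume the whole Lambda execution budget.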
40b1941 to 5669337
Actionable comments posted: 2
♻️ Duplicate comments (6)
frontend/components/ScreenComponents/MeditationControls.tsx (1)
102-113: State management improved but still has minor asymmetry.

The comment explains the intent: play relies on the callback, pause updates immediately. However, this creates asymmetric behavior: if `handlePlaybackStart` isn't called (e.g., autoplay blocked), `isHLSPlaying` stays false, which is correct. But the `pause()` call sets state immediately without confirming the pause succeeded. This is acceptable since `pause()` is synchronous and reliable, as noted in the comment.

The past review concern about desync is partially addressed: play now relies on the callback, though pause still updates optimistically. This is a reasonable tradeoff.
frontend/hooks/useHLSPlayer.web.ts (2)
232-259: Network error recovery still lacks error detail inspection.

The handler calls `hls.startLoad()` for all fatal `NETWORK_ERROR` types without checking `data.details`. Per HLS.js best practices, manifest/level load failures warrant `startLoad()`, but other network errors may not be recoverable this way. Consider inspecting `data.details` before recovery.

This was flagged in a previous review and remains partially unaddressed.
320-366: Retry timeout lacks cleanup on unmount.

The `setTimeout` in `retry()` doesn't track whether the component is still mounted. If the component unmounts during the 100ms delay, state updates will be attempted on an unmounted component.

This was flagged in a previous review. Consider adding a cleanup mechanism:

 const retry = useCallback(() => {
+  const isMounted = { current: true };
   // Reset state
   setState({ ... });
   // ...
-  setTimeout(() => {
+  const timeoutId = setTimeout(() => {
+    if (!isMounted.current) return;
     // ... retry logic ...
   }, 100);
+
+  // Return cleanup function or track in ref for effect cleanup
 }, [supportsNativeHLS]);

Note: Since `retry` is a callback (not an effect), you may need to track the timeout ID in a ref and clear it in the main cleanup effect.

docs/HLS_STREAMING.md (1)
76-78: Minor: Use hyphenated compound adjective.

For grammatical correctness, "1 hour expiry" should be hyphenated when used as a compound adjective.
backend/src/services/download_service.py (1)
33-35: Align download S3 key layout with existing conventions/tests.
`get_download_key` currently returns `downloads/{user_id}/{job_id}.mp3`, while the rest of the codebase stores user data under a `{user_id}/...` prefix, and earlier tests/docs were written against `{user_id}/downloads/{job_id}.mp3`. This mismatch will either break those tests or create an inconsistent S3 layout for user artifacts.

Consider switching to a user-rooted path:

 def get_download_key(self, user_id: str, job_id: str) -> str:
     """Get S3 key for the downloadable MP3."""
-    return f"downloads/{user_id}/{job_id}.mp3"
+    return f"{user_id}/downloads/{job_id}.mp3"

backend/src/services/hls_service.py (1)
22-24: HLS S3 prefix format inconsistent with rest of the codebase/tests.
`get_hls_prefix` returns `hls/{user_id}/{job_id}/`, while other artifacts are stored under a `{user_id}/...` root, and earlier tests/docs referenced `{user_id}/hls/{job_id}/`. That inconsistency makes it harder to reason about per-user data layout and may still be causing test failures if expectations weren't updated.

If you want the user-rooted layout, adjust the prefix:

 def get_hls_prefix(self, user_id: str, job_id: str) -> str:
     """Get S3 key prefix for HLS artifacts."""
-    return f"hls/{user_id}/{job_id}/"
+    return f"{user_id}/hls/{job_id}/"
🧹 Nitpick comments (16)
frontend/components/BackendMeditationCall.tsx (3)
99-198: Consider consolidating polling functions to reduce duplication.
`pollJobStatusForStreaming`, `pollUntilComplete`, and `pollJobStatus` share ~90% identical code. A single generic polling function with a configurable completion predicate would improve maintainability.

Example refactor pattern:

+type ShouldComplete = (status: JobStatusResponse) => boolean;
+
+async function pollJobStatusGeneric(
+  jobId: string,
+  userId: string,
+  lambdaUrl: string,
+  shouldComplete: ShouldComplete,
+  onStatusUpdate?: (status: JobStatusResponse) => void
+): Promise<JobStatusResponse> {
+  // Shared polling logic...
+  if (shouldComplete(jobData)) {
+    return jobData;
+  }
+  // ...
+}
+
+// Usage:
+const pollForStreaming = (jobId, userId, url, cb) =>
+  pollJobStatusGeneric(jobId, userId, url,
+    (s) => s.status === 'streaming' || s.status === 'completed', cb);
116-123: Consider adding retry logic for transient network errors.

Network failures in `fetch()` currently abort the entire polling loop. For mobile environments with intermittent connectivity, consider catching transient errors and retrying a few times before propagating.
458-459: Potential memory leak: Blob URLs are not revoked.
`URL.createObjectURL()` creates a reference that persists in memory until explicitly revoked or the document unloads. Consider tracking these URLs and calling `URL.revokeObjectURL()` when the audio is no longer needed.

frontend/components/HLSPlayer/hlsPlayerHtml.ts (2)
150-168: Network error recovery may cause retry loops without backoff.

When a fatal `NETWORK_ERROR` occurs, `hls.startLoad()` is called immediately. If the network remains unavailable, this could trigger rapid repeated retries. The HLS.js config has retry limits, but `startLoad()` resets the retry count. Consider adding a manual backoff or tracking retry attempts to prevent excessive retries.

+ let networkRetryCount = 0;
+ const maxNetworkRetries = 3;
+
  hls.on(Hls.Events.ERROR, (event, data) => {
    console.error('HLS Error:', data);
    if (data.fatal) {
      switch (data.type) {
        case Hls.ErrorTypes.NETWORK_ERROR:
-         hls.startLoad();
+         if (networkRetryCount < maxNetworkRetries) {
+           networkRetryCount++;
+           setTimeout(() => hls.startLoad(), 1000 * networkRetryCount);
+         } else {
+           sendMessage('error', {
+             message: 'Network error: max retries exceeded',
+             fatal: true
+           });
+         }
          break;
98-111: Early return on duplicate URL prevents reloading after errors.

If `loadSource` is called with the same URL after an error, it returns early without attempting to reload. Consider resetting `currentUrl` on fatal errors to allow retry with the same URL.

 hls.on(Hls.Events.ERROR, (event, data) => {
   console.error('HLS Error:', data);
   if (data.fatal) {
     switch (data.type) {
       // ... existing cases ...
       default:
         sendMessage('error', {
           message: 'HLS fatal error: ' + data.details,
           fatal: true
         });
+        currentUrl = null; // Allow retry with same URL
         break;
     }
   }
 });

frontend/components/ScreenComponents/MeditationControls.tsx (1)
42-67: Potential memory leak: blob URL not revoked.

When a blob URL is created via `URL.createObjectURL(blob)`, it should be revoked when no longer needed to prevent memory leaks. The created `blobUrl` is never revoked.

 if (uri.startsWith('blob:')) {
   const response = await fetch(uri);
   const blob = await response.blob();
   const blobUrl = URL.createObjectURL(blob);
   uri = blobUrl;
+  // Schedule cleanup after playback
+  // Note: Consider tracking and revoking in cleanup
 }

Consider tracking blob URLs and revoking them in the playback status update callback when playback finishes.
frontend/hooks/useHLSPlayer.web.ts (1)
376-378: Consider exposing the audio element ref for advanced use cases.

The function always returns `null` for the audio element, but the interface declares `HTMLAudioElement | null`. While the comment explains internal management, consumers expecting access to the audio element for custom controls (e.g., visualizations) would be blocked. Consider either:
- Exposing the ref when needed
- Updating the return type documentation to clarify it's always null
This maintains API consistency with the interface in `useHLSPlayer.ts` (lines 44-46 in snippets).

backend/src/services/job_service.py (1)
139-170: Minor inconsistency: `_utcnow()` called twice.

`_utcnow()` is called at line 155 for `updated_at` and again at line 167 for `started_at`. These could be microseconds apart. Consider capturing it once for consistency.

 def mark_streaming_started(
     self,
     user_id: str,
     job_id: str,
     playlist_url: str,
 ):
     """Mark job as streaming and set initial playlist URL."""
     job_data = self.get_job(user_id, job_id)
     if not job_data:
         logger.warning(
             "Cannot mark streaming started: job not found",
             extra={"data": {"job_id": job_id, "user_id": user_id}}
         )
         return
+    now = _utcnow().isoformat()
     job_data["status"] = JobStatus.STREAMING.value
-    job_data["updated_at"] = _utcnow().isoformat()
+    job_data["updated_at"] = now
     if "streaming" not in job_data:
         job_data["streaming"] = {
             "enabled": True,
             "playlist_url": None,
             "segments_completed": 0,
             "segments_total": None,
             "started_at": None,
         }
     job_data["streaming"]["playlist_url"] = playlist_url
-    job_data["streaming"]["started_at"] = _utcnow().isoformat()
+    job_data["streaming"]["started_at"] = now

backend/tests/integration/test_hls_integration.py (1)
105-149: Test state isolation concern: mock returns the same dict reference.

At line 127, `mock_storage_service.download_json.return_value = job_data.copy()` creates a shallow copy, but subsequent calls to `download_json` in the service methods will return the same dict. The test passes because `upload_json` captures the final state, but this doesn't accurately simulate S3 persistence behavior, where each `get_job` would return the previously saved state.

This is acceptable for verifying the final state, but be aware it doesn't catch bugs where intermediate state matters.
docs/HLS_STREAMING.md (1)
39-49: Add language identifier to fenced code block.

The code block starting at line 39 is missing a language identifier. Since it shows a directory/path structure, consider using `text` or `plaintext`.

-```
+```text
 s3://float-cust-data/
   hls/{user_id}/{job_id}/

backend/src/services/ffmpeg_audio_service.py (1)
318-421: Consider adding error handling for intermediate file cleanup in `_prepare_mixed_audio`.

If any FFmpeg step fails with `CalledProcessError` or `TimeoutExpired`, intermediate temp files in `settings.TEMP_DIR` (lines 325-330) won't be cleaned up since there's no try/finally block. This could lead to disk space accumulation over time.

 def _prepare_mixed_audio(
     self, voice_path: str, music_list: List[str], timestamp: str
 ) -> tuple[str, List[str]]:
     """
     Prepare mixed audio file (voice + music) for further processing.
     Returns tuple of (path to mixed audio file, updated music list).
     """
     music_path = f"{settings.TEMP_DIR}/music_{timestamp}.mp3"
     music_volume_reduced_path = f"{settings.TEMP_DIR}/music_reduced_{timestamp}.mp3"
     music_length_reduced_path = f"{settings.TEMP_DIR}/music_length_reduced_{timestamp}.mp3"
     silence_path = f"{settings.TEMP_DIR}/silence_{timestamp}.mp3"
     voice_with_silence_path = f"{settings.TEMP_DIR}/voice_with_silence_{timestamp}.mp3"
     mixed_output_path = f"{settings.TEMP_DIR}/mixed_{timestamp}.mp3"

+    intermediate_paths = [
+        music_path, music_volume_reduced_path, music_length_reduced_path,
+        silence_path, voice_with_silence_path
+    ]
+
     # Clean up any existing files
     temp_paths = [
         music_path, music_volume_reduced_path, music_length_reduced_path,
         silence_path, voice_with_silence_path, mixed_output_path
     ]
     for path in temp_paths:
         if os.path.exists(path):
             os.remove(path)

-    voice_duration = self.get_audio_duration(voice_path)
-    # ... rest of method ...
+    try:
+        voice_duration = self.get_audio_duration(voice_path)
+        # ... rest of FFmpeg steps ...

-        # Cleanup intermediate files (keep mixed output)
-        for path in [music_path, music_volume_reduced_path, music_length_reduced_path,
-                     silence_path, voice_with_silence_path]:
-            if os.path.exists(path):
-                os.remove(path)
+        return mixed_output_path, updated_music_list

-    return mixed_output_path, updated_music_list
+    finally:
+        # Cleanup intermediate files (keep mixed output on success, clean all on failure)
+        for path in intermediate_paths:
+            if os.path.exists(path):
+                try:
+                    os.remove(path)
+                except OSError:
+                    pass
66-190: MP3 generation flow looks solid; consider reducing direct S3 coupling.

The end-to-end MP3 generation (segment listing, tempdir usage, FFmpeg invocation with timeout, and cleanup) is well structured. One small improvement would be to lean on the `StorageService` abstraction (`download_file`, possibly an `upload_file` helper) instead of reaching directly into `self.storage_service.s3_client`, to keep DownloadService less tied to the concrete S3 implementation and easier to fake in tests.

backend/src/services/hls_service.py (1)
251-277: Avoid relying on methods not declared on `StorageService`.

`cleanup_hls_artifacts` calls `self.storage_service.delete_object`, but the `StorageService` ABC only declares `upload_json`, `download_file`, and `list_objects`. That means HLSService is depending on S3StorageService's concrete API, which makes alternative implementations or fakes harder.

Either:
- Promote `delete_object` to the `StorageService` interface and implement it across backends, or
- Move the cleanup logic into the S3-specific service (or a small helper) so HLSService only uses the abstract methods.
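The first option can be sketched as follows. The method set is trimmed from the review's description of the ABC (`download_file` is omitted for brevity), and `InMemoryStorage` is a hypothetical fake, not a real backend:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class StorageService(ABC):
    """Abstract storage interface with delete_object promoted into the contract."""

    @abstractmethod
    def upload_json(self, key: str, data: Dict[str, Any]) -> None: ...

    @abstractmethod
    def list_objects(self, prefix: str) -> List[str]: ...

    @abstractmethod
    def delete_object(self, key: str) -> None: ...

class InMemoryStorage(StorageService):
    """Hypothetical fake showing the promoted method is cheap to implement."""

    def __init__(self) -> None:
        self._objects: Dict[str, Dict[str, Any]] = {}

    def upload_json(self, key: str, data: Dict[str, Any]) -> None:
        self._objects[key] = data

    def list_objects(self, prefix: str) -> List[str]:
        return [k for k in self._objects if k.startswith(prefix)]

    def delete_object(self, key: str) -> None:
        self._objects.pop(key, None)  # deleting a missing key is a no-op
```

With `delete_object` on the interface, `cleanup_hls_artifacts` can stay in HLSService while depending only on the abstraction, and tests can swap in a fake like the one above.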
backend/src/handlers/lambda_handler.py (3)
249-351: TTS cache and retry logic are solid but could be a bit more robust.

The overall HLS path (PROCESSING status, optional TTS cache reuse, streaming progress updates via a callback, and explicit capped retries via self-invocation) is well structured. Two small robustness gaps:

- You ignore the return value of `upload_tts_cache` but still call `set_tts_cache_key`. If the upload fails, the job will advertise a cache key that doesn't exist. Consider only persisting the cache key when the upload succeeds.
- In the retry branch, if `_invoke_async_meditation` itself raises (e.g., IAM or Lambda misconfig), the job never gets marked FAILED despite `generation_attempt` being incremented. Wrapping the self-invoke in a try/except and falling back to a FAILED status on invocation failure would keep jobs from getting stuck in PROCESSING.
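Both gaps can be closed with small guards. This is a sketch with hypothetical callables standing in for the real service methods, not the handler's actual code:

```python
def persist_tts_cache(upload_cache, set_cache_key, cache_key, audio):
    """Record the cache key in the job only when the upload actually succeeded."""
    if upload_cache(cache_key, audio):
        set_cache_key(cache_key)
        return True
    return False

def retry_via_self_invoke(invoke_async, mark_failed, payload):
    """Guard the self-invoke so a misconfigured Lambda still fails the job."""
    try:
        invoke_async(payload)
        return True
    except Exception as exc:  # e.g. IAM or Lambda misconfiguration
        mark_failed(f"retry invocation failed: {exc}")
        return False
```

The point of both helpers is the same: every side effect visible to clients (the advertised cache key, the job status) should reflect an operation that is known to have succeeded.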
355-367: Refreshing playlist URLs on every status read is correct but may be overkill.

Refreshing the playlist's pre-signed URL on each `handle_job_status` call guarantees a non-expired URL, which is good for clients. If you ever see high traffic on this endpoint, you might want to cache the last refresh timestamp in the job record and only regenerate when the previous URL is close to expiry, to reduce S3 presign calls.
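The suggested optimization can be sketched like this. The field names, TTL, and margin are assumptions for illustration, not the job record's real schema:

```python
import time

PRESIGN_TTL_SECONDS = 3600      # assumed pre-signed URL lifetime
REFRESH_MARGIN_SECONDS = 300    # regenerate when within 5 minutes of expiry

def playlist_url_for_status(job, presign, now=None):
    """Return a cached pre-signed URL, regenerating only when close to expiry."""
    now = time.time() if now is None else now
    expires_at = job.get("playlist_url_expires_at", 0.0)
    if now >= expires_at - REFRESH_MARGIN_SECONDS:
        # Cached URL is missing or about to expire: presign and record expiry.
        job["playlist_url"] = presign()
        job["playlist_url_expires_at"] = now + PRESIGN_TTL_SECONDS
    return job["playlist_url"]
```

Under steady polling this turns one presign call per status read into roughly one per TTL window, at the cost of persisting one extra timestamp per job.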
369-413: Download flow guards are sensible; be explicit about return shape.

The download handler correctly enforces that the job is COMPLETED and that `download.available` is true before attempting MP3 generation, and it surfaces clear error codes/messages for the main failure modes. However, the signature declares `-> Dict[str, Any]` but the function can return `None` when a job is missing (if `job_data` is not passed). Either update the annotation to `Optional[Dict[str, Any]]` or move the "job not found" handling entirely into the caller so this method always returns a dict.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- backend/src/handlers/lambda_handler.py (8 hunks)
- backend/src/services/download_service.py (1 hunks)
- backend/src/services/ffmpeg_audio_service.py (3 hunks)
- backend/src/services/hls_service.py (1 hunks)
- backend/src/services/job_service.py (7 hunks)
- backend/template.yaml (4 hunks)
- backend/tests/integration/test_hls_integration.py (1 hunks)
- backend/tests/unit/test_download_service.py (1 hunks)
- backend/tests/unit/test_hls_service.py (1 hunks)
- docs/HLS_STREAMING.md (2 hunks)
- frontend/components/BackendMeditationCall.tsx (6 hunks)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1 hunks)
- frontend/components/HLSPlayer/hlsPlayerHtml.ts (1 hunks)
- frontend/components/ScreenComponents/MeditationControls.tsx (6 hunks)
- frontend/components/StreamingError.tsx (1 hunks)
- frontend/hooks/useHLSPlayer.web.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- frontend/components/StreamingError.tsx
- backend/tests/unit/test_download_service.py
- frontend/components/HLSPlayer/HLSPlayer.tsx
- backend/template.yaml
🧰 Additional context used
🧬 Code graph analysis (8)
backend/src/handlers/lambda_handler.py (6)
- backend/src/services/download_service.py (2): DownloadService (24-208), generate_mp3_and_get_url (191-208)
- backend/src/services/hls_service.py (7): HLSService (15-290), get_tts_cache_key (36-39), tts_cache_exists (229-236), download_tts_cache (215-227), upload_tts_cache (196-213), generate_playlist_url (58-61), cleanup_hls_artifacts (251-277)
- backend/src/services/job_service.py (11): create_job (36-70), get_job (263-284), update_job_status (72-89), JobStatus (21-26), set_tts_cache_key (241-249), mark_streaming_started (139-170), update_streaming_progress (91-137), mark_streaming_complete (172-203), increment_generation_attempt (251-261), mark_download_ready (205-227), mark_download_completed (229-239)
- backend/src/services/ai_service.py (1): generate_meditation (13-13)
- backend/src/providers/openai_tts.py (1): synthesize_speech (14-29)
- backend/src/utils/audio_utils.py (1): cleanup_temp_file (26-45)
backend/tests/unit/test_hls_service.py (1)
- backend/src/services/hls_service.py (18): get_hls_prefix (22-24), get_segment_key (26-29), get_playlist_key (31-34), get_tts_cache_key (36-39), generate_presigned_url (41-56), generate_playlist_url (58-61), generate_segment_url (63-66), upload_segment (68-88), upload_segment_from_file (90-112), upload_playlist (114-131), generate_live_playlist (133-181), finalize_playlist (183-194), upload_tts_cache (196-213), download_tts_cache (215-227), tts_cache_exists (229-236), list_segments (238-249), cleanup_hls_artifacts (251-277), download_segment (279-290)
frontend/hooks/useHLSPlayer.web.ts (2)
- frontend/hooks/useHLSPlayer.ts (3): HLSPlayerState (7-14), HLSPlayerControls (16-21), useHLSPlayer (45-47)
- frontend/hooks/index.ts (3): HLSPlayerState (8-8), HLSPlayerControls (8-8), useHLSPlayer (7-7)
frontend/components/ScreenComponents/MeditationControls.tsx (2)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1): HLSPlayerRef (25-29)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (1): HLSPlayerRef (20-24)
backend/tests/integration/test_hls_integration.py (7)
- backend/src/services/hls_service.py (7): generate_presigned_url (41-56), HLSService (15-290), upload_segment (68-88), generate_live_playlist (133-181), tts_cache_exists (229-236), upload_tts_cache (196-213), cleanup_hls_artifacts (251-277)
- backend/src/services/s3_storage_service.py (1): download_json (31-46)
- backend/src/services/job_service.py (6): JobService (29-361), create_job (36-70), update_job_status (72-89), mark_streaming_started (139-170), update_streaming_progress (91-137), mark_streaming_complete (172-203)
- backend/src/services/download_service.py (2): DownloadService (24-208), generate_mp3 (66-189)
- backend/src/handlers/lambda_handler.py (5): lambda_handler (490-519), LambdaHandler (50-477), handle_meditation_request (112-145), handle_job_status (355-367), handle_download_request (369-413)
- backend/src/config/constants.py (1): InferenceType (4-6)
- backend/src/models/requests.py (1): MeditationRequest (34-62)
backend/src/services/ffmpeg_audio_service.py (3)
- backend/src/utils/logging_utils.py (1): get_logger (85-110)
- backend/src/services/audio_service.py (3): AudioService (5-18), get_audio_duration (8-8), select_background_music (16-18)
- backend/src/services/hls_service.py (4): upload_segment_from_file (90-112), generate_live_playlist (133-181), upload_playlist (114-131), finalize_playlist (183-194)
backend/src/services/hls_service.py (3)
- backend/src/utils/logging_utils.py (1): get_logger (85-110)
- backend/src/services/storage_service.py (1): StorageService (5-14)
- backend/src/services/s3_storage_service.py (1): delete_object (66-80)
frontend/components/BackendMeditationCall.tsx (2)
- backend/src/models/responses.py (1): MeditationResponse (38-43)
- frontend/types/api.ts (2): MeditationResult (61-74), JobStatusResponse (36-47)
🪛 LanguageTool
docs/HLS_STREAMING.md
[grammar] ~78-~78: Use a hyphen to join words.
Context: ...S3 4. Returns pre-signed download URL (1 hour expiry)
(QB_NEW_EN_HYPHEN)
🪛 markdownlint-cli2 (0.18.1)
docs/HLS_STREAMING.md
39-39: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (45)
frontend/components/BackendMeditationCall.tsx (7)
3-7: LGTM! The type imports and interface definitions are well-structured. The `StreamingMeditationResponse` interface appropriately combines streaming capabilities with legacy field support for backward compatibility.

Also applies to: 31-51
53-58: LGTM! The polling configuration with exponential backoff and jitter is well-designed. The constants are reasonable and the jitter implementation correctly prevents thundering herd scenarios.
Also applies to: 92-97
200-229: LGTM! The download URL fetch implementation correctly validates the response and extracts error messages appropriately.
278-291: LGTM! The conversion function correctly maps `JobStatusResponse` to `MeditationResult`. The `isStreaming` flag appropriately reflects whether streaming data exists, regardless of current status.
293-384: LGTM! The streaming meditation call implementation is well-structured with clear separation between job submission, status polling, and continuation functions. The base64 fallback ensures backward compatibility.
386-443: LGTM! The legacy function maintains backward compatibility by waiting for full completion and requiring base64 results.
472-473: LGTM! Clean re-export of types for module consumers using proper `export type` syntax.

frontend/components/HLSPlayer/hlsPlayerHtml.ts (2)
31-31: SRI attribute added, good security practice. The script tag now includes `integrity` and `crossorigin` attributes, addressing the previous review concern about CDN security.
207-226: Robust message handling for cross-platform compatibility. Good implementation listening to both `document` and `window` message events to handle different React Native WebView messaging mechanisms across platforms. The try-catch blocks ensure graceful handling of malformed messages.

frontend/components/ScreenComponents/MeditationControls.tsx (3)
141-151: Loading state rendered cleanly. The loading indicator is shown when `isCalling` is true, providing good UX during meditation generation. The early return pattern keeps the component logic clear.
179-215: HLS streaming mode implementation looks solid. The HLSPlayer integration with proper ref handling, event callbacks, and autoPlay configuration is well structured. The conditional rendering of the "Meditation complete" message when `streamEnded` is true provides good user feedback.
160-164: Retry button needs to reload the HLS source, not just call play().

Calling `hlsPlayerRef.current?.play()` won't recover from errors during manifest fetch (network timeouts, 404s, etc.). `HLSPlayerRef` doesn't expose a `retry()` method, only `play()`, `pause()`, and `seek()`.

To properly retry on HLS errors, consider adding a `reload()` method to `HLSPlayerRef` that sends a 'load' command to re-initialize the HLS.js instance, or have the parent component reset `playlistUrl` to trigger a fresh source load.

Likely an incorrect or invalid review comment.
frontend/hooks/useHLSPlayer.web.ts (3)
184-189: Native HLS canplay listener could trigger multiple times on retry.

When using native HLS, the `canplay` listener is added with `{ once: true }`, which is correct. However, on retry (lines 324-328), `audio.play()` is called directly without waiting for `canplay`. This is fine, but be aware that if `retry()` is called while loading, there may be a race condition.
10-25: Good practice: centralized HLS configuration. Extracting `HLS_CONFIG` as a shared constant ensures consistency between the main initialization and retry logic. The configuration values (retry counts, delays, live streaming settings) are reasonable defaults.
73-87: Audio element lifecycle managed correctly. The effect properly creates the audio element on mount and cleans up (pause, clear src, null ref) on unmount. The SSR guard (`typeof document === 'undefined'`) is appropriate for web code that may run in SSR contexts.

backend/src/services/job_service.py (7)
21-26: LGTM: new STREAMING status added to JobStatus enum. The addition of the `STREAMING` status properly extends the job lifecycle to support HLS streaming workflows.
36-70: LGTM: job creation extended for HLS streaming. The `enable_streaming` parameter and conditional initialization of streaming/download fields are well-structured with sensible defaults.
91-137: LGTM: streaming progress tracking with backward compatibility. The `update_streaming_progress` method properly initializes missing streaming data for backward compatibility and correctly sets `started_at` on the first segment.
172-203: LGTM: streaming completion handling. Properly transitions status to COMPLETED and enables download availability.
205-261: LGTM: download and cache management methods. The download readiness, completion tracking, TTS cache key storage, and generation attempt incrementing are all well-implemented with proper null checks.
325-356: LGTM: cleanup method updated consistently. The prefix construction in `cleanup_expired_jobs` is aligned with the new key structure.
263-284: No breaking change: this is a new feature with no existing jobs to migrate.

The key format `jobs/{user_id}/{job_id}.json` is consistently used throughout the job service and is part of the new async meditation generation feature. There is no evidence of an old key format `{user_id}/jobs/{job_id}.json` anywhere in the codebase, and no existing jobs require migration. The concern about impact on existing jobs does not apply here.

Likely an incorrect or invalid review comment.
backend/tests/integration/test_hls_integration.py (7)
14-48: LGTM: well-structured test fixtures. The fixtures properly mock AWS services and S3 client with sensible default return values.
54-68: LGTM: job creation test validates streaming fields. The test correctly verifies that HLS-enabled jobs include the expected streaming and download field structure.
69-103: LGTM: HLS segment workflow and playlist format tests. Good coverage of segment upload verification and HLS playlist format validation, including required directives.
151-172: LGTM: download service test validates the segment-missing scenario. Properly tests that `generate_mp3` returns `None` when no segments exist.
174-209: LGTM: TTS cache and cleanup flow tests. Good coverage of cache existence checks and artifact cleanup with proper call count verification.
252-334: LGTM: end-to-end flow tests. Good coverage of meditation request with streaming, playlist URL refresh, and download request validation for both state and availability checks.
336-355: LGTM: download availability validation test. Properly verifies the error response when the job is completed but the download is not yet available.
backend/tests/unit/test_hls_service.py (7)
7-14: LGTM: clean fixture setup. The mock storage service fixture is well-structured with appropriate default return values.
21-66: LGTM: key generation tests. Good coverage of HLS prefix, segment key (including boundary indices), playlist key, and TTS cache key generation.
67-116: LGTM: pre-signed URL generation tests. Proper testing of success and failure paths for URL generation, including playlist and segment-specific URLs.
118-162: LGTM: upload operation tests. Good coverage of segment upload (from bytes and file), including content-type verification and error handling.
164-231: LGTM: playlist generation tests. Comprehensive testing of live playlist generation with various configurations: basic, with ENDLIST, and custom durations. The finalization test properly verifies ENDLIST presence.
233-302: LGTM: TTS cache and segment operation tests. Good coverage of upload, download (success/failure), existence check, and segment listing with proper filtering of `.ts` files.
304-358: LGTM: cleanup and configuration tests. Proper testing of artifact cleanup including graceful error handling, segment download, and configuration constants verification.

backend/src/services/ffmpeg_audio_service.py (7)
backend/src/services/ffmpeg_audio_service.py (7)
32-40: LGTM: constructor extended with optional HLS service. The optional `hls_service` parameter maintains backward compatibility while enabling HLS functionality.
hls_serviceparameter maintains backward compatibility while enabling HLS functionality.
193-217: LGTM: HLS method signature and validation.The method properly validates that
hls_serviceis available before proceeding, and the docstring clearly documents parameters and return types.
221-236: LGTM: Temp directory and audio preparation.Creating a dedicated temp directory for HLS output and extracting common mixing logic to
_prepare_mixed_audiois clean. The estimated segment calculation is reasonable.
266-311: LGTM: progressive segment upload with playlist updates. The segment upload loop properly handles duration extraction, uploads each segment, generates live playlists, and calls the progress callback. The final playlist finalization with ENDLIST is correct.
313-316: LGTM: temp directory cleanup. The `finally` block ensures the HLS temp directory is cleaned up even on failure.

backend/src/handlers/lambda_handler.py (2)
finallyblock ensures the HLS temp directory is cleaned up even on failure.backend/src/handlers/lambda_handler.py (2)
119-146: Meditation request wiring for HLS streaming looks good. Creating the job with `enable_streaming=ENABLE_HLS_STREAMING`, immediately kicking off async processing, and returning a response that includes a `"streaming": {"enabled": True, "playlist_url": None}` block when HLS is enabled is a clean contract for the client. This keeps the synchronous API surface stable while adding streaming metadata.
511-524: Download routing & auth mirror job-status flow appropriately. The new `_handle_download_request` correctly parses `job_id` from `/job/{job_id}/download`, validates `user_id` from query params, enforces ownership, and then delegates to `LambdaHandler.handle_download_request` while wrapping responses with CORS and error helpers. This keeps the authorization model consistent with `_handle_job_status_request`.
```python
@pytest.mark.integration
class TestFeatureFlag:
    """Tests for ENABLE_HLS_STREAMING feature flag."""

    def test_feature_flag_parsing_true(self):
        """Test feature flag is parsed as true."""
        with patch.dict(os.environ, {"ENABLE_HLS_STREAMING": "true"}):
            # Re-import to pick up new env var
            import importlib

            from src.handlers import lambda_handler
            importlib.reload(lambda_handler)

            from src.handlers.lambda_handler import ENABLE_HLS_STREAMING
            assert ENABLE_HLS_STREAMING is True

    def test_feature_flag_parsing_false(self):
        """Test feature flag is parsed as false."""
        with patch.dict(os.environ, {"ENABLE_HLS_STREAMING": "false"}):
            import importlib

            from src.handlers import lambda_handler
            importlib.reload(lambda_handler)

            from src.handlers.lambda_handler import ENABLE_HLS_STREAMING
            assert ENABLE_HLS_STREAMING is False

    def test_feature_flag_defaults_to_true(self):
        """Test feature flag defaults to true when not set."""
        env_without_flag = {k: v for k, v in os.environ.items() if k != "ENABLE_HLS_STREAMING"}
        with patch.dict(os.environ, env_without_flag, clear=True):
            import importlib

            from src.handlers import lambda_handler
            importlib.reload(lambda_handler)

            from src.handlers.lambda_handler import ENABLE_HLS_STREAMING
            assert ENABLE_HLS_STREAMING is True
```
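The parsing logic these tests exercise isn't shown in the diff; a common pattern consistent with all three cases (unset defaults to true, case-insensitive "true") would be something like:

```python
import os

def parse_bool_flag(name, default=True):
    """Parse an env flag consistently with the tests above:
    unset -> default; "true"/"1"/"yes" (any case) -> True; anything else -> False."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "1", "yes")

os.environ["ENABLE_HLS_STREAMING"] = "false"
parsed_false = parse_bool_flag("ENABLE_HLS_STREAMING")
os.environ["ENABLE_HLS_STREAMING"] = "TRUE"
parsed_true = parse_bool_flag("ENABLE_HLS_STREAMING")
os.environ.pop("ENABLE_HLS_STREAMING", None)
parsed_default = parse_bool_flag("ENABLE_HLS_STREAMING")
```

This is a sketch of the convention the tests imply, not the actual code in `lambda_handler.py`.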
🧩 Analysis chain

🏁 Script executed:

```shell
fd -type f -name "lambda_handler.py"
```

Repository: HatmanStack/float (output length: 231)

🏁 Script executed:

```shell
fd -t f "lambda_handler.py"
```

Repository: HatmanStack/float (output length: 141)

🏁 Script executed:

```shell
cat -n backend/src/handlers/lambda_handler.py
```

Repository: HatmanStack/float (output length: 29156)

🏁 Script executed:

```shell
sed -n '212,250p' backend/tests/integration/test_hls_integration.py
```

Repository: HatmanStack/float (output length: 1577)

🏁 Script executed:

```shell
# Check if there's a conftest.py that might be handling module resets
find backend/tests -name "conftest.py" -type f
```

Repository: HatmanStack/float (output length: 154)

🏁 Script executed:

```shell
cat -n backend/tests/integration/conftest.py
```

Repository: HatmanStack/float (output length: 10318)

🏁 Script executed:

```shell
cat -n backend/tests/conftest.py
```

Repository: HatmanStack/float (output length: 10404)

🏁 Script executed:

```shell
# Check if pytest.mark.order is used or available in the project
grep -r "pytest.mark.order" backend/tests/ || echo "Not found"
grep -r "pytest-ordering" backend/ || echo "Not found"
```

Repository: HatmanStack/float (output length: 80)
Module reloading without state cleanup can cause test pollution.
The `importlib.reload(lambda_handler)` calls will re-evaluate the `ENABLE_HLS_STREAMING` feature flag correctly, but the module's global `_handler` cache (line 480) persists across reloads. Any cached `LambdaHandler` instances or module state will carry over to subsequent tests. Consider either: (1) explicitly resetting `sys.modules['src.handlers.lambda_handler']` after each test, (2) using `@pytest.fixture(autouse=True)` to clean module state between tests, or (3) avoiding module reloads entirely by using dependency injection instead.
🤖 Prompt for AI Agents
In backend/tests/integration/test_hls_integration.py around lines 212-250, the
tests reload src.handlers.lambda_handler but do not clear the module state,
allowing the module-level _handler cache (and any LambdaHandler instances) to
persist and pollute subsequent tests; fix by ensuring module state is cleaned
between reloads — either clear sys.modules['src.handlers.lambda_handler'] before
re-importing in each test, or add a pytest fixture (autouse=True) that pops that
key from sys.modules (and resets any other module globals) before each test so
importlib.reload sees a fresh module, or refactor the tests to avoid reloads by
injecting the feature flag into the handler factory instead.
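The core mechanism behind options (1) and (2) is dropping the cached entry from `sys.modules` so the next import re-executes module-level code (including any `_handler`-style cache initialization). A minimal stdlib sketch, demonstrated with `json` rather than the project module; in a real suite this helper would typically be wrapped in an autouse pytest fixture:

```python
import importlib
import sys
import types

def import_fresh(module_name):
    """Drop any cached copy of the module, then import it again so its
    module-level globals (e.g. a cached _handler) are re-initialized.
    Note: other modules still holding a reference to the old module
    object keep seeing the old state, which is why dependency injection
    is often the cleaner long-term fix."""
    sys.modules.pop(module_name, None)
    return importlib.import_module(module_name)

before = import_fresh("json")
after = import_fresh("json")  # a distinct, freshly executed module object
```

Here `import_fresh` is a hypothetical helper name used for illustration.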
- Remove unreachable handle_download_complete method (dead code)
- Fix TTS cache key: only set if upload succeeds
- Handle self-invoke failure by marking job as FAILED
- Add try/finally cleanup in _prepare_mixed_audio
- Fix blob URL memory leaks with useRef and cleanup
- Add network retry backoff with exponential delays
- Reset currentUrl on fatal error to enable retry

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
frontend/components/ScreenComponents/MeditationControls.tsx (1)
126-134: Play/pause desync partially addressed. The previous review concern about optimistic state updates has been addressed for `play()` by relying on `handlePlaybackStart`. The pause remains optimistic with inline justification; this is acceptable since pause is synchronous. However, if `hlsPlayerRef.current` is null when pausing (edge case), state will still update to false without any action taken. Consider adding a guard:

```diff
 const handleHLSPlay = useCallback(() => {
   if (isHLSPlaying) {
-    hlsPlayerRef.current?.pause();
-    setIsHLSPlaying(false);
+    if (hlsPlayerRef.current) {
+      hlsPlayerRef.current.pause();
+      setIsHLSPlaying(false);
+    }
   } else {
     hlsPlayerRef.current?.play();
     // State will be updated by handlePlaybackStart callback
   }
 }, [isHLSPlaying]);
```

Based on past review comment on this same concern.
🧹 Nitpick comments (5)
backend/src/services/ffmpeg_audio_service.py (1)
351-416: Subprocess timeouts lack specific exception handling. Each `subprocess.run` call has `timeout=FFMPEG_STEP_TIMEOUT`, but `subprocess.TimeoutExpired` is not caught. While the `finally` block handles cleanup, the raw timeout exception propagates to callers without context about which step failed. Consider wrapping with context for easier debugging:

```diff
-subprocess.run(
-    [
-        self.ffmpeg_executable,
-        "-i", music_path,
-        "-filter:a", f"volume={DEFAULT_MUSIC_VOLUME_REDUCTION}dB",
-        music_volume_reduced_path,
-    ],
-    check=True,
-    capture_output=True,
-    timeout=FFMPEG_STEP_TIMEOUT,
-)
+try:
+    subprocess.run(
+        [
+            self.ffmpeg_executable,
+            "-i", music_path,
+            "-filter:a", f"volume={DEFAULT_MUSIC_VOLUME_REDUCTION}dB",
+            music_volume_reduced_path,
+        ],
+        check=True,
+        capture_output=True,
+        timeout=FFMPEG_STEP_TIMEOUT,
+    )
+except subprocess.TimeoutExpired:
+    raise Exception(f"FFmpeg volume reduction timed out after {FFMPEG_STEP_TIMEOUT}s")
```

backend/src/handlers/lambda_handler.py (4)
257-260: Unused variable `tts_cache_key` before conditional. `tts_cache_key` is computed at line 258 but only used at line 289 inside the `else` branch. If TTS cache exists, this computation is wasted (minor). Move the `tts_cache_key` computation inside the `else` block where it's actually used:

```diff
-tts_cache_key = self.hls_service.get_tts_cache_key(request.user_id, job_id)
 job_data = self.job_service.get_job(request.user_id, job_id)
 generation_attempt = job_data.get("generation_attempt", 1) if job_data else 1

 # ... later in else branch ...
+tts_cache_key = self.hls_service.get_tts_cache_key(request.user_id, job_id)
 if self.hls_service.upload_tts_cache(request.user_id, job_id, voice_path):
     self.job_service.set_tts_cache_key(request.user_id, job_id, tts_cache_key)
```
376-420: Method returns mixed response types. `handle_download_request` returns `None`, an error dict with `{"error": {...}}`, or a success dict. While the caller handles all cases, this is inconsistent with the middleware pattern used elsewhere. Consider raising exceptions or using a consistent response wrapper.
504-506: Path matching could be more precise. The condition `"/download" in raw_path` is somewhat loose. While the POST method constraint helps, paths like `/job/{job_id}/download-history` would incorrectly match. Consider using a more precise pattern:

```diff
-if http_method == "POST" and "/download" in raw_path:
+if http_method == "POST" and raw_path.endswith("/download"):
     return _handle_download_request(handler, event)
```
612-614: Error responses should use appropriate HTTP status codes. Download errors like `JOB_NOT_COMPLETED` and `DOWNLOAD_NOT_AVAILABLE` return `HTTP_BAD_REQUEST` (400), but these could arguably be 409 (Conflict) or 403 (Forbidden) for more semantic correctness.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/src/handlers/lambda_handler.py (8 hunks)
- backend/src/services/ffmpeg_audio_service.py (3 hunks)
- frontend/components/HLSPlayer/hlsPlayerHtml.ts (1 hunk)
- frontend/components/ScreenComponents/MeditationControls.tsx (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/components/ScreenComponents/MeditationControls.tsx (5)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1): `HLSPlayerRef` (25-29)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (1): `HLSPlayerRef` (20-24)
- frontend/components/ThemedView.tsx (1): `ThemedView` (17-26)
- frontend/constants/Colors.ts (1): `Colors` (4-3888)
- frontend/components/ThemedText.tsx (1): `ThemedText` (27-116)
backend/src/handlers/lambda_handler.py (9)
- backend/src/services/download_service.py (2): `DownloadService` (24-208), `generate_mp3_and_get_url` (191-208)
- backend/src/services/hls_service.py (6): `HLSService` (15-290), `get_tts_cache_key` (36-39), `tts_cache_exists` (229-236), `download_tts_cache` (215-227), `upload_tts_cache` (196-213), `generate_playlist_url` (58-61)
- backend/src/services/job_service.py (8): `create_job` (36-70), `get_job` (263-284), `update_job_status` (72-89), `mark_streaming_started` (139-170), `update_streaming_progress` (91-137), `mark_streaming_complete` (172-203), `increment_generation_attempt` (251-261), `mark_download_ready` (205-227)
- backend/src/models/requests.py (1): `MeditationRequest` (34-62)
- backend/src/utils/file_utils.py (1): `generate_timestamp` (7-8)
- backend/src/services/ai_service.py (1): `generate_meditation` (13-13)
- backend/src/services/gemini_service.py (1): `generate_meditation` (197-210)
- backend/src/providers/openai_tts.py (1): `synthesize_speech` (14-29)
- backend/src/handlers/middleware.py (3): `cors_middleware` (18-37), `create_error_response` (165-175), `create_success_response` (178-183)
🔇 Additional comments (22)
backend/src/services/ffmpeg_audio_service.py (4)
22-27: LGTM on timeout constants. The timeout configuration provides sensible defaults: 2 minutes per FFmpeg step and 5 minutes for full HLS generation. These values appropriately bound long-running operations.
32-40: Constructor properly integrates HLS service dependency. The optional `hls_service` parameter maintains backward compatibility while enabling HLS functionality when needed.
271-276: Fallback segment duration is reasonable for error recovery. The fallback to `HLS_SEGMENT_DURATION` when duration detection fails is acceptable. However, note that the last segment is typically shorter; this fallback could cause minor timing inaccuracies in the playlist if duration detection fails for that segment.
318-328: Well-structured helper method with proper cleanup. The `_prepare_mixed_audio` method correctly returns both the mixed audio path and updated music list, addressing the prior review feedback. The intermediate file cleanup in the `finally` block is properly implemented.

backend/src/handlers/lambda_handler.py (7)
43-47: Feature flag and retry configuration look good. Environment-based feature flag for HLS streaming and configurable retry attempts provide operational flexibility.
56-63: Service wiring correctly integrates HLS dependencies. The dependency injection chain is properly set up: `hls_service` → `download_service` and `audio_service`.
119-145: Clean HLS-aware job creation and response. The job creation correctly passes the streaming flag, and the response appropriately signals HLS availability to the client.
175-186: Good defensive check for HLS mode consistency. The dual check of `ENABLE_HLS_STREAMING` and `job_data.streaming.enabled` ensures consistent behavior even if the feature flag changes after job creation.
331-357: Retry mechanism properly implemented with self-invocation. The retry logic correctly increments the attempt counter, self-invokes asynchronously for retry, and falls through to mark as failed if the retry invocation itself fails. This addresses the prior review feedback about non-functional retries.
368-373: Correct URL refresh for streaming jobs. Pre-signed URLs expire, so refreshing the playlist URL on each status request ensures clients always get valid URLs.
593-604: Authorization check correctly validates job ownership. The authorization pattern matches the job status endpoint, ensuring consistent access control across endpoints.
frontend/components/ScreenComponents/MeditationControls.tsx (5)
1-8: LGTM! Imports are clean and appropriate for the new HLS streaming functionality.
13-23: LGTM! HLS streaming props are well-typed and properly optional.
31-41: Good memory management for blob URLs. Proper cleanup on unmount and during playback lifecycle prevents memory leaks from orphaned blob URLs.
201-236: LGTM! HLS streaming mode implementation is clean. The `autoPlay` prop provides good UX by starting playback immediately. State management via callbacks is properly wired.
239-276: LGTM! Legacy mode and generate button implementations are straightforward. Good use of distinct `testID` values for each button state.

frontend/components/HLSPlayer/hlsPlayerHtml.ts (6)
1-6: LGTM! Clear documentation explaining the purpose and usage context of the HTML template.
7-30: LGTM! The HTML structure and CSS are appropriate for WebView embedding. The `playsinline` attribute on the audio element correctly prevents fullscreen playback on iOS.
31-31: LGTM! SRI (Subresource Integrity) hash has been correctly added to the HLS.js CDN script tag, addressing the security concern from the previous review.
33-98: LGTM! The initialization, message sending, audio event listeners, and native HLS detection are well-implemented. The `isFinite` checks for duration correctly handle live streams, and the comprehensive event coverage ensures proper state synchronization with React Native.
196-221: LGTM! Command handling is well-implemented with appropriate error handling for the `play` command and defensive value clamping for `setVolume`. The `isFinite` check for seek time prevents invalid values.
223-242: LGTM! The dual message listener pattern correctly handles cross-platform React Native WebView behavior (iOS sends via `document`, Android via `window`). The type check in the window listener and comprehensive error handling ensure robust message parsing.
- Fix critical bug: keep HLSPlayer mounted in error state so retry works
- Clean up mixed audio file after HLS generation
- Fix race condition in network retry setTimeout closure
- Move tts_cache_key computation inside else block (avoid wasteful compute)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
frontend/components/ScreenComponents/MeditationControls.tsx (1)
123-156: HLS pause still flips local state optimistically; align both directions with player events. You've fixed the earlier optimistic `play` toggle by delegating `setIsHLSPlaying(true)` to `handlePlaybackStart`, but the pause path still forces `setIsHLSPlaying(false)` immediately after `pause()`. If `pause()` can fail or be async in any environment, UI state could briefly claim "Play" while audio is still running. If feasible, consider making HLSPlayer expose an explicit pause callback (e.g., `onPlaybackPause`) and let that be the single source of truth for `isHLSPlaying`, similar to `handlePlaybackStart`/`handlePlaybackComplete`.
🧹 Nitpick comments (6)
backend/src/services/ffmpeg_audio_service.py (1)
272-301: Consider: Partial upload state on segment failure. If `upload_segment_from_file` fails mid-way through the loop, some segments will already be uploaded to S3 but the job will fail. The HLS artifacts aren't cleaned up on failure, which could leave orphaned segments. This may be intentional for retry scenarios (TTS cache exists), but consider whether HLS cleanup should occur on failure when retries are exhausted.
backend/src/handlers/lambda_handler.py (3)
307-317: Music list from HLS generation is not persisted. The `music_list` returned from `combine_voice_and_music_hls` contains the updated list of used music tracks, but it's not stored anywhere. In the base64 path, this is included in the response via `create_meditation_response`. For HLS, you may want to store this in the job data so clients can track which music was used.

```diff
 music_list, total_segments, segment_durations = self.audio_service.combine_voice_and_music_hls(
     voice_path=voice_path,
     music_list=request.music_list,
     timestamp=timestamp,
     user_id=request.user_id,
     job_id=job_id,
     progress_callback=progress_callback,
 )

 # Mark streaming complete
 self.job_service.mark_streaming_complete(request.user_id, job_id, total_segments)
+
+# Optionally store music_list in job result for client reference
+# self.job_service.update_job_result(request.user_id, job_id, {"music_list": music_list})
```
504-506: Routing condition may match unintended paths. The condition `"/download" in raw_path` is broad and could match unexpected paths like `/download-stats` or `/user/download-history`. Consider using a more specific pattern.

```diff
 # Check for download request (POST /job/{job_id}/download)
-if http_method == "POST" and "/download" in raw_path:
+if http_method == "POST" and "/job/" in raw_path and raw_path.endswith("/download"):
     return _handle_download_request(handler, event)
```
606-610: Redundant None check for result. Since `job_data` is passed to `handle_download_request` and was already validated at lines 594-597, the `result is None` case (which only happens when `job_data is None`) cannot occur. This check is unreachable.

```diff
 # Handle download (pass job_data to avoid duplicate lookup)
 result = handler.handle_download_request(user_id, job_id, job_data)
-if result is None:
-    response = create_error_response(HTTP_NOT_FOUND, f"Job {job_id} not found")
-    return cors_middleware(lambda e, c: response)(event, None)
 if "error" in result:
```

frontend/components/ScreenComponents/MeditationControls.tsx (2)
26-88: Blob URL lifecycle is solid; consider also cleaning up `sound` on unmount. The new `blobUrlRef` handling (revoke on unmount and when playback finishes) looks good and should avoid leaking blob URLs. One remaining gap is `sound` cleanup if the component unmounts while playback is still active or before `didJustFinish` fires; currently only the completion path unloads the `Audio.Sound`. Consider adding an effect with a cleanup that calls `sound.unloadAsync()` (wrapped in a try/catch) on unmount to avoid orphaned audio resources.
174-232: Streaming mode + retry overlay now look correct; minor UX polish possible on retry. Keeping `HLSPlayer` mounted and overlaying the error UI fixes the earlier retry bug and makes `retry-button` functional with `hlsPlayerRef`. If you want slightly snappier UX, you could clear `hlsError`/`streamEnded` immediately on Retry so the error banner disappears as soon as the user taps, not only after `onPlaybackStart` fires:

```diff
 <Pressable
-  onPress={() => {
-    hlsPlayerRef.current?.play();
-  }}
+  onPress={() => {
+    setHlsError(null);
+    setStreamEnded(false);
+    hlsPlayerRef.current?.play();
+  }}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/src/handlers/lambda_handler.py (8 hunks)
- backend/src/services/ffmpeg_audio_service.py (3 hunks)
- frontend/components/HLSPlayer/hlsPlayerHtml.ts (1 hunk)
- frontend/components/ScreenComponents/MeditationControls.tsx (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/components/ScreenComponents/MeditationControls.tsx (2)
- frontend/components/HLSPlayer/HLSPlayer.tsx (1): `HLSPlayerRef` (25-29)
- frontend/components/HLSPlayer/HLSPlayer.web.tsx (1): `HLSPlayerRef` (20-24)
backend/src/handlers/lambda_handler.py (6)
- backend/src/services/download_service.py (1): `generate_mp3_and_get_url` (191-208)
- backend/src/services/ffmpeg_audio_service.py (1): `combine_voice_and_music_hls` (193-323)
- backend/src/services/hls_service.py (6): `HLSService` (15-290), `tts_cache_exists` (229-236), `download_tts_cache` (215-227), `get_tts_cache_key` (36-39), `upload_tts_cache` (196-213), `generate_playlist_url` (58-61)
- backend/src/services/job_service.py (9): `create_job` (36-70), `get_job` (263-284), `update_job_status` (72-89), `JobStatus` (21-26), `mark_streaming_started` (139-170), `update_streaming_progress` (91-137), `mark_streaming_complete` (172-203), `increment_generation_attempt` (251-261), `mark_download_ready` (205-227)
- backend/src/models/requests.py (1): `MeditationRequest` (34-62)
- backend/src/utils/file_utils.py (1): `generate_timestamp` (7-8)
🔇 Additional comments (11)
frontend/components/HLSPlayer/hlsPlayerHtml.ts (1)
142-152: No action required — property names are correct for HLS.js 1.5.0. The properties `data.details.totalduration` and `data.details.live` are the correct API property names in HLS.js 1.5.0. Both are documented in the official LevelDetails object: `totalduration` is a number representing the sum duration of the level, and `live` is a boolean indicating whether the playlist is live. The code correctly uses these properties for duration detection and stream completion logic.

backend/src/services/ffmpeg_audio_service.py (2)
1-28: LGTM! Well-structured module setup with appropriate constants. The new imports, type-checking guard for `HLSService`, and timeout constants are well-organized. The timeout values (120s per step, 300s for full HLS) are reasonable for audio processing workloads.
32-40: LGTM! Clean optional dependency injection. The optional `hls_service` parameter maintains backward compatibility while enabling HLS functionality when needed.

backend/src/handlers/lambda_handler.py (7)
43-47: LGTM! Feature flag and retry configuration. The feature flag defaults to `true`, which is appropriate for a new feature rollout. `MAX_GENERATION_ATTEMPTS = 3` is a reasonable retry limit.
56-61: LGTM! Correct service dependency wiring. The services are composed correctly with proper dependency injection.
119-145: LGTM! Clean HLS integration in meditation request handling. The streaming info is correctly included conditionally, and the job creation properly passes the `enable_streaming` flag.
175-186: LGTM! Robust HLS routing logic. The double-check on both the feature flag and job-level streaming enablement provides flexibility to handle flag changes between job creation and processing.
331-357: LGTM! Retry mechanism correctly implemented. The retry now uses self-invocation instead of re-raising (addressing the past review comment). The try/except around `_invoke_async_meditation` ensures invocation failures are handled gracefully, and the early return prevents double-marking the job status.
368-373: LGTM! Fresh URL generation for streaming jobs. Refreshing the pre-signed playlist URL on each status check ensures clients always receive valid URLs, avoiding expiration issues.
416-420: No action required. The `expires_in` value of 3600 seconds matches the `DOWNLOAD_URL_EXPIRY` constant defined in the download service (line 18 of `backend/src/services/download_service.py`), which is also used when generating the pre-signed URL (line 56). The configuration is consistent across both files.
234-267: Legacy playback and fallback generate flows remain coherent with new streaming mode. The precedence order (`isCalling` → streaming mode → legacy `meditationURI` → generate button) makes sense, and the added `legacy-play-button`/`generate-button` testIDs will help keep both paths covered as HLS evolves. No issues from this change set.
```python
# Step 1: Reduce music volume
subprocess.run(
    [
        self.ffmpeg_executable,
        "-i", music_path,
        "-filter:a", f"volume={DEFAULT_MUSIC_VOLUME_REDUCTION}dB",
        music_volume_reduced_path,
    ],
    check=True,
    capture_output=True,
    timeout=FFMPEG_STEP_TIMEOUT,
)

# Step 2: Create silence
subprocess.run(
    [
        self.ffmpeg_executable,
        "-f", "lavfi",
        "-i", f"anullsrc=r={settings.AUDIO_SAMPLE_RATE}:cl=stereo",
        "-t", str(DEFAULT_SILENCE_DURATION),
        silence_path,
    ],
    check=True,
    capture_output=True,
    timeout=FFMPEG_STEP_TIMEOUT,
)

# Step 3: Add silence to voice
subprocess.run(
    [
        self.ffmpeg_executable,
        "-i", f"concat:{silence_path}|{voice_path}",
        "-c", "copy",
        voice_with_silence_path,
    ],
    check=True,
    capture_output=True,
    timeout=FFMPEG_STEP_TIMEOUT,
)

# Step 4: Trim music to duration
subprocess.run(
    [
        self.ffmpeg_executable,
        "-i", music_volume_reduced_path,
        "-t", str(total_duration),
        music_length_reduced_path,
    ],
    check=True,
    capture_output=True,
    timeout=FFMPEG_STEP_TIMEOUT,
)

# Step 5: Mix voice and music
subprocess.run(
    [
        self.ffmpeg_executable,
        "-i", music_length_reduced_path,
        "-i", voice_with_silence_path,
        "-filter_complex",
        "[0:a][1:a]amix=inputs=2:duration=first:dropout_transition=2",
        mixed_output_path,
    ],
    check=True,
    capture_output=True,
    timeout=FFMPEG_STEP_TIMEOUT,
)
```
Missing `subprocess.TimeoutExpired` handling in `_prepare_mixed_audio`.
While the HLS generation method handles `TimeoutExpired` (lines 260-265), `_prepare_mixed_audio` uses timeouts but doesn't catch `subprocess.TimeoutExpired`. A timeout will raise an exception with less context than the explicit handling in the HLS path.
Consider wrapping each step or adding a try/except for `TimeoutExpired`:

```diff
+        except subprocess.TimeoutExpired as e:
+            logger.error(
+                "FFmpeg step timed out in _prepare_mixed_audio",
+                extra={"data": {"timeout": FFMPEG_STEP_TIMEOUT, "cmd": e.cmd}}
+            )
+            raise Exception(f"FFmpeg step timed out after {FFMPEG_STEP_TIMEOUT}s") from e
         finally:
             # Cleanup intermediate files (always, even on failure)
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/src/services/ffmpeg_audio_service.py around lines 358 to 423,
_prepare_mixed_audio executes multiple subprocess.run calls with timeouts but
does not handle subprocess.TimeoutExpired; wrap the sequence (or each
subprocess.run) in a try/except subprocess.TimeoutExpired block that logs the
timeout with context (the ffmpeg command, timeout value, and any stdout/stderr
from the CalledProcessError if available) similar to the HLS generation handling
at lines ~260-265, then clean up any temp files and re-raise or raise a more
descriptive exception so callers get the same contextual error behavior as the
HLS path.
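A runnable sketch of the wrapping the prompt describes (the constant value and helper name are assumptions, and a stand-in command replaces FFmpeg so the behavior can be demonstrated):

```python
import subprocess
import sys

FFMPEG_STEP_TIMEOUT = 120  # seconds; mirrors the constant named in the review

def run_ffmpeg_step(cmd, step_name, timeout=FFMPEG_STEP_TIMEOUT):
    """Run one subprocess step, converting a raw TimeoutExpired into an
    error that names the step that failed, as suggested above."""
    try:
        return subprocess.run(cmd, check=True, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired as e:
        raise RuntimeError(
            f"FFmpeg step '{step_name}' timed out after {timeout}s: {e.cmd}"
        ) from e

# Demonstrate with a stand-in command that exceeds a short timeout.
try:
    run_ffmpeg_step([sys.executable, "-c", "import time; time.sleep(5)"], "demo", timeout=0.5)
    timed_out = False
except RuntimeError as err:
    timed_out = "timed out" in str(err)
```

Chaining with `from e` preserves the original `TimeoutExpired` (including captured stdout/stderr) for callers that need the raw details.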
```javascript
});

hls.on(Hls.Events.LEVEL_LOADED, (event, data) => {
  if (data.details.totalduration) {
```
Fix falsy check for duration to handle zero-length streams.
The truthy check `if (data.details.totalduration)` will evaluate to false if the duration is exactly 0, which is a valid duration. Use an explicit undefined/null check instead.
Apply this diff to fix the check:

```diff
 hls.on(Hls.Events.LEVEL_LOADED, (event, data) => {
-  if (data.details.totalduration) {
+  if (data.details.totalduration !== undefined && data.details.totalduration !== null) {
     sendMessage('durationchange', {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if (data.details.totalduration) {
+if (data.details.totalduration !== undefined && data.details.totalduration !== null) {
```
🤖 Prompt for AI Agents
In frontend/components/HLSPlayer/hlsPlayerHtml.ts around line 143, the condition
currently uses a truthy check `if (data.details.totalduration)` which
incorrectly treats 0 as absent; change it to an explicit null/undefined check
(e.g., `if (data.details.totalduration != null)` or `if (typeof
data.details.totalduration !== 'undefined' && data.details.totalduration !==
null)`) so zero-length streams are handled correctly while still excluding
missing values.
Summary by CodeRabbit
New Features
User Experience
Documentation
Tests