
Show HN: CrabCamera – Cross-platform camera plugin for Tauri desktop apps

5 comments · September 7, 2025

After building several Tauri desktop apps, I kept hitting the same wall: there's no reliable way to access cameras across Windows, macOS, and Linux. Every project meant reinventing camera integration, dealing with platform-specific APIs, and debugging permission issues.

  So I built CrabCamera – a Tauri plugin that handles all the camera complexity for you.

  What it does:

  - One API, three platforms: Same Rust code works on Windows (DirectShow), macOS (AVFoundation), and Linux (V4L2)
  - Permission handling: Automatically requests camera permissions on each platform
  - Format conversion: Takes care of the messy bits between platform formats and what your app needs
  - Error handling: Proper Rust error types instead of mysterious crashes
  - Hot-plugging: Detects when cameras are connected/disconnected
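
  A rough sketch of what enumeration plus permissions look like from the caller's side; `request_permission`, `list_devices`, and `open` are illustrative names rather than the crate's confirmed API:

  use crabcamera::Camera;

  // Illustrative only: the names below are assumed, not verified against the crate.
  async fn first_camera() -> Result<Camera, Box<dyn std::error::Error>> {
      // Triggers the native permission prompt on each platform.
      Camera::request_permission().await?;

      // Enumerates whatever DirectShow / AVFoundation / V4L2 reports.
      let devices = Camera::list_devices()?;
      let first = devices.into_iter().next().ok_or("no camera connected")?;

      Ok(Camera::open(&first)?)
  }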

  The problem it solves:

  Before CrabCamera, adding camera support to a Tauri app meant:
  1. Writing separate native code for each platform
  2. Managing three different permission systems
  3. Handling format conversions manually
  4. Debugging platform-specific edge cases
  5. Maintaining it all as OS APIs change

  Now it's just:
  use crabcamera::Camera;

  let camera = Camera::new()?;
  let frame = camera.capture_frame().await?;
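
  In a real app that fragment sits behind a Tauri command. A minimal sketch, using Tauri's standard command wiring; the frame-to-PNG call is a placeholder for whatever encoding the crate actually exposes:

  use crabcamera::Camera;

  // The frontend calls this via invoke("capture") and gets PNG bytes back.
  #[tauri::command]
  async fn capture() -> Result<Vec<u8>, String> {
      let camera = Camera::new().map_err(|e| e.to_string())?;
      let frame = camera.capture_frame().await.map_err(|e| e.to_string())?;
      frame.to_png().map_err(|e| e.to_string()) // placeholder encoder
  }

  fn main() {
      tauri::Builder::default()
          .invoke_handler(tauri::generate_handler![capture])
          .run(tauri::generate_context!())
          .expect("error while running tauri application");
  }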

  Why I built it:

  I was working on a plant monitoring app (botanica) that needed reliable camera access for time-lapse photography. Existing solutions were either abandoned, platform-specific, or required complex native bindings.

  The Tauri ecosystem is growing fast, but camera support was this obvious gap. Every desktop app eventually needs camera access – video calls, document scanning, AR features, security monitoring.

  Technical highlights:

  - Uses nokhwa for the heavy lifting but wraps it in Tauri-friendly APIs
  - Proper async/await support throughout
  - Memory-efficient streaming for video capture
  - Built-in image processing pipeline
  - Extensible plugin architecture
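
  For a sense of what the nokhwa wrapping involves, here's a sketch of a blocking capture pushed onto a worker thread so callers stay async; type names are taken from nokhwa 0.10's documented API and may differ between versions:

  use nokhwa::pixel_format::RgbFormat;
  use nokhwa::utils::{CameraIndex, RequestedFormat, RequestedFormatType};

  // nokhwa's capture calls block, so run them off the async runtime.
  async fn capture_rgb() -> Result<Vec<u8>, Box<dyn std::error::Error + Send + Sync>> {
      tokio::task::spawn_blocking(|| {
          let format = RequestedFormat::new::<RgbFormat>(RequestedFormatType::AbsoluteHighestResolution);
          let mut camera = nokhwa::Camera::new(CameraIndex::Index(0), format)?;
          camera.open_stream()?;
          let frame = camera.frame()?;                  // buffer in the camera's native format
          let rgb = frame.decode_image::<RgbFormat>()?; // converted to RGB
          Ok(rgb.into_raw())
      })
      .await?
  }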

  What's next:

  - WebRTC integration for video calls
  - Built-in barcode/QR code scanning
  - Face detection hooks
  - Performance optimizations for 4K streams

  The crate is MIT licensed and available on crates.io. I'd love feedback from other Tauri developers who've wrestled with camera integration.

  Links:
  - Crates.io: https://crates.io/crates/crabcamera
  - GitHub: https://github.com/Michael-A-Kuykendall/crabcamera
  - Documentation: https://docs.rs/crabcamera

lucb1e

The wall of text here, as well as the wall of text on the submission, keeps using the word Tauri but not saying what this is. Wikipedia says Tauri are Crimean settlers. Think I found it now: https://tauri.app

That page says that "By using the OS's native web renderer, the size of a Tauri app can be as little as 600KB." Sounds like an alternative to Electron, basically.

ge96

idk why, but when I see a lot of emojis in readmes I think vibecode

WD-42

The wall of text that doesn’t actually say that much is a dead giveaway.

foresterre

And also a lot of (unordered) lists. It only took one more step to verify this, though: the code is two commits, both showing "(...) and claude committed" as the author line and "Generated with Claude Code" in the commit message. This is not intended as a judgement, more a neutral observation.

I thought the "demo_crabcamera.py" was funny with respect to vibecoding: it's not a demo (I already found it odd for a Tauri app to be demo-ed via a python script); it produces the description text posted by OP.

On a more serious note, it all looks reasonably complete like most AI generated projects, but also almost a one shot generated project which hasn't seen much use for it to mature. This becomes even more true when you look a bit deeper at the code, where there are unfinished methods like:

  pub fn get_device_caps(device_path: &str) -> Result<Vec<String>, CameraError> {
        // This would typically query V4L2 capabilities
        // For now, return common capabilities
        Ok(vec![
            "Video Capture".to_string(),
            "Streaming".to_string(),
            "Extended Controls".to_string(),
        ])
    }
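
For contrast, a version that actually queried the device might look roughly like this; a sketch assuming the v4l crate's Device::with_path/query_caps API and an assumed CameraError::DeviceError variant, none of it verified against the repo:

  use v4l::capability::Flags;
  use v4l::Device;

  pub fn get_device_caps(device_path: &str) -> Result<Vec<String>, CameraError> {
      // Open the node (e.g. /dev/video0) and read its V4L2 capability flags.
      let device = Device::with_path(device_path)
          .map_err(|e| CameraError::DeviceError(e.to_string()))?;
      let caps = device.query_caps()
          .map_err(|e| CameraError::DeviceError(e.to_string()))?;

      let mut out = Vec::new();
      if caps.capabilities.contains(Flags::VIDEO_CAPTURE) {
          out.push("Video Capture".to_string());
      }
      if caps.capabilities.contains(Flags::STREAMING) {
          out.push("Streaming".to_string());
      }
      Ok(out)
  }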

The project states it builds on nokhwa for the actual camera capture, but it also conditionally includes platform libraries that appear to be used only in tests (meaning they could have been dev-dependencies), at least in the case of v4l, going by GitHub's search within the repo.
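
If that's right, something along these lines in Cargo.toml would keep it out of downstream builds (version number illustrative):

  # test-only, Linux-only dependency instead of a regular one
  [target.'cfg(target_os = "linux")'.dev-dependencies]
  v4l = "0.14"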

Perhaps it all works, but it does feel a bit immature and it does come with the risks of AI generated code.

auraham

Thanks for sharing!