How to Pass App Store Review With Camera-Based Health Features
How to pass App Store and Google Play review with camera-based health features like rPPG vitals scanning — privacy, permissions, disclaimers, and submission strategies.

If you've ever built a camera health feature into a mobile app and submitted it for review, you probably know the specific kind of dread that comes with waiting for Apple or Google's verdict. Camera-based health features — particularly rPPG vitals scanning that reads heart rate, respiratory rate, or stress levels from a phone camera — sit at the intersection of two things app reviewers scrutinize hardest: camera access and health data.
According to Apple's 2024 App Store Transparency Report, Apple reviewed roughly 7.77 million app submissions that year and rejected about 25% of them. Privacy violations were the single most common reason. If your app requests camera access and handles health data, you're triggering both red flags simultaneously.
"Apps that access HealthKit, Camera APIs, or facial mapping tools may not use gathered data for marketing, advertising, or use-based data mining, including by third parties." — Apple App Store Review Guidelines, Section 5.1.2(vi)
What reviewers actually look at in camera health apps
The review process differs between Apple and Google, but both platforms care about the same core question: does this app use the camera and health data responsibly, and does the user understand what's happening?
Apple's review team is human. Real people download your app, tap through it, and check that the experience matches what you described. Google uses a mix of automated scanning and human review, with policy enforcement that can happen both pre- and post-publication.
For camera-based health features specifically, reviewers check several things. They open the app, trigger the camera permission dialog, and read what it says. They look at whether the app explains why it needs the camera before asking. They check health disclaimers. They examine the App Privacy nutrition label (Apple) or Data Safety section (Google) against what the app actually collects.
Here's where it gets specific to rPPG and similar camera-based vitals:
| Review area | Apple App Store | Google Play Store |
|---|---|---|
| Camera permission string | Must explain specific use ("to measure heart rate from facial blood flow changes") | Must justify in Data Safety section; generic strings rejected |
| Health disclaimers | Required in-app and in metadata if measuring vitals | Required in store listing; must state if not clinically validated |
| Data storage restrictions | HealthKit data cannot be stored in iCloud; strict on-device rules | Health Connect permissions audited; unused data types must be removed |
| Account type | Standard developer account | Must be verified as Organization with D-U-N-S number (as of January 2026) |
| Review method | Human reviewers test the app | Automated + human review; post-publication enforcement |
| Privacy label | App Privacy "nutrition label" must match actual collection | Data Safety section must be accurate and complete |
Camera permission strings that actually get approved
The permission dialog is often where camera-based health features fail app store review. Both platforms reject apps that use vague camera permission strings. "This app needs access to your camera" will get you rejected. So will anything that doesn't match the actual use case.
For rPPG-based features, the permission string needs to explain the mechanism without making medical claims. Something along the lines of: "This app uses your front camera to detect subtle color changes in your face, which are used to estimate heart rate and other wellness metrics. No images are stored."
That last part matters. Apple's reviewers will specifically look for whether you explain data retention in the permission context. If you're processing frames in real-time and discarding them (which most rPPG SDKs do), say so.
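On iOS, this string lives in Info.plist under the NSCameraUsageDescription key. A minimal sketch — the wording is illustrative and should be adapted to your app's actual data handling:

```xml
<!-- Info.plist: camera usage description for an rPPG scan feature.
     iOS shows this string verbatim in the permission dialog. -->
<key>NSCameraUsageDescription</key>
<string>Uses the front camera to detect subtle color changes in your face
to estimate heart rate and other wellness metrics. Video frames are
processed in real time and are never stored or uploaded.</string>
```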
A few patterns that work:
- State the specific health metric being measured
- Explain the mechanism briefly (facial color changes, blood flow detection)
- Clarify data retention (frames processed in real-time, not stored)
- Avoid medical terminology that implies diagnosis
A few patterns that get rejected:
- "Camera access is needed for health features"
- Any mention of "diagnosis" or "medical-grade" measurements
- Permission strings that don't match the App Privacy label
- Requesting camera access before the user reaches the health feature
That last point is worth emphasizing. Both Apple and Google expect you to request camera permission at the moment of use, not at app launch. If your onboarding flow asks for camera access before the user even sees the vitals scanning feature, expect a rejection.
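One way to structure the moment-of-use request on iOS is to gate the scan behind an authorization check, so the system dialog only ever appears when the user starts a measurement. This is a sketch, not a prescribed implementation; the function name is a placeholder for your own feature code:

```swift
import AVFoundation

// Request camera access only when the user taps "Start scan",
// never at app launch.
func requestScanPermission(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true) // already granted, proceed to the scan
    case .notDetermined:
        // First ask: iOS presents the NSCameraUsageDescription string
        // here, so the user reads it in the context of the vitals feature.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        // Denied or restricted: explain and point the user to Settings.
        completion(false)
    }
}
```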
Health disclaimers: where most developers get tripped up
Google's policy team updated their Health Connect requirements in March 2025 and added further restrictions effective January 2026. According to coverage from My App Monitor and ASO World, the changes require health apps to verify as an Organization (with a D-U-N-S number), audit Health Connect permissions to remove unused data types, and include clear disclaimers if the app's measurements are not clinically validated.
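On Android, the permission audit means your manifest should declare only the Health Connect data types the app actually reads or writes. A sketch, assuming a feature that records heart rate and nothing else:

```xml
<!-- AndroidManifest.xml: declare only the Health Connect data types
     the app actually uses; unused declarations can trigger a flag. -->
<uses-permission android:name="android.permission.health.WRITE_HEART_RATE" />
<uses-permission android:name="android.permission.health.READ_HEART_RATE" />
<!-- Do NOT keep permissions such as READ_BLOOD_PRESSURE if the
     feature never touches that data type. -->
```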
Apple's Section 5.1.1 on health data got tighter in 2025 as well. The AppInstitute review checklist for 2025 notes that apps handling health information must provide clear privacy disclosures and explain data use before collection begins.
For camera-based vitals specifically, the disclaimers need to cover:
- That the measurements are estimates, not clinical diagnostics
- That the app is not a medical device (unless it actually is, with regulatory clearance)
- That users should consult a healthcare professional for medical decisions
- How facial data is processed and whether it's stored
Where developers get caught is inconsistency. Your App Store metadata says one thing, your in-app disclaimer says another, and your privacy policy says a third. Reviewers at both platforms cross-reference these. A 2023 study published in the BMJ (authored by researchers at the University of Birmingham and Macquarie University) examined health apps on both platforms and found that many had contradictory privacy disclosures between their store listings and actual data practices. The platforms have since gotten stricter about this.
The testing account problem
Apple's review guidelines require that you provide a demo account with full access in your review notes. For camera-based health features, this creates an awkward situation: the reviewer needs to actually perform a face scan to test the feature, which means they need adequate lighting, a stable phone position, and sometimes 30 seconds of sitting still.
According to AppFollow's 2025 guide to App Store review, the best practice is to include detailed testing instructions in the review notes. For camera-based health apps, that means:
- Providing a test account with all features unlocked
- Writing step-by-step instructions for performing a scan
- Noting minimum lighting requirements
- Including a video demo as a fallback (attached in App Store Connect)
- Explaining expected results so the reviewer knows the feature worked
Google Play doesn't have the same formal test account requirement, but providing clear testing guidance in your internal tester notes significantly reduces the chance of a policy flag.
Privacy architecture decisions that affect review outcomes
The architectural choices you make about data handling directly influence whether your camera health feature passes review. Both platforms have moved toward requiring on-device processing whenever possible.
For rPPG SDK integrations, the cleanest architecture from a review perspective processes video frames entirely on-device, extracts vitals signals locally, and only sends the resulting metrics (heart rate number, SpO2 percentage) to a server — never raw frames or facial data.
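Concretely, the network boundary should only ever see derived numbers. A sketch of that boundary, assuming the rPPG extraction has already run on-device (the payload fields and endpoint are illustrative):

```swift
import Foundation

// Only derived metrics cross the network boundary; raw frames and
// any facial data stay on-device and are discarded after processing.
struct VitalsPayload: Codable {
    let heartRateBpm: Double
    let respiratoryRate: Double
    let measuredAt: Date
}

func uploadMetrics(_ payload: VitalsPayload, to endpoint: URL) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(payload)
    URLSession.shared.dataTask(with: request).resume()
}
```

Keeping the uploadable type this narrow makes the privacy claim auditable: if the only Codable struct that reaches URLSession contains three numbers, no code path can leak a frame.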
Apple's guidelines under Section 5.1.2(vi) are explicit: data gathered from Camera APIs or facial mapping tools cannot be used for marketing, advertising, or data mining by third parties. If your rPPG SDK sends facial data to a third-party cloud service for processing, you'll need to disclose this prominently, and it may still get flagged.
A practical architecture comparison:
| Architecture | Review risk | Privacy compliance | User trust |
|---|---|---|---|
| Fully on-device processing | Low risk | Strong | Users see "no data leaves your phone" |
| On-device extraction, metrics sent to your server | Low-medium risk | Good with proper disclosure | Acceptable if explained clearly |
| Frames sent to your cloud for processing | High risk | Requires extensive disclosure | Users uncomfortable with facial data upload |
| Frames sent to third-party SDK cloud | Very high risk | Difficult to comply | Likely rejection or post-publication removal |
Google Play's January 2026 organizational requirements
One thing that caught developers off guard: Google Play now requires health apps to be published by verified Organization accounts, not individual developer accounts. This went into effect in January 2026 and requires a D-U-N-S number for verification.
According to reporting from My App Monitor, this means solo developers and small startups building health features need to register as an organization and obtain a D-U-N-S number before they can publish or update health apps. The verification process takes 2-4 weeks, so plan for it early.
This doesn't directly relate to camera features, but it catches many health app developers who weren't tracking Google's policy updates. If you're integrating an rPPG SDK and publishing to Google Play, verify your account type before you submit.
Common rejection reasons specific to camera-based health apps
Based on the research and published rejection patterns, here are the most frequent reasons camera health features get rejected:
- Vague camera permission string — not explaining why the camera is needed for health measurement
- Missing health disclaimers — not stating the app doesn't provide medical diagnoses
- Privacy label mismatch — the App Privacy / Data Safety section doesn't reflect actual camera data handling
- Pre-scan camera access request — asking for camera permission before the user reaches the health feature
- HealthKit integration errors — writing inaccurate data types to HealthKit or storing HealthKit data in iCloud
- Missing test instructions — reviewer couldn't figure out how to trigger the health scan
- Third-party data sharing — SDK sends facial data to external servers without adequate disclosure
A submission checklist that works
After going through the platform guidelines and common failure points, here's a practical checklist for submitting camera-based health features:
- Camera permission string explains the specific health measurement and mechanism
- Camera permission is requested at moment of use, not at launch
- In-app disclaimer states measurements are estimates, not clinical diagnostics
- In-app disclaimer advises consulting a healthcare professional
- Store metadata includes health-related disclaimers
- App Privacy label / Data Safety section accurately reflects data collection
- Privacy policy, store listing, and in-app disclosures are consistent
- Test account with full feature access provided in review notes
- Step-by-step testing instructions written for the reviewer
- Video demo of the camera health feature attached to submission
- HealthKit / Health Connect integration follows platform-specific rules
- On-device processing architecture used where possible
- No raw facial data sent to third-party services without disclosure
- Google Play account verified as Organization with D-U-N-S number
Frequently asked questions
How long does app store review take for health apps with camera features?
Apple typically reviews apps within 24-48 hours, but health-related apps sometimes take longer if they trigger additional scrutiny. Google Play reviews can take 3-7 days for health apps, and the January 2026 organizational verification requirement adds 2-4 weeks if your account isn't already verified. Plan for at least a week on each platform, and longer if it's your first submission with health features.
Can I use HealthKit to store rPPG-derived vital signs?
Yes, but with restrictions. Apple requires that HealthKit data be written accurately (don't write a heart rate estimate as a clinical measurement), stored only on-device or in an encrypted user-controlled backup, and never synced to iCloud by your app. Your app also needs the HealthKit entitlement and must request specific read/write permissions. If you write data to HealthKit, the App Privacy label must reflect this.
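A minimal HealthKit write for an rPPG-derived heart rate estimate might look like the following sketch. It assumes the HealthKit entitlement is configured and that write authorization for heart rate was already requested:

```swift
import HealthKit

// Write an on-device rPPG heart rate estimate to HealthKit.
// Assumes HKHealthStore write authorization for heart rate was granted.
func saveHeartRate(_ bpm: Double, store: HKHealthStore) {
    let type = HKQuantityType(.heartRate)
    let unit = HKUnit.count().unitDivided(by: .minute()) // beats per minute
    let quantity = HKQuantity(unit: unit, doubleValue: bpm)
    let now = Date()
    let sample = HKQuantitySample(type: type, quantity: quantity,
                                  start: now, end: now)
    store.save(sample) { success, error in
        if let error = error {
            print("HealthKit save failed: \(error.localizedDescription)")
        }
    }
}
```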
Do I need FDA clearance to pass app store review with vitals features?
Neither Apple nor Google requires FDA clearance for app approval. However, both platforms require you to clearly disclose that your app is not a medical device if it lacks regulatory clearance. Apple's guidelines state that apps offering health measurements must include appropriate disclaimers. The regulatory question is separate from the app store question, but making health claims without clearance will get you rejected on both platforms.
What happens if my camera health feature gets rejected?
You'll receive a rejection notice citing the specific guideline you violated. On Apple, you can respond directly to the reviewer through the Resolution Center, provide additional context, and resubmit. On Google, the process is similar through the Play Console. The most productive approach is to fix the cited issue, add extra documentation (screenshots, testing steps, disclaimer screenshots), and resubmit. Repeated rejections for the same issue can flag your account.
Where this is heading
The app store review process for camera-based health features will keep getting stricter. Both Apple and Google are moving toward more granular data disclosure requirements, and health data sits in the highest-scrutiny category on both platforms.
For developers integrating rPPG SDKs, the path forward is clear: on-device processing, transparent permission dialogs, consistent privacy disclosures, and no medical claims without regulatory backing. Platforms like Circadify are building their SDKs with these review requirements baked in, processing frames on-device and providing pre-built permission strings and disclaimer templates that match current platform guidelines.
The developers who treat app store compliance as a first-class engineering concern — rather than something to figure out after building the feature — are the ones whose submissions go through on the first try.
