[0.3.0] - 2026-05-08
Added
- Offline / service worker support. The application code and on-device face-detection model files are now cached locally on first load and continue to function without network access on subsequent visits.
Changed
- MediaPipe runtime (WASM) and the face landmark model file are now bundled with the Face Value distribution and served from the same origin as the application itself. Previously these were fetched from cdn.jsdelivr.net (jsDelivr) and storage.googleapis.com (Google Cloud Storage), each of which received user IP addresses and request metadata on every load. Those third-party requests no longer occur.
- Slight color adjustments to app icon
[0.2.1] - 2026-04-30
Fixed
- Window now scrolls radio buttons, checkboxes, and toggles into view when they receive focus.
[0.2.0] - 2026-03-06
Changed
- Logo tweak
[0.1.3] - 2026-03-05
Fixed
- Set aria-hidden on logo
[0.1.2] - 2026-03-03
Changed
- Tweaked guidance from “move camera left” to “point camera left” (and ditto for other directions)
[0.1.1] - 2026-03-03
Changed
- Add tagline under the header describing what Face Value does
- Move legal disclaimer from header to below the Start Camera button
Fixed
- Add aria-details on Start Camera button linking to “How to Use” instructions
- Add aria-label and aria-controls to “Use current” button so its purpose and target are clear
- Add screen-reader-only app description to Start Camera’s aria-describedby
[0.1.0] - 2026-02-07
Initial release.
Features
- Real-time face detection via MediaPipe FaceLandmarker
- Pull-based guidance: press a key to get spoken/screen-reader alignment instructions
- Alignment scoring (0–100%) with prioritized corrective instructions
- Guidance modes: body-only, camera-only, hybrid (auto), or full (both)
- Conference crop presets (16:9 desktop, 1:1 mobile) for video call framing
- Facial distance checking with configurable target face size and “use current” calibration
- Text-to-speech output with adjustable speed, alongside aria-live announcements
- Customizable shortcut key (default: F)
- Adjustable thresholds for position tolerance, orientation tolerance, and crown estimate
- Settings persisted to localStorage with runtime type validation
- Auto-start camera when permission is already granted
- Dark mode support via prefers-color-scheme
- All processing runs in-browser – no data leaves the device
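The persisted-settings entry above can be illustrated with a minimal runtime-validation sketch. The field names, defaults, and guidance-mode labels below are assumptions for illustration, not Face Value's actual schema:

```typescript
// Hypothetical settings shape -- field names and defaults are illustrative only.
interface Settings {
  shortcutKey: string;
  speechRate: number;
  guidanceMode: "body" | "camera" | "hybrid" | "full";
}

const DEFAULTS: Settings = { shortcutKey: "f", speechRate: 1.0, guidanceMode: "hybrid" };
const MODES = ["body", "camera", "hybrid", "full"] as const;

// Validate a value parsed from localStorage at runtime; any field that fails
// its type check falls back to its default rather than crashing the app.
function parseSettings(raw: unknown): Settings {
  if (typeof raw !== "object" || raw === null) return { ...DEFAULTS };
  const r = raw as Record<string, unknown>;
  return {
    shortcutKey:
      typeof r.shortcutKey === "string" && r.shortcutKey.length === 1
        ? r.shortcutKey
        : DEFAULTS.shortcutKey,
    speechRate:
      typeof r.speechRate === "number" && r.speechRate > 0
        ? r.speechRate
        : DEFAULTS.speechRate,
    guidanceMode: (MODES as readonly string[]).includes(r.guidanceMode as string)
      ? (r.guidanceMode as Settings["guidanceMode"])
      : DEFAULTS.guidanceMode,
  };
}
```

In the browser this would be fed from storage, e.g. `parseSettings(JSON.parse(localStorage.getItem("settings") ?? "null"))`, so corrupt or outdated stored values degrade to defaults instead of errors.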
Technical
- SSR-rendered UI via SolidJS + SSG plugin; no framework runtime shipped to client
- Pipeline architecture: camera -> detector -> pose -> guidance -> announcer
- Strict TypeScript with full test coverage (Vitest, 179 tests)
- ESLint (with solid, vitest, css-class-usage, unicode-typography plugins), Stylelint, Prettier
- GitHub Actions CI for lint, typecheck, and tests
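The camera -> detector -> pose -> guidance -> announcer pipeline above can be sketched as typed stage composition. The stage and type names here are assumptions for illustration, not the project's actual interfaces:

```typescript
// A pipeline stage transforms one value into the next; names are illustrative.
type Stage<In, Out> = (input: In) => Out;

// Compose two stages into one larger stage, the same shape used to chain
// camera -> detector -> pose -> guidance -> announcer.
function pipe<A, B, C>(f: Stage<A, B>, g: Stage<B, C>): Stage<A, C> {
  return (input) => g(f(input));
}

// Toy stand-ins for the pose and guidance stages.
interface Pose { offsetX: number }          // normalized horizontal face offset
interface Guidance { instruction: string }

const pose: Stage<number, Pose> = (offsetX) => ({ offsetX });
const guidance: Stage<Pose, Guidance> = (p) => ({
  instruction: p.offsetX > 0 ? "point camera left" : "point camera right",
});

const run = pipe(pose, guidance);
```

A call like `run(0.4)` flows the value through both stages and yields `{ instruction: "point camera left" }`; each real stage can be unit-tested in isolation the same way.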