Products

Two tightly integrated products, one credibility platform.

The SatCam System captures and reasons on the interview at the edge. The SatCam Cloud turns every session into compounding forensic intelligence, with governance and automation for federal-grade workflows.

Interview Assessment Sensor Platform

The edge system: sensing, synchronization, and on-device reasoning.

A synchronized depth + vision + audio + biosignal capture system with on-device AI compute. The full credibility pipeline runs on the device — including the Cortex reasoner — with a 3-second end-to-end delay. No network required.

SatCam — P1 prototype device

Four synchronized sensors, one time base

Every stream is time-stamped on a shared clock, so any facial micro-movement can be correlated with the syllable of speech and the heartbeat it coincided with.

Sensor | Type | Purpose
ToF Depth | Indirect time-of-flight | Sub-mm facial muscle movement @ 0.5 m
RGB Global Shutter | High-resolution colour | Face, landmarks, pose, action-unit classification
mmWave Radar | Millimetre-wave | Non-contact heart & respiration rate
Beamforming Mic Array | Multi-element | Transcript, prosody, turn-taking
Sensing & capture

What the device sees, hears, and measures.

Depth

Micro-movement

An indirect time-of-flight camera captures facial muscle displacement with sub-millimeter resolution at 0.5 m — the only way to pick up suppression, lid-tightening, and dimpler cues that never show on RGB alone.

Vision

Distortion-free imaging

A global-shutter colour sensor produces distortion-free frames for face detection, landmarking, pose estimation, and action-unit classification. Global shutter preserves geometry during fast motion.

Audio

Voice isolation

Multi-element array with spatial beamforming isolates the subject's voice from room noise. Downstream: prosody features, turn-taking, speaker identification, and time-stamped transcript.
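The core idea behind spatial beamforming can be shown in a minimal delay-and-sum sketch (pure Python, integer sample lags, toy waveforms — all values here are illustrative, not the shipped DSP):

```python
def delay_and_sum(channels, lags):
    """Advance each channel by its steering lag (in samples), then average:
    the subject's wavefront adds coherently, uncorrelated room noise averages down."""
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, lag in zip(channels, lags):
            j = i + lag
            acc += ch[j] if 0 <= j < n else 0.0
        out.append(acc / len(channels))
    return out

sig = [0.0, 1.0, 0.0, -1.0, 0.0]      # hypothetical voice waveform
mic0 = sig
mic1 = [0.0] + sig[:-1]               # same wavefront, arriving one sample later
beamformed = delay_and_sum([mic0, mic1], lags=[0, 1])
```

Steering toward a different direction is just a different lag vector; a real array computes fractional delays from microphone geometry.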

Biosignals

Heart rate & respiration

Millimetre-wave radar resolves chest-wall motion for respiration rate. Paired with rPPG from the RGB feed, the system produces contactless heart rate — no chest strap, no electrodes.
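rPPG recovers pulse by finding the dominant frequency in the skin-pixel intensity trace within the plausible heart-rate band. A minimal sketch with a direct DFT on a synthetic trace (sampling rate, band limits, and signal values are assumptions for illustration):

```python
import math

def dominant_bpm(samples, fps, lo=0.7, hi=3.0):
    """Return the strongest frequency in the 0.7-3.0 Hz band (42-180 bpm),
    found with a direct DFT over the mean-subtracted trace."""
    n = len(samples)
    mean = sum(samples) / n
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if not (lo <= f <= hi):
            continue
        re = sum((samples[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((samples[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f * 60.0

# Hypothetical 10 s of mean green-channel intensity at 30 fps with a 1.2 Hz pulse.
fps, secs, pulse_hz = 30, 10, 1.2
trace = [0.5 + 0.01 * math.sin(2 * math.pi * pulse_hz * t / fps) for t in range(fps * secs)]
print(round(dominant_bpm(trace, fps)))  # → 72
```

A production pipeline would add skin-region tracking, detrending, and motion rejection before the spectral step.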

Illumination

Invisible illumination

Four near-IR VCSEL emitters flood the scene with infrared for ToF operation — Class-1 eye-safe, and the near-IR wavelength is imperceptible to the subject: no visible light, no discomfort.

Sync

Synchronization

Hardware-anchored time stamps across all sensor streams. Downstream fusion can correlate a 40–200 ms action unit with a syllable and a heartbeat — frame-for-frame, no drift.
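With every stream stamped on one clock, cross-stream correlation reduces to nearest-timestamp lookup. A minimal sketch (event times and stream names are hypothetical):

```python
import bisect

def nearest(timestamps, t):
    """Return the timestamp in a sorted list closest to t."""
    i = bisect.bisect_left(timestamps, t)
    candidates = timestamps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda x: abs(x - t))

# Hypothetical per-stream event times (seconds) on the shared time base.
au_onset = 12.340                       # action-unit onset from the depth chain
syllables = [12.10, 12.31, 12.55]       # syllable onsets from the audio chain
heartbeats = [11.95, 12.78]             # beats from the radar/rPPG chain

aligned = {
    "au_onset": au_onset,
    "syllable": nearest(syllables, au_onset),
    "heartbeat": nearest(heartbeats, au_onset),
}
```

Because the stamps share one hardware clock, no per-sensor offset calibration is needed and the correlation cannot drift over a long session.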

Edge AI & processing

The full credibility pipeline, running on the device.

Vision chain

Face → landmarks → pose → AUs

Face detection, landmark localization, pose estimation for frontalization, image normalization, and an action-unit classifier covering the full set of decision-relevant facial cues.
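The chain is an ordered pipeline in which each stage consumes the previous stage's output. A toy sketch (stage names and output values are illustrative stand-ins, not the shipped module API):

```python
# Each stage enriches the frame dict and passes it on.
def detect_face(f):        f["face_box"] = (40, 40, 160, 160); return f
def localize_landmarks(f): f["landmarks"] = [(80, 90), (120, 90)]; return f
def estimate_pose(f):      f["pose"] = {"yaw": 4.0, "pitch": -2.0}; return f
def frontalize(f):         f["frontalized"] = True; return f
def classify_aus(f):       f["action_units"] = {"AU07": 0.8, "AU14": 0.3}; return f

VISION_CHAIN = [detect_face, localize_landmarks, estimate_pose, frontalize, classify_aus]

def run_chain(frame):
    for stage in VISION_CHAIN:
        frame = stage(frame)
    return frame

result = run_chain({"pixels": "raw RGB frame"})
```

Keeping the stages as an explicit ordered list makes it cheap to swap a classifier or insert a normalization step without touching the rest of the chain.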

Audio chain

Transcript, prosody, speaker ID

Multi-language speech-to-text, prosodic feature extraction, and speaker-embedding identification — all time-aligned with the vision chain.

Fusion

Cortex reasoner, on-edge

Action units, prosody, transcript, and biosignals converge at the Cortex reasoner running locally on the on-device NPU — returning a score and a rationale every ~3 seconds.
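One way to picture the convergence point is a fusion window: features arriving on the shared time base get bucketed into one record per interval before scoring. Field names and values below are assumptions for illustration:

```python
# Illustrative fusion record handed to the reasoner each window.
def fuse_window(t0, aus, prosody, tokens, heart_bpm, resp_rpm):
    return {
        "t": t0,                  # window start on the shared time base (s)
        "action_units": aus,      # vision chain
        "prosody": prosody,       # audio chain
        "tokens": tokens,         # time-aligned transcript slice
        "heart_bpm": heart_bpm,   # radar + rPPG
        "resp_rpm": resp_rpm,
    }

window = fuse_window(
    t0=12.0,
    aus={"AU07": 0.8},
    prosody={"f0_mean_hz": 142.0, "speech_rate": 4.1},
    tokens=["I", "was", "home"],
    heart_bpm=78.0,
    resp_rpm=15.0,
)
```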

Latency

~3 s end-to-end delay

Budgeted to preserve natural turn-taking in a live interview. The interviewer sees the score and next-question suggestion before they would have formulated their follow-up.

Air-gap

Zero-transmission mode

Operate fully offline — sensors, processing, inference, and the interviewer UI all run locally. Recordings and metadata are written only to the local NVMe SSD. Classified-environment ready.

Auditability

Rationale attached to every score

Cortex attaches the biometric and linguistic evidence that drove each credibility score — required for 2026 AML governance and federal audit trails, exported with the session.

Continuous

Learns from every session

Cortex improves in real time — no retraining cycle, no catastrophic forgetting. Model updates propagate in-place; the device you field in 2028 will outperform the one shipped in 2027.

Adaptive

Multi-language, multi-cohort

The speech chain supports multiple languages out of the box. Cortex adapts to new demographics and cohorts on-the-fly without a retraining cycle.

Security

Signed + encrypted session artifacts

Every session bundle is cryptographically signed and encrypted at rest. Chain-of-custody metadata is part of the export — tampering is detectable.
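The tamper-detection property can be sketched with a stdlib HMAC over the serialized bundle (a real deployment would likely use asymmetric signatures and hardware-held keys; the key and field names here are assumptions):

```python
import hashlib, hmac, json

KEY = b"per-device-secret"  # illustrative; not how production keys are stored

def sign_bundle(bundle: dict) -> dict:
    """Serialize deterministically, then attach an HMAC-SHA256 tag."""
    payload = json.dumps(bundle, sort_keys=True).encode()
    return {"payload": payload.decode(),
            "sig": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify_bundle(signed: dict) -> bool:
    expected = hmac.new(KEY, signed["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

signed = sign_bundle({"session": "S-001", "score": 0.82})
tampered = dict(signed, payload=signed["payload"].replace("0.82", "0.99"))
# verify_bundle(signed) is True; verify_bundle(tampered) is False —
# any post-hoc edit to the payload breaks the signature.
```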

Interview Assessment Cloud Platform

Every session compounds into forensic intelligence.

Live interview signals and transcripts flow into the SatCam Cloud, where Cortex turns them into searchable evidence, cross-session correlation, automated reports, and governance artifacts — all traceable to the biometric signals that produced each claim.

Live Interview · on-edge + cloud

  • Real-Time Interview Steering
    Credibility and stress scores displayed on-device plus next-question suggestions, delivered within seconds of a subject's utterance.
  • Biometric Extraction
    Automated capture of physiological signals — muscle micro-movement, voice, heart rate, respiration — fused into a structured per-second vector.
  • Fused Transcription
    Time-stamped transcript inline-annotated with the biometric metadata that accompanied each token. Q/A boundaries and extracted claims surface automatically.
  • Multi-Language Support
    Speech-to-text covers most common languages.
  • Air-Gap Mode
    Device can run fully offline; when a link is available, signed session bundles sync to the cloud on the operator's schedule.
  • Operator Dashboard
    Live cues, confidence indicators, and a running transcript on the interviewer's display. Touchable, keyboard-navigable, and color-coded for glanceability.
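The fused transcript described above can be pictured as token-level records that carry their co-occurring biometrics inline — the shape below is an assumption for illustration, not the product schema:

```python
# One illustrative fused-transcript token on the shared time base.
token = {
    "t": 12.31,                                   # seconds, shared clock
    "token": "home",
    "speaker": "subject",
    "qa": {"question_id": 7, "role": "answer"},   # surfaced Q/A boundary
    "biometrics": {"heart_bpm": 78.0, "resp_rpm": 15.0, "aus": {"AU07": 0.8}},
}

def tokens_with_au(transcript, au, threshold=0.5):
    """Find words spoken while a given action unit was active."""
    return [t["token"] for t in transcript
            if t["biometrics"]["aus"].get(au, 0.0) > threshold]

print(tokens_with_au([token], "AU07"))  # → ['home']
```

Inlining the metadata per token is what makes queries like "which words coincided with lid-tightening" a simple filter rather than a cross-file join.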

Intelligence & Evidence Analysis

  • Transcript-Metadata Heat Maps
    Every claim annotated with the physiological and linguistic events that accompanied it — a forensic heat-map across the entire interview.
  • Bias Mitigation Analytics
    Surfaces patterns of interviewer bias across questions, cohorts, and operators. Flags systemic drift before it affects adjudication.
  • Interview Cross-Correlation
    Identifies conflicting answers across stakeholders, sessions, and locations — an automated network of contradictions across all stored interviews.
  • Natural Language Query
    Ask Cortex a question about a session — or a cohort. Cortex returns a grounded answer with citations back to the transcript and biometric timeline.
  • Global Deep-Search
    Search across every session ever recorded, uncovering relationships across the full interview archive.
  • Automated Alerts
    Entities, claims, and topics extracted from the transcript can trigger alerts to stakeholders in real time, while the interview is still underway.
  • Session Replay + Annotation
    Play back any session at any speed, scrubbable by claim, with biometric overlays. Human reviewers can annotate; annotations feed Cortex's continuous learning.
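The cross-correlation idea above can be reduced to a toy: group extracted claims by topic and flag topics where stored sessions disagree (the claim schema and values are assumptions):

```python
# Hypothetical claims extracted from three stored sessions.
claims = [
    {"session": "A-17", "subject": "driver",  "topic": "location_tue", "value": "warehouse"},
    {"session": "B-02", "subject": "manager", "topic": "location_tue", "value": "warehouse"},
    {"session": "C-09", "subject": "guard",   "topic": "location_tue", "value": "off-site"},
]

def contradictions(claims):
    """Return topics on which the stored sessions give conflicting values."""
    by_topic = {}
    for c in claims:
        by_topic.setdefault(c["topic"], set()).add(c["value"])
    return {topic for topic, values in by_topic.items() if len(values) > 1}

print(contradictions(claims))  # → {'location_tue'}
```

The real system would resolve entities and paraphrases before comparing values; the grouping-then-disagreement structure is the same.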

Automation & Report Generation

  • Automated Report Drafting
    Converts fused metadata and transcripts into investigative reports, legal summaries, or adjudication packets — drafted by Cortex, ready for human sign-off.
  • Explainable Documentation
    Generates the rationale behind every credibility score — the signals, the linguistic context, and the model's reasoning path. Required for 2026 AML governance.
  • Adjudication Automation
    Risk-based filtering flags low-risk cases, so human reviewers concentrate their time on the high-signal sessions.
  • Legal Summary Export
    Court-formatted exports with chain-of-custody, signed artifacts, and full rationale. Consumable by prosecution, defense, and the adjudicating authority.
  • Case Timelines
    Multi-session case views that stitch interviews into a single narrative — visualizing contradictions, consistency, and drift across stakeholders.
  • Scheduled Re-Assessment
    As Cortex learns, historical sessions can be re-scored against the current model. Changes are surfaced with a diff — and attributed back to the new evidence.
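A re-assessment diff can be sketched as: re-score each stored session under the current model and surface only the ones whose score moved past a threshold (the threshold, field names, and stand-in scorer are assumptions):

```python
def rescore_diff(old_scores, new_model, sessions, min_delta=0.05):
    """Re-score sessions and return only those whose score changed materially."""
    diffs = []
    for s in sessions:
        new = new_model(s)
        delta = new - old_scores[s["id"]]
        if abs(delta) >= min_delta:
            diffs.append({"id": s["id"], "old": old_scores[s["id"]],
                          "new": new, "delta": round(delta, 3)})
    return diffs

old = {"S-1": 0.40, "S-2": 0.70}
sessions = [{"id": "S-1", "risk": 0.62}, {"id": "S-2", "risk": 0.71}]
new_model = lambda s: s["risk"]   # stand-in for the updated scorer
changed = rescore_diff(old, new_model, sessions)
# Only S-1 is surfaced: it moved 0.40 → 0.62, while S-2 moved under the threshold.
```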

See the SatCam in action.

We are running capability demonstrations with customers and launch partners in September 2026.

Book a demo