
Biometric attendance.
On any webcam.

Eliminate time theft. Deploy secure, offline biometric attendance on any computer in minutes. Zero upfront costs.

100% Privacy
Open Source
Facenox desktop interface
Desktop client

Local Deployment.
No Subscription Fees.

Enroll and verify on-device. Zero cloud processing.

Liveness Detection

Built-in detection.

Standard systems are easily fooled by a photo or screen. Facenox scans for real skin texture and light patterns to instantly detect a spoof.
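One common texture cue behind such checks: a re-photographed face (a phone screen or a print) carries less high-frequency detail than live skin. A simplified, self-contained sketch of that idea, for illustration only; Facenox's actual check runs a trained liveness model, not this heuristic:

```python
import numpy as np

def texture_sharpness(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response over a grayscale crop.

    Flat, re-photographed surfaces (screens, prints) tend to score
    lower than live skin texture. Illustrative heuristic only.
    """
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # valid-mode 2D convolution via shifted slices
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())
```

A textured crop scores strictly higher than a uniform one, which is the signal a spoof check thresholds on.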

Your Device
Local Storage
Offline-First

No Internet Required.

Everything happens on your device. You control when and if you sync to the Dashboard.

AES-256 Local Encryption
Data Privacy

Privacy by Design

Photos never leave your device. For recognition, Facenox stores only an encrypted face embedding: a set of numbers describing facial geometry that cannot be reverse-engineered back into a face.
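To illustrate why an embedding suffices, matching reduces to comparing two vectors. A minimal sketch; the function names and the 0.5 threshold here are illustrative, not Facenox's actual API:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two face embeddings; returns a score in [-1, 1]."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

def is_match(stored: np.ndarray, probe: np.ndarray, threshold: float = 0.5) -> bool:
    # the original photo is never needed again: only vectors are compared
    return cosine_similarity(stored, probe) >= threshold
```

Enrollment keeps the vector; every later clock-in produces a fresh vector from the camera and compares the two.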

GDPR COMPLIANT
ZERO-KNOWLEDGE
NO EXTERNAL APIS
Zero hardware cost

Any camera.
Zero vendor lock-in.

Traditional biometric systems tie you to expensive proprietary scanners. Facenox runs on what you already own.

The old way

Locked to vendor hardware.

  • Expensive proprietary scanners to buy and maintain
  • Annual support contracts you can't escape
  • Vendor can deactivate your system remotely
  • Single point of failure if the device breaks
Facenox

It Just Works.

No scanner needed

Enrollment and recognition run on whatever camera is already connected. Built-in, USB, or external capture card.

You own everything

Open source codebase. Local database. No vendor can remotely disable, update, or access your system.

Data boundary

Biometrics never leave the device.

Stays on your device

Local

  • Biometric templates
  • Face enrollment photos
  • Recognition decisions
  • Attendance logs
Syncs only when you allow it

Remote

  • Attendance record summaries
  • Member & group metadata
  • Device & sync status
  • Attendance sessions
Transparency

Auditable. Transparent.

The Facenox desktop app is open source. From biometric extraction to local encryption, every line of code is auditable by anyone.

Zero Backdoors
# --- recognizer.py ---

import asyncio
import logging
import time
from typing import List, Dict, Tuple, Optional, Any

import numpy as np

from database.face import FaceDatabaseManager
from .session_utils import init_face_recognizer_session

# ...

class FaceRecognizer:
    def __init__(
        self,
        model_path: str,
        input_size: Tuple[int, int],
        similarity_threshold: float,
        providers: Optional[List[str]],
    ):
        # ...

    async def _extract_embeddings(
        self, image: np.ndarray, face_data_list: List[Dict]
    ) -> List[np.ndarray]:
        # ...
        aligned_faces = align_faces_batch(image, face_data_list, self.input_size)
        batch_input = preprocess_batch(aligned_faces, self.INPUT_MEAN, self.INPUT_STD)
        # ...
        outputs = await loop.run_in_executor(
            None, lambda: self.session.run(None, feeds)
        )
        # ...
        return normalize_embeddings_batch(embeddings)

    async def recognize_face(
        self,
        image: np.ndarray,
        landmarks_5: List,
        allowed_person_ids: Optional[List[str]] = None,
    ) -> Dict:
        # ...
        embeddings = await self._extract_embeddings(image, face_data)
        # ...
        person_id, similarity = await self._find_best_match(
            embedding, allowed_person_ids, organization_id
        )
 
# --- detector.py ---

import numpy as np
import logging as log
from typing import List
from .session_utils import init_face_detector_session
from .postprocess import process_detection

# ...

class FaceDetector:
    def __init__(
        self,
        model_path: str,
        input_size: tuple,
        conf_threshold: float,
        nms_threshold: float,
        top_k: int,
        min_face_size: int,
    ):
        # ...
        self.detector = init_face_detector_session(
            model_path, input_size, conf_threshold, nms_threshold, top_k
        )

    def detect_faces(
        self, image: np.ndarray, enable_liveness: bool = False
    ) -> List[dict]:
        # ...
        orig_height, orig_width = image.shape[:2]
        self.detector.setInputSize((orig_width, orig_height))
        faces = self.detector.detect(image)[1]
        # ...
        for face in faces:
            landmarks_5 = face[4:14].reshape(5, 2)
            detection = process_detection(
                face, min_size, landmarks_5,
                orig_width, orig_height, margin,
            )
 
# --- cipher.py ---

"""
AES-256-GCM encryption for .facenox backup files.

Blob layout: MAGIC(9) | SALT(16) | IV(12) | CIPHERTEXT | TAG(16)
Key derivation: PBKDF2-HMAC-SHA256, 480k iterations.
"""

import os
import platform
import hashlib

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SALT_SIZE = 16
IV_SIZE = 12
KEY_SIZE = 32
PBKDF2_ITERS = 480_000
FACENOX_MAGIC = b"FACENOX\x00\x01"  # 9 bytes

def _derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, PBKDF2_ITERS, dklen=KEY_SIZE
    )

def encrypt_backup(plaintext: bytes, password: str) -> bytes:
    salt = os.urandom(SALT_SIZE)
    iv = os.urandom(IV_SIZE)
    # AESGCM.encrypt returns the ciphertext with the 16-byte tag appended
    encrypted = AESGCM(_derive_key(password, salt)).encrypt(iv, plaintext, None)
    return FACENOX_MAGIC + salt + iv + encrypted

# ...

def encrypt_local_data(plaintext: bytes) -> bytes:
    key = get_machine_key()
    iv = os.urandom(IV_SIZE)
    encrypted = AESGCM(key).encrypt(iv, plaintext, None)
    return iv + encrypted
 
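Reading a backup reverses that layout by slicing fixed-size fields off the front of the blob. A minimal parsing sketch; `parse_backup` is an illustrative helper, not part of Facenox, and actual decryption would derive the key with PBKDF2 and call `AESGCM(key).decrypt(iv, ciphertext, None)`:

```python
# Layout per the cipher.py docstring: MAGIC | SALT(16) | IV(12) | CIPHERTEXT+TAG
SALT_SIZE, IV_SIZE = 16, 12
FACENOX_MAGIC = b"FACENOX\x00\x01"

def parse_backup(blob: bytes):
    """Split a .facenox backup blob into its fields (illustrative helper)."""
    if not blob.startswith(FACENOX_MAGIC):
        raise ValueError("not a .facenox backup")
    off = len(FACENOX_MAGIC)
    salt = blob[off:off + SALT_SIZE]
    iv = blob[off + SALT_SIZE:off + SALT_SIZE + IV_SIZE]
    ciphertext = blob[off + SALT_SIZE + IV_SIZE:]  # GCM tag is its last 16 bytes
    return salt, iv, ciphertext
```

Because the salt and IV travel inside the blob, a backup is portable: only the password is needed to restore it on another machine.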
# --- liveness_detector.py ---

import cv2
import numpy as np
from typing import List, Dict, Optional
from .session_utils import init_onnx_session
from .preprocess import crop, extract_face_crops_from_detections
from .postprocess import validate_detection, run_batch_inference

# ...

def probability_to_logit_threshold(p: float) -> float:
    p = max(1e-6, min(1 - 1e-6, p))
    return np.log(p / (1 - p))

class LivenessDetector:
    def __init__(self, model_path, model_img_size, confidence_threshold, bbox_inc):
        # ...
        self.model_img_size = model_img_size
        self.logit_threshold = probability_to_logit_threshold(confidence_threshold)
        self.ort_session, self.input_name = self._init_session_(model_path)
        self.track_memory = TrackLivenessMemory()

    def detect_faces(
        self,
        image: np.ndarray,
        face_detections: List[Dict],
        tracking_namespace: Optional[str] = None,
    ) -> List[Dict]:
        # ...
        rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        # ...
        raw_logits = run_batch_inference(
            face_crops, self.ort_session, self.input_name, self.model_img_size
        )
        # ...
        results = assemble_liveness_results(
            valid_detections, raw_logits, self.logit_threshold, results
        )
Applications

Built for real-world environments.

Standard Setup

Any Location

Install on any computer with a webcam. People can clock in and out on-device with no internet needed and no accounts required.

Offline Deployment

Isolated Sites

Perfect for factories, warehouses, or restricted sites. Runs completely offline with zero dependency on a network connection.

Multi-Site Scale

Multiple Branches

Deploy across unlimited locations. View consolidated attendance from every site through one management dashboard.

Setup

Set up in minutes.
Scale when you need to.

Download and Install

Download the installer for Windows, macOS, or Linux and run it.

Enroll Your Team

Add members and register their face directly in the app.

Start Tracking

Registered members can now track attendance by simply facing the camera.

Optionally Sync

Pair with the management dashboard to view attendance remotely across all your locations.

Optional Remote

Centralized Device Control.

Remote Hub

Private

Facial embeddings never leave the local machine.

Logs

Time-in/out reports sync for payroll integration.

Monitoring

Monitor multiple sites and device health remotely.

Manage Multiple Branches.

Sync your local attendance logs to a central online dashboard to see everything in one place.

Remote Sync Pricing

Local is always free.
Remote Sync is a paid extension.

Free Forever

Everything you need for a single location.

Free
  • 1 Location
  • 2 Devices Synced
  • 10 Active Staff
  • 7-Day History
Get Started
Most Popular

Starter

Remote sync for growing teams.

$9/mo
  • 1 Location
  • Unlimited Devices
  • 50 Active Staff
  • 30-Day History
Get Started

Enterprise

Manage multiple branches.

$29/mo
  • Unlimited Locations
  • Unlimited Devices
  • 150 Active Staff
  • Lifetime History
Get Started

Ready to deploy?

Download the desktop app for free, or start your workspace to use Remote Sync and manage multiple branches.