Vladimir Mandic edited this page Apr 16, 2021 · 53 revisions

Demos

Demos are included in /demo:


Main Demo

  • index.html: Full demo using the Human ESM module running in browsers,
    with selectable backends and WebWorkers

You can run the browser demo live from Git pages, by serving the demo folder from your own web server, or by using
the included micro HTTP2 server with source file monitoring and dynamic rebuild

For notes on how to use the built-in micro server, see Development Server


Demo Inputs

The demo page demo/index.html loads demo/index.js

Demo can process:

  • Sample images
  • WebCam input
  • WebRTC input

Note that a WebRTC connection requires a WebRTC server that provides a compatible media track, such as an H.264 video track
For an example of such a WebRTC server implementation, see the https://github.com/vladmandic/stream-rtsp project,
which connects to an IP security camera using the RTSP protocol and transcodes the stream to WebRTC,
ready to be consumed by a client such as Human


Demo Options

The demo implements several ways to use the Human library,
all configurable in the browse.js:ui configuration object and in the UI itself:

const ui = {
  crop: true, // video mode crop to size or leave full frame
  columns: 2, // when processing sample images create this many columns
  facing: true, // camera facing front or back
  useWorker: false, // use web workers for processing
  worker: 'index-worker.js',
  samples: ['../assets/sample6.jpg', '../assets/sample1.jpg', '../assets/sample4.jpg', '../assets/sample5.jpg', '../assets/sample3.jpg', '../assets/sample2.jpg'],
  compare: '../assets/sample-me.jpg',
  useWebRTC: false, // use webrtc as camera source instead of local webcam
  webRTCServer: 'http://localhost:8002',
  webRTCStream: 'reowhite',
  console: true, // log messages to browser console
  maxFPSframes: 10, // keep fps history for how many frames
  modelsPreload: true, // preload human models on startup
  modelsWarmup: true, // warmup human models on startup
  busy: false, // internal camera busy flag
  buffered: true, // should output be buffered between frames
  bench: true, // show gl fps benchmark window
};
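Values changed through the UI are merged over these defaults at runtime. A minimal sketch of that override pattern, using a small subset of the `ui` object above (the override values are hypothetical):

```javascript
// Default demo options (subset of the ui object above)
const ui = {
  crop: true,
  columns: 2,
  useWorker: false,
  console: true,
};

// Hypothetical overrides, e.g. collected from UI controls
const overrides = { useWorker: true, columns: 3 };

// Shallow-merge overrides over defaults without mutating either object
const effective = { ...ui, ...overrides };

console.log(effective); // { crop: true, columns: 3, useWorker: true, console: true }
```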

Additionally, some parameters are held inside Human instance:

human.draw.options = {
  color: <string>'rgba(173, 216, 230, 0.3)', // 'lightblue' with light alpha channel
  labelColor: <string>'rgba(173, 216, 230, 1)', // 'lightblue' with dark alpha channel
  shadowColor: <string>'black',
  font: <string>'small-caps 16px "Segoe UI"',
  lineHeight: <number>20,
  lineWidth: <number>6,
  pointSize: <number>2,
  roundRect: <number>28,
  drawPoints: <boolean>false,
  drawLabels: <boolean>true,
  drawBoxes: <boolean>true,
  drawPolygons: <boolean>true,
  fillPolygons: <boolean>false,
  useDepth: <boolean>true,
  useCurves: <boolean>false,
  bufferedOutput: <boolean>false,
  useRawBoxes: <boolean>false,
};

The demo app can also use URL parameters to override configuration values
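Which parameters are honored depends on the demo version; as a sketch, such overrides are typically parsed with `URLSearchParams` and coerced to the right type (the parameter names below are illustrative, not the demo's actual ones):

```javascript
// Parse hypothetical overrides such as ?useWorker=true&columns=3 from a query string
function parseOverrides(search) {
  const params = new URLSearchParams(search);
  const overrides = {};
  for (const [key, value] of params.entries()) {
    if (value === 'true' || value === 'false') overrides[key] = value === 'true'; // booleans
    else if (value !== '' && !Number.isNaN(Number(value))) overrides[key] = Number(value); // numbers
    else overrides[key] = value; // everything else stays a string
  }
  return overrides;
}

// In the browser this would be parseOverrides(window.location.search)
console.log(parseOverrides('?useWorker=true&columns=3'));
// { useWorker: true, columns: 3 }
```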




Face 3D Rendering using WebGL

face3d.html: Demo that uses Three.js for 3D WebGL rendering of a detected face




Face Recognition Demo

demo/facematch.html: Demo that uses all face description and embedding features to
detect, extract, and identify all faces, and to calculate similarity between them

It highlights functionality such as:

  • Loading images
  • Extracting faces from images
  • Calculating face embedding descriptors
  • Calculating face similarity and sorting faces by it
  • Finding best face match based on a known list of faces and printing matches
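The matching step boils down to comparing embedding descriptors; Human provides its own similarity and match helpers, so this standalone cosine-similarity version only illustrates the underlying idea:

```javascript
// Cosine similarity between two face embedding descriptors (equal-length arrays)
function similarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Find the best match for an unknown descriptor in a list of known faces
function bestMatch(unknown, known) {
  return known
    .map((entry) => ({ name: entry.name, score: similarity(unknown, entry.embedding) }))
    .sort((a, b) => b.score - a.score)[0];
}

// Toy 3-dimensional descriptors; real embeddings have hundreds of dimensions
const known = [
  { name: 'alice', embedding: [0.9, 0.1, 0.2] },
  { name: 'bob', embedding: [0.1, 0.9, 0.3] },
];
console.log(bestMatch([0.85, 0.15, 0.25], known).name); // alice
```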




NodeJS Demo

  • node.js: Demo using NodeJS with a CommonJS module
    A simple demo that can process any input image
node demo/node.js
10:28:53.444 Human: version: 0.40.5 TensorFlow/JS version: 3.4.0
10:28:53.445 Human: platform: linux x64
10:28:53.445 Human: agent: NodeJS v15.7.0
10:28:53.445 Human: setting backend: tensorflow
10:28:53.505 Human: load model: /models/faceboxes
10:28:53.505 Human: load model: /models/iris
10:28:53.522 Human: load model: /models/age
10:28:53.529 Human: load model: /models/gender
10:28:53.535 Human: load model: /models/emotion
10:28:53.607 Human: load model: /models/handdetect
10:28:53.608 Human: load model: /models/handskeleton
10:28:53.698 Human: load model: /models/posenet
10:28:53.698 Human: tf engine state: 31020964 bytes 932 tensors
2021-03-06 10:28:53 INFO:  Loaded: [ 'posenet', 'handpose', 'age', 'gender', 'emotion', 'face', [length]: 6 ]
2021-03-06 10:28:53 INFO:  Memory state: { numTensors: 932, numDataBuffers: 932, numBytes: 31020964 }
2021-03-06 10:28:53 WARN:  Parameters: <input image> missing
2021-03-06 10:28:53 STATE:  Processing embedded warmup image: full
2021-03-06 10:28:54 DATA:  Face:  [
  {
    confidence: 0.9981339573860168,
    faceConfidence: undefined,
    boxConfidence: undefined,
    box: [ 43, 20, 182, 231, [length]: 4 ],
    mesh: undefined,
    boxRaw: null,
    meshRaw: undefined,
    annotations: undefined,
    age: 24.3,
    gender: 'female',
    genderConfidence: 0.84,
    emotion: [ { score: 0.83, emotion: 'neutral' }, { score: 0.12, emotion: 'sad' }, [length]: 2 ],
    embedding: [ [length]: 0 ],
    iris: 0
  },
]
2021-03-06 10:28:54 DATA:  Body: [
  {
    score: 0.9466612444204443,
    keypoints: [
      { score: 0.9937239289283752, part: 'nose', position: { x: 597, y: 126 } },
      { score: 0.994640588760376, part: 'leftEye', position: { x: 602, y: 113 } },
      { score: 0.9851681590080261, part: 'rightEye', position: { x: 597, y: 114 } },
      { score: 0.9937878251075745, part: 'leftEar', position: { x: 633, y: 131 } },
      { score: 0.8690065145492554, part: 'rightEar', position: { x: 584, y: 146 } },
      { score: 0.9881162643432617, part: 'leftShoulder', position: { x: 661, y: 228 } },
      { score: 0.9983603954315186, part: 'rightShoulder', position: { x: 541, y: 253 } },
      { score: 0.9678125381469727, part: 'leftElbow', position: { x: 808, y: 392 } },
      { score: 0.9479317665100098, part: 'rightElbow', position: { x: 461, y: 387 } },
      { score: 0.9611830711364746, part: 'leftWrist', position: { x: 896, y: 521 } },
      { score: 0.8795050382614136, part: 'rightWrist', position: { x: 323, y: 503 } },
      { score: 0.9769214391708374, part: 'leftHip', position: { x: 655, y: 540 } },
      { score: 0.9489732384681702, part: 'rightHip', position: { x: 567, y: 533 } },
      { score: 0.9663040041923523, part: 'leftKnee', position: { x: 646, y: 827 } },
      { score: 0.9643898010253906, part: 'rightKnee', position: { x: 561, y: 818 } },
      { score: 0.9095755815505981, part: 'leftAnkle', position: { x: 667, y: 1103 } },
      { score: 0.7478410005569458, part: 'rightAnkle', position: { x: 624, y: 1059 } },
      [length]: 17
    ]
  },
]
2021-03-06 10:28:54 DATA:  Hand: [ [length]: 0 ]
2021-03-06 10:28:54 DATA:  Gesture: [ { body: 0, gesture: 'leaning right' }, [length]: 1 ]
10:28:54.968 Human: Warmup full 621 ms

NodeJS Multi-process Demo

  • node-multiprocess.js and node-multiprocess-worker.js: Demo using NodeJS with CommonJS module
    Demo that starts n child worker processes for parallel execution
node demo/node-multiprocess.js
2021-04-16 08:33:13 INFO:  @vladmandic/face-api version 1.1.12
2021-04-16 08:33:13 INFO:  User: vlado Platform: linux Arch: x64 Node: v15.7.0
2021-04-16 08:33:13 INFO:  FaceAPI multi-process test
2021-04-16 08:33:13 STATE: Main: started worker: 268453
2021-04-16 08:33:13 STATE: Main: started worker: 268459
2021-04-16 08:33:13 STATE: Main: started worker: 268460
2021-04-16 08:33:13 STATE: Main: started worker: 268466
2021-04-16 08:33:14 STATE: Worker: PID: 268459 TensorFlow/JS 3.4.0 FaceAPI 1.1.12 Backend: tensorflow
2021-04-16 08:33:14 STATE: Worker: PID: 268466 TensorFlow/JS 3.4.0 FaceAPI 1.1.12 Backend: tensorflow
2021-04-16 08:33:14 STATE: Worker: PID: 268460 TensorFlow/JS 3.4.0 FaceAPI 1.1.12 Backend: tensorflow
2021-04-16 08:33:14 STATE: Worker: PID: 268453 TensorFlow/JS 3.4.0 FaceAPI 1.1.12 Backend: tensorflow
2021-04-16 08:33:15 STATE: Main: dispatching to worker: 268466
2021-04-16 08:33:15 STATE: Main: dispatching to worker: 268460
2021-04-16 08:33:15 INFO:  Latency: worker initializtion:  1860 message round trip: 39
2021-04-16 08:33:15 DATA:  Worker received message: 268466 { test: true }
2021-04-16 08:33:15 STATE: Main: dispatching to worker: 268459
2021-04-16 08:33:15 STATE: Main: dispatching to worker: 268453
2021-04-16 08:33:15 DATA:  Worker received message: 268460 { image: 'demo/sample2.jpg' }
2021-04-16 08:33:15 DATA:  Worker received message: 268459 { image: 'demo/sample3.jpg' }
2021-04-16 08:33:15 DATA:  Worker received message: 268453 { image: 'demo/sample4.jpg' }
2021-04-16 08:33:15 DATA:  Worker received message: 268466 { image: 'demo/sample1.jpg' }
2021-04-16 08:33:17 DATA:  Main: worker finished: 268466 detected faces: 3
2021-04-16 08:33:17 STATE: Main: dispatching to worker: 268466
2021-04-16 08:33:17 DATA:  Main: worker finished: 268460 detected faces: 3
2021-04-16 08:33:17 STATE: Main: dispatching to worker: 268460
2021-04-16 08:33:17 DATA:  Worker received message: 268466 { image: 'demo/sample5.jpg' }
2021-04-16 08:33:17 DATA:  Worker received message: 268460 { image: 'demo/sample6.jpg' }
2021-04-16 08:33:17 DATA:  Main: worker finished: 268453 detected faces: 4
2021-04-16 08:33:17 STATE: Main: worker exit: 268453 0
2021-04-16 08:33:17 DATA:  Main: worker finished: 268459 detected faces: 3
2021-04-16 08:33:17 STATE: Main: worker exit: 268459 0
2021-04-16 08:33:18 DATA:  Main: worker finished: 268466 detected faces: 5
2021-04-16 08:33:18 STATE: Main: worker exit: 268466 0
2021-04-16 08:33:18 DATA:  Main: worker finished: 268460 detected faces: 4
2021-04-16 08:33:18 INFO:  Processed: 6 images in total: 4930 ms working: 3069 ms average: 511 ms
2021-04-16 08:33:18 STATE: Main: worker exit: 268460 0
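The dispatch pattern visible in the log above — a main process feeding a queue of images to n workers and re-dispatching to each worker as soon as it finishes — can be sketched in plain JavaScript. The demo uses real child processes; the async functions below only stand in for them:

```javascript
// Simulated parallel dispatch: workers pull from a shared queue until it drains
async function runPool(images, numWorkers, processImage) {
  const queue = [...images];
  const results = [];
  async function workerLoop(id) {
    while (queue.length > 0) {
      const image = queue.shift(); // dispatch next image to this worker
      results.push({ worker: id, image, faces: await processImage(image) });
    }
  }
  // Start all workers; each keeps processing until no images remain
  await Promise.all(Array.from({ length: numWorkers }, (_, id) => workerLoop(id)));
  return results;
}

// Stand-in for a worker's face detection call
const fakeDetect = async (image) => image.length % 5;

runPool(['demo/sample1.jpg', 'demo/sample2.jpg', 'demo/sample3.jpg'], 2, fakeDetect)
  .then((results) => console.log(`processed ${results.length} images`));
```

In the real demo each worker is a forked child process running node-multiprocess-worker.js, and messages are exchanged over the IPC channel instead of in-process function calls.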