
Commit

Strikethrough added
Struck through "Tracking is performed using FaceNet" on website and GitHub README.
bmaneesh committed May 22, 2024
1 parent 41919b1 commit 27850ec
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -10,7 +10,7 @@ PyAFAR GPU capabilities work only on Linux or WSL2.

![pyafar_pipeline](./images/pyafar_pipeline_updated.jpg)

- - `Facial Landmarks, Head Pose and Tracking`: Face detection and landmark prediction is done using the [MediaPipe](https://research.google/pubs/pub48292/) library. Tracking is performed using the [FaceNet](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Schroff_FaceNet_A_Unified_2015_CVPR_paper.pdf). The Perspective-n-Point (PnP) method is used to predict Roll, Pitch and Yaw
+ - `Facial Landmarks, Head Pose and Tracking`: Face detection and landmark prediction is done using the [MediaPipe](https://research.google/pubs/pub48292/) library. <s>Tracking is performed using the [FaceNet](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Schroff_FaceNet_A_Unified_2015_CVPR_paper.pdf)</s>. The Perspective-n-Point (PnP) method is used to predict Roll, Pitch and Yaw
- `Face Normalization`: The landmark predictions are used to normalize faces using the [dlib](http://dlib.net/) library.
- `AU predictions`: Normalized faces are used for AU predictions (occurrence and intensity). Separate detection modules for occurrence are available for adults and infants. Intensity predictions are available for adults only.
- `Output`: PyAFAR can output frame-level predictions in CSV and JSON formats to enable easy reading with most platforms used by both computational as well as domain experts.
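The README hunk above says roll, pitch and yaw come from a Perspective-n-Point fit. As an illustrative sketch only (not PyAFAR's actual code), the last step of such a fit, converting a recovered rotation matrix into roll/pitch/yaw angles, might look like this; the function name and the Tait-Bryan angle convention are assumptions:

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (roll, pitch, yaw) in degrees.

    Assumes the common Tait-Bryan x-y-z convention; PyAFAR's exact
    convention and implementation may differ.
    """
    pitch = np.arcsin(-R[2, 0])           # rotation about the y-axis
    roll = np.arctan2(R[2, 1], R[2, 2])   # rotation about the x-axis
    yaw = np.arctan2(R[1, 0], R[0, 0])    # rotation about the z-axis
    return tuple(float(a) for a in np.degrees([roll, pitch, yaw]))

# The identity rotation maps to zero roll, pitch and yaw.
print(rotation_to_euler(np.eye(3)))  # (0.0, 0.0, 0.0)
```

In a full PnP pipeline (e.g. OpenCV's `solvePnP`), the rotation is first estimated from 2D landmarks and a 3D face model, then converted to angles as above.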
2 changes: 1 addition & 1 deletion index.html
@@ -88,7 +88,7 @@ <h3><b>Abstract</b></h3>
<p class="text-fig">Pipeline of PyAFAR.
</p>
<p>PyAFAR has separate modules for facial feature extraction and tracking, Face normalization and AU predictions:</p>
- <p class="text-left">It uses <a href="https://research.google/pubs/pub48292/">MediaPipe</a> library for landmarks. Tracking is performed using the <a href="https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Schroff_FaceNet_A_Unified_2015_CVPR_paper.pdf">FaceNet</a>. The Perspective-n-Point (PnP) method is used to predict Roll, Pitch and Yaw.</p>
+ <p class="text-left">It uses <a href="https://research.google/pubs/pub48292/">MediaPipe</a> library for landmarks. <s>Tracking is performed using the <a href="https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Schroff_FaceNet_A_Unified_2015_CVPR_paper.pdf">FaceNet</a></s>. The Perspective-n-Point (PnP) method is used to predict Roll, Pitch and Yaw.</p>

<p class="text-left"> The landmark predictions are used to normalize faces using the <a href="http://dlib.net/">dlib</a> library.</p>
<p class="text-left"> Normalized faces are used for AU predictions (occurrence and intensity). Separate detection modules for occurrence are available for adults and infants. Intensity predictions are available for adults only.</p>

