The objective of this project is to capture and quantify facial movements.
Monitoring and rehabilitation of facial motor disorders currently rely on essentially subjective means of evaluation (e.g., muscle testing and clinical scores). This lack of quantitative data is a major obstacle to the effective management of patients.
In order to optimize patient care and monitoring, the FaceMocap platform provides state-of-the-art sub-millimeter-scale, three-dimensional analysis of facial movements.
The platform comprises 10 VICON™ Vantage V16 optoelectronic cameras. The data captured, processed, and modeled on disabling pathologies affecting the face enable the personalization of treatments, with the aim of reducing invasive procedures, complications, and length of hospital stay.
The objective of this project is to improve the analysis and monitoring of facial movements and their “anomalies” through Artificial Intelligence.
Building on previous work, three focus areas have been identified thanks to the complementary expertise of the MIS (Modeling, Information & Systems, Amiens) and LML (Mathematics Laboratory of Lens) laboratories.
The main objective of this project is to evaluate the attention that patients with facial palsy pay to the side of the face affected by the movement abnormality, compared with healthy volunteers.
The objective of this project is to evaluate the effectiveness of a virtual-reality rehabilitation protocol in reducing synkinesis in patients with recent-onset peripheral facial paralysis (≤ 12 months), compared with conventional rehabilitation protocols.