CN116403256A - Face motion capture method, system, electronic device and medium

Info

Publication number
CN116403256A
Authority
CN
China
Prior art keywords
face
processed
image
focus
mask
Prior art date
Legal status
Pending
Application number
CN202310330575.9A
Other languages
Chinese (zh)
Inventor
胡锟
李寰宇
唐义祺
魏子昆
沈玥
Current Assignee
Li Huanyu
Shanghai Beifuting Technology Co ltd
Yunnan Yunke Characteristic Plant Extraction Laboratory Co ltd
Original Assignee
Shanghai Beifuting Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Beifuting Technology Co ltd
Priority to CN202310330575.9A
Publication of CN116403256A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face motion capture method, system, electronic device and medium, and relates to the technical field of facial motion capture. The face motion capture method comprises the following steps: acquiring dynamic information of a face to be processed; capturing and masking each frame of the face image to be processed based on the faceji universal interface to generate a primary mask image; performing keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed; inputting the face image to be processed into a target detection model to obtain the lesion area to be processed; and determining a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image. The secondary mask image masks the non-lesion areas of the face image to be processed while displaying its lesion areas. The invention dynamically detects the face and precisely reveals the skin-disease areas of the face.

Description

Face motion capture method, system, electronic device and medium
Technical Field
The invention relates to the technical field of facial motion capture, and in particular to a facial motion capture method, system, electronic device and medium for marking skin-disease areas.
Background
Many dermatology patients need to consult doctors online because of geographical location and other reasons. However, patients with facial skin diseases often feel anxious or embarrassed, do not want to expose personal facial information, and are reluctant to appear before a doctor face to face, which means they cannot show the doctor their lesion areas. This hinders communication between doctor and patient and is not conducive to the doctor's understanding of the patient's condition.
In the prior art, the patient can cover the facial areas outside the skin-disease area, but such shielding is mostly done manually and cannot reveal the lesion precisely. Moreover, the patient may turn the face at will, and the shielding then has to move with the face, which lowers the efficiency of the doctor's consultation.
Disclosure of Invention
The invention aims to provide a face motion capture method, system, electronic device and medium that dynamically detect the face and precisely reveal its skin-disease areas.
In order to achieve the above object, the present invention provides the following solutions:
In a first aspect, the present invention provides a face motion capture method, comprising:
acquiring dynamic information of a face to be processed; the dynamic information comprises multiple frames of face images to be processed, and each frame of the face image to be processed contains a lesion area and a non-lesion area;
capturing and masking each frame of the face image to be processed based on a faceji universal interface to generate a primary mask image;
performing keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed;
inputting the face image to be processed into a target detection model to obtain the lesion area to be processed; the target detection model is trained from a training sample set and a neural network, and each sample in the training sample set comprises a face sample image and the lesion area corresponding to that face sample image;
determining a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image; the secondary mask image masks the non-lesion areas of the face image to be processed while displaying its lesion areas.
Optionally, performing keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed specifically comprises:
inputting the primary mask image into a convolutional neural network for feature extraction to obtain a first face feature map;
performing a Fourier pooling operation on the first face feature map to obtain a second face feature map;
inputting the second face feature map into a pre-trained logistic regression classifier to obtain the face keypoints to be processed.
Optionally, the face motion capture method further comprises:
triangulating the face keypoints to be processed with the Delaunay algorithm to obtain a triangular mesh of the face to be processed.
Optionally, the training process of the target detection model comprises:
acquiring a number of face sample images;
labeling each face sample image to determine its corresponding lesion area; the face sample images and their corresponding lesion areas form the training sample set;
inputting the training sample set into a neural network model for training to obtain an optimal neural network model; the optimal neural network model is the target detection model.
Optionally, determining a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image specifically comprises:
matching the lesion area to be processed with the face keypoints to be processed to determine the lesion keypoints;
performing a lesion image segmentation operation on the primary mask image based on the lesion keypoints to obtain the secondary mask image.
In a second aspect, the present invention provides a facial motion capture system, comprising:
a dynamic information acquisition module, used to acquire the dynamic information of the face to be processed; the dynamic information comprises multiple frames of face images to be processed, each containing a lesion area and a non-lesion area;
a mask image generation module, used to capture and mask each frame of the face image to be processed based on the faceji universal interface to generate a primary mask image;
a face keypoint determination module, used to perform keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed;
a face lesion determination module, used to input the face image to be processed into a target detection model to obtain the lesion area to be processed; the target detection model is trained from a training sample set and a neural network, and each sample in the set comprises a face sample image and its corresponding lesion area;
a face lesion mask display module, used to determine a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image; the secondary mask image masks the non-lesion areas of the face image to be processed while displaying its lesion areas.
In a third aspect, the present invention provides an electronic device comprising a memory and a processor;
the memory is used to store a computer program, and the processor is used to run the computer program to perform the face motion capture method.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program;
when executed by a processor, the computer program implements the steps of the face motion capture method.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
The face motion capture method, system, electronic device and medium of the invention process the detected dynamic information of the face to be processed frame by frame, capturing and masking each frame of the face image to be processed based on the faceji universal interface to obtain a primary mask image; adopting the faceji universal interface makes the image processing more convenient. The face keypoints to be processed are then determined with a face keypoint matching algorithm, the lesion area corresponding to the face image to be processed is obtained with a target detection model, and these are combined with the primary mask image to obtain a secondary mask image that masks the non-lesion areas of the face while displaying its lesion areas. The invention can therefore detect the skin-disease area accurately and expose it while masking the other, non-diseased areas, achieving facial motion capture with precise skin-disease marking. In addition, because the invention acquires dynamic face information and runs the detection on every frame, the face remains correctly shielded while the patient turns the head, which further improves the efficiency of the doctor's consultation.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a face motion capture method of the present invention;
fig. 2 is a schematic structural diagram of a facial motion capture system according to the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the invention.
The invention provides a face motion capture method, system, electronic device and medium in which the face of a dermatology patient is covered by a face-tracking virtual-image mask, so that the doctor can discuss the condition without the patient's face being exposed: the lesion areas are displayed dynamically while the other facial areas remain hidden.
In order to make the objects, features and advantages of the invention more comprehensible, the invention is described in further detail below with reference to the drawings and specific embodiments.
Example 1
As shown in fig. 1, the present embodiment provides a face motion capturing method, including:
step 100, obtaining dynamic information of a face to be processed; the dynamic information of the face to be processed comprises a plurality of frames of face images to be processed; each frame of the face image to be processed comprises a focus area and a non-focus area. Wherein the focus area comprises rose acne, folliculitis, etc. on human face.
Specifically, when a dermatology patient has a video consultation with a doctor, the video data of the patient and the doctor is acquired.
Step 200: capture and mask each frame of the face image to be processed based on the faceji universal interface to generate a primary mask image.
The faceji universal interface captures the patient's face in every frame in real time while the patient communicates with the doctor over video, and generates a dynamic mask covering effect rather than a personalized processing of a static picture. The specific animation effect of the dynamic mask can be chosen by the patient. A per-frame sketch of this step follows.
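The patent does not disclose the internals of the faceji interface, so the following is only a minimal sketch of what "capture and mask each frame" could look like, using MediaPipe FaceMesh as a stand-in face tracker and a flat fill color as the mask. The function name, library choice and color are illustrative assumptions, not the patented pipeline.

```python
# A minimal per-frame "capture and mask" sketch. faceji itself is not
# disclosed here; MediaPipe FaceMesh stands in as the face tracker, and
# the flat mask color and all names below are illustrative assumptions.
import cv2
import numpy as np
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def primary_mask_frame(frame_bgr, mask_color=(180, 150, 120)):
    """Return a copy of the frame with the tracked face covered by a flat mask."""
    h, w = frame_bgr.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return frame_bgr  # no face in this frame; leave it unchanged
    pts = np.array([(int(lm.x * w), int(lm.y * h))
                    for lm in result.multi_face_landmarks[0].landmark],
                   dtype=np.int32)
    out = frame_bgr.copy()
    cv2.fillConvexPoly(out, cv2.convexHull(pts), mask_color)  # cover the face
    return out
```

In a real deployment the flat fill would be replaced by the cartoon-style mask animation the patient selects; the tracking-then-cover structure per frame is the part this sketch illustrates.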
Step 300: perform keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed. After the cartoon-style face is obtained, the face keypoint matching algorithm determines the exact locations of the facial parts, such as the positions of the nose and mouth. The face keypoint matching algorithm is based on a convolutional neural network.
Step 300 specifically includes the following sub-steps (a code sketch follows the list):
1) Input the primary mask image into a convolutional neural network for feature extraction to obtain a first face feature map.
2) Perform a Fourier pooling operation on the first face feature map, compressing it into a one-dimensional frequency-domain representation, to obtain a second face feature map; this captures the nonlinear features of the patient's facial information better.
3) Input the second face feature map into a pre-trained logistic regression classifier to obtain the face keypoints to be processed. The pre-trained classifier is linear and maps the captured nonlinear features (i.e. the second face feature map) to face keypoints, yielding the keypoint information of the target image. The face keypoints include the positions of the nose, mouth, eyes and so on.
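The patent names these operations without defining them precisely. The sketch below is one plausible reading only: "Fourier pooling" is assumed to mean a 2D FFT over the feature map followed by keeping the magnitudes of the lowest frequencies as a flat vector, and the linear "logistic regression classifier" is modeled as a single linear layer regressing keypoint coordinates. The layer sizes, class name and keypoint count are invented for illustration.

```python
# A hedged sketch of the keypoint head: CNN features -> "Fourier pooling"
# (read here as low-frequency FFT magnitudes flattened to one dimension) ->
# a single linear layer standing in for the linear classifier. All sizes,
# names and the exact pooling definition are assumptions.
import torch
import torch.nn as nn

class FourierPoolingKeypointHead(nn.Module):
    def __init__(self, channels=64, k=4, num_keypoints=68):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in CNN feature extractor
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.k = k
        # linear map from pooled frequency features to (x, y) per keypoint
        self.fc = nn.Linear(channels * k * k, num_keypoints * 2)

    def forward(self, x):
        feat = self.backbone(x)                   # first face feature map
        freq = torch.fft.fft2(feat)               # FFT over the two spatial dims
        low = freq[..., :self.k, :self.k].abs()   # low-frequency magnitudes
        pooled = low.flatten(1)                   # second face feature map (1-D)
        return self.fc(pooled).view(x.size(0), -1, 2)  # (batch, keypoints, xy)
```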
The face motion capture method further comprises:
after the positions of the face keypoints are obtained, triangulating the face keypoints to be processed with the Delaunay algorithm to obtain a triangular mesh of the face to be processed, which divides the face into a number of triangular regions, as in the sketch below.
Step 400: input the face image to be processed into a target detection model to obtain the lesion area to be processed. The target detection model is trained from a training sample set and a neural network; each sample in the training sample set comprises a face sample image and the lesion area corresponding to that face sample image.
The training process of the target detection model comprises the following steps (a training sketch follows the list):
1) Acquire a number of face sample images.
2) Label each face sample image to determine its corresponding lesion area; the face sample images and their corresponding lesion areas form the training sample set. Specifically, the face sample images are labeled manually.
3) Input the training sample set into a neural network model for training to obtain an optimal neural network model; the optimal neural network model is the target detection model.
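The patent specifies only that a neural network is trained on manually labeled lesion areas, not which architecture. Purely as an illustrative assumption, the sketch below fine-tunes torchvision's Faster R-CNN for a single "lesion" class; the `loader` of labeled samples is hypothetical and assumed to yield data in torchvision's detection format.

```python
# A hedged training sketch for the target detection model. The architecture is
# not named in the patent; a torchvision Faster R-CNN fine-tune is assumed.
# `loader` is a hypothetical DataLoader yielding (images, targets) where each
# target is {"boxes": Tensor[N, 4], "labels": Tensor[N]} with label 1 = lesion.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
# replace the box head: two classes, background (0) and lesion (1)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for images, targets in loader:
    loss_dict = model(images, targets)  # detection models return losses in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```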
Step 500: determine a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image. The secondary mask image masks the non-lesion areas of the face image to be processed while displaying its lesion areas.
Step 500 specifically includes the following sub-steps (sketched below):
1) Match the lesion area to be processed with the face keypoints to be processed to determine the lesion keypoints. For example, if the lesion is on the cheek, the edge keypoints of the lesion area on the cheek are marked out among the face keypoints to be processed, and the face keypoints inside the lesion area are then retrieved.
2) Based on the lesion keypoints, perform a lesion image segmentation operation on the primary mask image to obtain the secondary mask image.
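A sketch of these two sub-steps under stated assumptions: keypoints falling inside the detected lesion box are taken as the lesion keypoints, and their convex hull is cut out of the primary mask image so the real skin underneath shows through. The rectangular lesion region and every name here are illustrative, not the patented segmentation.

```python
# A hedged sketch of producing the secondary mask image: keep the keypoints
# inside the detected lesion box, then reveal the original pixels inside their
# convex hull while the rest of the face stays covered by the primary mask.
import cv2
import numpy as np

def secondary_mask(primary_masked, original, keypoints, lesion_box):
    x1, y1, x2, y2 = lesion_box                     # detected lesion rectangle
    lesion_pts = np.array([p for p in keypoints
                           if x1 <= p[0] <= x2 and y1 <= p[1] <= y2],
                          dtype=np.int32)           # the lesion keypoints
    if len(lesion_pts) < 3:
        return primary_masked                       # no region to reveal
    hole = np.zeros(original.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(hole, cv2.convexHull(lesion_pts), 255)
    out = primary_masked.copy()
    out[hole == 255] = original[hole == 255]        # reveal real skin here
    return out
```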
In one specific application, after the lesion area to be processed is determined, when the dermatology patient clicks on the captured facial area (the primary mask image), only the real face in the relevant lesion area is revealed for the doctor's diagnosis. For example, if the patient has rosacea on the cheek, clicking on the cheek displays the actual skin of that lesion area (including the rosacea) in the video, while the other parts remain covered by the facial mask. In contrast to static cartoon-image generation, the lesion area can be shown to the doctor from multiple angles as the patient turns the head during the video chat, without exposing all of the patient's facial information.
Example two
As shown in fig. 2, in order to execute the method of the first embodiment and achieve the corresponding functions and technical effects, this embodiment provides a facial motion capture system, comprising:
a dynamic information acquisition module 101, configured to acquire dynamic information of a face to be processed; the dynamic information of the face to be processed comprises a plurality of frames of face images to be processed; each frame of the face image to be processed comprises a focus area and a non-focus area.
The mask image generating module 201 is configured to capture and mask the face image to be processed for each frame based on the faceji universal interface, so as to generate a primary mask image.
The face key point determining module 301 is configured to perform key point matching on the primary mask image based on a face key point matching algorithm, so as to determine a face key point to be processed.
The face focus determining module 401 is configured to input the face image to be processed into a target detection model to obtain a focus area to be processed; the target detection model is obtained by training according to a training sample set and a neural network; each sample in the training sample set comprises a face sample image and a focus area corresponding to the face sample image.
A face focus mask display module 501, configured to determine a secondary mask image according to the focus area to be processed, the key points of the face to be processed, and the primary mask image; the secondary mask image is used for masking non-focus areas in the face image to be processed by using a mask and displaying focus areas in the face image to be processed.
Example III
This embodiment provides an electronic device comprising a memory and a processor.
The memory is used to store a computer program, and the processor is used to run the computer program to perform the face motion capture method of the first embodiment.
Optionally, the electronic device is a server.
Also provided is a computer-readable storage medium storing a computer program; when executed by a processor, the computer program implements the steps of the face motion capture method of the first embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments can be referred to one another.
Specific examples have been used herein to explain the principles and implementation of the invention; the description of the above embodiments is only intended to help in understanding the method of the invention and its core idea. Meanwhile, a person of ordinary skill in the art may, in light of the idea of the invention, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A face motion capture method, characterized by comprising the following steps:
acquiring dynamic information of a face to be processed; the dynamic information of the face to be processed comprises multiple frames of face images to be processed; each frame of the face image to be processed contains a lesion area and a non-lesion area;
capturing and masking each frame of the face image to be processed based on a faceji universal interface to generate a primary mask image;
performing keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed;
inputting the face image to be processed into a target detection model to obtain the lesion area to be processed; the target detection model is trained from a training sample set and a neural network; each sample in the training sample set comprises a face sample image and the lesion area corresponding to the face sample image;
determining a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image; the secondary mask image masks the non-lesion areas of the face image to be processed while displaying its lesion areas.
2. The face motion capture method of claim 1, wherein performing keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed specifically comprises:
inputting the primary mask image into a convolutional neural network for feature extraction to obtain a first face feature map;
performing a Fourier pooling operation on the first face feature map to obtain a second face feature map;
inputting the second face feature map into a pre-trained logistic regression classifier to obtain the face keypoints to be processed.
3. The face motion capture method of claim 1, further comprising:
triangulating the face keypoints to be processed with a Delaunay algorithm to obtain a triangular mesh of the face to be processed.
4. The face motion capture method of claim 1, wherein the training process of the target detection model comprises:
acquiring a number of face sample images;
labeling each face sample image to determine the lesion area corresponding to the face sample image; the face sample images and their corresponding lesion areas form the training sample set;
inputting the training sample set into a neural network model for training to obtain an optimal neural network model; the optimal neural network model is the target detection model.
5. The face motion capture method of claim 1, wherein determining a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image specifically comprises:
matching the lesion area to be processed with the face keypoints to be processed to determine lesion keypoints;
performing a lesion image segmentation operation on the primary mask image based on the lesion keypoints to obtain the secondary mask image.
6. A facial motion capture system, characterized by comprising:
a dynamic information acquisition module, used to acquire the dynamic information of the face to be processed; the dynamic information of the face to be processed comprises multiple frames of face images to be processed; each frame of the face image to be processed contains a lesion area and a non-lesion area;
a mask image generation module, used to capture and mask each frame of the face image to be processed based on the faceji universal interface to generate a primary mask image;
a face keypoint determination module, used to perform keypoint matching on the primary mask image based on a face keypoint matching algorithm to determine the face keypoints to be processed;
a face lesion determination module, used to input the face image to be processed into a target detection model to obtain the lesion area to be processed; the target detection model is trained from a training sample set and a neural network; each sample in the training sample set comprises a face sample image and the lesion area corresponding to the face sample image;
a face lesion mask display module, used to determine a secondary mask image according to the lesion area to be processed, the face keypoints to be processed and the primary mask image; the secondary mask image masks the non-lesion areas of the face image to be processed while displaying its lesion areas.
7. An electronic device, characterized by comprising a memory and a processor;
the memory is used to store a computer program, and the processor is used to run the computer program to perform the facial motion capture method of any one of claims 1-5.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program;
when executed by a processor, the computer program implements the steps of the facial motion capture method of any one of claims 1-5.
CN202310330575.9A 2023-03-31 2023-03-31 Face motion capturing method, system, electronic equipment and medium Pending CN116403256A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310330575.9A CN116403256A (en) 2023-03-31 2023-03-31 Face motion capturing method, system, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310330575.9A CN116403256A (en) 2023-03-31 2023-03-31 Face motion capturing method, system, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN116403256A true CN116403256A (en) 2023-07-07

Family

ID=87009738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310330575.9A Pending CN116403256A (en) 2023-03-31 2023-03-31 Face motion capturing method, system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN116403256A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 20240326
Address after: 200335 Floor 10, Building 2, Lingkong SOHO, No. 968 Jinzhong Road, Xinjing Town, Changning District, Shanghai
Applicant after: Shanghai Beifuting Technology Co.,Ltd.
Applicant after: Li Huanyu
Applicant after: Yunnan Yunke characteristic plant extraction laboratory Co.,Ltd.
Country or region after: China
Address before: 200335 Floor 10, Building 2, Lingkong SOHO, No. 968 Jinzhong Road, Xinjing Town, Changning District, Shanghai
Applicant before: Shanghai Beifuting Technology Co.,Ltd.
Applicant before: Li Huanyu
Country or region before: China