WO2019033570A1 - Lip motion analysis method, apparatus and storage medium - Google Patents

Lip motion analysis method, apparatus and storage medium Download PDF

Info

Publication number
WO2019033570A1
Authority
WO
WIPO (PCT)
Prior art keywords
lip
real
feature points
image
lips
Prior art date
Application number
PCT/CN2017/108749
Other languages
English (en)
Chinese (zh)
Inventor
陈林
张国辉
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019033570A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Definitions

  • the present application relates to the field of computer vision processing technologies, and in particular, to a lip motion analysis method, apparatus, and computer readable storage medium.
  • Lip motion capture is a biometric recognition technique that performs user lip motion recognition based on human facial feature information.
  • The application of lip motion capture is very extensive; it plays an important role in many fields such as access-control attendance and identity recognition, bringing great convenience to people's lives.
  • For capturing lip movements, the common product approach is to train a classification model of lip features through deep learning and then use that classification model to judge the characteristics of the lips.
  • With that approach, the number of recognizable lip features depends entirely on the types of lip samples: for example, to judge mouth opening and closing, a large number of open-mouth and closed-mouth samples must be collected; to additionally judge pouting, a large number of pouting samples must be collected and the model retrained. This is not only time-consuming, it also makes real-time capture impossible.
  • Moreover, since the lip features are judged only by the lip-feature classification model, it is not possible to analyze whether the recognized lip region is a human lip region at all.
  • The present application provides a lip motion analysis method, device, and computer-readable storage medium, the main purpose of which is to calculate the motion information of the lips in a real-time facial image according to the coordinates of lip feature points, thereby realizing analysis of the lip region and real-time capture of lip motion.
  • The present application provides an electronic device, including: a memory, a processor, and an imaging device, wherein the memory includes a lip motion analysis program, and the lip motion analysis program, when executed by the processor, implements the following steps:
  • a real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
  • a feature point recognition step inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
  • a lip region recognizing step determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region;
  • Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lips in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
  • When the lip motion analysis program is executed by the processor, the following steps are further implemented:
  • Prompting step When the lip classification model judges that the lip region is not a human lip region, it prompts that no human lip region is detected in the current real-time image and that the lip motion cannot be determined, and returns to the real-time facial image acquisition step.
  • the lip movement determining step comprises:
  • connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the left;
  • connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the right.
  • the present application further provides a lip motion analysis method, the method comprising:
  • a real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
  • a feature point recognition step inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
  • a lip region recognizing step determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region;
  • Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
  • When the lip motion analysis program is executed by the processor, the following steps are further implemented:
  • Prompting step When the lip classification model judges that the lip region is not a human lip region, it prompts that no human lip region is detected in the current real-time image and that the lip motion cannot be determined, and returns to the real-time facial image acquisition step.
  • the lip movement determining step comprises:
  • connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the left;
  • connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the right.
  • In addition, the present application further provides a computer-readable storage medium including a lip motion analysis program; when the lip motion analysis program is executed by a processor, any of the steps of the lip motion analysis method described above are implemented.
  • The lip motion analysis method, apparatus, and computer-readable storage medium proposed in the present application recognize lip feature points in a real-time facial image and determine whether the region composed of those lip feature points is a human lip region; if so, the motion information of the lips is calculated from the coordinates of the lip feature points. No deep learning over samples of the various lip movements is required, so analysis of the lip region and real-time capture of lip motion can both be realized.
  • FIG. 1 is a schematic diagram of a preferred embodiment of an electronic device of the present application.
  • FIG. 2 is a block diagram of the lip motion analysis program of FIG. 1;
  • FIG. 3 is a flow chart of a preferred embodiment of a lip motion analysis method of the present application.
  • FIG. 4 is a schematic diagram showing the refinement of the step S40 of the lip motion analysis method of the present application.
  • The present application provides an electronic device 1.
  • Referring to FIG. 1, it is a schematic diagram of a preferred embodiment of the electronic device 1 of the present application.
  • the electronic device 1 may be a terminal device having a computing function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes a processor 12, a memory 11, an imaging device 13, a network interface 14, and a communication bus 15.
  • The camera device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 through the network.
  • Network interface 14 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • Communication bus 15 is used to implement connection communication between these components.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
  • In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example, the hard disk of the electronic device 1.
  • the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart memory card (SMC), Secure Digital (SD) card, Flash Card, etc.
  • In this embodiment, the readable storage medium of the memory 11 is generally used to store the lip motion analysis program 10 installed on the electronic device 1, the face image sample library, the human lip sample library, and the constructed and trained lip average model and lip classification model.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • The processor 12, in some embodiments, may be a Central Processing Unit (CPU), microprocessor, or other data processing chip for running program code or processing data stored in the memory 11, for example, executing the lip motion analysis program 10.
  • FIG. 1 shows only the electronic device 1 having the components 11-15 and the lip motion analysis program 10, but it should be understood that not all illustrated components may be implemented, and more or fewer components may be implemented instead.
  • The electronic device 1 may further include a user interface.
  • The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • the electronic device 1 may further include a display, which may also be appropriately referred to as a display screen or a display unit.
  • In some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an Organic Light-Emitting Diode (OLED) touch device, or the like.
  • the display is used to display information processed in the electronic device 1 and a user interface for displaying visualizations.
  • the electronic device 1 further comprises a touch sensor.
  • the area provided by the touch sensor for the user to perform a touch operation is referred to as a touch area.
  • the touch sensor described herein may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor but also a proximity type touch sensor or the like.
  • the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • In some embodiments, the display is stacked with the touch sensor to form a touch display; the device detects user-triggered touch operations through the touch display.
  • the electronic device 1 may further include a radio frequency (RF) circuit, a sensor, an audio circuit, and the like, and details are not described herein.
  • An operating system and the lip motion analysis program 10 may be included in the memory 11 as a computer storage medium; when the processor 12 executes the lip motion analysis program 10 stored in the memory 11, the following steps are implemented:
  • the real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm.
  • When the camera device 13 captures a real-time image, it transmits the real-time image to the processor 12.
  • When the processor 12 receives the real-time image, it first obtains the size of the image and creates a grayscale image of the same size; it converts the acquired color image into the grayscale image and allocates memory space; it equalizes the grayscale image histogram to reduce the amount of grayscale image information and speed up detection; it then loads a training library, detects the face in the image, returns an object containing the face information, obtains the data of the face location, and records the number of faces; finally it obtains the face region and saves it, thereby completing one real-time facial image extraction.
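A minimal sketch of this kind of grayscale/equalize/detect pipeline, using OpenCV's bundled Haar cascade detector; the cascade file, detector parameters, and helper name are illustrative assumptions, not details taken from the patent.

```python
import cv2

# Assumed detector: the Haar cascade shipped with OpenCV; the patent only says "load the training library".
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(frame):
    """Return the cropped face region of a BGR frame, or None if no face is detected."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # color image converted to a grayscale image of the same size
    gray = cv2.equalizeHist(gray)                    # histogram equalization to reduce information and speed up detection
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                  # no face information returned
    x, y, w, h = faces[0]                            # location data of the first detected face
    return frame[y:y + h, x:x + w]                   # save the face region as the real-time facial image
```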
  • The face recognition algorithm for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • Feature point recognition step input the real-time facial image into a pre-trained lip average model, and use the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image.
  • A face feature recognition model is trained using face images with marked lip feature points to obtain the lip average model of the face.
  • In one embodiment, the face feature recognition model is an Ensemble of Regression Trees (ERT) algorithm. The ERT algorithm is expressed as follows: S(t+1) = S(t) + τt(I, S(t)), where t represents the cascade level number and τt(·, ·) represents the regressor of the current level.
  • Each regressor is composed of a number of regression trees, and the purpose of training is to obtain these regression trees.
  • Each regressor τt(·, ·) predicts an increment from the input image I and the current shape estimate S(t); the increment is added to the current shape estimate to improve the current model. Each level of regression makes its prediction based on the feature points.
  • The training data set is (I1, S1), ..., (In, Sn), where Ii is an input sample image and Si is the shape feature vector composed of the feature points marked in that sample image.
  • During training, 15 feature points are randomly selected from the 20 lip feature points marked in each sample image as partial feature points.
  • The first regression tree is trained; the residual between the predicted value of the first regression tree and the true value (the weighted average of the 15 partial feature points taken from each sample image) is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of the partial feature points is close to 0. All the regression trees of the ERT algorithm are thereby obtained.
  • According to these regression trees, the lip average model of the face is obtained, and the model file and the sample library are saved in the memory 11. Since the sample images used to train the model mark 20 lip feature points, the trained lip average model of the face can be used to identify 20 lip feature points from a face image.
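To make the residual-training idea above concrete, here is a toy sketch using scikit-learn regression trees on precomputed per-image descriptors. It only illustrates the boosting-on-residuals scheme and is not the patent's ERT implementation; the descriptor choice, tree depth, number of trees, and learning rate are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_cascade(descriptors, true_shapes, n_trees=10, learning_rate=0.1):
    """descriptors: (n_images, n_features) array; true_shapes: (n_images, 2 * n_points)
    flattened (x, y) coordinates of the marked lip feature points."""
    mean_shape = true_shapes.mean(axis=0)                    # the "average model" used as the initial estimate
    estimate = np.tile(mean_shape, (len(true_shapes), 1))
    trees = []
    for _ in range(n_trees):
        residual = true_shapes - estimate                    # gap between the current estimate and the ground truth
        tree = DecisionTreeRegressor(max_depth=4)
        tree.fit(descriptors, residual)                      # each tree learns to predict the remaining residual
        estimate = estimate + learning_rate * tree.predict(descriptors)
        trees.append(tree)                                   # in the patent, training stops when the residual is close to 0
    return mean_shape, trees

def predict_shape(mean_shape, trees, descriptor, learning_rate=0.1):
    """Apply the cascade to one image descriptor to estimate the lip shape."""
    shape = mean_shape.copy()
    for tree in trees:
        shape = shape + learning_rate * tree.predict(descriptor.reshape(1, -1))[0]
    return shape
```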
  • the real-time facial image is aligned with the lip average model, and then the feature extraction algorithm is used to search the real-time facial image to match the 20 lip feature points of the lip average model.
  • It is assumed that the 20 lip feature points recognized from the real-time facial image are still recorded as P1 to P20, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
  • The upper and lower lips each have eight feature points (labeled P1 to P8 and P9 to P16, respectively), and the left and right lip corners each have two feature points (labeled P17 to P18 and P19 to P20, respectively).
  • Of the eight feature points of the upper lip, five are located on the outer contour line of the upper lip (P1 to P5) and three on its inner contour line (P6 to P8, with P7 the central feature point on the inner side of the upper lip); of the eight feature points of the lower lip, five are located on the outer contour line of the lower lip (P9 to P13) and three on its inner contour line (P14 to P16, with P15 the central feature point on the inner side of the lower lip).
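For comparison only, the snippet below extracts 20 mouth landmarks with dlib's publicly available 68-point shape predictor, whose indices 48 to 67 cover the outer and inner lip contours; it is a stand-in for the patent's own 20-point lip average model, and the model file name is an assumption.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Assumed path to the public 68-point model; the patent instead trains its own 20-point lip model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def lip_points(bgr_image):
    """Return a list of 20 (x, y) mouth landmarks, or None if no face is found."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    # Points 48-59 lie on the outer lip contour, 60-67 on the inner contour.
    return [(shape.part(i).x, shape.part(i).y) for i in range(48, 68)]
```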
  • the feature extraction algorithm is a SIFT (scale-invariant feature transform) algorithm.
  • The SIFT algorithm extracts the local feature of each lip feature point from the lip average model of the face, selects one lip feature point as a reference feature point, and finds, in the real-time facial image, the feature points whose local features are the same as or similar to that of the reference feature point.
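As a sketch of how local descriptors at known lip points could be computed, the following uses OpenCV's SIFT implementation (available in opencv-python 4.4 and later); the keypoint size and the helper name are arbitrary assumptions.

```python
import cv2

def lip_descriptors(gray_face, points, patch_size=8.0):
    """Compute one SIFT descriptor per lip feature point coordinate."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for (x, y) in points]
    keypoints, descriptors = sift.compute(gray_face, keypoints)
    return descriptors   # one 128-dimensional local feature per lip feature point
```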
  • In other embodiments, the feature extraction algorithm may also be a SURF (Speeded-Up Robust Features) algorithm, an LBP (Local Binary Patterns) algorithm, an HOG (Histogram of Oriented Gradients) algorithm, or the like.
  • Lip region recognizing step determining a lip region based on the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region.
  • In this embodiment, m positive lip sample images and k negative lip sample images are collected to form a second sample library.
  • the lip positive sample image refers to an image containing human lips, and the lip portion can be extracted from the face image sample library as a positive lip sample image.
  • A negative lip sample image refers to an image in which the person's lip region is incomplete, or in which the lips are not human lips (for example, animal lips); the positive and negative lip sample images together form the second sample library.
  • The local features of each positive lip sample image and each negative lip sample image are then extracted.
  • In this embodiment, the feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) feature of each lip sample image. Since the color information in a lip sample image contributes little, the image is usually converted to grayscale and the entire image is normalized; the horizontal and vertical gradients of the image are computed, and from them the gradient magnitude and orientation at each pixel position are obtained, capturing contours, silhouettes, and some texture information while further weakening the effects of lighting. The whole image is then divided into small cells, and a histogram of gradient orientations is built for each cell to describe the local image gradient information, which is quantized to obtain the feature description vector of the local image region.
  • A Support Vector Machine (SVM) classifier is then trained with the positive lip sample images, the negative lip sample images, and the extracted HOG features to obtain the lip classification model of the face.
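A minimal sketch of this training step with scikit-image and scikit-learn follows; the crop size, cell and block sizes, and the linear kernel are assumptions rather than parameters taken from the patent.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_feature(lip_crop):
    """HOG descriptor of a lip crop, converted to grayscale and resized to a fixed size."""
    gray = cv2.cvtColor(lip_crop, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 32))
    return hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_lip_classifier(positive_crops, negative_crops):
    """Train an SVM on HOG features of positive (human lip) and negative sample crops."""
    X = np.array([hog_feature(img) for img in positive_crops + negative_crops])
    y = np.array([1] * len(positive_crops) + [0] * len(negative_crops))
    clf = LinearSVC()
    clf.fit(X, y)
    return clf
```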
  • After the 20 lip feature points are identified in the real-time facial image, a lip region can be determined according to the 20 lip feature points; the determined lip region is then input into the trained lip classification model, and whether the determined lip region is a human lip region is judged according to the model's result.
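One way to read this step, sketched with the hypothetical helpers above (hog_feature and a classifier clf from the previous sketch); the bounding-box padding margin is an assumption.

```python
import numpy as np

def classify_lip_region(face_image, lip_pts, clf, margin=5):
    """Crop the bounding box of the lip feature points and classify it with the trained SVM."""
    pts = np.array(lip_pts)
    x0, y0 = pts.min(axis=0) - margin
    x1, y1 = pts.max(axis=0) + margin
    crop = face_image[max(int(y0), 0):int(y1), max(int(x0), 0):int(x1)]
    return clf.predict([hog_feature(crop)])[0] == 1   # True if judged to be a human lip region
```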
  • Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
  • the lip motion determining step includes:
  • connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the left;
  • connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the right.
  • Assuming that the lip region is a human lip region, that the coordinates of the central feature point P7 on the inner side of the upper lip are (x7, y7), and that the coordinates of the central feature point P15 on the inner side of the lower lip are (x15, y15), the distance d between the two points is calculated as follows: d = sqrt((x7 - x15)^2 + (y7 - y15)^2). This distance is used to judge the degree of opening of the lips in the real-time facial image.
  • Assuming that the coordinates of the left outer lip corner feature point P18 are (x18, y18), and that the coordinates of the feature points P1 and P9 closest to P18 on the outer contour lines of the upper and lower lips are (x1, y1) and (x9, y9), respectively, P18 is connected with P1 and P9 to form the vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18).
  • The angle θ between V1 and V2 is calculated as follows: cos θ = (V1 · V2) / (|V1| |V2|).
  • The degree to which the lips are skewed to the left can be judged from this angle: the smaller the angle, the greater the leftward skew of the lips.
  • Similarly, assuming that the coordinates of the right outer lip corner feature point P20 are (x20, y20), and that the coordinates of the feature points P5 and P13 closest to P20 on the outer contour lines of the upper and lower lips are (x5, y5) and (x13, y13), respectively, P20 is connected with P5 and P13 to form the vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20).
  • The angle α between V3 and V4 is calculated as follows: cos α = (V3 · V4) / (|V3| |V4|).
  • The degree to which the lips are skewed to the right can be judged from this angle: the smaller the angle, the greater the rightward skew of the lips.
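The opening distance and the two corner angles can be computed directly from the point coordinates. The sketch below follows the numbering P1 to P20 used above and returns raw measurements only; any thresholds for deciding "open" or "skewed" are left to the caller.

```python
import numpy as np

def angle_at(corner, upper_pt, lower_pt):
    """Angle in degrees at a lip corner between the vectors to the nearest
    upper and lower outer-contour feature points."""
    v1 = np.asarray(upper_pt, dtype=float) - np.asarray(corner, dtype=float)
    v2 = np.asarray(lower_pt, dtype=float) - np.asarray(corner, dtype=float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

def lip_motion_measures(pts):
    """pts: dict mapping labels 'P1'..'P20' to (x, y) coordinates."""
    opening = float(np.linalg.norm(np.subtract(pts["P7"], pts["P15"])))  # inner upper vs. inner lower center
    left_angle = angle_at(pts["P18"], pts["P1"], pts["P9"])              # smaller angle, stronger leftward skew
    right_angle = angle_at(pts["P20"], pts["P5"], pts["P13"])            # smaller angle, stronger rightward skew
    return opening, left_angle, right_angle
```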
  • Prompting step When the lip classification model determines that the lip region is not a human lip region, it is prompted that no human lip region is detected in the current real-time image and that the lip motion cannot be determined, and the flow returns to the real-time image acquisition step to capture the next real-time image. That is, after the lip region determined by the 20 lip feature points is input into the lip classification model and judged, according to the model's result, not to be a human lip region, no human lip region has been recognized and the subsequent lip motion judging step cannot be performed; the real-time image captured by the camera device 13 is reacquired and the subsequent steps are performed again.
  • The electronic device 1 of this embodiment extracts a real-time facial image from a real-time image, recognizes the lip feature points in the real-time facial image using the lip average model, and analyzes the lip region determined by the lip feature points using the lip classification model; if the lip region is a human lip region, the motion information of the lips in the real-time facial image is calculated according to the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of the lip motion.
  • the lip motion analysis program 10 can also be partitioned into one or more modules, one or more modules being stored in the memory 11 and executed by the processor 12 to complete the application.
  • a module as referred to in this application refers to a series of computer program instructions that are capable of performing a particular function.
  • Referring to FIG. 2, it is a block diagram of the lip motion analysis program 10 of FIG. 1.
  • the lip motion analysis program 10 can be divided into: an acquisition module 110, an identification module 120, a determination module 130, a calculation module 140, and a prompt module 150.
  • The functions or operational steps implemented by the modules 110 to 150 are similar to those described above and are not detailed here again; by way of example:
  • the acquiring module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time face image from the real-time image by using a face recognition algorithm;
  • the recognition module 120 is configured to input the real-time facial image into a pre-trained lip average model, and use the lip average model to identify t lip feature points representing the lip position in the real-time facial image;
  • the determining module 130 is configured to determine a lip region according to the t lip feature points, input the lip region into a pre-trained lip classification model, and determine whether the lip region is a human lip region;
  • the calculating module 140 is configured to: when the lip region is a human lip region, calculate a moving direction and a moving distance of the lips in the real-time facial image according to the x and y coordinates of the t lip feature points in the real-time facial image; and
  • the prompting module 150 is configured to: when the lip classification model determines that the lip region is not a human lip region, prompt that no human lip region is detected in the current real-time image and that the lip motion cannot be determined, and return to the real-time image acquisition step to capture the next real-time image.
  • the present application also provides a lip motion analysis method.
  • a flow chart of a preferred embodiment of the lip motion analysis method of the present application is shown. The method can be performed by a device that can be implemented by software and/or hardware.
  • the lip motion analysis method includes steps S10 to S50.
  • Step S10 Acquire a real-time image captured by the camera device, and extract a real-time face image from the real-time image by using a face recognition algorithm.
  • When the camera device captures a real-time image, it transmits the real-time image to the processor.
  • When the processor receives the real-time image, it first obtains the size of the image and creates a grayscale image of the same size; it converts the acquired color image into the grayscale image and allocates memory space; it equalizes the grayscale image histogram to reduce the amount of grayscale image information and speed up detection; it then loads a training library, detects the face in the image, returns an object containing the face information, obtains the data of the face location, and records the number of faces; finally it obtains the face region and saves it, thereby completing one real-time facial image extraction.
  • In other embodiments, the face recognition algorithm for extracting the real-time facial image from the real-time image may also be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
  • In step S20, the real-time facial image is input into the pre-trained lip average model, and the lip average model is used to identify t lip feature points representing the lip position in the real-time facial image.
  • A face feature recognition model is trained using face images with marked lip feature points to obtain the lip average model of the face.
  • In one embodiment, the face feature recognition model is an Ensemble of Regression Trees (ERT) algorithm. The ERT algorithm is expressed as follows: S(t+1) = S(t) + τt(I, S(t)), where t represents the cascade level number and τt(·, ·) represents the regressor of the current level.
  • Each regressor is composed of a number of regression trees, and the purpose of training is to obtain these regression trees.
  • Each regressor τt(·, ·) predicts an increment from the input image I and the current shape estimate S(t); the increment is added to the current shape estimate to improve the current model. Each level of regression makes its prediction based on the feature points.
  • The training data set is (I1, S1), ..., (In, Sn), where Ii is an input sample image and Si is the shape feature vector composed of the feature points marked in that sample image.
  • During training, 15 feature points are randomly selected from the 20 lip feature points marked in each sample image as partial feature points.
  • The first regression tree is trained; the residual between the predicted value of the first regression tree and the true value (the weighted average of the 15 partial feature points taken from each sample image) is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of the partial feature points is close to 0. All the regression trees of the ERT algorithm are thereby obtained.
  • According to these regression trees, the lip average model of the face is obtained, and the model file and the sample library are saved to the memory. Since the sample images used to train the model mark 20 lip feature points, the trained lip average model of the face can be used to identify 20 lip feature points from a face image.
  • the real-time facial image is aligned with the lip average model, and then the feature extraction algorithm is used to search the real-time facial image for matching the 20 lip feature points of the lip average model.
  • It is assumed that the 20 lip feature points recognized from the real-time facial image are still recorded as P1 to P20, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
  • The upper and lower lips each have eight feature points (labeled P1 to P8 and P9 to P16, respectively), and the left and right lip corners each have two feature points (labeled P17 to P18 and P19 to P20, respectively).
  • Of the eight feature points of the upper lip, five are located on the outer contour line of the upper lip (P1 to P5) and three on its inner contour line (P6 to P8, with P7 the central feature point on the inner side of the upper lip); of the eight feature points of the lower lip, five are located on the outer contour line of the lower lip (P9 to P13) and three on its inner contour line (P14 to P16, with P15 the central feature point on the inner side of the lower lip).
  • the feature extraction algorithm may also be a SIFT algorithm, a SURF algorithm, an LBP algorithm, an HOG algorithm, or the like.
  • Step S30 determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region.
  • In this embodiment, m positive lip sample images and k negative lip sample images are collected to form a second sample library.
  • the lip positive sample image refers to an image containing human lips, and the lip portion can be extracted from the face image sample library as a positive lip sample image.
  • A negative lip sample image refers to an image in which the person's lip region is incomplete, or in which the lips are not human lips (for example, animal lips); the positive and negative lip sample images together form the second sample library.
  • The local features of each positive lip sample image and each negative lip sample image are then extracted.
  • In this embodiment, a feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) feature of each lip sample image. Since the color information in a lip sample image contributes little, the image is usually converted to grayscale and the entire image is normalized; the horizontal and vertical gradients of the image are computed, and from them the gradient magnitude and orientation at each pixel position are obtained, capturing contours, silhouettes, and some texture information while further weakening the effects of lighting. The whole image is then divided into small cells, and a histogram of gradient orientations is built for each cell to describe the local image gradient information, which is quantized to obtain the feature description vector of the local image region. The cells are then combined into larger blocks.
  • A support vector machine (SVM) classifier is trained using the positive lip sample images, the negative lip sample images, and the extracted HOG features to obtain the lip classification model of the face.
  • After the 20 lip feature points are identified in the real-time facial image, a lip region can be determined according to the 20 lip feature points; the determined lip region is then input into the trained lip classification model, and whether the determined lip region is a human lip region is judged based on the result obtained by the model.
  • Step S40 if the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
  • step S40 includes:
  • Step S41 calculating a distance between a central feature point of the inner side of the upper lip and a central feature point of the inner side of the lower lip in the real-time facial image, and determining the degree of opening of the lip;
  • Step S42 connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the left;
  • Step S43 connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the right.
  • Assuming that the lip region is a human lip region, that the coordinates of the central feature point P7 on the inner side of the upper lip are (x7, y7), and that the coordinates of the central feature point P15 on the inner side of the lower lip are (x15, y15), the distance d between the two points is calculated as follows: d = sqrt((x7 - x15)^2 + (y7 - y15)^2). This distance is used to judge the degree of opening of the lips in the real-time facial image.
  • Assuming that the coordinates of the left outer lip corner feature point P18 are (x18, y18), and that the coordinates of the feature points P1 and P9 closest to P18 on the outer contour lines of the upper and lower lips are (x1, y1) and (x9, y9), respectively, P18 is connected with P1 and P9 to form the vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18).
  • The angle θ between V1 and V2 is calculated as follows: cos θ = (V1 · V2) / (|V1| |V2|).
  • The degree to which the lips are skewed to the left can be judged from this angle: the smaller the angle, the greater the leftward skew of the lips.
  • Similarly, assuming that the coordinates of the right outer lip corner feature point P20 are (x20, y20), and that the coordinates of the feature points P5 and P13 closest to P20 on the outer contour lines of the upper and lower lips are (x5, y5) and (x13, y13), respectively, P20 is connected with P5 and P13 to form the vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20).
  • The angle α between V3 and V4 is calculated as follows: cos α = (V3 · V4) / (|V3| |V4|).
  • The degree to which the lips are skewed to the right can be judged from this angle: the smaller the angle, the greater the rightward skew of the lips.
  • Step S50 When the lip classification model determines that the lip region is not a human lip region, it is prompted that no human lip region is detected in the current real-time image and that the lip motion cannot be determined, and the flow returns to the real-time image acquisition step to capture the next real-time image.
  • Specifically, after the lip region determined by the 20 lip feature points is input into the lip classification model and judged, according to the model's result, not to be a human lip region, no human lip region has been recognized and the subsequent lip motion judging step cannot be performed; the real-time image captured by the camera device is reacquired and the subsequent steps are performed again.
  • The lip motion analysis method of this embodiment uses the lip average model to identify the lip feature points in the real-time facial image and uses the lip classification model to analyze the lip region determined by those feature points; if the lip region is a human lip region, the motion information of the lips in the real-time facial image is calculated according to the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of the lip motion.
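Putting steps S10 to S50 together, a hedged end-to-end sketch might look like the loop below. It reuses the hypothetical helpers from the earlier sketches (extract_face, lip_points, classify_lip_region, lip_motion_measures, and a trained clf), and the naive relabeling of point order to P1 to P20 is a placeholder, not the patent's actual correspondence.

```python
import cv2

def run_lip_motion_analysis(clf, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        face = extract_face(frame)                                    # S10: real-time facial image
        if face is None:
            continue
        pts = lip_points(face)                                        # S20: t lip feature points
        if pts is None or not classify_lip_region(face, pts, clf):    # S30: lip classification model
            print("No human lip region detected; lip motion cannot be determined.")  # S50: prompt
            continue
        labeled = dict(zip([f"P{i}" for i in range(1, 21)], pts))     # placeholder relabeling to P1..P20
        opening, left_angle, right_angle = lip_motion_measures(labeled)  # S40: geometric measures
        print(opening, left_angle, right_angle)
    cap.release()
```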
  • the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium includes a lip motion analysis program, and when the lip motion analysis program is executed by the processor, the following operations are implemented:
  • Model construction step constructing and training a face feature recognition model to obtain the lip average model of the face, and using the lip sample images to train an SVM to obtain the lip classification model;
  • a real-time facial image acquisition step acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image by using a face recognition algorithm;
  • a feature point recognition step inputting the real-time facial image into a pre-trained lip average model, and using the lip average model to identify t lip feature points representing the position of the lips in the real-time facial image;
  • a lip region recognizing step determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is a human lip region;
  • Lip motion judging step If the lip region is a human lip region, the moving direction and the moving distance of the lip in the real-time facial image are calculated according to the x and y coordinates of the t lip feature points in the real-time facial image.
  • When the lip motion analysis program is executed by the processor, the following operations are also implemented:
  • Prompting step When the lip classification model judges that the lip region is not a human lip region, it prompts that no human lip region is detected in the current real-time image and that the lip motion cannot be determined, and returns to the real-time facial image acquisition step.
  • the lip motion determining step includes:
  • connecting the left outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the left;
  • connecting the right outer lip corner feature point with the feature points closest to it on the outer contour lines of the upper and lower lips, respectively, to form two vectors, and calculating the angle between the two vectors to determine the degree to which the lips are skewed to the right.
  • The storage medium may be, for example, a disk including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a lip motion analysis method, an apparatus, and a storage medium. The method comprises: acquiring a real-time image captured by an imaging apparatus and extracting a real-time facial image from the real-time image (S10); inputting the real-time facial image into a pre-trained lip average model and identifying t lip feature points representing lip positions in the real-time facial image (S20); determining a lip region according to the t lip feature points, inputting the lip region into a pre-trained lip classification model, and determining whether the lip region is the lip region of a person (S30); if so, calculating the lip movement direction and movement distance in the real-time facial image according to the x and y coordinates of the t lip feature points in the real-time facial image (S40). Lip motion information in the real-time facial image is calculated according to the coordinates of the lip feature points, so as to implement lip region analysis and real-time lip motion capture.
PCT/CN2017/108749 2017-08-17 2017-10-31 Lip motion analysis method, apparatus and storage medium WO2019033570A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710708364.9A CN107633205B (zh) 2017-08-17 2017-08-17 Lip motion analysis method, apparatus and storage medium
CN201710708364.9 2017-08-17

Publications (1)

Publication Number Publication Date
WO2019033570A1 true WO2019033570A1 (fr) 2019-02-21

Family

ID=61099627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108749 WO2019033570A1 (fr) 2017-08-17 2017-10-31 Lip motion analysis method, apparatus and storage medium

Country Status (2)

Country Link
CN (1) CN107633205B (fr)
WO (1) WO2019033570A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738126A (zh) * 2019-09-19 2020-01-31 平安科技(深圳)有限公司 Lip cropping method, apparatus, device and storage medium based on coordinate transformation
CN113095146A (zh) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Mouth state classification method, apparatus, device and medium based on deep learning
WO2021224669A1 (fr) * 2020-05-05 2021-11-11 Ravindra Kumar Tarigoppula Système et procédé de commande de visualisation de contenu multimédia sur la base d'aspects comportementaux d'un utilisateur

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710836B (zh) * 2018-05-04 2020-10-09 南京邮电大学 Lip detection and reading method based on cascaded feature extraction
CN108763897A (zh) * 2018-05-22 2018-11-06 平安科技(深圳)有限公司 Identity legality verification method, terminal device and medium
CN108874145B (zh) * 2018-07-04 2022-03-18 深圳美图创新科技有限公司 Image processing method, computing device and storage medium
CN110223322B (zh) * 2019-05-31 2021-12-14 腾讯科技(深圳)有限公司 Image recognition method, apparatus, computer device and storage medium
CN111241922B (zh) * 2019-12-28 2024-04-26 深圳市优必选科技股份有限公司 Robot, control method thereof, and computer-readable storage medium
CN111259875B (zh) * 2020-05-06 2020-07-31 中国人民解放军国防科技大学 Lip reading method based on adaptive semantic spatio-temporal graph convolutional network
CN116405635A (zh) * 2023-06-02 2023-07-07 山东正中信息技术股份有限公司 Multimodal conference recording method and system based on edge computing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070071289A1 (en) * 2005-09-29 2007-03-29 Kabushiki Kaisha Toshiba Feature point detection apparatus and method
CN104616438A (zh) * 2015-03-02 2015-05-13 重庆市科学技术研究院 Yawning action detection method for fatigue driving detection
CN105139503A (zh) * 2015-10-12 2015-12-09 北京航空航天大学 Access control system and recognition method based on lip movement and mouth shape recognition
CN106250815A (zh) * 2016-07-05 2016-12-21 上海引波信息技术有限公司 Fast expression recognition method based on mouth features
CN106485214A (zh) * 2016-09-28 2017-03-08 天津工业大学 Eye and mouth state recognition method based on convolutional neural network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702199B (zh) * 2009-11-13 2012-04-04 华为终端有限公司 Smiling face detection method and apparatus, and mobile terminal
CN104951730B (zh) * 2014-03-26 2018-08-31 联想(北京)有限公司 Lip movement detection method, apparatus and electronic device
CN106529379A (zh) * 2015-09-15 2017-03-22 阿里巴巴集团控股有限公司 Living body recognition method and device
CN106997451A (zh) * 2016-01-26 2017-08-01 北方工业大学 Lip contour positioning method
CN105975935B (zh) * 2016-05-04 2019-06-25 腾讯科技(深圳)有限公司 Face image processing method and apparatus


Also Published As

Publication number Publication date
CN107633205A (zh) 2018-01-26
CN107633205B (zh) 2019-01-18

Similar Documents

Publication Publication Date Title
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
WO2019033570A1 (fr) Lip motion analysis method, apparatus and storage medium
WO2019033572A1 (fr) Blocked-face detection method, device and storage medium
US10445562B2 (en) AU feature recognition method and device, and storage medium
CN109961009B (zh) 基于深度学习的行人检测方法、系统、装置及存储介质
WO2019033571A1 (fr) Facial feature point detection method, apparatus and storage medium
WO2019033568A1 (fr) Lip movement capturing method, apparatus and storage medium
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
WO2019109526A1 (fr) Method and device for recognizing the age of a face image, and storage medium
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
WO2019041519A1 (fr) Target tracking method and device, and computer-readable storage medium
WO2019033573A1 (fr) Facial emotion recognition method, apparatus and storage medium
WO2019071664A1 (fr) Human face recognition method and apparatus combined with depth information, and storage medium
US8965117B1 (en) Image pre-processing for reducing consumption of resources
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
WO2016150240A1 (fr) Identity authentication method and apparatus
JP5361524B2 (ja) Pattern recognition system and pattern recognition method
Vazquez-Fernandez et al. Built-in face recognition for smart photo sharing in mobile devices
JP6351243B2 (ja) Image processing apparatus and image processing method
CN111178252A (zh) Identity recognition method based on multi-feature fusion
CN114783003A (zh) Pedestrian re-identification method and apparatus based on local feature attention
JP2021503139A (ja) Image processing apparatus, image processing method and image processing program
JP2013206458A (ja) Object classification based on appearance and context in images
CN111160169A (zh) Face detection method, apparatus, device and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17921689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 23.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17921689

Country of ref document: EP

Kind code of ref document: A1