CN110427849A - Face pose determination method and device, storage medium and electronic equipment - Google Patents

Face pose determination method and device, storage medium and electronic equipment

Info

Publication number
CN110427849A
CN110427849A (application CN201910668548.6A), granted as CN110427849B
Authority
CN
China
Prior art keywords
target
angle
face
estimation model
rotation direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910668548.6A
Other languages
Chinese (zh)
Other versions
CN110427849B (en)
Inventor
陈泽洲
刘兆祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201910668548.6A priority Critical patent/CN110427849B/en
Publication of CN110427849A publication Critical patent/CN110427849A/en
Application granted granted Critical
Publication of CN110427849B publication Critical patent/CN110427849B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to a face pose determination method and device, a storage medium and an electronic device. The method comprises: acquiring a region image containing a target face from an original image; determining, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction; and determining, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction as the face pose information of the target face. With this two-layer model structure, each angle estimation model is first constrained to an angle interval, and the pose information of the face in the image is then determined by the angle estimation models corresponding to the selected intervals. This improves the applicability and flexibility of face pose detection, avoids jitter of the detected angle values, and improves the accuracy of face pose detection.

Description

Face pose determination method and apparatus, storage medium and electronic device
Technical field
This disclosure relates to the field of image processing, and in particular to a face pose determination method and apparatus, a storage medium and an electronic device.
Background technique
As electronic devices become increasingly intelligent, the demand for image analysis and intelligent recognition keeps growing. Face pose detection is an important component of image recognition applications such as face alignment, fatigue detection and human emotion analysis. In the related art, one face pose detection method uses traditional template matching: a face image is matched against preset face templates, and the pose information of the best-matching template is taken as the pose of the face in the image. This approach, however, depends heavily on how the templates are set up; it cannot adapt immediately when the face angle in the image changes, its scope of application is narrow, and its flexibility is poor. Another face pose detection method is the 3D model method based on key points in the face image. This method relies entirely on the key point algorithm and the 3D model, so pixel-level variations of the key points in the image can cause the detected angle values to drift or jitter.
Summary of the invention
To overcome the problems in the related art, the purpose of this disclosure is to provide a face pose determination method and apparatus, a storage medium and an electronic device.
To achieve the above goal, according to a first aspect of the embodiments of the present disclosure, a face pose determination method is provided, the method comprising:
obtaining a region image containing a target face from an original image;
determining, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction;
determining, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction, as the face pose information of the target face.
Optionally, before obtaining the region image containing the target face from the original image, the method further comprises:
dividing the angle range of a target rotation direction into multiple angle intervals, the target rotation direction being any one of the multiple rotation directions;
training a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction; wherein each training datum in the first training data set characterizes the angle interval in which the face in one face image lies in the target rotation direction;
training a preset angle estimation model with a second training data set corresponding to each angle interval, to obtain the trained angle estimation model corresponding to that angle interval; wherein each training datum in the second training data set characterizes the pose angle of the face in one face image in the target rotation direction, and the pose angles corresponding to the multiple face images in one second training data set lie in the same angle interval.
Optionally, determining, through the region image and the trained interval estimation model, the target angle interval of the target face in each rotation direction comprises:
for any one of the multiple rotation directions, taking the region image as the input of a target interval estimation model, to obtain the weight values output by the target interval estimation model for the target face with respect to each angle interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction;
obtaining the multiple angle intervals with the largest weight values in that rotation direction as the multiple target angle intervals.
Optionally, determining, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction as the face pose information of the target face comprises:
for any one of the multiple rotation directions, taking the region image as the input of a target angle estimation model, to obtain the pose angle estimate output by the target angle estimation model for the target face in that rotation direction, the target angle estimation model being the trained angle estimation model corresponding to each target angle interval;
taking, according to the weight value corresponding to each target angle interval, the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction;
combining the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
According to a second aspect of the embodiments of the present disclosure, a face pose determination apparatus is provided, the apparatus comprising:
an image acquisition module, configured to obtain a region image containing a target face from an original image;
an angle interval determination module, configured to determine, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction;
a pose determination module, configured to determine, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction, as the face pose information of the target face.
Optionally, the apparatus further comprises:
an angle interval division module, configured to divide the angle range of a target rotation direction into multiple angle intervals, the target rotation direction being any one of the multiple rotation directions;
a first model training module, configured to train a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction; wherein each training datum in the first training data set characterizes the angle interval in which the face in one face image lies in the target rotation direction;
a second model training module, configured to train a preset angle estimation model with a second training data set corresponding to each angle interval, to obtain the trained angle estimation model corresponding to that angle interval; wherein each training datum in the second training data set characterizes the pose angle of the face in one face image in the target rotation direction, and the pose angles corresponding to the multiple face images in one second training data set lie in the same angle interval.
Optionally, the angle interval determination module is configured to:
for any one of the multiple rotation directions, take the region image as the input of a target interval estimation model, to obtain the weight values output by the target interval estimation model for the target face with respect to each angle interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction;
obtain the multiple angle intervals with the largest weight values in that rotation direction as the multiple target angle intervals.
Optionally, the pose determination module is configured to:
for any one of the multiple rotation directions, take the region image as the input of a target angle estimation model, to obtain the pose angle estimate output by the target angle estimation model for the target face in that rotation direction, the target angle estimation model being the trained angle estimation model corresponding to each target angle interval;
according to the weight value corresponding to each target angle interval, take the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction;
combine the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the face pose determination method provided in the first aspect of the embodiments of the present disclosure are implemented.
According to a fourth aspect of the embodiments of the present disclosure, an electronic device is provided, comprising:
a memory on which a computer program is stored;
a processor, configured to execute the computer program in the memory to implement the steps of the face pose determination method provided in the first aspect of the embodiments of the present disclosure.
Through the above technical solution, the disclosure can obtain the region image containing the target face from the original image; determine, through the region image and the trained interval estimation model, the target angle interval of the target face in each rotation direction; and determine, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each of the rotation directions, as the face pose information of the target face. With a structure containing two layers of models, each angle estimation model is first constrained to an angle interval, and the pose information of the face in the image is then determined by the angle estimation model corresponding to the selected interval. This improves the scope of application and the flexibility of face pose detection, avoids jitter of the detected angle values, and improves the accuracy of face pose detection.
Other features and advantages of the disclosure will be described in detail in the detailed description that follows.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the disclosure and constitute part of the specification; together with the following detailed description, they serve to explain the disclosure, but do not limit it. In the drawings:
Fig. 1 is a flowchart of a face pose determination method according to an exemplary embodiment;
Fig. 2 is a flowchart of another face pose determination method according to the embodiment shown in Fig. 1;
Fig. 3 is a flowchart of a method for determining the angle interval of a face according to the embodiment shown in Fig. 2;
Fig. 4 is a flowchart of a method for determining face pose information according to the embodiment shown in Fig. 2;
Fig. 5 is a block diagram of a face pose determination apparatus according to an exemplary embodiment;
Fig. 6 is a block diagram of another face pose determination apparatus according to the embodiment shown in Fig. 5;
Fig. 7 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed description of embodiments
Exemplary embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a face pose determination method according to an exemplary embodiment. As shown in Fig. 1, the method comprises:
Step 101: obtain a region image containing a target face from an original image.
Illustratively, an existing face recognition algorithm, for example the SeetaFace face recognition engine or the MTCNN (Multi-task Cascaded Convolutional Networks) algorithm, can be used to locate the target face in the original image, and the rectangular picture region occupied by the target face is then cropped out as the region image. The original image can be a picture or a frame of a video.
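As a concrete illustration of step 101, the following Python sketch crops the face region with OpenCV's bundled Haar cascade detector, used here only as a stand-in for the SeetaFace or MTCNN detectors named above; the function name and the largest-face heuristic are assumptions of this sketch, not requirements of the disclosure.

# A minimal sketch of step 101, assuming OpenCV's Haar cascade as a stand-in
# for the face detectors named above.
import cv2

def extract_face_region(original_image_path: str):
    """Detect the largest face in the image and crop its rectangular region."""
    image = cv2.imread(original_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Take the largest detection as the target face.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return image[y:y + h, x:x + w]  # the region image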
Step 102: determine, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction.
Illustratively, the interval estimation model is a machine learning model trained in advance to analyse the angle interval in which the target face lies. In general, the pose angle of a face changes with the movement of the head in three directions (i.e. three rotation directions): turning the head left or right, nodding up or down, and tilting the head left or right. In the embodiments of the disclosure, an interval estimation model and the multiple angle estimation models of the following step 103 can be trained for each rotation direction, and these models are then used to determine the pose angle of the target face in each rotation direction. Specifically, the actual output of the interval estimation model is, for a given rotation direction, a weight value of the target face in the region image for each angle interval of that direction; the weight value can be regarded as characterizing the degree of match between the target face and that angle interval.
Step 103: determine, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each of the rotation directions, as the face pose information of the target face.
Illustratively, the angle range of any one of the three rotation directions described above can be divided into multiple angle intervals, and an angle estimation model is built for each angle interval; the weight value of each target angle interval is then combined with the pose angle estimate output by the corresponding angle estimation model to determine the pose angle of the target face in each rotation direction. It can be understood that the three pose angles determined in the three rotation directions form the face pose information of the target face.
In summary, the disclosure can obtain the region image containing the target face from the original image; determine, through the region image and the trained interval estimation model, the target angle interval of the target face in each rotation direction; and determine, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each of the rotation directions, as the face pose information of the target face. With a structure containing two layers of models, each angle estimation model is first constrained to an angle interval, and the pose angle of the face in the image is then detected by the angle estimation model corresponding to the selected interval to obtain the face pose information. This improves the scope of application and the flexibility of face pose detection, avoids jitter of the detected angle values, and improves the accuracy of face pose detection.
Fig. 2 is a flowchart of another face pose determination method according to the embodiment shown in Fig. 1. As shown in Fig. 2, before step 101 the method further comprises:
Step 104: divide the angle range of a target rotation direction into multiple angle intervals.
The target rotation direction is any one of the multiple rotation directions.
Illustratively, since the process of building the models and determining the pose angle is the same for every rotation direction, the embodiments of the disclosure describe the interval division, model building and pose angle determination of the face pose determination method using only one rotation direction (the target rotation direction). Specifically, based on human physiology, the range over which a face can rotate in each rotation direction is roughly 180 degrees; according to the accuracy required of the face pose detection, this 180-degree range can be divided into multiple angle intervals. For example, with one interval every 30 degrees, the angle range is divided into 6 angle intervals.
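As an illustration of this interval division, the following Python sketch splits a 180-degree range into 30-degree intervals; the symmetric [-90, 90] bounds and the helper name are assumptions of this sketch.

# A minimal sketch of step 104, assuming a symmetric [-90, 90] degree range
# (the 180-degree range mentioned above) and a configurable interval width.
def divide_angle_range(low: float = -90.0, high: float = 90.0,
                       width: float = 30.0):
    """Split [low, high) into consecutive angle intervals of the given width."""
    intervals = []
    start = low
    while start < high:
        intervals.append((start, min(start + width, high)))
        start += width
    return intervals

# divide_angle_range() -> [(-90, -60), (-60, -30), (-30, 0),
#                          (0, 30), (30, 60), (60, 90)]   # 6 intervals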
Step 105: train a preset interval estimation model with the first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction.
Each training datum in the first training data set characterizes the angle interval in which the face in one face image lies in the target rotation direction.
Illustratively, the interval estimation model can be a predetermined neural network model. Before step 101, the image data of a face image and the angle interval in which the face of that image lies in the target rotation direction can be used as one training sample (i.e. one training datum in the first training data set). The neural network model is trained with multiple such training samples to obtain an interval estimation model that can determine the weight value of the face in an image for each angle interval. Alternatively, in another implementation, an interval estimation model that directly outputs the angle interval in which the face in an image lies can be trained.
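The disclosure does not fix a particular network architecture, so the following PyTorch sketch only illustrates, under assumed choices (a small CNN over 64x64 region images, 6 intervals, cross-entropy training), how an interval estimation model of step 105 could be trained to output one weight (logit) per angle interval.

# A minimal PyTorch sketch of step 105; architecture, input size and
# hyperparameters are assumptions, not values given by the disclosure.
import torch
import torch.nn as nn

NUM_INTERVALS = 6

class IntervalEstimator(nn.Module):
    def __init__(self, num_intervals: int = NUM_INTERVALS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_intervals)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))  # one logit (weight) per angle interval

def train_interval_estimator(model, loader, epochs: int = 10):
    """loader yields (region_image, interval_index) pairs for one direction."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, interval_idx in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), interval_idx)
            loss.backward()
            optimizer.step()
    return model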
Step 106: train a preset angle estimation model with the second training data set corresponding to each of the above angle intervals, to obtain the trained angle estimation model corresponding to that angle interval.
Each training datum in the second training data set characterizes the pose angle of the face in one face image in the target rotation direction, and the pose angles corresponding to the multiple face images in one second training data set lie in the same angle interval.
Illustratively, the angle estimation model can also be a predetermined neural network model. In step 106, the image data of a face image and the pose angle of the face of that image in the target rotation direction can be used as one training sample (i.e. one training datum in the second training data set). It should be noted that each angle interval has its own second training data set, and the pose angles corresponding to the multiple face images in that data set lie in the same angle interval. In this way, the multiple trained angle estimation models correspond one-to-one with the multiple angle intervals. For example, when the 180-degree angle range above is divided into 6 angle intervals, 6 identical or different neural network models can be chosen and trained with 6 different training data sets; after training, 6 angle estimation models corresponding to the different angle intervals are obtained.
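Similarly, the following PyTorch sketch illustrates step 106 under assumed choices: one small CNN regressor per angle interval, each trained only on samples whose ground-truth pose angle falls inside that interval.

# A minimal PyTorch sketch of step 106; architecture, loss and hyperparameters
# are assumptions, not values given by the disclosure.
import torch
import torch.nn as nn

class AngleEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # single pose-angle output

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def train_angle_estimators(interval_loaders, epochs: int = 10):
    """interval_loaders[i] yields (region_image, pose_angle) pairs whose
    ground-truth angles all fall inside the i-th angle interval."""
    models = []
    for loader in interval_loaders:
        model, criterion = AngleEstimator(), nn.SmoothL1Loss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            for images, angles in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), angles.float())
                loss.backward()
                optimizer.step()
        models.append(model)
    return models  # one trained regressor per angle interval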
Fig. 3 is a flowchart of the method for determining the angle interval of a face according to the embodiment shown in Fig. 2. As shown in Fig. 3, step 102 comprises:
Step 1021: for any one of the multiple rotation directions, take the region image as the input of the target interval estimation model, to obtain the weight values output by the target interval estimation model for the target face with respect to each angle interval in that rotation direction.
The target interval estimation model is the trained interval estimation model corresponding to that rotation direction.
Step 1022: obtain the multiple angle intervals with the largest weight values in that rotation direction as the multiple target angle intervals.
Still taking the example in which the 180-degree angle range is divided into 6 angle intervals, after the region image is input into the trained interval estimation model, the target interval estimation model outputs 6 weight values, one per angle interval. According to these weight values, a preset number, for example 3, of angle intervals with the largest weight values (i.e. the target angle intervals) can be selected from the 6 angle intervals. In this way, the subsequent processing can detect the pose angle of the target face through the angle estimation models corresponding to these 3 angle intervals.
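A minimal Python sketch of steps 1021-1022 follows, assuming the interval estimator sketched earlier and a preset number of 3 target angle intervals; normalising the output logits with a softmax to obtain the weight values is an assumption of this sketch.

# A minimal sketch of steps 1021-1022 for one rotation direction.
import torch

def select_target_intervals(interval_model, region_image, top_k: int = 3):
    """Return the indices and (softmax-normalised) weights of the top_k
    angle intervals for one rotation direction."""
    with torch.no_grad():
        logits = interval_model(region_image.unsqueeze(0)).squeeze(0)
        weights = torch.softmax(logits, dim=0)          # one weight per interval
    top_weights, top_indices = torch.topk(weights, top_k)
    return top_indices.tolist(), top_weights.tolist()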
Fig. 4 is a flowchart of the method for determining face pose information according to the embodiment shown in Fig. 2. As shown in Fig. 4, step 103 comprises:
Step 1031: for any one of the multiple rotation directions, take the region image as the input of each target angle estimation model, to obtain the pose angle estimate output by that target angle estimation model for the target face in that rotation direction.
The target angle estimation models are the trained angle estimation models corresponding to the above target angle intervals.
Step 1032: according to the weight value corresponding to each target angle interval, take the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction.
Illustratively, a preset number, for example 3, of target angle intervals have been determined in step 102; the region image can be input into each of the corresponding angle estimation models to obtain the 3 pose angle estimates output by these 3 angle estimation models. Then, using the weight value of the angle interval corresponding to the model that produced each pose angle estimate as the weight of that estimate, the weighted average of the 3 pose angle estimates is computed; this weighted average is the pose angle of the target face in that rotation direction.
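A minimal Python sketch of steps 1031-1032 follows, reusing the interval weights from step 102 as fusion weights; the weighted average divides by the sum of the selected weights so the result stays inside the angle range.

# A minimal sketch of steps 1031-1032: the selected angle estimators each
# produce an estimate, and the interval weights are reused as fusion weights.
import torch

def fuse_pose_angle(angle_models, region_image, target_indices, target_weights):
    """Weighted average of the pose-angle estimates from the target intervals."""
    estimates = []
    with torch.no_grad():
        for idx in target_indices:
            estimates.append(angle_models[idx](region_image.unsqueeze(0)).item())
    total_weight = sum(target_weights)
    return sum(w * a for w, a in zip(target_weights, estimates)) / total_weight

# Worked example (assumed numbers): estimates of 18.0, 25.0 and 31.0 degrees
# with weights 0.6, 0.3 and 0.1 give (0.6*18 + 0.3*25 + 0.1*31) / 1.0 = 21.4.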
Step 1033: combine the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
Illustratively, combining the pose angles determined through steps 1031 and 1032 in the multiple rotation directions, for example the three pose angles in the three rotation directions, yields the face pose information of the target face.
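Putting the pieces together, the following sketch composes the helpers sketched above (select_target_intervals and fuse_pose_angle) into step 1033 for the three rotation directions; the yaw/pitch/roll naming and the dictionary layout are assumptions of this sketch.

# A minimal sketch of step 1033, assuming the helpers defined in the sketches
# above and one interval model plus one list of angle models per direction.
def estimate_face_pose(region_image, interval_models, angle_models_by_dir):
    """interval_models and angle_models_by_dir are dicts keyed by direction."""
    pose = {}
    for direction in ("yaw", "pitch", "roll"):
        indices, weights = select_target_intervals(
            interval_models[direction], region_image)
        pose[direction] = fuse_pose_angle(
            angle_models_by_dir[direction], region_image, indices, weights)
    return pose  # the face pose information of the target face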
In conclusion the disclosure can obtain the area image comprising target face from original image;Pass through the region Image and trained interval estimation model determine the target face locating target angle section at each rotation direction;It is logical The area image trained angle estimation model corresponding with the target angle section is crossed, determines the target face above-mentioned every Attitude angle in a rotation direction, the human face posture information as the target face.It can be by the inclusion of the knot of two-layer model Structure after carrying out angular interval constraint to each angle estimation model, then passes through the corresponding angle estimation mould of multiple angular intervals Type detects the attitude angle of face in image, and is merged according to the weight of angular interval to multiple attitude angles, And then human face posture information is obtained, while improving the scope of application and flexibility of human face posture detection, avoid human face posture The problem of angle of arrival value is shaken in detection improves the accuracy of human face posture detection.
Fig. 5 is a block diagram of a face pose determination apparatus according to an exemplary embodiment. As shown in Fig. 5, the apparatus 500 may include:
an image acquisition module 510, configured to obtain a region image containing a target face from an original image;
an angle interval determination module 520, configured to determine, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction;
a pose determination module 530, configured to determine, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each of the rotation directions, as the face pose information of the target face.
Fig. 6 is a block diagram of another face pose determination apparatus according to the embodiment shown in Fig. 5. As shown in Fig. 6, the apparatus 500 further includes:
an angle interval division module 540, configured to divide the angle range of a target rotation direction into multiple angle intervals, the target rotation direction being any one of the multiple rotation directions;
a first model training module 550, configured to train a preset interval estimation model with the first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction; wherein each training datum in the first training data set characterizes the angle interval in which the face in one face image lies in the target rotation direction;
a second model training module 560, configured to train a preset angle estimation model with the second training data set corresponding to each of the above angle intervals, to obtain the trained angle estimation model corresponding to that angle interval; wherein each training datum in the second training data set characterizes the pose angle of the face in one face image in the target rotation direction, and the pose angles corresponding to the multiple face images in one second training data set lie in the same angle interval.
Optionally, the angle interval determination module 520 is configured to:
for any one of the multiple rotation directions, take the region image as the input of the target interval estimation model, to obtain the weight values output by the target interval estimation model for the target face with respect to each angle interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction;
obtain the multiple angle intervals with the largest weight values in that rotation direction as the multiple target angle intervals.
Optionally, the pose determination module 530 is configured to:
for any one of the multiple rotation directions, take the region image as the input of each target angle estimation model, to obtain the pose angle estimate output by the target angle estimation model for the target face in that rotation direction, the target angle estimation models being the trained angle estimation models corresponding to the above target angle intervals;
according to the weight value corresponding to each target angle interval, take the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction;
combine the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
In conclusion the disclosure can obtain the area image comprising target face from original image;Pass through the region Image and trained interval estimation model determine the target face locating target angle section at each rotation direction;It is logical The area image trained angle estimation model corresponding with the target angle section is crossed, determines the target face above-mentioned every Attitude angle in a rotation direction, the human face posture information as the target face.It can be by the inclusion of the knot of two-layer model Structure after carrying out angular interval constraint to each angle estimation model, then passes through the corresponding angle estimation mould of multiple angular intervals Type detects the attitude angle of face in image, and is merged according to the weight of angular interval to multiple attitude angles, And then human face posture information is obtained, while improving the scope of application and flexibility of human face posture detection, avoid human face posture The problem of angle of arrival value is shaken in detection improves the accuracy of human face posture detection.
About the device in above-described embodiment, wherein modules execute the concrete mode of operation in related this method Embodiment in be described in detail, no detailed explanation will be given here.
Fig. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment. As shown in Fig. 7, the electronic device 700 may include a processor 701, a memory 702, a multimedia component 703, an input/output (I/O) interface 704 and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the face pose determination method described above. The memory 702 is configured to store various types of data to support operation on the electronic device 700; such data may include, for example, instructions of any application or method operated on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video and so on. The memory 702 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The multimedia component 703 may include a screen and an audio component; the screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals can be further stored in the memory 702 or sent through the communication component 705. The audio component further includes at least one loudspeaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, which may be a keyboard, a mouse, buttons and the like; these buttons can be virtual buttons or physical buttons. The communication component 705 is configured for wired or wireless communication between the electronic device 700 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the electronic device 700 can be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for executing the face pose determination method described above.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided, for example the memory 702 including program instructions; the program instructions can be executed by the processor 701 of the electronic device 700 to complete the face pose determination method described above.
The preferred embodiments of the disclosure have been described in detail above with reference to the accompanying drawings; however, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, other embodiments that are apparent to those skilled in the art after considering the specification and practising the disclosure also fall within the protection scope of the disclosure.
It should further be noted that the specific technical features described in the above embodiments can be combined in any suitable manner, provided there is no contradiction. Any combination between the various different embodiments of the disclosure is likewise possible and should equally be regarded as content disclosed by the disclosure, as long as it does not depart from the idea of the disclosure. The disclosure is not limited to the precise structures described above, and the scope of the disclosure is limited only by the appended claims.

Claims (10)

1. A face pose determination method, characterized in that the method comprises:
obtaining a region image containing a target face from an original image;
determining, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction;
determining, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction, as the face pose information of the target face.
2. The method according to claim 1, characterized in that before obtaining the region image containing the target face from the original image, the method further comprises:
dividing the angle range of a target rotation direction into multiple angle intervals, the target rotation direction being any one of the multiple rotation directions;
training a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction; wherein each training datum in the first training data set characterizes the angle interval in which the face in one face image lies in the target rotation direction;
training a preset angle estimation model with a second training data set corresponding to each angle interval, to obtain the trained angle estimation model corresponding to that angle interval; wherein each training datum in the second training data set characterizes the pose angle of the face in one face image in the target rotation direction, and the pose angles corresponding to the multiple face images in the second training data set lie in the same angle interval.
3. The method according to claim 2, characterized in that determining, through the region image and the trained interval estimation model, the target angle interval of the target face in each rotation direction comprises:
for any one of the multiple rotation directions, taking the region image as the input of a target interval estimation model, to obtain the weight values output by the target interval estimation model for the target face with respect to each angle interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction;
obtaining the multiple angle intervals with the largest weight values in that rotation direction as the multiple target angle intervals.
4. The method according to claim 3, characterized in that determining, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction as the face pose information of the target face comprises:
for any one of the multiple rotation directions, taking the region image as the input of a target angle estimation model, to obtain the pose angle estimate output by the target angle estimation model for the target face in that rotation direction, the target angle estimation model being the trained angle estimation model corresponding to each target angle interval;
taking, according to the weight value corresponding to each target angle interval, the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction;
combining the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
5. A face pose determination apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to obtain a region image containing a target face from an original image;
an angle interval determination module, configured to determine, through the region image and a trained interval estimation model, the target angle interval of the target face in each rotation direction;
a pose determination module, configured to determine, through the region image and the trained angle estimation model corresponding to the target angle interval, the pose angle of the target face in each rotation direction, as the face pose information of the target face.
6. The apparatus according to claim 5, characterized in that the apparatus further comprises:
an angle interval division module, configured to divide the angle range of a target rotation direction into multiple angle intervals, the target rotation direction being any one of the multiple rotation directions;
a first model training module, configured to train a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction; wherein each training datum in the first training data set characterizes the angle interval in which the face in one face image lies in the target rotation direction;
a second model training module, configured to train a preset angle estimation model with a second training data set corresponding to each angle interval, to obtain the trained angle estimation model corresponding to that angle interval; wherein each training datum in the second training data set characterizes the pose angle of the face in one face image in the target rotation direction, and the pose angles corresponding to the multiple face images in the second training data set lie in the same angle interval.
7. The apparatus according to claim 6, characterized in that the angle interval determination module is configured to:
for any one of the multiple rotation directions, take the region image as the input of a target interval estimation model, to obtain the weight values output by the target interval estimation model for the target face with respect to each angle interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction;
obtain the multiple angle intervals with the largest weight values in that rotation direction as the multiple target angle intervals.
8. The apparatus according to claim 7, characterized in that the pose determination module is configured to:
for any one of the multiple rotation directions, take the region image as the input of a target angle estimation model, to obtain the pose angle estimate output by the target angle estimation model for the target face in that rotation direction, the target angle estimation model being the trained angle estimation model corresponding to each target angle interval;
according to the weight value corresponding to each target angle interval, take the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction;
combine the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
9. A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the steps of the face pose determination method according to any one of claims 1-4 are implemented.
10. An electronic device, characterized by comprising:
a memory on which a computer program is stored;
a processor, configured to execute the computer program in the memory to implement the steps of the face pose determination method according to any one of claims 1-4.
CN201910668548.6A 2019-07-23 2019-07-23 Face pose determination method and device, storage medium and electronic equipment Active CN110427849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910668548.6A CN110427849B (en) 2019-07-23 2019-07-23 Face pose determination method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910668548.6A CN110427849B (en) 2019-07-23 2019-07-23 Face pose determination method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110427849A true CN110427849A (en) 2019-11-08
CN110427849B CN110427849B (en) 2022-02-08

Family

ID=68412040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910668548.6A Active CN110427849B (en) 2019-07-23 2019-07-23 Face pose determination method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110427849B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695438A (en) * 2020-05-20 2020-09-22 北京的卢深视科技有限公司 Head pose estimation method and device
CN112001932A (en) * 2020-09-01 2020-11-27 腾讯科技(深圳)有限公司 Face recognition method and device, computer equipment and storage medium
CN112949576A (en) * 2021-03-29 2021-06-11 北京京东方技术开发有限公司 Attitude estimation method, attitude estimation device, attitude estimation equipment and storage medium
CN113361361A (en) * 2021-05-31 2021-09-07 上海商汤临港智能科技有限公司 Method and device for interacting with passenger, vehicle, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318264A (en) * 2014-10-14 2015-01-28 武汉科技大学 Facial feature point tracking method based on human eye preferential fitting
WO2015029982A1 (en) * 2013-08-29 2015-03-05 日本電気株式会社 Image processing device, image processing method, and program
CN105718868A (en) * 2016-01-18 2016-06-29 中国科学院计算技术研究所 Face detection system and method for multi-pose faces
CN105760836A (en) * 2016-02-17 2016-07-13 厦门美图之家科技有限公司 Multi-angle face alignment method based on deep learning and system thereof and photographing terminal
CN107958439A (en) * 2017-11-09 2018-04-24 北京小米移动软件有限公司 Image processing method and device
CN108876731A (en) * 2018-05-25 2018-11-23 北京小米移动软件有限公司 Image processing method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015029982A1 (en) * 2013-08-29 2015-03-05 日本電気株式会社 Image processing device, image processing method, and program
CN104318264A (en) * 2014-10-14 2015-01-28 武汉科技大学 Facial feature point tracking method based on human eye preferential fitting
CN105718868A (en) * 2016-01-18 2016-06-29 中国科学院计算技术研究所 Face detection system and method for multi-pose faces
CN105760836A (en) * 2016-02-17 2016-07-13 厦门美图之家科技有限公司 Multi-angle face alignment method based on deep learning and system thereof and photographing terminal
CN107958439A (en) * 2017-11-09 2018-04-24 北京小米移动软件有限公司 Image processing method and device
CN108876731A (en) * 2018-05-25 2018-11-23 北京小米移动软件有限公司 Image processing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕绍东 et al.: "Multi-view face detection based on pose estimation", Journal of Beijing Union University *
张洪明 et al.: "In-plane rotated face detection based on skin color model, neural network and face structure model", Chinese Journal of Computers *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695438A (en) * 2020-05-20 2020-09-22 北京的卢深视科技有限公司 Head pose estimation method and device
CN111695438B (en) * 2020-05-20 2023-08-04 合肥的卢深视科技有限公司 Head pose estimation method and device
CN112001932A (en) * 2020-09-01 2020-11-27 腾讯科技(深圳)有限公司 Face recognition method and device, computer equipment and storage medium
CN112001932B (en) * 2020-09-01 2023-10-31 腾讯科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium
CN112949576A (en) * 2021-03-29 2021-06-11 北京京东方技术开发有限公司 Attitude estimation method, attitude estimation device, attitude estimation equipment and storage medium
CN112949576B (en) * 2021-03-29 2024-04-23 北京京东方技术开发有限公司 Attitude estimation method, apparatus, device and storage medium
CN113361361A (en) * 2021-05-31 2021-09-07 上海商汤临港智能科技有限公司 Method and device for interacting with passenger, vehicle, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110427849B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN110427849A (en) Face pose determination method and device, storage medium and electronic equipment
CN106897658B (en) Method and device for identifying human face living body
EP3373202B1 (en) Verification method and system
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
KR101872367B1 (en) Guided fingerprint enrolment based on center of attention point
CN105393281B (en) Gesture decision maker and method, gesture operation device
CN105550637B (en) Profile independent positioning method and device
CN103718175B (en) Detect equipment, method and the medium of subject poses
CN108229318A (en) The training method and device of gesture identification and gesture identification network, equipment, medium
RU2708027C1 (en) Method of transmitting motion of a subject from a video to an animated character
CN110111418A (en) Create the method, apparatus and electronic equipment of facial model
CN108550176A (en) Image processing method, equipment and storage medium
CN109598234A (en) Critical point detection method and apparatus
CN106295533A (en) Optimization method, device and the camera terminal of a kind of image of autodyning
CN108229324A (en) Gesture method for tracing and device, electronic equipment, computer storage media
CN109325456A (en) Target identification method, device, target identification equipment and storage medium
CN111401318B (en) Action recognition method and device
CN105528078B (en) The method and device of controlling electronic devices
CN106548201A (en) The training method of convolutional neural networks, image-recognizing method and device
CN111369428A (en) Virtual head portrait generation method and device
CN110456904B (en) Augmented reality glasses eye movement interaction method and system without calibration
CN114972958B (en) Key point detection method, neural network training method, device and equipment
CN105205482B (en) Fast face feature recognition and posture evaluation method
CN106295530A (en) Face identification method and device
WO2021175020A1 (en) Face image key point positioning method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210303

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.