Summary of the invention
To overcome the problems in the related art, the purpose of this disclosure is to provide a face pose determination method and apparatus, a storage medium, and an electronic device.
To achieve the above goal, according to a first aspect of the embodiments of the present disclosure, a face pose determination method is provided. The method includes:
obtaining a region image containing a target face from an original image;
determining, through the region image and a trained interval estimation model, a target angular interval in which the target face is located in each rotation direction; and
determining, through the region image and the trained angle estimation model corresponding to the target angular interval, a pose angle of the target face in each rotation direction, as face pose information of the target face.
Optionally, before obtaining the region image containing the target face from the original image, the method further includes:
dividing the angular range of a target rotation direction into multiple angular intervals, the target rotation direction being any one of the multiple rotation directions;
training a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction, wherein each piece of training data in the first training data set characterizes the angular interval in which the face in a face image is located in the target rotation direction; and
training a preset angle estimation model with a second training data set corresponding to each angular interval, to obtain the trained angle estimation model corresponding to each angular interval, wherein each piece of training data in the second training data set characterizes the pose angle of the face in a face image in the target rotation direction, and the pose angles corresponding to the multiple face images in the second training data set fall within the same angular interval.
Optionally, determining, through the region image and the trained interval estimation model, the target angular interval in which the target face is located in each rotation direction includes:
for any rotation direction among the multiple rotation directions, taking the region image as the input of a target interval estimation model, to obtain a weight value output by the target interval estimation model for the target face with respect to each angular interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction; and
obtaining the multiple angular intervals having the largest weight values in that rotation direction as multiple target angular intervals.
Optionally, determining, through the region image and the trained angle estimation model corresponding to each target angular interval, the pose angle of the target face in each rotation direction as the face pose information of the target face includes:
for any rotation direction among the multiple rotation directions, taking the region image as the input of a target angle estimation model, to obtain a pose angle estimate of the target face in that rotation direction output by the target angle estimation model, the target angle estimation model being the trained angle estimation model corresponding to each target angular interval;
taking, according to the weight value corresponding to each target angular interval, the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction; and
combining the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
According to a second aspect of the embodiments of the present disclosure, a face pose determination apparatus is provided. The apparatus includes:
an image acquisition module, configured to obtain a region image containing a target face from an original image;
an angular interval determining module, configured to determine, through the region image and a trained interval estimation model, a target angular interval in which the target face is located in each rotation direction; and
a pose determining module, configured to determine, through the region image and the trained angle estimation model corresponding to each target angular interval, a pose angle of the target face in each rotation direction, as face pose information of the target face.
Optionally, the apparatus further includes:
an angular interval dividing module, configured to divide the angular range of a target rotation direction into multiple angular intervals, the target rotation direction being any one of the multiple rotation directions;
a first model training module, configured to train a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction, wherein each piece of training data in the first training data set characterizes the angular interval in which the face in a face image is located in the target rotation direction; and
a second model training module, configured to train a preset angle estimation model with a second training data set corresponding to each angular interval, to obtain the trained angle estimation model corresponding to each angular interval, wherein each piece of training data in the second training data set characterizes the pose angle of the face in a face image in the target rotation direction, and the pose angles corresponding to the multiple face images in the second training data set fall within the same angular interval.
Optionally, the angular interval determining module is configured to:
for any rotation direction among the multiple rotation directions, take the region image as the input of a target interval estimation model, to obtain a weight value output by the target interval estimation model for the target face with respect to each angular interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction; and
obtain the multiple angular intervals having the largest weight values in that rotation direction as multiple target angular intervals.
Optionally, the pose determining module is configured to:
for any rotation direction among the multiple rotation directions, take the region image as the input of a target angle estimation model, to obtain a pose angle estimate of the target face in that rotation direction output by the target angle estimation model, the target angle estimation model being the trained angle estimation model corresponding to each target angular interval;
take, according to the weight value corresponding to each target angular interval, the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction; and
combine the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the steps of the face pose determination method provided in the first aspect of the embodiments of the present disclosure are implemented.
According to a fourth aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a memory, on which a computer program is stored; and
a processor, configured to execute the computer program in the memory, to implement the steps of the face pose determination method provided in the first aspect of the embodiments of the present disclosure.
Through the above technical solutions, the present disclosure can obtain a region image containing a target face from an original image; determine, through the region image and a trained interval estimation model, a target angular interval in which the target face is located in each rotation direction; and determine, through the region image and the trained angle estimation model corresponding to each target angular interval, the pose angle of the target face in each rotation direction, as face pose information of the target face. With a two-layer model structure, each angle estimation model is first constrained to an angular interval, and the pose of the face in the image is then determined by the angle estimation model corresponding to that interval. This improves the applicability and flexibility of face pose detection, avoids the problem of angle-value jitter in face pose detection, and improves the accuracy of face pose detection.
Other features and advantages of the present disclosure will be described in detail in the following detailed description.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a face pose determination method according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps.
Step 101: obtain a region image containing a target face from an original image.
Illustratively, an existing face detection algorithm, for example the SeetaFace face recognition engine or the MTCNN (Multi-task Cascaded Convolutional Networks) algorithm, can be used to locate the target face in the original image, and the rectangular picture region occupied by the target face is then cropped out as the region image. The original image may be a picture or a frame of a video.
Step 102: determine, through the region image and a trained interval estimation model, a target angular interval in which the target face is located in each rotation direction.
Illustratively, the interval estimation model is a machine learning model trained in advance to analyze the angular interval in which the target face is located. In general, the pose angle of a face changes with the movement of the head in three directions (i.e. three rotation directions): turning the head left or right, nodding up or down, and tilting the head left or right. In the embodiments of the present disclosure, the interval estimation model and the multiple angle estimation models in the following step 103 can be trained separately for each rotation direction, and the pose angle of the target face in each rotation direction is then determined by these models. Specifically, the actual output of the interval estimation model is a weight value of the target face in the region image for each angular interval in a certain rotation direction, and the weight value can be regarded as characterizing the degree of match between the target face and each angular interval.
Step 103: determine, through the region image and the trained angle estimation model corresponding to each target angular interval, the pose angle of the target face in each of the above rotation directions, as face pose information of the target face.
Illustratively, the angular range of any one of the three rotation directions described above can be divided into multiple angular intervals, and an angle estimation model is established for each angular interval; the pose angle of the target face in each of the above rotation directions is then determined by combining the weight value of each target angular interval with the pose angle estimate output by the corresponding angle estimation model. It can be understood that the three pose angles determined in the above three rotation directions can form the face pose information of the target face.
In conclusion the disclosure can obtain the area image comprising target face from original image;Pass through the region
Image and trained interval estimation model determine the target face locating target angle section at each rotation direction;It is logical
The area image trained angle estimation model corresponding with the target angle section is crossed, determines the target face above-mentioned every
Attitude angle in a rotation direction, the human face posture information as the target face.It can be by the inclusion of the knot of two-layer model
Structure after carrying out angular interval constraint to each angle estimation model, then passes through the corresponding angle estimation model pair of angular interval
The attitude angle of face is detected in image, and then obtains human face posture information, in the applicable model for improving human face posture detection
While enclosing with flexibility, avoids the problem that angle of arrival value shake in human face posture detection, improve the essence of human face posture detection
Exactness.
Fig. 2 is a flowchart of another face pose determination method according to the embodiment illustrated in Fig. 1. As shown in Fig. 2, before the above step 101, the method further includes the following steps.
Step 104: divide the angular range of a target rotation direction into multiple angular intervals.
Here, the target rotation direction is any one of the multiple rotation directions.
Illustratively, since the process of building the models and determining the pose angle is the same in each rotation direction, the embodiments of the present disclosure illustrate the face pose determination method only by the process of dividing the angular intervals, building the models, and determining the pose angle in one rotation direction (the target rotation direction). Specifically, based on human physiology, the angular range through which a face can rotate in each rotation direction is approximately 180 degrees; according to the accuracy required for face pose detection, this 180-degree range can be divided into multiple angular intervals. For example, with one interval every 30 degrees, the angular range is divided into 6 angular intervals.
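A minimal sketch of this division is shown below, assuming the 180-degree range is taken as -90 to +90 degrees; that coordinate convention and the function name are illustrative choices, not statements from the disclosure.

```python
def divide_angular_range(low=-90.0, high=90.0, step=30.0):
    """Split [low, high) into fixed-width angular intervals, e.g. 6 intervals of 30 degrees."""
    edges = [low + i * step for i in range(int(round((high - low) / step)) + 1)]
    return list(zip(edges[:-1], edges[1:]))


intervals = divide_angular_range()
print(intervals)  # [(-90.0, -60.0), (-60.0, -30.0), ..., (30.0, 60.0), (60.0, 90.0)]
```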
Step 105: train a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction.
Here, each piece of training data in the first training data set characterizes the angular interval in which the face in a face image is located in the target rotation direction.
Illustratively, the interval estimation model may be a predetermined neural network model. Before the above step 101, the image data of a face image and the angular interval in which the face in that face image is located in the target rotation direction can be taken as one piece of training sample data (i.e. one piece of training data in the first training data set). The neural network model is trained with multiple pieces of such training sample data, to obtain an interval estimation model capable of determining the weight value of the face in an image for each angular interval. Alternatively, in another implementation, an interval estimation model that directly determines the angular interval corresponding to the face in an image can also be trained.
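A minimal PyTorch sketch of one possible interval estimation model and its training on the first training data set is given below; the small convolutional backbone, the softmax weight values, the Adam optimizer, and the loader interface are illustrative assumptions, since the disclosure only requires a neural network that maps a region image to per-interval weight values.

```python
import torch
import torch.nn as nn

NUM_INTERVALS = 6  # e.g. a 180-degree range divided into 30-degree angular intervals


class IntervalEstimationModel(nn.Module):
    """For one rotation direction, predicts a weight value per angular interval."""

    def __init__(self, num_intervals=NUM_INTERVALS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_intervals)

    def forward(self, region_image):  # region_image: (batch, 3, H, W)
        return self.classifier(self.features(region_image))  # raw logits per interval

    def interval_weights(self, region_image):
        return torch.softmax(self.forward(region_image), dim=1)  # weight values per interval


def train_interval_model(model, first_training_loader, epochs=10):
    """first_training_loader yields (region_image, interval_index) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, interval_labels in first_training_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), interval_labels)
            loss.backward()
            optimizer.step()
    return model
```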
Step 106: train a preset angle estimation model with a second training data set corresponding to each of the above angular intervals, to obtain the trained angle estimation model corresponding to each of the above angular intervals.
Here, each piece of training data in the second training data set characterizes the pose angle of the face in a face image in the target rotation direction, and the pose angles corresponding to the multiple face images in the second training data set fall within the same angular interval.
Illustratively, the angle estimation model may also be a predetermined neural network model. In step 106, the image data of a face image and the pose angle of the face in that face image in the target rotation direction can be taken as one piece of training sample data (i.e. one piece of training data in the second training data set). It should be noted that each angular interval corresponds to one second training data set, and the pose angles corresponding to the multiple face images in that second training data set fall within the same angular interval. In this way, the multiple trained angle estimation models correspond one-to-one with the multiple angular intervals. For example, when the 180-degree angular range described above is divided into 6 angular intervals, 6 identical or different neural network models can be selected and trained with 6 different training data sets. After training, 6 angle estimation models corresponding to the 6 different angular intervals are obtained.
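Continuing the same assumed PyTorch setup, one possible form of the per-interval angle estimation models is a set of small regressors, each trained only on the second training data set whose pose angles fall inside its interval; the backbone, the L1 loss, and the dictionary layout are assumptions of this sketch.

```python
import torch
import torch.nn as nn


class AngleEstimationModel(nn.Module):
    """Regresses a single pose angle; one instance is trained per angular interval."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Linear(32, 1)

    def forward(self, region_image):  # region_image: (batch, 3, H, W)
        return self.regressor(self.features(region_image)).squeeze(1)  # pose angle estimate


def train_angle_models(second_training_loaders, epochs=10):
    """second_training_loaders maps each interval index to a loader of
    (region_image, pose_angle) pairs whose angles lie inside that interval."""
    angle_models = {}
    for interval_index, loader in second_training_loaders.items():
        model = AngleEstimationModel()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        criterion = nn.L1Loss()
        model.train()
        for _ in range(epochs):
            for images, pose_angles in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), pose_angles.float())
                loss.backward()
                optimizer.step()
        angle_models[interval_index] = model
    return angle_models
```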
Fig. 3 is a flowchart of a method for determining the angular interval in which a face is located, according to the embodiment illustrated in Fig. 2. As shown in Fig. 3, the above step 102 includes the following steps.
Step 1021: for any rotation direction among the multiple rotation directions, take the region image as the input of a target interval estimation model, to obtain the weight value output by the target interval estimation model for the target face with respect to each angular interval in that rotation direction.
Here, the target interval estimation model is the trained interval estimation model corresponding to that rotation direction.
Step 1022: obtain the multiple angular intervals having the largest weight values in that rotation direction, as multiple target angular intervals.
Still taking the case where the above 180-degree angular range is divided into 6 angular intervals as an example, after the region image is input into the trained target interval estimation model, the target interval estimation model outputs 6 weight values, each weight value corresponding to one angular interval. A preset number, for example 3, of the angular intervals with the largest weight values (i.e. the target angular intervals) can then be selected from the 6 angular intervals according to the weight values. In this way, it is also determined that, in subsequent processing, the pose angle of the target face will be detected by the angle estimation models corresponding to these 3 angular intervals.
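A small sketch of this selection step using torch.topk over the 6 weight values; the preset number 3 follows the example above, and the function name is an assumption.

```python
import torch


def select_target_intervals(interval_weights, preset_number=3):
    """interval_weights: tensor of shape (num_intervals,), e.g. 6 softmax weight values.
    Returns the indices and weights of the angular intervals with the largest weight values."""
    top_weights, top_indices = torch.topk(interval_weights, k=preset_number)
    return top_indices.tolist(), top_weights


weights = torch.tensor([0.05, 0.15, 0.40, 0.30, 0.07, 0.03])
indices, top_weights = select_target_intervals(weights)
print(indices)  # [2, 3, 1] -- the three target angular intervals
```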
Fig. 4 is a flowchart of a method for determining face pose information, according to the embodiment illustrated in Fig. 2. As shown in Fig. 4, the above step 103 includes the following steps.
Step 1031: for any rotation direction among the multiple rotation directions, take the region image as the input of a target angle estimation model, to obtain the pose angle estimate of the target face in that rotation direction output by the target angle estimation model.
Here, the target angle estimation model is the trained angle estimation model corresponding to each of the above target angular intervals.
Step 1032: according to the weight value corresponding to each of the above target angular intervals, take the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction.
Illustratively, with the preset number of angle estimation models determined in step 102 being, for example, 3, the region image can be input into each of these angle estimation models to obtain the 3 pose angle estimates output by the 3 angle estimation models. Then, taking the weight value of the angular interval corresponding to the model that output each pose angle estimate as the weight of that pose angle estimate, the weighted average of the 3 pose angle estimates is calculated; this weighted average is the pose angle of the target face in that rotation direction.
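A minimal sketch of this weighted fusion for one rotation direction; normalizing by the sum of the selected weights is an assumption about how the weighted average is formed, not a statement from the disclosure.

```python
def fuse_pose_angle(angle_estimates, interval_weights):
    """Weighted average of the pose angle estimates from the selected angle estimation models.
    angle_estimates:  e.g. [31.2, 27.8, 35.5] degrees from the 3 selected models
    interval_weights: matching interval weight values, e.g. [0.40, 0.30, 0.15]"""
    total = sum(interval_weights)
    return sum(a * w for a, w in zip(angle_estimates, interval_weights)) / total


print(round(fuse_pose_angle([31.2, 27.8, 35.5], [0.40, 0.30, 0.15]), 2))  # 30.76
```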
Step 1033: combine the pose angles of the target face in the multiple rotation directions, to obtain the face pose information.
Illustratively, combining the multiple, for example three, pose angles in the three rotation directions determined through the above steps 1031 and 1032 yields the face pose information of the target face.
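Tying the pieces together, the sketch below walks the full per-direction flow (steps 1021 to 1033) under the same assumed model layout as the earlier sketches: interval_models maps each rotation direction (e.g. yaw, pitch, roll, a common naming convention not mandated by the disclosure) to its interval estimation model, and angle_models maps each direction to a dictionary of per-interval angle estimation models.

```python
import torch


def determine_face_pose(region_image, interval_models, angle_models, preset_number=3):
    """region_image: tensor of shape (1, 3, H, W); returns the face pose information."""
    pose_info = {}
    for direction, interval_model in interval_models.items():
        weights = interval_model.interval_weights(region_image)[0]        # step 1021
        top_weights, top_indices = torch.topk(weights, k=preset_number)   # step 1022
        estimates = torch.stack(
            [angle_models[direction][i.item()](region_image)[0] for i in top_indices]
        )                                                                  # step 1031
        fused = (estimates * top_weights).sum() / top_weights.sum()       # step 1032
        pose_info[direction] = float(fused)
    return pose_info  # e.g. {'yaw': ..., 'pitch': ..., 'roll': ...}       # step 1033
```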
In conclusion the disclosure can obtain the area image comprising target face from original image;Pass through the region
Image and trained interval estimation model determine the target face locating target angle section at each rotation direction;It is logical
The area image trained angle estimation model corresponding with the target angle section is crossed, determines the target face above-mentioned every
Attitude angle in a rotation direction, the human face posture information as the target face.It can be by the inclusion of the knot of two-layer model
Structure after carrying out angular interval constraint to each angle estimation model, then passes through the corresponding angle estimation mould of multiple angular intervals
Type detects the attitude angle of face in image, and is merged according to the weight of angular interval to multiple attitude angles,
And then human face posture information is obtained, while improving the scope of application and flexibility of human face posture detection, avoid human face posture
The problem of angle of arrival value is shaken in detection improves the accuracy of human face posture detection.
Fig. 5 is a block diagram of a face pose determination apparatus according to an exemplary embodiment. As shown in Fig. 5, the apparatus 500 may include:
an image acquisition module 510, configured to obtain a region image containing a target face from an original image;
an angular interval determining module 520, configured to determine, through the region image and a trained interval estimation model, a target angular interval in which the target face is located in each rotation direction; and
a pose determining module 530, configured to determine, through the region image and the trained angle estimation model corresponding to each target angular interval, a pose angle of the target face in each of the above rotation directions, as face pose information of the target face.
Fig. 6 is a block diagram of another face pose determination apparatus according to the embodiment illustrated in Fig. 5. As shown in Fig. 6, the apparatus 500 further includes:
an angular interval dividing module 540, configured to divide the angular range of a target rotation direction into multiple angular intervals, the target rotation direction being any one of the multiple rotation directions;
a first model training module 550, configured to train a preset interval estimation model with a first training data set corresponding to the target rotation direction, to obtain the trained interval estimation model corresponding to the target rotation direction, wherein each piece of training data in the first training data set characterizes the angular interval in which the face in a face image is located in the target rotation direction; and
a second model training module 560, configured to train a preset angle estimation model with a second training data set corresponding to each of the above angular intervals, to obtain the trained angle estimation model corresponding to each of the above angular intervals, wherein each piece of training data in the second training data set characterizes the pose angle of the face in a face image in the target rotation direction, and the pose angles corresponding to the multiple face images in the second training data set fall within the same angular interval.
Optionally, the angular interval determining module 520 is configured to:
for any rotation direction among the multiple rotation directions, take the region image as the input of a target interval estimation model, to obtain the weight value output by the target interval estimation model for the target face with respect to each angular interval in that rotation direction, the target interval estimation model being the trained interval estimation model corresponding to that rotation direction; and
obtain the multiple angular intervals having the largest weight values in that rotation direction as multiple target angular intervals.
Optionally, the pose determining module 530 is configured to:
for any rotation direction among the multiple rotation directions, take the region image as the input of a target angle estimation model, to obtain a pose angle estimate of the target face in that rotation direction output by the target angle estimation model, the target angle estimation model being the trained angle estimation model corresponding to each of the above target angular intervals;
take, according to the weight value corresponding to each of the above target angular intervals, the weighted average of the multiple pose angle estimates as the pose angle of the target face in that rotation direction; and
combine the pose angles of the target face in the multiple rotation directions to obtain the face pose information.
In conclusion the disclosure can obtain the area image comprising target face from original image;Pass through the region
Image and trained interval estimation model determine the target face locating target angle section at each rotation direction;It is logical
The area image trained angle estimation model corresponding with the target angle section is crossed, determines the target face above-mentioned every
Attitude angle in a rotation direction, the human face posture information as the target face.It can be by the inclusion of the knot of two-layer model
Structure after carrying out angular interval constraint to each angle estimation model, then passes through the corresponding angle estimation mould of multiple angular intervals
Type detects the attitude angle of face in image, and is merged according to the weight of angular interval to multiple attitude angles,
And then human face posture information is obtained, while improving the scope of application and flexibility of human face posture detection, avoid human face posture
The problem of angle of arrival value is shaken in detection improves the accuracy of human face posture detection.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment. As shown in Fig. 7, the electronic device 700 may include a processor 701, a memory 702, a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700, to complete all or part of the steps of the above face pose determination method. The memory 702 is configured to store various types of data to support operation on the electronic device 700; such data may include, for example, instructions of any application program or method operated on the electronic device 700, as well as application-related data such as contact data, messages sent and received, pictures, audio, video, and so on. The memory 702 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may be further stored in the memory 702 or sent through the communication component 705. The audio component further includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules; the other interface modules may be a keyboard, a mouse, buttons, and the like, which may be virtual buttons or physical buttons. The communication component 705 is configured for wired or wireless communication between the electronic device 700 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above face pose determination method.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided, for example the memory 702 including program instructions, and the above program instructions can be executed by the processor 701 of the electronic device 700 to complete the above face pose determination method.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present disclosure, other embodiments of the present disclosure will be readily apparent to those skilled in the art after considering the specification and practicing the disclosure, and such embodiments also fall within the protection scope of the present disclosure.
It should be further noted that the specific technical features described in the above specific embodiments may be combined in any suitable manner, provided that no contradiction arises. Likewise, any combination may be made between the various different embodiments of the present disclosure, and such combinations should also be regarded as content disclosed by the present disclosure, as long as they do not depart from the idea of the present disclosure.
The present disclosure is not limited to the precise structures described above, and the scope of the present disclosure is limited only by the appended claims.