CN109840476A - Face shape detection method and terminal device - Google Patents
Face shape detection method and terminal device
- Publication number: CN109840476A (application number CN201811635950.6A)
- Authority: CN (China)
- Prior art keywords: image, target object, block, profile information, terminal device
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the invention disclose a face shape detection method and a terminal device, relating to the field of terminal technology, and aim to solve the problem that existing face shape detection techniques have poor robustness. The method comprises: obtaining a first image and a second image of a target object, where the first image is a depth image, the second image is a two-dimensional image, and the target object includes a human face; obtaining profile information of the target object according to the first image; processing the target object in the second image according to the profile information, to obtain a third image; and generating a face shape detection result according to feature information of the target object in the third image. The scheme is particularly applicable to face shape detection scenarios.
Description
Technical field
Embodiments of the present invention relate to the field of terminal technology, and in particular to a face shape detection method and a terminal device.
Background
With the continuous development of terminal technology, terminal devices are used more and more widely. For example, face attribute analysis is required in many scenarios, and face shape detection is one of the common algorithms in face attribute analysis.
At present, the more commonly used face shape detection methods include template matching based on geometric local features, and face shape detection based on parameter optimization. Both methods need to model features of the input two-dimensional face image. However, during the shooting of a two-dimensional image, factors such as illumination and background (strong light or low light can blur the portrait region, and a portrait-like background can interfere with the portrait region) may weaken the portrait edge features, which greatly affects feature modeling. As a result, existing face shape detection techniques have poor robustness.
Summary of the invention
Embodiments of the present invention provide a face shape detection method and a terminal device, to solve the problem that existing face shape detection techniques have poor robustness.
In order to solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a face shape detection method, comprising: obtaining a first image and a second image of a target object, where the first image is a depth image, the second image is a two-dimensional image, and the target object includes a human face; obtaining profile information of the target object according to the first image; processing the target object in the second image according to the profile information, to obtain a third image; and generating a face shape detection result according to feature information of the target object in the third image.
In a second aspect, an embodiment of the present invention provides a terminal device, comprising: an obtaining module, a processing module, and a generation module. The obtaining module is configured to obtain a first image and a second image of a target object, where the first image is a depth image, the second image is a two-dimensional image, and the target object includes a human face, and to obtain profile information of the target object according to the first image. The processing module is configured to process the target object in the second image according to the profile information obtained by the obtaining module, to obtain a third image. The generation module is configured to generate a face shape detection result according to feature information of the target object in the third image obtained by the processing module.
In a third aspect, an embodiment of the present invention provides a terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the face shape detection method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the face shape detection method in the first aspect.
In embodiments of the present invention, the terminal device can first obtain profile information of the target object from the depth image that contains the target object, then process the two-dimensional image that contains the target object according to the profile information to obtain a third image, and finally perform face shape detection on the face region in the third image to generate a face shape detection result. With this solution, although factors such as illumination and a portrait-like background greatly affect the portrait features in a two-dimensional image, they have almost no effect on the portrait features of a depth image. Therefore, processing the second image in combination with the first image, and then performing face shape detection on the resulting third image, yields a more accurate detection result than performing face shape detection directly on the second image, and thus improves the robustness of face shape detection.
Brief description of the drawings
Fig. 1 is an architecture diagram of a possible Android operating system according to an embodiment of the present invention;
Fig. 2 is a flowchart of a face shape detection method according to an embodiment of the present invention;
Fig. 3 is a structural schematic diagram of a terminal device according to an embodiment of the present invention;
Fig. 4 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth" and the like in the description and claims of this specification are used to distinguish different objects, rather than to describe a particular order of the objects. For example, the first image, the second image, the third image, the fourth image, etc. are used to distinguish different images, rather than to describe a particular order of the images.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferable or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present a related concept in a concrete manner.
In the description of the embodiments of the present invention, unless otherwise indicated, "plurality" means two or more. For example, a plurality of processing units means two or more processing units; a plurality of elements means two or more elements; and so on.
An embodiment of the present invention provides a face shape detection method. A terminal device can first obtain profile information of a target object from a depth image that contains the target object, then process a two-dimensional image that contains the target object according to the profile information to obtain a third image, and finally perform face shape detection on the face region in the third image to generate a face shape detection result. With this solution, although factors such as illumination and a portrait-like background greatly affect the portrait features in a two-dimensional image, they have little effect on the portrait features of a depth image. Therefore, processing the second image in combination with the first image, and then performing face shape detection on the resulting third image, yields a more accurate detection result than performing face shape detection directly on the second image, thereby improving the robustness of face shape detection.
The software environment to which the face shape detection method provided by the embodiments of the present invention is applied is introduced below, taking the Android operating system as an example.
Fig. 1 shows an architecture diagram of a possible Android operating system according to an embodiment of the present invention. In Fig. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime library layer, and a kernel layer (which may specifically be a Linux kernel layer).
The application layer includes all application programs in the Android operating system (including system applications and third-party applications).
The application framework layer is the framework of applications. Developers can develop applications based on the application framework layer while following its development principles.
The system runtime library layer includes libraries (also called system libraries) and the Android runtime environment. The libraries mainly provide the various resources needed by the Android operating system. The Android runtime environment provides the software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and belongs to the lowest level of the Android software hierarchy. Based on the Linux kernel, the kernel layer provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiments of the present invention, developers can develop, based on the system architecture of the Android operating system shown in Fig. 1, a software program that implements the face shape detection method provided by the embodiments of the present invention, so that the method can run on the Android operating system shown in Fig. 1. That is, a processor or a terminal can implement the face shape detection method provided by the embodiments of the present invention by running the software program on the Android operating system.
The terminal device in the embodiments of the present invention may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a laptop, a palmtop computer, an in-vehicle terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), etc.; the non-mobile terminal device may be a personal computer (PC), a television (TV), an automated teller machine, a self-service machine, etc.; the embodiments of the present invention are not specifically limited.
The executing subject of the face shape detection method provided by the embodiments of the present invention may be the above-mentioned terminal device (including mobile and non-mobile terminal devices), or a functional module and/or functional entity in the terminal device that can implement the method, which may be determined according to actual use requirements; the embodiments of the present invention are not limited. The face shape detection method provided by the embodiments of the present invention is described below by way of example, taking a terminal device as the executing subject.
An embodiment of the present invention provides a face shape detection method. Referring to Fig. 2, the method may include the following steps 201 to 204.
Step 201: the terminal device obtains a first image and a second image of a target object.
The first image is a depth image, the second image is a two-dimensional image, and the target object includes a face region.
Specifically, the first image is a depth image of a target image, and the second image is a two-dimensional image of the same target image; the target image contains the target object, and the target object contains a face region.
The target image can be understood as the picture content of the first image and the second image; that is, the first image and the second image are images of the same scene and the same content.
A depth image is an image that stores the depth information of the target image, while a two-dimensional image stores its two-dimensional information. A two-dimensional image is a flat image that contains no depth information, and includes a grayscale image, a color image, etc.; the embodiments of the present invention are not limited. A color image may in turn be an RGB (Red Green Blue) color image, a YUV (one luminance and two chrominance components) color image, etc.; the embodiments of the invention are not limited. At present, the most common depth image and two-dimensional image are, in general, the depth map (Depth Map) and the RGB color image, together also referred to as an RGB-D image.
The terminal device can obtain the depth image and the two-dimensional image externally (for example, by downloading from a network or obtaining from another device), or can shoot them with cameras on the terminal device (usually a combination of two cameras: one for shooting the depth image, such as an infrared camera or a structured-light camera, and one for shooting the two-dimensional image, such as a color camera). It should be noted that the depth image and the two-dimensional image in the embodiments of the present invention are registered, i.e., the coordinates of each pixel in the depth image and in the two-dimensional image correspond one-to-one; for the specific registration process, reference may be made to existing related techniques, which will not be detailed here.
The target object includes a face region, that is, the facial area of a person. The target object may consist of the face region alone; or, in addition to the face region, it may also include other regions, such as a neck region, a torso region, a limb region, etc.; the embodiments of the present invention are not limited.
In the embodiments of the present invention, the depth image includes the depth information of the face region, and the two-dimensional image includes the two-dimensional information of the face region.
Step 202: the terminal device obtains profile information of the target object according to the first image.
Optionally, the profile information may be a contour image, for example an image in which the background other than the target object has been removed. The profile information includes at least the profile information of the face region, and may further include at least one of the following: profile information of a head region, profile information of a neck region, profile information of a torso region, profile information of a limb region, etc.; the embodiments of the present invention are not limited.
Optionally, the terminal device can obtain each pixel of the boundary contour of the target object from information such as the depth value and the depth gradient value of each pixel in the depth image, and then map each pixel of the boundary contour to the two-dimensional image (i.e., pixel matching), thereby obtaining the profile information of the target object in the two-dimensional image (the contour image, i.e., the region enclosed by the pixels on the contour boundary). For the specific implementation, reference may be made to existing related techniques; the embodiments of the present invention are not limited.
Optionally, the terminal device can obtain the profile information of the target object by performing normalization processing on the depth image.
Exemplarily, step 202 may specifically be implemented by the following step 202a.
Step 202a: the terminal device performs normalization processing on the first image to obtain the profile information of the target object.
In the embodiments of the present invention, performing normalization processing on the first image may also be called performing mapping processing on the first image, i.e., converting the depth image into a grayscale image. The mapping formula is as follows:
t(i,j) = d(i,j) / range(d) × G
where d(i,j) is the depth value of the pixel (i, j) in the depth map, t(i,j) is the pixel value of the pixel (i, j) in the grayscale image obtained after the depth map is mapped, G is the maximum gray level of the mapped image (for example, 4096), and range(d) is the maximum depth value among the pixels of the depth map. t(i,j) is an integer; if the t(i,j) calculated by the above formula is not an integer, rounding is required (the above formula does not show the rounding step; for the specific rounding process, reference may be made to existing related techniques, which will not be detailed here).
However, after the depth image is mapped, the value range of the pixel values is generally large (for example, 0–4096), and the extreme values are densely clustered and carry little useful information. Therefore, to simplify subsequent processing, in the embodiments of the present invention the mapped image is truncated to remove the useless information and limit the value range of the pixel values to 0–255. The truncation formula is as follows:
p(i,j) = 0, if t(i,j) < α; p(i,j) = 255, if t(i,j) > β; p(i,j) = 255 × (t(i,j) − α) / (β − α), otherwise
where p(i,j) is the pixel value of the pixel (i, j) in the grayscale image obtained after truncation, and α and β are empirical values that those skilled in the art can obtain from experience.
The terminal device obtains the profile information of the target object after performing normalization processing on the first image according to the above method.
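The mapping and truncation steps just described can be sketched as follows; the gray-level ceiling `g` and the cut points `alpha` and `beta` are assumed illustrative values, since the patent treats α and β as empirical constants.

```python
import numpy as np

def normalize_depth(depth, g=4096, alpha=500, beta=3500):
    """Map a depth image to gray levels, then truncate to 0-255.

    g, alpha and beta are assumed illustrative values; the patent leaves
    the cut points as empirical constants chosen by the practitioner.
    """
    depth = depth.astype(np.float64)
    # Mapping: scale depths into [0, g] by the maximum depth value.
    t = np.rint(depth / depth.max() * g)
    # Truncation: clip the sparse extremes and rescale to [0, 255].
    p = (t - alpha) / (beta - alpha) * 255.0
    return np.clip(np.rint(p), 0, 255).astype(np.uint8)
```

Depths below α map to 0 and depths above β map to 255, which discards the densely clustered extreme values the text calls useless information.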
Optionally, the terminal device can obtain the profile information of the target object by performing depth-consistency division processing on the depth image. Specifically: the terminal device first truncates the depth image (pre-processing), i.e., removes the pixels in the depth image whose depth values are greater than a third threshold, and then performs normalization processing on the truncated depth image; for the detailed process, reference may be made to existing related techniques, which will not be detailed here. The value of the third threshold can be obtained by those skilled in the art from experience. Performing the truncation pre-processing first and then normalizing the first image in this way can simplify the processing.
Although factors such as illumination and a portrait-like background greatly affect the portrait features (such as luminance information) in a two-dimensional image, they have little effect on the portrait features (depth information) of a depth image. Therefore, combining the depth image of the target object makes the obtained profile information more accurate, which in turn improves the robustness of face shape detection.
Step 203: the terminal device processes the target object in the second image according to the profile information, to obtain a third image.
The contrast of the face region of the target object in the third image is greater than the contrast of the face region of the target object in the second image. Processing the target object in the second image can improve the contrast of the face region of the target object, which improves the accuracy of face shape detection and thus the robustness of existing face shape detection techniques.
It should be noted that, in the embodiments of the present invention, the third image obtained by processing the target object in the second image enhances the contrast between the face region of the target object and the area surrounding the face, and by performing face shape detection on the third image, the shape of the face region can be obtained accurately.
Exemplarily, step 203 may be: the terminal device performs, on the target object in the second image and according to the profile information, at least one of the following processes: image enhancement processing, and target object extraction processing. Specifically, step 203 may be implemented by the following steps 203a–203b, steps 203c–203d, step 203e, or step 203f.
Step 203a: the terminal device performs image enhancement processing on the target object in the second image according to the profile information.
After the first image is normalized in step 202, the terminal device maps each pixel of the contour boundary in the obtained profile information to the second image, and performs image enhancement processing on the region of the second image corresponding to the region enclosed by the pixels of the contour boundary.
The image enhancement processing can increase the pixel values (luminance values) of the region in the second image corresponding to the region enclosed by the pixels of the contour boundary, so as to improve the contrast, brightness, and image quality of the region of the target object in the second image.
The image enhancement processing may also be histogram equalization enhancement, Laplacian-based image enhancement, etc.; for the detailed process, reference may be made to existing related techniques; the embodiments of the present invention are not limited.
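As one possible instance of this enhancement step, the sketch below applies histogram equalization only inside the contour mask of the target object. The function and the mask-restricted variant are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def enhance_in_mask(gray, mask):
    """Histogram-equalize only the pixels inside the contour mask.

    gray: uint8 grayscale second image; mask: boolean foreground mask
    derived from the profile information. Background pixels are left
    untouched, which raises the contrast of the portrait region.
    """
    vals = gray[mask]
    # Build the cumulative distribution of the masked pixels only.
    hist = np.bincount(vals, minlength=256).astype(np.float64)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    out = gray.copy()
    out[mask] = np.rint(cdf[vals]).astype(np.uint8)
    return out
```

Restricting the equalization to the contour region keeps a dark or cluttered background from dominating the histogram of the face area.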
Further, the terminal device's performing image enhancement processing on the target object in the second image according to the profile information may also include: the terminal device performs image enhancement processing on the edge region of the face of the target object in the second image according to the profile information. For example, the edge region of the face is the region whose outer boundary is the facial contour edge and whose inner boundary lies at a distance of a fourth threshold from the facial contour edge (i.e., for each pixel on the inner boundary, the shortest distance from that pixel to the facial contour edge is the fourth threshold). The value of the fourth threshold can be obtained by those skilled in the art from experience.
Image enhancement processing can enhance the brightness of the face region of the target object in the second image, and can thus improve the accuracy of face shape detection to a certain extent.
Step 203b: the terminal device extracts, according to the profile information, the target object from the second image after the image enhancement processing.
Extracting the target object from the enhanced second image means removing the background image other than the target object from the enhanced second image, to obtain a third image that contains the target object but not the background image. The extraction may, for example, take from the enhanced second image the image enclosed by the pixels indicated by the profile information, or may use another method; the embodiments of the present invention are not limited.
Extracting the target object removes the interference of a portrait-like background, and can thus improve the accuracy of face shape detection to a certain extent.
After the processing of steps 203a–203b, the contrast between the face region of the target object and the area surrounding the face is enhanced, which can improve the accuracy of face shape detection.
Step 203c: the terminal device extracts the target object from the second image according to the profile information.
According to the profile information obtained in step 202, the terminal device maps each pixel of the contour boundary in the profile information to the second image (pixel matching), and extracts the target object from the second image, obtaining an intermediate image that contains the target object but not the background image. The extraction may, for example, take from the second image the image enclosed by the pixels corresponding to the profile information, or may use another method; the embodiments of the present invention are not limited.
Extracting the target object removes the interference of a portrait-like background, and can thus improve the accuracy of face shape detection to a certain extent.
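A minimal sketch of the extraction step, assuming the profile information is available as a boolean mask; filling the background with zeros is one illustrative way to discard it.

```python
import numpy as np

def extract_target(image, mask, fill=0):
    """Keep only the target object indicated by the contour mask.

    image: H x W (or H x W x C) second image; mask: boolean H x W mask
    of the region enclosed by the contour boundary. Background pixels
    are replaced by `fill`, removing portrait-like background clutter.
    """
    out = np.full_like(image, fill)
    out[mask] = image[mask]
    return out
```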
Step 203d: the terminal device performs image enhancement processing, according to the profile information, on the target object extracted from the second image.
The terminal device performs image enhancement processing on the target object extracted from the second image (the target object in the intermediate image) according to the profile information, to obtain the third image; for the specific description, reference may be made to the related description of step 203a, which will not be repeated here.
Image enhancement processing can enhance the brightness of the face region of the target object in the second image, and can thus improve the accuracy of face shape detection to a certain extent.
By performing image enhancement processing on the target object in the second image and extracting the target object, the brightness of the face region of the target object in the second image can be enhanced and the interference of a portrait-like background removed, which can improve the accuracy of face shape detection.
Step 203e: the terminal device performs image enhancement processing on the target object in the second image according to the profile information.
The terminal device performs image enhancement processing on the target object in the second image according to the profile information, to obtain the third image; for the specific description, reference may be made to the related description of step 203a, which will not be repeated here.
Image enhancement processing can enhance the brightness of the face region of the target object in the second image, and can thus improve the accuracy of face shape detection to a certain extent.
Step 203e may specifically be implemented by the following step 203e1.
Step 203e1: when detecting that the brightness of the face region in the second image is less than or equal to a first threshold, the terminal device performs image enhancement processing on the target object in the second image according to the profile information.
The first threshold can be preset and determined according to the actual use case; the embodiments of the present invention are not limited.
The terminal device can obtain the brightness of the face region of the second image according to any image brightness evaluation algorithm in the prior art, and when the brightness is less than or equal to the first threshold, perform image enhancement processing on the target object in the second image according to the profile information.
For image brightness evaluation algorithms, reference may be made to existing related techniques, which will not be detailed in the embodiments of the present invention. For the process of performing image enhancement processing on the target object in the second image, reference may be made to the related description of step 203a, which will not be repeated here.
In this way, when the brightness value of the face region of the target object in the second image is unsatisfactory, the terminal device can perform image enhancement processing on the target object in the second image.
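The brightness gate of step 203e1 can be sketched as below; using the mean gray level of the face region as the brightness measure, and the threshold value itself, are illustrative assumptions standing in for the prior-art brightness evaluation algorithms the text defers to.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 90  # assumed first threshold, in gray levels

def needs_enhancement(gray, face_mask, threshold=BRIGHTNESS_THRESHOLD):
    """Return True when the face region is dark enough to need enhancement.

    gray: uint8 grayscale second image; face_mask: boolean mask of the
    face region. The mean gray level is a simple stand-in for any
    image-brightness evaluation algorithm.
    """
    brightness = float(gray[face_mask].mean())
    return brightness <= threshold
```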
Exemplarily, when the terminal device judges that the sharpness of the face region in the second image is poor (the facial features are blurred), the terminal device performs image enhancement processing on the target object in the second image.
In this way, the terminal device can process the image in a targeted manner according to the problems the image has, so that the features of the face region become clearer, which improves the success rate of face shape detection and the robustness of the face shape detection technique.
Step 203f: the terminal device extracts the target object from the second image according to the profile information.
The terminal device extracts the target object from the second image according to the profile information, to obtain the third image; for the specific description, reference may be made to the related description of step 203c, which will not be repeated here.
Extracting the target object removes the interference of a portrait-like background, and can thus improve the accuracy of face shape detection to a certain extent.
Step 203f may specifically be implemented by the following step 203f1.
Step 203f1: when detecting that the similarity between the face region and a target area in the second image is greater than or equal to a second threshold, the terminal device extracts the target object from the second image according to the profile information.
The target area is an area in the second image surrounding the face region. The target area may be the background area of the face region, or another area whose distance from the boundary of the face region is within a preset range (which can be preset in advance); for example, the target area may be a neck area.
The second threshold can be preset and determined according to the actual use case; the embodiments of the present invention are not limited.
The terminal device can obtain the similarity between the face region and the target area of the second image according to any image similarity evaluation algorithm in the prior art, and when the similarity is greater than or equal to the second threshold, extract the target object from the second image according to the profile information.
For image similarity evaluation algorithms, reference may be made to existing related techniques, which will not be detailed in the embodiments of the present invention. For the process of extracting the target object from the second image according to the profile information, reference may be made to the related description above, which will not be repeated here.
Exemplarily, when the terminal device judges that the similarity between the face region in the second image and the target area is large (the face region and the target area are difficult to distinguish), the terminal device extracts the target object from the second image, i.e., separates the target object in the second image from the background in which it is located.
In this way terminal device can according to image there are the problem of, image is targetedly handled, so that face area
The feature in domain becomes apparent from, to improve shape of face detection success rate, improves the robustness of shape of face detection technique.
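Illustratively, step 203f1 may be sketched as follows. The similarity measure and the neck-band sampling below are assumptions made purely for illustration; the patent leaves the concrete similarity evaluation algorithm unspecified, and the contour mask is assumed to have been derived from the depth image in the earlier steps.

```python
import numpy as np

def histogram_similarity(a, b, bins=32):
    # Cosine similarity of grayscale histograms -- a stand-in for any
    # image similarity evaluation algorithm (the patent does not fix one).
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    ha = ha / (np.linalg.norm(ha) + 1e-9)
    hb = hb / (np.linalg.norm(hb) + 1e-9)
    return float(ha @ hb)

def extract_if_ambiguous(gray, face_box, contour_mask, second_threshold=0.9):
    # Step 203f1 sketch: if the face region is too similar to the region
    # around it, keep only pixels inside the depth-derived contour mask.
    x, y, w, h = face_box
    face = gray[y:y + h, x:x + w]
    # Target region assumed here: a band just below the face (the neck area).
    target = gray[y + h:min(y + 2 * h, gray.shape[0]), x:x + w]
    if histogram_similarity(face, target) >= second_threshold:
        return np.where(contour_mask > 0, gray, 0)  # background removed
    return gray  # regions are distinguishable; leave the image as-is
```

When the face and its surroundings are clearly distinguishable, the image is returned unchanged, matching the conditional nature of the extraction step.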
Step 204: the terminal device generates a face shape detection result according to the characteristic information of the target object in the third image.
Specifically, the terminal device detects the face region in the third image using a face shape detection algorithm and generates the face shape detection result. The face shape detection result is used to indicate the face shape of the target object, and may include, for example, a melon-seed (heart-shaped) face, a long face, a round face, an oval face, a pear-shaped face, or the like.
For face shape detection algorithms, reference may be made to the existing related art; details are not repeated here.
Illustratively, step 204 may specifically be implemented by the following steps 204a to 204d.
Step 204a: the terminal device performs image alignment processing on the third image to obtain a fourth image.
For specific image alignment methods, reference may be made to the existing related art; the embodiment of the present invention does not limit this.
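Illustratively, one common form of the image alignment in step 204a rotates the face so that the line between the eyes becomes horizontal. The sketch below applies such a rotation to landmark coordinates; this particular alignment choice is an assumption for illustration, since the patent defers to existing alignment techniques.

```python
import numpy as np

def align_by_eyes(points, left_eye, right_eye):
    # Rotate landmark coordinates about the mid-eye point so that the
    # line between the two eyes becomes horizontal.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.arctan2(dy, dx)
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    center = (np.asarray(left_eye, dtype=float)
              + np.asarray(right_eye, dtype=float)) / 2.0
    return (np.asarray(points, dtype=float) - center) @ rot.T + center
```

The same rotation (applied to pixel coordinates via a warp) would produce the aligned fourth image.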
Step 204b: the terminal device divides the fourth image into N blocks.
Optionally, the terminal device divides the face region in the fourth image into N blocks.
N is a positive integer. Each of the N blocks corresponds to a weight, where the weight of a first block among the N blocks is greater than the weight of a second block; the first block is a block located in the contour area of the face region, and the second block is a block located in a non-contour area of the face region.
Illustratively, N may be 5*5 or 8*8, determined according to the actual use case; the embodiment of the present invention does not limit this. A different weight is assigned to each block: boundary blocks receive a larger weight, and the other blocks receive a smaller weight.
In this way, because the weight of the blocks in the contour area of the face region is greater than that of the other regions, the characteristic information of the contour area of the face region is further enhanced, which helps to improve the accuracy of face shape detection and the robustness of the face shape detection technique.
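Illustratively, the block division and weighting of step 204b may be sketched as follows; the 5*5 grid size and the concrete weight values are assumptions for illustration, since the patent only requires that contour-area blocks outweigh non-contour-area blocks.

```python
import numpy as np

def block_weights(n_side=5, contour_weight=2.0, inner_weight=1.0):
    # Blocks on the outer ring of the grid, where the face contour lies,
    # receive the larger weight; interior blocks receive the smaller one.
    w = np.full((n_side, n_side), inner_weight)
    w[0, :] = w[-1, :] = w[:, 0] = w[:, -1] = contour_weight
    return w

def split_into_blocks(region, n_side=5):
    # Divide the aligned face region into N = n_side * n_side blocks.
    h, w = region.shape[:2]
    bh, bw = h // n_side, w // n_side
    return [region[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(n_side) for c in range(n_side)]
```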
Step 204c: the terminal device extracts the characteristic information of each of the N blocks to obtain N groups of characteristic information.
Illustratively, the terminal device extracts random pixel-difference features over local circular regions within each block. The feature of each block has d dimensions, and concatenating the per-block features yields a d*N-dimensional feature vector (i.e., one group of characteristic information), where d may be, for example, 72 or 128; the embodiment of the present invention does not limit this.
For the detailed process of extracting the characteristic information of each block, reference may be made to the existing related art; the embodiment of the present invention does not limit it.
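Illustratively, the random pixel-difference features may be sketched as follows. The exact sampling pattern is an assumption: the patent only states that random pixel differences are taken over local circular regions of each block, so this sketch compares two pixels sampled on a small circle for each of the d features.

```python
import numpy as np

def block_features(block, d=72, radius=3, seed=0):
    # d random pixel-difference features: each feature compares two pixels
    # sampled on a small circle inside the block (sampling pattern assumed).
    rng = np.random.default_rng(seed)
    h, w = block.shape
    feats = np.empty(d, dtype=np.float32)
    for i in range(d):
        cy = int(rng.integers(radius, h - radius))
        cx = int(rng.integers(radius, w - radius))
        a1, a2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
        p1 = block[int(cy + radius * np.sin(a1)), int(cx + radius * np.cos(a1))]
        p2 = block[int(cy + radius * np.sin(a2)), int(cx + radius * np.cos(a2))]
        feats[i] = float(p1) - float(p2)
    return feats

def group_descriptor(blocks, weights, d=72):
    # Concatenate per-block features, scaled by each block's weight,
    # into one d*N-dimensional group of characteristic information.
    return np.concatenate([wt * block_features(b, d)
                           for b, wt in zip(blocks, weights)])
```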
Step 204d: the terminal device performs classification training on the N groups of characteristic information to generate the face shape detection result.
For the specific classification training process, reference may be made to the existing related art; the embodiment of the present invention does not limit it.
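Illustratively, since the patent does not fix a classification model, a minimal nearest-centroid classifier can stand in to show the train/predict flow over the d*N-dimensional descriptors; the label set below is an example only.

```python
import numpy as np

FACE_SHAPES = ["melon-seed", "long", "round", "oval", "pear-shaped"]  # example labels

class NearestCentroidFaceShape:
    # Minimal stand-in for the (unspecified) classification training step:
    # fit stores one centroid per face-shape label; predict returns the
    # label whose centroid is closest to a given descriptor.
    def fit(self, descriptors, labels):
        self.classes_ = sorted(set(labels))
        self.centroids_ = {
            c: np.mean([x for x, l in zip(descriptors, labels) if l == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, descriptor):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(descriptor - self.centroids_[c]))
```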
Illustratively, the terminal device may also detect the face region in the third image using other face shape detection algorithms to generate the face shape detection result. For example, the terminal device may detect the face region in the third image using a method based on geometric local feature template matching, a method based on parameter optimization, or the like. For the specific implementation process, reference may be made to the existing related art; the embodiment of the present invention does not limit it.
An embodiment of the present invention provides a face shape detection method. A terminal device may first obtain the profile information of a target object from a depth image that includes the target object, then process a two-dimensional image that includes the target object according to the profile information to obtain a third image, and finally perform face shape detection on the face region in the third image to generate a face shape detection result. With this solution, although factors such as illumination and face-like background strongly affect the portrait features in the two-dimensional image, they have little effect on the portrait features of the depth image. Therefore, by processing the second image in combination with the first image and then performing face shape detection on the resulting third image, the face shape detection result is more accurate than one obtained by performing face shape detection directly on the second image, thereby improving the robustness of the face shape detection technique.
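Illustratively, the overall flow of the method may be sketched as follows, with the four stages supplied as placeholder functions; their concrete implementations are left open by the patent, so the stage names and signatures here are assumptions for illustration.

```python
import numpy as np

def detect_face_shape(depth_image, flat_image, stages):
    # End-to-end flow of the claimed method: the contour derived from the
    # depth image guides processing of the two-dimensional image before
    # face shape classification.
    profile = stages["contour"](depth_image)         # profile info from depth
    third = stages["process"](flat_image, profile)   # enhance / extract
    features = stages["features"](third)             # per-block descriptors
    return stages["classify"](features)              # face shape result
```

Any concrete contour extractor, image processor, feature extractor, and classifier satisfying these roles can be plugged into `stages`.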
As shown in Fig. 3, an embodiment of the present invention provides a terminal device 120, which includes an acquisition module 121, a processing module 122, and a generation module 123. The acquisition module 121 is configured to obtain a first image and a second image of a target object, where the first image is a depth image, the second image is a two-dimensional image, and the target object includes a face region; and to obtain the profile information of the target object according to the first image. The processing module 122 is configured to process the target object in the second image according to the profile information obtained by the acquisition module 121 to obtain a third image. The generation module 123 is configured to generate a face shape detection result according to the characteristic information of the target object in the third image obtained by the processing module 122.
Optionally, the processing module 122 is specifically configured to perform, according to the profile information obtained by the acquisition module 121, at least one of the following on the target object in the second image: image enhancement processing, and target object extraction processing.
Optionally, the processing module 122 is specifically configured to: in a case where it is detected that the brightness of the face region in the second image is less than or equal to a first threshold, perform image enhancement processing on the target object in the second image according to the profile information obtained by the acquisition module 121; and in a case where it is detected that the similarity between the face region and a target region in the second image is greater than or equal to a second threshold, extract the target object from the second image according to the profile information obtained by the acquisition module 121, where the target region is a region surrounding the face region in the second image.
Optionally, the generation module 123 is specifically configured to: perform image alignment processing on the third image to obtain a fourth image; divide the fourth image into N blocks; extract the characteristic information of each of the N blocks to obtain N groups of characteristic information; and perform classification training on the N groups of characteristic information to generate a face shape detection result, where N is a positive integer.
Optionally, each of the N blocks corresponds to a weight, where the weight of a first block among the N blocks is greater than the weight of a second block; the first block is a block located in the contour area of the face region among the N blocks, and the second block is a block located in a non-contour area of the face region among the N blocks.
Optionally, the acquisition module 121 is specifically configured to normalize the first image to obtain the profile information of the target object.
The terminal device provided in this embodiment of the present invention can implement each process shown in Fig. 2 in the foregoing method embodiment; to avoid repetition, details are not described herein again.
An embodiment of the present invention provides a terminal device. The terminal device may first obtain the profile information of a target object from a depth image that includes the target object, then process a two-dimensional image that includes the target object according to the profile information to obtain a third image, and finally perform face shape detection on the face region in the third image to generate a face shape detection result. With this solution, although factors such as illumination and face-like background strongly affect the portrait features in the two-dimensional image, they have little effect on the portrait features of the depth image; therefore, by processing the second image in combination with the first image and then performing face shape detection on the resulting third image, the face shape detection result is more accurate than one obtained by performing face shape detection directly on the second image, thereby improving the robustness of the face shape detection technique.
Fig. 4 is a schematic diagram of the hardware structure of a terminal device implementing each embodiment of the present invention. As shown in Fig. 4, the terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Those skilled in the art will understand that the structure shown in Fig. 4 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiments of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to: obtain a first image and a second image of a target object, where the first image is a depth image, the second image is a two-dimensional image, and the target object includes a face region; obtain the profile information of the target object according to the first image; process the target object in the second image according to the profile information to obtain a third image; and generate a face shape detection result according to the characteristic information of the target object in the third image.
With the terminal device provided in this embodiment of the present invention, the terminal device may first obtain the profile information of a target object from a depth image that includes the target object, then process a two-dimensional image that includes the target object according to the profile information to obtain a third image, and finally perform face shape detection on the face region in the third image to generate a face shape detection result. With this solution, although factors such as illumination and face-like background strongly affect the portrait features in the two-dimensional image, they have little effect on the portrait features of the depth image; therefore, processing the second image in combination with the first image and then performing face shape detection on the resulting third image yields a more accurate face shape detection result than performing face shape detection directly on the second image, thereby improving the robustness of the face shape detection technique.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 101 may be used to receive and send signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and delivers the data to the processor 110 for processing, and sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband Internet access through the network module 102, for example helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 104 is configured to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data; in a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved close to the ear. As a motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and vibration-related functions (such as a pedometer or tap detection). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 106 is configured to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to the user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. Furthermore, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 1071 may cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 4 the touch panel 1071 and the display panel 1061 implement the input and output functions of the terminal device as two independent components, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device; this is not specifically limited here.
The interface unit 108 is an interface through which an external device is connected to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements in the terminal device 100, or may be used to transmit data between the terminal device 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book), and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 110 is the control center of the terminal device. It uses various interfaces and lines to connect the parts of the entire terminal device, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) that supplies power to the components. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which may include the processor 110 and the memory 109 shown in Fig. 4, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the face shape detection method shown in Fig. 2 in the foregoing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the face shape detection method shown in Fig. 2 in the foregoing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described herein again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Inspired by the present invention, those skilled in the art can devise many other forms without departing from the scope protected by the purpose of the present invention and the claims, all of which fall within the protection of the present invention.
Claims (10)
1. A face shape detection method, characterized in that the method comprises:
obtaining a first image and a second image of a target object, wherein the first image is a depth image, the second image is a two-dimensional image, and the target object comprises a face region;
obtaining profile information of the target object according to the first image;
processing the target object in the second image according to the profile information to obtain a third image; and
generating a face shape detection result according to characteristic information of the target object in the third image.
2. The method according to claim 1, characterized in that the processing the target object in the second image according to the profile information comprises:
performing, according to the profile information, at least one of the following on the target object in the second image: image enhancement processing, and target object extraction processing.
3. The method according to claim 2, characterized in that the performing, according to the profile information, at least one of the following on the target object in the second image: image enhancement processing, and target object extraction processing, comprises:
in a case where it is detected that a brightness of the face region in the second image is less than or equal to a first threshold, performing image enhancement processing on the target object in the second image according to the profile information; and
in a case where it is detected that a similarity between the face region and a target region in the second image is greater than or equal to a second threshold, extracting the target object from the second image according to the profile information, wherein the target region is a region surrounding the face region in the second image.
4. The method according to any one of claims 1 to 3, characterized in that the generating a face shape detection result according to characteristic information of the target object in the third image comprises:
performing image alignment processing on the third image to obtain a fourth image;
dividing the fourth image into N blocks;
extracting characteristic information of each of the N blocks to obtain N groups of characteristic information; and
performing classification training on the N groups of characteristic information to generate the face shape detection result, wherein N is a positive integer.
5. The method according to claim 4, characterized in that each of the N blocks corresponds to a weight, wherein a weight of a first block among the N blocks is greater than a weight of a second block, the first block is a block located in a contour area of the face region among the N blocks, and the second block is a block located in a non-contour area of the face region among the N blocks.
6. The method according to any one of claims 1 to 3, characterized in that the obtaining profile information of the target object according to the first image comprises:
normalizing the first image to obtain the profile information of the target object.
7. A terminal device, characterized in that the terminal device comprises an acquisition module, a processing module, and a generation module;
the acquisition module is configured to obtain a first image and a second image of a target object, wherein the first image is a depth image, the second image is a two-dimensional image, and the target object comprises a face region; and to obtain profile information of the target object according to the first image;
the processing module is configured to process the target object in the second image according to the profile information obtained by the acquisition module, to obtain a third image; and
the generation module is configured to generate a face shape detection result according to characteristic information of the target object in the third image obtained by the processing module.
8. The terminal device according to claim 7, characterized in that the processing module is specifically configured to perform, according to the profile information obtained by the acquisition module, at least one of the following on the target object in the second image: image enhancement processing, and target object extraction processing.
9. The terminal device according to claim 8, characterized in that the processing module is specifically configured to: in a case where it is detected that a brightness of the face region in the second image is less than or equal to a first threshold, perform image enhancement processing on the target object in the second image according to the profile information obtained by the acquisition module; and in a case where it is detected that a similarity between the face region and a target region in the second image is greater than or equal to a second threshold, extract the target object from the second image according to the profile information obtained by the acquisition module, wherein the target region is a region surrounding the face region in the second image.
10. The terminal device according to any one of claims 7 to 9, characterized in that the generation module is specifically configured to: perform image alignment processing on the third image to obtain a fourth image; divide the fourth image into N blocks; extract characteristic information of each of the N blocks to obtain N groups of characteristic information; and perform classification training on the N groups of characteristic information to generate the face shape detection result, wherein N is a positive integer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811635950.6A CN109840476B (en) | 2018-12-29 | 2018-12-29 | Face shape detection method and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109840476A true CN109840476A (en) | 2019-06-04 |
CN109840476B CN109840476B (en) | 2021-12-21 |
Family
ID=66883499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811635950.6A Active CN109840476B (en) | 2018-12-29 | 2018-12-29 | Face shape detection method and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109840476B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112991210A (en) * | 2021-03-12 | 2021-06-18 | Oppo广东移动通信有限公司 | Image processing method and device, computer readable storage medium and electronic device |
WO2021190387A1 (en) * | 2020-03-25 | 2021-09-30 | 维沃移动通信有限公司 | Detection result output method, electronic device, and medium |
CN115170536A (en) * | 2022-07-22 | 2022-10-11 | 北京百度网讯科技有限公司 | Image detection method, model training method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339612A (en) * | 2008-08-19 | 2009-01-07 | 陈建峰 | Face contour checking and classification method |
JP2010098567A (en) * | 2008-10-17 | 2010-04-30 | Seiko Epson Corp | Head mount full-face type image display device |
CN106203263A (en) * | 2016-06-27 | 2016-12-07 | 辽宁工程技术大学 | A kind of shape of face sorting technique based on local feature |
CN106648042A (en) * | 2015-11-04 | 2017-05-10 | 重庆邮电大学 | Identification control method and apparatus |
CN106909875A (en) * | 2016-09-12 | 2017-06-30 | 湖南拓视觉信息技术有限公司 | Face shape of face sorting technique and system |
CN107480613A (en) * | 2017-07-31 | 2017-12-15 | 广东欧珀移动通信有限公司 | Face identification method, device, mobile terminal and computer-readable recording medium |
CN108053210A (en) * | 2017-11-20 | 2018-05-18 | 胡研 | A kind of shape of face method of payment, shape of face reserving method and traction equipment |
CN108734676A (en) * | 2018-05-21 | 2018-11-02 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109840476B (en) | 2021-12-21 |
Similar Documents
Publication | Title |
---|---|
CN109034102A (en) | Human face in-vivo detection method, device, equipment and storage medium |
CN108537776A (en) | A kind of image Style Transfer model generating method and mobile terminal |
CN108989678A (en) | A kind of image processing method, mobile terminal |
CN107231529A (en) | Image processing method, mobile terminal and storage medium |
CN107707827A (en) | A kind of high-dynamics image image pickup method and mobile terminal |
CN108234882A (en) | A kind of image weakening method and mobile terminal |
CN107566749A (en) | Image pickup method and mobile terminal |
CN107977652A (en) | The extracting method and mobile terminal of a kind of screen display content |
CN107682639B (en) | A kind of image processing method, device and mobile terminal |
CN107909583A (en) | A kind of image processing method, device and terminal |
CN107895352A (en) | A kind of image processing method and mobile terminal |
CN110490897A (en) | Imitate the method and electronic equipment that video generates |
CN107786811B (en) | A kind of photographic method and mobile terminal |
CN109840476A (en) | A kind of shape of face detection method and terminal device |
CN111310575B (en) | Face living body detection method, related device, equipment and storage medium |
CN107886321A (en) | A kind of method of payment and mobile terminal |
CN108960179A (en) | A kind of image processing method and mobile terminal |
CN110213485A (en) | A kind of image processing method and terminal |
CN109272466A (en) | A kind of tooth beautification method and device |
CN108259746A (en) | A kind of image color detection method and mobile terminal |
CN109241832A (en) | A kind of method and terminal device of face In vivo detection |
CN110516488A (en) | A kind of barcode scanning method and mobile terminal |
CN109461124A (en) | A kind of image processing method and terminal device |
CN110765924A (en) | Living body detection method and device and computer-readable storage medium |
CN109816601A (en) | A kind of image processing method and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||