CN108335276A - Image beautification method, apparatus and terminal device - Google Patents

Image beautification method, apparatus and terminal device

Info

Publication number
CN108335276A
CN108335276A (application CN201810180248.9A)
Authority
CN
China
Prior art keywords
image
face
region
depth
skin defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810180248.9A
Other languages
Chinese (zh)
Inventor
王会朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810180248.9A
Publication of CN108335276A
Legal status: Pending (current)


Classifications

    • G06T5/77
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Abstract

This application discloses an image beautification method, apparatus and terminal device, wherein the image beautification method includes: capturing, by an image sensor, a color image taken by a camera; obtaining, by a structured light sensor, a depth image generated using structured light; extracting a face region from the color image based on a face recognition algorithm; determining a facial skin defect region in the face region according to the color image and the depth image; and repairing the facial skin defect region. The image beautification method, apparatus and terminal device of the embodiments of the application can accurately locate facial skin defect regions without manual post-processing, with high efficiency and a good beautification effect.

Description

Image beautification method, apparatus and terminal device
Technical field
This application relates to the technical field of information processing, and in particular to an image beautification method, apparatus and terminal device.
Background
With the continuous progress of science and technology, the camera functions of mobile phones have become more and more powerful, camera resolutions keep increasing, and images are getting clearer and clearer. However, very clear images can also create problems: when people take selfies, every facial detail such as freckles and pimples is clearly visible in the photo, which gives the user a poor shooting experience. The image therefore needs beautification processing to remove the blemishes of the person in the photo. At present, portraits are beautified with beautification software, mostly by manual retouching afterwards, which is time-consuming and inefficient.
Summary
The application provides an image beautification method, apparatus and terminal device, to solve the prior-art problem that beautifying portraits afterwards with beautification software is costly and inefficient.
An embodiment of the application provides an image beautification method, including: capturing, by an image sensor, a color image taken by a camera;
obtaining, by a structured light sensor, a depth image generated using structured light;
extracting a face region from the color image based on a face recognition algorithm;
determining a facial skin defect region in the face region according to the color image and the depth image; and
repairing the facial skin defect region.
Optionally, the facial skin defect region includes a planar defect region and a stereoscopic defect region, and determining the facial skin defect region in the face region according to the color image and the depth image includes:
determining the planar defect region using the color image; and
determining the stereoscopic defect region using the depth image.
Optionally, obtaining, by the structured light sensor, the depth image generated using structured light includes:
projecting structured light onto a face by a structured light projector, and obtaining, by the structured light sensor, a structured light image modulated by the face; and
decoding the structured light image to generate the depth image.
Optionally, decoding the structured light image to generate the depth image includes:
decoding phase information corresponding to pixels at deformed positions in the structured light image;
converting the phase information into height information; and
generating the depth image according to the height information.
Optionally, repairing the facial skin defect region includes:
when the facial skin defect region is a planar defect region, replacing the planar defect region with standard skin features; and/or
when the facial skin defect region is a stereoscopic defect region, repairing the depth information of the stereoscopic defect region using the depth image.
Another embodiment of the application provides an image beautification apparatus, including: a capture module, configured to capture, by an image sensor, a color image taken by a camera;
an acquisition module, configured to obtain, by a structured light sensor, a depth image generated using structured light;
an extraction module, configured to extract a face region from the color image based on a face recognition algorithm;
a determination module, configured to determine a facial skin defect region in the face region according to the color image and the depth image; and
a repair module, configured to repair the facial skin defect region.
Optionally, the facial skin defect region includes a planar defect region and a stereoscopic defect region, and the determination module is configured to:
determine the planar defect region using the color image; and
determine the stereoscopic defect region using the depth image.
Optionally, the acquisition module is configured to:
project structured light onto a face by a structured light projector, and obtain, by the structured light sensor, a structured light image modulated by the face; and
decode the structured light image to generate the depth image.
Optionally, the acquisition module is specifically configured to:
decode phase information corresponding to pixels at deformed positions in the structured light image;
convert the phase information into height information; and
generate the depth image according to the height information.
Optionally, the repair module is configured to:
when the facial skin defect region is a planar defect region, replace the planar defect region with standard skin features; and/or
when the facial skin defect region is a stereoscopic defect region, repair the depth information of the stereoscopic defect region using the depth image.
A further embodiment of the application provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image beautification method described in the first-aspect embodiment of the application is implemented.
Yet another embodiment of the application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the image beautification method described in the first-aspect embodiment of the application.
The technical solutions provided by the embodiments of the application may include the following beneficial effects:
A color image taken by a camera is captured by an image sensor, a depth image generated using structured light is obtained by a structured light sensor, a face region is then extracted from the color image based on a face recognition algorithm, a facial skin defect region in the face region is determined according to the color image and the depth image, and the facial skin defect region is repaired. Compared with processing by planar (two-dimensional) beautification software, the facial skin defect region can be located accurately, no manual post-processing is needed, the efficiency is high, and the beautification effect is good.
Additional aspects and advantages of the application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the application.
Description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and easy to understand from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of an image beautification method according to an embodiment of the application;
Fig. 2 is a flowchart of generating a corresponding depth image by decoding a structured light image according to an embodiment of the application;
Fig. 3 is a schematic diagram of a structured light measurement scene according to an embodiment of the application;
Fig. 4 is a block diagram of an image beautification apparatus according to an embodiment of the application;
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the application.
Detailed description of the embodiments
Embodiments of the application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements with the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and are intended to explain the application; they should not be construed as limiting the application.
The image beautification method, apparatus and terminal device of the application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image beautification method according to an embodiment of the application.
As shown in Fig. 1, the image beautification method includes:
S101: capture, by an image sensor, a color image taken by a camera.
At present, mobile terminal technology is increasingly powerful, people can take photos anytime and anywhere with a smartphone, and the images are getting clearer and clearer. However, very clear images can also cause problems: when people take selfies, all facial details such as freckles and pimples are clearly visible in the photo, which gives the user a poor shooting experience, so the image needs beautification processing. Still, obtaining a satisfactory beautification result takes the user considerable effort and time; the cost is high and the user experience is poor. To solve these problems, the application proposes an image beautification method that can beautify faces quickly and effectively.
In an embodiment of the application, a color image taken by a camera can be captured by an image sensor.
S102: obtain, by a structured light sensor, a depth image generated using structured light.
Specifically, structured light can be projected onto the face by a structured light projector, a structured light image modulated by the face is obtained by the structured light sensor, and the structured light image is then decoded to generate the corresponding depth image. As shown in Fig. 2, the process of decoding the structured light image to generate the corresponding depth image can further include:
S201: decode phase information corresponding to pixels at deformed positions in the structured light image.
S202: convert the phase information into height information.
S203: generate the depth image according to the height information.
That is, information about the three-dimensional model of the face can be acquired by projecting structured light onto the user's face. The structured light is infrared light and may take the form of laser stripes, Gray codes, sinusoidal fringes, non-uniform speckles or other patterns. With such patterns, the contour and depth information of the face, such as the height of the nose and the shape of the face, can be obtained after the light strikes the face. Because of the high precision of structured light, millimeter-level depth differences can be detected, so more detailed facial depth information can be obtained.
The specific principle is illustrated below by taking the widely used fringe projection technique as an example. When surface structured light is used for projection, as shown in Fig. 3, a sinusoidal fringe pattern is generated by computer programming and projected onto the measured object; the fringes are photographed by a camera to capture how much they are bent (modulated) by the object, the bent fringes are demodulated to obtain the phase, and the phase is then converted into height. A key point here is the calibration of the system, including the calibration between the camera and the projection device; otherwise errors are likely to arise. Specifically, the phase of the bent fringes and the phase of the reference fringes are subtracted to obtain a phase difference, which characterizes the height of the measured object relative to the reference plane; substituting this phase difference into a phase-to-height conversion formula yields the three-dimensional model of the measured object. It should be understood that, in practical applications, depending on the specific application scenario, the structured light employed in the embodiments of the application may also be any pattern other than the above fringes.
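As a concrete illustration of steps S201 to S203, the sketch below decodes a wrapped phase map from four π/2-shifted sinusoidal fringe images and converts the phase difference into height using the common fringe-projection approximation h = L0·Δφ / (Δφ + 2π·f0·d). It is only a minimal sketch: the function names, the geometry parameters (l0, d, f0) and the omission of phase unwrapping are illustrative assumptions, not values or steps specified by the patent.

```python
import numpy as np

def decode_fringe_phase(i1, i2, i3, i4):
    """S201: wrapped phase from four pi/2 phase-shifted fringe images."""
    return np.arctan2(i4 - i2, i1 - i3)

def phase_to_height(delta_phi, l0=500.0, d=60.0, f0=0.1):
    """S202: phase difference -> height with the usual approximation
    h = L0 * dphi / (dphi + 2*pi*f0*d).
    l0: camera-to-reference-plane distance (mm), d: projector-camera
    baseline (mm), f0: fringe frequency on the reference plane (1/mm).
    Placeholder values; phase unwrapping is omitted for brevity."""
    return l0 * delta_phi / (delta_phi + 2.0 * np.pi * f0 * d)

def height_to_depth(height, l0=500.0):
    """S203: depth map relative to the camera from the height map."""
    return l0 - height
```

In practice delta_phi would be the unwrapped difference between the phase measured with the face present and the phase of the reference plane obtained during system calibration.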
S103: extract a face region from the color image based on a face recognition algorithm.
Specifically, the position and the size of the face in the color image can be identified based on a face recognition algorithm, and the face region is extracted from this information.
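The patent only requires "a face recognition algorithm" and does not name one; the sketch below uses OpenCV's stock Haar cascade as one possible stand-in for locating the face region, with the assumption that the largest detection corresponds to the user's face.

```python
import cv2

def extract_face_region(color_image):
    """S103: locate the face in the color image and return its bounding box
    (x, y, w, h), or None if no face is found."""
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Assume the largest detected rectangle is the face of interest.
    return max(faces, key=lambda f: f[2] * f[3])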
S104: determine a facial skin defect region in the face region according to the color image and the depth image.
Facial skin defect regions can be divided into two classes. The first class is planar defect regions, such as freckles, which lie in the same plane as the surrounding skin; the second class is stereoscopic defect regions, such as pits and bumps on the skin, which have a height or a depth relative to the surrounding skin.
Specifically, the planar defect region is determined using the color image, and the stereoscopic defect region is determined using the depth image. For example, spots and moles on the face have colors that differ significantly from the surrounding skin, so the planar defect region can be determined by extracting features from the color image. As for the stereoscopic defect region, facial skin is basically smooth, that is, the depth information over large areas of skin is continuous and varies little, whereas scars, skin-colored bumps and the like are close to the skin in color but differ in depth, so the stereoscopic defect region can be determined based on the depth information.
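One simple way to realize the two branches of S104 is to flag pixels that deviate strongly from a locally smoothed version of the face: in color for planar defects, in depth for stereoscopic defects. The sketch below does exactly that; the blur kernel size and the thresholds are arbitrary assumptions for illustration, not values taken from the patent.

```python
import cv2
import numpy as np

def find_planar_defects(face_bgr, color_thresh=25.0):
    """S104, color branch: planar defects (freckles, spots, moles) are pixels
    whose color deviates strongly from the locally smoothed skin color."""
    blurred = cv2.GaussianBlur(face_bgr, (31, 31), 0)
    diff = np.linalg.norm(face_bgr.astype(np.float32) -
                          blurred.astype(np.float32), axis=2)
    return (diff > color_thresh).astype(np.uint8)

def find_stereo_defects(face_depth, depth_thresh=1.0):
    """S104, depth branch: stereoscopic defects (scars, bumps, pits) are pixels
    whose depth deviates from the locally smoothed, mostly continuous face
    surface (depth assumed in millimetres)."""
    depth = face_depth.astype(np.float32)
    smooth = cv2.GaussianBlur(depth, (31, 31), 0)
    return (np.abs(depth - smooth) > depth_thresh).astype(np.uint8)
```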
S105: repair the facial skin defect region.
To repair the facial skin defect region more accurately, the planar defect region and the stereoscopic defect region are processed differently.
Specifically, when the facial skin defect region is a planar defect region, it is replaced with standard skin features; and/or when the facial skin defect region is a stereoscopic defect region, the depth information of the stereoscopic defect region is repaired using the depth image.
Example one: for a freckle on the face, since the freckle differs from the skin mainly in color while the depth difference is small, it can be determined to be a planar defect; the features of the skin around the freckle are then used as the standard skin features to replace the freckle, thereby repairing it.
Example two: for a scar on the face, since the scar hardly differs from the surrounding skin in color and is only a bump on the skin, the generated depth image can be used to repair the height of the scar so that it matches the height of the surrounding skin, thereby repairing it.
Example three: for a pimple, whose color and depth both differ greatly from the surrounding skin, the depth image can first be used to repair the depth of the pimple, and the features of the skin around the pimple are then used as the standard skin features to replace it. Compared with existing two-dimensional image beautification methods, since the precision of structured light can reach the millimeter level, the facial skin defect region is determined more accurately, and the depth is repaired first instead of simply replacing two-dimensional features, so the repair effect is better.
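A minimal sketch of the two repair branches of S105, under the assumption that "replacing with standard skin features" can be approximated by inpainting the masked pixels from the surrounding skin, and that "repairing the depth information" can be approximated by pulling the defect depth back to the locally smoothed face surface. OpenCV's Telea inpainting is used here only as one plausible choice; the patent does not prescribe it.

```python
import cv2
import numpy as np

def repair_planar_defects(face_bgr, planar_mask):
    """S105, planar branch: fill masked pixels from the neighbouring skin,
    which serves as the 'standard skin features'."""
    return cv2.inpaint(face_bgr, planar_mask, 5, cv2.INPAINT_TELEA)

def repair_stereo_defects(face_depth, stereo_mask):
    """S105, stereoscopic branch: reset the depth of masked pixels to the
    locally smoothed surface so bumps and pits level with the skin."""
    depth = face_depth.astype(np.float32)
    smooth = cv2.GaussianBlur(depth, (31, 31), 0)
    repaired = depth.copy()
    repaired[stereo_mask > 0] = smooth[stereo_mask > 0]
    return repaired
```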
In the image beautification method of the embodiment of the application, a color image taken by a camera is captured by an image sensor, a depth image generated using structured light is obtained by a structured light sensor, a face region is then extracted from the color image based on a face recognition algorithm, a facial skin defect region in the face region is determined according to the color image and the depth image, and the facial skin defect region is repaired. Compared with processing by planar beautification software, the facial skin defect region can be located accurately, no manual post-processing is needed, the efficiency is high, and the beautification effect is good.
To implement the above embodiments, the application further proposes an image beautification apparatus.
Fig. 4 is a block diagram of an image beautification apparatus according to an embodiment of the application.
As shown in Fig. 4, the apparatus includes a capture module 410, an acquisition module 420, an extraction module 430, a determination module 440 and a repair module 450.
The capture module 410 is configured to capture, by an image sensor, a color image taken by a camera.
The acquisition module 420 is configured to obtain, by a structured light sensor, a depth image generated using structured light.
The extraction module 430 is configured to extract a face region from the color image based on a face recognition algorithm.
The determination module 440 is configured to determine a facial skin defect region in the face region according to the color image and the depth image.
The repair module 450 is configured to repair the facial skin defect region.
It should be noted that the foregoing explanation of the image beautification method also applies to the image beautification apparatus of the embodiment of the application; details not disclosed in this embodiment are not repeated here.
In the image beautification apparatus of the embodiment of the application, a color image taken by a camera is captured by an image sensor, a depth image generated using structured light is obtained by a structured light sensor, a face region is then extracted from the color image based on a face recognition algorithm, a facial skin defect region in the face region is determined according to the color image and the depth image, and the facial skin defect region is repaired. Compared with processing by planar beautification software, the facial skin defect region can be located accurately, no manual post-processing is needed, the efficiency is high, and the beautification effect is good.
To implement the above embodiments, the application further proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image beautification method of the foregoing embodiments can be implemented.
To implement the above embodiments, the application further proposes a terminal device.
As shown in Fig. 5, the terminal device 90 includes a processor 91, a memory 92 and an image processing circuit 93.
The memory 92 is used to store executable program code; the processor 91 processes images by reading the executable program code stored in the memory 92 and by means of the image processing circuit 93, so as to implement the image beautification method of the foregoing embodiments:
S101': capture, by an image sensor, a color image taken by a camera.
S102': obtain, by a structured light sensor, a depth image generated using structured light.
S103': extract a face region from the color image based on a face recognition algorithm.
S104': determine a facial skin defect region in the face region according to the color image and the depth image.
S105': repair the facial skin defect region.
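Tying the steps together, the sketch below is a hypothetical end-to-end driver that reuses the helper functions from the earlier sketches (extract_face_region, find_planar_defects, find_stereo_defects, repair_planar_defects, repair_stereo_defects); it assumes a BGR color image and a floating-point depth map of the same resolution, which the patent does not mandate.

```python
def beautify(color_image, depth_image):
    """End-to-end sketch of S101'-S105' operating on already-captured images."""
    roi = extract_face_region(color_image)                     # S103'
    if roi is None:
        return color_image, depth_image                        # no face found
    x, y, w, h = roi
    face_bgr = color_image[y:y + h, x:x + w]
    face_depth = depth_image[y:y + h, x:x + w]
    planar = find_planar_defects(face_bgr)                     # S104'
    stereo = find_stereo_defects(face_depth)
    color_image[y:y + h, x:x + w] = repair_planar_defects(face_bgr, planar)    # S105'
    depth_image[y:y + h, x:x + w] = repair_stereo_defects(face_depth, stereo)
    return color_image, depth_image
```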
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the application. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples described in this specification, and features of different embodiments or examples, provided they do not contradict each other.
Although the embodiments of the application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the application, and those skilled in the art may change, modify, replace and vary the above embodiments within the scope of the application.

Claims (12)

1. An image beautification method, characterized by comprising:
capturing, by an image sensor, a color image taken by a camera;
obtaining, by a structured light sensor, a depth image generated using structured light;
extracting a face region from the color image based on a face recognition algorithm;
determining a facial skin defect region in the face region according to the color image and the depth image; and
repairing the facial skin defect region.
2. The method according to claim 1, characterized in that the facial skin defect region comprises a planar defect region and a stereoscopic defect region, and determining the facial skin defect region in the face region according to the color image and the depth image comprises:
determining the planar defect region using the color image; and
determining the stereoscopic defect region using the depth image.
3. The method according to claim 1, characterized in that obtaining, by the structured light sensor, the depth image generated using structured light comprises:
projecting structured light onto a face by a structured light projector, and obtaining, by the structured light sensor, a structured light image modulated by the face; and
decoding the structured light image to generate the depth image.
4. The method according to claim 3, characterized in that decoding the structured light image to generate the depth image comprises:
decoding phase information corresponding to pixels at deformed positions in the structured light image;
converting the phase information into height information; and
generating the depth image according to the height information.
5. The method according to claim 1, characterized in that repairing the facial skin defect region comprises:
when the facial skin defect region is a planar defect region, replacing the planar defect region with standard skin features; and/or
when the facial skin defect region is a stereoscopic defect region, repairing the depth information of the stereoscopic defect region using the depth image.
6. An image beautification apparatus, characterized by comprising:
a capture module, configured to capture, by an image sensor, a color image taken by a camera;
an acquisition module, configured to obtain, by a structured light sensor, a depth image generated using structured light;
an extraction module, configured to extract a face region from the color image based on a face recognition algorithm;
a determination module, configured to determine a facial skin defect region in the face region according to the color image and the depth image; and
a repair module, configured to repair the facial skin defect region.
7. The apparatus according to claim 6, characterized in that the facial skin defect region comprises a planar defect region and a stereoscopic defect region, and the determination module is configured to:
determine the planar defect region using the color image; and
determine the stereoscopic defect region using the depth image.
8. The apparatus according to claim 6, characterized in that the acquisition module is configured to:
project structured light onto a face by a structured light projector, and obtain, by the structured light sensor, a structured light image modulated by the face; and
decode the structured light image to generate the depth image.
9. The apparatus according to claim 8, characterized in that the acquisition module is specifically configured to:
decode phase information corresponding to pixels at deformed positions in the structured light image;
convert the phase information into height information; and
generate the depth image according to the height information.
10. The apparatus according to claim 6, characterized in that the repair module is configured to:
when the facial skin defect region is a planar defect region, replace the planar defect region with standard skin features; and/or
when the facial skin defect region is a stereoscopic defect region, repair the depth information of the stereoscopic defect region using the depth image.
11. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the image beautification method according to any one of claims 1 to 5 is implemented.
12. A terminal device, characterized by comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the image beautification method according to any one of claims 1 to 5.
CN201810180248.9A 2018-03-05 2018-03-05 Image beautification method, apparatus and terminal device Pending CN108335276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810180248.9A CN108335276A (en) 2018-03-05 2018-03-05 Image beautification method, apparatus and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810180248.9A CN108335276A (en) 2018-03-05 2018-03-05 Image beautification method, apparatus and terminal device

Publications (1)

Publication Number Publication Date
CN108335276A (en) 2018-07-27

Family

ID=62930462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810180248.9A (en) Image beautification method, apparatus and terminal device

Country Status (1)

Country Link
CN (1) CN108335276A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413270A (en) * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Method and device for image processing and terminal device
CN106920211A (en) * 2017-03-09 2017-07-04 广州四三九九信息科技有限公司 U.S. face processing method, device and terminal device
CN107392874A (en) * 2017-07-31 2017-11-24 广东欧珀移动通信有限公司 U.S. face processing method, device and mobile device
CN107493428A (en) * 2017-08-09 2017-12-19 广东欧珀移动通信有限公司 Filming control method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
廖根为: "Research on Portrait Identification Issues in Surveillance Video Systems" (《监控录像系统中人像鉴定问题研究》), 30 June 2010 *
蔡晓东 et al.: "Beard Detection Algorithm Based on Feature Point Localization and Skin Color Segmentation" (基于特征点定位与肤色分割的胡子检测算法), 《视频应用于工程》 *

Similar Documents

Publication Publication Date Title
Moreno et al. Simple, accurate, and robust projector-camera calibration
CN104299218B (en) Projector calibration method based on lens distortion rule
US20120281087A1 (en) Three-dimensional scanner for hand-held phones
US20140078260A1 (en) Method for generating an array of 3-d points
CN113514008B (en) Three-dimensional scanning method, three-dimensional scanning system, and computer-readable storage medium
CN108564540B (en) Image processing method and device for removing lens reflection in image and terminal equipment
CN108534714A (en) Based on sinusoidal and binary system fringe projection quick three-dimensional measurement method
CN109155843A (en) Image projection system and image projecting method
CN101271575B (en) Orthogonal projection emendation method for image measurement in industry close range photography
CN107592449B (en) Three-dimensional model establishing method and device and mobile terminal
CN106595523B (en) A kind of Portable three-dimensional shape measurement system based on smart phone
CN103727898A (en) Rapid three-dimensional measurement system and method for correcting nonlinear distortion through lookup tables
CN101821580A (en) System and method for three-dimensional measurement of the shape of material objects
CN107517346B (en) Photographing method and device based on structured light and mobile device
CN107370950B (en) Focusing process method, apparatus and mobile terminal
CN109510948A (en) Exposure adjustment method, device, computer equipment and storage medium
CN107463659B (en) Object searching method and device
CN107680039B (en) Point cloud splicing method and system based on white light scanner
CN107392874B (en) Beauty treatment method and device and mobile equipment
JP2007142495A (en) Planar projector and planar projection program
CN108510471A (en) Image orthodontic method, device and terminal device
CN108225218A (en) 3-D scanning imaging method and imaging device based on optical micro electro-mechanical systems
CN107229887A (en) Multi-code scanning device and multi-code scan method
CN106839989A (en) The scan method and scanning aid mark plate of a kind of spatial digitizer
CN110099220B (en) Panoramic stitching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180727)