CN106991379B - Human skin recognition method and device combined with depth information and electronic device - Google Patents

Human skin recognition method and device combined with depth information and electronic device

Info

Publication number
CN106991379B
CN106991379B (application CN201710139681.3A)
Authority
CN
China
Prior art keywords
area
depth
human skin
processing
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710139681.3A
Other languages
Chinese (zh)
Other versions
CN106991379A (en)
Inventor
孙剑波 (Sun Jianbo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710139681.3A priority Critical patent/CN106991379B/en
Publication of CN106991379A publication Critical patent/CN106991379A/en
Application granted granted Critical
Publication of CN106991379B publication Critical patent/CN106991379B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V40/164 Detection; Localisation; Normalisation using holistic features

Abstract

The invention discloses a human skin recognition method combined with depth information, for processing scene data collected by an imaging device. The method comprises the following steps: processing the scene data to identify a face region; processing the scene data to acquire depth information of the face region; determining a portrait area according to the face region and the depth information; and processing the portrait area to merge areas whose color is similar to the face region into a human skin area. The invention also discloses a human skin recognition device and an electronic device. According to the human skin recognition method and device combined with depth information and the electronic device of the embodiments of the invention, the portrait area is extracted based on the depth information and the human skin area is identified within the portrait area. This avoids the problem that, when skin is identified over the whole image, other objects whose color is close to human skin are identified as skin areas, and thus improves the accuracy of human skin area recognition.

Description

Human skin recognition method and device combined with depth information and electronic device
Technical Field
The present invention relates to the field of image processing, and in particular to a human skin recognition method and device combined with depth information, and an electronic device.
Background
Existing human skin recognition methods identify skin by searching for areas whose color is close to that of the human face, so other objects whose color is close to human skin may also be identified as skin areas, which affects the accuracy of human skin area recognition.
Disclosure of Invention
The embodiment of the invention provides a human skin identification method and device combined with depth information and an electronic device.
The invention relates to a human skin recognition method combined with depth information, which is used for processing scene data collected by an imaging device and comprises the following steps:
processing the scene data to identify a face region;
processing the scene data to acquire depth information of the face region;
determining a portrait area according to the face area and the depth information; and
processing the portrait area to merge areas in the portrait area whose color is similar to the face area into a human skin area.
The human skin recognition device combined with depth information is used for processing scene data collected by an imaging device, and comprises a recognition module, an acquisition module, a determination module and a merging module. The recognition module is used for processing the scene data to recognize a face region; the acquisition module is used for processing the scene data to acquire depth information of the face region; the determination module is used for determining a portrait area according to the face region and the depth information; and the merging module is used for processing the portrait area to merge areas in the portrait area whose color is similar to the face region into a human skin area.
The electronic device comprises an imaging device and the human skin recognition device, wherein the human skin recognition device is electrically connected with the imaging device.
According to the human skin recognition method and device combined with depth information and the electronic device of the embodiments of the invention, the portrait area is extracted based on the depth information, and the human skin area is identified within the portrait area. This avoids the problem that, when skin is identified over the whole image, other objects whose color is close to human skin are identified as skin areas, and improves the accuracy of human skin area recognition.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a human skin identification method incorporating depth information according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of an electronic device according to an embodiment of the invention;
FIG. 3 is a state diagram of a human skin identification method incorporating depth information according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a method of human skin identification incorporating depth information according to some embodiments of the present invention;
FIG. 5 is a functional block diagram of an acquisition module in accordance with certain embodiments of the present invention;
FIG. 6 is a flow chart illustrating a method for human skin identification incorporating depth information in accordance with certain embodiments of the present invention;
FIG. 7 is a functional block diagram of an acquisition module in accordance with certain embodiments of the present invention;
FIG. 8 is a flow chart illustrating a method for human skin identification incorporating depth information in accordance with certain embodiments of the present invention;
FIG. 9 is a functional block diagram of a determination module in accordance with certain implementations of the invention;
FIG. 10 is a state diagram of a method for human skin identification incorporating depth information according to some embodiments of the present invention;
FIG. 11 is a state diagram of a method for human skin identification incorporating depth information according to some embodiments of the present invention;
FIG. 12 is a flow chart illustrating a method for human skin identification incorporating depth information in accordance with certain embodiments of the present invention;
FIG. 13 is a functional block diagram of a merge module in accordance with certain embodiments of the invention;
FIG. 14 is a schematic flow chart of a method of human skin identification incorporating depth information according to some embodiments of the present invention;
FIG. 15 is a functional block diagram of an electronic device according to some embodiments of the present invention; and
FIG. 16 is a state diagram of a method for human skin identification incorporating depth information according to some embodiments of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Referring to fig. 1 to 2, the method for recognizing human skin in combination with depth information according to the embodiment of the present invention is used to process scene data collected by an imaging device 20, and the method for recognizing human skin includes the following steps:
S11: processing the scene data to identify a face region;
S12: processing the scene data to acquire depth information of the face region;
S13: determining a portrait area according to the face area and the depth information; and
S14: processing the portrait area to merge areas in the portrait area whose color is similar to the face area into a human skin area.
The human skin recognition method combined with depth information according to the embodiment of the present invention is applied to the human skin recognition apparatus 10 of the embodiment of the present invention. The human skin recognition apparatus 10 includes a recognition module 11, an acquisition module 12, a determination module 13, and a merging module 14. Step S11 may be implemented by the recognition module 11, step S12 by the acquisition module 12, step S13 by the determination module 13, and step S14 by the merging module 14.
That is, the recognition module 11 is configured to process the scene data to recognize a face region; the acquisition module 12 is configured to process the scene data to obtain depth information of the face region; the determination module 13 is configured to determine a portrait area according to the face region and the depth information; and the merging module 14 is configured to process the portrait area to merge areas of the portrait area that are similar in color to the face region into a human skin area.
The human skin recognition device 10 of the embodiment of the present invention can be applied to the electronic device 100 of the embodiment of the present invention. That is, the electronic device 100 of the embodiment of the present invention includes the human skin recognition device 10 of the embodiment of the present invention. Of course, the electronic device 100 of the embodiment of the present invention further includes the imaging device 20. Wherein the human skin identification device 10 is electrically connected to the imaging device 20.
In some embodiments, the electronic device 100 includes a mobile phone and/or a tablet computer, which is not limited herein. In the embodiments of the present invention, the electronic device 100 is a mobile phone.
When identifying human skin areas, existing methods generally search the whole image for areas whose color is close to the skin color of the face and merge those areas into the human skin area. However, the image may contain other objects whose color is close to that of facial skin, such as a yellow table or brown marble, which leads to inaccurate identification of human skin areas.
Referring to fig. 3, in the embodiment of the present invention, a face region is first identified, a portrait area is determined according to the depth information of the face region, and finally areas whose color is similar to facial skin are found within the portrait area and merged into the human skin area. In this way, other objects whose color is close to facial skin can be prevented from being merged into the human skin area, which improves the accuracy of human skin identification. In the embodiment of the present invention, the portrait area is extracted based on depth information. Because the acquisition of depth information is not easily affected by external environmental factors such as illumination, the extracted portrait area is more accurate, which further improves the accuracy of skin identification.
Preferably, for the face region identification step, a trained deep learning model based on color information and depth information can be used to detect whether a face exists in the scene main image. The deep learning model is trained on a training set whose data include color information and depth information of human faces. The trained model can therefore infer whether a face region exists in the current scene from the color information and depth information of that scene. Because the acquisition of the depth information of the face region is not easily affected by environmental factors such as illumination, face detection accuracy can be improved. Further, a portrait area located at substantially the same depth can then be determined from the face.
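As a rough illustration of this detection step only, the sketch below runs a stock OpenCV Haar cascade on the scene main image. The patent itself describes a trained deep learning model using both color and depth information; the cascade here is a plainly named stand-in, and the file name `scene_main.png` is an assumption:
```python
import cv2

# Scene main image (an RGB color image; OpenCV loads it as BGR).
scene = cv2.imread("scene_main.png")
gray = cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY)

# Stock frontal-face detector shipped with OpenCV, standing in for the
# patent's deep learning model trained on color and depth information.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is a face region given as (x, y, w, h).
for (x, y, w, h) in faces:
    cv2.rectangle(scene, (x, y), (x + w, y + h), (0, 255, 0), 2)
```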
Referring to fig. 4, in some embodiments, the scene data includes a scene main image and a depth image corresponding to the scene main image, and the step S12 of processing the scene data to obtain the depth information of the face region includes the following steps:
S121: processing the depth image to obtain depth data corresponding to the face region; and
S122: processing the depth data to obtain the depth information.
Referring to fig. 5, in some embodiments, the acquisition module 12 includes a first processing unit 111 and a second processing unit 112. Step S121 may be implemented by the first processing unit 111, and step S122 may be implemented by the second processing unit 112.
That is, the first processing unit 111 is configured to process the depth image to obtain depth data corresponding to the face region; the second processing unit 112 is configured to process the depth data to obtain the depth information.
It will be appreciated that the scene data includes a scene primary image and a depth image corresponding to the scene primary image. The main image of the scene is an RGB color image, and the depth image comprises depth information of each person or object in the scene. Because the color information of the main image of the scene and the depth information of the depth image are in one-to-one correspondence, the depth information of the face area can be acquired in the corresponding depth image after the face area is detected.
It should be noted that the face region includes features such as the nose, eyes and ears, and in the depth image these features correspond to different depth data. For example, in a depth image captured with the face towards the imaging device 20, the depth data corresponding to the nose may be smaller while the depth data corresponding to the ears may be larger. Therefore, in an embodiment of the present invention, the depth information of the face region obtained by processing its depth data may be a single value or a range of values. When the depth information of the face region is a single value, the value may be obtained by averaging the depth data of the face region, or by taking their median.
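A minimal sketch of deriving a single depth value or a value range from the face region's depth data; the array name, the face box format and the zero-means-invalid convention are assumptions, not from the patent:
```python
import numpy as np

def face_depth_info(depth_image: np.ndarray, face_box: tuple):
    """Depth information of the face region: a single value and a value range.

    depth_image is an HxW array of per-pixel depth; face_box is (x, y, w, h)
    as returned by a face detector.
    """
    x, y, w, h = face_box
    region = depth_image[y:y + h, x:x + w]
    valid = region[region > 0]                    # drop pixels with no depth reading
    single = float(np.median(valid))              # median resists nose/ear extremes
    value_range = (float(valid.min()), float(valid.max()))  # nose (near) to ears (far)
    return single, value_range
```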
In some embodiments, the imaging device 20 includes a depth camera, which may be used to acquire the depth image. The depth camera may be based on structured-light depth measurement or on TOF (time-of-flight) ranging.
Specifically, a structured-light depth camera includes a camera and a projector. The projector projects structured light of a certain pattern onto the scene to be photographed; a three-dimensional light-stripe image modulated by each person or object in the scene is formed on their surfaces, and the camera detects it to obtain a two-dimensional distorted light-stripe image. The degree of distortion of the stripes depends on the relative position between the projector and the camera, and on the surface profile or height of each person or object in the scene. Because the relative position between the camera and the projector in the depth camera is fixed, the three-dimensional contour of each surface in the scene can be reproduced from the coordinates of the distorted two-dimensional light-stripe image, and the depth information can thus be acquired. Structured-light depth ranging has high resolution and measurement accuracy, which improves the accuracy of the acquired depth information.
A TOF depth camera records, through a sensor, the phase change of modulated infrared light that is emitted from a light-emitting unit and reflected back by the objects; according to the speed of light, the depth of the whole scene can be obtained in real time. Because the persons or objects in the scene are at different depths, the time between emission and reception of the modulated infrared light differs, and the depth information of the scene can thus be obtained. A TOF depth camera is not affected by the gray scale or surface features of the photographed objects when calculating depth information, can compute depth quickly, and has high real-time performance.
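For continuous-wave TOF sensing, the recorded phase change and the speed of light give the depth directly; a small worked sketch, assuming a continuous-wave sensor and an illustrative 20 MHz modulation frequency:
```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Depth from the phase change of the modulated infrared light.

    The light travels to the object and back, hence the factor of 2 folded
    into the denominator: d = c * delta_phi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * modulation_freq_hz)

# A phase shift of pi/2 at a 20 MHz modulation frequency gives about 1.87 m.
print(tof_depth(math.pi / 2, 20e6))
```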
Referring to fig. 6, in some embodiments, the scene data includes a scene main image and a scene sub-image corresponding to the scene main image, and step S12 includes the following steps:
S123: processing the scene main image and the scene sub-image to acquire depth data corresponding to the face area; and
S124: processing the depth data to obtain the depth information.
Referring to fig. 7, in some embodiments, the acquisition module 12 includes a third processing unit 123 and a fourth processing unit 124. Step S123 may be implemented by the third processing unit 123, and step S124 may be implemented by the fourth processing unit 124.
That is, the third processing unit 123 is configured to process the scene main image and the scene sub-image to obtain depth data corresponding to the face region; the fourth processing unit 124 is configured to process the depth data to obtain the depth information.
In some embodiments, the imaging device 20 includes a primary camera and a secondary camera.
It can be understood that the depth information may also be obtained by binocular stereo ranging, in which case the scene data include a scene main image and a scene sub-image. The scene main image is captured by the primary camera and the scene sub-image by the secondary camera, and both are RGB color images. In some examples, the primary camera and the secondary camera are two cameras of the same specification: binocular stereo ranging images the same scene from different positions to obtain a stereo image pair, matches corresponding points of the pair through an algorithm to calculate the parallax, and finally recovers the depth information by triangulation. In other examples, the primary camera used to acquire the color information of the current scene and the secondary camera used to record the depth data of the scene may be cameras of different specifications. In this way, the depth data of the face region can be obtained by matching the stereo image pair formed by the scene main image and the scene sub-image, and the depth data of the face region are then processed to obtain its depth information. Because the face region contains many features whose depth data may differ, the depth information of the face region may be a numerical range; alternatively, the depth data may be averaged, or their median taken, to obtain the depth information of the face region.
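A minimal sketch of binocular stereo ranging with OpenCV's semi-global block matcher; the focal length, baseline and file names are assumed calibration values, and the patent does not prescribe a particular matching algorithm:
```python
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed, from stereo calibration)
BASELINE_M = 0.05   # distance between primary and secondary camera, metres

left = cv2.imread("scene_main.png", cv2.IMREAD_GRAYSCALE)   # primary camera
right = cv2.imread("scene_sub.png", cv2.IMREAD_GRAYSCALE)   # secondary camera

# Match corresponding points of the stereo image pair to compute the parallax
# (disparity). numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

# Triangulation: depth = focal * baseline / disparity, valid where disparity > 0.
depth = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, 0.0)
```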
Referring to fig. 8, in some embodiments, step S13 includes the following sub-steps:
S131: determining an estimated portrait area according to the face area;
S132: determining the depth range of the portrait area according to the depth information of the face area;
S133: determining, according to the depth range of the portrait area, a calculated portrait area which is connected with the face area and falls within the depth range;
S134: judging whether the calculated portrait area matches the estimated portrait area; and
S135: when the calculated portrait area matches the estimated portrait area, determining the calculated portrait area as the portrait area.
Referring to fig. 9, in some embodiments, the determining module 13 includes a first determining unit 131, a second determining unit 132, a third determining unit 133, a first judging unit 134, and a fourth determining unit 135. Step S131 may be implemented by the first determining unit 131, step S132 may be implemented by the second determining unit 132, step S133 may be implemented by the third determining unit 133, step S134 may be implemented by the first judging unit 134, and step S135 may be implemented by the fourth determining unit 135.
That is, the first determining unit 131 is configured to determine an estimated portrait area according to the face area; the second determining unit 132 is configured to determine the depth range of the portrait area according to the depth information of the face area; the third determining unit 133 is configured to determine, according to the depth range of the portrait area, a calculated portrait area that is connected to the face area and falls within the depth range; the first judging unit 134 is configured to judge whether the calculated portrait area matches the estimated portrait area; and the fourth determining unit 135 is configured to determine the calculated portrait area as the portrait area when they match.
Referring to fig. 10, specifically: since a portrait can take many postures during shooting, such as standing or squatting, after the face area is determined the estimated portrait area is determined according to the current state of the face area; that is, the current posture of the portrait is inferred from the current state of the face area. The estimated portrait area is drawn from a matching sample library of portrait areas, which contains posture information of a variety of portraits. Because the portrait area includes the face area, the two lie within a certain depth range; therefore, after the depth information of the face area is determined, the depth range of the portrait area can be set according to it, and the calculated portrait area, which falls within that depth range and is connected with the face area, can be extracted. Since the scene may be complex when a portrait is taken, other objects may be adjacent to and in contact with the human body while lying within the depth range of the portrait area; the calculated portrait area therefore extracts only the part connected with the face within the depth range, so as to remove such objects. After the calculated portrait area is determined, it is matched against the estimated portrait area; if the match succeeds, the calculated portrait area is determined as the portrait area. If the match fails, the calculated portrait area may contain objects other than the portrait, and recognition of the portrait area fails.
In another example, for complex shooting scenes, the calculated portrait area may be further divided into regions, and regions with small areas are removed, as in the sketch below. It can be understood that, compared with the portrait area, other much smaller regions can obviously be determined as non-portrait, so interference from other objects in the same depth range as the portrait can be eliminated.
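A minimal sketch of extracting the calculated portrait area, assuming a per-pixel depth map and a face box are already available; the depth margin and the minimum region area are illustrative values, not from the patent:
```python
import cv2
import numpy as np

def calculated_portrait_area(depth, face_box, margin=0.5, min_area=2000):
    """Pixels that fall in the portrait's depth range and connect to the face.

    depth: HxW float array of per-pixel depth in metres; face_box: (x, y, w, h).
    """
    x, y, w, h = face_box
    face_depth = np.median(depth[y:y + h, x:x + w])   # depth info of the face area

    # Depth range of the portrait area, set around the face depth.
    in_range = (np.abs(depth - face_depth) < margin).astype(np.uint8)

    # Variant for complex scenes: discard in-range regions whose area is too
    # small to be part of a person.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(in_range, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            in_range[labels == i] = 0

    # The calculated portrait area is the remaining region connected with the face.
    n, labels, _, _ = cv2.connectedComponentsWithStats(in_range, connectivity=8)
    face_label = labels[y + h // 2, x + w // 2]
    return (labels == face_label).astype(np.uint8)
```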
In some embodiments, processing the acquired portrait area further comprises the steps of:
processing a portrait area of a main image of a scene to obtain a color edge map;
processing depth information corresponding to a portrait area of a main image of a scene to obtain a depth edge map; and
correcting the edge of the portrait area by using the color edge map and the depth edge map.
Referring to fig. 11, it can be understood that the color edge map contains edge information inside the portrait area, such as the edges of clothes, while the accuracy of currently obtainable depth information is limited, with some errors at the edges of fingers, hair, collars and the like. Therefore, using the color edge map and the depth edge map together to correct the edge of the portrait area can, on one hand, remove the edge and detail information of parts such as clothes contained inside the portrait area and, on the other hand, achieve higher accuracy at edges such as fingers, hair and collars, yielding more accurate edge information of the outer contour of the portrait area. Moreover, because the color edge map and the depth edge map only process data corresponding to the portrait area, the amount of data to be processed is small and the processing is fast.
Commonly used edge detection algorithms include the Roberts operator, the Sobel operator, the Prewitt operator, the Canny operator, the Laplacian operator, the LoG operator, etc. In some examples, any of the above edge detection algorithms may be used to compute the color edge map, which is not limited herein.
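For instance, a color edge map of the portrait area could be computed with the Canny operator; the hysteresis thresholds and the file name below are illustrative:
```python
import cv2

portrait = cv2.imread("portrait_area.png")       # portrait area of the scene main image
gray = cv2.cvtColor(portrait, cv2.COLOR_BGR2GRAY)
color_edge_map = cv2.Canny(gray, 50, 150)        # thresholds are illustrative values
```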
Further, in acquiring the depth edge map, only the depth information corresponding to the portrait area needs to be processed. The obtained portrait area is therefore first dilated, expanding it so as to retain the details of the depth edges within the depth information corresponding to the portrait area. Then the depth information corresponding to the dilated portrait area is filtered, removing the high-frequency noise it carries and smoothing the edge details of the depth edge map. Finally, the filtered data are converted into gray-value data, linear logistic regression is applied to the gray values, and the result is combined with an image edge probability density algorithm to obtain the depth edge map.
A color edge map alone retains the edges of the interior regions of the portrait, while a depth edge map alone carries some errors; therefore, the interior edges of the portrait are removed from the color edge map by means of the depth edge map, and the accuracy of the outer contour in the depth edge map is corrected by means of the color edge map. Correcting the edge of the portrait area with both the depth edge map and the color edge map thus yields a relatively accurate portrait area.
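A hedged sketch of the whole correction: the depth side is dilated and smoothed as described above, but a Canny pass stands in for the patent's linear logistic regression and edge probability density step, and the final fusion rule (keep color edges only near the depth contour) is one plausible reading rather than the patent's exact formula. `color_edge_map` comes from the previous sketch and `portrait_depth` is an assumed HxW float depth array of the portrait area:
```python
import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)

# Dilate the portrait's depth data to retain edge detail, then filter out
# high-frequency noise to smooth the edges of the depth edge map.
dilated = cv2.dilate(portrait_depth, kernel)
smoothed = cv2.GaussianBlur(dilated, (5, 5), 0)

# Convert the filtered data to gray values; Canny stands in here for the
# regression / edge-probability step that yields the depth edge map.
gray = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
depth_edge_map = cv2.Canny(gray, 50, 150)

# Correct the portrait edge with both maps: interior edges (e.g. clothes) in
# the color map are dropped because they lie far from the depth contour, while
# the color map refines the less accurate depth contour.
near_depth_contour = cv2.dilate(depth_edge_map, kernel, iterations=3)
corrected_edge = cv2.bitwise_and(color_edge_map, near_depth_contour)
```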
Referring to fig. 12, in some embodiments, the step S14 of processing the portrait area to merge the areas in the portrait area that are similar in color to the face area into a skin area of the human body includes the following sub-steps:
S141: processing the portrait area to obtain color data of each pixel point of the portrait area;
S142: judging whether the color data fall within a preset range; and
S143: merging all corresponding pixel points whose color data fall within the preset range into the human skin area.
Referring to fig. 13, in some embodiments, the merging module 14 includes a fifth processing unit 141, a second judging unit 142, and a merging unit 143. Step S141 may be implemented by the fifth processing unit 141, step S142 by the second judging unit 142, and step S143 by the merging unit 143.
That is, the fifth processing unit 141 is configured to process the portrait area to obtain the color data of each pixel point of the portrait area; the second judging unit 142 is configured to judge whether the color data fall within a preset range; and the merging unit 143 is configured to merge all corresponding pixel points whose color data fall within the preset range into the human skin area.
Specifically, the portrait area image of the scene main image is first converted from RGB format to YCrCb format; the conversion may use the following formulas to calculate the color data of each pixel point in the YCrCb color space: Y = 0.299R + 0.587G + 0.114B; Cr = 0.500R - 0.419G - 0.081B + 128; Cb = -0.169R - 0.331G + 0.500B + 128. Each pixel point in the YCrCb-format portrait area image is then examined: if its color data fall within the preset range, namely 133 < Cr < 173 and 77 < Cb < 127, the pixel point is merged into the human skin area.
In this way, only the human skin area is identified in the human image area, and the interference of other objects with the color similar to that of the skin area can be eliminated. In addition, all human skin regions in the portrait region, including skin regions of the face, neck, hands, etc., can be identified.
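A minimal sketch of steps S141 to S143 with OpenCV, using the Cr/Cb range given above; note that OpenCV's conversion orders the channels Y, Cr, Cb, and the file name is an assumption:
```python
import cv2
import numpy as np

portrait = cv2.imread("portrait_area.png")            # edge-corrected portrait area (BGR)
ycrcb = cv2.cvtColor(portrait, cv2.COLOR_BGR2YCrCb)   # channels ordered Y, Cr, Cb

# Preset range from the description: 133 < Cr < 173 and 77 < Cb < 127.
# cv2.inRange is inclusive, so the strict bounds become 134..172 and 78..126.
lower = np.array([0, 134, 78], dtype=np.uint8)
upper = np.array([255, 172, 126], dtype=np.uint8)
skin_mask = cv2.inRange(ycrcb, lower, upper)          # 255 where the pixel is skin-coloured

# Merge all pixels whose color data fall within the range into the skin area.
skin_area = cv2.bitwise_and(portrait, portrait, mask=skin_mask)
```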
Referring to fig. 14, the method for identifying human skin according to the embodiment of the present invention further includes the following steps:
S15: performing special effect processing on the human skin area.
Referring to fig. 15, in some embodiments, the human skin identification device 10 further includes a processing module 15. Step S15 may be implemented by the processing module 15.
That is, the processing module 15 is used for performing special effect processing on the human skin region.
Referring to fig. 16, in this way, special effect processing such as whitening and skin smoothing can be performed on the human skin area, so as to obtain an image with a better visual effect and improve the user experience.
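A minimal sketch of one such special effect, applying an edge-preserving bilateral filter only on the skin pixels found above; the filter parameters and file names are illustrative, and `skin_mask` is assumed to be aligned with the image:
```python
import cv2

img = cv2.imread("photo.png")                       # image to beautify
smoothed = cv2.bilateralFilter(img, 9, 75, 75)      # edge-preserving smoothing
img[skin_mask > 0] = smoothed[skin_mask > 0]        # "buff" only the skin area
cv2.imwrite("photo_beautified.png", img)
```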
The electronic device 100 of the embodiment of the present invention further includes a housing, a memory, a circuit board, a processor and a power circuit. The circuit board is arranged in the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power circuit is used for supplying power to each circuit or device of the electronic device 100; and the memory is used for storing executable program code. The human skin recognition device 10 implements the human skin recognition method of any of the above embodiments of the present invention by reading the executable program code stored in the memory and running a program corresponding to it.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and not to be construed as limiting the present invention; those skilled in the art can make changes, modifications, substitutions and alterations to the above embodiments within the scope of the present invention.

Claims (13)

1. A human skin recognition method combined with depth information is used for processing scene data collected by an imaging device, and is characterized by comprising the following steps:
processing the scene data to identify a face region;
processing the scene data to acquire depth information of the face region;
determining a portrait area according to the face area and the depth information;
the determining the human image area according to the human face area and the depth information comprises:
determining an estimated portrait area according to the face area;
determining the depth range of the portrait area according to the depth information of the face area;
determining, according to the depth range of the portrait area, a calculated portrait area which is connected with the face area and falls within the depth range;
judging whether the calculated portrait area is matched with the estimated portrait area;
determining the calculated portrait area as the portrait area when the calculated portrait area is matched with the estimated portrait area;
processing the portrait area to obtain a color edge map;
processing the depth information corresponding to the portrait area to obtain a depth edge map; and
correcting the edge of the portrait area by utilizing the color edge image and the depth edge image; and
processing the edge-corrected portrait area to merge areas in the edge-corrected portrait area whose color is similar to the face area into a human skin area.
2. The method for recognizing human skin combined with depth information as claimed in claim 1, wherein the scene data comprise a scene main image and a depth image corresponding to the scene main image, and the step of processing the scene data to obtain the depth information of the face region comprises the sub-steps of:
processing the depth image to obtain depth data corresponding to the face region; and
processing the depth data to obtain the depth information.
3. The method as claimed in claim 1, wherein the scene data includes a scene primary image and a scene secondary image corresponding to the scene primary image, and the step of processing the scene data to obtain the depth information of the face region includes the following sub-steps:
processing the scene main image and the scene auxiliary image to acquire depth data corresponding to the face area; and
processing the depth data to obtain the depth information.
4. The method for recognizing human skin combined with depth information as claimed in claim 1, wherein the step of processing the edge-corrected portrait area to merge areas whose color is similar to the face area in the edge-corrected portrait area into the human skin area comprises the following sub-steps:
processing the edge-corrected portrait area to obtain color data of each pixel point of the edge-corrected portrait area;
judging whether the color data fall within a preset range; and
merging all corresponding pixel points whose color data fall within the preset range into the human skin area.
5. The human skin identification method combined with depth information according to claim 1, further comprising the step of:
performing special effect processing on the human skin area.
6. A human skin identification device incorporating depth information for processing scene data collected by an imaging device, the human skin identification device comprising:
the recognition module is used for processing the scene data according to a deep learning model to recognize a face region, and the deep learning model is based on color information and depth information;
an acquisition module for processing the scene data to acquire depth information of the face region;
the determining module is used for determining a portrait area according to the face area and the depth information;
the determining module comprises:
the first determining unit is used for determining an estimated portrait area according to the face area;
a second determining unit, configured to determine a depth range of the portrait area according to depth information of the face area;
a third determining unit, configured to determine, according to the depth range of the portrait area, a calculated portrait area that is connected to the face area and falls within the depth range;
the first judging unit is used for judging whether the calculated portrait area is matched with the estimated portrait area or not; and
a fourth determining unit, configured to determine the calculated portrait area as the portrait area when the calculated portrait area matches the estimated portrait area;
the determination module is further to:
processing the portrait area to obtain a color edge map;
processing the depth information corresponding to the portrait area to obtain a depth edge map; and
correcting the edge of the portrait area by utilizing the color edge image and the depth edge image;
and the merging module is used for processing the edge-corrected portrait area to merge areas in the edge-corrected portrait area whose color is similar to the face area into a human skin area.
7. The apparatus for recognizing human skin according to claim 6, wherein the scene data comprises a scene main image and a depth image corresponding to the scene main image, the acquiring module comprises:
a first processing unit, configured to process the depth image to obtain depth data corresponding to the face region; and
a second processing unit for processing the depth data to obtain the depth information.
8. The apparatus for recognizing human skin according to claim 6, wherein the scene data comprises a scene primary image and a scene secondary image corresponding to the scene primary image, and the acquiring module comprises:
a third processing unit, configured to process the scene main image and the scene sub-image to obtain depth data corresponding to the face region; and
a fourth processing unit, configured to process the depth data to obtain the depth information.
9. The human skin identification device in combination with depth information of claim 6, wherein the merging module comprises:
a fifth processing unit, configured to process the portrait area after edge correction to obtain color data of each pixel point of the portrait area after edge correction;
a second judgment unit configured to judge whether the color data falls within a preset range; and
and the merging unit is used for merging all the corresponding pixel points of which the color data fall into the preset range into the human skin area.
10. The human skin identification device in combination with depth information of claim 6, wherein the human skin identification device further comprises:
and the processing module is used for carrying out special effect processing on the human skin area.
11. An electronic device, comprising:
an imaging device; and
the human skin identification device according to any one of claims 6 to 10, electrically connected to the imaging device.
12. The electronic device of claim 11, wherein the imaging device comprises a primary camera and a secondary camera.
13. The electronic device of claim 11, wherein the imaging device comprises a depth camera.
CN201710139681.3A 2017-03-09 2017-03-09 Human skin recognition method and device combined with depth information and electronic device Expired - Fee Related CN106991379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710139681.3A CN106991379B (en) 2017-03-09 2017-03-09 Human skin recognition method and device combined with depth information and electronic device

Publications (2)

Publication Number Publication Date
CN106991379A CN106991379A (en) 2017-07-28
CN106991379B true CN106991379B (en) 2020-07-10

Family

ID=59413137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710139681.3A Expired - Fee Related CN106991379B (en) 2017-03-09 2017-03-09 Human skin recognition method and device combined with depth information and electronic device

Country Status (1)

Country Link
CN (1) CN106991379B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563302B (en) * 2017-08-09 2020-10-02 Oppo广东移动通信有限公司 Face restoration method and device for removing glasses
CN107509031B (en) * 2017-08-31 2019-12-27 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107465903B (en) * 2017-09-08 2019-04-26 Oppo广东移动通信有限公司 Image white balance method, device and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101287607A * 2005-08-12 2008-10-15 里克·B·耶格尔 (Rick B. Yeager) System and method for medical monitoring and treatment through cosmetic monitoring and treatment
CN102737368A (en) * 2011-03-29 2012-10-17 索尼公司 Image processing apparatus, method, and program
CN103268475A (en) * 2013-05-10 2013-08-28 中科创达软件股份有限公司 Skin beautifying method based on face and skin color detection
CN104243951A (en) * 2013-06-07 2014-12-24 索尼电脑娱乐公司 Image processing device, image processing system and image processing method
CN104966267A (en) * 2015-07-02 2015-10-07 广东欧珀移动通信有限公司 User image beautifying method and apparatus
CN105069007A (en) * 2015-07-02 2015-11-18 广东欧珀移动通信有限公司 Method and device used for establishing beautifying database
CN105224917A (en) * 2015-09-10 2016-01-06 成都品果科技有限公司 A kind of method and system utilizing color space to create skin color probability map
CN106407909A (en) * 2016-08-31 2017-02-15 北京云图微动科技有限公司 Face recognition method, device and system

Also Published As

Publication number Publication date
CN106991379A (en) 2017-07-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200710