CN112818874A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN112818874A
Authority
CN
China
Prior art keywords
image
face
processed
depth information
face region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110157197.XA
Other languages
Chinese (zh)
Inventor
冯上栋
郑龙
黄泽洋
刘风雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Ekos Technology Co Ltd
Original Assignee
Dongguan Ekos Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Ekos Technology Co Ltd filed Critical Dongguan Ekos Technology Co Ltd
Priority to CN202110157197.XA priority Critical patent/CN112818874A/en
Publication of CN112818874A publication Critical patent/CN112818874A/en
Pending legal-status Critical Current

Classifications

    • G06V20/64 — Scenes; scene-specific elements; type of objects; three-dimensional objects
    • G06V10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V40/166 — Human faces: detection; localisation; normalisation using acquisition arrangements
    • G06V40/171 — Human faces: local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172 — Human faces: classification, e.g. identification

Abstract

The application provides an image processing method, an image processing apparatus, an image processing device and a storage medium, relating to the technical field of face recognition. The method judges whether an image to be processed contains a face region, the image to be processed being an image containing an object to be identified. If the image to be processed contains a face region, initial depth information of the face region is obtained according to the face region and an initial reference image, and a two-dimensional image of the face region is obtained; a first target reference image is determined according to the initial depth information of the face region; target depth information of the face region is calculated according to the first target reference image and the face region to obtain a three-dimensional image of the face region; and the object to be identified is identified based on the three-dimensional image and the two-dimensional image of the face region. Applying the embodiments of the application can improve the accuracy of identifying the object to be identified.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
With the development of science and technology, and particularly in the field of face recognition where functions such as face unlocking and face payment have emerged, devices embedding a 3D structured light module (such as mobile phones, tablet computers and smart locks) have been widely adopted. In such devices, the 3D structured light module works together with a processor to perform three-dimensional imaging of the object to be recognized.
At present, the processor directly calculates depth data over the entire area of the image of the object to be recognized obtained with the 3D structured light module, judges from the distance derived from that depth data which reference image the object to be recognized is closest to, takes the closest reference image as the target reference image, and finally performs three-dimensional imaging of the object to be recognized according to the target reference image, that is, recognizes the object to be recognized.
However, when a face is recognized, the face may tilt forward, so that the face and the body in the image of the object to be recognized do not lie on the same plane. If the target reference image is obtained directly from the entire area of that image, it may be selected inaccurately, thereby reducing the accuracy of recognizing the object to be recognized.
Disclosure of Invention
An object of the present application is to provide an image processing method, an image processing apparatus, an image processing device, and a storage medium, which can accurately select a target reference image and thereby improve the accuracy of identifying an object to be identified.
In a first aspect, an embodiment of the present application provides an image processing method, where the method is applied to an electronic device, and the method includes:
judging whether an image to be processed contains a face region or not, wherein the image to be processed is an image containing an object to be identified;
if the image to be processed contains a face region, obtaining initial depth information of the face region according to the face region and an initial reference image, and obtaining a two-dimensional image of the face region, wherein the initial reference image is one of a plurality of reference images stored in the electronic device in advance, each reference image corresponds to a different distance parameter, and the distance parameter is used for representing the distance between an object contained in the reference image and the electronic device;
determining a first target reference image according to the initial depth information of the face region;
calculating target depth information of the face region according to the first target reference image and the face region to obtain a three-dimensional image of the face region;
and identifying the object to be identified based on the three-dimensional image of the face area and the two-dimensional image of the face area.
Optionally, the obtaining initial depth information of the face region according to the face region and an initial reference image includes:
judging whether the face area is credible or not according to a face frame corresponding to the face area and a preset threshold;
and if the face area is credible, obtaining initial depth information of the face area according to the face area and an initial reference image.
Optionally, the method further comprises:
if the face area is not credible, obtaining initial depth information of the image to be processed according to a selectable area on the image to be processed and the initial reference image, wherein the selectable area comprises the face area and an area outside the face area.
Optionally, the determining a first target reference image according to the initial depth information of the face region includes:
determining the distance between the face part in the object to be recognized and the electronic equipment according to the initial depth information of the face region;
and determining a first target reference image according to the distance between the human face part in the object to be recognized and the electronic equipment and the distance parameters corresponding to the reference images.
Optionally, the determining, according to the initial depth information of the face region, a distance between a face part in the object to be recognized and the electronic device includes:
acquiring depth data corresponding to the face area and the number of the depth data according to the initial depth information of the face area;
and determining the distance between the face part in the object to be recognized and the electronic equipment according to the depth data corresponding to the face area and the number of the depth data.
Optionally, the method further comprises:
if the image to be processed does not contain the face region, obtaining initial depth information of the image to be processed according to a region on the image to be processed and the initial reference image, and obtaining a two-dimensional image of the object to be recognized;
determining the distance between the object to be identified and the electronic equipment according to the initial depth information of the image to be processed;
obtaining a second target reference image according to the distance between the object to be identified and the electronic equipment and the distance parameters corresponding to the reference images;
calculating target depth information of the image to be processed according to the second target reference image and the image to be processed to obtain a three-dimensional image of the object to be recognized;
and identifying the object to be identified based on the three-dimensional image of the object to be identified and the two-dimensional image of the object to be identified.
Optionally, the determining a distance between the object to be recognized and the electronic device according to the initial depth information of the image to be processed includes:
acquiring initial depth information corresponding to a central area on the image to be processed according to the initial depth information of the image to be processed;
and determining the distance between the object to be identified and the electronic device according to the initial depth information corresponding to the central area on the image to be processed.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, where the apparatus is applied to an electronic device, and the apparatus includes:
the judging module is used for judging whether the image to be processed contains a face area or not, wherein the image to be processed is an image containing an object to be identified;
a first determining module, configured to, if the image to be processed includes a face region, obtain initial depth information of the face region according to the face region and an initial reference image, and obtain a two-dimensional image of the face region, where the initial reference image is one of a plurality of reference images stored in the electronic device in advance, each reference image corresponds to a different distance parameter, and the distance parameter is used to represent the distance between an object included in the reference image and the electronic device;
the second determining module is used for determining a first target reference image according to the initial depth information of the face region;
the calculation module is used for calculating target depth information of the face area according to the first target reference image and the face area so as to obtain a three-dimensional image of the face area;
and the recognition module is used for recognizing the object to be recognized based on the three-dimensional image of the face area and the two-dimensional image of the face area.
Optionally, the first determining module is specifically configured to determine whether the face area is trusted according to a face frame corresponding to the face area and a preset threshold; and if the face area is credible, obtaining initial depth information of the face area according to the face area and an initial reference image.
Optionally, the first determining module is further specifically configured to, if the face region is not trusted, obtain initial depth information of the image to be processed according to a selectable region on the image to be processed and the initial reference image, where the selectable region includes the face region and a region outside the face region.
Optionally, the second determining module is specifically configured to determine, according to the initial depth information of the face region, a distance between a face part in the object to be recognized and the electronic device; and determining a first target reference image according to the distance between the human face part in the object to be recognized and the electronic equipment and the distance parameters corresponding to the reference images.
Optionally, the second determining module is further specifically configured to obtain depth data corresponding to the face region and the number of the depth data according to the initial depth information of the face region; and determining the distance between the face part in the object to be recognized and the electronic equipment according to the depth data corresponding to the face area and the number of the depth data.
Optionally, the first determining module is further configured to, if the image to be processed does not include the face region, obtain initial depth information of the image to be processed according to a region on the image to be processed and the initial reference image, and obtain a two-dimensional image of the object to be recognized;
the second determining module is further configured to determine a distance between the object to be identified and the electronic device according to the initial depth information of the image to be processed; obtaining a second target reference image according to the distance between the object to be identified and the electronic equipment and the distance parameters corresponding to the reference images;
the calculation module is further configured to calculate target depth information of the image to be processed according to the second target reference image and the image to be processed, so as to obtain a three-dimensional image of the object to be recognized;
the identification module is further configured to identify the object to be identified based on the three-dimensional image of the object to be identified and the two-dimensional image of the object to be identified.
Optionally, the second determining module is further specifically configured to obtain initial depth information corresponding to a central area on the image to be processed according to the initial depth information of the image to be processed; and determine the distance between the object to be identified and the electronic device according to the initial depth information corresponding to the central area on the image to be processed.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium and a bus, where the storage medium stores machine-readable instructions executable by the processor. When the electronic device is running, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of the image processing method according to the first aspect.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the image processing method of the first aspect.
The beneficial effects of the present application are as follows:
An embodiment of the application provides an image processing method, an image processing apparatus, an image processing device and a storage medium, wherein the method may include: judging whether an image to be processed contains a face region, the image to be processed being an image containing an object to be identified; if the image to be processed contains a face region, obtaining initial depth information of the face region according to the face region and an initial reference image, and obtaining a two-dimensional image of the face region, wherein the initial reference image is one of a plurality of reference images stored in the electronic device in advance, each reference image corresponds to a different distance parameter, and the distance parameter is used for representing the distance between an object contained in the reference image and the electronic device; determining a first target reference image according to the initial depth information of the face region; calculating target depth information of the face region according to the first target reference image and the face region to obtain a three-dimensional image of the face region; and identifying the object to be identified based on the three-dimensional image of the face region and the two-dimensional image of the face region.
With the image processing method provided by the embodiments of the application, after the electronic device acquires the image to be processed, it first judges whether the image contains a face region. If it does, the electronic device determines a first target reference image from a plurality of pre-stored reference images on the basis of the face region and an initial reference image, and finally obtains a three-dimensional image of the face region according to the first target reference image and the face region, so as to recognize the object to be recognized contained in the image to be processed. That is, when the image to be processed includes a face region, the first target reference image is determined based on the face region alone. This avoids the inaccurate selection of the target reference image that occurs in a face recognition scenario when the face of the object to be recognized tilts forward. In other words, by accurately selecting the target reference image, the accuracy of identifying the object to be identified can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device supporting face recognition according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Before the embodiments of the present application are explained in detail, an application scenario of the present application is first described. The application scenario may be any scenario in which a human face is recognized, such as face payment or face unlocking, and the present application does not limit it. The method can be applied to an electronic device supporting face recognition. Fig. 1 is a schematic structural diagram of an electronic device supporting face recognition according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device may include a structured light module 101 and a processor 102 connected to each other. The structured light module 101 may include an emitter and a collector: the emitter, which may specifically be a laser projector, projects structured light onto the surface of the object to be recognized; the collector shoots the object to be recognized through a camera to obtain a structured light image, that is, the image to be processed, where the camera may include an infrared camera and an RGB camera. The collector sends the collected image to be processed to the processor 102, which identifies the face region in the image. When a face region is identified, a target reference image can be determined based on the face region and a preset initial reference image, a three-dimensional image of the face region is then obtained according to the target reference image and the face region, and the object to be recognized is finally identified.
The electronic equipment can be a mobile phone, a tablet personal computer, an intelligent door lock, wearable equipment and the like, and is not limited by the application.
Specifically, in a face unlocking scenario, the electronic device can acquire a user image through its camera and recognize it. When a face region is recognized in the user image, a target reference image can be obtained based on the face region, and the user is then recognized according to the face region and the target reference image; when recognition succeeds, the electronic device enters the unlocked state.
The image processing method of the present application is described below with reference to the accompanying drawings. Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application; the method can be executed by the processor 102 in the electronic device. As shown in fig. 2, the method may include:
s201, judging whether the image to be processed contains a face area or not, wherein the image to be processed is an image containing an object to be identified.
The electronic device can acquire, through its camera, an image to be processed containing an object to be recognized, where the object to be recognized can be a person or another type of object (e.g., an item or a scene). It should be noted that the present application mainly discusses the case where the object to be recognized is a person. The object to be recognized is positioned in front of the electronic device, and the structured light module embedded in the electronic device collects an image of it as the image to be processed.
After acquiring the image to be processed, the processor can extract related feature data from it and judge, based on that feature data and preset human face feature data, whether the image to be processed contains a face region.
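The face-region check in S201 can be sketched as follows. This is a hypothetical illustration, not the patent's actual detector: the feature extractor is out of scope here, features are treated as plain vectors, and the cosine-similarity comparison and its threshold are assumptions of this sketch.

```python
import numpy as np

def contains_face_region(image_features: np.ndarray,
                         preset_face_features: np.ndarray,
                         threshold: float = 0.8) -> bool:
    """Judge that the image contains a face region when the extracted
    feature data is close enough to the preset face feature data."""
    a = image_features / np.linalg.norm(image_features)
    b = preset_face_features / np.linalg.norm(preset_face_features)
    return float(np.dot(a, b)) >= threshold

# Toy usage: identical features match, orthogonal ones do not.
template = np.array([1.0, 0.5, 0.2])
print(contains_face_region(template, template))   # True
print(contains_face_region(np.array([0.0, 0.0, 1.0]),
                           np.array([1.0, 0.0, 0.0])))   # False
```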
S202, if the image to be processed contains a face region, obtaining initial depth information of the face region according to the face region and an initial reference image, and obtaining a two-dimensional image of the face region.
The initial reference image is one of a plurality of reference images, the plurality of reference images are stored in the electronic device in advance, each reference image corresponds to a different distance parameter, and the distance parameters are used for representing the distance between an object contained in the reference image and the electronic device.
The structured light module in the electronic device has a certain working range (e.g., 0.3 m to 1.2 m), which is determined by the characteristics of the module; different structured light modules have different working ranges. The reference images are structured light images acquired in advance on planes at certain distances from the electronic device; each reference image corresponds to a reference position, and each reference position is represented by a distance parameter. For example, for a structured light module with a working range of 0.3 m to 1.2 m, reference images can be acquired at 3 reference positions with distance parameters of 40 cm, 60 cm and 80 cm respectively; after the reference images are acquired, each reference image and its corresponding distance parameter can be stored in association.
Generally, the initial reference image is the one among the plurality of reference images whose distance parameter satisfies a preset condition, for example: the distance parameter ranks in the middle when all distance parameters are sorted by value. Continuing the above example, the reference image with a distance parameter of 60 cm may be taken as the initial reference image.
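The pre-stored reference images and the "middle distance" rule for choosing the initial reference image can be organised as in the minimal sketch below. The 40/60/80 cm distance parameters come from the example in the text; the dictionary-of-placeholders representation is an assumption of this sketch.

```python
# Reference images stored in association with their distance parameters (cm);
# the string values stand in for the actual structured light images.
reference_images = {
    40: "ref_image_40cm",
    60: "ref_image_60cm",
    80: "ref_image_80cm",
}

def select_initial_reference(refs: dict):
    """Pick the reference whose distance parameter sits in the middle
    when the distance parameters are sorted by value."""
    distances = sorted(refs)
    middle = distances[len(distances) // 2]
    return middle, refs[middle]

dist, ref = select_initial_reference(reference_images)
print(dist, ref)   # 60 ref_image_60cm
```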
When the processor determines that the image to be processed acquired by the infrared camera includes a face region, the face region can be highlighted and framed with a bounding box along the boundary of the face. The processor extracts the pixel information within the box and the pixel information in the initial reference image, calculates the pixel deviation between them, and computes the initial depth information corresponding to the face region from the camera parameters of the electronic device. Meanwhile, the processor can likewise extract the two-dimensional image of the face region from the image to be processed acquired by the RGB camera.
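The depth computation from the pixel deviation between the face region and the initial reference image can be illustrated with one common structured-light relation. This relation is an assumption of this sketch, not quoted from the patent: for a disparity d measured relative to a reference plane at depth z_ref, with focal length f (pixels) and emitter-camera baseline b, 1/z = 1/z_ref + d/(f·b).

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         z_ref_mm: float,
                         focal_px: float,
                         baseline_mm: float) -> np.ndarray:
    """Per-pixel depth (mm) from the pixel deviation against the
    reference image, using the inverse-depth relation above."""
    inv_z = 1.0 / z_ref_mm + disparity_px / (focal_px * baseline_mm)
    return 1.0 / inv_z

# Zero disparity means the surface lies exactly on the reference plane.
d = np.array([0.0, 2.0, -2.0])
z = depth_from_disparity(d, z_ref_mm=600.0, focal_px=580.0, baseline_mm=75.0)
print(np.round(z, 1))   # first value is exactly 600.0
```

The illustrative camera parameters (focal length 580 px, baseline 75 mm) are hypothetical; an actual module would supply calibrated values.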
S203, determining a first target reference image according to the initial depth information of the face area.
The processor can remove invalid depth information from the initial depth information of the face region, calculate the distance from the face of the object to be recognized to the electronic device according to the valid depth data in the initial depth image corresponding to the face region and the number of such valid depth data, and take the reference image whose distance parameter is closest to that distance as the first target reference image. The first target reference image may or may not be the initial reference image. A first target reference image determined from the face region of the image to be processed better matches the actual situation and can improve the accuracy of identifying the object to be identified.
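Step S203 can be sketched as: drop invalid depth values in the face region, take the mean of the remaining valid data as the face-to-device distance, and select the reference image whose distance parameter is nearest to it. The zero-as-invalid convention and the cm distance parameters are assumptions of this sketch.

```python
def first_target_reference(face_depths_cm, reference_distances_cm):
    """Select the first target reference image's distance parameter from
    the valid depth data of the face region."""
    valid = [d for d in face_depths_cm if d > 0]     # remove invalid depths
    face_distance = sum(valid) / len(valid)          # mean over valid data
    return min(reference_distances_cm,
               key=lambda ref: abs(ref - face_distance))

depths = [0, 41, 39, 0, 43, 42]   # 0 marks invalid depth data
print(first_target_reference(depths, [40, 60, 80]))   # 40
```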
And S204, calculating target depth information of the face area according to the first target reference image and the face area to obtain a three-dimensional image of the face area.
And S205, identifying the object to be identified based on the three-dimensional image of the face area and the two-dimensional image of the face area.
Taking the first target reference image as the reference, the processor acquires the pixel information of the first target reference image, the pixel information of the face region and the camera parameters, calculates the target depth information of the face region, and constructs a target depth image of the face region from that information. Finally, the target depth image is transformed through the image coordinate system, the camera coordinate system and the world coordinate system to construct a three-dimensional image of the face region.
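The transform chain at the end of S204 can be sketched as a pinhole back-projection from the image coordinate system into camera coordinates; the intrinsic parameters (fx, fy, cx, cy) are assumed, and the further rigid transform into world coordinates would be one extra rotation-plus-translation applied to these points.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW target depth image into an HxWx3 array of
    (X, Y, Z) camera-frame points via the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# A flat 2x2 depth patch at 500 mm: the principal-axis pixel maps to (0, 0, 500).
pts = depth_to_points(np.full((2, 2), 500.0), fx=580.0, fy=580.0, cx=0.0, cy=0.0)
print(pts[0, 0])   # [  0.   0. 500.]
```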
In a face recognition scenario such as face unlocking, the processor may recognize the face region based on its three-dimensional image and its two-dimensional image and match the recognition result with a pre-stored unlock image; if the matching succeeds, the electronic device has successfully recognized the object to be recognized and enters the unlocked state.
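The matching step in the face-unlock case can be sketched as a distance test between a feature vector derived from the 3D and 2D face images and the pre-stored unlock template. How the features are derived, and the distance threshold, are assumptions of this sketch.

```python
import numpy as np

def unlock(face_feature: np.ndarray, stored_feature: np.ndarray,
           max_distance: float = 0.5) -> bool:
    """True when the recognition result matches the pre-stored unlock
    image closely enough (Euclidean distance below a threshold)."""
    return float(np.linalg.norm(face_feature - stored_feature)) <= max_distance

stored = np.array([0.2, 0.9, 0.4])
print(unlock(stored + 0.01, stored))   # True: near-identical features
print(unlock(stored + 2.0, stored))    # False: far from the template
```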
To sum up, in the image processing method provided by the application, after acquiring an image to be processed, the electronic device first determines whether it contains a face region. If it does, the electronic device determines a first target reference image from a plurality of pre-stored reference images on the basis of the face region and an initial reference image, and finally obtains a three-dimensional image of the face region according to the first target reference image and the face region, so as to identify the object to be recognized contained in the image to be processed. That is, when the image to be processed includes a face region, the first target reference image is determined based on the face region alone. Compared with the prior-art approach of determining the reference image from the entire image to be processed, this avoids the inaccurate selection of the target reference image that occurs when the face of the object to be recognized tilts forward in a face recognition scenario. In other words, by accurately selecting the target reference image, the accuracy of identifying the object to be identified can be improved.
Fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3, optionally, the obtaining initial depth information of the face region according to the face region and the initial reference image includes:
s301, judging whether the face area is credible according to the face frame corresponding to the face area and a preset threshold value.
The structured light module on the electronic device has a corresponding working range, which may be (0.3m-1.2m). When the object to be recognized is farther from the electronic device (e.g., 1.2m), the face frame of the face part of the object to be recognized on the electronic device is smaller; when the object to be recognized is closer to the electronic device (e.g., 0.3m), the face frame of the face part of the object to be recognized on the electronic device is larger.
Generally, the preset threshold is obtained in advance from the sizes of the face frames measured when a plurality of objects to be recognized are each located at the farthest boundary of the working range of the structured light module. Specifically, the face frame size of each object to be recognized at the farthest boundary (e.g., 1.2m) of the working range may be obtained, and these face frame sizes are averaged; the averaged face frame size may be used as the preset threshold. The processor can compare the size of the face frame corresponding to the face region with the preset threshold and judge whether the face region is credible according to the comparison result. Since face sizes differ from person to person, obtaining the threshold from a plurality of objects to be recognized improves the accuracy of determining the credibility of the face region.
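The threshold calibration and the subsequent credibility check described above can be sketched as follows. The function names and the convention that "larger frame than threshold means credible" follow the description in this section; the concrete numbers in the usage below are illustrative:

```python
def preset_threshold(far_boundary_frame_sizes):
    """Average the face-frame sizes measured for several subjects standing
    at the farthest working-range boundary (e.g. 1.2 m)."""
    return sum(far_boundary_frame_sizes) / len(far_boundary_frame_sizes)

def face_region_credible(face_frame_size, threshold):
    """A face frame larger than the threshold indicates the subject is
    inside the working range, so the face region is treated as credible."""
    return face_frame_size > threshold
```

For example, calibrating with frame sizes 100, 120, and 110 pixels yields a threshold of 110; a detected frame of 150 pixels would then be judged credible, while a frame of 90 pixels would not.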
And S302, if the face area is credible, obtaining initial depth information of the face area according to the face area and the initial reference image.
When the size of the face frame corresponding to the face region is larger than the preset threshold, it indicates that the position of the object to be recognized does not exceed the farthest boundary of the working range of the structured light module, that is, the face region is credible. On the premise that the face region is credible, the processor calculates the initial depth information corresponding to the face region according to the pixel information of the face region, the pixel information of the initial reference image, the camera parameters, and the like.
And S303, if the face region is not credible, obtaining initial depth information of the image to be processed according to the selectable region on the image to be processed and the initial reference image.
Wherein the selectable region includes the face region and a region outside the face region. For example, the selectable region may be the whole image to be processed, or it may be a partial region of the image to be processed that contains the face region and a region outside it. When the size of the face frame corresponding to the face region is smaller than the preset threshold, it indicates that the position of the object to be recognized exceeds the farthest boundary of the working range of the structured light module, that is, the face region is not credible. On the premise that the face region is not credible, the processor calculates the initial depth information corresponding to the image to be processed according to the pixel information corresponding to the face region on the image to be processed, the pixel information corresponding to the region outside the face region, the pixel information of the initial reference image, the camera parameters, and the like.
By judging whether the face region is credible, the basis for acquiring the target reference image is determined: when the face region is credible, the face region itself serves as that basis, which improves the accuracy of acquiring the target reference image (the first target reference image). On the premise that the target reference image is accurate, the accuracy of recognizing the object to be recognized can in turn be improved.
Fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 4, optionally, the determining the first target reference image according to the initial depth information of the face region includes:
s401, determining the distance between the face part in the object to be recognized and the electronic equipment according to the initial depth information of the face area.
Optionally, according to the initial depth information of the face region, obtaining depth data corresponding to the face region and the number of the depth data; and determining the distance between the face part in the object to be recognized and the electronic equipment according to the depth data corresponding to the face area and the number of the depth data.
The processor may calculate a distance (distance) between the face part of the object to be recognized and the electronic device according to the following formula:
distance = sum / size (1)
here, sum represents the sum of the depth data, and size represents the number of depth data.
After the processor acquires the initial depth information of the face region, effective depth information in the face region can be extracted, the number of depth data contained in the effective depth information is counted, the sum of the depth data is calculated, and then the distance between the face part in the object to be recognized and the electronic equipment can be calculated based on the formula.
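Formula (1) is simply the mean of the valid depth data. A minimal sketch, assuming the valid depth values have already been extracted from the face region:

```python
def face_distance(valid_depth_values):
    """Formula (1): distance = sum / size, where sum is the total of the
    valid depth data in the face region and size is their count."""
    if not valid_depth_values:
        raise ValueError("no valid depth data in the face region")
    return sum(valid_depth_values) / len(valid_depth_values)
```

For instance, valid depths of 0.5 m, 0.7 m and 0.6 m give a distance of 0.6 m between the face part and the electronic device.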
S402, determining a first target reference image according to the distance between the face part in the object to be recognized and the electronic equipment and the distance parameter corresponding to each reference image.
The distance parameter of each reference image is pre-stored in the electronic device. After the distance between the face part of the object to be recognized and the electronic device is obtained, the reference image whose distance parameter is closest to that distance can be determined, and this closest reference image is taken as the first target reference image.
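The nearest-reference selection can be sketched as a minimum over the absolute differences between the measured distance and each pre-stored distance parameter. The list of distance parameters in the usage below is an illustrative assumption:

```python
def select_target_reference(measured_distance, reference_distances):
    """Return the index of the pre-stored reference image whose distance
    parameter is closest to the measured distance."""
    return min(range(len(reference_distances)),
               key=lambda i: abs(reference_distances[i] - measured_distance))
```

With reference images stored at 0.3 m, 0.6 m, 0.9 m and 1.2 m, a measured face distance of 0.62 m selects the 0.6 m reference image as the first target reference image. The same selection logic applies to the second target reference image described later.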
Fig. 5 is a schematic flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 5, the method may further include:
s501, if the image to be processed does not contain the face region, obtaining initial depth information of the image to be processed according to the region on the image to be processed and the initial reference image, and obtaining a two-dimensional image of the object to be recognized.
After acquiring the image to be processed, the processor may extract feature data from it and judge whether the feature data includes face features (e.g., features of the five sense organs). If the image to be processed has no feature matching the face features, the image to be processed does not contain a face region. The processor may then obtain the initial depth information of the image to be processed according to the pixel information of the image to be processed acquired by the infrared camera, the pixel information of the initial reference image, the camera parameters, and the like. Meanwhile, the processor can also extract from the image to be processed a two-dimensional image of the object to be recognized acquired by the RGB camera.
S502, determining the distance between the object to be recognized and the electronic equipment according to the initial depth information of the image to be processed.
The processor can acquire the depth data corresponding to the image to be processed and the number of the depth data according to the initial depth information of the image to be processed, and then calculate the distance between the object to be recognized and the electronic device according to the depth data corresponding to the image to be processed and the number of the depth data in combination with the formula (1).
S503, obtaining a second target reference image according to the distance between the object to be recognized and the electronic equipment and the distance parameters corresponding to the reference images.
The distance parameter of each reference image is pre-stored in the electronic device. After the distance between the object to be recognized and the electronic device is obtained, the reference image whose distance parameter is closest to that distance can be determined, and this closest reference image is taken as the second target reference image.
S504, calculating target depth information of the image to be processed according to the second target reference image and the image to be processed to obtain a three-dimensional image of the object to be recognized.
And S505, identifying the object to be identified based on the three-dimensional image of the object to be identified and the two-dimensional image of the object to be identified.
With the second target reference image as a reference, pixel information on the second target reference image, pixel information of the image to be processed and camera parameters are acquired, target depth information of the image to be processed is calculated, and a target depth image of the image to be processed is constructed from the target depth information. Finally, the target depth image is transformed among the image coordinate system, the camera coordinate system and the world coordinate system to construct a three-dimensional image of the object to be recognized, and the object to be recognized is modeled according to its three-dimensional image and its two-dimensional image to obtain a model corresponding to the object to be recognized.
Optionally, the determining the distance between the object to be recognized and the electronic device according to the initial depth information of the image to be processed includes: acquiring initial depth information corresponding to a central area of the image to be processed according to the initial depth information of the image to be processed; and determining the distance between the object to be recognized and the electronic device according to the initial depth information corresponding to the central area of the image to be processed.
The initial depth information of the image to be processed comprises initial depth information of a central area and initial depth information of a non-central area; the central area can be set according to actual requirements and is not limited in the present application. After the central area of the image to be processed is determined, the initial depth information corresponding to it can be extracted, and from that information the depth data corresponding to the central area and the number of such depth data can be obtained. The distance between the object to be recognized and the electronic device is then calculated from the depth data corresponding to the central area, the number of the depth data, and formula (1).
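Since the patent leaves the central area unspecified, the sketch below assumes, purely for illustration, a centered window covering half of each image dimension, and then applies formula (1) to the valid depth values inside it:

```python
import numpy as np

def central_area_distance(depth, frac=0.5):
    """Crop a centered window covering `frac` of each dimension and apply
    formula (1) (mean of valid depth values) to it.

    The 50% window size is an illustrative assumption; the patent only
    says the central area is set according to actual requirements.
    """
    h, w = depth.shape
    dh, dw = int(h * frac / 2), int(w * frac / 2)
    centre = depth[h // 2 - dh: h // 2 + dh, w // 2 - dw: w // 2 + dw]
    valid = centre[centre > 0]          # 0 marks invalid depth pixels
    return float(valid.sum() / valid.size)
```

On a 4x4 depth map whose central 2x2 block reads 0.8 m and whose border reads 2.0 m, this returns 0.8 m, reflecting only the central area.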
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 6, the apparatus may include:
the judging module 601 is configured to judge whether a to-be-processed image contains a face region, where the to-be-processed image is an image containing an object to be identified;
a first determining module 602, configured to, if the image to be processed includes a face region, obtain initial depth information of the face region according to the face region and an initial reference image, and obtain a two-dimensional image of the face region;
a second determining module 603, configured to determine a first target reference image according to the initial depth information of the face region;
a calculating module 604, configured to calculate target depth information of the face region according to the first target reference image and the face region, so as to obtain a three-dimensional image of the face region;
the recognition module 605 is configured to recognize the object to be recognized based on the three-dimensional image of the face region and the two-dimensional image of the face region.
Optionally, the first determining module 602 is specifically configured to determine whether the face area is trusted according to a face frame corresponding to the face area and a preset threshold; and if the face area is credible, obtaining the initial depth information of the face area according to the face area and the initial reference image.
Optionally, the first determining module 602 is further specifically configured to, if the face region is not trusted, obtain initial depth information of the image to be processed according to a selectable region on the image to be processed and the initial reference image, where the selectable region includes the face region and a region outside the face region.
Optionally, the second determining module 603 is specifically configured to determine, according to the initial depth information of the face region, a distance between a face part in the object to be recognized and the electronic device; and determining a first target reference image according to the distance between the human face part in the object to be recognized and the electronic equipment and the distance parameters corresponding to the reference images.
Optionally, the second determining module 603 is further specifically configured to obtain depth data corresponding to the face region and the number of the depth data according to the initial depth information of the face region; and determining the distance between the face part in the object to be recognized and the electronic equipment according to the depth data corresponding to the face area and the number of the depth data.
Optionally, the first determining module 602 is further configured to, if the image to be processed does not include a face region, obtain initial depth information of the image to be processed according to a region on the image to be processed and an initial reference image, and obtain a two-dimensional image of the object to be recognized;
the second determining module 603 is further configured to determine a distance between the object to be recognized and the electronic device according to the initial depth information of the image to be processed; obtaining a second target reference image according to the distance between the object to be identified and the electronic equipment and the distance parameters corresponding to the reference images;
the calculating module 604 is further configured to calculate target depth information of the image to be processed according to the second target reference image and the image to be processed, so as to obtain a three-dimensional image of the object to be identified;
the identification module 605 is further configured to identify the object to be identified based on the three-dimensional image of the object to be identified and the two-dimensional image of the object to be identified.
Optionally, the second determining module 603 is further specifically configured to obtain initial depth information corresponding to a central area on the image to be processed according to the initial depth information of the image to be processed; and determine the distance between the object to be identified and the electronic device according to the initial depth information corresponding to the central area on the image to be processed.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 7 is another schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device may be a mobile phone, a tablet computer, a wearable device, and the like. The electronic device may include: a processor 701, a storage medium 702 and a bus 703, wherein the storage medium 702 stores machine-readable instructions executable by the processor 701. When the electronic device runs, the processor 701 communicates with the storage medium 702 through the bus 703, and the processor 701 executes the machine-readable instructions to perform the steps of the image processing method. The specific implementation and technical effects are similar and are not described herein again.
Optionally, the present application further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the image processing method are executed.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. Alternatively, the indirect coupling or communication connection of devices or units may be electrical, mechanical or other.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to perform some steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. An image processing method, applied to an electronic device, the method comprising:
judging whether an image to be processed contains a face region or not, wherein the image to be processed is an image containing an object to be identified;
if the image to be processed contains a face region, obtaining initial depth information of the face region according to the face region and an initial reference image, and obtaining a two-dimensional image of the face region, wherein the initial reference image is one of a plurality of reference images, the plurality of reference images are stored in the electronic equipment in advance, each reference image corresponds to a different distance parameter, and the distance parameter is used for representing the distance between an object contained in the reference image and the electronic equipment;
determining a first target reference image according to the initial depth information of the face region;
calculating target depth information of the face region according to the first target reference image and the face region to obtain a three-dimensional image of the face region;
and identifying the object to be identified based on the three-dimensional image of the face area and the two-dimensional image of the face area.
2. The method of claim 1, wherein obtaining initial depth information of the face region according to the face region and an initial reference image comprises:
judging whether the face area is credible or not according to a face frame corresponding to the face area and a preset threshold;
and if the face area is credible, obtaining initial depth information of the face area according to the face area and an initial reference image.
3. The method of claim 2, further comprising:
if the face area is not credible, obtaining initial depth information of the image to be processed according to a selectable area on the image to be processed and the initial reference image, wherein the selectable area comprises the face area and an area outside the face area.
4. The method according to any one of claims 1 to 3, wherein the determining a first target reference image according to the initial depth information of the face region comprises:
determining the distance between the face part in the object to be recognized and the electronic equipment according to the initial depth information of the face region;
and determining a first target reference image according to the distance between the human face part in the object to be recognized and the electronic equipment and the distance parameters corresponding to the reference images.
5. The method according to claim 4, wherein the determining the distance between the face part in the object to be recognized and the electronic device according to the initial depth information of the face region comprises:
acquiring depth data corresponding to the face area and the number of the depth data according to the initial depth information of the face area;
and determining the distance between the face part in the object to be recognized and the electronic equipment according to the depth data corresponding to the face area and the number of the depth data.
6. The method according to any one of claims 1-3, further comprising:
if the image to be processed does not contain the face region, obtaining initial depth information of the image to be processed according to a region on the image to be processed and the initial reference image, and obtaining a two-dimensional image of the object to be recognized;
determining the distance between the object to be identified and the electronic equipment according to the initial depth information of the image to be processed;
obtaining a second target reference image according to the distance between the object to be identified and the electronic equipment and the distance parameters corresponding to the reference images;
calculating target depth information of the image to be processed according to the second target reference image and the image to be processed to obtain a three-dimensional image of the object to be recognized;
and identifying the object to be identified based on the three-dimensional image of the object to be identified and the two-dimensional image of the object to be identified.
7. The method of claim 6, wherein determining the distance between the object to be identified and the electronic device according to the initial depth information of the image to be processed comprises:
acquiring initial depth information corresponding to a central area on the image to be processed according to the initial depth information of the image to be processed;
and determining the distance between the object to be identified and the electronic equipment according to the initial depth information corresponding to the central area on the image to be processed.
8. An image processing apparatus, applied to an electronic device, comprising:
the judging module is used for judging whether the image to be processed contains a face area or not, wherein the image to be processed is an image containing an object to be identified;
a first determining module, configured to, if the image to be processed includes a face region, obtain initial depth information of the face region according to the face region and an initial reference image, and obtain a two-dimensional image of the face region, where the initial reference image is one of a plurality of reference images, the plurality of reference images are stored in the electronic device in advance, each reference image corresponds to a different distance parameter, and the distance parameter is used to represent the distance between an object included in the reference image and the electronic device;
the second determining module is used for determining a first target reference image according to the initial depth information of the face region;
the calculation module is used for calculating target depth information of the face area according to the first target reference image and the face area so as to obtain a three-dimensional image of the face area;
and the recognition module is used for recognizing the object to be recognized based on the three-dimensional image of the face area and the two-dimensional image of the face area.
9. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the image processing method according to any one of claims 1 to 7.
10. A storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
CN202110157197.XA 2021-02-03 2021-02-03 Image processing method, device, equipment and storage medium Pending CN112818874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110157197.XA CN112818874A (en) 2021-02-03 2021-02-03 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110157197.XA CN112818874A (en) 2021-02-03 2021-02-03 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112818874A true CN112818874A (en) 2021-05-18

Family

ID=75861512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110157197.XA Pending CN112818874A (en) 2021-02-03 2021-02-03 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112818874A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538315A (en) * 2021-08-20 2021-10-22 支付宝(杭州)信息技术有限公司 Image processing method and device
CN115273288A (en) * 2022-08-03 2022-11-01 深圳市杉川机器人有限公司 Unlocking method based on face recognition, intelligent door lock and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651941A (en) * 2016-09-19 2017-05-10 深圳奥比中光科技有限公司 Depth information acquisition method and depth measuring system
CN106875435A (en) * 2016-12-14 2017-06-20 深圳奥比中光科技有限公司 Obtain the method and system of depth image
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109190539A (en) * 2018-08-24 2019-01-11 阿里巴巴集团控股有限公司 Face identification method and device
CN111882596A (en) * 2020-03-27 2020-11-03 浙江水晶光电科技股份有限公司 Structured light module three-dimensional imaging method and device, electronic equipment and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538315A (en) * 2021-08-20 2021-10-22 支付宝(杭州)信息技术有限公司 Image processing method and device
CN113538315B (en) * 2021-08-20 2024-02-02 支付宝(杭州)信息技术有限公司 Image processing method and device
CN115273288A (en) * 2022-08-03 2022-11-01 深圳市杉川机器人有限公司 Unlocking method based on face recognition, intelligent door lock and readable storage medium

Similar Documents

Publication Publication Date Title
KR102324706B1 (en) Face recognition unlock method and device, device, medium
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
WO2018166525A1 (en) Human face anti-counterfeit detection method and system, electronic device, program and medium
CN110569731B (en) Face recognition method and device and electronic equipment
JP6587435B2 (en) Image processing apparatus, information processing method, and program
US20060056664A1 (en) Security system
CN110728196B (en) Face recognition method and device and terminal equipment
CN112487921B (en) Face image preprocessing method and system for living body detection
CN113205057B (en) Face living body detection method, device, equipment and storage medium
CN110503760B (en) Access control method and access control system
CN108171138B (en) Biological characteristic information acquisition method and device
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
CN111382592B (en) Living body detection method and apparatus
CN110866466A (en) Face recognition method, face recognition device, storage medium and server
JP2009245338A (en) Face image collating apparatus
CN110008943B (en) Image processing method and device, computing equipment and storage medium
CN112818874A (en) Image processing method, device, equipment and storage medium
CN108875549B (en) Image recognition method, device, system and computer storage medium
CN108628442B (en) Information prompting method and device and electronic equipment
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
CN109816543B (en) Image searching method and device
CN111368581A (en) Face recognition method based on TOF camera module, face recognition device and electronic equipment
CN112699811A (en) Living body detection method, apparatus, device, storage medium, and program product
CN106406507B (en) Image processing method and electronic device
CN116958795A (en) Method and device for identifying flip image, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination