CN113837966A - Filtering method and device for face depth map, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113837966A
CN113837966A (application CN202111131981.XA)
Authority
CN
China
Prior art keywords
face
filtering
depth map
pixel value
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111131981.XA
Other languages
Chinese (zh)
Inventor
薛远
陈智超
吴坚
户磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilusense Technology Co Ltd
Hefei Dilusense Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dilusense Technology Co Ltd, Hefei Dilusense Technology Co Ltd filed Critical Beijing Dilusense Technology Co Ltd
Priority to CN202111131981.XA priority Critical patent/CN113837966A/en
Publication of CN113837966A publication Critical patent/CN113837966A/en
Pending legal-status Critical Current

Classifications

    • G Physics → G06 Computing; Calculating or Counting → G06T Image data processing or generation, in general
    • G06T 5/00 Image enhancement or restoration → G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis → G06T 7/10 Segmentation; Edge detection → G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement → G06T 2207/10 Image acquisition modality → G06T 2207/10024 Color image
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement → G06T 2207/10 Image acquisition modality → G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement → G06T 2207/30 Subject of image; Context of image processing → G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention relates to the field of image processing and discloses a filtering method and apparatus for a face depth map, an electronic device, and a storage medium. The filtering method of the face depth map comprises the following steps: identifying, from a plurality of preset pixel value ranges, the pixel value range to which the pixel value of each pixel point in the target face depth map belongs; obtaining, according to a preset one-to-one correspondence between the pixel value ranges and filtering parameters, the filtering parameter corresponding to the pixel value range to which the pixel value of each pixel point belongs, as the filtering parameter for that pixel point; and filtering the target face depth map according to the filtering parameters corresponding to the pixel points. Applied to the filtering of face depth maps, the method improves the filtering effect while ensuring image quality.

Description

Filtering method and device for face depth map, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a filtering method and device for a face depth map, electronic equipment and a storage medium.
Background
A depth map is an image whose pixel values are the distances (depths) from the image collector to points in the scene; it directly reflects the geometry of the visible surfaces of the scene. At present, depth images are acquired in two main ways. One is passive range sensing, of which the most common method is binocular stereo vision. The other is active range sensing, in which the device must emit energy to acquire the depth information; common active methods include ToF (Time-of-Flight) cameras, structured light, and laser scanning. Either way, the resulting depth map contains errors or noise, some due to device measurement error and some due to algorithm accuracy.
Although filtering algorithms are currently applied to the depth-map error problem, traditional filtering algorithms either have an insignificant filtering effect on the planar areas of the depth map or cannot retain the detail information of strong-feature or boundary areas.
Disclosure of Invention
The embodiment of the invention aims to provide a filtering method and device for a face depth map, an electronic device and a storage medium, wherein different filtering parameters are adopted for different pixel value areas in the depth map, so that the filtering effect is improved while the image quality is ensured.
In order to solve the above technical problem, an embodiment of the present invention provides a filtering method for a face depth map, including: identifying, from a plurality of preset pixel value ranges, the pixel value range to which the pixel value of each pixel point in the target face depth map belongs; obtaining, according to a preset one-to-one correspondence between the pixel value ranges and filtering parameters, the filtering parameter corresponding to the pixel value range to which the pixel value of each pixel point belongs, as the filtering parameter corresponding to that pixel point; and filtering the target face depth map according to the filtering parameters corresponding to the pixel points.
The embodiment of the invention also provides a filtering device of the face depth map, which comprises:
the acquisition module is used for identifying a pixel value range to which a pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges;
the processing module is used for acquiring a filtering parameter corresponding to a pixel value range to which a pixel value of each pixel point belongs according to a preset one-to-one correspondence relationship between a plurality of pixel value ranges and a plurality of filtering parameters, and the filtering parameter is used as a filtering parameter corresponding to each pixel point; and carrying out filtering processing on the target face depth map according to the filtering parameters corresponding to the pixel points.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method of filtering a face depth map as mentioned in the above embodiments.
The embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the filtering method for the face depth map mentioned in the above embodiment.
The filtering method for the face depth map provided by the embodiment of the invention identifies, from a plurality of preset pixel value ranges, the pixel value range to which the pixel value of each pixel point in the target face depth map belongs. The pixel value ranges can reflect different areas of a face, and different areas of the face correspond to different filtering coefficients. The target face depth map is filtered according to the one-to-one correspondence between the preset pixel value ranges and the filtering parameters, so the parts where face feature points are prominent and the parts where they are not can be filtered differently. Because the filter coefficient is determined directly from the pixel value, image quality is ensured while the filtering effect is improved.
In addition, the filtering method for a face depth map according to the embodiment of the present invention further includes, before obtaining a filtering parameter corresponding to a pixel value range to which a pixel value of each pixel point belongs according to a preset one-to-one correspondence relationship between a plurality of pixel value ranges and a plurality of filtering parameters: acquiring a sample face depth map, and respectively carrying out filtering processing on the sample face depth map by using a plurality of preset filtering parameters to acquire filtered sample face depth maps corresponding to the plurality of filtering parameters; segmenting each filtered sample face depth map according to face parts to obtain a plurality of face subregions, and determining pixel values of pixel points in each face subregion; establishing a corresponding relation between pixel values of pixels before filtering and pixel values of pixels after filtering according to pixel values of pixels in each face subregion and pixel values of pixels in a prestored sample face depth map; performing quality evaluation on the face subregions to obtain quality evaluation results corresponding to the face subregions, and selecting the face subregions with the quality evaluation results meeting preset conditions from a plurality of face subregions corresponding to each face part; wherein, the plurality of face subregions corresponding to each face part respectively belong to the plurality of filtered sample face depth maps; and establishing a corresponding relation between the pixel value range of the pixel points in the selected face subarea before filtering and the filtering parameters corresponding to the selected face subarea. The corresponding relation between the face pixel value range and the filtering parameters can be rapidly and accurately determined according to the quality evaluation result of each face subregion.
In addition, in the filtering method for a face depth map according to the embodiment of the present invention, performing quality evaluation on each face subregion to obtain a quality evaluation result corresponding to each face subregion, and selecting, from the plurality of face subregions corresponding to each face part, the face subregion whose quality evaluation result satisfies a preset condition, includes: performing quality evaluation on each face subregion according to the evaluation indexes corresponding to its face part to obtain the quality score of each face subregion, wherein the quality evaluation result is the quality score; and selecting, from the plurality of face subregions corresponding to each face part, the face subregion with the highest quality score as the face subregion whose quality evaluation result satisfies the preset condition. Different evaluation indexes are selected for quality evaluation according to the face features corresponding to the sub-regions, so that the obtained filtering parameters achieve a good filtering effect while ensuring that the detail information of the face is not lost.
In addition, in the filtering method for a face depth map according to the embodiment of the present invention, segmenting each filtered sample face depth map by face part to obtain a plurality of face subregions includes: acquiring the face color image corresponding to the filtered sample face depth map; detecting the face parts in the face color image to obtain the key point positions of the face color image; mapping the key point positions of the face color image onto the filtered sample face depth map to obtain the key point positions of the filtered sample face depth map; and segmenting the face region of the filtered sample face depth map according to those key point positions to obtain a plurality of face subregions. The key point positions of the face are easily determined from the face color image, and because the face color image and the face depth map share the same camera viewpoint, the pixel positions in the two images correspond one to one, so the key point positions of the face depth map can be determined accurately.
In addition, the filtering method for a face depth map according to the embodiment of the present invention, after performing filtering processing on the target face depth map according to the filtering parameter corresponding to each pixel point, further includes: classifying pixel points which are filtered by the same filtering parameter into the same face subregion; acquiring depth characteristic information of each face subregion; and for each face subregion, when the depth feature information is not matched with a preset depth feature condition, adjusting a filtering parameter corresponding to the face subregion. After the target face depth map is subjected to filtering processing, whether filtering parameters need to be adjusted is determined according to the corresponding relation between the depth information of each filtering sub-region and a preset depth threshold value, namely after the target face depth map is filtered, the filtering parameters are optimized and fed back according to the filtering result so as to further accurately obtain the corresponding relation between the filtering parameters and the pixel value range.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which elements with the same reference numerals represent similar elements; the figures are not drawn to scale unless otherwise specified.
Fig. 1 is a first flowchart of a filtering method for a face depth map according to an embodiment of the present invention;
fig. 2 is a second flowchart of a filtering method for a face depth map according to an embodiment of the present invention;
fig. 3 is a flowchart three of a filtering method for a face depth map according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a filtering apparatus for providing a face depth map according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention more apparent, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to provide a better understanding of the present application; however, the technical solution claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The following describes the implementation details of the filtering method for a face depth map according to the present embodiment. These details are provided to facilitate understanding and are not required to practice the present solution.
The embodiment of the invention relates to a filtering method of a face depth map, as shown in fig. 1, comprising the following steps:
step 101, identifying a pixel value range to which a pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges.
Specifically, the depth map is an image in which the distance (depth) from the image collector to each point in the scene is taken as the pixel value; it directly reflects the geometric shape of the visible surface of the scene, so those skilled in the art will understand that the magnitude and distribution of pixel values in a given area of the face depth map can reflect different areas of the face. For example, in the forehead area the pixel value is generally larger than in the nose area, and the differences in pixel values between pixel points within the forehead area are small; in the nose area, the pixel value of the nose bridge is small, the pixel value of the nose wings is large, and the pixel value differences between pixel points are large. For planar regions such as the forehead, cheek, and chin, a stronger filtering effect is generally desired without regard to detail features, because these regions contribute little to techniques such as face recognition and face feature detection. For regions with obvious features, such as the nose, eyebrow, and lip regions, the detail features should be well preserved after filtering.
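The range lookup described in steps 101 and 102 can be sketched as follows. The concrete pixel value ranges (here treated as depths in millimetres) and the per-range filter parameters are hypothetical; in the patent they are learned from sample face depth maps:

```python
import numpy as np

# Hypothetical preset pixel value ranges and their filter parameters.
PIXEL_VALUE_RANGES = [(0, 400), (400, 450), (450, 2**16)]
FILTER_PARAMS = [0.5, 1.5, 3.0]  # one parameter per range

def range_index(pixel_value):
    """Step 101: find which preset range a pixel value belongs to."""
    for i, (lo, hi) in enumerate(PIXEL_VALUE_RANGES):
        if lo <= pixel_value < hi:
            return i
    raise ValueError("pixel value outside all preset ranges")

def param_map(depth):
    """Step 102: per-pixel filter parameter via the one-to-one mapping."""
    idx = np.zeros(depth.shape, dtype=int)
    for i, (lo, hi) in enumerate(PIXEL_VALUE_RANGES):
        idx[(depth >= lo) & (depth < hi)] = i
    return np.take(FILTER_PARAMS, idx)
```

`param_map` returns an array the same shape as the depth map, so the later region-wise filtering can group pixels by parameter value.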
Step 102, obtaining a filtering parameter corresponding to a pixel value range to which a pixel value of each pixel point belongs according to a one-to-one correspondence relationship between a plurality of preset pixel value ranges and a plurality of filtering parameters, and using the filtering parameter as a filtering parameter corresponding to each pixel point.
Specifically, the preset pixel value range may reflect different regions of the human face, and the different regions of the human face correspond to different filter coefficients, that is, different filtering processes are performed on the human face with prominent features and the human face with unobtrusive features, so as to better balance the relationship between the filtering effect and the feature retention.
And 103, filtering the target face depth map according to the filtering parameters corresponding to the pixel points.
Specifically, after the filtering parameters corresponding to the pixel value ranges of the pixels in the face depth map are determined, the positions of the pixels that use the same filtering parameter can be obtained, and the face depth map can be segmented according to those positions; pixels belonging to the same region share the same filtering parameter, and filtering is performed region by region to improve processing efficiency. For example, suppose a face depth map contains 1000 pixels (S1, S2, …, S1000), and according to the one-to-one correspondence between the preset pixel value ranges and the filtering parameters it is determined that pixels S1-S547 use the same filter parameter K1. If 365 of those 547 pixels, say S1-S365, form a contiguous area according to their positions, the filter parameter K1 is applied to that whole area at once; the other pixels are processed similarly.
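The region-wise filtering of step 103 can be sketched as follows. Using a naive mean filter and treating the filtering parameter as a kernel size are illustrative assumptions; the patent leaves the filter family open:

```python
import numpy as np

def mean_filter(img, k):
    """Naive k x k mean filter with edge replication (illustrative only)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def filter_by_regions(depth, kernel_map):
    """Step 103: pixels sharing a filter parameter are filtered as one region."""
    result = np.empty(depth.shape, dtype=float)
    for k in np.unique(kernel_map):
        filtered = mean_filter(depth, int(k))  # filter once per parameter
        mask = kernel_map == k
        result[mask] = filtered[mask]          # keep only that region's pixels
    return result
```

Filtering once per distinct parameter and masking out the relevant region mirrors the efficiency argument in the paragraph above.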
The filtering method for the face depth map provided by the embodiment of the invention identifies the pixel value range to which the pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges, wherein the pixel value ranges can reflect different areas of a face, the different areas of the face correspond to different filtering coefficients, the target face depth map is filtered according to the one-to-one correspondence relationship between the plurality of preset pixel value ranges and a plurality of filtering parameters, and the parts with prominent face characteristic points and the parts with unobtrusive face characteristic points can be filtered differently; the filter coefficient is directly determined according to the pixel value, so that the image quality is ensured, and the filter effect is improved.
The embodiment of the invention relates to a filtering method of a face depth map, as shown in fig. 2, comprising the following steps:
step 201, obtaining a sample face depth map, and performing filtering processing on the sample face depth map by using a plurality of preset filtering parameters, so as to obtain filtered sample face depth maps corresponding to the plurality of filtering parameters respectively.
Specifically, the acquired sample face depth map needs to be a frontal face image in which the positions of the five sense organs can be distinguished; a profile image or an image shot from below shows only part of the face, and the correspondence between pixel values and filter parameters cannot be obtained from such images. Assuming that 10 filter parameters (K1, K2, …, K10) are preset, the sample face depth map A0 is filtered with each of them, yielding 10 filtered sample face depth maps (A1, A2, …, A10). The filtering in this embodiment may use any of several existing image filtering methods, such as median filtering, mean filtering, Gaussian filtering, guided filtering, and bilateral filtering; mean filtering smooths away edge information along with the noise, while median filtering is particularly effective at removing salt-and-pepper noise and patch noise. Of course, this is merely an example, and other image filtering methods may be used in practice.
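One way to generate the candidate filtered maps A1…An of step 201 is sketched below, assuming the filtering parameter is the window size of a median filter (the patent allows median, mean, Gaussian, guided, or bilateral filtering; three window sizes stand in for the ten parameters of the example):

```python
import numpy as np

def median_filter(img, k):
    """Naive k x k median filter with edge replication (illustrative)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    h, w = img.shape
    # Stack every shifted window so the median can be taken per pixel.
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(k) for dx in range(k)])
    return np.median(windows, axis=0)

# Hypothetical candidate parameters K1..K3: median window sizes.
CANDIDATE_PARAMS = [3, 5, 7]

def candidate_maps(sample_depth):
    """Step 201: one filtered copy of the sample map per candidate parameter."""
    return {k: median_filter(sample_depth, k) for k in CANDIDATE_PARAMS}
```

Each entry of the returned dictionary plays the role of one filtered sample face depth map Ai, keyed by its parameter Ki.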
Step 202, segmenting each filtered sample face depth map according to the face parts, obtaining a plurality of face sub-regions, and determining pixel values of pixel points in each face sub-region.
Specifically, segmenting each filtered sample face depth map according to the face part, and acquiring a plurality of face sub-regions specifically includes: acquiring a face color image corresponding to the filtered sample face depth image; detecting a face part in the face color image to acquire a key point position of the face color image; the key point position of the face color image is corresponded to the filtered sample face depth image, and the key point position of the filtered sample face depth image is obtained; and segmenting the face region of the filtered sample face depth map according to the key point positions of the filtered sample face depth map to obtain a plurality of face subregions.
It should be noted that, for the same face, the face color image and the face depth map are acquired at the same position. Because the two images share the same camera viewpoint, the same pixel position refers to the same point in both images, and the positions of pixel points generally do not change before and after filtering. Therefore, the key point positions can be obtained from the face color image corresponding to the filtered sample face depth map; these positions are the same as the key point positions of the filtered sample face depth map, and the face region of the sample face depth map is then segmented according to them to obtain a plurality of face subregions.
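Under the shared-viewpoint assumption above, a landmark at (row, col) in the color image maps to the same (row, col) in the depth map, so transferring landmarks and cropping sub-regions is direct. The bounding-box crop below is a simplification of the patent's segmentation, and the landmark grouping is hypothetical:

```python
import numpy as np

def subregions_from_landmarks(depth, landmark_groups):
    """Crop one sub-region of the depth map per named landmark group.

    landmark_groups maps a face-part name (e.g. 'nose') to (row, col)
    landmarks detected in the co-registered color image; the same
    coordinates index the depth map because the viewpoints coincide.
    """
    regions = {}
    for name, pts in landmark_groups.items():
        rows = [p[0] for p in pts]
        cols = [p[1] for p in pts]
        # Tight bounding box around the group's landmarks.
        regions[name] = depth[min(rows):max(rows) + 1,
                              min(cols):max(cols) + 1]
    return regions
```

In practice the landmarks would come from one of the detectors named below (ASM, AAM, CPR, or a deep-learning model); here they are supplied directly.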
In addition, a key point is a point that can determine the position of one of the five sense organs of the face. The key point positions of the face color image may be obtained with any of several face key point detection algorithms, such as the model-based ASM (Active Shape Model) algorithm, the model-based AAM (Active Appearance Model) algorithm, the cascade-shape-regression-based CPR (Cascaded Pose Regression) algorithm, and deep-learning-based methods. Of course, instead of obtaining the face key point positions through the color image, the key point positions can also be obtained by processing the face depth map directly, generally with the NARF (Normal Aligned Radial Feature) key point extraction algorithm, the SIFT (Scale-Invariant Feature Transform) key point extraction algorithm, the Harris key point extraction algorithm, and the like.
In addition, each filtered sample face depth map is divided by face part to obtain a plurality of face subregions; the same face part therefore has a plurality of face subregions, and these belong to different filtered sample face depth maps. For example, for the 10 filtered sample face depth maps A1, A2, …, A10 with corresponding filter parameters K1, K2, …, K10, each map is divided by face part into 5 face sub-regions (forehead region y1, eyebrow region y2, nose region y3, cheek region y4, and lip region y5); the nose part then has 10 face subregions (A1-y3, A2-y3, A3-y3, A4-y3, A5-y3, A6-y3, A7-y3, A8-y3, A9-y3, and A10-y3).
Step 203, establishing a corresponding relation between the pixel values of the pixels before filtering and the pixel values of the pixels after filtering according to the pixel values of the pixels in the sub-regions of the face and the pixel values of the pixels in the pre-stored sample face depth map.
Specifically, the pixel values of the depth map change before and after filtering, but the positions of the pixel points do not, so for the same pixel point the correspondence between its pre-filter and post-filter pixel values can be established quickly from its position coordinates. For example, the sample face depth map A0 is filtered with 10 different filtering parameters K1, K2, …, K10 to obtain 10 filtered sample face depth maps A1, A2, …, A10; a correspondence can then be established between the pixel value of each pixel point in A0 and the pixel value of the same pixel point in A1, and similarly between A0 and A2, A0 and A3, and so on.
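The coordinate-based pre/post-filter correspondence of step 203 can be sketched as a simple lookup keyed by pixel position:

```python
import numpy as np

def pixel_correspondence(before, after):
    """Step 203: pair each coordinate's pre- and post-filter pixel value.

    Both maps must have the same shape, since filtering changes values
    but not positions.
    """
    assert before.shape == after.shape
    return {(r, c): (before[r, c], after[r, c])
            for r in range(before.shape[0])
            for c in range(before.shape[1])}
```

Repeating this for each filtered map Ai against A0 yields the full set of pre/post correspondences the later range selection relies on.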
Step 204, performing quality evaluation on the face subregions to obtain quality evaluation results corresponding to the face subregions, and selecting the face subregions with the quality evaluation results meeting preset conditions from a plurality of face subregions corresponding to each face part; and the plurality of face subregions corresponding to each face part respectively belong to the plurality of different filtered sample face depth maps.
Specifically, step 204 specifically includes: according to the evaluation indexes corresponding to the face parts of the face subregions, performing quality evaluation on the face subregions to obtain the quality scores of the face subregions; wherein the quality evaluation result is a quality score; and selecting the face subarea with the highest quality score from a plurality of face subareas corresponding to each face part as the face subarea with the quality evaluation result meeting the preset condition.
It should be noted that, when a given filtering parameter is used to filter a face depth map, different face subregions have different requirements on the filtering effect. After filtering, face parts such as the nose, eyebrows, and lips should retain more texture and boundary information while also meeting a certain sharpness requirement, that is, low blurriness; for face parts such as the forehead and cheeks, less texture and boundary information needs to be retained, and even slight blurriness after filtering is acceptable. Of course, besides texture information, boundary information, blurriness, and sharpness, other evaluation indexes and requirements may be selected according to the subsequent operations to be performed (face recognition, face key point detection). In addition, for different face areas, the evaluation indexes may be given corresponding weight coefficients.
For example: assume a sample face depth map A0 is filtered with filtering parameters K1 and K2 to obtain filtered images A1 and A2, and A1 and A2 are segmented by face part (forehead region y1, eyebrow region y2, nose region y3, cheek region y4, and lip region y5) into face sub-regions, denoted A1(A1-y1, A1-y2, A1-y3, A1-y4, A1-y5) and A2(A2-y1, A2-y2, A2-y3, A2-y4, A2-y5). Quality scores for the nose region y3 are obtained according to the corresponding evaluation indexes; if the quality score of A1-y3 is higher than that of A2-y3, the filtering parameter K1 used for A1 is better suited to the nose region. Therefore, a correspondence is established between the pre-filter pixel value range of the pixel points in the nose region y3 and the filtering parameter, that is, A0-y3 → A1-y3 → K1.
In addition, if the highest quality score among the plurality of face subregions corresponding to a certain face part is found to be lower than a preset score threshold, none of the previously selected filtering parameters is suitable for that face part; the filtering parameters are replaced, and filtering and quality evaluation are performed again.
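The selection logic of step 204, including the fall-back when even the best score is below the threshold, might look like the sketch below. The evaluation indexes (edge retention, sharpness) and their weights are hypothetical stand-ins for whatever indexes a face part is assigned:

```python
def quality_score(metrics, weights):
    """Weighted sum of per-index scores for one face sub-region."""
    return sum(weights[k] * metrics[k] for k in weights)

def select_best(candidates, weights, threshold=0.0):
    """Pick the filter parameter whose sub-region scores highest.

    candidates maps a filter parameter name to its sub-region's metric
    dict. Returns None when the best score is below the threshold,
    signalling that new candidate parameters must be tried.
    """
    best_param, best_score = None, float('-inf')
    for param, metrics in candidates.items():
        s = quality_score(metrics, weights)
        if s > best_score:
            best_param, best_score = param, s
    return best_param if best_score >= threshold else None
```

Running this once per face part, each with its own weights, yields the per-part winning parameter whose pre-filter pixel value range is then recorded in step 205.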
Step 205, establishing a corresponding relationship between the pixel value range of the pixel points in the selected face sub-region before filtering and the filtering parameters corresponding to the selected face sub-region.
And step 206, identifying a pixel value range to which the pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges.
Step 207, according to the one-to-one correspondence relationship between the preset multiple pixel value ranges and the multiple filtering parameters, obtaining the filtering parameter corresponding to the pixel value range to which the pixel value of each pixel belongs, as the filtering parameter corresponding to each pixel point.
And 208, filtering the target face depth map according to the filtering parameters corresponding to the pixel points.
The implementation details of step 206, step 207, and step 208 in this embodiment are substantially the same as those of step 101, step 102, and step 103, and are not described here again.
With the filtering method for the face depth map provided by this embodiment, a plurality of face subregions are obtained by filtering the sample face depth map with different filtering parameters and segmenting the filtered images by face part, the key point positions of the face being easily determined from the face color image. Different evaluation indexes are then selected for quality evaluation according to the face features corresponding to the sub-regions, so that the obtained filtering parameters achieve a good filtering effect while ensuring that the detail information of the face is not lost.
The embodiment of the invention relates to a filtering method of a face depth map, as shown in fig. 3, comprising the following steps:
Step 301, identifying a pixel value range to which the pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges.
Step 302, obtaining a filtering parameter corresponding to the pixel value range to which the pixel value of each pixel point belongs according to a one-to-one correspondence relationship between a plurality of preset pixel value ranges and a plurality of filtering parameters, and using the filtering parameter as the filtering parameter corresponding to each pixel point.
Step 303, performing filtering processing on the target face depth map according to the filtering parameters corresponding to the pixel points.
The specific implementation details of steps 301, 302, and 303 in this embodiment are substantially the same as those of steps 101, 102, and 103, and are not repeated here.
Step 304, classifying the pixel points which are filtered by the same filtering parameter into the same face sub-region.
Specifically, after the target face depth map is filtered according to the filtering parameters corresponding to the pixel points, the pixel points that used the same filtering parameter can be clearly determined, and these pixel points belong to the same face sub-region. It should be noted that if, for example, the forehead region and the cheek region use the same filtering parameter, the pixel points of both regions are attributed to a single face sub-region even though, on the actual face, they lie at different face parts. Therefore, a face sub-region in this step may or may not correspond to an actual face part.
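The grouping of step 304 can be sketched as follows, assuming the filtering stage recorded, for each pixel, the index of the filtering parameter it used; the array name and layout are assumptions for illustration.

```python
import numpy as np

def group_by_filter_param(param_index_map):
    """Group pixel coordinates by the index of the filtering parameter they
    were filtered with; each group forms one face sub-region (step 304)."""
    regions = {}
    for idx in np.unique(param_index_map):
        ys, xs = np.nonzero(param_index_map == idx)  # coords using this parameter
        regions[int(idx)] = list(zip(ys.tolist(), xs.tolist()))
    return regions
```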
Step 305, acquiring depth feature information of each face sub-region.
Specifically, the depth feature information in this step includes the sum of the pixel values of all pixel points in the face sub-region, the average of the pixel values of all pixel points in the face sub-region, and the gradient value of the face sub-region; that is, the filtered depth map is evaluated by means of this depth feature information. Of course, these are only specific examples, and other information may also be included in practical applications.
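The depth feature information described above (sum, mean, and gradient of a sub-region) can be computed, for example, with NumPy as sketched below. Taking the mean gradient magnitude inside the region as the region's "gradient value" is an assumption; the patent does not fix a particular gradient definition.

```python
import numpy as np

def depth_features(depth, mask):
    """Depth feature information of one face sub-region: the sum and mean of
    its pixel values, plus the mean gradient magnitude inside the region."""
    values = depth[mask]                           # pixel values of the region
    gy, gx = np.gradient(depth.astype(np.float64)) # per-axis finite differences
    grad = np.hypot(gx, gy)                        # gradient magnitude per pixel
    return {
        "sum": float(values.sum()),
        "mean": float(values.mean()),
        "gradient": float(grad[mask].mean()),
    }
```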
Step 306, for each face sub-region, when the depth feature information does not match a preset depth feature condition, adjusting the filtering parameter corresponding to the face sub-region.
Specifically, when the acquired depth feature information is the gradient value of the face sub-region, the depth feature condition may be a preset gradient threshold value or a preset gradient value range. That is, if the gradient value of the face sub-region does not satisfy the preset depth feature condition, the filtering effect for that face sub-region does not meet the requirement.
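A minimal sketch of the feedback adjustment of step 306 follows, assuming the depth feature condition is a gradient value range and that an out-of-range gradient is handled by scaling the filter's smoothing strength; the threshold values and the multiplicative update rule are assumptions, not prescribed by this patent.

```python
def adjust_if_needed(params, grad_value, grad_range=(0.5, 5.0), step=1.1):
    """Return possibly-adjusted filtering parameters for one face sub-region.

    params     -- dict of numeric filter parameters (e.g. smoothing sigmas)
    grad_value -- the region's gradient value after filtering
    grad_range -- the preset depth feature condition (assumed to be a range)
    """
    lo, hi = grad_range
    if grad_value > hi:   # region still too noisy: smooth more strongly
        return {k: v * step for k, v in params.items()}
    if grad_value < lo:   # region over-smoothed: relax the filter
        return {k: v / step for k, v in params.items()}
    return params         # gradient within range: keep current parameters
```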
According to the filtering method for a face depth map provided by this embodiment of the invention, after the target face depth map is filtered, whether the filtering parameters need to be adjusted is determined from the relationship between the depth feature information of each face sub-region and the preset depth feature condition. In other words, after the target face depth map is filtered, the filtering parameters are optimized through feedback based on the filtering result, which in turn makes the correspondence between filtering parameters and pixel value ranges more accurate.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into a single step, or a single step may be split into several steps, and all such variants fall within the protection scope of this patent as long as the same logical relationship is preserved. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without altering its core design likewise falls within the protection scope of the patent.
The embodiment of the invention relates to a filtering device of a face depth map, as shown in fig. 4, the device comprises:
the acquiring module 401 is configured to identify a pixel value range to which a pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges;
a processing module 402, configured to obtain, according to a preset one-to-one correspondence relationship between the multiple pixel value ranges and multiple filtering parameters, a filtering parameter corresponding to a pixel value range to which a pixel value of each pixel point belongs, as a filtering parameter corresponding to each pixel point; and carrying out filtering processing on the target face depth map according to the filtering parameters corresponding to the pixel points.
It will be appreciated that this embodiment is an apparatus embodiment corresponding to the method embodiment described above, and that this embodiment can be implemented in cooperation with the above embodiment. The related technical details mentioned in the above embodiments are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the above-described embodiments.
It should be noted that all the modules involved in this embodiment are logic modules. In practical application, a logic unit may be a single physical unit, part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units that are not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
An embodiment of the present invention relates to an electronic device, as shown in fig. 5, including:
at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute the filtering method of the face depth map as mentioned in the above embodiments.
The electronic device includes one or more processors 501 and a memory 502; one processor 501 is taken as an example in fig. 5. The processor 501 and the memory 502 may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 5. The memory 502, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions and modules corresponding to the filtering method of the face depth map in the embodiments of the present application. The processor 501 executes the various functional applications and data processing of the device, that is, implements the above-described filtering method of the face depth map, by running the non-volatile software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 502 may optionally include memory located remotely from processor 501, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502 and when executed by the one or more processors 501 perform the method for filtering the face depth map in any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application, and has the functional modules and beneficial effects corresponding to the method; for technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
Embodiments of the present invention relate to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (9)

1. A filtering method for a face depth map is characterized by comprising the following steps:
identifying a pixel value range to which a pixel value of each pixel point in the target face depth image belongs from a plurality of preset pixel value ranges;
according to a preset one-to-one correspondence relationship between the pixel value ranges and the filtering parameters, obtaining the filtering parameters corresponding to the pixel value range to which the pixel value of each pixel point belongs, and using the filtering parameters as the filtering parameters corresponding to each pixel point;
and carrying out filtering processing on the target face depth map according to the filtering parameters corresponding to the pixel points.
2. The filtering method of the face depth map according to claim 1, wherein before the obtaining of the filtering parameter corresponding to the pixel value range to which the pixel value of each pixel point belongs according to the preset one-to-one correspondence relationship between the plurality of pixel value ranges and the plurality of filtering parameters, the method further comprises:
acquiring a sample face depth map, and respectively carrying out filtering processing on the sample face depth map by using a plurality of preset filtering parameters to acquire filtered sample face depth maps corresponding to the plurality of filtering parameters;
segmenting each filtered sample face depth map according to face parts to obtain a plurality of face subregions, and determining pixel values of pixel points in each face subregion;
establishing a corresponding relation between pixel values of pixels before filtering and pixel values of pixels after filtering according to pixel values of pixels in each face subregion and pixel values of pixels in a prestored sample face depth map;
performing quality evaluation on the face subregions to obtain quality evaluation results corresponding to the face subregions, and selecting the face subregions with the quality evaluation results meeting preset conditions from a plurality of face subregions corresponding to each face part; wherein, the plurality of face subregions corresponding to each face part respectively belong to the plurality of filtered sample face depth maps;
and establishing a corresponding relation between the pixel value range of the pixel points in the selected face subarea before filtering and the filtering parameters corresponding to the selected face subarea.
3. The filtering method of the face depth map according to claim 2, wherein the performing quality evaluation on the face sub-regions to obtain quality evaluation results corresponding to the face sub-regions, and selecting the face sub-regions whose quality evaluation results satisfy preset conditions from a plurality of face sub-regions corresponding to each face part comprises:
according to the evaluation indexes corresponding to the face parts of the face subregions, performing quality evaluation on the face subregions to obtain the quality scores of the face subregions; wherein the quality evaluation result is the quality score;
and selecting the face subarea with the highest quality score from a plurality of face subareas corresponding to each face part as the face subarea with the quality evaluation result meeting the preset condition.
4. The filtering method of the face depth map according to claim 2, wherein the segmenting each of the filtered sample face depth maps according to a face portion to obtain a plurality of face subregions includes:
acquiring a face color image corresponding to the filtered sample face depth image;
detecting a face part in the face color image to obtain the key point position of the face color image;
the key point positions of the face color image are corresponded to a filtered sample face depth image, and the key point positions of the filtered sample face depth image are obtained;
and segmenting the face region of the filtered sample face depth map according to the positions of the key points of the filtered sample face depth map to obtain a plurality of face subregions.
5. The method for filtering a face depth map according to claim 1, wherein after the filtering processing is performed on the target face depth map according to the filtering parameter corresponding to each pixel point, the method further comprises:
classifying pixel points which are filtered by the same filtering parameter into the same face subregion;
acquiring depth characteristic information of each face subregion;
and for each face subregion, when the depth feature information is not matched with a preset depth feature condition, adjusting a filtering parameter corresponding to the face subregion.
6. The filtering method of the face depth map according to claim 3, wherein the evaluation index includes: texture information, boundary information, ambiguity, sharpness.
7. A filtering device for a face depth map is characterized by comprising:
the acquisition module is used for identifying a pixel value range to which a pixel value of each pixel point in the target face depth map belongs from a plurality of preset pixel value ranges;
the processing module is used for acquiring a filtering parameter corresponding to a pixel value range to which a pixel value of each pixel point belongs according to a preset one-to-one correspondence relationship between a plurality of pixel value ranges and a plurality of filtering parameters, and the filtering parameter is used as a filtering parameter corresponding to each pixel point; and carrying out filtering processing on the target face depth map according to the filtering parameters corresponding to the pixel points.
8. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of filtering a face depth map as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the method of filtering a face depth map according to any one of claims 1 to 6.
CN202111131981.XA 2021-09-26 2021-09-26 Filtering method and device for face depth map, electronic equipment and storage medium Pending CN113837966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111131981.XA CN113837966A (en) 2021-09-26 2021-09-26 Filtering method and device for face depth map, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113837966A true CN113837966A (en) 2021-12-24

Family

ID=78970410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111131981.XA Pending CN113837966A (en) 2021-09-26 2021-09-26 Filtering method and device for face depth map, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113837966A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103503451A (en) * 2011-05-06 2014-01-08 西门子公司 Method and device for filtering coded image partitions
CN105469407A (en) * 2015-11-30 2016-04-06 华南理工大学 Facial image layer decomposition method based on improved guide filter
CN106780410A (en) * 2016-12-30 2017-05-31 飞依诺科技(苏州)有限公司 The generation method and device of a kind of harmonic wave scanning image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220513

Address after: 230091 room 611-217, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei, Anhui Province

Applicant after: Hefei lushenshi Technology Co.,Ltd.

Address before: 100083 room 3032, North B, bungalow, building 2, A5 Xueyuan Road, Haidian District, Beijing

Applicant before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD.

Applicant before: Hefei lushenshi Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20211224
