CN109903321A - Image processing method, image processing apparatus and storage medium - Google Patents

Image processing method, image processing apparatus and storage medium

Info

Publication number
CN109903321A
CN109903321A
Authority
CN
China
Prior art keywords
depth map
area
image processing
processed
repaired
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811360744.9A
Other languages
Chinese (zh)
Inventor
王珏
张哲斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Max Way Technology Co Ltd
Original Assignee
Max Way Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Max Way Technology Co Ltd filed Critical Max Way Technology Co Ltd
Publication of CN109903321A
Legal status: Withdrawn (current)

Abstract

An image processing method, an image processing apparatus and a storage medium. The image processing method includes: obtaining a depth map and a target image of the same scene, the scene containing an object to be processed; obtaining a contour of the object to be processed in the scene based on the target image; determining a region to be repaired in the depth map based on the contour of the object to be processed; and optimizing the depth values of the pixels in the region to be repaired of the depth map. The image processing method can optimize the region to be repaired in the depth map and thereby correct quality problems in the depth map.

Description

Image processing method, image processing apparatus and storage medium
Technical field
Embodiments of the present disclosure relate to an image processing method, an image processing apparatus and a storage medium.
Background art
In computer vision systems, three-dimensional (3D) image information opens up more possibilities for computer vision applications such as image segmentation, object detection and object tracking. The depth map, as a common form of 3D image information, has been widely used in computer vision systems. A depth map is similar to a grayscale image, except that the value of each pixel represents the actual distance from the sensor to the object. Usually the depth map and the color image (RGB image) are registered, so there is a one-to-one correspondence between their pixels.
Summary of the invention
At least one embodiment of the present disclosure provides an image processing method, comprising: obtaining a depth map and a target image of the same scene, the scene containing an object to be processed; obtaining a contour of the object to be processed in the scene based on the target image; determining a region to be repaired in the depth map based on the contour of the object to be processed; and optimizing the depth values of the pixels in the region to be repaired of the depth map.
For example, in the image processing method provided by an embodiment of the present disclosure, the region to be repaired in the depth map includes an object contour region, and determining the region to be repaired in the depth map includes: setting a preset threshold; judging whether the distance from a pixel in the depth map to the contour of the object to be processed is less than the preset threshold; and taking, as the object contour region, the region in the depth map formed by the pixels whose distance to the contour of the object to be processed is less than the preset threshold.
For example, the image processing method provided by an embodiment of the present disclosure further includes: determining the region to be repaired in the depth map based on depth information of the depth map.
For example, in the image processing method provided by an embodiment of the present disclosure, the region to be repaired in the depth map includes a depth-information-missing region of the depth map, and determining the region to be repaired in the depth map further includes: judging whether the display gray level of a pixel in the depth map is pure black or pure white; and taking the region formed by the pixels whose display gray level is pure black or pure white as the depth-information-missing region.
For example, in the image processing method provided by an embodiment of the present disclosure, the image gradient of the depth map is set to be consistent with the image gradient of the target image, thereby optimizing the depth values of the pixels in the region to be repaired of the depth map.
For example, in the image processing method provided by an embodiment of the present disclosure, optimizing the depth values of the pixels in the region to be repaired of the depth map includes:
where D denotes the optimized depth values of the pixels in the region to be repaired of the depth map, Ω denotes the region to be repaired in the depth map, ∂Ω denotes the boundary of the region to be repaired in the depth map, D|∂Ω denotes the depth values of the pixels located at the boundary of the region to be repaired in the depth map, L is an N×N affinity matrix, and N, m and l are integers greater than 1.
For example, in the image processing method provided by an embodiment of the present disclosure, the affinity matrix is expressed as:
where L_ij denotes the (i, j)-th element of the affinity matrix L, Σ_k denotes a 3×3 covariance matrix, w_k denotes the window centered on pixel k, μ_k denotes the mean vector of the pixels in the window w_k, I_3 denotes the 3×3 identity matrix, I_i and I_j respectively denote the pixel values of image I at the i-th and j-th pixels, ε denotes a control parameter, δ_ij denotes the Kronecker delta function, and i and j are integers greater than 1.
For example, the image processing method provided by an embodiment of the present disclosure further includes: performing smoothing filtering on the region where the object to be processed is located in the depth map; and performing smoothing filtering on the background region outside the object to be processed in the depth map.
For example, in the image processing method provided by an embodiment of the present disclosure, performing smoothing filtering on the region where the object to be processed is located in the depth map includes: performing edge-preserving smoothing filtering on the region where the object to be processed is located in the target image using a bilateral filtering method, and obtaining a guidance image for the region where the object to be processed is located in the depth map; and performing smoothing filtering on the region where the object to be processed is located in the depth map using a guided filtering method based on the guidance image.
For example, in the image processing method provided by an embodiment of the present disclosure, the expression of the guided filtering method is expressed as:
where q denotes the output image, after the guided filtering, of the region where the object to be processed is located in the depth map, I denotes the guidance image, p denotes the input image, and W_ij(I) denotes the kernel weight parameter of the filter.
For example, the image processing method provided by an embodiment of the present disclosure further includes: fusing together the region where the object to be processed is located in the depth map after the smoothing filtering and the background region outside the object to be processed after the smoothing filtering.
At least one embodiment of the present disclosure also provides an image processing apparatus, comprising: an image acquisition unit configured to obtain a depth map and a target image of the same scene, the scene containing an object to be processed; a contour acquisition unit configured to obtain a contour of the object to be processed in the scene based on the target image; a region-to-be-repaired determination unit configured to determine a region to be repaired in the depth map based on the contour of the object to be processed; and an optimization unit configured to optimize the depth values of the pixels in the region to be repaired of the depth map.
For example, the image processing apparatus provided by an embodiment of the present disclosure further includes: a filtering unit configured to perform smoothing filtering on the region where the object to be processed is located in the depth map and on the background region outside the object to be processed in the depth map; and a fusion unit configured to fuse together the region where the object to be processed is located in the depth map after the smoothing filtering and the background region outside the object to be processed after the smoothing filtering.
At least one embodiment of the present disclosure also provides an image processing apparatus, comprising: a processor; a memory; and one or more computer program modules, where the one or more computer program modules are stored in the memory and configured to be executed by the processor, and the one or more computer program modules include instructions for executing the image processing method provided by any embodiment of the present disclosure.
At least one embodiment of the present disclosure also provides a storage medium that stores computer-readable instructions in a non-transitory manner, where the non-transitory computer-readable instructions, when executed by a computer, can execute the instructions of the image processing method provided by any embodiment of the present disclosure.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
Fig. 1 is a flow chart of an image processing method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of an image processing method provided by an embodiment of the present disclosure;
Fig. 3 is an exemplary flow chart of step S130 shown in Fig. 1;
Fig. 4 is another exemplary flow chart of step S130 shown in Fig. 1;
Fig. 5 is a schematic diagram of optimizing a region to be repaired provided by an embodiment of the present disclosure;
Fig. 6 is a flow chart of another image processing method provided by an embodiment of the present disclosure;
Fig. 7 is a flow chart of a smoothing filtering operation provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of obtaining a guidance image provided by an embodiment of the present disclosure;
Fig. 9 is a schematic diagram before and after a smoothing filtering operation is performed on a depth map, provided by an embodiment of the present disclosure;
Figure 10 is a schematic diagram of an image fusion operation provided by an embodiment of the present disclosure;
Figure 11 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present disclosure; and
Figure 12 is a schematic block diagram of another image processing apparatus provided by an embodiment of the present disclosure.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below in conjunction with the drawings of the embodiments of the present disclosure. Obviously, the described embodiments are a part of the embodiments of the present disclosure, rather than all of the embodiments. Based on the described embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the ordinary meanings understood by a person with ordinary skill in the field to which the present disclosure belongs. "First", "second" and similar words used in the present disclosure do not indicate any order, quantity or importance, but are only used to distinguish different components. Likewise, words such as "a", "an" or "the" do not indicate a limitation of quantity, but rather indicate the presence of at least one. Words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connect" or "connected" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", etc. are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
The present disclosure is illustrated below through several specific embodiments. In order to keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components may be omitted. When any component of the embodiments of the present disclosure appears in more than one drawing, the component is denoted by the same reference numeral in each drawing.
With the wide adoption of mobile phone 3D modules (for example, the common TOF (time-of-flight) module technology), more and more mobile phones are equipped with 3D cameras. In addition to outputting a conventional color image, a 3D camera can also generate a depth map corresponding to the color image. However, such depth maps often suffer from the following quality problems:
(1) due to the precision limitations of the 3D module itself, the resolution of the depth map is low, and the precision at the edges of the foreground region (for example, a portrait) is particularly poor;
(2) if the photographed scene contains certain specially reflective materials, such as hair, hole regions without depth information are easily formed in the depth map;
(3) because there is a parallax between the infrared camera and the main camera in the 3D camera, after the depth map and the color image are aligned, depth holes are formed at the edges of the subject, i.e., regions that the main camera can see but the infrared camera cannot.
Therefore, when the depth map is applied to some image processing, for example when the depth information in the depth map is used in portrait photography to accurately blur the background region of a captured image, the above problems in the depth map need to be repaired first in order to obtain a good image processing effect in the application.
An embodiment of the present disclosure provides an image processing method, comprising: obtaining a depth map and a target image of the same scene, the scene containing an object to be processed; obtaining a contour of the object to be processed in the scene based on the target image; determining a region to be repaired in the depth map based on the contour of the object to be processed; and optimizing the depth values of the pixels in the region to be repaired of the depth map.
At least one embodiment of the present disclosure also provides an image processing apparatus and a storage medium corresponding to the above image processing method.
The image processing method provided by the embodiments of the present disclosure can determine the region to be repaired of the depth map based on the contour of the object to be processed determined in the target image, so as to optimize the region to be repaired in the depth map and thereby correct quality problems in the depth map.
Embodiments of the present disclosure and some examples thereof are described in detail below with reference to the drawings.
Fig. 1 is an exemplary flow chart of an image processing method provided by an embodiment of the present disclosure. The image processing method can be implemented in software, hardware or a combination thereof, and can be loaded and executed by a processor in a device such as a mobile phone, a laptop computer, a desktop computer, a network server or a digital camera, in order to correct quality problems in a depth map so that the depth map can be better applied in image processing applications, for example accurately blurring the background region of a captured image using depth information in portrait photography. The image processing method provided by at least one embodiment of the present disclosure is described below with reference to Fig. 1. As shown in Fig. 1, the image processing method includes steps S110 to S140.
Step S110: obtaining a depth map and a target image of the same scene, the scene containing an object to be processed.
Step S120: obtaining a contour of the object to be processed in the scene based on the target image.
Step S130: determining a region to be repaired in the depth map based on the contour of the object to be processed.
Step S140: optimizing the depth values of the pixels in the region to be repaired of the depth map.
For example, the object to be processed includes the foreground region of the captured image. For example, the foreground region may be a portrait, such as the portrait 11 shown in panel A of Fig. 2, and may of course also be another target object contained in the image (for example, a flower, a bird, a tree, etc.); the embodiments of the present disclosure impose no restriction on this. It should be noted that the following description takes a portrait as the object to be processed as an example.
For step S110, for example, the target image and the depth map can be obtained by an appropriate image acquisition device. For example, the pixels in the target image and the pixels in the depth map correspond one to one. For example, the target image may be a color image, which may be based on various color gamuts (for example, sRGB, NTSC, Adobe RGB, etc.), or may also be another type of image such as a black-and-white image; the embodiments of the present disclosure impose no restriction on this.
For example, the image acquisition device may be a stereo camera (for example, a TOF camera), or may be another component capable of acquiring images; the embodiments of the present disclosure impose no restriction on this. For example, methods of obtaining the depth map with the image acquisition device include passive range sensing and active depth sensing. For example, the most common passive range sensing method is binocular stereo vision; active depth sensing mainly includes TOF, lidar depth imaging, computer stereo vision imaging, coordinate measuring machines, Moiré fringe methods, structured light, etc. For example, the target image and the depth map of the same scene are shown in panel A and panel B of Fig. 2, respectively, and the object to be processed in the scene is the portrait 11.
For example, the target image and the depth map may be original images directly collected by the image acquisition device, or may be images obtained after preprocessing the original images. Correspondingly, for example, in step S110, the image processing method provided by the embodiments of the present disclosure may also include preprocessing the target image and the depth map, so as to simplify the processing of the depth map in subsequent steps. These image preprocessing operations can remove irrelevant information or noise from the target image and the depth map. For example, in the case where the target image and the depth map are photos, the preprocessing may include scaling, compression or format conversion, color gamut conversion, gamma correction, image enhancement or noise-reduction filtering of the photos; in the case where the target image and the depth map are videos, the preprocessing may include extracting key frames of the videos, etc.
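For example, a minimal preprocessing sketch for step S110 is given below; it assumes OpenCV and illustrative file names, and the operations and parameters are examples rather than requirements of the present disclosure.

```python
import cv2

# Illustrative preprocessing: make the depth map and the target image the same
# size and apply light noise-reduction filtering before the later steps.
target = cv2.imread("target.png")                      # color target image
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # registered depth map

# Resize the depth map to the resolution of the target image so the pixels
# correspond one to one; nearest-neighbour interpolation keeps depth values unmixed.
depth = cv2.resize(depth, (target.shape[1], target.shape[0]),
                   interpolation=cv2.INTER_NEAREST)
depth = cv2.medianBlur(depth, 3)  # light noise-reduction filtering
```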
For example, an image acquisition unit may be provided, and the depth map and the target image of the same scene are obtained by the image acquisition unit; for example, the image acquisition unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA) or a processing unit of another form with data processing capability and/or instruction execution capability, together with corresponding computer instructions. The processing unit may be a general-purpose processor or a dedicated processor, and may be a processor based on the X86 or ARM architecture, etc.
For step S120, for example, the object to be processed is first matted out of the target image. Because the imaging quality of the target image is relatively high and its pixels correspond one to one with the pixels of the depth map, the contour of the object to be processed in the depth map can be accurately obtained based on the matte of the object to be processed extracted from the target image. For example, the matting of the object to be processed can be realized by a Bayesian matting algorithm, a KNN matting algorithm, a Poisson matting algorithm, a neural-network-based matting algorithm, or other algorithms conventional in the art. For example, in the embodiments of the present disclosure, a deep-learning-based matting algorithm can be used to matte the portrait 11 shown in panel A of Fig. 2. As shown in panel C of Fig. 2, the matting result for the portrait 11 is the white region, and the remaining black region is the background. It can be seen from the matting result shown in panel C of Fig. 2 that the edge of the portrait 11 obtained by matting the target image is relatively accurate, so the corresponding edge of the portrait 11 in the depth map is also relatively accurate, and the region to be repaired in the depth map can be accurately determined based on the edge of the portrait 11 in the subsequent steps.
It should be noted that the deep-learning-based matting algorithm can use methods conventional in the art, which are not repeated here.
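For example, once a matte of the object to be processed has been obtained by any matting or segmentation algorithm, its contour can be extracted as in the following sketch (assuming OpenCV; the file name and the binarization threshold are illustrative assumptions).

```python
import cv2
import numpy as np

# Matte produced by any matting/segmentation algorithm: white = object to be
# processed, black = background (as in panel C of Fig. 2).
matte = cv2.imread("matte.png", cv2.IMREAD_GRAYSCALE)

# Binarize the matte and extract the outer contour of the object to be processed.
_, mask = cv2.threshold(matte, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Draw the contour as a one-pixel-wide curve; the distance test of step S130
# is measured against this curve.
contour_mask = np.zeros(mask.shape, dtype=np.uint8)
cv2.drawContours(contour_mask, contours, -1, color=255, thickness=1)
```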
For example, a contour acquisition unit may be provided, and the contour of the object to be processed in the scene is obtained by the contour acquisition unit based on the target image; for example, the contour acquisition unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA) or a processing unit of another form with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
For step S130, in one example, the region to be repaired in the depth map can be determined based on the contour of the object to be processed; in another example, the region to be repaired in the depth map can also be determined according to the depth information in the depth map. For example, the region to be repaired in the depth map includes an object contour region and may also include a depth-information-missing region. For example, the object contour region can be determined according to the matting result shown in panel C of Fig. 2 (i.e., the contour of the object to be processed), and the depth-information-missing region can be determined according to the depth map shown in panel B of Fig. 2; the specific determination methods are introduced in detail below.
Fig. 3 shows a flow chart of a method of determining the region to be repaired in the depth map based on the contour of the object to be processed, and Fig. 4 shows a flow chart of a method of determining the region to be repaired in the depth map based on the depth information in the depth map. That is, Fig. 3 is an exemplary flow chart of step S130 shown in Fig. 1, and Fig. 4 is another exemplary flow chart of step S130 shown in Fig. 1. The methods of determining the region to be repaired are introduced in detail below with reference to Fig. 3 and Fig. 4.
For example, in the example shown in Fig. 3, the method of determining the region to be repaired includes steps S1311 to S1313.
Step S1311: setting a preset threshold.
For example, the preset threshold is denoted Td, and Td can be set to a width of 5 pixels. For example, the region in the depth map formed by the pixels whose distance to the contour of the object to be processed is less than the preset threshold is taken as the object contour region; as shown in panel D of Fig. 5, the region within 5 pixel widths of the edge of the portrait 11 is taken as the object contour region 10, i.e., a region to be repaired. It should be noted that the preset threshold can be set according to the actual situation; the embodiments of the present disclosure impose no restriction on this.
Step S1312: judging whether the distance from a pixel in the depth map to the contour of the object to be processed is less than the preset threshold; if so, executing step S1313.
For example, as shown in panel D of Fig. 5, it is judged whether the distance from a pixel in the depth map to the edge of the portrait 11 is within the preset threshold Td (i.e., 5 pixel widths).
Step S1313: taking, as the object contour region, the region in the depth map formed by the pixels whose distance to the contour of the object to be processed is less than the preset threshold.
For example, as shown in panel D of Fig. 5, the region within 5 pixel widths of the edge of the portrait 11 is taken as the object contour region 10 (i.e., the gray region around the edge of the portrait 11).
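For example, the object contour region of steps S1311 to S1313 can be computed with a distance transform, as in the sketch below (assuming OpenCV and the contour mask from the earlier sketch; the 5-pixel threshold follows the example above).

```python
import cv2
import numpy as np

def object_contour_region(contour_mask: np.ndarray, td: int = 5) -> np.ndarray:
    """Return a mask of the pixels whose distance to the object contour is below td.

    contour_mask: uint8 image in which the contour curve is 255 and all other
    pixels are 0 (for example, the output of cv2.drawContours above).
    """
    # distanceTransform measures the distance to the nearest zero pixel, so the
    # contour mask is inverted: contour pixels become 0, everything else non-zero.
    inverted = cv2.bitwise_not(contour_mask)
    dist = cv2.distanceTransform(inverted, cv2.DIST_L2, maskSize=5)
    # Pixels closer to the contour than the preset threshold Td form the object
    # contour region, i.e. a region to be repaired.
    return (dist < td).astype(np.uint8) * 255
```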
For example, in the example shown in Fig. 4, the method of determining the region to be repaired may also include steps S1321 to S1322.
Step S1321: judging whether the display gray level of a pixel in the depth map is pure black or pure white; if so, executing step S1322.
For example, this judgment can be performed for the region where the object to be processed is located in the depth map and for the edge region of the subject (including the region where the object to be processed is located and the background region) in the depth map.
For example, judging whether the display gray level of a pixel in the depth map is pure black includes identifying the hole regions lacking depth information (for example, the black region 12 shown in panel B of Fig. 2) that are formed at the edges of the subject after the depth map and the target image are aligned, because of the parallax between the infrared camera and the main camera in the 3D camera module; these are regions that the main camera can see but the infrared camera cannot.
For example, judging whether the display gray level of a pixel in the depth map is pure white includes identifying the holes formed in the depth map when the photographed scene contains certain specially reflective materials, such as hair (for example, the white region 13 on the portrait 11 shown in panel B of Fig. 2).
Step S1322: taking the region in the depth map formed by the pixels whose display gray level is pure black or pure white as the depth-information-missing region.
For example, the region 12 whose display gray level is pure black and the region 13 whose display gray level is pure white, shown in panel B of Fig. 2, are taken as depth-information-missing regions (i.e., the gray region 20 and the gray region 30 shown in panel D of Fig. 5), i.e., regions to be repaired.
For example, as shown in panel D of Fig. 5, the black region represents the background region, the white region represents the foreground region (for example, the portrait 11), and the gray regions represent the regions to be repaired (including the object contour region and/or the depth-information-missing region).
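For example, the depth-information-missing region of steps S1321 to S1322 can be found by testing for pure black and pure white pixels, as in the sketch below (assuming the depth map is stored as an 8-bit grayscale image in which the values 0 and 255 indicate missing or unreliable depth).

```python
import numpy as np

def missing_depth_region(depth: np.ndarray) -> np.ndarray:
    """Mark the pixels whose display gray level is pure black or pure white."""
    missing = (depth == 0) | (depth == 255)
    return missing.astype(np.uint8) * 255

# The full region to be repaired is the union of the object contour region and
# the depth-information-missing region (gray areas 10, 20 and 30 in panel D of
# Fig. 5), for example:
#   repair_mask = cv2.bitwise_or(object_contour_region(contour_mask),
#                                missing_depth_region(depth))
```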
For example, when determining the region to be repaired, the steps shown in Fig. 3 and Fig. 4 can both be included, or only the steps shown in Fig. 3, or only the steps shown in Fig. 4; the embodiments of the present disclosure impose no restriction on this.
For example, a region-to-be-repaired determination unit may be provided, and the region to be repaired in the depth map is determined by the region-to-be-repaired determination unit based on the contour of the object to be processed; for example, the region-to-be-repaired determination unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA) or a processing unit of another form with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
For step S140, the depth values of the pixels in the region to be repaired determined in step S130 (i.e., the gray regions 10, 20 and 30 shown in panel D of Fig. 5) are optimized.
For example, a method based on image optimization can be used to restore the depth values of the pixels in the region to be repaired of the depth map. For example, during optimization the image gradient of the depth map can be set to be consistent with the image gradient of the target image, in order to optimize the depth values of the pixels in the region to be repaired of the depth map. That is, if a region of the target image is smooth, the corresponding region of the depth map should also be smooth; conversely, if there is a strongly varying edge in the target image, such as the transition from the portrait to the background, there should also be a strongly varying edge in the depth map.
For example, in the embodiments of the present disclosure, the depth values of the region to be restored can be estimated by solving an optimization problem, whose objective is as follows:
where D denotes the optimized depth values of the pixels in the region to be repaired of the depth map, Ω denotes the region to be repaired in the depth map, ∂Ω denotes the boundary of the region to be repaired in the depth map, D|∂Ω denotes the depth values of the pixels located at the boundary of the region to be repaired in the depth map, L is an N×N affinity matrix, and N, m and l are integers greater than 1.
For example, the affinity matrix L defines the relationship between each pixel and the other pixels in a neighborhood around it. That is, within each small predefined neighborhood (for example, a 7×7 or 9×9 window), the depth values of the pixels satisfy a certain relationship derived from the information of the image itself. For example, the Laplacian matrix used in image matting applications is typically employed to define this relationship.
For example, the affinity matrix L can be expressed as:

L_ij = Σ_{k | (i,j) ∈ w_k} [ δ_ij − (1/|w_k|) (1 + (I_i − μ_k)^T (Σ_k + (ε/|w_k|) I_3)^{−1} (I_j − μ_k)) ]

where L_ij denotes the (i, j)-th element of the affinity matrix L, Σ_k denotes the 3×3 covariance matrix of the pixel values of image I in the window w_k, w_k denotes the window centered on pixel k, |w_k| denotes the number of pixels in w_k, μ_k denotes the mean vector of the pixels in the window w_k, I_3 denotes the 3×3 identity matrix, I_i and I_j respectively denote the pixel values of image I at the i-th and j-th pixels, ε denotes a control parameter, δ_ij denotes the Kronecker delta function, and i and j are integers greater than 1.
For example, the Kronecker delta function δ_ij equals 1 when i = j and 0 otherwise, and the image I can be the input image.
For example, after the optimization objective function in formula (1) is defined, solving the depth values of the pixels in the region to be repaired becomes a problem of solving a linear system. For example, the conjugate gradient method can be used to optimize the objective function in formula (1). For example, panel E of Fig. 5 is the repaired depth map obtained after optimizing the objective function. As shown in panel E of Fig. 5, the regions to be repaired (i.e., the gray regions) shown in panel D of Fig. 5 can be repaired well by the above optimization method.
It should be noted that the optimization is not limited to the above method; other methods conventional in the art can also be used to optimize the depth values of the pixels in the region to be repaired of the depth map.
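For example, a minimal sketch of such a solve is given below; it assumes that the objective takes the common constrained quadratic form of minimizing D^T L D with the depth values outside the region to be repaired held fixed, and that the affinity matrix L has already been built as a SciPy sparse matrix. These are assumptions made for illustration, not the exact formulation of the present disclosure.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def repair_depth(depth: np.ndarray, repair_mask: np.ndarray,
                 laplacian: sp.spmatrix) -> np.ndarray:
    """Fill the region to be repaired by a conjugate-gradient solve.

    depth:       (H, W) float array, the depth map to repair
    repair_mask: (H, W) bool array, True inside the region to be repaired
    laplacian:   (H*W, H*W) sparse matting-Laplacian-style affinity matrix
    """
    h, w = depth.shape
    d = depth.reshape(-1).astype(np.float64)
    unknown = repair_mask.reshape(-1)
    known = ~unknown
    laplacian = laplacian.tocsr()

    # Minimizing D^T L D with the known depth values fixed reduces to the
    # linear system  L_uu d_u = -L_uk d_k  over the unknown pixels.
    L_uu = laplacian[unknown][:, unknown]
    L_uk = laplacian[unknown][:, known]
    rhs = -L_uk @ d[known]

    d_u, info = cg(L_uu, rhs, maxiter=2000)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")

    repaired = d.copy()
    repaired[unknown] = d_u
    return repaired.reshape(h, w)
```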
For example, an optimization unit may be provided, and the depth values of the pixels in the region to be repaired of the depth map are optimized by the optimization unit; for example, the optimization unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA) or a processing unit of another form with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
It should be noted that in the embodiments of the present disclosure, the flow of the image processing method may include more or fewer operations, and these operations may be executed sequentially or in parallel. Although the flow of the image processing method described above includes multiple operations occurring in a particular order, it should be clearly understood that the order of the multiple operations is not limited. The image processing method described above may be executed once, or may be executed multiple times according to predetermined conditions.
The image processing method provided by the above embodiments of the present disclosure can determine the region to be repaired in the depth map based on the contour of the object to be processed determined in the target image, so as to optimize the region to be repaired in the depth map and thereby correct quality problems in the depth map.
Fig. 6 is a flow chart of another image processing method provided by an embodiment of the present disclosure. As shown in Fig. 6, the image processing method provided by the embodiments of the present disclosure can also filter and fuse the repaired depth image. For example, in one example, the image processing method further includes performing smoothing filtering on the foreground region and the background region of the depth map, i.e., it includes steps S150 to S160; in another example, the image processing method further includes fusing together the filtered background region and foreground region of the depth map, i.e., it includes step S170. The image processing method is described below with reference to Fig. 6.
Step S150: performing smoothing filtering on the region where the object to be processed is located in the depth map.
Step S160: performing smoothing filtering on the background region outside the object to be processed in the depth map.
Step S170: fusing together the region where the object to be processed is located in the depth map after the smoothing filtering and the background region outside the object to be processed after the smoothing filtering.
For step S150, since the object to be processed (for example, a portrait) is generally closer to the 3D camera, its depth values are relatively accurate and relatively fine, and suitable smoothing filtering can be used to remove noise and obtain a better display effect.
Fig. 7 is a flow chart of a smoothing filtering operation provided by an embodiment of the present disclosure. That is, Fig. 7 is an exemplary flow chart of step S150 shown in Fig. 6. As shown in Fig. 7, the smoothing filtering operation includes step S151 and step S152. The smoothing filtering operation is described below with reference to Fig. 7.
Step S151: performing edge-preserving smoothing filtering on the region where the object to be processed is located in the target image using a bilateral filtering method, and obtaining a guidance image for the region where the object to be processed is located in the depth map.
Since the pixels in the target image and the depth image correspond one to one, the guidance image for the depth map can be obtained from the target image after smoothing filtering. For example, the bilateral filtering method used when performing smoothing filtering on the target image can be a method conventional in the art, which is not described in detail here. It should be noted that Gaussian filtering, mean filtering or other suitable filtering methods can also be used to perform edge-preserving smoothing filtering on the region where the object to be processed is located in the target image; the embodiments of the present disclosure impose no restriction on this.
Fig. 8 is a schematic diagram of obtaining a guidance image provided by an embodiment of the present disclosure. For example, panel F of Fig. 8 is the image obtained after smoothing filtering is performed on the portrait in the target image, and panel G of Fig. 8 is the guidance image of the depth map obtained based on the target image after the smoothing filtering. For example, the guidance image may be the input image, or may be a separate image; the embodiments of the present disclosure impose no restriction on this. For example, when the guidance image is the input image, the guided filtering operation can be regarded as an edge-preserving filtering operation and can be used for filtering in image reconstruction.
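For example, the guidance image of step S151 can be obtained as in the sketch below (assuming OpenCV; the bilateral-filter parameters are illustrative, not values from the present disclosure).

```python
import cv2

# Edge-preserving smoothing of the target image (step S151); the filtered
# result serves as the guidance image for the subsequent guided filtering.
target = cv2.imread("target.png")
smoothed_target = cv2.bilateralFilter(target, d=9, sigmaColor=50, sigmaSpace=50)
guidance = cv2.cvtColor(smoothed_target, cv2.COLOR_BGR2GRAY)  # single-channel guide
```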
Step S152: performing smoothing filtering on the region where the object to be processed is located in the depth map using a guided filtering method, based on the guidance image.
For example, the depth value of each pixel of the output image after smoothing filtering (i.e., the depth map after smoothing filtering) can be obtained as a weighted average of the depth values of the pixels in a window around it.
For example, the expression of the guided filtering method can be expressed as:

q_i = Σ_j W_ij(I) p_j

where q denotes the output image, after the guided filtering, of the region where the object to be processed is located in the depth map, I denotes the guidance image, p denotes the input image, and W_ij(I) denotes the kernel weight parameter of the filter.
For example, in the embodiments of the present disclosure, the guidance image can be the input image. Of course, the embodiments of the present disclosure impose no restriction on this.
For example, the kernel weight parameter of the filter can be expressed as:

W_ij(I) = (1 / |w|^2) Σ_{k: (i,j) ∈ w_k} (1 + (I_i − μ_k)(I_j − μ_k) / (σ_k^2 + ε))

where |w| denotes the number of pixels in a window, μ_k and σ_k^2 respectively denote the mean and variance of the image I in the window w_k, I_i and I_j respectively denote the pixel values of the image I at the i-th and j-th pixels, and ε denotes a control parameter used to control the smoothness of the filtering.
For example, when the control parameter ε is much smaller than the variance σ_k^2, the depth value of pixel k is preserved rather than smoothed; conversely, it is smoothed.
It should be noted that the specific operation of the guided filtering method can use methods conventional in the art, which are not repeated here. It should also be noted that the method is not limited to this; other suitable methods in the art can also be used to perform smoothing filtering on the region where the object to be processed is located in the depth map.
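For example, a minimal single-channel guided filter in the standard box-filter formulation could look like the sketch below; the radius and ε values, the file names and the choice of a grayscale guidance image are illustrative assumptions.

```python
import cv2
import numpy as np

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 8, eps: float = 1e-3) -> np.ndarray:
    """Single-channel guided filter; guide and src are float32 images in [0, 1]."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=ksize)

    mean_I = mean(guide)
    mean_p = mean(src)
    corr_Ip = mean(guide * src)
    corr_II = mean(guide * guide)

    var_I = corr_II - mean_I * mean_I      # variance of the guide in each window
    cov_Ip = corr_Ip - mean_I * mean_p     # covariance between guide and input

    a = cov_Ip / (var_I + eps)             # per-window linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)

# Step S152: smooth the repaired depth map using the bilateral-filtered target
# image as the guidance image.
target = cv2.imread("target.png")
guide = cv2.cvtColor(cv2.bilateralFilter(target, 9, 50, 50),
                     cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
depth = cv2.imread("depth_repaired.png",
                   cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
smoothed_fg = guided_filter(guide, depth, radius=8, eps=1e-3)
# Step S160 could reuse the same filter with a larger eps (for example 1e-1) so
# that the background region is smoothed more strongly than the portrait region.
smoothed_bg = guided_filter(guide, depth, radius=8, eps=1e-1)
```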
Fig. 9 shows schematic diagrams before and after the smoothing filtering operation is performed on the foreground region of the depth map (i.e., the region where the object to be processed is located in the depth map), provided by an embodiment of the present disclosure. As shown in Fig. 9, the left side is the schematic diagram before the smoothing filtering operation is performed on the foreground region, and the right side is the schematic diagram after the smoothing filtering operation. From the before-and-after comparison shown in Fig. 9, it can be seen that the noise of the foreground region (i.e., the portrait) in the depth map is effectively suppressed; at the same time, the image at edge positions such as the collar and chin of the foreground region is well preserved and is not blurred during the smoothing filtering.
For step S160, when smoothing filtering is performed on the background region outside the object to be processed in the depth map, the same smoothing filtering method as in step S150 can be used, or a different filtering method can be used; the embodiments of the present disclosure impose no restriction on this. For example, when the same smoothing filtering method as in step S150 (for example, the guided filtering method) is used to filter the background region of the depth map, a different control parameter ε can be set to control the smoothness of the background region. It should be noted that the specific operation can refer to steps S151 and S152, which are not repeated here. For example, the schematic diagram after smoothing filtering is performed on the background region outside the object to be processed in the depth map is shown in panel P of Fig. 10.
For example, a filtering unit may be provided, and smoothing filtering is performed by the filtering unit on the region where the object to be processed is located in the depth map and on the background region outside the object to be processed in the depth map; for example, the filtering unit may also be implemented by a central processing unit (CPU), a tensor processing unit (TPU), a graphics processing unit (GPU), a field programmable gate array (FPGA) or a processing unit of another form with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
For step S170, the foreground region (i.e., the region where the object to be processed is located in the depth map) and the background region (i.e., the background region outside the object to be processed in the depth map), each after smoothing filtering, are fused to obtain a complete depth map after smoothing filtering. For example, this can be realized using a wavelet-based image fusion method, or using other methods conventional in the art; the embodiments of the present disclosure impose no restriction on this.
Panel Q of Figure 10 is a schematic diagram of the depth map after image fusion provided by an embodiment of the present disclosure. For example, the background region after smoothing filtering shown in panel P of Figure 10 and the foreground region (portrait) after smoothing filtering shown on the right side of Fig. 9 are fused to obtain the fused image shown in panel Q of Figure 10.
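For example, a simplified fusion sketch is given below; it uses a plain mask-based composition rather than the wavelet-based fusion mentioned above, and assumes the masks and filtered results from the earlier sketches.

```python
import numpy as np

def fuse_depth(smoothed_fg: np.ndarray, smoothed_bg: np.ndarray,
               portrait_mask: np.ndarray) -> np.ndarray:
    """Combine the separately filtered foreground and background into one depth map.

    portrait_mask: bool array, True for pixels belonging to the object to be processed.
    """
    return np.where(portrait_mask, smoothed_fg, smoothed_bg)
```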
For example, a fusion unit may be provided, and the region where the object to be processed is located in the depth map after the smoothing filtering and the background region outside the object to be processed after the smoothing filtering are fused together by the fusion unit; for example, the fusion unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA) or a processing unit of another form with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
It should be noted that in the embodiments of the present disclosure, the flow of the image processing method may include more or fewer operations, and these operations may be executed sequentially or in parallel. Although the flow of the image processing method described above includes multiple operations occurring in a particular order, it should be clearly understood that the order of the multiple operations is not limited. The image processing method described above may be executed once, or may be executed multiple times according to predetermined conditions.
The image processing method provided by the embodiments of the present disclosure can, in addition to correcting quality problems in the depth map, filter the corrected depth map, so that the quality of the depth map is further improved and the depth map can be better applied in image processing applications.
Figure 11 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present disclosure. For example, in the example shown in Figure 11, the image processing apparatus 100 includes an image acquisition unit 110, a contour acquisition unit 120, a region-to-be-repaired determination unit 130 and an optimization unit 140. For example, these units can be implemented as hardware (e.g., circuit) modules or software modules, etc.
The image acquisition unit 110 is configured to obtain the depth map and the target image of the same scene, the scene containing the object to be processed. For example, the image acquisition unit 110 can implement step S110, and the specific implementation can refer to the related description of step S110, which is not repeated here.
The contour acquisition unit 120 is configured to obtain the contour of the object to be processed in the scene based on the target image. For example, the contour acquisition unit 120 can implement step S120, and the specific implementation can refer to the related description of step S120, which is not repeated here.
The region-to-be-repaired determination unit 130 is configured to determine the region to be repaired in the depth map based on the contour of the object to be processed. For example, the region-to-be-repaired determination unit 130 can implement step S130, and the specific implementation can refer to the related description of step S130, which is not repeated here.
The optimization unit 140 is configured to optimize the depth values of the pixels in the region to be repaired of the depth map. For example, the optimization unit 140 can implement step S140, and the specific implementation can refer to the related description of step S140, which is not repeated here.
For example, in another example, the image processing apparatus 100 further includes a filtering unit and a fusion unit (not shown in the figure).
The filtering unit is configured to perform smoothing filtering on the region where the object to be processed is located in the depth map and on the background region outside the object to be processed in the depth map. For example, the filtering unit can implement steps S150 and S160, and the specific implementation can refer to the related descriptions of steps S150 and S160, which are not repeated here.
The fusion unit is configured to fuse together the region where the object to be processed is located in the depth map after the smoothing filtering and the background region outside the object to be processed in the depth map after the smoothing filtering. For example, the fusion unit can implement step S170, and the specific implementation can refer to the related description of step S170, which is not repeated here.
It should be noted that in the embodiments of the present disclosure, more or fewer circuits or units may be included, and the connection relationships between the circuits or units are not limited and can be determined according to actual needs. The specific construction of each circuit is not limited; each can be composed of analog devices according to circuit principles, or composed of digital chips, or constructed in other applicable ways.
Figure 12 is a schematic block diagram of another image processing apparatus provided by an embodiment of the present disclosure. As shown in Figure 12, the image processing apparatus 200 includes a processor 210, a memory 220 and one or more computer program modules 221.
For example, the processor 210 and the memory 220 are connected via a bus system 230. For example, the one or more computer program modules 221 are stored in the memory 220. For example, the one or more computer program modules 221 include instructions for executing the image processing method provided by any embodiment of the present disclosure. For example, the instructions in the one or more computer program modules 221 can be executed by the processor 210. For example, the bus system 230 can be a common serial or parallel communication bus, etc.; the embodiments of the present disclosure impose no restriction on this.
For example, the processor 210 can be a central processing unit (CPU), a graphics processing unit (GPU) or a processing unit of another form with data processing capability and/or instruction execution capability; it can be a general-purpose processor or a dedicated processor, and can control other components in the image processing apparatus 200 to perform desired functions.
The memory 220 may include one or more computer program products, and the computer program products may include computer-readable storage media of various forms, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions can be stored on the computer-readable storage medium, and the processor 210 can run the program instructions to realize the functions (implemented by the processor 210) of the embodiments of the present disclosure and/or other desired functions, such as the image processing method. Various application programs and various data, such as the preset threshold Td and various data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
It should be noted that, for clarity and conciseness, the embodiments of the present disclosure do not show all the constituent units of the image processing apparatus 200. To realize the necessary functions of the image processing apparatus 200, those skilled in the art can provide and arrange other constituent units not shown according to specific needs; the embodiments of the present disclosure impose no restriction on this.
For the technical effects of the image processing apparatus 100 and the image processing apparatus 200 in different embodiments, reference can be made to the technical effects of the image processing method provided in the embodiments of the present disclosure, which are not repeated here.
An embodiment of the present disclosure also provides a storage medium. For example, the storage medium stores computer-readable instructions in a non-transitory manner; when the non-transitory computer-readable instructions are executed by a computer (including a processor), the image processing method provided by any embodiment of the present disclosure can be executed.
For example, the storage medium can be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for determining the region to be repaired in the depth map, and another computer-readable storage medium contains computer-readable program code for optimizing the depth values of the pixels in the region to be repaired of the depth map. For example, when the program code is read by a computer, the computer can execute the program code stored in the computer storage medium to execute, for example, the image processing method provided by any embodiment of the present disclosure.
For example, the storage medium may include a memory card of a smartphone, a storage unit of a tablet computer, a hard disk of a personal computer, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), compact disc read-only memory (CD-ROM), flash memory, any combination of the above storage media, or other applicable storage media.
The following points need to be noted:
(1) The drawings of the embodiments of the present disclosure relate only to the structures involved in the embodiments of the present disclosure, and other structures can refer to common designs.
(2) In the absence of conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other to obtain new embodiments.
The above are only exemplary embodiments of the present disclosure and are not intended to limit the protection scope of the present disclosure, which is determined by the appended claims.

Claims (15)

1. An image processing method, comprising:
obtaining a depth map and a target image of the same scene, wherein the scene contains an object to be processed;
obtaining a contour of the object to be processed in the scene based on the target image;
determining a region to be repaired in the depth map based on the contour of the object to be processed; and
optimizing depth values of pixels in the region to be repaired of the depth map.
2. The image processing method according to claim 1, wherein the region to be repaired in the depth map comprises an object contour region, and determining the region to be repaired in the depth map comprises:
setting a preset threshold;
judging whether the distance from a pixel in the depth map to the contour of the object to be processed is less than the preset threshold; and
taking, as the object contour region, the region in the depth map formed by the pixels whose distance to the contour of the object to be processed is less than the preset threshold.
3. The image processing method according to claim 1, further comprising:
determining the region to be repaired in the depth map based on depth information of the depth map.
4. The image processing method according to claim 3, wherein the region to be repaired in the depth map comprises a depth-information-missing region of the depth map, and determining the region to be repaired in the depth map further comprises:
judging whether the display gray level of a pixel in the depth map is pure black or pure white; and
taking the region in the depth map formed by the pixels whose display gray level is pure black or pure white as the depth-information-missing region.
5. The image processing method according to claim 1, wherein the image gradient of the depth map is set to be consistent with the image gradient of the target image, thereby optimizing the depth values of the pixels in the region to be repaired of the depth map.
6. The image processing method according to claim 1, wherein optimizing the depth values of the pixels in the region to be repaired of the depth map comprises:
wherein D denotes the optimized depth values of the pixels in the region to be repaired of the depth map, Ω denotes the region to be repaired in the depth map, ∂Ω denotes the boundary of the region to be repaired in the depth map, D|∂Ω denotes the depth values of the pixels located at the boundary of the region to be repaired in the depth map, L is an N×N affinity matrix, and N, m and l are integers greater than 1.
7. The image processing method according to claim 6, wherein the affinity matrix is expressed as:
wherein L_ij denotes the (i, j)-th element of the affinity matrix L, Σ_k denotes a 3×3 covariance matrix, w_k denotes the window centered on pixel k, μ_k denotes the mean vector of the pixels in the window w_k, I_3 denotes the 3×3 identity matrix, I_i and I_j respectively denote the pixel values of image I at the i-th and j-th pixels, ε denotes a control parameter, δ_ij denotes the Kronecker delta function, and i and j are integers greater than 1.
8. The image processing method according to claim 1, further comprising:
performing smoothing filtering on the region where the object to be processed is located in the depth map; and
performing smoothing filtering on the background region outside the object to be processed in the depth map.
9. The image processing method according to claim 8, wherein performing smoothing filtering on the region where the object to be processed is located in the depth map comprises:
performing edge-preserving smoothing filtering on the region where the object to be processed is located in the target image using a bilateral filtering method, and obtaining a guidance image for the region where the object to be processed is located in the depth map; and
performing smoothing filtering on the region where the object to be processed is located in the depth map using a guided filtering method, based on the guidance image.
10. The image processing method according to claim 9, wherein the expression of the guided filtering method is expressed as:
wherein q denotes the output image, after the guided filtering, of the region where the object to be processed is located in the depth map, I denotes the guidance image, p denotes the input image, and W_ij(I) denotes the kernel weight parameter of the filter.
11. The image processing method according to any one of claims 8-10, further comprising:
fusing together the region where the object to be processed is located in the depth map after the smoothing filtering and the background region outside the object to be processed after the smoothing filtering.
12. An image processing apparatus, comprising:
an image acquisition unit, configured to obtain a depth map and a target image for a same picture, the picture containing an object to be processed;
a profile acquisition unit, configured to obtain, based on the target image, the profile of the object to be processed in the picture;
an area-to-be-repaired determination unit, configured to determine, based on the profile of the object to be processed, the area to be repaired in the depth map; and
an optimization unit, configured to optimize the depth values of the pixels of the area to be repaired in the depth map.
13. The image processing apparatus according to claim 12, further comprising:
a filtering unit, configured to perform smoothing filtering on the region where the object to be processed is located in the depth map and on the background region other than the object to be processed in the depth map; and
a fusion unit, configured to fuse together the region where the object to be processed is located in the depth map after the smoothing filtering and the background region other than the object to be processed after the smoothing filtering.
14. An image processing apparatus, comprising:
a processor; and
a memory storing one or more computer program modules, the one or more computer program modules being configured to be executed by the processor and comprising instructions for carrying out the image processing method according to any one of claims 1-11.
15. A storage medium non-transitorily storing computer-readable instructions which, when executed by a computer, can perform the image processing method according to any one of claims 1-11.
CN201811360744.9A 2018-10-16 2018-11-15 Image processing method, image processing apparatus and storage medium Withdrawn CN109903321A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201816161472A 2018-10-16 2018-10-16
US16/161,472 2018-10-16

Publications (1)

Publication Number Publication Date
CN109903321A true CN109903321A (en) 2019-06-18

Family

ID=66943293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811360744.9A Withdrawn CN109903321A (en) 2018-10-16 2018-11-15 Image processing method, image processing apparatus and storage medium

Country Status (1)

Country Link
CN (1) CN109903321A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288543A (en) * 2019-06-21 2019-09-27 北京迈格威科技有限公司 A kind of depth image guarantor side treating method and apparatus
CN110288543B (en) * 2019-06-21 2021-11-30 北京迈格威科技有限公司 Depth image edge-preserving processing method and device
CN110336942A (en) * 2019-06-28 2019-10-15 Oppo广东移动通信有限公司 A kind of virtualization image acquiring method and terminal, computer readable storage medium
CN110503704A (en) * 2019-08-27 2019-11-26 北京迈格威科技有限公司 Building method, device and the electronic equipment of three components
CN110675346B (en) * 2019-09-26 2023-05-30 武汉科技大学 Image acquisition and depth map enhancement method and device suitable for Kinect
CN110675346A (en) * 2019-09-26 2020-01-10 武汉科技大学 Image acquisition and depth map enhancement method and device suitable for Kinect
CN112070708A (en) * 2020-08-21 2020-12-11 杭州睿琪软件有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN112070708B (en) * 2020-08-21 2024-03-08 杭州睿琪软件有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN112465723A (en) * 2020-12-04 2021-03-09 北京华捷艾米科技有限公司 Method and device for repairing depth image, electronic equipment and computer storage medium
CN113298735A (en) * 2021-06-22 2021-08-24 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113487514A (en) * 2021-07-22 2021-10-08 Oppo广东移动通信有限公司 Image processing method, device, terminal and readable storage medium
CN115118948A (en) * 2022-06-20 2022-09-27 北京华录新媒信息技术有限公司 Method and device for repairing irregular occlusion in panoramic video
CN115118948B (en) * 2022-06-20 2024-04-05 北京华录新媒信息技术有限公司 Repairing method and device for irregular shielding in panoramic video

Similar Documents

Publication Publication Date Title
CN109903321A (en) Image processing method, image processing apparatus and storage medium
CN107680128B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
AU2017246470B2 (en) Generating intermediate views using optical flow
Krig Computer vision metrics: Survey, taxonomy, and analysis
US9600887B2 (en) Techniques for disparity estimation using camera arrays for high dynamic range imaging
US20170154204A1 (en) Method and system of curved object recognition using image matching for image processing
US9224362B2 (en) Monochromatic edge geometry reconstruction through achromatic guidance
US20190102872A1 (en) Glare Reduction in Captured Images
CN110660088A (en) Image processing method and device
US10949700B2 (en) Depth based image searching
KR100681320B1 (en) Method for modelling three dimensional shape of objects using level set solutions on partial difference equation derived from helmholtz reciprocity condition
CN107025660B (en) Method and device for determining image parallax of binocular dynamic vision sensor
CN107454332A (en) Image processing method, device and electronic equipment
WO2017052976A1 (en) A method and system of low-complexity histogram of gradients generation for image processing
CN108053363A (en) Background blurring processing method, device and equipment
CN114697623B (en) Projection plane selection and projection image correction method, device, projector and medium
Liu et al. Texture filtering based physically plausible image dehazing
CN107113421B (en) The detection method and device of a kind of optical system imaging quality
Chang et al. A self-adaptive single underwater image restoration algorithm for improving graphic quality
US20240022702A1 (en) Foldable electronic device for multi-view image capture
CN112132925A (en) Method and device for reconstructing underwater image color
KR101390455B1 (en) A Physically-based Approach to Reflection Separation
Kriener et al. Accelerating defocus blur magnification
WO2020055406A1 (en) Methods, devices, and computer program products for improved 3d mesh texturing
Zia et al. Exploring chromatic aberration and defocus blur for relative depth estimation from monocular hyperspectral image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190618