CN107633497A - Image depth-of-field rendering method, system and terminal - Google Patents

Image depth-of-field rendering method, system and terminal Download PDF

Info

Publication number: CN107633497A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201710777729.3A
Other languages: Chinese (zh)
Inventors: 邹泽东, 黎礼明, 刘勇, 周剑
Current Assignee: Chengdu Tongjia Youbo Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Chengdu Tongjia Youbo Technology Co Ltd
Application filed by Chengdu Tongjia Youbo Technology Co Ltd
Priority to CN201710777729.3A
Landscapes

  • Image Processing (AREA)

Abstract

This application discloses an image depth-of-field rendering method, comprising: obtaining an original RGB image and a depth map corresponding to a target object; calculating the blur radius (circle of confusion, COC) of the depth map; filtering the original RGB image with a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image; and performing alpha fusion of the rendered image with the original RGB image to obtain a fused image. Because the original RGB image is filtered with a kernel derived from the blur radius COC of the depth map, the application solves the colour-leakage problem that arises during depth-of-field rendering. The application also correspondingly discloses an image depth-of-field rendering system and terminal.

Description

Image depth-of-field rendering method, system and terminal
Technical field
The present invention relates to the technical fields of computer vision and computer graphics, and in particular to an image depth-of-field rendering method, system and terminal.
Background art
Depth of field (DOF) is an important concept in photography and optical imaging. It refers to the range of scene-to-lens distances, in front of a camera lens or other imaging device, within which the photographed scene or object is imaged sharply. In the final photograph, scenery within the depth-of-field range is sharp, while regions in front of or behind that range appear blurred. However, when shooting with a mobile-phone camera or a compact camera, portability requirements severely constrain the size and weight of the sensor and lens. On such devices the lens aperture is very small, so the captured image has a large depth of field: foreground and background are both fairly sharp, the ability to control the depth-of-field range is extremely limited, and the shallow-depth-of-field effect achievable with an SLR camera cannot be reached. In particular, colour leakage occurs in the depth-of-field rendering, which significantly limits the quality of images captured by smartphones. Since mobile devices keep trending thinner and lighter, it is difficult to obtain a large-aperture shallow depth of field by adjusting the smartphone camera itself. The computing performance of smartphones, however, keeps improving. It is therefore desirable to exploit the smartphone's computing power and solve the colour-leakage problem of depth-of-field effects through image post-processing, which is a research focus in this area.
Summary of the invention
In view of this, the object of the present invention is to provide an image depth-of-field rendering method, system and terminal that overcome the colour-leakage problem of depth-of-field effects. The concrete scheme is as follows:
An image depth-of-field rendering method, comprising:
obtaining an original RGB image and a depth map corresponding to a target object;
calculating the blur radius COC of the depth map;
filtering the original RGB image with a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
performing alpha fusion of the rendered image with the original RGB image to obtain a fused image.
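The four claimed steps can be sketched end to end. The snippet below is a minimal illustration on a 2-D grayscale array standing in for the RGB image, with the thin-lens COC formula, a bilateral-style blur and a COC-derived alpha fusion written out inline; the function name, the pixel-scaling factor and the clamp on alpha are assumptions of this sketch, not part of the patent.

```python
import math

def depth_of_field_render(gray, depth, focus, f=0.05, d=0.02,
                          sigma_s=1.0, sigma_r=20.0, px_per_m=1000.0):
    """Sketch of the claimed pipeline on a 2-D grayscale image:
    per-pixel COC from the depth map, a COC-driven bilateral-style
    blur, then alpha fusion of blurred and original pixels."""
    h, w = len(gray), len(gray[0])
    # Step 2: blur radius from the thin-lens circle-of-confusion relation.
    coc = [[d * f * abs(z - focus) / (z * (focus - f)) for z in row]
           for row in depth]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Step 3: filter with a kernel whose extent follows the COC.
            r = max(1, int(round(coc[y][x] * px_per_m)))
            acc = norm = 0.0
            for j in range(max(0, y - r), min(h, y + r + 1)):
                for i in range(max(0, x - r), min(w, x + r + 1)):
                    wgt = (math.exp(-((x - i) ** 2 + (y - j) ** 2) / (2 * sigma_s ** 2))
                           * math.exp(-((gray[y][x] - gray[j][i]) ** 2) / (2 * sigma_r ** 2)))
                    acc += wgt * gray[j][i]
                    norm += wgt
            blurred = acc / norm
            # Step 4: alpha fusion; alpha derived from the COC (clamped here).
            alpha = min(coc[y][x] * px_per_m, 1.0) ** 2
            out[y][x] = alpha * blurred + (1 - alpha) * gray[y][x]
    return out

# An all-in-focus plane (depth == focus distance) is returned unchanged.
img = [[10.0, 200.0], [30.0, 40.0]]
flat = [[2.0, 2.0], [2.0, 2.0]]
print(depth_of_field_render(img, flat, focus=2.0) == img)  # True
```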
Preferably, the process of obtaining the original RGB image and depth map corresponding to the target object comprises:
performing image acquisition on the target object to obtain a first RGB image and a second RGB image;
determining the corresponding depth map from the first and second RGB images.
Preferably, the process of obtaining the original RGB image and depth map corresponding to the target object comprises:
performing image acquisition on the target object to obtain K RGB images, where K is an integer greater than or equal to 3;
determining the corresponding depth map from the K RGB images.
Preferably, the process of filtering the original RGB image with the convolution kernel corresponding to the blur radius COC to obtain the corresponding rendered image comprises:
down-sampling any one of the original RGB images to obtain a corresponding small RGB image;
filtering the small RGB image with the convolution kernel corresponding to the blur radius COC to obtain a small rendered image;
up-sampling the small rendered image to obtain the rendered image.
Preferably, the process of performing alpha fusion of the rendered image with the original RGB image to obtain the fused image comprises:
performing alpha fusion of the rendered image with any one of the original RGB images to obtain the fused image.
Preferably, the process of calculating the blur radius COC of the depth map comprises:
down-sampling the depth map to obtain a corresponding small depth map;
calculating the blur radius COC corresponding to the small depth map.
Preferably, before calculating the blur radius COC corresponding to the small depth map, the method further comprises:
applying a fast guided filter to the small depth map to improve its precision.
The invention also discloses an image depth-of-field rendering system, comprising:
an image acquisition module for obtaining an original RGB image and a depth map corresponding to a target object;
a blur-radius computing module for calculating the blur radius COC of the depth map;
an RGB-image filtering module for filtering the original RGB image with a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
an image fusion module for performing alpha fusion of the rendered image with the RGB image to obtain a fused image.
Preferably, the RGB-image filtering module comprises:
an RGB down-sampling unit for down-sampling any one of the original RGB images to obtain a corresponding small RGB image;
a small-RGB filtering unit for filtering the small RGB image with the convolution kernel corresponding to the blur radius COC to obtain a small rendered image;
a rendered-image up-sampling unit for up-sampling the small rendered image to obtain the corresponding rendered image.
The invention also discloses a terminal comprising an image collector, a processor and a memory, wherein the processor executes instructions stored in the memory to perform the following steps:
obtaining an original RGB image corresponding to a target object through the image collector, and determining the depth map corresponding to the original RGB image;
calculating the blur radius COC of the depth map;
filtering the original RGB image with a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
performing alpha fusion of the rendered image with the original RGB image to obtain a fused image.
In the present invention, the image depth-of-field rendering method comprises: obtaining an original RGB image and a depth map corresponding to a target object; calculating the blur radius COC of the depth map; filtering the original RGB image with a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image; and performing alpha fusion of the rendered image with the original RGB image to obtain a fused image. Because the convolution kernel is computed from the blur radius COC of the depth map and then applied to the pixel values of the original RGB image, every pixel in the resulting rendered image can be traced back to a corresponding pixel in the original RGB image. It is precisely this one-to-one mapping that solves the colour-leakage problem of prior-art depth-of-field rendering.
Moreover, filtering the depth map with a fast guided filter alleviates the depth discontinuities in the depth map, and alpha-fusing the rendered image with the original RGB image to obtain the fused image keeps the region within the depth of field sharp while blurring the region beyond it, so that the depth-of-field imaging effect of the image is improved.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below are merely embodiments of the present invention; persons of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of an image depth-of-field rendering method disclosed in Embodiment 1 of the present invention;
Fig. 2 is a flow chart of an image depth-of-field rendering method disclosed in Embodiment 2 of the present invention;
Fig. 3 is a flow chart of an image depth-of-field rendering method disclosed in Embodiment 3 of the present invention;
Fig. 4 is a structural diagram of an image depth-of-field rendering system disclosed in an embodiment of the present invention;
Fig. 5 is a schematic diagram of a terminal device disclosed in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment 1 of the present invention discloses an image depth-of-field rendering method, shown in Fig. 1, comprising:
Step S11: obtain an original RGB image and a depth map corresponding to a target object.
In this embodiment, there are many ways to obtain the original RGB image and depth map corresponding to the target object. For example, the original RGB image can be captured with the camera function of a mobile phone, and the corresponding depth map computed by correlation from pictures of the object taken from different angles. Alternatively, the depth map corresponding to the original RGB image can be captured directly with a time-of-flight (TOF) camera or a Kinect camera, yielding the various depth values in the picture. It should be understood that these are only some of the ways of obtaining the original RGB image and depth map corresponding to the target object, not all of them.
Step S12: calculate the blur radius COC of the depth map.
The blur radius COC is calculated as
$$\mathrm{COC} = \frac{D\,f\,\lvert U - U_f\rvert}{U\,(U_f - f)},$$
where COC is the blur radius of the depth map, U is the object distance when the RGB image was shot, U_f is the focus distance, f is the focal length of the camera, and D is the diameter of the lens.
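Under the variable definitions above, the blur radius can be evaluated per depth value with a small helper like the one below (the function name and metric units are illustrative; the formula is the standard thin-lens circle-of-confusion relation):

```python
def coc_radius(u, u_f, f, d):
    """Blur-circle size for an object at distance u when a lens of
    focal length f and aperture diameter d is focused at distance u_f
    (all distances in metres)."""
    return d * f * abs(u - u_f) / (u * (u_f - f))

print(coc_radius(2.0, 2.0, 0.05, 0.02))  # 0.0 (a point on the focal plane is sharp)
```

Points farther from the focal plane get a larger COC, which is exactly the behaviour the rendering exploits.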
Step S13: filter the original RGB image with the convolution kernel corresponding to the blur radius COC to obtain the corresponding rendered image.
The convolution kernel corresponding to a pixel p in the original RGB image is calculated as
$$w_p = \frac{1}{k}\,\exp\!\left(-\frac{\lVert p(x,y,z)-q(x,y,z)\rVert^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\lVert I_p(r,g,b)-I_q(r,g,b)\rVert^2}{2\sigma_r^2}\right),$$
where w_p is the weight linking the pixel p of the original RGB image with a neighbouring pixel q at the window centre, k is the normalisation factor obtained by normalising over the blur radius COC, σ_r and σ_s are the standard deviations in the range and spatial domains, I_p(r,g,b) and I_q(r,g,b) are the three-dimensional colour values of pixels p and q, and p(x,y,z) and q(x,y,z) are their three-dimensional position values.
The pixel value of pixel p in the filtered image is then
$$I(p) = I_R(p) * w_p,$$
where I_R(p) is the pixel value of p in the RGB image and w_p is the convolution-kernel weight.
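A direct, unoptimised reading of this kernel for one pixel of a grayscale image might look as follows (the function names and the single-channel simplification are assumptions of the sketch; the three-channel case replaces the scalar range term with a colour distance):

```python
import math

def bilateral_weight(p_pos, q_pos, p_val, q_val, sigma_s, sigma_r):
    """Unnormalised kernel weight: spatial term times range term."""
    ds = sum((a - b) ** 2 for a, b in zip(p_pos, q_pos))
    dr = (p_val - q_val) ** 2
    return math.exp(-ds / (2 * sigma_s ** 2)) * math.exp(-dr / (2 * sigma_r ** 2))

def filter_pixel(img, x, y, radius, sigma_s, sigma_r):
    """Filter one pixel of a 2-D grayscale image with the kernel above;
    the normalisation factor is accumulated over the window."""
    acc = k = 0.0
    for j in range(max(0, y - radius), min(len(img), y + radius + 1)):
        for i in range(max(0, x - radius), min(len(img[0]), x + radius + 1)):
            w = bilateral_weight((x, y), (i, j), img[y][x], img[j][i], sigma_s, sigma_r)
            acc += w * img[j][i]
            k += w
    return acc / k

flat = [[10.0] * 5 for _ in range(5)]
print(filter_pixel(flat, 2, 2, 1, 1.0, 5.0))  # 10.0 (a constant image is unchanged)
```

The range term is what keeps colours from leaking across depth edges: a neighbour with a very different colour contributes almost no weight.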
It can be understood that by calculating the blur radius COC of the depth map and filtering the original RGB image with the convolution kernel corresponding to COC, a corresponding rendered image is obtained; in this way the colour-leakage problem of prior-art depth-of-field images can be solved.
Step S14: perform alpha fusion of the rendered image with the original RGB image to obtain the fused image.
The alpha fusion is calculated as
$$I_{dst} = \alpha\, I_{blurred} + (1 - \alpha)\, I_{original},$$
where I_blurred is the rendered image, I_original is the original RGB image, and the weight α = COC · COC is obtained directly from the blur radius.
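The fusion itself is a one-liner per pixel. In the sketch below the clamp of alpha to [0, 1] is an added safety assumption, since the text derives alpha directly from the COC without bounding it:

```python
def alpha_blend(rendered, original, coc, scale=1.0):
    """Per-pixel alpha fusion with alpha = (COC * scale)^2, clamped to [0, 1]."""
    alpha = min((coc * scale) ** 2, 1.0)
    return alpha * rendered + (1.0 - alpha) * original

print(alpha_blend(0.0, 1.0, 0.0))  # 1.0 (in focus: the original pixel is kept)
```

With COC = 0 the original sharp pixel survives untouched; as COC grows the blurred pixel dominates, which is what keeps the in-focus region sharp and the out-of-focus region blurred.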
It can be understood that in this embodiment the fused image is obtained by alpha-fusing the rendered image with the original RGB image. If an original RGB image of better quality is desired, an ordinary filtering step can also be applied here to remove noise from the original RGB image. In this way a rendered image with a better imaging effect of the target object is obtained, and this step ensures that the region within the depth of field is sharp while the image region beyond the depth of field is blurred, achieving a good depth-of-field rendering effect.
It can be seen that in the present invention an original RGB image and a depth map corresponding to the target object are first obtained; the blur radius COC of the depth map is calculated; the original RGB image is filtered with the convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image; and the rendered image is alpha-fused with the original RGB image to obtain the fused image. It can be understood that by computing the convolution kernel corresponding to the blur radius COC and applying it to the pixel values of the original RGB image, every pixel in the resulting rendered image corresponds to a pixel in the original RGB image; it is precisely this one-to-one mapping that solves the colour-leakage problem of prior-art depth-of-field effects.
Embodiment 2 of the present invention discloses a specific image depth-of-field rendering method. Relative to the previous embodiment, this embodiment further explains and optimises the technical solution; see Fig. 2. The method comprises:
Step S21: perform image acquisition on the target object to obtain a first RGB image and a second RGB image, and determine the corresponding depth map from the first and second RGB images.
In this embodiment, the original RGB images of the target object can be obtained with a camera device, for example a binocular camera that acquires a first RGB image and a second RGB image of the target object. It should be understood that "first" and "second" merely indicate that the two RGB images are taken from different positions: for instance, when photographing the target object, the two camera units of the binocular device may be placed to the left and right of the target object, or at other different positions, which is not limited here.
The binocular camera device can specifically be installed in hand-held smart devices, drones, robots, computers, smart televisions and similar equipment.
It should be noted that in this embodiment the depth map corresponding to the RGB images is computed from the spatial relationship between the first RGB image and the second RGB image; the computation methods include but are not limited to stereo vision and structure from motion.
Of course, the depth map corresponding to the RGB image can also be obtained without computation, for example by a camera that captures depth maps directly: a time-of-flight (TOF) camera or a Kinect camera can directly capture the depth map corresponding to the original RGB image, saving the computation above.
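For the binocular case, depth is typically recovered from disparity via the standard pinhole-stereo relation, which the patent does not spell out; the helper below is therefore an illustrative assumption, not the patent's method:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole-stereo relation: depth = f * B / disparity,
    with focal length f in pixels and camera baseline B in metres."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(100.0, 1000.0, 0.1))  # 1.0 metre
```

Larger disparity between the first and second RGB images means a closer point, which is why two views taken from different positions suffice to fill a depth map.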
Step S22: calculate the blur radius COC of the depth map.
Specifically, to reduce the amount of computation, in this embodiment the process of calculating the blur radius COC of the depth map can comprise:
down-sampling the depth map to obtain a corresponding small depth map, then calculating the blur radius COC corresponding to the small depth map.
Specifically, in this embodiment the depth map is shrunk by down-sampling to obtain the small depth map. It can be understood that in this way the number of pixels to be processed is reduced, the processing speed is increased and the consumption of running memory is decreased. The down-sampling method includes, but is not limited to, Gaussian-pyramid down-sampling or directly down-sampling the depth map to obtain the corresponding small depth map.
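One level of down-sampling can be as simple as block averaging; the 2x2 box average below is a minimal stand-in for one Gaussian-pyramid level (the real pyramid would blur with a Gaussian kernel before decimating):

```python
def downsample2x(img):
    """Halve a 2-D map by averaging each 2x2 block of values."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
              + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)] for r in range(h)]

print(downsample2x([[1.0, 1.0, 3.0, 3.0],
                    [1.0, 1.0, 3.0, 3.0]]))  # [[1.0, 3.0]]
```

Each halving cuts the pixel count by four, which is where the memory and speed savings come from.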
The blur radius COC of the small depth map is calculated as
$$\mathrm{COC} = \frac{D\,f\,\lvert U - U_f\rvert}{U\,(U_f - f)},$$
where U is the object distance when the RGB image was shot, U_f is the focus distance, f is the focal length of the camera, and D is the diameter of the lens.
Further, to improve the precision of the small depth map, a fast guided filter can be applied to it. The guided filter computes, for each pixel,
$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k,$$
where i is the pixel index, k indexes a local window ω_k of radius r, I_i is the value of the i-th pixel of the small depth map, q_i is its filtered output, and a_k and b_k are the linear filter parameters of the k-th window, with
$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\mu_k,$$
where μ_k and σ_k² are the mean and variance of the pixel values in the k-th window and ε is a regularisation parameter controlling smoothness. After the filtering, the output pixel value is $q_i = \bar a_i I_i + \bar b_i$, where $\bar a_i$ and $\bar b_i$ are the averages of a_k and b_k over all windows covering pixel i.
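A self-guided 1-D version of these formulas, using the small depth map as its own guide, can be sketched as follows; the brute-force windowing is for clarity only (a fast guided filter would use box filters and subsampling to reach linear time):

```python
def guided_filter_1d(signal, radius, eps):
    """Self-guided filter on a 1-D signal: fit q = a*I + b per window
    with a = var/(var + eps), b = (1 - a)*mean, then average the
    coefficients over all windows covering each sample."""
    n = len(signal)
    a_sum, b_sum, cnt = [0.0] * n, [0.0] * n, [0] * n
    for k in range(n):
        lo, hi = max(0, k - radius), min(n, k + radius + 1)
        win = signal[lo:hi]
        mu = sum(win) / len(win)
        var = sum((v - mu) ** 2 for v in win) / len(win)
        a = var / (var + eps)
        b = (1.0 - a) * mu
        for i in range(lo, hi):
            a_sum[i] += a
            b_sum[i] += b
            cnt[i] += 1
    return [a_sum[i] / cnt[i] * signal[i] + b_sum[i] / cnt[i] for i in range(n)]

print(guided_filter_1d([5.0] * 5, 1, 0.1))  # [5.0, 5.0, 5.0, 5.0, 5.0]
```

Flat regions (variance near zero) are averaged towards the local mean, while strong depth edges (high variance) are kept almost intact, which is exactly the edge-preserving behaviour that fixes depth discontinuities without smearing object boundaries.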
It can be understood that in this embodiment filtering the small depth map with a fast guided filter improves its precision and, through this technical means, overcomes the depth-discontinuity defect of prior-art depth maps. The order of this step within the whole image-processing pipeline can of course be adjusted according to the final goal of the practical operation and is not limited here.
Step S23: down-sample any one of the original RGB images to obtain a corresponding small RGB image.
Specifically, in this embodiment the original RGB image is shrunk by down-sampling to obtain the corresponding small RGB image. It can be understood that this is done so that its size matches the result of the preceding steps and subsequent processing of the image is convenient; the down-sampling method can refer to the corresponding step in S22 and is not repeated here.
Step S24: filter the small RGB image with the convolution kernel corresponding to the blur radius COC to obtain a small rendered image.
The convolution kernel corresponding to a pixel p in the small RGB image is calculated as
$$w_p = \frac{1}{k}\,\exp\!\left(-\frac{\lVert p(x,y,z)-q(x,y,z)\rVert^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\lVert I_p(r,g,b)-I_q(r,g,b)\rVert^2}{2\sigma_r^2}\right),$$
where w_p is the weight linking the pixel p of the small RGB image with a neighbouring pixel q at the window centre, k is the normalisation factor obtained by normalising over the blur radius COC, σ_r and σ_s are the standard deviations in the range and spatial domains, I_p(r,g,b) and I_q(r,g,b) are the three-dimensional colour values of pixels p and q, and p(x,y,z) and q(x,y,z) are their three-dimensional position values.
The pixel value of pixel p in the small rendered image is then calculated as
$$I(p) = I_R(p) * w_p,$$
where I_R(p) is the pixel value of point p in the original RGB image and w_p is the convolution-kernel weight.
It can be understood that by computing the convolution kernel corresponding to the blur radius COC and applying it to the pixel values of the original RGB image, a corresponding rendered image is obtained in which every pixel corresponds to a pixel in the original RGB image; it is precisely this one-to-one mapping that solves the colour-leakage problem of prior-art depth-of-field rendering.
Step S25: up-sample the small rendered image to obtain the rendered image.
Specifically, after the small rendered image is obtained, it must be enlarged back to the size corresponding to the original RGB image so that it can be fused with the earlier RGB image and a depth-of-field rendering effect of better image quality obtained; here this is done by up-sampling the small rendered image. Of course, other enlargement methods can also be used to obtain the rendered image, and the method of enlarging the picture is not limited here.
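Nearest-neighbour enlargement is the simplest such up-sampling; the sketch below doubles each dimension (bilinear up-sampling would be an equally valid, smoother choice):

```python
def upsample2x(img):
    """Nearest-neighbour 2x enlargement of a 2-D map: each value is
    repeated to fill a 2x2 block in the output."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

print(upsample2x([[1, 2]]))  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Applied after the small-image filtering, this restores the rendered image to the resolution of the original RGB image so the alpha fusion can be done pixel for pixel.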
Step S26: perform alpha fusion of the rendered image with the original RGB image to obtain the fused image.
The alpha fusion is calculated as
$$I_{dst} = \alpha\, I_{blurred} + (1 - \alpha)\, I_{original},$$
where I_blurred is the up-sampled rendered image, I_original is the original RGB image, and the weight α = COC · COC is obtained directly from the blur radius.
It can be understood that fusing any one of the original RGB images with the rendered image yields the fused image, which ensures that the region of the image within the depth of field is sharp and the region beyond the depth of field is blurred, so that a rendered image with a better rendering effect is obtained.
Embodiment 3 of the present invention discloses an image depth-of-field rendering method. Relative to the previous embodiment, the present invention further explains and optimises the technical solution. Fig. 3 shows a flow chart of the proposed image depth-of-field rendering algorithm based on a multi-camera acquisition device. Specifically:
Step S31: perform image acquisition on the target object to obtain K RGB images, where K is an integer greater than or equal to 3, and determine the corresponding depth map from the K RGB images.
It can be understood that in this embodiment a multi-view camera device comprising K camera units performs image acquisition on the target object to obtain K RGB images. Specifically, this is done to obtain more accurate image depth information in the depth map: collecting K RGB images allows the depth maps corresponding to the RGB images used for depth-of-field rendering to be mutually corrected.
The multi-camera acquisition device can be installed at any direction and position relative to the target object to be photographed, as required by the purpose of the practical operation.
Step S32: calculate the blur radius COC of the depth map.
Specifically, in this embodiment the process of calculating the blur radius COC of the depth map can comprise:
down-sampling the depth map to obtain a corresponding small depth map, then calculating the blur radius COC corresponding to the small depth map.
The blur radius COC of the small depth map is calculated as
$$\mathrm{COC} = \frac{D\,f\,\lvert U - U_f\rvert}{U\,(U_f - f)},$$
where U is the object distance when the RGB image was shot, U_f is the focus distance, f is the focal length of the camera, and D is the diameter of the lens.
It can be understood that during image processing the size of the image is usually adjusted to improve the operational efficiency of the program; shrinking the picture by down-sampling is such a technical means, reducing the amount of computation at run time.
Further, to improve the precision of the small depth map, a fast guided filter can also be applied to it, performing the corresponding filtering to raise its precision.
It can be understood that in this embodiment filtering the small depth map with a fast guided filter improves its precision and, through this technical means, overcomes the depth-discontinuity defect of prior-art depth maps. The order of this step within the whole image-processing pipeline can of course be adjusted according to the final goal of the practical operation and is not limited here.
Step S33: down-sample any one of the original RGB images to obtain a corresponding small RGB image.
Specifically, in this embodiment the original RGB image is shrunk by down-sampling to obtain the corresponding small RGB image. It can be understood that this is done so that its size matches the result of the preceding steps and subsequent processing of the image is convenient; the down-sampling method can refer to the corresponding step in S22 and is not repeated here.
Step S34: filter the small RGB image with the convolution kernel corresponding to the blur radius COC to obtain a small rendered image.
It can be understood that by computing the convolution kernel corresponding to the blur radius COC and applying it to the pixel values of the small RGB image, a corresponding small rendered image is obtained in which every pixel corresponds to a pixel in the small RGB image; it is precisely this one-to-one mapping that solves the colour-leakage problem of prior-art depth-of-field rendering.
Step S35: up-sample the small rendered image to obtain the rendered image.
It can be understood that after the small rendered image is obtained, it must be enlarged back to the size corresponding to the original RGB image so that it can be fused with the earlier RGB image and a depth-of-field rendering effect of better image quality obtained; here this is done by up-sampling the small rendered image. Of course, other enlargement methods can also be used to obtain the rendered image, and the method of enlarging the picture is not limited here.
Step S36: perform alpha fusion of the rendered image with the original RGB image to obtain the fused image.
It can be understood that fusing the RGB image used for depth-of-field rendering among the original RGB images with the rendered image yields the fused image, which ensures that the region of the image within the depth of field is sharp and the region beyond the depth of field is blurred, so that a rendered image with a better rendering effect is obtained.
Further, to improve the running speed of the program, a general-purpose graphics processing unit (GPGPU, General Purpose Graphics Processing Unit) can also be used to accelerate the convolution-kernel computation in parallel. It can be understood that the method provided by this embodiment of the present invention not only guarantees the robustness of the imaging but also speeds up the image-rendering computation. The method can also be widely applied to other mobile platforms, guaranteeing the depth-of-field rendering effect of the image while improving the user's photographing experience.
Correspondingly, referring to FIG. 4, an embodiment of the present invention also discloses an image depth-of-field rendering system, which includes:
an image acquisition module 41, configured to obtain an original RGB image and a depth map corresponding to a target object.
In this embodiment, there are many methods for obtaining the original RGB image and depth map corresponding to the target object. For example, the original RGB image of the target object may be obtained through the camera function of a mobile phone, and the depth map corresponding to the original RGB image may be obtained by shooting pictures of the object from different angles and performing the relevant calculation; naturally, the depth map corresponding to the original RGB image may also be obtained by shooting with a time-of-flight (TOF) camera or a Kinect camera, so as to obtain the various kinds of depth information in the picture. It can be understood that the above methods are only some of the ways of obtaining the original RGB image and depth map corresponding to the target object, not all of them.
A blur radius calculation module 42, configured to calculate the blur radius COC of the above depth map.
Wherein, the calculation formula of the blur radius COC is:

COC = |D · f · (U − U_f)| / (U · (U_f − f))

In the formula, COC is the blur radius, U is the object distance when the RGB image is shot, U_f is the focus distance, f is the focal length of the camera, and D is the diameter of the camera lens.
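Under these definitions, the blur radius can be computed with the standard thin-lens relation, sketched below (variable names follow the formula above; this is an illustrative helper, not code from the patent):

```python
def circle_of_confusion(U, U_f, f, D):
    """Blur radius (circle of confusion) from the thin-lens model.
    U: object distance, U_f: focus distance, f: focal length,
    D: lens (aperture) diameter -- the variables defined above."""
    return abs(D * f * (U - U_f) / (U * (U_f - f)))
```

Points exactly at the focus distance yield COC = 0 (perfectly sharp), and the blur radius grows as the object distance departs from the focus distance.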
An RGB image filtering module 43, configured to filter the above original RGB image using the convolution kernel corresponding to the above blur radius COC, to obtain the corresponding rendered image.
Wherein, the calculation formula of the convolution kernel corresponding to a pixel p in the above original RGB image is:

w_p = k · exp( −‖p(x,y,z) − q(x,y,z)‖² / (2σ_s²) − ‖I_p(r,g,b) − I_q(r,g,b)‖² / (2σ_r²) )

In the formula, w_p represents the weight of the pixel p in the above original RGB image, taken with the pixel q as center; k is the normalization factor obtained after normalizing the blur radius COC; σ_r and σ_s are the standard deviations in the range domain and the spatial domain, respectively; I_p(r,g,b) is the three-dimensional color value of pixel p; I_q(r,g,b) is the three-dimensional color value of pixel q; p(x,y,z) is the three-dimensional position value of pixel p; and q(x,y,z) is the three-dimensional position value of pixel q.
The pixel value of each pixel in the filtered image is then calculated as:

I(p) = I_R(p) · w_p

In the formula, I_R(p) is the pixel value at the point p of the original RGB image, and w_p is the weight of the convolution kernel.
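A minimal sketch of the kernel weight defined above follows (NumPy; here the COC-derived normalization factor k is passed in as a plain argument rather than computed from the blur radius, and the function name is illustrative):

```python
import numpy as np

def kernel_weight(I_p, I_q, p, q, sigma_r, sigma_s, k=1.0):
    """Weight w_p of the convolution kernel defined above: a spatial
    Gaussian on the 3-D position difference times a range Gaussian on
    the 3-D colour difference, scaled by the normalization factor k
    (assumed to be derived from the COC elsewhere)."""
    d_space = np.sum((np.asarray(p, float) - np.asarray(q, float)) ** 2)
    d_range = np.sum((np.asarray(I_p, float) - np.asarray(I_q, float)) ** 2)
    return k * np.exp(-d_space / (2 * sigma_s ** 2)
                      - d_range / (2 * sigma_r ** 2))
```

Because the range term suppresses weights across large colour differences, foreground colours contribute little to background pixels and vice versa, which is what prevents colour leakage across depth boundaries.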
It can be understood that, by calculating the blur radius COC of the depth map and filtering the original RGB image with the convolution kernel corresponding to the blur radius COC, the corresponding rendered image is obtained; in this way, the problem of color leakage in depth-of-field images in the prior art can be solved.
An image fusion module 44, configured to perform alpha fusion on the above rendered image and the above RGB image, to obtain the fused image.
Wherein, the calculation formula of the alpha fusion is:

I_dst = alpha · I_blurred + (1 − alpha) · I_original

In the formula, I_blurred is the rendered image, I_original is the original RGB image, and alpha = COC · COC is the weight, obtained directly from the blur radius.
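The alpha fusion above can be sketched per pixel as follows (NumPy; clipping alpha to [0, 1] is an added safeguard that the text does not state explicitly):

```python
import numpy as np

def alpha_blend(rendered, original, coc):
    """Per-pixel alpha fusion: I_dst = a*I_blurred + (1-a)*I_original,
    with a = COC*COC per the formula above. The clip to [0, 1] is an
    assumption added here, not stated in the text."""
    a = np.clip(coc * coc, 0.0, 1.0)
    if rendered.ndim == 3:          # broadcast the weight over colour channels
        a = a[..., None]
    return a * rendered + (1.0 - a) * original
```

Where COC is zero (in-focus regions) the original sharp pixels pass through unchanged, and where COC is large the blurred rendering dominates, which is exactly the sharp-inside / blurred-outside behaviour described above.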
In this embodiment, any RGB image among the original RGB images is fused with the rendered image to obtain the fused image, which ensures that regions of the image within the depth of field are sharp while regions beyond the depth of field are blurred, so that a rendered image with a better rendering effect is obtained.
Specifically, the above image acquisition module 41 may include a first image acquisition unit and a first depth map determining unit; wherein,
the first image acquisition unit is configured to perform image acquisition on the target object to obtain a first RGB image and a second RGB image;
the first depth map determining unit is configured to determine the corresponding depth map using the first RGB image and the second RGB image.
Further, in order to improve the accuracy of the information in the above depth map, the above image acquisition module 41 may include a second image acquisition unit and a second depth map determining unit; wherein,
the second image acquisition unit is configured to perform image acquisition on the above target object to obtain K RGB images, where K is an integer greater than or equal to 3;
the second depth map determining unit is configured to determine the corresponding depth map using the above K RGB images.
It can be understood that, by obtaining multiple RGB images and using the differences in the imaging spatial relations of the target object, the depth information in the depth image calculated from two images can be corrected, so that the depth information in the depth map becomes more accurate, thereby improving the precision of the algorithm.
Specifically, the RGB image filtering module 43 includes an RGB image down-sampling unit, an RGB small-image filtering unit and a rendered-small-image up-sampling unit; wherein,
the RGB image down-sampling unit is configured to down-sample any RGB image among the above original RGB images to obtain the corresponding small RGB image;
the RGB small-image filtering unit is configured to filter the above small RGB image using the convolution kernel corresponding to the above blur radius COC and the small RGB image, to obtain the rendered small image;
the rendered-small-image up-sampling unit is configured to up-sample the above rendered small image to obtain the corresponding rendered image.
Specifically, the image fusion module 44 includes an image fusion unit, wherein:
the image fusion unit is configured to perform alpha fusion on the above rendered image and any RGB image among the original RGB images, to obtain the above fused image.
Further, the above blur radius calculation module 42 includes a depth map down-sampling unit and a blur radius calculation unit, wherein:
the depth map down-sampling unit is configured to down-sample the above depth map to obtain the corresponding small depth image;
the blur radius calculation unit is configured to calculate the blur radius COC corresponding to the above small depth image.
It can be understood that, during image processing, in order to improve the running efficiency of the program, the size of the image may be adjusted accordingly; clearly, such a technical means can improve the running efficiency of the program, so the picture is reduced by means of down-sampling in order to reduce the amount of calculation during the program's operation.
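The down-sampling used to reduce the amount of calculation can be sketched, for example, with block averaging (an illustrative choice; the embodiment does not fix a particular reduction method):

```python
import numpy as np

def downsample_mean(img, factor):
    """Reduce a single-channel image by an integer factor via block
    averaging -- one illustrative down-sampling choice; the
    embodiment does not mandate a particular reduction method."""
    H, W = img.shape
    H2, W2 = H - H % factor, W - W % factor   # crop to a multiple of factor
    blocks = img[:H2, :W2].reshape(H2 // factor, factor,
                                   W2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Filtering the reduced image costs roughly 1/factor² of the full-resolution work, which is the efficiency gain the paragraph above refers to.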
Further, the image depth-of-field rendering system provided by this embodiment of the present invention also includes a depth small-image filtering module 45; wherein:
the depth small-image filtering module 45 is configured to, before the above blur radius calculation module calculates the blur radius COC corresponding to the small depth image, perform corresponding filtering processing on the small depth image using a fast guided filter, so as to improve the precision of the above small depth image.
Wherein, the calculation formula of the fast guided filter is:

q_i = a_k · I_i + b_k,  for all i ∈ ω_k

In the formula, i is the pixel index; k denotes the index of the local window ω_k of radius r; I_i is the pixel value of the i-th pixel in the small depth image; q_i represents the filtered output value of the i-th pixel in the small depth image; and a_k and b_k are respectively the linear filter parameters of the k-th window, wherein

a_k = σ_k² / (σ_k² + ε)  and  b_k = (1 − a_k) · μ_k

In the formula, μ_k and σ_k² are respectively the mean and variance of the pixel values in the k-th window, and ε is the regularization parameter controlling the smoothness. After the above filtering processing, the output pixel value is q_i = ā_i · I_i + b̄_i, where ā_i and b̄_i are respectively the averages of a_k and b_k over all windows containing pixel i.
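The self-guided filtering of the small depth image described above can be sketched as follows (NumPy; a naive box mean replaces the O(1) box filtering that makes the guided filter "fast", so this shows the mathematics, not the speed, and the function names are illustrative):

```python
import numpy as np

def box_mean(img, r):
    """Naive mean filter over (2r+1)x(2r+1) windows with edge padding.
    A real 'fast' guided filter would use an O(1) box filter here."""
    k = 2 * r + 1
    pad = np.pad(img.astype(float), r, mode='edge')
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter_self(I, r, eps):
    """Self-guided filter on the small depth image, following the
    formulas above: a_k = var/(var+eps), b_k = (1-a_k)*mean, and the
    output combines the window-averaged coefficients a_bar, b_bar."""
    mu = box_mean(I, r)
    var = box_mean(I * I, r) - mu * mu
    a = var / (var + eps)
    b = (1.0 - a) * mu
    return box_mean(a, r) * I + box_mean(b, r)
```

In flat regions the variance is small, so a ≈ 0 and the output is smoothed toward the local mean; at strong edges the variance is large, so a ≈ 1 and the edge is preserved, which is why this step repairs discontinuities in the depth map without blurring depth boundaries.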
It can be understood that, in this embodiment, filtering the small depth image with the fast guided filter can improve the precision of the small depth image, and this technical means can overcome the defect of discontinuous depth maps in the prior art. Of course, the order of the operation steps within the overall image processing can be adjusted according to the final purpose to be achieved in practical operation, and is not limited here.
For the more detailed working processes of the above modules and units, reference may be made to the corresponding contents disclosed in the preceding embodiments, which will not be repeated here.
Correspondingly, referring to FIG. 5, an embodiment of the present invention also discloses a terminal, which includes an image acquisition device 51, a processor 52 and a memory 53; wherein the above processor 52 performs the following steps by calling the instructions stored in the above memory 53:
obtaining the original RGB image corresponding to the target object through the above image acquisition device 51, and determining the depth map corresponding to the above original RGB image;
calculating the blur radius COC of the above depth map;
filtering the above original RGB image using the convolution kernel corresponding to the above blur radius COC, to obtain the corresponding rendered image;
performing alpha fusion on the above rendered image and the above original RGB image to obtain the fused image.
It can be understood that the terminal in this embodiment includes, but is not limited to, a camera or a video camera, and the instructions in the memory 53 are likewise not limited to the steps listed above; for the more detailed working process, reference may be made to the corresponding contents disclosed in the preceding embodiments, which will not be repeated here. Of course, in order to improve the running efficiency of the processor 52, related third-party software may also be called, which is likewise not limited here.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply that any such actual relation or order exists between these entities or operations. Moreover, the terms "comprising", "including" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The image depth-of-field rendering method, system and terminal provided by the present invention have been described in detail above. Specific examples are used herein to set forth the principles and embodiments of the present invention, and the explanation of the above embodiments is only intended to help understand the method of the present invention and its core idea; meanwhile, for those of ordinary skill in the art, there will be changes in specific embodiments and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. An image depth-of-field rendering method, characterized by comprising:
obtaining an original RGB image and a depth map corresponding to a target object;
calculating the blur radius COC of the depth map;
filtering the original RGB image using a convolution kernel corresponding to the blur radius COC, to obtain a corresponding rendered image;
performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
2. The method according to claim 1, characterized in that the process of obtaining the original RGB image and depth map corresponding to the target object comprises:
performing image acquisition on the target object to obtain a first RGB image and a second RGB image;
determining the corresponding depth map using the first RGB image and the second RGB image.
3. The method according to claim 1, characterized in that the process of obtaining the original RGB image and depth map corresponding to the target object comprises:
performing image acquisition on the target object to obtain K RGB images, wherein K is an integer greater than or equal to 3;
determining the corresponding depth map using the K RGB images.
4. The method according to claim 1, characterized in that the process of filtering the original RGB image using the convolution kernel corresponding to the blur radius COC to obtain the corresponding rendered image comprises:
down-sampling any RGB image among the original RGB images to obtain a corresponding small RGB image;
filtering the small RGB image using the convolution kernel corresponding to the blur radius COC and the small RGB image, to obtain a rendered small image;
up-sampling the rendered small image to obtain the rendered image.
5. The method according to claim 4, characterized in that the process of performing alpha fusion on the rendered image and the original RGB image to obtain the fused image comprises:
performing alpha fusion on the rendered image and any RGB image among the original RGB images to obtain the fused image.
6. The method according to any one of claims 1 to 5, characterized in that the process of calculating the blur radius COC of the depth map comprises:
down-sampling the depth map to obtain a corresponding small depth image;
calculating the blur radius COC corresponding to the small depth image.
7. The method according to claim 6, characterized in that, before the process of calculating the blur radius COC corresponding to the small depth image, the method further comprises:
performing corresponding filtering processing on the small depth image using a fast guided filter, so as to improve the precision of the small depth image.
8. An image depth-of-field rendering system, characterized by comprising:
an image acquisition module, configured to obtain an original RGB image and a depth map corresponding to a target object;
a blur radius calculation module, configured to calculate the blur radius COC of the depth map;
an RGB image filtering module, configured to filter the original RGB image using a convolution kernel corresponding to the blur radius COC, to obtain a corresponding rendered image;
an image fusion module, configured to perform alpha fusion on the rendered image and the RGB image to obtain a fused image.
9. The image depth-of-field rendering system according to claim 8, characterized in that the RGB image filtering module comprises:
an RGB image down-sampling unit, configured to down-sample any RGB image among the original RGB images to obtain a corresponding small RGB image;
an RGB small-image filtering unit, configured to filter the small RGB image using the convolution kernel corresponding to the blur radius COC and the small RGB image, to obtain a rendered small image;
a rendered-small-image up-sampling unit, configured to up-sample the rendered small image to obtain the corresponding rendered image.
10. A terminal, characterized by comprising an image acquisition device, a processor and a memory; wherein the processor performs the following steps by calling instructions stored in the memory:
obtaining an original RGB image corresponding to a target object through the image acquisition device, and determining a depth map corresponding to the original RGB image;
calculating the blur radius COC of the depth map;
filtering the original RGB image using a convolution kernel corresponding to the blur radius COC, to obtain a corresponding rendered image;
performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
CN201710777729.3A 2017-08-31 2017-08-31 Image depth-of-field rendering method, system and terminal Pending CN107633497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710777729.3A CN107633497A (en) 2017-08-31 2017-08-31 Image depth-of-field rendering method, system and terminal


Publications (1)

Publication Number Publication Date
CN107633497A true CN107633497A (en) 2018-01-26

Family

ID=61101784



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693527A (en) * 2011-02-28 2012-09-26 索尼公司 Method and apparatus for performing a blur rendering process on an image
CN102968814A (en) * 2012-11-22 2013-03-13 华为技术有限公司 Image rendering method and equipment
CN104169966A (en) * 2012-03-05 2014-11-26 微软公司 Generation of depth images based upon light falloff
US20160286200A1 (en) * 2015-03-25 2016-09-29 Electronics And Telecommunications Research Institute Method of increasing photographing speed of photographing device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
曹彦珏 et al.: "Real-time depth-of-field simulation and application based on post-processing", Journal of Computer Applications *
杨真 et al.: "Real-time depth-of-field rendering method based on fast guided filtering", Chinese Journal of Stereology and Image Analysis *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146941A (en) * 2018-06-04 2019-01-04 成都通甲优博科技有限责任公司 A kind of depth image optimization method and system based on net region division
WO2020038407A1 (en) * 2018-08-21 2020-02-27 腾讯科技(深圳)有限公司 Image rendering method and apparatus, image processing device, and storage medium
US11295528B2 (en) 2018-08-21 2022-04-05 Tencent Technology (Shenzhen) Company Limited Image rendering method and apparatus, image processing device, and storage medium
CN110889410A (en) * 2018-09-11 2020-03-17 苹果公司 Robust use of semantic segmentation in shallow depth of field rendering
CN110889410B (en) * 2018-09-11 2023-10-03 苹果公司 Robust use of semantic segmentation in shallow depth of view rendering
CN109859136A (en) * 2019-02-01 2019-06-07 浙江理工大学 A method of Fuzzy Processing being carried out to image in depth of field rendering
CN112686939A (en) * 2021-01-06 2021-04-20 腾讯科技(深圳)有限公司 Depth image rendering method, device and equipment and computer readable storage medium
CN112686939B (en) * 2021-01-06 2024-02-02 腾讯科技(深圳)有限公司 Depth image rendering method, device, equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180126