CN110738677A - Full-definition imaging method and device for camera and electronic equipment

Full-definition imaging method and device for camera and electronic equipment

Info

Publication number
CN110738677A
CN110738677A (application CN201910893392.1A)
Authority
CN
China
Prior art keywords
image
full
depth
points
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910893392.1A
Other languages
Chinese (zh)
Inventor
王贵锦 (Guijin Wang)
范书沛 (Shupei Fan)
李文涛 (Wentao Li)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910893392.1A
Publication of CN110738677A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Abstract

An embodiment of the invention provides a camera full-definition imaging method and device and an electronic device. The method comprises: extracting the real edge points in a focus stack image of a target object with a maximum gradient flow operator, and calculating the depth values of the real edge points; extracting structural edge points based on the real edge points, and propagating the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points, so as to obtain a global depth map; obtaining a preliminarily fused full-focus image of the target object based on the global depth map, and removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points, so as to obtain a final full-focus image; and performing camera full-definition imaging of the target object based on the final full-focus image.

Description

Full-definition imaging method and device for camera and electronic equipment
Technical Field
The invention relates to the technical field of full-focus image fusion, and in particular to a camera full-definition imaging method and device and an electronic device.
Background
Full-focus image fusion merges a series of images focused at different depths (a focus stack) into a single image that is in focus at every position, thereby extending the depth of field. Focus stacks are widely used for depth-of-field extension in fields such as depth estimation, medical imaging, three-dimensional scene understanding, and biometric recognition.
One class of fusion methods operates in a transform domain. However, such methods require an image transformation and are computationally complex; they are also very sensitive to the sparsity of the transform domain, so slight jitter in the coefficient estimates produces noise at all positions of the final full-focus image.
Methods based on depth estimation can avoid these problems. They generally comprise the following steps: first, different sharpness measurement operators are designed to extract the gradient values at image edges; then, the position of the maximum gradient is selected as the depth value of each edge point; next, different propagation methods are designed to spread the depth values from the sparse edge points to the whole image, yielding a depth value for every pixel; finally, according to the resulting dense global depth map, the pixel values of the images at the corresponding depth positions in the focus stack are selected point by point and fused into a full-focus image that is sharp everywhere.
However, in the above depth-estimation-based methods, the accuracy of the edge depth estimation and the robustness of the depth propagation algorithm have a decisive influence on the final result. Moreover, in these methods the full-focus image and the depth map correspond point by point. In an edge region where the depth jumps (a structural edge), the front surface is blurred when the back surface is in focus, and the spread blurred edge of the front surface occludes the sharp texture near the edge on the back surface. When full-focus fusion is performed according to the true depth values, the point-by-point algorithm retains on the back surface the residual blurred edge spread from the front surface, degrading the quality of the obtained full-focus image and further impairing the imaging effect.
Disclosure of Invention
In order to overcome the above problems or at least partially solve them, embodiments of the present invention provide a camera full-definition imaging method and device and an electronic device that effectively eliminate the adverse effects caused by blurred edges, thereby effectively improving the quality of the full-focus image and the imaging effect.
In a first aspect, an embodiment of the invention provides a camera full-definition imaging method, including:
extracting the real edge points in a focus stack image of a target object by using a maximum gradient flow operator, and calculating the depth values of the real edge points;
extracting structural edge points based on the real edge points, and propagating the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points, so as to obtain a global depth map;
obtaining a preliminarily fused full-focus image of the target object based on the global depth map, and removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points, so as to obtain a final full-focus image; and
performing camera full-definition imaging of the target object based on the final full-focus image.
In a second aspect, an embodiment of the present invention provides a camera full-definition imaging device, including:
an extraction module, configured to extract the real edge points in a focus stack image of a target object by using a maximum gradient flow operator, and to calculate the depth values of the real edge points;
a propagation module, configured to extract structural edge points based on the real edge points, and to propagate the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points, so as to obtain a global depth map;
a deblurring module, configured to obtain a preliminarily fused full-focus image of the target object based on the global depth map, and to remove the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points, so as to obtain a final full-focus image; and
an imaging module, configured to perform camera full-definition imaging of the target object based on the final full-focus image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the camera full-definition imaging method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium having stored thereon computer instructions which, when executed by a computer, implement the steps of the camera full-definition imaging method according to the first aspect.
According to the camera full-definition imaging method and device and the electronic device provided by the embodiments of the invention, the structural edge points of the focus stack image are extracted, and the blur patterns of the structural edge regions are removed from the preliminarily fused full-focus image according to the distribution of the structural edge points. The adverse effects caused by blurred edges can thus be effectively eliminated, effectively improving the quality of the full-focus image and the imaging effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings illustrate some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a full-resolution camera imaging method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a full-resolution camera imaging method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a full-resolution camera imaging device according to an embodiment of the present invention;
fig. 4 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. It is obvious that the described embodiments are some, but not all, of the embodiments of the present invention.
Aiming at the problems in the prior art that high demands are placed on the accuracy of the edge depth estimation and on the robustness of the depth propagation algorithm, while the quality of the obtained full-focus image remains low, the embodiments of the present invention extract the structural edge points of the focus stack image and remove the blur patterns of the structural edge regions from the preliminarily fused full-focus image according to the distribution of those points. This effectively eliminates the adverse effects caused by blurred edges, thereby effectively improving the quality of the full-focus image and the imaging effect. Embodiments of the present invention are described and illustrated below with reference to various embodiments.
Fig. 1 is a schematic flowchart of a camera full-definition imaging method according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
s101, extracting real edge points in the focusing stack image of the target object by adopting a maximum gradient flow operator, and calculating the depth value of the real edge points.
Specifically, a maximum gradient flow operator is designed to obtain the gradient distribution around the edge points of the image. On this basis, the divergence of the maximum gradient flow operator is computed, the edge points whose divergence is greater than zero are extracted as the real edge points, and the position of the maximum gradient is taken as the depth value corresponding to each real edge point.
S102: extracting the structural edge points based on the real edge points, and propagating the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points, so as to obtain a global depth map.
On the basis of the obtained real edge points, the embodiment of the invention further extracts the structural edge points from the real edge points, and at the same time obtains the corresponding texture edge points.
S103: obtaining a preliminarily fused full-focus image of the target object based on the global depth map, and removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points, so as to obtain a final full-focus image.
It can be understood that, on the basis of the global depth map, a preliminarily fused full-focus image of the target object is first obtained; the blur patterns of the structural edge regions are then removed from this image according to the global depth map and the distribution of the structural edge points, yielding the final full-focus image.
And S104, performing camera full-definition imaging on the target object based on the final full-focus image.
It can be understood that, after the final full-focus image of the target object is obtained, the target object is imaged sharply according to this full-focus image, finally producing an image in which every position of the target object is sharp.
According to the camera full-definition imaging method provided by the embodiment of the invention, the structural edge points of the focus stack image are extracted, and the blur patterns in the structural edge regions are removed from the preliminarily fused full-focus image according to the distribution of the structural edge points, so that the adverse effects caused by blurred edges are effectively eliminated, the quality of the full-focus image is effectively improved, and the imaging effect is improved.
Optionally, on the basis of the foregoing embodiments, the step of extracting the real edge points in the focus stack image of the target object and calculating the depth values of the real edge points specifically includes:
designing the following maximum gradient flow operator to obtain the gradient distribution at the edges of the focus stack images:

[Formula image BDA0002209477080000061, defining the maximum gradient flow operator; not legible in this text extraction.]

where $i$, $j$, $k$ respectively denote image indices within the focus stack, $G_i$, $G_j$, $G_k$ respectively denote the gradient values of the $i$-th, $j$-th, and $k$-th images in the focus stack, and $(x, y)$ denotes the two-dimensional coordinate position of a pixel point in the focus stack images;
extracting, according to the following formulas, the edge points at which the divergence of the maximum gradient flow operator is greater than 0 as the real edge points, and taking the position of the maximum gradient as the depth value of each real edge point:

$\nabla \cdot \mathrm{MGF}(x, y) > 0$

$d(x, y) = \arg\max_{k \in \{1, \dots, n\}} G_k(x, y)$

where $d(x, y)$ denotes the depth value corresponding to the pixel point $(x, y)$, $\nabla \cdot$ denotes the divergence operator, $\mathrm{MGF}$ denotes the maximum gradient flow operator, and $n$ denotes the number of images in the focus stack.
Specifically, the embodiment of the invention extracts the depths of the image edges. First, the maximum gradient flow operator is designed and the real edge points in the focus stack image of the target object are extracted; this operator describes the gradient distribution near the edge points of the image. Second, the real image edge points are extracted and their depth values are calculated according to the divergence of the maximum gradient flow operator: the edge points whose divergence is greater than 0 are taken as the real edge points, and the position of the maximum gradient is taken as the depth value d of each edge point. Finally, the true depth values of all sparse edge points are recorded.
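As an illustration of this step, the following is a minimal sketch in Python with NumPy. The array name `stack` and the helper `gradient_magnitude` are assumptions for illustration, and because the exact form of the maximum gradient flow operator is given only as a formula image, the divergence test below uses the spatial gradient field of the per-pixel maximum gradient as a stand-in for it:

```python
import numpy as np

def gradient_magnitude(img):
    # Per-image gradient magnitude via central differences.
    gy, gx = np.gradient(img)
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_depths(stack):
    """stack: (n, H, W) focus stack, one image per focus position.
    Returns (mask of real edge points, per-pixel depth index d)."""
    grads = np.stack([gradient_magnitude(img) for img in stack])  # (n, H, W)
    g_max = grads.max(axis=0)   # maximum gradient over the stack
    d = grads.argmax(axis=0)    # depth index of the maximum gradient

    # Stand-in for the max-gradient-flow divergence test: take the spatial
    # gradient of g_max as the flow field and keep points with positive divergence.
    fy, fx = np.gradient(g_max)
    div = np.gradient(fx, axis=1) + np.gradient(fy, axis=0)
    real_edges = div > 0
    return real_edges, d
```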
Optionally, on the basis of the foregoing embodiments, the step of extracting the structural edge points based on the real edge points specifically includes: propagating the depth values of the real edge points from the real edge points to all pixel points of the focus stack image by using the classical Laplacian depth propagation algorithm; if the depth values of the pixels around a real edge point jump after propagation, and the depth value of that real edge point is smaller than the depth values of its surrounding pixels, taking that real edge point as a structural edge point; and taking the real edge points other than the structural edge points as texture edge points.
It can be understood that, according to the extracted real edge points of the focus stack image, the embodiment of the present invention further extracts the structural edge points and the texture edge points; that is, all the real edge points are further divided into structural edge points and texture edge points. Specifically, after the depth values of the real edge points are obtained, an algorithm is first designed to extract the structural edge points.
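The classification rule above can be sketched as follows; this is a non-authoritative illustration that assumes a propagated depth map `depth`, a boolean mask `real_edges`, and a jump threshold `tau` that the patent does not specify:

```python
import numpy as np

def classify_edges(depth, real_edges, tau=1.0):
    """Split real edge points into structural and texture edge points.
    A point is structural if the depth jumps in its neighborhood and its
    own depth is smaller than the surrounding depth (front surface)."""
    structural = np.zeros_like(real_edges)
    for y, x in zip(*np.nonzero(real_edges)):
        nbhd = depth[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        jump = nbhd.max() - nbhd.min() > tau   # depth jump around the point
        nearer = depth[y, x] < nbhd.mean()     # point lies on the nearer surface
        structural[y, x] = bool(jump and nearer)
    texture = real_edges & ~structural
    return structural, texture
```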
Optionally, on the basis of the foregoing embodiments, the step of propagating the depth values of the real edge points to all pixel points of the focus stack image specifically includes:
in the depth value propagation process, finding the optimal depth distribution that minimizes the following cost energy:

$E(d) = (d - \hat{d})^{\top} D (d - \hat{d}) + \lambda \, d^{\top} L \, d$

where $E(d)$ denotes the cost energy; $d$ denotes the vector representation of the global depth map to be solved, of size $[N \times 1]$, $N$ being the number of pixel points in the image; the two terms respectively represent the data term and the smoothness term of the loss function; $\lambda$ denotes a balance factor; $D$ is a diagonal matrix of size $[N \times N]$, with $D(i, i) = 1$ when pixel point $i$ is a real edge point and 0 otherwise; $\hat{d}$ denotes the depths of the real edge points; and $L$ denotes the labeled Laplacian matrix, whose calculation formula is:

$L(i, j) = \sum_{k \mid (i, j) \in \omega_k} \left( \delta_{ij} - \frac{1}{|\omega_k|} \left( 1 + (\chi(i, k) - \mu_k)^{\top} \left( \Sigma_k + \frac{\varepsilon}{|\omega_k|} U_3 \right)^{-1} (\chi(j, k) - \mu_k) \right) \right)$

$\chi(i, k) = (1 - \Pi_i) I_i + \Pi_i \mu_k$

where $I$ denotes the RGB image; $(i, j)$ denote pixel points; $\omega_k$ denotes a local window covering $(i, j)$; $\delta_{ij}$ denotes the Kronecker delta, which returns 1 when $i = j$ and 0 otherwise; $\mu_k$ and $\Sigma_k$ respectively denote the mean and covariance of the pixel points inside the local window $\omega_k$; $\Pi_i$ is used to distinguish the structural edge points from the texture edge points, returning 1 when pixel point $i$ is a structural edge point and 0 when it is a texture edge point; $\varepsilon$ denotes a regularization parameter; and $U_3$ denotes the $3 \times 3$ identity matrix.
Specifically, after the structural edge points of the image are extracted, their specific distribution is known. On this basis, an algorithm is designed to propagate the depth values of the real edge points from the real edge points to all positions of the focus stack image, yielding the global depth map. The depth propagation problem is thus converted into the problem of finding the optimal depth distribution d that minimizes the energy E(d) given above. In the above formula, the weight matrix L is obtained by adjusting the conventional Laplacian matrix according to the structural edge distribution.
With this depth propagation method based on the structural edge points and the texture edge points, the propagated depth map not only keeps the depth discontinuities sharp but also keeps the depth values smooth where the depth does not change.
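Minimizing E(d) is a sparse linear least-squares problem: setting the gradient of E(d) to zero gives the normal equations (D + λL) d = D d̂. A minimal sketch of the solve step, assuming the labeled Laplacian has already been assembled as a SciPy sparse matrix (the names `L`, `edge_mask`, and `edge_depth` are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate_depth(L, edge_mask, edge_depth, lam=0.1):
    """Solve (D + lam * L) d = D d_hat for the global depth map.
    L          : (N, N) sparse labeled Laplacian
    edge_mask  : (N,) boolean, True at real edge points
    edge_depth : (N,) depth values, valid where edge_mask is True
    """
    D = sp.diags(edge_mask.astype(np.float64))   # D(i, i) = 1 at edge points
    d_hat = np.where(edge_mask, edge_depth, 0.0)
    return spla.spsolve((D + lam * L).tocsc(), D @ d_hat)
```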
On the basis of the global depth map obtained by the depth propagation in the above embodiments, the initialization result $FI_0$ of the corresponding full-focus image can be obtained from the global depth map as follows:

$FI_0(x, y) = I_{d(x, y)}(x, y)$

where $d(x, y)$ denotes the depth value corresponding to the pixel point $(x, y)$ in the focus stack images, $I_{d(x, y)}$ denotes the focus stack image at depth position $d(x, y)$, and $FI_0$ denotes the image obtained by fusing the focus stack.
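This point-by-point selection is straightforward to implement; a short sketch under the same array conventions as above, with `stack` of shape (n, H, W) and an integer depth-index map `d`:

```python
import numpy as np

def fuse_full_focus(stack, d):
    """For each pixel, pick the value from the stack image at its depth index."""
    n, H, W = stack.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return stack[d, ys, xs]   # FI_0(x, y) = I_{d(x, y)}(x, y)
```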
Optionally, on the basis of the foregoing embodiments, the step of removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image specifically includes: based on the distribution of the structural edge points, cutting out from the global depth map a neighborhood image block of a given size around each structural edge point, and clustering the pixels in each neighborhood image block into a first class and a second class with a clustering algorithm; for any neighborhood image block, calculating a first depth value of the first-class pixels and a second depth value of the second-class pixels, extracting from the focus stack image a first image block at the depth position and pixel position corresponding to the first depth value, and extracting from the focus stack image a second image block at the depth position and pixel position corresponding to the second depth value; and, based on all the first and second image blocks, removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image by using a BEF-CNN network model.
Specifically, according to the obtained global depth map d and the distribution of the structural edge points, an image block of size $\omega \times \omega$, called a neighborhood image block, is first extracted from the global depth map around every structural edge point. For a small neighborhood image block p near a structural edge point i, the image edge generally divides the region into two parts, so the depth image block $d_p$ is split into two parts $s_1$ and $s_2$ using K-means clustering, and the depth values of the two corresponding parts are calculated.
Optionally, the step of respectively calculating the first depth value of the first-class pixels and the second depth value of the second-class pixels specifically includes:
for the first class $s_1$, calculating the depth value $A$ as follows:

$A = \dfrac{\sum_{i=1}^{N} d_p(i) \, \delta(i \in s_1)}{\sum_{i=1}^{N} \delta(i \in s_1)}$

for the second class $s_2$, calculating the depth value $B$ as follows:

$B = \dfrac{\sum_{i=1}^{N} d_p(i) \, \delta(i \in s_2)}{\sum_{i=1}^{N} \delta(i \in s_2)}$

where $i$ denotes the $i$-th pixel point in the neighborhood image block p, $N$ denotes the total number of pixel points in the neighborhood image block p, $d_p(i)$ denotes the depth of the $i$-th pixel point in the neighborhood image block p, and $\delta(i \in s_1)$, $\delta(i \in s_2)$ denote indicator functions that output 1 when the condition holds and 0 otherwise.
After the two partial depth values A and B of $d_p$ are obtained, the image blocks $p_A$ and $p_B$ at the corresponding depth positions and pixel positions are extracted from the focus stack according to these two depth values. The two image blocks represent the pixel distributions when the two surfaces are respectively in focus, and they serve as inputs of the BEF-CNN network model for removing the blurred regions from the preliminarily fused full-focus image.
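A sketch of this patch-level step, using scikit-learn's KMeans for the two-way split; the patch coordinates `y0`, `x0`, the patch size `w`, and the rounding of A and B to the nearest stack index are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def split_patch_depths(d_p):
    """Cluster a depth patch into two classes and return their mean depths A, B."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(d_p.reshape(-1, 1))
    labels = labels.reshape(d_p.shape)
    return d_p[labels == 0].mean(), d_p[labels == 1].mean()

def extract_focus_patches(stack, y0, x0, w, A, B):
    """Cut p_A and p_B from the stack images closest to depths A and B."""
    p_A = stack[int(round(A)), y0:y0 + w, x0:x0 + w]
    p_B = stack[int(round(B)), y0:y0 + w, x0:x0 + w]
    return p_A, p_B
```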
In addition, on the basis of the above embodiments, after the step of extracting the second image block at the corresponding depth position and pixel position from the focus stack image based on the second depth value, the method further includes: based on the depth distribution within each neighborhood image block, fusing point by point the pixel points of the focus stack image corresponding to each neighborhood image block, so as to obtain an image-block full-focus image of each neighborhood image block; and, correspondingly, removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image by using the BEF-CNN network model based on the first image block, the second image block, and the image-block full-focus image of all neighborhood image blocks.
It can be understood that, when the depth distribution of an extracted neighborhood image block is more complex and does not strictly satisfy the two-class property, the deblurring processing of the above embodiments may not achieve a good effect. Therefore, the embodiment of the invention introduces a third input of the BEF-CNN network model: the image-block full-focus image obtained by fusing the corresponding focus stack images point by point according to the depth distribution within each neighborhood image block. For a neighborhood image block p, the image-block full-focus image is calculated as follows:

$p_f(x, y) = I_{d_p(x, y)}(x, y)$

where $d_p(x, y)$ denotes the depth value corresponding to the pixel point $(x, y)$ in the image block p, $I_{d_p(x, y)}$ denotes the focus stack image at depth position $d_p(x, y)$, and $p_f$ denotes the image data obtained by fusing the image block p.
Then, the three image blocks $p_A$, $p_B$, and $p_f$ are used as the inputs of the BEF-CNN network model; the blur patterns of the structural edge regions are finally removed from the preliminarily fused full-focus image by the internal operations of the BEF-CNN network model, and the final full-focus image is output.
In the embodiment of the invention, the third input $p_f$ of the network model is synthesized entirely according to the depth map, so it reflects the shape of the edge and the depth difference, two factors that are important components of the blur pattern affecting the structural edge region. Because this image block is synthesized point by point, the sharpness of the image at the other positions of the region near the structural edge is strictly preserved, and when the depth distribution of the image block is more complex and does not strictly satisfy the two-class property, a large number of sharp edges can still be retained; that is, $p_f$ is a good complement to $p_A$ and $p_B$.
To further illustrate the technical solutions of the embodiments of the present invention, the following specific processing flow is provided on the basis of the above embodiments, without limiting the scope of the embodiments of the present invention.
As shown in Fig. 2, a schematic flowchart of a camera full-definition imaging method according to another embodiment of the present invention, the method mainly includes edge depth extraction, depth propagation with structural edge extraction, and generation of deblurred image blocks by a BEF-CNN, and specifically includes the following processing flow:
firstly, extracting each real edge point in a focusing stack image and calculating the depth of each real edge point by adopting a maximum gradient flow operator.
Second, the structural edge regions are extracted, the global depth map is obtained with the Laplacian depth propagation method, and the initialized full-focus image is obtained on this basis.
Third, different image blocks are extracted from the global depth map according to the structural edges, a BEF-CNN network model is designed to synthesize sharp image blocks free of edge blur, and the corresponding regions of the initialized full-focus image are replaced with these sharp image blocks to improve the overall quality of the full-focus image.
Finally, camera full-definition imaging of the target object is performed with the final full-focus image.
The designed BEF-CNN network model consists of three convolutional layers. At the input, the three small image blocks $p_A$, $p_B$, and $p_f$ of the above embodiments are concatenated into a 9-channel image matrix of size $9 \times \omega \times \omega$. The first and second convolutional layers respectively use $7 \times 7$ and $3 \times 3$ convolution kernels to produce 128 and 32 feature maps; the last layer uses a $5 \times 5$ convolution kernel to finally produce a 3-channel image block of size $\omega \times \omega$.
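A minimal sketch of this architecture in PyTorch. The padding values are chosen here to keep the ω x ω spatial size, and the ReLU activations between layers are an assumption, since the patent specifies neither; the class name is illustrative:

```python
import torch
import torch.nn as nn

class BEFCNN(nn.Module):
    """3-layer CNN: 9-channel input (p_A, p_B, p_f concatenated) -> 3-channel block."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, 128, kernel_size=7, padding=3),   # 7x7 kernels -> 128 feature maps
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 32, kernel_size=3, padding=1),  # 3x3 kernels -> 32 feature maps
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),    # 5x5 kernel -> 3-channel block
        )

    def forward(self, p_a, p_b, p_f):
        # Each block: (batch, 3, w, w); concatenate along channels to 9 channels.
        return self.net(torch.cat([p_a, p_b, p_f], dim=1))
```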
Specifically, in the network training stage, the light field data of the 4D Light Field Benchmark are used as the ground truth of the full-focus images, and the focus stack image data are generated according to the linear relation between the light field and the focus stack. The data set comprises 20 groups of focus stack data, and each group contains 49 images with different focus positions at a resolution of 512 x 512.
Simulation tests show that the embodiment of the invention can accurately calculate the depth values of all points in the image and can effectively distinguish the texture edges from the structural edges. Meanwhile, the CNN network reconstructs blur-free image blocks near the structural edges, and the reconstruction accuracy of the obtained full-focus image is superior to that of the other existing methods.
Based on the camera full-definition imaging methods of the above embodiments, the descriptions and definitions given there can be used to understand the execution modules of the device embodiment below; for details, reference may be made to the above embodiments, which are not repeated here.
An embodiment of the present invention provides a camera full-definition imaging device. Fig. 3 is a schematic structural diagram of the camera full-definition imaging device provided by an embodiment of the present invention; the device can be used to realize the camera full-definition imaging of the above method embodiments, and it includes an extraction module 301, a propagation module 302, a deblurring module 303, and an imaging module 304.
Wherein:
the extraction module 301 is configured to extract a real edge point in the focus stack image of the target object by using a maximum gradient stream operator, and calculate a depth value of the real edge point; the propagation module 302 is configured to extract a structure edge point based on the real edge point, and propagate a depth value of the real edge point to all pixel points of the focus stack image based on the structure edge point to obtain a global depth map; the deblurring module 303 is configured to obtain a fully focused image of the preliminary fusion of the target object based on the global depth map, and remove a blurred pattern of a structural edge region from the fully focused image of the preliminary fusion based on the global depth map and the distribution of the structural edge points to obtain a final fully focused image; the imaging module 304 is used for performing camera full-resolution imaging of the target object based on the final full-focus image.
Specifically, the extraction module 301 extracts the image edge points, i.e., the real edge points, from the focus stack image of the target object and calculates their depth values. Concretely, the extraction module 301 designs a maximum gradient flow operator to obtain the gradient distribution at the image edge points; on this basis, it computes the divergence of the maximum gradient flow operator, extracts the edge points whose divergence is greater than zero as the real edge points, and takes the position of the maximum gradient as the depth value corresponding to each real edge point.
Then, the propagation module 302 further extracts the structural edge points from the obtained real edge points, and at the same time obtains the corresponding texture edge points. On this basis, the propagation module 302 propagates the depth values of the real edge points to all pixel points of the focus stack image by improving the classical Laplacian depth propagation algorithm, obtaining the depths of all pixels and hence the global depth map.
Meanwhile, the deblurring module 303 cuts out the neighborhood image block of each structural edge point from the global depth map according to the distribution of the structural edge points, and synthesizes sharp image blocks free of edge blur through further processing of these neighborhood image blocks.
Finally, the imaging module 304 performs sharp imaging of the target object according to the obtained final full-focus image, finally producing an image in which every position of the target object is sharp.
According to the camera full-definition imaging device provided by the embodiment of the invention, by providing the corresponding execution modules, the structural edge points of the focus stack image are extracted, and the blur patterns of the structural edge regions are removed from the preliminarily fused full-focus image according to the distribution of the structural edge points, so that the adverse effects caused by blurred edges are effectively eliminated, the quality of the full-focus image is effectively improved, and the imaging effect is improved.
It is understood that, in the embodiment of the present invention, each relevant program module in the devices of the above embodiments may be implemented by a hardware processor. Moreover, the camera full-definition imaging device of the embodiment of the present invention can realize the camera full-definition imaging of the above method embodiments with the above program modules. When the device of the embodiment of the present invention is used for the camera full-definition imaging of the above method embodiments, it produces the same beneficial effects as the corresponding method embodiments, to which reference may be made and which are not repeated here.
As yet another aspect of the embodiments of the present invention, this embodiment provides an electronic device based on the above embodiments. The electronic device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, and the processor, when executing the computer program, implements the steps of the camera full-definition imaging method of the above embodiments.
Further, the electronic device of the embodiment of the present invention may also include a communication interface and a bus. Referring to Fig. 4, a schematic physical structure diagram of an electronic device according to an embodiment of the present invention, the device includes at least one memory 401, at least one processor 402, a communication interface 403, and a bus 404.
The memory 401, the processor 402 and the communication interface 403 complete mutual communication through the bus 404, and the communication interface 403 is used for information transmission between the electronic device and the focus stack image device of the target object; the memory 401 stores a computer program that can be executed on the processor 402, and when the processor 402 executes the computer program, the steps of the camera full-definition imaging method according to the above embodiments are implemented.
It is understood that the electronic device at least includes a memory 401, a processor 402, a communication interface 403 and a bus 404, and the memory 401, the processor 402 and the communication interface 403 are connected in communication with each other through the bus 404, and can complete communication with each other, for example, the processor 402 reads program instructions of a camera full-definition imaging method from the memory 401. In addition, the communication interface 403 may also implement a communication connection between the electronic device and a focus stack image device of the target object, and may complete mutual information transmission, for example, implement acquisition of a focus stack image of the target object through the communication interface 403.
When the electronic device is running, the processor 402 calls the program instructions in the memory 401 to perform the methods provided by the above method embodiments, for example: extracting the real edge points in a focus stack image of a target object by using a maximum gradient flow operator, and calculating the depth values of the real edge points; extracting structural edge points based on the real edge points, and propagating the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points to obtain a global depth map; obtaining a preliminarily fused full-focus image of the target object based on the global depth map, and removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points to obtain a final full-focus image; and performing camera full-definition imaging of the target object based on the final full-focus image; and the like.
Alternatively, all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes various media capable of storing program code, such as a USB disk, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
On the basis of the above embodiments, the invention further provides a non-transitory computer-readable storage medium on which computer instructions are stored. When executed by a computer, the instructions implement the steps of the camera full-definition imaging method of the above embodiments, for example: extracting the real edge points in a focus stack image of a target object by using a maximum gradient flow operator and calculating the depth values of the real edge points; extracting structural edge points based on the real edge points and propagating the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points to obtain a global depth map; obtaining a preliminarily fused full-focus image of the target object based on the global depth map, and removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points to obtain a final full-focus image; and performing camera full-definition imaging of the target object based on the final full-focus image.
According to the electronic device and the non-transitory computer-readable storage medium provided by the embodiments of the present invention, by performing the steps of the camera full-definition imaging method described in the above embodiments, the structural edge points of the focus stack image are extracted, and the blur patterns of the structural edge regions are removed from the preliminarily fused full-focus image according to the distribution of the structural edge points, so that the adverse effects caused by blurred edges can be effectively eliminated, thereby effectively improving the quality of the full-focus image and the imaging effect.
It is understood that the above-described embodiments of the device, electronic device, and storage medium are merely illustrative, and the elements described as separate parts may or may not be physically separate, may or may not be located in one place, and may be distributed over different network elements.
Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a USB disk, a portable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, and includes instructions for causing a computer device (such as a personal computer, a server, or a network device) to execute all or part of the methods described in the above method embodiments.
In addition, it should be understood by those skilled in the art that, in the specification of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
In the above description of exemplary embodiments of the invention, numerous specific details are described. It should be understood, however, that embodiments of the invention may be practiced without these specific details, and that well-known methods, structures, and techniques have not been shown in detail in some examples so as not to obscure the understanding of this description. Similarly, it should be understood that various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects.
However, the disclosed method should not be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A camera full-definition imaging method, comprising:
extracting the real edge points in a focus stack image of a target object by using a maximum gradient flow operator, and calculating the depth values of the real edge points;
extracting structural edge points based on the real edge points, and propagating the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points, so as to obtain a global depth map;
obtaining a preliminarily fused full-focus image of the target object based on the global depth map, and removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points, so as to obtain a final full-focus image; and
performing camera full-definition imaging of the target object based on the final full-focus image.
2. The camera full-definition imaging method according to claim 1, wherein the step of extracting the structural edge points based on the real edge points specifically comprises:
propagating the depth values of the real edge points from the real edge points to all pixel points of the focus stack image by using the classical Laplacian depth propagation algorithm; and, if the depth values of the pixels around a real edge point jump after propagation and the depth value of that real edge point is smaller than the depth values of its surrounding pixels, taking that real edge point as a structural edge point, and taking the real edge points other than the structural edge points as texture edge points.
3. The camera full-definition imaging method according to claim 1 or 2, wherein the step of propagating the depth values of the real edge points to all pixel points of the focus stack image specifically comprises:
in the depth value propagation process, finding the optimal depth distribution that minimizes the following cost energy:

$E(d) = (d - \hat{d})^{\top} D (d - \hat{d}) + \lambda \, d^{\top} L \, d$

wherein $E(d)$ denotes the cost energy, $d$ denotes the vector representation of the global depth map to be solved, $\lambda$ denotes a balance factor, $D$ is a diagonal matrix, $\hat{d}$ denotes the depths of the real edge points, and $L$ denotes the labeled Laplacian matrix, whose calculation formula is:

$L(i, j) = \sum_{k \mid (i, j) \in \omega_k} \left( \delta_{ij} - \frac{1}{|\omega_k|} \left( 1 + (\chi(i, k) - \mu_k)^{\top} \left( \Sigma_k + \frac{\varepsilon}{|\omega_k|} U_3 \right)^{-1} (\chi(j, k) - \mu_k) \right) \right)$

$\chi(i, k) = (1 - \Pi_i) I_i + \Pi_i \mu_k$

wherein $I$ denotes the RGB image, $(i, j)$ denote pixel points, $\omega_k$ denotes a local window covering $(i, j)$, $\delta_{ij}$ denotes the Kronecker delta, which returns 1 when $i = j$ and 0 otherwise, $\mu_k$ and $\Sigma_k$ respectively denote the mean and covariance of the pixel points inside the local window $\omega_k$, $\Pi_i$ is used to distinguish the structural edge points from the texture edge points, returning 1 when pixel point $i$ is a structural edge point and 0 when it is a texture edge point, $\varepsilon$ denotes a regularization parameter, and $U_3$ denotes the $3 \times 3$ identity matrix.
4. The camera full-definition imaging method according to claim 1, wherein the step of removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image specifically comprises:
based on the distribution of the structural edge points, cutting out from the global depth map a neighborhood image block of a given size around each structural edge point, and clustering the pixels in each neighborhood image block into a first class and a second class with a clustering algorithm;
for any neighborhood image block, respectively calculating a first depth value of the first-class pixels and a second depth value of the second-class pixels, extracting from the focus stack image a first image block at the depth position and pixel position corresponding to the first depth value, and extracting from the focus stack image a second image block at the depth position and pixel position corresponding to the second depth value; and
removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image by using a BEF-CNN network model based on all the first and second image blocks.
5. The camera full-definition imaging method according to claim 4, wherein the step of respectively calculating the first depth value of the first-class pixels and the second depth value of the second-class pixels specifically comprises:
for the first class $s_1$, calculating the depth value $A$ as follows:

$A = \dfrac{\sum_{i=1}^{N} d_p(i) \, \delta(i \in s_1)}{\sum_{i=1}^{N} \delta(i \in s_1)}$

for the second class $s_2$, calculating the depth value $B$ as follows:

$B = \dfrac{\sum_{i=1}^{N} d_p(i) \, \delta(i \in s_2)}{\sum_{i=1}^{N} \delta(i \in s_2)}$

wherein $i$ denotes the $i$-th pixel point in the neighborhood image block p, $N$ denotes the total number of pixel points in the neighborhood image block p, $d_p(i)$ denotes the depth of the $i$-th pixel point in the neighborhood image block p, and $\delta(i \in s_1)$, $\delta(i \in s_2)$ denote indicator functions that output 1 when the condition holds and 0 otherwise.
6. The camera full-definition imaging method according to claim 4 or 5, further comprising, after the step of extracting the second image block at the corresponding depth position and pixel position from the focus stack image based on the second depth value:
based on the depth distribution within each neighborhood image block, fusing point by point the pixel points of the focus stack image corresponding to each neighborhood image block, so as to obtain an image-block full-focus image of each neighborhood image block; and
correspondingly, removing the blur patterns of the structural edge regions from the preliminarily fused full-focus image by using the BEF-CNN network model based on the first image block, the second image block, and the image-block full-focus image of all the neighborhood image blocks.
7. The camera full-definition imaging method according to claim 1 or 2, wherein the step of extracting the real edge points in a focus stack image of a target object and calculating the depth values of the real edge points specifically comprises:
designing the following maximum gradient flow operator to obtain the gradient distribution at the edges of the focus stack images:

[Formula image FDA0002209477070000032, defining the maximum gradient flow operator; not legible in this text extraction.]

wherein $i$, $j$, $k$ respectively denote image indices within the focus stack, $G_i$, $G_j$, $G_k$ respectively denote the gradient values of the $i$-th, $j$-th, and $k$-th images in the focus stack, and $(x, y)$ denotes the two-dimensional coordinate position of a pixel point in the focus stack images;
extracting, according to the following formulas, the edge points at which the divergence of the maximum gradient flow operator is greater than 0 as the real edge points, and taking the position of the maximum gradient as the depth value of each real edge point:

$\nabla \cdot \mathrm{MGF}(x, y) > 0$

$d(x, y) = \arg\max_{k \in \{1, \dots, n\}} G_k(x, y)$

wherein $d(x, y)$ denotes the depth value corresponding to the pixel point $(x, y)$, $\nabla \cdot$ denotes the divergence operator, $\mathrm{MGF}$ denotes the maximum gradient flow operator, and $n$ denotes the number of images in the focus stack.
8. A camera full-definition imaging device, comprising:
an extraction module, configured to extract the real edge points in a focus stack image of a target object by using a maximum gradient flow operator, and to calculate the depth values of the real edge points;
a propagation module, configured to extract structural edge points based on the real edge points, and to propagate the depth values of the real edge points to all pixel points of the focus stack image based on the structural edge points, so as to obtain a global depth map;
a deblurring module, configured to obtain a preliminarily fused full-focus image of the target object based on the global depth map, and to remove the blur patterns of the structural edge regions from the preliminarily fused full-focus image based on the global depth map and the distribution of the structural edge points, so as to obtain a final full-focus image; and
an imaging module, configured to perform camera full-definition imaging of the target object based on the final full-focus image.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the camera full-definition imaging method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon computer instructions, wherein the computer instructions, when executed by a computer, implement the steps of the camera full-definition imaging method according to any one of claims 1 to 7.
CN201910893392.1A 2019-09-20 2019-09-20 Full-definition imaging method and device for camera and electronic equipment Pending CN110738677A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910893392.1A CN110738677A (en) 2019-09-20 2019-09-20 Full-definition imaging method and device for camera and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910893392.1A CN110738677A (en) 2019-09-20 2019-09-20 Full-definition imaging method and device for camera and electronic equipment

Publications (1)

Publication Number Publication Date
CN110738677A 2020-01-31

Family

ID: 69269363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910893392.1A Pending CN110738677A (en) 2019-09-20 2019-09-20 Full-definition imaging method and device for camera and electronic equipment

Country Status (1)

Country Link
CN (1) CN110738677A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371599A (en) * 2016-09-08 2017-02-01 清华大学 Method and device for high-precision fingertip positioning in depth image
CN107093194A (en) * 2017-03-22 2017-08-25 清华大学 A kind of sub-aperture image-pickup method and system
CN107689038A (en) * 2017-08-22 2018-02-13 电子科技大学 A kind of image interfusion method based on rarefaction representation and circulation guiding filtering
CN110166684A (en) * 2018-06-29 2019-08-23 腾讯科技(深圳)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN109840889A (en) * 2019-01-24 2019-06-04 华东交通大学 High-precision vision measurement method, device and system based on bionic Algorithm
CN110021024A (en) * 2019-03-14 2019-07-16 华南理工大学 A kind of image partition method based on LBP and chain code technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUIJIN WANG et al.: "All-in-focus with directional-max-gradient flow and labeled iterative depth propagation", Elsevier *
WENTAO LI et al.: "Blurring-Effect-Free CNN for Optimization of Structural Edges in Focus Stacking", IEEE *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012174A (en) * 2021-04-26 2021-06-22 中国科学院苏州生物医学工程技术研究所 Image fusion method, system and equipment
CN113012174B (en) * 2021-04-26 2024-02-09 中国科学院苏州生物医学工程技术研究所 Image fusion method, system and equipment

Similar Documents

Publication Publication Date Title
Gu et al. Learning dynamic guidance for depth image enhancement
Li et al. Blind image deblurring via deep discriminative priors
Cho et al. Weakly-and self-supervised learning for content-aware deep image retargeting
Li et al. Fast guided global interpolation for depth and motion
Dong et al. Color-guided depth recovery via joint local structural and nonlocal low-rank regularization
JP5645842B2 (en) Image processing apparatus and method using scale space
Liu et al. Depth restoration from RGB-D data via joint adaptive regularization and thresholding on manifolds
Lu et al. Deep texture and structure aware filtering network for image smoothing
Paramanand et al. Depth from motion and optical blur with an unscented kalman filter
Liu et al. Image de-hazing from the perspective of noise filtering
CN113256529B (en) Image processing method, image processing device, computer equipment and storage medium
KR102311796B1 (en) Method and Apparatus for Deblurring of Human Motion using Localized Body Prior
CN114049420B (en) Model training method, image rendering method, device and electronic equipment
Wang et al. Super-resolution of multi-observed RGB-D images based on nonlocal regression and total variation
Wang et al. Blurred image restoration using knife-edge function and optimal window Wiener filtering
Conde et al. Lens-to-lens bokeh effect transformation. NTIRE 2023 challenge report
Wang et al. A new method for nonlocal means image denoising using multiple images
Zhong et al. Deep attentional guided image filtering
Guo et al. Low-light image enhancement with joint illumination and noise data distribution transformation
CN113837941A (en) Training method and device for image hyper-resolution model and computer readable storage medium
Huang et al. Fast hole filling for view synthesis in free viewpoint video
CN110738677A (en) Full-definition imaging method and device for camera and electronic equipment
CN112509144A (en) Face image processing method and device, electronic equipment and storage medium
Hussein et al. Colorization using edge-preserving smoothing filter
Demir et al. Deep stacked networks with residual polishing for image inpainting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20200131