CN112669355B - Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation - Google Patents


Info

Publication number
CN112669355B
Authority
CN
China
Prior art keywords: full, depth, pixel, map, super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110006480.2A
Other languages
Chinese (zh)
Other versions
CN112669355A (en)
Inventor
何迪
邱钧
刘畅
李文月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Priority to CN202110006480.2A priority Critical patent/CN112669355B/en
Publication of CN112669355A publication Critical patent/CN112669355A/en
Application granted granted Critical
Publication of CN112669355B publication Critical patent/CN112669355B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method and system for splicing and fusing focus stack data based on RGB-D super-pixel segmentation, wherein the method comprises the following steps: step 1, computing a full-focus map and a depth map, namely accurately registered RGB-D data, from the focus stack data; step 2, up-sampling the low-resolution 2D large-field-of-view image until it is consistent with the scale of the RGB-D data; step 3, performing super-pixel segmentation on the RGB-D data; step 4, extracting and matching feature points between the full-focus map and the large-field-of-view 2D image, and calculating an accurate homography transformation matrix for the super pixels of each depth layer; step 5, transforming the super pixels by depth layer to realize the splicing and fusion of the full-focus map and the depth map; and step 6, generating large-field-of-view focus stack data from the large-field-of-view RGB-D data. The invention can realize the splicing and fusion of focus stack data acquired from multiple different viewing angles.

Description

Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation
Technical Field
The invention relates to the technical field of computer vision and digital image processing, in particular to a method and a system for splicing and fusing focusing stack data based on RGB-D super-pixel segmentation.
Background
Focus stack data acquired by a single-view imaging system covers only a narrow field of view, so large-field-of-view imaging requires splicing and fusing multiple groups of data acquired from multiple viewpoints.
Each group of focus stack data is a sequence of 2D images focused on different depth layers, and finding the images of corresponding depth layers across multiple groups of data is one of the difficulties in splicing focus stack data. Common 2D image stitching techniques realize the coordinate transformation of images based on a homography transformation matrix; a homography matrix accurately describes the transformation between planes, but because a real scene is not planar, an accurate unified homography transformation matrix cannot be obtained, so stitching produces severe ghosting and misalignment. When the overlapping area between the data to be spliced is small, it is difficult to obtain enough information for splicing. When multiple groups of data are spliced one by one, the computation errors accumulate at each step.
Disclosure of Invention
It is an object of the present invention to provide a method for splicing and fusing focus stack RGB-D data based on super-pixel segmentation that overcomes or at least alleviates at least one of the above-mentioned drawbacks of the prior art.
To achieve the above object, the present invention provides a method for stitching and fusing focus stack data based on RGB-D super-pixel segmentation, the method comprising:
step 1, calculating and generating a full focus map and a depth map by focus stack data, namely accurately registered RGB-D data;
step 2, up-sampling the low-resolution 2D large-field-of-view image until it is consistent with the scale of the RGB-D data;
step 3, performing super-pixel segmentation on the RGB-D data;
step 4, extracting and matching characteristic points between the full-focus image and the large-view-field 2D image, and calculating an accurate homography transformation matrix of super pixels of the same depth layer;
step 5, performing super-pixel transformation on depth layers to realize splicing and fusion of the full-focus map and the depth map, and obtaining RGB-D data of a large view field;
and 6, generating large-field focusing stack data from the large-field RGB-D data.
Further, step 3 includes super-pixel segmentation of the full focus map and the depth map, which specifically includes:
step 31, converting the full-focus image from the RGB color space to the CIE-Lab color space, so that each pixel corresponds to three-channel color information (l, a, b) and 2D coordinate information (x, y), and defining the depth value at the corresponding position in the depth map as the depth value d of the pixel;
step 32, defining the color distance dis_1 between two pixels p_i, p_j in the full-focus map as formula (1), the plane coordinate distance dis_2 between p_i, p_j as formula (2), and the depth distance dis_3 between p_i, p_j as formula (3);
step 33, defining the distance dis(p_i, p_j) between two pixels p_i, p_j in the full-focus map by the weighted summation of dis_1, dis_2 and dis_3, as shown in formula (4):
dis(p_i, p_j) = dis_1(p_i, p_j) + α·dis_2(p_i, p_j) + β·dis_3(p_i, p_j)    (4)
Wherein α and β are weight parameters that need to be adjusted and selected according to the scene;
step 34, clustering the pixels in the full-focus map according to the distance dis(p_i, p_j), each cluster forming one super pixel, and completing the super-pixel segmentation of the full-focus map;
and step 35, for the pixels in any super pixel of the full-focus map, grouping the corresponding pixels in the depth map into one super pixel, and completing the super-pixel segmentation of the depth map.
Further, step 4 specifically includes:
step 41, extracting and matching feature points between the full-focus map and the up-sampled large-field-of-view 2D image of the same scale, and removing outliers to obtain, within each super pixel, feature point pairs that are accurately matched with the large-field-of-view 2D image;
step 42, approximating each super pixel as a plane, so that the coordinate transformation between the homogeneous coordinates [x_1, y_1, 1]^T of the accurately matched feature points inside the super pixel and the corresponding coordinates [x_2, y_2, 1]^T in the large-field-of-view 2D image is described by a 3×3 homography transformation matrix H as formula (5);
and step 43, establishing a system of linear regression equations from at least 4 feature point pairs within the super pixels of each depth layer, and solving the homography transformation matrix H corresponding to the super pixels of that depth layer by the least squares method.
Further, step 5 specifically includes:
step 51, splicing and fusing the full focus map;
step 511, performing coordinate transformation on all super pixels in the full-focus map to the large-field-of-view 2D image by using the homography matrix of each depth layer calculated in step 4, and retaining the full-focus map information for splicing;
step 512, fusing the spliced images by using a multi-band method;
step 52, splicing and fusing depth maps;
step 521, performing coordinate transformation on the super pixels of the depth map by using a homography transformation matrix which is the same as that of the full focus map, and performing stitching;
step 522, fusing the spliced depth maps in a weighted average manner;
the step 511 specifically includes:
5111, after coordinate transformation is directly performed on the full focus map by using a homography matrix, original coordinate values of each pixel are changed from an integer to a non-integer, and in the transformed coordinate system, pixel values of each integer coordinate are obtained by interpolation calculation by taking distance as weight through pixel values at non-integer coordinates around the pixel values;
in step 5112, all pixels are traversed to find the position with the pixel value of 0, the position of the hole is taken as the starting point, the position is detected in a certain range outwards along the four directions of up, down, left and right until four non-hole pixels are detected, the pixel values of the four positions are interpolated by taking the distance as the weight to obtain the pixel value of the hole position, and the hole part is complemented.
Further, step 6 specifically includes:
step 61, estimating a point spread function of each depth layer on different focus planes in the imaging system;
step 62, generating image sequences focused on different focal planes, namely large field-of-view focusing stack data, according to the large field-of-view full-focus map, the depth map and the point spread function, wherein the image sequences specifically comprise:
step 621, layering pixels in the large-view-field full-focus map according to depth values in the depth map to form a plurality of depth layers;
step 622, for each focal plane, convolving the full focus map with the point spread function estimated in step 61 layer by layer depth;
in step 623, for each focal plane, the convolved images are superimposed to obtain an image of the scene focused on the focal plane, forming large field of view focal stack data.
The invention also includes a system for stitching and fusing focus stack data based on RGB-D super pixel segmentation, the system comprising:
the data preprocessing device is used for calculating the focusing stack data to generate RGB-D data and upsampling the low-resolution 2D large-view-field image to be consistent with the RGB-D data scale;
super-pixel dividing means for performing super-pixel division on the RGB-D data;
the homography transformation matrix calculation device is used for extracting and matching characteristic points between the full-focus image and the large-view-field 2D image and calculating an accurate homography transformation matrix of the super pixels of the same depth layer;
splicing and fusing device for performing super-pixel transformation on depth layers to realize splicing and fusing of full-focus map and depth map
And the large-field-of-view focusing stack data generating device is used for generating large-field-of-view focusing stack data by calculating a large-field full-focusing map and a depth map.
Further, the super-pixel dividing means specifically includes:
a conversion unit for converting the full-focus image from the RGB color space to the CIE-Lab color space, each pixel corresponding to three-channel color information (l, a, b), 2D coordinate information (x, y), and defining a depth value of a corresponding position in the depth map as a depth value D of the pixel;
a full-focus image pixel distance definition subunit, for defining the color distance dis_1 between two pixels p_i, p_j in the full-focus map as formula (1), the plane coordinate distance dis_2 between p_i, p_j as formula (2), and the depth distance dis_3 between p_i, p_j as formula (3);
and for defining the distance dis(p_i, p_j) between two pixels p_i, p_j in the full-focus map by the weighted summation of dis_1, dis_2 and dis_3, as shown in formula (4):
dis(p_i, p_j) = dis_1(p_i, p_j) + α·dis_2(p_i, p_j) + β·dis_3(p_i, p_j)    (4)
Wherein α and β are weight parameters that need to be adjusted and selected according to the scene;
a full-focus map segmentation unit, for clustering the pixels in the full-focus map according to the distance dis(p_i, p_j), each cluster forming one super pixel, and completing the super-pixel segmentation of the full-focus map;
and a depth map segmentation unit, for grouping, for the pixels in any super pixel of the full-focus map, the corresponding pixels in the depth map into one super pixel, and completing the super-pixel segmentation of the depth map.
Further, the homography transformation matrix calculation apparatus specifically includes:
the feature point extraction and matching unit is used for matching the full-focus image with the feature points extracted from the large-view-field 2D image with the same scale after up-sampling, and removing outliers to obtain feature point pairs which can be accurately matched with the large-view-field 2D image in each super pixel;
and a homography transformation matrix H calculation unit, for approximating each super pixel as a plane, so that the coordinate transformation between the homogeneous coordinates [x_1, y_1, 1]^T of the accurately matched feature points inside the super pixel and the corresponding coordinates [x_2, y_2, 1]^T in the large-field-of-view 2D image is described by a 3×3 homography transformation matrix H as formula (5); a system of linear regression equations is established from at least 4 feature point pairs within the super pixels of each depth layer, and the homography transformation matrix H corresponding to the super pixels of that depth layer is calculated and solved by the least squares method.
further, the splice fusion device specifically includes:
the splicing and fusing unit of the full focus map and the depth map comprises;
the full-focus map splicing subunit is used for carrying out coordinate transformation on all super pixels in the full-focus map to a large-view-field 2D image by utilizing the homography matrix of each depth layer calculated by the calculation unit, and reserving full-focus map information for splicing;
the full-focusing image fusion subunit is used for fusing the spliced full-focusing images by utilizing a multi-band method;
the depth map splicing subunit is used for carrying out coordinate transformation on the super pixels of the depth map by using a homography transformation matrix which is the same as that of the full-focus map, and splicing the super pixels;
the depth map fusion subunit is used for fusing the spliced depth maps in a weighted average mode;
the full focus map splicing subunit specifically comprises:
after coordinate transformation is directly carried out on the full focus map by using a homography matrix, the original coordinate value of each pixel is changed from an integer to a non-integer, and in the transformed coordinate system, the pixel value of each integer coordinate is obtained by carrying out interpolation calculation by taking the distance as a weight through the pixel value at the non-integer coordinate around the pixel value;
traversing all pixels, finding a hole position, taking the hole position as a starting point, detecting the hole position in a certain range outwards along the up-down, left-right directions until four non-hole pixels are detected, interpolating the pixel values of the four positions by taking the distance as a weight to obtain the pixel value of the hole position, and complementing the hole part.
Further, the large field-of-view focusing stack data generating device specifically includes:
a point spread function estimating unit for estimating a point spread function of the imaging system on different focal planes for each depth layer of the scene;
the large-view-field focusing stack data generating unit generates image sequences focused on different focusing planes according to the large-view-field full-focusing map, the depth map and the point spread function, namely large-view-field focusing stack data;
the large-view-field focusing stack data generating unit specifically comprises:
convolving the large-view-field full-focusing image layer by layer according to the depth value in the depth image, and calculating an image generated by each depth layer on the corresponding focusing surface;
and overlapping the images generated by convolution of different depth layers on each focusing surface to generate the images of the large-view-field scene on the focusing surfaces, wherein the sequence of the large-view-field images formed on all the focusing surfaces is the large-view-field focusing stack data.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. the full focus map and the depth map generated by the focus stack are accurately registered RGB-D data, so that the problem that an image sequence corresponding to the focus depth is difficult to find when the focus stack data are directly spliced is avoided;
2. performing super-pixel segmentation in combination with depth information, each super-pixel can be approximated as a plane in space;
3. classifying the superpixels according to the depth layer, wherein the superpixels of the same class are on the same plane, and the characteristic points belonging to the same plane are calculated together, so that a more accurate homography transformation matrix can be obtained;
4. taking the 2D large-view-field image as a reference, when the overlapping area between the data to be spliced is smaller, a better splicing result can be obtained;
5. with the large-field-of-view 2D image as a reference, the accumulated error of successive stitching can be reduced when multiple groups of data are stitched.
Drawings
Fig. 1 is a schematic diagram of the pre-and post-conversion of the super pixel according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an integer coordinate pixel interpolation calculation from a non-integer coordinate pixel with distance as a weight according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of processing a hole in the transformed super pixels according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
The embodiment of the invention provides a focus stack RGB-D data splicing method based on super-pixel segmentation, taking a 2D large-field-of-view image as a reference, which comprises the following steps:
step 1, calculating and generating a full focus map and a depth map by focus stack data, namely accurately registered RGB-D data;
because the focusing stack data is an image sequence formed by focusing a scene on different focusing planes, the focusing position and depth information of the focusing stack data can be calculated according to the change of the focusing measure of an object point in the image sequence, so that a full focusing image and a depth image, namely, accurately registered RGB-D data, are generated.
And step 2, up-sampling the low-resolution 2D large-field image to be consistent with the RGB-D data scale.
A full-focus map and a depth map, namely accurately registered RGB-D data, can be computed from the focus stack data according to the focus measure. Feature points (such as SIFT feature points) are extracted and matched between the full-focus map and the 2D large-field-of-view image; the magnification ratio between the full-focus map and the large-field-of-view 2D image is calculated from the average coordinate distance between matched feature points, and the 2D large-field-of-view image is up-sampled by this ratio until its scale is consistent with that of the RGB-D data, i.e. any object in the scene occupies the same number of pixels in the full-focus map or the depth map as in the up-sampled 2D large-field-of-view image.
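As a concrete illustration of this step, the following is a minimal Python sketch assuming OpenCV SIFT features; the function name, the Lowe-ratio threshold and the pair sub-sampling limit are illustrative choices, not taken from the patent.

```python
import itertools

import cv2
import numpy as np


def upsample_to_rgbd_scale(all_focus, wide_field, max_pairs=2000):
    """Estimate the magnification ratio from matched SIFT points and up-sample
    the wide-field 2D image to the scale of the RGB-D data (full-focus map)."""
    def gray(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(gray(all_focus), None)
    k2, d2 = sift.detectAndCompute(gray(wide_field), None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # Magnification ratio = ratio of the average pairwise coordinate distances
    # between matched feature points in the two images (sub-sampled for speed).
    pairs = list(itertools.combinations(range(len(good)), 2))[:max_pairs]
    mean1 = np.mean([np.linalg.norm(pts1[i] - pts1[j]) for i, j in pairs])
    mean2 = np.mean([np.linalg.norm(pts2[i] - pts2[j]) for i, j in pairs])
    scale = mean1 / mean2

    up = cv2.resize(wide_field, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    return up, scale
```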
Step 3, performing super-pixel segmentation on the RGB-D data, which specifically comprises the following steps:
step 31, converting the fully focused image from the RGB color space to the CIE-Lab color space, each pixel corresponds to three channel color information (l, a, b), 2D coordinate information (x, y). The full focus map and the depth map generated by the focus stack data are well registered, that is, the 2D coordinate information of the corresponding pixels in the two maps of each image point is the same, so that the depth value of each pixel in the depth map can be used as the depth information D of the pixels with the same coordinates in the full focus map.
Thus, each pixel p_i in the full-focus map corresponds to a 6-dimensional feature vector [l_i, a_i, b_i, x_i, y_i, d_i], whose components describe the color, the plane coordinate position and the depth information of the pixel respectively.
Step 32, the color distance dis_1 between two pixels p_i, p_j in the full-focus map is defined as formula (1), the plane coordinate distance dis_2 between p_i, p_j is defined as formula (2), and the depth distance dis_3 between p_i, p_j is defined as formula (3).
Step 33, the distance dis(p_i, p_j) between two pixels is defined by the weighted summation of dis_1, dis_2 and dis_3, as shown in formula (4):
dis(p_i, p_j) = dis_1(p_i, p_j) + α·dis_2(p_i, p_j) + β·dis_3(p_i, p_j)    (4)
Here α is the weight controlling the spatial distance in the pixel plane: the larger α is, the closer the shape of the segmented super pixels is to a rectangle; when the shapes of objects in the scene are irregular, a smaller α should be used so that the segmented super pixels better preserve the shape at object boundaries. β is the weight controlling the depth distance between pixels: the larger β is, the closer the depth values of the pixels within one super pixel are and the more the super pixel tends to a plane; when the scene contains adjacent objects with similar colors but large depth differences, β should be increased appropriately so that the interior of one super pixel does not span several depth layers. The two parameters are adjusted according to the characteristics of the scene, so that the segmented super pixels preserve the object boundaries well and the depth inside each super pixel is consistent enough to be approximately regarded as a plane.
Step 34, the pixels in the full-focus map are clustered by the K-means algorithm according to the distance dis(p_i, p_j), each cluster forming one super pixel, realizing the super-pixel segmentation of the full-focus map.
Step 35, for the pixels in any super pixel of the full-focus map, the corresponding pixels in the depth map are grouped into one super pixel, completing the super-pixel segmentation of the depth map.
The super-pixel segmentation of the RGB-D data is completed by steps 31 to 35.
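Formulas (1)-(3) are not reproduced in the text above, so the sketch below assumes the usual choices — Euclidean distance in CIE-Lab for dis_1, Euclidean distance in the image plane for dis_2 and |d_i − d_j| for dis_3 — combined by formula (4) and clustered with a plain K-means-style loop (steps 32 to 35). All names and default values are illustrative, and the brute-force assignment is kept only for clarity; a practical SLIC-style implementation would restrict the search to a local window around each cluster centre.

```python
import cv2
import numpy as np


def rgbd_superpixels(all_focus_bgr, depth, n_clusters=200, alpha=0.5, beta=2.0, n_iter=10):
    """Super-pixel label map for registered RGB-D data (full-focus map + depth map).
    The same label map also segments the depth map, as in step 35."""
    lab = cv2.cvtColor(all_focus_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    feats = np.dstack([lab, xs, ys, depth.astype(np.float32)]).reshape(-1, 6)  # [l,a,b,x,y,d]

    rng = np.random.default_rng(0)
    centres = feats[rng.choice(len(feats), n_clusters, replace=False)].copy()

    for _ in range(n_iter):
        # Assumed dis_1, dis_2, dis_3 combined by formula (4): dis = dis_1 + a*dis_2 + b*dis_3.
        d1 = np.linalg.norm(feats[:, None, :3] - centres[None, :, :3], axis=2)    # colour
        d2 = np.linalg.norm(feats[:, None, 3:5] - centres[None, :, 3:5], axis=2)  # plane
        d3 = np.abs(feats[:, None, 5] - centres[None, :, 5])                      # depth
        labels = np.argmin(d1 + alpha * d2 + beta * d3, axis=1)
        for k in range(n_clusters):
            members = feats[labels == k]
            if len(members):
                centres[k] = members.mean(axis=0)

    return labels.reshape(h, w)
```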
And 4, extracting and matching characteristic points between the full-focus image and the large-view-field 2D image, and calculating an accurate homography transformation matrix of the super pixels of the same depth layer. The step 4 specifically comprises the following steps:
and step 41, extracting SIFT feature points from the full-focus map and the up-sampled large-field-of-view 2D image with the same scale, and matching the SIFT feature points, and removing outliers by using RANSAC to obtain feature point pairs which can be accurately matched with the large-field-of-view 2D image in each super pixel.
Step 42, each super pixel is approximately regarded as a plane, so that the coordinate transformation between the homogeneous coordinates [x_1, y_1, 1]^T of the accurately matched feature points inside the super pixel and the corresponding coordinates [x_2, y_2, 1]^T in the large-field-of-view 2D image can be described by a 3×3 homography transformation matrix H, as in formula (5).
the coordinates of a pair of feature points can write out 2 linear equations about H, more than 4 sets of feature point pairs can establish a linear regression equation set, the H is calculated and solved by a least square method, and the more the number of the used matched feature point pairs is, the more accurate the calculated H is. The super pixels are classified according to the depth information, the object points corresponding to the super pixels of the same depth layer can be approximately regarded as points on the same plane in space, and the corresponding homography transformation matrixes are the same. And establishing the same linear regression equation set by using the accurate matching characteristic point pairs contained in the super pixels of the same depth layer together to calculate the homography transformation matrix H of the depth layer.
And 5, performing super-pixel transformation on depth layers to realize the splicing and fusion of the full-focus map and the depth map. The step 5 specifically comprises the following steps:
and step 51, splicing and fusing the full focus map.
Step 511, all super pixels in the full-focus map are coordinate-transformed to the large-field-of-view 2D image using the homography matrix of each depth layer calculated in step 4, and the full-focus map information is retained for splicing.
After direct conversion, overlapping or voids may occur between different superpixels, as shown in fig. 1, where there is an overlapping region between superpixels 5 and 6, and voids occur between superpixels 2 and 5. In order to solve the two cases, the following processes are performed:
in step 5111, after the homography matrix is directly used for transforming the coordinate of the full focus map, the original coordinate value of each pixel is changed from an integer to a non-integer, and in the transformed coordinate system, the pixel value of each integer coordinate is obtained by interpolation calculation by taking the distance as the weight through the pixel value at the non-integer coordinate around the pixel value.
As shown in FIG. 2, O 1 ,O 2 ,O 3 ,O 4 Is a variant ofFour integer coordinate positions in the transformed coordinate system, a-i is a non-integer coordinate position after direct coordinate transformation of pixels in the full focus map, O 1 The pixel value at which is passed through its surrounding non-integer coordinates a, b, c, d, e is calculated as its pixel value and O 1 Is obtained by interpolation by taking the distance of the distance (O) as the weight 2 The pixel value at the position passes through the pixel values of the surrounding non-integer coordinates b, d, g and h and the pixel values of the surrounding non-integer coordinates are equal to O 2 The distance is obtained by interpolation of the weights, and the O can be obtained by the similar method 3 ,O 4 Pixel values at. The interpolation mode can effectively solve the problem of overlapping between super pixels after direct conversion.
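A sketch of the forward warping and distance-weighted interpolation of step 5111, under the assumption that the value at each integer grid position is an inverse-distance-weighted average of the warped, non-integer samples found within a small radius; the SciPy KD-tree neighbour search, the 1-pixel radius and the names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree


def splat_superpixel(coords, values, H, canvas, radius=1.0, eps=1e-6):
    """coords: (N, 2) integer (x, y) positions of one super pixel in the full-focus map;
    values: (N, C) pixel values; H: 3x3 layer homography; canvas: output image to fill."""
    homog = np.hstack([coords.astype(np.float64), np.ones((len(coords), 1))])  # (N, 3)
    warped = (H @ homog.T).T
    warped = warped[:, :2] / warped[:, 2:3]            # non-integer positions after warping

    tree = cKDTree(warped)
    x0, y0 = np.floor(warped.min(axis=0)).astype(int)
    x1, y1 = np.ceil(warped.max(axis=0)).astype(int)
    for yi in range(max(y0, 0), min(y1, canvas.shape[0] - 1) + 1):
        for xi in range(max(x0, 0), min(x1, canvas.shape[1] - 1) + 1):
            idx = tree.query_ball_point([xi, yi], r=radius)
            if not idx:
                continue                               # left as a hole for step 5112
            d = np.linalg.norm(warped[idx] - [xi, yi], axis=1)
            w = 1.0 / (d + eps)                        # distance-weighted interpolation
            canvas[yi, xi] = (w[:, None] * values[idx]).sum(axis=0) / w.sum()
    return canvas
```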
In step 5112, all pixels are traversed to find positions whose pixel value is 0, i.e. hole positions. Starting from a hole position, the four directions up, down, left and right are searched outwards within a certain range until four non-hole pixels (pixel value not 0) are found; the pixel values of these four positions are interpolated with distance as the weight to obtain the pixel value of the hole position, completing the hole region.
As shown in fig. 3, after finding a point O with a pixel value of 0, four pixels U, D, L, R with non-0 pixel values are detected in four directions, and the pixel values of the four pixels are interpolated according to the distance from O as a weight to obtain the pixel value of O.
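The hole filling of step 5112 can be sketched as follows, assuming a zero pixel value marks a hole and the inverse of the step distance is used as the interpolation weight; the maximum search range is an illustrative choice.

```python
import numpy as np


def fill_holes(img, max_range=20):
    filled = img.copy()
    hole_mask = img.sum(axis=-1) == 0 if img.ndim == 3 else img == 0
    h, w = img.shape[:2]
    for y, x in np.argwhere(hole_mask):
        samples, weights = [], []
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):       # up, down, left, right
            for step in range(1, max_range + 1):
                yy, xx = y + dy * step, x + dx * step
                if not (0 <= yy < h and 0 <= xx < w):
                    break
                if np.any(img[yy, xx] != 0):                    # first non-hole pixel found
                    samples.append(np.asarray(img[yy, xx], dtype=np.float64))
                    weights.append(1.0 / step)                  # distance as the weight
                    break
        if samples:
            filled[y, x] = np.average(samples, axis=0, weights=weights)
    return filled
```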
Step 512, after splicing is completed, the spliced images are fused using a multi-band method so that the overall brightness is uniform, since data captured from different viewing angles may have non-uniform illumination.
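The patent only states that a multi-band method is used to fuse the spliced full-focus maps; the sketch below is one common realisation, a two-image Laplacian-pyramid blend, with the pyramid depth and the blend mask as illustrative assumptions.

```python
import cv2
import numpy as np


def multiband_blend(img1, img2, mask, levels=5):
    """mask: float32 in [0, 1], 1 where img1 should dominate along the seam."""
    gp1 = [img1.astype(np.float32)]
    gp2 = [img2.astype(np.float32)]
    gpm = [mask.astype(np.float32)]
    for _ in range(levels):                                  # Gaussian pyramids
        gp1.append(cv2.pyrDown(gp1[-1]))
        gp2.append(cv2.pyrDown(gp2[-1]))
        gpm.append(cv2.pyrDown(gpm[-1]))

    blended = None
    for i in range(levels, -1, -1):                          # collapse from coarse to fine
        if i == levels:
            lap1, lap2 = gp1[i], gp2[i]                      # coarsest band: Gaussian residual
        else:
            size = (gp1[i].shape[1], gp1[i].shape[0])
            lap1 = gp1[i] - cv2.pyrUp(gp1[i + 1], dstsize=size)
            lap2 = gp2[i] - cv2.pyrUp(gp2[i + 1], dstsize=size)
        m = gpm[i] if gpm[i].ndim == lap1.ndim else gpm[i][..., None]
        band = m * lap1 + (1.0 - m) * lap2                   # blend each frequency band
        if blended is None:
            blended = band
        else:
            size = (band.shape[1], band.shape[0])
            blended = cv2.pyrUp(blended, dstsize=size) + band
    return np.clip(blended, 0, 255).astype(np.uint8)
```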
And step 52, splicing and fusing the depth maps.
Since the full-focus map and the depth map are registered RGB-D data generated from the focus stack, the super pixels of the depth map are coordinate-transformed with the same homography transformation matrices as the full-focus map. Because the depth map expresses the geometric information of the scene and is not affected by illumination, multi-band fusion is not needed; overlapping and hole regions are handled directly by weighted averaging.
And 6, generating large-field focusing stack data from the large-field RGB-D data. The step 6 specifically comprises the following steps:
step 61 estimates the point spread function of each depth layer at different focal planes in the imaging system. The imaging system is obtained through simulation, and specifically comprises the following steps: firstly, setting the aperture shape and size, and then, obtaining the point spread functions from different depths to different focusing planes by using an ideal point spread function formula.
Step 62, generating a sequence of images focused on different focal planes, i.e. large field of view focal stack data, from the large field of view full focus map, the depth map and the point spread function.
In step 621, the pixels in the large-view-field full-focus map are layered according to the depth values in the depth map, i.e. the pixels with the same depth values are directly separated into one layer according to the depth map.
Step 622, for each focus plane, convolving the full focus map with the point spread function estimated in step 61 layer by layer depth.
In step 623, for each focal plane, the convolved images are superimposed to obtain an image of the scene focused on the focal plane, forming large field of view focal stack data.
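A sketch of steps 61 and 62 under a simple thin-lens / circle-of-confusion model: a disc-shaped point spread function whose radius depends on the layer depth and the chosen focus depth, applied layer by layer and superposed. The PSF model, the optical parameters (aperture, focal length, pixel pitch) and all names are illustrative assumptions rather than values from the patent, and normalisation at layer boundaries is omitted for brevity.

```python
import cv2
import numpy as np


def disc_psf(radius):
    r = max(int(np.ceil(radius)), 1)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    psf = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float32)
    return psf / psf.sum()


def coc_radius(depth, focus_depth, aperture=2.0, focal_length=50.0, pixel_pitch=0.01):
    """Circle-of-confusion radius in pixels for a layer at `depth` when the lens is
    focused at `focus_depth` (same length units, e.g. mm)."""
    return abs(aperture * focal_length * (depth - focus_depth)
               / (depth * (focus_depth - focal_length)) / pixel_pitch)


def render_focal_stack(all_focus, depth_map, focus_depths):
    """all_focus: (H, W, 3) stitched full-focus map; depth_map: (H, W) quantised depths."""
    layers = np.unique(depth_map)
    stack = []
    for fd in focus_depths:
        img = np.zeros_like(all_focus, dtype=np.float32)
        for d in layers:                                     # step 621: layer by depth value
            mask = (depth_map == d).astype(np.float32)[..., None]
            psf = disc_psf(coc_radius(float(d), fd))         # step 61: PSF of this layer
            blurred = cv2.filter2D(all_focus.astype(np.float32) * mask, -1, psf)
            img += blurred                                   # steps 622-623: convolve, superpose
        stack.append(np.clip(img, 0, 255).astype(np.uint8))
    return stack
```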
The embodiment of the invention also provides a system for splicing and fusing the RGB-D data of the focusing stack based on super pixel segmentation, which comprises a data preprocessing device, a super pixel segmentation device, a homography transformation matrix calculation device, a splicing and fusing device and a large-view-field focusing stack data generation device, wherein:
the data preprocessing device is used for calculating the focusing stack data to generate RGB-D data and upsampling the low-resolution 2D large-field image to be consistent with the RGB-D data scale. From the focus stack data, a full focus map and a depth map, i.e. accurately registered RGB-D data, can be computationally generated from the focus measure. Extracting and matching feature points (such as SIFT feature points) in the full-focus image and the 2D large-view-field image, calculating the amplification ratio of the full-focus image and the large-view-field 2D image through the average coordinate distance between the feature points, and up-sampling the 2D large-view-field image according to the amplification ratio until the size of the 2D large-view-field image is consistent with that of RGB-D data, namely the number of pixels occupied by any object in a scene in the full-focus image or the depth image is the same as the number of pixels occupied by any object in the 2D large-view-field image.
The super-pixel dividing means is for performing super-pixel division on the RGB-D data.
The super-pixel dividing device specifically comprises:
a conversion unit for converting the full-focus image from the RGB color space to the CIE-Lab color space, each pixel corresponding to three-channel color information (l, a, b), 2D coordinate information (x, y), and defining a depth value of a corresponding position in the depth map as a depth value D of the pixel;
a full-focus image pixel distance definition subunit, for defining the color distance dis_1 between two pixels p_i, p_j in the full-focus map as formula (1), the plane coordinate distance dis_2 between p_i, p_j as formula (2), and the depth distance dis_3 between p_i, p_j as formula (3);
and for defining the distance dis(p_i, p_j) between two pixels p_i, p_j in the full-focus map by the weighted summation of dis_1, dis_2 and dis_3, as shown in formula (4):
dis(p_i, p_j) = dis_1(p_i, p_j) + α·dis_2(p_i, p_j) + β·dis_3(p_i, p_j)    (4)
Wherein α and β are weight parameters that need to be adjusted and selected according to the scene;
a full-focus image pixel clustering unit, for clustering the pixels in the full-focus map according to the distance dis(p_i, p_j), each cluster forming one super pixel, and completing the super-pixel segmentation of the full-focus map.
And the depth map segmentation unit is used for carrying out identical segmentation on the depth map according to the super-pixel segmentation of the full-focus map.
The homography transformation matrix calculation device is used for extracting and matching characteristic points between the full-focus image and the large-view-field 2D image and calculating an accurate homography transformation matrix of super pixels of the same depth layer.
The homography transformation matrix calculation device specifically comprises:
the characteristic point matching unit is used for extracting and matching characteristic points of the full-focus image and the large-view-field 2D image which is up-sampled and consistent with the full-focus image in scale, and obtaining characteristic point pairs which can be accurately matched with the large-view-field 2D image in each super pixel after outliers are removed;
a homography transformation matrix H calculation unit, for approximating each super pixel as a plane, so that the coordinate transformation between the homogeneous coordinates [x_1, y_1, 1]^T of the accurately matched feature points inside the super pixel and the corresponding coordinates [x_2, y_2, 1]^T in the large-field-of-view 2D image is described by a 3×3 homography transformation matrix H as formula (5); a system of linear regression equations can be established from at least 4 feature point pairs within the super pixels of the same depth layer, and the homography transformation matrix H corresponding to the super pixels of that depth layer is calculated and solved by the least squares method.
the splicing and fusing device is used for carrying out super-pixel transformation on depth layers to realize splicing and fusing of the full-focus map and the depth map.
The splicing and fusing device specifically comprises:
the splicing and fusing unit of the full focus map and the depth map comprises;
the full-focus map splicing subunit is used for carrying out coordinate transformation on all super pixels in the full-focus map to a large-view-field 2D image by utilizing the homography matrix of each depth layer calculated by the calculation unit, and reserving full-focus map information for splicing;
and the full-focus map fusion subunit is used for fusing the spliced full-focus maps by utilizing a multi-band method.
The depth map splicing subunit is used for carrying out coordinate transformation on the super pixels of the depth map by using a homography transformation matrix which is the same as that of the full-focus map, and splicing the super pixels;
and the depth map fusion subunit is used for fusing the spliced depth maps in a weighted average mode.
The full focus map splicing subunit specifically comprises:
after coordinate transformation is directly carried out on the full focus map by using a homography matrix, the original coordinate value of each pixel is changed from an integer to a non-integer, and in the transformed coordinate system, the pixel value of each integer coordinate is obtained by carrying out interpolation calculation by taking the distance as a weight through the pixel value at the non-integer coordinate around the pixel value;
traversing all pixels, finding a hole position, taking the hole position as a starting point, detecting the hole position in a certain range outwards along the up-down, left-right directions until four non-hole pixels are detected, interpolating the pixel values of the four positions by taking the distance as a weight to obtain the pixel value of the hole position, and complementing the hole part.
According to the embodiment of the invention, the RGB-D data is super-pixel segmented in combination with the depth information; the super pixels are layered according to depth, the 2D large-field-of-view image is taken as a global reference, SIFT feature points of the full-focus map and the large-field-of-view 2D image are extracted and matched, an accurate homography transformation matrix is computed for each depth layer of super pixels of each group of RGB-D data, and the super pixels are spliced onto the large-field-of-view 2D image, so that accurate splicing and fusion of focus stack RGB-D data from multiple groups of different viewing angles can be realized.
The large-view-field focusing stack data generating device is used for generating large-view-field focusing stack data obtained by a scene in the imaging system from the large-view-field full-focusing map and the depth map.
The large-view-field focusing stack data generating device specifically comprises:
a point spread function estimating unit for estimating a point spread function of the imaging system on different focal planes for each depth layer of the scene;
and the large-field focusing stack data generating unit is used for generating image sequences focused on different focusing planes according to the large-field full-focusing map, the depth map and the point spread function, namely, large-field focusing stack data.
The large-view-field focusing stack data generating unit specifically comprises:
convolving the large-view-field full-focusing image layer by layer according to the depth value in the depth image, and calculating an image generated by each depth layer on the corresponding focusing surface;
and overlapping the images generated by convolution of different depth layers on each focusing surface to generate the images of the large-view-field scene on the focusing surfaces, wherein the sequence of the large-view-field images formed on all the focusing surfaces is the large-view-field focusing stack data.
Finally, it should be pointed out that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting. Those of ordinary skill in the art will appreciate that: the technical schemes described in the foregoing embodiments may be modified or some of the technical features may be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A method for stitching and fusing focus stack data based on RGB-D super-pixel segmentation, comprising:
step 1, calculating and generating a full focus map and a depth map by focus stack data, namely accurately registered RGB-D data;
step 2, up-sampling the low-resolution 2D large-view-field image until the low-resolution 2D large-view-field image is consistent with the RGB-D data scale;
step 3, performing super-pixel segmentation on the RGB-D data;
step 4, extracting and matching characteristic points between the full-focus image and the large-view-field 2D image, and calculating an accurate homography transformation matrix of super pixels of the same depth layer;
step 5, performing super-pixel transformation on depth layers to realize splicing and fusion of the full-focus map and the depth map, and obtaining RGB-D data of a large view field;
step 6, generating large-field focusing stack data from the large-field RGB-D data;
step 3 includes super-pixel segmentation of the full focus map and the depth map, which specifically includes:
step 31, converting the fully focused image from RGB color space to CIE-Lab color space, then each pixel corresponds to three-channel color information (l, a, b), 2D coordinate information (x, y), and defines the depth value of the corresponding position in the depth map as the depth value D of the pixel;
step 32, defining the color distance dis_1 between two pixels p_i, p_j in the full-focus map as formula (1), the plane coordinate distance dis_2 between p_i, p_j as formula (2), and the depth distance dis_3 between p_i, p_j as formula (3);
step 33, defining the distance dis(p_i, p_j) between two pixels p_i, p_j in the full-focus map by the weighted summation of dis_1, dis_2 and dis_3, as shown in formula (4):
dis(p_i, p_j) = dis_1(p_i, p_j) + α·dis_2(p_i, p_j) + β·dis_3(p_i, p_j)    (4)
Wherein α and β are weight parameters that need to be adjusted and selected according to the scene;
step 34, clustering the pixels in the full-focus map according to the distance dis(p_i, p_j), each cluster forming one super pixel, and completing the super-pixel segmentation of the full-focus map;
and 35, dividing the corresponding pixels in the depth map into one super pixel for the pixels in any super pixel in the full focus map, and completing super pixel segmentation of the depth map.
2. The method of claim 1, wherein step 4 specifically comprises:
step 41, matching the full-focus map with the extracted feature points of the large-field 2D image with the same scale after up-sampling, and removing outliers to obtain feature point pairs which can be accurately matched with the large-field 2D image in each super pixel;
step 42, approximating each super pixel as a plane, so that the coordinate transformation between the homogeneous coordinates [x_1, y_1, 1]^T of the accurately matched feature points inside the super pixel and the corresponding coordinates [x_2, y_2, 1]^T in the large-field-of-view 2D image is described by a 3×3 homography transformation matrix H as formula (5);
and 43, establishing a linear regression equation set by at least 4 characteristic point pairs in the super pixels on each depth layer, and calculating and solving a homography transformation matrix H corresponding to the super pixels on the depth layer by using a least square method.
3. The method of any one of claims 1 to 2, wherein step 5 specifically comprises:
step 51, splicing and fusing the full focus map;
step 511, performing coordinate transformation on all super pixels in the full-focus map to the large-view-field 2D image by utilizing the homography matrix of each depth layer calculated in the step 3, and reserving full-focus map information for splicing;
step 512, fusing the spliced images by using a multi-band method;
step 52, splicing and fusing depth maps;
step 521, performing coordinate transformation on the super pixels of the depth map by using a homography transformation matrix which is the same as that of the full focus map, and performing stitching;
step 522, fusing the spliced depth maps in a weighted average manner;
the step 511 specifically includes:
5111, after coordinate transformation is directly performed on the full focus map by using a homography matrix, original coordinate values of each pixel are changed from an integer to a non-integer, and in the transformed coordinate system, pixel values of each integer coordinate are obtained by interpolation calculation by taking distance as weight through pixel values at non-integer coordinates around the pixel values;
in step 5112, all pixels are traversed to find the position with the pixel value of 0, the position of the hole is taken as the starting point, the position is detected in a certain range outwards along the four directions of up, down, left and right until four non-hole pixels are detected, the pixel values of the four positions are interpolated by taking the distance as the weight to obtain the pixel value of the hole position, and the hole part is complemented.
4. A method for stitching and fusing focus stack data based on RGB-D super-pixel segmentation as recited in claim 3, wherein step 6 comprises:
step 61, estimating a point spread function of each depth layer on different focus planes in the imaging system;
step 62, generating image sequences focused on different focal planes, namely large field-of-view focusing stack data, according to the large field-of-view full-focus map, the depth map and the point spread function, wherein the image sequences specifically comprise:
step 621, layering pixels in the large-view-field full-focus map according to depth values in the depth map to form a plurality of depth layers;
step 622, for each focal plane, convolving the full focus map with the point spread function estimated in step 61 layer by layer depth;
in step 623, for each focal plane, the convolved images are superimposed to obtain an image of the scene focused on the focal plane, forming large field of view focal stack data.
5. A system for stitching and fusing focus stack data based on RGB-D super-pixel segmentation, comprising:
the data preprocessing device is used for calculating the focusing stack data to generate RGB-D data and upsampling the low-resolution 2D large-view-field image to be consistent with the RGB-D data scale;
super-pixel dividing means for performing super-pixel division on the RGB-D data;
the homography transformation matrix calculation device is used for extracting and matching characteristic points between the full-focus image and the large-view-field 2D image and calculating an accurate homography transformation matrix of the super pixels of the same depth layer;
splicing and fusing device for performing super-pixel transformation on depth layers to realize splicing and fusing of full-focus map and depth map
Large-field-of-view focusing stack data generating means for computationally generating large-field-of-view focusing stack data from a large-field full-focus map and a depth map;
the super-pixel dividing device specifically comprises:
a conversion unit for converting the full-focus image from the RGB color space to the CIE-Lab color space, each pixel corresponding to three-channel color information (l, a, b), 2D coordinate information (x, y), and defining a depth value of a corresponding position in the depth map as a depth value D of the pixel;
a full-focus image pixel distance definition subunit, for defining the color distance dis_1 between two pixels p_i, p_j in the full-focus map as formula (1), the plane coordinate distance dis_2 between p_i, p_j as formula (2), and the depth distance dis_3 between p_i, p_j as formula (3);
and for defining the distance dis(p_i, p_j) between two pixels p_i, p_j in the full-focus map by the weighted summation of dis_1, dis_2 and dis_3, as shown in formula (4):
dis(p_i, p_j) = dis_1(p_i, p_j) + α·dis_2(p_i, p_j) + β·dis_3(p_i, p_j)    (4)
Wherein α and β are weight parameters that need to be adjusted and selected according to the scene;
a full-focus map segmentation unit, for clustering the pixels in the full-focus map according to the distance dis(p_i, p_j), each cluster forming one super pixel, and completing the super-pixel segmentation of the full-focus map;
and the depth map segmentation unit is used for dividing the pixels corresponding to any one super pixel in the full-focus map into one super pixel in the depth map, so as to complete the super pixel segmentation of the depth map.
6. The system for stitching and fusing focus stack data based on RGB-D super-pixel segmentation of claim 5, wherein the homography transformation matrix computing means specifically comprises:
the feature point extraction and matching unit is used for matching the full-focus image with the feature points extracted from the large-view-field 2D image with the same scale after up-sampling, and removing outliers to obtain feature point pairs which can be accurately matched with the large-view-field 2D image in each super pixel;
a homography transformation matrix H calculation unit, for approximating each super pixel as a plane, so that the coordinate transformation between the homogeneous coordinates [x_1, y_1, 1]^T of the accurately matched feature points inside the super pixel and the corresponding coordinates [x_2, y_2, 1]^T in the large-field-of-view 2D image is described by a 3×3 homography transformation matrix H as formula (5); a system of linear regression equations is established from at least 4 feature point pairs within the super pixels of each depth layer, and the homography transformation matrix H corresponding to the super pixels of that depth layer is calculated and solved by the least squares method.
7. A system for stitching and fusing focus stack data based on RGB-D super-pixel segmentation as recited in any one of claims 5-6, wherein the stitching and fusing device comprises:
a splicing and fusing unit for the full-focus map and the depth map, which comprises:
the full-focus map splicing subunit is used for carrying out coordinate transformation on all super pixels in the full-focus map to a large-view-field 2D image by utilizing the homography matrix of each depth layer calculated by the calculation unit, and reserving full-focus map information for splicing;
the full-focusing image fusion subunit is used for fusing the spliced full-focusing images by utilizing a multi-band method;
the depth map splicing subunit is used for carrying out coordinate transformation on the super pixels of the depth map by using a homography transformation matrix which is the same as that of the full-focus map, and splicing the super pixels;
the depth map fusion subunit is used for fusing the spliced depth maps in a weighted average mode;
the full focus map splicing subunit specifically comprises:
after coordinate transformation is directly carried out on the full focus map by using a homography matrix, the original coordinate value of each pixel is changed from an integer to a non-integer, and in the transformed coordinate system, the pixel value of each integer coordinate is obtained by carrying out interpolation calculation by taking the distance as a weight through the pixel value at the non-integer coordinate around the pixel value;
traversing all pixels, finding a hole position, taking the hole position as a starting point, detecting the hole position in a certain range outwards along the up-down, left-right directions until four non-hole pixels are detected, interpolating the pixel values of the four positions by taking the distance as a weight to obtain the pixel value of the hole position, and complementing the hole part.
8. The system for stitching and fusing focus stack data based on RGB-D super-pixel segmentation of claim 7, wherein the large field-of-view focus stack data generating means comprises:
a point spread function estimating unit for estimating a point spread function of the imaging system on different focal planes for each depth layer of the scene;
the large-view-field focusing stack data generating unit generates image sequences focused on different focusing planes according to the large-view-field full-focusing map, the depth map and the point spread function, namely large-view-field focusing stack data;
the large-view-field focusing stack data generating unit specifically comprises:
convolving the large-view-field full-focusing image layer by layer according to the depth value in the depth image, and calculating an image generated by each depth layer on the corresponding focusing surface;
and overlapping the images generated by convolution of different depth layers on each focusing surface to generate the images of the large-view-field scene on the focusing surfaces, wherein the sequence of the large-view-field images formed on all the focusing surfaces is the large-view-field focusing stack data.
CN202110006480.2A 2021-01-05 2021-01-05 Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation Active CN112669355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110006480.2A CN112669355B (en) 2021-01-05 2021-01-05 Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110006480.2A CN112669355B (en) 2021-01-05 2021-01-05 Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation

Publications (2)

Publication Number Publication Date
CN112669355A CN112669355A (en) 2021-04-16
CN112669355B (en) 2023-07-25

Family

ID=75412823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110006480.2A Active CN112669355B (en) 2021-01-05 2021-01-05 Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation

Country Status (1)

Country Link
CN (1) CN112669355B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021944A (en) * 2007-03-14 2007-08-22 哈尔滨工业大学 Wavelet-function-based multi-scale micrograph segmentation processing method
CN102314683A (en) * 2011-07-15 2012-01-11 清华大学 Computational imaging method and imaging system based on nonplanar image sensor
CN102663721A (en) * 2012-04-01 2012-09-12 清华大学 Defocus depth estimation and full focus image acquisition method of dynamic scene
CN103198475A (en) * 2013-03-08 2013-07-10 西北工业大学 Full-focus synthetic aperture perspective imaging method based on multilevel iteration visualization optimization
CN104732587A (en) * 2015-04-14 2015-06-24 中国科学技术大学 Depth sensor-based method of establishing indoor 3D (three-dimensional) semantic map
CN104809187A (en) * 2015-04-20 2015-07-29 南京邮电大学 Indoor scene semantic annotation method based on RGB-D data
CN105184808A (en) * 2015-10-13 2015-12-23 中国科学院计算技术研究所 Automatic segmentation method for foreground and background of optical field image
CN106960442A (en) * 2017-03-01 2017-07-18 东华大学 Monocular-based infrared night robot vision wide-field-of-view three-dimensional construction method
CN107862698A (en) * 2017-11-29 2018-03-30 首都师范大学 Light field foreground segmentation method and device based on K mean cluster
CN109064410A (en) * 2018-10-24 2018-12-21 清华大学深圳研究生院 Light field image stitching method based on super-pixels
CN109360235A (en) * 2018-09-29 2019-02-19 中国航空工业集团公司上海航空测控技术研究所 Interactive depth estimation method based on light field data
CN109447923A (en) * 2018-09-27 2019-03-08 中国科学院计算技术研究所 Semantic scene completion system and method
CN110059662A (en) * 2019-04-26 2019-07-26 山东大学 Deep video activity recognition method and system
CN110096961A (en) * 2019-04-04 2019-08-06 北京工业大学 Superpixel-level indoor scene semantic annotation method
CN110246172A (en) * 2019-06-18 2019-09-17 首都师范大学 Light field all-in-focus image extraction method and system fusing two kinds of depth cues
CN111882487A (en) * 2020-07-17 2020-11-03 北京信息科技大学 Large-view-field light field data fusion method based on biplane translation transformation
CN111932601A (en) * 2019-09-27 2020-11-13 北京信息科技大学 Dense depth reconstruction method based on YCbCr color space light field data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180173940A1 (en) * 2016-12-19 2018-06-21 Canon Kabushiki Kaisha System and method for matching an object in captured images
US20180262744A1 (en) * 2017-02-07 2018-09-13 Mindmaze Holding Sa Systems, methods and apparatuses for stereo vision


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Using extended measurements and scene merging for efficient and robust point cloud registration; Serafin J; Robotics and Autonomous Systems; full text *
A coarse-to-fine RGB-D indoor scene semantic segmentation method; 刘天亮; 冯希龙; 顾雁秋; 戴修斌; 罗杰波; Journal of Southeast University (Natural Science Edition) (04); pp. 10-16 *
Gradient-domain-based light field all-in-focus image generation method; 苏博妮; Journal of Southwest University (Natural Science Edition) (10); pp. 179-187 *
RGB-D scene image fusion algorithm based on sparse atom fusion; 刘帆; 刘鹏远; 张峻宁; 徐彬彬; Acta Optica Sinica (01); pp. 222-231 *

Also Published As

Publication number Publication date
CN112669355A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
US9438878B2 (en) Method of converting 2D video to 3D video using 3D object models
CN108053367B (en) 3D point cloud splicing and fusion method based on RGB-D feature matching
US6671399B1 (en) Fast epipolar line adjustment of stereo pairs
JP5739584B2 (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
US9412151B2 (en) Image processing apparatus and image processing method
US9241147B2 (en) External depth map transformation method for conversion of two-dimensional images to stereoscopic images
US8363985B2 (en) Image generation method and apparatus, program therefor, and storage medium which stores the program
JP4942221B2 (en) High resolution virtual focal plane image generation method
JP4209938B2 (en) Image processing apparatus and method, image processing program, and image processor
US20110080466A1 (en) Automated processing of aligned and non-aligned images for creating two-view and multi-view stereoscopic 3d images
CN109934772B (en) Image fusion method and device and portable terminal
CN110853151A (en) Three-dimensional point set recovery method based on video
KR101853269B1 (en) Apparatus of stitching depth maps for stereo images
Wang et al. Depth from semi-calibrated stereo and defocus
CN108090877A RGB-D camera depth image repair method based on image sequence
CN110738731A (en) 3D reconstruction method and system for binocular vision
CN115115522A (en) Goods shelf commodity image splicing method and system
KR20190044439A (en) Method of stitching depth maps for stereo images
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
US8472756B2 (en) Method for producing high resolution image
CN112669355B (en) Method and system for splicing and fusing focusing stack data based on RGB-D super pixel segmentation
CN111105370A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
GB2585197A (en) Method and system for obtaining depth data
CN110827338B (en) Regional self-adaptive matching light field data depth reconstruction method
RU2690757C1 (en) System for synthesis of intermediate types of light field and method of its operation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant