CN112085653B - Parallax image splicing method based on depth of field compensation - Google Patents

Parallax image splicing method based on depth of field compensation

Info

Publication number
CN112085653B
CN112085653B (application CN202010789913.1A)
Authority
CN
China
Prior art keywords
scene
depth
image
field
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010789913.1A
Other languages
Chinese (zh)
Other versions
CN112085653A (en)
Inventor
朱策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiuzhou Electric Group Co Ltd
Original Assignee
Sichuan Jiuzhou Electric Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jiuzhou Electric Group Co Ltd filed Critical Sichuan Jiuzhou Electric Group Co Ltd
Priority to CN202010789913.1A
Publication of CN112085653A
Application granted
Publication of CN112085653B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/50 Depth or shape recovery
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a parallax image splicing method based on depth-of-field compensation, which comprises the following steps: S1: estimating the depth of field of the target scene, and segmenting the scenes at different depths of field in the images to be spliced with a Mask R-CNN image segmentation algorithm; S2: extracting feature points separately in the scenes at different depths of field, and matching the scenes at different depths of field across the images to be spliced according to the feature points; S3: for the matched scenes at different depths of field, selecting N uniform anchor points on the segmented contour of each scene and, together with the feature points matched in step S2, dividing the scene into Delaunay triangles; S4: after scale normalization of the images to be spliced, calculating the parallax offset vector between scenes at adjacent depths of field according to the matched feature points; S5: performing position compensation according to the parallax offset vector obtained in step S4, and obtaining the compensated transformation relationship from the compensated feature points.

Description

Parallax image splicing method based on depth of field compensation
Technical Field
The invention relates to a parallax image splicing method, in particular to a parallax image splicing method based on depth of field compensation.
Background
Image stitching is a technique that uses the correlation of the overlapping regions of multiple images, through a series of registration and post-processing algorithms, to obtain an image with a larger field of view; it mainly consists of a registration stage and a combination stage. The post-processing in the combination stage aims to reduce the structural distortion caused by inaccurate registration parameters and to eliminate pixel distortion in the registered images. That is, the registration stage obtains the transformation parameters that minimize the structural distortion of the transformed image, and the combination stage further removes structural and pixel distortion to a certain extent.
However, because natural scenes are complex (differences in depth of field produce occlusion relationships), images of the same scene taken from different angles or with different distances between cameras (baselines) tend to differ considerably; such images are referred to as parallax images.
Traditional image stitching algorithms are suitable for simple scenes, that is, scenes with continuously varying depth or images that strictly satisfy specific acquisition conditions. Most stitching methods treat the image as a single plane; even when the image is divided by a grid or another partitioning scheme, the depth relationships among the different scenes in the image are still not considered. Consequently, traditional stitching algorithms cannot achieve a good result on parallax images. To handle the stitching of parallax images and to address the cause of the parallax, the invention provides a method that stitches according to scene depth.
Disclosure of Invention
The invention aims to solve the technical problem that parallax images are difficult to splice, or are spliced poorly, with conventional image splicing algorithms, and provides a parallax image splicing method based on depth-of-field compensation to solve this problem.
The invention is realized by the following technical scheme:
the parallax image splicing method based on depth compensation is characterized by comprising the following steps of: s1: estimating the depth of field of a target scene, and segmenting the scene under different depth of field according to different depth of field of the image to be spliced by using a Mask R-CNN image segmentation algorithm; s2: respectively extracting feature points in scenes under different depths of field, and matching the scenes of different depths of field in the images to be spliced according to the feature points; s3: for the matched scenes with different depths of field, selecting N uniform anchor points from the segmentation contours of the different scenes, and dividing Delaunay triangles for the feature points matched with the scenes with different depths of field in combination with the step S2; s4: after the image to be spliced is subjected to scale normalization, calculating a parallax offset vector of the scene with the adjacent depth of field of the image to be spliced according to the matched characteristic points; s5: performing position compensation according to the parallax offset vector acquired in step S4, and acquiring a transform relationship after compensation according to the feature points after compensation; s6: after the transformation relation is determined, sequentially splicing different depth-of-field scenes according to the depth sequence, calculating transformation weight for each Delaunay triangle of each scene, transforming each Delaunay triangle of the reference image and the target image according to the transformation weight, and establishing a lookup table; s7: and registering the whole image through a lookup table, and then carrying out post-processing on the registered image to obtain a final spliced image.
In the prior art, existing image stitching methods do not consider the depth occlusion relationships of scenes and stitch scenes at different depths as if they lay in the same plane; when the image is divided at all, it is partitioned into grids or by other schemes whose boundaries are arbitrary and ignore the occlusion relationships of the scene, so the stitching result for parallax images is poor. Therefore, based on this observation and on the depth estimation result, the present application uses image segmentation algorithms such as Mask R-CNN to finely segment the scenes at different depths of field, and finally splices the scenes depth by depth.
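For illustration only, the following is a minimal sketch of this segmentation step in Python, assuming instance masks from torchvision's Mask R-CNN and an already computed depth map (from a stereo pair or a monocular depth network); grouping instances into a small number of depth planes by quantizing each mask's median depth is an assumption of this sketch, not a step prescribed verbatim by the application.

```python
# Hedged sketch of S1: instance masks from Mask R-CNN, grouped into depth planes.
# The depth-map source and the quantile-based grouping are assumptions.
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def segment_by_depth(image_rgb: np.ndarray, depth: np.ndarray, score_thr=0.7, n_planes=3):
    """Return one merged binary mask per approximate depth plane."""
    x = torch.from_numpy(image_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([x])[0]
    masks, med_depths = [], []
    for mask, score in zip(out["masks"], out["scores"]):
        if score < score_thr:
            continue
        m = mask[0].numpy() > 0.5
        masks.append(m)
        med_depths.append(np.median(depth[m]))        # representative depth per instance
    if not masks:
        return []
    # Quantize instance depths into n_planes bins and merge the masks of each bin.
    edges = np.quantile(med_depths, np.linspace(0, 1, n_planes + 1)[1:-1])
    bins = np.digitize(med_depths, edges)
    planes = []
    for k in range(n_planes):
        group = [m for m, b in zip(masks, bins) if b == k]
        planes.append(np.any(group, axis=0) if group else np.zeros(depth.shape, bool))
    return planes
```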
Further, in step S2 the scenes at different depths of field are matched according to the matched feature-point pairs (formula not reproduced here), where C denotes that the whole scene can be approximately divided into C depth planes, N(c) denotes that the c-th depth scene is matched to N(c) pairs of feature points, and the i-th feature of the c-th depth scene of the target image is paired with the i-th feature of the c-th depth scene of the reference image.
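The per-scene matching of step S2 can be sketched as follows; ORB features with brute-force Hamming matching restricted to each depth scene's mask are assumptions of this sketch (the application only requires that features be extracted and matched separately per depth scene).

```python
# Hedged sketch of S2: features extracted and matched inside one depth scene's mask,
# so correspondences never cross depth planes. ORB + BFMatcher are assumptions.
import cv2
import numpy as np

def match_scene(img_target, img_reference, mask_t, mask_r, max_matches=200):
    def gray(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    orb = cv2.ORB_create(2000)
    kp_t, des_t = orb.detectAndCompute(gray(img_target), (mask_t * 255).astype(np.uint8))
    kp_r, des_r = orb.detectAndCompute(gray(img_reference), (mask_r * 255).astype(np.uint8))
    if des_t is None or des_r is None:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_r), key=lambda m: m.distance)[:max_matches]
    p_t = np.float32([kp_t[m.queryIdx].pt for m in matches])   # target-image features
    p_r = np.float32([kp_r[m.trainIdx].pt for m in matches])   # reference-image features
    return p_t, p_r
```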
Further, the disparity offset vector in step S4 is obtained with a formula (not reproduced here) in which c is the index of the different depth scenes; for example, if three depth planes are estimated, then c = 0, 1, 2, with c = 0 representing the sky at infinity. The formula uses the i-th matched feature of the scene with index c in the target image; N(c) is the number of feature points matched in the scene with index c; and parallax(c) is the disparity offset vector of scene c relative to scene c-1 between the two parallax images.
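Since the formula itself is not reproduced here, the sketch below assumes one plausible reading of it: the offset of scene c relative to scene c-1 is taken as the difference between the mean feature displacements (target minus reference) of the two adjacent depth scenes after scale normalization.

```python
# Hedged sketch of S4 under the stated assumption (difference of mean displacements).
import numpy as np

def parallax_offset(p_t_c, p_r_c, p_t_prev, p_r_prev):
    """p_*_c: (N(c), 2) matched points of scene c; p_*_prev: matched points of scene c-1."""
    mean_disp_c = np.mean(p_t_c - p_r_c, axis=0)           # average displacement in scene c
    mean_disp_prev = np.mean(p_t_prev - p_r_prev, axis=0)  # average displacement in scene c-1
    return mean_disp_c - mean_disp_prev                    # assumed parallax(c)
```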
Further, the position compensation in step S5 is applied with a compensation formula, and the compensated transformation relation is obtained from a second formula (neither reproduced here). In these formulas, the i-th matched feature of the scene with index c in the target image and the position-compensated feature of the reference image in the scene with index c appear; H_c is the transformation matrix from the scene with index c in the target image to the scene with index c in the reference image before position compensation, and offset H_c is the corresponding transformation matrix after position compensation.
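A minimal sketch of step S5 follows, assuming the compensation is a translation of the reference-image feature positions by parallax(c) and that the compensated transform, offset H_c, is re-estimated as a homography from the compensated correspondences; the use of OpenCV's RANSAC homography estimator is an assumption.

```python
# Hedged sketch of S5: shift the reference features of scene c by the parallax offset,
# then re-estimate the per-scene transform from the compensated correspondences.
import cv2
import numpy as np

def compensated_transform(p_t_c, p_r_c, parallax_c):
    p_r_comp = (p_r_c + parallax_c).astype(np.float32)    # position-compensated features
    H_offset, inliers = cv2.findHomography(p_t_c.astype(np.float32), p_r_comp,
                                           cv2.RANSAC, 3.0)
    return H_offset, p_r_comp
```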
Further, each Delaunay triangle in step S6 is transformed according to a formula (not reproduced here) in which the weight required for transforming the j-th Delaunay triangle of the scene with index c between the parallax images appears; S_c is a similarity transformation of the scene with index c between the parallax images; the i-th vertices of the j-th Delaunay triangle of the scene with index c in the reference image and in the target image enter the formula, together with the corresponding i-th vertices after transformation.
The transformation weight of each Delaunay triangle π_j can be obtained from the distance between the center point of π_j and the center of its 3 nearest feature points.
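For illustration, the triangulation over contour anchor points plus matched feature points, and the distance from each triangle's center to the center of its 3 nearest feature points (the quantity the weight is based on), could be computed as below; the use of scipy is an assumption.

```python
# Hedged sketch: Delaunay triangles over anchors + features of one depth scene, and the
# per-triangle distance used for the transformation weight.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def triangles_and_distances(anchors_c, kpts_c):
    """anchors_c: (N, 2) contour anchor points; kpts_c: (M >= 3, 2) matched features."""
    pts = np.vstack([anchors_c, kpts_c])
    tri = Delaunay(pts)                                  # tri.simplices: triangle vertex indices
    centers = pts[tri.simplices].mean(axis=1)            # center point of each triangle
    _, idx = cKDTree(kpts_c).query(centers, k=3)         # 3 nearest features per center
    d = np.linalg.norm(centers - kpts_c[idx].mean(axis=1), axis=1)
    return tri, d
```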
Further, after the vertices of each Delaunay triangle π_j of each depth scene of the reference image and the target image have been transformed in step S6, a lookup table is established. In step S7, when a point of a scene at a certain depth lies inside a triangle, its transformation parameters are read directly from the lookup table and the point is transformed. After the scenes at all depths have been transformed, the final spliced image is obtained through the corresponding post-processing.
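The lookup-table idea can be illustrated as follows: once the triangle vertices have been transformed with the weighted matrices, each triangle is mapped to the affine transform of its vertices, and any later point is located in a triangle and transformed straight from the table; the concrete data layout here is an assumption.

```python
# Hedged sketch of the lookup table of S6/S7: triangle index -> 2x3 affine transform.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def build_lookup_table(tri: Delaunay, warped_pts: np.ndarray):
    """warped_pts: positions of tri.points after the weighted per-vertex transform."""
    table = {}
    for j, simplex in enumerate(tri.simplices):
        src = tri.points[simplex].astype(np.float32)
        dst = warped_pts[simplex].astype(np.float32)
        table[j] = cv2.getAffineTransform(src, dst)      # one affine matrix per triangle
    return table

def transform_point(pt, tri: Delaunay, table):
    j = int(tri.find_simplex(np.asarray(pt, dtype=float)[None, :])[0])
    if j < 0:
        return None                                      # point outside the triangulation
    return table[j] @ np.array([pt[0], pt[1], 1.0])      # transform read straight from the table
```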
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. Conventional image splicing methods are suitable for simple scenes or for images acquired under strict acquisition conditions, but their robustness on parallax images is low. The parallax image splicing method based on depth-of-field compensation analyzes the cause of the parallax and starts from the depth of field: it first segments the image into scenes according to depth of field and then splices the segmented scenes, so that existing methods can still be used while the influence of parallax is eliminated to a certain extent.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
fig. 1 is a schematic imaging diagram in the case of parallel optical axes.
FIG. 2 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Examples
As shown in fig. 1 and fig. 2, the parallax image splicing method based on depth compensation of the present invention comprises the following steps: S1: estimating the depth of field of the target scene, and segmenting the scenes at different depths of field in the images to be spliced with Mask R-CNN or another image segmentation algorithm; S2: extracting feature points separately in the scenes at different depths of field, and matching the scenes at different depths of field across the images to be spliced according to the feature points; S3: for the matched scenes at different depths of field, selecting N uniform anchor points on the segmented contour of each scene and, together with the feature points matched in step S2, dividing the scene into Delaunay triangles; S4: after scale normalization of the images to be spliced, calculating the parallax offset vector between scenes at adjacent depths of field according to the matched feature points; S5: performing position compensation according to the parallax offset vector obtained in step S4, and obtaining the compensated transformation relationship from the compensated feature points; S6: after the transformation relationship is determined, calculating transformation weights for each Delaunay triangle of the scenes at different depths of field in depth order, transforming each Delaunay triangle of the reference image and the target image according to the transformation weights, and establishing a lookup table; S7: registering the whole image through the lookup table, and then post-processing the registered image to obtain the final spliced image.
In step S2, the scenes at different depths of field are matched according to the matched feature-point pairs (formula not reproduced here), where C indicates that the whole scene can be approximately divided into C+1 depth planes, N(c) denotes that the c-th depth scene is matched to N(c) pairs of feature points, and the i-th feature of the c-th depth scene of the target image is paired with the i-th feature of the c-th depth scene of the reference image.
The disparity offset vector in step S4 is obtained with a formula (not reproduced here) in which c is the index of the different depth scenes; for example, if three depth planes are estimated, then c = 0, 1, 2, with c = 0 representing the sky at infinity. The formula uses the i-th matched feature of the scene with index c in the target image; N(c) is the number of feature points matched in the scene with index c; and parallax(c) is the disparity offset vector of scene c relative to scene c-1 between the two parallax images.
The position compensation in step S5 is applied with a compensation formula, and the compensated transformation relation is obtained from a second formula (neither reproduced here). In these formulas, the i-th matched feature of the scene with index c in the target image and the position-compensated feature of the reference image in the scene with index c appear; H_c is the transformation matrix from the scene with index c in the target image to the scene with index c in the reference image before position compensation, and offset H_c is the corresponding transformation matrix after position compensation.
In step S6, a Gaussian weight, a Student's-t weight, a Cauchy weight, or a similar weight is selected according to the conditions to calculate the transformation weight; each probability distribution has a different tail behavior and different applicable conditions, and the choice can be made by comparing test results. When calculating the weights, the distances only need to be assigned weights according to their positions on the chosen distribution's curve.
The transformation weight of each Delaunay triangle can be obtained from the distance between the center point of π_j and the center of its 3 nearest feature points, for example by computing the weight with the Cauchy distribution (weight formula not reproduced here) with γ = ||mid(anchors^(c)) - mid(kpts^(c))||_2, where kpts^(c) denotes the feature points of the scene with index c; anchors^(c) denotes the anchor points on the contour of the scene with index c; min_3(a, b) denotes the 3 elements of the set b closest to a; and mid(·) denotes the center of a point set. Other weighting functions may also be chosen, such as a Gaussian weight or a Student's-t weight.
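Because the weight expression is not reproduced here, the sketch below assumes the standard Lorentzian form 1 / (1 + (d/γ)^2), with γ and the nearest-3-feature distance d taken from the definitions above; a Gaussian weight exp(-d^2 / (2γ^2)) could be swapped in the same way.

```python
# Hedged sketch of a Cauchy-style transformation weight (assumed Lorentzian form).
import numpy as np

def cauchy_weight(tri_center, kpts_c, anchors_c):
    gamma = np.linalg.norm(anchors_c.mean(axis=0) - kpts_c.mean(axis=0))  # stated gamma definition
    nearest3 = kpts_c[np.argsort(np.linalg.norm(kpts_c - tri_center, axis=1))[:3]]
    d = np.linalg.norm(tri_center - nearest3.mean(axis=0))                # distance to their center
    return 1.0 / (1.0 + (d / gamma) ** 2)                                 # assumed weight form
```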
The lookup table used in step S7 is established after the vertices of each Delaunay triangle π_j in step S6 have been transformed; when a point of the image lies inside a triangle, its transformation parameters are obtained directly from the lookup table and the point is then transformed. The lookup table is built as follows: because the transformation process first transforms the vertices of each Delaunay triangle with the weighted transformation matrix, a lookup table is established that maps each triangle (its vertices) to its transformation matrix. For any point of the image it is then only necessary to determine which triangle it lies in and to transform it directly with that triangle's matrix, so the weighted transformation matrix never has to be recomputed and the running time is greatly reduced.
After the weights are calculated, each Delaunay triangle of the scene with index c is transformed according to a formula (not reproduced here) in which the weight required for transforming the j-th Delaunay triangle of the scene with index c between the parallax images appears; S_c is a similarity transformation of the scene with index c between the parallax images; the i-th vertices of the j-th Delaunay triangle of the scene with index c in the reference image and in the target image enter the formula, together with the corresponding i-th vertices after transformation.
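Once each triangle's original and transformed vertices are known, warping one depth scene triangle by triangle reduces to a piecewise-affine warp; the sketch below copies, for every triangle, only the pixels inside it. The computation of the weighted vertex positions themselves is as described above and is assumed to have been done beforehand.

```python
# Hedged sketch of warping a depth scene triangle by triangle (piecewise-affine warp).
import cv2
import numpy as np

def warp_triangles(image, src_tris, dst_tris, out_shape):
    """src_tris, dst_tris: (T, 3, 2) vertex arrays before / after the weighted transform."""
    out = np.zeros(out_shape, dtype=image.dtype)
    h, w = out_shape[:2]
    for src, dst in zip(src_tris, dst_tris):
        A = cv2.getAffineTransform(src.astype(np.float32), dst.astype(np.float32))
        warped = cv2.warpAffine(image, A, (w, h))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, dst.astype(np.int32), 1)  # pixels inside the warped triangle
        sel = mask.astype(bool)
        out[sel] = warped[sel]
    return out
```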
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only examples of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. The parallax image splicing method based on depth compensation is characterized by comprising the following steps of:
S1: estimating the depth of field of the target scene, and segmenting the scenes at different depths of field in the images to be spliced with a Mask R-CNN image segmentation algorithm;
S2: extracting feature points separately from the scenes at different depths of field, and matching the scenes at different depths of field in the images to be spliced according to the feature points;
S3: for the matched scenes with different depths of field, selecting N uniform anchor points in the outline of the scene, and dividing Delaunay triangles for the feature points matched with the scenes with different depths of field in combination with the step S2;
S4: after scale normalization of the images to be spliced, calculating the parallax offset vector between scenes at adjacent depths of field according to the matched feature points; the disparity offset vector is obtained with a formula (not reproduced here) in which c is the index of the scenes at different depths; if, for example, three depth planes are estimated from the depths, then c = 0, 1, 2, with c = 0 representing the sky at infinity, and C denotes that the whole image can be approximately divided into C depth scenes; the i-th matched feature of the scene with index c in the target image enters the formula; N(c) is the number of feature points matched in the scene with index c; and parallax(c) is the disparity offset vector of scene c relative to scene c-1 between the two parallax images;
S5: performing position compensation according to the parallax offset vector obtained in step S4, and obtaining the compensated transformation relation from the compensated feature points; the position compensation is applied for c = 1, ..., C with a compensation formula, and the compensated transformation relation is obtained from a second formula (neither reproduced here); in these formulas, the i-th feature of the c-th depth scene of the reference image and the position-compensated feature of the reference image in the scene with index c appear; H_c is the transformation matrix from the scene with index c in the target image to the scene with index c in the reference image before position compensation, and offset H_c is the transformation matrix from the scene with index c in the target image to the scene with index c in the reference image after position compensation;
S6: after the transformation relation is determined, calculating a transformation weight for the vertices of each Delaunay triangle of the scenes at different depths of field in depth order, transforming each Delaunay triangle of the scenes at different depths of field of the reference image and the target image according to the transformation weight, and establishing a lookup table; each Delaunay triangle is transformed according to a formula (not reproduced here) in which the weight required for transforming the j-th Delaunay triangle of the scene with index c between the parallax images appears; S_c is a similarity transformation of the scene with index c between the parallax images; the i-th vertices of the j-th Delaunay triangle of the scene with index c in the reference image and in the target image enter the formula, together with the corresponding i-th vertices after transformation;
S7: registering the whole image through the lookup table, and then post-processing the registered image to obtain the final spliced image, specifically: the scenes are transformed in depth order according to the lookup table; when a point of a scene at a certain depth lies inside a Delaunay triangle of the lookup table, its transformation parameters are obtained directly from the lookup table and the point is transformed; the transformed images of the same depth scene of the reference image and the target image are fused, and the fused images of the different depths are directly superimposed to obtain the final spliced image.
2. The parallax image splicing method based on depth-of-field compensation according to claim 1, wherein in step S1 the depth of the scenes in the image is predicted from the camera imaging principle, given the baseline and the camera parameters and assuming parallel optical axes, or is predicted with the Monodepth2 deep-learning algorithm, and the scenes at different depths of field are segmented according to the depth relationship.
3. The parallax image splicing method based on depth-of-field compensation according to claim 1, wherein in step S2 the scenes at different depths of field are matched according to the matched feature-point pairs (formula not reproduced here).
4. The parallax image splicing method based on depth-of-field compensation according to claim 1, wherein in step S3 anchor points are selected on the contour of each segmented scene, and Delaunay triangles are divided over the anchor points and the feature points matched in the different scenes.
5. The parallax image splicing method based on depth-of-field compensation according to claim 1, wherein the transformation weight of each Delaunay triangle can be obtained from the distance between the center point of π_j and the center of its 3 nearest feature points, the weights being calculated with the Cauchy distribution (weight formula not reproduced here) with γ = ||mid(anchors^(c)) - mid(kpts^(c))||_2, where kpts^(c) denotes the feature points of the scene with index c, anchors^(c) denotes the anchor points on the contour of the scene with index c, min_3(a, b) denotes the 3 elements of the set b closest to a, and mid(·) denotes the center of a point set; after the weight of each Delaunay triangle π_j has been calculated, the vertices of the Delaunay triangle are transformed according to the weight and the lookup table is established.
CN202010789913.1A 2020-08-07 2020-08-07 Parallax image splicing method based on depth of field compensation Active CN112085653B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010789913.1A (CN112085653B) | 2020-08-07 | 2020-08-07 | Parallax image splicing method based on depth of field compensation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010789913.1A (CN112085653B) | 2020-08-07 | 2020-08-07 | Parallax image splicing method based on depth of field compensation

Publications (2)

Publication Number | Publication Date
CN112085653A (en) | 2020-12-15
CN112085653B (en) | 2022-09-16

Family

ID=73734855

Family Applications (1)

Application Number | Title
CN202010789913.1A (Active, CN112085653B) | Parallax image splicing method based on depth of field compensation

Country Status (1)

Country Link
CN (1) CN112085653B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062866A (en) * 2019-11-07 2020-04-24 广西科技大学鹿山学院 Transformation matrix-based panoramic image splicing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734657B (en) * 2018-04-26 2022-05-03 重庆邮电大学 Image splicing method with parallax processing capability
CN111062873B (en) * 2019-12-17 2021-09-24 大连理工大学 Parallax image splicing and visualization method based on multiple pairs of binocular cameras
CN111275750B (en) * 2020-01-19 2022-05-13 武汉大学 Indoor space panoramic image generation method based on multi-sensor fusion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062866A (en) * 2019-11-07 2020-04-24 广西科技大学鹿山学院 Transformation matrix-based panoramic image splicing method

Also Published As

Publication number Publication date
CN112085653A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN110853151B (en) Video-based three-dimensional point set recovery method
WO2018171008A1 (en) Specular highlight area restoration method based on light field image
CN108460792B (en) Efficient focusing stereo matching method based on image segmentation
KR20110014067A (en) Method and system for transformation of stereo content
KR20130120730A (en) Method for processing disparity space image
KR101853269B1 (en) Apparatus of stitching depth maps for stereo images
CN113538569A (en) Weak texture object pose estimation method and system
CN110120013A (en) A kind of cloud method and device
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
CN104038752B (en) Multi-view point video rectangular histogram color correction based on three-dimensional Gaussian mixed model
KR20190044439A (en) Method of stitching depth maps for stereo images
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN112085653B (en) Parallax image splicing method based on depth of field compensation
Ajith et al. Dark Channel Prior based Single Image Dehazing of Daylight Captures
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras
CN114782507B (en) Asymmetric binocular stereo matching method and system based on unsupervised learning
CN113674407B (en) Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image
Sun et al. Seamless view synthesis through texture optimization
CN116433760A (en) Underwater navigation positioning system and method
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
CN114694204A (en) Social distance detection method and device, electronic equipment and storage medium
CN113225484A (en) Method and device for rapidly acquiring high-definition picture shielding non-target foreground
CN112288669A (en) Point cloud map acquisition method based on light field imaging
CN111862184A (en) Light field camera depth estimation system and method based on polar image color difference
CN113344997B (en) Method and system for rapidly acquiring high-definition foreground image only containing target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant