US11350073B2 - Disparity image stitching and visualization method based on multiple pairs of binocular cameras - Google Patents
- Publication number
- US11350073B2 (application US 17/283,119)
- Authority
- US
- United States
- Prior art keywords
- image
- disparity
- point
- images
- binocular
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/15—Processing image signals for colour aspects of image signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
Definitions
- the present invention belongs to the field of image processing and computer vision, and particularly relates to a method that calculates a homography matrix between images from the external parameters between cameras (a rotation vector R and a translation vector T), finds an optimal stitching seam between images by graph cut, stitches disparity images using R, T, the homography matrix and an optimal transition area, and finally fuses the disparity images with visible light images for display.
- disparity images are also used as basic data in the field of driverless technology.
- the field angle of a binocular camera is small, and a single pair of binocular cameras cannot provide sufficient environmental information for the ego vehicle. The larger a vehicle's field angle, the more complete the obtained information and the stronger the guarantee of driving safety.
- the following two methods are mainly used for stitching disparity images:
- the first method extracts feature matching points between images, solves for the rotation vector R and translation vector T between cameras, and then stitches the disparity images according to R and T.
- its advantages are a relatively good stitching effect and flexible use in most scenes; its disadvantage is high computational complexity, so it cannot meet the strict real-time requirements of driverless technology.
- the second method obtains the external parameters R and T between cameras by using checkerboards, and then stitches the disparity images.
- this method involves little stitching computation and offers high real-time performance, but stitching seams are easily produced during disparity image stitching, which makes the stitching effect poor.
- the disparity image stitching process is divided into two processes: camera coordinate transformation and image coordinate transformation.
- the transformation of a camera coordinate system requires the internal parameters K of the cameras and the external parameters R and T between cameras, computed in a three-dimensional coordinate system; the transformation of an image coordinate system requires the homography matrix H between camera images and an optimal transition area of the visible light images for stitching.
- An image coordinate system transformation process requires pre-registration, and it takes a lot of time to calculate the external parameters and the homography matrix between cameras by matching feature points.
- the present invention provides a disparity image stitching and visualization method based on multiple pairs of binocular cameras: the homography matrix between images is pre-solved from the prior information (i.e., the positional relationship R and T between cameras); the traditional graph cut algorithm is improved to increase its efficiency and is then used for stitching the disparity images; and the disparity images are fused with visible light images to make the depth of the environment more convenient to observe.
- a stitching process requires image information and depth image information obtained by each binocular camera.
- a disparity image stitching and visualization method based on multiple pairs of binocular cameras comprising the following steps:
- the internal parameters K include a focal length focus and optical center coordinates Cx, Cy;
- the external parameters include a rotation matrix R and a translation vector T; obtaining a baseline length baseline of each binocular camera by calibration; and obtaining visible light images and disparity images of two binocular cameras;
- using formula (1), i.e. Nᵀ·C1/d = 1, formula (2) is further expressed as:
- C2 = (R + T·Nᵀ/d)·C1 (3)
- R and T are respectively a rotation vector and a translation vector from the first binocular camera to the second binocular camera;
- H = [a11 a12 a13; a21 a22 a23; a31 a32 a33] (8)
- c 1 is a corresponding coordinate of C 1 in the coordinate system of an imaging plane
- c 2 is a corresponding coordinate of C 2 in the coordinate system of the imaging plane
- K 1 is the internal parameters of the first binocular camera
- K 2 is the internal parameters of the second binocular camera
- the finally obtained transformation matrix H is a 3×3 matrix, and a11-a33 represent its specific values (a computation sketch is given below).
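For concreteness, the homography above can be assembled directly from the calibration data. The following minimal numpy sketch (the function name and the default plane normal N = (0, 0, 1)ᵀ are illustrative assumptions) follows the document's formulas H′ = R + T·Nᵀ/d and H = K1·H′·K2⁻¹:

```python
import numpy as np

def homography_from_extrinsics(K1, K2, R, T, d, N=None):
    """Plane-induced homography between two camera images.

    Follows the formulas in this document: H' = R + T*N^T/d (from
    eqs. (1)-(3)) and H = K1 * H' * K2^(-1); d, the distance of the
    imaging plane, is set manually (see step 2-2).
    """
    if N is None:
        N = np.array([0.0, 0.0, 1.0])      # assumed imaging-plane normal
    T = np.asarray(T, dtype=float).reshape(3, 1)
    N = np.asarray(N, dtype=float).reshape(1, 3)
    H_prime = R + (T @ N) / d              # homography in camera coordinates
    H = K1 @ H_prime @ np.linalg.inv(K2)   # transform to the imaging plane
    return H / H[2, 2]                     # scale so that a33 = 1
```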
- Step 3) using the internal parameters of the binocular cameras and the external parameters between the binocular cameras obtained in step 1) and step 2) to perform camera coordinate system transformation of the disparity images; and the specific steps are as follows:
- Z1 = (baseline1 × focus1) / disparity1 (9)
- X1 = ((x1 − Cx) × baseline1) / disparity1 (10)
- Y1 = ((y1 − Cy) × focus1) / disparity1 (11)
- x1 and y1 are the pixel coordinates of the first binocular camera; disparity1 is the corresponding disparity value; a code sketch of this back-projection follows below;
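A minimal numpy sketch of this back-projection (names are illustrative; formulas (9)-(11) are reproduced as written above, including the focus term in Y):

```python
import numpy as np

def disparity_to_camera_coords(disparity, focus, cx, cy, baseline):
    """Back-project a disparity image into camera coordinates, eqs. (9)-(11)."""
    h, w = disparity.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    d = np.where(disparity > 0, disparity, np.inf)  # holes map to the origin
    Z = baseline * focus / d                        # eq. (9)
    X = (x - cx) * baseline / d                     # eq. (10)
    Y = (y - cy) * focus / d                        # eq. (11), as written
    return np.stack([X, Y, Z], axis=-1)             # h x w x 3 point map
```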
- Step 4) building overlapping area model: using the homography matrix H between images obtained in step 2) to calculate an overlapping area ROI of images, and modeling the overlapping area; and the specific steps of building a mathematical model are as follows:
- e(·) is a weight function
- p is a source image
- q is a target image
- p is the pixel value of one point in the image p
- p′ is the pixel value of a p adjacent point
- q is the pixel value of one point in the target image
- q′ is the pixel value of a q adjacent point
- R p is the value of R channel at p point
- R p′ is the value of R channel at p′ point
- G p is the value of G channel at p point
- G p′ is the value of G channel at p′ point
- B p is the value of B channel at p point
- B p′ is the value of B channel at p′ point
- R q is the value of R channel at q point
- R q′ is the value of R channel at q′ point
- G q is the value of G channel at q point
- G q′ is the value of G channel at q′ point
- B q is the value of B channel at q point
- B q′ is the value of B channel at q′ point; a sketch of this weight computation is given below
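A hedged per-pixel helper for the weight above (illustrative names; the document applies this to every pair of neighbouring pixels in the overlapping area):

```python
import numpy as np

def edge_weight(p, p_adj, q, q_adj):
    """Weight e(p,q) of eq. (14): the sum of squared per-channel differences
    between each point and its neighbour in the source (p) and target (q)
    images, per eqs. (15) and (16). Inputs are RGB triples."""
    norm_p = np.sum((np.asarray(p, float) - np.asarray(p_adj, float)) ** 2)
    norm_q = np.sum((np.asarray(q, float) - np.asarray(q_adj, float)) ** 2)
    return norm_p + norm_q
```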
- Step 5) dividing each image into blocks with a size of B1×B2; taking the divided blocks as nodes of a graph and performing graph cut to find a local optimal solution; then further dividing the nodes that lie on the current optimal stitching line and repeating, until the block size shrinks to a single pixel; the global optimal solution is thus approximated by the sequence of local optimal solutions (see the sketch below);
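A sketch of one coarse pass of this block-wise seam search, using the third-party PyMaxflow library for the min-cut (an assumed choice; the patent does not name an implementation). The refinement loop that re-divides the blocks crossed by the seam down to single pixels is summarized in a comment:

```python
import numpy as np
import maxflow  # PyMaxflow, assumed available for the min-cut step

def block_seam_labels(src, dst, block=8):
    """One coarse graph-cut pass over block nodes of size block x block.

    src/dst are the two H x W x 3 images over the overlapping area.
    Returns a boolean grid: False = take the block from src, True = from dst.
    The document repeats this on the blocks the seam crosses, shrinking
    the block size until it reaches one pixel (refinement omitted here).
    """
    diff = np.sum((src.astype(float) - dst.astype(float)) ** 2, axis=2)
    bh, bw = diff.shape[0] // block, diff.shape[1] // block
    cost = diff[:bh * block, :bw * block] \
        .reshape(bh, block, bw, block).sum(axis=(1, 3))  # per-block cost

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((bh, bw))
    g.add_grid_edges(nodes, weights=cost, symmetric=True)  # neighbour edges
    big = cost.sum() + 1.0                      # effectively infinite capacity
    src_cap = np.zeros((bh, bw)); src_cap[:, 0] = big     # left edge -> src
    snk_cap = np.zeros((bh, bw)); snk_cap[:, -1] = big    # right edge -> dst
    g.add_grid_tedges(nodes, src_cap, snk_cap)
    g.maxflow()
    return g.get_grid_segments(nodes)
```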
- Step 6) using the homography matrix H to perform image coordinate system transformation of the disparity images, and stitching seamlessly along the optimal stitching line from step 5); the specific steps of disparity image stitching are as follows:
- w·(x2, y2, 1)ᵀ = H·(x1, y1, 1)ᵀ (18)
- x 1 and y 1 are the coordinates in the image coordinate system of the first binocular camera
- x 2 and y 2 are the coordinates in the image coordinate system of the second binocular camera
- w is a normalization coefficient
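A sketch of this per-point mapping (illustrative helper; for whole images, OpenCV's cv2.warpPerspective applies the same transformation to every pixel):

```python
import numpy as np

def warp_point(H, x1, y1):
    """Map a pixel of the first image into the second image via eq. (18):
    w*(x2, y2, 1)^T = H*(x1, y1, 1)^T, then divide by the normalization
    coefficient w to return to inhomogeneous pixel coordinates."""
    v = H @ np.array([x1, y1, 1.0])
    return v[0] / v[2], v[1] / v[2]
```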
- stitching images: comparing the positions of the first binocular image after image coordinate system transformation and of the second binocular image according to the optimal stitching seam, and merging the two visible light images and the two disparity images respectively;
- Step 7) adding the stitched disparity information to the visible light images; and the specific steps are as follows:
- k is a weight coefficient
- the present invention has the following beneficial effects: the present invention realizes large-field-angle panoramic disparity image display; the algorithm of the present invention achieves real-time performance, and realizes large-disparity seamless panoramic disparity image stitching and visualization.
- the present invention has the following advantages: (1) the program has low memory and hardware requirements, and can achieve real-time performance on an Nvidia TX2; (2) the program is simple and easy to implement; (3) once obtained, the prior information can be passed in directly as default parameter values; (4) the optimal stitching seam obtained from the images is applied to disparity image stitching to achieve seamless stitching; and (5) the disparity image information is superimposed on the visible light images.
- the present invention makes full use of the prior information of the images and reduces the time of image registration.
- the proposed method has good scalability: panoramic display from multiple pairs of cameras can be realized by simply inputting R, T and the internal parameters K of the cameras and manually setting the d value; and the disparity image information is superimposed on the visible light images to display the depth information of the environment more intuitively.
- FIG. 1 is a flow chart of the present invention.
- FIG. 2 is a system structure diagram of binocular cameras of an embodiment of the present invention.
- the present invention proposes a disparity image stitching and visualization method based on multiple pairs of binocular cameras, and will be described in detail below in combination with drawings and embodiments.
- the present invention uses multiple pairs of horizontally placed binocular cameras as an imaging system to perform multi-viewpoint image collection, wherein K 1 is the internal parameters of the first binocular camera, and K 2 is the internal parameters of the second binocular camera.
- the resolution of each binocular camera is 1024×768, the video frame rate is greater than 20 frames per second, and a system reference structure is shown in FIG. 2 .
- the spatial transformation relationship R and T between each pair of binocular cameras is calculated on this basis, and the homography matrix H between images is calculated through R, T and the distance d of the imaging plane; the horizontal translation of each image is calculated by taking an intermediate image as a benchmark; and finally, the calculated parameters are used as inputs for stitching and visualization.
- the specific process is as follows:
- Z1 = (baseline1 × focus1) / disparity1
- X1 = ((x1 − Cx) × baseline1) / disparity1
- Y1 = ((y1 − Cy) × focus1) / disparity1
- disparity2 = (baseline2 × focus2) / Z2
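Putting these four formulas together, a hedged numpy sketch of the camera coordinate system transformation of step 3 (illustrative names: back-project with eqs. (9)-(11), move the points with eq. (2), and recover the second camera's disparity):

```python
import numpy as np

def transform_disparity(disp1, K1, K2, R, T, baseline1, baseline2):
    """Re-express the first camera's disparity image in the second camera."""
    f1, cx1, cy1 = K1[0, 0], K1[0, 2], K1[1, 2]
    f2 = K2[0, 0]
    h, w = disp1.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp1 > 0
    d = np.where(valid, disp1, np.inf)              # holes map to the origin
    C1 = np.stack([(x - cx1) * baseline1 / d,       # X1
                   (y - cy1) * f1 / d,              # Y1 (as written above)
                   baseline1 * f1 / d])             # Z1
    C2 = R @ C1.reshape(3, -1) + np.asarray(T, float).reshape(3, 1)  # eq. (2)
    Z2 = C2[2].reshape(h, w)
    disp2 = baseline2 * f2 / np.maximum(Z2, 1e-9)   # disparity2 formula
    return np.where(valid & (Z2 > 0), disp2, 0.0)
```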
- x is the x-coordinate of the source image p point after perspective transformation
- y is the y-coordinate of the source image p point after perspective transformation
- u is the x-coordinate of the source image p point
- v is the y-coordinate of the source image p point;
- pi is set to be a very large number
- stitching images: comparing the positions of the disparity image after image coordinate system transformation and of the intermediate disparity image according to the optimal stitching seam, and merging the two disparity images.
- k is a weight coefficient; when k is relatively large (0.5 to 1), the visible light information can be observed more clearly; when k is relatively small (0 to 0.5), more depth information can be observed.
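A minimal OpenCV sketch of this fusion (the colormap choice is an assumption; the patent only specifies blending a colour-rendered disparity image with the visible light image by eq. (19)):

```python
import cv2
import numpy as np

def fuse_disparity(visible_bgr, disparity, k=0.5):
    """Fused image = k*visible light image + (1-k)*color image, eq. (19).

    visible_bgr is an H x W x 3 uint8 image; larger k shows the scene
    more clearly, smaller k shows more depth, as noted above.
    """
    d8 = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    color = cv2.applyColorMap(d8, cv2.COLORMAP_JET)  # disparity rendered in colour
    return cv2.addWeighted(visible_bgr, k, color, 1.0 - k, 0)
```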
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Nᵀ·C1 = d (1)
Wherein C1 is the three-dimensional coordinate of a three-dimensional point P in the coordinate system of the first binocular camera, and the coordinate of the three-dimensional point P in the coordinate system of the second binocular camera is C2, then the relationship between C1 and C2 is:
C2 = R·C1 + T (2)
Wherein R and T are respectively a rotation vector and a translation vector from the first binocular camera to the second binocular camera;
c1 = K1·C1 (4)
c2 = K2·C2 (5)
It can be obtained from formulas (3), (4) and (5) that:
Finally, a calculation formula of the homography matrix calculated by the internal parameters and the external parameters is obtained:
Wherein c1 is the corresponding coordinate of C1 in the coordinate system of the imaging plane, and c2 is the corresponding coordinate of C2 in the coordinate system of the imaging plane; K1 is the internal parameters of the first binocular camera; K2 is the internal parameters of the second binocular camera; and the finally obtained transformation matrix H is a 3×3 matrix, and a11-a33 represent its specific values.
Wherein x1 and y1 are the pixel coordinates of the first binocular camera, and disparity1 is the corresponding disparity value;
e(p,q) = ∥p − p′∥ + ∥q − q′∥ (14)
∥p − p′∥ = (Rp − Rp′)² + (Gp − Gp′)² + (Bp − Bp′)² (15)
∥q − q′∥ = (Rq − Rq′)² + (Gq − Gq′)² + (Bq − Bq′)² (16)
E(f) = Σ(p,q)∈N Sp,q(lp, lq) + Σp∈P Dp(lp) (17)
Wherein Sp,q is a smoothing term representing the cost of assigning a pair of pixels (p, q) in the overlapping area to (lp, lq), lp is a label assigned to the pixel p, lq is a label assigned to the pixel q, and Dp is a data term representing the cost of marking the pixel p in the overlapping area as lp;
Wherein x1 and y1 are the coordinates in the image coordinate system of the first binocular camera; x2 and y2 are the coordinates in the image coordinate system of the second binocular camera; and w is a normalization coefficient;
Fused image = k × visible light image + (1 − k) × color image (19)
Nᵀ·C1 = d
Wherein C1 is the coordinate of a three-dimensional point P in the coordinate system of the first camera, and the coordinate of the three-dimensional point P in the coordinate system of the second camera is C2; then the relationship between the two is:
2-2) The homography matrix obtained in step 2-1) is expressed in the coordinate system of the first camera, and needs to be transformed into the coordinate system of the imaging plane:
c1 = K1·C1
c2 = K2·C2
H = K1·H′·K2⁻¹
The value of d in the above formula can be set manually, and the rest are fixed values. In this way, the homography matrix H from the first binocular camera to the second binocular camera is obtained.
Calculating overlapping area of images and solving optimal stitching seam by modeling: first, calculating an overlapping area ROI through the homography matrix between images, and then building an overlapping area model; and the specific steps are as follows:
Wherein x is the x-coordinate of the source image p point after perspective transformation, y is the y-coordinate of the source image p point after perspective transformation, u is the x-coordinate of the source image p point, and v is the y-coordinate of the source image p point;
Wherein the data term Dp(lp) represents the assigned value of pixel p in the overlapping area:
Sp,q(lp, lq) = I*(p) + I*(q)
I*(p) = ∥I0(·) − I1(·)∥²
Fused image = k × visible light image + (1 − k) × color image
Claims (10)
Nᵀ·C1 = d (1)
C2 = R·C1 + T (2)
c1 = K1·C1 (4)
c2 = K2·C2 (5)
e(p,q) = ∥p − p′∥ + ∥q − q′∥ (14)
∥p − p′∥ = (Rp − Rp′)² + (Gp − Gp′)² + (Bp − Bp′)² (15)
∥q − q′∥ = (Rq − Rq′)² + (Gq − Gq′)² + (Bq − Bq′)² (16)
E(f) = Σ(p,q)∈N Sp,q(lp, lq) + Σp∈P Dp(lp) (17)
Fused image = k × visible light image + (1 − k) × color image (19)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911304513.0A CN111062873B (en) | 2019-12-17 | 2019-12-17 | A Parallax Image Mosaic and Visualization Method Based on Multiple Pairs of Binocular Cameras |
| CN201911304513.0 | 2019-12-17 | ||
| PCT/CN2020/077957 WO2021120407A1 (en) | 2019-12-17 | 2020-03-05 | Parallax image stitching and visualization method based on multiple pairs of binocular cameras |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220046218A1 (en) | 2022-02-10 |
| US11350073B2 (en) | 2022-05-31 |
Family
ID=70302062
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/283,119 Active US11350073B2 (en) | 2019-12-17 | 2020-03-05 | Disparity image stitching and visualization method based on multiple pairs of binocular cameras |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11350073B2 (en) |
| CN (1) | CN111062873B (en) |
| WO (1) | WO2021120407A1 (en) |
Families Citing this family (31)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111915482B (en) * | 2020-06-24 | 2022-08-05 | 福建(泉州)哈工大工程技术研究院 | Image splicing method suitable for fixed scene |
| CN111982058A (en) * | 2020-08-04 | 2020-11-24 | 北京中科慧眼科技有限公司 | Distance measurement method, system and equipment based on binocular camera and readable storage medium |
| CN112085653B (en) * | 2020-08-07 | 2022-09-16 | 四川九洲电器集团有限责任公司 | Parallax image splicing method based on depth of field compensation |
| CN112396562B (en) * | 2020-11-17 | 2023-09-05 | 中山大学 | A Disparity Map Enhancement Method Based on RGB and DVS Image Fusion in High Dynamic Range Scene |
| CN112363682B (en) * | 2020-11-19 | 2024-01-30 | 北京华建纵横科技有限公司 | Spliced display screen image display processing method, device and system and computer readable storage medium |
| CN112419379B (en) * | 2020-11-30 | 2024-10-25 | 北京农业智能装备技术研究中心 | Multi-channel image matching method and device for multi-spectral camera |
| CN114693794A (en) * | 2020-12-25 | 2022-07-01 | 瑞芯微电子股份有限公司 | Calibration method, depth imaging method, structured light module and complete machine |
| CN113100941B (en) * | 2021-04-12 | 2022-03-08 | 中国科学院苏州生物医学工程技术研究所 | Image registration method and system based on SS-OCT (scanning and optical coherence tomography) surgical navigation system |
| CN113848884B (en) * | 2021-09-07 | 2023-05-05 | 华侨大学 | Unmanned engineering machinery decision method based on feature fusion and space-time constraint |
| CN113963052B (en) * | 2021-09-22 | 2023-08-18 | 西安交通大学 | A real-time volume monitoring method for large aerostats based on binocular vision |
| CN113822949B (en) * | 2021-11-22 | 2022-02-11 | 湖南中腾结构科技集团有限公司 | Method, device and readable storage medium for calibrating binocular camera |
| CN114331839B (en) * | 2021-12-24 | 2025-09-02 | 梅卡曼德(雄安)机器人科技股份有限公司 | Multi-image fast stitching method |
| CN114022692A (en) * | 2022-01-06 | 2022-02-08 | 杭州灵西机器人智能科技有限公司 | Efficient and accurate error data representation method and terminal |
| CN114359365B (en) * | 2022-01-11 | 2024-02-20 | 合肥工业大学 | Convergence type binocular vision measuring method with high resolution |
| CN115965677A (en) * | 2022-03-24 | 2023-04-14 | 张国流 | Three-dimensional reconstruction method and equipment based on bionic stereoscopic vision and storage medium |
| CN115112024B (en) * | 2022-05-31 | 2023-09-26 | 江苏濠汉信息技术有限公司 | Algorithm for texture positioning in wire length measurement process |
| CN114936971B (en) * | 2022-06-08 | 2024-11-12 | 浙江理工大学 | A method and system for stitching multispectral images of unmanned aerial vehicle remote sensing for water areas |
| CN115131213B (en) * | 2022-07-27 | 2024-12-10 | 成都市晶林科技有限公司 | A real-time infrared binocular image stitching method and system |
| CN115375681B (en) * | 2022-10-24 | 2023-02-03 | 常州铭赛机器人科技股份有限公司 | Large-size target measuring method based on image splicing |
| CN115731303B (en) * | 2022-11-23 | 2023-10-27 | 江苏濠汉信息技术有限公司 | Large-span transmission conductor sag three-dimensional reconstruction method based on bidirectional binocular vision |
| CN115761007A (en) * | 2022-11-28 | 2023-03-07 | 元橡科技(北京)有限公司 | A real-time binocular camera self-calibration method |
| CN116823632B (en) * | 2023-01-31 | 2025-09-16 | 北京工业大学 | Turbid underwater fish data set acquisition and construction method |
| CN116051658B (en) * | 2023-03-27 | 2023-06-23 | 北京科技大学 | Camera hand-eye calibration method and device for target detection based on binocular vision |
| CN116168066B (en) * | 2023-04-25 | 2023-07-21 | 河海大学 | Building three-dimensional point cloud registration preprocessing method based on data analysis |
| CN116993709B (en) * | 2023-08-17 | 2025-05-30 | 浙江工业大学 | Wire cake defect detection method based on improved LoFTR algorithm |
| CN117291804B (en) * | 2023-09-28 | 2024-09-13 | 武汉星巡智能科技有限公司 | Binocular image real-time splicing method, device and equipment based on weighted fusion strategy |
| CN117876647B (en) * | 2024-03-13 | 2024-05-28 | 大连理工大学 | Image stitching method based on binocular vision and multi-scale homography regression |
| CN118015237B (en) * | 2024-04-09 | 2024-06-21 | 松立控股集团股份有限公司 | Multi-view image stitching method and system based on global similarity optimal seam |
| CN118505816B (en) * | 2024-05-21 | 2024-12-03 | 中国汽车工程研究院股份有限公司 | Visual calibration and parameter correction method and system |
| CN119554958B (en) * | 2024-11-21 | 2025-09-26 | 中国科学院长春光学精密机械与物理研究所 | Nonlinear guide rail splicing instrument precision and space position calibration method based on optical flat detection |
| CN120374737B (en) * | 2025-06-26 | 2025-09-19 | 湘江实验室 | A three-dimensional positioning method for transparent cell culture dishes based on pure binocular vision |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160269717A1 (en) | 2015-03-12 | 2016-09-15 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and recording medium |
| CN107767339A (en) | 2017-10-12 | 2018-03-06 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method |
| CN108470324A (en) | 2018-03-21 | 2018-08-31 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method of robust |
| CN109978760A (en) | 2017-12-27 | 2019-07-05 | 杭州海康威视数字技术股份有限公司 | A kind of image split-joint method and device |
| US10373362B2 (en) * | 2017-07-06 | 2019-08-06 | Humaneyes Technologies Ltd. | Systems and methods for adaptive stitching of digital images |
| US20190313070A1 (en) | 2016-11-23 | 2019-10-10 | Réalisations Inc. Montreal | Automatic calibration projection system and method |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105678687A (en) * | 2015-12-29 | 2016-06-15 | 天津大学 | Stereo image stitching method based on content of images |
| US10313584B2 (en) * | 2017-01-04 | 2019-06-04 | Texas Instruments Incorporated | Rear-stitched view panorama for rear-view visualization |
| CN106886979B (en) * | 2017-03-30 | 2020-10-20 | 深圳市未来媒体技术研究院 | Image splicing device and image splicing method |
2019
- 2019-12-17 CN CN201911304513.0A patent/CN111062873B/en active Active
2020
- 2020-03-05 US US17/283,119 patent/US11350073B2/en active Active
- 2020-03-05 WO PCT/CN2020/077957 patent/WO2021120407A1/en not_active Ceased
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160269717A1 (en) | 2015-03-12 | 2016-09-15 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and recording medium |
| US10027949B2 (en) * | 2015-03-12 | 2018-07-17 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and recording medium |
| US20190313070A1 (en) | 2016-11-23 | 2019-10-10 | Réalisations Inc. Montreal | Automatic calibration projection system and method |
| US10373362B2 (en) * | 2017-07-06 | 2019-08-06 | Humaneyes Technologies Ltd. | Systems and methods for adaptive stitching of digital images |
| CN107767339A (en) | 2017-10-12 | 2018-03-06 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method |
| CN109978760A (en) | 2017-12-27 | 2019-07-05 | 杭州海康威视数字技术股份有限公司 | A kind of image split-joint method and device |
| CN108470324A (en) | 2018-03-21 | 2018-08-31 | 深圳市未来媒体技术研究院 | A kind of binocular stereo image joining method of robust |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220046218A1 (en) | 2022-02-10 |
| CN111062873A (en) | 2020-04-24 |
| CN111062873B (en) | 2021-09-24 |
| WO2021120407A1 (en) | 2021-06-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11350073B2 (en) | Disparity image stitching and visualization method based on multiple pairs of binocular cameras | |
| US11783446B2 (en) | Large-field-angle image real-time stitching method based on calibration | |
| CN111028155B (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
| US10580205B2 (en) | 3D model generating system, 3D model generating method, and program | |
| CN108629829B (en) | A three-dimensional modeling method and system combining dome camera and depth camera | |
| CN111275621A (en) | Panoramic image generation method and system in driving all-round system and storage medium | |
| CN108665537A (en) | The three-dimensional rebuilding method and system of combined optimization human body figure and display model | |
| US20190005715A1 (en) | 3d model generating system, 3d model generating method, and program | |
| CN107274336A (en) | A kind of Panorama Mosaic method for vehicle environment | |
| CN106856000A (en) | A kind of vehicle-mounted panoramic image seamless splicing processing method and system | |
| JP2009116532A (en) | Virtual viewpoint image generation method and virtual viewpoint image generation apparatus | |
| CN114757834B (en) | Panoramic image processing method and panoramic image processing device | |
| CN106534670B (en) | It is a kind of based on the panoramic video generation method for connecting firmly fish eye lens video camera group | |
| Wan et al. | Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform | |
| CN108898550A (en) | Image split-joint method based on the fitting of space triangular dough sheet | |
| Wang et al. | Image stitching using double features-based global similarity constraint and improved seam-cutting | |
| Zhou et al. | MR video fusion: interactive 3D modeling and stitching on wide-baseline videos | |
| CN118864274A (en) | A multi-view camera image stitching method for intersection scenes | |
| CN103295211B (en) | Infant image composition method and device | |
| Yang et al. | A flexible vehicle surround view camera system by central-around coordinate mapping model | |
| CN110631556B (en) | Distance measurement method of heterogeneous stereoscopic vision system | |
| JP2003115057A (en) | Texture editing apparatus, texture editing system and method | |
| Kurka et al. | Automatic estimation of camera parameters from a solid calibration box | |
| CN109272445A (en) | Panoramic video joining method based on Sphere Measurement Model | |
| CN115514751A (en) | Image acquisition method and remote control system for excavator remote control |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: DALIAN UNIVERSITY OF TECHNOLOGY, CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: FAN, XIN; LIU, RISHENG; LI, ZHUOXIAO; AND OTHERS. REEL/FRAME: 055840/0729. Effective date: 20210108 |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |