CN112308777A - Rapid image splicing method for plane and plane-like parts - Google Patents

Rapid image splicing method for plane and plane-like parts

Info

Publication number
CN112308777A
Authority
CN
China
Prior art keywords
image
plane
images
camera
homography matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011111059.XA
Other languages
Chinese (zh)
Inventor
郭寅
尹仕斌
马雪奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Isvision Hangzhou Technology Co Ltd
Original Assignee
Isvision Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Isvision Hangzhou Technology Co Ltd filed Critical Isvision Hangzhou Technology Co Ltd
Priority to CN202011111059.XA priority Critical patent/CN112308777A/en
Publication of CN112308777A publication Critical patent/CN112308777A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Abstract

The invention discloses a rapid image stitching method for planar and plane-like parts, which comprises the following steps. First, the homography matrix between a single group of images to be registered is obtained to complete image registration: coarse matching, fine matching and three-dimensional reconstruction are carried out on image A and image B to obtain three-dimensional feature point information; plane fitting is then performed on the three-dimensional feature points to obtain a space plane and its normal vector n; along the direction of n, the distance d between the space plane and the optical plane of the camera is obtained; the homography matrix H between image A and image B is computed, and registration between the two images is completed using H. Next, the homography matrices between all groups of images to be registered are obtained in the same way, and the images acquired at all adjacent photographing positions are registered; the complete image information of the surface of the part to be measured is stitched together using these homography matrices. The method solves the homography matrix between the cameras rapidly and effectively improves the convenience of the image registration and stitching processes.

Description

Rapid image splicing method for plane and plane-like parts
Technical Field
The invention relates to the field of image registration, and in particular to a rapid image stitching method for planar and plane-like parts.
Background
In modern measurement, large parts are frequently encountered, for example in rail transit, aerospace and shipbuilding. Such parts are often so large that the field of view of a single camera cannot cover them for comprehensive measurement; multiple photographing positions therefore need to be set, and image stitching technology is used to obtain the full-size information of the part to be measured. Image registration is the core of image stitching: it is the process of transforming different images of the same scene into a common coordinate system. These images may be taken at different times (multi-temporal registration), by different sensors (multi-modal registration), or from different viewpoints. The spatial relationship between the images may be rigid (translation and rotation), affine (e.g. shear), homographic, or a complex large-deformation model. At present, for surface-type workpieces of larger size, the conventional image registration approach is to obtain matching points with a scale-invariant feature algorithm and then solve the image transformation model using a k-d tree and the RANSAC method; however, this approach is inefficient, solving the transformation model is time-consuming, and it does not meet the practical requirement of rapid stitching.
Disclosure of Invention
To address these problems, the invention provides a rapid image stitching method for planar and plane-like parts.
A rapid image stitching method for planar and plane-like parts, wherein the surface of the part to be measured is a plane or a plane-like surface (a plane-like surface is a curved surface whose principal curvature at every point is greater than 2000R);
a camera acquires images of the surface of the part to be measured at a plurality of photographing positions; at each photographing position, the included angle between the camera image plane and the surface of the part is less than 10 degrees (the image plane and the surface are kept as parallel as possible); the extrinsic parameters between the cameras at any two adjacent photographing positions are calibrated, and the two images acquired at two adjacent photographing positions are recorded as a group of images to be registered, the two images sharing a common area;
the stitching method comprises the following steps:
Step one: obtain the homography matrix between a single group of images to be registered and complete image registration, using the following sub-steps:
1) denote one image as image A and the other as image B, and extract the feature points and feature vectors in image A and image B respectively; using the feature vectors, perform coarse matching and then fine matching between image A and image B;
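For illustration, this sub-step can be prototyped with OpenCV. The patent names SIFT/SURF features and KNN coarse matching later in the text; the file names, ratio threshold and variable names below are illustrative assumptions rather than part of the disclosed method. A minimal sketch:

```python
import cv2

# Load the two images of one group to be registered (paths are illustrative).
img_a = cv2.imread("image_A.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("image_B.png", cv2.IMREAD_GRAYSCALE)

# Extract feature points and feature vectors (descriptors) with SIFT.
sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Coarse matching: k-nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des_a, des_b, k=2)
coarse_matches = [m[0] for m in knn
                  if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
```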
2) perform three-dimensional reconstruction on the finely matched feature points of image A to obtain three-dimensional feature point information, and perform plane fitting on the three-dimensional feature points to obtain a space plane and its normal vector n; along the direction of n, obtain the distance d between the space plane and the optical plane of the camera that acquired image A;
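The patent does not specify the fitting algorithm; a least-squares fit via SVD is one common choice. The sketch below assumes the reconstructed points are expressed in the coordinate frame of the camera that acquired image A, so that d is measured from that camera's optical centre along the fitted normal; the function name is illustrative.

```python
import numpy as np

def fit_plane(points_3d: np.ndarray):
    """Least-squares plane fit to the reconstructed feature points.

    points_3d: (N, 3) array, assumed here to be expressed in the frame of
    the camera that acquired image A. Returns the unit normal vector n and
    the distance d from that camera's optical centre (the origin of its
    frame) to the plane along n.
    """
    centroid = points_3d.mean(axis=0)
    # The right singular vector of the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points_3d - centroid, full_matrices=False)
    n = vt[-1] / np.linalg.norm(vt[-1])
    d = abs(np.dot(n, centroid))
    return n, d
```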
3) compute the homography matrix H between image A and image B from A_l, A_r, R', T', the normal vector n and the distance d, where A_l and A_r denote the intrinsic parameter matrices of the cameras corresponding to image A and image B, respectively, and R' and T' denote the extrinsic rotation and translation between the cameras corresponding to image A and image B (obtained from the calibration between adjacent photographing positions);
complete registration between the two images using the homography matrix H;
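The published text renders the homography formula only as an image, which is not reproduced here. The quantities it is built from (A_l, A_r, R', T', n, d) are exactly those of the classical plane-induced homography, so the following sketch assumes that form, with the plane written as n^T·X = d in the frame of camera A and (R', T') taken as the calibrated camera-A-to-camera-B extrinsics:

```python
import numpy as np

def plane_induced_homography(A_l, A_r, R, T, n, d):
    """Homography mapping pixels of image A into image B, assuming the
    fitted plane satisfies n^T X = d in the frame of camera A and (R, T)
    are the calibrated extrinsics taking camera-A coordinates to camera-B
    coordinates."""
    T = np.asarray(T, dtype=float).reshape(3, 1)
    n = np.asarray(n, dtype=float).reshape(1, 3)
    H = A_r @ (R + (T @ n) / d) @ np.linalg.inv(A_l)
    return H / H[2, 2]  # normalise so that H[2, 2] = 1
```

If the patent's own sign convention for d (or the direction chosen for n) differs, only the sign of the T·n^T/d term changes.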
Step two: obtain the homography matrices between all groups of images to be registered according to step one, and complete the registration of the images acquired at all adjacent photographing positions:
select one image as the reference image, with its photographing position as the reference position; transform the images acquired at the other photographing positions into the reference image using the homography matrices, and then perform fusion processing to stitch together the complete image information of the surface of the part to be measured.
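A minimal sketch of this warp-and-fuse step for one image pair, using OpenCV; the canvas size, the simple averaging fusion and the function name are illustrative assumptions (the patent lists several fusion options below):

```python
import cv2
import numpy as np

def stitch_pair(img_ref, img_other, H_to_ref, canvas_size):
    """Warp `img_other` into the frame of the reference image with the
    homography `H_to_ref`, then fuse by simple averaging in the overlap."""
    h, w = canvas_size
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    canvas[: img_ref.shape[0], : img_ref.shape[1]] = img_ref
    warped = cv2.warpPerspective(img_other, H_to_ref, (w, h))

    mask_ref = canvas.sum(axis=2) > 0
    mask_warp = warped.sum(axis=2) > 0
    out = np.where(mask_warp[..., None], warped, canvas)
    overlap = mask_ref & mask_warp
    out[overlap] = ((canvas[overlap].astype(np.uint16) +
                     warped[overlap].astype(np.uint16)) // 2).astype(np.uint8)
    return out
```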
Further, in step 1), the method for extracting the feature points and feature vectors from image A and image B is the SIFT method or the SURF method;
the coarse matching method is the KNN method; the fine matching method is epipolar-line matching.
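Epipolar-line matching can use the fundamental matrix implied by the calibrated intrinsic and extrinsic parameters to reject coarse matches that violate the epipolar constraint. A sketch under that assumption (the pixel tolerance and function names are illustrative):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_filter(pts_a, pts_b, A_l, A_r, R, T, tol_px=2.0):
    """Keep only coarse matches whose point in image B lies within `tol_px`
    pixels of the epipolar line induced by its counterpart in image A.
    The fundamental matrix is built from the calibrated parameters."""
    F = np.linalg.inv(A_r).T @ skew(np.ravel(T)) @ R @ np.linalg.inv(A_l)
    fine = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        line = F @ np.array([xa, ya, 1.0])          # epipolar line in image B
        dist = abs(line @ np.array([xb, yb, 1.0])) / np.hypot(line[0], line[1])
        if dist < tol_px:
            fine.append(((xa, ya), (xb, yb)))
    return fine
```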
To improve the registration accuracy, before step 1), the acquired images of the surface of the part to be measured are preprocessed:
template matching is performed on each acquired image using a pre-stored template image of the surface of the part to be measured to determine the position of the part surface in the image; the image is then masked according to pre-stored shape information, and the gray level of the background area is set to zero.
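A minimal sketch of this preprocessing with OpenCV, assuming the pre-stored shape information is a binary mask the same size as the template; the matching score type, names and single-match assumption are illustrative:

```python
import cv2
import numpy as np

def mask_workpiece(image, template, shape_mask):
    """Locate the workpiece surface with a pre-stored template image, then
    zero the gray level of the background using pre-stored shape
    information (`shape_mask`: binary mask the same size as the template)."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)   # top-left corner of best match
    h, w = template.shape[:2]
    full_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    full_mask[y:y + h, x:x + w] = shape_mask
    return cv2.bitwise_and(image, image, mask=full_mask)
```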
Further, the fusion processing method includes: the multi-band blending method, the direct averaging method, the weighted averaging method, or the median filtering method.
Further, the camera acquires the images of the surface of the part to be measured at the plurality of photographing positions in one of the following ways: a plurality of cameras are provided and fixed at the respective photographing positions; or a single camera is mounted at the end of a robot, and the robot drives the camera to move to each photographing position and acquire the images.
According to the characteristics of planar and plane-like parts, the method sets dedicated camera photographing positions and rapidly calculates the homography matrix between cameras based on plane and distance constraints; this effectively improves the convenience of the image registration and stitching processes, reduces the occupation of cycle time, and is particularly suitable for real-time stitching on an industrial site.
Drawings
FIG. 1 is a schematic diagram of an original image captured by one of the cameras in the embodiment;
FIG. 2 is a schematic diagram of the image obtained by masking the original image in the embodiment;
FIG. 3 is a schematic diagram of an original image captured by the other camera in the embodiment;
FIG. 4 is the image obtained by stitching the images acquired by the two cameras in the embodiment.
Detailed Description
A rapid image stitching method for planar and plane-like parts, wherein the surface of the part to be measured is a plane or a plane-like surface (a plane-like surface is a curved surface whose principal curvature at every point is greater than 2000R);
a camera acquires images of the surface of the part to be measured at a plurality of photographing positions; at each photographing position, the included angle between the camera image plane and the surface of the part is less than 10 degrees (the image plane and the surface are kept as parallel as possible); the extrinsic parameters between the cameras at any two adjacent photographing positions are calibrated, and the two images acquired at two adjacent photographing positions are recorded as a group of images to be registered, the two images sharing a common area;
the stitching method comprises the following steps:
Step one: obtain the homography matrix between a single group of images to be registered and complete image registration, using the following sub-steps:
1) denote one image as image A (shown in FIG. 1) and the other as image B (shown in FIG. 3), and extract the feature points and feature vectors in image A and image B respectively; using the feature vectors, perform coarse matching and then fine matching between image A and image B;
2) perform three-dimensional reconstruction on the finely matched feature points of image A to obtain three-dimensional feature point information, and perform plane fitting on the three-dimensional feature points to obtain a space plane and its normal vector n; along the direction of n, obtain the distance d between the space plane and the optical plane of the camera that acquired image A;
3) compute the homography matrix H between image A and image B from A_l, A_r, R', T', the normal vector n and the distance d, where A_l and A_r denote the intrinsic parameter matrices of the cameras corresponding to image A and image B, respectively (the camera intrinsics are calibrated in advance), and R' and T' denote the extrinsic rotation and translation between the cameras corresponding to image A and image B;
complete registration between the two images using the homography matrix H;
Step two: obtain the homography matrices between all groups of images to be registered according to step one, and complete the registration of the images acquired at all adjacent photographing positions:
select one image as the reference image, with its photographing position as the reference position; transform the images acquired at the other photographing positions into the reference image using the homography matrices, and then perform fusion processing to stitch together the complete image information of the surface of the part to be measured.
FIG. 4 shows the image obtained by stitching image A and image B when only two cameras are used.
Specifically, in step 1), the method for extracting the feature points and feature vectors from image A and image B is the SIFT method or the SURF method;
the coarse matching method is the KNN method; the fine matching method is epipolar-line matching.
To improve the registration accuracy and prevent interference from background content, before step 1), the acquired images of the surface of the part to be measured are preprocessed:
template matching is performed on each acquired image using a pre-stored template image of the surface of the part to be measured to determine the position of the part surface in the image; the image is then masked according to pre-stored shape information, and the gray level of the background area is set to zero.
In this embodiment, the object to be measured is a pantograph slide plate (the upper surfaces of the two slide plates are planes). The image obtained by masking the originally acquired image (FIG. 1) is shown in FIG. 2; it contains only the information of the slide-plate region, and the gray level of all other background information is 0.
The fusion processing method includes: the multi-band blending method, the direct averaging method, the weighted averaging method, or the median filtering method.
The camera acquires the images of the surface of the part to be measured at the plurality of photographing positions in one of the following ways:
a plurality of cameras are provided and fixed at the respective photographing positions; or a single camera is mounted at the end of a robot, and the robot drives the camera to move to each photographing position and acquire the images.
When this embodiment is used to stitch pantograph slide-plate images in real time, after the first slide plate has been stitched and the next pantograph to be stitched is placed at the inspection position, the distance d between the upper surface of its slide plate (the space plane) and the optical plane of the camera that acquires image A changes relative to the previous slide plate, because the pantograph is a Z-shaped structure connected by springs. Therefore, when stitching different pantograph slide plates, this embodiment recalculates the distance d and obtains a new homography matrix for each part before performing the image stitching.
With this method, a single stitching of one pair of images takes less than 5 seconds, whereas the conventional stitching method needs at least 10 seconds, so the time is reduced by more than half; when multiple groups must be stitched (for parts of larger size), the time advantage is even more pronounced. When hundreds of pantograph slide plates are measured, the overall time consumption is greatly reduced, bringing considerable economic benefit.
If the part to be inspected is a rigid body, so that the distance d between its upper surface and the optical plane of the camera does not change, and the inspection is repeated with the camera photographing positions and the part placement identical to those of the previous inspection (for example, on a production line with a fixed transfer station), then the homography matrix of the method only needs to be computed when the first workpiece is measured; when other workpieces of the same type are inspected, that homography matrix is used directly for the stitching process.
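As a schematic illustration of the two deployment cases just described (recompute d and H for every spring-mounted pantograph slide plate; reuse a stored H for rigid parts on a fixed station), the following sketch is illustrative only; the cache structure and function names are not part of the patent:

```python
# Illustrative only: cache-and-reuse logic for the two deployment cases.
homography_cache = {}

def get_homography(part_id, rigid_fixture, compute_homography):
    """Reuse a stored homography for rigid parts on a fixed station;
    recompute (new distance d, new H) for parts such as spring-mounted
    pantograph slide plates whose plane-to-camera distance changes."""
    if rigid_fixture and part_id in homography_cache:
        return homography_cache[part_id]
    H = compute_homography()   # steps 1)-3): match, fit plane, build H
    if rigid_fixture:
        homography_cache[part_id] = H
    return H
```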

Claims (6)

1. A rapid image stitching method for planar and plane-like parts, wherein the surface of the part to be measured is a plane or a plane-like surface;
a camera acquires images of the surface of the part to be measured at a plurality of photographing positions; at each photographing position, the included angle between the camera image plane and the surface of the part is less than 10 degrees; the extrinsic parameters between the cameras at any two adjacent photographing positions are calibrated, and the two images acquired at two adjacent photographing positions are recorded as a group of images to be registered, the two images sharing a common area;
characterized by comprising:
step one: obtaining the homography matrix between a single group of images to be registered and completing image registration, using the following sub-steps:
1) denoting one image as image A and the other as image B, and extracting the feature points and feature vectors in image A and image B respectively; using the feature vectors, performing coarse matching and then fine matching between image A and image B;
2) performing three-dimensional reconstruction on the finely matched feature points of image A to obtain three-dimensional feature point information, and performing plane fitting on the three-dimensional feature points to obtain a space plane and its normal vector n; along the direction of n, obtaining the distance d between the space plane and the optical plane of the camera that acquired image A;
3) computing the homography matrix H between image A and image B from A_l, A_r, R', T', the normal vector n and the distance d, wherein A_l and A_r respectively denote the intrinsic parameter matrices of the cameras corresponding to image A and image B, and R' and T' respectively denote the extrinsic rotation and translation between the cameras corresponding to image A and image B;
completing registration between the two images using the homography matrix H;
step two: obtaining, according to step one, the homography matrices between all groups of images to be registered, and completing the registration of the images acquired at all adjacent photographing positions:
selecting one image as the reference image, with its photographing position as the reference position; transforming the images acquired at the other photographing positions into the reference image using the homography matrices, and then performing fusion processing to stitch together the complete image information of the surface of the part to be measured.
2. The rapid image stitching method for planar and plane-like parts according to claim 1, wherein: in step 1), the method for extracting the feature points and feature vectors from image A and image B is the SIFT method or the SURF method;
the coarse matching method is the KNN method; and the fine matching method is epipolar-line matching.
3. The rapid image stitching method for planar and plane-like parts according to claim 1, wherein: before step 1), the acquired images of the surface of the part to be measured are preprocessed:
template matching is performed on each acquired image using a pre-stored template image of the surface of the part to be measured to determine the position of the part surface in the image; the image is then masked according to pre-stored shape information, and the gray level of the background area is set to zero.
4. The rapid image stitching method for planar and plane-like parts according to claim 1, wherein the fusion processing method includes: the multi-band blending method, the direct averaging method, the weighted averaging method, or the median filtering method.
5. The rapid image stitching method for planar and plane-like parts according to claim 1, wherein the camera acquires the images of the surface of the part to be measured at the plurality of photographing positions in one of the following ways:
a plurality of cameras are provided and fixed at the respective photographing positions; or a single camera is mounted at the end of a robot, and the robot drives the camera to move to each photographing position and acquire the images.
6. The rapid image stitching method for planar and plane-like parts according to claim 1, wherein the plane-like surface is a curved surface, and the principal curvature at each point on the curved surface is greater than 2000R.
CN202011111059.XA 2020-10-16 2020-10-16 Rapid image splicing method for plane and plane-like parts Pending CN112308777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011111059.XA CN112308777A (en) 2020-10-16 2020-10-16 Rapid image splicing method for plane and plane-like parts

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011111059.XA CN112308777A (en) 2020-10-16 2020-10-16 Rapid image splicing method for plane and plane-like parts

Publications (1)

Publication Number Publication Date
CN112308777A true CN112308777A (en) 2021-02-02

Family

ID=74328030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011111059.XA Pending CN112308777A (en) 2020-10-16 2020-10-16 Rapid image splicing method for plane and plane-like parts

Country Status (1)

Country Link
CN (1) CN112308777A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710932A (en) * 2009-12-21 2010-05-19 深圳华为通信技术有限公司 Image stitching method and device
CN109840884A (en) * 2017-11-29 2019-06-04 杭州海康威视数字技术股份有限公司 A kind of image split-joint method, device and electronic equipment
CN110782394A (en) * 2019-10-21 2020-02-11 中国人民解放军63861部队 Panoramic video rapid splicing method and system
CN111028155A (en) * 2019-12-17 2020-04-17 大连理工大学 Parallax image splicing method based on multiple pairs of binocular cameras
CN111047510A (en) * 2019-12-17 2020-04-21 大连理工大学 Large-field-angle image real-time splicing method based on calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination