CN111210386A - Image shooting and splicing method and system - Google Patents

Image shooting and splicing method and system

Info

Publication number
CN111210386A
CN111210386A
Authority
CN
China
Prior art keywords
image
images
spliced
cameras
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911312847.2A
Other languages
Chinese (zh)
Inventor
何弢 (He Tao)
廖文龙 (Liao Wenlong)
张炜 (Zhang Wei)
黄洋文 (Huang Yangwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhu Kuwa Robot Industry Technology Research Institute Co ltd
Original Assignee
Wuhu Kuwa Robot Industry Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhu Kuwa Robot Industry Technology Research Institute Co ltd filed Critical Wuhu Kuwa Robot Industry Technology Research Institute Co ltd
Priority to CN201911312847.2A
Publication of CN111210386A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image shooting and splicing method and system comprising the following steps. A camera mounting step: a plurality of cameras are calibrated and mounted close together, meaning that during installation the physical distance between two adjacent cameras is kept as small as possible, so that the distance between their optical centers is minimized. A mapping table obtaining step: a mapping table of the pixel points of the images acquired by the plurality of cameras is established according to the positions of the installed cameras. An image splicing step: the splicing relation of the corresponding images is determined according to the established pixel mapping table, and the images shot by the plurality of cameras at the same moment are spliced to obtain the spliced image. The invention obtains image information through two or more cameras, thereby enlarging the field of view, and provides a camera calibration and mounting scheme by calibrating the positional relation of two or more cameras.

Description

Image shooting and splicing method and system
Technical Field
The invention relates to an image shooting and splicing method and system.
Background
With supporting technology, capital, and relevant laws and regulations falling into place, automatic driving vehicles will gradually reach mass production and finally serve the majority of consumers. An automatic driving vehicle must carry a large number of sensors to identify scenes, obstacles and other relevant factors; the information collected by the sensors is fused by an information fusion module, and the fused information is finally delivered to an operation decision module for path planning and vehicle body motion control.
In scene and obstacle identification, the camera plays a very important role as a key sensor, but the field of view of a single camera is narrow. If an automatic driving vehicle needs to know the road state in a larger area around it in order to travel on the road, a camera with a large field of view is needed for auxiliary judgment.
Existing cameras for automatic driving vehicles generally adopt a standard lens with a narrow visual angle. If a plurality of cameras are used for image acquisition and splicing, feature points must be identified in the image overlapping areas, the adjacency relation determined from the overlapping areas, and the de-duplicated splicing scheme finally calculated. This scheme has two problems in implementation: first, the amount of calculation is large; second, image splicing is difficult when no overlapping area exists.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide an image shooting and splicing method and system.
The image shooting and splicing method provided by the invention comprises the following steps:
a camera mounting step: calibrating and closely mounting a plurality of cameras, wherein calibrated close mounting means that the physical distance between two adjacent cameras is kept as small as possible during installation, so that the distance between their optical centers is minimized;
a mapping table obtaining step: establishing a mapping table of the pixel points of the images acquired by the plurality of cameras according to the positions of the installed cameras;
an image splicing step: determining the splicing relation of the corresponding images according to the established pixel mapping table, and splicing the images shot by the plurality of cameras at the same moment to obtain the spliced image.
Preferably, the method further comprises the following steps:
an information fusion step: performing information fusion processing on the spliced images in combination with data information acquired by a laser radar or other sensors, the fused information being used for scene and obstacle identification of the automatic driving vehicle;
the data information includes: point cloud data.
Preferably, the method further comprises the following steps:
an image output step: sending the spliced images to a vehicle-mounted display screen or a background server;
the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
Preferably, the mapping table of the pixel points of the image refers to: a lookup table of the mapping relation between camera image pixel points and spliced image pixel points.
Preferably, the mapping table obtaining step comprises:
a camera calibration step: solving the geometric model parameters of the plurality of cameras;
an image distortion correction step: image distortion is introduced by limited lens manufacturing precision and assembly process deviations; for the images acquired by the plurality of cameras, the distorted images are converted into undistorted images through a distortion model, yielding the distortion-corrected images to be spliced;
a feature point extraction and matching step: extracting the feature points of the images to be spliced, representing the extracted feature points by feature vectors, and determining the matching point pairs of the feature points in the image sequence to be spliced by calculating the distance between every two vectors;
an image registration step: constructing the transformation matrices between the images to be spliced through the matching point pairs;
an image splicing step: determining a reference coordinate system for the spliced image, updating the transformation matrix of each image to be spliced, and mapping the image sequence to be spliced onto the spliced image using the transformation matrices;
the mapping table is the pixel point mapping relation between the images to be spliced and the spliced image.
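The three top-level steps can be sketched end to end in a toy NumPy example. All sizes and the pure horizontal offset between the cameras are invented for illustration, and the ideal no-distortion case is assumed; the point is the shape of the pipeline: build the pixel mapping table once, then splice every frame pair by table lookup alone.

```python
import numpy as np

H, W, OVERLAP = 4, 6, 2           # per-camera image size and overlap in columns
PANO_W = 2 * W - OVERLAP          # spliced-image width

def build_mapping_table():
    """For every spliced-image column, record (camera index, source column).
    Built once after installation/calibration."""
    table = []
    for x in range(PANO_W):
        if x < W:
            table.append((0, x))                  # left camera covers [0, W)
        else:
            table.append((1, x - (W - OVERLAP)))  # right camera, shifted
    return table

def splice(img_left, img_right, table):
    """Splice one frame pair by pure table lookup, no per-frame matching."""
    cams = (img_left, img_right)
    pano = np.zeros((H, PANO_W), dtype=img_left.dtype)
    for x, (cam, sx) in enumerate(table):
        pano[:, x] = cams[cam][:, sx]
    return pano

# Simulate two cameras viewing one scene with a horizontal offset.
scene = np.arange(H * PANO_W).reshape(H, PANO_W)
left = scene[:, :W]
right = scene[:, W - OVERLAP:]
table = build_mapping_table()
pano = splice(left, right, table)
```

Under these ideal assumptions the spliced result reproduces the scene seamlessly; in practice the table also encodes distortion correction and the registration transform.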
The image shooting and splicing system provided by the invention comprises:
a camera mounting module: calibrating and closely mounting a plurality of cameras, wherein calibrated close mounting means that the physical distance between two adjacent cameras is kept as small as possible during installation, so that the distance between their optical centers is minimized;
a mapping table acquisition module: establishing a mapping table of the pixel points of the images acquired by the plurality of cameras according to the positions of the installed cameras;
an image splicing module: determining the splicing relation of the corresponding images according to the established pixel mapping table, and splicing the images shot by the plurality of cameras at the same moment to obtain the spliced image.
Preferably, the method further comprises the following steps:
an information fusion module: performing information fusion processing on the spliced images in combination with data information acquired by a laser radar or other sensors, the fused information being used for scene and obstacle identification of the automatic driving vehicle;
the data information includes: point cloud data.
Preferably, the method further comprises the following steps:
an image output module: sending the spliced images to a vehicle-mounted display screen or a background server;
the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
Preferably, the mapping table of the pixel points of the image refers to: a lookup table of the mapping relation between camera image pixel points and spliced image pixel points.
Preferably, the mapping table acquisition module comprises:
a camera calibration module: solving the geometric model parameters of the plurality of cameras;
an image distortion correction module: image distortion is introduced by limited lens manufacturing precision and assembly process deviations; for the images acquired by the plurality of cameras, the distorted images are converted into undistorted images through a distortion model, yielding the distortion-corrected images to be spliced;
a feature point extraction and matching module: extracting the feature points of the images to be spliced, representing the extracted feature points by feature vectors, and determining the matching point pairs of the feature points in the image sequence to be spliced by calculating the distance between every two vectors;
an image registration module: constructing the transformation matrices between the images to be spliced through the matching point pairs;
an image splicing module: determining a reference coordinate system for the spliced image, updating the transformation matrix of each image to be spliced, and mapping the image sequence to be spliced onto the spliced image using the transformation matrices;
the mapping table is the pixel point mapping relation between the images to be spliced and the spliced image.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention obtains image information through two or more cameras, thereby enlarging the field of view.
2. The invention provides a camera calibration and mounting scheme by calibrating the positional relation of two or more cameras.
3. The images acquired by two or more cameras are spliced approximately by table lookup, which reduces the amount of calculation required for image splicing.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic view of the multi-camera arrangement provided by the present invention.
Fig. 2 is a schematic flow chart of an image shooting and splicing method provided by the invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The image shooting and splicing method provided by the invention comprises the following steps:
a camera mounting step: calibrating and closely mounting a plurality of cameras, wherein calibrated close mounting means that the physical distance between two adjacent cameras is kept as small as possible during installation, so that the distance between their optical centers is minimized;
a mapping table obtaining step: establishing a mapping table of the pixel points of the images acquired by the plurality of cameras according to the positions of the installed cameras;
an image splicing step: determining the splicing relation of the corresponding images according to the established pixel mapping table, and splicing the images shot by the plurality of cameras at the same moment to obtain the spliced image.
Specifically, the method further comprises the following steps:
an information fusion step: performing information fusion processing on the spliced images in combination with data information acquired by a laser radar or other sensors, the fused information being used for scene and obstacle identification of the automatic driving vehicle;
the data information includes: point cloud data.
Specifically, the method further comprises the following steps:
an image output step: sending the spliced images to a vehicle-mounted display screen or a background server;
the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
Specifically, the mapping table of the pixel points of the image refers to: a lookup table of the mapping relation between camera image pixel points and spliced image pixel points.
Specifically, the mapping table obtaining step comprises:
a camera calibration step: solving the geometric model parameters of the plurality of cameras;
an image distortion correction step: image distortion is introduced by limited lens manufacturing precision and assembly process deviations; for the images acquired by the plurality of cameras, the distorted images are converted into undistorted images through a distortion model, yielding the distortion-corrected images to be spliced;
a feature point extraction and matching step: extracting the feature points of the images to be spliced, representing the extracted feature points by feature vectors, and determining the matching point pairs of the feature points in the image sequence to be spliced by calculating the distance between every two vectors;
an image registration step: constructing the transformation matrices between the images to be spliced through the matching point pairs;
an image splicing step: determining a reference coordinate system for the spliced image, updating the transformation matrix of each image to be spliced, and mapping the image sequence to be spliced onto the spliced image using the transformation matrices;
the mapping table is the pixel point mapping relation between the images to be spliced and the spliced image.
The image shooting and splicing system provided by the invention comprises:
a camera mounting module: calibrating and closely mounting a plurality of cameras, wherein calibrated close mounting means that the physical distance between two adjacent cameras is kept as small as possible during installation, so that the distance between their optical centers is minimized;
a mapping table acquisition module: establishing a mapping table of the pixel points of the images acquired by the plurality of cameras according to the positions of the installed cameras;
an image splicing module: determining the splicing relation of the corresponding images according to the established pixel mapping table, and splicing the images shot by the plurality of cameras at the same moment to obtain the spliced image.
Specifically, the method further comprises the following steps:
an information fusion module: performing information fusion processing on the spliced images in combination with data information acquired by a laser radar or other sensors, the fused information being used for scene and obstacle identification of the automatic driving vehicle;
the data information includes: point cloud data.
Specifically, the method further comprises the following steps:
an image output module: sending the spliced images to a vehicle-mounted display screen or a background server;
the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
Specifically, the mapping table of the pixel points of the image refers to: a lookup table of the mapping relation between camera image pixel points and spliced image pixel points.
Specifically, the mapping table acquisition module comprises:
a camera calibration module: solving the geometric model parameters of the plurality of cameras;
an image distortion correction module: image distortion is introduced by limited lens manufacturing precision and assembly process deviations; for the images acquired by the plurality of cameras, the distorted images are converted into undistorted images through a distortion model, yielding the distortion-corrected images to be spliced;
a feature point extraction and matching module: extracting the feature points of the images to be spliced, representing the extracted feature points by feature vectors, and determining the matching point pairs of the feature points in the image sequence to be spliced by calculating the distance between every two vectors;
an image registration module: constructing the transformation matrices between the images to be spliced through the matching point pairs;
an image splicing module: determining a reference coordinate system for the spliced image, updating the transformation matrix of each image to be spliced, and mapping the image sequence to be spliced onto the spliced image using the transformation matrices;
the mapping table is the pixel point mapping relation between the images to be spliced and the spliced image.
The present invention will be described more specifically below with reference to preferred examples.
Preferred example 1:
According to the technical scheme, the positions of the two cameras are fixed, and mapping tables of the pixel points of the images obtained by the two cameras, that is, lookup tables of the mapping relation between camera image pixel points and spliced image pixel points, are established. The lookup tables are obtained through the steps of camera calibration, image distortion correction, feature point extraction, feature point matching and image registration. The two cameras then each collect images, and the corresponding images are spliced according to the mapping tables.
The mapping table is obtained as follows:
1. Camera calibration:
The process of solving the camera geometric model parameters is camera calibration; a commonly used method is the Zhang Zhengyou (Zhang's) calibration method.
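As an illustration of what the solved parameters mean (not of the solving procedure itself), the pinhole geometric model with intrinsics K and extrinsics R, t projects a 3D point to pixel coordinates as follows; all numeric values here are invented for the example.

```python
import numpy as np

# Illustrative intrinsics: focal lengths fx, fy and principal point (cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # extrinsic rotation: camera aligned with world
t = np.array([0.0, 0.0, 0.0])  # extrinsic translation: camera at the origin

def project(X):
    """Project a 3D world point to pixel coordinates via the pinhole model."""
    x_cam = R @ X + t          # world frame -> camera frame
    u, v, w = K @ x_cam        # camera frame -> homogeneous image coordinates
    return np.array([u / w, v / w])

# A point 2 m in front of the camera, 0.1 m right of the optical axis.
px = project(np.array([0.1, 0.0, 2.0]))
```

Calibration recovers K (and the distortion coefficients) per camera, plus R and t relating the cameras.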
2. Image distortion correction:
Image distortion can be introduced by lens manufacturing accuracy and assembly process variations; the distorted image can be converted into an undistorted image through the distortion model.
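A minimal sketch of a radial distortion model of the kind such correction uses; the coefficients k1 and k2 are invented stand-ins for values recovered by calibration, and real pipelines also handle tangential terms. Inverting the model by fixed-point iteration is one common way to map distorted coordinates back to undistorted ones.

```python
k1, k2 = -0.25, 0.05  # illustrative radial distortion coefficients

def distort(xn, yn):
    """Apply radial distortion to normalized undistorted coordinates."""
    r2 = xn * xn + yn * yn
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f

def undistort(xd, yd, iters=20):
    """Invert the distortion model by fixed-point iteration."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / f, yd / f
    return xn, yn

xd, yd = distort(0.3, 0.2)   # distorted observation of the point (0.3, 0.2)
xu, yu = undistort(xd, yd)   # recovered undistorted coordinates
```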
3. Feature point extraction and matching:
The feature points of a picture are usually the points whose surroundings carry more information; a commonly used feature point extraction method is the SURF (Speeded-Up Robust Features) algorithm. The extracted feature points are represented by feature vectors, and the matching point pairs of the feature points in the image sequence to be spliced can be determined by calculating the distance between every two vectors.
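The distance-based matching described here can be sketched with tiny hand-made descriptors standing in for real SURF output; the ratio test below is an assumed, commonly used filter for discarding ambiguous matches, not something the patent specifies.

```python
import numpy as np

# Toy descriptors: each row represents one feature point's feature vector.
desc_a = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
desc_b = np.array([[0.0, 0.9, 0.1],     # close to desc_a[1]
                   [0.95, 0.0, 0.05],   # close to desc_a[0]
                   [0.1, 0.1, 0.9]])    # close to desc_a[2]

def match(da, db, ratio=0.8):
    """Nearest-neighbour matching with a ratio test on the two best distances."""
    pairs = []
    for i, d in enumerate(da):
        dist = np.linalg.norm(db - d, axis=1)  # distance to every db row
        j, j2 = np.argsort(dist)[:2]           # best and second-best match
        if dist[j] < ratio * dist[j2]:         # keep only unambiguous matches
            pairs.append((i, j))
    return pairs

pairs = match(desc_a, desc_b)
```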
4. Image registration:
The transformation matrix between the image sequences to be spliced is constructed through the matching point pairs; a commonly used method is the RANSAC (Random Sample Consensus) algorithm.
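A minimal NumPy stand-in for this registration step, on synthetic point pairs: a Direct Linear Transform fit inside a RANSAC loop, with a final refit on the inliers. Production code would typically call a library routine instead; the data below is invented for illustration.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: fit a homography from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=1.0, seed=0):
    """RANSAC: random 4-point fits, keep and refit the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_mask = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        if not np.all(np.isfinite(H)):
            continue                      # degenerate sample, skip
        err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
        mask = err < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    # final least-squares refit on all inliers of the best model
    return dlt_homography(src[best_mask], dst[best_mask]), int(best_mask.sum())

# Synthetic matches: a pure 10-pixel horizontal shift plus one bad match.
H_true = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [5, 0], [0, 5], [5, 5], [2, 3], [9, 1]])
dst = apply_h(H_true, src)
dst[5] += 40.0                            # corrupt one correspondence
H_est, inliers = ransac_homography(src, dst)
```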
5. Image splicing:
A reference coordinate system for the spliced image is determined, and the transformation matrix of each image to be spliced is updated; the image sequence to be spliced is then mapped onto the spliced image using the transformation matrices.
6. Mapping table:
An undistorted image is obtained through distortion correction of each image to be spliced, and the image to be spliced is mapped onto the spliced image through the transformation matrix; the mapping table is the pixel point mapping relation between the images to be spliced and the spliced image.
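Steps 5 and 6 can be sketched together: the registration homography is inverted once to precompute, for every spliced-image pixel, which source pixel it comes from (the mapping table), after which each new frame is spliced by pure lookup. Sizes and the 3-pixel offset are invented; nearest-neighbour rounding stands in for proper interpolation.

```python
import numpy as np

H_img, W_img = 4, 5
H_mat = np.array([[1.0, 0.0, 3.0],  # right image sits 3 px to the right
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
PANO_W = W_img + 3

def build_table(H_mat):
    """For each spliced pixel, the (col, row) to sample in the right image."""
    inv = np.linalg.inv(H_mat)
    ys, xs = np.mgrid[0:H_img, 0:PANO_W]
    pts = np.stack([xs, ys, np.ones_like(xs, dtype=float)]).reshape(3, -1)
    src = inv @ pts                              # inverse-map every pixel
    sx = np.round(src[0] / src[2]).reshape(H_img, PANO_W).astype(int)
    sy = np.round(src[1] / src[2]).reshape(H_img, PANO_W).astype(int)
    valid = (sx >= 0) & (sx < W_img) & (sy >= 0) & (sy < H_img)
    return sx, sy, valid

def splice(left, right, sx, sy, valid):
    """Per-frame splicing is now just table lookup, no matrix math."""
    pano = np.zeros((H_img, PANO_W), dtype=left.dtype)
    pano[:, :W_img] = left                       # left image is the reference
    pano[valid] = right[sy[valid], sx[valid]]    # lookup for the rest
    return pano

scene = np.arange(H_img * PANO_W).reshape(H_img, PANO_W)
left, right = scene[:, :W_img], scene[:, 3:]
sx, sy, valid = build_table(H_mat)               # computed once
pano = splice(left, right, sx, sy, valid)        # repeated per frame
```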
Two or more cameras are calibrated and closely mounted. Taking two cameras, a first camera and a second camera, as an example, calibrated close mounting means keeping the physical distance between the two adjacent cameras as small as possible during installation, so that their optical centers are as close as possible.
The mapping tables of the pixel points of the images obtained by the two cameras are then determined, and the corresponding images are spliced according to the mapping tables to generate the spliced image.
Further, the image splicing module determines the splicing relation of the corresponding images according to the mapping table of the pixel points of the images acquired by the cameras, and splices the images respectively shot by the two cameras at the same moment.
Further, provided the cameras are calibrated, mounted and spliced as described, the number of cameras can be three or more.
Further, after the image splicing module finishes image splicing, the spliced images can be sent to the information fusion module, which performs information fusion processing by combining the information acquired by a laser radar or other sensors, that is, fusing the spliced image data with the data information (for example, point cloud data) acquired by the other sensors.
Further, after the image splicing module finishes image splicing, the spliced images can be sent to a vehicle-mounted display screen or a background server; the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
The optical centers of the lenses cannot physically coincide completely, so the pictures shot by the two lenses differ slightly: there is parallax between near and far objects. If the stitching algorithm splices near objects at the seams, far objects will appear as "ghosts"; otherwise, parts of near objects will be missing.
In actual use, although complete physical coincidence cannot be achieved, keeping the optical centers as close as possible reduces this influence as far as possible. With the two cameras calibrated and closely mounted and the mapping table of image pixel points established, the images can be spliced according to the mapping table without further calculation during actual splicing.
Preferred example 2:
The technical scheme of the patent provides an image shooting and splicing method, as shown in Fig. 2. The shortest optical center distance between two cameras is ensured by setting the positional relation of adjacent cameras; the cameras can be arranged adjacently left and right, or adjacently up and down. The splicing relation of the corresponding images is determined according to the mapping table of the pixel points of the images obtained by the cameras, and the output images are finally spliced according to this relation.
Optimally, when the cameras are installed on the automatic driving vehicle, one camera is arranged below and the other above, and calibration is carried out with calibration equipment to ensure the minimum optical center distance between the two camera devices. Meanwhile, the calibration equipment is used to stagger the two camera view areas so that they overlap or join at the edges, and the mapping table of the pixel points of the images acquired by the two cameras is established. The calibrated cameras are mechanically fixed so that their positions cannot easily change.
The camera image splicing module splices a first image shot by the first camera and a second image shot by the second camera at the same moment according to the mapping table to obtain a target image, and sends the target image to the vehicle information fusion module for information fusion processing, for scene and obstacle identification of the automatic driving vehicle.
Meanwhile, as shown in Fig. 1, three or more cameras may be provided to further enlarge the field of view; their positional relationship and splicing scheme follow the description above.
After the cameras collect images, the images are sent to the image splicing module, which determines the splicing position relation of the corresponding images according to the mapping table and splices the images respectively shot by the plurality of cameras at the same moment.
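Pairing frames "shot at the same moment" can be sketched as a nearest-timestamp match before splicing; the frame format and skew tolerance below are assumptions for illustration, not details from the patent.

```python
def pair_frames(frames_a, frames_b, max_skew=0.01):
    """frames_* are (timestamp_seconds, image) lists; for each frame from
    camera A, pick camera B's nearest-in-time frame, dropping pairs whose
    timestamps differ by more than max_skew seconds."""
    pairs = []
    for ta, img_a in frames_a:
        tb, img_b = min(frames_b, key=lambda f: abs(f[0] - ta))
        if abs(tb - ta) <= max_skew:
            pairs.append((img_a, img_b))
    return pairs

# Strings stand in for image data; timestamps simulate ~30 fps capture.
frames_a = [(0.000, "a0"), (0.033, "a1"), (0.066, "a2")]
frames_b = [(0.002, "b0"), (0.034, "b1"), (0.100, "b2")]
pairs = pair_frames(frames_a, frames_b)
# a2 has no camera-B frame within 10 ms and is dropped
```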
After the image splicing module finishes image splicing, the spliced images can be sent to the information fusion module, which performs information fusion processing by combining the information acquired by a laser radar or other sensors. Meanwhile, when there is a large error between the information identified by the laser radar or other sensors and the information identified from the spliced image, the error information can be fed back to the vehicle control unit, which feeds it back to the background in time, so that a camera in a wrong installation position can be recalibrated.
After the image splicing module finishes image splicing, the spliced image can also be sent to a vehicle-mounted display screen or the background.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention purely as computer readable program code, the method steps can be logically programmed so that the systems, apparatus, and modules are realized in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered a hardware component, and the modules included therein for implementing various programs can also be considered structures within the hardware component; modules for performing various functions may likewise be considered both software programs for performing the methods and structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. An image shooting and splicing method is characterized by comprising the following steps:
a camera mounting step: calibrating and closely mounting a plurality of cameras, wherein calibrated close mounting means that the physical distance between two adjacent cameras is kept as small as possible during installation, so that the optical centers of the two cameras are as close as possible;
a mapping table obtaining step: establishing a mapping table of pixel points of images acquired by a plurality of cameras according to the positions of the installed cameras;
image splicing: and determining an image splicing relation corresponding to the images according to the established mapping table of the pixel points of the images, and performing image splicing processing according to the images respectively shot by the plurality of cameras at the same moment to obtain spliced images.
2. The image shooting and splicing method according to claim 1, further comprising:
an information fusion step: performing information fusion processing on the spliced images in combination with data information acquired by a laser radar or other sensors, the fused information being used for scene and obstacle identification of the automatic driving vehicle;
the data information includes: point cloud data.
3. The image shooting and splicing method according to claim 1, further comprising:
an image output step: sending the spliced image to a vehicle-mounted display screen or a background server;
the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
4. The image shooting and splicing method according to claim 1, wherein the mapping table of image pixel points refers to: a lookup table of the mapping relation between camera image pixels and spliced image pixels.
5. The image shooting and splicing method according to claim 1, wherein the mapping table acquiring step comprises:
a camera calibration step: solving the geometric model parameters of the plurality of cameras;
an image distortion correction step: since lens manufacturing tolerances and assembly deviations introduce image distortion, converting the distorted images acquired by the plurality of cameras into undistorted images through a distortion model, thereby obtaining distortion-corrected images to be spliced;
a feature point extraction and matching step: extracting feature points from the images to be spliced, representing each extracted feature point by a feature vector, and determining the matching point pair of each feature point across the image sequence to be spliced by computing the pairwise distances between the vectors;
an image registration step: constructing a transformation matrix between the image sequences to be spliced from the matching point pairs;
an image splicing step: determining a reference coordinate system for the spliced image, updating the transformation matrix of each image to be spliced, and mapping the image sequence to be spliced onto the spliced image using the transformation matrices;
the mapping table is the pixel-point mapping relation between the images to be spliced and the spliced image.
6. An image shooting and splicing system, characterized by comprising:
a camera mounting module: calibrating and closely mounting a plurality of cameras, wherein close mounting means that the physical installation distance between each pair of adjacent cameras is kept as small as possible, so that the distance between the optical centers of the two cameras is as small as possible;
a mapping table acquisition module: establishing a mapping table of pixel points of images acquired by a plurality of cameras according to the positions of the installed cameras;
an image splicing module: determining the splicing relation between the images according to the established mapping table of image pixel points, and performing splicing processing on the images respectively shot by the plurality of cameras at the same moment to obtain a spliced image.
7. The image shooting and splicing system according to claim 6, further comprising:
an information fusion module: performing information fusion processing on the spliced image together with data information acquired by a laser radar or other sensors, the fused information being used for scene and obstacle recognition by the autonomous vehicle;
the data information includes: point cloud data.
8. The image shooting and splicing system according to claim 6, further comprising:
an image output module: sending the spliced image to a vehicle-mounted display screen or a background server;
the background server is a server operated by the vehicle control management system and is used for remote control of the vehicle.
9. The image shooting and splicing system according to claim 6, wherein the mapping table of image pixel points refers to: a lookup table of the mapping relation between camera image pixels and spliced image pixels.
10. The image shooting and splicing system according to claim 6, wherein the mapping table acquisition module comprises:
a camera calibration module: solving the geometric model parameters of the plurality of cameras;
an image distortion correction module: since lens manufacturing tolerances and assembly deviations introduce image distortion, converting the distorted images acquired by the plurality of cameras into undistorted images through a distortion model, thereby obtaining distortion-corrected images to be spliced;
a feature point extraction and matching module: extracting feature points from the images to be spliced, representing each extracted feature point by a feature vector, and determining the matching point pair of each feature point across the image sequence to be spliced by computing the pairwise distances between the vectors;
an image registration module: constructing a transformation matrix between the image sequences to be spliced from the matching point pairs;
an image splicing module: determining a reference coordinate system for the spliced image, updating the transformation matrix of each image to be spliced, and mapping the image sequence to be spliced onto the spliced image using the transformation matrices;
the mapping table is the pixel-point mapping relation between the images to be spliced and the spliced image.
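The lookup-table splicing of claims 1, 4, and 5 can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the cameras tile horizontally with no overlap, so the table reduces to an index shift, whereas a real system would derive the table from the per-camera registration transforms. The names `build_mapping_table` and `stitch` are illustrative.

```python
import numpy as np

def build_mapping_table(frame_h, frame_w, n_cams):
    """Precompute, for every pixel of the spliced image, which camera
    and which source pixel it comes from.  Toy version: cameras tile
    horizontally with no overlap."""
    stitched_w = frame_w * n_cams
    table = np.empty((frame_h, stitched_w, 3), dtype=np.int64)
    ys, xs = np.mgrid[0:frame_h, 0:stitched_w]
    table[..., 0] = xs // frame_w   # camera index
    table[..., 1] = ys              # source row
    table[..., 2] = xs % frame_w    # source column
    return table

def stitch(frames, table):
    """Splice one synchronized set of frames by pure table lookup --
    no per-frame registration, so it can run at camera frame rate."""
    return frames[table[..., 0], table[..., 1], table[..., 2]]

# Two 2x3 single-channel "frames" captured at the same instant.
frames = np.array([
    [[1, 2, 3], [4, 5, 6]],
    [[7, 8, 9], [10, 11, 12]],
])
table = build_mapping_table(2, 3, 2)
pano = stitch(frames, table)
print(pano)
# [[ 1  2  3  7  8  9]
#  [ 4  5  6 10 11 12]]
```

The design point the claims emphasize survives even in this toy: all geometric work is moved into the one-time table construction, and the per-frame splicing step is a single gather over the table.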
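The image registration step of claims 5 and 10 ("constructing a transformation matrix between the image sequences from the matching point pairs") is conventionally an estimate of a planar homography. The sketch below uses the standard direct linear transform (DLT) on four clean matched pairs; a deployed system would instead run RANSAC over many noisy feature matches. The function name `homography_from_matches` is illustrative, not from the patent.

```python
import numpy as np

def homography_from_matches(src_pts, dst_pts):
    """Estimate the 3x3 transformation matrix from >= 4 matched point
    pairs by solving the DLT linear system with SVD.  Pairs are
    assumed already outlier-free."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)   # null vector = flattened homography
    return H / H[2, 2]         # fix scale so H[2,2] == 1

# Matches displaced by a pure translation (+10, +5).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(10, 5), (11, 5), (10, 6), (11, 6)]
H = homography_from_matches(src, dst)

# Applying H to a new point should reproduce the translation.
pt = H @ np.array([2.0, 3.0, 1.0])
print(pt / pt[2])   # close to [12. 8. 1.]
```

Once such a matrix is known for each camera, mapping every spliced-image pixel back through it yields exactly the pixel-point lookup table that claims 1 and 4 then use at runtime.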
CN201911312847.2A 2019-12-18 2019-12-18 Image shooting and splicing method and system Withdrawn CN111210386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911312847.2A CN111210386A (en) 2019-12-18 2019-12-18 Image shooting and splicing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911312847.2A CN111210386A (en) 2019-12-18 2019-12-18 Image shooting and splicing method and system

Publications (1)

Publication Number Publication Date
CN111210386A true CN111210386A (en) 2020-05-29

Family

ID=70786288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312847.2A Withdrawn CN111210386A (en) 2019-12-18 2019-12-18 Image shooting and splicing method and system

Country Status (1)

Country Link
CN (1) CN111210386A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712023A (en) * 2020-12-30 2021-04-27 武汉万集信息技术有限公司 Vehicle type identification method and system and electronic equipment
CN112712023B (en) * 2020-12-30 2024-04-05 武汉万集光电技术有限公司 Vehicle type recognition method and system and electronic equipment
CN113724176A (en) * 2021-08-23 2021-11-30 广州市城市规划勘测设计研究院 Multi-camera motion capture seamless connection method, device, terminal and medium
CN113450260A (en) * 2021-08-31 2021-09-28 常州市宏发纵横新材料科技股份有限公司 Splicing method for photographed images of multiple cameras
CN113450357A (en) * 2021-09-01 2021-09-28 南昌市建筑科学研究所(南昌市建筑工程质量检测中心) Segment image online analysis subsystem and subway shield detection system
CN113450357B (en) * 2021-09-01 2021-12-17 南昌市建筑科学研究所(南昌市建筑工程质量检测中心) Segment image online analysis subsystem and subway shield detection system

Similar Documents

Publication Publication Date Title
US11554717B2 (en) Vehicular vision system that dynamically calibrates a vehicular camera
CN111210386A (en) Image shooting and splicing method and system
US10919458B2 (en) Method and system for calibrating vehicular cameras
CN112907676B (en) Calibration method, device and system of sensor, vehicle, equipment and storage medium
EP3355241B1 (en) Determining a position of a vehicle on a track
US20150254853A1 (en) Calibration method and calibration device
CN106600644B (en) Parameter correction method and device for panoramic camera
CN110796711B (en) Panoramic system calibration method and device, computer readable storage medium and vehicle
CN114283201A (en) Camera calibration method and device and road side equipment
CN103863205A (en) Auxiliary installation method of camera of vehicle-mounted panoramic system and auxiliary system with same used
CN112907675B (en) Calibration method, device, system, equipment and storage medium of image acquisition equipment
CN105100600A (en) Method and apparatus for automatic calibration in surrounding view systems
CN110176038A (en) Calibrate the method and system of the camera of vehicle
CN110519498B (en) Method and device for calibrating imaging of double-optical camera and double-optical camera
CN110929669A (en) Data labeling method and device
JP5539250B2 (en) Approaching object detection device and approaching object detection method
CN114549595A (en) Data processing method and device, electronic equipment and storage medium
JP2014165810A (en) Parameter acquisition device, parameter acquisition method and program
CN114202588B (en) Method and device for quickly and automatically calibrating vehicle-mounted panoramic camera
KR101697229B1 (en) Automatic calibration apparatus based on lane information for the vehicle image registration and the method thereof
CN108195359B (en) Method and system for acquiring spatial data
CN111818270B (en) Automatic control method and system for multi-camera shooting
KR102298047B1 (en) Method of recording digital contents and generating 3D images and apparatus using the same
CN113011212B (en) Image recognition method and device and vehicle
EP3051494B1 (en) Method for determining an image depth value depending on an image region, camera system and motor vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200529
