CN110533586B - Image stitching method, device, equipment and system based on vehicle-mounted monocular camera - Google Patents


Info

Publication number
CN110533586B
CN110533586B (granted); application CN201810501402.8A
Authority
CN
China
Prior art keywords
frame image
image
current frame
acquisition time
transformation matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810501402.8A
Other languages
Chinese (zh)
Other versions
CN110533586A (en)
Inventor
李雪
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810501402.8A
Publication of CN110533586A (application)
Application granted
Publication of CN110533586B (grant)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides an image stitching method, device, equipment and system based on a vehicle-mounted monocular camera. The method comprises the following steps: acquiring the orientation change information of the vehicle between a first acquisition time, corresponding to the current frame image, and a second acquisition time, corresponding to the previous frame image; constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information; and splicing the current frame image and the previous frame image according to the first transformation matrix. Because the images are spliced based on the orientation change information of the vehicle, the splicing is not affected by the number of feature point pairs, and the splicing effect is good.

Description

Image stitching method, device, equipment and system based on vehicle-mounted monocular camera
Technical Field
The invention relates to the technical field of driver assistance, and in particular to an image stitching method, device, equipment and system based on a vehicle-mounted monocular camera.
Background
In an existing vehicle, a monocular camera is usually mounted on the outside of the vehicle body or in the cab. While the vehicle is running, the monocular camera captures images of the scenery along the way; the adjacent frame images thus acquired are spliced to obtain a spliced image with a large viewing angle, which is displayed to the driver to assist driving.
A typical scheme for splicing adjacent frame images is: extract feature point pairs from the adjacent frame images, register the feature point pairs, and splice the adjacent frame images according to the registration result. In this scheme, if the number of extracted feature point pairs is small, the spliced image obtained from the registration result has obvious seams, and the splicing effect is poor.
Disclosure of Invention
The embodiment of the invention aims to provide an image stitching method, device, equipment and system based on a vehicle-mounted monocular camera so as to improve the stitching effect.
In order to achieve the above object, an embodiment of the present invention provides an image stitching method based on a vehicle-mounted monocular camera, including:
acquiring a current frame image to be spliced;
acquiring the orientation change information of the vehicle between the first acquisition time and the second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image;
constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information;
and splicing the current frame image and the previous frame image according to the first transformation matrix.
Optionally, the acquiring of the orientation change information of the vehicle between the first acquisition time and the second acquisition time may include:
predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points at specified positions in each frame of image;
and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
Optionally, the constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information may include:
and constructing an Euler matrix between the current frame image and the previous frame image according to the relative displacement and the relative rotation angle.
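As a concrete illustration (not taken from the patent itself), the Euler matrix described above can be built as a 3x3 homogeneous rigid transform, assuming the relative displacement is (Δx, Δy) in top-view-image pixels and the relative rotation angle is given in radians:

```python
import numpy as np

def euler_matrix(dx, dy, angle):
    """Homogeneous rotation-plus-translation matrix (a rigid 'Euler'
    transform): it changes position and direction but not shape."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([
        [c, -s, dx],
        [s,  c, dy],
        [0,  0, 1.0],
    ])

# Mapping a point of the previous frame into the current frame's coordinates:
M = euler_matrix(2.0, 1.0, 0.0)        # pure translation, no rotation
p = M @ np.array([3.0, 4.0, 1.0])      # homogeneous point (3, 4)
```

With zero rotation the matrix reduces to a pure translation, consistent with the terminology section's definition that an Euler transformation changes only direction and position.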
Optionally, before the acquiring of the orientation change information of the vehicle between the first acquisition time and the second acquisition time, the method may further include:
extracting characteristic point pairs from the current frame image and the previous frame image of the current frame image;
registering the characteristic point pairs, and calculating a second transformation matrix between the current frame image and the previous frame image according to a registration result;
calculating a deviation between the second transformation matrix and a predetermined historical transformation matrix, the historical transformation matrix being: a transformation matrix between the previous frame image and a further previous frame image of the previous frame image;
judging whether the deviation is greater than a preset threshold value or not;
if the deviation is greater than the preset threshold, executing the step of acquiring the orientation change information of the vehicle between the first acquisition time and the second acquisition time;
and if not, splicing the current frame image and the previous frame image according to the second transformation matrix.
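The fallback logic above can be sketched as follows; this is a minimal illustration, not the patent's implementation, and the Frobenius norm used as the deviation measure is an assumption:

```python
import numpy as np

def choose_matrix(second, historical, orientation_based, threshold=1.0):
    """Keep the feature-based matrix when it is close to the previous
    frame's matrix; otherwise fall back to the matrix constructed from
    the vehicle's orientation change."""
    deviation = np.linalg.norm(second - historical)  # assumed deviation measure
    return second if deviation <= threshold else orientation_based

historical = np.eye(3)
feature_based = np.eye(3) + 0.01       # deviates only slightly: trusted
orientation_based = np.eye(3)
chosen = choose_matrix(feature_based, historical, orientation_based)
```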
Optionally, in a case that it is determined that the deviation is not greater than the preset threshold, the method further includes:
converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the second transformation matrix; the tracking points are pixel points at specified positions in each frame of image; the motion trail is used for acquiring the direction change information of the vehicle;
adding the position of a tracking point in the current frame image to the motion trail in the coordinate system of the current frame image;
in a case where it is determined that the deviation is greater than the preset threshold, after the constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information, the method further includes:
converting a predetermined motion track of a tracking point in a previous image of the current frame image into a coordinate system of the current frame image according to the first transformation matrix;
and in the coordinate system of the current frame image, adding the position of the tracking point in the current frame image to the motion trail.
Optionally, the splicing the current frame image and the previous frame image according to the first transformation matrix may include:
splicing the current frame image with the spliced image corresponding to the previous frame image according to the first transformation matrix to obtain a spliced image corresponding to the current frame image; the spliced image corresponding to the previous frame image is an image obtained by splicing the previous frame image with the images before it.
Optionally, the obtaining of the current frame image to be stitched may include:
acquiring a fisheye image acquired by a vehicle-mounted monocular camera;
and converting the fisheye image into a top-view image according to the parameters of the vehicle-mounted monocular camera, and taking the top-view image as the current frame image to be spliced.
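In practice the fisheye-to-top-view conversion is usually carried out with a precomputed per-pixel lookup table (as the calibration module described later produces). A minimal numpy sketch of applying such a table, with a toy 4x4 "fisheye" image and a hand-made table standing in for real calibration output:

```python
import numpy as np

def remap_with_lut(fisheye, lut_x, lut_y):
    """Build the top-view image by gathering, for each top-view pixel,
    the fisheye pixel whose coordinates the lookup table stores."""
    return fisheye[lut_y, lut_x]

fisheye = np.arange(16).reshape(4, 4)    # toy stand-in for a fisheye frame
lut_x = np.array([[0, 1], [2, 3]])       # source column per top-view pixel
lut_y = np.array([[0, 0], [1, 1]])       # source row per top-view pixel
top_view = remap_with_lut(fisheye, lut_x, lut_y)
```

A real table would be derived from the camera's internal and external parameters; the values here are purely illustrative.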
In order to achieve the above object, an embodiment of the present invention further provides an image stitching device based on a vehicle-mounted monocular camera, including:
the first acquisition module is used for acquiring a current frame image to be spliced;
the second acquisition module is used for acquiring the direction change information of the vehicle between the first acquisition time and the second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image;
the construction module is used for constructing a first transformation matrix between the current frame image and the previous frame image according to the azimuth change information;
and the first splicing module is used for splicing the current frame image and the previous frame image according to the first transformation matrix.
Optionally, the second obtaining module may be specifically configured to:
predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points of specified positions in each frame of image;
and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
Optionally, the building module may be specifically configured to:
and constructing an Euler matrix between the current frame image and the previous frame image according to the relative displacement and the relative rotation angle.
Optionally, the apparatus may further include:
the extraction module is used for extracting characteristic point pairs from the current frame image and the previous frame image of the current frame image before the second acquisition module acquires the direction change information of the vehicle between the first acquisition time and the second acquisition time;
the first calculation module is used for registering the characteristic point pairs and calculating a second transformation matrix between the current frame image and the previous frame image according to a registration result;
a second calculating module, configured to calculate a deviation between the second transformation matrix and a predetermined historical transformation matrix, where the historical transformation matrix is: a transformation matrix between the previous frame image and a further previous frame image of the previous frame image;
the judging module is used for judging whether the deviation is greater than a preset threshold value; if so, triggering the second acquisition module, and if not, triggering the second splicing module;
and the second splicing module is used for splicing the current frame image and the previous frame image according to the second transformation matrix.
Optionally, the apparatus may further include:
the first conversion module is used for converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the second transformation matrix under the condition that the judgment module judges that the deviation is not larger than the preset threshold value, and triggering the adding module; the tracking points are pixel points at specified positions in each frame of image; the motion trail is used for acquiring the direction change information of the vehicle;
the second conversion module is used for converting a predetermined motion track of a tracking point in a previous image of the current frame image into a coordinate system of the current frame image according to the first transformation matrix after the construction module constructs the first transformation matrix between the current frame image and the previous frame image according to the azimuth change information under the condition that the judgment module judges that the deviation is larger than the preset threshold, and triggering an adding module;
and the adding module is used for adding the position of the tracking point in the current frame image to the motion trail in the coordinate system of the current frame image.
Optionally, the first splicing module may be specifically configured to:
splicing the current frame image with the spliced image corresponding to the previous frame image according to the first transformation matrix to obtain a spliced image corresponding to the current frame image; the spliced image corresponding to the previous frame image is an image obtained by splicing the previous frame image with the images before it.
Optionally, the first obtaining module may be specifically configured to:
acquiring a fisheye image acquired by a vehicle-mounted monocular camera;
and converting the fisheye image into a top-view image according to the parameters of the vehicle-mounted monocular camera, and taking the top-view image as the current frame image to be spliced.
In order to achieve the above object, an embodiment of the present invention further provides an electronic device, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any image stitching method based on the vehicle-mounted monocular camera when executing the program stored in the memory.
In order to achieve the above object, an embodiment of the present invention provides a driving assistance system, including a monocular camera and a processing device, wherein:
the monocular camera is used for sending each acquired current frame image to the processing equipment;
the processing device is used for receiving the current frame image; acquiring the azimuth change information of the vehicle between the first acquisition time and the second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image; constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information; and splicing the current frame image and the previous frame image according to the first transformation matrix.
By applying the embodiment of the invention, the orientation change information of the vehicle between the first acquisition time, corresponding to the current frame image, and the second acquisition time, corresponding to the previous frame image, is acquired; a first transformation matrix between the current frame image and the previous frame image is constructed according to the orientation change information; and the current frame image and the previous frame image are spliced according to the first transformation matrix. Because the images are spliced based on the orientation change information of the vehicle, the splicing is not affected by the number of feature point pairs, and the splicing effect is good.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1a is a schematic diagram of a framework according to an embodiment of the present invention;
FIG. 1b is another schematic diagram of a framework according to an embodiment of the present invention;
fig. 1c is a schematic flowchart of a first flowchart of an image stitching method based on a vehicle-mounted monocular camera according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of vehicle orientation change information in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a displacement curve of each frame of image tracking point according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating image stitching of frames according to an embodiment of the present invention;
fig. 5 is a second flowchart of an image stitching method based on a vehicle-mounted monocular camera according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image stitching device based on a vehicle-mounted monocular camera according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a driving assistance system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the present invention.
The terms of the present invention are explained as follows:
the feature point is a point where the image gray value changes drastically or a point where the curvature is large on the edge of the image (i.e., the intersection of two edges).
The perspective transformation matrix refers to a matrix used for projecting the picture to a new view plane.
The internal reference refers to parameters related to the characteristics of the camera, such as the focal length, pixel size, principal point coordinates, distortion coefficient, etc. of the camera.
External reference refers to parameters of the camera in the world coordinate system, such as the position, rotation direction, and the like of the camera.
Viewpoint conversion means changing the viewpoint position and direction according to the image display contents.
A top-view image is obtained as follows: a camera is generally installed at a certain angle to the ground, so in the image it forms, a rectangular object on the ground is deformed into a trapezoid or another shape. According to parameters such as the installation angle of the camera and the distortion of the lens, the image formed by an ordinary camera can be converted to the viewing angle of observing the ground vertically downward; the image generated in this way is a top-view image. If distortion correction and viewpoint conversion are performed, the shape of an object on the ground in the top-view image coincides with the shape of the real object.
The Euler transformation is a kind of perspective transformation; it only changes the direction and position of an object and does not change its shape.
The inventive concept of the invention is as follows:
As shown in fig. 1a, an image stitching apparatus provided in an embodiment of the present invention includes two modules: a calibration module and a splicing module. The calibration module provides a lookup table for the splicing module; the two modules work independently of each other.
The lookup table is built in the calibration module in advance. This process is generally performed at the vehicle factory or a 4S store, and only needs to be performed once after the camera is installed. The process may include:
The camera collects a fisheye image; the calibration module corrects the fisheye image using the camera's internal parameters and converts it into a top-view image through viewpoint transformation according to the camera's external parameters, obtaining a lookup table from the top-view image to the fisheye image, that is, the mapping relation between pixels in the fisheye image and pixels in the top-view image.
Then, while the vehicle is driving, the camera acquires fisheye images in real time and inputs them into the splicing module; the splicing module converts each fisheye image into a top-view image using the lookup table established in the calibration module, splices the top-view images of adjacent frames, and outputs the spliced image.
Specifically, the process in which the splicing module splices the top-view images of adjacent frames may be frame-by-frame splicing. For example, the top-view image of the current frame is spliced with the spliced image corresponding to the top-view image of the previous frame, and that spliced image contains not only the information of the previous frame's top-view image, but also the information of the top-view images before it. Therefore, in the frame-by-frame splicing process, new content is continuously added to the spliced image, the display content is richer, and the visual effect is better.
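The frame-by-frame accumulation can be expressed by chaining the per-adjacent-frame matrices, so every new frame is warped into the mosaic's (first frame's) coordinate system. A sketch under the assumption of 3x3 homogeneous matrices:

```python
import numpy as np

def transforms_to_mosaic(pairwise):
    """Given matrices mapping frame i+1 into frame i, return matrices
    mapping every frame into the first frame's (mosaic) coordinates."""
    chain = [np.eye(3)]
    for T in pairwise:
        chain.append(chain[-1] @ T)
    return chain

# Two pure translations compose into their sum:
T1 = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
T2 = np.array([[1.0, 0.0, 3.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
chain = transforms_to_mosaic([T1, T2])
```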
As described in detail below, the splicing module, as shown in fig. 1b, may include the following sub-modules: a top-view image generation sub-module, an image registration sub-module and an image fusion sub-module.
The top-view image generation sub-module reads the lookup table in the calibration module, uses it to convert the fisheye image acquired by the camera in real time into a top-view image, and inputs the top-view image to the image registration sub-module.
The image registration sub-module performs feature point registration on the top-view images of adjacent frames to obtain a perspective transformation matrix. Specifically, the process may include: extracting feature points, describing feature points, matching feature points to obtain feature point pairs, and calculating the perspective transformation matrix.
If the deviation between the calculated perspective transformation matrix and the perspective transformation matrix used for splicing the previous frame's top-view image is smaller than a threshold, the image registration sub-module inputs the calculated perspective transformation matrix to the image fusion sub-module.
If the deviation is not smaller than the threshold, the image registration sub-module also needs to correct the calculated perspective transformation matrix.
The correction process may include:
one or more pixel points in the current frame overlook image are taken as tracking points, and the motion trail of the tracking points in the spliced image is obtained. For example, the process of forming the motion trajectory may include: after the first frame image is obtained, in a coordinate system of the first frame image, a central point of the first frame image is designated as a tracking point, and coordinates of the tracking point in the first frame image are obtained; after a second frame image is acquired, converting the coordinates of the tracking points in the first frame image into a coordinate system of the second frame image, designating the central point of the second frame image as the tracking point to obtain the coordinates of the tracking points in the second frame image, wherein the coordinates of the two tracking points form a section of motion track; and after a third frame image is acquired, converting the formed section of motion track into a coordinate system of the third frame image, designating a central point of the third frame image as a tracking point to obtain coordinates of the tracking point in the third frame image, adding the coordinates of the tracking point in the third frame image to the section of motion track, and the like. The images in this example are all top view images.
Using the motion trail, the relative displacement and relative rotation angle of the vehicle body are predicted. The relative displacement and relative rotation angle here mean the relative displacement and relative rotation angle of the vehicle body in the time period between the current frame top-view image and the next frame top-view image.
An Euler transformation matrix between the current frame top-view image and the next frame top-view image is calculated based on the relative displacement and the relative rotation angle; this Euler transformation matrix is the corrected transformation matrix.
The image registration sub-module inputs the Euler transformation matrix to the image fusion sub-module.
For the image fusion sub-module, the input may be an uncorrected perspective transformation matrix or a corrected Euler transformation matrix. The image fusion sub-module uses both kinds of matrices in the same way to splice the top-view images of adjacent frames, so the description below does not distinguish between them.
As described above, the splicing process may be frame-by-frame splicing: the top-view image of the current frame is spliced with the spliced image corresponding to the top-view image of the previous frame, and that spliced image contains not only the information of the previous frame's top-view image, but also the information of the top-view images before it. The image fusion sub-module outputs the spliced image.
Therefore, by applying this scheme, if the matrix deviation is large, the matrix is corrected based on the relative displacement and relative rotation angle of the vehicle body, and image splicing is performed using the corrected matrix; the splicing is not affected by the number or quality of feature point pairs, and the splicing effect is good.
Based on the same inventive concept, embodiments of the present invention provide an image stitching method, an apparatus, a device, and a system based on a vehicle-mounted monocular camera, where the method and the apparatus may be applied to various electronic devices, such as a mobile phone, an image processing device installed in a vehicle, or a vehicle-mounted camera with an image processing function, and the like, and are not limited specifically.
Fig. 1c is a first flowchart of an image stitching method based on a vehicle-mounted monocular camera according to an embodiment of the present invention, including:
s101: and acquiring a current frame image to be spliced.
For example, if the execution subject is a vehicle-mounted monocular camera, the camera may use the current frame image it acquires as the current frame image to be spliced. If the execution subject is another electronic device in communication with the vehicle-mounted monocular camera, the electronic device may acquire the current frame image collected by the camera as the current frame image to be spliced.
Or, an image acquired by the camera may be used as an original image, and the processed original image may be used as the current frame image to be stitched. For example, assuming that an original image acquired by the vehicle-mounted monocular camera is a fisheye image, the fisheye image may be converted into an overhead image according to parameters of the vehicle-mounted monocular camera, and the overhead image is used as a current frame image to be stitched. The parameters of the vehicle-mounted monocular camera are obtained in advance through calibration.
Alternatively, distortion correction or other optimization processing may be performed on the original image or the top-view image, and the processed image may be used as the current frame image to be stitched.
The embodiment of the present invention can splice a video stream, in which each frame is taken in turn as the current frame image to be spliced.
S102: acquiring the direction change information of the vehicle between the first acquisition time and the second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image.
The orientation change information may include position change information and direction change information, such as the relative displacement and relative rotation angle of the vehicle between the first acquisition time and the second acquisition time. As shown in FIG. 2, assume the vehicle is at position P0 at the first acquisition time and at position P1 at the second acquisition time. Relative to P0, the displacement of P1 in the x direction is Δx and the displacement in the y direction is Δy; the relative rotation angle of P1 with respect to P0, measured against the x direction, is b.
For example, the position information and the driving direction of the vehicle at the first acquisition time and the second acquisition time may be obtained through a Global Positioning System (GPS) mounted on the vehicle, and the relative displacement and relative rotation angle of the vehicle between the first acquisition time and the second acquisition time are calculated from the position information and the driving directions.
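A minimal sketch of that GPS-based computation, expressing the displacement in the vehicle's own axes at the earlier fix (the coordinate conventions here are assumptions, not specified by the patent):

```python
import numpy as np

def relative_motion(pos0, heading0, pos1, heading1):
    """Relative displacement (in the pose-0 vehicle frame) and relative
    rotation angle between two GPS fixes (position, heading in radians)."""
    d = np.asarray(pos1, dtype=float) - np.asarray(pos0, dtype=float)
    c, s = np.cos(-heading0), np.sin(-heading0)
    dx = c * d[0] - s * d[1]      # rotate the world-frame offset
    dy = s * d[0] + c * d[1]      # into the vehicle frame at pose 0
    return dx, dy, heading1 - heading0

dx, dy, dtheta = relative_motion((0.0, 0.0), 0.0, (3.0, 4.0), 0.1)
```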
Alternatively, as an embodiment, S102 may include: predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points of specified positions in each frame of image; and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
In this embodiment, each frame image is assigned a tracking point, which may be a pixel point at a fixed position in the image, for example, the central point of the image; the motion trajectory formed by the tracking points is the motion trajectory of the vehicle. The motion trajectory determined before the current frame image is processed does not include the tracking point in the current frame image, but the segment of trajectory between the end point of that motion trajectory and the tracking point in the current frame image can be predicted from the trend of the motion trajectory. Since the end point is the tracking point in the previous frame image of the current frame image, the predicted segment is the predicted motion trajectory of the tracking point between the first acquisition time and the second acquisition time.
The predicted movement track can also be used as the movement track of the vehicle between the first acquisition time and the second acquisition time, so that the relative displacement and the relative rotation angle of the vehicle can be calculated according to the predicted movement track.
Referring to fig. 3, (a) in fig. 3 represents a displacement curve of a tracking point p in the x direction, the horizontal axis represents a frame number of an image, and the vertical axis represents a displacement value in the x direction; fig. 3 (b) shows a displacement curve of the tracking point p in the y direction, the horizontal axis shows the frame number of the image, and the vertical axis shows the displacement value in the y direction; in fig. 3, (c) shows the movement locus of the tracking point p, the horizontal axis shows the displacement value in the x direction, and the vertical axis shows the displacement value in the y direction. The three images in fig. 3 all use the position of the pixel point at the upper left corner of the image as the position of the origin of coordinates.
Suppose the coordinate value of the tracking point in the previous frame image of the current frame is (x_c, y_c), and the coordinate value of the tracking point in the current frame is (x_{c+1}, y_{c+1}). Then the relative displacement of the vehicle in the x direction is d_x = x_{c+1} - x_c, the relative displacement of the vehicle in the y direction is d_y = y_{c+1} - y_c, and the relative rotation angle of the vehicle is α = arctan(k_{c+1}) - arctan(k_c), where k_c denotes the slope of the motion trajectory at the tracking point in the previous frame image and k_{c+1} denotes the slope at the tracking point in the current frame image. Specifically, k_c and k_{c+1} can be calculated from the first derivative of the motion trajectory in fig. 3(c). Here d_x corresponds to Δx in fig. 2, d_y corresponds to Δy in fig. 2, and α corresponds to b in fig. 2.
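The computation above can be sketched as follows; the function name `step_motion`, the finite-difference slope estimate, and the three-point trajectory input are illustrative assumptions, not the patent's exact procedure:

```python
import math

def step_motion(track):
    """Estimate (d_x, d_y, alpha) from the last tracking points.

    `track` is a list of (x, y) tracking-point coordinates, one per frame.
    The slopes k_c and k_{c+1} approximate the first derivative dy/dx of
    the trajectory at the previous and current points (finite differences).
    """
    (x_a, y_a), (x_b, y_b), (x_c, y_c) = track[-3], track[-2], track[-1]
    d_x, d_y = x_c - x_b, y_c - y_b                 # relative displacement
    k_prev = (y_b - y_a) / (x_b - x_a)              # slope at previous point
    k_curr = (y_c - y_b) / (x_c - x_b)              # slope at current point
    alpha = math.atan(k_curr) - math.atan(k_prev)   # relative rotation angle
    return d_x, d_y, alpha

# A straight-line trajectory: displacement of one unit per axis, no rotation.
d_x, d_y, alpha = step_motion([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
```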
S103: and constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information.
As an embodiment, an euler matrix between the current frame image and the previous frame image may be constructed according to the relative displacement and the relative rotation angle obtained above.
As described above, the relative displacements of the vehicle between the first acquisition time and the second acquisition time are d_x and d_y, and the relative rotation angle is α. Assume the coordinate value of the vehicle at the second acquisition time is (x_0, y_0) and the coordinate value of the vehicle at the first acquisition time is (x_1, y_1). Then we can get:

x_1 = x_0·cos α - y_0·sin α + d_x
y_1 = x_0·sin α + y_0·cos α + d_y

From the Euler matrix we can derive:

[x_1]   [cos α   -sin α   d_x]   [x_0]
[y_1] = [sin α    cos α   d_y] · [y_0]
[ 1 ]   [  0        0      1 ]   [ 1 ]

The Euler matrix H is:

    [cos α   -sin α   d_x]
H = [sin α    cos α   d_y]
    [  0        0      1 ]
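A minimal sketch of building the Euler matrix H from d_x, d_y and α, and checking that it maps (x_0, y_0) to (x_1, y_1); the function name `euler_matrix` is an assumption:

```python
import numpy as np

def euler_matrix(d_x, d_y, alpha):
    """First transformation matrix H built from the relative motion."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, d_x],
                     [s,  c, d_y],
                     [0.0, 0.0, 1.0]])

# Displacement (2, 1) with a 90-degree rotation.
H = euler_matrix(2.0, 1.0, np.pi / 2)
p0 = np.array([1.0, 0.0, 1.0])   # (x_0, y_0) in homogeneous coordinates
p1 = H @ p0                       # (x_1, y_1, 1)
```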
s104: and splicing the current frame image and the previous frame image according to the first transformation matrix.
For example, the previous frame image may be projected to the plane of the current frame image through the first transformation matrix, and then the projected previous frame image and the current frame image are spliced.
As an embodiment, the current frame image and the previous frame image may be merged by combining with an image interpolation algorithm. It can be understood that if two images have unmatched pixel points, the unmatched pixel points can be interpolated by using the adjacent pixel values of the unmatched pixel points through an image interpolation algorithm.
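A rough sketch of projecting the previous frame image into the plane of the current frame image through a transformation matrix; the inverse mapping with nearest-neighbor sampling and the rule of filling only still-empty canvas pixels are simplifying assumptions standing in for the interpolation/fusion step described above:

```python
import numpy as np

def warp_into(canvas, image, H):
    """Project `image` into `canvas` through transformation matrix H.

    Inverse mapping with nearest-neighbor sampling: for each canvas pixel,
    look up the source pixel that H^-1 maps it to. Pixels already filled
    in the canvas (e.g. the current frame) are left untouched.
    """
    Hinv = np.linalg.inv(H)
    h, w = canvas.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ pts                               # 3 x N source coordinates
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    ok = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    flat = canvas.reshape(-1)
    fill = ok & (flat == 0)                        # only fill empty pixels
    flat[fill] = image[sy[fill], sx[fill]]
    return canvas

# Translate a 2x2 image two pixels to the right inside a 2x4 canvas.
canvas = np.zeros((2, 4))
prev = np.array([[5.0, 6.0], [7.0, 8.0]])
H = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
warp_into(canvas, prev, H)
```

A production implementation would use bilinear interpolation and proper blending in the overlap region rather than this first-come fill rule.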
As described above, the embodiment of the present invention may perform stitching on a video stream, using each frame image in the video stream as the current frame image to be stitched. Assuming that the current frame image is the Nth frame image, a stitched image of the (N-1)th frame image and the (N-2)th frame image has already been obtained before S101; therefore, in S104, the Nth frame image may be stitched with that stitched image. In this case, during the stitching process, the Nth frame image is stitched with the (N-1)th frame image contained in the stitched image.
As an embodiment, S104 may include: stitching the current frame image with the stitched image corresponding to the previous frame image according to the first transformation matrix, to obtain the stitched image corresponding to the current frame image; the stitched image corresponding to the previous frame image is: an image obtained by stitching the previous frame image with the images before it.
Continuing with the above example, the "stitched image corresponding to the previous frame image" may be understood as the stitched image of the above-mentioned (N-1)th frame image and (N-2)th frame image, and the "stitched image corresponding to the current frame image" may be understood as the stitched image of the Nth, (N-1)th, and (N-2)th frame images.
In this embodiment, the current frame image is stitched with the stitched image corresponding to the previous frame image, and that stitched image includes not only information of the previous frame image but also image information of the frames before the previous frame image. As shown in fig. 4, f1 denotes a first frame image, f2 denotes a second frame image, p1 denotes an image obtained by stitching the first frame image and the second frame image, f3 denotes a third frame image, p2 denotes a stitched image of p1 and f3, and so on. That is to say, in this embodiment, during frame-by-frame stitching, new content is continuously added to the stitched image, so that the displayed content is richer and the visual effect is better.
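The frame-by-frame accumulation of fig. 4 can be sketched as a simple fold over the stream; `stitch_stream` and the `stitch` callback are illustrative assumptions:

```python
def stitch_stream(frames, stitch):
    """Frame-by-frame accumulation as in fig. 4.

    `stitch(mosaic, frame)` is assumed to return a new mosaic; the fold
    reproduces p1 = stitch(f1, f2), p2 = stitch(p1, f3), and so on.
    """
    mosaic = frames[0]
    for frame in frames[1:]:
        mosaic = stitch(mosaic, frame)
    return mosaic

# Strings stand in for images to make the accumulation order visible.
mosaic = stitch_stream(["f1", "f2", "f3"],
                       lambda m, f: "(" + m + "+" + f + ")")
```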
In the existing scheme, image stitching is usually performed by adopting a characteristic point pair registration mode, but in some scenes, such as a highway with little scene change, the content between adjacent frame images is similar, and it is difficult to extract an accurate characteristic point pair. It can be understood that the feature point pairs belong to two frames of images respectively, and the feature point pairs correspond to the same point in the real space, while on a highway with little scene change, many pixel points in adjacent frame images are similar (such as pixel points on the highway), and it is difficult to distinguish the feature point pairs corresponding to the same point in the real space, so that the condition of false detection or false matching of the feature point pairs is easy to occur. Alternatively, if the vehicle running speed is high, the overlap area existing between the adjacent frame images is small, and in this case, it is also difficult to extract the feature point pairs in the adjacent frame images.
In such scenes, if image stitching is performed by feature point pair registration, the stitching effect is poor; with the embodiment of the present invention, however, image stitching is performed based on the orientation change information of the vehicle, is not affected by the quantity and quality of feature point pairs, and achieves a good stitching effect.
As an embodiment, before S102, feature point pairs may be extracted from the current frame image and the previous frame image of the current frame image; the feature point pairs are registered, and a second transformation matrix between the current frame image and the previous frame image is calculated according to the registration result; a deviation between the second transformation matrix and a predetermined historical transformation matrix is calculated, the historical transformation matrix being: a transformation matrix between the previous frame image and a further previous frame image of the previous frame image; whether the deviation is greater than a preset threshold is judged; if the deviation is greater than the preset threshold, S102 is executed; if it is not greater, the current frame image and the previous frame image are stitched according to the second transformation matrix.
In the present embodiment, a transformation matrix obtained from the feature point pair registration result is referred to as a second transformation matrix, and a transformation matrix obtained from the vehicle direction change information is referred to as a first transformation matrix.
In this embodiment, a second transformation matrix between the current frame image and the previous frame image is calculated by using a feature point pair registration method, where the second transformation matrix may be a homography matrix. Assuming that the current frame image is the Nth frame image, the second transformation matrix is the transformation matrix between the Nth frame image and the N-1 th frame image, and the predetermined historical transformation matrix is the transformation matrix between the N-1 th frame image and the N-2 nd frame image. If the deviation between the second transformation matrix and the historical transformation matrix is small, namely the second transformation matrix meets the deviation condition, the second transformation matrix can be directly utilized to splice the Nth frame image and the (N-1) th frame image without executing a subsequent scheme; if the deviation is large, the image splicing effect is poor by using the characteristic point pair registration mode, and the image splicing needs to be performed based on the direction change information of the vehicle so as to improve the splicing effect.
And the historical transformation matrix between the N-1 frame image and the N-2 frame image is a first transformation matrix or a second transformation matrix. Similarly, a second transformation matrix between the image of the N-1 th frame and the image of the N-2 th frame may be calculated, and if the deviation between the second transformation matrix and the transformation matrix between the image of the N-2 th frame and the image of the N-3 th frame is small (the deviation condition is satisfied), the historical transformation matrix between the image of the N-1 th frame and the image of the N-2 th frame is the second transformation matrix, otherwise, the historical transformation matrix is the first transformation matrix.
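The matrix-selection logic above can be sketched as follows; using the Frobenius norm as the deviation measure is an assumption, since the patent does not fix a specific measure:

```python
import numpy as np

def choose_matrix(H_second, H_history, H_first, threshold):
    """Pick the transformation matrix to use for stitching.

    If the feature-based matrix (H_second) deviates too much from the
    historical matrix, fall back to the orientation-based matrix (H_first).
    """
    deviation = np.linalg.norm(H_second - H_history)   # Frobenius norm
    if deviation > threshold:
        return H_first     # feature registration looks unreliable
    return H_second        # feature registration is trustworthy

H_hist = np.eye(3)
H_feat_close = np.eye(3) + 0.01    # small deviation from history
H_feat_far = np.eye(3) + 1.0       # large deviation from history
H_orient = 2.0 * np.eye(3)         # orientation-based fallback
chosen_small = choose_matrix(H_feat_close, H_hist, H_orient, threshold=0.5)
chosen_large = choose_matrix(H_feat_far, H_hist, H_orient, threshold=0.5)
```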
As described above, each frame of image is assigned with a tracking point, and the motion trajectory formed by each tracking point is the motion trajectory of the vehicle, and the formation process of the motion trajectory may be: converting a predetermined motion track of a tracking point in a previous image of the current frame image into a coordinate system of the current frame image according to a transformation matrix between the current frame image and a previous frame image of the current frame image; and in the coordinate system of the current frame image, adding the position of the tracking point in the current frame image to the motion trail.
The transformation matrix between the current frame image and the previous frame image is the first transformation matrix or the second transformation matrix: similarly, it is the second transformation matrix if the deviation condition is satisfied, and the first transformation matrix if it is not.
For example, after the first frame image is acquired, in the coordinate system of the first frame image, the central point of the first frame image is designated as the tracking point, and the coordinates of the tracking point in the first frame image are obtained; after a second frame image is obtained, coordinates of a tracking point in the first frame image are converted into a coordinate system of the second frame image, a central point of the second frame image is designated as the tracking point, coordinates of the tracking point in the second frame image are obtained, and the coordinates of the two tracking points form a motion track; and after a third frame image is acquired, converting the formed section of motion track into a coordinate system of the third frame image, designating a central point of the third frame image as a tracking point to obtain coordinates of the tracking point in the third frame image, adding the coordinates of the tracking point in the third frame image to the section of motion track, and the like.
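The trajectory-formation process above can be sketched as follows; `update_track` and the array representation of the trajectory are illustrative assumptions:

```python
import numpy as np

def update_track(track, H, center):
    """Carry the motion trajectory into the new frame's coordinate system.

    `track` is an N x 2 array of tracking-point coordinates expressed in
    the previous frame's coordinate system; H maps previous-frame
    coordinates to current-frame coordinates; `center` is the tracking
    point (e.g. the image center) of the current frame.
    """
    if len(track):
        pts = np.hstack([track, np.ones((len(track), 1))])  # homogeneous
        moved = (H @ pts.T).T
        track = moved[:, :2] / moved[:, 2:3]
    return np.vstack([track, [center]])

track = np.empty((0, 2))
track = update_track(track, np.eye(3), (2.0, 2.0))   # first frame
H = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
track = update_track(track, H, (2.0, 2.0))           # second frame
```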
By applying the embodiment shown in FIG. 1c of the invention, the orientation change information of the vehicle between the first acquisition time corresponding to the current frame image and the second acquisition time corresponding to the previous frame image is acquired, and a first transformation matrix between the current frame image and the previous frame image is constructed according to the orientation change information; and splicing the current frame image and the previous frame image according to the first transformation matrix. Therefore, in the scheme, the image splicing is carried out based on the direction change information of the vehicle, the influence of the quantity and quality of the characteristic point pairs is avoided, and the splicing effect is good.
Fig. 5 is a second flowchart of the image stitching method based on the vehicle-mounted monocular camera according to the embodiment of the present invention, including:
s501: and acquiring a current frame image to be spliced.
For example, if the execution subject is a vehicle-mounted monocular camera, the camera may use the current frame image acquired by the camera as the current frame image to be stitched. If the execution subject is other electronic equipment in communication connection with the vehicle-mounted monocular camera, the electronic equipment can acquire the current frame image acquired by the camera as the current frame image to be spliced.
Or, an image acquired by the camera may be used as an original image, and the processed original image may be used as the current frame image to be stitched. For example, assuming that an original image acquired by the vehicle-mounted monocular camera is a fisheye image, the fisheye image may be converted into an overhead image according to parameters of the vehicle-mounted monocular camera, and the overhead image is used as a current frame image to be stitched. The parameters of the vehicle-mounted monocular camera are obtained in advance through calibration.
Alternatively, distortion correction or other optimization processing may be performed on the original image or the top-view image, and the processed image may be used as the current frame image to be stitched.
S502: and extracting characteristic point pairs from the current frame image and the previous frame image of the current frame image.
S503: and registering the characteristic point pair, and calculating a second transformation matrix between the current frame image and the previous frame image according to a registration result. The second transformation matrix may be a homography matrix.
In the present embodiment, the transformation matrix obtained from the feature point pair registration result is referred to as a second transformation matrix, and the transformation matrix obtained from the vehicle orientation change information is referred to as a first transformation matrix.
S504: calculating a deviation between the second transformation matrix and a predetermined historical transformation matrix, the historical transformation matrix being: and a transformation matrix between the previous frame image and a frame image of the previous frame image.
S505: judging whether the deviation is greater than a preset threshold value or not; if not, executing S506-S508, if yes, executing S509-S514.
S506: and splicing the current frame image and the previous frame image according to the second transformation matrix.
S507: and converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the second transformation matrix.
The tracking points are pixel points of specified positions in each frame of image; the movement track is used for acquiring the direction change information of the vehicle.
S508: and in the coordinate system of the current frame image, adding the position of the tracking point in the current frame image to the motion track.
The execution sequence of S506 and S507-S508 is not limited: S506 may be executed first and then S507-S508; or S507-S508 first and then S506; or S506 and S507-S508 may be executed simultaneously. If S506 is executed first and then S507-S508, the coordinate system of the current frame image in S507 is the coordinate system of the stitched image. That is, in the stitched image obtained by stitching the current frame image and the previous frame image, with the current frame image as the reference, the coordinate system of the stitched image is the coordinate system of the current frame image.
S509: and predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image. The first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image.
S510: and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
S511: and constructing a first transformation matrix between the current frame image and the previous frame image according to the relative displacement and the relative corner, wherein the first transformation matrix can be an Euler matrix.
S512: and splicing the current frame image and the previous frame image according to the first transformation matrix.
S513: and converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the first transformation matrix.
S514: and in the coordinate system of the current frame image, adding the position of the tracking point in the current frame image to the motion track.
The execution sequence of S512 and S513-S514 is not limited: S512 may be executed first and then S513-S514; or S513-S514 first and then S512; or S512 and S513-S514 may be executed simultaneously. If S512 is executed first and then S513-S514, the coordinate system of the current frame image in S513 is the coordinate system of the stitched image. That is, in the stitched image obtained by stitching the current frame image and the previous frame image, with the current frame image as the reference, the coordinate system of the stitched image is the coordinate system of the current frame image.
For example, the process of forming the motion trajectory may include: after the first frame image is obtained, in a coordinate system of the first frame image, a central point of the first frame image is designated as a tracking point, and coordinates of the tracking point in the first frame image are obtained; after a second frame image is acquired, converting the coordinates of the tracking points in the first frame image into a coordinate system of the second frame image, designating the central point of the second frame image as the tracking point to obtain the coordinates of the tracking points in the second frame image, wherein the coordinates of the two tracking points form a section of motion track; and after a third frame image is acquired, converting the formed section of motion track into a coordinate system of the third frame image, designating the central point of the third frame image as a tracking point to obtain the coordinates of the tracking point in the third frame image, adding the coordinates of the tracking point in the third frame image to the section of motion track, and the like.
In one embodiment, in S506 and S512, the current frame image is stitched with the stitched image corresponding to the previous frame image, and the stitched image corresponding to the previous frame image includes not only the information of the previous frame image but also the image information of the previous frame image. As shown in fig. 4, f1 denotes a first frame image, f2 denotes a second frame image, p1 denotes an image obtained by stitching the first frame image and the second frame image, f3 denotes a third frame image, p2 denotes a stitched image of p1 and f3, and so on. That is to say, in the embodiment, in the process of performing frame-by-frame stitching, new content is continuously added to the stitched image, so that the displayed content is richer, and the visual effect is better.
By applying the embodiment shown in fig. 5 of the invention, the orientation change information of the vehicle between the first acquisition time corresponding to the current frame image and the second acquisition time corresponding to the previous frame image is acquired, and the first transformation matrix between the current frame image and the previous frame image is constructed according to the orientation change information; the current frame image and the previous frame image are then stitched according to the first transformation matrix. Therefore, in this scheme, image stitching is performed based on the orientation change information of the vehicle, is not affected by the quantity and quality of feature point pairs, and achieves a good stitching effect.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides an image stitching device based on a vehicle-mounted monocular camera, as shown in fig. 6, including:
a first obtaining module 601, configured to obtain a current frame image to be stitched;
a second obtaining module 602, configured to obtain the direction change information of the vehicle between the first collecting time and the second collecting time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image;
a constructing module 603, configured to construct a first transformation matrix between the current frame image and the previous frame image according to the orientation change information;
a first stitching module 604, configured to stitch the current frame image and the previous frame image according to the first transformation matrix.
As an embodiment, the second obtaining module 602 may specifically be configured to:
predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points of specified positions in each frame of image;
and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
As an embodiment, the building module 603 may be specifically configured to:
and constructing an Euler matrix between the current frame image and the previous frame image according to the relative displacement and the relative rotation angle.
As an embodiment, the apparatus may further include: an extraction module, a first calculation module, a second calculation module, a judgment module, and a second concatenation module (not shown in the figure), wherein,
the extraction module is used for extracting characteristic point pairs from the current frame image and the previous frame image of the current frame image before the second acquisition module acquires the direction change information of the vehicle between the first acquisition time and the second acquisition time;
the first calculation module is used for registering the characteristic point pairs and calculating a second transformation matrix between the current frame image and the previous frame image according to a registration result;
a second calculating module, configured to calculate a deviation between the second transformation matrix and a predetermined historical transformation matrix, where the historical transformation matrix is: a transformation matrix between the previous frame image and a further previous frame image of the previous frame image;
the judging module is used for judging whether the deviation is greater than a preset threshold; if the deviation is greater than the preset threshold, the second acquisition module is triggered, and if it is not greater, the second splicing module is triggered;
and the second splicing module is used for splicing the current frame image and the previous frame image according to the second transformation matrix.
As an embodiment, the apparatus may further include: a first conversion module, a second conversion module and an adding module (not shown in the figure), wherein,
the first conversion module is used for converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the second transformation matrix under the condition that the judgment module judges that the deviation is not larger than the preset threshold value, and triggering the adding module; the tracking points are pixel points of specified positions in each frame of image; the motion trail is used for acquiring the direction change information of the vehicle;
the second conversion module is used for converting a predetermined motion track of a tracking point in a previous image of the current frame image into a coordinate system of the current frame image according to the first transformation matrix after the construction module constructs the first transformation matrix between the current frame image and the previous frame image according to the azimuth change information under the condition that the judgment module judges that the deviation is larger than the preset threshold, and triggering an adding module;
and the adding module is used for adding the position of the tracking point in the current frame image to the motion trail in the coordinate system of the current frame image.
As an embodiment, the first splicing module 604 may be specifically configured to:
stitching the current frame image with the stitched image corresponding to the previous frame image according to the first transformation matrix, to obtain the stitched image corresponding to the current frame image; the stitched image corresponding to the previous frame image is: an image obtained by stitching the previous frame image with the images before it.
As an embodiment, the first obtaining module 601 may specifically be configured to:
acquiring a fisheye image acquired by a vehicle-mounted monocular camera;
and converting the fisheye image into an overlook image according to the parameters of the vehicle-mounted monocular camera, and taking the overlook image as the current frame image to be spliced.
By applying the embodiment shown in fig. 6 of the present invention, the orientation change information of the vehicle between the first collection time corresponding to the current frame image and the second collection time corresponding to the previous frame image is obtained, and the first transformation matrix between the current frame image and the previous frame image is constructed according to the orientation change information; and splicing the current frame image and the previous frame image according to the first transformation matrix. Therefore, in the scheme, the image splicing is carried out based on the direction change information of the vehicle, the influence of the quantity and quality of the characteristic point pairs is avoided, and the splicing effect is good.
The first acquisition module 601 in the embodiment shown in fig. 6 may be understood as a top view image generation sub-module in fig. 1b, the second acquisition module 602 and the construction module 603 may exist in an image registration sub-module in fig. 1b, and the first stitching module may exist in an image fusion sub-module in fig. 1 b.
The extraction module, the first calculation module, the second calculation module, and the determination module may exist in the image registration sub-module in fig. 1b, and the second stitching module may exist in the image fusion sub-module in fig. 1 b.
The first conversion module, the second conversion module and the addition module may be present in the image registration sub-module in fig. 1 b.
An embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702, and the memory 703 complete mutual communication through the communication bus 704,
a memory 703 for storing a computer program;
the processor 701 is configured to implement any one of the image stitching methods based on the on-board monocular camera when executing the program stored in the memory 703.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this is not intended to represent only one bus or type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the invention also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the image stitching method based on the vehicle-mounted monocular camera is realized.
An embodiment of the present invention further provides a driving assistance system, as shown in fig. 8, including: a monocular camera and a processing device, wherein:
the monocular camera is used for sending each acquired current frame image to the processing equipment;
the processing device is used for receiving the current frame image; acquiring orientation change information of the vehicle between a first acquisition time and a second acquisition time, where the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image; constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information; and splicing the current frame image and the previous frame image according to the first transformation matrix.
The processing device may include a display screen for displaying the spliced images. Alternatively, a separate display device may be provided in the system to display the spliced images. The monocular camera, the processing device, and the display device may also be integrated into one device.
The processing device in the system can execute any one of the image stitching methods based on the vehicle-mounted monocular camera.
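The flow the processing device performs (orientation change → transformation matrix → splicing) can be pictured with a small numpy sketch. It is illustrative only: the function names, the planar 2-D rigid-motion model, and the nearest-neighbour paste are assumptions, not part of the disclosure.

```python
import numpy as np

def motion_matrix(dx, dy, dtheta):
    # Homogeneous 2-D rigid transform built from the vehicle's relative
    # displacement (dx, dy) and relative rotation angle dtheta (radians).
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

def splice(mosaic, cur_img, T):
    # Paste cur_img onto a copy of the mosaic after mapping each of its
    # pixel coordinates through T (nearest neighbour, no blending).
    out = mosaic.copy()
    h, w = cur_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    mx, my, _ = T @ pts
    mx, my = np.rint(mx).astype(int), np.rint(my).astype(int)
    ok = (mx >= 0) & (mx < out.shape[1]) & (my >= 0) & (my < out.shape[0])
    out[my[ok], mx[ok]] = cur_img[ys.ravel()[ok], xs.ravel()[ok]]
    return out
```

A real implementation would add blending in the overlap region and grow the mosaic canvas as the vehicle moves; both are omitted here for brevity.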
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the embodiment of the image stitching device based on the vehicle-mounted monocular camera shown in fig. 6, the embodiment of the electronic device shown in fig. 7, the embodiment of the computer-readable storage medium described above, and the embodiment of the driving assistance system shown in fig. 8 are substantially similar to the embodiment of the image stitching method based on the vehicle-mounted monocular camera shown in figs. 1a to 5, so their descriptions are relatively brief; for relevant points, reference may be made to the partial description of that method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. An image stitching method based on a vehicle-mounted monocular camera is characterized by comprising the following steps:
acquiring a current frame image to be spliced;
acquiring orientation change information of the vehicle between a first acquisition time and a second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image;
constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information;
splicing the current frame image and the previous frame image according to the first transformation matrix;
wherein, the obtaining of the orientation change information of the vehicle between the first acquisition time and the second acquisition time comprises:
predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points at specified positions in each frame of image;
and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
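Claim 1 derives the vehicle's relative displacement and rotation angle from the predicted motion trail of the tracking points. One conventional way to recover a 2-D rigid motion from two sets of corresponding point positions is a least-squares (Kabsch/Procrustes) fit; the sketch below is an assumption about how such a computation could look, not the patented implementation.

```python
import numpy as np

def relative_motion(pts_prev, pts_cur):
    # Least-squares 2-D rigid motion (rotation angle theta, translation t)
    # mapping the (N, 2) tracking-point positions pts_prev onto pts_cur.
    cp, cc = pts_prev.mean(0), pts_cur.mean(0)
    H = (pts_prev - cp).T @ (pts_cur - cc)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    theta = np.arctan2(R[1, 0], R[0, 0])
    t = cc - R @ cp                             # so that cur = R @ prev + t
    return t, theta
```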
2. The method of claim 1, wherein the constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information comprises:
and constructing an Euler matrix between the current frame image and the previous frame image according to the relative displacement and the relative rotation angle.
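For planar (top-view) motion, the "Euler matrix" of claim 2 built from the relative displacement and relative rotation angle can be written as a homogeneous rotation-plus-translation matrix; the sketch below assumes that planar form.

```python
import numpy as np

def euler_matrix(dx, dy, dtheta):
    # 3x3 homogeneous matrix combining the relative rotation angle dtheta
    # (radians, about the ground-plane normal) with the relative
    # displacement (dx, dy): rotation applied first, then translation.
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0., 0., 1.]])
```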
3. The method of claim 1, further comprising, prior to said obtaining the change in orientation information of the vehicle between the first acquisition time and the second acquisition time:
extracting characteristic point pairs from the current frame image and the previous frame image of the current frame image;
registering the characteristic point pairs, and calculating a second transformation matrix between the current frame image and the previous frame image according to a registration result;
calculating a deviation between the second transformation matrix and a predetermined historical transformation matrix, the historical transformation matrix being: a transformation matrix between the previous frame image and a further previous frame image of the previous frame image;
judging whether the deviation is greater than a preset threshold value or not;
if so, executing the step of acquiring the orientation change information of the vehicle between the first acquisition time and the second acquisition time;
and if not, splicing the current frame image and the previous frame image according to the second transformation matrix.
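The branch in claim 3 can be summarised as: compute a deviation between the feature-based second transformation matrix and the historical matrix, then pick a path. The claim does not fix the deviation metric; the sketch below assumes a Frobenius norm.

```python
import numpy as np

def choose_transform(T2, T_hist, threshold):
    # T2: matrix from feature-point registration; T_hist: matrix between
    # the previous frame and the frame before it.  Returns which path to
    # take and the measured deviation (Frobenius norm -- an assumption).
    deviation = np.linalg.norm(T2 - T_hist)
    path = 'use_orientation_info' if deviation > threshold else 'use_feature_matrix'
    return path, deviation
```

The intuition is that a feature-based matrix that jumps far from the recent frame-to-frame motion is likely a mis-registration (e.g. from a low-texture road surface), so the method falls back to the motion-derived matrix.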
4. The method according to claim 3, wherein in the case where it is determined that the deviation is not greater than the preset threshold, the method further comprises:
converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the second transformation matrix; the tracking points are pixel points at specified positions in each frame of image; the motion trail is used for acquiring the orientation change information of the vehicle;
adding the position of a tracking point in the current frame image to the motion trail in the coordinate system of the current frame image;
in a case where it is determined that the deviation is greater than the preset threshold, after the constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information, the method further includes:
converting a predetermined motion track of a tracking point in a previous image of the current frame image into a coordinate system of the current frame image according to the first transformation matrix;
and in the coordinate system of the current frame image, adding the position of the tracking point in the current frame image to the motion trail.
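Both branches of claim 4 end the same way: the stored trail is mapped through the chosen transformation matrix into the current frame's coordinate system, and the tracking-point position observed in the current frame is appended. A sketch under those assumptions:

```python
import numpy as np

def update_trajectory(trajectory, T, cur_point):
    # trajectory: (N, 2) pixel positions in the previous frame's coordinate
    # system; T: the selected 3x3 transformation matrix; cur_point: the
    # tracking point observed in the current frame.
    pts = np.hstack([trajectory, np.ones((len(trajectory), 1))])
    moved = (T @ pts.T).T[:, :2]          # trail in current-frame coordinates
    return np.vstack([moved, cur_point])  # append the newest observation
```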
5. The method of claim 1, wherein the stitching the current frame image and the previous frame image according to the first transformation matrix comprises:
splicing the current frame image with the spliced image corresponding to the previous frame image according to the first transformation matrix to obtain a spliced image corresponding to the current frame image; the spliced image corresponding to the previous frame image is an image obtained by splicing the previous frame image with the images before the previous frame image.
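Claim 5 splices each new frame onto the mosaic built so far, which implies composing the per-frame matrices into one matrix that places every frame in the mosaic's (first frame's) coordinate system. A minimal sketch; the left-to-right composition order is an assumption about the chosen convention:

```python
import numpy as np

def global_transforms(frame_transforms):
    # frame_transforms: list of 3x3 matrices, each mapping frame k into the
    # coordinate system of frame k-1.  Returns, for every frame, the matrix
    # that maps it directly into the first frame's coordinate system.
    out, acc = [], np.eye(3)
    for T in frame_transforms:
        acc = acc @ T
        out.append(acc.copy())
    return out
```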
6. The method according to claim 1, wherein the obtaining the current frame image to be stitched comprises:
acquiring a fisheye image acquired by a vehicle-mounted monocular camera;
and converting the fisheye image into an overhead image according to the parameters of the vehicle-mounted monocular camera, and taking the overhead image as the current frame image to be spliced.
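Claim 6 converts the fisheye image to an overhead (bird's-eye) image using the camera parameters. The sketch below assumes an equidistant fisheye model (r = f·θ) and a flat ground plane, and inverse-maps every overhead pixel to a source pixel; a production system would instead use the calibrated intrinsics and distortion model of the vehicle-mounted camera.

```python
import numpy as np

def fisheye_to_overhead(img, f, cam_h, pitch, out_size, scale):
    # For every pixel of the overhead view, find the ground point it
    # represents, project that point through an equidistant fisheye model
    # (r = f * theta) and sample the source pixel (nearest neighbour).
    H, W = out_size
    out = np.zeros((H, W) + img.shape[2:], dtype=img.dtype)
    cy, cx = (np.asarray(img.shape[:2]) - 1) / 2.0
    c, s = np.cos(pitch), np.sin(pitch)
    for v in range(H):
        for u in range(W):
            X = (u - W / 2.0) / scale      # lateral offset on the ground (m)
            Z = (H - v) / scale            # forward distance (m)
            # ground point in camera coordinates (camera at origin, y down),
            # rotated by the camera's pitch about the x-axis
            x = X
            y = c * cam_h - s * Z
            z = s * cam_h + c * Z
            if z <= 0:                     # behind the camera: no sample
                continue
            theta = np.arccos(z / np.sqrt(x * x + y * y + z * z))
            phi = np.arctan2(y, x)
            r = f * theta                  # equidistant projection radius
            px = int(round(cx + r * np.cos(phi)))
            py = int(round(cy + r * np.sin(phi)))
            if 0 <= px < img.shape[1] and 0 <= py < img.shape[0]:
                out[v, u] = img[py, px]
    return out
```

All parameters here (focal length `f`, camera height `cam_h`, `pitch`, output `scale`) stand in for "the parameters of the vehicle-mounted monocular camera" that the claim references but does not enumerate.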
7. An image stitching apparatus based on a vehicle-mounted monocular camera, characterized by comprising:
the first acquisition module is used for acquiring a current frame image to be spliced;
the second acquisition module is used for acquiring orientation change information of the vehicle between a first acquisition time and a second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image;
the construction module is used for constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information;
the first splicing module is used for splicing the current frame image and the previous frame image according to the first transformation matrix;
the second obtaining module is specifically configured to:
predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points of specified positions in each frame of image;
and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
8. The apparatus according to claim 7, wherein the building block is specifically configured to:
and constructing an Euler matrix between the current frame image and the previous frame image according to the relative displacement and the relative rotation angle.
9. The apparatus of claim 7, further comprising:
the extraction module is used for extracting characteristic point pairs from the current frame image and the previous frame image of the current frame image before the second acquisition module acquires the direction change information of the vehicle between the first acquisition time and the second acquisition time;
the first calculation module is used for registering the characteristic point pairs and calculating a second transformation matrix between the current frame image and the previous frame image according to a registration result;
a second calculation module, configured to calculate a deviation between the second transformation matrix and a predetermined historical transformation matrix, where the historical transformation matrix is: a transformation matrix between the previous frame image and a further previous frame image of the previous frame image;
the judging module is used for judging whether the deviation is greater than a preset threshold value; if the deviation is greater than the preset threshold value, triggering the second acquisition module; otherwise, triggering the second splicing module;
and the second splicing module is used for splicing the current frame image and the previous frame image according to the second transformation matrix.
10. The apparatus of claim 9, further comprising:
the first conversion module is used for, under the condition that the judging module judges that the deviation is not greater than the preset threshold value, converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the second transformation matrix, and triggering the adding module; the tracking points are pixel points at specified positions in each frame of image; the motion trail is used for acquiring the orientation change information of the vehicle;
the second conversion module is used for, under the condition that the judging module judges that the deviation is greater than the preset threshold value, after the construction module constructs the first transformation matrix between the current frame image and the previous frame image according to the orientation change information, converting the predetermined motion trail of the tracking point in the image before the current frame image into the coordinate system of the current frame image according to the first transformation matrix, and triggering the adding module;
and the adding module is used for adding the position of the tracking point in the current frame image to the motion trail in the coordinate system of the current frame image.
11. The apparatus of claim 7, wherein the first splicing module is specifically configured to:
splicing the current frame image with the spliced image corresponding to the previous frame image according to the first transformation matrix to obtain a spliced image corresponding to the current frame image; the spliced image corresponding to the previous frame image is an image obtained by splicing the previous frame image with the images before the previous frame image.
12. The apparatus of claim 7, wherein the first obtaining module is specifically configured to:
acquiring a fisheye image acquired by a vehicle-mounted monocular camera;
and converting the fisheye image into an overhead image according to the parameters of the vehicle-mounted monocular camera, and taking the overhead image as the current frame image to be spliced.
13. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1-6 when executing a program stored in the memory.
14. A driving assistance system, characterized by comprising: a monocular camera and a processing device; wherein:
the monocular camera is used for sending each acquired current frame image to the processing equipment;
the processing device is used for receiving the current frame image; acquiring orientation change information of the vehicle between a first acquisition time and a second acquisition time; the first acquisition time is the acquisition time corresponding to the current frame image, and the second acquisition time is the acquisition time corresponding to the previous frame image of the current frame image; constructing a first transformation matrix between the current frame image and the previous frame image according to the orientation change information; and splicing the current frame image and the previous frame image according to the first transformation matrix;
wherein the acquiring, by the processing device, the orientation change information of the vehicle between the first acquisition time and the second acquisition time comprises:
predicting the motion trail of the tracking point between the first acquisition time and the second acquisition time according to the predetermined motion trail of the tracking point in the image before the current frame image; the tracking points are pixel points of specified positions in each frame of image; and calculating the relative displacement and the relative rotation angle of the vehicle between the first acquisition time and the second acquisition time according to the predicted motion trail.
CN201810501402.8A 2018-05-23 2018-05-23 Image stitching method, device, equipment and system based on vehicle-mounted monocular camera Active CN110533586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810501402.8A CN110533586B (en) 2018-05-23 2018-05-23 Image stitching method, device, equipment and system based on vehicle-mounted monocular camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810501402.8A CN110533586B (en) 2018-05-23 2018-05-23 Image stitching method, device, equipment and system based on vehicle-mounted monocular camera

Publications (2)

Publication Number Publication Date
CN110533586A CN110533586A (en) 2019-12-03
CN110533586B true CN110533586B (en) 2023-02-07

Family

ID=68656414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810501402.8A Active CN110533586B (en) 2018-05-23 2018-05-23 Image stitching method, device, equipment and system based on vehicle-mounted monocular camera

Country Status (1)

Country Link
CN (1) CN110533586B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112648924B (en) * 2020-12-15 2023-04-07 广州小鹏自动驾驶科技有限公司 Suspended object space position determination method based on vehicle-mounted monocular camera equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106817539A (en) * 2016-12-29 2017-06-09 珠海市魅族科技有限公司 The image processing method and system of a kind of vehicle
CN106910217A (en) * 2017-03-17 2017-06-30 驭势科技(北京)有限公司 Vision map method for building up, computing device, computer-readable storage medium and intelligent vehicle
CN107341787A (en) * 2017-07-26 2017-11-10 珠海研果科技有限公司 Method, apparatus, server and the automobile that monocular panorama is parked

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106817539A (en) * 2016-12-29 2017-06-09 珠海市魅族科技有限公司 The image processing method and system of a kind of vehicle
CN106910217A (en) * 2017-03-17 2017-06-30 驭势科技(北京)有限公司 Vision map method for building up, computing device, computer-readable storage medium and intelligent vehicle
CN107341787A (en) * 2017-07-26 2017-11-10 珠海研果科技有限公司 Method, apparatus, server and the automobile that monocular panorama is parked

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Panoramic object detection technology based on image stitching; Lu Tianshu et al.; 《兵工自动化》 (Ordnance Industry Automation); 2014-02-28; pp. 7-10 *

Also Published As

Publication number Publication date
CN110533586A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
EP2437494B1 (en) Device for monitoring area around vehicle
CN110351494B (en) Panoramic video synthesis method and device and electronic equipment
JP2021533507A (en) Image stitching methods and devices, in-vehicle image processing devices, electronic devices, storage media
CN110139084B (en) Vehicle surrounding image processing method and device
EP3120328B1 (en) Information processing method, information processing device, and program
GB2593335A (en) Method and apparatus for 3-D auto tagging
JP6568374B2 (en) Information processing apparatus, information processing method, and program
CN110341597A (en) A kind of vehicle-mounted panoramic video display system, method and Vehicle Controller
TWI599989B (en) Image processing method and image system for transportation
JP7093015B2 (en) Panorama video compositing device, panoramic video compositing method, and panoramic video compositing program
CN105023260A (en) Panorama image fusion method and fusion apparatus
CN111768332B (en) Method for splicing vehicle-mounted panoramic real-time 3D panoramic images and image acquisition device
JP6570904B2 (en) Correction information output apparatus, image processing apparatus, correction information output method, imaging control system, and moving body control system
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN111800589A (en) Image processing method, device and system and robot
CN113301274A (en) Ship real-time video panoramic stitching method and system
CN111985475A (en) Ship board identification method, computing device and storage medium
CN114549666B (en) AGV-based panoramic image splicing calibration method
CN116012817A (en) Real-time panoramic parking space detection method and device based on double-network deep learning
CN110533586B (en) Image stitching method, device, equipment and system based on vehicle-mounted monocular camera
CN102081796B (en) Image splicing method and device thereof
CN110310335B (en) Camera angle determination method, device, equipment and system
JPH09153131A (en) Method and device for processing picture information and picture information integrating system
CN110400255B (en) Vehicle panoramic image generation method and system and vehicle
CN114821544B (en) Perception information generation method and device, vehicle, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant