CN109447901B - Panoramic imaging method and device - Google Patents
- Publication number
- CN109447901B CN109447901B CN201811191264.4A CN201811191264A CN109447901B CN 109447901 B CN109447901 B CN 109447901B CN 201811191264 A CN201811191264 A CN 201811191264A CN 109447901 B CN109447901 B CN 109447901B
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- ground
- image data
- panoramic
- Prior art date
- Legal status: Active (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G06T5/80—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
Abstract
The invention provides a panoramic imaging method and device. The pose relationship between a camera and the ground is determined by calculating the moving distance and rotation angle of the camera; a first corrected image is converted into a bird's-eye view according to that pose relationship, and the bird's-eye view is stitched with the ground bird's-eye view to obtain a panoramic image. Even if the pose relationship between the camera and the ground changes, it can be determined in real time from the acquired current-frame image data, so a correct panoramic image is obtained and the accuracy of the panoramic image under actual complex road conditions is greatly improved.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a panoramic imaging method and device.
Background
With the development of image processing technology, panoramic imaging has gradually become widespread. A panoramic imaging system acquires images in real time through multiple cameras and stitches them into a panoramic image of the system's surroundings. Alternatively, the system may acquire images with a single camera and process them to obtain the panoramic image.
In the related art, a single-camera panoramic imaging system can generate a relatively accurate panoramic image only on the premise that the positional relationship between the camera, the vehicle and the ground remains unchanged.

If a vehicle equipped with such a system travels under complex road conditions, so that this positional relationship changes frequently, the system will often produce an incorrect panoramic image in actual use.
Disclosure of Invention
In order to solve the above problems, an embodiment of the present invention is to provide a panoramic imaging method and apparatus.
In a first aspect, an embodiment of the present invention provides a panoramic imaging method, including:
determining a matching point pair of a first correction image of current frame image data acquired by a camera and a second correction image of image data of a previous frame of the current frame image data;
calculating the moving distance and the rotating angle of the camera by using the matching point pairs;
obtaining ground obstacle characteristic points in the first corrected image based on the moving distance and the rotating angle of the camera, and determining the pose relationship between the camera and the ground;
according to the pose relation between the camera and the ground, converting the first corrected image into a bird's-eye view image, and splicing the bird's-eye view image with the ground bird's-eye view image to obtain a panoramic image, wherein the panoramic image displays the ground obstacle represented by the ground obstacle feature points.
In a second aspect, an embodiment of the present invention further provides a panoramic imaging apparatus, including:
the determining module is used for determining a matching point pair of a first correction image of the current frame image data acquired by the camera and a second correction image of the image data of the previous frame of the current frame image data;
the calculation module is used for calculating the moving distance and the rotating angle of the camera by utilizing the matching point pairs;
the processing module is used for obtaining ground obstacle characteristic points in the first corrected image based on the moving distance and the rotating angle of the camera and determining the pose relationship between the camera and the ground;
and the splicing module is used for converting the first corrected image into a bird's-eye view according to the pose relation between the camera and the ground, and splicing the bird's-eye view with the ground bird's-eye view to obtain a panoramic image, wherein the panoramic image is displayed with the ground obstacle characterized by the ground obstacle characteristic points.
In the solutions provided in the first and second aspects of the embodiments of the present invention, the pose relationship between the camera and the ground is determined by calculating the moving distance and rotation angle of the camera, the first corrected image is converted into a bird's-eye view according to that pose relationship, and the bird's-eye view is stitched with the ground bird's-eye view to obtain the panoramic image. Even if the pose relationship between the camera and the ground changes, it can be determined in real time from the acquired current-frame image data, so a correct panoramic image is obtained and accuracy under actual complex road conditions is greatly improved.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The following drawings show only some embodiments of the invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 shows a schematic diagram of a panoramic imaging system to which embodiments of the present invention can be applied;
FIG. 2 is a flow chart of a panoramic imaging method according to embodiment 1 of the present invention;
fig. 3 is a schematic structural diagram of a panoramic imaging device according to embodiment 2 of the present invention.
Reference numerals: 100-monocular camera; 102-image processing unit; 104-image storage unit; 106-image display unit; 300-determination module; 302-calculation module; 304-processing module; 306-stitching module.
Detailed Description
At present, if a single-camera panoramic imaging system is installed on a vehicle, image data is acquired on the premise that the positional relationship between the camera, the vehicle and the ground remains unchanged, and the turning angle of the vehicle is calculated from the displacement difference between the left and right wheel paths, the distance between them, and the perspective transformation matrix, so that a relatively accurate panoramic image is generated. However, if the road surface is uneven or obstructed, the left and right wheel paths in the acquired image data are likely to differ, and an erroneous panoramic image is obtained. Moreover, if the vehicle travels under complex road conditions so that the positional relationship between the camera, the vehicle and the ground changes frequently, the system will often produce an incorrect panoramic image in actual use. Based on this, the embodiments of the present application provide a panoramic imaging method and device: even if the pose relationship between the camera and the ground changes, it can be determined in real time from the acquired current-frame image data, so that a correct panoramic image is obtained and the accuracy of the panoramic image under actual complex road conditions is greatly improved.
According to the panoramic imaging method, the pose relationship between the camera and the ground is determined by calculating the moving distance and rotation angle of the camera, the first corrected image is converted into a bird's-eye view according to that pose relationship, and the bird's-eye view is stitched with the ground bird's-eye view to obtain the panoramic image. Even if the pose relationship between the camera and the ground changes, it can be determined in real time from the acquired current-frame image data, so a correct panoramic image is obtained and accuracy under actual complex road conditions is greatly improved.
Referring to the panoramic imaging system mounted on the vehicle shown in fig. 1, the panoramic imaging system includes a monocular camera 100, an image processing unit 102, an image storage unit 104, and an image display unit 106. The monocular camera 100 is mounted at the rear of the vehicle and acquires images of the area behind it. The image processing unit 102 is the core computing unit of the system: it is responsible for image processing tasks such as distortion correction, image stitching and perspective transformation, and for analysis tasks such as calculating the vehicle's displacement and rotation angle. The image storage unit 104 stores the distortion-corrected images and the panoramic image stitched in real time. The image display unit 106 displays the panoramic image to the driver in real time, giving an all-round, blind-spot-free view of the vehicle's surroundings and the area beneath the vehicle.
In order to make the above objects, features and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and detailed description.
Example 1
Referring to the flow of the panoramic imaging method shown in fig. 2, the present embodiment proposes a panoramic imaging method performed by the image processing unit described above.
The panoramic imaging method provided by the embodiment comprises the following specific steps:
step 200, determining a matching point pair of a first correction image of the current frame image data acquired by the camera and a second correction image of the previous frame image data of the current frame image data.
Specifically, in order to obtain the matching point pair of the first corrected image and the second corrected image, the above step 200 may specifically execute the following steps (1) to (2):
(1) Acquiring current frame image data acquired by a camera, and performing distortion correction operation on the current frame image data to obtain the first corrected image;
(2) And acquiring a second correction image of the image data of the previous frame of the current frame of image data, and determining a matching point pair of the first correction image of the image data of the current frame and the second correction image of the image data of the previous frame of the current frame of image data.
In the step (1), after the first corrected image is obtained, the pixel coordinates A(u, v) of a pixel point on the first corrected image may be taken, and the physical coordinates of that pixel point obtained from the transformation between pixel coordinates and physical coordinates in the image distortion model. Specifically, the physical coordinates of the pixel point on the first corrected image are given by formula (1):

x′ = (u − C_x) / f_x,  y′ = (v − C_y) / f_y    (1)

where x′ and y′ are the physical abscissa and ordinate of the pixel point on the first corrected image, u and v are its pixel abscissa and ordinate, C_x and C_y are the coordinates of the intersection of the camera's main optical axis with the imaging plane, f_x = f/dx and f_y = f/dy, f is the camera focal length, and dx and dy are the width and height of a pixel, respectively.
After obtaining the physical coordinates of the pixel point on the first corrected image, the distorted physical coordinates of the pixel point in the current frame image data may be obtained by formula (2):

x″ = x′(1 + k_1 r² + k_2 r⁴) + 2 p_1 x′ y′ + p_2 (r² + 2 x′²)
y″ = y′(1 + k_1 r² + k_2 r⁴) + p_1 (r² + 2 y′²) + 2 p_2 x′ y′    (2)

where x″ and y″ are the physical abscissa and ordinate of the pixel point in the current frame image data, k_1 and k_2 are the radial distortion coefficients of the lens, p_1 and p_2 are the tangential distortion coefficients of the lens, and r² = x′² + y′².
The above f, C_x, C_y, k_1, k_2, p_1, p_2, dx and dy are all camera intrinsic parameters, obtained by calibrating the camera after it is mounted on the vehicle. They are fixed values and are stored in the image processing unit in advance.
After obtaining the distorted physical coordinates of the pixel point, its pixel coordinates B(u_d, v_d) in the current frame image data may be obtained by formula (3):

u_d = f_x · x″ + C_x,  v_d = f_y · y″ + C_y    (3)

Interpolation is then performed at B(u_d, v_d) to obtain the pixel value of the pixel point, which completes the distortion correction of that pixel.
The remaining pixel points of the first corrected image are traversed in turn, completing the distortion correction of the current frame image data.
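Taken together, the per-pixel mapping of formulas (1) to (3) can be sketched in Python. This assumes the standard radial-tangential (Brown-Conrady) lens model that the named coefficients k_1, k_2, p_1, p_2 suggest; the intrinsic values in the example are made-up placeholders:

```python
def distort_pixel(u, v, f, dx, dy, cx, cy, k1, k2, p1, p2):
    """Map pixel (u, v) of the corrected image to its coordinate
    B(u_d, v_d) in the raw distorted frame, per formulas (1)-(3)."""
    fx, fy = f / dx, f / dy                  # focal length in pixel units
    # formula (1): pixel coordinates -> normalized physical coordinates
    xp = (u - cx) / fx
    yp = (v - cy) / fy
    # formula (2): radial + tangential lens distortion
    r2 = xp * xp + yp * yp
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xpp = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
    # formula (3): back to pixel coordinates in the distorted image
    return fx * xpp + cx, fy * ypp + cy

# with all distortion coefficients zero the mapping is (numerically) the identity
u_d, v_d = distort_pixel(100, 50, f=4.0, dx=0.01, dy=0.01,
                         cx=320, cy=240, k1=0, k2=0, p1=0, p2=0)
```

In practice this mapping would be precomputed once per camera and applied to every frame, with interpolation at the resulting non-integer coordinates B(u_d, v_d).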
The camera is a monocular camera in the panoramic imaging system.
In the step (2), the matching point pairs of the first corrected image and the second corrected image are determined by an existing image matching algorithm.
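The matching algorithm itself is left to existing methods; one common choice is brute-force nearest-neighbour matching of feature descriptors with a ratio test, sketched below. Descriptor extraction (e.g. ORB or SIFT) is assumed to happen elsewhere, and the toy 2-D descriptors are illustrative only:

```python
def match_descriptors(desc1, desc2, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.
    desc1, desc2: lists of equal-length feature descriptor vectors.
    Returns index pairs (i, j) of accepted matches."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    pairs = []
    for i, d1 in enumerate(desc1):
        order = sorted(range(len(desc2)), key=lambda j: dist2(d1, desc2[j]))
        best = order[0]
        # accept only matches clearly better than the runner-up
        if len(order) < 2 or dist2(d1, desc2[best]) < ratio ** 2 * dist2(d1, desc2[order[1]]):
            pairs.append((i, best))
    return pairs

# each descriptor in the first list has one near twin in the second
matches = match_descriptors([[0, 0], [5, 5]], [[5.1, 5.0], [0.1, 0.0], [9, 9]])
```

The ratio test discards ambiguous correspondences, which matters here because every accepted pair feeds the motion estimate in step 202.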
Step 202, calculating the moving distance and the rotating angle of the camera by using the matching point pairs.
In one embodiment, the moving distance and rotation angle of the camera may be calculated by formula (4):

(R, t) = argmin over R, t of Σ_{i∈Ω} ‖x_i − π(K(R X_i + t))‖²    (4)

where R is the rotation angle, t is the moving distance, x_i is the pixel coordinate of the i-th matching point, K is the camera intrinsic parameter matrix, X_i is the world coordinate of the matching point, π denotes perspective projection (division by the third homogeneous coordinate), and Ω is the set of all matching points with known world coordinates. The argmin operator solves for the specific values of R and t that minimize the sum on the right-hand side of formula (4).
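As a sketch, the quantity that formula (4) minimizes, the summed squared reprojection error, can be evaluated with plain matrix arithmetic. The intrinsics and points below are made-up toy values; a real implementation would wrap this evaluation in a nonlinear solver over R and t:

```python
def project(K, R, t, X):
    """Project world point X through pose (R, t) and intrinsics K."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return x[0] / x[2], x[1] / x[2]          # perspective division

def reprojection_error(K, R, t, correspondences):
    """Sum of squared pixel residuals over (X_world, (u, v)) pairs, i.e.
    the quantity minimised over R and t in formula (4)."""
    err = 0.0
    for X, (u, v) in correspondences:
        pu, pv = project(K, R, t, X)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err

# toy values: identity pose, a point 2 m ahead projects to the principal point
K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]
R_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
err = reprojection_error(K, R_id, [0, 0, 0], [([0, 0, 2], (320, 240))])
```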
Optionally, when the system first runs, the world coordinates of all feature points (including the matching point pairs) in the image data acquired at the first two camera positions are unknown. The world coordinates of all feature points are therefore first estimated, the estimates are input into formula (4) as Ω_input, R and t of the camera are solved from formula (4), and, once R and t are solved, the world coordinates of the feature points are recalculated by triangulation and recorded as Ω_output.
If Ω_input is approximately equal to Ω_output, the pose of the camera and the world coordinates of all matching points are obtained; otherwise, Ω_output is fed back into formula (4) as the new Ω_input and Ω_output is recalculated, until Ω_output is approximately equal to Ω_input.
After the first two positions have been processed, the matching point set between the image data acquired at the current position and that acquired at the adjacent previous position contains some feature points with known world coordinates. These are input into formula (4) as Ω_input, R and t of the current camera pose are solved, and the world coordinates of the remaining feature points, whose world coordinates are unknown, are then calculated using the solved R and t.
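The alternation described above (solve the pose from the current world-coordinate estimates, re-triangulate, repeat until Ω_output ≈ Ω_input) is a fixed-point iteration. Its control flow can be sketched with stand-in solver and triangulation functions; both stand-ins are hypothetical placeholders, not the actual solver:

```python
def refine_structure(omega_input, solve_pose, triangulate, tol=1e-6, max_iter=50):
    """Alternate between pose solving and triangulation until the world
    coordinate estimates stop changing (Omega_output ~= Omega_input)."""
    for _ in range(max_iter):
        R, t = solve_pose(omega_input)       # minimise formula (4)
        omega_output = triangulate(R, t)     # triangular ranging
        delta = max(abs(a - b)
                    for pa, pb in zip(omega_input, omega_output)
                    for a, b in zip(pa, pb))
        omega_input = omega_output
        if delta < tol:
            break
    return R, t, omega_output

# hypothetical stand-ins: the "solver" returns a fixed pose and the
# "triangulation" returns already-converged coordinates
solve_pose_stub = lambda omega: ("R", "t")
triangulate_stub = lambda R, t: [(1.0, 1.0, 1.0)]
R, t, omega = refine_structure([(0.0, 0.0, 0.0)], solve_pose_stub, triangulate_stub)
```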
Of course, step 202 may instead calculate the moving distance and rotation angle of the camera from the matching point pairs with simultaneous localization and mapping (SLAM) techniques, which are not described in detail here.
After the moving distance and rotation angle of the camera are calculated in step 202, the following step 204 may be performed to determine the pose relationship between the camera and the ground.

Step 204, obtaining the ground obstacle feature points in the first corrected image based on the moving distance and rotation angle of the camera, and determining the pose relationship between the camera and the ground.
Specifically, the above step 204 may perform the following steps (1) to (3):
(1) Calculating three-dimensional coordinates of each matching point in the matching point pair under a camera coordinate system based on the moving distance and the rotating angle of the camera;
(2) According to the three-dimensional coordinates of each matching point in the camera coordinate system, determining a plurality of coplanar matching points in each matching point as ground characteristic points, and determining non-ground characteristic points in each matching point as ground obstacle characteristic points;
(3) And performing plane fitting operation on the determined ground characteristic points to obtain normal vectors of the ground in a camera coordinate system, so as to determine the pose relationship between the camera and the ground.
In the step (1), three-dimensional coordinates of each matching point in the matching point pair under the camera coordinate system can be calculated based on the moving distance and the rotating angle of the camera by the SLAM technology, and will not be described herein.
The camera coordinate system is formed by taking a camera optical center as an origin, taking a direction which is vertically directed from the optical center to an imaging plane as a Z-axis positive direction, taking a direction which is parallel to a transverse direction of the imaging plane from left to right as an X-axis positive direction, and taking a direction which is parallel to a longitudinal direction of the imaging plane from top to bottom as a Y-axis positive direction.
In the above step (2), the ground feature points should lie substantially in one plane, so a plurality of matching points whose three-dimensional coordinates are substantially coplanar are determined as the ground feature points.
In the step (3), plane fitting is performed on the ground feature points using the least squares method to obtain the normal vector of the ground in the camera coordinate system. The pose relationship between this normal vector and the Z axis of the camera coordinate system (the camera's main optical axis) then gives the pose relationship between the ground and the camera.
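Steps (2) and (3) can be sketched together as a least-squares plane fit followed by a residual test: fit z = a·x + b·y + c over the candidate points (this parametrization assumes the ground plane is not vertical in camera coordinates), take the normal of the fitted plane, and classify points by their distance to it. The inlier threshold is an assumed tuning value:

```python
def solve3(M, v):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    out = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        out[r] = (A[r][3] - sum(A[r][c] * out[c] for c in range(r + 1, 3))) / A[r][r]
    return out

def fit_ground_plane(points, thresh=0.3):
    """Fit z = a*x + b*y + c by least squares over (x, y, z) tuples; return
    the unit plane normal plus the ground / obstacle split of the points."""
    M = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    a, b, c = solve3(M, rhs)
    # plane a*x + b*y - z + c = 0  =>  normal proportional to (a, b, -1)
    n = (a * a + b * b + 1.0) ** 0.5
    normal = (a / n, b / n, -1.0 / n)
    ground = [p for p in points if abs(a * p[0] + b * p[1] + c - p[2]) / n <= thresh]
    obstacles = [p for p in points if p not in ground]
    return normal, ground, obstacles

# four points near one plane plus one raised point (a ground obstacle)
pts = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1), (0.5, 0.5, 2.0)]
normal, ground, obstacles = fit_ground_plane(pts)
```

A production system would typically make the fit robust (e.g. RANSAC) so that obstacle points do not bias the plane estimate; in this toy example the single outlier only shifts the plane slightly.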
After determining the pose relationship of the camera to the ground through the above step 204, the following step 206 may be continued to be performed to obtain a panoramic image.
Step 206, converting the first corrected image into a bird's-eye view according to the pose relationship between the camera and the ground, and stitching the bird's-eye view with the ground bird's-eye view to obtain a panoramic image, wherein the panoramic image displays the ground obstacles represented by the ground obstacle feature points.
Specifically, in order to obtain the panoramic image, the above-described step 206 may perform the following steps (1) to (2):
(1) According to the pose relation between the camera and the ground, rotating and translating the first corrected image, and converting the first corrected image into a bird's eye view;
(2) And acquiring the ground aerial view, and splicing the aerial view and the ground aerial view to obtain the panoramic image.
In the step (2), the ground bird's eye view is cached in the image storage unit in advance.
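Converting the corrected image to a bird's-eye view by rotation and translation amounts to warping it with a 3×3 ground-plane homography. Applying such a homography to a single pixel looks as follows; the matrices here are illustrative, and a real H would be built from the camera-ground pose found in step 204:

```python
def warp_point(H, u, v):
    """Apply a 3x3 homography H to pixel (u, v); dehomogenize the result."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# identity leaves pixels unchanged; a pure translation shifts them
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H_shift = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
```

In the stitching step the same transform places each new bird's-eye view into the coordinate frame of the cached ground bird's-eye view.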
After the panoramic image is obtained through steps 200 to 206, the position and posture of the vehicle also need to be displayed in the panoramic image, in order to prompt the driver when reversing. To display the position and posture of the vehicle in the panoramic image, the following steps (1) to (3) may be performed:
(1) Obtaining the moving distance and the rotating angle of a vehicle provided with the camera according to the moving distance and the rotating angle of the camera;
(2) Calculating the position and the direction of the vehicle in the panoramic image according to the moving distance and the rotating angle of the vehicle;
(3) And drawing a vehicle model of the vehicle in the panoramic image according to the calculated position and direction of the vehicle in the panoramic image.
In the above step (1), the mounting position of the camera on the vehicle is fixed, so the moving distance and the rotation angle of the camera are the moving distance and the rotation angle of the vehicle.
In the step (2), the position, moving distance and rotation angle of the camera have already been solved. Since the camera is mounted at a fixed position on the vehicle and the physical dimensions of the vehicle are fixed, once the position of the camera is obtained, the position and direction of the vehicle in the panoramic image can be calculated from it.
In the step (3), the vehicle size is first obtained from the image storage unit according to the vehicle type information. The vehicle size is then scaled in equal proportion according to the preset ratio between the panoramic image and the actual environment to obtain the vehicle size in the panoramic image, a vehicle model is drawn at that size, and the drawn model is placed in the panoramic image at the calculated position and direction of the vehicle.
The image storage unit stores in advance the correspondence between vehicle type information and vehicle size, as well as the ratio between the panoramic image and the actual environment.
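The equal-proportion scaling used when drawing the vehicle model reduces to a meters-to-pixels conversion with the stored image-to-world ratio; a minimal sketch, where the dimensions and ratio are assumed example values:

```python
def vehicle_size_px(length_m, width_m, px_per_m):
    """Scale the physical vehicle size to panoramic-image pixels using the
    preset image-to-world ratio (px_per_m is an assumed example value)."""
    return round(length_m * px_per_m), round(width_m * px_per_m)

# e.g. a 4.6 m x 1.8 m car rendered at 20 px per metre
size = vehicle_size_px(4.6, 1.8, 20)
```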
Here, the vehicle type information is input by the driver when configuring the panoramic imaging system, and the panoramic imaging system caches the vehicle type information in the image storage unit after acquiring the vehicle type information input by the driver.
As described in steps (1) to (3), the vehicle model is drawn and displayed in the panoramic image according to the position and direction of the vehicle, so that the driver can not only see the surroundings of the vehicle but also plan the reversing and parking manoeuvre according to the vehicle's position and direction in the panoramic image, avoiding scratches and collisions during reversing as far as possible.
In summary, in the panoramic imaging method provided by this embodiment, the pose relationship between the camera and the ground is determined by calculating the moving distance and rotation angle of the camera, the first corrected image is converted into a bird's-eye view according to that pose relationship, and the bird's-eye view is stitched with the ground bird's-eye view to obtain the panoramic image. Even if the pose relationship changes, it can be determined in real time from the acquired current-frame image data, so a correct panoramic image is obtained and accuracy under actual complex road conditions is greatly improved.
Based on the same inventive concept, an embodiment of the present application further provides a panoramic imaging apparatus corresponding to the panoramic imaging method. Since the principle by which the apparatus solves the problem is similar to the function of the image processing unit described in the panoramic imaging method of embodiment 1, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Example 2
Referring to the schematic structural diagram of the panoramic imaging device shown in fig. 3, the present embodiment provides a panoramic imaging device, including:
a determining module 300, configured to determine a matching point pair of a first correction image of the current frame image data acquired by the camera and a second correction image of the previous frame image data of the current frame image data;
a calculation module 302, configured to calculate a movement distance and a rotation angle of the camera using the matching point pair;
the processing module 304 is configured to obtain a ground obstacle feature point in the first corrected image based on the moving distance and the rotating angle of the camera, and determine a pose relationship between the camera and the ground;
and the stitching module 306 is configured to convert the first corrected image into a bird's-eye view according to the pose relationship between the camera and the ground, and stitch the bird's-eye view with the ground bird's-eye view to obtain a panoramic image, where the panoramic image displays a ground obstacle represented by the ground obstacle feature points.
In one embodiment, the determining module 300 is specifically configured to:
acquiring current frame image data acquired by a camera, and performing distortion correction operation on the current frame image data to obtain the first corrected image;
and acquiring a second correction image of the image data of the previous frame of the current frame of image data, and determining a matching point pair of the first correction image of the image data of the current frame and the second correction image of the image data of the previous frame of the current frame of image data.
In one embodiment, the processing module 304 is specifically configured to:
calculating three-dimensional coordinates of each matching point in the matching point pair under a camera coordinate system based on the moving distance and the rotating angle of the camera;
according to the three-dimensional coordinates of each matching point in the camera coordinate system, determining a plurality of coplanar matching points in each matching point as ground characteristic points, and determining non-ground characteristic points in each matching point as ground obstacle characteristic points;
and performing plane fitting operation on the determined ground characteristic points to obtain normal vectors of the ground in a camera coordinate system, so as to determine the pose relationship between the camera and the ground.
In one embodiment, the above-mentioned splicing module is specifically configured to:
according to the pose relation between the camera and the ground, rotating and translating the first corrected image, and converting the first corrected image into a bird's eye view;
and acquiring the ground aerial view, and splicing the aerial view and the ground aerial view to obtain the panoramic image.
After obtaining the panoramic image, in order to give a prompt to the driver when reversing, it is also necessary to display the position and posture of the vehicle in the panoramic image. In order to display the position and the posture of the vehicle in the panoramic image, the panoramic imaging apparatus provided in the present embodiment further includes:
a first operation module for obtaining a moving distance and a rotating angle of a vehicle on which the camera is mounted according to the moving distance and the rotating angle of the camera;
a second operation module for calculating a position and a direction of the vehicle in the panoramic image according to a moving distance and a rotation angle of the vehicle;
and the drawing module is used for drawing the vehicle model of the vehicle in the panoramic image according to the calculated position and direction of the vehicle in the panoramic image.
According to the above description, the vehicle model is drawn and displayed in the panoramic image according to the position and direction of the vehicle, so that the driver can not only see the surroundings of the vehicle but also plan the reversing and parking manoeuvre accordingly, avoiding scratches and collisions during reversing as far as possible.
In summary, in the panoramic imaging device provided in this embodiment, the pose relationship between the camera and the ground is determined by calculating the movement distance and the rotation angle of the camera, the first corrected image is converted into the bird's-eye view according to the pose relationship between the camera and the ground, and the bird's-eye view and the ground bird's-eye view are spliced to obtain the panoramic image.
The foregoing is merely illustrative of the present invention and does not limit it; any variations or substitutions that would readily occur to a person skilled in the art fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to that of the claims.
Claims (8)
1. A panoramic imaging method, comprising:
determining a matching point pair of a first correction image of current frame image data acquired by a camera and a second correction image of image data of a previous frame of the current frame image data;
calculating the moving distance and the rotating angle of the camera by using the matching point pairs;
calculating three-dimensional coordinates of each matching point in the matching point pair under a camera coordinate system based on the moving distance and the rotating angle of the camera;
according to the three-dimensional coordinates of each matching point in the camera coordinate system, determining a plurality of coplanar matching points among the matching points as ground feature points, and determining the non-coplanar matching points as ground obstacle feature points;
performing a plane fitting operation on the determined ground feature points to obtain the normal vector of the ground in the camera coordinate system, thereby determining the pose relationship between the camera and the ground;
according to the pose relationship between the camera and the ground, converting the first corrected image into a bird's-eye view image, and splicing the bird's-eye view image with the ground bird's-eye view image to obtain a panoramic image, wherein the panoramic image displays the ground obstacles represented by the ground obstacle feature points.
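The step of recovering three-dimensional coordinates from matched point pairs and the camera motion is classically done by linear (DLT) triangulation. The following is an illustrative sketch under assumed example intrinsics and a unit sideways translation, not the patent's actual algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2: 3x4 projection matrices of the previous and current frame
    (built from the recovered camera rotation and translation);
    x1, x2: matching pixel coordinates (u, v) in each corrected image.
    Returns the 3-D point in the first camera's coordinate system.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Example: first camera at the origin, second translated 1 unit along x
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

With the per-point 3-D coordinates in hand, the coplanarity test of the claim separates ground feature points from ground obstacle feature points.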
2. The method of claim 1, wherein determining a matching pair of points for a first rectified image of current frame image data acquired by a camera and a second rectified image of image data of a previous frame of the current frame image data comprises:
acquiring current frame image data acquired by a camera, and performing distortion correction operation on the current frame image data to obtain the first corrected image;
and acquiring a second correction image of the image data of the previous frame of the image data of the current frame, and determining a matching point pair of the first correction image of the image data of the current frame and the second correction image of the image data of the previous frame of the image data of the current frame.
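The distortion correction operation of claim 2 can be illustrated with a first-order radial model inverted by fixed-point iteration. This is a simplified sketch: the patent does not state its distortion model, and the intrinsics `K` and coefficient `k1` below are assumed example values.

```python
import numpy as np

def undistort_points(pts, K, k1, iterations=10):
    """Undo first-order radial distortion on pixel points (N x 2).

    Fixed-point iteration on the model x_d = x_u * (1 + k1 * r_u^2)
    in normalized coordinates; only the k1 term is modelled here.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    xd = (pts[:, 0] - cx) / fx
    yd = (pts[:, 1] - cy) / fy
    xu, yu = xd.copy(), yd.copy()
    for _ in range(iterations):
        r2 = xu ** 2 + yu ** 2
        xu = xd / (1 + k1 * r2)
        yu = yd / (1 + k1 * r2)
    return np.column_stack([xu * fx + cx, yu * fy + cy])

K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
k1 = -0.1
# Forward-distort a known undistorted pixel (84, 74), then recover it
xu, yu = 0.2, 0.1
r2 = xu ** 2 + yu ** 2
pt_d = np.array([[xu * (1 + k1 * r2) * 100 + 64,
                  yu * (1 + k1 * r2) * 100 + 64]])
pt_u = undistort_points(pt_d, K, k1)
```

A wide-angle vehicle camera would use a fuller model (additional radial and tangential terms), but the inversion pattern is the same.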
3. The method according to claim 1, wherein the converting the first corrected image into a bird's-eye view according to the pose relationship between the camera and the ground, and stitching the bird's-eye view with the ground bird's-eye view, to obtain the panoramic image, includes:
according to the pose relationship between the camera and the ground, rotating and translating the first corrected image to convert it into a bird's-eye view;
and acquiring the ground bird's-eye view, and splicing the bird's-eye view with the ground bird's-eye view to obtain the panoramic image.
4. The method according to claim 1, wherein the method further comprises:
obtaining the moving distance and the rotating angle of a vehicle provided with the camera according to the moving distance and the rotating angle of the camera;
calculating the position and the direction of the vehicle in the panoramic image according to the moving distance and the rotating angle of the vehicle;
and drawing a vehicle model of the vehicle in the panoramic image according to the calculated position and direction of the vehicle in the panoramic image.
5. A panoramic imaging apparatus, comprising:
the determining module is used for determining a matching point pair of a first correction image of the current frame image data acquired by the camera and a second correction image of the image data of the previous frame of the current frame image data;
a calculation module for calculating the moving distance and rotation angle of the camera by using the matching point pairs;
the processing module is used for calculating three-dimensional coordinates of each matching point in the matching point pair under a camera coordinate system based on the moving distance and the rotating angle of the camera; according to the three-dimensional coordinates of each matching point in the camera coordinate system, determining a plurality of coplanar matching points in each matching point as ground characteristic points, and determining non-ground characteristic points in each matching point as ground obstacle characteristic points; performing plane fitting operation on the determined ground characteristic points to obtain normal vectors of the ground in a camera coordinate system, so as to determine the pose relationship between the camera and the ground;
and the splicing module is used for converting the first corrected image into a bird's-eye view according to the pose relationship between the camera and the ground, and splicing the bird's-eye view with the ground bird's-eye view to obtain a panoramic image, wherein the panoramic image displays the ground obstacles represented by the ground obstacle feature points.
6. The apparatus of claim 5, wherein the determining module is specifically configured to:
acquiring current frame image data acquired by a camera, and performing distortion correction operation on the current frame image data to obtain the first corrected image;
and acquiring a second correction image of the image data of the previous frame of the image data of the current frame, and determining a matching point pair of the first correction image of the image data of the current frame and the second correction image of the image data of the previous frame of the image data of the current frame.
7. The apparatus according to claim 5, wherein the splicing module is specifically configured to:
according to the pose relationship between the camera and the ground, rotating and translating the first corrected image to convert it into a bird's-eye view;
and acquiring the ground bird's-eye view, and splicing the bird's-eye view with the ground bird's-eye view to obtain the panoramic image.
8. The apparatus as recited in claim 5, further comprising:
a first operation module for obtaining the moving distance and rotation angle of the vehicle on which the camera is mounted according to the moving distance and rotation angle of the camera;
a second operation module for calculating a position and a direction of the vehicle in the panoramic image according to a moving distance and a rotation angle of the vehicle;
and the drawing module is used for drawing the vehicle model of the vehicle in the panoramic image according to the calculated position and direction of the vehicle in the panoramic image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811191264.4A CN109447901B (en) | 2018-10-12 | 2018-10-12 | Panoramic imaging method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109447901A CN109447901A (en) | 2019-03-08 |
CN109447901B true CN109447901B (en) | 2023-12-19 |
Family
ID=65546424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811191264.4A Active CN109447901B (en) | 2018-10-12 | 2018-10-12 | Panoramic imaging method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109447901B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110610520B (en) * | 2019-08-29 | 2022-03-29 | 中德(珠海)人工智能研究院有限公司 | Visual positioning method and system based on double-dome camera |
CN110969576B (en) * | 2019-11-13 | 2021-09-03 | 同济大学 | Highway pavement image splicing method based on roadside PTZ camera |
CN113837936A (en) * | 2020-06-24 | 2021-12-24 | 上海汽车集团股份有限公司 | Panoramic image generation method and device |
CN111915910A (en) * | 2020-08-14 | 2020-11-10 | 山东领军智能交通科技有限公司 | Road traffic signal lamp based on Internet of things |
CN113370993A (en) * | 2021-06-11 | 2021-09-10 | 北京汽车研究总院有限公司 | Control method and control system for automatic driving of vehicle |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105763854A (en) * | 2016-04-18 | 2016-07-13 | 扬州航盛科技有限公司 | Omnidirectional imaging system based on monocular camera, and imaging method thereof |
CN105825475A (en) * | 2016-04-01 | 2016-08-03 | 西安电子科技大学 | 360-degree panorama image generation method based on single pick-up head |
CN106476696A (en) * | 2016-10-10 | 2017-03-08 | 深圳市前海视微科学有限责任公司 | A kind of reverse guidance system and method |
CN106846243A (en) * | 2016-12-26 | 2017-06-13 | 深圳中科龙智汽车科技有限公司 | The method and device of three dimensional top panorama sketch is obtained in equipment moving process |
CN107145828A (en) * | 2017-04-01 | 2017-09-08 | 纵目科技(上海)股份有限公司 | Vehicle panoramic image processing method and device |
CN107341787A (en) * | 2017-07-26 | 2017-11-10 | 珠海研果科技有限公司 | Method, apparatus, server and the automobile that monocular panorama is parked |
CN108171655A (en) * | 2017-12-27 | 2018-06-15 | 深圳普思英察科技有限公司 | Reverse image joining method and device based on monocular cam |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN204687853U (en) * | 2015-03-20 | 2015-10-07 | 京东方科技集团股份有限公司 | A kind of in-vehicle display system and automobile |
Non-Patent Citations (2)
Title |
---|
Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles; Davide Scaramuzza; IEEE Transactions on Robotics; 2008-10-31; pp. 1015-1026 *
Research on Real-Time Image Processing Algorithms in Mobile Systems; Ma Chaoqun; China Master's Theses Full-text Database; 2018-04-15; I138-2363 *
Also Published As
Publication number | Publication date |
---|---|
CN109447901A (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109447901B (en) | Panoramic imaging method and device | |
CN110322500B (en) | Optimization method and device for instant positioning and map construction, medium and electronic equipment | |
JP7240367B2 (en) | Methods, apparatus, electronic devices and storage media used for vehicle localization | |
CN110070564B (en) | Feature point matching method, device, equipment and storage medium | |
US20130002861A1 (en) | Camera distance measurement device | |
WO2020133172A1 (en) | Image processing method, apparatus, and computer readable storage medium | |
JP2012088114A (en) | Optical information processing device, optical information processing method, optical information processing system and optical information processing program | |
CN112967344B (en) | Method, device, storage medium and program product for calibrating camera external parameters | |
CN112444242A (en) | Pose optimization method and device | |
CN110349212B (en) | Optimization method and device for instant positioning and map construction, medium and electronic equipment | |
JP2009042162A (en) | Calibration device and method therefor | |
KR20200085670A (en) | Method for calculating a tow hitch position | |
CN112614192B (en) | On-line calibration method of vehicle-mounted camera and vehicle-mounted information entertainment system | |
CN112489136B (en) | Calibration method, position determination device, electronic equipment and storage medium | |
CN112561841A (en) | Point cloud data fusion method and device for laser radar and camera | |
JP6726006B2 (en) | Calculation of distance and direction of target point from vehicle using monocular video camera | |
CN111768332A (en) | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device | |
Ding et al. | A robust detection method of control points for calibration and measurement with defocused images | |
JP2961264B1 (en) | Three-dimensional object model generation method and computer-readable recording medium recording three-dimensional object model generation program | |
CN116193108A (en) | Online self-calibration method, device, equipment and medium for camera | |
CN114919584A (en) | Motor vehicle fixed point target distance measuring method and device and computer readable storage medium | |
JP2003009141A (en) | Processing device for image around vehicle and recording medium | |
CN116030139A (en) | Camera detection method and device, electronic equipment and vehicle | |
EP3629292A1 (en) | Reference point selection for extrinsic parameter calibration | |
CN113610927B (en) | AVM camera parameter calibration method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||