CN114640801B - Car end panoramic view angle assisted driving system based on image fusion - Google Patents
- Publication number: CN114640801B (application CN202210124847.5A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N 5/265 Mixing (studio circuits, e.g. for special effects)
- G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T 1/20 Processor architectures; processor configuration, e.g. pipelining
- G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
An image fusion-based vehicle-end panoramic viewing angle assisted driving system comprises: an image acquisition module for acquiring 360° road information around the vehicle body, an embedded image processing device for processing the images output by the acquisition module in real time, and an image display device for showing the vehicle-end panoramic image. The embedded image processing device is connected to the other two devices by cable. Three fisheye cameras with 180° viewing angles, mounted at different positions on the vehicle, collect the images, and a video encoder and a video acquisition card integrate the multiple channels of analog video into a single channel. The embedded image processing device stitches the integrated digital video into a panoramic image through the fisheye image processing module and the panoramic image stitcher, and displays the stitched panorama on the image display device through the Web panorama player. The invention can eliminate the blind area in the field of view of large special vehicles while driving.
Description
Technical Field
The invention relates to the field of safe driving of large special vehicles, and in particular to a vehicle-end panoramic viewing angle assisted driving system based on image fusion.
Background
In recent years, with the continuous advance of urban construction in China, more and more large special vehicles have appeared on urban roads, including but not limited to city buses, cement tankers, and muck trucks. These vehicles greatly facilitate daily production and life. However, their long and high bodies create wide blind areas in the driver's field of view, which can cause serious traffic accidents on the road, particularly when turning, and pose a major safety hazard to other vehicles and to pedestrians. At present, a "stop before right turn" policy for large special vehicles has been implemented in major cities nationwide, yet traffic accidents caused by their blind areas still occur. Therefore, to improve the driving safety of large special vehicles and reduce the risk to other road users as far as possible, it is necessary to reduce or even completely eliminate the blind areas that exist when such vehicles travel on the road.
To this end, various systems for reducing the blind area of the field of view during driving have been studied and developed. Wang Lujie et al. proposed a probe-based vehicle-mounted system (Chinese patent CN112776729A, 2021-05-11) that addresses the blind-area problem by installing foreign-object detection components on the vehicle body, but it has limitations such as a complex design and a relatively high cost. She Shengjuan et al. proposed an automobile imaging device for eliminating the blind area of the field of view (Chinese patent CN213948279U, 2021-08-13), which mounts a display at a fixed position on the vehicle and adjusts its angle; however, the viewing angle is fixed, the display angle must be adjusted manually, and the device cannot fully present the environment around the vehicle body.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention provides a vehicle-end panoramic viewing angle assisted driving system based on image fusion, which aims to reduce or even completely eliminate the blind area in the field of view of large special vehicles and thereby improve their driving safety.
The invention discloses a vehicle-end panoramic view angle assisted driving system based on image fusion, which comprises image acquisition equipment, embedded image processing equipment and image display equipment.
The image acquisition equipment uses three fisheye cameras to capture the 360° road environment around the vehicle body; a video encoder integrates the output images of the three cameras into one channel of analog video, a video acquisition card converts that channel into digital video, and the digital video is transmitted by cable to the fisheye image processing module in the embedded image processing device for image processing;
the embedded image processing device comprises a fisheye image processing module; a Web panorama player; a panoramic image stitching device;
the fish-eye image processing module is used for processing the original fish-eye image output by the video acquisition card. In order to shoot a larger angle of view, the original fisheye camera can cause serious distortion of pixel information around an image, so that the original fisheye image needs to be corrected into an annular view by adopting a longitude and latitude unfolding mode to improve the effect of final panoramic stitching. The method mainly comprises the steps of transforming pixel coordinates in a 2D Cartesian coordinate system into a spherical Cartesian coordinate system through a series of changes on the pixel coordinates in the fisheye image, finally converting coordinates in the spherical Cartesian coordinate system into longitude and latitude coordinates, and then mapping pixel points based on the longitude and latitude coordinates, so that the purpose of converting the fisheye image into a circular view is achieved. The specific operation steps are as follows:
1) After an original fisheye image is obtained, a circular mask function is written, according to the center and radius of the fisheye image circle in each of the three views, to intercept the target image region; the coordinate range of the pixel points of the intercepted region is shown in formula (1):
x∈[0,cols-1],y∈[0,rows-1] (1)
wherein x and y are respectively the abscissa and the ordinate of the pixel point coordinates of the intercepted image, cols is the transverse width of the original fisheye image, and rows is the longitudinal width of the original fisheye image;
2) In order to control the resolution of the video after the final image fusion, the size of the output picture in the step 1) needs to be controlled;
3) Convert the pixel coordinate points (x, y) of the intercepted image region from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
wherein x and y are respectively the abscissa and the ordinate of the pixel point coordinates of the intercepted image, cols is the transverse width of the original fisheye image, and rows is the longitudinal width of the original fisheye image;
4) Convert the standard coordinates A(x_A, y_A) to three-dimensional Cartesian coordinates P(x_p, y_p, z_p); the conversion formulas are shown in (3) and (4):
P(p,φ,θ) (3)
wherein p is the radial distance of the line OP connecting point P on the sphere to the origin O, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP on the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinate system is converted into the Cartesian coordinate system according to formula (5):
x_p = p sin θ cos φ, y_p = p sin θ sin φ, z_p = p cos θ (5)
5) Convert the spatial coordinates of P into longitude-latitude coordinates; the conversion relation is shown in formula (6):
wherein x_p, y_p, z_p are the coordinates of point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) Map the longitude-latitude coordinates from step 5) to the pixel coordinates (x_o, y_o) of the unfolded view; the mapping relation is shown in formula (7):
wherein x_o is the pixel abscissa and y_o the pixel ordinate in the unfolded view;
7) After the pixel mapping is completed, black void points that received no mapped pixel appear in the picture; these black regions are filled with a cubic interpolation algorithm so that a complete image is output.
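Steps 1)-7) above can be sketched in code. The snippet below is a minimal numpy illustration of longitude-latitude unfolding, not the patented implementation: the exact normalization constants of formulas (2), (4), (6), and (7) are not reproduced in this text, so a standard equidistant fisheye model is assumed, and the mapping is done inversely (output pixel to source pixel) so that no black void points arise and the cubic-interpolation filling of step 7) is unnecessary.

```python
import numpy as np

def unwrap_fisheye(img, out_h=256, out_w=512):
    """Unfold a circular fisheye image into a longitude-latitude view.

    Assumes an equidistant fisheye whose image circle fills the frame.
    Inverse mapping (output pixel -> source pixel) is used, so no black
    void points appear and no hole-filling interpolation is needed.
    """
    rows, cols = img.shape[:2]
    cx, cy = cols / 2.0, rows / 2.0
    radius = min(cols, rows) / 2.0

    # Longitude/latitude grid of the unfolded view (inverse of the
    # text's formula (7); the scaling constants here are assumptions).
    lon = (np.arange(out_w) + 0.5) / out_w * np.pi               # 0..pi
    lat = (np.arange(out_h) + 0.5) / out_h * np.pi - np.pi / 2   # -pi/2..pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Latitude/longitude -> point on the unit sphere (inverse of (6)).
    xp = np.cos(lat) * np.cos(lon)
    yp = np.cos(lat) * np.sin(lon)
    zp = np.sin(lat)

    # Equidistant model: the angle from the optical axis (+x here) maps
    # linearly to radial distance from the image-circle centre.
    theta = np.arccos(np.clip(xp, -1.0, 1.0))
    phi = np.arctan2(zp, yp)
    r = theta / (np.pi / 2.0) * radius      # 90-degree half field of view
    u = np.clip(cx + r * np.cos(phi), 0, cols - 1).astype(int)
    v = np.clip(cy + r * np.sin(phi), 0, rows - 1).astype(int)
    return img[v, u]
```

In a production pipeline the integer lookup at the end would be replaced by the cubic interpolation the text mentions (e.g. `cv2.remap` with `INTER_CUBIC`); nearest-neighbor lookup keeps the sketch dependency-free.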
The panoramic image stitcher stitches the three fisheye images processed by the fisheye image processing module into a panoramic image. To ensure the continuity of the stitched field of view, the fisheye cameras in the three different directions must be numbered in a fixed order, and this order is kept unchanged in all subsequent operations. During image processing, the feature points of each image are computed with the SIFT algorithm and used as local image descriptors that are invariant to scale, rotation, and affine transformation. Matching feature points between adjacent images are then found, and the matches are further screened with the RANSAC method, so that a homography matrix can be computed from the mapping relationship between the matched feature points. Finally, the images are perspective-transformed according to the computed homography matrix and stitched together, realizing vehicle-end panoramic image stitching. The specific operation steps are as follows:
1) After the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are first given fixed sequential numbers, and this numbering is kept consistent for all subsequent images;
2) Compute the feature points of each image with the SIFT algorithm provided by OpenCV, and use them as local image descriptors invariant to scale, rotation, and affine transformation;
3) Image stitching requires finding matching feature points between adjacent images. The fisheye images of the three viewing angles are therefore coarsely matched by computing a Euclidean distance measure; the candidate matches are then screened with the SIFT criterion of comparing the nearest-neighbor Euclidean distance with the second-nearest-neighbor distance, and a pair is accepted as a match when the ratio of the nearest to the second-nearest distance is smaller than 0.8;
4) Further remove mismatched points from the coarse matches of step 3) with the RANSAC method to improve the accuracy of subsequent processing, then find the mapping relationship between the feature points and compute the homography matrix from it;
5) Apply a perspective transformation to the fisheye images processed by the fisheye image processing module using the homography matrix computed in step 4), stitch the transformed images, and finally synthesize the video stream, realizing the panoramic stitching function.
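The core arithmetic of steps 3) and 4) can be illustrated in isolation. The sketch below is a plain-numpy stand-in, not the patent's pipeline: SIFT detection and RANSAC screening are delegated to OpenCV in the text, so here the descriptors are assumed to be precomputed arrays, the 0.8 ratio test is applied directly, and the homography is solved with a basic direct linear transform (DLT) without outlier rejection.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Step 3): accept a match only when the nearest-neighbor distance
    is below `ratio` times the second-nearest distance (Lowe's test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # Euclidean measure
        order = np.argsort(dist)
        if dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def homography_dlt(src, dst):
    """Step 4), simplified: solve H such that dst ~ H @ src by the
    direct linear transform from N >= 4 matched points (the RANSAC
    screening used in the text is omitted for brevity)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)                # null-space vector of A
    return H / H[2, 2]                      # fix the scale ambiguity
```

In the actual system `cv2.findHomography(src, dst, cv2.RANSAC)` would replace `homography_dlt`, combining the DLT solve with the RANSAC mismatch rejection of step 4).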
The Web panorama player displays the panoramic image output by the panoramic image stitcher on a web page. To reduce video display latency, the front-end player is built with the rtc.js player plug-in, and to enable the front end to play panoramic video, the player is implemented with a combination of three.js, the video tag, and rtc.js: a spherical model is built with three.js, and the video tag is used as the rendering material that textures the surface of the sphere, so that the panoramic video is projected onto the sphere. A browser is installed on the embedded image processing device so that the panoramic image can be browsed on the image display device;
the image display device establishes physical connection with the embedded image processing device and is used for displaying panoramic images presented by the Web player.
Compared with the prior art, the invention has the following beneficial effects: three fisheye cameras capture 360° environmental information around the vehicle body; the video encoder and video acquisition card integrate the three channels of analog video into one channel of digital video, which is fed into the embedded image processing device for further processing, greatly saving the port resources of the embedded device and keeping the overall design cost low. Meanwhile, an embedded device with an integrated AI chip improves real-time video processing and image output capability, and the self-designed panorama player plays the panoramic video, greatly reducing or even completely eliminating the wide blind area present when a large special vehicle is driving and providing a good assisted-driving effect.
Drawings
FIG. 1 is an overall framework diagram of the system of the invention;
FIG. 2 is a schematic view of a camera mounting of the present invention;
FIG. 3 is a flow chart of the fisheye image processing of the invention;
fig. 4 is a flow chart of the process of image stitching fusion of the present invention.
Detailed Description
Embodiments of the invention are described in further detail below with reference to the accompanying drawings:
as shown in fig. 1, the vehicle-end panoramic view angle assisted driving system based on image fusion consists of an image acquisition device, an embedded image processing device and an image display device. The embedded image processing device is connected with the image acquisition device and the image display device by cables, the image acquisition device mainly realizes that the output analog video images of the multi-path fish-eye camera are integrated into one path of digital video by hardware devices such as a video encoder, a video acquisition card and the like in a hardware encoding mode, and the digital video data are input into the embedded image processing device for image processing while the data transmission quantity is reduced. After the embedded image processing equipment acquires the input digital image, the original fisheye image with serious distortion is processed into an all-around view by using a coordinate change method, so that rich image information is obtained, and then three paths of fisheye images processed into the all-around view are spliced to form a video stream. Finally, the spliced panoramic image is presented on the image display device through the Web panoramic player at the Web end, so that the function of greatly reducing or even completely eliminating the wider range of the visual field blind area when the large-scale special vehicle runs is realized, and the running safety of the large-scale special vehicle is improved.
As shown in fig. 2, the fisheye cameras used in the invention have a 180° angle of view. To capture 360° environmental information around the vehicle body and obtain a good stitching result, the three fisheye cameras must be installed at different positions at the same height, with an interval of 120° between adjacent cameras. By mounting the three cameras on the large special vehicle according to the schematic of fig. 2, and adjusting their positions according to the final image display effect, 360° panoramic image information of the environment around the vehicle body can be obtained, giving a good assisted-driving effect.
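A quick sanity check of this geometry (the 180° field of view and 120° spacing come from the description above; the arithmetic itself is illustrative):

```python
# Three fisheye cameras, each with a 180-degree field of view,
# mounted at the same height and spaced 120 degrees apart.
fov_deg, spacing_deg, n_cameras = 180, 120, 3

# Adjacent cameras share the part of the circle where their arcs overlap.
pair_overlap = fov_deg - spacing_deg        # 60 degrees per adjacent pair

# Unique coverage: three arcs minus the three pairwise overlaps.
unique_coverage = n_cameras * fov_deg - n_cameras * pair_overlap
print(pair_overlap, unique_coverage)        # prints: 60 360
```

The 60° of shared view between each adjacent pair is also what gives the SIFT-based stitcher enough common content to find matching feature points.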
As shown in fig. 3, the fisheye image processing module achieves the best output image by transforming the pixel coordinates in the fisheye image, mapping the pixel points, and filling the unmapped gap points; the main implementation steps are as follows:
1) After an original fisheye image is obtained, a circular mask function is written, according to the center and radius of the fisheye image circle in each of the three views, to intercept the target image region; the coordinate range of the pixel points of the intercepted region is shown in formula (1):
x∈[0,cols-1],y∈[0,rows-1] (1)
wherein x and y are respectively the abscissa and the ordinate of the pixel point coordinates of the intercepted image, cols is the transverse width of the original fisheye image, and rows is the longitudinal width of the original fisheye image;
2) In order to control the resolution of the video after the final image fusion, the size of the output picture in the step 1) needs to be controlled;
3) Convert the pixel coordinate points (x, y) of the intercepted image region from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
wherein x and y are respectively the abscissa and the ordinate of the pixel point coordinates of the intercepted image, cols is the transverse width of the original fisheye image, and rows is the longitudinal width of the original fisheye image;
4) Convert the standard coordinates A(x_A, y_A) to three-dimensional Cartesian coordinates P(x_p, y_p, z_p); the conversion formulas are shown in (3) and (4):
P(p,φ,θ) (3)
wherein p is the radial distance of the line OP connecting point P on the sphere to the origin O, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP on the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinate system is converted into the Cartesian coordinate system according to formula (5):
x_p = p sin θ cos φ, y_p = p sin θ sin φ, z_p = p cos θ (5)
5) Convert the spatial coordinates of P into longitude-latitude coordinates; the conversion relation is shown in formula (6):
wherein x_p, y_p, z_p are the coordinates of point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) Map the longitude-latitude coordinates from step 5) to the pixel coordinates (x_o, y_o) of the unfolded view; the mapping relation is shown in formula (7):
wherein x_o is the pixel abscissa and y_o the pixel ordinate in the unfolded view;
7) After the pixel mapping is completed, black void points that received no mapped pixel appear in the picture; these black regions are filled with a cubic interpolation algorithm so that a complete image is output.
As shown in fig. 4, the panoramic image stitcher proceeds by extracting and computing image feature points, matching feature points between adjacent images, finding the mapping relationship between the matched points, computing the homography matrix, perspective-transforming the images, and stitching them; the main implementation steps are as follows:
1) After the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are first given fixed sequential numbers, and this numbering is kept consistent for all subsequent images;
2) Compute the feature points of each image with the SIFT algorithm provided by OpenCV, and use them as local image descriptors invariant to scale, rotation, and affine transformation;
3) Image stitching requires finding matching feature points between adjacent images. The fisheye images of the three viewing angles are therefore coarsely matched by computing a Euclidean distance measure; the candidate matches are then screened with the SIFT criterion of comparing the nearest-neighbor Euclidean distance with the second-nearest-neighbor distance, and a pair is accepted as a match when the ratio of the nearest to the second-nearest distance is smaller than 0.8;
4) Further remove mismatched points from the coarse matches of step 3) with the RANSAC method to improve the accuracy of subsequent processing, then find the mapping relationship between the feature points and compute the homography matrix from it;
5) Apply a perspective transformation to the fisheye images processed by the fisheye image processing module using the homography matrix computed in step 4), stitch the transformed images, and finally synthesize the video stream, realizing the panoramic stitching function.
The embodiments described in this specification are merely examples of implementation forms of the inventive concept. The scope of protection of the present invention should not be construed as being limited to the specific forms set forth in the embodiments; it also covers equivalent technical means that can be conceived by those skilled in the art based on the inventive concept.
Claims (3)
1. An image fusion-based vehicle-end panoramic viewing angle assisted driving system, characterized by comprising: an image acquisition module for acquiring 360° road environment information around the vehicle body, an embedded image processing device for processing in real time the images output by the image acquisition module, and an image display device for displaying the vehicle-end panoramic image;
the image acquisition module acquires a road environment of 360 degrees around a vehicle body by adopting three fisheye cameras, integrates output images of three fisheye cameras into one path of analog video by using a video encoder, converts the one path of analog video into digital video by using a video acquisition card, and transmits the digital video to a fisheye image processing module in the embedded image processing equipment for image processing by using a cable;
the embedded image processing device comprises a fisheye image processing module, a Web panoramic player and a panoramic image splicer;
the fish-eye image processing module is used for processing the original fish-eye image output by the video acquisition card; the original fisheye image is corrected into a circular view by adopting a longitude and latitude unfolding mode to improve the effect of final panoramic stitching: through a series of changes to the pixel coordinates in the fisheye image, the pixel coordinates in the 2D Cartesian coordinate system are transformed into a spherical Cartesian coordinate system, the coordinates in the spherical Cartesian coordinate system are finally converted into longitude and latitude coordinates, and then, the pixel points are mapped based on the longitude and latitude coordinates, so that the fisheye image is converted into the annular view, and the specific operation steps are as follows:
1) After an original fisheye image is obtained, a circular mask function is written, according to the center and radius of the fisheye image circle in each of the three views, to intercept the target image region; the coordinate range of the pixel points of the intercepted region is shown in formula (1):
x ∈ [0, cols-1], y ∈ [0, rows-1] (1), wherein x and y are respectively the abscissa and ordinate of the pixel coordinates of the intercepted image, cols is the lateral width of the original fisheye image, and rows is the longitudinal width of the original fisheye image;
2) In order to control the resolution of the video after the final image fusion, the size of the picture output in step 1) needs to be controlled;
3) Convert the pixel coordinate points (x, y) of the intercepted image region from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
wherein x and y are respectively the abscissa and the ordinate of the pixel coordinates of the intercepted image, cols is the lateral width of the original fisheye image, and rows is the longitudinal width of the original fisheye image;
4) Convert the standard coordinates A(x_A, y_A) to three-dimensional Cartesian coordinates P(x_p, y_p, z_p); the conversion formulas are shown in (3) and (4):
P(p, φ, θ) (3)
wherein p is the radial distance of the line OP connecting point P on the sphere to the origin O, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP on the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinate system is converted into the Cartesian coordinate system according to formula (5):
x_p = p sin θ cos φ, y_p = p sin θ sin φ, z_p = p cos θ (5)
5) Convert the spatial coordinates of P into longitude-latitude coordinates; the conversion relation is shown in formula (6):
wherein x_p, y_p, z_p are the coordinates of point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) Map the longitude-latitude coordinates from step 5) to the pixel coordinates (x_o, y_o) of the unfolded view; the mapping relation is shown in formula (7):
wherein x_o is the pixel abscissa and y_o the pixel ordinate in the unfolded view;
7) After the pixel mapping is completed, black void points that no source pixel maps to appear in the image; these black areas are filled using a cubic interpolation algorithm so that a complete image is output;
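The unwrapping pipeline of steps 3)–6) can be sketched as follows. Note that the bodies of formulas (2), (6) and (7) appear only as images in the source, so the [-1, 1] normalization, the equidistant 180° projection model, and the equirectangular pixel mapping below are common-practice assumptions rather than the patent's exact formulas, and the function name `fisheye_to_equirect_map` is illustrative:

```python
import numpy as np

def fisheye_to_equirect_map(rows, cols, out_h, out_w, radius=1.0):
    """Build a forward mapping fisheye -> unwrapped image per steps 3)-6).

    Assumes an equidistant 180-degree fisheye and a [-1, 1]
    normalization for formula (2); the patent's exact formulas
    are given as images and may differ.
    """
    # Step 3): normalize cropped pixel coordinates to [-1, 1].
    y, x = np.mgrid[0:rows, 0:cols].astype(np.float64)
    x_a = (2.0 * x - cols) / cols
    y_a = (2.0 * y - rows) / rows

    # Step 4): lift normalized coordinates onto the sphere
    # (phi = azimuth in the image plane, theta from the fisheye radius).
    r = np.sqrt(x_a ** 2 + y_a ** 2)
    phi = np.arctan2(y_a, x_a)
    theta = r * (np.pi / 2.0)          # 180-degree FOV: r = 1 -> theta = 90 deg
    x_p = radius * np.sin(theta) * np.cos(phi)   # formula (5)
    y_p = radius * np.sin(theta) * np.sin(phi)
    z_p = radius * np.cos(theta)

    # Step 5): Cartesian point on the sphere -> latitude/longitude.
    lat = np.arcsin(np.clip(z_p / radius, -1.0, 1.0))
    lon = np.arctan2(y_p, x_p)

    # Step 6): latitude/longitude -> pixel coordinates of the unwrapped image.
    x_o = ((lon + np.pi) / (2.0 * np.pi)) * (out_w - 1)
    y_o = ((np.pi / 2.0 - lat) / np.pi) * (out_h - 1)
    valid = r <= 1.0                   # inside the fisheye image circle
    return x_o, y_o, valid
```

Pixels flagged invalid (outside the fisheye circle) and unmapped targets are what step 7) fills by cubic interpolation.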
the panoramic image splicer carries out panoramic image's concatenation with three fisheye images after fisheye image processing module handles, includes: in order to ensure the continuity of the spliced visual field, the three fisheye cameras in different directions need to be numbered according to a fixed sequence, and the sequence is kept unchanged all the time in the subsequent operation; in the image processing process, calculating the characteristic point of each image by using a SIFT algorithm, and taking the characteristic point as an image local invariant description operator with the scale space, scaling, rotation and affine transformation kept unchanged; the matching feature points between the adjacent images are required to be found, and the feature matching points are further screened out by using a RANSAC method, so that a homography matrix is calculated by finding the mapping relation between the feature matching points; finally, performing perspective change on the images according to the homography matrix obtained by calculation, and finally splicing the images after perspective change to realize the function of splicing panoramic images at the vehicle end; the specific operation steps are as follows:
(1) When the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are first given fixed sequential numbers, and the numbering is kept consistent for all subsequent images;
(2) The feature points of each image are calculated using the SIFT algorithm provided by OpenCV and used as local invariant image descriptors that remain unchanged under scale-space variation, scaling, rotation and affine transformation;
(3) Image stitching also requires finding matching feature points between adjacent images; the fisheye images of the three viewing angles are first coarsely matched by computing the Euclidean distance measure, and the feature points of the two images are then screened using the SIFT matching criterion that compares the nearest-neighbor Euclidean distance with the second-nearest-neighbor Euclidean distance: a match is accepted when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is smaller than 0.8;
(4) Mismatched points are further screened out of the coarse matches from step (3) using the RANSAC method, which improves the precision of subsequent image processing; the mapping relationship between the feature points is then found, from which the homography matrix is calculated;
(5) Perspective transformation is applied to the fisheye images processed by the fisheye image processing module using the homography matrix calculated in step (4); the transformed images are then stitched and finally synthesized into a video stream, realizing the panoramic stitching function;
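The screening rule of step (3) — accept a match only when the nearest-neighbor Euclidean distance is below 0.8 times the second-nearest — can be sketched in plain NumPy. In the claim this runs on OpenCV SIFT descriptors; the small arrays and the function name `ratio_test_matches` here are illustrative so the sketch stays self-contained:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep (i, j) only when the nearest-neighbor Euclidean distance
    from descriptor i of image A to image B is < ratio times the
    second-nearest distance, as in step (3) of the claim."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distance measure
        j, k = np.argsort(dists)[:2]                # nearest, second nearest
        if dists[j] < ratio * dists[k]:             # unambiguous match only
            matches.append((i, int(j)))
    return matches

# Two toy 2-D "descriptors" for image A, four for image B: the first
# A descriptor has two near-equal neighbors (ambiguous, rejected),
# the second has one clear neighbor (accepted).
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.0, 0.1], [10.0, 10.0], [5.0, 5.1], [0.0, 0.12]])
```

The surviving matches are then passed to RANSAC (step (4)) to reject outliers before the homography is estimated.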
The Web panorama player displays the panoramic image output by the panoramic image splicer on a web page; to reduce video display latency, the Web panorama player builds the front-end player on the rtc.js player plug-in; to enable the front end to play panoramic video, a three.js + video tag + rtc.js stack is adopted: the Web panorama player builds a spherical model with three.js and uses the video tag as the rendering material mapped onto the sphere's surface, achieving the effect of projecting the panoramic video onto the sphere; a browser is installed on the embedded image processing device, and the panoramic image is browsed on the image display device;
The image display device is physically connected to the embedded image processing device and displays the panoramic image presented by the Web panorama player.
2. The vehicle-end panoramic viewing angle assisted driving system based on image fusion according to claim 1, wherein: the embedded image processing device uses an Atlas 200 acceleration module as its AI computing chip.
3. The vehicle-end panoramic viewing angle assisted driving system based on image fusion according to claim 1, wherein: the field of view of each fisheye camera is 180°, the three fisheye cameras are installed at different positions at the same height, and adjacent fisheye cameras are spaced 120° apart.
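The geometry in claim 3 can be checked with a line of arithmetic: three cameras spaced 120° apart cover the full 360° ring, and a 180° field of view gives each adjacent pair a 60° overlap — the margin the stitcher's feature matching relies on. The 60° figure is derived here, not stated in the patent:

```python
def ring_coverage(fov_deg: float, spacing_deg: float, n: int):
    """Return (total ring span covered by camera spacing, overlap
    between adjacent cameras), both in degrees, for n cameras."""
    total_span = n * spacing_deg        # 3 x 120 = 360: full circle covered
    overlap = fov_deg - spacing_deg     # 180 - 120 = 60 degrees shared
    return total_span, overlap

span, overlap = ring_coverage(180.0, 120.0, 3)
```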
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210124847.5A CN114640801B (en) | 2022-02-10 | 2022-02-10 | Car end panoramic view angle assisted driving system based on image fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114640801A CN114640801A (en) | 2022-06-17 |
CN114640801B true CN114640801B (en) | 2024-02-20 |
Family
ID=81946324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210124847.5A Active CN114640801B (en) | 2022-02-10 | 2022-02-10 | Car end panoramic view angle assisted driving system based on image fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114640801B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106357976A (en) * | 2016-08-30 | 2017-01-25 | 深圳市保千里电子有限公司 | Omni-directional panoramic image generating method and device |
CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180176465A1 (en) * | 2016-12-16 | 2018-06-21 | Prolific Technology Inc. | Image processing method for immediately producing panoramic images |
2022
- 2022-02-10 CN CN202210124847.5A patent/CN114640801B/en active Active
Non-Patent Citations (2)
Title |
---|
Fast generation method for vehicle-mounted panorama based on a 3D spatial sphere; Cao Libo; Xia Jiahao; Liao Jiacai; Zhang Guanjun; Zhang Ruifeng; China Journal of Highway and Transport (01); full text *
Binocular fisheye panoramic image generation based on spherical space matching; He Linfei; Zhu Yu; Lin Jiajun; Huang Junjian; Chen Xudong; Computer Applications and Software (02); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114640801A (en) | 2022-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107133988B (en) | Calibration method and calibration system for camera in vehicle-mounted panoramic looking-around system | |
CN108263283B (en) | Method for calibrating and splicing panoramic all-round looking system of multi-marshalling variable-angle vehicle | |
CN101276465B (en) | Method for automatically split-jointing wide-angle image | |
CN106952311B (en) | Auxiliary parking system and method based on panoramic stitching data mapping table | |
CN109961522B (en) | Image projection method, device, equipment and storage medium | |
CN107424120A (en) | A kind of image split-joint method in panoramic looking-around system | |
US8553081B2 (en) | Apparatus and method for displaying an image of vehicle surroundings | |
US20030117488A1 (en) | Stereoscopic panoramic image capture device | |
CN109087251B (en) | Vehicle-mounted panoramic image display method and system | |
CN103247030A (en) | Fisheye image correction method of vehicle panoramic display system based on spherical projection model and inverse transformation model | |
CN102291541A (en) | Virtual synthesis display system of vehicle | |
JP3381351B2 (en) | Ambient situation display device for vehicles | |
CN101122464A (en) | GPS navigation system road display method, device and apparatus | |
CN109883433B (en) | Vehicle positioning method in structured environment based on 360-degree panoramic view | |
Zhu et al. | Monocular 3D vehicle detection using uncalibrated traffic cameras through homography | |
CN112348741A (en) | Panoramic image splicing method, panoramic image splicing equipment, storage medium, display method and display system | |
CN111145362A (en) | Virtual-real fusion display method and system for airborne comprehensive vision system | |
CN116883610A (en) | Digital twin intersection construction method and system based on vehicle identification and track mapping | |
CN110736472A (en) | indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar | |
JP2004265396A (en) | Image forming system and image forming method | |
CN112750075A (en) | Low-altitude remote sensing image splicing method and device | |
CN108447042B (en) | Fusion method and system for urban landscape image data | |
CN112446915A (en) | Picture-establishing method and device based on image group | |
CN114640801B (en) | Car end panoramic view angle assisted driving system based on image fusion | |
CN106627373B (en) | A kind of image processing method and system for intelligent parking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||