CN117854046B - Integrated positioning system and device based on vision fusion - Google Patents


Info

Publication number
CN117854046B
CN117854046B (application CN202410259012.XA)
Authority
CN
China
Prior art keywords
image
module
fusion
vehicle
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410259012.XA
Other languages
Chinese (zh)
Other versions
CN117854046A (en)
Inventor
陈雪梅
李健
杨东清
肖龙
薛杨武
张宝廷
刘晓慧
赵小萱
沈晓旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Advanced Technology Research Institute of Beijing Institute of Technology
Original Assignee
Beijing Institute of Technology BIT
Advanced Technology Research Institute of Beijing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT, Advanced Technology Research Institute of Beijing Institute of Technology filed Critical Beijing Institute of Technology BIT
Priority claimed from application CN202410259012.XA
Publication of CN117854046A
Application granted
Publication of CN117854046B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • G06V10/70 Arrangements using pattern recognition or machine learning
    • G06V10/87 Arrangements using selection of the recognition techniques, e.g. of a classifier in a multiple classifier system
    • G06V10/96 Management of image or video recognition tasks


Abstract

The invention relates to the field of vehicle positioning, and in particular to an integrated positioning system and device based on vision fusion. The system comprises an image acquisition module, an image processing module and an image grading module. The image acquisition module acquires images of objects within its image acquisition direction range. The image processing module is communicatively connected to the image acquisition module, receives the image data it acquires, and extracts the main body structural lines of objects in the image. The image grading module is communicatively connected to the image processing module, receives the main body structural lines of objects in the image, and places them in a preset driving safety graded positioning range diagram. The diagram comprises a safe zone, two warning zones and two danger zones, all sector-shaped areas; the two warning zones are joined to the two ends of the safe zone, and each danger zone is joined to the end of a warning zone away from the safe zone. The invention can judge whether the platform vehicle is in a safe driving state from the acquired images alone.

Description

Integrated positioning system and device based on vision fusion
Technical Field
The invention relates to the field of vehicle positioning, in particular to an integrated positioning system and device based on vision fusion.
Background
The intelligent mobile platform is a comprehensive system integrating environment sensing, dynamic decision-making and planning, behavior control and execution, and other functions; it includes platforms such as intelligent vehicles and intelligent robots. Environment sensing is a basic requirement and precondition for safety decision-making and control of the intelligent mobile platform, and provides effective data support for safe operation of the platform, for example through the radar and cameras installed on it. In the logistics distribution field, the unmanned intelligent logistics vehicle is exactly such an intelligent mobile platform: it drives autonomously, avoids obstacles automatically, and can deliver goods safely to their destination.
Chinese patent publication No. CN112731371A discloses an integrated target tracking system and method based on lidar and vision fusion. The system comprises a solid-state lidar, a monocular vision sensor and a fusion tracker; the fusion tracker comprises a monocular vision target detection module, a lidar target detection module, a lidar-vision fusion tracking module and a communication module. The monocular vision target detection module acquires target information from the image, and the lidar target detection module acquires target information from the point cloud. The lidar-vision fusion tracking module spatially registers the measurements, establishes a target state model and a measurement model, builds a tracking gate through one-step prediction of the target state, and completes fusion tracking of the image target and the point-cloud target through data association and target state filtering, which improves the integration level and development efficiency of the intelligent mobile platform and the accuracy of the fusion tracking results.
However, this technical scheme has the following defect:
environment sensing depends on the cooperative operation of multiple devices (the solid-state lidar, the monocular vision sensor, the fusion tracker, etc.), so when the lidar or the vision sensor fails or cannot collect data normally, the whole system cannot operate normally, and its operational stability and reliability are low.
Disclosure of Invention
The invention aims to solve the problems described in the background art, and provides an integrated positioning system and device based on vision fusion that can judge whether the platform vehicle is in a safe driving state from the acquired images alone, with higher operational stability and reliability.
In one aspect, the invention provides an integrated positioning device based on vision fusion, which comprises an image acquisition module, an image processing module and an image grading module. The image acquisition module acquires images of objects within its image acquisition direction range. The image processing module is communicatively connected to the image acquisition module, receives the image data it acquires, and extracts the main body structural lines of objects in the image. The image grading module is communicatively connected to the image processing module, receives the main body structural lines of objects in the image, and places them in a preset driving safety graded positioning range diagram. The diagram comprises a safe zone, warning zones and danger zones, all sector-shaped areas; there are two warning zones and two danger zones, the two warning zones are joined to the two ends of the safe zone, and each danger zone is joined to the end of a warning zone away from the safe zone. When the main body structural lines of the objects in the images are all within the safe zone, the vehicle running state is judged to be a safe running state; when a main body structural line intrudes into a warning zone but not into a danger zone, the running state is judged to be a warning running state; when a main body structural line intrudes into a danger zone, the running state is judged to be a dangerous running state.
Preferably, the image acquisition module comprises a front-rear vision camera, a lateral vision camera, a bracket and a mounting cover; the acquisition directions of the front-rear vision camera and the lateral vision camera are perpendicular, the front-rear vision camera faces forward or rearward, and the lateral vision camera faces left or right; both cameras are arranged on the bracket, and the bracket is arranged on the mounting cover.
Preferably, the image acquisition ranges of the front-rear vision camera and the lateral vision camera are conical.
On the other hand, the invention provides an integrated positioning system based on vision fusion, which comprises radar ranging modules, an image fusion module and the above integrated positioning device based on vision fusion. Four groups of image acquisition modules are arranged in a rectangle on the top of a target vehicle; together they comprise two front-rear vision cameras facing the front of the vehicle, two front-rear vision cameras facing the rear, two lateral vision cameras facing the left side and two lateral vision cameras facing the right side. There are eight radar ranging modules, two arranged on the mounting cover of each group of image acquisition modules; the orientations of the two radar ranging modules are respectively the same as those of the front-rear vision camera and the lateral vision camera in that group;
the image processing module is in communication connection with the image fusion module and is used for transmitting main structure line data of an object in the image to the image fusion module;
the radar ranging module is in communication connection with the image fusion module and is used for acquiring ranging lattice data of the object in the target direction and transmitting the ranging lattice data to the image fusion module;
For the two front-rear vision cameras or two lateral vision cameras facing the same side of the vehicle, the image processing module extracts the main body structural lines of objects in the images from the image data collected by those two cameras, and the image fusion module correspondingly fuses the main body structural lines of the objects in the two images with the ranging lattice data collected by the radar ranging modules with the corresponding orientation, to form a structural line and ranging lattice integrated image.
Preferably, the image fusion module comprises a point-line fusion module and a transformation fusion module;
the point-line fusion module is used for performing point-line fusion between the main body structural lines of objects in the images, extracted from the image data collected by the two front-rear vision cameras or two lateral vision cameras facing the same side of the vehicle, and the ranging lattice data collected by the correspondingly oriented radar ranging modules, to obtain two groups of point-line fusion images;
the transformation fusion module is communicatively connected to the point-line fusion module and receives the point-line fusion images. It divides each of the two groups of point-line fusion images into an outward-lying image maintenance area and an inward-lying image area to be fused, transforms the images of the same object in the two areas to be fused into an object reconstruction image in an image fusion area, and joins the two outward-lying image maintenance areas to the two sides of the image fusion area, forming the structural line and ranging lattice integrated image.
Preferably, when the transformation fusion module transforms the images of the same object in the two areas to be fused, it takes the maximum transverse dimension of the object across the two areas as the transformed transverse dimension and the maximum vertical dimension as the transformed vertical dimension, and determines the front-view display size of the object reconstruction image from these two dimensions.
Preferably, the system further comprises an image output module, wherein the image output module is in communication connection with the image fusion module and is used for receiving the structural line and distance measurement lattice integrated image data and outputting the structural line and distance measurement lattice integrated image data in a visual mode.
Preferably, the system further comprises a global positioning navigation module and an inertial navigation module which are installed on the target vehicle, wherein the global positioning navigation module is used for positioning the vehicle position in real time, and the inertial navigation module is used for detecting the speed and the position of the vehicle.
Compared with the prior art, the invention has the following beneficial technical effects:
The invention can judge whether the platform vehicle is in a safe driving state from the images acquired by the vision-fusion-based integrated positioning device alone, and can also operate in combination with the radar ranging modules, so its operational stability and reliability are high. When the radar ranging modules are not used, the image grading module compares the main body structural lines of objects in the image with the preset driving safety graded positioning range diagram to judge whether the distance between the platform vehicle and the object is within the safe range. When the radar ranging modules are used, the image fusion module correspondingly fuses the main body structural line data of objects in the two images with the ranging lattice data collected by the correspondingly oriented radar ranging modules to form a structural line and ranging lattice integrated image; an integrated image containing both image data and ranging data is thus obtained, and the distance between the object and the vehicle can be read more intuitively and accurately.
Drawings
FIG. 1 is a system block diagram of an integrated positioning system based on visual fusion according to an embodiment of the present invention;
FIG. 2 is a block diagram of an image fusion module according to an embodiment of the present invention;
FIG. 3 is a schematic view of the installation position of an image acquisition module on a vehicle in an embodiment of the invention;
FIG. 4 is a schematic diagram of an image capturing module according to an embodiment of the present invention;
FIG. 5 is a schematic view of the angular ranges used by the image grading module for driving safety graded positioning;
FIG. 6 is a schematic diagram of a point-line fusion module fusing object structural lines and object ranging lattices to obtain a point-line fusion image;
Fig. 7 is a schematic diagram of the transformation fusion module transforming point-line fusion images into an integrated image containing a front-view fusion region.
Reference numerals: 100. image acquisition module; 1. front-rear vision camera; 2. lateral vision camera; 3. bracket; 4. mounting cover.
Detailed Description
Embodiment 1
As shown in fig. 1 to 7, the integrated positioning system based on vision fusion provided in this embodiment includes a radar ranging module, an image fusion module and an integrated positioning device based on vision fusion.
The integrated positioning device based on vision fusion comprises an image acquisition module 100, an image processing module and an image grading module. The image acquisition module 100 acquires images of objects within its image acquisition direction range and comprises a front-rear vision camera 1, a lateral vision camera 2, a bracket 3 and a mounting cover 4. The acquisition directions of the front-rear vision camera 1 and the lateral vision camera 2 are perpendicular; the front-rear vision camera 1 faces forward or rearward, and the lateral vision camera 2 faces left or right. Both cameras are arranged on the bracket 3, the bracket 3 is arranged on the mounting cover 4, and the mounting cover 4 can be mounted on the roof of the target vehicle. The image acquisition ranges of the front-rear vision camera 1 and the lateral vision camera 2 are conical, and a rectangular image of preset size can be acquired within each conical range.
The image processing module is communicatively connected to the image acquisition module 100, receives the image data it acquires, and extracts the main body structural lines of objects in the image. The image grading module is communicatively connected to the image processing module, receives the main body structural lines of objects in the image, and places them in a preset driving safety graded positioning range diagram. The diagram comprises a safe zone, two warning zones and two danger zones, all sector-shaped; the two warning zones are joined to the two ends of the safe zone, and each danger zone is joined to the end of a warning zone away from the safe zone. As shown in fig. 5, the safe zone is a sector with included angle α, the warning zones are two sectors each with included angle β, and the danger zones are two sectors each with included angle γ; all sectors are centered on the vision camera.
When the main body structural lines of the objects in the images are all within the safe zone, the vehicle running state is judged to be a safe running state; when a main body structural line intrudes into a warning zone but not into a danger zone, the running state is judged to be a warning running state; when a main body structural line intrudes into a danger zone, the running state is judged to be a dangerous running state. This judgment is based on the position of the main body structural line of the object rather than a direct distance measurement, yet it still realizes driving safety judgment for the vehicle and grades the vehicle's safety qualitatively under different conditions.
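The three-zone grading logic can be sketched as follows. The sector half-angles and the camera-centred coordinate convention are illustrative assumptions: the patent specifies the angles only symbolically as α, β and γ and gives no numeric values.

```python
import math

# Hypothetical sector angles in degrees: ALPHA spans the whole safe sector,
# BETA each warning sector, GAMMA each danger sector (values are assumptions).
ALPHA, BETA, GAMMA = 60.0, 25.0, 20.0

def zone_of(x, y):
    """Zone of one structural-line point in a camera-centred frame
    (x along the camera's optical axis, y across it)."""
    off_axis = abs(math.degrees(math.atan2(y, x)))
    if off_axis <= ALPHA / 2:
        return "safe"
    if off_axis <= ALPHA / 2 + BETA:
        return "warning"
    return "danger"

def classify_driving_state(structural_lines):
    """Grade the running state: danger wins over warning, warning over safe."""
    zones = {zone_of(x, y) for line in structural_lines for (x, y) in line}
    if "danger" in zones:
        return "danger"
    if "warning" in zones:
        return "warning"
    return "safe"
```

For example, a structural line whose points all lie close to the optical axis yields the safe running state, while a single point far off-axis is enough to trigger the warning or dangerous state, matching the intrusion rule described above.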
Four groups of image acquisition modules 100 are installed in a rectangular arrangement on the top of the target vehicle, together comprising two front-rear vision cameras 1 facing the front of the vehicle, two front-rear vision cameras 1 facing the rear, two lateral vision cameras 2 facing the left side and two lateral vision cameras 2 facing the right side. There are eight radar ranging modules, two installed on the mounting cover 4 of each group of image acquisition modules 100; their orientations are respectively the same as those of the front-rear vision camera 1 and the lateral vision camera 2 in that group.
The image processing module is in communication connection with the image fusion module and is used for transmitting main structure line data of an object in the image to the image fusion module. The radar ranging module is in communication connection with the image fusion module and is used for acquiring ranging lattice data of the object in the target direction and transmitting the ranging lattice data to the image fusion module.
For the two front-rear vision cameras 1 or two lateral vision cameras 2 facing the same side of the vehicle, the image processing module extracts the main body structural lines of objects in the images from the image data collected by those two cameras, and the image fusion module correspondingly fuses the main body structural lines of the objects in the two images with the ranging lattice data collected by the radar ranging modules with the corresponding orientation, to form a structural line and ranging lattice integrated image.
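The correspondence step of this fusion can be sketched minimally as below: each structural-line pixel is annotated with the range of the nearest radar lattice point. It assumes the structural-line pixels and the projected radar lattice already share a common image frame; the patent does not specify the registration method, so this nearest-neighbour pairing is an illustrative simplification.

```python
def fuse_lines_with_lattice(line_points, lattice):
    """Attach to each structural-line pixel (u, v) the distance of the
    nearest ranging lattice point (u, v, distance_m), both assumed to be
    registered to the same image plane."""
    fused = []
    for (u, v) in line_points:
        nearest = min(lattice, key=lambda p: (p[0] - u) ** 2 + (p[1] - v) ** 2)
        fused.append((u, v, nearest[2]))
    return fused
```

The result is a point-line annotated image in which every structural-line point carries a distance value, the precursor of the structural line and ranging lattice integrated image.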
The integrated positioning system based on vision fusion further comprises an image output module, a global positioning navigation module and an inertial navigation module installed on the target vehicle. The image output module is communicatively connected to the image fusion module, receives the structural line and ranging lattice integrated image data, and outputs it visually, making the displayed data more intuitive. The global positioning navigation module locates the vehicle's position in real time, and the inertial navigation module detects the vehicle's speed and position, further improving positioning accuracy.
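The cooperation of the global positioning navigation module and the inertial navigation module can be illustrated by a simple dead-reckoning step followed by a weighted blend. The patent specifies no fusion law, so the blending weight and this two-step scheme are assumptions for illustration only.

```python
def dead_reckon(prev_xy, velocity_xy, dt):
    """Inertial prediction: advance the previous position by velocity * dt."""
    return (prev_xy[0] + velocity_xy[0] * dt,
            prev_xy[1] + velocity_xy[1] * dt)

def blend_position(gps_xy, ins_xy, gps_weight=0.7):
    """Weighted blend of a GNSS fix and the inertial estimate.
    The default weight of 0.7 is illustrative, not from the patent."""
    w = gps_weight
    return (w * gps_xy[0] + (1 - w) * ins_xy[0],
            w * gps_xy[1] + (1 - w) * ins_xy[1])
```

In practice such blending is usually done with a Kalman filter; the fixed-weight version above only sketches why combining the two sources improves accuracy over either alone.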
In this embodiment, whether the platform vehicle is in a safe driving state can be judged from the images acquired by the vision-fusion-based integrated positioning device alone, and the radar ranging modules can also be combined, so operational stability and reliability are high. When the radar ranging modules are not used, the image processing module extracts the main body structural lines of objects from the image data acquired by the image acquisition module 100, and the image grading module compares them with the preset driving safety graded positioning range diagram. When the main body structural lines are entirely within the safe zone, the distance between the platform vehicle and the object is within the safe range, and the vehicle can drive normally. When a main body structural line intrudes into a warning zone but not into a danger zone, the distance between the vehicle and the object is smaller; the driving speed must be watched so that the vehicle is not travelling too fast to brake in time. When a main body structural line intrudes into a danger zone, the running state is judged to be dangerous: the distance to the object has shrunk to the point of possibly affecting driving safety, and the vehicle must slow down or even stop to widen the distance and ensure driving safety.
When the radar ranging module is used in combination, the image fusion module is used for correspondingly fusing the main structure line data of the object in the two images with the ranging lattice data acquired by the radar ranging module with the corresponding orientation to form a structure line and ranging lattice integrated image, so that the integrated image comprising the image data and the distance data is obtained, the distance between the object and the vehicle can be obtained more intuitively and accurately, and data support is provided for safe running of the vehicle.
Embodiment 2
As shown in figs. 1 to 7, compared with Embodiment 1, in this embodiment the image fusion module comprises a point-line fusion module and a transformation fusion module, whose operating principles are as follows:
1. The point-line fusion module performs point-line fusion between the main body structural lines of objects in the images, extracted from the image data collected by the two front-rear vision cameras 1 or two lateral vision cameras 2 facing the same side of the vehicle, and the ranging lattice data collected by the correspondingly oriented radar ranging modules. This yields two groups of point-line fusion images in which the ranging lattice is fused with the object's real main body structural lines, as shown in fig. 6.
2. The transformation fusion module is communicatively connected to the point-line fusion module and receives the point-line fusion images. It divides each of the two groups of point-line fusion images into an outward-lying image maintenance area and an inward-lying image area to be fused, transforms the images of the same object in the two areas to be fused into an object reconstruction image in an image fusion area, and joins the two outward-lying image maintenance areas to the two sides of the image fusion area, forming the structural line and ranging lattice integrated image, as shown in fig. 7. Because the cameras facing the same side are located at different positions on the top of the vehicle, the acquired images have different perspectives. For the same object in the two areas to be fused, a unified front-view image must therefore be determined from the two different perspectives and taken as part of the final structural line and ranging lattice integrated image; the images in the maintenance areas on both sides are then joined on, together forming an overall image covering the front of the acquisition direction and both sides of it.
When the transformation fusion module transforms the images of the same object in the two areas to be fused, it takes the maximum transverse dimension of the object across the two areas as the transformed transverse dimension and the maximum vertical dimension as the transformed vertical dimension, and determines the front-view display size of the object reconstruction image from these two dimensions. This maximizes the size of the reconstructed image, making the judgment of safe vehicle driving more sensitive and the vehicle's driving safety better.
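The size-maximising rule above can be sketched as follows, using axis-aligned bounding boxes of the object in the two areas to be fused as stand-ins for the "maximum transverse/vertical dimensions"; the bounding-box representation is an illustrative assumption, not specified by the patent.

```python
def forward_display_size(bbox_a, bbox_b):
    """Front-view display size of the object reconstruction image:
    the larger width and the larger height observed across the two
    perspective views. Each bbox is (min_u, min_v, max_u, max_v)
    in its point-line fusion image."""
    transverse = max(bbox_a[2] - bbox_a[0], bbox_b[2] - bbox_b[0])
    vertical = max(bbox_a[3] - bbox_a[1], bbox_b[3] - bbox_b[1])
    return transverse, vertical
```

Taking the maximum along each axis guarantees the reconstructed object is never displayed smaller than its largest observed extent, which is what makes the safety judgment more sensitive.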
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited thereto, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (5)

1. An integrated positioning system based on vision fusion, characterized by comprising radar ranging modules, an image fusion module and an integrated positioning device based on vision fusion;
The integrated positioning device based on visual fusion comprises:
An image acquisition module (100) for acquiring an image of an object in a range of image acquisition directions;
the image processing module is in communication connection with the image acquisition module (100) and is used for receiving the image data acquired by the image acquisition module (100) and extracting a main structure line of an object in the image;
The image grading module is in communication connection with the image processing module and is used for receiving a main body structural line of an object in the image and placing it in a preset driving safety graded positioning range diagram, wherein the driving safety graded positioning range diagram comprises a safe zone, warning zones and danger zones, all sector-shaped areas; there are two warning zones and two danger zones, the two warning zones are respectively connected to the two ends of the safe zone, and the two danger zones are respectively connected to the end of each warning zone far from the safe zone; when the main body structural lines of the objects in the image are all in the safe zone, the vehicle running state is judged to be a safe running state; when the main body structural line of an object in the image intrudes into a warning zone and does not intrude into a danger zone, the vehicle running state is judged to be a warning running state; when the main body structural line of an object in the image intrudes into a danger zone, the vehicle running state is judged to be a dangerous running state;
the image acquisition module (100) comprises a front-back vision camera (1), a lateral vision camera (2), a bracket (3) and a mounting cover (4), wherein the acquisition directions of the front-back vision camera (1) and the lateral vision camera (2) are vertical, the front-back vision camera (1) faces forwards or backwards, the lateral vision camera (2) faces leftwards or rightwards, the front-back vision camera (1) and the lateral vision camera (2) are both arranged on the bracket (3), and the bracket (3) is arranged on the mounting cover (4);
The image acquisition modules (100) are arranged in a rectangular layout on top of the target vehicle and together comprise two front-rear vision cameras (1) facing the front of the vehicle, two front-rear vision cameras (1) facing the rear of the vehicle, two lateral vision cameras (2) facing the left side of the vehicle and two lateral vision cameras (2) facing the right side of the vehicle; eight radar ranging modules are provided, two radar ranging modules being arranged on the mounting cover (4) of each group of image acquisition modules (100), the directions of the two radar ranging modules being the same, respectively, as the directions of the front-rear vision camera (1) and the lateral vision camera (2) in that group of image acquisition modules (100);
the image processing module is in communication connection with the image fusion module and is used for transmitting the main-structural-line data of objects in the image to the image fusion module;
the radar ranging module is in communication connection with the image fusion module and is used for acquiring ranging lattice data of the object in the target direction and transmitting the ranging lattice data to the image fusion module;
For the two front-rear vision cameras (1) or two lateral vision cameras (2) facing the same side of the vehicle, after the image processing module extracts the main structural lines of objects from the image data acquired by those two cameras, the image fusion module fuses the main structural lines of objects in the two images with the ranging lattice data acquired by the radar ranging modules facing the corresponding direction, forming a structural line and ranging lattice integrated image;
the image fusion module comprises a point-line fusion module and a transformation fusion module;
The point-line fusion module is used for performing point-line fusion between the main structural lines of objects extracted from the image data acquired by the two front-rear vision cameras (1) or two lateral vision cameras (2) facing the same side of the vehicle and the ranging lattice data acquired by the radar ranging module facing the corresponding direction, yielding two groups of point-line fusion images;
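Point-line fusion here amounts to attaching each radar range return to the image structure line it belongs to. The claim does not describe the association rule, so the nearest-neighbour matching and the pixel-gap threshold in this sketch are assumptions; radar returns are taken as already projected into image coordinates:

```python
import math

def point_line_fuse(structure_lines, radar_points, max_px_gap=10.0):
    """Attach each radar range measurement to the nearest structure-line
    pixel, producing a point-line fusion image as a list of
    (x, y, distance_or_None) samples.

    structure_lines: list of (x, y) pixel coordinates of extracted lines
    radar_points:    list of (x, y, distance), already projected into
                     the image plane (an assumption of this sketch)
    """
    fused = [(x, y, None) for x, y in structure_lines]
    for rx, ry, dist in radar_points:
        # find the closest structure-line sample within the allowed gap
        idx, best = None, max_px_gap
        for i, (x, y, _) in enumerate(fused):
            d = math.hypot(x - rx, y - ry)
            if d < best:
                idx, best = i, d
        if idx is not None:
            x, y, _ = fused[idx]
            fused[idx] = (x, y, dist)  # structure-line point gains a range
    return fused
```

Running this once per camera of the pair produces the two groups of point-line fusion images the claim refers to.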
The transformation fusion module is in communication connection with the point-line fusion module and is used for receiving the point-line fusion images, dividing each of the two groups of point-line fusion images into an outward-facing image retention area and an inward-facing image area to be fused, transforming the images of the same object in the two image areas to be fused into an object reconstruction image in an image fusion area, and joining the two outward-facing image retention areas of the two groups of point-line fusion images to the two sides of the image fusion area, thereby forming the structural line and ranging lattice integrated image.
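The retention/fusion split can be illustrated on plain image arrays. The claim does not specify how the object reconstruction image is computed, so this sketch stands in a simple per-pixel average for it and treats the overlap width as a known parameter (both assumptions):

```python
import numpy as np

def transform_fuse(left_img, right_img, overlap):
    """Stitch two point-line fusion images whose inner `overlap` columns
    view the same objects.  Outer columns are kept unchanged (the image
    retention areas); the overlapping inner columns are merged into one
    reconstructed strip (the image fusion area)."""
    keep_left = left_img[:, :-overlap]     # outward retention area, left image
    keep_right = right_img[:, overlap:]    # outward retention area, right image
    # Per-pixel average as a stand-in for the patent's object reconstruction.
    fused_strip = (left_img[:, -overlap:].astype(float)
                   + right_img[:, :overlap].astype(float)) / 2.0
    return np.hstack([keep_left,
                      fused_strip.astype(left_img.dtype),
                      keep_right])
```

The two retention areas end up on either side of the fused strip, mirroring the claim's layout of retention areas flanking the image fusion area.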
2. The integrated positioning system based on visual fusion according to claim 1, wherein the image acquisition ranges of the front-rear visual camera (1) and the lateral visual camera (2) are conical ranges.
3. The integrated positioning system based on visual fusion according to claim 1, wherein, when performing image transformation on the same object in the two image areas to be fused, the transformation fusion module takes the maximum transverse dimension of the object across the two areas as the transformation transverse dimension and the maximum vertical dimension as the transformation vertical dimension, and determines the forward display size of the object reconstruction image from these two dimensions.
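The sizing rule of claim 3 reduces to an element-wise maximum over the object's two bounding boxes; a minimal sketch, with bounding boxes given as hypothetical (width, height) pixel pairs:

```python
def reconstructed_size(bbox_a, bbox_b):
    """Forward display size of the object reconstruction image: the larger
    transverse (width) and the larger vertical (height) dimension of the
    object across the two image areas to be fused.

    bbox_a, bbox_b: (width, height) in pixels for the same object as it
    appears in each of the two areas.
    """
    wa, ha = bbox_a
    wb, hb = bbox_b
    return max(wa, wb), max(ha, hb)  # (transverse, vertical) display size
```

Taking the maximum of each dimension guarantees the reconstruction is never displayed smaller than either source view of the object.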
4. The integrated positioning system based on visual fusion according to claim 1, further comprising an image output module, wherein the image output module is in communication connection with the image fusion module and is used for receiving and visually outputting the structural line and ranging lattice integrated image data.
5. The integrated positioning system based on visual fusion according to claim 1, further comprising a global positioning navigation module and an inertial navigation module mounted on the target vehicle, wherein the global positioning navigation module locates the vehicle position in real time and the inertial navigation module detects the speed and position of the vehicle.
CN202410259012.XA 2024-03-07 2024-03-07 Integrated positioning system and device based on vision fusion Active CN117854046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410259012.XA CN117854046B (en) 2024-03-07 2024-03-07 Integrated positioning system and device based on vision fusion


Publications (2)

Publication Number Publication Date
CN117854046A CN117854046A (en) 2024-04-09
CN117854046B true CN117854046B (en) 2024-05-14

Family

ID=90540459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410259012.XA Active CN117854046B (en) 2024-03-07 2024-03-07 Integrated positioning system and device based on vision fusion

Country Status (1)

Country Link
CN (1) CN117854046B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101318491A (en) * 2008-05-14 2008-12-10 合肥工业大学 Built-in integrated visual sensation auxiliary driving safety system
CN102303605A (en) * 2011-06-30 2012-01-04 中国汽车技术研究中心 Multi-sensor information fusion-based collision and departure pre-warning device and method
CN110228413A (en) * 2019-06-10 2019-09-13 吉林大学 Oversize vehicle avoids pedestrian from being involved in the safety pre-warning system under vehicle when turning
CN115179993A (en) * 2022-07-18 2022-10-14 鄂尔多斯应用技术学院 Omnidirectional obstacle grading early warning system of mining shuttle car
CN115351785A (en) * 2022-08-02 2022-11-18 深圳墨影科技有限公司 Three-dimensional protection method and system for mobile robot and storage medium
CN116129340A (en) * 2022-07-25 2023-05-16 中国电力科学研究院有限公司 Safety monitoring method for dangerous area based on action track prediction
CN117452410A (en) * 2023-10-25 2024-01-26 中国人民解放军32181部队 Millimeter wave radar-based vehicle detection system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Camera and Radar Sensor Fusion for Robust Vehicle Localization via Vehicle Part Localization; DAEJUN KANG; IEEE Access; 2020-05-05; full text *
Research on a machine-vision-based collision-avoidance warning system for vehicle door opening; Lu Yanbing; Chu Jiangwei; Forestry Machinery &amp; Woodworking Equipment; 2019-06-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN117854046A (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN101303735B (en) Method for detecting moving objects in a blind spot region of a vehicle and blind spot detection device
CN103455144B (en) Vehicle-mounted man-machine interaction system and method
CN108196260A (en) The test method and device of automatic driving vehicle multi-sensor fusion system
JP2022003578A (en) Operation vehicle
CN110065494A (en) A kind of vehicle collision avoidance method based on wheel detection
CN113085896B (en) Auxiliary automatic driving system and method for modern rail cleaning vehicle
KR20150141190A (en) Methods and systems for detecting weather conditions using vehicle onboard sensors
CN109001743B (en) Tramcar anti-collision system
CN105946766A (en) Vehicle collision warning system based on laser radar and vision and control method thereof
CN114442101B (en) Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
CN109597077A (en) A kind of detection system based on unmanned plane
CN111413983A (en) Environment sensing method and control end of unmanned vehicle
CN111507162A (en) Blind spot warning method and device based on cooperation of communication between vehicles
CN205601867U (en) Train contact net detection device
KR20190143151A (en) Automated Driving System for Automated Driving car
CN109910955A (en) Rail tunnel obstacle detection system and method based on transponder information transmission
CN107399341A (en) On-board running Environmental safety supervision system and method
CN111222441A (en) Point cloud target detection and blind area target detection method and system based on vehicle-road cooperation
JP2001195698A (en) Device for detecting pedestrian
CN211527320U (en) Automobile-used laser figure surveys barrier device
CN208847836U (en) Tramcar anti-collision system
CN115195775A (en) Vehicle control device, vehicle control method, and storage medium
CN117854046B (en) Integrated positioning system and device based on vision fusion
CN113759787A (en) Unmanned robot for closed park and working method
CN117471463A (en) Obstacle detection method based on 4D radar and image recognition fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant