CN117197019A - Vehicle three-dimensional point cloud image fusion method and system - Google Patents


Info

Publication number
CN117197019A
CN117197019A (application number CN202311465599.1A)
Authority
CN
China
Prior art keywords
image
fusion
vehicle
information
point cloud
Prior art date
Legal status: Pending
Application number
CN202311465599.1A
Other languages
Chinese (zh)
Inventor
叶才增
张炯
王军
姜晓琳
国海涛
朱旭刚
亓越
朱佳
Current Assignee
Shandong Institute of Commerce and Technology
Original Assignee
Shandong Institute of Commerce and Technology
Priority date
Filing date
Publication date
Application filed by Shandong Institute of Commerce and Technology
Priority to CN202311465599.1A
Publication of CN117197019A


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a vehicle three-dimensional point cloud image fusion method and system. The method comprises the following steps: planning a specific area around the vehicle, collecting image information of that area, and detecting the image information; based on the detection result, sending abnormality early-warning information to the vehicle. By combining point cloud data and image data, the invention exploits the complementary advantages of both, provides more comprehensive and accurate environment-perception information, and avoids missed detection of anomalies around the vehicle; the approach is also robust to factors such as illumination and weather changes, can provide reliable fusion results under different conditions, and makes corresponding automatic-driving decisions through accurate environment data acquisition, thereby improving the running stability of the vehicle.

Description

Vehicle three-dimensional point cloud image fusion method and system
Technical Field
The invention relates to the technical field of image data processing, in particular to a vehicle three-dimensional point cloud image fusion method and system.
Background
At present, in the field of three-dimensional environment perception, comprehensive environment information is often obtained by fusing point cloud data and image data acquired by different sensors. Existing fusion methods suffer from high computational complexity, insufficient precision, and poor real-time performance. Moreover, when a vehicle driving route is planned, road conditions can change: for example, a traffic accident may occur, or foreign matter may block the road, so the vehicle cannot pass through the current road in time and the planned route fails, which cannot meet practical application requirements.
Disclosure of Invention
The invention aims to provide a vehicle three-dimensional point cloud image fusion method and system, which are used for solving the problems in the background technology.
In order to solve the above technical problems, one of the purposes of the present invention is to provide a vehicle three-dimensional point cloud image fusion method, which comprises the following steps:
S1, taking the vehicle as the center, extending a fusion area outward, collecting image information of the fusion area, and detecting the image information;
S2, based on the detection result of the image information in S1, sending abnormality early-warning information to the vehicle;
S3, extracting features from the image information acquired in S1, searching for other image information within range according to the extraction result, and analyzing the other image information in combination with the original image information;
S4, carrying out fusion analysis on the image information in the fusion area based on the analysis result of S3;
S5, predicting the vehicle driving route based on the image fusion analysis data of S4, and providing a driving scheme.
As a further improvement of the present technical solution, the step of detecting the image information in S1 is as follows:
s1.1, planning a fusion area according to the influence degree of the surrounding environment on the normal running of the vehicle;
S1.2, acquiring image information of the fusion area centered on the vehicle in real time, and evaluating the sharpness of the acquired image information.
As a further improvement of the technical scheme, the expressions used in S1.2 for evaluating the sharpness of the collected image information are as follows:
A is a sharpness evaluation method based on the Fourier transform, used to evaluate how much high-frequency information is present in the image, and is calculated as follows:
wherein I represents the extracted pixel information, FFT(I) represents the spectrum obtained by Fourier transforming it, and Mean(FFT(I)) represents the average value of the spectrum.
B is a sharpness evaluation method based on gradient information, used to evaluate the strength of the gradient information in the image, and is calculated as follows:
Laplacian(I) represents the gradient information obtained by applying the Laplacian operator to the pixel information; S is the sharpness score: the higher the score, the clearer the image, and when S is smaller than 10, the sharpness is deemed substandard.
As a further improvement of the present technical solution, the step of S3 analyzing other image information in combination with the image information is as follows:
s3.1, extracting features of the image information of the fusion area acquired by the S1.2, and carrying out matching classification on the images according to adjacent positions;
And S3.2, carrying out a combined evaluation of the images matched and classified in S3.1, and judging whether the image capture range covers the fusion area according to the evaluation result.
As a further improvement of the present technical solution, the step of performing fusion analysis on the image information in the fusion area in S4 is as follows:
s4.1, fusing the image data in the fusion area by using point cloud image fusion according to the judgment result of the S3.2;
And S4.2, projecting the image data into the space fused in S4.1 for data correction.
As a further improvement of the technical scheme, the expression of fusing the image data in the fusion area by using the point cloud image fusion in S4.1 is as follows:
the method comprises the steps that a PC represents point cloud data acquired by an automobile, an Image represents Image data corresponding to a point cloud position, P is obtained by fusing the point cloud data and the Image data, B represents a high-precision three-dimensional environment model, and in the fusion process, spatial scale transformation is used for improving the precision and smoothness of the model.
As a further improvement of the present technical solution, the expression by which S4.2 projects the image data into the space fused in S4.1 for data correction is as follows:
road edge:
dS represents the depth of the point cloud data, and dI represents the gradient information of the image data;
lane line:
Cleval represents an evaluation function of the lane-line gradient, and Th represents the lane-line threshold; the data are input into the three-dimensional environment model to replace the corresponding image data, so that the road-edge and lane-line data in the three-dimensional environment model are supplemented and corrected.
As a further improvement of the present technical solution, the step of providing the driving scheme according to the prediction result in S5 is as follows:
s5.1, collecting automobile driving route information;
s5.2, carrying out combination prediction according to the space corrected by the data of S4.2 and the automobile driving route information acquired by S5.1, and providing an automatic driving decision of the automobile.
The second object of the invention is to provide a vehicle three-dimensional point cloud image fusion system, which comprises the vehicle three-dimensional point cloud image fusion method according to any one of the above, and comprises an image acquisition unit, an anomaly sending unit, a feature extraction unit, a fusion analysis unit and a running prediction unit;
the image acquisition unit is used for acquiring image information of the fusion area and detecting the image information;
the abnormal sending unit is used for sending abnormal early warning information to the vehicle according to the image information detection result;
the feature extraction unit is used for extracting features from the collected image information, searching for other image information within range according to the extraction result, and analyzing the other image information in combination with the original image information;
the fusion analysis unit is used for performing fusion analysis by combining the analysis result with the image information in the fusion area;
the driving prediction unit is used for predicting the image fusion analysis data in combination with the driving route of the automobile to provide a driving scheme.
Compared with the prior art, the invention has the following beneficial effects: by combining point cloud data and image data, the complementary advantages of both are fully exploited, providing more comprehensive and accurate environment-perception information and avoiding missed detection of anomalies around the vehicle; the method is also robust to factors such as illumination and weather changes, can provide reliable fusion results under different conditions, and makes corresponding automatic-driving decisions through accurate environment data acquisition, thereby improving the running stability of the vehicle.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a block flow diagram of detecting image information according to the present invention;
FIG. 3 is a block flow diagram of the present invention for analyzing other image information in combination with image information;
FIG. 4 is a flow chart of the fusion analysis of image information in a fusion area according to the present invention;
FIG. 5 is a block flow diagram of the present invention for providing a driving scheme;
fig. 6 is a block flow diagram of an image acquisition unit of the present invention.
The meaning of each reference sign in the figure is:
10. image acquisition unit; 20. abnormality transmission unit; 30. feature extraction unit; 40. fusion analysis unit; 50. travel prediction unit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1 to 6, one of the purposes of the present invention is to provide a vehicle three-dimensional point cloud image fusion method, which includes the following steps:
S1, taking the vehicle as the center, extending the fusion area outward, generally by 1-2 meters, to form the fusion area; collecting image information of the fusion area, and detecting the image information;
the step of detecting the image information in the S1 is as follows:
S1.1, planning the fusion area according to the degree to which the surrounding environment affects the normal running of the vehicle: a circular range with a radius of two meters is set with the vehicle as its center, and image information of that circular range is collected;
S1.2, acquiring image information of the fusion area centered on the vehicle in real time, and evaluating the sharpness of the acquired image information. The image information is collected as sensor data, with a lidar and a camera simultaneously capturing point cloud data and image data around the vehicle;
The expressions used in S1.2 to evaluate the sharpness of the acquired image information are as follows:
A is a sharpness evaluation method based on the Fourier transform, used to evaluate how much high-frequency information is present in the image, and is calculated as follows:
wherein I represents the extracted pixel information, FFT(I) represents the spectrum obtained by Fourier transforming it, and Mean(FFT(I)) represents the average value of the spectrum.
B is a sharpness evaluation method based on gradient information, used to evaluate the strength of the gradient information in the image, and is calculated as follows:
Laplacian(I) represents the gradient information obtained by applying the Laplacian operator to the pixel information; S is the sharpness score: the higher the score, the clearer the image, and when S is smaller than 10, the sharpness is deemed substandard;
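The concrete formulas for A, B, and S are given only as images in the original filing and are not reproduced in this text. As a hedged illustration of the two described metrics, the sketch below computes a Fourier-based score (mean spectrum magnitude, a proxy for A) and a Laplacian-based score (variance of the Laplacian response, a common proxy for B) on a tiny grayscale image; the function names and exact definitions are assumptions, not the patent's formulas:

```python
import cmath

def dft2_mean_magnitude(img):
    """Mean |FFT(I)| over all frequency bins (naive O(N^4) 2-D DFT);
    a sharper image holds more high-frequency energy, raising the mean."""
    h, w = len(img), len(img[0])
    total = 0.0
    for u in range(h):
        for v in range(w):
            acc = 0j
            for x in range(h):
                for y in range(w):
                    acc += img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / h + v * y / w))
            total += abs(acc)
    return total / (h * w)

def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over interior pixels,
    a widely used gradient-based sharpness score."""
    h, w = len(img), len(img[0])
    vals = [img[x - 1][y] + img[x + 1][y] + img[x][y - 1] + img[x][y + 1] - 4 * img[x][y]
            for x in range(1, h - 1) for y in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A high-contrast checkerboard scores higher than a flat (blurred) patch.
sharp = [[0, 255, 0, 255], [255, 0, 255, 0], [0, 255, 0, 255], [255, 0, 255, 0]]
flat = [[128] * 4 for _ in range(4)]
assert laplacian_variance(sharp) > laplacian_variance(flat)
assert dft2_mean_magnitude(sharp) > dft2_mean_magnitude(flat)
```

In practice the spectrum would come from an optimized FFT library rather than the naive DFT shown here; the naive form only keeps the sketch self-contained.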
S2, based on the detection result of the image information in S1, sending abnormality early-warning information to the vehicle. When the image acquired by a camera contains a black area, for example because the camera is blocked or damaged, early-warning information is sent to the vehicle-management cloud, and the user can view it in the vehicle's central-control area;
s3, extracting features of the image information acquired in the S1, searching other image information in the range according to the image extraction result, and analyzing the other image information in combination with the image information;
the step of analyzing the other image information combined with the image information in the step S3 is as follows:
S3.1, extracting features from the image information of the fusion area acquired in S1.2, and matching and classifying the images according to adjacent positions. Feature extraction: the acquired image is first preprocessed, for example by smoothing, filtering, and cropping; then features such as texture, color, and shape are extracted from the preprocessed image using image-processing algorithms. Converting the image into a grayscale image reduces the amount of computation required for image processing and thereby improves processing speed. The graying expression is as follows:
wherein R, G, B represent the red, green, and blue channel values of the original image, respectively, and Gray represents the pixel value of the grayscale image;
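The graying expression itself is not reproduced in this text. The sketch below assumes the standard ITU-R BT.601 luma weights (0.299, 0.587, 0.114), which the description of weighted R, G, B channel values suggests but does not confirm:

```python
def to_gray(r, g, b):
    """Weighted grayscale value of one pixel (assumed BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gray_image(rgb):
    """Convert a nested-list H x W x 3 colour image to an H x W grayscale image."""
    return [[to_gray(*px) for px in row] for row in rgb]

img = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
gray = gray_image(img)
assert abs(gray[1][1] - 255.0) < 1e-6        # white maps to full intensity
assert gray[0][1] > gray[0][0] > gray[1][0]  # G weighted above R, R above B
```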
And S3.2, carrying out a combined evaluation of the images matched and classified in S3.1, and judging whether the image capture range covers the fusion area according to the evaluation result. The method comprises the following steps:
feature extraction:
feature matching:
dead angle judgment:
wherein SIFT represents a feature extraction algorithm, x1 and x2 represent two different images, Match represents a feature-matching algorithm, ExistDeadZone represents a dead-angle judgment function, and Threshold is a threshold on the number of matching points; when ExistDeadZone is smaller than Threshold, no dead-angle area exists;
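The SIFT, Match, and ExistDeadZone expressions appear only as images in the original filing. As a hedged sketch, the coverage check can be abstracted like this: adjacent images that share enough matched feature points overlap, and any pair below the match-count threshold indicates a possible uncovered (dead-angle) region. The function and variable names are illustrative, not the patent's:

```python
def exist_dead_zone(pair_match_counts, threshold):
    """True if any adjacent image pair shares fewer matched feature
    points than the threshold, i.e. coverage of the fusion area may
    have a gap between those two capture ranges."""
    return any(count < threshold for count in pair_match_counts)

# Hypothetical matched-keypoint counts between four adjacent camera views.
assert exist_dead_zone([42, 37, 5, 51], threshold=10)        # one weak overlap
assert not exist_dead_zone([42, 37, 25, 51], threshold=10)   # full coverage
```

In a real pipeline the counts would come from a descriptor matcher (for example, OpenCV's SIFT plus a brute-force matcher) rather than being supplied directly.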
s4, carrying out fusion analysis on the image information in the fusion area based on the analysis result of the S3;
The fusion analysis of the image information in the fusion area in S4 comprises the following steps:
s4.1, fusing the image data in the fusion area by using point cloud image fusion according to the judgment result of the S3.2;
And S4.2, projecting the image data into the space fused in S4.1 for data correction.
The expression of fusing the image data in the fusion area by using the point cloud image fusion in the S4.1 is as follows:
PC represents the point cloud data acquired by the automobile, Image represents the image data corresponding to the point cloud position, P is obtained by fusing the point cloud data and the image data, and B represents the high-precision three-dimensional environment model; during fusion, spatial scale transformation is used to improve the precision and smoothness of the model.
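The fusion expression relating PC, Image, P, and B is likewise not reproduced in this text. A minimal, commonly used realisation is sketched below under stated assumptions: each lidar point, already expressed in the camera frame, is projected through a pinhole intrinsic model and picks up the colour of the pixel it lands on, yielding a coloured point cloud as a crude stand-in for the environment model. The intrinsics and function names are illustrative only:

```python
def project_point(pt, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates."""
    x, y, z = pt
    return fx * x / z + cx, fy * y / z + cy

def colour_point_cloud(points, image, fx, fy, cx, cy):
    """Attach the image colour to every point that projects inside the image;
    points behind the camera or outside the frame are dropped."""
    h, w = len(image), len(image[0])
    fused = []
    for pt in points:
        if pt[2] <= 0:  # behind the camera plane
            continue
        u, v = project_point(pt, fx, fy, cx, cy)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            fused.append((pt, image[vi][ui]))
    return fused

image = [[(10, 10, 10), (200, 0, 0)], [(0, 200, 0), (0, 0, 200)]]
pc = [(0.0, 0.0, 5.0), (0.5, 0.5, 1.0), (0.0, 0.0, -1.0)]
fused = colour_point_cloud(pc, image, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
assert len(fused) == 2             # the point behind the camera is dropped
assert fused[1][1] == (0, 0, 200)  # (0.5, 0.5, 1.0) lands on pixel (1, 1)
```

A production system would additionally apply the lidar-to-camera extrinsic transform and the spatial scale transformation the patent mentions; both are omitted here for brevity.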
The expression by which S4.2 projects the image data into the space fused in S4.1 for data correction is as follows:
road edge:
dS represents the depth of the point cloud data, and dI represents the gradient information of the image data;
lane line:
Cleval represents an evaluation function of the lane-line gradient, and Th represents the lane-line threshold; the data are input into the three-dimensional environment model to replace the corresponding image data, so that the road-edge and lane-line data in the three-dimensional environment model are supplemented and corrected.
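The Cleval/Th correction expressions are also given only as images. The sketch below assumes one simple interpretation of the described step: wherever the lane-line evaluation score exceeds the threshold, the corresponding entry in the environment model is replaced with the corrected, image-derived value. All names and the replacement rule itself are assumptions for illustration:

```python
def correct_lane_data(model, scores, observations, th):
    """Replace a model entry wherever its lane-line evaluation score
    exceeds the threshold th; otherwise keep the existing model value."""
    return [obs if score > th else m
            for m, score, obs in zip(model, scores, observations)]

model = [0.0, 0.0, 0.0, 0.0]         # lane data currently in the 3-D model
scores = [0.1, 0.9, 0.8, 0.2]        # Cleval-style gradient scores per cell
observations = [1.0, 1.0, 1.0, 1.0]  # corrected image-derived lane data
assert correct_lane_data(model, scores, observations, th=0.5) == [0.0, 1.0, 1.0, 0.0]
```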
S5, based on the image fusion analysis data of the S4, the vehicle driving route is predicted, and a driving scheme is provided.
The step of providing the driving scheme according to the prediction result in S5 is as follows:
S5.1, collecting automobile driving-route information: a GPS device is installed on the vehicle, and during driving the GPS receiver periodically collects the positioning data broadcast by the satellites, thereby obtaining the vehicle's position information;
S5.2, carrying out combined prediction from the space corrected in S4.2 and the automobile driving-route information collected in S5.1, and providing the automobile's automatic-driving decision. Environment perception is performed using the fused three-dimensional environment model, and three-dimensional object detection is carried out on it with deep learning. The goal is to identify and locate obstacles in the environment, including pedestrians, traffic signs, and vehicles, which can be achieved through point-cloud-based object detection and conventional object-recognition techniques.
Road segmentation:
According to the fused three-dimensional environment model, road areas can be identified and obstacles affecting vehicle driving, such as pedestrians and other vehicles, removed; this enables management of the vehicle's driving state in different driving environments such as cities and rural roads.
Lane line detection:
Lane-line recognition depends on data accuracy and the vehicle's driving state; road data around the vehicle are built with the help of the fused three-dimensional environment model to improve the vehicle's steady-state control and cornering accuracy. Together, obstacle detection, road segmentation, and lane-line recognition provide accurate environment information for automatic-driving decision and control;
the second object of the present invention is to provide a vehicle three-dimensional point cloud image fusion system, including any one of the above-mentioned vehicle three-dimensional point cloud image fusion methods, including an image acquisition unit 10, an anomaly transmission unit 20, a feature extraction unit 30, a fusion analysis unit 40, and a travel prediction unit 50;
the image acquisition unit 10 is used for acquiring image information of the fusion area and detecting the image information;
the anomaly sending unit 20 is used for sending anomaly early warning information to the vehicle according to the detection result of the image information;
the feature extraction unit 30 is used for extracting features of the collected image information, searching other image information in the range according to the image extraction result, and analyzing the other image information in combination with the image information;
the fusion analysis unit 40 is used for performing fusion analysis by combining the analysis result with the image information in the fusion area;
the driving prediction unit 50 is used for predicting the image fusion analysis data in combination with the driving route of the automobile to provide a driving scheme.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (9)

1. A vehicle three-dimensional point cloud image fusion method is characterized in that: the method comprises the following steps:
S1, taking the vehicle as the center, extending a fusion area outward, collecting image information of the fusion area, and detecting the image information;
S2, based on the detection result of the image information in S1, sending abnormality early-warning information to the vehicle;
S3, extracting features from the image information acquired in S1, searching for other image information within range according to the extraction result, and analyzing the other image information in combination with the original image information;
S4, carrying out fusion analysis on the image information in the fusion area based on the analysis result of S3;
S5, predicting the vehicle driving route based on the image fusion analysis data of S4, and providing a driving scheme.
2. The vehicle three-dimensional point cloud image fusion method according to claim 1, characterized in that: the step of detecting the image information in the S1 is as follows:
s1.1, planning a fusion area according to the influence degree of the surrounding environment on the normal running of the vehicle;
s1.2, acquiring image information of a fusion area taking a vehicle as a center in real time, and evaluating the definition of the acquired image information.
3. The vehicle three-dimensional point cloud image fusion method according to claim 2, characterized in that: the expressions used in S1.2 for evaluating the sharpness of the collected image information are as follows:
A is a sharpness evaluation method based on the Fourier transform, used to evaluate how much high-frequency information is present in the image, and is calculated as follows:
wherein I represents the extracted pixel information, FFT represents the spectrum obtained after the Fourier transform, and Mean represents the average value of the spectrum;
B is a sharpness evaluation method based on gradient information, used to evaluate the strength of the gradient information in the image, and is calculated as follows:
wherein Laplacian represents the gradient information obtained by applying the Laplacian operator to the pixel information, and S is the sharpness score.
4. The vehicle three-dimensional point cloud image fusion method according to claim 2, characterized in that: the step of analyzing the other image information combined with the image information in the step S3 is as follows:
s3.1, extracting features of the image information of the fusion area acquired by the S1.2, and carrying out matching classification on the images according to adjacent positions;
And S3.2, carrying out a combined evaluation of the images matched and classified in S3.1, and judging whether the image capture range covers the fusion area according to the evaluation result.
5. The vehicle three-dimensional point cloud image fusion method according to claim 4, characterized in that: the fusion analysis of the image information in the fusion area in S4 comprises the following steps:
s4.1, fusing the image data in the fusion area by using point cloud image fusion according to the judgment result of the S3.2;
And S4.2, projecting the image data into the space fused in S4.1 for data correction.
6. The vehicle three-dimensional point cloud image fusion method according to claim 5, characterized in that: the expression of fusing the image data in the fusion area by using the point cloud image fusion in the S4.1 is as follows:
PC represents the point cloud data acquired by the automobile, Image represents the image data corresponding to the point cloud position, P is obtained by fusing the point cloud data and the image data, and B represents the high-precision three-dimensional environment model.
7. The vehicle three-dimensional point cloud image fusion method according to claim 5, characterized in that: the expression by which S4.2 projects the image data into the space fused in S4.1 for data correction is as follows:
road edge:
dS represents the depth of the point cloud data, and dI represents the gradient information of the image data;
lane line:
Cleval represents an evaluation function of the lane-line gradient, and Th represents the lane-line threshold; the data are input into the three-dimensional environment model to replace the corresponding image data, so that the road-edge and lane-line data in the three-dimensional environment model are supplemented and corrected.
8. The vehicle three-dimensional point cloud image fusion method according to claim 5, characterized in that: the step of providing the driving scheme according to the prediction result in S5 is as follows:
s5.1, collecting automobile driving route information;
s5.2, carrying out combination prediction according to the space corrected by the data of S4.2 and the automobile driving route information acquired by S5.1, and providing an automatic driving decision of the automobile.
9. The system for realizing vehicle three-dimensional point cloud image fusion, comprising the vehicle three-dimensional point cloud image fusion method according to any one of claims 1-8, characterized in that: comprises an image acquisition unit (10), an abnormality transmission unit (20), a feature extraction unit (30), a fusion analysis unit (40) and a running prediction unit (50);
the image acquisition unit (10) is used for acquiring image information of the fusion area and detecting the image information;
the anomaly sending unit (20) is used for sending anomaly early warning information to the vehicle according to the image information detection result;
the feature extraction unit (30) is used for extracting features from the collected image information, searching for other image information within range according to the extraction result, and analyzing the other image information in combination with the original image information;
the fusion analysis unit (40) is used for performing fusion analysis by combining the analysis result with the image information in the fusion area;
the driving prediction unit (50) is used for predicting the image fusion analysis data in combination with the driving route of the automobile to provide a driving scheme.
CN202311465599.1A 2023-11-07 2023-11-07 Vehicle three-dimensional point cloud image fusion method and system Pending CN117197019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311465599.1A CN117197019A (en) 2023-11-07 2023-11-07 Vehicle three-dimensional point cloud image fusion method and system


Publications (1)

Publication Number Publication Date
CN117197019A 2023-12-08

Family

ID=88998292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311465599.1A Pending CN117197019A (en) 2023-11-07 2023-11-07 Vehicle three-dimensional point cloud image fusion method and system

Country Status (1)

Country Link
CN (1) CN117197019A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911829A (en) * 2024-03-15 2024-04-19 山东商业职业技术学院 Point cloud image fusion method and system for vehicle navigation
CN117911829B (en) * 2024-03-15 2024-05-31 山东商业职业技术学院 Point cloud image fusion method and system for vehicle navigation

Citations (5)

Publication number Priority date Publication date Assignee Title
CN112509333A (en) * 2020-10-20 2021-03-16 智慧互通科技股份有限公司 Roadside parking vehicle track identification method and system based on multi-sensor sensing
CN114842438A (en) * 2022-05-26 2022-08-02 重庆长安汽车股份有限公司 Terrain detection method, system and readable storage medium for autonomous driving vehicle
CN115187964A (en) * 2022-09-06 2022-10-14 中诚华隆计算机技术有限公司 Automatic driving decision-making method based on multi-sensor data fusion and SoC chip
CN115876198A (en) * 2022-11-28 2023-03-31 烟台艾睿光电科技有限公司 Target detection and early warning method, device, system and medium based on data fusion
CN116363100A (en) * 2023-03-31 2023-06-30 东软睿驰汽车技术(上海)有限公司 Image quality evaluation method, device, equipment and storage medium


Non-Patent Citations (1)

Title
朱孔凤 et al.: "Proceedings of the 2004 National Optoelectronic Technology Academic Exchange Conference (Vol. 2)", Optoelectronic Technology Professional Committee of the Chinese Society of Astronautics, pages 992-993 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination