CN116129553A - Fusion sensing method and system based on multi-source vehicle-mounted equipment - Google Patents

Fusion sensing method and system based on multi-source vehicle-mounted equipment

Info

Publication number
CN116129553A
CN116129553A
Authority
CN
China
Prior art keywords
vehicle
data
fusion
state data
vehicle state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310347027.7A
Other languages
Chinese (zh)
Inventor
陈雪梅
薛杨武
肖龙
杨宏伟
赵小萱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Huichuang Information Technology Co ltd
Advanced Technology Research Institute of Beijing Institute of Technology
Original Assignee
Shandong Huichuang Information Technology Co ltd
Advanced Technology Research Institute of Beijing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Huichuang Information Technology Co ltd, Advanced Technology Research Institute of Beijing Institute of Technology filed Critical Shandong Huichuang Information Technology Co ltd
Priority to CN202310347027.7A
Publication of CN116129553A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G07C5/085 Registering performance data using electronic data carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G07C5/085 Registering performance data using electronic data carriers
    • G07C5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of fusion perception, and in particular to a fusion perception method and system based on multi-source vehicle-mounted equipment. The method comprises the steps of acquiring environmental data and vehicle state data around a vehicle; performing time synchronization on the acquired environmental data and vehicle state data; performing feature extraction on the time-synchronized environmental data; and fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result. By globally tracking the targets detected by the multiple sensors, the invention can effectively realize functions such as environment sensing, obstacle detection, trajectory prediction, early-warning information and navigation. At the driving-road perception level, the equipment can identify pedestrian walkways ahead, roadside traffic signs, traffic lights, road topography, road conditions and the like; it can accurately identify, in real time, objects that affect traffic safety, and reliably and accurately plan a travel path that ensures a compliant, safe and rapid arrival at the destination.

Description

Fusion sensing method and system based on multi-source vehicle-mounted equipment
Technical Field
The invention relates to the technical field of fusion perception, in particular to a fusion perception method and system based on multi-source vehicle-mounted equipment.
Background
With the continued rapid development of the economy and of science and technology, the number of motor vehicles is growing rapidly, vehicles of different levels of intelligence are gradually entering the market, and intelligent connected vehicles and ordinary vehicles will be mixed on the roads in varying proportions for a long time to come. In the face of increasingly complex urban road traffic systems, improving traffic management and traffic operating efficiency while ensuring traffic safety and stability is particularly important. Research on the operating state of such heterogeneous traffic flow therefore has great theoretical significance and application value, and the real-time, accurate acquisition of traffic parameter information is the basis of that research. Meanwhile, technical development has gradually improved the performance of traffic detectors, and a wide variety of such equipment is available.
However, the conventional information acquisition approach generally relies on a single type of sensor, and a single sensor cannot acquire sufficiently comprehensive information because of its inherent limitations. For example, the field of view of a video detector is degraded in severe weather with poor lighting conditions such as heavy fog, rain and snow, and occlusion by large vehicles easily causes small vehicles to be missed. A millimeter-wave radar can penetrate fog, smoke and dust and obtain parameters such as distance, speed and angle by capturing reflected signals, but because of its high resolution it may register different positions on a single large vehicle as multiple detections, producing detection errors. Moreover, the data collected by a single sensor lacks completeness and intelligence. Its 3D environment modeling capability has made the laser radar a core sensor, but a laser radar cannot recognize images or colors and its performance degrades markedly in severe weather. Millimeter-wave radar provides all-weather perception, but its resolution is low and it is difficult to use for imaging. Cameras are inexpensive and can recognize traffic participants and traffic signs, but they cannot provide 3D structural modeling or long-distance ranging. A fusion perception method and system based on multi-source vehicle-mounted equipment is therefore needed.
Disclosure of Invention
In order to solve the above-mentioned problems, the invention provides a fusion sensing method and a fusion sensing system based on multi-source vehicle-mounted equipment.
In a first aspect, the present invention provides a fusion sensing method based on a multi-source vehicle-mounted device, which adopts the following technical scheme:
a fusion sensing method based on multi-source vehicle-mounted equipment comprises the following steps:
acquiring environmental data and vehicle state data around a vehicle;
performing time synchronization on the acquired environmental data and vehicle state data;
performing feature extraction on the time-synchronized environmental data;
and fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result.
Further, acquiring the environmental data and vehicle state data around the vehicle includes acquiring the environmental data around the vehicle by using a vehicle-mounted laser radar, a millimeter-wave radar and a camera.
Further, acquiring the environmental data and vehicle state data around the vehicle includes acquiring the speed, yaw angle and position information of the vehicle during driving by using the vehicle's inertial navigation system, and taking the speed, yaw angle and position information as the vehicle state data.
Further, the time synchronization of the acquired environmental data and vehicle state data includes controlling the sampling period and sampling frame rate so that a common frame of data is sampled, thereby ensuring time synchronization.
Further, the feature extraction on the time-synchronized environmental data and vehicle state data includes extracting feature maps of the vehicle's environmental data by using a pre-trained CNN model.
Further, fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result includes concatenating the extracted feature maps to form a multi-scale feature map structure and sending the feature map structure into a neural network.
Further, fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result further includes using the neural network to obtain the perception result from the feature map structure.
In a second aspect, the present invention provides a fusion sensing system based on multi-source vehicle-mounted equipment, comprising:
a data acquisition module configured to acquire environmental data and vehicle state data around a vehicle;
a synchronization module configured to perform time synchronization on the acquired environmental data and vehicle state data;
a feature extraction module configured to perform feature extraction on the time-synchronized environmental data;
and a fusion sensing module configured to fuse the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result.
In a third aspect, the present invention provides a computer-readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the fusion sensing method based on multi-source vehicle-mounted equipment.
In a fourth aspect, the present invention provides a terminal device, including a processor and a computer-readable storage medium, wherein the processor is configured to implement instructions, and the computer-readable storage medium is used to store a plurality of instructions adapted to be loaded by the processor and to perform the fusion sensing method based on multi-source vehicle-mounted equipment.
In summary, the invention has the following beneficial technical effects:
the invention can effectively realize the functions of environment sensing, obstacle detection, track prediction, early warning information, navigation and the like by carrying out global tracking on targets detected by the multiple sensors. The equipment can identify a front Fang Renhang road, a traffic road side mark, a traffic signal lamp, road topography, road conditions and the like on a driving road perception level; the method can accurately identify the objects which affect traffic safety in real time, and reliably and accurately identify the planned travel path which can ensure the standard, safe and rapid arrival at the destination.
Drawings
Fig. 1 is a flow chart of a fusion perception method based on a multi-source vehicle-mounted device in embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, a fusion sensing method based on a multi-source vehicle-mounted device of the present embodiment includes:
acquiring environmental data and vehicle state data around a vehicle; performing time synchronization on the acquired environmental data and vehicle state data; performing feature extraction on the time-synchronized environmental data; and fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result. Acquiring the environmental data and vehicle state data around the vehicle includes acquiring the environmental data around the vehicle by using a vehicle-mounted laser radar, a millimeter-wave radar and a camera, and acquiring the speed, yaw angle and position information of the vehicle during driving by using the vehicle's inertial navigation system, the speed, yaw angle and position information serving as the vehicle state data. Time synchronization of the acquired environmental data and vehicle state data is achieved by controlling the sampling period and sampling frame rate so that a common frame of data is sampled, ensuring time synchronization. Feature extraction on the time-synchronized environmental data includes extracting feature maps of the vehicle's environmental data with a pre-trained CNN model. Fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result includes concatenating the extracted feature maps into a multi-scale feature map structure, sending this structure into a neural network, and using the neural network to obtain the perception result from the feature map structure.
Specifically:
the invention utilizes the multisource sensor to comprise a laser radar, a millimeter wave radar and a camera which are arranged on the vehicle; the laser radar is used for generating original point cloud data and comprises 4-line, 16-line and 32-line laser radars; the 4-line laser radar is arranged at the center position in front of the vehicle;
the 16-line laser radar is positioned in front of the vehicle to symmetrically divide the two sides of the 4-line laser radar;
the 32-line radar is positioned at the top of the vehicle;
the millimeter wave radar is used for generating millimeter wave signal data containing obstacle information, and comprises an ESR millimeter wave radar and an RSDS millimeter wave radar;
the ESR millimeter wave radar is arranged in front of the vehicle and is positioned above the 4-line laser radar;
the RSDS millimeter wave radar is arranged below the 16-line laser radar;
the camera is used for collecting image and video information data, and comprises a common camera and a fisheye camera.
The common camera is positioned at the front upper part of the cab and is used for collecting images and video information;
the fish-eye camera is arranged at the reflector of the vehicle and used for collecting image information.
Among the multi-source sensors, the millimeter-wave radar obtains the distance, speed and angle of a target object mainly by transmitting electromagnetic waves toward the target and receiving the echoes.
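The patent does not specify the radar waveform, but as an illustration only, the sketch below assumes a linear FMCW (frequency-modulated continuous-wave) millimeter-wave radar and shows how range and radial velocity can be recovered from the beat and Doppler frequencies of the echo; all parameter values are hypothetical and not taken from the patent.

```python
# Illustrative sketch only: assumes a linear FMCW millimeter-wave radar.
# The chirp parameters below are hypothetical, not taken from the patent.

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz: float, chirp_duration_s: float, bandwidth_hz: float) -> float:
    """Range from the beat frequency of a single up-chirp: R = c * f_b * T / (2 * B)."""
    return C * beat_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

def fmcw_radial_velocity(doppler_freq_hz: float, carrier_freq_hz: float) -> float:
    """Radial velocity from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * doppler_freq_hz / (2.0 * carrier_freq_hz)

if __name__ == "__main__":
    # Hypothetical 77 GHz radar with a 150 MHz chirp lasting 50 microseconds.
    r = fmcw_range(beat_freq_hz=100e3, chirp_duration_s=50e-6, bandwidth_hz=150e6)
    v = fmcw_radial_velocity(doppler_freq_hz=5.0e3, carrier_freq_hz=77e9)
    print(f"range = {r:.1f} m, radial velocity = {v:.2f} m/s")
```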
The key to spatial fusion of the multi-sensor data is to establish accurate coordinate conversion relations among the radar coordinate system, the three-dimensional world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system. Spatial fusion of the radar and the vision sensor converts measurements from the different sensor coordinate systems into a common coordinate system; since the forward system is vision-based, spatial synchronization of the multiple sensors is achieved by converting measurement points from the radar coordinate system, through these coordinate conversions, into the pixel coordinate system of the camera.
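As a minimal sketch of the coordinate conversion described above, the code below projects a point measured in the radar coordinate system into the camera's pixel coordinate system using an assumed radar-to-camera extrinsic transform and camera intrinsic matrix; the numerical values of both are placeholders that would normally come from calibration, which the patent assumes but does not detail.

```python
import numpy as np

# Minimal sketch of radar-to-pixel projection. The extrinsic and intrinsic
# matrices below are placeholder values, not calibration results.

# Extrinsic transform: radar coordinate system -> camera coordinate system.
R_radar_to_cam = np.eye(3)                    # rotation (placeholder: aligned axes)
t_radar_to_cam = np.array([0.0, 0.2, 0.1])    # translation in meters (placeholder)

# Camera intrinsic matrix (placeholder focal lengths / principal point).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def radar_point_to_pixel(p_radar: np.ndarray) -> tuple[float, float]:
    """Project a 3D point measured by the radar into image pixel coordinates."""
    p_cam = R_radar_to_cam @ p_radar + t_radar_to_cam   # radar -> camera frame
    u, v, w = K @ p_cam                                  # camera frame -> image plane
    return u / w, v / w                                   # homogeneous -> pixel

# Example: a radar detection 20 m ahead, 1 m to the left (camera z forward).
print(radar_point_to_pixel(np.array([-1.0, 0.0, 20.0])))
```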
To fuse the radar and visual information spatially, the sensors must also acquire data synchronously in time, i.e., temporal fusion must be achieved. According to the millimeter-wave radar function manual, its sampling period is 50 ms, i.e., a sampling frame rate of 20 frames/second, while the camera's sampling frame rate is 25 frames/second. To ensure data reliability, the camera's sampling rate is taken as the reference: each time the camera acquires one frame of image, the data buffered by the millimeter-wave radar is selected, so that radar data of the same frame is fused with the vision data, ensuring time synchronization of the millimeter-wave radar data and the camera data.
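A minimal sketch of this camera-referenced time alignment (each camera frame paired with the most recently buffered radar frame); the timestamps and buffer structure are illustrative assumptions, not part of the patent.

```python
from bisect import bisect_right

# Sketch of camera-referenced time synchronization: for each camera frame
# (25 frames/s), pick the latest millimeter-wave radar frame (20 frames/s,
# 50 ms period) that has already been buffered. Timestamps are in seconds.

def pair_camera_with_radar(camera_ts, radar_ts):
    """Return (camera_time, radar_time) pairs using the latest buffered radar frame."""
    pairs = []
    for t_cam in camera_ts:
        i = bisect_right(radar_ts, t_cam)     # radar frames with timestamp <= t_cam
        if i == 0:
            continue                          # no radar data buffered yet, skip frame
        pairs.append((t_cam, radar_ts[i - 1]))
    return pairs

camera_ts = [k * 0.04 for k in range(10)]     # 25 fps -> 40 ms period
radar_ts = [k * 0.05 for k in range(8)]       # 20 fps -> 50 ms period
for cam_t, rad_t in pair_camera_with_radar(camera_ts, radar_ts):
    print(f"camera {cam_t:.2f}s  <->  radar {rad_t:.2f}s")
```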
The perception network of this embodiment adopts three parts: a feature extraction stage, a backbone network and an RPN structure; the detailed steps are as follows:
1. In the feature extraction stage, a feature extraction module divides the point cloud of the whole scene into three-dimensional grid cells of equal size; the input is point cloud data containing the coordinate values and reflection intensity of each point. To fix the number of points in each three-dimensional cell, cells with too few points are padded with zeros up to the fixed number, while cells with too many points are randomly down-sampled to the fixed number. The center of gravity of each cell is then calculated to obtain the offset of every point from that center of gravity, the offsets are concatenated to the point features, and several PointNet networks are used to extract high-dimensional features of the point cloud within each cell. In addition, a CNN model pre-trained on a classification task (such as ImageNet) is used as a feature extractor: an image of size D×W×H is input and processed by the pre-trained CNN model to obtain a convolutional feature map (conv feature map), and the output of the convolutional layer is flattened into a one-dimensional vector.
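The sketch below illustrates the fixed-point-count grid sampling and center-offset augmentation described in step 1 (zero-padding sparse cells, randomly down-sampling dense ones, concatenating each point's offset from the cell's center of gravity). The grid size and per-cell point limit are assumed values, and the subsequent PointNet-style feature extractor is only indicated in a comment, not implemented.

```python
import numpy as np

# Illustrative voxelization sketch (assumed grid size and per-cell point limit).
# Input points: (x, y, z, reflectance). Output per occupied cell: an array of
# shape (MAX_POINTS, 7) = original 4 features + 3 offsets from the cell's
# center of gravity, zero-padded or randomly down-sampled to MAX_POINTS.

VOXEL_SIZE = np.array([0.2, 0.2, 0.4])   # meters, hypothetical
MAX_POINTS = 32                           # hypothetical per-cell limit

def voxelize(points: np.ndarray) -> dict:
    cells = {}
    for p in points:
        key = tuple((p[:3] // VOXEL_SIZE).astype(int))
        cells.setdefault(key, []).append(p)

    out = {}
    for key, pts in cells.items():
        pts = np.asarray(pts, dtype=np.float32)
        if len(pts) > MAX_POINTS:                        # too many points: random subset
            idx = np.random.choice(len(pts), MAX_POINTS, replace=False)
            pts = pts[idx]
        centroid = pts[:, :3].mean(axis=0)               # center of gravity of the cell
        offsets = pts[:, :3] - centroid                   # per-point offset features
        feat = np.concatenate([pts, offsets], axis=1)     # (n, 7)
        if len(feat) < MAX_POINTS:                        # too few points: zero padding
            pad = np.zeros((MAX_POINTS - len(feat), 7), dtype=np.float32)
            feat = np.concatenate([feat, pad], axis=0)
        out[key] = feat                                    # ready for a PointNet-style MLP
    return out

cloud = np.random.rand(1000, 4).astype(np.float32) * [40, 40, 4, 1]
voxels = voxelize(cloud)
print(len(voxels), "occupied cells, each", next(iter(voxels.values())).shape)
```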
2. The feature extraction module extracts a feature map from the original image using a series of convolution and pooling operations, and learns the positions of targets from the feature map through network training. Targets to be classified are extracted from the feature map, which is divided into a number of small regions to obtain the coordinates of the foreground regions; fixed-length data obtained by this mapping is used as the input of the network, and, with the center of the current sliding window as the center, the point cloud is mapped into a pseudo-image structure ready to be sent to the backbone network. Candidate regions in the detected pseudo-image are extracted by a Region Proposal Network. By contrast, R-CNN extracts (proposes) possible RoIs (regions of interest) using the Selective Search algorithm and then classifies each extracted region with a standard CNN: Selective Search places about 2000 candidate regions of different shapes, sizes and positions around the target object, and these regions are then passed through the convolutional network to find the target object.
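As a sketch of "mapping the point cloud into a pseudo-image structure", the code below scatters per-cell feature vectors onto a 2D grid indexed by each cell's (x, y) position, producing a channels-by-height-by-width tensor that a 2D backbone can consume; the channel count and grid extent are assumed values.

```python
import numpy as np

# Sketch: scatter per-cell feature vectors into a dense 2D pseudo-image
# (channels x height x width) indexed by each cell's (x, y) grid coordinates.
# Grid extent and channel count are assumed for illustration.

GRID_H, GRID_W, CHANNELS = 200, 200, 64

def build_pseudo_image(cell_features: dict) -> np.ndarray:
    """cell_features maps (ix, iy) grid indices to a CHANNELS-dim feature vector."""
    canvas = np.zeros((CHANNELS, GRID_H, GRID_W), dtype=np.float32)
    for (ix, iy), feat in cell_features.items():
        if 0 <= ix < GRID_W and 0 <= iy < GRID_H:
            canvas[:, iy, ix] = feat        # one feature column per occupied cell
    return canvas

# Example with a handful of occupied cells carrying random features.
cells = {(10, 20): np.random.rand(CHANNELS), (50, 80): np.random.rand(CHANNELS)}
pseudo_image = build_pseudo_image(cells)
print(pseudo_image.shape)   # (64, 200, 200), ready for the 2D backbone
```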
3. The backbone network consists of two parts. The first part is a top-down network structure that mainly increases the number of channels of the feature map while reducing its resolution; the second part processes the feature maps of the first part with several up-sampling operations and concatenates the results into a multi-scale feature map structure ready to be sent to the final stage of the network, so that the whole object detection pipeline is integrated into a single neural network.
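A minimal PyTorch-style sketch of the backbone in step 3: a top-down branch that halves the resolution and widens the channels at each stage, and an upsampling branch that brings every stage back to a common resolution before concatenation into the multi-scale feature map. The layer widths, strides and number of stages are assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

# Sketch of the top-down + upsample-and-concatenate backbone described in
# step 3. Channel widths, strides and number of stages are assumed values.

class Backbone(nn.Module):
    def __init__(self, in_ch: int = 64):
        super().__init__()
        def down(ci, co):   # top-down block: halve resolution, widen channels
            return nn.Sequential(nn.Conv2d(ci, co, 3, stride=2, padding=1),
                                 nn.BatchNorm2d(co), nn.ReLU(inplace=True))
        def up(ci, co, s):  # upsample each stage back to a common resolution
            return nn.Sequential(nn.ConvTranspose2d(ci, co, s, stride=s),
                                 nn.BatchNorm2d(co), nn.ReLU(inplace=True))
        self.down1, self.down2, self.down3 = down(in_ch, 64), down(64, 128), down(128, 256)
        self.up1, self.up2, self.up3 = up(64, 128, 1), up(128, 128, 2), up(256, 128, 4)

    def forward(self, x):
        d1 = self.down1(x)          # 1/2 resolution
        d2 = self.down2(d1)         # 1/4 resolution
        d3 = self.down3(d2)         # 1/8 resolution
        # Concatenate the upsampled stages into one multi-scale feature map.
        return torch.cat([self.up1(d1), self.up2(d2), self.up3(d3)], dim=1)

features = Backbone()(torch.randn(1, 64, 200, 200))
print(features.shape)   # torch.Size([1, 384, 100, 100])
```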
4. In the RPN part, an RPN structure module receives the result processed by the backbone network. It operates mainly with several convolutional layers and finally uses three independent convolutions for classification of object types; two convolutions are applied to the feature map obtained from the 4 down-sampling layers, one convolution performing foreground/background classification and the other performing regression of the object position and estimation of the object orientation, estimating the probability that each region is a target or background and producing a fixed-length vector.
The fully convolutional network of the RPN structure module contains 2 convolutional layers. The first convolutional layer encodes the information of the convolutional feature map: each sliding-window position of the feature map is encoded into a feature vector that preserves the position of the encoded content relative to the original picture. The second convolutional layer processes the extracted convolutional feature map and searches for a predefined number of regions that may contain targets; for each sliding-window position, k anchors are used to output k region positions together with their object probabilities, the center of each anchor being located at the center of the convolution-kernel sliding window. A binary class label is assigned to each anchor, and the RPN computes outputs for W × H × k anchor points; at each position the classification output has length 2 × k (object/background scores for the k anchors), and the regression output gives the positions of the k regressed regions.
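The sketch below shows a detection head in the spirit of step 4 and the RPN description above: for each position of the feature map, three independent 1×1 convolutions predict, for each of k anchors, a foreground/background score, a box regression and an orientation estimate. The value of k, the channel widths and the 7-parameter box encoding are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Sketch of the RPN-style detection head: three independent 1x1 convolutions
# predict, for each of k anchors at every feature-map position, (a) a
# foreground/background score, (b) box regression offsets and (c) an
# orientation estimate. k, the input channel count and the box encoding
# are assumed values, not taken from the patent.

class RPNHead(nn.Module):
    def __init__(self, in_ch: int = 384, num_anchors: int = 2):
        super().__init__()
        k = num_anchors
        self.cls = nn.Conv2d(in_ch, 2 * k, 1)   # object / background per anchor (2k outputs)
        self.box = nn.Conv2d(in_ch, 7 * k, 1)   # x, y, z, w, l, h, yaw per anchor
        self.dir = nn.Conv2d(in_ch, 2 * k, 1)   # coarse orientation bin per anchor

    def forward(self, feats):
        return self.cls(feats), self.box(feats), self.dir(feats)

head = RPNHead()
scores, boxes, dirs = head(torch.randn(1, 384, 100, 100))
print(scores.shape, boxes.shape, dirs.shape)
# torch.Size([1, 4, 100, 100]) torch.Size([1, 14, 100, 100]) torch.Size([1, 4, 100, 100])
```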
Example 2
The embodiment provides a fusion sensing system based on multi-source vehicle-mounted equipment, which comprises:
a data acquisition module configured to acquire environmental data and vehicle state data around a vehicle;
a synchronization module configured to perform time synchronization on the acquired environmental data and vehicle state data;
a feature extraction module configured to perform feature extraction on the time-synchronized environmental data;
and a fusion sensing module configured to fuse the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result.
A computer-readable storage medium has stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the above fusion sensing method based on multi-source vehicle-mounted equipment.
A terminal device comprises a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer-readable storage medium is used to store a plurality of instructions adapted to be loaded by the processor and to perform the above fusion sensing method based on multi-source vehicle-mounted equipment.
The above embodiments are not intended to limit the scope of the present invention; all equivalent changes made according to the structure, shape and principle of the invention shall therefore fall within the scope of protection of the invention.

Claims (8)

1. A fusion sensing method based on multi-source vehicle-mounted equipment, characterized by comprising the following steps:
acquiring environmental data and vehicle state data around a vehicle;
performing time synchronization on the acquired environmental data and vehicle state data;
performing feature extraction on the time-synchronized environmental data;
and fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result.
2. The fusion sensing method based on multi-source vehicle-mounted equipment according to claim 1, wherein the acquiring of environmental data and vehicle state data around the vehicle comprises acquiring the environmental data around the vehicle by using a vehicle-mounted laser radar, a millimeter-wave radar and a camera.
3. The fusion sensing method based on multi-source vehicle-mounted equipment according to claim 2, wherein the acquiring of environmental data and vehicle state data around the vehicle comprises acquiring the speed, yaw angle and position information of the vehicle during driving by using an inertial navigation system of the vehicle, as the vehicle state data.
4. The fusion sensing method based on multi-source vehicle-mounted equipment according to claim 3, wherein the time synchronization of the acquired environmental data and vehicle state data comprises controlling the sampling period and sampling frame rate so that a common frame of data is sampled, ensuring time synchronization.
5. The fusion sensing method based on multi-source vehicle-mounted equipment according to claim 4, wherein the feature extraction on the time-synchronized environmental data and vehicle state data comprises performing feature extraction on the environmental data of the vehicle by using a pre-trained CNN model.
6. The fusion sensing method based on multi-source vehicle-mounted equipment according to claim 5, wherein fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result comprises concatenating the extracted feature maps to form a multi-scale feature map structure and sending the feature map structure into a neural network.
7. The fusion sensing method based on multi-source vehicle-mounted equipment according to claim 6, wherein fusing the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result further comprises using the neural network to obtain the perception result from the feature map structure.
8. A fusion sensing system based on multi-source vehicle-mounted equipment, comprising:
a data acquisition module configured to acquire environmental data and vehicle state data around a vehicle;
a synchronization module configured to perform time synchronization on the acquired environmental data and vehicle state data;
a feature extraction module configured to perform feature extraction on the time-synchronized environmental data;
and a fusion sensing module configured to fuse the features extracted from the environmental data with the vehicle state data by using a fusion algorithm to obtain a perception result.
CN202310347027.7A 2023-04-04 2023-04-04 Fusion sensing method and system based on multi-source vehicle-mounted equipment Pending CN116129553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310347027.7A CN116129553A (en) 2023-04-04 2023-04-04 Fusion sensing method and system based on multi-source vehicle-mounted equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310347027.7A CN116129553A (en) 2023-04-04 2023-04-04 Fusion sensing method and system based on multi-source vehicle-mounted equipment

Publications (1)

Publication Number Publication Date
CN116129553A true CN116129553A (en) 2023-05-16

Family

ID=86295850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310347027.7A Pending CN116129553A (en) 2023-04-04 2023-04-04 Fusion sensing method and system based on multi-source vehicle-mounted equipment

Country Status (1)

Country Link
CN (1) CN116129553A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507233A (en) * 2020-04-13 2020-08-07 吉林大学 Multi-mode information fusion intelligent vehicle pavement type identification method
CN111994066A (en) * 2020-10-29 2020-11-27 北京航空航天大学 Intelligent automobile sensing system based on intelligent tire touch sensing
CN113313154A (en) * 2021-05-20 2021-08-27 四川天奥空天信息技术有限公司 Integrated multi-sensor integrated automatic driving intelligent sensing device
CN113820714A (en) * 2021-09-07 2021-12-21 重庆驰知科技有限公司 Dust fog weather road environment perception system based on multi-sensor fusion
CN114783184A (en) * 2022-04-19 2022-07-22 江苏大学 Beyond-the-horizon sensing system based on information fusion of vehicle, road and unmanned aerial vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118062016A (en) * 2024-04-25 2024-05-24 深圳市天之眼高新科技有限公司 Vehicle environment sensing method, apparatus and storage medium
CN118062016B (en) * 2024-04-25 2024-07-09 深圳市天之眼高新科技有限公司 Vehicle environment sensing method, apparatus and storage medium

Similar Documents

Publication Publication Date Title
US11915470B2 (en) Target detection method based on fusion of vision, lidar, and millimeter wave radar
TWI841695B (en) Method, on-board computer and non-transitory computer-readable medium for radar-aided single image three-dimensional depth reconstruction
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
CN103176185B (en) Method and system for detecting road barrier
CN113313154A (en) Integrated multi-sensor integrated automatic driving intelligent sensing device
JP2023523243A (en) Obstacle detection method and apparatus, computer device, and computer program
CN110738121A (en) front vehicle detection method and detection system
CN111461088B (en) Rail transit obstacle avoidance system based on image processing and target recognition
CN112740225B (en) Method and device for determining road surface elements
CN112379674B (en) Automatic driving equipment and system
CN111413983A (en) Environment sensing method and control end of unmanned vehicle
CN112149460A (en) Obstacle detection method and device
CN114821507A (en) Multi-sensor fusion vehicle-road cooperative sensing method for automatic driving
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN115876198A (en) Target detection and early warning method, device, system and medium based on data fusion
CN115187964A (en) Automatic driving decision-making method based on multi-sensor data fusion and SoC chip
WO2022047744A1 (en) Road surface extraction method and device for map
CN117808689A (en) Depth complement method based on fusion of millimeter wave radar and camera
CN116129553A (en) Fusion sensing method and system based on multi-source vehicle-mounted equipment
CN112001272A (en) Laser radar environment sensing method and system based on deep learning
Ennajar et al. Deep multi-modal object detection for autonomous driving
Li et al. Composition and application of current advanced driving assistance system: A review
US20240096109A1 (en) Automatic lane marking extraction and classification from lidar scans
CN116386003A (en) Three-dimensional target detection method based on knowledge distillation
CN115965847A (en) Three-dimensional target detection method and system based on multi-modal feature fusion under cross view angle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230516