CN116486359A - All-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method - Google Patents
- Publication number
- CN116486359A (application number CN202310463049.XA)
- Authority
- CN
- China
- Prior art keywords
- weather
- intelligent vehicle
- network
- laser radar
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle (G06V20/00 Scenes; scene-specific elements; G06V20/50 Context or environment of the image)
- G01S17/04: Systems determining the presence of a target (G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems; G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves)
- G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles (G01S17/88 Lidar systems specially adapted for specific applications; G01S17/93 for anti-collision purposes)
- G01S17/95: Lidar systems specially adapted for meteorological use (G01S17/88)
- G01V8/10: Detecting, e.g. by using light barriers (G01V8/00 Prospecting or detecting by optical means)
- G01W1/00: Meteorology
- G06N3/0464: Convolutional networks [CNN, ConvNet] (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
- G06V10/764: Recognition using classification, e.g. of video objects (G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning)
- G06V10/803: Fusion of input or preprocessed data (G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level; G06V10/77 Processing image or video features in feature spaces)
- G06V10/806: Fusion of extracted features
- G06V10/809: Fusion of classification results, e.g. where the classifiers operate on the same input data
- G06V10/82: Recognition using neural networks
- G06V10/87: Using selection of the recognition techniques, e.g. of a classifier in a multiple classifier system
- G06V2201/07: Target detection (G06V2201/00 Indexing scheme relating to image or video recognition or understanding)
- Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention relates to the field of intelligent vehicle perception and multi-modal target detection, and in particular to an all-weather-oriented adaptive selection method for intelligent vehicle environment perception networks, comprising the following steps: (1) data acquisition; (2) data preprocessing; (3) weather identification; (4) perception network selection; (5) detection result output. In this method, the weather classification network fuses multi-modal sensor data based on the scattering of environmental particles and the illumination intensity, so weather conditions can be judged accurately and the optimal target detection algorithm can be selected for the current environmental conditions, improving the accuracy and robustness of target detection. The method also senses environmental changes in a timely manner, so the intelligent vehicle can detect targets accurately, in real time, and stably in all weather.
Description
Technical Field
The invention relates to the field of intelligent vehicle perception and multi-modal target detection, and in particular to an all-weather-oriented adaptive selection method for intelligent vehicle environment perception networks.
Background
Intelligent vehicles can greatly alleviate problems such as traffic accidents and congestion, and can also improve energy efficiency and passenger comfort; they have become a key focus of future research. With the rapid development of artificial intelligence and deep learning, the research and development of intelligent vehicles is accelerating. Accurate perception is a precondition for safe driving: only by accurately perceiving its surroundings can an intelligent vehicle make correct decisions in a timely manner and thus ensure safe operation.
At present, existing single-sensor and multi-sensor fusion detection algorithms are trained in specific environments, and their accuracy drops sharply when the weather or scene changes. To guarantee driving safety and traffic efficiency, the perception system must provide accurate and reliable target detection under different weather and illumination conditions, so a target detection framework that operates in all weather with high accuracy is essential for intelligent vehicles. An intelligent vehicle should be able to drive under common weather conditions such as rain, snow, and fog; indeed, one important reason intelligent vehicles have not been widely adopted is that they cannot operate around the clock. Environmental and weather changes also strongly affect the decision-making of intelligent vehicles, and in complex scenes such as insufficient illumination or severe weather, accurate perception can be difficult. Accurately identifying environmental conditions and adjusting the perception model to suit them is key to safe driving. An adaptive selection method for all-weather intelligent vehicle environment perception networks is therefore urgently needed to overcome these shortcomings in current practice.
Disclosure of Invention
The invention aims to provide an all-weather-oriented adaptive selection method for intelligent vehicle environment perception networks that solves the problems raised in the background section above.
In order to achieve the above purpose, the present invention provides the following technical solutions:
An all-weather-oriented adaptive selection method for intelligent vehicle environment perception networks comprises the following steps:
(1) Data acquisition: collect environmental data with a lidar and a camera;
(2) Data preprocessing: apply preliminary filtering to the point cloud data acquired by the lidar in step (1) and perform coordinate conversion;
(3) Weather identification: the weather classification model obtains the lidar point cloud data and the camera visual data, and classifies the environment of the intelligent vehicle once every fixed interval t;
(4) Perception network selection: use the result of step (3) as the basis for selecting a perception network, choosing the corresponding perception model;
(5) Detection result output: input the lidar point cloud data and camera visual data into the perception model selected in step (4), and output the detection result.
As a further scheme of the invention: in step (4), the perception network selection covers the following three cases:
(1) If the weather classification model detects that the current weather state is clear, perception model 1 is selected as the current detection network model;
(2) If the weather classification model detects that the current weather state is bad weather, perception model 2 is selected as the current detection network model;
(3) If the weather classification model detects that the current weather state is night, perception model 3 is selected as the current detection network model.
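The three selection cases can be sketched as a simple lookup. The model numbers, weather states, and fusion levels come from the text; the function `select_model` and the label strings themselves are hypothetical illustration, not the patent's implementation:

```python
# Minimal sketch of perception network selection (step (4)).
# The weather states and model assignments follow the three cases above.
PERCEPTION_MODELS = {
    "clear": "perception model 1 (feature-level fusion)",
    "bad_weather": "perception model 2 (decision-level fusion)",
    "night": "perception model 3 (decision-level fusion)",
}

def select_model(weather_state: str) -> str:
    """Map the weather classifier's output to a detection network model."""
    try:
        return PERCEPTION_MODELS[weather_state]
    except KeyError:
        # An unknown state indicates a classifier fault; surface it explicitly.
        raise ValueError(f"unknown weather state: {weather_state!r}")

print(select_model("clear"))  # perception model 1 (feature-level fusion)
```

The point of the lookup is that the weather classifier's output fully determines the active detection network, so switching models is a constant-time operation performed once per classification interval t.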
As a further scheme of the invention: in step (2), when converting the lidar point cloud from the lidar coordinate system to the vehicle coordinate system, let the coordinates of a point in the vehicle coordinate system be [x_c, y_c, z_c]^T and the coordinates of the same point in the lidar coordinate system be [x_l, y_l, z_l]^T. The conversion between the two coordinate systems is:

[x_c, y_c, z_c]^T = R [x_l, y_l, z_l]^T + T

where R is the rotation matrix and T is the translation matrix.
As a further scheme of the invention: in step (3), the lidar can classify aerosols in the atmosphere according to the scattering differences of different particles, and thereby judge whether the intelligent vehicle is currently in bad weather.
As a further scheme of the invention: the pictures acquired by the camera are input into a convolutional neural network, which enables the intelligent vehicle to distinguish whether it is currently in daytime, at night, or in bad weather.
As a further scheme of the invention: perception model 1 adopts a feature-level fusion network structure, while perception models 2 and 3 adopt decision-level fusion network structures.
As a further scheme of the invention: the recognition conditions of the perception model 2 and the perception model 3 are divided into four types, namely:
(1) In the same region, when both the lidar and the camera detect an object, it is judged that an object exists in the region, and the intersection-over-union of detection frames of the same category is calculated;
(2) In the same region, when the lidar detects an object but the camera does not, it is judged that an object exists in the region;
(3) In the same region, when the camera detects an object but the lidar does not, it is judged that an object exists in the region;
(4) In the same region, when neither the lidar nor the camera detects an object, it is judged that no object exists in the region.
As a further scheme of the invention: the calculation formula of the cross ratio is as follows:
the IOU is the cross ratio of the detection frames of the same category; a and B are the laser radar and camera vision detection frames, respectively, and a threshold T is set, and when the IOU exceeds this threshold, a and B should be combined into the same detection frame.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention adopts a lidar and a camera as the perception sensors of the intelligent vehicle. The former can classify atmospheric aerosols according to the scattering characteristics of different particles under laser light and also provides accurate distance information; the latter provides information such as ambient brightness and color texture. Combining the two allows the environment of the intelligent vehicle to be classified accurately. Moreover, fusing lidar and camera data achieves information complementarity, overcoming the limitations of a single sensor in detecting and identifying objects and improving the robustness and accuracy of target detection;
(2) The invention intelligently selects the target detection model suited to the current environmental conditions by combining environmental factors (including but not limited to weather and illumination), multi-sensor data fusion, and an optimal model selection algorithm. The weather classification network fuses multi-modal sensor data based on the scattering of environmental particles and the illumination intensity, so weather conditions can be judged accurately, the optimal target detection algorithm can be selected for the current environment, and the accuracy and robustness of target detection are improved. The invention also senses environmental changes in a timely manner, so the intelligent vehicle can detect targets accurately, in real time, and stably in all weather;
(3) The invention proposes a weather classification model based on a lidar and a camera, and designs an all-weather target detection framework around this model. After the sensors collect data, weather identification is performed once every fixed interval t, and the corresponding perception network is then selected according to the identification result, maximizing the perception accuracy and driving safety of the intelligent vehicle.
Drawings
FIG. 1 is a flow chart of the weather classification model in an embodiment of the invention.
Fig. 2 is a Convolutional Neural Network (CNN) model diagram in an embodiment of the present invention.
Fig. 3 is a schematic diagram of a conventional multi-sensor information fusion method.
Fig. 4 is a flowchart of the whole embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
Referring to fig. 1 to fig. 4, the all-weather-oriented adaptive selection method for intelligent vehicle environment perception networks provided by the embodiment of the invention includes the following steps:
step one, data acquisition: the laser radar and the camera collect data and transmit the data to the weather classification model and the perception model respectively.
A lidar and a camera are adopted as the perception sensors of the intelligent vehicle. The former can classify atmospheric aerosols according to the scattering characteristics of different particles under laser light and also provides accurate distance information; the latter provides information such as ambient brightness and color texture. Combining the two allows the environment of the intelligent vehicle to be classified accurately. Moreover, fusing lidar and camera data achieves information complementarity, overcoming the limitations of a single sensor in detecting and identifying objects and improving the robustness and accuracy of target detection.
Step two, data preprocessing: firstly, performing preliminary filtering on point cloud data acquired by a laser radar.
Then, the lidar point cloud is converted from the lidar coordinate system to the vehicle coordinate system. Let the coordinates of a point in the vehicle coordinate system be [x_c, y_c, z_c]^T and the coordinates of the same point in the lidar coordinate system be [x_l, y_l, z_l]^T. The conversion between the two coordinate systems is:

[x_c, y_c, z_c]^T = R [x_l, y_l, z_l]^T + T

where R is the rotation matrix and T is the translation matrix.
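The rigid transform above can be sketched directly. The rotation (identity) and translation values below are illustrative placeholders only; the patent does not give calibration data:

```python
# Sketch of the lidar-to-vehicle conversion p_c = R @ p_l + T for one 3-D point.
def lidar_to_vehicle(point, R, T):
    """Apply the rotation matrix R and translation T to a point given as [x, y, z]."""
    return [
        sum(R[i][j] * point[j] for j in range(3)) + T[i]
        for i in range(3)
    ]

R = [[1.0, 0.0, 0.0],   # identity rotation: lidar axes assumed aligned with the car
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [1.2, 0.0, 1.6]     # assumed mounting offset: 1.2 m ahead of and 1.6 m above the origin

print(lidar_to_vehicle([10.0, -2.0, 0.5], R, T))
```

In practice R and T come from extrinsic calibration of the lidar against the vehicle frame, and the same transform is applied to every point in the cloud before fusion with the camera.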
And step three, weather identification, as shown in fig. 1.
Using the same target detection network in different weather environments results in lower detection accuracy, which is very dangerous while an intelligent vehicle is driving. Weather such as rain, snow, and fog reduces the field of view and degrades sensor performance: fog, for example, shortens the detection range of both the camera and the lidar, and rain droplets can obstruct the camera's view. Such weather also changes the visibility of road markings and traffic signs, which can cause the intelligent vehicle to misidentify or miss them.
An accurate and efficient weather classification model is therefore very important. The weather classification model first obtains the lidar point cloud data and the camera picture data. Atmospheric aerosols consist of different atmospheric particles with different radii: air molecules have a radius of about 0.0001 μm, while fog, rain, and snow particles range from 0.01 to 5000 μm. The lidar can classify atmospheric aerosols according to the scattering differences of these particles, so this characteristic can be used to determine whether the intelligent vehicle is currently in bad weather.
Camera pictures carry very rich color and brightness information, and the acquired pictures are input into a convolutional neural network (CNN), as shown in fig. 2. The strong feature extraction capability of CNNs enables the intelligent vehicle to distinguish whether it is currently in daytime, at night, or in bad weather.
Fig. 2 is the CNN structure of the camera recognition branch of the weather classification network. Here "conv" denotes a convolutional layer ("conv3-64" is a 3×3 convolution outputting 64 feature maps), "maxpool" denotes a max pooling layer, and "FC" denotes a fully connected layer. The network extracts image features in order to classify the current environmental conditions of the intelligent vehicle. From low to high, the layers are "conv3-64", "maxpool", "conv3-128", "maxpool", "conv3-256", "maxpool", "conv3-512", "maxpool", "FC-4096", "FC-1000", and "softmax".
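The layer stack above can be traced shape-by-shape. The layer names are taken from the text; the 224×224 input size, 3×3 convolutions with padding 1, and 2×2 pooling stride are assumptions in the style of VGG-like networks, since the patent does not state them:

```python
# Trace (channels, height, width) through the weather-classification CNN branch.
LAYERS = [
    "conv3-64", "maxpool",
    "conv3-128", "maxpool",
    "conv3-256", "maxpool",
    "conv3-512", "maxpool",
    "FC-4096", "FC-1000", "softmax",
]

def trace_shapes(size=224, channels=3):
    """Return a list of (layer, channels, height, width) after each layer."""
    shapes = []
    for layer in LAYERS:
        if layer.startswith("conv3-"):
            channels = int(layer.split("-")[1])       # 3x3 conv, padding 1: size unchanged
        elif layer == "maxpool":
            size //= 2                                # 2x2 max pooling halves H and W
        elif layer.startswith("FC-"):
            channels, size = int(layer.split("-")[1]), 1  # flatten into a feature vector
        # "softmax" changes no shape, only normalizes the 1000-way output
        shapes.append((layer, channels, size, size))
    return shapes

for name, c, h, w in trace_shapes():
    print(f"{name:10s} -> {c} x {h} x {w}")
```

The final "FC-1000" and "softmax" stages suggest a 1000-way output head; for the three-way day/night/bad-weather decision described in the text, a 3-unit output layer would presumably replace it.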
The weather classification model classifies the environment of the intelligent vehicle once every fixed interval t. This not only reduces the occupation of hardware resources but also keeps the intelligent vehicle responsive to environmental changes.
Step four, perception network selection. The weather condition detected in step three determines the choice of perception model in this step; there are three cases:
(1) If the weather classification model detects that the current weather state is clear, perception model 1 is selected as the current detection network model.
(2) If the weather classification model detects that the current weather state is bad weather, perception model 2 is selected as the current detection network model.
(3) If the weather classification model detects that the current weather state is night, perception model 3 is selected as the current detection network model.
Step five, detection result output.
The multi-sensor fusion network has three fusion levels in total, as shown in fig. 3: data-level fusion, feature-level fusion, and decision-level fusion.
Data-level fusion fuses the lidar data and camera data directly; it has minimal data loss and the highest reliability, but the number of valid lidar points directly affects the final detection result. Feature-level fusion extracts feature information from the lidar and the image separately and then fuses it; it makes full use of feature information to achieve the best detection performance, but the computation is complex and converting the lidar information incurs additional cost. Decision-level fusion fuses the lidar and camera detection results; it makes full use of sensor information, and even when one sensor fails, the intelligent vehicle does not completely lose the ability to detect targets.
Given the characteristics of the intelligent vehicle's driving environment, the perception models designed in this framework are as follows:
Perception model 1 adopts a feature-level fusion network structure. In clear weather, both the camera and the lidar acquire very high-quality data, so a feature-level fusion network is the most reasonable choice, offering high detection speed and high precision.
Perception model 2 adopts a decision-level fusion network structure. Since perception model 2 is used in bad weather, the recall of target detection is especially important. Severe weather such as rain, snow, and fog scatters light, so the acquired pictures are of low quality with reduced contrast, making direct recognition difficult. The image is therefore defogged first, to improve the contrast of the fogged image and restore the visibility of the scene, and the defogged picture is sent to a YOLO network for target detection. After noise elimination, the point cloud data is sent to a PointRCNN network for target detection. Finally, the two detection results are fused.
Perception model 3 adopts a decision-level fusion network structure. The lidar is unaffected by light intensity and therefore has a strong advantage at night; its point cloud data is sent to PV-RCNN for target detection. Image data collected under poor lighting conditions is difficult to recognize directly, so the image first undergoes low-light enhancement, which restores its original characteristics, and the enhanced image is sent to a YOLO network for target detection.
The recognition conditions of the perception model 2 and the perception model 3 are divided into four types, namely:
(1) In the same area, when both the laser radar and the camera detect an object, the area is judged to contain an object, and the Intersection over Union (IOU) of the same-category detection boxes is calculated by:

IOU = (A ∩ B) / (A ∪ B)

where A and B are the laser radar and camera-vision detection boxes, respectively. A threshold T is set; when the IOU exceeds T, A and B should be merged into the same box.
(2) In the same area, when the laser radar detects an object and the camera does not, the area is judged to contain an object.
(3) In the same area, when the laser radar does not detect an object and the camera does, the area is judged to contain an object.
(4) In the same area, when neither the laser radar nor the camera detects an object, the area is judged to contain no object.
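The four fusion cases above can be sketched as follows. The box format [x1, y1, x2, y2], the threshold value, and the strategy for merging above-threshold boxes (taking the enclosing box) are assumptions for illustration, not details specified by the patent:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse(lidar_box, camera_box, t=0.5):
    """Decision-level fusion of one area: an object exists if either sensor
    detects it; same-category boxes whose IOU exceeds t are merged."""
    if lidar_box is None and camera_box is None:
        return None                        # case (4): no object in the area
    if lidar_box is None:
        return camera_box                  # case (3): camera only
    if camera_box is None:
        return lidar_box                   # case (2): lidar only
    if iou(lidar_box, camera_box) > t:     # case (1): merge into one box
        return [min(lidar_box[0], camera_box[0]), min(lidar_box[1], camera_box[1]),
                max(lidar_box[2], camera_box[2]), max(lidar_box[3], camera_box[3])]
    return lidar_box  # below threshold: here we keep the lidar box (an assumption)
```

Note that cases (2) and (3) both report an object on a single-sensor detection, which is consistent with prioritizing recall in bad-weather and night conditions.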
The invention intelligently selects the target detection model suited to the current environmental conditions by combining environmental factors (including but not limited to weather and illumination), multi-sensor data fusion, and an optimal model selection algorithm. The weather classification network fuses the sensors in a multi-modal manner based on the scattering of environmental particles and the illumination intensity, so weather conditions can be judged accurately, the optimal target detection algorithm is selected for the current conditions, and the accuracy and robustness of target detection are improved. The invention perceives environmental changes in time, ensuring that the intelligent vehicle detects targets accurately, in real time, and stably in all weather.
It should be noted that although this disclosure describes embodiments, not every embodiment contains only a single technical solution; the description is written this way only for clarity. Those skilled in the art should treat the specification as a whole, and the embodiments may be combined appropriately to form other embodiments that those skilled in the art can understand.
Claims (8)
1. An all-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method, characterized by comprising the following steps:
(1) And (3) data acquisition: collecting environmental data through a laser radar and a camera;
(2) Data preprocessing: performing preliminary filtering on the point cloud data acquired by the laser radar in the step (1), and performing coordinate conversion;
(3) Weather identification: the weather classification model first obtains the laser radar point cloud data and the camera vision data, and classifies the environment of the intelligent vehicle once every fixed time interval t;
(4) Sensing network selection: taking the classification result of step (3) as the basis for selecting the sensing network, and selecting the corresponding perception model;
(5) Outputting a detection result: and (3) inputting the point cloud data and the camera vision data of the laser radar into the perception model selected in the step (4), and outputting a detection result.
2. The all-weather-oriented intelligent vehicle environment-aware network adaptive selection method according to claim 1, wherein in step (4), the aware network selection includes the following three cases in total:
(1) If the weather classification model detects that the current weather state is clear, selecting the perception model 1 as a current detection network model;
(2) If the weather classification model detects that the current weather state is bad weather, selecting the perception model 2 as a current detection network model;
(3) If the weather classification model detects that the current weather state is night, the perception model 3 is selected as the current detection network model.
3. The all-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method according to claim 1, wherein in step (2), when the laser radar point cloud data is converted from the laser radar coordinate system to the vehicle coordinate system, the coordinates of a point in the vehicle coordinate system are denoted [x_c, y_c, z_c]^T and the coordinates of the point in the laser radar coordinate system are denoted [x_l, y_l, z_l]^T; the conversion formula between the two coordinate systems is:

[x_c, y_c, z_c]^T = R · [x_l, y_l, z_l]^T + T

where R is the rotation matrix and T is the translation matrix.
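For illustration only (not part of the claims), the coordinate conversion can be sketched as below, assuming R and T are obtained from extrinsic calibration; the values used here are placeholders:

```python
# Minimal sketch of the lidar-to-vehicle coordinate conversion:
# [x_c, y_c, z_c]^T = R [x_l, y_l, z_l]^T + T, applied to an (N, 3) point cloud.
import numpy as np

def lidar_to_vehicle(points_l: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Rotate then translate lidar-frame points into the vehicle frame."""
    return points_l @ R.T + T

# Placeholder calibration: identity rotation, lidar 1 m forward and 0.5 m above
# the vehicle origin (values are hypothetical).
R = np.eye(3)
T = np.array([1.0, 0.0, 0.5])
pts_lidar = np.array([[2.0, 0.0, 0.0]])
pts_vehicle = lidar_to_vehicle(pts_lidar, R, T)  # point at (3.0, 0.0, 0.5)
```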
4. The all-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method according to claim 1, wherein in step (3), the laser radar classifies aerosols in the atmosphere according to the scattering differences of different particles, and thereby judges whether the intelligent vehicle is currently in bad weather.
5. The all-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method according to claim 4, wherein the pictures acquired by the camera are input into a convolutional neural network, which enables the intelligent vehicle to distinguish whether it is currently in daytime, at night, or in bad weather.
6. The all-weather intelligent vehicle environment-oriented sensing network self-adaptive selection method according to claim 2, wherein the sensing model 1 adopts a feature level fusion network structure, the sensing model 2 adopts a decision level fusion network structure, and the sensing model 3 adopts a decision level fusion network structure.
7. The all-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method according to claim 2 or 6, wherein the recognition conditions of perception model 2 and perception model 3 fall into four cases, namely:
(1) In the same area, when both the laser radar and the camera detect an object, the area is judged to contain an object, and the intersection over union of the same-category detection boxes is calculated;
(2) In the same area, when the laser radar detects an object and the camera does not, the area is judged to contain an object;
(3) In the same area, when the laser radar does not detect an object and the camera does, the area is judged to contain an object;
(4) In the same area, when neither the laser radar nor the camera detects an object, the area is judged to contain no object.
8. The all-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method according to claim 7, wherein the intersection over union is calculated by:

IOU = (A ∩ B) / (A ∪ B)

where IOU is the intersection over union of the same-category detection boxes, and A and B are the laser radar and camera-vision detection boxes, respectively; a threshold T is set, and when the IOU exceeds T, A and B are merged into the same detection box.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310463049.XA CN116486359A (en) | 2023-04-26 | 2023-04-26 | All-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116486359A true CN116486359A (en) | 2023-07-25 |
Family
ID=87211583
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116486359A (en) |
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN117576490A (en) * | 2024-01-16 | 2024-02-20 | 口碑(上海)信息技术有限公司 | Kitchen environment detection method and device, storage medium and electronic equipment |
CN117576490B (en) * | 2024-01-16 | 2024-04-05 | 口碑(上海)信息技术有限公司 | Kitchen environment detection method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110069986B (en) | Traffic signal lamp identification method and system based on hybrid model | |
Han et al. | Research on road environmental sense method of intelligent vehicle based on tracking check | |
CN110837800A (en) | Port severe weather-oriented target detection and identification method | |
CN105512623A (en) | Foggy-day driving visual enhancement and visibility early warning system and method based on multiple sensors | |
CN105844257A (en) | Early warning system based on machine vision driving-in-fog road denoter missing and early warning method | |
CN112329623A (en) | Early warning method for visibility detection and visibility safety grade division in foggy days | |
CN112257522B (en) | Multi-sensor fusion environment sensing method based on environment characteristics | |
CN112147615B (en) | Unmanned perception method based on all-weather environment monitoring system | |
CN112215306A (en) | Target detection method based on fusion of monocular vision and millimeter wave radar | |
CN110458050B (en) | Vehicle cut-in detection method and device based on vehicle-mounted video | |
CN112329684B (en) | Pedestrian crossing road intention recognition method based on gaze detection and traffic scene recognition | |
CN112666553B (en) | Road ponding identification method and equipment based on millimeter wave radar | |
CN109887276B (en) | Night traffic jam detection method based on fusion of foreground extraction and deep learning | |
CN110599497A (en) | Drivable region segmentation method based on deep neural network | |
CN110780358A (en) | Method, system, computer-readable storage medium and vehicle for autonomous driving weather environment recognition | |
CN110682907A (en) | Automobile rear-end collision prevention control system and method | |
CN116486359A (en) | All-weather-oriented intelligent vehicle environment sensing network self-adaptive selection method | |
CN115876198A (en) | Target detection and early warning method, device, system and medium based on data fusion | |
CN109919062A (en) | A kind of road scene weather recognition methods based on characteristic quantity fusion | |
CN113903012A (en) | Collision early warning method and device, vehicle-mounted equipment and storage medium | |
CN114818819A (en) | Road obstacle detection method based on millimeter wave radar and visual signal | |
CN113870246A (en) | Obstacle detection and identification method based on deep learning | |
CN114155720A (en) | Vehicle detection and track prediction method for roadside laser radar | |
CN112052768A (en) | Urban illegal parking detection method and device based on unmanned aerial vehicle and storage medium | |
CN113611008B (en) | Vehicle driving scene acquisition method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||