CN116518989B - Method for vehicle navigation based on sound and thermal imaging - Google Patents
- Publication number
- CN116518989B (application CN202310815023.7A)
- Authority
- CN
- China
- Prior art keywords
- road
- vehicle
- sound
- data
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3446—Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3453—Special cost functions, i.e. other than distance or default speed limit of road segments
- G01C21/3492—Special cost functions, i.e. other than distance or default speed limit of road segments employing speed data or traffic data, e.g. real-time or historical
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/3815—Road data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
Abstract
The invention discloses a method for vehicle navigation based on sound and thermal imaging, in the field of traffic, comprising the following steps: sound detection equipment and infrared cameras are installed on urban roads, and sound and thermal data are collected in the cloud; the sound data is processed into frequency signals, and vehicles are identified from the thermal images; the data are normalized and added to generate a data map containing road flow information; a data visualization tool draws the flow information as colored points on a map to form a visual map that is sent to drivers in real time; the driver can select cloud navigation or in-vehicle navigation — the cloud plans paths from the data map, the vehicle plans paths from the visual map; path planning is realized by computing road-unit weights and applying a shortest-path algorithm. The method has low computational cost, is easy to deploy, and is easier to popularize than image-recognition techniques.
Description
Technical Field
The invention relates to the field of traffic, in particular to a method for navigating a vehicle based on sound and thermal imaging.
Background
In modern society, with accelerating urbanization, demand for road traffic keeps increasing, and urban traffic congestion increases with it. To address this problem effectively, various traffic navigation systems have been developed to improve road-use efficiency. These systems help drivers select optimal paths by collecting, analyzing, and providing real-time road traffic information, thereby reducing congestion.
Existing traffic navigation systems mainly acquire road information, such as the number of vehicles and their running speed, through image recognition: cameras are installed on main urban roads, the captured video is analyzed frame by frame, vehicles are identified in the images, and road congestion is estimated from the vehicle count and running speed.
However, existing image-recognition-based navigation systems have several problems. First, image recognition is relatively difficult and requires complex algorithms such as deep learning. Second, it demands large computing resources — not only high-performance hardware but also considerable power supply. Both factors limit large-scale deployment of such systems. In addition, image recognition is affected by illumination, weather, and other factors, so its performance in complex environments is unsatisfactory.
Therefore, there is an urgent need for a new vehicle navigation system that overcomes the above-mentioned problems and provides more accurate, economical and convenient navigation services.
Disclosure of Invention
The invention aims to provide a method for navigating a vehicle based on sound and thermal imaging so as to solve the problems in the background art.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a method for vehicle navigation based on sound and thermal imaging, comprising the steps of:
S1, uniformly installing sound detection equipment and infrared cameras on each road within a preset urban area;
S2, the cloud collects sound data from each road through the sound detection equipment, applies a sound source separation technique to remove non-motor-vehicle sounds, and then converts the remaining time-domain sound signals related to vehicle movement into frequency signals by Fourier transform, thereby obtaining the decibel level of the sound;
S3, the cloud collects thermal images of each road through the infrared cameras and distinguishes vehicles from other heat sources by gray-level thresholding;
S4, the cloud normalizes the vehicle-related data obtained in steps S2 and S3, then adds the two data sets to generate a data map containing road flow information;
S5, the cloud uses a data visualization tool to draw each data point as a colored point on a map according to its flow information, with darker colors for higher-flow regions and lighter colors for lower-flow regions, thereby generating a visual map that intuitively reflects road flow, and sends this visual map to drivers in real time;
S6, the driver selects cloud navigation or in-vehicle navigation; if cloud navigation is selected, the cloud performs the path planning of steps S7-S8 on the data map obtained in step S4; if in-vehicle navigation is selected, the vehicle performs the path planning of steps S9-S10 on the visual map;
S7, the cloud divides each road into several units and computes a weight for each unit from the flow information in the data map generated in step S4, the weight reflecting the unit's vehicle density; the mean of the unit weights along the road between any two intersections is then taken as the total weight of that road;
S8, the total weight computed in step S7 is used as the road weight in a shortest-path algorithm to plan the vehicle's route; the data map and road weights are updated in real time, and the route is re-planned accordingly;
S9, the vehicle divides each road into several units and computes a weight for each unit from the color-depth information in the visual map generated in step S5, the weight reflecting the unit's vehicle density; the mean of the unit weights along the road between any two intersections is then taken as the total weight of that road;
S10, the total weight computed in step S9 is used as the road weight in a shortest-path algorithm to plan the vehicle's route; the data map and road weights are updated in real time, and the route is re-planned accordingly.
Preferably, the sound detection equipment and infrared cameras of step S1 are installed in pairs at a predetermined spacing on the road between any two intersections within the preset urban area, the spacing being proportional to the road length.
Preferably, the sound source separation technique of step S2 is based on independent component analysis, and non-motor-vehicle sounds are removed by setting a frequency threshold.
Preferably, the gray-level thresholding of step S3 uses Otsu's algorithm to compute the optimal threshold from the gray-level histogram of each thermal frame, and the portion above the threshold is identified as vehicles.
Preferably, the data visualization tool of step S5 is the Python-based open-source library Matplotlib.
Preferably, the shortest-path algorithm used in steps S8 and S10 is Dijkstra's algorithm.
Compared with the prior art, the invention has the following advantages:
1. Reduced computational cost: the prior art typically evaluates congestion by image recognition, which is computationally expensive, difficult, and hard to popularize. The invention instead works from sound and thermal data and needs no frame-by-frame video analysis, greatly reducing computational cost; compared with complex image-recognition techniques, sound and thermal data processing is simpler and easier to deploy at scale.
2. More convenient navigation: by generating a visual map, the driver sees road flow information intuitively and can select the optimal path according to actual needs, improving road-use efficiency and reducing congestion.
3. Flexible navigation modes: the invention offers both cloud navigation and in-vehicle navigation. Cloud navigation provides a more accurate scheme based on the data map generated from the collected sound and thermal data; in-vehicle navigation plans paths from the visual map with a simpler computation, suited to edge devices with limited computing power, such as vehicles.
4. Real-time updates: because data is collected in real time by the sound and thermal sensing equipment, the generated data map and visual map reflect current road conditions, so the navigation scheme can adjust in real time to the actual road situation and is practical.
Drawings
FIG. 1 is a schematic representation of the process of the present invention.
Detailed Description
The following describes specific embodiments of the present invention with reference to the drawings.
As shown in fig. 1, a method for navigating a vehicle based on sound and thermal imaging according to the present invention comprises the following steps:
S1, uniformly installing sound detection equipment and infrared cameras on each road within a preset urban area;
S2, the cloud collects sound data from each road through the sound detection equipment, applies a sound source separation technique to remove non-motor-vehicle sounds, and then converts the remaining time-domain sound signals related to vehicle movement into frequency signals by Fourier transform, thereby obtaining the decibel level of the sound;
S3, the cloud collects thermal images of each road through the infrared cameras and distinguishes vehicles from other heat sources by gray-level thresholding;
S4, the cloud normalizes the vehicle-related data obtained in steps S2 and S3, then adds the two data sets to generate a data map containing road flow information;
S5, the cloud uses a data visualization tool to draw each data point as a colored point on a map according to its flow information, with darker colors for higher-flow regions and lighter colors for lower-flow regions, thereby generating a visual map that intuitively reflects road flow, and sends this visual map to drivers in real time;
S6, the driver selects cloud navigation or in-vehicle navigation; if cloud navigation is selected, the cloud performs the path planning of steps S7-S8 on the data map obtained in step S4; if in-vehicle navigation is selected, the vehicle performs the path planning of steps S9-S10 on the visual map;
S7, the cloud divides each road into several units and computes a weight for each unit from the flow information in the data map generated in step S4, the weight reflecting the unit's vehicle density; the mean of the unit weights along the road between any two intersections is then taken as the total weight of that road;
S8, the total weight computed in step S7 is used as the road weight in a shortest-path algorithm to plan the vehicle's route; the data map and road weights are updated in real time, and the route is re-planned accordingly;
S9, the vehicle divides each road into several units and computes a weight for each unit from the color-depth information in the visual map generated in step S5, the weight reflecting the unit's vehicle density; the mean of the unit weights along the road between any two intersections is then taken as the total weight of that road;
S10, the total weight computed in step S9 is used as the road weight in a shortest-path algorithm to plan the vehicle's route; the data map and road weights are updated in real time, and the route is re-planned accordingly.
The sound detection equipment and infrared cameras of step S1 are installed in pairs at a predetermined spacing along the road between any two intersections within the preset urban area, the spacing being proportional to the road length. In one embodiment: if the preset standard places 10 devices on a 1 km road, the spacing between devices is 100 meters; if the road length increases to 2 km, the spacing is adjusted to 200 meters — that is, the spacing between devices grows with road length. Such a layout saves resources while still meeting navigation requirements; the exact spacing may be chosen according to the specific circumstances.
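As a sketch of this layout rule — under the embodiment's assumption of a fixed count of 10 paired devices per road, a value used here purely for illustration — the spacing follows directly from the road length:

```python
def device_spacing_m(road_length_m, device_count=10):
    """Spacing between paired sound/infrared devices on one road.

    With a fixed device count, spacing is directly proportional to road
    length, matching the embodiment: 10 devices on a 1 km road gives
    100 m spacing; on a 2 km road, 200 m.
    """
    return road_length_m / device_count

print(device_spacing_m(1000))  # 100.0
print(device_spacing_m(2000))  # 200.0
```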
The sound source separation technique of step S2 is based on independent component analysis (ICA), and non-motor-vehicle sounds are removed by setting a frequency threshold. More specifically:
ICA is a statistical technique that separates a mixture of signal sources into components that are as statistically independent as possible. This embodiment applies ICA to the sound data collected by the sound detection equipment.
To reject non-motor-vehicle sounds, this embodiment sets a frequency threshold. Typically, non-motor vehicles (e.g., bicycles, pedestrians) produce lower-frequency sound, while motor vehicles (e.g., automobiles, motorcycles) produce higher-frequency sound, so an appropriate frequency threshold can effectively distinguish the two.
In operation, this embodiment first collects the sound data of each road and applies ICA to separate the individual sources. It then analyzes the frequency of each separated signal: any signal whose frequency falls below the preset threshold is treated as non-motor-vehicle sound and rejected. The remaining sound signals, associated with motor-vehicle movement, are used for subsequent navigation analysis.
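The frequency-threshold step can be sketched as follows. This is a deliberate simplification of the embodiment: the ICA source-separation stage is omitted (a real system might use, e.g., scikit-learn's FastICA first), and the 200 Hz threshold and the two test tones are illustrative assumptions.

```python
import numpy as np

def reject_low_frequency(signal, fs, f_threshold_hz=200.0):
    """Zero all FFT bins below the frequency threshold, then inverse-transform.

    Simplified stand-in for the patent's ICA-plus-threshold pipeline: only
    the frequency threshold is shown; 200 Hz is an illustrative value.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs < f_threshold_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def decibel_level(signal, ref=1.0):
    """Decibel level from the RMS amplitude of the retained signal (step S2)."""
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20.0 * np.log10(max(rms / ref, 1e-12))

# 1 s at 8 kHz: a 100 Hz "non-motor" tone mixed with a 400 Hz "engine" tone
fs = 8000
t = np.arange(fs) / fs
mixed = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 400 * t)
kept = reject_low_frequency(mixed, fs)  # 100 Hz component removed
```

After filtering, only the higher-frequency component survives, and its decibel level feeds the data map of step S4.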
To distinguish vehicles in the thermal images, the cloud performs gray-level thresholding with Otsu's algorithm, a method for automatically determining an image binarization threshold: it computes the between-class variance for every candidate threshold and picks the one that best divides the image into two parts.
In operation, the cloud first computes the gray-level histogram of each thermal frame. A gray-level histogram is a statistical representation of the distribution of pixel intensities, reflecting how many pixels fall at each gray level. The cloud then applies Otsu's algorithm to this histogram to compute the optimal threshold, which splits the pixels into those with gray values above it and those below it.
In a thermal image, the gray level of a high-temperature region (i.e., a vehicle) is typically higher than that of the surroundings, so the threshold computed by Otsu's algorithm effectively separates vehicles from the environment. Finally, the cloud treats the portions above the threshold as vehicles for subsequent flow analysis and navigation planning.
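A minimal sketch of Otsu's method (rendered as the "Ojin"/"oxford" algorithm in the machine translation), computed directly from the gray-level histogram; the 64×64 synthetic frame and its gray values are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level that maximizes the between-class
    variance of the histogram (8-bit image with 256 levels assumed)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_thr, best_var = 0, -1.0
    for thr in range(1, 256):
        w0, w1 = prob[:thr].sum(), prob[thr:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: no valid split at this level
        mu0 = (levels[:thr] * prob[:thr]).sum() / w0
        mu1 = (levels[thr:] * prob[thr:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_thr = var_between, thr
    return best_thr

# synthetic 64x64 thermal frame: cool background (40) with a hot block (200)
frame = np.full((64, 64), 40, dtype=np.uint8)
frame[20:40, 10:50] = 200          # 20 x 40 "vehicle" region
thr = otsu_threshold(frame)
vehicle_mask = frame >= thr        # portion above threshold -> vehicle
```

Production systems would typically call an optimized implementation (e.g., OpenCV's `THRESH_OTSU`); the loop above just makes the between-class-variance criterion explicit.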
In step S5 of this embodiment, data visualization is done with Matplotlib, a Python-based open-source plotting library that provides a range of drawing and visualization tools for conveniently generating charts in many formats, such as line plots, histograms, power spectra, bar charts, error bars, and scatter plots.
In operation, the cloud normalizes the vehicle-related data obtained in steps S2 and S3 and then adds the two to generate the data map containing road flow information, as in step S4. The data map is a two-dimensional array in which each element represents the vehicle flow at the corresponding location.
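The normalize-and-add fusion of step S4 can be sketched with a min-max scaling; the per-cell decibel and vehicle-count values below are illustrative assumptions, not measured data.

```python
import numpy as np

def min_max_normalize(x):
    """Scale an array to [0, 1]; an all-equal input maps to zeros."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def fuse_to_data_map(sound_db, vehicle_counts):
    """Step S4: normalize both modalities and add them, yielding a 2-D
    data map of relative road flow in the range [0, 2]."""
    return min_max_normalize(sound_db) + min_max_normalize(vehicle_counts)

# illustrative per-cell measurements (values are assumptions)
sound_db = np.array([[60.0, 75.0], [70.0, 90.0]])   # decibel levels, step S2
counts = np.array([[2, 8], [5, 12]])                # vehicle counts, step S3
data_map = fuse_to_data_map(sound_db, counts)
```

The cell that is loudest and busiest gets the maximum fused value, and the quietest, emptiest cell gets zero — exactly the ordering the colored map of step S5 then displays.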
The cloud then converts this data map into a visual map using Matplotlib's drawing functions. In this process, the cloud assigns each data point a color whose depth is proportional to the corresponding vehicle flow: areas of greater flow are assigned darker colors, and areas of lesser flow lighter colors.
In order to further improve the visual effect, the cloud may also use other functions of Matplotlib, such as adding color bars, setting coordinate axis labels, adjusting image sizes, and the like.
Finally, the cloud sends the visual map to a driver in real time, and the driver can know the traffic flow condition of the current city by looking up the map and conduct navigation planning according to the traffic flow condition.
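The darker-for-heavier color rule of step S5 can be shown without any plotting dependency. The function below is a pure-Python stand-in for what would in practice be a Matplotlib colormap (e.g., 'Reds' passed to `scatter`); the linear ramp and red hue are assumptions for illustration.

```python
def flow_to_rgb(flow, flow_max):
    """Map a flow value to an RGB red shade: heavier traffic -> darker point
    (white at zero flow, pure red at maximum flow)."""
    f = flow / flow_max if flow_max > 0 else 0.0
    f = max(0.0, min(f, 1.0))              # clamp to [0, 1]
    shade = int(round(255 * (1.0 - f)))    # 255 = lightest, 0 = darkest
    return (255, shade, shade)

print(flow_to_rgb(0.0, 2.0))  # (255, 255, 255) - no traffic, lightest
print(flow_to_rgb(2.0, 2.0))  # (255, 0, 0)     - maximum flow, darkest red
```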
In steps S8 and S10 of this embodiment, the shortest-path algorithm used by the cloud or the vehicle may specifically be Dijkstra's algorithm, which finds the shortest path between two points in a graph with non-negative weights.
In a road-network model, each road can be regarded as a weighted edge and each intersection as a node. The road weight — the vehicle density computed in step S7 or S9 from the maps of steps S4 and S5 — reflects the traffic situation on that road: the greater the weight, the higher the vehicle density and the greater the traffic pressure.
When running Dijkstra's algorithm, the cloud or vehicle first takes the intersection at which the driver is currently located as the start node. The algorithm then repeatedly extends the shortest paths from the start node toward all other nodes, taking each road's weight into account and preferring roads with smaller weights (i.e., lower vehicle density).
Specifically, each step of Dijkstra's algorithm selects the currently unprocessed node closest to the start node (smallest total weight) and updates the distances of its neighbors (if reaching a neighbor through that node gives a smaller total weight than currently recorded). This repeats until the shortest path to the destination is found.
The advantage of Dijkstra's algorithm is that it always finds the optimal path — here, the fastest route under current traffic conditions. In this way, the driver or vehicle can adjust the travel route in real time and find the fastest path through a busy urban road network. Dijkstra's algorithm is well documented in the prior art and is not described further here.
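The unit-weight averaging of steps S7/S9 and the Dijkstra search of steps S8/S10 can be sketched together. The four-intersection network and its per-unit densities below are illustrative assumptions.

```python
import heapq

def road_weight(cell_densities):
    """Steps S7/S9: total road weight = mean of its per-unit density weights."""
    return sum(cell_densities) / len(cell_densities)

def dijkstra(graph, start, goal):
    """Shortest path in a graph with non-negative road weights.
    graph maps node -> list of (neighbor, weight) edges."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [goal], goal
    while node != start:            # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# toy network: intersections A-D; edge weights are mean per-unit densities,
# so A->B (mean 0.85) is congested while A->C->D (0.25 + 0.2) is clear
graph = {
    "A": [("B", road_weight([0.9, 0.8])), ("C", road_weight([0.2, 0.3]))],
    "B": [("D", road_weight([0.1, 0.1]))],
    "C": [("D", road_weight([0.2, 0.2]))],
}
path, cost = dijkstra(graph, "A", "D")
print(path)  # ['A', 'C', 'D'] - avoids the congested A-B road
```

With real-time weight updates (steps S8/S10), this search is simply re-run whenever the data map or visual map changes.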
The foregoing is only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto; equivalents and modifications made by any person skilled in the art according to the technical solution and inventive concept disclosed herein shall fall within the protection scope of the invention.
Claims (4)
1. A method for navigating a vehicle based on sound and thermal imaging, comprising the steps of:
S1, uniformly installing sound detection equipment and infrared cameras on each road within a preset urban area;
S2, the cloud collects sound data from each road through the sound detection equipment, applies a sound source separation technique to remove non-motor-vehicle sounds, and then converts the remaining time-domain sound signals related to vehicle movement into frequency signals by Fourier transform, thereby obtaining the decibel level of the sound; the sound source separation technique is based on independent component analysis, and non-motor-vehicle sounds are removed by setting a frequency threshold;
S3, the cloud collects thermal images of each road through the infrared cameras and distinguishes vehicles from other heat sources by gray-level thresholding; the gray-level thresholding uses Otsu's algorithm, which computes an optimal threshold from the gray-level histogram of each thermal frame, and the portion above the threshold is taken as vehicles;
S4, the cloud normalizes the vehicle-related data obtained in steps S2 and S3, then adds the two data sets to generate a data map containing road flow information;
S5, the cloud uses a data visualization tool to draw each data point as a colored point on a map according to its flow information, with darker colors for higher-flow regions and lighter colors for lower-flow regions, thereby generating a visual map that intuitively reflects road flow, and sends this visual map to drivers in real time;
S6, the driver selects cloud navigation or in-vehicle navigation; if cloud navigation is selected, the cloud performs the path planning of steps S7-S8 on the data map obtained in step S4; if in-vehicle navigation is selected, the vehicle performs the path planning of steps S9-S10 on the visual map;
S7, the cloud divides each road into several units and computes a weight for each unit from the flow information in the data map generated in step S4, the weight reflecting the unit's vehicle density; the mean of the unit weights along the road between any two intersections is then taken as the total weight of that road;
S8, the total weight computed in step S7 is used as the road weight in a shortest-path algorithm to plan the vehicle's route; the data map and road weights are updated in real time, and the route is re-planned accordingly;
S9, the vehicle divides each road into several units and computes a weight for each unit from the color-depth information in the visual map generated in step S5, the weight reflecting the unit's vehicle density; the mean of the unit weights along the road between any two intersections is then taken as the total weight of that road;
S10, the total weight computed in step S9 is used as the road weight in a shortest-path algorithm to plan the vehicle's route; the data map and road weights are updated in real time, and the route is re-planned accordingly.
2. The method for navigating a vehicle based on sound and thermal imaging of claim 1, wherein the sound detection equipment and infrared cameras of step S1 are installed in pairs at a predetermined spacing on the road between any two intersections within the preset urban area, the spacing being proportional to the road length.
3. The method for navigating a vehicle based on sound and thermal imaging of claim 1, wherein the data visualization tool of step S5 is the Python-based open-source library Matplotlib.
4. The method for navigating a vehicle based on sound and thermal imaging of claim 1, wherein the shortest-path algorithm used in steps S8 and S10 is Dijkstra's algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310815023.7A CN116518989B (en) | 2023-07-05 | 2023-07-05 | Method for vehicle navigation based on sound and thermal imaging |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310815023.7A CN116518989B (en) | 2023-07-05 | 2023-07-05 | Method for vehicle navigation based on sound and thermal imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116518989A CN116518989A (en) | 2023-08-01 |
CN116518989B true CN116518989B (en) | 2023-09-12 |
Family
ID=87390769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310815023.7A Active CN116518989B (en) | 2023-07-05 | 2023-07-05 | Method for vehicle navigation based on sound and thermal imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116518989B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751782A (en) * | 2009-12-30 | 2010-06-23 | 北京大学深圳研究生院 | Crossroad traffic event automatic detection system based on multi-source information fusion |
CN109640032A (en) * | 2018-04-13 | 2019-04-16 | 河北德冠隆电子科技有限公司 | Based on the more five dimension early warning systems of element overall view monitoring detection of artificial intelligence |
CN110047306A (en) * | 2019-04-23 | 2019-07-23 | 杜泽臣 | A kind of intelligent traffic network |
CN114384901A (en) * | 2022-01-12 | 2022-04-22 | 浙江中智达科技有限公司 | Dynamic traffic environment-oriented reinforcement learning auxiliary driving decision method |
CN115014374A (en) * | 2022-05-27 | 2022-09-06 | 重庆长安汽车股份有限公司 | Lane-level path planning method integrating dynamic events |
WO2023010599A1 (en) * | 2021-08-04 | 2023-02-09 | 深圳市沃特沃德信息有限公司 | Target trajectory calibration method based on video and audio, and computer device |
WO2023051322A1 (en) * | 2021-09-29 | 2023-04-06 | 华为技术有限公司 | Travel management method, and related apparatus and system |
CN115985122A (en) * | 2022-10-31 | 2023-04-18 | 内蒙古智能煤炭有限责任公司 | Unmanned system sensing method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020082089A1 (en) * | 2018-10-19 | 2020-04-23 | Neutron Holdings, Inc. | Detecting types of travel corridors on which personal mobility vehicles travel |
Non-Patent Citations (1)
Title |
---|
Holographic high-precision navigation map: concept and theoretical model; Yu Zhuoyuan; Lü Guonian; Zhang Xining; Jia Yuanxin; Zhou Chenghu; Ge Yong; Lü Kejing; Journal of Geo-Information Science (Issue 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116518989A (en) | 2023-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111540201B (en) | Vehicle queuing length real-time estimation method and system based on roadside laser radar | |
CN107492251B (en) | Driver identity recognition and driving state monitoring method based on machine learning and deep learning | |
JP6838248B2 (en) | Information processing device | |
JP5900454B2 (en) | Vehicle lane guidance system and vehicle lane guidance method | |
US20220230449A1 (en) | Automatically perceiving travel signals | |
EP2410294A1 (en) | Method and device for providing cost information associated with junctions and method of determining a route | |
WO2009118988A1 (en) | Driving support device, driving support method, and driving support program | |
KR102031503B1 (en) | Method and system for detecting multi-object | |
KR20210052031A (en) | Deep Learning based Traffic Flow Analysis Method and System | |
CN114435138A (en) | Vehicle energy consumption prediction method and device, vehicle and storage medium | |
CN108645375B (en) | Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system | |
US11314974B2 (en) | Detecting debris in a vehicle path | |
JP2018195237A (en) | Image processing system, image processing method, information processing apparatus and recording medium | |
WO2018195150A1 (en) | Automatically perceiving travel signals | |
CN113178074A (en) | Traffic flow machine learning modeling system and method applied to vehicle | |
CN115690718A (en) | System and method for attention-aware region of interest window generation | |
CN103577790B (en) | road turn type detection method and device | |
KR20200087296A (en) | 3D viewer system for detecting object based on lidar sensor data | |
CN116518989B (en) | Method for vehicle navigation based on sound and thermal imaging | |
Das et al. | Why slammed the brakes on? auto-annotating driving behaviors from adaptive causal modeling | |
Hammoudi et al. | Towards a model of car parking assistance system using camera networks: Slot analysis and communication management | |
JP3235322U (en) | Traffic direction change amount survey report automatic generation system | |
CN115776680A (en) | Dynamic deployment method and device of computing model suitable for edge computing equipment | |
JP7238821B2 (en) | Map generation system and map generation program | |
Nalavde et al. | Driver assistant services using ubiquitous smartphone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||