CN117496463A - Learning method and system for improving road perception precision - Google Patents


Info

Publication number
CN117496463A
Authority
CN
China
Prior art keywords
feature
road
static
target
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311358656.6A
Other languages
Chinese (zh)
Inventor
闫军
冯澍
王伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202311358656.6A
Publication of CN117496463A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/10: Image acquisition
    • G06V 10/20: Image preprocessing
    • G06V 10/40: Extraction of image or video features
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/778: Active pattern-learning, e.g. online learning of image or video features
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a learning method and system for improving road perception precision, relating to the technical field of road perception. The method comprises the following steps: acquiring a target road image; obtaining a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature; obtaining target radar monitoring data; performing road feature analysis based on the target radar monitoring data to obtain a second road feature, wherein the second road feature comprises a second static feature and a second dynamic feature; and performing fusion verification on the first road feature and the second road feature to obtain a target road perception feature. This solves the technical problem in the prior art that road perception precision is low because the recognition precision of different kinds of sensing monitoring data is not high: road features of the target road area are analysed separately by an image sensing device and a monitoring radar and then fused, achieving the technical effect of improving road perception precision.

Description

Learning method and system for improving road perception precision
Technical Field
The application relates to the technical field of road perception, and in particular to a learning method and system for improving road perception precision.
Background
With the rapid progress of artificial intelligence and the advent of the intelligent society, autonomous vehicles are gradually entering everyday life. The key core technologies of an autonomous vehicle mainly include road perception, accurate positioning, path planning and drive-by-wire execution. Among these, road perception provides an important reference for the path planning and driving control of the vehicle, so improving road perception precision is of great practical significance.
At present, the prior art suffers from the technical problem of low road perception precision, caused by the limited recognition precision of different kinds of sensing monitoring data.
Disclosure of Invention
The application provides a learning method and system for improving road perception precision, which are used to solve the technical problem in the prior art that road perception precision is low because the recognition precision of different kinds of sensing monitoring data is not high.
According to a first aspect of the present application, there is provided a learning method for improving road perception accuracy, comprising: acquiring an image of a target road area within a preset time zone through an image sensing device to obtain a target road image; performing road feature recognition based on the target road image to obtain a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature; monitoring the target road area with a monitoring radar within the preset time zone to obtain target radar monitoring data; performing road feature analysis based on the target radar monitoring data to obtain a second road feature, wherein the second road feature comprises a second static feature and a second dynamic feature; and performing fusion verification on the first road feature and the second road feature to obtain a target road perception feature.
According to a second aspect of the present application, there is provided a learning system for improving road perception accuracy, comprising: an image acquisition module for acquiring an image of a target road area within a preset time zone through an image sensing device to obtain a target road image; an image feature analysis module for performing road feature recognition based on the target road image to obtain a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature; a radar monitoring module for monitoring the target road area with a monitoring radar within the preset time zone to obtain target radar monitoring data; a radar feature analysis module for performing road feature analysis based on the target radar monitoring data to obtain a second road feature, wherein the second road feature comprises a second static feature and a second dynamic feature; and a fusion verification module for performing fusion verification on the first road feature and the second road feature to obtain a target road perception feature.
The one or more technical solutions adopted by the present application can achieve the following beneficial effects:
An image of a target road area is acquired within a preset time zone through an image sensing device to obtain a target road image, and road feature recognition is performed based on the target road image to obtain a first road feature comprising a first static feature and a first dynamic feature. The target road area is monitored with a monitoring radar within the preset time zone to obtain target radar monitoring data, and road feature analysis is performed based on the target radar monitoring data to obtain a second road feature comprising a second static feature and a second dynamic feature. Fusion verification is then performed on the first road feature and the second road feature to obtain a target road perception feature. In this way, road feature analysis is carried out on the target road area separately by the image sensing device and the monitoring radar, the resulting features are fused, and the technical effect of improving road perception precision is achieved.
Drawings
In order to more clearly illustrate the technical solutions of the present application or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The accompanying drawings, which form a part hereof, illustrate embodiments of the present application and, together with the description, serve to explain the present application without unduly limiting it; a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a learning method for improving road perception accuracy according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a learning system for improving road perception accuracy according to an embodiment of the present application.
Reference numerals: image acquisition module 11, image feature analysis module 12, radar monitoring module 13, radar feature analysis module 14, fusion verification module 15.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
The terminology used in the description is for the purpose of describing embodiments only and is not intended to be limiting of the application. As used in this specification, the singular terms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," when used in this specification, specify the presence of steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other steps, operations, elements, components, and/or groups thereof.
Unless defined otherwise, all terms (including technical and scientific terms) used in this specification should have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Terms, such as those defined in commonly used dictionaries, should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Like numbers refer to like elements throughout.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for presentation, analyzed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Example 1
Fig. 1 is a diagram of a learning method for improving road perception accuracy according to an embodiment of the present application, where the method includes:
image acquisition is carried out on the target road area in a preset time zone through image sensing equipment, and a target road image is obtained;
specifically, the preset time zone refers to a future time period for road sensing, which is specifically set by those skilled in the art in combination with actual requirements, and is not limited thereto. The target road area refers to any road section on any road needing road feature perception, and can be determined by combining with actual conditions without limitation. The image sensing device is a camera arranged on the target road area, and the image sensing device is used for acquiring images of the target road area in a preset time zone to obtain target road images.
Carrying out road feature recognition based on the target road image to obtain a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature;
in a preferred embodiment, further comprising:
preprocessing the target road image to obtain a preprocessed image; carrying out static feature recognition on the preprocessed image through a static feature recognition branch in an image feature recognition channel to obtain a first static feature, wherein the first static feature comprises a first road surface obstacle feature and a first road surface damage feature; carrying out dynamic feature recognition on the preprocessed image through a dynamic feature recognition branch in an image feature recognition channel to obtain a first dynamic feature, wherein the first dynamic feature comprises a first pedestrian feature and a first vehicle feature; and forming the first road characteristic by the first static characteristic and the first dynamic characteristic.
In a preferred embodiment, further comprising:
acquiring a road image sample data set, wherein the road image sample data set comprises a road image sample and a static characteristic sample; performing gradient descent training on a convolutional neural network by using the road image sample and the static feature sample to obtain the static feature recognition branch; and carrying out static feature recognition on the preprocessed image by using the static feature recognition branch to acquire the first static feature.
Road feature recognition is carried out based on the target road image to obtain a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature. The first static feature is a feature of the target road area that does not change in the short term, such as road surface obstacles and road surface damage; the first dynamic feature is a feature of the target road area that changes in real time, such as vehicle and pedestrian features. The specific acquisition process is described in detail below.
Specifically, the target road image is first preprocessed, where the preprocessing includes processing means for improving image quality, such as denoising and enhancement; since image denoising and enhancement are common technical means for those skilled in the art, they are not described in detail here. A preprocessed image is thus obtained. Static feature recognition is then performed on the preprocessed image through a static feature recognition branch in an image feature recognition channel to obtain a first static feature, which comprises a first road surface obstacle feature and a first road surface damage feature. The first road surface obstacle feature describes obstacles that are immovable in the short term, such as waste on the road surface or soil blocks caused by a landslide; the first road surface damage feature describes damaged areas of the road surface. The static feature recognition branch is a convolutional neural network model in machine learning, constructed as follows:
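As a minimal illustration of the denoising part of this preprocessing (not from the patent; the function name and the nested-list grayscale image representation are assumptions made for the sketch), a 3x3 mean filter could be written as:

```python
def mean_filter_3x3(image):
    """Denoise a grayscale image (list of rows) with a 3x3 mean filter.

    Border pixels are left unchanged; each interior pixel is replaced
    by the average of its 3x3 neighbourhood, attenuating pixel noise.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; borders stay as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [image[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(neigh) / 9.0
    return out
```

A practical system would use an optimized library filter rather than nested Python loops; the sketch only shows the operation.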
the method comprises the steps of obtaining a road image sample data set, wherein the road image sample data set comprises a road image sample and a static characteristic sample, the road image sample is obtained by a person skilled in the art in combination with the prior art, for example, the road image sample can be extracted from the Internet through a data mining technology, and the static characteristic sample is obtained by marking road obstacles and road damage areas in the road image sample based on the prior art. And performing gradient descent training on the convolutional neural network by using the road image sample and the static feature sample to obtain the static feature recognition branch, namely, using the convolutional neural network as a network structure of the static feature recognition branch, using the static feature sample as output of the static feature recognition branch, testing the error rate of the static feature recognition branch in the training process, and reducing the error rate by adjusting network parameters of the static feature recognition branch until the error rate is the lowest, thereby obtaining the trained static feature recognition branch. And then carrying out static feature recognition on the preprocessed image by using the static feature recognition branch, and outputting the first static feature.
Further, dynamic feature recognition is performed on the preprocessed image through a dynamic feature recognition branch in the image feature recognition channel to obtain a first dynamic feature, which comprises a first pedestrian feature (including the number and speed of pedestrians) and a first vehicle feature (including the number and speed of vehicles). The dynamic feature recognition branch is also constructed based on a convolutional neural network model, and its construction method is the same as that of the static feature recognition branch, so it is not repeated here. The first static feature and the first dynamic feature together form the first road feature. Feature recognition of the road image is thereby realized, providing a foundation for the subsequent road feature fusion verification.
Monitoring the target road area by a monitoring radar in the preset time zone to obtain target radar monitoring data;
the monitoring radar is a radar detector installed in the target road area, and the type of the detection radar is not limited in this embodiment. And monitoring the target road area by using a monitoring radar in the preset time zone to obtain target radar monitoring data, wherein the target radar monitoring data is detection information of the monitoring radar, such as received reflected waves.
Performing road feature analysis based on the target radar monitoring data to obtain a second road feature, wherein the second road feature comprises a second static feature and a second dynamic feature;
in a preferred embodiment, further comprising:
monitoring the target road area in the preset time zone through a monitoring radar to obtain initial radar monitoring data; denoising the initial radar monitoring data to obtain target radar monitoring data; and carrying out static feature and dynamic feature recognition on the target radar monitoring data through a radar data recognition channel to obtain the second road feature.
Specifically, road feature analysis is performed based on the target radar monitoring data to obtain a second road feature, which comprises a second static feature and a second dynamic feature. The second static feature contains the same types of parameters as the first static feature, and the second dynamic feature contains the same types of parameters as the first dynamic feature, although the parameter values may differ. The specific acquisition process is described in detail below.
Specifically, the target road area is monitored by the monitoring radar within the preset time zone to obtain initial radar monitoring data, i.e. the reflected-wave data output by the monitoring radar. The initial radar monitoring data is denoised, for example using a spatial filter or a time-domain filter to reduce high-frequency noise, to obtain the target radar monitoring data. Static and dynamic feature recognition is then performed on the target radar monitoring data through a radar data recognition channel to obtain the second road feature, where the radar data recognition channel is an existing model that analyses target positions based on the reflected waves output by the radar. That is, the monitoring radar emits electromagnetic waves to irradiate the target road area and receives the reflected waves, i.e. the radio waves reflected by the targets; based on the prior art, the positions of the various targets can be located from the reflected-wave data. Since detection is performed continuously within the preset time zone, each detection result is analysed and compared: if the position and distance of a target detected multiple times do not change, the detected position is taken as part of the second static feature; conversely, if they change dynamically, the detected speed of change and the number of such targets are taken as the second dynamic feature.
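The repeated-detection comparison described above can be sketched as follows (a hedged illustration, not the patent's implementation; the function name, the 1-D range tracks and the tolerance threshold are assumptions):

```python
def classify_targets(tracks, dt, tol=0.5):
    """Split radar tracks into static and dynamic targets.

    tracks: {target_id: [range at t0, t1, ...]} in metres, one
            detection every `dt` seconds -- a simplified stand-in for
            the positions located from the reflected-wave data.
    A target whose position varies by less than `tol` over the window
    is treated as static (its mean position is kept); otherwise its
    mean radial speed is reported as a dynamic feature.
    """
    static, dynamic = {}, {}
    for tid, pos in tracks.items():
        if max(pos) - min(pos) < tol:
            static[tid] = sum(pos) / len(pos)        # stable position
        else:
            speed = (pos[-1] - pos[0]) / (dt * (len(pos) - 1))
            dynamic[tid] = speed                     # mean radial speed
    return static, dynamic
```

The number of entries in `dynamic` then corresponds to the detected quantity of moving targets mentioned in the embodiment.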
And carrying out fusion verification on the first road feature and the second road feature to obtain a target road perception feature.
In a preferred embodiment, further comprising:
acquiring a first static feature recognition accuracy and a first dynamic feature recognition accuracy of the image feature recognition channel; acquiring a second static feature recognition accuracy and a second dynamic feature recognition accuracy of the radar data recognition channel; performing fusion verification on the first static feature and the second static feature based on the first static feature identification accuracy and the second static feature identification accuracy to obtain a target static feature; performing fusion verification on first dynamic characteristics and second dynamic characteristics in the first road characteristics and the second road characteristics based on the first dynamic characteristic identification accuracy and the second dynamic characteristic identification accuracy to obtain target dynamic characteristics; and taking the target static characteristic and the target dynamic characteristic as the target road perception characteristic.
In a preferred embodiment, further comprising:
setting a first weight and a second weight based on the first static feature recognition accuracy and the second static feature recognition accuracy; and weighting the first static feature and the second static feature according to the first weight and the second weight to obtain the target static feature.
Fusion verification is performed on the first road feature and the second road feature to obtain a target road perception feature; the specific process is as follows.
A first static feature recognition accuracy and a first dynamic feature recognition accuracy of the image feature recognition channel are obtained. The first static feature recognition accuracy is the output accuracy measured by testing at the completion of training of the static feature recognition branch of the image feature recognition channel; the first dynamic feature recognition accuracy is likewise the output accuracy measured at the completion of training of the dynamic feature recognition branch. A second static feature recognition accuracy and a second dynamic feature recognition accuracy of the radar data recognition channel are then obtained, for example through actual testing. Based on the first static feature recognition accuracy and the second static feature recognition accuracy, fusion verification is performed on the first static feature and the second static feature to obtain a target static feature. In short, the first static feature and the second static feature contain the same types of parameters but possibly different parameter values, so the parameter values of the same type need to be fused so that each type of parameter has only one value, which is taken as the target static feature.
Based on the first dynamic feature recognition accuracy and the second dynamic feature recognition accuracy, fusion verification is performed on the first dynamic feature and the second dynamic feature in the first road feature and the second road feature to obtain a target dynamic feature. Again, the two dynamic features contain the same types of parameters but possibly different parameter values, so the parameter values of the same type are fused so that each type of parameter has only one value, which is taken as the target dynamic feature. Finally, the target static feature and the target dynamic feature are taken as the target road perception feature, realizing the perception of road features and improving road perception precision.
The process of obtaining the target static feature is as follows: a first weight and a second weight are set based on the first static feature recognition accuracy and the second static feature recognition accuracy. Specifically, the sum of the first static feature recognition accuracy and the second static feature recognition accuracy is calculated, and each of the two accuracies is divided by this sum to obtain the first weight and the second weight respectively. The first static feature and the second static feature are then weighted according to the first weight and the second weight, i.e. the two parameter values belonging to the same feature are combined by weighted calculation, and the weighted result is taken as the target static feature.
Similarly, the first dynamic feature recognition accuracy and the second dynamic feature recognition accuracy are added, and each is divided by the sum to obtain a third weight and a fourth weight; the first dynamic feature and the second dynamic feature in the first road feature and the second road feature are then combined by weighted calculation according to the third weight and the fourth weight, and the weighted result is taken as the target dynamic feature.
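The accuracy-weighted fusion described in the paragraphs above can be sketched as follows (a minimal illustration; the function and parameter names are assumptions, and each channel is assumed to contribute a single scalar value per feature parameter):

```python
def fuse(value_img, value_radar, acc_img, acc_radar):
    """Accuracy-weighted fusion of one feature parameter: each
    channel's weight is its recognition accuracy divided by the sum of
    both accuracies, so the more reliable channel dominates."""
    total = acc_img + acc_radar
    w_img, w_radar = acc_img / total, acc_radar / total
    return w_img * value_img + w_radar * value_radar
```

The same function serves for static parameters (with the first and second weights) and for dynamic parameters (with the third and fourth weights), since both fusions follow the identical weighting rule.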
Therefore, the fusion of the first road feature and the second road feature is realized, and the road perception precision is improved.
Based on the analysis, the following beneficial effects can be achieved by one or more technical schemes provided by the application:
image acquisition is carried out on a target road area through image sensing equipment in a preset time zone, a target road image is obtained, road feature identification is carried out on the basis of the target road image, a first road feature is obtained, wherein the first road feature comprises a first static feature and a first dynamic feature, the target road area is monitored through a monitoring radar in the preset time zone, target radar monitoring data are obtained, road feature analysis is carried out on the basis of the target radar monitoring data, a second road feature is obtained, the second road feature comprises a second static feature and a second dynamic feature, and fusion verification is carried out on the first road feature and the second road feature, so that a target road perception feature is obtained. Therefore, road feature analysis and feature fusion are respectively carried out on the target road area through the image sensing equipment and the monitoring radar, and the technical effect of improving the road perception precision is achieved.
Example two
Based on the same inventive concept as the learning method for improving road perception accuracy in the foregoing embodiment, as shown in fig. 2, the present application further provides a learning system for improving road perception accuracy, the system comprising:
the image acquisition module 11 is used for acquiring an image of a target road area through the image sensing equipment in a preset time zone to acquire a target road image;
the image feature analysis module 12 is configured to perform road feature recognition based on the target road image to obtain a first road feature, where the first road feature includes a first static feature and a first dynamic feature;
the radar monitoring module 13 is configured to monitor the target road area in the preset time zone by using a monitoring radar to obtain target radar monitoring data;
the radar feature analysis module 14 is configured to perform a road feature analysis based on the target radar monitoring data, and obtain a second road feature, where the second road feature includes a second static feature and a second dynamic feature;
and the fusion verification module 15 is used for carrying out fusion verification on the first road feature and the second road feature to obtain a target road perception feature.
Further, the image feature analysis module 12 is further configured to:
preprocess the target road image to obtain a preprocessed image;
perform static feature recognition on the preprocessed image through a static feature recognition branch in an image feature recognition channel to obtain the first static feature, where the first static feature includes a first road surface obstacle feature and a first road surface damage feature;
perform dynamic feature recognition on the preprocessed image through a dynamic feature recognition branch in the image feature recognition channel to obtain the first dynamic feature, where the first dynamic feature includes a first pedestrian feature and a first vehicle feature;
and combine the first static feature and the first dynamic feature to form the first road feature.
Further, the image feature analysis module 12 is further configured to:
acquire a road image sample data set, where the road image sample data set includes road image samples and static feature samples;
perform gradient descent training on a convolutional neural network with the road image samples and the static feature samples to obtain the static feature recognition branch;
and perform static feature recognition on the preprocessed image with the static feature recognition branch to obtain the first static feature.
Further, the radar feature analysis module 14 is further configured to:
monitor the target road area within the preset time period through the monitoring radar to obtain initial radar monitoring data;
denoise the initial radar monitoring data to obtain the target radar monitoring data;
and perform static feature and dynamic feature recognition on the target radar monitoring data through a radar data recognition channel to obtain the second road feature.
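The denoising step above is not pinned to a specific filter in the patent. As one plausible example, a sliding median filter is a common choice for suppressing impulsive noise in radar returns; the sketch below applies it to a toy sample sequence. The function name `median_denoise`, the window size, and the sample values are all assumptions for illustration.

```python
import numpy as np

# Illustrative denoising of initial radar monitoring data with a
# sliding median filter (one common choice; the patent does not
# specify the filter). Edges are padded by reflection so the output
# has the same length as the input.

def median_denoise(samples, window=5):
    half = window // 2
    padded = np.pad(samples, half, mode="reflect")
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(samples))])

# Toy return sequence: steady signal near 1.0 with two noise spikes.
raw = np.array([1.0, 1.1, 9.0, 1.2, 0.9, 1.0, -7.0, 1.1])
clean = median_denoise(raw)   # spikes are replaced by local medians
```

The denoised sequence (`clean`) then feeds the radar data recognition channel for static and dynamic feature recognition.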
Further, the fusion verification module 15 is further configured to:
acquire a first static feature recognition accuracy and a first dynamic feature recognition accuracy of the image feature recognition channel;
acquire a second static feature recognition accuracy and a second dynamic feature recognition accuracy of the radar data recognition channel;
perform fusion verification on the first static feature and the second static feature based on the first static feature recognition accuracy and the second static feature recognition accuracy to obtain a target static feature;
perform fusion verification on the first dynamic feature and the second dynamic feature based on the first dynamic feature recognition accuracy and the second dynamic feature recognition accuracy to obtain a target dynamic feature;
and take the target static feature and the target dynamic feature as the target road perception feature.
Further, the fusion verification module 15 is further configured to:
set a first weight and a second weight based on the first static feature recognition accuracy and the second static feature recognition accuracy;
and weight the first static feature and the second static feature according to the first weight and the second weight to obtain the target static feature.
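The weighting scheme above can be sketched as follows. Normalizing the two recognition accuracies so the weights sum to one is one plausible reading of "setting a first weight and a second weight"; the patent itself does not fix a formula, and the function name `fuse_static` and the example accuracies are assumptions.

```python
import numpy as np

# Sketch of accuracy-weighted static feature fusion. The first weight
# comes from the image channel's accuracy, the second from the radar
# channel's; normalizing to sum to one is an assumed scheme.

def fuse_static(feat_image, feat_radar, acc_image, acc_radar):
    w1 = acc_image / (acc_image + acc_radar)   # first weight
    w2 = acc_radar / (acc_image + acc_radar)   # second weight
    return w1 * np.asarray(feat_image, dtype=float) \
         + w2 * np.asarray(feat_radar, dtype=float)

# Example: image channel 90% accurate, radar channel 60% accurate,
# so the fused result leans toward the image channel's feature values.
target_static = fuse_static([1.0, 0.0], [0.0, 1.0], 0.9, 0.6)
```

The same weighted combination applies symmetrically to the dynamic features, using the two dynamic feature recognition accuracies instead.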
The specific examples of the learning method for improving road perception accuracy in the first embodiment are also applicable to the learning system for improving road perception accuracy in the present embodiment. From the foregoing detailed description of the learning method, those skilled in the art can clearly understand the learning system of the present embodiment, so for brevity of description, details are not repeated here.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added, or deleted, as long as the desired results of the technology disclosed in the present application are achieved; no limitation is imposed herein.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of protection of the present application. Therefore, although the present application has been described in detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from the concept of the present application; its scope is defined by the appended claims.

Claims (7)

1. A learning method for improving road perception accuracy, the method comprising:
performing image acquisition on a target road area through an image sensing device within a preset time period to obtain a target road image;
performing road feature recognition based on the target road image to obtain a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature;
monitoring the target road area through a monitoring radar within the preset time period to obtain target radar monitoring data;
performing road feature analysis based on the target radar monitoring data to obtain a second road feature, wherein the second road feature comprises a second static feature and a second dynamic feature;
and performing fusion verification on the first road feature and the second road feature to obtain a target road perception feature.
2. The method of claim 1, wherein the performing road feature recognition based on the target road image to obtain a first road feature, the first road feature comprising a first static feature and a first dynamic feature, comprises:
preprocessing the target road image to obtain a preprocessed image;
performing static feature recognition on the preprocessed image through a static feature recognition branch in an image feature recognition channel to obtain the first static feature, wherein the first static feature comprises a first road surface obstacle feature and a first road surface damage feature;
performing dynamic feature recognition on the preprocessed image through a dynamic feature recognition branch in the image feature recognition channel to obtain the first dynamic feature, wherein the first dynamic feature comprises a first pedestrian feature and a first vehicle feature;
and combining the first static feature and the first dynamic feature to form the first road feature.
3. The method of claim 2, wherein the performing static feature recognition on the preprocessed image through the static feature recognition branch in the image feature recognition channel to obtain the first static feature comprises:
acquiring a road image sample data set, wherein the road image sample data set comprises road image samples and static feature samples;
performing gradient descent training on a convolutional neural network with the road image samples and the static feature samples to obtain the static feature recognition branch;
and performing static feature recognition on the preprocessed image with the static feature recognition branch to obtain the first static feature.
4. The method of claim 3, wherein the performing road feature analysis based on the target radar monitoring data to obtain a second road feature, the second road feature comprising a second static feature and a second dynamic feature, comprises:
monitoring the target road area within the preset time period through the monitoring radar to obtain initial radar monitoring data;
denoising the initial radar monitoring data to obtain the target radar monitoring data;
and performing static feature and dynamic feature recognition on the target radar monitoring data through a radar data recognition channel to obtain the second road feature.
5. The method of claim 4, wherein the performing fusion verification on the first road feature and the second road feature to obtain a target road perception feature comprises:
acquiring a first static feature recognition accuracy and a first dynamic feature recognition accuracy of the image feature recognition channel;
acquiring a second static feature recognition accuracy and a second dynamic feature recognition accuracy of the radar data recognition channel;
performing fusion verification on the first static feature and the second static feature based on the first static feature recognition accuracy and the second static feature recognition accuracy to obtain a target static feature;
performing fusion verification on the first dynamic feature and the second dynamic feature based on the first dynamic feature recognition accuracy and the second dynamic feature recognition accuracy to obtain a target dynamic feature;
and taking the target static feature and the target dynamic feature as the target road perception feature.
6. The method of claim 5, wherein the performing fusion verification on the first static feature and the second static feature based on the first static feature recognition accuracy and the second static feature recognition accuracy to obtain a target static feature comprises:
setting a first weight and a second weight based on the first static feature recognition accuracy and the second static feature recognition accuracy;
and weighting the first static feature and the second static feature according to the first weight and the second weight to obtain the target static feature.
7. A learning system for improving road perception accuracy, configured to perform the steps of the learning method for improving road perception accuracy according to any one of claims 1 to 6, the system comprising:
an image acquisition module, configured to perform image acquisition on a target road area through an image sensing device within a preset time period to obtain a target road image;
an image feature analysis module, configured to perform road feature recognition based on the target road image to obtain a first road feature, wherein the first road feature comprises a first static feature and a first dynamic feature;
a radar monitoring module, configured to monitor the target road area through a monitoring radar within the preset time period to obtain target radar monitoring data;
a radar feature analysis module, configured to perform road feature analysis based on the target radar monitoring data to obtain a second road feature, wherein the second road feature comprises a second static feature and a second dynamic feature;
and a fusion verification module, configured to perform fusion verification on the first road feature and the second road feature to obtain a target road perception feature.
CN202311358656.6A 2023-10-19 2023-10-19 Learning method and system for improving road perception precision Pending CN117496463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311358656.6A CN117496463A (en) 2023-10-19 2023-10-19 Learning method and system for improving road perception precision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311358656.6A CN117496463A (en) 2023-10-19 2023-10-19 Learning method and system for improving road perception precision

Publications (1)

Publication Number Publication Date
CN117496463A true CN117496463A (en) 2024-02-02

Family

ID=89666783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311358656.6A Pending CN117496463A (en) 2023-10-19 2023-10-19 Learning method and system for improving road perception precision

Country Status (1)

Country Link
CN (1) CN117496463A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118379881A (en) * 2024-06-21 2024-07-23 华睿交通科技股份有限公司 Highway traffic safety early warning system based on vehicle-road cooperation
CN118379881B (en) * 2024-06-21 2024-09-06 华睿交通科技股份有限公司 Highway traffic safety early warning system based on vehicle-road cooperation

Similar Documents

Publication Publication Date Title
CN112462346B (en) Ground penetrating radar subgrade disease target detection method based on convolutional neural network
Liu et al. Novel YOLOv3 model with structure and hyperparameter optimization for detection of pavement concealed cracks in GPR images
CN114693615A (en) Deep learning concrete bridge crack real-time detection method based on domain adaptation
CN113009447B (en) Road underground cavity detection and early warning method based on deep learning and ground penetrating radar
CN109961106A (en) The training method and device of track disaggregated model, electronic equipment
CN117496463A (en) Learning method and system for improving road perception precision
CN109977968B (en) SAR change detection method based on deep learning classification comparison
CN110852158B (en) Radar human motion state classification algorithm and system based on model fusion
CN110516633A (en) A kind of method for detecting lane lines and system based on deep learning
CN113255580A (en) Method and device for identifying sprinkled objects and vehicle sprinkling and leaking
Barkataki et al. Classification of soil types from GPR B scans using deep learning techniques
CN109684986A (en) A kind of vehicle analysis method and system based on automobile detecting following
US11668857B2 (en) Device, method and computer program product for validating data provided by a rain sensor
CN114120150B (en) Road target detection method based on unmanned aerial vehicle imaging technology
CN117351321A (en) Single-stage lightweight subway lining cavity recognition method and related equipment
CN116340770A (en) Intelligent identification method for tunnel lining geological radar data
CN111144279A (en) Method for identifying obstacle in intelligent auxiliary driving
CN110991507A (en) Road underground cavity identification method, device and system based on classifier
Al-Suleiman et al. Assessment of the effect of alligator cracking on pavement condition using WSN-image processing
Qian et al. Deep Learning-Augmented Stand-off Radar Scheme for Rapidly Detecting Tree Defects
CN111007464B (en) Road underground cavity identification method, device and system based on optimal weighting
CN114529815A (en) Deep learning-based traffic detection method, device, medium and terminal
Tran et al. GAN–XGB–cavity: automated estimation of underground cavities’ properties using GPR data
Li et al. Road sub-surface defect detection based on gprMax forward simulation-sample generation and Swin Transformer-YOLOX
CN110472571A (en) A kind of spacing determines method, apparatus and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination