CN115451901A - Method and device for classifying and identifying road surface unevenness, vehicle and storage medium - Google Patents

Method and device for classifying and identifying road surface unevenness, vehicle and storage medium

Info

Publication number
CN115451901A
Authority
CN
China
Prior art keywords
target
vehicle
road surface
time
time point
Prior art date
Legal status
Pending
Application number
CN202211088233.2A
Other languages
Chinese (zh)
Inventor
洪日
张建
刘秋铮
王超
王御
谢飞
韩亚凝
杜杰
Current Assignee
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202211088233.2A priority Critical patent/CN115451901A/en
Publication of CN115451901A publication Critical patent/CN115451901A/en
Priority to PCT/CN2023/116866 priority patent/WO2024051661A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/30Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring roughness or irregularity of surfaces
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention discloses a method and device for classifying and identifying road surface unevenness, a vehicle and a storage medium. The method comprises: obtaining at least one group of training data sequences, and performing at least one round of training on a preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model; acquiring real-time pose data of a front vehicle after the front vehicle occludes a predicted future driving track of the vehicle; and inputting the real-time pose data into the road surface unevenness prediction model to determine the unevenness classification and identification result matched with the road surface on which the front vehicle is driving. This technical scheme improves the accuracy and robustness of the road surface unevenness prediction model and the accuracy of identifying the unevenness of the road surface ahead.

Description

Method and device for classifying and identifying road surface unevenness, vehicle and storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a method and a device for classifying and recognizing road surface unevenness, a vehicle and a storage medium.
Background
Under urban road conditions, the field of view of the vehicle's forward vision sensor is frequently blocked by other vehicles on the road, so the condition of the road surface ahead cannot be sensed and the existing road surface unevenness identification function fails. This function feeds other functions such as whole-vehicle mode switching, vehicle dynamic state prediction and active suspension control, so its failure degrades vehicle safety, comfort and other aspects of performance. The prior art generally uses a vision sensor to identify the elevation and unevenness of the road surface ahead.
In the process of implementing the invention, the inventor found that the prior art has the following defect: existing methods for identifying the unevenness of the road surface ahead do not consider the view-blocking problem that widely exists under urban conditions in practical applications, so the condition of the road surface ahead cannot be sensed.
Disclosure of Invention
The invention provides a classification and identification method and device for road surface unevenness, a vehicle and a storage medium, which are used for increasing the accuracy and robustness of a road surface unevenness prediction model and improving the accuracy of front road surface unevenness identification.
According to an aspect of the present invention, there is provided a classification recognition method of road surface unevenness, the method comprising:
obtaining at least one group of training data sequences, and carrying out at least one round of training on a preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model;
acquiring real-time pose data of a front vehicle after the front vehicle occludes a predicted future driving track of the vehicle;
and inputting the real-time pose data into the road surface unevenness prediction model, and determining an unevenness classification identification result matched with the running road surface of the front vehicle.
According to another aspect of the present invention, there is provided a classification recognition apparatus of road surface unevenness, the apparatus including:
the road surface unevenness prediction model acquisition module is used for acquiring at least one group of training data sequences and carrying out at least one round of training on a preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model;
the pose data acquisition module is used for acquiring real-time pose data of the front vehicle after the front vehicle occludes a predicted future driving track of the vehicle;
and the classification and identification result determining module is used for inputting the real-time pose data to the road surface unevenness prediction model and determining an unevenness classification and identification result matched with the running road surface of the front vehicle.
According to another aspect of the present invention, there is provided a vehicle including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to execute the method for classifying and identifying road surface unevenness according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the method for classifying and identifying road unevenness according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the technical scheme of the embodiment of the invention, at least one group of training data sequences is obtained, and a preset neural network model is trained for at least one round according to the at least one group of training data sequences to obtain a road surface unevenness prediction model; real-time pose data of the front vehicle are acquired after the front vehicle occludes the predicted future driving track of the vehicle; and the real-time pose data are input into the road surface unevenness prediction model to determine the unevenness classification and identification result matched with the road surface on which the front vehicle is driving. In other words, a neural network model is trained with a large amount of training data to obtain the road surface unevenness prediction model, and the model determines the unevenness classification and identification result of the road surface driven over by the front vehicle from the pose signals of the front vehicle. This solves the problem in the prior art that the road surface ahead cannot be sensed when the view of the vision sensor in front of the vehicle is blocked, improves the accuracy and robustness of the road surface unevenness prediction model, and improves the accuracy of identifying the unevenness of the road surface ahead.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1a is a flowchart of a classification and identification method for road surface unevenness according to an embodiment of the present invention;
fig. 1b is an application scenario diagram of a classification and identification method for road surface unevenness according to an embodiment of the present invention;
fig. 2 is a flowchart of another classification and identification method for road surface unevenness according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a classification and identification device for road surface unevenness according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle implementing the method for classifying and identifying road surface unevenness according to the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1a is a flowchart of a method for classifying and identifying road unevenness according to an embodiment of the present invention, where the embodiment is applicable to a situation where a vehicle ahead blocks a road surface on a predicted future driving track of the vehicle, and the method may be implemented by a device for classifying and identifying road unevenness, which may be implemented in a form of hardware and/or software, and the device for classifying and identifying road unevenness may be configured in a main controller of the vehicle. As shown in fig. 1a, the method comprises:
s110, obtaining at least one group of training data sequences, and carrying out at least one round of training on a preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model.
Under urban road conditions, the field of view of the vehicle's forward vision sensor is frequently blocked by other vehicles on the road, the condition of the road surface ahead cannot be sensed, and the existing road surface unevenness identification function therefore fails.
At least one set of training data sequences may be obtained from data collected during actual driving of a large number of user vehicles, and each set of training data sequences may include pose data of a preceding vehicle and matched road surface irregularity tag data, for example, pose data of a preceding vehicle at a certain time point and an irregularity tag of a road surface on which the preceding vehicle is driven at the same time. The preset Neural Network model may be RNN (Recurrent Neural Network) suitable for calculation of time series signals, such as LSTM (Long Short-Term Memory); the preset Neural network model may also be CNN (Convolutional Neural Networks). The road unevenness prediction model may be a model obtained by training a preset neural network model using at least one set of training data sequences.
In this embodiment, at least one set of training data sequences may be obtained in advance, and at least one round of training is performed on the preset neural network model according to the plurality of sets of training data sequences, so as to obtain a trained model with enhanced accuracy and robustness as the road unevenness prediction model.
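For illustration only, the following Python sketch shows one way such a pose-sequence classifier could be structured and trained with an LSTM. The feature count, hidden size, three-class output, tensor shapes and function names are assumptions made for the example, not details taken from this disclosure.

```python
# Illustrative sketch (assumed shapes and hyperparameters): an LSTM that maps a
# sequence of front-vehicle pose samples (e.g. speed, roll, pitch, yaw rate per
# time step) to one of three road surface unevenness classes.
import torch
import torch.nn as nn

class RoadUnevennessLSTM(nn.Module):
    def __init__(self, n_features=4, hidden_size=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])      # class logits: (batch, n_classes)

def train_one_round(model, batches, lr=1e-3):
    """One training round over an iterable of (pose_seq, label) mini-batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for pose_seq, label in batches:    # pose_seq: float tensor, label: long tensor
        optimizer.zero_grad()
        loss = loss_fn(model(pose_seq), label)
        loss.backward()
        optimizer.step()
    return model
```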
And S120, acquiring real-time pose data of the front vehicle after the front vehicle occludes the predicted future driving track of the vehicle.
In this embodiment, referring to fig. 1b, when the vehicle is traveling on the road surface, a predicted future driving track of the vehicle may be obtained from the current steering wheel angle. If the front vehicle blocks part of the viewing-angle range of the vehicle's forward-looking sensor, the forward-looking sensor cannot identify the unevenness of the road surface that the vehicle will pass over at a future moment, because the vehicle will later drive over the road surface currently under the wheels of the front vehicle. In this case, real-time pose data of the front vehicle may be collected, and the unevenness of the road surface under the front vehicle's wheels may be identified by analyzing the pose data of the front vehicle.
And S130, inputting the real-time pose data to the road surface unevenness prediction model, and determining an unevenness classification identification result matched with the running road surface of the front vehicle.
The identification result of the unevenness classification may include the unevenness grade classification and the confidence of each grade. The unevenness level classification may include, for example, mild, general, and severe categories.
In this embodiment, the collected pose data of the leading vehicle may be input to a road surface unevenness prediction model trained in advance, so as to obtain an unevenness grade classification result matched with the road surface on which the leading vehicle runs and confidence of each grade.
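Continuing the illustrative sketch above, inference on the real-time pose data could look as follows; the grade names and the use of a softmax to obtain a per-grade confidence are assumptions.

```python
# Illustrative inference step: real-time pose sequence of the front vehicle
# -> unevenness grade and per-grade confidence. Grade names are assumed.
import torch

GRADES = ["mild", "general", "severe"]

def classify_unevenness(model, pose_seq):
    """pose_seq: tensor of shape (1, seq_len, n_features)."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(pose_seq), dim=-1)[0]
    best = int(torch.argmax(probs))
    return GRADES[best], {g: float(p) for g, p in zip(GRADES, probs)}
```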
Optionally, after the classification result of the unevenness of the running road surface of the front vehicle is determined, an input signal can be provided for other control systems of the vehicle according to the condition of the vehicle, so that the driving comfort and safety of the vehicle are improved.
According to the technical scheme of the embodiment of the invention, at least one group of training data sequences is obtained, and a preset neural network model is trained for at least one round according to the at least one group of training data sequences to obtain a road surface unevenness prediction model; real-time pose data of the front vehicle are acquired after the front vehicle occludes the predicted future driving track of the vehicle; and the real-time pose data are input into the road surface unevenness prediction model to determine the unevenness classification and identification result matched with the road surface on which the front vehicle is driving. In other words, a neural network model is trained with a large amount of training data to obtain the road surface unevenness prediction model, and the model determines the unevenness classification and identification result of the road surface driven over by the front vehicle from the pose signals of the front vehicle. This solves the problem in the prior art that the road surface ahead cannot be sensed when the view of the vision sensor in front of the vehicle is blocked, improves the accuracy and robustness of the road surface unevenness prediction model, and improves the accuracy of identifying the unevenness of the road surface ahead.
Example two
Fig. 2 is a flowchart of another classification and identification method for road surface unevenness according to a second embodiment of the present invention, and this embodiment refines at least one set of training data sequences obtained on the basis of the foregoing embodiments. As shown in fig. 2, the method includes:
s210, when the target vehicle ahead occlusion appears on the predicted future driving track of the target vehicle, recording a first time point and a target position of the target vehicle ahead, and collecting real-time pose data of the target vehicle ahead after the first time point.
The first time point may refer to a time point when the forward-looking sensor of the target vehicle detects the target forward vehicle on the future predicted travel track. The target position may refer to an actual position of the target preceding vehicle at the first time point.
In this embodiment, to train the preset neural network model, at least one group of training data sequences may be obtained in advance. Taking the acquisition of one group of training data sequences as an example: when an occlusion by the target front vehicle appears on the predicted future driving track of the target vehicle, the first time point and the target position of the target front vehicle are recorded, and real-time pose data of the target front vehicle after the first time point are acquired.
S220, preliminarily determining, according to the real-time pose data of the target front vehicle, that the target road surface where the target front vehicle is located is seriously uneven, and judging whether the target vehicle drives to the target position along the predicted future driving track after the first time point.
The real-time pose data of the target front vehicle can comprise real-time vehicle speed and real-time roll angle. The real-time pose data of the target leading vehicle can be acquired by a sensor configured on the target vehicle.
Because vehicle-body size and suspension performance differ between vehicles, different vehicles show different body-attitude responses on road surfaces of the same unevenness grade. The unevenness grade of the road surface therefore cannot be judged directly from the pose data of the front vehicle; the specific grade should instead be judged from the vehicle's own dynamics response signals after it has passed over the road surface ahead.
In this embodiment, when the target road surface of the target front vehicle is preliminarily determined to be seriously uneven according to the real-time vehicle speed and real-time roll angle of the target front vehicle, it is further necessary to determine whether the target vehicle drives to the target position along the predicted future driving track after the first time point.
In an optional embodiment, preliminarily determining according to the real-time pose data that the target road surface of the target front vehicle is seriously uneven may include: acquiring the current vehicle speed and the current roll angle of the target front vehicle; calculating the current roll variance value of the target front vehicle according to the current vehicle speed and the current roll angle; and determining, according to the current roll variance value and a preset variance threshold, that the target road surface of the target front vehicle at the current moment is seriously uneven.
Optionally, calculating the current roll variance value of the target front vehicle according to the current vehicle speed and the current roll angle may include: calculating the required quantity of historical pose data of the target front vehicle at the current moment according to the current vehicle speed, a preset acquisition period and a preset unevenness judgment interval length; acquiring the corresponding number of historical roll angles of the target front vehicle according to the required quantity; and calculating the current roll variance value of the target front vehicle according to the required quantity and each historical roll angle of the target front vehicle.
The preset acquisition period may refer to the acquisition period of the pose data of the target front vehicle. The preset unevenness judgment interval specifies over what length of road surface the unevenness is judged; it may be predetermined according to information such as the body length and vibration-damping performance of the target vehicle. For example, if the body length of the target vehicle is 5 meters, the unevenness judgment interval may be preset to 5 meters. The required quantity of historical pose data of the target front vehicle at the current moment may refer to the number of pose-data samples of the target front vehicle collected in the period before the current moment.
Specifically, the current roll variance value may be calculated as

S^2 = (1/n) * Σ_{i=1}^{n} (φ_i − φ̄)^2

where S^2 indicates the degree of fluctuation of the roll of the body of the target front vehicle in the period of time before the current moment, φ_i denotes the roll angle of the target front vehicle at each sampling moment in that period, φ̄ is the mean of those roll angles, and n is the number of sampling points, i.e. the required quantity of historical pose data of the target front vehicle (the corresponding time length intercepted before the current moment is n·t_s). In order to spatially correspond the pose data of the front vehicle to the actual road surface unevenness, the number of sampling points varies with the vehicle speed and may be determined by

n = L_c / (t_s · u_f)

where L_c is the preset unevenness judgment interval length, which is a calibration quantity; t_s is the preset sampling period of the pose data of the target front vehicle; and u_f is the vehicle speed of the target front vehicle.
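A minimal sketch of this computation, assuming the variance form given above and illustrative values for the sampling period and the judgment interval length, might be:

```python
# Illustrative only: compute the sampling-point count n = L_c / (t_s * u_f) and
# the roll variance S^2 over the most recent n roll-angle samples.
import numpy as np

def roll_variance(roll_history, u_f, t_s=0.02, L_c=5.0):
    """roll_history: roll angles of the front vehicle, newest last;
    u_f: front-vehicle speed [m/s]; t_s: sampling period [s];
    L_c: unevenness judgment interval length [m] (calibration quantity)."""
    n = max(1, int(round(L_c / (t_s * u_f))))
    window = np.asarray(roll_history[-n:])   # last n roll-angle samples
    return float(np.var(window))             # S^2 over the window
```

A road segment would then be treated as seriously uneven when the returned value exceeds the preset variance threshold, as described next.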
Optionally, determining, according to the current roll variance value and a preset variance threshold, that the target road surface of the target front vehicle at the current moment is seriously uneven may include: judging whether the current roll variance value is greater than the preset variance threshold; and when the current roll variance value is greater than the preset variance threshold, determining that the target road surface of the target front vehicle at the current moment is seriously uneven.
Limited by the installation height of the sensors on the target vehicle, the roll-angle data in the pose data of the target front vehicle show the most distinct features and the highest precision, so the variance of the roll-angle data can be used as the basis for judging the unevenness of the road surface under the wheels of the target front vehicle. If the calculated roll variance value S^2 of the target front vehicle is greater than the preset variance threshold, the target front vehicle can be considered to have passed over a road surface with large unevenness. Accordingly, if S^2 is not greater than the preset variance threshold, the pose data of the target front vehicle can be discarded, that is, they are no longer used as a sampling object.
In another alternative embodiment, the determining whether the target vehicle travels to the target position according to the future travel predicted trajectory after the first time point may include:
when the target front vehicle occludes the predicted future driving track of the target vehicle, acquiring the predicted length between the target vehicle and the target front vehicle along the predicted future driving track and the current vehicle speed of the target vehicle; obtaining, according to the predicted length and the current vehicle speed, the predicted time length for the target vehicle to drive to the target position along the predicted future driving track, and obtaining a second time point at which the target vehicle is predicted to drive to the target position; acquiring a first yaw rate of the target vehicle at the first time point and a second yaw rate of the target vehicle when it actually drives to the second time point, and acquiring the real-time vehicle speed of the target vehicle between the first time point and the second time point; calculating the actual driving length of the target vehicle between the first time point and the second time point according to the real-time vehicle speed, the first time point and the second time point; calculating a yaw-rate integral value of the target vehicle between the first time point and the second time point according to the first yaw rate, the second yaw rate, the first time point and the second time point; and judging, according to the actual driving length, the predicted length, the yaw-rate integral value and a preset integral threshold, whether the target vehicle drives to the target position along the predicted future driving track after the first time point.
Referring to fig. 1b, the predicted length may be denoted as S and may be obtained, for example, by a radar sensor, a laser sensor or the like mounted on the target vehicle. The predicted time length may refer to the time predicted for the target vehicle to travel the distance S at its current vehicle speed. The second time point may refer to the time point at which the target vehicle is predicted to reach the target position of the target front vehicle according to S.
Specifically, the actual driving length of the target vehicle between the first time point and the second time point may be calculated as

s_a = ∫_{t_1}^{t_2} u_0 dt

where t_1 denotes the first time point, t_2 denotes the second time point and u_0 denotes the actual vehicle speed of the target vehicle; the formula can be understood as accumulating the actual vehicle speed of the target vehicle between the first time point and the second time point.

The yaw-rate integral value of the target vehicle between the first time point and the second time point may be obtained as

W = ∫_{t_1}^{t_2} (w_yaw − w_yaw0) dt

where w_yaw denotes the second yaw rate of the target vehicle (the yaw rate measured as it actually drives up to the second time point) and w_yaw0 denotes the first yaw rate of the target vehicle at the first time point.
On the basis of the above-described embodiment, when the first difference between the actual travel length and the predicted length does not exceed the preset difference threshold value and the yaw-rate integrated value is smaller than the preset integrated threshold value, it is determined that the target vehicle travels to the target position following the future-travel predicted trajectory after the first time point.
Specifically, a first difference between the actual running length and the predicted length of the target vehicle may be calculated, and when the first difference does not exceed a preset difference threshold and the yaw rate integral is smaller than a preset integral threshold, it may be determined that the target vehicle runs to the target position following the future-running predicted trajectory after the first time point. Accordingly, if the first difference exceeds the preset difference threshold and/or the yaw-rate integral is not less than the preset integral threshold, it may be determined that the target vehicle has not traveled to the target position according to the predicted future travel trajectory after the first time point, and the target vehicle may not be continuously taken as the sampling object.
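As an illustration of this check, the two integrals can be approximated by discrete sums over the signals sampled between the first and second time points; the threshold values, the use of the absolute value of the yaw-rate integral and the signal layout are assumptions.

```python
# Illustrative check that the target vehicle actually followed the predicted
# trajectory between t1 and t2. Thresholds and signal layout are assumed.
import numpy as np

def followed_predicted_track(speeds, yaw_rates, t_s, predicted_length,
                             length_tol=1.0, yaw_integral_thresh=0.05):
    """speeds / yaw_rates: samples between t1 and t2 at sampling period t_s [s]."""
    speeds = np.asarray(speeds)
    yaw_rates = np.asarray(yaw_rates)
    actual_length = float(np.sum(speeds) * t_s)                    # ~ integral of u_0 dt
    yaw_integral = float(np.sum(yaw_rates - yaw_rates[0]) * t_s)   # ~ integral of (w_yaw - w_yaw0) dt
    return (abs(actual_length - predicted_length) <= length_tol
            and abs(yaw_integral) < yaw_integral_thresh)
```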
And S230, if so, acquiring a real-time dynamics response signal of the target vehicle after the target vehicle drives to the target position according to the future driving prediction track.
The dynamics response signals may include suspension vertical acceleration signals, sprung-mass vertical acceleration signals, vehicle-body pitch rate signals, vehicle-body roll angle signals and the like, and may be acquired through sensors configured on the target vehicle.
S240, determining the unevenness classification and identification result of the target road surface again according to the real-time dynamic response signal of the target vehicle, and taking the unevenness classification and identification result as a label data sequence matched with the target road surface.
In this embodiment, the unevenness classification recognition result of the target road surface on which the vehicle ahead of the target is running may be determined again according to the fluctuation condition of the real-time dynamics response signal of the target vehicle, so that the unevenness classification recognition result is used as the tag data sequence matched with the target road surface.
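This disclosure does not fix a particular mapping from the dynamics response signals to the unevenness label; the sketch below assumes the sprung-mass vertical acceleration as the signal and arbitrary thresholds, purely for illustration.

```python
# Illustrative labelling of the target road surface from the target vehicle's
# own dynamics response. The chosen signal and thresholds are assumptions.
import numpy as np

def unevenness_label(vertical_acc, mild_thresh=0.5, severe_thresh=1.5):
    """vertical_acc: sprung-mass vertical-acceleration samples [m/s^2]
    recorded while the target vehicle crosses the target road segment."""
    fluctuation = float(np.std(np.asarray(vertical_acc)))
    if fluctuation < mild_thresh:
        return "mild"
    if fluctuation < severe_thresh:
        return "general"
    return "severe"
```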
And S250, aligning the tag data sequence with the real-time pose data of the target front vehicle in a time sequence to obtain an alignment result, and taking the alignment result as a group of training data sequences.
In this embodiment, since the distance difference exists between the target vehicle and the target preceding vehicle, the tag data sequence obtained according to the dynamic response signal of the target vehicle and the real-time pose data of the target preceding vehicle are different in time sequence, and therefore, the tag data sequence and the real-time pose data of the target preceding vehicle can be aligned in time sequence, and the alignment result is used as a set of training data sequences.
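One simple way to realize this alignment, assuming the delay between the two vehicles is expressed as a whole number of sampling steps, is sketched below; the function and variable names are illustrative.

```python
# Illustrative time alignment: pair each front-vehicle pose sample with the
# target-vehicle label obtained delay_steps samples later, when the target
# vehicle reaches the same road segment. The delay estimate is assumed given.
def align_labels_to_pose(pose_samples, label_samples, delay_steps):
    """Returns (pose, label) pairs referring to the same road segment."""
    pairs = []
    for i, pose in enumerate(pose_samples):
        j = i + delay_steps
        if j < len(label_samples):
            pairs.append((pose, label_samples[j]))
    return pairs
```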
Setting the operations of S210-S250 in this way has the advantage that the authenticity of the training data set can be improved, which further improves the robustness of the road surface unevenness prediction model algorithm.
And S260, judging whether the number of the training data sequences meets a preset condition, if so, acquiring at least one group of training data sequences to execute the operation of S270, and if not, returning to execute the operation of S210.
The preset condition may refer to a preset number requirement value of the training data sequence, for example, hundreds, thousands of groups, etc.
In this embodiment, multiple groups of training data sequences are needed, that is, the at least one group of training data sequences is acquired from multiple pairs of target vehicles and target front vehicles. After one group of training data sequences is obtained, whether the number of training data sequences satisfies the preset condition may be judged; if so, the at least one group of training data sequences has been obtained, and if not, the operations of S210-S250 are executed again until the number of training data sequences satisfies the preset condition.
And S270, performing at least one round of training on the preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model.
And S280, acquiring real-time pose data of the front vehicle after the front vehicle occludes the predicted future driving track of the vehicle.
And S290, inputting the real-time pose data into the road surface unevenness prediction model, and determining an unevenness classification and identification result matched with the driving road surface of the front vehicle.
According to the technical scheme of the embodiment of the invention, the unevenness of the target road surface is judged for the first time by collecting pose data of the target front vehicle during actual driving; when the target vehicle drives to the target road surface driven over by the target front vehicle along the predicted future driving track, the unevenness of the target road surface is judged again by collecting the dynamics response signals of the target vehicle; at least one group of training data sequences is obtained from the pose data of the target front vehicle and the finally obtained unevenness label sequence of the target road surface; and a preset neural network model is trained for at least one round according to the at least one group of training data sequences to obtain the road surface unevenness prediction model. Real-time pose data of the front vehicle are then acquired after the front vehicle occludes the predicted future driving track of the vehicle, and the real-time pose data are input into the road surface unevenness prediction model to determine the unevenness classification and identification result matched with the road surface on which the front vehicle is driving. In other words, a neural network model is trained with a large amount of training data to obtain the road surface unevenness prediction model, and the model determines the unevenness classification and identification result of the road surface driven over by the front vehicle from the pose signals of the front vehicle. This solves the problem in the prior art that the road surface ahead cannot be sensed when the view of the vision sensor in front of the vehicle is blocked, improves the accuracy and robustness of the road surface unevenness prediction model, and improves the accuracy of identifying the unevenness of the road surface ahead.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a device for classifying and identifying road surface unevenness according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes: the road surface unevenness prediction model acquisition module 310, the pose data acquisition module 320 and the classification recognition result determination module 330. Wherein:
the road surface unevenness prediction model obtaining module 310 is configured to obtain at least one set of training data sequences, and perform at least one round of training on a preset neural network model according to the at least one set of training data sequences to obtain a road surface unevenness prediction model;
the pose data acquisition module 320 is used for acquiring real-time pose data of the vehicle in front after the vehicle in front occludes the predicted future driving track of the vehicle;
and the classification and identification result determination module 330 is configured to input the real-time pose data to the road surface irregularity prediction model, and determine an irregularity classification and identification result matched with the road surface on which the preceding vehicle runs.
According to the technical scheme of the embodiment of the invention, at least one group of training data sequences is obtained, and a preset neural network model is trained for at least one round according to the at least one group of training data sequences to obtain a road surface unevenness prediction model; real-time pose data of the front vehicle are acquired after the front vehicle occludes the predicted future driving track of the vehicle; and the real-time pose data are input into the road surface unevenness prediction model to determine the unevenness classification and identification result matched with the road surface on which the front vehicle is driving. In other words, a neural network model is trained with a large amount of training data to obtain the road surface unevenness prediction model, and the model determines the unevenness classification and identification result of the road surface driven over by the front vehicle from the pose signals of the front vehicle. This solves the problem in the prior art that the road surface ahead cannot be sensed when the view of the vision sensor in front of the vehicle is blocked, improves the accuracy and robustness of the road surface unevenness prediction model, and improves the accuracy of identifying the unevenness of the road surface ahead.
Optionally, the road unevenness prediction model obtaining module 310 includes:
performing the following for a set of target vehicles and target lead vehicles:
the real-time pose data acquisition unit is used for recording a first time point and the target position of the target front vehicle when an occlusion by the target front vehicle appears on the predicted future driving track of the target vehicle, and acquiring real-time pose data of the target front vehicle after the first time point;
the target vehicle running position judging unit is used for preliminarily determining that the target road surface where the target front vehicle is located is seriously uneven according to the real-time pose data of the target front vehicle and judging whether the target vehicle runs to the target position according to the future running prediction track after the first time point;
the real-time dynamics response signal acquisition unit is used for acquiring real-time dynamics response signals of the target vehicle after the target vehicle drives to the target position according to the future driving prediction track if the target vehicle drives to the target position according to the future driving prediction track;
the tag data sequence acquisition unit is used for determining the unevenness classification and identification result of the target road surface again according to the real-time dynamic response signal of the target vehicle and taking the unevenness classification and identification result as a tag data sequence matched with the target road surface;
and the training data sequence acquisition unit is used for aligning the tag data sequence with the real-time pose data of the target front vehicle in time sequence to obtain an alignment result, and taking the alignment result as a group of training data sequences.
Optionally, the real-time pose data of the target front vehicle includes a real-time vehicle speed and a real-time roll angle;
accordingly, the target vehicle travel position determination unit includes:
the vehicle speed and roll angle acquisition subunit is used for acquiring the current vehicle speed and the current roll angle of the target front vehicle;
the roll variance value calculation operator unit is used for calculating the current roll variance value of the target front vehicle according to the current vehicle speed and the current roll angle;
and the target road surface severe unevenness determining subunit is used for determining the target road surface severe unevenness of the target front vehicle at the current moment according to the current rolling variance value and a preset variance threshold value.
Optionally, the roll variance value calculating sub-unit may be specifically configured to:
calculating the required quantity of historical pose data of the target front vehicle at the current moment according to the current vehicle speed, a preset acquisition period and the length of a preset unevenness judgment interval;
acquiring historical target front vehicle side inclination angles in corresponding quantity according to the required quantity;
and calculating the current roll variance value of the target front vehicle according to the required quantity and each historical target front vehicle roll angle.
Optionally, the target road surface severe unevenness determining subunit may be specifically configured to:
judging whether the current roll variance value is larger than the preset variance threshold value or not;
and when the current roll variance value is larger than the preset variance threshold value, determining that the target road surface of the target front vehicle at the current moment is seriously uneven.
Optionally, the target vehicle driving position determining unit further includes:
the current vehicle speed obtaining subunit of the target vehicle is configured to, when the target front vehicle occludes the predicted future driving track of the target vehicle, obtain the predicted length between the target vehicle and the target front vehicle along the predicted future driving track and the current vehicle speed of the target vehicle;
a second time point obtaining subunit, configured to obtain, according to the predicted length and the current vehicle speed, a predicted time length for the target vehicle to travel to the target position according to the future travel predicted trajectory, and obtain a second time point at which the target vehicle is predicted to travel to the target position;
a real-time vehicle speed acquisition subunit of the target vehicle, configured to acquire a first yaw rate of the target vehicle at the first time point, a second yaw rate of the target vehicle actually traveling to a second time point, and acquire a real-time vehicle speed of the target vehicle between the first time point and the second time point;
the actual running length calculating subunit is used for calculating the actual running length of the target vehicle between the first time point and the second time point according to the real-time vehicle speed, the first time point and the second time point;
a yaw-rate-integral-value calculating subunit configured to calculate a yaw-rate integral value of the target vehicle between the first time point and the second time point, based on the first yaw rate, the second yaw rate, the first time point, and the second time point;
and the target vehicle running position judging sub-unit is used for judging whether the target vehicle runs to the target position according to the future running prediction track after the first time point according to the actual running length, the prediction length, the yaw rate integral value and a preset integral threshold value.
Optionally, the target vehicle driving position determining subunit may specifically be configured to:
and when a first difference between the actual running length and the predicted length does not exceed a preset difference threshold and the yaw rate integral value is smaller than the preset integral threshold, determining that the target vehicle runs to the target position according to the future running predicted track after the first time point.
The device for classifying and identifying the road surface unevenness can execute the method for classifying and identifying the road surface unevenness provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
FIG. 4 illustrates a schematic structural diagram of a vehicle 400 that may be used to implement an embodiment of the present invention. The vehicle is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The vehicle may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the vehicle 400 includes at least one processor 401, and a memory communicatively connected to the at least one processor 401, such as a Read Only Memory (ROM) 402, a Random Access Memory (RAM) 403, and the like, wherein the memory stores computer programs executable by the at least one processor, and the processor 401 may perform various suitable actions and processes according to the computer programs stored in the Read Only Memory (ROM) 402 or the computer programs loaded from the storage unit 18 into the Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the vehicle 400 may also be stored. The processor 401, ROM 402, and RAM 403 are connected to each other via the bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the vehicle 400 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the vehicle 400 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Processor 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 401 performs the various methods and processes described above, such as a classification recognition method of road surface irregularities.
In some embodiments, the classification recognition method of the road unevenness may be implemented as a computer program that is tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed on the vehicle 400 via the ROM 402 and/or the communication unit 19. When the computer program is loaded into the RAM 403 and executed by the processor 401, one or more steps of the above-described classification and identification method of road surface unevenness may be performed. Alternatively, in other embodiments, the processor 401 may be configured to perform the method of classification identification of road surface irregularities by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described herein may be implemented on a vehicle having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the vehicle. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that the flows shown above may be used in various forms, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in a different order, which is not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for classifying and identifying road surface unevenness, characterized by comprising the following steps:
obtaining at least one group of training data sequences, and carrying out at least one round of training on a preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model;
acquiring real-time pose data of a front vehicle after the front vehicle is shielded on a predicted future driving trajectory of the vehicle;
and inputting the real-time pose data into the road surface unevenness prediction model, and determining an unevenness classification and identification result matched with the road surface on which the front vehicle is driving.
2. The method of claim 1, wherein obtaining at least one group of training data sequences comprises:
performing the following for a target vehicle and a target front vehicle:
when the target front vehicle is shielded on the predicted future driving trajectory of the target vehicle, recording a first time point and a target position of the target front vehicle, and acquiring real-time pose data of the target front vehicle after the first time point;
preliminarily determining that the target road surface of the target front vehicle is seriously uneven according to the real-time pose data of the target front vehicle, and judging whether the target vehicle drives to the target position along the predicted future driving trajectory after the first time point;
if so, acquiring a real-time dynamics response signal of the target vehicle after the target vehicle drives to the target position along the predicted future driving trajectory;
re-determining the unevenness classification and identification result of the target road surface according to the real-time dynamics response signal of the target vehicle, and taking the result as a label data sequence matched with the target road surface;
and aligning the label data sequence with the real-time pose data of the target front vehicle in time sequence to obtain an alignment result, and taking the alignment result as a group of training data sequences.
3. The method of claim 2, wherein the real-time pose data of the target front vehicle includes a real-time vehicle speed and a real-time roll angle;
correspondingly, preliminarily determining that the target road surface of the target front vehicle is seriously uneven according to the real-time pose data comprises:
acquiring the current speed and the current roll angle of the target front vehicle;
calculating a current roll variance value of the target front vehicle according to the current vehicle speed and the current roll angle;
and determining that the target road surface of the target front vehicle at the current moment is seriously uneven according to the current roll variance value and a preset variance threshold value.
4. The method of claim 3, wherein calculating the current roll variance value of the target front vehicle according to the current vehicle speed and the current roll angle comprises:
calculating the required quantity of historical pose data of the target front vehicle at the current moment according to the current vehicle speed, a preset acquisition period, and the length of a preset unevenness judgment interval;
acquiring a corresponding number of historical roll angles of the target front vehicle according to the required quantity;
and calculating the current roll variance value of the target front vehicle according to the required quantity and each historical roll angle of the target front vehicle.
5. The method of claim 3, wherein determining that the target road surface of the target front vehicle at the current moment is seriously uneven according to the current roll variance value and the preset variance threshold value comprises:
judging whether the current roll variance value is larger than the preset variance threshold value;
and when the current roll variance value is larger than the preset variance threshold value, determining that the target road surface of the target front vehicle at the current moment is seriously uneven.
6. The method of claim 2, wherein judging whether the target vehicle drives to the target position along the predicted future driving trajectory after the first time point comprises:
when the target front vehicle is shielded on the predicted future driving trajectory of the target vehicle, acquiring the predicted length between the target vehicle and the target front vehicle along the predicted future driving trajectory and the current vehicle speed of the target vehicle;
obtaining, according to the predicted length and the current vehicle speed, a predicted duration for the target vehicle to drive to the target position along the predicted future driving trajectory, and obtaining a second time point at which the target vehicle is predicted to drive to the target position;
acquiring a first yaw rate of the target vehicle at the first time point and a second yaw rate of the target vehicle when the target vehicle actually travels to the second time point, and acquiring the real-time vehicle speed of the target vehicle between the first time point and the second time point;
calculating the actual running length of the target vehicle between the first time point and the second time point according to the real-time vehicle speed, the first time point and the second time point;
calculating a yaw rate integral value of the target vehicle between the first time point and the second time point according to the first yaw rate, the second yaw rate, the first time point, and the second time point;
and judging, according to the actual running length, the predicted length, the yaw rate integral value, and a preset integral threshold, whether the target vehicle drives to the target position along the predicted future driving trajectory after the first time point.
7. The method of claim 6, wherein judging, according to the actual running length, the predicted length, the yaw rate integral value, and the preset integral threshold, whether the target vehicle drives to the target position along the predicted future driving trajectory after the first time point comprises:
when a first difference between the actual running length and the predicted length does not exceed a preset difference threshold and the yaw rate integral value is smaller than the preset integral threshold, determining that the target vehicle drives to the target position along the predicted future driving trajectory after the first time point.
8. A device for classifying and identifying road surface unevenness, characterized by comprising:
the road surface unevenness prediction model acquisition module is used for acquiring at least one group of training data sequences and carrying out at least one round of training on a preset neural network model according to the at least one group of training data sequences to obtain a road surface unevenness prediction model;
the pose data acquisition module is used for acquiring real-time pose data of the front vehicle after the front vehicle is shielded on a predicted future driving trajectory of the vehicle;
and the classification and identification result determining module is used for inputting the real-time pose data into the road surface unevenness prediction model and determining an unevenness classification and identification result matched with the road surface on which the front vehicle is driving.
9. A vehicle, characterized in that the vehicle comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, enables the at least one processor to perform the method for classifying and identifying road surface unevenness according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a processor to execute the method for classifying and identifying road surface unevenness according to any one of claims 1 to 7.
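By way of illustration only, the following Python sketch restates the numeric checks recited in claims 4 to 7. The function names, the default acquisition period, judgment interval length, and thresholds, and the two-point trapezoidal reading of the yaw rate integral are assumptions introduced for this sketch; the present invention does not fix concrete values or interfaces.

```python
import numpy as np


def required_sample_count(vehicle_speed_mps: float,
                          acquisition_period_s: float,
                          interval_length_m: float) -> int:
    """Claim 4: number of historical roll-angle samples needed so that they
    span a road section of the preset judgment length at the current speed,
    roughly n = L / (v * T), rounded up."""
    distance_per_sample = max(vehicle_speed_mps * acquisition_period_s, 1e-6)
    return max(int(np.ceil(interval_length_m / distance_per_sample)), 1)


def is_seriously_uneven(roll_history_rad, vehicle_speed_mps,
                        acquisition_period_s=0.05, interval_length_m=20.0,
                        variance_threshold=1e-4) -> bool:
    """Claims 3 to 5: preliminary decision that the road under the front
    vehicle is seriously uneven when the variance of the most recent n roll
    angles exceeds a preset variance threshold."""
    n = required_sample_count(vehicle_speed_mps, acquisition_period_s,
                              interval_length_m)
    window = np.asarray(roll_history_rad[-n:], dtype=float)
    return float(np.var(window)) > variance_threshold


def drove_to_target_position(realtime_speeds_mps, acquisition_period_s,
                             first_yaw_rate_rps, second_yaw_rate_rps,
                             t1_s, t2_s, predicted_length_m,
                             length_diff_threshold_m=2.0,
                             yaw_integral_threshold_rad=0.1) -> bool:
    """Claims 6 and 7: the ego vehicle is taken to have followed the predicted
    trajectory to the target position when the actual running length (integral
    of the real-time speed between the two time points) stays within a preset
    difference of the predicted length and the yaw rate integral over the same
    interval stays below a preset integral threshold."""
    actual_length_m = float(np.sum(realtime_speeds_mps) * acquisition_period_s)
    yaw_integral_rad = abs(0.5 * (first_yaw_rate_rps + second_yaw_rate_rps)
                           * (t2_s - t1_s))
    return (abs(actual_length_m - predicted_length_m) <= length_diff_threshold_m
            and yaw_integral_rad < yaw_integral_threshold_rad)
```

For example, with an acquisition period of 0.05 s, a 20 m judgment interval, and a vehicle speed of 20 m/s, required_sample_count returns 20, so the roll variance would be evaluated over the most recent 20 roll-angle samples.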
CN202211088233.2A 2022-09-07 2022-09-07 Method and device for classifying and identifying road surface unevenness, vehicle and storage medium Pending CN115451901A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211088233.2A CN115451901A (en) 2022-09-07 2022-09-07 Method and device for classifying and identifying road surface unevenness, vehicle and storage medium
PCT/CN2023/116866 WO2024051661A1 (en) 2022-09-07 2023-09-05 Pavement unevenness classification and identification method and apparatus, vehicle, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211088233.2A CN115451901A (en) 2022-09-07 2022-09-07 Method and device for classifying and identifying road surface unevenness, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN115451901A (en) 2022-12-09

Family

ID=84302722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211088233.2A Pending CN115451901A (en) 2022-09-07 2022-09-07 Method and device for classifying and identifying road surface unevenness, vehicle and storage medium

Country Status (2)

Country Link
CN (1) CN115451901A (en)
WO (1) WO2024051661A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937046A (en) * 2023-01-09 2023-04-07 禾多科技(北京)有限公司 Road ground information generation method, device, equipment and computer readable medium
WO2024051661A1 (en) * 2022-09-07 2024-03-14 中国第一汽车股份有限公司 Pavement unevenness classification and identification method and apparatus, vehicle, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977641A (en) * 2017-12-14 2018-05-01 东软集团股份有限公司 A kind of method, apparatus, car-mounted terminal and the vehicle of intelligent recognition landform
US10800403B2 (en) * 2018-05-14 2020-10-13 GM Global Technology Operations LLC Autonomous ride dynamics comfort controller
CN109050535B (en) * 2018-07-25 2020-02-14 北京理工大学 Rapid terrain condition identification method based on vehicle attitude
CN111290386B (en) * 2020-02-20 2023-08-04 北京小马慧行科技有限公司 Path planning method and device and carrier
CN112896188B (en) * 2021-02-22 2022-07-15 浙江大学 Automatic driving decision control system considering front vehicle encounter
CN115451901A (en) * 2022-09-07 2022-12-09 中国第一汽车股份有限公司 Method and device for classifying and identifying road surface unevenness, vehicle and storage medium

Also Published As

Publication number Publication date
WO2024051661A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
CN115451901A (en) Method and device for classifying and identifying road surface unevenness, vehicle and storage medium
CN110134126B (en) Track matching method, device, equipment and medium
CN113022580B (en) Trajectory prediction method, trajectory prediction device, storage medium and electronic equipment
CN112526999B (en) Speed planning method, device, electronic equipment and storage medium
CN113183975B (en) Control method, device, equipment and storage medium for automatic driving vehicle
CN110617824B (en) Method, apparatus, device and medium for determining whether vehicle is on or off elevated road
CN113353083B (en) Vehicle behavior recognition method
CN114882198A (en) Target determination method, device, equipment and medium
CN114475656A (en) Travel track prediction method, travel track prediction device, electronic device, and storage medium
CN114463985A (en) Driving assistance method, device, equipment and storage medium
CN113971723A (en) Method, device, equipment and storage medium for constructing three-dimensional map in high-precision map
CN113978465A (en) Lane-changing track planning method, device, equipment and storage medium
CN114954532A (en) Lane line determination method, device, equipment and storage medium
CN115583258A (en) Automatic vehicle meeting control method and device, vehicle control equipment and medium
CN114120252B (en) Automatic driving vehicle state identification method and device, electronic equipment and vehicle
CN114895274A (en) Guardrail identification method
CN113822593A (en) Security situation assessment method and device, storage medium and electronic equipment
US20230294669A1 (en) Control method, vehicle, and storage medium
CN115630760A (en) Driving planning method, device, equipment and medium
CN115140040A (en) Car following target determining method and device, electronic equipment and storage medium
CN115839722A (en) Path planning method, device, equipment and medium for automatic driving vehicle
CN116767230A (en) Road detection method, device, electronic equipment and storage medium
CN116501018A (en) Method, device, equipment and storage medium for determining vehicle faults
CN115503754A (en) Intersection vehicle meeting control method and device, vehicle control equipment and storage medium
CN115909813A (en) Vehicle collision early warning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination