CN117349677A - Training method, device, equipment, medium and program product of pavement recognition model - Google Patents

Training method, device, equipment, medium and program product of pavement recognition model

Info

Publication number
CN117349677A
CN117349677A (application CN202311653765.0A)
Authority
CN
China
Prior art keywords
road
data
dimension
target
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311653765.0A
Other languages
Chinese (zh)
Other versions
CN117349677B (en)
Inventor
徐晓伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311653765.0A priority Critical patent/CN117349677B/en
Publication of CN117349677A publication Critical patent/CN117349677A/en
Application granted granted Critical
Publication of CN117349677B publication Critical patent/CN117349677B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2131Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on a transform domain processing, e.g. wavelet transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2123/00Data types
    • G06F2123/02Data types in the time domain, e.g. time-series data
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application provides a training method, device, equipment, medium and program product for a road surface recognition model, relating to the fields of Internet of Vehicles and intelligent transportation. The method comprises: acquiring vehicle driving data collected while a target vehicle passes through a target road, and dividing the driving data into windows to obtain window driving data for each of a plurality of window road sections; determining, from each piece of window driving data, the road section state of the corresponding window road section, the state being either flat or bumpy; and constructing a training sample set in which each piece of window driving data is a training sample and the corresponding road section state is its label, then training the road surface recognition model on that set. The method effectively strengthens the training of the road surface recognition model and improves the accuracy with which the trained model recognizes whether the road a vehicle is driving on is bumpy.

Description

Training method, device, equipment, medium and program product of pavement recognition model
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a training method, apparatus, device, medium, and program product for a pavement recognition model.
Background
Artificial intelligence (Artificial Intelligence, AI) is a comprehensive discipline that spans many fields and involves technologies at both the hardware and software levels. AI infrastructure technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, pre-training model technology, operation/interaction systems, and mechatronics. A pre-trained model, also called a large model or foundation model, can be widely applied to downstream tasks in all major directions of AI after fine-tuning. AI software technology mainly covers computer vision, speech processing, natural language processing, and machine learning/deep learning.
In the related art, feature extraction is usually performed directly on the collected vehicle driving data, and the road surface recognition model is trained on the resulting driving features. Because the number of vehicle-driving-data samples is insufficient, the model is under-trained and the recognition performance of the trained road surface recognition model is poor.
Disclosure of Invention
The embodiments of the present application provide a training method and apparatus for a road surface recognition model, an electronic device, a computer-readable storage medium, and a computer program product, which effectively strengthen the training of the road surface recognition model and improve the accuracy with which the trained model recognizes whether the road a vehicle is driving on is bumpy.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a training method of a pavement identification model, which comprises the following steps:
acquiring vehicle running data acquired by a target vehicle in the process of passing through a target road, and carrying out window division on the vehicle running data to acquire window running data corresponding to a plurality of window road sections respectively;
determining a road section state corresponding to the corresponding window road section based on each window driving data, wherein the road section state comprises a flat state and a bumpy state;
taking each window driving data as a training sample, taking the corresponding road section state as a sample label, constructing a training sample set for training the road surface recognition model, and training the road surface recognition model based on the training sample set;
The road surface recognition model is used for recognizing the road surface state of a road on which the target vehicle runs based on the running data of the target vehicle.
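By way of illustration, the window-division step described above can be sketched in a few lines of Python. The patent does not specify a window size or whether windows overlap; the fixed-size, non-overlapping split and the function name below are illustrative assumptions.

```python
def split_into_windows(samples, window_size):
    """Split a time-ordered sequence of driving samples into fixed-size,
    non-overlapping windows; a trailing partial window is dropped."""
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, window_size)]

# ten samples, window size 4 -> two full windows, last two samples dropped
windows = split_into_windows(list(range(10)), 4)
```

Padding the trailing partial window instead of dropping it would be an equally valid reading of the claim.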
The embodiment of the application provides a training device of road surface recognition model, includes:
the system comprises an acquisition module, a window dividing module and a window processing module, wherein the acquisition module is used for acquiring vehicle running data acquired by a target vehicle in the process of passing through a target road, and carrying out window division on the vehicle running data to acquire window running data corresponding to a plurality of window road sections respectively;
the determining module is used for determining the road section state corresponding to the corresponding window road section based on the window driving data, wherein the road section state comprises a flat state and a bumpy state;
the training module is used for taking each window driving data as a training sample, taking the corresponding road section state as a sample label, constructing a training sample set for training the road surface recognition model, and training the road surface recognition model based on the training sample set, wherein the road surface recognition model is used for recognizing the road surface state of a road on which the target vehicle is driven based on the driving data of the target vehicle.
In the above aspect, the training module is further configured to perform the following processing for each window driving data in the training sample set: performing data conversion on the window driving data from a plurality of different data conversion dimensions to obtain dimension data corresponding to each data conversion dimension; respectively extracting features of the dimensional data to obtain dimensional features corresponding to the dimensional data, and carrying out feature fusion on the dimensional features to obtain fusion features; invoking the road surface recognition model, and carrying out road surface recognition on the target road based on the fusion characteristics to obtain a predicted road section state of the target road; and training the pavement recognition model by combining the predicted road section state and the corresponding road section state.
In the above scheme, the acquiring module is further configured to acquire, for each acquisition time while the target vehicle passes through the target road, the corresponding driving acceleration and driving angular velocity; for each acquisition time, average the modulus (magnitude) of the driving acceleration and the modulus of the driving angular velocity to obtain the time driving data for that acquisition time; and arrange the time driving data in order of acquisition time, from earliest to latest, to obtain the vehicle driving data.
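As a minimal sketch of the per-acquisition-time computation above: the modulus of each sensor reading is taken as its vector magnitude, and the two magnitudes are averaged. The function name and tuple layout are illustrative, not from the patent.

```python
import math

def sample_value(accel_xyz, gyro_xyz):
    """Average of the acceleration magnitude and the angular-velocity
    magnitude at one acquisition time."""
    a = math.sqrt(sum(c * c for c in accel_xyz))  # |acceleration|
    w = math.sqrt(sum(c * c for c in gyro_xyz))   # |angular velocity|
    return (a + w) / 2.0

# accel (3, 4, 0) has magnitude 5; gyro (0, 0, 1) has magnitude 1
v = sample_value((3.0, 4.0, 0.0), (0.0, 0.0, 1.0))  # -> 3.0
```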
In the above aspect, the window driving data includes time driving data corresponding to each collection time of the target vehicle in the process of passing through the corresponding window road section, and the determining module is further configured to execute the following processing for each window driving data: determining the target flatness corresponding to the acquisition time based on the time running data corresponding to the acquisition time aiming at each acquisition time corresponding to the window running data; when the target flatness indicates that the road surface of the target road through which the target vehicle passes at the acquisition time is a flat road surface, determining the acquisition time corresponding to the target flatness as a target acquisition time; and determining the road section state corresponding to the corresponding window road section based on the number of the target acquisition moments.
In the above scheme, the determining module is further configured to obtain at least one acquisition time adjacent to the acquisition time, and subtract the time driving data of the acquisition time from the time driving data of each adjacent acquisition time to obtain a jitter difference for each adjacent acquisition time; determine the target flatness to be a first flatness when at least one of the jitter differences is greater than or equal to a difference threshold; and determine the target flatness to be a second flatness when none of the jitter differences is greater than or equal to the difference threshold. The first flatness indicates that the road surface the target vehicle passes over at the acquisition time is a non-flat road surface, and the second flatness indicates that it is a flat road surface.
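The jitter-difference test above can be sketched as follows. Comparing the absolute difference against the threshold is an assumption on my part; the patent only describes a subtraction and a threshold comparison. Names are illustrative.

```python
def is_flat_at(values, i, diff_threshold):
    """An acquisition time is 'flat' unless the absolute jitter difference
    between its sample and at least one adjacent sample reaches the
    difference threshold."""
    neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
    return all(abs(values[i] - values[j]) < diff_threshold for j in neighbours)

series = [1.0, 1.0, 5.0, 1.0]   # a single spike at index 2
flags = [is_flat_at(series, i, 2.0) for i in range(len(series))]
```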
In the above scheme, the determining module is further configured to compare the number of the target acquisition time with a number threshold to obtain a number comparison result; when the number comparison result indicates that the number of the target acquisition moments is greater than the number threshold, determining the road section state as a flat state; and when the number comparison result indicates that the number of the target acquisition moments is smaller than or equal to the number threshold value, determining the road section state as a bumpy state.
In the above scheme, the determining module is further configured to divide the number of the target acquisition moments by the number of the acquisition moments to obtain a flat ratio of the target road; wherein the magnitude of the value of the flattening ratio is positively correlated with the flattening degree of the target road; the road surface state is determined to be the flat state when the flat ratio is greater than a flat ratio threshold, and the road surface state is determined to be the bumpy state when the flat ratio is less than or equal to the flat ratio threshold.
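The flat-ratio labelling above can be sketched as below, assuming per-acquisition-time flatness flags have already been computed; the string labels are illustrative stand-ins for the flat and bumpy road section states.

```python
def window_state(flat_flags, flat_ratio_threshold):
    """Label a window 'flat' when the share of flat acquisition times
    exceeds the flat-ratio threshold, otherwise 'bumpy'."""
    ratio = sum(flat_flags) / len(flat_flags)  # target times / all times
    return "flat" if ratio > flat_ratio_threshold else "bumpy"

# 8 of 10 acquisition times flat, threshold 0.7 -> "flat"
state = window_state([True] * 8 + [False] * 2, 0.7)
```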
In the above scheme, the data conversion dimension includes a time domain conversion dimension, a frequency domain conversion dimension, and a wavelet transformation conversion dimension, and the training module is further configured to perform time domain data conversion on the window driving data from the time domain conversion dimension to obtain time domain dimension data corresponding to the time domain conversion dimension; performing frequency domain data conversion on the window driving data from the frequency domain conversion dimension to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension; and carrying out wavelet transformation on the window driving data from the wavelet transformation dimension to obtain wavelet transformation dimension data corresponding to the wavelet transformation dimension.
In the above scheme, the window driving data includes the time driving data for each acquisition time while the target vehicle passes through the window road section, and the training module is further configured to obtain the mean, root-mean-square (RMS) value, variance and standard deviation of the time driving data, and the maximum of the time driving data; divide the standard deviation by the mean to obtain the coefficient of variation of the window driving data, divide the maximum by the RMS value to obtain the crest (peak) factor of the window driving data, and divide the maximum by the mean to obtain the impulse (pulse) factor of the window driving data; and fuse the mean, variance, coefficient of variation, crest factor and impulse factor to obtain the time-domain dimension data corresponding to the time-domain conversion dimension.
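A minimal sketch of these time-domain statistics, assuming the conventional definitions (crest factor = peak/RMS, impulse factor = peak/mean); the dictionary keys and function name are illustrative.

```python
import math

def time_domain_features(values):
    """Mean, variance, coefficient of variation, crest factor and impulse
    factor of one window of driving data."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(v * v for v in values) / n)
    peak = max(abs(v) for v in values)
    return {
        "mean": mean,
        "variance": var,
        "coeff_of_variation": std / mean,  # std divided by mean
        "crest_factor": peak / rms,        # max divided by RMS
        "impulse_factor": peak / mean,     # max divided by mean
    }

feats = time_domain_features([1.0, 1.0, 1.0, 1.0])  # constant window
```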
In the above scheme, the training module is further configured to obtain a spectral density and an average power spectral density corresponding to the window driving data, and determine an average frequency of the window driving data in a frequency domain based on the spectral density; combining the average frequency and the frequency spectrum density, determining the frequency variance of the window driving data on the frequency domain, and obtaining the maximum frequency value in the frequency spectrum density; and carrying out data fusion on the frequency spectrum density, the average power spectrum density, the average frequency, the frequency variance and the maximum frequency value to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension.
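A sketch of the frequency-domain quantities above, using a naive DFT magnitude spectrum in place of a proper spectral-density estimator (the patent does not specify one); the names and the sample-rate default are illustrative assumptions.

```python
import cmath
import math

def frequency_domain_features(values, sample_rate=1.0):
    """Magnitude spectrum via a naive DFT over the non-negative
    frequencies, then the spectral statistics named in the text:
    mean frequency, frequency variance, spectral maximum, average power."""
    n = len(values)
    spectrum = []
    for k in range(n // 2 + 1):
        s = sum(values[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        spectrum.append(abs(s))
    freqs = [k * sample_rate / n for k in range(len(spectrum))]
    total = sum(spectrum) or 1.0
    mean_freq = sum(f * s for f, s in zip(freqs, spectrum)) / total
    freq_var = sum(((f - mean_freq) ** 2) * s
                   for f, s in zip(freqs, spectrum)) / total
    return {
        "mean_frequency": mean_freq,
        "frequency_variance": freq_var,
        "spectral_peak": max(spectrum),
        "avg_power": sum(s * s for s in spectrum) / len(spectrum),
    }

feats = frequency_domain_features([1.0, 1.0, 1.0, 1.0])  # DC-only signal
```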
In the above scheme, the training module is further configured to obtain feature dimensions corresponding to the dimensional features respectively, and average the feature dimensions to obtain average dimensions; comparing each characteristic dimension with the average dimension to obtain a dimension comparison result corresponding to each characteristic dimension; when the dimension comparison result indicates that the feature dimension is the same as the average dimension, determining the corresponding dimension feature as a target dimension feature corresponding to the dimension feature; when the dimension comparison result indicates that the feature dimension is different from the average dimension, adjusting the feature dimension of the corresponding dimension feature to the average dimension to obtain a target dimension feature corresponding to the dimension feature; and carrying out feature fusion on each target dimension feature to obtain the fusion feature.
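The dimension-alignment and fusion step can be sketched as below. The patent only says mismatched feature dimensions are "adjusted" to the average dimension; zero-padding and truncation is one plausible reading, not the claimed method.

```python
def fuse_features(feature_vectors):
    """Resize every per-dimension feature vector to the average length,
    then concatenate. Vectors already at the average length are kept;
    others are zero-padded or truncated (an illustrative assumption)."""
    avg = round(sum(len(v) for v in feature_vectors) / len(feature_vectors))
    fused = []
    for v in feature_vectors:
        aligned = (v + [0.0] * avg)[:avg]  # pad with zeros, then truncate
        fused.extend(aligned)
    return fused

# average dimension of lengths 2 and 4 is 3
fused = fuse_features([[1.0, 2.0], [3.0, 4.0, 5.0, 6.0]])
```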
In the above-mentioned scheme, the training device of the above-mentioned road surface recognition model further includes: the road surface recognition module is used for acquiring a navigation route of the target vehicle, inquiring the road section state of a reference road corresponding to the navigation route from the target map, and obtaining an inquiry result; when the query result indicates that the road section state of the reference road is not recorded in the target map, acquiring target running data of the target vehicle running on the reference road, and extracting features of the target running data to obtain target running features; invoking a trained road surface recognition model, and carrying out road surface recognition on the reference road based on the target driving characteristics to obtain a predicted road section state of the reference road; and adding the predicted road section state of the reference road to the target map to obtain an updated map.
In the above aspect, the road surface recognition module is further configured to generate a first prompt message when the query result indicates that a road segment state of the reference road is recorded in the target map, and the road segment state of the reference road indicates that the road surface of the reference road is a non-flat road surface; generating second prompt information when the query result indicates that the road section state of the reference road is recorded in the target map and the road section state of the reference road indicates that the road surface of the reference road is a flat road surface; the first prompting information is used for prompting that the navigation route is switched on the target map, and the second prompting information is used for prompting that the reference road corresponding to the navigation route can stably pass.
An embodiment of the present application provides an electronic device, including:
a memory for storing computer executable instructions or computer programs;
and the processor is used for realizing the training method of the pavement identification model provided by the embodiment of the application when executing the computer executable instructions or the computer programs stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores computer executable instructions for causing a processor to execute the training method of the pavement identification model.
Embodiments of the present application provide a computer program product comprising a computer program or computer-executable instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the electronic device executes the training method of the pavement identification model according to the embodiment of the application.
The embodiment of the application has the following beneficial effects:
The vehicle driving data is divided into windows to obtain window driving data for each of a plurality of window road sections; the road section state of each corresponding window road section is determined from the window driving data; each piece of window driving data is used as a training sample, with the corresponding road section state as its label, to construct a training sample set; and the road surface recognition model is trained on that set. Because each window road section corresponds to one road section state and training samples are built from the window driving data, the number of training samples available for training the road surface recognition model is effectively increased, which strengthens training and improves the accuracy with which the trained model recognizes whether the road a vehicle is driving on is bumpy.
Drawings
FIG. 1 is a schematic architecture diagram of a training system for a pavement recognition model provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device for training a pavement recognition model according to an embodiment of the present application;
FIG. 3 is a first flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 4 is a second flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 5 is a third flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 6 is a fourth flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 7 is a fifth flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 8 is a sixth flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 9 is a seventh flowchart of a training method of a pavement recognition model according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a pavement recognition model according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are used merely to distinguish similar objects and do not denote a particular ordering of the objects. It is to be understood that "first", "second" and "third" may be interchanged in a particular order or sequence, where permitted, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
1) Artificial intelligence (Artificial Intelligence, AI): is a comprehensive discipline, and relates to a wide field, and has the technology of a hardware level and the technology of a software level. Artificial intelligence infrastructure technologies generally include, for example, sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, pre-training model technologies, operation/interaction systems, mechatronics, and the like. The pre-training model is also called a large model and a basic model, and can be widely applied to all large-direction downstream tasks of artificial intelligence after fine adjustment. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
2) Computer vision technology (CV): computer vision is a science that studies how to make machines "see"; more specifically, it uses cameras and computers in place of human eyes to identify and measure targets, and further performs graphics processing so that the result is better suited for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Large-model technology has brought important innovation to the development of computer vision: pre-trained models in the vision field such as Swin Transformer, ViT, V-MoE and MAE can be quickly and widely applied to specific downstream tasks through fine-tuning. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
3) Machine Learning (ML): is a multi-domain interdisciplinary, and relates to a plurality of disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and the like. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, confidence networks, reinforcement learning, transfer learning, induction learning, teaching learning, and the like. The pre-training model is the latest development result of deep learning, and integrates the technology.
4) An automatic driving technology: refers to a vehicle that achieves self-driving without driver operation. Typically including high-precision maps, environmental awareness, computer vision, behavioral decision-making, path planning, motion control, and the like. The automatic driving comprises various development paths such as single car intelligence, car-road coordination, networking cloud control and the like. The automatic driving technology has wide application prospect, and the current field is the field of logistics, public transportation, taxis and intelligent transportation, and is further developed in the future.
5) Convolutional neural network (CNN, Convolutional Neural Network): a class of feedforward neural networks (FNN, Feedforward Neural Network) with a deep structure that involves convolution computation; it is one of the representative algorithms of deep learning. Convolutional neural networks have representation-learning capability and can perform shift-invariant classification of input images according to their hierarchical structure.
6) Fourier transform: a representation that can express a function satisfying certain conditions as a linear combination (or integral) of trigonometric functions (sine and/or cosine). In different fields of research the Fourier transform takes many variant forms, such as the continuous Fourier transform and the discrete Fourier transform. Fourier analysis was originally proposed as a tool for the analytical study of thermal processes.
7) Wavelet transform (wavelet transform, WT): a transform-analysis method that inherits and develops the localization idea of the short-time Fourier transform while overcoming its drawbacks, such as a window size that does not change with frequency. It provides a time-frequency window that changes with frequency, making it an ideal tool for time-frequency analysis and processing of signals. Its main characteristic is that it can fully highlight features of certain aspects of a problem through the transform and can perform localized analysis of time (or space) and frequency. By means of scaling and translation operations it progressively refines the signal (function) at multiple scales, ultimately achieving time subdivision at high frequencies and frequency subdivision at low frequencies; it thus automatically adapts to the requirements of time-frequency signal analysis and can focus on any detail of the signal.
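For illustration, one level of the simplest wavelet transform, the Haar discrete wavelet transform, shows the averaging/differencing idea described above. This is a textbook sketch, not part of the patent's claimed method.

```python
import math

def haar_dwt_level(signal):
    """One level of the Haar DWT: scaled pairwise sums give the
    low-frequency approximation coefficients, scaled pairwise differences
    give the high-frequency detail coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# a piecewise-constant signal has zero detail coefficients
approx, detail = haar_dwt_level([4.0, 4.0, 2.0, 2.0])
```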
In the implementation of the embodiments of the present application, the applicant found that the related art has the following problems:
in the related art, feature extraction is generally performed directly on the collected vehicle driving data, and the road surface recognition model is trained on the resulting vehicle driving features. Because the number of vehicle driving data samples is insufficient, the training of the road surface recognition model is weak, and the recognition performance of the trained road surface recognition model is low.
The embodiment of the application provides a training method, a training device, electronic equipment, a computer readable storage medium and a computer program product for a pavement identification model, which can effectively improve the identification performance of the pavement identification model, and the following describes an exemplary application of the training system for the pavement identification model.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of a training system 100 of a pavement recognition model according to an embodiment of the present application, where a terminal (a terminal 400 is shown in an exemplary manner) is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 400 is configured to use the client 410 to display a target map to the user on a graphical interface 410-1 (the graphical interface 410-1 is shown as an example). The terminal 400 and the server 200 are connected to each other through a wired or wireless network.
In some embodiments, the server 200 may be a stand-alone physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart television, a smart watch, a car terminal, etc. The electronic device provided in the embodiment of the application may be implemented as a terminal or as a server. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
In some embodiments, the server 200 obtains vehicle driving data of the target vehicle in the process of passing through the target road, performs window division on the vehicle driving data to obtain window driving data corresponding to a plurality of window road segments, determines a road segment state corresponding to a corresponding window road segment based on each window driving data, sends the road segment state and the window driving data to the terminal 400, and the terminal 400 trains the road surface recognition model by combining the road segment state and the window driving data.
In other embodiments, the terminal 400 obtains vehicle driving data of the target vehicle in the process of passing through the target road, performs window division on the vehicle driving data to obtain window driving data corresponding to a plurality of window road segments, determines a road segment state corresponding to a corresponding window road segment based on each window driving data, sends the road segment state and the window driving data to the server 200, and the server 200 combines the road segment state and the window driving data to train the road surface recognition model.
In other embodiments, the embodiments of the present application may be implemented by means of Cloud Technology (Cloud Technology), which refers to a hosting Technology that unifies serial resources such as hardware, software, networks, etc. in a wide area network or a local area network, so as to implement calculation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like based on the cloud computing business model. It can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, because the background services of technical network systems require large amounts of computing and storage resources.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 for training a pavement recognition model according to an embodiment of the present application, where the electronic device 500 shown in fig. 2 may be the server 200 or the terminal 400 in fig. 1, and the electronic device 500 shown in fig. 2 includes: at least one processor 430, a memory 450, at least one network interface 420. The various components in electronic device 500 are coupled together by bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 430 may be an integrated circuit chip with signal processing capabilities, such as a general purpose processor (which may be a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, discrete gate or transistor logic, or discrete hardware components.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 430.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a random access Memory (RAM, random Access Memory). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for accessing other electronic devices via one or more (wired or wireless) network interfaces 420, the exemplary network interface 420 comprising: bluetooth, wireless compatibility authentication (WiFi, wireless Fidelity), and universal serial bus (USB, universal Serial Bus), etc.
In some embodiments, the training device for a pavement recognition model provided in the embodiments of the present application may be implemented in software. Fig. 2 shows a training device 455 for a pavement recognition model stored in the memory 450, which may be software in the form of a program or a plug-in, and includes the following software modules: the acquisition module 4551, the determination module 4552, and the training module 4553. These modules are logical and thus may be combined arbitrarily or further split depending on the functions implemented. The functions of the respective modules will be described hereinafter.
In other embodiments, the training device for the pavement recognition model provided in the embodiments of the present application may be implemented in hardware, and by way of example, the training device for the pavement recognition model provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the training method for the pavement recognition model provided in the embodiments of the present application, for example, the processor in the form of a hardware decoding processor may employ one or more application specific integrated circuits (ASIC, application Specific Integrated Circuit), DSP, programmable logic device (PLD, programmable Logic Device), complex programmable logic device (CPLD, complex Programmable Logic Device), field programmable gate array (FPGA, field-Programmable Gate Array), or other electronic components.
In some embodiments, the terminal or the server may implement the training method of the pavement identification model provided in the embodiments of the present application by running a computer program or computer executable instructions. For example, the computer program may be a native program (e.g., a dedicated road surface recognition program) or a software module in an operating system, e.g., a road surface recognition module that may be embedded in any program (e.g., an instant messaging client, an album program, an electronic map client, a navigation client); for example, a Native Application (APP) may be used, i.e. a program that needs to be installed in an operating system to be run. In general, the computer programs described above may be any form of application, module or plug-in.
The training method of the pavement recognition model provided by the embodiment of the application will be described with reference to exemplary application and implementation of the server or the terminal provided by the embodiment of the application.
Referring to fig. 3, fig. 3 is a schematic flowchart of a training method of a pavement recognition model according to an embodiment of the present application, which will be described with reference to steps 101 to 105 shown in fig. 3. The training method of a pavement recognition model according to an embodiment of the present application may be implemented by a server or a terminal alone, or by a server and a terminal cooperatively; the following description takes implementation by a server alone as an example.
In step 101, vehicle travel data acquired by a target vehicle during passing through a target road is acquired.
In some embodiments, the vehicle driving data is used for indicating driving influence of the road surface of the target road on the target vehicle, and the vehicle driving data comprises time driving data corresponding to each acquisition time of the target vehicle in the process of passing through the target road.
In some embodiments, the sensor mounted on the target vehicle performs data acquisition during the target vehicle passing through the target road, so as to obtain vehicle driving data acquired during the target vehicle passing through the target road.
In some embodiments, the target road may be an actual road corresponding to a navigation route in the navigation map, and the target road may also be an actual road traversed by the target vehicle in at least one time window.
In some embodiments, referring to fig. 4, fig. 4 is a second flowchart of a training method of a pavement recognition model according to an embodiment of the present application, and step 101 shown in fig. 3 may be implemented by steps 1011 to 1013 shown in fig. 4.
In step 1011, the running acceleration and the running angular velocity corresponding to each of the acquisition times during the course of the target vehicle passing through the target road are acquired.
In some embodiments, the at least one time window that the target vehicle passes through during the course of passing through the target road includes a plurality of acquisition moments, and at each acquisition moment, a sensor mounted on the target vehicle acquires a running acceleration and a running angular velocity corresponding to the acquisition moment respectively.
As an example, in the process of the target vehicle passing through the target road, the elapsed time window is time window A, which spans from 10:00:01 am to 10:00:05 am. The acquisition time interval of the sensor mounted on the target vehicle is 1 second, so time window A includes acquisition time A1 (10:00:01 am), acquisition time A2 (10:00:02 am), acquisition time A3 (10:00:03 am), acquisition time A4 (10:00:04 am), and acquisition time A5 (10:00:05 am). At each of the acquisition times A1 through A5, the running acceleration and running angular velocity of the target vehicle at that acquisition time are acquired.
In step 1012, for each acquisition time, the module value of the corresponding running acceleration and the module value of the corresponding running angular velocity are averaged to obtain the time running data corresponding to that acquisition time.
Continuing the above example, acquisition times correspond one-to-one with time running data. For each of the acquisition times A1 through A5, the module value of the running acceleration corresponding to that acquisition time and the module value of the running angular velocity corresponding to that acquisition time are averaged to obtain the time running data corresponding to that acquisition time.
As an example, the expression of the above time running data may be:

$x_i = \dfrac{|a_i| + |\omega_i|}{2}$ (1);

where $x_i$ indicates the time running data corresponding to acquisition time $i$, $|a_i|$ indicates the module value of the running acceleration corresponding to acquisition time $i$, and $|\omega_i|$ indicates the module value of the running angular velocity corresponding to acquisition time $i$.
In step 1013, the vehicle travel data is obtained by arranging the travel data at each time in the order from the early to the late of the collection time.
As an example, the expression of the vehicle running data may be:

$X = (x_1, x_2, x_3, \ldots, x_N)$ (2);

where $X$ indicates the vehicle running data, $x_i$ indicates the time running data corresponding to acquisition time $i$, and $N$ indicates the number of acquisition times.

Continuing the above example, acquisition time 1 is the earliest acquisition time in time window A, acquisition time $N$ is the latest acquisition time in time window A, and acquisition time $N$ is later than acquisition time 1.
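As an illustrative sketch of steps 1011 to 1013, the computation of equations (1) and (2) can be expressed in a few lines. The function name, and the assumption that each sensor sample is a three-axis vector, are illustrative and not taken from the patent itself:

```python
import numpy as np

def time_running_data(accels, gyros):
    """Equations (1)-(2): per acquisition time i, average the module value
    (Euclidean norm) of the running acceleration and of the running angular
    velocity; the resulting values, ordered by acquisition time, form the
    vehicle running data X = (x_1, ..., x_N).

    accels, gyros: arrays of shape (N, 3), one row per acquisition time.
    """
    a_mod = np.linalg.norm(accels, axis=1)  # |a_i|
    w_mod = np.linalg.norm(gyros, axis=1)   # |w_i|
    return (a_mod + w_mod) / 2.0            # x_i = (|a_i| + |w_i|) / 2
```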
In step 102, window division is performed on the vehicle running data to obtain window running data corresponding to each of the plurality of window segments.
In some embodiments, the step 102 may be implemented as follows: the method comprises the steps of obtaining the data length of vehicle running data and the number of windows, dividing the data length by the number of windows to obtain the data length of the window running data, and dividing the window running data according to the data length of the window running data to obtain window running data corresponding to a plurality of window road sections respectively.
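The window division just described (data length divided by the number of windows, then equal-length slicing) can be sketched as follows; the function name is illustrative, and dropping any trailing remainder samples is an assumption not stated in the patent:

```python
def split_windows(x, num_windows):
    """Step 102: divide the vehicle running data into window running data
    for num_windows window road segments of equal data length."""
    win_len = len(x) // num_windows  # data length of each window
    return [x[i * win_len:(i + 1) * win_len] for i in range(num_windows)]
```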
In step 103, based on each of the window driving data, a road segment state corresponding to the corresponding window road segment is determined.
In some embodiments, the window driving data includes time driving data corresponding to each collection time in the process of the target vehicle passing through the corresponding window road section, referring to fig. 5, fig. 5 is a flowchart illustrating a third flowchart of the training method of the road surface recognition model provided in the embodiment of the present application, and step 103 shown in fig. 3 may be implemented by executing steps 1031 to 1033 shown in fig. 5 with respect to each window driving data.
In step 1031, for each acquisition time corresponding to the window travel data, a target flatness corresponding to the acquisition time is determined based on the time travel data corresponding to the acquisition time.
In some embodiments, the target flatness corresponding to an acquisition time is used to indicate the flatness of the position on the target road where the target vehicle is located at that acquisition time. When the target vehicle moves continuously on the target road, the target flatness corresponding to each acquisition time may differ; when the target vehicle is stationary on the target road, the target flatness corresponding to each acquisition time is the same.
In some embodiments, step 1031 may be implemented as follows: acquiring at least one adjacent acquisition time of the acquisition time, and subtracting the time running data corresponding to the acquisition time from the time running data corresponding to each adjacent acquisition time to obtain a jitter difference value corresponding to each adjacent acquisition time; determining a target flatness degree as a first flatness degree when there is at least one jitter difference value greater than or equal to a difference threshold; when there is no jitter difference greater than or equal to the difference threshold, the target flatness degree is determined as a second flatness degree.
In some embodiments, the first level of flatness is used to indicate that the road surface traversed by the target vehicle at the time of acquisition is a non-flat road surface, and the second level of flatness is used to indicate that the road surface traversed by the target vehicle at the time of acquisition is a flat road surface.
In some embodiments, when at least one jitter difference is greater than or equal to the difference threshold, it indicates that the target vehicle jitters on the road surface it passes at the acquisition time, which in turn indicates that this road surface is uneven and causes the jitter. The target flatness may then be determined as the first flatness degree, which is used to indicate that the road surface the target vehicle passes at the acquisition time is a non-flat road surface.
In some embodiments, when no jitter difference is greater than or equal to the difference threshold, it indicates that the target vehicle does not jitter on the road surface it passes at the acquisition time, which in turn indicates that this road surface is flat and does not cause jitter. The target flatness may then be determined as the second flatness degree, which is used to indicate that the road surface the target vehicle passes at the acquisition time is a flat road surface.
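A minimal sketch of the flatness decision in step 1031, assuming the "adjacent acquisition times" are the immediate neighbours and that the jitter difference is taken as an absolute difference (both assumptions, since the patent leaves them open):

```python
def target_flatness(x, i, diff_threshold):
    """Classify acquisition time i: first flatness degree ('non-flat') if at
    least one jitter difference reaches the threshold, else second ('flat')."""
    neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(x)]
    jitter = [abs(x[j] - x[i]) for j in neighbours]  # jitter differences
    return "non-flat" if any(d >= diff_threshold for d in jitter) else "flat"
```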
In step 1032, when the target flatness degree indicates that the road surface of the target road through which the target vehicle passes at the acquisition time is a flat road surface, the acquisition time corresponding to the target flatness degree is determined as the target acquisition time.
In some embodiments, the target collection time is a collection time corresponding to when the road surface of the target road through which the target vehicle passes is a flat road surface, and the collection times are in one-to-one correspondence with the target flatness.
In step 1033, the road segment states corresponding to the corresponding window road segments are determined based on the number of target acquisition moments.
In some embodiments, the road segment state is used to indicate the road surface flatness of the target road, and the number of target acquisition times is positively correlated with the road surface flatness of the target road: the greater the number of target acquisition times, the flatter the road surface of the target road; the smaller the number of target acquisition times, the less flat the road surface of the target road.
In some embodiments, step 1033 described above may be implemented by: comparing the number of the target acquisition moments with a number threshold value to obtain a number comparison result; when the quantity comparison result indicates that the quantity of the target acquisition moments is larger than a quantity threshold value, determining the road section state as a flat state; and when the number comparison result indicates that the number of the target acquisition moments is smaller than or equal to the number threshold value, determining the road section state as a bumpy state.
In some embodiments, the number comparison result is used to indicate whether the number of target acquisition times is greater than the number threshold. When the number comparison result indicates that the number of target acquisition times is greater than the number threshold, the road surfaces of the plurality of road segments on the target road are flat, and the road segment state may be determined as a flat state, which is used to indicate that the road surface of the target road is a flat road surface. When the number comparison result indicates that the number of target acquisition times is less than or equal to the number threshold, the road surfaces of the plurality of road segments on the target road are non-flat, and the road segment state may be determined as a bumpy state, which is used to indicate that the road surface of the target road is a non-flat road surface.
In other embodiments, step 1033 may be implemented as follows: dividing the number of target acquisition times by the number of acquisition times to obtain the flat ratio of the target road, where the value of the flat ratio is positively correlated with the flatness of the target road; the road segment state is determined to be the flat state when the flat ratio is greater than a flat ratio threshold, and the road segment state is determined to be the bumpy state when the flat ratio is less than or equal to the flat ratio threshold.
As an example, the expression of the flat ratio of the target road may be:

$r = \dfrac{N_t}{N}$ (3);

where $r$ indicates the flat ratio of the target road, $N_t$ indicates the number of target acquisition times, and $N$ indicates the number of acquisition times.
In this way, the road segment state used to indicate the road surface flatness of the target road is determined based on the number of target acquisition times, so that the road surface flatness of the target road is determined accurately. This lays a data foundation for subsequent accurate training of the road surface recognition model, making the trained road surface recognition model more accurate.
In some embodiments, the road segment conditions include a flat condition and a bumpy condition.
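Both variants of step 1033 reduce to a threshold decision on equation (3); the default threshold value and the function name below are illustrative placeholders, not values given in the patent:

```python
def segment_state(num_target_times, num_times, flat_ratio_threshold=0.5):
    """Equation (3): flat ratio = target acquisition times / acquisition times;
    the road segment state is flat above the threshold, otherwise bumpy."""
    flat_ratio = num_target_times / num_times  # positively correlated with flatness
    return "flat" if flat_ratio > flat_ratio_threshold else "bumpy"
```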
In step 104, a training sample set for training the road surface recognition model is constructed by taking each window driving data as a training sample and the corresponding road section state as a sample label.
As an example, the expression of the training sample set may be:

$S = \{(X_1, y_1), (X_2, y_2), \ldots\}$ (4);

where $S$ indicates the training sample set, $X_1$ indicates window driving data 1, $y_1$ indicates the road segment state corresponding to window driving data 1, $X_2$ indicates window driving data 2, and $y_2$ indicates the road segment state corresponding to window driving data 2.
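Constructing the training sample set of equation (4) is then a matter of pairing each window's driving data with its segment state; the 0/1 label encoding below is an illustrative choice, not part of the patent:

```python
LABELS = {"flat": 0, "bumpy": 1}  # illustrative label encoding

def build_training_set(window_data, segment_states):
    """Step 104: each window driving data is a training sample and the
    corresponding road segment state is its sample label."""
    return [(w, LABELS[s]) for w, s in zip(window_data, segment_states)]
```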
In step 105, the pavement recognition model is trained based on the set of training samples.
In some embodiments, the road surface recognition model is used for recognizing the road surface state of a road on which the target vehicle runs based on the running data of the target vehicle.
In some embodiments, referring to fig. 6, fig. 6 is a flowchart of a training method of a pavement recognition model according to an embodiment of the present application, and step 105 shown in fig. 3 may be implemented by executing steps 1051 to 1055 shown in fig. 6 for each window driving data in the training sample set.
In step 1051, data conversion is performed on the window driving data from a plurality of different data conversion dimensions, so as to obtain dimension data corresponding to each data conversion dimension.
In some embodiments, the data conversion dimensions may include a time domain conversion dimension, a frequency domain conversion dimension, and a wavelet transform conversion dimension, and the data conversion dimensions may also include other dimensions, where the specific dimension types of the data conversion dimensions do not constitute limitations of embodiments of the present application.
As an example, the above-described data conversion dimensions include at least two of a time domain conversion dimension, a frequency domain conversion dimension, and a wavelet transform conversion dimension, and the following description will take an example in which the data conversion dimensions include the time domain conversion dimension, the frequency domain conversion dimension, and the wavelet transform conversion dimension.
In some embodiments, referring to fig. 7, fig. 7 is a flowchart of a training method of a pavement recognition model provided in the embodiment of the present application, and step 1051 shown in fig. 6 may be implemented by steps 10511 to 10513 shown in fig. 7.
In step 10511, from the time domain conversion dimension, the window driving data is subjected to time domain data conversion, so as to obtain time domain dimension data corresponding to the time domain conversion dimension.
In some embodiments, the window driving data includes time driving data corresponding to each collection time of the target vehicle in the process of passing through the corresponding window road section.
In some embodiments, step 10511 may be implemented as follows: acquiring the average value, square average (root mean square), variance, and standard deviation of the time running data, as well as the maximum value among the time running data; dividing the standard deviation by the average value to obtain the discrete coefficient of the window driving data, dividing the maximum value by the square average to obtain the peak factor of the window driving data, and dividing the maximum value by the average value to obtain the pulse factor of the window driving data; and performing data fusion on the average value, the variance, the discrete coefficient, the peak factor, and the pulse factor to obtain the time domain dimension data corresponding to the time domain conversion dimension.
Continuing the above example, the expression of the average value of the time running data may be:

$\mu = \dfrac{1}{N}\sum_{i=1}^{N} x_i$ (5);

where $\mu$ indicates the average value of the time running data and $x_i$ indicates the time running data corresponding to acquisition time $i$.
Continuing the above example, the expression of the square average of the time running data may be:

$X_{\mathrm{rms}} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} x_i^2}$ (6);

where $X_{\mathrm{rms}}$ indicates the square average (root mean square) of the time running data and $x_i$ indicates the time running data corresponding to acquisition time $i$.
Continuing the above example, the expression of the variance of the time running data may be:

$\sigma^2 = \dfrac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2$ (7);

where $\sigma^2$ indicates the variance of the time running data, $\mu$ indicates the average value of the time running data, and $x_i$ indicates the time running data corresponding to acquisition time $i$.
Continuing the above example, the expression of the standard deviation of the time running data may be:

$\sigma = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2}$ (8);

where $\sigma$ indicates the standard deviation of the time running data, $\mu$ indicates the average value of the time running data, and $x_i$ indicates the time running data corresponding to acquisition time $i$.
In connection with the above example, the expression of the discrete coefficient of the vehicle running data may be:

$C_v = \dfrac{\sigma}{\mu}$ (9);

where $C_v$ indicates the discrete coefficient of the vehicle running data, $\sigma$ indicates the standard deviation of the time running data, and $\mu$ indicates the average value of the time running data.
In connection with the above example, the expression of the peak factor of the vehicle running data may be:

$C_f = \dfrac{x_{\max}}{X_{\mathrm{rms}}}$ (10);

where $C_f$ indicates the peak factor of the vehicle running data, $x_{\max}$ indicates the maximum value among the time running data, and $X_{\mathrm{rms}}$ indicates the square average of the time running data.
In connection with the above example, the expression of the pulse factor may be:

$C_p = \dfrac{x_{\max}}{\mu}$ (11);

where $C_p$ indicates the pulse factor, $x_{\max}$ indicates the maximum value among the time running data, and $\mu$ indicates the average value of the time running data.
In connection with the above example, the expression of the time domain dimension data corresponding to the time domain conversion dimension may be:

$F_{\mathrm{time}} = (\mu, \sigma^2, C_v, C_f, C_p)$ (12);

where $\mu$ indicates the average value of the time running data, $\sigma^2$ indicates the variance of the time running data, $C_v$ indicates the discrete coefficient of the vehicle running data, $C_f$ indicates the peak factor of the vehicle running data, and $C_p$ indicates the pulse factor.
In this way, the average value, the variance, the discrete coefficient, the peak factor and the pulse factor are subjected to data fusion to obtain the time domain dimension data corresponding to the time domain conversion dimension, so that the obtained time domain dimension data can comprehensively reflect the time domain characteristics of the window driving data from a plurality of different time domain dimensions, the road section state is predicted through the time domain dimension data, the predicted road section state obtained through prediction can more accurately reflect the road surface flatness of the target road, and the accuracy of road surface recognition is effectively improved.
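Equations (5) to (12) can be combined into a single feature function. This sketch interprets the "square average" as the root mean square (the usual denominator of a peak factor), which is an interpretation rather than something the patent states explicitly:

```python
import numpy as np

def time_domain_features(x):
    """Time domain dimension data of equation (12):
    (mean, variance, discrete coefficient, peak factor, pulse factor)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()                    # average value, equation (5)
    rms = np.sqrt(np.mean(x ** 2))   # square average (RMS), equation (6)
    var = x.var()                    # variance, equation (7)
    std = x.std()                    # standard deviation, equation (8)
    cv = std / mu                    # discrete coefficient, equation (9)
    peak = x.max() / rms             # peak factor, equation (10)
    pulse = x.max() / mu             # pulse factor, equation (11)
    return np.array([mu, var, cv, peak, pulse])
```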
In step 10512, frequency domain data conversion is performed on the window driving data from the frequency domain conversion dimension, and frequency domain dimension data corresponding to the frequency domain conversion dimension is obtained.
In some embodiments, the window driving data includes time driving data corresponding to each collection time of the target vehicle in the process of passing through the corresponding window road section.
In some embodiments, step 10512 above may be implemented as follows: acquiring frequency spectrum density and average power spectrum density corresponding to window driving data, and determining average frequency of the window driving data on a frequency domain based on the frequency spectrum density; combining the average frequency and the frequency spectrum density, determining the frequency variance of the window driving data on the frequency domain, and obtaining the maximum frequency value in the frequency spectrum density; and carrying out data fusion on the frequency spectrum density, the average power spectrum density, the average frequency, the frequency variance and the maximum frequency value to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension.
In some embodiments, the spectral density, spectrum for short, is the distribution curve of frequency: a complex oscillation can be decomposed into harmonic oscillations of different amplitudes and different frequencies, and the pattern of their amplitudes arranged by frequency (or period) is called the spectrum. Assuming an energy signal s(t), its spectral density S(w) can be found by Fourier transform. Any variable that is expressed in the form of complex vibration over time or spatial distance can be decomposed in this way, and the spectrum may also serve as the set of frequencies of electromagnetic waves or oscillations that convey information. The spectrum leads the study of a signal from the time domain to the frequency domain, thereby bringing more intuitive knowledge.
As an example, the expression of the spectral density corresponding to the above window travel data may be:
S(w) = F[s(t)] (13);
wherein, S(w) is used for indicating the spectral density corresponding to the window driving data, F is used for indicating the Fourier transform, and s(t) is used for indicating the window driving data.
In some embodiments, the average power spectral density, also called the power spectrum, is defined as the signal power within a unit frequency band. It shows the variation of signal power with frequency, i.e., the distribution of signal power in the frequency domain.
As an example, the expression for the average power spectral density described above may be:
P = |S(w)|² / T (14);
wherein, P is used for indicating the average power spectral density, |S(w)| is used for indicating the magnitude of the spectral density corresponding to the window driving data, and T is used for indicating the interval length of the time interval.
As an example, the expression of the average frequency of the above window traveling data in the frequency domain may be:
μ_f = Σ( f × S(f) ) / Σ( S(f) ) (15);
wherein, μ_f is used for indicating the average frequency of the window driving data over the frequency domain, f is used for indicating the frequencies in the spectral density, and S(f) is used for indicating the spectral density corresponding to the window driving data.
As an example, the expression of the frequency variance of the above window driving data in the frequency domain may be:
σ_f² = Σ( (f − μ_f)² × S(f) ) / Σ( S(f) ) (16);
wherein, σ_f² is used for indicating the frequency variance of the window driving data in the frequency domain, f is used for indicating the frequencies in the spectral density, S(f) is used for indicating the spectral density corresponding to the window driving data, and μ_f is used for indicating the frequency mean.
As an example, the expression of the frequency domain dimension data corresponding to the frequency domain conversion dimension may be:
X_F = [ S(f), P, μ_f, σ_f², f_max ] (17);
wherein, X_F is used for indicating the frequency domain dimension data corresponding to the frequency domain conversion dimension, S(f) is used for indicating the spectral density corresponding to the window driving data, P is used for indicating the average power spectral density, μ_f is used for indicating the average frequency of the window driving data over the frequency domain, σ_f² is used for indicating the frequency variance of the window driving data in the frequency domain, and f_max is used for indicating the frequency value with the maximum spectral density.
In this way, the frequency domain dimension data corresponding to the frequency domain conversion dimension is obtained by carrying out data fusion on the frequency spectrum density, the average power spectrum density, the average frequency, the frequency variance and the maximum frequency value, so that the obtained frequency domain dimension data can comprehensively reflect the frequency domain characteristics of the window driving data from a plurality of different frequency domain dimensions, the road section state is predicted through the frequency domain dimension data, the predicted road section state obtained through prediction can more accurately reflect the road surface flatness of the target road, and the accuracy of road surface recognition is effectively improved.
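A minimal sketch of the frequency domain quantities above, assuming the window driving data is uniformly sampled (the 25 Hz rate mentioned later in this application); the helper name and the use of the FFT magnitude as the spectral density are illustrative assumptions.

```python
import numpy as np

def freq_domain_features(window: np.ndarray, fs: float = 25.0) -> dict:
    """Frequency-domain features of one window: average power, mean frequency,
    frequency variance, and the frequency with the maximum spectral density."""
    n = len(window)
    spec = np.abs(np.fft.rfft(window))          # spectral density magnitude |S(f)|
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)      # frequency axis in Hz
    avg_power = float(np.sum(spec ** 2) / n)    # average power over the interval
    f_mean = float(np.sum(freqs * spec) / np.sum(spec))
    f_var = float(np.sum((freqs - f_mean) ** 2 * spec) / np.sum(spec))
    f_peak = float(freqs[np.argmax(spec)])      # frequency with maximum density
    return {"avg_power": avg_power, "f_mean": f_mean,
            "f_var": f_var, "f_peak": f_peak}

# Example: a pure 5 Hz sine sampled at 25 Hz peaks at 5 Hz in the spectrum.
t = np.arange(100) / 25.0
feats = freq_domain_features(np.sin(2 * np.pi * 5.0 * t), fs=25.0)
```

The returned quantities, together with the spectral density itself, can be fused into the frequency domain dimension data of the window.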
In step 10513, wavelet transform is performed on the window travel data from the wavelet transform dimension to obtain wavelet transform dimension data corresponding to the wavelet transform dimension.
In some embodiments, the wavelet transform (wavelet transform, WT) is a transform analysis method that inherits and develops the localization concept of the short-time Fourier transform while overcoming the disadvantage that the window size does not change with frequency; it can provide a "time-frequency" window that changes with frequency, and is an ideal tool for signal time-frequency analysis and processing. Its main characteristic is that the features of certain aspects of a problem can be fully highlighted through the transform; it can perform local analysis of time (space) frequency, and gradually refines the signal (function) at multiple scales through scaling and translation operations, finally achieving time subdivision at high frequency and frequency subdivision at low frequency. It can automatically meet the requirements of time-frequency signal analysis and can therefore focus on any detail of the signal.
In some embodiments, wavelet transformation may decompose the signal into wavelet components of different frequencies; the pywavelets library in Python is used for the wavelet transformation and for feature extraction in the decomposition process. Given the wavelet type and level = n, n orders of detail coefficients are returned, and features such as the average value, the variance and the energy function of the output n orders of coefficients are computed respectively.
In this way, wavelet transformation is performed on window driving data through wavelet transformation dimensions to obtain wavelet transformation dimension data corresponding to the wavelet transformation dimensions, so that the obtained wavelet transformation dimension data can comprehensively reflect wavelet transformation characteristics of the window driving data from a plurality of different wavelet transformation dimensions, and prediction of road section states is performed through the wavelet transformation dimension data, so that the predicted road section states obtained through prediction can reflect the road surface flatness of a target road more accurately, and the accuracy of road surface recognition is effectively improved.
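The per-level detail-coefficient statistics can be illustrated without the pywavelets library by a hand-rolled Haar decomposition; this sketch mirrors what pywt.wavedec with wavelet='haar' would return for each level, and the helper name is hypothetical.

```python
import numpy as np

def haar_dwt_features(signal: np.ndarray, level: int = 2) -> list:
    """Minimal Haar wavelet decomposition: returns per-level (mean, variance,
    energy) of the detail coefficients, as described for pywt.wavedec output."""
    s = signal.astype(float)
    feats = []
    for _ in range(level):
        a = (s[0::2] + s[1::2]) / np.sqrt(2)    # approximation coefficients
        d = (s[0::2] - s[1::2]) / np.sqrt(2)    # detail coefficients
        feats.append((float(np.mean(d)), float(np.var(d)),
                      float(np.sum(d ** 2))))   # energy = sum of squares
        s = a                                    # recurse on the approximation
    return feats

# Example: a constant signal has zero detail energy at every level.
feats = haar_dwt_features(np.ones(8), level=2)
```

In practice the wavelet type and level would be chosen per the embodiment, and the tuples concatenated to form the wavelet transform dimension data.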
In step 1052, feature extraction is performed on each dimension data, so as to obtain dimension features corresponding to each dimension data.
In some embodiments, the feature extraction described above is a process of converting dimension data into corresponding vector-form dimension features.
In some embodiments, the feature extraction is performed on each piece of dimension data to obtain dimension features corresponding to each piece of dimension data, which may be implemented in the following manner: and aiming at each dimension data, acquiring a feature extraction network corresponding to the dimension data, calling the feature extraction network, and carrying out feature extraction on the dimension data to obtain dimension features corresponding to the dimension data.
In some embodiments, the feature extraction network corresponding to the acquired dimension data may be implemented as follows: acquiring an initial feature extraction network (AlexNet) and a dimension data sample corresponding to dimension data, and training the initial feature extraction network based on the dimension data sample to obtain a feature extraction network corresponding to the dimension data.
In step 1053, feature fusion is performed on each dimension feature, resulting in a fused feature.
In some embodiments, the feature fusion refers to a process of feature merging different dimensional features to obtain a fused feature.
In some embodiments, step 1053 described above may be implemented as follows: acquiring feature dimensions corresponding to the feature features of each dimension respectively, and averaging the feature dimensions to obtain average dimensions; comparing each characteristic dimension with the average dimension to obtain dimension comparison results corresponding to each characteristic dimension; when the dimension comparison result indicates that the feature dimension is the same as the average dimension, determining the corresponding dimension feature as a target dimension feature corresponding to the dimension feature; when the dimension comparison result indicates that the feature dimension is different from the average dimension, the feature dimension of the corresponding dimension feature is adjusted to the average dimension, and the target dimension feature corresponding to the dimension feature is obtained; and carrying out feature fusion on each target dimension feature to obtain fusion features.
As an example, the expression for the average dimension may be:
d̄ = (1/K) Σᵢ dᵢ (18);
wherein, d̄ is used for indicating the average dimension, dᵢ is used for indicating the feature dimension corresponding to the i-th dimension feature, and K is used for indicating the total number of dimension features.
As an example, the expression for the above fusion feature may be:
F_fuse = (1/K) Σᵢ Fᵢ (19);
wherein, F_fuse is used for indicating the fusion feature, Fᵢ is used for indicating the target dimension feature corresponding to the i-th dimension feature, and K is used for indicating the total number of dimension features.
In some embodiments, the foregoing adjusting the feature dimensions of the corresponding dimension features to the average dimensions to obtain the target dimension features corresponding to the dimension features may be implemented in the following manner: when the feature dimension of the corresponding dimension feature is smaller than the average dimension, carrying out dimension-lifting processing on the feature dimension of the corresponding dimension feature to obtain a target dimension feature of the average dimension; and when the feature dimension of the corresponding dimension feature is larger than the average dimension, performing dimension reduction processing on the feature dimension of the corresponding dimension feature to obtain the target dimension feature of the average dimension.
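The alignment-then-fusion procedure above can be sketched as follows; truncation for dimension reduction, zero-padding for dimension lifting, and element-wise averaging as the fusion operation are illustrative choices, since the embodiment leaves the concrete operations open.

```python
import numpy as np

def fuse_features(features: list) -> np.ndarray:
    """Align each dimension feature to the average dimension (truncate to reduce,
    zero-pad to lift), then fuse the aligned features by element-wise averaging."""
    avg_dim = int(round(sum(len(f) for f in features) / len(features)))
    aligned = []
    for f in features:
        f = np.asarray(f, dtype=float)
        if len(f) > avg_dim:
            f = f[:avg_dim]                          # dimension reduction
        elif len(f) < avg_dim:
            f = np.pad(f, (0, avg_dim - len(f)))     # dimension lifting
        aligned.append(f)
    return np.mean(aligned, axis=0)                  # fused feature

# Example: features of dimension 2, 4 and 6 are aligned to the average dimension 4.
fused = fuse_features([[1.0, 1.0],
                       [2.0, 2.0, 2.0, 2.0],
                       [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]])
```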
In step 1054, a road surface recognition model is invoked, and road surface recognition is performed on the target road based on the fusion characteristics, so as to obtain the predicted road section state of the target road.
In some embodiments, the pavement recognition model may be a codec (encoder-decoder) model, where the codec model includes at least one encoding layer and at least one decoding layer, and the numbers of encoding layers and decoding layers are the same.
As an example, referring to fig. 10, fig. 10 is a schematic structural diagram of a pavement recognition model provided in an embodiment of the present application, where the pavement recognition model includes an encoding layer 1 and a decoding layer 2, and the step 1054 may be implemented as follows: calling a coding layer 1 to code the fusion characteristics to obtain reference characteristics; and calling the decoding layer 2, and carrying out pavement identification on the target road based on the reference characteristics to obtain the predicted road section state of the target road.
In some embodiments, the predicted road segment state of the target road is used to indicate the predicted road surface flatness of the target road. The accuracy of the predicted road surface flatness of the target road is positively correlated with the recognition performance of the road surface recognition model: the higher the recognition performance of the road surface recognition model, the higher the accuracy of the predicted road surface flatness of the corresponding target road; the lower the recognition performance of the road surface recognition model, the lower the accuracy of the predicted road surface flatness of the corresponding target road.
In step 1055, the road surface recognition model is trained in combination with the predicted road segment states and the corresponding road segment states.
In some embodiments, step 1055 above may be implemented as follows: and determining a loss value of the road surface recognition model by combining the predicted road section state and the road section state, and reversely updating model parameters of the road surface recognition model based on the loss value to obtain the trained road surface recognition model.
As an example, the expression of the above-described loss value may be:
L = −( y·log ŷ + (1 − y)·log(1 − ŷ) ) (20);
wherein, L is used for indicating the loss value, ŷ is used for indicating the predicted road segment state, and y is used for indicating the road segment state.
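As a hedged illustration, if the loss of the road surface recognition model is taken to be a binary cross-entropy between the predicted and labeled road segment states (one plausible reading for the two-class flat/bumpy case; a squared error would fit the description equally well), it can be computed as:

```python
import numpy as np

def bce_loss(y_pred: float, y_true: float, eps: float = 1e-7) -> float:
    """Binary cross-entropy between the predicted and labeled road segment state."""
    p = min(max(y_pred, eps), 1.0 - eps)    # clamp to avoid log(0)
    return float(-(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))

# A confident correct prediction yields a near-zero loss; a confident
# wrong prediction yields a large loss that drives the parameter update.
good = bce_loss(0.99, 1.0)
bad = bce_loss(0.01, 1.0)
```

The loss value would then be back-propagated to update the model parameters, as described in the embodiment.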
In some embodiments, referring to fig. 8, fig. 8 is a flowchart of a training method of a pavement recognition model according to an embodiment of the present application, after step 105 shown in fig. 3, updating of a target map may be implemented through steps 106 to 109 shown in fig. 8.
In step 106, a navigation route of the target vehicle is obtained, and a road segment state of a reference road corresponding to the navigation route is queried from the target map, so as to obtain a query result.
In some embodiments, the query result is used to indicate whether the road segment state of the reference road is recorded in the target map. When the query result indicates that the road segment state of the reference road is recorded in the target map, it is explained that the target vehicle, or a vehicle using the target map, has historically traveled over the reference road and reported the road segment state to the target map. When the query result indicates that the road segment state of the reference road is not recorded in the target map, it is explained that neither the target vehicle nor any vehicle using the target map has historically traveled over the reference road.
In step 107, when the query result indicates that the road segment state of the reference road is not recorded in the target map, the target running data of the target vehicle running on the reference road is acquired, and the feature extraction is performed on the target running data, so as to obtain the target running feature.
In some embodiments, when the query result indicates that the road segment state of the reference road is not recorded in the target map, it is explained that neither the target vehicle nor any vehicle using the target map has historically traveled over the reference road; that is, the road segment state of the reference road needs to be predicted, so that a vehicle about to drive onto the reference road can predict the flatness of the reference road in advance. A data acquisition instruction can then be sent to the target vehicle, and the target driving data returned by the target vehicle in response to the data acquisition instruction is received, where the target driving features are obtained by performing feature extraction on the target driving data and are a vector-form representation of the target driving data.
In some embodiments, the first prompt information is generated when the query result indicates that the road segment state of the reference road is recorded in the target map and the road segment state of the reference road indicates that the road surface of the reference road is a non-flat road surface.
In some embodiments, the first prompt is for prompting a navigation route to be switched on the target map. When the query result indicates that the road section state of the reference road is recorded in the target map and the road section state of the reference road indicates that the road surface of the reference road is a non-flat road surface, the first prompt information can be generated and sent to the vehicle to be driven to the reference road, so that the vehicle to be driven to the reference road can predict that the road surface of the reference road is a non-flat road surface in advance, and the navigation route is switched on the target map, thereby avoiding the vehicle from being driven to the non-flat road surface.
In some embodiments, the second prompt information is generated when the query result indicates that the road segment state of the reference road is recorded in the target map and the road segment state of the reference road indicates that the road surface of the reference road is a flat road surface.
In some embodiments, the second prompting information is used for prompting that the reference road corresponding to the navigation route can smoothly pass through.
In some embodiments, when the query result indicates that the road segment state of the reference road is recorded in the target map and the road segment state of the reference road indicates that the road surface of the reference road is a flat road surface, the second prompt information may be generated and sent to the vehicle about to travel to the reference road, so that the vehicle about to travel to the reference road can predict in advance that the road surface of the reference road is a flat road surface, and thus knows in advance that the reference road corresponding to the navigation route can be smoothly traveled, avoiding the vehicle detouring to other non-flat road surfaces.
In step 108, the trained road surface recognition model is invoked, and road surface recognition is performed on the reference road based on the target driving characteristics, so as to obtain the predicted road section state of the reference road.
In some embodiments, the predicted road segment status of the reference road is used to indicate the flatness of the road surface of the reference road, and the model structure of the trained road surface recognition model is the same as the model structure of the road surface recognition model, and the model parameters are different.
In step 109, the predicted link state of the reference road is added to the target map to obtain an updated map.
In some embodiments, the updated map includes the predicted road segment state of the reference road, so that when other vehicles use the updated map for route planning, the road surface flatness of the reference road can be accurately predicted; when, between the navigation start point and the navigation end point, the road surface flatness of another route is higher than that of the reference road, the other navigation route is preferentially selected for travel, thereby effectively improving the route guiding performance of the updated map.
In this way, window division is performed on vehicle driving data to obtain window driving data corresponding to a plurality of window road segments respectively, road segment states corresponding to the corresponding window road segments are determined based on the window driving data, the window driving data are used as training samples, the corresponding road segment states are used as sample labels, a training sample set is constructed, and the road surface recognition model is trained based on the training sample set. Therefore, window division is carried out on the vehicle driving data to obtain window driving data corresponding to a plurality of window road sections respectively, each window road section corresponds to one road section state, and training samples are constructed based on the window driving data, so that the number of training samples for training the road surface recognition model is effectively increased, and the training strength of the road surface recognition model and the accuracy of the road surface recognition model obtained through training for recognizing whether the road on which the vehicle is driving is bumpy are effectively improved.
In the following, an exemplary application of the embodiments of the present application in an application scenario of an actual navigation map will be described.
When a user uses a mobile phone map for positioning or driving navigation, if a poor road condition is encountered, an early warning can be broadcast in advance, which improves the user's driving safety awareness, guarantees the safety of people and vehicles, greatly improves the comfort of the user's experience of the product, and increases user stickiness; meanwhile, some new paths can be mined, providing road data conditions.
In some embodiments, the training method of the pavement recognition model provided in the embodiments of the present application may be specifically implemented by the following manner: (1) data acquisition and truth value marking; (2) generating samples and build features; and (3) model training and testing. Referring to fig. 9, fig. 9 is a flow chart seven of a training method for a pavement recognition model according to an embodiment of the present application, and the training method for a pavement recognition model according to an embodiment of the present application may be implemented through steps 201 to 209 shown in fig. 9.
In step 201, data acquisition is performed on a road surface to obtain acquired data.
In some embodiments, a mobile phone map is used for driving navigation, and the vehicle drives over a continuous bumpy road (unpaved); the start time and the end time of the bumpy road are captured by recording, and the recorder also manually notes the start time and the end time of the continuous bumpy road. Throughout the navigation process, a discontinuous bumpy road is considered flat. Multiple data sets are collected with different mobile phones on different paths, recording accelerometer, gyroscope and GPS information every second at a frequency of 25 Hz, and the true value is determined through the recording time and manual marking. If time is sufficient, and in order to ensure the authenticity of the data, the accelerometer readings can be plotted in time order; the data shows larger jitter in the bumpy time periods, so the true value can also be calibrated in an auxiliary way by a visual method. The consolidated data format is as follows: timestamp, label, acc_x, acc_y, acc_z, gyr_x, gyr_y, gyr_z, gps_lat, gps_lng, gps_spd.
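The consolidated data format above can be parsed with the standard library alone; the field names follow the format string, while the numeric types and the 1 = bumpy / 0 = flat label encoding are assumptions.

```python
import csv
import io

# Field order of the consolidated data format described above.
FIELDS = ["timestamp", "label", "acc_x", "acc_y", "acc_z",
          "gyr_x", "gyr_y", "gyr_z", "gps_lat", "gps_lng", "gps_spd"]

def parse_records(text: str) -> list:
    """Parse consolidated CSV records into dicts with numeric sensor values."""
    rows = []
    for raw in csv.reader(io.StringIO(text)):
        row = dict(zip(FIELDS, raw))
        row["label"] = int(row["label"])        # assumed: 1 = bumpy, 0 = flat
        for k in FIELDS[2:]:
            row[k] = float(row[k])
        rows.append(row)
    return rows

# One hypothetical 25 Hz record: bumpy label with accelerometer/gyro/GPS values.
records = parse_records(
    "1700000000.04,1,0.12,-0.03,9.81,0.01,0.00,0.02,39.90,116.40,8.5")
```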
In step 202, the acquired data is mean filtered.
In some embodiments, average filtering is also called linear filtering; the main method it adopts is the neighborhood averaging method, whose basic principle is to replace each data value in the acquired data with the average value of its neighborhood, so as to obtain the mean-filtered acquired data.
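A sketch of the neighborhood averaging described above; the window size and the edge handling (shrinking the neighborhood at the boundaries) are illustrative choices.

```python
import numpy as np

def mean_filter(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Neighborhood-averaging (mean) filter: replace each value with the
    average of the k-sample window centered on it; edges use the available
    neighbors only."""
    half = k // 2
    out = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out[i] = np.mean(x[lo:hi])
    return out

# A single spike in the accelerometer trace is smoothed toward its neighbors.
smoothed = mean_filter(np.array([0.0, 0.0, 9.0, 0.0, 0.0]), k=3)
```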
In step 203, a time window and step size setting is performed.
In some embodiments, the bump distance threshold is defined herein as thred_dist, tentatively thred_dist = 100 meters. The window is slid along the route; each time, if more than half of the records in the window are bumpy, the window is considered a bumpy record, and vice versa. After the window data is obtained, the subsequent data conversion is carried out; the features are mainly divided into three dimensions: time domain, frequency domain and wavelet transform.
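The sliding-window majority vote above can be sketched as follows; for simplicity the window and step are expressed in record counts rather than the 100-meter thred_dist distance, which is an assumption.

```python
def label_windows(labels, window=100, step=50):
    """Slide a window over per-record bump labels (1 = bumpy, 0 = flat); a
    window in which more than half of the records are bumpy is labeled bumpy."""
    out = []
    for start in range(0, len(labels) - window + 1, step):
        chunk = labels[start:start + window]
        out.append(1 if sum(chunk) > len(chunk) / 2 else 0)
    return out

# 150 bumpy records followed by 150 flat ones, 100-record windows, step 50.
window_labels = label_windows([1] * 150 + [0] * 150, window=100, step=50)
```

The window exactly straddling the transition (50 bumpy of 100) does not exceed half and is therefore labeled flat.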
In step 204, the acquired data is data converted.
In step 205, time domain data conversion is performed on the acquired data to obtain time domain converted data.
In some embodiments, the features include the following statistics: mean, variance, coefficient of variation, peak factor, pulse factor, and the like. The coefficient of variation is used to measure the relative degree of dispersion of the data set, formula = standard deviation / average value; the peak factor is an index describing the ratio between the signal peak and its root mean square value, formula = peak value / root mean square value; the pulse factor is an index describing the ratio between the signal peak and its average, formula = peak value / average value.
In step 206, the acquired data is subjected to frequency domain data conversion, resulting in frequency domain converted data.
In some embodiments, the frequency domain features here are based on performing a Fourier transform on the signal and analyzing the result. The following features are defined: spectral density, describing the energy distribution at different frequencies, calculated by Fourier transform; average power spectral density, which represents the average power of a signal at different frequencies; frequency mean, which represents the average frequency of a signal in the frequency domain, calculated as the spectral-density-weighted average: frequency mean = Σ( f × spectral density ) / Σ( spectral density ); frequency variance, representing the degree of dispersion of the frequency distribution of a signal in the frequency domain, calculated as the spectral-density-weighted squared deviation: frequency variance = Σ( (f − frequency mean)² × spectral density ) / Σ( spectral density ); the peak frequency is the frequency value with the highest spectral density of the signal, calculated by finding the frequency corresponding to the maximum spectral density.
In step 207, the acquired data is subjected to wavelet transform data conversion to obtain wavelet transform conversion data.
In some embodiments, wavelet transformation may decompose the signal into wavelet components of different frequencies; the pywavelets library in Python is used for the wavelet transformation and for feature extraction in the decomposition process. Given the wavelet type and level = n, n orders of detail coefficients are returned, and features such as the average value, the variance and the energy function of the output n orders of coefficients are computed respectively.
In step 208, feature extraction and merging are performed on the data after the data conversion.
In some embodiments, the step 208 may be implemented as follows: and respectively carrying out feature extraction on the wavelet transformation conversion data, the frequency domain conversion data and the time domain conversion data to obtain wavelet transformation features, frequency domain conversion features and time domain conversion features, and fusing the wavelet transformation features, the frequency domain conversion features and the time domain conversion features to obtain fusion features.
In step 209, the pavement recognition model is trained.
In some embodiments, a classification training study will be performed here using SVM, whose goal is to find an optimal hyperplane, separating samples of different classes.
For a two-classification SVM, the objective function can be expressed as: f(x) = sign(wᵀx + b); where x is the input sample, w is the weight vector, and b is the bias term. sign is the sign function: when wᵀx + b is greater than 0, f(x) is the positive class; when wᵀx + b is less than 0, f(x) is the negative class. The goal of the SVM is to maximize the margin, i.e., the minimum distance of the sample points from the hyperplane. This can be achieved by the following optimization problem: minimize (1/2)‖w‖² + C Σᵢ max(0, 1 − yᵢ(wᵀxᵢ + b)); where ‖w‖² is the square of the L2 norm of the weight vector, C is the regularization parameter, yᵢ is the true label (1 or −1) of the sample, and xᵢ is the sample feature vector. In the model training process, the kernel parameter (kernel function) and the parameter C of the model need to be tuned; the parameter C represents the penalty term, and its search range is set to [0.01, 0.1, 0.5, 1, 10, 100]; the search range of the kernel function type is [linear, poly, rbf, sigmoid, precomputed].
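The decision function and soft-margin objective above can be written out directly; this is a sketch in numpy, while in practice a library SVM with a grid search over C and the kernel type would be used, as the paragraph describes.

```python
import numpy as np

def svm_decision(w: np.ndarray, b: float, x: np.ndarray) -> int:
    """f(x) = sign(w^T x + b): +1 for the positive class, -1 for the negative."""
    return 1 if float(w @ x + b) > 0 else -1

def svm_objective(w: np.ndarray, b: float, X: np.ndarray,
                  y: np.ndarray, C: float) -> float:
    """Soft-margin objective: (1/2)||w||^2 + C * sum of hinge losses."""
    margins = y * (X @ w + b)                  # y_i (w^T x_i + b)
    hinge = np.maximum(0.0, 1.0 - margins)     # max(0, 1 - margin)
    return float(0.5 * np.dot(w, w) + C * np.sum(hinge))

# Two linearly separable points; w = (1, 0), b = 0 classifies both with margin 2,
# so all hinge terms vanish and only the regularization term remains.
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 0.0])
obj = svm_objective(w, 0.0, X, y, C=1.0)
```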
Therefore, the road bump model (namely the road surface recognition model described above) is obtained through training by collecting data of some bumpy roads and flat roads; its accuracy on the test data is 80%, and it can conveniently provide auxiliary information for road data production, guidance broadcasting and the like.
In some embodiments, due to the limited amount of collected data, the embodiments of the present application use an SVM model with relatively strong robustness; if more data can be collected, a large model such as a deep learning model can be applied to perform automatic feature extraction and end-to-end training. In the analysis process, the fluctuation of the accelerometer data is found to be positively correlated with the bumping degree, so in order to save cost, bumpy roads can also be mined with a heuristic algorithm, and the generalization effect is better.
In this way, window division is performed on vehicle driving data to obtain window driving data corresponding to a plurality of window road segments respectively, road segment states corresponding to the corresponding window road segments are determined based on the window driving data, the window driving data are used as training samples, the corresponding road segment states are used as sample labels, a training sample set is constructed, and the road surface recognition model is trained based on the training sample set. Therefore, window division is carried out on the vehicle driving data to obtain window driving data corresponding to a plurality of window road sections respectively, each window road section corresponds to one road section state, and training samples are constructed based on the window driving data, so that the number of training samples for training the road surface recognition model is effectively increased, and the training strength of the road surface recognition model and the accuracy of the road surface recognition model obtained through training for recognizing whether the road on which the vehicle is driving is bumpy are effectively improved.
It will be appreciated that in the embodiments of the present application, related data such as vehicle driving data is referred to, and when the embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use and processing of related data is required to comply with related laws and regulations and standards of related countries and regions.
Continuing with the description below of an exemplary architecture of the training device 455 for a pavement recognition model provided in embodiments of the present application implemented as a software module, in some embodiments, as shown in fig. 2, the software module stored in the training device 455 for a pavement recognition model of the memory 450 may include: the acquisition module 4551 is configured to acquire vehicle running data acquired by a target vehicle in a process of passing through a target road, and divide the vehicle running data into windows to obtain window running data corresponding to a plurality of window segments respectively; the determining module 4552 is configured to determine, based on each of the window driving data, a road segment state corresponding to the corresponding window road segment, where the road segment state includes a flat state and a bump state; the training module 4553 is configured to construct a training sample set for training the road surface recognition model by using each window driving data as a training sample and using the corresponding road segment state as a sample label, and train the road surface recognition model based on the training sample set, where the road surface recognition model is used for recognizing the road surface state of the road on which the target vehicle is driven based on the driving data of the target vehicle.
In some embodiments, the training module is further configured to perform the following processing for each window driving data in the training sample set: performing data conversion on the window driving data from a plurality of different data conversion dimensions to obtain dimension data corresponding to each data conversion dimension; respectively extracting features of the dimensional data to obtain dimensional features corresponding to the dimensional data, and carrying out feature fusion on the dimensional features to obtain fusion features; invoking the road surface recognition model, and carrying out road surface recognition on the target road based on the fusion characteristics to obtain a predicted road section state of the target road; and training the pavement recognition model by combining the predicted road section state and the corresponding road section state.
In some embodiments, the acquiring module is further configured to acquire the running acceleration and the running angular velocity corresponding to each acquisition time while the target vehicle passes through the target road; for each acquisition time, average the modulus of the corresponding running acceleration and the modulus of the running angular velocity to obtain the time driving data corresponding to that acquisition time; and arrange the time driving data in order of acquisition time, from earliest to latest, to obtain the vehicle driving data.
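A minimal sketch of this per-timestamp aggregation, assuming the acceleration and angular velocity are 3-axis vectors and "module value" means the Euclidean norm:

```python
import numpy as np

def timestamp_driving_data(accel_xyz, gyro_xyz):
    """Average the norms of the acceleration and angular-velocity vectors.

    Assumes 3-axis sensor readings; the norm is used as the "module value"
    described in the text.
    """
    a = np.linalg.norm(accel_xyz)   # running-acceleration module value
    w = np.linalg.norm(gyro_xyz)    # running-angular-velocity module value
    return (a + w) / 2.0

# One (accel, gyro) pair per acquisition time, ordered earliest to latest
samples = [([3.0, 4.0, 0.0], [0.0, 0.0, 2.0]),
           ([0.0, 0.0, 1.0], [1.0, 0.0, 0.0])]
vehicle_driving_data = [timestamp_driving_data(a, w) for a, w in samples]
```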
In some embodiments, the window driving data includes time driving data corresponding to each collection time of the target vehicle during passing through a corresponding window road section, and the determining module is further configured to perform the following processing for each window driving data: determining the target flatness corresponding to the acquisition time based on the time running data corresponding to the acquisition time aiming at each acquisition time corresponding to the window running data; when the target flatness indicates that the road surface of the target road through which the target vehicle passes at the acquisition time is a flat road surface, determining the acquisition time corresponding to the target flatness as a target acquisition time; and determining the road section state corresponding to the corresponding window road section based on the number of the target acquisition moments.
In some embodiments, the determining module is further configured to obtain at least one adjacent acquisition time of the acquisition time, and subtract the time driving data corresponding to the acquisition time from the time driving data corresponding to each adjacent acquisition time, so as to obtain a jitter difference value corresponding to each adjacent acquisition time; determine the target flatness as a first flatness when at least one of the jitter difference values is greater than or equal to a difference threshold; and determine the target flatness as a second flatness when none of the jitter difference values is greater than or equal to the difference threshold; the first flatness indicates that the road surface over which the target vehicle passes at the acquisition time is a non-flat road surface, and the second flatness indicates that it is a flat road surface.
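The jitter-difference test above can be sketched as follows. The absolute value of the difference and the threshold value are assumptions (the text does not state whether signed differences are used):

```python
def target_flatness(series, idx, diff_threshold):
    """Classify the sample at `idx` via jitter differences to its neighbours.

    Returns 'non_flat' (first flatness) if any absolute difference to an
    adjacent acquisition time reaches `diff_threshold`, else 'flat'
    (second flatness). Using the absolute difference is an assumption.
    """
    neighbours = [j for j in (idx - 1, idx + 1) if 0 <= j < len(series)]
    diffs = [abs(series[idx] - series[j]) for j in neighbours]
    return 'non_flat' if any(d >= diff_threshold for d in diffs) else 'flat'

# A sudden spike at index 2 marks its neighbourhood as non-flat
series = [1.0, 1.1, 4.0, 1.2]
states = [target_flatness(series, i, diff_threshold=1.0) for i in range(4)]
```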
In some embodiments, the determining module is further configured to compare the number of the target acquisition time with a number threshold to obtain a number comparison result; when the number comparison result indicates that the number of the target acquisition moments is greater than the number threshold, determining the road section state as a flat state; and when the number comparison result indicates that the number of the target acquisition moments is smaller than or equal to the number threshold value, determining the road section state as a bumpy state.
In some embodiments, the determining module is further configured to divide the number of target acquisition times by the total number of acquisition times to obtain a flat ratio of the target road, where the value of the flat ratio is positively correlated with the flatness of the target road; the road section state is determined to be the flat state when the flat ratio is greater than a flat ratio threshold, and is determined to be the bumpy state when the flat ratio is less than or equal to the flat ratio threshold.
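A sketch of the flat-ratio labelling rule; the threshold value 0.8 is purely illustrative (the text does not give one):

```python
def segment_state(flat_flags, flat_ratio_threshold=0.8):
    """Label a window road segment from per-timestamp flatness flags.

    Divides the number of flat acquisition times by the total number of
    acquisition times; the 0.8 threshold is an illustrative assumption.
    """
    flat_ratio = sum(flat_flags) / len(flat_flags)
    return 'flat' if flat_ratio > flat_ratio_threshold else 'bumpy'

# 4 of 5 timestamps flat gives ratio 0.8, which does not exceed the threshold
state = segment_state([1, 1, 1, 1, 0])
```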
In some embodiments, the data conversion dimension includes a time domain conversion dimension, a frequency domain conversion dimension, and a wavelet transformation conversion dimension, and the training module is further configured to perform time domain data conversion on the window driving data from the time domain conversion dimension to obtain time domain dimension data corresponding to the time domain conversion dimension; performing frequency domain data conversion on the window driving data from the frequency domain conversion dimension to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension; and carrying out wavelet transformation on the window driving data from the wavelet transformation dimension to obtain wavelet transformation dimension data corresponding to the wavelet transformation dimension.
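Of the three conversion dimensions, the wavelet branch can be illustrated with a one-level Haar transform implemented directly in NumPy. The wavelet family, decomposition depth, and the use of sub-band energies as the dimension data are all assumptions; the patent only names the wavelet transformation dimension:

```python
import numpy as np

def haar_dwt_features(window):
    """One-level Haar wavelet transform of a window of driving data.

    The low-frequency sub-band captures the slow trend, the high-frequency
    sub-band captures abrupt jolts; the energy of each sub-band is returned
    as illustrative wavelet-dimension data.
    """
    x = np.asarray(window, dtype=float)
    if len(x) % 2:                               # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-frequency trend
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-frequency jolts
    return np.array([(approx ** 2).sum(), (detail ** 2).sum()])

# An isolated bump at index 2 puts energy into the detail sub-band
energies = haar_dwt_features([1.0, 1.0, 5.0, 1.0])
```

Because the Haar transform is orthonormal, the two sub-band energies sum to the energy of the original window.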
In some embodiments, the window driving data includes time driving data corresponding to each acquisition time of the target vehicle while passing through the window road segment, and the training module is further configured to obtain the average value, square average value, variance and standard deviation of the time driving data, as well as the maximum value of the time driving data; divide the standard deviation by the average value to obtain the discrete coefficient of the window driving data, divide the maximum value by the square average value to obtain the peak factor of the window driving data, and divide the maximum value by the average value to obtain the pulse factor of the window driving data; and perform data fusion on the average value, the variance, the discrete coefficient, the peak factor and the pulse factor to obtain the time domain dimension data corresponding to the time domain conversion dimension.
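The time-domain statistics above can be sketched as follows. "Square average value" is read here as the root-mean-square (RMS), which makes max/RMS the classic crest factor; this reading is an interpretation, not something the text states explicitly:

```python
import numpy as np

def time_domain_features(window):
    """Time-domain dimension data for one window of driving data.

    Returns [mean, variance, discrete coefficient, peak factor,
    pulse factor], fused by simple concatenation into one vector.
    """
    x = np.asarray(window, dtype=float)
    mean, var, std = x.mean(), x.var(), x.std()
    rms = np.sqrt(np.mean(x ** 2))       # "square average value" as RMS
    peak = np.abs(x).max()
    cv = std / mean                      # discrete coefficient
    crest = peak / rms                   # peak factor
    impulse = peak / mean                # pulse factor
    return np.array([mean, var, cv, crest, impulse])

feats = time_domain_features([1.0, 2.0, 3.0, 2.0])
```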
In some embodiments, the training module is further configured to obtain a spectral density and an average power spectral density corresponding to the window driving data, and determine an average frequency of the window driving data in a frequency domain based on the spectral density; combining the average frequency and the frequency spectrum density, determining the frequency variance of the window driving data on the frequency domain, and obtaining the maximum frequency value in the frequency spectrum density; and carrying out data fusion on the frequency spectrum density, the average power spectrum density, the average frequency, the frequency variance and the maximum frequency value to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension.
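A sketch of the frequency-domain quantities using a plain FFT periodogram. The sampling rate `fs`, the periodogram estimator, and reading "maximum frequency value" as the frequency of the spectral peak are all assumptions; the text only names the quantities to compute:

```python
import numpy as np

def frequency_domain_features(window, fs=100.0):
    """Frequency-domain dimension data for one window of driving data.

    Returns [average power spectral density, average frequency,
    frequency variance, peak frequency].
    """
    x = np.asarray(window, dtype=float)
    spectrum = np.abs(np.fft.rfft(x)) ** 2          # spectral density (periodogram)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    avg_power = spectrum.mean()                     # average power spectral density
    mean_freq = (freqs * spectrum).sum() / spectrum.sum()
    freq_var = (((freqs - mean_freq) ** 2) * spectrum).sum() / spectrum.sum()
    peak_freq = freqs[spectrum.argmax()]            # frequency of the spectral peak
    return np.array([avg_power, mean_freq, freq_var, peak_freq])

# Example: a 10 Hz sinusoid sampled at 100 Hz for one second
t = np.arange(100) / 100.0
feats = frequency_domain_features(np.sin(2 * np.pi * 10 * t), fs=100.0)
```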
In some embodiments, the training module is further configured to obtain feature dimensions corresponding to the dimensional features respectively, and average the feature dimensions to obtain average dimensions; comparing each characteristic dimension with the average dimension to obtain a dimension comparison result corresponding to each characteristic dimension; when the dimension comparison result indicates that the feature dimension is the same as the average dimension, determining the corresponding dimension feature as a target dimension feature corresponding to the dimension feature; when the dimension comparison result indicates that the feature dimension is different from the average dimension, adjusting the feature dimension of the corresponding dimension feature to the average dimension to obtain a target dimension feature corresponding to the dimension feature; and carrying out feature fusion on each target dimension feature to obtain the fusion feature.
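The dimension-alignment and fusion procedure above can be sketched as follows. The adjustment method for vectors whose length differs from the average is not specified in the text, so linear resampling is used here as an assumption; fusion is modelled as concatenation:

```python
import numpy as np

def fuse_features(dim_features):
    """Align each per-dimension feature vector to the average length, then fuse.

    Vectors matching the average dimension pass through unchanged; the
    others are resampled to the average dimension by linear interpolation
    (an assumption). The aligned vectors are concatenated into one
    fusion feature.
    """
    avg_dim = int(round(np.mean([len(f) for f in dim_features])))
    aligned = []
    for f in dim_features:
        f = np.asarray(f, dtype=float)
        if len(f) != avg_dim:
            # resample to the average dimension
            f = np.interp(np.linspace(0, len(f) - 1, avg_dim),
                          np.arange(len(f)), f)
        aligned.append(f)
    return np.concatenate(aligned)

fused = fuse_features([[1.0, 2.0], [1.0, 2.0, 3.0, 4.0], [0.0, 3.0, 6.0]])
```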
In some embodiments, the training device for a pavement recognition model further includes: the road surface recognition module is used for acquiring a navigation route of the target vehicle, inquiring the road section state of a reference road corresponding to the navigation route from the target map, and obtaining an inquiry result; when the query result indicates that the road section state of the reference road is not recorded in the target map, acquiring target running data of the target vehicle running on the reference road, and extracting features of the target running data to obtain target running features; invoking a trained road surface recognition model, and carrying out road surface recognition on the reference road based on the target driving characteristics to obtain a predicted road section state of the reference road; and adding the predicted road section state of the reference road to the target map to obtain an updated map.
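The query-then-predict map-update flow can be sketched with the target map modelled as a plain dictionary from road id to road section state. All callables here are placeholders for the components described above, not part of the patent:

```python
def update_map(target_map, route_roads, model, get_driving_data, extract_features):
    """Fill in missing road-section states along a navigation route.

    `target_map` maps road id -> 'flat'/'bumpy'. When the query finds no
    recorded state, driving data for the road is collected, features are
    extracted, and the trained model predicts the state, which is then
    added to the map.
    """
    for road in route_roads:
        state = target_map.get(road)          # query result
        if state is None:                     # state not recorded yet
            feats = extract_features(get_driving_data(road))
            target_map[road] = model(feats)   # predicted road-section state
    return target_map

# Toy usage with stub components standing in for the real pipeline
updated = update_map({'r1': 'flat'}, ['r1', 'r2'],
                     model=lambda f: 'bumpy',
                     get_driving_data=lambda road: [1.0, 4.0],
                     extract_features=lambda xs: xs)
```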
In some embodiments, the road surface identification module is further configured to generate a first prompt message when the query result indicates that a road segment state of the reference road is recorded in the target map, and the road segment state of the reference road indicates that the road surface of the reference road is a non-flat road surface; generating second prompt information when the query result indicates that the road section state of the reference road is recorded in the target map and the road section state of the reference road indicates that the road surface of the reference road is a flat road surface; the first prompting information is used for prompting that the navigation route is switched on the target map, and the second prompting information is used for prompting that the reference road corresponding to the navigation route can stably pass.
Embodiments of the present application provide a computer program product comprising a computer program or computer-executable instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the electronic device executes the training method of the pavement identification model according to the embodiment of the application.
The present embodiments provide a computer-readable storage medium having computer-executable instructions stored therein that, when executed by a processor, cause the processor to perform the training method of a road surface recognition model provided by the embodiments of the present application, for example, the training method of a road surface recognition model illustrated in fig. 3.
In some embodiments, the computer readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it may also be any of various devices including one of, or any combination of, the above memories.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to a file in a file system; they may be stored as part of a file that holds other programs or data, for example in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function, and works together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
As an example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
In summary, the embodiment of the application has the following beneficial effects:
(1) Window division is carried out on the vehicle driving data to obtain window driving data corresponding to each of a plurality of window road segments; the road segment state of each window road segment is determined from its window driving data; each piece of window driving data is then used as a training sample, with the corresponding road segment state as its sample label, to construct a training sample set, and the road surface recognition model is trained on this set. Because each window road segment yields its own labelled sample, the number of training samples for the road surface recognition model is effectively increased, which improves both the thoroughness of training and the accuracy with which the trained model recognizes whether the road on which the vehicle drives is bumpy.
(2) The time domain dimension data corresponding to the time domain conversion dimension is obtained by carrying out data fusion on the average value, the variance, the discrete coefficient, the peak factor and the pulse factor, so that the obtained time domain dimension data can comprehensively reflect the time domain characteristics of the vehicle driving data from a plurality of different time domain dimensions, the road section state is predicted through the time domain dimension data, the predicted road section state obtained by prediction can more accurately reflect the road surface flatness of the target road, and the accuracy of road surface recognition is effectively improved.
(3) The frequency domain dimension data corresponding to the frequency domain conversion dimension is obtained by carrying out data fusion on the frequency spectrum density, the average power spectrum density, the average frequency, the frequency variance and the maximum frequency value, so that the obtained frequency domain dimension data can comprehensively reflect the frequency domain characteristics of the vehicle driving data from a plurality of different frequency domain dimensions, the road section state is predicted through the frequency domain dimension data, the predicted road section state obtained through prediction can more accurately reflect the road surface flatness of the target road, and the accuracy of road surface recognition is effectively improved.
(4) The wavelet transformation is carried out on the vehicle driving data from the wavelet transformation dimension to obtain wavelet transformation dimension data corresponding to the wavelet transformation dimension, so that the obtained wavelet transformation dimension data can comprehensively reflect the wavelet transformation characteristics of the vehicle driving data from a plurality of different wavelet transformation dimensions, the road section state is predicted through the wavelet transformation dimension data, the predicted road section state obtained through prediction can reflect the road surface flatness of the target road more accurately, and the accuracy of road surface recognition is effectively improved.
(5) The update map comprises the predicted road section state of the reference road, so that the road surface flatness of the reference road can be accurately predicted when other vehicles use the target map to conduct route planning, and when the road surface flatness of other routes is higher than that of the reference road between the navigation starting point and the navigation ending point, other navigation routes are preferentially selected for traveling, and the route guiding performance of the update map is effectively improved.
(6) When the query result indicates that the road section state of the reference road is recorded in the target map and that state indicates the road surface of the reference road is a flat road surface, the second prompt information can be generated and sent to a vehicle that is about to drive onto the reference road, so that the vehicle can know in advance that the road surface of the reference road is flat and that the reference road corresponding to the navigation route can be passed smoothly, and avoid being diverted onto other, non-flat road surfaces.
(7) And the first prompting information is used for prompting the switching of the navigation route on the target map. When the query result indicates that the road section state of the reference road is recorded in the target map and the road section state of the reference road indicates that the road surface of the reference road is a non-flat road surface, the first prompt information can be generated and sent to the vehicle to be driven to the reference road, so that the vehicle to be driven to the reference road can predict that the road surface of the reference road is a non-flat road surface in advance, and the navigation route is switched on the target map, thereby avoiding the vehicle from being driven to the non-flat road surface.
(8) The road section state used for indicating the road surface flatness of the target road is determined based on the number of the target acquisition moments, so that the road surface flatness of the target road is accurately determined, data support is laid for subsequent accurate training of the road surface recognition model, and the accuracy of the road surface recognition model obtained through training is higher.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (17)

1. A method of training a pavement identification model, the method comprising:
acquiring vehicle running data acquired by a target vehicle in the process of passing through a target road, and carrying out window division on the vehicle running data to acquire window running data corresponding to a plurality of window road sections respectively;
determining a road section state corresponding to the corresponding window road section based on each window driving data, wherein the road section state comprises a flat state and a bumpy state;
taking each window driving data as a training sample, taking the corresponding road section state as a sample label, constructing a training sample set for training the road surface recognition model, and training the road surface recognition model based on the training sample set;
The road surface recognition model is used for recognizing the road surface state of a road on which the target vehicle runs based on the vehicle running data of the target vehicle.
2. The method of claim 1, wherein the training the pavement recognition model based on the training sample set comprises:
the following processing is respectively executed for each window driving data in the training sample set:
performing data conversion on the window driving data from a plurality of different data conversion dimensions to obtain dimension data corresponding to each data conversion dimension;
respectively extracting features of the dimensional data to obtain dimensional features corresponding to the dimensional data, and carrying out feature fusion on the dimensional features to obtain fusion features;
invoking the road surface recognition model, and carrying out road surface recognition on the target road based on the fusion characteristics to obtain a predicted road section state of the target road;
and training the pavement recognition model by combining the predicted road section state and the corresponding road section state.
3. The method according to claim 2, wherein the data conversion dimensions include a time domain conversion dimension, a frequency domain conversion dimension, and a wavelet transform conversion dimension, and the performing data conversion on the window driving data from a plurality of different data conversion dimensions to obtain dimension data corresponding to each data conversion dimension respectively includes:
Performing time domain data conversion on the window driving data from the time domain conversion dimension to obtain time domain dimension data corresponding to the time domain conversion dimension;
performing frequency domain data conversion on the window driving data from the frequency domain conversion dimension to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension;
and carrying out wavelet transformation on the window driving data from the wavelet transformation dimension to obtain wavelet transformation dimension data corresponding to the wavelet transformation dimension.
4. A method according to claim 3, wherein the window driving data includes time driving data corresponding to each collection time of the target vehicle during passing through the corresponding window road section, and the performing time domain data conversion on the window driving data from the time domain conversion dimension to obtain time domain dimension data corresponding to the time domain conversion dimension includes:
acquiring the average value, the square average value, the variance and the standard deviation of all the moment running data and the maximum value of the moment running data;
dividing the standard deviation by the average value to obtain a discrete coefficient of the window driving data, dividing the maximum value by the square average value to obtain a peak factor of the window driving data, and dividing the maximum value by the average value to obtain a pulse factor of the window driving data;
And carrying out data fusion on the average value, the variance, the discrete coefficient, the peak factor and the pulse factor to obtain time domain dimension data corresponding to the time domain conversion dimension.
5. The method of claim 3, wherein the performing frequency domain data conversion on the window driving data from the frequency domain conversion dimension to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension includes:
acquiring the frequency spectrum density and the average power spectrum density corresponding to the window driving data, and determining the average frequency of the window driving data on a frequency domain based on the frequency spectrum density;
combining the average frequency and the frequency spectrum density, determining the frequency variance of the window driving data on the frequency domain, and obtaining the maximum frequency value in the frequency spectrum density;
and carrying out data fusion on the frequency spectrum density, the average power spectrum density, the average frequency, the frequency variance and the maximum frequency value to obtain frequency domain dimension data corresponding to the frequency domain conversion dimension.
6. The method according to claim 2, wherein the feature fusing each dimension feature to obtain a fused feature comprises:
Acquiring feature dimensions corresponding to the dimension features respectively, and averaging the feature dimensions to obtain average dimensions;
comparing each characteristic dimension with the average dimension to obtain a dimension comparison result corresponding to each characteristic dimension;
when the dimension comparison result indicates that the feature dimension is the same as the average dimension, determining the corresponding dimension feature as a target dimension feature corresponding to the dimension feature;
when the dimension comparison result indicates that the feature dimension is different from the average dimension, adjusting the feature dimension of the corresponding dimension feature to the average dimension to obtain a target dimension feature corresponding to the dimension feature;
and carrying out feature fusion on each target dimension feature to obtain the fusion feature.
7. The method according to claim 1, wherein the acquiring vehicle travel data acquired by the target vehicle during passing through the target road includes:
acquiring running acceleration and running angular velocity respectively corresponding to each acquisition moment in the process of passing through the target road by the target vehicle;
for each acquisition time, averaging the corresponding running acceleration module value and the running angular velocity module value to obtain time running data corresponding to the acquisition time;
And arranging the running data at all the moments according to the sequence from the early to the late of the acquisition moments to obtain the running data of the vehicle.
8. The method according to claim 1, wherein the window travel data includes time travel data corresponding to each collection time of the target vehicle during passing through the corresponding window road section, and the determining the road section state corresponding to the corresponding window road section based on each window travel data includes:
the following processing is performed for each of the window travel data:
determining the target flatness corresponding to the acquisition time based on the time running data corresponding to the acquisition time aiming at each acquisition time corresponding to the window running data;
when the target flatness indicates that the road surface of the target road through which the target vehicle passes at the acquisition time is a flat road surface, determining the acquisition time corresponding to the target flatness as a target acquisition time;
and determining the road section state corresponding to the corresponding window road section based on the number of the target acquisition moments.
9. The method of claim 8, wherein the determining the target flatness corresponding to the acquisition time based on the time travel data corresponding to the acquisition time comprises:
Acquiring at least one adjacent acquisition time of the acquisition time, and subtracting the time running data corresponding to the acquisition time from the time running data corresponding to each adjacent acquisition time to obtain a jitter difference value corresponding to each adjacent acquisition time;
determining the target flatness degree as a first flatness degree when at least one of the jitter difference values is greater than or equal to a difference threshold;
determining the target flatness degree as a second flatness degree when the jitter difference is not greater than or equal to the difference threshold;
the first flatness is used for indicating that a road surface through which the target vehicle passes at the acquisition time is a non-flat road surface, and the second flatness is used for indicating that the road surface through which the target vehicle passes at the acquisition time is a flat road surface.
10. The method of claim 8, wherein determining the road section state corresponding to the corresponding window road section based on the number of the target acquisition moments comprises:
comparing the number of the target acquisition moments with a number threshold value to obtain a number comparison result;
when the number comparison result indicates that the number of the target acquisition moments is greater than the number threshold, determining the road section state as the flat state;
And when the number comparison result indicates that the number of the target acquisition moments is smaller than or equal to the number threshold value, determining the road section state as the bumpy state.
11. The method of claim 8, wherein determining the road section state corresponding to the corresponding window road section based on the number of the target acquisition moments comprises:
dividing the number of the target acquisition moments by the number of the acquisition moments to obtain a flat ratio of the target road;
the road section state is determined to be the flat state when the flat ratio is greater than a flat ratio threshold, and the road section state is determined to be the bumpy state when the flat ratio is less than or equal to the flat ratio threshold.
12. The method of claim 1, wherein after training the pavement recognition model based on the training sample set, the method further comprises:
acquiring a navigation route of a target vehicle, and inquiring a road section state of a reference road corresponding to the navigation route from a target map to obtain an inquiry result;
when the query result indicates that the road section state of the reference road is not recorded in the target map, acquiring target running data of the target vehicle running on the reference road, and extracting features of the target running data to obtain target running features;
Invoking a trained road surface recognition model, and carrying out road surface recognition on the reference road based on the target driving characteristics to obtain a predicted road section state of the reference road;
and adding the predicted road section state of the reference road to the target map to obtain an updated map.
13. The method according to claim 12, wherein the method further comprises:
generating first prompt information when the query result indicates that the road section state of the reference road is recorded in the target map and the road section state of the reference road indicates that the road surface of the reference road is a non-flat road surface;
generating second prompt information when the query result indicates that the road section state of the reference road is recorded in the target map and the road section state of the reference road indicates that the road surface of the reference road is a flat road surface;
the first prompting information is used for prompting that the navigation route is switched on the target map, and the second prompting information is used for prompting that the reference road corresponding to the navigation route can stably pass.
14. A training device for a pavement identification model, the device comprising:
The system comprises an acquisition module, a window dividing module and a window processing module, wherein the acquisition module is used for acquiring vehicle running data acquired by a target vehicle in the process of passing through a target road, and carrying out window division on the vehicle running data to acquire window running data corresponding to a plurality of window road sections respectively;
the determining module is used for determining the road section state corresponding to the corresponding window road section based on the window driving data, wherein the road section state comprises a flat state and a bumpy state;
the training module is used for taking each window driving data as a training sample, taking the corresponding road section state as a sample label, constructing a training sample set for training the road surface recognition model, and training the road surface recognition model based on the training sample set, wherein the road surface recognition model is used for recognizing the road surface state of the road on which the target vehicle is driven based on the driving data of the target vehicle.
15. An electronic device, the electronic device comprising:
a memory for storing computer executable instructions or computer programs;
a processor for implementing the training method of the road surface recognition model according to any one of claims 1 to 13 when executing the computer executable instructions or computer programs stored in the memory.
16. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the training method of the road surface recognition model according to any one of claims 1 to 13.
17. A computer program product comprising a computer program or computer-executable instructions which, when executed by a processor, implement the training method of the road surface recognition model according to any one of claims 1 to 13.
CN202311653765.0A 2023-12-05 2023-12-05 Training method, device, equipment, medium and program product of pavement recognition model Active CN117349677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311653765.0A CN117349677B (en) 2023-12-05 2023-12-05 Training method, device, equipment, medium and program product of pavement recognition model

Publications (2)

Publication Number Publication Date
CN117349677A true CN117349677A (en) 2024-01-05
CN117349677B CN117349677B (en) 2024-03-22

Family

ID=89359879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311653765.0A Active CN117349677B (en) 2023-12-05 2023-12-05 Training method, device, equipment, medium and program product of pavement recognition model

Country Status (1)

Country Link
CN (1) CN117349677B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337812A1 (en) * 2016-05-09 2017-11-23 Denso Corporation Driving-state data storage apparatus
CN110636468A (en) * 2018-06-22 2019-12-31 上海博泰悦臻电子设备制造有限公司 Road condition detection method, system, storage medium and vehicle machine
GB202017514D0 (en) * 2020-01-24 2020-12-23 Motional Ad Llc Detection and classification of siren signals and localization of siren signal sources
CN113074749A (en) * 2021-06-07 2021-07-06 湖北亿咖通科技有限公司 Road condition detection and update method, electronic equipment and computer-readable storage medium
CN114861756A (en) * 2022-03-30 2022-08-05 北京大学 Driving behavior mode real-time classification method and system based on short-term observation
CN116049649A (en) * 2023-01-06 2023-05-02 一汽解放汽车有限公司 Method and device for identifying running road condition and working condition of vehicle and computer equipment
CN116702000A (en) * 2023-05-31 2023-09-05 深圳技术大学 Road surface quality dynamic monitoring and evaluating method based on multi-layer data fusion

Similar Documents

Publication Publication Date Title
CN113642633B (en) Method, device, equipment and medium for classifying driving scene data
Fugiglando et al. Driving behavior analysis through CAN bus data in an uncontrolled environment
CN111512345B (en) Electronic system for dynamically and quasi-real-time measuring and identifying driver action based on mobile phone remote measurement only and corresponding method thereof
Garcia Cuenca et al. Machine learning techniques for undertaking roundabouts in autonomous driving
Xiao et al. TrajData: On vehicle trajectory collection with commodity plug-and-play OBU devices
Ounoughi et al. Data fusion for ITS: A systematic literature review
US11255678B2 (en) Classifying entities in digital maps using discrete non-trace positioning data
Gao et al. A data-driven lane-changing behavior detection system based on sequence learning
Zhao et al. Real-time vehicle motion detection and motion altering for connected vehicle: Algorithm design and practical applications
Naranjo et al. Floating car data augmentation based on infrastructure sensors and neural networks
US20230128964A1 (en) Mobile Device And System For Automated Trip Familiarity Recognition And Corresponding Method Thereof
Zhang et al. A hybrid approach for turning intention prediction based on time series forecasting and deep learning
Visan et al. Towards intelligent public transport systems in Smart Cities; Collaborative decisions to be made
Longhi et al. Car telematics big data analytics for insurance and innovative mobility services
Liu et al. A vehicle steering recognition system based on low-cost smartphone sensors
Li et al. An improved traffic lights recognition algorithm for autonomous driving in complex scenarios
Guo et al. Modeling driver’s evasive behavior during safety–critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning
Xin et al. Sustainable Road Pothole Detection: A Crowdsourcing Based Multi-Sensors Fusion Approach
CN117349677B (en) Training method, device, equipment, medium and program product of pavement recognition model
Hou Applications of big data technology in intelligent transportation system
Sharma et al. Deep Learning-Based Object Detection and Classification for Autonomous Vehicles in Different Weather Scenarios of Quebec, Canada
Talebloo A Practical Deep Learning Approach to Detect Aggressive Driving Behaviour
CN106781470B (en) Method and device for processing running speed of urban road
Raslan et al. IoT for measuring road network quality index
Jin et al. An Object Association Matching Method Based on V2I System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant