CN115009296A - Sensor detection method, sensor detection device, computer equipment and computer program product - Google Patents


Info

Publication number
CN115009296A
Authority
CN
China
Prior art keywords
vehicle
scene
sensor
data
driving process
Legal status
Pending
Application number
CN202210641231.5A
Other languages
Chinese (zh)
Inventor
周佳
刘国翌
佘晓丽
任少卿
Current Assignee
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Application filed by Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202210641231.5A
Publication of CN115009296A


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
    • B60W50/0205 - Diagnosing or detecting failures; Failure detection models
    • B60W2050/0215 - Sensor drifts or sensor failures
    • B60W2420/00 - Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 - Photo or light sensitive means, e.g. infrared sensors
    • B60W2420/403 - Image sensing, e.g. optical camera
    • B60W2420/408
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Abstract

The application relates to a sensor detection method, a sensor detection apparatus, a computer device, a storage medium and a computer program product. In the method, when the vehicle-mounted terminal monitors that the vehicle is in a driving mode, it acquires scene data from the driving process along with real-time sensor data of the vehicle; it determines the scene type of the vehicle from that scene data; it then obtains the sensor target data for the vehicle under that scene type, matches the real-time sensor data against the sensor target data, and determines from the matching result whether a sensor of the vehicle is abnormal. Sensor abnormalities can thus be detected automatically, without relying on instrument-based fault diagnosis at a professional facility, which improves the detection efficiency of vehicle sensors and saves detection cost.

Description

Sensor detection method, sensor detection device, computer equipment and computer program product
Technical Field
The present application relates to the field of automotive technologies, and in particular, to a sensor detection method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of intelligent automotive technology, vehicle owners pay ever more attention to the driving experience, which today depends largely on intelligent hardware such as the various sensors in a vehicle.
In conventional practice, to improve driving safety and user experience, the sensors on a vehicle are periodically fault-diagnosed with instruments to detect whether any sensor is abnormal, thereby avoiding safety problems and unusable driving functions caused by sensor abnormalities.
However, instrument-based fault diagnosis of vehicle sensors generally requires a professional technician working at a dedicated facility, so its detection efficiency is low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a sensor detection method, an apparatus, a computer device, a storage medium, and a computer program product capable of improving detection efficiency.
In a first aspect, the present application provides a sensor detection method. The method comprises the following steps:
when a vehicle is monitored to be in a driving mode, acquiring scene data in the driving process of the vehicle and acquiring sensor real-time data of the vehicle;
determining the scene type of the vehicle according to the scene data in the driving process of the vehicle;
acquiring sensor target data of the vehicle under the scene type according to the scene type of the vehicle;
and matching the sensor real-time data with the sensor target data, and determining whether a sensor of the vehicle is abnormal according to the matching result.
In one embodiment, the scene data comprise scene time sequence features and scene point cloud features, and acquiring scene data during the driving process when the vehicle is monitored to be in a driving mode comprises: acquiring, in real time, vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle during the driving process; performing time sequence analysis on the vehicle signals and the positioning information to obtain the scene time sequence features of the driving process; and performing point cloud data analysis on the radar calibration parameters and the radar point cloud data to obtain the scene point cloud features of the driving process.
In one embodiment, the time sequence analysis is performed with an attention-based encoder-decoder model, and performing time sequence analysis on the vehicle signals and positioning information comprises: inputting the vehicle signals and positioning information collected during driving into the attention-based encoder-decoder model to obtain the scene time sequence features of the driving process.
In one embodiment, the point cloud data analysis is performed with a segmentation model, and performing point cloud data analysis on the radar calibration parameters and radar point cloud data comprises: inputting the radar calibration parameters and radar point cloud data collected during driving into the segmentation model to obtain the scene point cloud features of the driving process.
In one embodiment, the scene data comprise scene time sequence features and scene point cloud features, and determining the scene type of the vehicle according to the scene data comprises: inputting the scene time sequence features and scene point cloud features of the driving process into a scene classification model to obtain the scene type of the vehicle.
In one embodiment, the scene classification model is obtained by: acquiring sample data of a sample vehicle under different scene types, the sample data comprising the scene type of the sample vehicle and, for each scene type, a scene time sequence feature sample set and a scene point cloud feature sample set; and training a basic classification model with the sample data to obtain the scene classification model.
In one embodiment, the base classification model includes at least one of a neural network classification model, a nearest neighbor classification model, a decision tree classification model, a bayesian classification model, and a linear classification model.
In one embodiment, the sensor target data comprise target parameters for the normal state of the sensor and the parameter value range corresponding to each target parameter, and the sensor real-time data comprise real-time parameters of the sensor and the parameter value corresponding to each real-time parameter. Matching the sensor real-time data with the sensor target data and determining whether a sensor of the vehicle is abnormal according to the matching result comprises: when the real-time parameters of the sensor are inconsistent with the target parameters of the sensor under the scene type, determining that the matching result is a mismatch and that a sensor of the vehicle is abnormal; or, when the real-time parameters are consistent with the target parameters under the scene type but a parameter value corresponding to a real-time parameter does not fall within the parameter value range of the corresponding target parameter, determining that the matching result is a mismatch and that a sensor of the vehicle is abnormal.
In one embodiment, the sensor target data comprise target parameters for the normal state of the sensor and the parameter value range corresponding to each target parameter, and the sensor real-time data comprise real-time parameters of the sensor and the parameter value corresponding to each real-time parameter. Matching the sensor real-time data with the sensor target data and determining whether a sensor of the vehicle is abnormal comprises: inputting the target parameters for the normal state of the sensor, the parameter value ranges corresponding to the target parameters, the real-time parameters of the sensor and the parameter values corresponding to the real-time parameters into a similarity calculation model to obtain the similarity between the sensor real-time data and the sensor target data; and, when the similarity is smaller than a similarity threshold, determining that the matching result is a mismatch and that a sensor of the vehicle is abnormal.
In one embodiment, after it is determined that a sensor of the vehicle is abnormal, the method further includes reporting an abnormality result of the sensor, the abnormality result indicating that the sensor is abnormal.
In one embodiment, the driving mode includes any one of a manual driving mode and an automatic driving mode.
In a second aspect, the present application further provides a sensor detection device. The device comprises:
the driving monitoring module is configured to acquire scene data in the driving process of the vehicle and acquire sensor real-time data of the vehicle when the vehicle is monitored to be in a driving mode;
the scene type determining module is configured to determine the scene type of the vehicle according to scene data in the driving process of the vehicle;
the data query module is configured to execute acquisition of sensor target data of the vehicle under the scene type according to the scene type of the vehicle;
and the abnormality detection module is configured to perform matching of the sensor real-time data and the sensor target data and determine whether the sensor of the vehicle is abnormal according to a matching result.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the method according to the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to the first aspect.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, performs the steps of the method according to the first aspect.
According to the sensor detection method, apparatus, computer device, storage medium and computer program product above, when the vehicle-mounted terminal monitors that the vehicle is in a driving mode, it acquires scene data from the driving process along with real-time sensor data of the vehicle, determines the scene type of the vehicle from that scene data, obtains the sensor target data for the vehicle under that scene type, matches the real-time sensor data against the sensor target data, and determines from the matching result whether a sensor of the vehicle is abnormal. Sensor abnormalities can thus be detected automatically, without relying on instrument-based fault diagnosis at a professional facility, which improves the detection efficiency of vehicle sensors and saves detection cost.
Drawings
FIG. 1 is a schematic flow chart of a sensor detection method according to one embodiment;
FIG. 2 is a schematic flow chart of the steps of obtaining scene data in one embodiment;
FIG. 3 is a flowchart illustrating steps for obtaining a scene classification model in one embodiment;
FIG. 4 is a flowchart illustrating the scene type determination step in one embodiment;
FIG. 5 is a schematic flow chart diagram illustrating the step of matching sensor data in one embodiment;
FIG. 6 is a block diagram of a sensor detection device according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the application and do not limit it.
In one embodiment, as shown in fig. 1, a sensor detection method is provided, and this embodiment is exemplified by applying the method to a vehicle-mounted terminal, and includes the following steps:
step 102, when it is monitored that the vehicle is in a driving mode, scene data in the driving process of the vehicle and sensor real-time data of the vehicle are obtained.
The driving mode may be a mode in which the vehicle is in a driving state, and includes any one of a manual driving mode and an automatic driving mode. Specifically, the automatic driving mode refers to an operation mode in which the vehicle can autonomously drive without human intervention, and the manual driving mode refers to an operation mode in which the vehicle drives based on a manual control command with human intervention.
The scene data may be feature data for characterizing a scene in which the vehicle is located. The sensor real-time data can be the real-time data of each sensor acquired in the vehicle driving process. The sensor includes, but is not limited to, at least one of a photosensitive imaging sensing device of the vehicle, a radar sensor, an Inertial Measurement Unit (IMU), a Global Positioning System (GPS), a vehicle speed sensor, a wheel speed sensor, and a gear sensor. The real-time data of the sensor refers to the parameter values of the sensor acquired in real time.
In this embodiment, the vehicle may be equipped with a vehicle-mounted terminal, which may then monitor the driving mode of the vehicle. Specifically, when the vehicle-mounted terminal monitors that the vehicle is in a driving mode, scene data in the driving process of the vehicle can be acquired, and real-time sensor data of the vehicle can be acquired.
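As a non-limiting illustration of step 102, the following Python sketch gates data acquisition on the driving mode; the VehicleState structure and its field names are hypothetical, not taken from this application.

```python
# Illustrative sketch of step 102; VehicleState and its fields are
# hypothetical names, not taken from the patent.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class VehicleState:
    mode: str                      # e.g. "manual", "autonomous", "parked"
    scene_data: Dict[str, float]   # feature data characterizing the scene
    sensor_data: Dict[str, float]  # real-time parameter name -> value

def acquire_if_driving(state: VehicleState) -> Optional[Tuple[dict, dict]]:
    """Acquire scene data and sensor real-time data only in a driving mode."""
    if state.mode in ("manual", "autonomous"):  # either driving mode qualifies
        return state.scene_data, state.sensor_data
    return None  # vehicle not driving: no detection is triggered
```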
Step 104: determining the scene type of the vehicle according to the scene data of the driving process.
The scene type may be a category for characterizing an environment in which the vehicle is currently located, where the environment is affected by at least one of light factors, weather factors, road factors, and the like of the current location of the vehicle. That is, when at least one of the light factor, the weather factor, the road factor, and the like of the vehicle is changed, the corresponding environment types may be different, and the final scene types may also be different.
In the embodiment, the vehicle-mounted terminal can determine the type of the current scene of the vehicle according to the scene data of the vehicle in the driving process.
And 106, acquiring sensor target data of the vehicle under the scene type according to the scene type of the vehicle.
The sensor target data may be the sensor standard data corresponding to the scene type, that is, the standard data the sensor should produce under each scene type. In some embodiments of the present disclosure, the sensor may be given different data acquisition requirements for different scene types, where a data acquisition requirement presets the parameters (or parameter types) of the data the sensor acquires, the value ranges of those parameters, and so on. Specifically, the vehicle-mounted terminal acquires the sensor target data of the vehicle under the scene type according to the determined scene type of the vehicle.
Step 108: matching the real-time sensor data with the target sensor data, and determining whether a sensor of the vehicle is abnormal according to the matching result.
In this embodiment, the vehicle-mounted terminal matches the sensor real-time data with the sensor target data and determines whether a sensor of the vehicle is abnormal according to the matching result. For example, the sensor real-time data is compared with the sensor target data: when the two do not match, the matching result is a mismatch and a sensor of the vehicle can be determined to be abnormal; when the matching result is a match, the sensors of the vehicle are determined to be normal. In some embodiments of the present disclosure, a matching result of "match" means the compared data are consistent or their difference falls within a preset error range; correspondingly, a result of "mismatch" means the data are inconsistent or their difference exceeds the error range. The matching may cover the data values themselves, and may also cover the amount of data and/or the number (or types) of the parameters to which the data correspond.
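The notion of data that "match within a preset error range" can be illustrated with a small helper; the tolerance value below is an assumption for illustration only.

```python
# Illustrative sketch of "match within a preset error range"; the
# tolerance eps is an assumed value, not specified by the patent.
def values_match(realtime: float, target: float, eps: float = 1e-3) -> bool:
    # data match when they are consistent or differ within the error range
    return abs(realtime - target) <= eps
```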
In the sensor detection method, when the vehicle-mounted terminal monitors that the vehicle is in a driving mode, it acquires scene data from the driving process along with real-time sensor data of the vehicle, determines the scene type of the vehicle from that scene data, obtains the sensor target data for the vehicle under that scene type, matches the real-time sensor data against the sensor target data, and determines from the matching result whether a sensor of the vehicle is abnormal. Sensor abnormalities can thus be detected automatically, without relying on instrument-based fault diagnosis at a professional facility, which improves the detection efficiency of vehicle sensors and saves detection cost.
In one embodiment, the scene data includes scene timing features and scene point cloud features. As shown in fig. 2, when it is monitored that the vehicle is in the driving mode, acquiring scene data in the driving process of the vehicle may specifically include:
step 202, when it is monitored that the vehicle is in a driving mode, vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle in the driving process of the vehicle are collected in real time.
The vehicle signal may be a CAN (Controller Area Network) signal of the vehicle and may include, for example, the on state of the vehicle speed sensor, the on state of the wheel speed sensors, the on state of the wipers, and so on. The positioning information may include the result of positioning the vehicle's current position and may further include the map information used for that positioning.
The radar point cloud data of the vehicle is point cloud information obtained by emitting laser signals through a laser radar and then collecting the reflected laser signals. Specifically, each point in the point cloud information at least contains three-dimensional coordinate information. The radar calibration parameter may be a scaling factor or a radar constant for determining the lidar reflectivity of the target object and the output power value of the lidar receiver.
In this embodiment, when the vehicle-mounted terminal monitors that the vehicle is in the driving mode, vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle are acquired in real time in the driving process of the vehicle.
And 204, carrying out time sequence analysis processing on the vehicle signals and the positioning information in the vehicle driving process to obtain scene time sequence characteristics in the vehicle driving process.
The scene time sequence features are features that describe the corresponding scene through its time series. Time sequence analysis is a method that analyzes the vehicle signals and positioning information collected during driving, together with the development process, direction and trend of the time series, to predict targets that may be reached in a future time domain. Specifically, the time sequence analysis may be implemented with a mathematical model or with mathematical methods such as probability statistics.
In this embodiment, the vehicle-mounted terminal performs time sequence analysis processing on the vehicle signal and the positioning information in the vehicle driving process, so as to obtain scene time sequence characteristics in the vehicle driving process.
Step 206: performing point cloud data analysis on the radar calibration parameters of the vehicle and the radar point cloud data collected during driving to obtain the scene point cloud features of the driving process.
The scene point cloud feature may be a feature describing a corresponding scene by a point cloud feature. The point cloud data analysis processing is a process of extracting point cloud characteristics based on radar calibration parameters of the vehicle in the vehicle driving process and radar point cloud data of the vehicle. Specifically, the point cloud data analysis and processing can be realized by adopting a neural network, a filtering algorithm and the like.
In this embodiment, the vehicle-mounted terminal performs point cloud data analysis processing on the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle in the vehicle driving process, so as to obtain scene point cloud characteristics in the vehicle driving process.
In the above embodiment, when the vehicle-mounted terminal monitors that the vehicle is in the driving mode, it acquires the vehicle signals, positioning information, radar calibration parameters and radar point cloud data in real time during driving; it performs time sequence analysis on the vehicle signals and positioning information to obtain the scene time sequence features, and performs point cloud data analysis on the radar calibration parameters and radar point cloud data to obtain the scene point cloud features. The scene data of the driving process are thus obtained, so that the scene type of the vehicle can be determined from them.
In one embodiment, the time sequence analysis may be performed with an attention-based encoder-decoder model. Performing the time sequence analysis on the vehicle signals and positioning information to obtain the scene time sequence features may then specifically include: inputting the vehicle signals and positioning information collected during driving into the attention-based encoder-decoder model to obtain the scene time sequence features of the driving process.
The attention-based encoder-decoder model may be a Transformer-like model capable of processing time sequence signals of indefinite length. Inputting the vehicle signals and positioning information collected during driving into this model yields the scene time sequence features output by the model. Performing the time sequence analysis with an attention-based encoder-decoder model in this way improves processing efficiency.
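As a non-limiting illustration, the following PyTorch sketch shows one way an attention-based encoder can turn variable-length driving signals into a fixed-size scene time sequence feature; the dimensions, layer count and mean-pooling are assumptions, not values from this application.

```python
# Illustrative PyTorch sketch of an attention-based encoder for
# variable-length driving signals; dimensions and pooling are assumptions.
import torch
import torch.nn as nn

class SceneTimingEncoder(nn.Module):
    def __init__(self, in_dim: int = 16, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)  # embed CAN + positioning signals
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, seq: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, in_dim); pad_mask: (batch, time), True = padding
        h = self.encoder(self.proj(seq), src_key_padding_mask=pad_mask)
        valid = (~pad_mask).unsqueeze(-1).float()
        # mean-pool over valid steps -> fixed-size scene time sequence feature
        return (h * valid).sum(dim=1) / valid.sum(dim=1).clamp(min=1.0)
```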
In one embodiment, the point cloud data analysis may be performed with a segmentation model. Performing the point cloud data analysis on the radar calibration parameters of the vehicle and the radar point cloud data to obtain the scene point cloud features may then specifically include: inputting the radar calibration parameters and radar point cloud data collected during driving into the segmentation model to obtain the scene point cloud features of the driving process.
The segmentation model may be a model similar to Mask R-CNN (Mask Region-based Convolutional Neural Network), which can determine the position and type of each object in a picture and give pixel-level predictions. In this embodiment, inputting the radar calibration parameters and radar point cloud data collected during driving into the segmentation model yields the scene point cloud features output by the model. Performing the point cloud data analysis with a segmentation model in this way improves processing efficiency.
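As a non-limiting illustration of extracting a fixed-size scene point cloud feature from calibrated radar points, the following sketch substitutes a much simpler PointNet-style per-point encoder for the Mask R-CNN-like segmentation model named above; all dimensions are assumptions.

```python
# Illustrative sketch of turning calibrated radar points into a scene
# point cloud feature. The patent uses a Mask R-CNN-like segmentation
# model; this much simpler PointNet-style encoder is a substitution.
import torch
import torch.nn as nn

class ScenePointCloudEncoder(nn.Module):
    def __init__(self, out_dim: int = 64):
        super().__init__()
        # per-point MLP over (x, y, z, reflectivity)
        self.mlp = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, points: torch.Tensor, calib_scale: float) -> torch.Tensor:
        # points: (N, 4); apply the radar calibration factor to reflectivity
        pts = points.clone()
        pts[:, 3] *= calib_scale
        # max-pool over points gives an order-invariant scene feature
        return self.mlp(pts).max(dim=0).values
```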
In one embodiment, the scene data may include scene time sequence features and scene point cloud features, and a scene classification model may be used to classify the scene type. Determining the scene type of the vehicle from the scene data of the driving process may then specifically include: inputting the scene time sequence features and scene point cloud features of the driving process into the scene classification model to obtain the scene type of the vehicle output by the model.
The scene classification model can be obtained by training a basic classification model. In this embodiment, the scene time sequence features and scene point cloud features of the driving process are input into the scene classification model to obtain the scene type of the vehicle output by the model. It should be understood that in some embodiments the time dimensions of the scene time sequence features and the scene point cloud features are the same or similar; that is, features belonging to (approximately) the same moment are input together, so that the model outputs the scene type of the vehicle at that moment. Obtaining the scene type from a scene classification model further improves processing efficiency.
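A minimal sketch of the classification step follows; fusing the two features by concatenation, and the layer sizes, are assumptions for illustration.

```python
# Illustrative sketch of the scene classification step; fusing the two
# features by concatenation is an assumption, not specified by the patent.
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    def __init__(self, timing_dim: int = 64, cloud_dim: int = 64, n_scenes: int = 3):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(timing_dim + cloud_dim, 128),
                                  nn.ReLU(), nn.Linear(128, n_scenes))

    def forward(self, timing_feat: torch.Tensor, cloud_feat: torch.Tensor):
        # features from (approximately) the same moment are fused and classified
        return self.head(torch.cat([timing_feat, cloud_feat], dim=-1))
```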
In an embodiment, as shown in fig. 3, the method for obtaining the scene classification model may include the following steps:
step 302, sample data of a sample vehicle in different scene types is obtained.
The sample data comprise the scene type of the sample vehicle and, for each scene type, a scene time sequence feature sample set and a scene point cloud feature sample set. For example, with scene types A, B and C, the sample data include a scene time sequence feature sample set and a scene point cloud feature sample set of the sample vehicle in scene A, in scene B, and in scene C. In this way, sample data of the sample vehicle under different scene types are obtained.
Step 304: a basic classification model is trained with the sample data to obtain the scene classification model.
The basic classification model comprises at least one of a neural network classification model, a nearest neighbor classification model, a decision tree classification model, a Bayesian classification model and a linear classification model. For example, the basic classification model may be any one of the above classification models, and the classification model is trained by using the sample data, so that the classification model can learn features of the vehicle in different scene types to obtain the scene classification model.
In one scenario, the basic classification model may also be a combination of two or more of the above classification models. By adopting the sample data to respectively train two or more than two classification models, each classification model can learn the characteristics of the vehicle in different scene types, and then the scene classification model is obtained based on each trained classification model, so that the robustness of the scene classification model is improved.
In the embodiment, the model is trained by the sample data of the sample vehicle in different scene types, so that the scene type of the vehicle can be accurately and effectively classified by using the trained scene classification model based on the scene data in the vehicle driving process.
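As a non-limiting illustration of the training step, the following sketch uses scikit-learn's nearest-neighbor classifier as the basic classification model; the feature width, sample counts and random placeholder arrays stand in for real sample sets.

```python
# Illustrative training sketch with scikit-learn, using a nearest-neighbor
# classifier as the basic classification model; feature width, sample
# counts, and the random placeholder arrays are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((300, 128))  # rows: [time sequence feature, point cloud feature]
y_train = rng.choice(["A", "B", "C"], size=300)  # scene type labels

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
scene_type = model.predict(rng.random((1, 128)))[0]  # e.g. "B"
```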
In one embodiment, the determination of the scene type of the vehicle from the scene data collected during driving is further described below. Specifically, as shown in fig. 4, when it is monitored that the vehicle is in the driving mode, vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle are acquired in real time during driving. The vehicle signals and positioning information are input into the attention-based encoder-decoder model, which extracts time sequence features to produce the scene time sequence features of the driving process.
The radar calibration parameters of the vehicle in the vehicle driving process and the radar point cloud data of the vehicle are input into the segmentation model, and point cloud characteristics are extracted through the segmentation model, so that the scene point cloud characteristics in the vehicle driving process are obtained. And further inputting scene time sequence characteristics and scene point cloud characteristics in the driving process of the vehicle into the scene classification model to obtain the scene type of the vehicle.
In the embodiment, the time sequence feature extraction is performed by adopting an attention mechanism-based coding and decoding model, the point cloud feature extraction is performed by adopting a segmentation model, and the extracted scene time sequence feature and scene point cloud feature are used as the input of a scene classification model to obtain the scene type of the vehicle output by the scene classification model, so that the scene type of the vehicle can be quickly determined.
In one embodiment, the sensor target data include the target parameters for the normal state of the sensor and the parameter value range corresponding to each target parameter, and the sensor real-time data include the real-time parameters of the sensor and the parameter value corresponding to each real-time parameter. Matching the real-time sensor data against the target sensor data and determining whether a sensor of the vehicle is abnormal may then specifically include: when the real-time parameters of the sensor are inconsistent with the target parameters of the sensor under the scene type, determining that the matching result is a mismatch and that a sensor of the vehicle is abnormal.
It can be understood that, under different scene types, the sensor target data are also different, that is, the scene type and the sensor target data have a certain corresponding relationship. For example, if three scene types A, B and C are included, there is corresponding sensor target data a under scene type a, corresponding sensor target data B under scene type B, and corresponding sensor target data C under scene type C.
Specifically, the sensor target data may be the target parameters for the normal state of the sensor under the corresponding scene type, together with the parameter value range of each target parameter. A target parameter may be the name of a standard parameter corresponding to the normal state of the sensor under that scene type, and the parameter value range is the range of values of that standard parameter. For example, the sensor target data a corresponding to scene type A might be [(a1: xo-xp), (a2: yo-yp), (a3: zo-zp)], where a1, a2 and a3 are the names of the standard parameters for the normal state of the sensor under scene type A, and xo-xp, yo-yp and zo-zp are the value ranges of a1, a2 and a3 under scene type A, respectively. Under different scene types the sensor target data therefore differ, whether in the names of the standard parameters or in their value ranges.
The sensor real-time data can be the real-time data of each sensor collected in the driving process of the vehicle. The sensor real-time data comprises real-time parameters of the sensor and parameter values corresponding to the real-time parameters. The real-time parameters refer to the names of the sensor parameters acquired in real time, and the parameter values refer to the specific values of the sensor parameters acquired in real time.
For example, suppose the scene type of the vehicle is determined to be A from the scene data collected during driving, and the acquired sensor real-time data r of the vehicle is [a1: x1, a2: y2], where a1 and a2 are the names of sensor parameters acquired in real time, x1 is the real-time value of parameter a1, and y2 is the real-time value of parameter a2; and suppose the sensor target data a acquired for scene type A is [(a1: xo-xp), (a2: yo-yp), (a3: zo-zp)]. When matching the real-time data against the target data, it is first checked whether the real-time parameters collected under scene type A are consistent with the target parameters for that scene type. Here the real-time parameters comprise a1 and a2 while the target parameters comprise a1, a2 and a3, so the real-time parameters lack the parameter named a3. The real-time parameters are therefore inconsistent with the target parameters, the matching result is determined to be a mismatch, and a sensor of the vehicle is determined to be abnormal.
In one scenario, when the real-time parameter of the sensor is consistent with the target parameter of the sensor in the scenario type, the real-time sensor data and the target sensor data are matched, and whether the sensor of the vehicle is abnormal or not is determined according to a matching result, which may specifically include: and determining whether the real-time data of the sensor is matched with the target data of the sensor according to the parameter value corresponding to the real-time parameter and the parameter value range of the target parameter of the sensor under the scene type.
For example, suppose the scene type of the vehicle is determined to be A and the acquired sensor real-time data r is [a1: x1, a2: y2, a3: z3], where a1, a2 and a3 are the names of sensor parameters acquired in real time and x1, y2 and z3 are their real-time values; and suppose the sensor target data a for scene type A is [(a1: xo-xp), (a2: yo-yp), (a3: zo-zp)]. Comparison shows that the real-time parameters a1, a2 and a3 are consistent with the target parameters, so the parameter values are further compared with the corresponding value ranges. For the real-time parameter a1, the value is x1 and the target range is xo-xp, so it is determined whether x1 falls within xo-xp. If it does not (that is, x1 is smaller than xo or larger than xp), the real-time value does not meet the range of the corresponding target parameter; the matching result is a mismatch, and a sensor of the vehicle is determined to be abnormal.
If x1 falls within xo-xp, it is next determined whether y2 falls within yo-yp and, similarly, if it does, whether z3 falls within zo-zp. If y2 does not fall within yo-yp, or z3 does not fall within zo-zp, the matching result of the sensor real-time data and the sensor target data is a mismatch, and a sensor of the vehicle is determined to be abnormal.
In the above embodiment, when the real-time parameters of the sensor are inconsistent with the target parameters under the scene type, the matching result is determined to be a mismatch and a sensor of the vehicle is determined to be abnormal. When the real-time parameters are consistent with the target parameters, whether the data match is determined from the parameter values and the value ranges of the target parameters: if a value does not meet the range of its target parameter, the result is a mismatch and a sensor of the vehicle is abnormal. Because the sensor target data of the corresponding scene type is retrieved before the parameter comparison, the sensor state can be checked under each scene type, improving detection accuracy.
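The two-stage check described in this embodiment can be summarized in a short sketch; the dictionary layout and the concrete ranges for a1-a3 are placeholders.

```python
# Illustrative sketch of the two-stage match: compare parameter names
# first, then check each value against its target range. The example
# ranges for a1-a3 are placeholders.
from typing import Dict, Tuple

def sensor_abnormal(target: Dict[str, Tuple[float, float]],
                    realtime: Dict[str, float]) -> bool:
    if set(realtime) != set(target):       # stage 1: parameter names differ
        return True                        # mismatch -> sensor abnormality
    for name, value in realtime.items():   # stage 2: range check per value
        lo, hi = target[name]
        if not lo <= value <= hi:
            return True
    return False

target_a = {"a1": (0.0, 1.0), "a2": (2.0, 3.0), "a3": (4.0, 5.0)}
print(sensor_abnormal(target_a, {"a1": 0.5, "a2": 2.5}))           # True: a3 missing
print(sensor_abnormal(target_a, {"a1": 0.5, "a2": 2.5, "a3": 9}))  # True: a3 out of range
```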
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor, and a parameter value range corresponding to the target parameter; the sensor real-time data comprises real-time parameters of the sensor and parameter values corresponding to the real-time parameters. As shown in fig. 5, the matching of the sensor real-time data and the sensor target data, and determining whether the sensor of the vehicle is abnormal according to the matching result may further include: inputting a target parameter in a normal state of the sensor, a parameter value range corresponding to the target parameter, a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter into a similarity calculation model to obtain the similarity between the real-time data of the sensor and the target data of the sensor; and when the similarity is smaller than the similarity threshold value, determining that the matching result of the sensor real-time data and the sensor target data is not matched, and determining that the sensor of the vehicle is abnormal.
The similarity calculation model may be a mathematical model for computing the similarity between the sensor real-time data and the sensor target data. Specifically, it may obtain the similarity by computing the distance between the features of the sensor real-time data and the features of the sensor target data: the smaller the distance, the greater the similarity; the larger the distance, the smaller the similarity.
The similarity threshold may be preset and used as a basis for determining whether the real-time sensor data matches the target sensor data. Specifically, the similarity threshold may be set according to an actual application scenario. In the present embodiment, when the similarity between the sensor real-time data and the sensor target data is greater than or equal to the similarity threshold, it may be determined that the matching result of the sensor real-time data and the sensor target data is a match, and it may be determined that there is no abnormality in the sensor of the vehicle. And when the similarity between the sensor real-time data and the sensor target data is smaller than the similarity threshold, the matching result of the sensor real-time data and the sensor target data can be determined to be not matched, and the sensor of the vehicle can be determined to have abnormality.
In this embodiment, the vehicle-mounted terminal inputs the target parameters for the normal state of the sensor under the same scene type, their parameter value ranges, the real-time parameters of the sensor and their parameter values into the similarity calculation model to obtain the similarity between the sensor real-time data and the sensor target data output by the model. It then compares the obtained similarity with the similarity threshold; when the similarity is smaller than the threshold, the matching result is determined to be a mismatch and a sensor of the vehicle is determined to be abnormal. Using a similarity calculation model to obtain the similarity between the real-time data and the target data improves computational efficiency.
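As a non-limiting illustration of such a similarity calculation model, the sketch below measures the distance between a feature built from the target value ranges (their midpoints, an assumption) and the real-time values, mapping smaller distances to higher similarity.

```python
# Illustrative distance-based similarity model; representing each target
# parameter by the midpoint of its value range is an assumption.
import math
from typing import Dict, Tuple

def similarity(target: Dict[str, Tuple[float, float]],
               realtime: Dict[str, float]) -> float:
    names = sorted(target)
    t = [(lo + hi) / 2 for lo, hi in (target[n] for n in names)]
    r = [realtime.get(n, 0.0) for n in names]  # missing parameters default to 0
    dist = math.dist(t, r)          # smaller distance -> greater similarity
    return 1.0 / (1.0 + dist)

SIM_THRESHOLD = 0.8  # illustrative; the patent sets it per application scenario
abnormal = similarity({"a1": (0.0, 1.0)}, {"a1": 0.9}) < SIM_THRESHOLD
```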
In one embodiment, after determining that there is an abnormality in the sensor of the vehicle, the method may further include: and reporting an abnormal result of the sensor, wherein the abnormal result comprises a result that the sensor is abnormal.
Specifically, after determining that the sensor of the vehicle is abnormal, the vehicle-mounted terminal can also send an alarm to report the abnormal result of the sensor to the vehicle owner, so that the vehicle owner can timely handle the abnormality of the sensor.
In one scenario, after determining that the sensor of the vehicle is abnormal, the vehicle-mounted terminal may also report a corresponding abnormal result to a remote server. Therefore, the server can effectively monitor the sensor state of each vehicle in the network. The server can be implemented by an independent server, a server cluster composed of a plurality of servers, or a cloud server.
In one scenario, after the sensor of the vehicle is determined to be abnormal, the vehicle-mounted terminal can further repair the abnormality of the sensor according to an abnormal result, so that the abnormality of the sensor can be repaired locally and timely, and the safety problem caused by the abnormality of the sensor is avoided.
In one scenario, after determining that the sensor of the vehicle is abnormal, the vehicle-mounted terminal may also report the abnormal result to a remote server, and interact with the server according to the abnormal result to obtain a corresponding repairing mode to repair the abnormality of the sensor, thereby realizing timely repairing of the abnormality of the sensor.
In one scenario, after determining that the sensor of the vehicle is abnormal, the vehicle-mounted terminal may further determine a corresponding error code according to the abnormal result, report the error code to the server, interact with the server according to the error code, so as to obtain a fault reason and a repairing mode corresponding to the error code, and repair the abnormality of the sensor according to the corresponding repairing mode. The abnormal condition of the sensor can be timely repaired, and the error code is only reported to the server, so that the transmitted data volume can be reduced, the data transmission efficiency is improved, and the server burden is reduced.
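A minimal sketch of the error-code reporting flow follows; the endpoint URL, the error-code table and the response fields are hypothetical, not taken from this application.

```python
# Illustrative sketch of reporting only an error code to the server;
# the endpoint URL, error-code table, and response fields are hypothetical.
import json
import urllib.request

ERROR_CODES = {"missing_parameter": 101, "value_out_of_range": 102}

def report_error(code_name: str,
                 server: str = "https://example.invalid/sensor-report") -> dict:
    payload = json.dumps({"error_code": ERROR_CODES[code_name]}).encode()
    req = urllib.request.Request(server, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # assumed response: {"cause": ..., "repair_mode": ...}
        return json.load(resp)
```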
It should be understood that, although the steps in the flowcharts of the embodiments above are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence; unless explicitly stated otherwise, the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and not necessarily sequentially but possibly in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a sensor detection apparatus for implementing the sensor detection method above. The solution provided by the apparatus is similar to that described for the method, so for the specific limitations of the one or more sensor detection apparatus embodiments below, reference may be made to the limitations of the sensor detection method above; details are not repeated here.
In one embodiment, as shown in fig. 6, there is provided a sensor detecting device including: a driving monitoring module 602, a scene type determination module 604, a data query module 606, and an anomaly detection module 608, wherein:
a driving monitoring module 602, configured to execute, when it is monitored that a vehicle is in a driving mode, acquiring scene data during driving of the vehicle, and acquiring sensor real-time data of the vehicle;
the scene type determining module 604 is configured to determine a scene type of the vehicle according to scene data of the vehicle in the driving process;
a data query module 606 configured to execute acquiring sensor target data of the vehicle under the scene type according to the scene type where the vehicle is located;
an anomaly detection module 608 configured to perform matching of the sensor real-time data and the sensor target data and determine whether there is an anomaly in a sensor of the vehicle according to a matching result.
In one embodiment, the scene data includes scene timing features and scene point cloud features; the driving monitoring module includes: the data acquisition unit is configured to acquire vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle in the driving process of the vehicle in real time when the vehicle is monitored to be in a driving mode; the time sequence analysis unit is configured to perform time sequence analysis processing on the vehicle signals and the positioning information in the vehicle driving process to obtain scene time sequence characteristics in the vehicle driving process; and the point cloud data analysis unit is configured to execute point cloud data analysis processing on the radar calibration parameters of the vehicle in the vehicle driving process and the radar point cloud data of the vehicle to obtain scene point cloud characteristics in the vehicle driving process.
In one embodiment, a coding and decoding model based on an attention mechanism is adopted for time sequence analysis processing; the timing analysis unit is configured to perform: and inputting the vehicle signal and the positioning information in the vehicle driving process into the attention mechanism-based coding and decoding model to obtain the scene time sequence characteristics in the vehicle driving process.
In one embodiment, a segmentation model is used for point cloud data analysis processing; the point cloud data analysis unit is configured to perform: and inputting the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle into the segmentation model in the vehicle driving process to obtain the scene point cloud characteristics in the vehicle driving process.
In one embodiment, the scene type determination module is configured to input the scene timing features and the scene point cloud features of the driving process into a scene classification model to obtain the scene type in which the vehicle is located.
In one embodiment, the device further includes a scene classification model obtaining module configured to: obtain sample data of a sample vehicle under different scene types, the sample data including the scene type in which the sample vehicle is located and, under that scene type, a scene timing feature sample set and a scene point cloud feature sample set; and train a base classification model with the sample data to obtain the scene classification model.
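A minimal training sketch, assuming the timing and point cloud feature sample sets are concatenated row-wise into one design matrix and labeled with the sample vehicle's scene type; the feature dimensions, the four scene types, the synthetic data, and the choice of scikit-learn's MLPClassifier as the base classification model are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical sample data: one row per drive segment, formed by
# concatenating its scene timing feature and scene point cloud feature.
rng = np.random.default_rng(0)
timing = rng.normal(size=(200, 32))      # scene timing feature sample set
cloud = rng.normal(size=(200, 32))       # scene point cloud feature sample set
X = np.hstack([timing, cloud])
y = rng.integers(0, 4, size=200)         # 4 assumed scene types (urban, highway, ...)

base_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
scene_classifier = base_model.fit(X, y)  # train the base classification model

# At run time: concatenate the live features and predict the scene type.
scene_type = scene_classifier.predict(np.hstack([timing[:1], cloud[:1]]))[0]
```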
In one embodiment, the base classification model includes at least one of a neural network classification model, a nearest neighbor classification model, a decision tree classification model, a Bayesian classification model, and a linear classification model.
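For instance, off-the-shelf scikit-learn estimators cover the remaining families under the same fit/predict interface, so the base classification model can be swapped freely; the particular estimators and hyperparameters below are a sketch, not a recommendation.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# One candidate per model family named above; any can serve as the base
# classification model and be trained with the same sample data as before.
candidates = {
    "nearest_neighbor": KNeighborsClassifier(n_neighbors=5),
    "decision_tree": DecisionTreeClassifier(max_depth=8),
    "bayesian": GaussianNB(),
    "linear": LogisticRegression(max_iter=1000),
}
```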
In one embodiment, the sensor target data includes a target parameter for the normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter. The anomaly detection module is configured to: when the real-time parameter of the sensor is inconsistent with the target parameter of the sensor under the scene type, determine that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal; or, when the real-time parameter is consistent with the target parameter under the scene type but the parameter value corresponding to the real-time parameter falls outside the parameter value range of the corresponding target parameter, likewise determine that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
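A sketch of this two-branch rule, assuming sensor target data is a mapping from parameter name to a (low, high) normal range and sensor real-time data is a mapping from parameter name to a measured value; the parameter names and ranges in the example are invented.

```python
def match_rule_based(realtime: dict, target: dict) -> bool:
    """target: parameter name -> (low, high) range in the normal state;
    realtime: parameter name -> measured value. Shapes assumed."""
    # Branch 1: the parameter sets must agree for the given scene type...
    if set(realtime) != set(target):
        return False
    # Branch 2: ...and every measured value must fall inside its target range.
    return all(lo <= realtime[name] <= hi for name, (lo, hi) in target.items())

# Example: a radar whose returned point rate drifts out of range mismatches.
target = {"point_rate_hz": (9.0, 11.0), "noise_floor": (0.0, 0.3)}
assert match_rule_based({"point_rate_hz": 10.2, "noise_floor": 0.1}, target)
assert not match_rule_based({"point_rate_hz": 14.0, "noise_floor": 0.1}, target)
```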
In one embodiment, the sensor target data includes a target parameter for the normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter. The anomaly detection module is further configured to: input the target parameter, its parameter value range, the real-time parameter, and its parameter value into a similarity calculation model to obtain the similarity between the sensor real-time data and the sensor target data; and, when the similarity is smaller than a similarity threshold, determine that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
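The similarity calculation model itself is not specified; as a stand-in, the sketch below scores the fraction of target parameters that are present and in range, then applies a threshold. Both the scoring rule and the threshold value are assumptions (the real model could equally be learned).

```python
def similarity(realtime: dict, target: dict) -> float:
    """Stand-in 'similarity calculation model': fraction of target parameters
    that appear in the real-time data and fall inside their normal range."""
    hits = sum(
        name in realtime and lo <= realtime[name] <= hi
        for name, (lo, hi) in target.items()
    )
    return hits / len(target)

SIMILARITY_THRESHOLD = 0.8  # assumed value

def is_anomalous(realtime: dict, target: dict) -> bool:
    # Below the threshold, the data is deemed mismatched and the sensor abnormal.
    return similarity(realtime, target) < SIMILARITY_THRESHOLD
```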
In one embodiment, the device further includes a reporting module configured to report the detection result for the sensor, including the result that a sensor of the vehicle is abnormal.
In one embodiment, the driving mode includes any one of a manual driving mode and an automatic driving mode.
Each module in the above sensor detection device may be implemented wholly or partly in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a vehicle-mounted terminal, and its internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a sensor detection method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is a block diagram of only part of the structure relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that, when executing the computer program, performs the steps of:
when a vehicle is monitored to be in a driving mode, acquiring scene data in the driving process of the vehicle and acquiring sensor real-time data of the vehicle;
determining the scene type of the vehicle according to the scene data in the driving process of the vehicle;
acquiring sensor target data of the vehicle under the scene type according to the scene type of the vehicle;
and matching the sensor real-time data with the sensor target data, and determining whether the sensor of the vehicle is abnormal or not according to a matching result.
In one embodiment, the scene data includes scene timing features and scene point cloud features; the processor, when executing the computer program, further performs the steps of: when a vehicle is monitored to be in a driving mode, acquiring vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle in the driving process of the vehicle in real time; carrying out time sequence analysis processing on the vehicle signals and the positioning information in the vehicle driving process to obtain scene time sequence characteristics in the vehicle driving process; and carrying out point cloud data analysis processing on the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle in the vehicle driving process to obtain scene point cloud characteristics in the vehicle driving process.
In one embodiment, the processor, when executing the computer program, further performs the steps of: performing time sequence analysis processing by adopting an attention-based coding and decoding model; and inputting the vehicle signal and the positioning information in the vehicle driving process into the attention mechanism-based coding and decoding model to obtain the scene time sequence characteristics in the vehicle driving process.
In one embodiment, the processor when executing the computer program further performs the steps of: analyzing and processing point cloud data by adopting a segmentation model; and inputting the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle into the segmentation model in the vehicle driving process to obtain the scene point cloud characteristics in the vehicle driving process.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and inputting the scene time sequence characteristics and the scene point cloud characteristics in the vehicle driving process into a scene classification model to obtain the scene type of the vehicle.
In one embodiment, the processor when executing the computer program further performs the steps of: obtaining sample data of a sample vehicle under different scene types, wherein the sample data comprises the scene type of the sample vehicle, a scene time sequence characteristic sample set and a scene point cloud characteristic sample set under the scene type; and training a basic classification model by adopting the sample data to obtain the scene classification model.
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter; the processor, when executing the computer program, further performs the steps of: when the real-time parameter of the sensor is inconsistent with the target parameter of the sensor under the scene type, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal; or, when the real-time parameter is consistent with the target parameter under the scene type but the parameter value corresponding to the real-time parameter falls outside the parameter value range of the corresponding target parameter, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor, and a parameter value range corresponding to the target parameter; the sensor real-time data comprises real-time parameters of the sensor and parameter values corresponding to the real-time parameters; the processor, when executing the computer program, further performs the steps of: inputting a target parameter in a normal state of the sensor, a parameter value range corresponding to the target parameter, a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter into a similarity calculation model to obtain similarity between real-time data of the sensor and target data of the sensor; and when the similarity is smaller than a similarity threshold value, determining that the matching result of the sensor real-time data and the sensor target data is not matched, and determining that the sensor of the vehicle is abnormal.
In one embodiment, the processor, when executing the computer program, further performs the step of reporting the detection result for the sensor, including the result that a sensor of the vehicle is abnormal.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the steps of:
when a vehicle is monitored to be in a driving mode, acquiring scene data in the driving process of the vehicle and acquiring sensor real-time data of the vehicle;
determining the scene type of the vehicle according to the scene data in the driving process of the vehicle;
acquiring sensor target data of the vehicle under the scene type according to the scene type of the vehicle;
and matching the sensor real-time data with the sensor target data, and determining whether the sensor of the vehicle is abnormal or not according to a matching result.
In one embodiment, the scene data includes scene timing features and scene point cloud features; the computer program, when executed by the processor, further performs the steps of: when the vehicle is monitored to be in a driving mode, collecting vehicle signals, positioning information, radar calibration parameters of the vehicle, and radar point cloud data of the vehicle in real time during the driving process; performing time sequence analysis processing on the vehicle signals and the positioning information to obtain scene time sequence characteristics in the vehicle driving process; and performing point cloud data analysis processing on the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle to obtain scene point cloud characteristics in the vehicle driving process.
In one embodiment, the computer program when executed by the processor further performs the steps of: performing time sequence analysis processing by adopting an attention-based coding and decoding model; and inputting the vehicle signal and the positioning information in the vehicle driving process into the attention mechanism-based coding and decoding model to obtain the scene time sequence characteristics in the vehicle driving process.
In one embodiment, the computer program when executed by the processor further performs the steps of: analyzing and processing point cloud data by adopting a segmentation model; and inputting radar calibration parameters of the vehicle in the vehicle driving process and radar point cloud data of the vehicle into the segmentation model to obtain scene point cloud characteristics in the vehicle driving process.
In one embodiment, the computer program when executed by the processor further performs the steps of: and inputting the scene time sequence characteristics and the scene point cloud characteristics in the vehicle driving process into a scene classification model to obtain the scene type of the vehicle.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining sample data of a sample vehicle under different scene types, wherein the sample data comprises the scene type of the sample vehicle, a scene time sequence characteristic sample set and a scene point cloud characteristic sample set under the scene type; and training a basic classification model by adopting the sample data to obtain the scene classification model.
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter; the computer program, when executed by the processor, further performs the steps of: when the real-time parameter of the sensor is inconsistent with the target parameter of the sensor under the scene type, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal; or, when the real-time parameter is consistent with the target parameter under the scene type but the parameter value corresponding to the real-time parameter falls outside the parameter value range of the corresponding target parameter, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter; the computer program, when executed by the processor, further performs the steps of: inputting the target parameter in the normal state of the sensor, the parameter value range corresponding to the target parameter, the real-time parameter of the sensor, and the parameter value corresponding to the real-time parameter into a similarity calculation model to obtain the similarity between the sensor real-time data and the sensor target data; and, when the similarity is smaller than a similarity threshold, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
In one embodiment, the computer program, when executed by the processor, further performs the step of reporting the detection result for the sensor, including the result that a sensor of the vehicle is abnormal.
In one embodiment, a computer program product is provided, comprising a computer program which when executed by a processor performs the steps of:
when a vehicle is monitored to be in a driving mode, acquiring scene data in the driving process of the vehicle and acquiring sensor real-time data of the vehicle;
determining the scene type of the vehicle according to the scene data in the driving process of the vehicle;
acquiring sensor target data of the vehicle under the scene type according to the scene type of the vehicle;
and matching the sensor real-time data with the sensor target data, and determining whether the sensor of the vehicle is abnormal or not according to a matching result.
In one embodiment, the scene data includes scene timing features and scene point cloud features; the computer program, when executed by the processor, further performs the steps of: when the vehicle is monitored to be in a driving mode, collecting vehicle signals, positioning information, radar calibration parameters of the vehicle, and radar point cloud data of the vehicle in real time during the driving process; performing time sequence analysis processing on the vehicle signals and the positioning information to obtain scene time sequence characteristics in the vehicle driving process; and performing point cloud data analysis processing on the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle to obtain scene point cloud characteristics in the vehicle driving process.
In one embodiment, the computer program when executed by the processor further performs the steps of: performing time sequence analysis processing by adopting an attention-based coding and decoding model; and inputting the vehicle signal and the positioning information in the vehicle driving process into the attention mechanism-based coding and decoding model to obtain the scene time sequence characteristics in the vehicle driving process.
In one embodiment, the computer program when executed by the processor further performs the steps of: analyzing and processing point cloud data by adopting a segmentation model; and inputting radar calibration parameters of the vehicle in the vehicle driving process and radar point cloud data of the vehicle into the segmentation model to obtain scene point cloud characteristics in the vehicle driving process.
In one embodiment, the computer program when executed by the processor further performs the steps of: and inputting the scene time sequence characteristics and the scene point cloud characteristics in the vehicle driving process into a scene classification model to obtain the scene type of the vehicle.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining sample data of a sample vehicle under different scene types, wherein the sample data comprises a scene type of the sample vehicle, a scene time sequence characteristic sample set and a scene point cloud characteristic sample set under the scene type; and training a basic classification model by adopting the sample data to obtain the scene classification model.
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter; the computer program, when executed by the processor, further performs the steps of: when the real-time parameter of the sensor is inconsistent with the target parameter of the sensor under the scene type, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal; or, when the real-time parameter is consistent with the target parameter under the scene type but the parameter value corresponding to the real-time parameter falls outside the parameter value range of the corresponding target parameter, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
In one embodiment, the sensor target data includes a target parameter in a normal state of the sensor and a parameter value range corresponding to the target parameter, and the sensor real-time data includes a real-time parameter of the sensor and a parameter value corresponding to the real-time parameter; the computer program, when executed by the processor, further performs the steps of: inputting the target parameter in the normal state of the sensor, the parameter value range corresponding to the target parameter, the real-time parameter of the sensor, and the parameter value corresponding to the real-time parameter into a similarity calculation model to obtain the similarity between the sensor real-time data and the sensor target data; and, when the similarity is smaller than a similarity threshold, determining that the sensor real-time data does not match the sensor target data and that a sensor of the vehicle is abnormal.
In one embodiment, the computer program, when executed by the processor, further performs the step of reporting the detection result for the sensor, including the result that a sensor of the vehicle is abnormal.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by all parties concerned.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the present application. It should be noted that those skilled in the art may make several variations and improvements without departing from the concept of the present application, and all of these fall within its protection scope. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A sensor detection method, the method comprising:
when a vehicle is monitored to be in a driving mode, acquiring scene data in the driving process of the vehicle and acquiring sensor real-time data of the vehicle;
determining the scene type of the vehicle according to the scene data in the driving process of the vehicle;
acquiring sensor target data of the vehicle under the scene type according to the scene type of the vehicle;
and matching the sensor real-time data with the sensor target data, and determining whether the sensor of the vehicle is abnormal or not according to a matching result.
2. The method of claim 1, wherein the scene data comprises scene timing features and scene point cloud features, and the acquiring scene data in the driving process of the vehicle when the vehicle is monitored to be in a driving mode comprises:
when a vehicle is monitored to be in a driving mode, acquiring vehicle signals, positioning information, radar calibration parameters of the vehicle and radar point cloud data of the vehicle in the driving process of the vehicle in real time;
carrying out time sequence analysis processing on the vehicle signals and the positioning information in the vehicle driving process to obtain scene time sequence characteristics in the vehicle driving process;
and carrying out point cloud data analysis processing on the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle in the vehicle driving process to obtain scene point cloud characteristics in the vehicle driving process.
3. The method of claim 2, wherein the time sequence analysis processing is performed using an encoder-decoder model based on an attention mechanism, and the performing time sequence analysis processing on the vehicle signals and the positioning information in the vehicle driving process to obtain scene time sequence features in the vehicle driving process comprises:
and inputting the vehicle signal and the positioning information in the vehicle driving process into the attention mechanism-based coding and decoding model to obtain the scene time sequence characteristics in the vehicle driving process.
4. The method of claim 2, wherein the point cloud data analysis processing is performed using a segmentation model, and the performing point cloud data analysis processing on the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle in the vehicle driving process to obtain scene point cloud features in the vehicle driving process comprises:
and inputting the radar calibration parameters of the vehicle and the radar point cloud data of the vehicle into the segmentation model in the vehicle driving process to obtain the scene point cloud characteristics in the vehicle driving process.
5. The method of claim 1, wherein the scene data comprises scene timing features and scene point cloud features, and the determining the scene type of the vehicle according to the scene data in the driving process of the vehicle comprises:
and inputting the scene time sequence characteristics and the scene point cloud characteristics in the vehicle driving process into a scene classification model to obtain the scene type of the vehicle.
6. The method of claim 5, wherein the scene classification model is obtained by:
acquiring sample data of a sample vehicle under different scene types; the sample data comprises a scene type where the sample vehicle is located, a scene time sequence feature sample set and a scene point cloud feature sample set under the scene type;
and training a basic classification model by adopting the sample data to obtain the scene classification model.
7. A sensor detection device, the device comprising:
the driving monitoring module is configured to acquire scene data in the driving process of the vehicle and acquire sensor real-time data of the vehicle when the vehicle is monitored to be in a driving mode;
the scene type determining module is configured to determine the scene type of the vehicle according to scene data in the driving process of the vehicle;
the data query module is configured to acquire sensor target data of the vehicle under the scene type according to the scene type in which the vehicle is located; and
the anomaly detection module is configured to match the sensor real-time data against the sensor target data and to determine, according to the matching result, whether a sensor of the vehicle is abnormal.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202210641231.5A 2022-06-08 2022-06-08 Sensor detection method, sensor detection device, computer equipment and computer program product Pending CN115009296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210641231.5A CN115009296A (en) 2022-06-08 2022-06-08 Sensor detection method, sensor detection device, computer equipment and computer program product

Publications (1)

Publication Number Publication Date
CN115009296A true CN115009296A (en) 2022-09-06

Family

ID=83073813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210641231.5A Pending CN115009296A (en) 2022-06-08 2022-06-08 Sensor detection method, sensor detection device, computer equipment and computer program product

Country Status (1)

Country Link
CN (1) CN115009296A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116306953A (en) * 2023-01-17 2023-06-23 深圳国际量子研究院 Real-time measurement and control system architecture of quantum physical experiment platform
CN116306953B (en) * 2023-01-17 2023-11-03 深圳国际量子研究院 Real-time measurement and control system architecture of quantum physical experiment platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination