CN117804667A - Sensor performance analysis method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117804667A
CN117804667A
Authority
CN
China
Prior art keywords: value, sensor, sensor output, sample, data
Prior art date
Legal status
Pending
Application number
CN202410122481.7A
Other languages
Chinese (zh)
Inventor
陈彩燕
魏敏
葛俊良
刘昌业
耿黄政
欧学仕
Current Assignee
SAIC GM Wuling Automobile Co Ltd
Original Assignee
SAIC GM Wuling Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by SAIC GM Wuling Automobile Co Ltd filed Critical SAIC GM Wuling Automobile Co Ltd
Priority to CN202410122481.7A
Publication of CN117804667A


Landscapes

  • Indication And Recording Devices For Special Purposes And Tariff Metering Devices (AREA)

Abstract

The embodiment of the application provides a sensor performance analysis method, an electronic device and a storage medium. The method comprises the following steps: determining M sample data, and further determining a first straight line; substituting each sensor input sample value in the M sample data into the first straight line to obtain M sensor output intermediate values, and calculating the differences between these and the M sensor output sample values to obtain M sensor output difference values; determining first coordinate data and second coordinate data according to the M sensor output difference values; and determining a second straight line according to the first coordinate data and the second coordinate data. In this process, no worker is required to select the relevant data, so sensor performance analysis can be automated, and both its accuracy and its efficiency are improved.

Description

Sensor performance analysis method, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of sensor technologies, and in particular, to a method for analyzing sensor performance, an electronic device, and a storage medium.
Background
A sensor is a device that can directly sense a measured quantity and output an electric signal, or another signal, bearing a definite relationship to it. During actual use of a vehicle, sensors collect relevant signals for the vehicle's electric power steering system (EPS). As one of the key devices of the EPS, the sensor directly determines the system's performance, so the sensor's performance needs to be analyzed. Typically, the performance of the sensor is analyzed based on a corresponding curve of the sensor input value and the sensor output value. Therefore, before analyzing the performance of the sensor, it is first necessary to determine this curve from the collected sensor sample data.
In the related art, after a plurality of sample data are determined from the collected sensor output sample values and sensor input sample values, a corresponding curve of the sensor input value and the sensor output value can be fitted according to the plurality of sample data, so that the sensor performance is analyzed.
However, fitting a corresponding curve of the sensor input value and the sensor output value from a plurality of sample data requires a worker to select the relevant data, so sensor performance analysis cannot be fully automated; moreover, the resulting analysis of the sensor performance is both inaccurate and inefficient.
It should be noted that the information disclosed in the background section of the present application is only intended to enhance understanding of the general background of the present application and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art that is already known to a person skilled in the art.
Disclosure of Invention
In view of this, the present application provides a sensor performance analysis method, an electronic device and a storage medium, so as to solve the problems in the prior art that fitting a corresponding curve of the sensor input value and the sensor output value from a plurality of sample data requires a worker to select the relevant data, that sensor performance analysis therefore cannot be fully automated, and that the determined sensor performance is low in accuracy and efficiency.
In a first aspect, an embodiment of the present application provides a method for analyzing performance of a sensor, including:
Determining M sample data, wherein each sample data comprises a sensor input sample value and a sensor output sample value, and M is more than 1;
determining a first straight line according to any one of the sample data and the zero point coordinates;
substituting each sensor input sample value in M sample data into the first straight line to obtain M sensor output intermediate values;
respectively carrying out difference calculation on each sensor output sample value in M sample data and the corresponding sensor output intermediate value to obtain M sensor output difference values;
taking an average value of sample data corresponding to a first sensor output difference value and sample data corresponding to a second sensor output difference value to obtain first coordinate data, wherein the first sensor output difference value and the second sensor output difference value are respectively the maximum value and the minimum value in the M sensor output difference values;
taking an average value of sample data corresponding to a third sensor output difference value and sample data corresponding to a fourth sensor output difference value to obtain second coordinate data, wherein the third sensor output difference value and the fourth sensor output difference value are respectively the maximum value and the next-smallest value in the M sensor output difference values; or the third sensor output difference value and the fourth sensor output difference value are respectively the next-largest value and the minimum value in the M sensor output difference values;
And determining a second straight line according to the first coordinate data and the second coordinate data, wherein the second straight line is used for analyzing the sensor performance.
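For illustration only, the claimed steps can be sketched as a short routine. This is a non-authoritative sketch, not the patented implementation: the function name, the use of tuples for sample data, and the choice of the "maximum and next-smallest difference" variant for the second coordinate data are assumptions made here.

```python
def second_line(samples):
    """Sketch of the claimed method; samples is a list of (input, output) pairs.

    Returns (slope, intercept) of the second straight line.
    """
    # First straight line through the zero point and the maximum sample data
    # (here: the sample with the largest input value; the claim also allows
    # the largest output value).
    xm, ym = max(samples)
    k1 = ym / xm  # first-line slope; line passes through the origin

    # Sensor output difference for each sample: intermediate value minus
    # sensor output sample value.
    diffs = [k1 * x - y for x, y in samples]

    # First coordinate data: average of the samples with the largest and
    # smallest output difference.
    hi = samples[diffs.index(max(diffs))]
    lo = samples[diffs.index(min(diffs))]
    p1 = ((hi[0] + lo[0]) / 2, (hi[1] + lo[1]) / 2)

    # Second coordinate data: average of the samples with the largest and
    # next-smallest output difference (one of the claimed variants).
    order = sorted(range(len(diffs)), key=lambda i: diffs[i])
    nxt = samples[order[1]]
    p2 = ((hi[0] + nxt[0]) / 2, (hi[1] + nxt[1]) / 2)

    # Second straight line through the two coordinate points.
    k2 = (p2[1] - p1[1]) / (p2[0] - p1[0])
    b2 = p1[1] - k2 * p1[0]
    return k2, b2
```

The second line so obtained is the calibrated fit used for analyzing the sensor performance.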
In one possible implementation manner, the determining a first straight line according to any one of the sample data and the zero point coordinates includes:
and determining a first straight line according to the maximum sample data in the M sample data and the zero point coordinates, wherein the maximum sample data is the sample data corresponding to the maximum sensor input sample value or the maximum sensor output sample value.
In one possible implementation, the determining M sample data, each of the sample data including one sensor input sample value and one sensor output sample value, includes:
acquiring N pieces of sampling data, wherein each piece of sampling data comprises a sensor input sampling value and a sensor output sampling value, and one sensor input sampling value corresponds to a plurality of sensor output sampling values;
according to the N sampling data, M sample data are determined, each sample data comprises a sensor input sample value and a sensor output sample value, the sensor input sample value is the sensor input sampling value, the sensor output sample value is a sensor output sampling average value, and the sensor output sampling average value is an average value of a plurality of sensor output sampling values corresponding to the sensor input sampling value;
Wherein N is greater than or equal to M.
In one possible implementation, the determining M sample data according to the N pieces of sampling data, where each sample data includes one sensor input sample value and one sensor output sample value, includes:
determining M positive-stroke sample data according to N pieces of sampling data, wherein each positive-stroke sample data comprises a sensor input sample value and a positive-stroke sensor output sample value, the sensor input sample value is the sensor input sampling value, the positive-stroke sensor output sample value is a positive-stroke sensor output sampling average value, and the positive-stroke sensor output sampling average value is an average value of a plurality of positive-stroke sensor output sampling values corresponding to the sensor input sampling value;
determining M back-stroke sample data according to N pieces of sampling data, wherein each back-stroke sample data comprises a sensor input sample value and a back-stroke sensor output sample value, the sensor input sample value is the sensor input sampling value, the back-stroke sensor output sample value is a back-stroke sensor output sampling average value, and the back-stroke sensor output sampling average value is an average value of a plurality of back-stroke sensor output sampling values corresponding to the sensor input sampling value;
And determining M sample data according to the M forward stroke sample data and the M reverse stroke sample data, wherein each sample data comprises a sensor input sample value and a sensor output sample value, the sensor input sample value is the sensor input sampling value, and the sensor output sample value is the average value of the forward stroke sensor output sampling average value and the reverse stroke sensor output sampling average value corresponding to the sensor input sampling value.
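The forward-/back-stroke averaging above can be illustrated as follows; this is a hedged sketch in which the function name and the dictionary layout (input value mapped to a list of output sampling values per stroke direction) are assumptions, not from the patent.

```python
def stroke_samples(forward, backward):
    """forward/backward: {sensor input sampling value: [output sampling values]}.

    Returns a list of (input, output) sample data, one per input value.
    """
    samples = []
    for x in forward:
        # forward-stroke sensor output sampling average
        fwd_avg = sum(forward[x]) / len(forward[x])
        # back-stroke sensor output sampling average
        bwd_avg = sum(backward[x]) / len(backward[x])
        # sensor output sample value: average of the two stroke averages
        samples.append((x, (fwd_avg + bwd_avg) / 2))
    return samples
```

The two per-stroke averages can also be retained separately to compute the forward- and back-stroke standard deviations mentioned below.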
In one possible implementation, the method further includes:
calculating a positive stroke standard deviation according to M positive stroke sample data;
and calculating the back stroke standard deviation according to M back stroke sample data.
In one possible implementation of the present invention,
after the acquiring the N pieces of sampling data, further includes: judging whether abnormal data exist in the N pieces of sampling data;
the determining M sample data according to the N pieces of sampling data includes: if no abnormal data exists in the N pieces of sampling data, M sample data are determined according to the N pieces of sampling data.
In one possible implementation manner, the determining M sample data according to the N pieces of sampling data further includes:
If abnormal data exist in the N pieces of sampling data, deleting the abnormal data, and then determining M pieces of sample data.
In one possible implementation, the method further includes:
outputting the second straight line.
In a second aspect, an embodiment of the present application provides an electronic device, including:
a processor;
a memory;
and a computer program, wherein the computer program is stored in the memory, the computer program comprising instructions that, when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where the program when executed controls a device in which the computer readable storage medium is located to perform the method of any one of the first aspects.
In the embodiment of the application, the sample data corresponding to the first sensor output difference value and the sample data corresponding to the second sensor output difference value are averaged to obtain first coordinate data; the sample data corresponding to the third sensor output difference value and the sample data corresponding to the fourth sensor output difference value are averaged to obtain second coordinate data; and a second straight line is determined from the first coordinate data and the second coordinate data. In this process, no worker is required to select the relevant data, so sensor performance analysis can be automated, and both its accuracy and its efficiency are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of EPS provided in the related art.
Fig. 2 is a flow chart of a method for analyzing sensor performance according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a man-machine interaction interface for sensor performance analysis according to an embodiment of the present application.
Fig. 4 is a flow chart of another method for analyzing performance of a sensor according to an embodiment of the present application.
Fig. 5 is a software framework diagram of sensor performance analysis according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, of the embodiments of the present application. All other embodiments, based on the embodiments herein, which would be apparent to one of ordinary skill in the art without making any inventive effort, are intended to be within the scope of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "a and/or b" may represent three cases: a exists alone, both a and b exist, and b exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
An electric power steering system (Electric Power Steering, EPS) is a power steering system that relies directly on an electric motor to provide assist torque. During actual use of the vehicle, sensors collect the steering wheel torque and steering wheel angle applied by the driver on the steering wheel and send them to the EPS. The EPS calculates the assist torque from the steering wheel torque and steering wheel angle, converts it into a current command for the assist motor, and controls the assist motor to generate the corresponding assist torque. The assist torque acts on the steering gear after being amplified by the gear reduction mechanism. Finally, the assist torque helps the driver overcome the steering resistance torque, realizing the steering of the vehicle.
For ease of understanding, the following detailed description is made with reference to the accompanying drawings and specific examples.
Referring to fig. 1, an application scenario schematic diagram of an EPS is provided for related technology. As shown in fig. 1, in this application scenario, a steering wheel 101, an electronic power steering system 102, a steering shaft 103, a rack and pinion steering 104, and a tire 105 are shown. The electronic power steering system 102 specifically includes an electronic control unit (Electronic Control Unit, ECU) 1021, a sensor 1022, a power-assisted motor 1023, and a gear reduction mechanism 1024.
As shown in fig. 1, a steering wheel 101 controls steering of a tire 105 through a steering shaft 103 and a rack-and-pinion steering gear 104; the sensor 1022 is used for collecting the sensor signal on the steering shaft 103; the ECU1021 outputs a corresponding power-assisted motor control instruction according to the received sensor signal; the assist motor 1023 applies assist torque to the steering shaft 103 via the assist gear reduction mechanism 1024, assisting the steering shaft 103 to rotate.
In the actual application process, when the driver rotates the steering wheel 101, the steering wheel 101 drives the steering shaft 103 to rotate, and at this time, the sensor 1022 transmits the collected sensor signal of the steering shaft 103 to the ECU1021. The ECU1021 controls the power-assisted motor 1023 to drive the gear reduction mechanism 1024 to rotate according to the received sensor signal, so that the power-assisted steering shaft 103 rotates, and finally the rack-and-pinion steering device 104 is driven to control the tire steering.
It should be noted that fig. 1 is only an exemplary illustration of an application scenario according to an embodiment of the present application, and should not be taken as a limitation on the protection scope of the present application. In addition, it is understood that the sensor 1022 is only an exemplary description, and the sensor 1022 may be specifically a torque sensor, an angle sensor, or a torque angle sensor (the torque angle sensor is integrated by the torque sensor and the angle sensor), and the sensor type is not specifically limited in this application.
It will be appreciated that the performance of the sensor, which is one of the key devices of an electronic power steering system, directly determines the performance of the electronic power steering system, and thus a relevant analysis of the performance of the sensor is required. Typically, the performance of the sensor is analyzed based on a corresponding curve of the sensor input value and the sensor output value. Therefore, before analyzing the performance of the sensor, it is first necessary to determine a corresponding curve of the sensor input value and the sensor output value from the collected sensor sample data.
In the related art, after a plurality of sample data are determined from the collected sensor output sample values and sensor input sample values, a corresponding curve of the sensor input value and the sensor output value can be fitted according to the plurality of sample data, so that the sensor performance is analyzed.
However, fitting a corresponding curve of the sensor input value and the sensor output value from a plurality of sample data requires a worker to select the relevant data, so sensor performance analysis cannot be fully automated; moreover, the resulting analysis of the sensor performance is both inaccurate and inefficient.
In the embodiment of the application, the sample data corresponding to the first sensor output difference value and the sample data corresponding to the second sensor output difference value are averaged to obtain first coordinate data; the sample data corresponding to the third sensor output difference value and the sample data corresponding to the fourth sensor output difference value are averaged to obtain second coordinate data; and a second straight line is determined from the first coordinate data and the second coordinate data. In this process, no worker is required to select the relevant data, so sensor performance analysis can be automated, and both its accuracy and its efficiency are improved. The following detailed description is made with reference to the accompanying drawings and specific embodiments.
Referring to fig. 2, a flow chart of a method for analyzing sensor performance according to an embodiment of the present application is shown. As shown, the method specifically comprises the following steps.
Step S201: m sample data are determined.
In the embodiment of the present application, M sample data are determined according to N pieces of sampling data. Each piece of sampling data comprises a sensor input sampling value and a sensor output sampling value, wherein one sensor input sampling value corresponds to a plurality of sensor output sampling values; each sample data includes a sensor input sample value and a sensor output sample value, with N ≥ M > 1.
Specifically, in the embodiment of the present application, a sensor input sampling value is taken as a sensor input sample value, and an average value of a plurality of sensor output sampling values corresponding to the sensor input sampling value is taken as a sensor output sample value. It can be appreciated that by taking the average value of a plurality of sensor output sample values corresponding to the sensor input sample values as the sensor output sample value, the accuracy of testing the sensor performance can be improved to some extent.
For example, for an angle sensor, suppose there are 36 pieces of sampling data: (60, 58.2), (60, 60.5), (60, 60.7), (60, 59.2), (60, 60.4), (60, 60.1), (120, 119.5), (120, 119.1), (120, 120.5), (120, 118.6), (120, 121.3), (120, 119.5), (180, 179.2), (180, 179.6), (180, 180.2), (180, 181.1), (180, 181.3), (180, 178.9), (240, 240.8), (240, 241.2), (240, 240.2), (240, 241.9), (240, 240.6), (240, 240.5), (300, 300.8), (300, 301.2), (300, 299.2), (300, 298.9), (300, 299.6), (300, 300.6), (360, 360.1), (360, 361.2), (360, 359.1), (360, 359.1), (360, 359.5), (360, 360.7). Taking the average of the sensor output sampling values corresponding to each sensor input sampling value as the sensor output sample value yields the sample data (60, 59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05) and (360, 359.95). It should be noted that the angle sensor input sample value and output sample value are displayed as coordinate points; for example, (60, 58.2) above means that the angle sensor input sampling value is 60° and the corresponding output sampling value is 58.2°. For brevity, this is not described again in this application.
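The grouping and averaging described above can be sketched as follows; this is an illustrative, non-authoritative sketch in which the function name and the list-of-pairs layout are assumptions.

```python
def to_sample_data(sampling_data):
    """sampling_data: list of (sensor input, sensor output) sampling pairs.

    Returns M sample data, one (input, mean output) pair per input value.
    """
    groups = {}
    for x, y in sampling_data:
        groups.setdefault(x, []).append(y)  # group outputs by input value
    # one sample per input: (input, average of its output sampling values)
    return [(x, sum(ys) / len(ys)) for x, ys in sorted(groups.items())]
```

Applied to the six 60° pairs above, this yields the sample data (60, 59.85).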
In practical application, a sensor data set needs to be acquired first, a sensor output sampling value corresponding to a sensor input sampling value is determined according to a preset sensor input sampling value and the sensor data set, N pieces of sampling data are determined, and M pieces of sample data are further determined.
Specifically, in the embodiment of the present application, when a sensor input value identical to the sensor input sampling value exists in the sensor data set, the determined sensor output sampling value is the sensor output value corresponding to that identical sensor input value; when no identical sensor input value exists in the sensor data set, the sensor output value corresponding to the sensor input value in the data set that is adjacent in magnitude to the sensor input sampling value is taken as the sensor output sampling value.
It is noted that, in general, when no identical sensor input value exists in the sensor data set, each sensor input sampling value is adjacent in magnitude to two sensor input values. In one possible implementation, the sensor output value corresponding to whichever of the two sensor input values is closer to the sensor input sampling value is used as the sensor output sampling value.
For example, when no sensor input value identical to the sensor input sampling value exists in the sensor data set, the sensor input sampling value is 180°, and the two sensor input values adjacent to it in magnitude are 180.1° and 179.8°, the sensor output value corresponding to the input value 180.1° is selected as the sensor output sampling value.
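The "closer adjacent input" rule can be sketched as below. The dictionary layout, function name, and the example output values 179.6 and 180.2 are assumptions for illustration (the patent's example gives only the input values).

```python
def lookup_output(data_set, x_target):
    """data_set: {sensor input value: sensor output value}.

    Returns the output of the recorded input value nearest to x_target;
    if x_target itself is recorded, its distance is zero and it wins.
    """
    nearest = min(data_set, key=lambda x: abs(x - x_target))
    return data_set[nearest]
```

With recorded inputs 179.8° and 180.1° and a target of 180°, the rule picks 180.1° (distance 0.1° versus 0.2°).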
In one possible implementation, after the N pieces of sampling data are acquired, it is further necessary to determine whether any abnormal data exists among them; if not, M sample data are determined according to the N pieces of sampling data. In this embodiment of the present application, a data anomaly allowable range may be preset: when the difference between a sensor input sampling value and its sensor output sampling value is not within this range, the data is considered abnormal; when the difference is within the range, the data is considered normal.
Illustratively, when part of the N pieces of sampling data are (60, 58.2), (60, 61.5), (60, 60.7), (60, 59.2), (60, 62.4) and (60, 60.1), and the data anomaly allowable range is [0, 2], then since (60, 62.4) means that the angle sensor input sampling value is 60° while the output sampling value is 62.4°, the difference between them is 2.4°, which falls outside the allowable range; the sampling data (60, 62.4) is therefore regarded as abnormal, and the sensor performance analysis system recognizes the anomaly.
Of course, in one possible implementation, the abnormality determination may be performed not on the N pieces of sampling data but on the M sample data: the M sample data are first determined according to the N pieces of sampling data and are then checked. In this embodiment of the present application, a data anomaly allowable range may be preset: when the difference between a sensor input sample value and its sensor output sample value is not within this range, the data is considered abnormal; when it is within the range, the data is considered normal. This is not particularly limited in this application.
In this embodiment of the present application, if abnormal data exists among the N pieces of sampling data, the M sample data are determined after the abnormal data is deleted. For example, when part of the sampling data are (60, 58.2), (60, 61.5), (60, 60.7), (60, 59.2), (60, 62.4) and (60, 60.1), and the data anomaly allowable range is [0, 2], the abnormal sampling data (60, 62.4) is deleted, and the sample data are then determined from the remaining sampling data.
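The anomaly check can be sketched as follows, assuming the allowable range [0, 2] applies to the absolute input/output difference (the example above treats 62.4 − 60 = 2.4 as out of range; the function name and threshold parameter are assumptions).

```python
def drop_abnormal(sampling_data, max_dev=2.0):
    """Keep only (input, output) pairs whose absolute difference is allowed."""
    return [(x, y) for x, y in sampling_data if abs(y - x) <= max_dev]
```

Running it on the six example pairs deletes only (60, 62.4), leaving five pieces of sampling data.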
Step S202: a first line is determined from any of the sample data and the zero point coordinates.
In the embodiment of the present application, after M sample data are acquired, a first straight line is determined according to zero coordinates and any one sample data in the M sample data. For example, when the M sample data are (60,59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05), and (360,359.95), the first straight line may be determined according to (60,59.85) and the zero point coordinates, and of course, the first straight line may be determined according to (120, 119.75) and the zero point coordinates, and so on, the present application does not particularly limit this.
It will be appreciated that in a practical case, when the sensor input sample value is 0, the corresponding sensor output sample value is also 0. The zero point coordinates may be selected as one determination condition of the first straight line.
In practical applications, observation of the sample data shows that, when the points in the sample data are connected by a smooth curve, the slope of the curve continuously decreases or increases. Therefore, when sample data close to the zero point coordinates are selected to determine the first straight line, a large error exists between the determined first straight line and the sample data far from the zero point, so the accuracy of the determined first straight line is low. In view of this, in the embodiment of the present application, the sample data farthest from the zero point coordinates may be selected, and the first straight line determined from it and the zero point coordinates.
In the embodiment of the application, the first straight line is determined according to the maximum sample data in the M sample data and the zero point coordinates. The maximum sample data is the sample data corresponding to the maximum sensor input sample value or the maximum sensor output sample value. It will be appreciated that, since the slope of the curve connecting the points in the sample data continuously decreases, selecting the maximum sample data and the zero point coordinates yields a first straight line with a smaller error.
Illustratively, when the M sample data are (60,59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05), and (360,359.95), a first straight line is determined according to (360,359.95) and zero point coordinates.
It will be appreciated that the first straight line obtained may, to some extent, also be used to analyze the performance of the sensor, but it still has a large error with respect to the other sample data. To further reduce this error, the first straight line may be calibrated through steps S203 to S207, allowing a more accurate analysis of the sensor performance. The following detailed description is made with reference to the accompanying drawings and specific embodiments.
Step S203: substituting each sensor input sample value in the M sample data into a first straight line to obtain M sensor output intermediate values.
In the embodiment of the application, each sensor input sample value in the M sample data is substituted into the first straight line, yielding M sensor output intermediate values. For example, when the determined first straight line is y1 = 1.02x and the corresponding sample data are (60, 59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05) and (360, 353), substituting 60 into the first straight line gives a corresponding intermediate value of 61.2; substituting 120 gives 122.4; and so on. For brevity, the remaining values are not listed here.
Step S204: and respectively carrying out difference value calculation on each sensor output sample value in the M sample data and the corresponding sensor output intermediate value to obtain M sensor output difference values.
In the embodiment of the present application, after the M sensor output intermediate values are obtained, a difference calculation is performed between each sensor output sample value in the M sample data and the corresponding sensor output intermediate value, so as to obtain M sensor output differences. For example, when the sensor output sample value corresponding to the sensor input sample value 60 is 59.85 and the corresponding sensor output intermediate value is 61.2, the difference between the sensor output intermediate value 61.2 and the sensor output sample value 59.85 is calculated, and the resulting sensor output difference is 1.35; the remaining differences follow in the same way. For brevity, this application does not describe each of them in detail.
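Steps S203 and S204 can be sketched together as follows, using the patent's example of a first straight line y=1.02x (names are illustrative):

```python
def output_differences(samples, slope):
    """Step S203/S204 sketch: substitute each sensor input sample value into
    the first straight line, then subtract the sensor output sample value."""
    diffs = []
    for x, y in samples:
        y_mid = slope * x        # sensor output intermediate value (step S203)
        diffs.append(y_mid - y)  # sensor output difference (step S204)
    return diffs

samples = [(60, 59.85), (120, 119.75), (180, 180.05),
           (240, 240.7), (300, 300.05), (360, 353)]
diffs = output_differences(samples, 1.02)  # first two values: ≈1.35, ≈2.65
```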
It will be appreciated that the obtained sensor output differences characterize the error between the determined first straight line and the sample points in the sample data. In order to reduce this error, the corresponding coordinate data may be re-determined by averaging the sample point with the larger error and the sample point with the smaller error. It should be noted that the error referred to here is the difference between the sample data and the first straight line, and is not used to describe the error between the finally analysed sensor performance and the actual sensor performance.
Step S205: and averaging the sample data corresponding to the output difference value of the first sensor and the sample data corresponding to the output difference value of the second sensor to obtain first coordinate data.
In the embodiment of the application, sample data corresponding to the output difference value of the first sensor and sample data corresponding to the output difference value of the second sensor are averaged to obtain first coordinate data. The first sensor output difference value and the second sensor output difference value are respectively the maximum value and the minimum value in the M sensor output difference values.
Illustratively, when the sample data are (60, 59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05), and (360, 353), and the corresponding first straight line is y1=1.02x, the corresponding sensor output differences are 1.35, 2.65, 3.55, 4.1, 5.95, and 0. The maximum value of the sensor output differences is 5.95, and the sample data corresponding to this sensor output difference is (300, 300.05); the minimum value of the sensor output differences is 0, and the sample data corresponding to this sensor output difference is (360, 353). Averaging the sample data corresponding to the first sensor output difference and the sample data corresponding to the second sensor output difference gives the first coordinate data (330, 326.53).
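Step S205 can be sketched as follows (illustrative names, not from the patent):

```python
def first_coordinate(samples, diffs):
    """Step S205 sketch: average the sample data corresponding to the largest
    (first) and smallest (second) sensor output differences."""
    i_max = diffs.index(max(diffs))  # first sensor output difference
    i_min = diffs.index(min(diffs))  # second sensor output difference
    (x1, y1), (x2, y2) = samples[i_max], samples[i_min]
    return ((x1 + x2) / 2, (y1 + y2) / 2)

samples = [(60, 59.85), (120, 119.75), (180, 180.05),
           (240, 240.7), (300, 300.05), (360, 353)]
diffs = [1.35, 2.65, 3.55, 4.1, 5.95, 0]
first = first_coordinate(samples, diffs)  # ≈ (330.0, 326.525) → (330, 326.53)
```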
Step S206: and averaging the sample data corresponding to the output difference value of the third sensor and the sample data corresponding to the output difference value of the fourth sensor to obtain second coordinate data.
In the embodiment of the application, the sample data corresponding to the output difference value of the third sensor and the sample data corresponding to the output difference value of the fourth sensor are averaged to obtain the second coordinate data. The third sensor output difference value and the fourth sensor output difference value are respectively the maximum value and the next minimum value in the M sensor output difference values; or the third sensor output difference value and the fourth sensor output difference value are respectively the next largest value and the smallest value in the M sensor output difference values.
It can be understood that the second smallest value in the sensor output difference values refers to the smallest value in the corresponding M-1 sensor output difference values after the smallest value is removed from the M sensor output difference values; the next largest value in the sensor output difference values refers to the largest value in the corresponding M-1 sensor output difference values after the largest value is removed from the M sensor output difference values. Illustratively, when the sensor output differences are 1.35, 2.65, 3.55, 4.1, 5.95, and 0, the next smallest value in the corresponding sensor output differences is 1.35; the next largest value in the corresponding sensor output difference is 4.1.
Illustratively, when the sample data are (60,59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05), and (360,353), and the corresponding first straight line is y1=1.02X, the corresponding sensor output differences are 1.35, 2.65, 3.55, 4.1, 5.95, and 0. At this time, the maximum value of the sensor output difference values is 5.95, and the sensor sample data corresponding to the sensor output difference values is (300, 300.05); when the second smallest value in the sensor output difference is 1.35, the sensor sample data corresponding to the sensor output difference is (60,59.85). And taking the average value of the sample data corresponding to the third sensor output difference value and the sample data corresponding to the fourth sensor output difference value to obtain second coordinate data (180, 179.95).
Alternatively and illustratively, when the sample data are (60, 59.85), (120, 119.75), (180, 180.05), (240, 240.7), (300, 300.05), and (360, 353), and the corresponding first straight line is y1=1.02x, the corresponding sensor output differences are 1.35, 2.65, 3.55, 4.1, 5.95, and 0. At this time, the next-largest value in the sensor output differences is 4.1, and the sample data corresponding to this sensor output difference is (240, 240.7); the minimum value of the sensor output differences is 0, and the sample data corresponding to this sensor output difference is (360, 353). Averaging the sample data corresponding to the third sensor output difference and the sample data corresponding to the fourth sensor output difference gives the second coordinate data (300, 296.85).
In one possible implementation, whether the third sensor output difference and the fourth sensor output difference are selected as the maximum value and the next-smallest value of the M sensor output differences, or as the next-largest value and the minimum value of the M sensor output differences, may be decided according to the magnitude of the difference between the third sensor output difference and the fourth sensor output difference. Specifically, the pair with the larger difference between the third sensor output difference and the fourth sensor output difference is used to determine the second coordinate data.
Illustratively, when the maximum value of the sensor output differences is 5.95 and the next-smallest value is 1.35, the difference between the corresponding third sensor output difference and fourth sensor output difference is 4.6; when the next-largest value of the sensor output differences is 4.1 and the minimum value is 0, the difference between the corresponding third sensor output difference and fourth sensor output difference is 4.1. Since 4.6 is greater than 4.1, the third sensor output difference and the fourth sensor output difference are selected as the maximum value and the next-smallest value of the M sensor output differences.
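The gap-based selection rule for step S206 can be sketched as follows; this is a sketch under illustrative names, not the patent's implementation:

```python
def second_coordinate(samples, diffs):
    """Step S206 sketch: compare the gap (max - next-smallest) with the gap
    (next-largest - min) and average the sample data of the pair with the
    larger gap to obtain the second coordinate data."""
    order = sorted(range(len(diffs)), key=lambda i: diffs[i])
    i_min, i_min2 = order[0], order[1]    # minimum, next-smallest
    i_max2, i_max = order[-2], order[-1]  # next-largest, maximum
    if diffs[i_max] - diffs[i_min2] >= diffs[i_max2] - diffs[i_min]:
        pair = (samples[i_max], samples[i_min2])  # maximum and next-smallest
    else:
        pair = (samples[i_max2], samples[i_min])  # next-largest and minimum
    (x1, y1), (x2, y2) = pair
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```

For the example differences 1.35, 2.65, 3.55, 4.1, 5.95, and 0, the first gap (5.95 − 1.35 = 4.6) is larger, so (300, 300.05) and (60, 59.85) are averaged, giving (180, 179.95).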
In one possible implementation, to obtain the second coordinate data, the M sensor output differences obtained in step S204 may be sorted from large to small, and the two pairs of sensor output differences with the largest differences among the M sensor output differences are found. It can be understood that these two pairs are either the first and the M-th sensor output difference together with the first and the (M-1)-th sensor output difference, or the first and the M-th sensor output difference together with the second and the M-th sensor output difference. After the two pairs of sensor output differences are determined, the sample data corresponding to the two sensor output differences in each pair are averaged, giving the first coordinate data and the second coordinate data. For brevity, this application does not describe this in detail.
Step S207: a second line is determined from the first coordinate data and the second coordinate data.
In an embodiment of the present application, the second straight line is determined according to the first coordinate data and the second coordinate data. For example, when the first coordinate data is (330, 326.53) and the second coordinate data is (180, 179.95), the slope of the second straight line may be determined as (326.53-179.95)/(330-180)≈0.98; substituting the first coordinate data into the second straight line y=0.98x+D, D may be determined to be approximately 3.13, that is, the second straight line is y=0.98x+3.13. It can be appreciated that the second straight line reflects the correspondence of the target data better than the first straight line, i.e. the second straight line can serve as the correspondence curve of the sensor output value and the sensor input value, and can therefore be used for analysing the sensor performance.
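Step S207 can be sketched as follows (illustrative names). Note that the example in the text rounds the slope to 0.98 before solving for D, which is how it arrives at D ≈ 3.13; the unrounded slope 0.9772 gives D ≈ 4.05.

```python
def line_through(p1, p2):
    """Step S207 sketch: the second straight line y = k*x + d through the
    first coordinate data p1 and the second coordinate data p2."""
    (x1, y1), (x2, y2) = p1, p2
    k = (y1 - y2) / (x1 - x2)  # slope of the second straight line
    d = y1 - k * x1            # intercept D
    return k, d

k, d = line_through((330, 326.53), (180, 179.95))  # k = 0.9772, d ≈ 4.054
```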
It is understood that the sensor performance includes linearity of the sensor, repeatability of the sensor, return difference of the sensor, confidence of the sensor, and absolute error of the sensor, and of course, those skilled in the art may set other sensor performance according to actual needs, which is not particularly limited in this application.
In practical application, in order to make the sensor performance test simpler and more convenient, a human-computer interaction interface is established, and the corresponding sensor performance is output on the human-computer interaction interface. In one possible implementation, the correspondence curve of the sensor output value and the sensor input value, namely the second straight line, is output on the human-computer interaction interface at the same time. Of course, a person skilled in the art may, according to actual needs, also output a curve of the sensor output value against time and a curve of the sensor input value against time on the human-computer interaction interface, which this application does not specifically limit. The following provides a detailed description with reference to the accompanying drawings and specific embodiments.
Referring to fig. 3, a schematic diagram of a human-computer interaction interface for sensor performance analysis according to an embodiment of the present application is provided. As shown in fig. 3, the interface includes a display area for the correspondence curve of the sensor output value and the sensor input value, a display area for the curve of the sensor output value against time, a display area for the curve of the sensor input value against time, a torque sensor performance display area, and an angle sensor performance display area. It can be appreciated that fig. 3 is only an exemplary illustration; when performance tests are performed on two angle sensors and two torque sensors at the same time, the human-computer interaction interface may include display areas for the corresponding curves of the two angle sensors and the two torque sensors, which, for brevity of description, this application does not describe in detail.
In the embodiment of the application, the sample data corresponding to the first sensor output difference and the sample data corresponding to the second sensor output difference are averaged to obtain the first coordinate data; the sample data corresponding to the third sensor output difference and the sample data corresponding to the fourth sensor output difference are averaged to obtain the second coordinate data; and the second straight line is determined according to the first coordinate data and the second coordinate data. In this process, no relevant data need to be selected by staff, so the sensor performance analysis can be automated, improving both the accuracy and the efficiency of the sensor performance analysis.
In practical applications, the sample data include forward stroke sample data and back stroke sample data. So that the finally determined second straight line better reflects the performance of the sensor, the sample data should include the forward stroke data and the back stroke data uniformly. The following provides a detailed description with reference to the accompanying drawings and specific embodiments.
It can be understood that taking the angle sensor to collect the angle information of the steering wheel as an example, the positive stroke refers to the process of controlling the steering wheel to turn left or right from the original position; the back stroke refers to the process of controlling the steering wheel to return to the original position. The original position refers to a position corresponding to a steering wheel when the vehicle runs straight.
Referring to fig. 4, a flow chart of another method for analyzing performance of a sensor according to an embodiment of the present application is shown. As shown in fig. 4, the following steps are specifically included on the basis of step S201 shown in fig. 2.
Step S2011: m positive-stroke sample data are determined from the N sample data.
In the embodiment of the application, M positive-stroke sample data are determined according to the N sample data. Each piece of forward stroke sample data comprises a sensor input sample value and a forward stroke sensor output sample value, wherein the sensor input sample value is a sensor input sampling value, the forward stroke sensor output sample value is a forward stroke sensor output sampling average value, and the forward stroke sensor output sampling average value is an average value of a plurality of forward stroke sensor output sampling values corresponding to the sensor input sampling value.
Specifically, in the embodiment of the present application, the N pieces of sampling data include N1 pieces of positive stroke sampling data and N2 pieces of back stroke sampling data, with N1+N2=N. A plurality of sensor output sampling values corresponding to the sensor input sample value are determined in the N1 pieces of positive stroke sampling data, and the determined sensor output sampling values are averaged to obtain the positive stroke sensor output sampling average value. The sensor input sample value is the sensor input sampling value, and the positive stroke sensor output sample value is the positive stroke sensor output sampling average value.
Step S2012: m back-travel sample data are determined from the N sample data.
In the embodiment of the application, M back-stroke sample data are determined according to the N sample data. Each back stroke sample data comprises a sensor input sample value and a back stroke sensor output sample value, wherein the sensor input sample value is a sensor input sampling value, the back stroke sensor output sample value is a back stroke sensor output sampling average value, and the back stroke sensor output sampling average value is an average value of a plurality of back stroke sensor output sampling values corresponding to the sensor input sampling value.
Specifically, in the embodiment of the present application, the N pieces of sampling data include N1 pieces of positive stroke sampling data and N2 pieces of back stroke sampling data, with N1+N2=N. A plurality of sensor output sampling values corresponding to the sensor input sample value are determined in the N2 pieces of back stroke sampling data, and the determined sensor output sampling values are averaged to obtain the back stroke sensor output sampling average value. The sensor input sample value is the sensor input sampling value, and the back stroke sensor output sample value is the back stroke sensor output sampling average value.
Step S2013: m sample data are determined from the M forward travel sample data and the M reverse travel sample data.
In this embodiment of the present application, M sample data are determined according to the M forward stroke sample data and the M back stroke sample data, where each sample data includes a sensor input sample value and a sensor output sample value, the sensor input sample value is the sensor input sampling value, and the sensor output sample value is the average of the forward stroke sensor output sampling average value and the back stroke sensor output sampling average value corresponding to that sensor input sampling value. It can be appreciated that the M forward stroke sample data and the M back stroke sample data share the same M sensor input sample values, i.e., the M sensor input sample values in the M forward stroke sample data are the same as the M sensor input sample values in the M back stroke sample data.
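Step S2013 can be sketched as follows, under the assumption that both stroke lists share the same sensor input sample values (all names and the sample values in the example are illustrative, not from the patent):

```python
def merge_strokes(forward, backward):
    """Step S2013 sketch: merge forward-stroke and back-stroke sample data by
    averaging the two stroke output averages at each sensor input sample value."""
    merged = []
    for (x_f, y_f), (x_b, y_b) in zip(forward, backward):
        assert x_f == x_b  # both strokes share the same sensor input sample value
        merged.append((x_f, (y_f + y_b) / 2))
    return merged

forward = [(60, 59.9), (120, 119.8)]   # illustrative stroke averages only,
backward = [(60, 59.8), (120, 119.7)]  # not values taken from the patent
merged = merge_strokes(forward, backward)  # [(60, ~59.85), (120, ~119.75)]
```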
In one possible implementation, to better analyze the performance of the sensor, it is also necessary to determine the forward and reverse standard deviations of the forward and reverse sample data. It is understood that the positive and negative stroke standard deviations are used to analyze the return difference performance of the sensor.
In the embodiment of the application, the positive stroke standard deviation is calculated from the M positive stroke sample data. Specifically, for each of the M positive stroke sample data, the sensor input sample value is subtracted from the corresponding positive stroke sensor output sample value; the mean of the M differences is then subtracted from each difference, the squared deviations are summed, the sum is divided by M-1, and the square root is taken, which determines the positive stroke standard deviation.
Similarly, in the present embodiment, the back stroke standard deviation is calculated from the M back stroke sample data. Specifically, for each of the M back stroke sample data, the sensor input sample value is subtracted from the corresponding back stroke sensor output sample value; the mean of the M differences is then subtracted from each difference, the squared deviations are summed, the sum is divided by M-1, and the square root is taken, which determines the back stroke standard deviation.
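Read as the usual sample standard deviation with divisor M − 1, the calculation can be sketched as follows; this interpretation, and all names, are assumptions rather than text from the patent:

```python
import math

def stroke_std(samples):
    """Sample standard deviation (divisor M - 1) of the M differences between
    each stroke sensor output sample value and its sensor input sample value."""
    diffs = [y - x for x, y in samples]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return math.sqrt(var)
```

The same function applies to the M positive stroke sample data and the M back stroke sample data, and the two results feed the return-difference analysis mentioned above.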
Corresponding to the above embodiments, the application also provides a software framework diagram for sensor performance analysis. Referring to fig. 5, a software framework diagram of sensor performance analysis is provided for an embodiment of the present application. As shown in the figure, the core software functions of the sensor performance analysis are sample data acquisition and performance calculation; for details, reference is made to the above method embodiments, which this application does not repeat here.
The sensor performance analysis software further includes a real-time monitoring function and other functions. The real-time monitoring function includes outputting the correspondence between the sensor input signal and time, the correspondence between the sensor output signal and time, and the correspondence between the sensor output signal and the sensor input signal. The other functions include a lighting prompt, test result storage, and test result query.
Corresponding to the above embodiments, the present application also provides an electronic device. Referring to fig. 6, a schematic structural diagram of an electronic device according to an embodiment of the present application is provided. The electronic device 600 may include: a processor 601, a memory 602, and a communication unit 603. These components may communicate via one or more buses. It will be appreciated by those skilled in the art that the structure of the electronic device shown in the drawing does not limit the embodiments of the invention: it may be a bus structure or a star structure, may include more or fewer components than shown, may combine certain components, or may arrange the components differently.
The communication unit 603 is configured to establish a communication channel so that the electronic device can communicate with other devices, receiving user data sent by other devices or sending user data to other devices.
The processor 601 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and/or processes data by running or executing software programs, instructions and/or modules stored in the memory 602 and invoking data stored in the memory. The processor may be composed of integrated circuits (integrated circuit, IC), for example a single packaged IC, or multiple packaged ICs with the same or different functions connected together. For example, the processor 601 may include only a central processing unit (central processing unit, CPU). In the embodiment of the invention, the CPU may be a single operation core or may include multiple operation cores.
The memory 602, for storing instructions for execution by the processor 601, the memory 602 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
When the instructions stored in the memory 602 are executed by the processor 601, the electronic device 600 is enabled to perform some or all of the steps of the above method embodiments.
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, may include some or all of the steps of each embodiment of the sensor performance analysis method provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (random access memory, RAM), or the like.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relation of associated objects and indicates that three relations may exist; for example, A and/or B may indicate that A exists alone, that A and B exist together, or that B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relation. "At least one of the following" and similar expressions mean any combination of these items, including any combination of single items or plural items. For example, at least one of a, b and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may be single or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In several embodiments provided herein, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The same or similar parts between the various embodiments in this specification are referred to each other. In particular, for the device embodiment and the terminal embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and reference should be made to the description in the method embodiment for relevant points.

Claims (10)

1. A method of analyzing sensor performance, comprising:
determining M sample data, wherein each sample data comprises a sensor input sample value and a sensor output sample value, and M is more than 1;
determining a first straight line according to any one of the sample data and the zero point coordinates;
substituting each sensor input sample value in M sample data into the first straight line to obtain M sensor output intermediate values;
respectively carrying out difference calculation on each sensor output sample value in M sample data and the corresponding sensor output intermediate value to obtain M sensor output difference values;
taking an average value of sample data corresponding to a first sensor output difference value and sample data corresponding to a second sensor output difference value to obtain first coordinate data, wherein the first sensor output difference value and the second sensor output difference value are respectively the maximum value and the minimum value in M sensor output difference values;
Taking an average value of sample data corresponding to a third sensor output difference value and sample data corresponding to a fourth sensor output difference value to obtain second coordinate data, wherein the third sensor output difference value and the fourth sensor output difference value are respectively the maximum value and the next minimum value in M sensor output difference values; or the third sensor output difference value and the fourth sensor output difference value are respectively the next largest value and the smallest value in the M sensor output difference values;
and determining a second straight line according to the first coordinate data and the second coordinate data, wherein the second straight line is used for analyzing the sensor performance.
2. The method of analyzing sensor performance according to claim 1, wherein determining the first line from any one of the sample data and zero point coordinates includes:
and determining a first straight line according to the maximum sample data and the zero point coordinate in the M sample data, wherein the maximum sample data is the sample data corresponding to the maximum sensor input sample value or the maximum sensor output sample value.
3. The method of claim 1, wherein said determining M sample data, each of said sample data comprising one sensor input sample value and one sensor output sample value, comprises:
Acquiring N pieces of sampling data, wherein each piece of sampling data comprises a sensor input sampling value and a sensor output sampling value, and one sensor input sampling value corresponds to a plurality of sensor output sampling values;
according to the N sampling data, M sample data are determined, each sample data comprises a sensor input sample value and a sensor output sample value, the sensor input sample value is the sensor input sampling value, the sensor output sample value is a sensor output sampling average value, and the sensor output sampling average value is an average value of a plurality of sensor output sampling values corresponding to the sensor input sampling value;
wherein N is greater than or equal to M.
4. A method of analyzing sensor performance according to claim 3, wherein said determining M sample data from N of said sample data, each of said sample data including one sensor input sample value and one sensor output sample value, comprises:
determining M positive-stroke sample data according to N pieces of sampling data, wherein each positive-stroke sample data comprises a sensor input sample value and a positive-stroke sensor output sample value, the sensor input sample value is the sensor input sampling value, the positive-stroke sensor output sample value is a positive-stroke sensor output sampling average value, and the positive-stroke sensor output sampling average value is an average value of a plurality of positive-stroke sensor output sampling values corresponding to the sensor input sampling value;
Determining M back-stroke sample data according to N pieces of sampling data, wherein each back-stroke sample data comprises a sensor input sample value and a back-stroke sensor output sample value, the sensor input sample value is the sensor input sampling value, the back-stroke sensor output sample value is a back-stroke sensor output sampling average value, and the back-stroke sensor output sampling average value is an average value of a plurality of back-stroke sensor output sampling values corresponding to the sensor input sampling value;
and determining M sample data according to the M forward stroke sample data and the M reverse stroke sample data, wherein each sample data comprises a sensor input sample value and a sensor output sample value, the sensor input sample value is the sensor input sampling value, and the sensor output sample value is the average value of the forward stroke sensor output sampling average value and the reverse stroke sensor output sampling average value corresponding to the sensor input sampling value.
5. The method of analyzing sensor performance according to claim 4, further comprising:
calculating a positive stroke standard deviation according to M positive stroke sample data;
And calculating the back stroke standard deviation according to M back stroke sample data.
6. A method for analyzing sensor performance according to claim 3,
after the acquiring the N pieces of sampling data, further includes: judging whether abnormal data exist in the N pieces of sampling data;
the determining M sample data according to the N sampling data includes: if no abnormal data exists in the N pieces of sampling data, M pieces of sample data are determined according to the N pieces of sampling data.
7. The method of analyzing sensor performance according to claim 6, wherein determining M sample data from the N sample data further comprises:
if abnormal data exist in the N pieces of sampling data, deleting the abnormal data, and then determining M pieces of sample data.
8. The method of analyzing sensor performance according to claim 1, further comprising:
outputting the second straight line.
9. An electronic device, comprising:
a processor;
a memory;
and a computer program stored in the memory, the computer program comprising instructions that, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 8.
10. A computer readable storage medium, characterized in that it comprises a stored program, wherein the program, when run, controls a device on which the computer readable storage medium is located to perform the method according to any one of claims 1 to 8.
CN202410122481.7A 2024-01-29 2024-01-29 Sensor performance analysis method, electronic equipment and storage medium Pending CN117804667A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410122481.7A CN117804667A (en) 2024-01-29 2024-01-29 Sensor performance analysis method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117804667A 2024-04-02

Family

ID=90423577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410122481.7A Pending CN117804667A (en) 2024-01-29 2024-01-29 Sensor performance analysis method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117804667A (en)

Similar Documents

Publication Publication Date Title
CN111661137B (en) Remote driving road feel simulation method, device and system and storage medium
CN110834642B (en) Vehicle deviation identification method and device, vehicle and storage medium
US20130110749A1 (en) Control device and method for calculating an output parameter for a controller
CN110334816B (en) Industrial equipment detection method, device, equipment and readable storage medium
CN105579922A (en) Information processing device and analysis method
DE102014225893A1 Method and system for determining a malfunction within a resolver
CN110962930B (en) Steering wheel corner analysis method and device
CN110608900B (en) Method, device, equipment and storage device for evaluating closing performance of vehicle door
DE102017126074A1 Method for controlling a steering system
CN111572551B (en) Course angle calculation method, device, equipment and storage medium under parking condition
CN112560974B (en) Information fusion and vehicle information acquisition method and device
CN110161181A (en) The concentration of component recognition methods of mixed gas and system
CN112989587B (en) Online analysis method and system for degradation cause of capacitive voltage transformer
CN117804667A (en) Sensor performance analysis method, electronic equipment and storage medium
CN111780799B (en) Instrument data verification method and device
CN110525513B (en) Fault monitoring method and fault monitoring system of wire-controlled steering system
CN111976832B (en) Method and device for calculating steering wheel angle data and electronic equipment
EP2738701A2 (en) Method and apparatus for statistical electronic circuit simulation
WO2014024640A1 (en) Device for estimating error location in logic diagram and method therefor
JPH10187226A (en) Plant state predicting device
EP2169406B1 Method for determining the velocity of an electric motor
CN111428345B (en) Performance evaluation system and method of random load disturbance control system
CN118090034A (en) Sensor data processing method and device, electronic equipment and storage medium
WO2019222152A1 (en) Online fault localization in industrial processes without utilizing a dynamic system model
CN118010242A (en) Sensor testing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination