CN113591744A - Generation method of labeled data for dangerous driving behaviors and data acquisition system - Google Patents
- Publication number: CN113591744A (application number CN202110895907.9A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06Q10/0635 — Risk analysis of enterprise or organisation activities
- G08G1/0125 — Traffic data processing
- H04L43/10 — Active monitoring, e.g. heartbeat, ping or trace-route
- H04N7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
Abstract
The invention discloses a data acquisition system comprising: a vehicle-mounted device adapted to collect vehicle state data while an acquisition task is executed, the acquisition task comprising a plurality of events of different types directed at dangerous driving behaviors; at least one camera adapted to collect image data while the acquisition task is executed; a mobile terminal arranged on the vehicle, adapted to collect driving state data while the acquisition task is executed and to be bound with a data acquisition device through a server; the data acquisition device, arranged on the vehicle and coupled to the vehicle-mounted device and the at least one camera respectively so as to obtain the vehicle state data and the image data; and the server, adapted to associate the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors. With this data acquisition system, high-quality annotation data for dangerous driving behaviors can be obtained.
Description
Technical Field
The invention relates to the technical field of computers, and in particular to a method for generating annotation data for dangerous driving behaviors and to a data acquisition system.
Background
With the continuous development of the automobile industry and people's growing expectations for the driving experience, the demand for avoiding traffic risks increases day by day. Recognizing dangerous driving behavior is an important means of evaluating a driver's driving risk and preventing traffic accidents.
Meanwhile, the application scenarios for identifying dangerous driving behaviors are wide. For example, with the development of technologies such as the Internet of Vehicles, automatic driving and big data monitoring, services such as fleet monitoring, driving-behavior risk assessment and driving assistance all quietly build on travel monitoring technology, and within travel monitoring, dangerous driving behavior recognition is currently the most technically demanding part and the one with the highest practical value. For another example, freight truck fleets are a group with a high incidence of traffic accidents, and how to monitor and avoid the traffic risks of a freight fleet is a problem that the freight industry urgently needs to solve.
On the other hand, dangerous driving behavior is complex and uncertain. In the prior art, it can be defined simply by the variation patterns of kinematic quantities such as acceleration and angular velocity, or it can be determined by the probability that a driving behavior may induce a future accident. However, because of differences in personal perception, the perceived risk of the same dangerous driving behavior differs between groups of people. Meanwhile, the complexity of drivers' habits and of road conditions also causes the objective risk corresponding to the same dangerous driving behavior to differ. Accurately identifying these differences requires a large amount of high-quality annotation data of dangerous driving behavior.
Therefore, a solution capable of acquiring high-quality dangerous driving behavior data is needed.
Disclosure of Invention
The present invention provides a method for generating annotation data for dangerous driving behavior and a data acquisition system, in an attempt to solve or at least alleviate at least one of the above problems.
According to one aspect of the invention, a method for generating annotation data for dangerous driving behaviors is provided, comprising the following steps: in response to a binding request from a mobile terminal, binding the mobile terminal with a data acquisition device arranged on a vehicle through a travel identifier; sending configuration information corresponding to a collection task to the mobile terminal, so that the mobile terminal outputs event description information of each event based on the configuration information to guide the user to execute each event accordingly, the collection task comprising a plurality of events of different types directed at dangerous driving behaviors; based on the travel identifier, acquiring first trip data from the data acquisition device and second trip data from the mobile terminal respectively; determining, from the configuration information, the time interval corresponding to execution of the collection task; processing the first trip data and the second trip data respectively according to the time interval to obtain corresponding third trip data and fourth trip data; and aligning the third trip data and the fourth trip data based on the acquisition frequencies of the first trip data and the second trip data, to obtain aligned third trip data and aligned fourth trip data that serve as the annotation data.
Optionally, in the method according to the present invention, the step of binding the mobile terminal with the data acquisition device arranged on the vehicle through the travel identifier in response to the binding request from the mobile terminal includes: when a request to be bound with the data acquisition device is received from the mobile terminal, generating the travel identifier and returning it to the mobile terminal; and after a heartbeat signal from the data acquisition device is received, sending the travel identifier to the data acquisition device, wherein the heartbeat signal is sent to the server every first time period after the data acquisition device is started, so that the server can monitor the networking state of the data acquisition device.
Optionally, in the method according to the present invention, the configuration information includes at least: the event type, the event identifier, the expected execution time, and the instruction template corresponding to each event type, for each event in the collection task.
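By way of illustration only, the configuration information for one collection task could be organized as in the following Python sketch; the field names, identifiers and values are hypothetical assumptions and are not prescribed by this disclosure.

```python
# Hypothetical layout of the configuration information for one collection task.
# All field names and values below are illustrative, not defined by the patent.
collection_task_config = {
    "task_id": "task-001",
    "events": [
        {
            "event_type": "rapid_acceleration",     # event type
            "event_id": "evt-ra-low-speed-01",      # event identifier
            "expected_execution_time_s": 15,        # expected execution time
            "instruction_template": (               # instruction template for this event type
                "Please fasten the seat belt and ensure a safe distance of {distance_m} m "
                "straight ahead. Bring the vehicle to {initial_speed_kmh} km/h, then after "
                "a 5-second countdown please {action}."
            ),
        },
        {
            "event_type": "rapid_turn",
            "event_id": "evt-rt-01",
            "expected_execution_time_s": 10,
            "instruction_template": (
                "Please fasten the seat belt and ensure a curve {distance_m} m ahead. "
                "Bring the vehicle to {initial_speed_kmh} km/h, then turn the wheel to the "
                "{direction} after a 5-second countdown."
            ),
        },
    ],
}
```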
Optionally, the method according to the invention further comprises the steps of: acquiring the first trip data from the data acquisition device every second time period, the first trip data being data of the vehicle during execution of the collection task, collected by the data acquisition device from the moment it receives the travel identifier; when the collection task is completed, acquiring the second trip data from the mobile terminal, the second trip data being data of the vehicle during execution of the collection task, collected by the mobile terminal from the moment it receives the travel identifier; and storing the first trip data and the second trip data in association based on the travel identifier.
Optionally, in the method according to the present invention, the data acquisition device is coupled to the vehicle-mounted device and to at least one camera respectively, and the first trip data include: the vehicle state data collected through the vehicle-mounted device and the image data collected through the at least one camera while the collection task is executed; the second trip data include: the driving state data of the vehicle and the actual execution time of each event while the collection task is executed.
Optionally, in the method according to the present invention, the step of determining, from the configuration information, the time interval corresponding to execution of the collection task includes: determining the expected execution time of each event from the configuration information; judging whether the expected execution time of each event is consistent with its actual execution time; and, if they are consistent, taking the actual execution time as the time interval corresponding to that event.
Optionally, in the method according to the present invention, the actual execution time of an event is generated based on the user's input and includes: a start time of the event, generated in response to the user selecting the event in the collection task; and an end time of the event, generated in response to the user's confirmation operation after completing the event.
Optionally, in the method according to the present invention, the step of processing the first trip data and the second trip data respectively according to the time interval to obtain the corresponding third trip data and fourth trip data includes: intercepting, from the vehicle state data and the image data respectively, the data falling within the time interval as the third trip data; and intercepting, from the driving state data, the data falling within the time interval as the fourth trip data.
Optionally, in the method according to the present invention, the step of aligning the third trip data and the fourth trip data based on the acquisition frequencies of the first trip data and the second trip data, to obtain aligned third trip data and aligned fourth trip data as the annotation data, includes: determining an alignment frequency based on the acquisition frequencies of the first trip data and the second trip data; and sampling the third trip data and the fourth trip data respectively based on the alignment frequency, to obtain the aligned third trip data and the aligned fourth trip data as the annotation data.
Optionally, the method according to the invention further comprises the steps of: sampling the third trip data and the fourth trip data based on the alignment frequency to obtain sampled third trip data and sampled fourth trip data; and, if data are missing in the sampled third trip data and/or the sampled fourth trip data, supplementing the missing data through interpolation to obtain the aligned third trip data and/or the aligned fourth trip data.
Optionally, the method according to the invention further comprises the steps of: checking, by combining the configuration information with the aligned third trip data and the aligned fourth trip data, whether the user's actions during execution of the collection task met the prescribed requirements; and, if the prescribed requirements were met, using the aligned third trip data and the aligned fourth trip data as annotation data indicating dangerous driving behaviors.
According to another aspect of the present invention, a data acquisition system is provided, comprising: a vehicle-mounted device adapted to collect vehicle state data while an acquisition task is executed, the acquisition task comprising a plurality of events of different types directed at dangerous driving behaviors; at least one camera adapted to collect image data while the acquisition task is executed; a mobile terminal arranged on the vehicle, adapted to collect driving state data while the acquisition task is executed and to be bound with a data acquisition device through a server; the data acquisition device, arranged on the vehicle and coupled to the vehicle-mounted device and the at least one camera respectively so as to obtain the vehicle state data and the image data; and the server, adapted to execute the method described above, associating the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors.
Optionally, in the system according to the present invention, the data acquisition device is further adapted to be powered by the vehicle-mounted device and to start automatically when powered.
Optionally, in the system according to the present invention, the data acquisition device is further adapted to, after being started, send a heartbeat signal to the server every first time period so that the server can monitor the networking state of the data acquisition device; and, after receiving the travel identifier from the server, to start collecting the vehicle state data and the image data as the first trip data and to send the first trip data to the server every second time period.
Optionally, in the system according to the present invention, the mobile terminal is further adapted to, after the collection task is determined, obtain configuration information corresponding to the collection task from the server, the configuration information including at least the event type, the event identifier, the expected execution time and the instruction template of each event type, for each event in the collection task; and, in response to the user selecting an event, to generate and output event description information of that event based on the instruction template corresponding to the event, so as to guide the user to execute the event according to the event description information.
Optionally, in the system according to the present invention, the mobile terminal is further adapted to generate the start time of an event in response to the user selecting that event in the collection task, and to generate the end time of the event in response to the user's confirmation operation after completing the event.
Optionally, in the system according to the present invention, the data acquisition device further includes a two-dimensional code image, so that the mobile terminal can be bound with the data acquisition device by scanning the two-dimensional code image.
According to yet another aspect of the present invention, there is provided a computing device comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
According to a further aspect of the invention there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods described above.
In summary, according to the scheme of the present invention, the collection task and its corresponding configuration information are generated from combinations of different types of events. The configuration information of an event can be understood as annotation information of finer granularity, and is extremely valuable for researching and optimizing dangerous driving behavior recognition.
Meanwhile, according to the scheme of the invention, the collected multi-source data is coupled in time and space, so that the alignment and the quality of the data are ensured.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a data acquisition system 100 according to some embodiments of the invention;
FIG. 2 illustrates a workflow diagram of the data acquisition device 110 according to one embodiment of the invention;
FIG. 3A illustrates a schematic diagram of a collection task interface according to one embodiment of the invention;
FIG. 3B is a diagram illustrating a display interface of event description information according to one embodiment of the invention;
FIG. 4 illustrates a schematic diagram of a computing device 400 according to some embodiments of the invention;
FIG. 5 shows a flow diagram of a method 500 for generating annotation data for dangerous driving behavior according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the development of mobile terminal applications and sensor hardware, and given that mobile terminals (e.g., mobile phones) are directly bound to a person, they have become monitoring devices with great potential for monitoring dangerous driving behaviors. However, although a mobile terminal collects information conveniently, the accuracy and quality of its sensors are not as good as those of vehicle-mounted recording devices (such as a driving recorder or an On-Board Diagnostics (OBD) system). Monitoring driving behavior with a mobile terminal is like perceiving the world with limited perception capability, and identifying behaviors that potentially hide accident risks under these conditions is a challenging task. In addition, sensor noise and the hardware differences between phone models limit a mobile terminal's ability to perceive the objective, real motion state of the vehicle, which in turn hinders the identification of dangerous driving behaviors.
Both the driving recorder and the OBD system are accurate acquisition means bound to the vehicle, but in actual scenarios it is difficult to directly obtain OBD data for identifying dangerous driving behaviors. Meanwhile, OBD systems are developed independently by vehicle manufacturers: on the one hand, because of differences in vehicle hardware, the OBD data obtainable from different vehicle models differ; on the other hand, different vehicle families (European, Japanese, American) follow different OBD protocols, and some niche manufacturers even follow proprietary protocols, which makes decrypting and parsing OBD data difficult. Thus, the diversity of OBD data and protocols presents challenges for collecting driving behavior data across different vehicles.
In view of the above, according to an embodiment of the present invention, a data collection system 100 is provided to collect, from multiple aspects, the state data of a vehicle and other critical data during driving. The multi-source data are then processed to analyze the driving behavior patterns behind them, and the data that can represent dangerous driving behaviors are determined. These data can serve as annotation data for subsequent analysis of dangerous driving behavior. According to an embodiment of the invention, dangerous driving behaviors include at least: rapid acceleration, rapid deceleration, rapid turning, playing with a mobile phone, making a phone call, and the like.
In one embodiment, an insurance company prices insurance differently for different users according to the risk behaviors they exhibit while driving and according to how the vehicle is used. This insurance model (algorithm) depends heavily on how well dangerous driving behaviors are recognized, and collecting high-quality annotation data for dangerous driving behaviors is key to improving the effect of the algorithm and the model.
In yet another embodiment, an electronic map navigation application provides a driving scoring function that scores each self-driven navigation route of the user. The driving score is built around the "dangerous driving behavior" events identified by the algorithm. Therefore, high-quality annotation data for dangerous driving behaviors is key to computing accurate scores.
FIG. 1 shows a schematic diagram of a data acquisition system 100 according to one embodiment of the invention. As shown in fig. 1, the data acquisition system 100 includes: the system comprises a data acquisition device 110, at least one camera 120, a vehicle-mounted device 130, a mobile terminal 140 and a server 150. According to one implementation, the data acquisition device 110 is coupled to the camera 120 and the in-vehicle device 130, respectively. In addition, the mobile terminal 140 may be bound with the data collection device 110 through the server 150.
The onboard device 130 is, for example, an OBD box arranged on the vehicle for collecting vehicle state data. The vehicle state data include at least one or more of the following: vehicle model, average fuel consumption, instantaneous fuel consumption, remaining range, vehicle speed, engine speed, light state, hand brake state, seat belt state, door state, window state, steering angle, battery voltage, water temperature, engine oil temperature, fuel percentage and battery percentage.
The mobile terminal 140 is generally placed in the vehicle and acquires driving state data through the various sensors arranged in it, including positioning data (e.g., GNSS (Global Navigation Satellite System) data), IMU (Inertial Measurement Unit) data (e.g., acceleration, rotation angle, etc.), proximity (the distance between the mobile terminal 140 and an obstacle in front of it, measured by a proximity sensor), motion state, orientation of the phone, whether the driver is receiving a call, ambient light intensity, and the like.
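The two data streams could, for instance, be modeled as the following records; this is a minimal sketch in which the field selection is an illustrative subset of the quantities listed above, not a definition given by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class VehicleStateRecord:
    """One sample from the on-board (OBD) device; fields are an illustrative subset."""
    timestamp: float           # Unix time in seconds
    speed_kmh: float           # vehicle speed
    engine_rpm: float          # engine speed
    steering_angle_deg: float  # steering angle
    hand_brake_on: bool        # hand brake state
    instant_fuel_lph: float    # instantaneous fuel consumption

@dataclass
class DrivingStateRecord:
    """One sample from the mobile terminal's sensors; fields are an illustrative subset."""
    timestamp: float
    lat: float                 # GNSS latitude
    lon: float                 # GNSS longitude
    accel_xyz: tuple           # IMU acceleration (m/s^2)
    gyro_xyz: tuple            # IMU rotation rate (rad/s)
    in_call: bool              # whether the driver is receiving a call
```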
In addition, a data collection application may be installed on the mobile terminal 140; by operating the application (APP), the user selects the collection task and events and provides input according to the related instructions, thereby realizing human-computer interaction between the user and the mobile terminal 140.
According to one embodiment of the present invention, the data acquisition system 100 comprises at least two cameras 120, as shown in fig. 1. One is arranged near the brake pedal to collect video image data of the driver operating the brake pedal (e.g., depressing or releasing it); the other is arranged near the driver's seat to capture video image data containing the driver's face. It should be noted that this is only an example, and the embodiments of the present invention do not limit the cameras 120. Those skilled in the art can increase or decrease the number of cameras 120, or adjust their installation positions and collection objects, according to the requirements of the collection scene.
In one embodiment, the data collection device 110 is provided as hardware external to the vehicle-mounted device 130 and is powered by it. According to one embodiment of the invention, the data acquisition device 110 is fixed inside the vehicle near the cigarette lighter. Preferably, a plurality of screw holes are provided at the edge of the data collection device 110 to fix it. It should be appreciated that the data collection device 110 is typically arranged around the center console of the vehicle, and embodiments of the present invention are not limited in this regard.
In one embodiment, the data acquisition device 110 is implemented as a micro computing-and-storage device in the form of a metal box with multiple communication interfaces, carrying a Rockchip RK3288 processor. According to the embodiment of the invention, the data acquisition device 110 connects with each camera 120 through a USB communication interface, and with the vehicle-mounted device 130 through a CAN communication interface.
In addition, the main board inside the data acquisition device 110 carries multiple kinds of network communication hardware, supporting functions such as WiFi, 4G and Bluetooth. Meanwhile, a detachable signal-amplifying transmitter is arranged outside the data acquisition device 110.
Further, in an embodiment according to the present invention, the data collection device 110 runs a Linux operating system and has a corresponding application installed, enabling it to communicate with the in-vehicle device 130 and to parse the acquired data. It should be understood that the operating system carried by the data acquisition device 110 may also be Android, AliOS, or another currently known or future operating system, which is not limited in the embodiments of the present invention.
Further, a two-dimensional code image is arranged on the outside of the data collection device 110 (for example, without limitation, pasted onto it); the two-dimensional code, as the identifier of the data collection device 110, can be bound with the mobile terminal 140 to establish a communication connection between the mobile terminal 140 and the data collection device 110.
In addition, the data collection device 110 also has a temporary storage module to store vehicle state data from the in-vehicle device 130 and various image data from the camera 120.
FIG. 2 illustrates a flow diagram of the operation of the data acquisition device 110 according to one embodiment of the present invention.
According to the embodiment of the invention, when the data acquisition device 110 is connected to the vehicle-mounted device 130, it is powered by the vehicle-mounted device 130 and starts automatically, connecting to the network after startup. After startup, the data acquisition device 110 sends a heartbeat signal to the server 150 every first time period (e.g., 5 seconds) so that the server 150 can monitor the heartbeat status of the data acquisition device 110.
When a user (e.g., a driver) selects a collection task on the mobile terminal 140 and scans the two-dimensional code image on the data acquisition device 110, the server 150 generates a travel identifier and returns it to the mobile terminal 140. Meanwhile, the server 150 returns the travel identifier to the data collection device 110 upon receiving the next heartbeat signal, so the data acquisition device 110 also obtains the travel identifier through the heartbeat mechanism. While a collection task is being executed, the data collection device 110 keeps sending heartbeat signals to the server 150 every first time period, and in response to each heartbeat the server 150 keeps returning the travel identifier, until the collection task is finished and the data collection device 110 receives a response from the server 150 in which the travel identifier is empty.
The data acquisition device 110 monitors the travel identifier: as long as it is not empty, the data acquisition device 110 acquires the vehicle state data from the in-vehicle device 130 and the image data from the cameras 120, caches them as the first trip data, and sends the first trip data to the server 150 every second time period (e.g., 2 minutes). The first and second time periods are not strictly limited; in some preferred embodiments the second time period is longer than the first. When the travel identifier is empty, the data acquisition device 110 stops acquisition.
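A minimal sketch of this device-side behaviour is given below, assuming hypothetical `server`, `obd` and camera interfaces (their method names are placeholders, not part of this disclosure); the 5-second and 2-minute intervals follow the example durations in the text.

```python
import time

HEARTBEAT_INTERVAL_S = 5   # "first time period" in the example above
UPLOAD_INTERVAL_S = 120    # "second time period" in the example above

def acquisition_loop(server, obd, cameras):
    """Heartbeat every 5 s; while the returned travel identifier is non-empty, buffer
    vehicle state and image data and upload the buffered first trip data every 2 minutes.
    `server`, `obd` and each camera are assumed interfaces, not a real API."""
    buffer, last_upload = [], time.monotonic()
    while True:
        trip_id = server.heartbeat()          # response carries the travel identifier
        if trip_id is None:                   # empty travel identifier: stop collecting
            break
        buffer.append((obd.read(), [cam.read() for cam in cameras]))
        now = time.monotonic()
        if now - last_upload >= UPLOAD_INTERVAL_S:
            server.upload_first_trip_data(trip_id, buffer)
            buffer, last_upload = [], now
        time.sleep(HEARTBEAT_INTERVAL_S)
```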
In addition, after startup the data acquisition device 110 periodically (e.g., every 10 seconds) checks whether a new version is available, and updates itself when a new version is detected.
With reference to fig. 1 and fig. 2, taking one collection task as an example, the following briefly describes how the data acquisition system 100 of the present application generates annotation data indicating dangerous driving behavior based on the acquired data.
It should be noted that, according to the embodiment of the present invention, when the data acquisition system 100 is used for data collection, an open training ground (e.g., the training ground of a driving school) is usually selected that contains as many kinds of road conditions as possible (e.g., straight sections, curves, ramps, etc.). Meanwhile, to ensure that the collection process is carried out safely and effectively, drivers with rich driving experience (such as driving school coaches) are chosen to drive the vehicle and complete the operations specified in the collection task.
Before the collection process begins, a user (e.g., a driver) logs into the data collection application installed on the mobile terminal 140 and selects a set of collection tasks. One set of collection tasks is a collection of many different types of dangerous driving behavior events. Dangerous driving behaviors include rapid acceleration, rapid deceleration, rapid turning, playing with a mobile phone, making a phone call and the like, and each dangerous driving behavior can cover various situations; for example, the rapid acceleration type contains the following events: low-speed rapid acceleration, medium-speed rapid acceleration, high-speed rapid acceleration, rapid acceleration at a traffic light, rapid acceleration after a turn, and rapid acceleration from a standstill. A set of collection tasks can thus be represented as {3 rapid accelerations, 2 rapid turns, 2 rapid decelerations}, where rapid deceleration, rapid turning and so on are different event types; specific events within the same event type differ in their requirements, which generally concern driving states such as vehicle speed and execution time.
After selecting a group of collection tasks, the user also scans the two-dimensional code image on the data acquisition device 110 with the mobile terminal 140, and the mobile terminal 140 sends it to the server 150 to request binding. After receiving the binding request, the server 150 checks whether the binding environment exists and, if so, generates a travel identifier and distributes it to the mobile terminal 140 and the data acquisition device 110. At this point the mobile terminal 140 and the data collection device 110 are in a bound state, and execution of the collection task begins. In one embodiment, checking the binding environment means verifying whether the data acquisition device 110 is in a normal networking state in which data can be uploaded; this is verified through the heartbeat detection described above: if the server 150 has received a heartbeat signal from the data acquisition device 110 within the first time period, the data acquisition device 110 is in a normal networking state. After receiving the travel identifier, the data collection device 110 instructs the in-vehicle device 130 and the cameras 120 to start collecting data, and sends the first trip data to the server 150 every second time period. For this process, refer to the description of fig. 2, which is not repeated here.
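A minimal server-side sketch of this binding check follows, assuming a hypothetical in-memory store and device identifier; it illustrates the heartbeat-based networking check under those assumptions and is not the actual implementation of the server 150.

```python
import time
import uuid

HEARTBEAT_TIMEOUT_S = 5    # device counts as online if it sent a heartbeat within this window

last_heartbeat = {}        # device_id -> last heartbeat timestamp (assumed in-memory store)
trip_of_device = {}        # device_id -> currently assigned travel identifier

def on_heartbeat(device_id):
    """Record a heartbeat; the response carries the travel identifier (None when no task runs)."""
    last_heartbeat[device_id] = time.time()
    return trip_of_device.get(device_id)

def on_binding_request(device_id):
    """Handle a binding request produced by scanning the device's two-dimensional code:
    verify the device is in a normal networking state, then generate and distribute a travel id."""
    online = time.time() - last_heartbeat.get(device_id, 0) <= HEARTBEAT_TIMEOUT_S
    if not online:
        return None                      # binding environment not satisfied
    trip_id = uuid.uuid4().hex           # travel identifier for this collection task
    trip_of_device[device_id] = trip_id
    return trip_id                       # returned to the mobile terminal
```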
In addition, while returning the travel identifier to the mobile terminal 140, the server 150 sends the configuration information corresponding to the collection task to the mobile terminal 140. The configuration information includes at least the event type, the event identifier, the expected execution time and the instruction template corresponding to each event type, for each event in the collection task. The instruction template is used to prompt the user with action instructions when executing the corresponding event.
In one embodiment, the configuration information is pre-generated by the server 150. Taking the instruction template as an example, it may include a voice instruction template and a text instruction template with the same content. The text instruction template is displayed on the mobile terminal 140 as text to prompt the user, while the voice instruction template prompts the user by voice playback. Most of the fixed content in the voice instruction template is recorded by a real person to ensure that the speech is continuous and clear, while part of the content (mainly numbers and scene states) varies to meet the requirement of collection diversity.
In one embodiment, rapid acceleration and rapid deceleration are both based on pedal operation and their instruction templates are similar, whereas the templates for playing with the phone and making a call emphasize more complicated state scenes, and so on. The specific data and scenes of the instruction templates therefore differ between events, which highlights differences in the degree of danger and is extremely valuable for training dangerous driving behavior recognition algorithms. Several example groups of instruction templates according to embodiments of the present invention are shown below, without limitation (a filled-in example is sketched after the templates). The blanks "____" indicate the operation data (including at least one of speed, distance, time, direction, action to execute, and the like) to be filled in when event description information is subsequently generated for each specific event, and are not described further here.
a. Instruction template for rapid acceleration and rapid deceleration events
Please fasten the seat belt and ensure a safe distance of ____ m straight ahead. Please bring the initial speed of the vehicle to ____ km/h (the initial speed is 0 for rapid acceleration from a standstill or at a traffic light; in that case please keep the vehicle still). After a 5-second countdown, please ______ (fill in the action, such as flooring the accelerator, slamming the brake, releasing the accelerator, releasing the brake, etc.).
b. Instruction template for rapid turn events
Please fasten the seat belt and ensure a curve _____ meters ahead. Please bring the initial speed of the vehicle to ____ km/h. After a 5-second countdown, please turn the steering wheel to the ___ (left or right) to go through the turn.
c. Instruction template for phone-playing events
Please fasten the seat belt; playing with the mobile phone is performed by the front passenger. Please bring the vehicle speed to ____ km/h. After the 5-second countdown, the front passenger places the mobile phone at _____ and begins playing with it (watching a video, playing a game, etc.) for ____ seconds.
d. Instruction template for phone call events
Fastening the seat belt and answering the call are performed by the front passenger. Please bring the vehicle speed to ____ km/h and place the mobile phone on the phone mount. After the 5-second countdown a test call will be made; let the phone ring for ____ seconds. Please __________ (answer the phone for _____ seconds; stop the vehicle while holding the phone for ____ seconds; drive normally until the other party hangs up; drive normally and hang up; stop the vehicle while holding the phone and hang up).
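As an illustration of how the blanks are filled, event description information can be rendered from such a template as in the sketch below; the template wording, parameter names and values are hypothetical and are not the exact text used by the system.

```python
# Hypothetical rendering of instruction template (a) for a "low-speed rapid acceleration"
# event; numeric values and the action text are illustrative only.
TEMPLATE_A = (
    "Please fasten the seat belt and ensure a safe distance of {distance_m} m straight ahead. "
    "Please bring the initial speed of the vehicle to {initial_speed_kmh} km/h. "
    "After a 5-second countdown, please {action}."
)

event_description = TEMPLATE_A.format(
    distance_m=100,
    initial_speed_kmh=20,
    action="floor the accelerator",
)
print(event_description)
```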
After collection starts, the mobile terminal 140 determines an execution order for the events in the collection task; the collection task interface of the mobile terminal 140 then displays, in that order and in a first display mode, the events corresponding to the collection task.
FIG. 3A shows a schematic diagram of a collection task interface according to one embodiment of the invention, illustrating the rapid acceleration events in one collection task. The collection task interface may display basic information corresponding to each event, such as the initial speed, the training action, the execution time and the number of executions. As shown in fig. 3A, the "medium-speed rapid acceleration training" and "traffic-light rapid acceleration training" events are shown in the first display mode, while the "low-speed rapid acceleration training" event is shown in a second display mode. The second display mode differs from the first and is used for events that have already been executed.
In one embodiment, the execution order of the events in the collection task is determined based on road information. The road information comprises static and/or dynamic information on all or some of the objects within the road, for example whether the road is wide, whether there is a curve, and whether there are obstacles or moving objects within the road range. The road information may be obtained through V2X (Vehicle to Everything) technology, which is not limited by the embodiments of the present invention.
In another embodiment, the execution order of the events may also be chosen by the driver according to the actual road conditions. For example, on an open road the driver may choose to perform a rapid acceleration event on a long, open straight section, and then perform a rapid turn event before reaching the intersection.
The driver then selects one of the events on the collection task interface. Generally the driver selects events in the displayed order, but this is not a limitation. The mobile terminal 140 outputs the event description information of the selected event to guide the user to execute the event accordingly. Specifically, the mobile terminal 140 generates the event description information of the event from the instruction template corresponding to its event type and then displays it on the interface; FIG. 3B shows a schematic diagram of a display interface of event description information according to an embodiment of the present invention. The event description information may include safety precautions during collection, detailed driving requirements, and the like. As shown in FIG. 3B, the sub-steps give the specific operating requirements for executing a "low-speed rapid acceleration" event. Meanwhile, the mobile terminal 140 plays the event description information as voice instructions.
In one embodiment, after collection of a specific event starts, a roughly 30-second voice broadcast prompts the safety precautions and driving details for the collection, followed by a countdown of a certain duration (determined by the collection requirement of the specific event; this predetermined duration can also form part of the event description information). The driver performs the operation according to the event description information during the countdown and, after fulfilling the requirement, performs a confirmation operation to indicate that the event has been completely executed. Continuing with fig. 3B, the driver can slide the "training complete" control on the interface, and the mobile terminal receives the driver's confirmation operation.
During the course of the driver performing the event, the corresponding sensors in the mobile terminal 140 collect driving state data of the vehicle.
Meanwhile, during this process, in response to the user selecting the event, the mobile terminal 140 records the current time as the start time of the event; in response to the user's confirmation after completing the event, it records the current time as the end time of the event. The period between the start time and the end time is then taken as the actual execution time of the event.
Optionally, the mobile terminal 140 stores the event identifier, the driving state data and the actual execution time corresponding to the event in association with one another.
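A minimal sketch of how the mobile terminal 140 could record the actual execution time and associate it with the event identifier and driving state data is shown below; the class and field names are illustrative assumptions rather than the application's actual storage layout.

```python
import time

class EventTimer:
    """Records the actual execution time of one event as described above."""
    def __init__(self, event_id):
        self.event_id = event_id
        self.start_time = None
        self.end_time = None

    def on_event_selected(self):
        # User selects the event on the collection task interface.
        self.start_time = time.time()

    def on_event_confirmed(self):
        # User confirms completion of the event.
        self.end_time = time.time()

    def record(self, driving_state_samples):
        """Associate event identifier, driving state data and actual execution time."""
        return {
            "event_id": self.event_id,
            "actual_execution_time": (self.start_time, self.end_time),
            "driving_state_data": driving_state_samples,
        }
```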
After an event is completed, the mobile terminal 140 returns to the collection task interface and displays the executed event in the second display mode, different from the first display mode, where it is in an unselectable state. As shown in fig. 3A, the "low-speed rapid acceleration" event has been executed and cannot be selected. The driver then selects other events to collect. If the collection countdown expires before the driver has fulfilled the corresponding collection requirement, the driver must mark the event as incomplete or failed, and the failed data are not uploaded. The driver can re-select an incomplete event under suitable conditions and collect it again.
The driver repeats this process until all specific events have been collected, and can then end the collection task. When the collection task is completed, the mobile terminal 140 sends the event identifier, the driving state data and the actual execution time of each event to the server 150 as the second trip data.
On the server 150 side, the first trip data are obtained from the data acquisition device 110 every second time period, and the second trip data are obtained from the mobile terminal 140 when the collection task ends. In this way, the server 150 can associate the first trip data with the second trip data based on the travel identifier of the current collection task.
In one collection task, because a certain interval (usually less than 5 seconds) exists between the moment the mobile terminal 140 receives the travel identifier and the moment the data collection device 110 receives it, the collection start times of the first trip data and the second trip data may differ. Likewise, the mobile terminal 140 ends collection after receiving the user's confirmation operation, whereas the data acquisition device 110 ends collection when it no longer receives the travel identifier, so the collection end times of the first trip data and the second trip data also differ. In addition, constrained by network transmission, the data may be split into multiple segments for uploading to the server 150. Moreover, the first trip data and the second trip data involve multi-modal data, including but not limited to GPS positioning data, OBD data, video image data and instruction template data, and the acquisition frequencies of the different types of data are not consistent. Therefore, in embodiments according to the present invention, aligning the first trip data and the second trip data in time and space is essential for obtaining high-quality annotation data.
In one embodiment, considering that the actual execution time of the dangerous driving behavior is controlled by the instructions (i.e., the data of the instruction template), which is reflected in the human-machine interaction between the mobile terminal 140 and the driver, the server 150 first determines the time interval corresponding to execution of the collection task. In one embodiment, the server 150 determines the expected execution time of each event from the configuration information and then judges whether the expected execution time of each event is consistent with its actual execution time; if so, the actual execution time is taken as the time interval corresponding to that event.
The first trip data and the second trip data are then processed respectively according to the determined time interval to obtain the corresponding third trip data and fourth trip data. In one embodiment, the data falling within the time interval are intercepted from the vehicle state data and the image data respectively as the third trip data, and the data falling within the time interval are intercepted from the driving state data as the fourth trip data.
Next, the third trip data and the fourth trip data are aligned based on the acquisition frequencies of the first trip data and the second trip data, yielding the aligned third trip data and aligned fourth trip data that serve as annotation data. In one embodiment, among the sensors of the mobile terminal 140 the GNSS acquisition frequency is 1 Hz and the IMU acquisition frequency is up to 10 Hz; the acquisition frequency of the cameras 120 is typically 20-30 Hz; and the acquisition frequency of the in-vehicle device 130 is 10 Hz. Based on these acquisition frequencies, the alignment frequency is determined to be 1 Hz. The third trip data and the fourth trip data are then sampled at the alignment frequency to obtain the aligned third trip data and aligned fourth trip data as annotation data.
In other embodiments, considering situations such as uncertain acquisition frequencies or data missing because of GPS signal loss and similar problems, after the third trip data and the fourth trip data are obtained by sampling, the missing data are supplemented through interpolation, and the interpolated data are used as the final annotation data.
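A minimal sketch of this truncation, 1 Hz resampling and interpolation step using pandas is shown below, assuming each trip-data stream has already been parsed into a DataFrame with a DatetimeIndex and numeric columns; it is an illustration under those assumptions, not the patented processing itself.

```python
import pandas as pd

def align_to_interval(df, start, end, align_hz=1):
    """Truncate one trip-data stream to the event's time interval, resample it to the
    alignment frequency (1 Hz in the example above) and fill gaps by interpolation.
    Assumes `df` is indexed by a DatetimeIndex and contains numeric columns."""
    clipped = df.loc[start:end]                        # intercept data inside the time interval
    period = pd.Timedelta(seconds=1 / align_hz)
    resampled = clipped.resample(period).mean()        # sample at the alignment frequency
    return resampled.interpolate(method="time")        # supplement missing samples by interpolation

# Hypothetical usage: align a 10 Hz OBD stream and a 10 Hz IMU stream to 1 Hz.
# third_trip = align_to_interval(obd_df, event_start, event_end)
# fourth_trip = align_to_interval(imu_df, event_start, event_end)
```

Resampling to the lowest common frequency (1 Hz here) keeps every aligned sample backed by at least one real measurement from each source, which is one reasonable way to realize the alignment described above.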
According to the data acquisition system 100 of the present invention, in addition to the external devices connected to the vehicle's central control system (i.e., the data acquisition device 110 with the coupled on-board device 130 and cameras 120) for parsing and uploading OBD and other data, the mobile terminal 140 is added, so that the data change characteristics corresponding to dangerous driving behaviors are depicted completely and from multiple angles. Moreover, the whole multi-source data acquisition process is simplified, reducing the understanding and communication costs of the collectors.
In addition, according to the data acquisition system 100 of the present invention, instruction templates are generated from different event configurations and drivers at a professional driving school actually execute the dangerous driving behaviors according to them, so the resulting annotation quality is higher than that obtained by annotating images afterwards. Meanwhile, the configuration information of an event can be understood as annotation information of finer granularity, which is extremely valuable for researching and optimizing dangerous driving behavior recognition.
In addition, according to the data acquisition system 100 of the present invention, the acquired data are coupled in time and space, which ensures the alignment and quality verification of the data.
According to one embodiment of the invention, the data acquisition system 100 and portions thereof may be implemented by one or more computing devices. FIG. 4 shows a schematic block diagram of a computing device 400 according to one embodiment of the invention.
As shown in FIG. 4, in a basic configuration 402, a computing device 400 typically includes a system memory 406 and one or more processors 404. A memory bus 408 may be used for communicating between the processor 404 and the system memory 406.
Depending on the desired configuration, processor 404 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 404 may include one or more levels of cache, such as a level one cache 410 and a level two cache 412, a processor core 414, and registers 416. The example processor core 414 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 418 may be used with the processor 404, or in some implementations the memory controller 418 may be an internal part of the processor 404.
Depending on the desired configuration, system memory 406 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 406 may include an operating system 420, one or more applications 422, and data 424. In some implementations, the application 422 can be arranged to execute instructions on an operating system with the data 424 by one or more processors 404.
Computing device 400 also includes storage 432, storage 432 including removable storage 436 and non-removable storage 438, each of removable storage 436 and non-removable storage 438 connected to a storage interface bus 434.
Computing device 400 may also include an interface bus 440 that facilitates communication from various interface devices (e.g., output devices 442, peripheral interfaces 444, and communication devices 446) to the basic configuration 402 via bus/interface controller 430. The example output device 442 includes a graphics processing unit 448 and an audio processing unit 450. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 452. Example peripheral interfaces 444 may include a serial interface controller 454 and a parallel interface controller 456, which may be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 458. An example communication device 446 may include a network controller 460, which may be arranged to facilitate communications with one or more other computing devices 462 over a network communication link via one or more communication ports 464.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures or program modules in a modulated data signal, and may include any information delivery media, such as carrier waves or other transport mechanisms. A "modulated data signal" is a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or a dedicated wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In general, computing device 400 may be implemented as part of a small-form-factor portable (or mobile) electronic device such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. In one embodiment according to the invention, the computing device 400 may also be implemented as a micro-computing module or the like. The embodiments of the present invention are not limited thereto.
In an embodiment in accordance with the invention, the computing device 400 is configured to perform the data acquisition method and/or the data processing method according to the invention, and the application 422 of the computing device 400 includes a plurality of program instructions for implementing these methods.
FIG. 5 shows a flow diagram of a method 500 for generating annotation data for dangerous driving behavior according to one embodiment of the invention. The method 500 is suitable for execution in the server 150. It should be noted that the method 500 is complementary to the foregoing, and repeated portions are not described in detail.
As shown in FIG. 5, the method 500 begins at step S510. In step S510, the server 150 binds the mobile terminal 140 with the data collection device 110 disposed on the vehicle by means of a travel identifier, in response to a binding request from the mobile terminal 140.
According to one embodiment, the server 150 generates a travel identifier upon receiving a request from the mobile terminal 140 to bind with the data collection device 110, and returns the travel identifier to the mobile terminal 140. Thereafter, when a heartbeat signal from the data collection device 110 is received, the travel identifier is sent to the data collection device 110. The heartbeat signal is sent to the server 150 at intervals of a first duration after the data collection device 110 is started, so that the server 150 can monitor the networking state of the data collection device 110.
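To make the binding and heartbeat exchange concrete, a minimal server-side sketch in Python follows. The class, method names and in-memory dictionaries are illustrative assumptions, not part of the disclosure; the two-interval threshold used to judge the networking state is likewise only one possible choice.

```python
import time
import uuid

# Minimal sketch of the binding / heartbeat flow described above; names are
# illustrative assumptions, not part of the patent disclosure.
class BindingServer:
    def __init__(self, heartbeat_interval_s: float = 5.0):
        self.heartbeat_interval_s = heartbeat_interval_s  # the "first duration"
        self.bindings = {}            # device_id -> mobile_terminal_id
        self.pending_travel_ids = {}  # device_id -> travel identifier awaiting delivery
        self.last_heartbeat = {}      # device_id -> timestamp of the last heartbeat

    def handle_bind_request(self, mobile_terminal_id: str, device_id: str) -> str:
        """Generate a travel identifier for the pair and return it to the mobile terminal."""
        travel_id = uuid.uuid4().hex
        self.bindings[device_id] = mobile_terminal_id
        self.pending_travel_ids[device_id] = travel_id
        return travel_id

    def handle_heartbeat(self, device_id: str):
        """Record the heartbeat (networking-state monitoring); if a travel
        identifier is pending for this device, deliver it."""
        self.last_heartbeat[device_id] = time.time()
        return self.pending_travel_ids.pop(device_id, None)

    def device_online(self, device_id: str) -> bool:
        """Consider the device online if a heartbeat arrived within two intervals."""
        last = self.last_heartbeat.get(device_id)
        return last is not None and time.time() - last < 2 * self.heartbeat_interval_s
```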
Subsequently, in step S520, the server 150 sends configuration information corresponding to the collection task to the mobile terminal 140, so that the mobile terminal 140 outputs event description information of each event based on the configuration information to guide the user to execute each event according to the event description information, wherein the collection task includes a plurality of different types of events pointing to dangerous driving behaviors.
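As a rough illustration of what such configuration information might look like, the sketch below models one collection task. The field names, event types and example values are assumptions for illustration only; the text requires an event type, an event identifier, an expected execution time and an instruction template per event type, and the expected execution time is modelled here as a duration in seconds.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EventConfig:
    event_type: str             # e.g. "hard_braking", "sharp_turn" (illustrative)
    event_id: str               # identifier of the event within the collection task
    expected_duration_s: float  # expected execution time of the event (assumed to be a duration)
    instruction_template: str   # template used to render the event description information

# A collection task combines several different event types pointing to dangerous driving behaviors.
collection_task: List[EventConfig] = [
    EventConfig("hard_braking", "evt-001", 5.0,
                "Accelerate to about {speed} km/h, then brake firmly to a full stop."),
    EventConfig("sharp_turn", "evt-002", 8.0,
                "Take the next turn at roughly {speed} km/h without slowing down beforehand."),
]

# The mobile terminal renders the template into the event description shown to the driver.
description = collection_task[0].instruction_template.format(speed=40)
```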
Subsequently, in step S530, based on the travel identifier, the server 150 acquires the first travel data from the data collection device 110 and the second travel data from the mobile terminal 140, respectively.
According to one embodiment, the server 150 obtains the first travel data from the data collection device 110 at intervals of a second duration. The first travel data is the data of the vehicle during execution of the collection task, collected by the data collection device 110 from the time it receives the travel identifier. The first travel data includes: the vehicle state data collected by the in-vehicle device 130, and the image data collected by the at least one camera 120 while the collection task is performed.
Meanwhile, when the execution of the collection task is completed, the server 150 acquires the second travel data from the mobile terminal 140. The second travel data is the data of the vehicle during execution of the collection task, collected by the mobile terminal 140 from the time it receives the travel identifier. The second travel data includes: the driving state data of the vehicle during execution of the collection task, and the actual execution time of each event.
Thereafter, based on the travel identifier, the server 150 may store the first travel data in association with the second travel data.
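A possible in-memory representation of the two records and their association by travel identifier is sketched below; the field names and sample formats are assumptions chosen to mirror the description above, not a prescribed data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class FirstTravelData:              # collected by the data collection device 110
    vehicle_state: List[dict] = field(default_factory=list)  # time-stamped samples from the in-vehicle device
    images: List[dict] = field(default_factory=list)         # time-stamped frames from the camera(s)

@dataclass
class SecondTravelData:             # collected by the mobile terminal 140
    driving_state: List[dict] = field(default_factory=list)  # e.g. GPS / accelerometer samples
    event_times: Dict[str, Tuple[float, float]] = field(default_factory=dict)  # event_id -> (start, end)

# Both records are keyed on the same travel identifier so they can be processed together later.
travel_store: Dict[str, Tuple[FirstTravelData, SecondTravelData]] = {}

def store_travel(travel_id: str, first: FirstTravelData, second: SecondTravelData) -> None:
    travel_store[travel_id] = (first, second)
```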
Subsequently, in step S540, a time interval corresponding to the execution of the collection task is determined from the configuration information.
In one embodiment, the expected execution time of each event is determined from the configuration information, and it is then judged whether the expected execution time of each event is consistent with its actual execution time. If they are consistent, the actual execution time is taken as the time interval corresponding to that event; if they are inconsistent, the data collected for the task is regarded as erroneous and must be collected again.
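One way to implement this consistency check, reusing the EventConfig sketch above, is shown below. The tolerance value is an assumption, since the text only requires the two times to be "consistent".

```python
from typing import Dict, Optional, Tuple

def time_intervals_or_none(config, actual_times: Dict[str, Tuple[float, float]],
                           tolerance_s: float = 2.0) -> Optional[Dict[str, Tuple[float, float]]]:
    """Return {event_id: (start, end)} if every event's actual duration matches its
    expected execution time; return None if any event is inconsistent (re-collect)."""
    intervals = {}
    for event in config:                               # EventConfig objects from the earlier sketch
        times = actual_times.get(event.event_id)
        if times is None:
            return None                                # event never executed -> re-collect
        start, end = times
        if abs((end - start) - event.expected_duration_s) > tolerance_s:
            return None                                # inconsistent -> re-collect
        intervals[event.event_id] = (start, end)
    return intervals
```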
As described above, the actual execution time of an event is generated based on the user's input (i.e., the user's interaction with the mobile terminal 140), and specifically includes: a start time of the event, generated in response to the user's selection of the event in the collection task; and an end time of the event, generated in response to the user's confirmation operation after the event is finished.
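On the mobile-terminal side this amounts to two UI callbacks, roughly as sketched here; the handler names and the shared dictionary are hypothetical.

```python
import time
from typing import Dict, List, Optional

actual_times: Dict[str, List[Optional[float]]] = {}

def on_event_selected(event_id: str) -> None:
    """Called when the user selects an event in the collection task."""
    actual_times[event_id] = [time.time(), None]   # record the start time

def on_event_confirmed(event_id: str) -> None:
    """Called when the user confirms that the event has been completed."""
    actual_times[event_id][1] = time.time()        # record the end time
```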
Subsequently, in step S550, the first travel data and the second travel data are respectively processed according to the time interval to obtain corresponding third travel data and fourth travel data.
According to one embodiment, the data within the time interval are respectively intercepted from the vehicle state data and the image data as the third travel data, and the data within the time interval are intercepted from the driving state data as the fourth travel data.
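The "interception" step can be read as cropping time-stamped samples to the event's interval, for example as follows. Samples are assumed to be dicts carrying a 't' timestamp, matching the earlier sketches; this is an illustration, not the disclosed implementation.

```python
def crop_to_interval(samples, start: float, end: float):
    """Keep only the samples whose timestamp falls inside [start, end]."""
    return [s for s in samples if start <= s["t"] <= end]

def build_third_and_fourth(first, second, interval):
    """Third travel data: vehicle state + images inside the interval;
    fourth travel data: driving state inside the interval."""
    start, end = interval
    third = {
        "vehicle_state": crop_to_interval(first.vehicle_state, start, end),
        "images": crop_to_interval(first.images, start, end),
    }
    fourth = crop_to_interval(second.driving_state, start, end)
    return third, fourth
```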
Subsequently, in step S560, the third travel data and the fourth travel data are aligned based on the acquisition frequencies of the first travel data and the second travel data, and the aligned third travel data and the aligned fourth travel data are obtained as the annotation data.
According to one embodiment, an alignment frequency is determined based on the acquisition frequencies of the first travel data and the second travel data. The third travel data and the fourth travel data are then sampled based on the alignment frequency to obtain the aligned third travel data and the aligned fourth travel data as the annotation data.
Considering that the collected data may suffer from problems such as missing data, the step of sampling the third travel data and the fourth travel data based on the alignment frequency further includes: sampling the third travel data and the fourth travel data respectively based on the alignment frequency to obtain sampled third travel data and sampled fourth travel data; and, if data are missing from the sampled third travel data and/or the sampled fourth travel data, filling in the missing data by interpolation to obtain the aligned third travel data and/or the aligned fourth travel data. It should be appreciated that embodiments of the present invention do not limit the manner of interpolation.
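A minimal alignment sketch is given below. It resamples a single numeric channel (samples assumed sorted by timestamp) onto a uniform grid and bridges gaps by linear interpolation via numpy. Both the choice of the lower of the two acquisition frequencies as the alignment frequency and the use of linear interpolation are assumptions; the text leaves both open.

```python
import numpy as np

def align_channel(samples, start: float, end: float, align_hz: float):
    """Resample {'t': timestamp, 'v': value} points onto a uniform grid at align_hz.
    np.interp fills missing stretches by linear interpolation between neighbours."""
    grid = np.arange(start, end, 1.0 / align_hz)
    t = np.array([s["t"] for s in samples])   # assumed monotonically increasing
    v = np.array([s["v"] for s in samples])
    return grid, np.interp(grid, t, v)

def align_third_and_fourth(third_channel, fourth_channel, start, end, f1_hz, f2_hz):
    """Align one channel of third and fourth travel data on a common grid."""
    align_hz = min(f1_hz, f2_hz)              # assumed alignment frequency
    return (align_channel(third_channel, start, end, align_hz),
            align_channel(fourth_channel, start, end, align_hz))
```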
According to still further embodiments, after the aligned third travel data and/or the aligned fourth travel data are obtained, the method further comprises the following step:
In combination with the configuration information and the aligned third travel data and/or the aligned fourth travel data, checking whether the user's action when executing the collection task meets a preset requirement. The preset requirement is, for example, whether the action performed by the driver meets requirements on speed and duration, but is not limited thereto. If the preset requirement is met, the aligned third travel data and/or the aligned fourth travel data are used as annotation data indicating dangerous driving behavior.
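The validity check could then look roughly like this; the speed and duration thresholds are illustrative assumptions, since the text only gives speed and duration as examples of preset requirements.

```python
from typing import Sequence, Tuple

def meets_preset_requirements(interval: Tuple[float, float],
                              speed_samples: Sequence[float],
                              min_peak_speed_kmh: float = 30.0,
                              min_duration_s: float = 3.0) -> bool:
    """Return True if the driver's action lasted long enough and reached the
    required speed; only such runs are kept as annotation data."""
    start, end = interval
    duration_ok = (end - start) >= min_duration_s
    speed_ok = len(speed_samples) > 0 and max(speed_samples) >= min_peak_speed_kmh
    return duration_ok and speed_ok
```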
According to the method 500 of the present invention, collection tasks and their corresponding configuration information are generated from combinations of different types of events. The configuration information of an event can be understood as finer-grained annotation information, which is extremely valuable for studying and optimizing the recognition of dangerous driving behaviors.
Meanwhile, a professional driving-school driver actually performs the dangerous driving behaviors according to the instruction templates in the configuration information to complete a collection task; compared with annotating images after an accident has occurred, the resulting data annotation is of higher quality.
In addition, according to the method 500 of the present invention, the collected multi-source data are coupled in time and space, which ensures the alignment and quality of the data.
The invention also discloses:
a5, the method as in A4, wherein the data acquisition devices are respectively coupled with an on-board device and at least one camera, and the first trip data comprises: when the collection task is executed, vehicle state data collected through vehicle-mounted equipment and image data collected through at least one camera are acquired; the second trip data includes: and when the collection task is executed, the driving state data of the vehicle and the actual execution time of each event. A6, the method of A5, wherein the step of determining from configuration information a time interval corresponding to the execution of the collection task comprises: determining the expected execution time of each event from the configuration information; judging whether the expected execution time of each event is consistent with the actual execution time; and if the expected execution time is consistent with the actual execution time, taking the actual execution time as a time interval corresponding to each event. A7, the method of A5, wherein the actual execution time of the event is a time generated based on user input, comprising: in response to a user selection of an event in an acquisition task, a start time of the generated event; and responding to the confirmation operation of the user after the event is finished, and generating the end time of the event. A8, the method according to any one of a5-7, wherein the step of processing the first trip data and the second trip data respectively according to the time interval to obtain corresponding third trip data and fourth trip data comprises: respectively capturing corresponding data in the time interval from the vehicle state data and the image data to serve as third stroke data; and intercepting corresponding data in the time interval from the driving state data to serve as fourth travel data. A9, the method according to any one of a1-8, wherein the step of aligning the third stroke data and the fourth stroke data based on the collection frequency of the first stroke data and the second stroke data to obtain aligned third stroke data and aligned fourth stroke data as the annotation data comprises: determining an alignment frequency based on the acquisition frequencies of the first and second stroke data; and sampling the third stroke data and the fourth stroke data based on the alignment frequency to obtain aligned third stroke data and aligned fourth stroke data which are used as marking data. A10, the method of a9, wherein the step of sampling the third travel data and the fourth travel data based on the alignment frequency to obtain aligned third travel data and aligned fourth travel data further comprises: based on the alignment frequency, respectively sampling the third stroke data and the fourth stroke data to obtain sampled third stroke data and sampled fourth stroke data; and if data are missing in the sampled third stroke data and/or the sampled fourth stroke data, supplementing missing data through interpolation to obtain aligned third stroke data and/or aligned fourth stroke data. 
A11, the method of A10, further comprising, after the step of sampling the third travel data and the fourth travel data based on the alignment frequency to obtain the aligned third travel data and the aligned fourth travel data: checking, in combination with the configuration information, the aligned third travel data and the aligned fourth travel data, whether the user's action when executing the collection task meets a preset requirement; and, if the preset requirement is met, using the aligned third travel data and the aligned fourth travel data as annotation data indicating dangerous driving behavior.
B16, the data collection system of B15, wherein the mobile terminal is further adapted to generate a start time of an event in the collection task in response to a user selection of the event, and to generate an end time of the event in response to the user's confirmation operation after the event is finished.

B17, the data acquisition system of any one of B12-B16, wherein the data acquisition device further comprises a two-dimensional code image, so that the mobile terminal can be bound with the data acquisition device by scanning the two-dimensional code image.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
Claims (10)
1. A method for generating annotation data for dangerous driving behaviors, comprising the following steps:
in response to a binding request from a mobile terminal, binding the mobile terminal with a data acquisition device arranged on a vehicle through a travel identifier;
sending configuration information corresponding to an acquisition task to the mobile terminal so that the mobile terminal can output event description information of each event based on the configuration information to guide a user to execute each event according to the event description information, wherein the acquisition task comprises a plurality of different types of events pointing to dangerous driving behaviors;
respectively acquiring first travel data from the data acquisition device and second travel data from the mobile terminal based on the travel identifier;
determining a time interval corresponding to the execution of the acquisition task from the configuration information;
processing the first travel data and the second travel data respectively according to the time interval to obtain corresponding third travel data and fourth travel data; and
aligning the third travel data and the fourth travel data based on the acquisition frequencies of the first travel data and the second travel data to obtain aligned third travel data and aligned fourth travel data as annotation data.
2. The method of claim 1, wherein the step of binding the mobile terminal with a data collection device disposed on a vehicle by a travel identity in response to a binding request from the mobile terminal comprises:
upon receiving a request from the mobile terminal to be bound with the data acquisition device, generating a travel identifier and returning the travel identifier to the mobile terminal;
after receiving a heartbeat signal from the data acquisition device, sending the travel identifier to the data acquisition device, wherein the heartbeat signal is sent to the server at intervals of a first duration after the data acquisition device is started, so that the server can monitor the networking state of the data acquisition device.
3. The method of claim 1 or 2, wherein the configuration information comprises at least: the event type, the event identifier and the expected execution time of each event in the acquisition task, and the instruction template corresponding to each event type.
4. The method of any one of claims 1-3, wherein the step of respectively acquiring the first travel data from the data acquisition device and the second travel data from the mobile terminal based on the travel identifier comprises:
acquiring the first travel data from the data acquisition device at intervals of a second duration, wherein the first travel data is data of the vehicle during execution of the acquisition task, collected by the data acquisition device from the time it receives the travel identifier;
when the acquisition task is finished, acquiring the second travel data from the mobile terminal, wherein the second travel data is data of the vehicle during execution of the acquisition task, collected by the mobile terminal from the time it receives the travel identifier; and
storing, based on the travel identifier, the first travel data in association with the second travel data.
5. A data acquisition system comprising:
an on-board device adapted to collect vehicle state data during execution of an acquisition task, wherein the acquisition task comprises a plurality of different types of events pointing to dangerous driving behaviors;
at least one camera adapted to collect image data during execution of the acquisition task;
a mobile terminal arranged on the vehicle, adapted to collect driving state data during execution of the acquisition task, and further adapted to be bound with the data acquisition device through the server;
a data acquisition device disposed on the vehicle and coupled to the on-board device and the at least one camera, respectively, to acquire the vehicle state data and the image data; and
a server adapted to perform the method according to any one of claims 1-4, and to perform a correlation process on the driving state data, the vehicle state data and the image data to obtain annotation data indicative of dangerous driving behavior.
6. The data acquisition system of claim 5, wherein the data acquisition device is further adapted to be powered by the on-board device.
7. The data acquisition system of claim 4 or 5, wherein the data acquisition device is further adapted to,
send heartbeat signals to the server at intervals of a first duration after the data acquisition device is started, so that the server can monitor the networking state of the data acquisition device; and
after receiving the travel identifier from the server, start to collect the vehicle state data and the image data as the first travel data, and send the first travel data to the server at intervals of a second duration.
8. The data acquisition system according to any one of claims 4-7, wherein the mobile terminal is further adapted to,
acquire, after the acquisition task is determined, configuration information corresponding to the acquisition task from the server, wherein the configuration information comprises at least: the event type, the event identifier and the expected execution time of each event in the acquisition task, and the instruction template corresponding to each event type; and
in response to the user's selection of an event, generate and output event description information of the event based on the instruction template corresponding to the event, so as to guide the user to execute the event according to the event description information.
9. A computing device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-4.
10. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110895907.9A CN113591744B (en) | 2021-08-05 | 2021-08-05 | Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113591744A true CN113591744A (en) | 2021-11-02 |
CN113591744B CN113591744B (en) | 2024-03-22 |
Family
ID=78255397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110895907.9A Active CN113591744B (en) | 2021-08-05 | 2021-08-05 | Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113591744B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114724366A (en) * | 2022-03-29 | 2022-07-08 | 北京万集科技股份有限公司 | Driving assistance method, device, equipment, storage medium and program product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104077819A (en) * | 2014-06-17 | 2014-10-01 | 深圳前向启创数码技术有限公司 | Remote monitoring method and system based on driving safety |
US20170357866A1 (en) * | 2016-06-13 | 2017-12-14 | Surround.IO Corporation | Method and System for Providing Behavior of Vehicle Operator Using Virtuous Cycle |
CN107784587A (en) * | 2016-08-25 | 2018-03-09 | 大连楼兰科技股份有限公司 | A kind of driving behavior evaluation system |
CN109816811A (en) * | 2018-10-31 | 2019-05-28 | 杭州云动智能汽车技术有限公司 | A kind of nature driving data acquisition device |
CN110447214A (en) * | 2018-03-01 | 2019-11-12 | 北京嘀嘀无限科技发展有限公司 | A kind of system, method, apparatus and storage medium identifying driving behavior |
Also Published As
Publication number | Publication date |
---|---|
CN113591744B (en) | 2024-03-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||