CN113591744B - Method for generating annotation data for dangerous driving behavior, and data acquisition system - Google Patents

Method for generating annotation data for dangerous driving behavior, and data acquisition system

Info

Publication number
CN113591744B
Authority
CN
China
Prior art keywords
data
event
acquisition
vehicle
trip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110895907.9A
Other languages
Chinese (zh)
Other versions
CN113591744A
Inventor
张源源
苏锦华
汪磊
唐锐猊
张鸿飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Minmin Car Service Network Technology Co ltd
Original Assignee
Beijing Minmin Car Service Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Minmin Car Service Network Technology Co ltd filed Critical Beijing Minmin Car Service Network Technology Co ltd
Priority to CN202110895907.9A priority Critical patent/CN113591744B/en
Publication of CN113591744A publication Critical patent/CN113591744A/en
Application granted granted Critical
Publication of CN113591744B publication Critical patent/CN113591744B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/10Active monitoring, e.g. heartbeat, ping or trace-route
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources


Abstract

The invention discloses a data acquisition system comprising: a vehicle-mounted device adapted to collect vehicle state data while an acquisition task is executed, the acquisition task comprising a plurality of events of different types directed at dangerous driving behaviors; at least one camera adapted to collect image data while the acquisition task is executed; a mobile terminal arranged in the vehicle, adapted to collect driving state data while the acquisition task is executed and to be bound with the data acquisition device through a server; a data acquisition device arranged on the vehicle and coupled to the vehicle-mounted device and the at least one camera, respectively, to obtain the vehicle state data and the image data; and a server adapted to associate the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors. With the data acquisition system provided by the invention, high-quality annotation data for dangerous driving behaviors can be obtained.

Description

Method for generating annotation data for dangerous driving behavior, and data acquisition system
Technical Field
The invention relates to the field of computer technology, and in particular to a method for generating annotation data for dangerous driving behavior and a data acquisition system.
Background
With the continuous development of the automobile industry and rising expectations for the driving experience, the demand for avoiding traffic risks is increasing. Identifying dangerous driving behaviors is an important means of evaluating a driver's risk and preventing traffic accidents.
Meanwhile, dangerous driving behavior recognition has a wide range of application scenarios. For example, with the development of the Internet of Vehicles, automatic driving and big-data monitoring, fleet monitoring, driving-behavior risk assessment and driving-assistance services all quietly rely on trip monitoring, and dangerous driving behavior recognition is the part of trip monitoring with the greatest technological and practical value. As another example, truck drivers are a group with a high incidence of traffic accidents, and how to monitor and reduce the traffic risks of trucking fleets is a problem to be solved in the trucking industry.
On the other hand, dangerous driving behavior is complex and uncertain. In the prior art, dangerous driving behavior may be defined simply by the variation of kinematic quantities such as acceleration and angular velocity, or determined by the probability that a driving behavior will induce a future accident. However, because personal perception differs, the perceived risk of the same dangerous driving behavior differs across populations. Likewise, differences in driving habits and road conditions cause the objective risk corresponding to the same dangerous driving behavior to differ. Accurately identifying these differences requires a large amount of high-quality annotation data for dangerous driving behaviors.
Therefore, a scheme capable of acquiring high-quality dangerous driving behavior data is needed.
Disclosure of Invention
The invention provides a method for generating annotation data for dangerous driving behavior and a data acquisition system, in an effort to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the present invention, there is provided a method for generating annotation data for dangerous driving behavior, comprising the steps of: in response to a binding request from a mobile terminal, binding the mobile terminal with a data acquisition device arranged on a vehicle through a trip identifier; sending configuration information corresponding to an acquisition task to the mobile terminal, so that the mobile terminal outputs event description information for each event based on the configuration information and guides the user to execute each event according to the event description information, the acquisition task comprising a plurality of events of different types directed at dangerous driving behaviors; based on the trip identifier, acquiring first trip data from the data acquisition device and second trip data from the mobile terminal, respectively; determining, from the configuration information, a time interval corresponding to the acquisition task; processing the first trip data and the second trip data according to the time interval to obtain corresponding third trip data and fourth trip data; and aligning the third trip data and the fourth trip data based on the acquisition frequencies of the first trip data and the second trip data, to obtain aligned third trip data and aligned fourth trip data as annotation data.
Optionally, in the method according to the present invention, the step of binding the mobile terminal with the data acquisition device arranged on the vehicle through the trip identifier, in response to the binding request from the mobile terminal, includes: when a request to bind with the data acquisition device is received from the mobile terminal, generating a trip identifier and returning it to the mobile terminal; and after receiving a heartbeat signal from the data acquisition device, sending the trip identifier to the data acquisition device, wherein the heartbeat signal is sent to the server every first time period after the data acquisition device starts, so that the server can monitor the networking state of the data acquisition device.
Optionally, in the method according to the present invention, the configuration information at least includes: the event types, event identifiers, expected execution times, and instruction templates corresponding to the event types in the acquisition task.
Optionally, the method according to the invention further comprises the steps of: acquiring the first trip data from the data acquisition device every second time period, the first trip data being data of the vehicle collected by the data acquisition device, from receipt of the trip identifier, while the acquisition task is executed; acquiring the second trip data from the mobile terminal when the acquisition task has been executed, the second trip data being data of the vehicle collected by the mobile terminal, from receipt of the trip identifier, while the acquisition task is executed; and storing the first trip data and the second trip data in association based on the trip identifier.
Optionally, in the method according to the invention, the data acquisition device is coupled to the vehicle-mounted device and to the at least one camera, respectively; the first trip data comprises the vehicle state data acquired through the vehicle-mounted device and the image data acquired through the at least one camera while the acquisition task is executed; and the second trip data comprises the driving state data of the vehicle and the actual execution time of each event, obtained while the acquisition task is executed.
Optionally, in the method according to the present invention, the step of determining, from the configuration information, the time interval corresponding to executing the acquisition task includes: determining the expected execution time of each event from the configuration information; judging whether the expected execution time and the actual execution time of each event are consistent; and if they are consistent, taking the actual execution time as the time interval corresponding to each event.
Optionally, in the method according to the invention, the actual execution time of an event is generated based on user input, and comprises: a start time of the event, generated in response to the user selecting the event in the acquisition task; and an end time of the event, generated in response to a confirmation operation by the user after the event has been executed.
Optionally, in the method according to the present invention, the step of processing the first trip data and the second trip data according to the time interval to obtain the corresponding third trip data and fourth trip data includes: extracting the data within the time interval from the vehicle state data and the image data, respectively, as the third trip data; and extracting the data within the time interval from the driving state data as the fourth trip data.
Optionally, in the method according to the present invention, the step of aligning the third trip data and the fourth trip data based on the acquisition frequencies of the first trip data and the second trip data, to obtain the aligned third trip data and the aligned fourth trip data as annotation data, includes: determining an alignment frequency based on the acquisition frequencies of the first trip data and the second trip data; and sampling the third trip data and the fourth trip data, respectively, based on the alignment frequency, to obtain the aligned third trip data and the aligned fourth trip data as the annotation data.
Optionally, the method according to the invention further comprises the steps of: sampling the third trip data and the fourth trip data based on the alignment frequency to obtain sampled third trip data and sampled fourth trip data; and if data is missing from the sampled third trip data and/or the sampled fourth trip data, filling in the missing data by interpolation to obtain the aligned third trip data and/or the aligned fourth trip data.
Optionally, the method according to the invention further comprises the steps of: checking, in combination with the configuration information, the aligned third trip data and the aligned fourth trip data, whether the user's action instructions met the preset requirements when the acquisition task was executed; and if the preset requirements are met, using the third trip data and the fourth trip data as annotation data indicating dangerous driving behaviors.
According to another aspect of the present invention, there is provided a data acquisition system comprising: a vehicle-mounted device adapted to collect vehicle state data while an acquisition task is executed, the acquisition task comprising a plurality of events of different types directed at dangerous driving behaviors; at least one camera adapted to collect image data while the acquisition task is executed; a mobile terminal arranged in the vehicle, adapted to collect driving state data while the acquisition task is executed and to be bound with the data acquisition device through the server; a data acquisition device arranged on the vehicle and coupled to the vehicle-mounted device and the at least one camera, respectively, to obtain the vehicle state data and the image data; and a server adapted to execute the method above and to associate the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors.
Optionally, in the system according to the invention, the data acquisition device is further adapted to be powered by the vehicle-mounted device and to start automatically.
Optionally, in the system according to the invention, the data acquisition device is further adapted to: after start-up, send heartbeat signals to the server every first time period, so that the server can monitor the networking state of the data acquisition device; and after receiving the trip identifier from the server, start acquiring vehicle state data and image data as first trip data, and send the first trip data to the server every second time period.
Optionally, in the system according to the present invention, the mobile terminal is further adapted to, after the acquisition task is determined, obtain the configuration information corresponding to the acquisition task from the server, the configuration information at least including the event types, event identifiers, expected execution times, and instruction templates of the event types in the acquisition task; and, in response to the user selecting one of the events, generate and output event description information for the event based on the instruction template corresponding to that event, so as to guide the user to execute the event according to the event description information.
Optionally, in the system according to the invention, the mobile terminal is further adapted to generate a start time of an event in response to the user selecting the event in the acquisition task, and to generate an end time of the event in response to a confirmation operation by the user after the event has been executed.
Optionally, in the system according to the present invention, the data acquisition device further carries a two-dimensional code image, so that the mobile terminal binds with the data acquisition device by scanning the two-dimensional code image.
According to yet another aspect of the present invention, there is provided a computing device comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods described above.
According to yet another aspect of the present invention, there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods described above.
In summary, according to the scheme of the invention, acquisition tasks and their corresponding configuration information are generated from combinations of different types of events. The configuration information of an event can be understood as finer-grained annotation information, which is extremely valuable for researching and optimizing dangerous driving behavior recognition.
Meanwhile, according to the scheme of the invention, the acquired multi-source data are coupled in time and space, ensuring the alignment and quality of the data.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which set forth the various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The above, as well as additional objects, features, and advantages of the present disclosure will become more apparent from the following detailed description when read in conjunction with the accompanying drawings. Like reference numerals generally refer to like parts or elements throughout the present disclosure.
FIG. 1 illustrates a schematic diagram of a data acquisition system 100 according to some embodiments of the invention;
FIG. 2 illustrates a flowchart of the operation of the data acquisition device 110 according to one embodiment of the invention;
FIG. 3A shows a schematic diagram of an acquisition task interface, according to one embodiment of the invention;
FIG. 3B shows a schematic diagram of a display interface for event description information, according to one embodiment of the invention;
FIG. 4 illustrates a schematic diagram of a computing device 400 according to some embodiments of the invention;
FIG. 5 illustrates a flow chart of a method 500 of generating annotation data for dangerous driving behavior in accordance with one embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the development of mobile terminal applications and sensor hardware, and given that mobile terminals (such as mobile phones) are directly bound to people, the mobile terminal has become a monitoring device with great potential for monitoring dangerous driving behaviors. However, although a mobile terminal is convenient for collecting information, the accuracy and quality of its sensors are not as good as those of vehicle-mounted recording equipment (such as a driving recorder or an OBD (On-Board Diagnostics) system). Monitoring driving behavior with a mobile terminal is like perceiving the world with limited senses, and identifying deeply hidden accident-risk behaviors with it is a very challenging task. In addition, sensor noise and hardware differences between models limit the mobile terminal's ability to perceive the objective, real motion state of the vehicle, which hinders the identification of dangerous driving behaviors.
Both the driving recorder and the OBD system are accurate acquisition means bound to the vehicle, but in practice it is difficult to obtain OBD data directly to identify dangerous driving behaviors. OBD systems are developed independently by automobile manufacturers: on the one hand, hardware differences mean that the OBD data obtainable from different vehicle models differ; on the other hand, different vehicles (European, Japanese and American) follow different OBD protocols, and some niche models even follow private protocols, while the OBD protocol is the key to decrypting and parsing OBD data. The diversity of OBD data and protocols therefore makes it challenging to collect driving behavior data from different vehicles.
In view of this, according to an embodiment of the present invention, a data acquisition system 100 is provided that collects vehicle state data, as well as other critical data of the vehicle during travel, from multiple sources. The multi-source data are then processed to analyze the driving behavior patterns behind them, so as to determine data that can characterize dangerous driving behaviors. These data can be used as annotation data for subsequent analysis related to dangerous driving behavior. According to an embodiment of the present invention, dangerous driving behaviors include at least: rapid acceleration, rapid deceleration, sharp turning, playing with a mobile phone, making a phone call, and the like.
In one embodiment, an insurance company prices insurance differently for different users based on their risky driving behavior and vehicle usage. Such an insurance model (algorithm) depends heavily on how well dangerous driving behaviors are recognized, and collecting high-quality annotation data for dangerous driving behaviors is the key to improving the algorithm and the model.
In yet another embodiment, an electronic map navigation application provides a driving scoring function that scores each self-driven navigation trip of the user. The driving score is built around the "dangerous driving behavior" events identified by the algorithm. Therefore, high-quality annotation data for dangerous driving behaviors is the key to computing accurate scores.
FIG. 1 shows a schematic diagram of a data acquisition system 100 according to one embodiment of the invention. As shown in FIG. 1, the data acquisition system 100 includes a data acquisition device 110, at least one camera 120, a vehicle-mounted device 130, a mobile terminal 140 and a server 150. According to one implementation, the data acquisition device 110 is coupled to the camera 120 and the vehicle-mounted device 130, respectively. In addition, the mobile terminal 140 may be bound to the data acquisition device 110 through the server 150.
The vehicle-mounted device 130, for example an OBD box, is arranged on the vehicle to collect vehicle state data. The vehicle state data includes at least one or more of the following: vehicle model, average fuel consumption, instantaneous fuel consumption, remaining range, vehicle speed, engine speed, light state, hand brake state, seat belt state, door state, window state, steering angle, battery voltage, water temperature, engine oil temperature, fuel percentage and battery percentage.
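As an illustration only, a per-sample record of this vehicle state data might be represented by a simple structure such as the sketch below. The patent does not specify a schema; every field name here is an assumption chosen for readability.

```python
from dataclasses import dataclass


@dataclass
class VehicleStateRecord:
    """Hypothetical per-sample OBD vehicle state record (all field names assumed)."""
    timestamp: float            # Unix time of the sample
    speed_kmh: float            # vehicle speed
    engine_rpm: float           # engine (rotation) speed
    instantaneous_fuel: float   # instantaneous fuel consumption
    steering_angle_deg: float   # steering angle
    hand_brake_engaged: bool    # hand brake state
    seatbelt_fastened: bool     # seat belt state
```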
The mobile terminal 140 is typically placed in the vehicle, and the driving state data is collected by the various sensors in the mobile terminal 140, including positioning data (e.g., GNSS (Global Navigation Satellite System) data), IMU (Inertial Measurement Unit) data (e.g., acceleration, rotation angle, etc.), the proximity between the mobile terminal 140 and an obstacle in front of it (measured, for example, by a proximity sensor), the movement state, the phone orientation, the driver's call-answering state, the light intensity, and so on.
In addition, a data acquisition application may be installed on the mobile terminal 140; the user selects acquisition tasks and events by operating the application (APP) and provides input according to the related instructions, realizing human-machine interaction between the user and the mobile terminal 140.
According to one embodiment of the present invention, the data acquisition system 100 includes at least two cameras 120, as shown in FIG. 1. One is arranged near the brake pedal to capture video image data of the driver operating the brake pedal (e.g., depressing or releasing it); the other is arranged near the driver's seat to capture video image data containing the driver's face. It should be noted that this is only an example; the embodiment of the present invention does not limit the cameras 120. Those skilled in the art may increase or decrease the number of cameras 120, or adjust their mounting positions and capture targets, according to the requirements of the acquisition scene.
In one embodiment, the data acquisition device 110 acts as external hardware of the vehicle-mounted device 130 and is powered by it. According to one embodiment of the invention, the data acquisition device 110 is fixed inside the vehicle near the cigarette lighter. Preferably, a plurality of screw holes are arranged at the edge of the data acquisition device 110 to fix it. It should be appreciated that the data acquisition device 110 is typically arranged around the center console of the vehicle; the embodiments of the present invention do not impose strict limitations on this.
In one embodiment, the data acquisition device 110 is implemented as a micro-computer storage device carrying a Rockchip RK3288 processor, in the form of a metal box with multiple communication interfaces. According to an embodiment of the present invention, the data acquisition device 110 connects to each camera 120 through a USB interface, and to the vehicle-mounted device 130 through a CAN interface.
In addition, the internal motherboard of the data acquisition device 110 carries various network communication hardware and supports WiFi, 4G, Bluetooth, and the like. A detachable signal-amplifying antenna/transmitter is arranged on the outside of the data acquisition device 110.
Further, in one embodiment according to the present invention, the data acquisition device 110 runs a Linux operating system and has a corresponding application installed, enabling it to communicate with the vehicle-mounted device 130 and to parse the acquired data. It should be appreciated that the operating system installed on the data acquisition device 110 may also be Android, AliOS or another known operating system, which is not limited by the embodiment of the present invention.
Further, a two-dimensional code image is arranged on the outside of the data acquisition device 110 (for example, pasted onto it, although this is not limiting). The two-dimensional code serves as the identifier of the data acquisition device 110 and can be used to bind it with the mobile terminal 140, establishing a communication link between the mobile terminal 140 and the data acquisition device 110.
In addition, the data collection device 110 also has a temporary storage module to store vehicle state data from the in-vehicle device 130 and various image data from the camera 120.
Fig. 2 shows a flowchart of the operation of the data acquisition device 110 according to one embodiment of the invention.
According to an embodiment of the present invention, when connected to the vehicle-mounted device 130, the data acquisition device 110 is powered by its power supply, starts automatically and connects to the network. After start-up, the data acquisition device 110 sends a heartbeat signal to the server 150 every first time period (e.g., 5 seconds) so that the server 150 can monitor its heartbeat status.
When a user (e.g., the driver) selects an acquisition task on the mobile terminal 140 and scans the two-dimensional code image on the data acquisition device 110, the server 150 generates a trip identifier and returns it to the mobile terminal 140. Meanwhile, the server 150 returns the trip identifier to the data acquisition device 110 when it receives the next heartbeat signal; the data acquisition device 110 thus learns of the trip identifier through its heartbeat. While an acquisition task is being executed, the data acquisition device 110 continues to send heartbeat signals to the server 150 every first time period, and the server 150, as the heartbeat response, continues to return the trip identifier, until the acquisition task has been completed and the data acquisition device 110 receives a response from the server 150 with an empty trip identifier.
The data acquisition device 110 listens for the trip identifier; as long as the trip identifier is not empty, it acquires the vehicle state data from the vehicle-mounted device 130 and the image data from the cameras 120 as first trip data and buffers them. The first trip data is sent to the server 150 every second time period (e.g., 2 minutes). The embodiment of the invention does not limit the first and second time periods; in some preferred embodiments the second time period is longer than the first. When the trip identifier is empty, the data acquisition device 110 stops acquisition.
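The device-side loop just described can be summarized with a minimal Python sketch. It is an illustration only: the `server`, `obd` and `cameras` objects, and their `heartbeat`, `read_frame` and `upload_first_trip_data` methods, are hypothetical placeholders and do not come from the patent.

```python
import time

HEARTBEAT_INTERVAL = 5    # "first time period", e.g. 5 seconds
UPLOAD_INTERVAL = 120     # "second time period", e.g. 2 minutes


def acquisition_loop(server, obd, cameras):
    """Sketch of the heartbeat-and-buffer behaviour of the data acquisition device."""
    buffer = []
    last_upload = time.time()
    while True:
        # The heartbeat doubles as a poll for the current trip identifier.
        trip_id = server.heartbeat()
        if trip_id:
            # Non-empty trip identifier: collect first trip data and buffer it.
            buffer.append({
                "trip_id": trip_id,
                "time": time.time(),
                "vehicle_state": obd.read_frame(),                # from the vehicle-mounted device
                "images": [cam.read_frame() for cam in cameras],  # from the cameras
            })
            if time.time() - last_upload >= UPLOAD_INTERVAL:
                server.upload_first_trip_data(trip_id, buffer)
                buffer = []
                last_upload = time.time()
        else:
            # Empty trip identifier: the acquisition task has ended, stop collecting.
            buffer = []
        time.sleep(HEARTBEAT_INTERVAL)
```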
In addition, after start-up the data acquisition device 110 also periodically (e.g., every 10 seconds) checks for a new software version and, when one is detected, performs an update.
With reference to FIGS. 1 and 2, a single acquisition task is used below as an example to briefly describe how the data acquisition system 100 according to the present application generates annotation data indicating dangerous driving behaviors based on the acquired data.
It should be noted that, according to the embodiment of the present invention, data acquisition with the data acquisition system 100 is generally performed in an open training area (e.g., a driving-school practice area) that contains as many road conditions as possible (e.g., straight sections, curves, ramps). Meanwhile, to ensure that the acquisition procedure is carried out safely and effectively, a driver with rich driving experience (such as a driving-school instructor) is chosen to drive the vehicle and complete the operations specified by the acquisition task.
Before acquisition begins, the user (e.g., the driver) logs into the data acquisition application on the mobile terminal 140 and selects a set of acquisition tasks. A set of acquisition tasks is a collection of dangerous-driving-behavior events of a plurality of different types. Dangerous driving behaviors include rapid acceleration, rapid deceleration, sharp turning, playing with a mobile phone, making a phone call, and so on, and each of these may in turn cover various situations. For example, the rapid-acceleration type may include the following events: low-speed rapid acceleration, medium-speed rapid acceleration, high-speed rapid acceleration, rapid acceleration at a traffic light, rapid acceleration after a turn, and rapid acceleration after starting. A set of acquisition tasks can thus be expressed as {3 rapid accelerations, 2 sharp turns, 2 rapid decelerations}, where rapid deceleration, sharp turning, etc. are different event types, each containing specific events with different requirements; the different requirements generally concern driving state conditions such as vehicle speed and execution time.
After selecting a set of acquisition tasks, the user also scans the two-dimensional code image on the data acquisition device 110 with the mobile terminal 140, which sends it to the server 150 to request binding. On receiving the binding request, the server 150 checks whether the binding conditions are met and, if so, generates a trip identifier and distributes it to the mobile terminal 140 and the data acquisition device 110. From this point, the mobile terminal 140 and the data acquisition device 110 are in a bound state, and execution of the acquisition task begins. In one embodiment, checking the binding conditions means verifying that the data acquisition device 110 is in a normal networking state in which data can be uploaded; this is verified by the heartbeat detection described above: if the server 150 has received a heartbeat signal from the data acquisition device 110 within the first time period, the device is in a normal networking state. After receiving the trip identifier, the data acquisition device 110 instructs the vehicle-mounted device 130 and the cameras 120 to start acquiring data, and the first trip data is sent to the server 150 every second time period. This process is described with reference to FIG. 2 and is not repeated here.
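A minimal server-side sketch of this binding flow follows, assuming a simple in-memory store and a device identifier obtained from the scanned two-dimensional code. The function names and the use of a UUID as the trip identifier are assumptions for illustration, not details from the patent.

```python
import time
import uuid

HEARTBEAT_INTERVAL = 5    # "first time period" in seconds

last_heartbeat = {}       # device id -> time of last heartbeat received
active_trip = {}          # device id -> currently assigned trip identifier


def handle_heartbeat(device_id):
    """Record a heartbeat and answer it with the current trip identifier ('' if none)."""
    last_heartbeat[device_id] = time.time()
    return active_trip.get(device_id, "")


def handle_binding_request(device_id):
    """Bind a mobile terminal to a data acquisition device via a new trip identifier."""
    # The device is in a normal networking state if a heartbeat arrived within the interval.
    if time.time() - last_heartbeat.get(device_id, 0.0) > HEARTBEAT_INTERVAL:
        raise RuntimeError("data acquisition device is not in a normal networking state")
    trip_id = uuid.uuid4().hex
    active_trip[device_id] = trip_id
    return trip_id   # returned to the mobile terminal; the device learns it on its next heartbeat


def end_trip(device_id):
    """Clear the trip identifier so that the device's next heartbeat response is empty."""
    active_trip.pop(device_id, None)
```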
In addition, when returning the trip identifier to the mobile terminal 140, the server 150 may also send the configuration information corresponding to the acquisition task. The configuration information at least includes: the event types, event identifiers, expected execution times, and instruction templates corresponding to the event types in the acquisition task. The instruction templates are used to prompt the user with action instructions when executing the corresponding events.
In one embodiment, the configuration information is pre-generated by the server 150. Taking the instruction templates as an example, they may include a voice instruction template and a text instruction template with consistent content. The text instruction template is displayed on the mobile terminal 140 to prompt the user in text form, while the voice instruction template prompts the user through voice playback. Most of the fixed content of the voice instruction template is recorded by a real person to ensure continuity and clarity, while the variable content (mainly numbers and scene states) satisfies the need for acquisition diversity.
In one embodiment, rapid acceleration and rapid deceleration are both pedal operations, so their instruction templates are similar, whereas the instruction templates for playing with a mobile phone and making a phone call emphasize more complex state scenarios, and so on. In this way, the specific data and scenes in the instruction templates differ from event to event, highlighting differences in risk level, which is very valuable for training dangerous driving behavior recognition algorithms. Several example sets of instruction templates according to embodiments of the present invention are shown below, without limitation; a simplified code sketch follows the templates. Where a blank ____ appears, operation data (including at least one of speed, distance, time, direction, action to execute, etc.) is filled in when the event description information for each specific event is generated during execution; this is not elaborated further here.
a. Instruction template for rapid acceleration and rapid deceleration related events
Please fasten the seat belt and ensure a straight safety distance of ____ meters ahead. Please bring the initial vehicle speed to ____ km/h (for rapid acceleration from a standstill or at a traffic light, the initial speed is 0; please keep the vehicle stationary). After a 5 second countdown, please ______ (fill in an action such as suddenly depress the accelerator, suddenly depress the brake, gently accelerate rapidly, gently brake rapidly).
b. Instruction template for sharp-turn related events
Please fasten the seat belt and ensure there is a curve _____ meters ahead. Please bring the initial vehicle speed to ____ km/h. After a 5 second countdown, please make a sharp turn through the curve, turning the steering wheel to the ___ (left or right) by __.
c. Instruction template for mobile-phone-use related events
Please fasten the seat belt; the mobile phone is operated by the front passenger. Please bring the vehicle speed to ____ km/h. After a 5 second countdown, the front passenger places the phone at _____ and begins using it (watching a video, playing a game, etc.) for ____ seconds.
d. Instruction template for phone-call related events
Please fasten the seat belt; the call-answering actions are all performed by the front passenger. Please bring the vehicle speed to ____ km/h and place the phone on the phone holder. After a 5 second countdown, a test call is made and allowed to ring for ____ seconds. Please __________ (answer for _____ seconds; pull over and answer for ____ seconds; drive normally until the other party hangs up; drive normally and hang up; pull over and hang up).
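As referenced above, the following sketch illustrates how an instruction template might be filled with operation data to produce event description information. The template text and field names are simplified assumptions, not the exact templates listed above.

```python
# Simplified template for a rapid-acceleration event; the blanks become format fields.
RAPID_ACCEL_TEMPLATE = (
    "Fasten the seat belt and ensure a straight safety distance of {distance_m} meters. "
    "Bring the initial vehicle speed to {initial_speed_kmh} km/h. "
    "After a 5 second countdown, please {action}."
)


def build_event_description(template: str, operation_data: dict) -> str:
    """Fill the instruction template with event-specific operation data."""
    return template.format(**operation_data)


# Example usage for a hypothetical low-speed rapid acceleration event.
description = build_event_description(
    RAPID_ACCEL_TEMPLATE,
    {"distance_m": 100, "initial_speed_kmh": 20, "action": "suddenly depress the accelerator"},
)
print(description)
```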
After acquisition starts, the mobile terminal 140 determines the execution order of the events in the acquisition task and then displays the events of the acquisition task in sequence, in a first display mode, on the acquisition task interface of the mobile terminal 140.
FIG. 3A shows a schematic diagram of an acquisition task interface according to one embodiment of the invention, illustrating rapid-acceleration events in one acquisition task. Basic information for each event, such as initial speed, training action, execution time and number of executions, is displayed on the acquisition task interface. As shown in FIG. 3A, the "medium-speed rapid acceleration training" and "traffic-light rapid acceleration training" events are shown in the first display mode, while the "low-speed rapid acceleration training" event is shown in a second display mode. The second display mode differs from the first display mode and is used to display events that have already been executed.
In one embodiment, the execution order of the events in the acquisition task is determined based on road information. The road information includes static and/or dynamic information about all or some of the objects on the road, for example whether the road is open, whether there is a curve, whether there are obstacles, whether there are moving objects, and so on. The road information may be obtained through V2X (Vehicle to Everything) technology, which is not limited by the embodiment of the present invention.
In another embodiment, the execution order of the events may be chosen according to road-section conditions. For example, the driver may choose to perform a rapid-acceleration event on a long, open straight section and then a sharp-turn event before reaching an intersection.
The driver then selects one of the events on the acquisition task interface. Typically, the driver selects events in the displayed order, but this is not limiting. The mobile terminal 140 outputs the event description information of the selected event to guide the user to execute the event accordingly. Specifically, the mobile terminal 140 generates the event description information from the instruction template corresponding to the event type and then displays it on the interface. FIG. 3B shows a schematic diagram of a display interface for event description information according to an embodiment of the present invention. The event description information may include safety precautions during acquisition, driving details, and so on; as shown in FIG. 3B, the sub-steps state the specific operational requirements for performing a "low-speed rapid acceleration" event. Meanwhile, the mobile terminal 140 also plays the event description information as a voice instruction.
In one embodiment, when acquisition of a specific event begins, a 30-second voice broadcast first announces the safety precautions and driving details for the acquisition, followed by a countdown of a certain duration (determined by the acquisition requirements of the specific event; the predetermined duration may also be part of the event description information). The driver performs the operation according to the event description information during the countdown and confirms after completing the requirements, thereby completing the event. Continuing with FIG. 3B, the driver may "slide to complete training" as indicated on the interface, and the mobile terminal receives the driver's confirmation.
During the driver's performance of the event, the corresponding sensors in the mobile terminal 140 may collect driving state data of the vehicle.
Meanwhile, during this process, the mobile terminal 140 records the current time as the start time of the event in response to the user's selection of the event, and records the current time as the end time of the event in response to the user's confirmation after the event is completed. The period between the start time and the end time of the event is then taken as the actual execution time of the event.
Optionally, the mobile terminal 140 stores the event identifier, the driving state data and the actual execution time of the event in association.
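A minimal sketch of how the mobile terminal might derive the actual execution time from these two user interactions is shown below; the class and method names are assumptions for illustration only.

```python
import time


class EventTimer:
    """Sketch: record an event's actual execution time from user interactions."""

    def __init__(self, event_id: str):
        self.event_id = event_id
        self.start_time = None
        self.end_time = None

    def on_event_selected(self):
        # The user selects the event on the acquisition task interface.
        self.start_time = time.time()

    def on_event_confirmed(self):
        # The user confirms ("slide to complete training") after executing the event.
        self.end_time = time.time()

    @property
    def actual_execution_time(self):
        """Return the (start, end) interval, or None if the event is unfinished."""
        if self.start_time is None or self.end_time is None:
            return None
        return (self.start_time, self.end_time)
```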
After an event has been executed, the mobile terminal 140 returns to the acquisition task interface and displays the executed event in the second display mode, different from the first, in an unselectable state. As shown in FIG. 3A, the "low-speed rapid acceleration" event has been executed and can no longer be selected. The driver then selects other events for acquisition. If the acquisition countdown expires and the driver has not completed the corresponding acquisition requirements, the driver must mark the event as incomplete or failed, and failed data is not uploaded. The driver re-acquires unfinished events when conditions are suitable.
The driver repeats this process until all specific events have been acquired; once all events have been executed, the acquisition task can be ended. At the end of the acquisition task, the mobile terminal 140 sends the event identifiers, the driving state data and the actual execution time of each event to the server 150 as second trip data.
On the server 150 side, the first trip data is received from the data acquisition device 110 every second time period, and the second trip data is received from the mobile terminal 140 when the acquisition task is completed. Based on the trip identifier of the current acquisition task, the server 150 can associate the first trip data with the second trip data.
Within one acquisition task, because there is a certain time gap (typically less than 5 seconds) between the moment the mobile terminal 140 receives the trip identifier and the moment the data acquisition device 110 receives it, the acquisition start times of the first and second trip data differ. Likewise, the mobile terminal 140 ends acquisition after receiving the user's confirmation, while the data acquisition device 110 ends acquisition only when it no longer receives the trip identifier, so the acquisition end times also differ. In addition, constrained by the objective conditions of network transmission, the data is split into multiple segments when uploaded to the server 150. Moreover, the first and second trip data are multi-modal, including but not limited to GPS positioning data, OBD data, video image data and instruction template data, and the acquisition frequencies of the different data types are inconsistent. Therefore, in the embodiment according to the invention, aligning the first and second trip data in time and space is important for obtaining high-quality annotation data.
In one embodiment, considering that the actual execution time of a dangerous driving behavior is controlled by the instructions (i.e., the data of the instruction templates) and is reflected in the human-machine interaction between the mobile terminal 140 and the driver, the server 150 determines the time interval corresponding to the acquisition task as follows. The server 150 determines the expected execution time of each event from the configuration information, then judges whether the expected execution time of each event is consistent with its actual execution time; if so, the actual execution time is taken as the time interval corresponding to that event.
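The consistency check can be sketched as follows. This is an illustration under the assumption that "consistent" means the actual duration matches the expected duration within a tolerance; the patent does not define the comparison precisely, and the function and parameter names are hypothetical.

```python
def determine_time_intervals(expected_durations, actual_times, tolerance_s=5.0):
    """
    Derive per-event time intervals from configuration and actual execution times.

    expected_durations: {event_id: expected_duration_seconds} from the configuration information
    actual_times:       {event_id: (start_ts, end_ts)} recorded by the mobile terminal
    Events whose actual duration deviates from the expected duration by more than the
    tolerance are flagged for re-collection.
    """
    intervals, failed = {}, []
    for event_id, expected in expected_durations.items():
        start, end = actual_times.get(event_id, (None, None))
        if start is None or end is None:
            failed.append(event_id)
            continue
        if abs((end - start) - expected) <= tolerance_s:
            intervals[event_id] = (start, end)   # the actual execution time becomes the interval
        else:
            failed.append(event_id)              # data for this event must be re-acquired
    return intervals, failed
```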
Then, the first and second trip data are processed according to the determined time intervals to obtain the corresponding third and fourth trip data. In one embodiment, the data within each time interval is extracted from the vehicle state data and the image data as the third trip data, and the data within each time interval is extracted from the driving state data as the fourth trip data.
Then, the third and fourth trip data are aligned based on the acquisition frequencies of the first and second trip data, to obtain aligned third and fourth trip data as annotation data. In one embodiment, among the sensors of the mobile terminal 140, the GNSS acquisition frequency is 1 Hz and the IMU acquisition frequency is at most 10 Hz; the acquisition frequency of the cameras 120 is typically 20-30 Hz; and the acquisition frequency of the vehicle-mounted device 130 is 10 Hz. Based on these acquisition frequencies, the alignment frequency is determined to be 1 Hz. The third and fourth trip data are then sampled at the alignment frequency to obtain the aligned third and fourth trip data as the annotation data.
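A minimal sketch of resampling one numeric stream (e.g., speed) onto the 1 Hz alignment grid is given below. The patent does not prescribe a resampling strategy; here each grid point simply takes the first sample at or after it, which is one of several reasonable choices.

```python
import numpy as np


def resample_to_alignment(timestamps, values, interval_start, interval_end, align_hz=1.0):
    """Resample one data stream onto a common alignment grid (sketch, not from the patent)."""
    timestamps = np.asarray(timestamps, dtype=float)
    values = np.asarray(values, dtype=float)
    # One grid point per 1/align_hz seconds inside the event's time interval.
    grid = np.arange(interval_start, interval_end, 1.0 / align_hz)
    # Take the first sample at or after each grid point (clipped to the last sample).
    idx = np.clip(np.searchsorted(timestamps, grid), 0, len(timestamps) - 1)
    return grid, values[idx]
```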
In other embodiments, considering situations where the acquisition frequency is unstable or data is missing due to loss of the GPS signal, the missing data is filled in by interpolation after the third and fourth trip data have been sampled, and the interpolated data is used as the final annotation data.
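The interpolation step can be sketched with linear interpolation over the alignment grid; the patent only says "interpolation", so the linear choice here is an assumption.

```python
import numpy as np


def fill_missing(grid_times, sampled_values):
    """Fill gaps (NaN) in a resampled stream by linear interpolation (sketch)."""
    values = np.asarray(sampled_values, dtype=float)
    grid = np.asarray(grid_times, dtype=float)
    missing = np.isnan(values)
    if missing.any() and not missing.all():
        values[missing] = np.interp(
            grid[missing],        # grid points where data is missing
            grid[~missing],       # grid points with valid samples
            values[~missing],     # valid sample values
        )
    return values
```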
In the data acquisition system 100 according to the present invention, in addition to the external device connected to the vehicle's central control system for parsing and uploading data such as OBD data (i.e., the data acquisition device 110 and the vehicle-mounted device 130 and cameras 120 coupled to it), a mobile terminal 140 is added, so that the data change characteristics corresponding to dangerous driving behaviors are depicted completely and from multiple angles. Moreover, the entire multi-source acquisition procedure is simplified, reducing the comprehension and communication costs for the data collectors.
In addition, with the data acquisition system 100 according to the present invention, instruction templates are generated from different event configurations, and dangerous driving behaviors are actually executed by professional drivers according to the instruction templates, so the quality of the resulting data labels is higher than that of labels produced by annotating images afterwards. Meanwhile, the configuration information of an event can be understood as finer-grained annotation information, which is extremely valuable for researching and optimizing dangerous driving behavior recognition.
Furthermore, according to the data acquisition system 100 of the present invention, the acquired data are coupled in time and space, ensuring alignment and quality verification of the data.
According to one embodiment of the invention, the data acquisition system 100 and portions thereof may be implemented by one or more computing devices. FIG. 4 illustrates a schematic block diagram of a computing device 400, according to one embodiment of the invention.
As shown in FIG. 4, in a basic configuration 402, computing device 400 typically includes a system memory 406 and one or more processors 404. A memory bus 408 may be used for communication between the processor 404 and the system memory 406.
Depending on the desired configuration, processor 404 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 404 may include one or more levels of cache, such as a first level cache 410 and a second level cache 412, a processor core 414, and registers 416. The example processor core 414 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 418 may be used with the processor 404 or, in some implementations, the memory controller 418 may be an internal part of the processor 404.
Depending on the desired configuration, system memory 406 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 406 may include an operating system 420, one or more applications 422, and data 424. In some implementations, the application 422 may be arranged to execute instructions on an operating system by the one or more processors 404 using the data 424.
Computing device 400 also includes a storage device 432, where storage device 432 includes removable storage 436 and non-removable storage 438, where both removable storage 436 and non-removable storage 438 are connected to storage interface bus 434.
Computing device 400 may also include an interface bus 440 that facilitates communication from various interface devices (e.g., output devices 442, peripheral interfaces 444, and communication devices 446) to basic configuration 402 via bus/interface controller 430. The example output device 442 includes a graphics processing unit 448 and an audio processing unit 450. They may be configured to facilitate communication with various external devices such as a display or speakers via one or more a/V ports 452. Example peripheral interfaces 444 may include a serial interface controller 454 and a parallel interface controller 456, which may be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 458. An example communication device 446 may include a network controller 460, which may be arranged to facilitate communication with one or more other computing devices 462 over a network communication link via one or more communication ports 464.
The network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media in a modulated data signal, such as a carrier wave or other transport mechanism. A "modulated data signal" may be a signal that has one or more of its data set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or special purpose network, and wireless media such as acoustic, radio Frequency (RF), microwave, infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In general, computing device 400 may be implemented as part of a small-sized portable (or mobile) electronic device, such as a cellular telephone, digital camera, personal Digital Assistant (PDA), personal media player device, wireless web-watch device, personal headset device, application specific device, or a hybrid device that may include any of the above functions. In one embodiment according to the invention, the computing device 400 may also be implemented as a micro-computing module or the like. The embodiments of the present invention are not limited in this regard.
In an embodiment according to the invention, the computing device 400 is configured to perform a data acquisition method according to the invention, and/or a data processing method. Wherein the application 422 of the computing device 400 contains a plurality of program instructions for performing the above-described method according to the present invention.
FIG. 5 illustrates a flow chart of a method 500 of generating annotation data for dangerous driving behavior according to one embodiment of the present invention. The method 500 is adapted to be executed in the server 150. It should be noted that the method 500 is complementary to the foregoing description, and repeated parts are not described again.
As shown in FIG. 5, the method 500 begins at step S510. In step S510, the server 150, in response to a binding request from the mobile terminal 140, binds the mobile terminal with the data acquisition device 110 arranged on the vehicle through a trip identifier.
According to one embodiment, the server 150 generates a trip identifier upon receiving a request from the mobile terminal 140 to bind with the data acquisition device 110 and returns it to the mobile terminal 140. Thereafter, when a heartbeat signal is received from the data acquisition device 110, the trip identifier is sent to the data acquisition device 110. The heartbeat signal is sent to the server 150 every first time period after the data acquisition device 110 starts, so that the server 150 can monitor the networking state of the data acquisition device 110.
Then, in step S520, the server 150 sends configuration information corresponding to the acquisition task to the mobile terminal 140, so that the mobile terminal 140 outputs event description information of each event based on the configuration information and thereby guides the user to execute each event according to the event description information. The acquisition task includes a plurality of different types of events pointing to dangerous driving behaviors.
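The exact schema of the configuration information is not fixed here beyond the fields it must carry (event type, event identification, expected execution time, and instruction template; see claim 3). The structure below is a hypothetical layout, with all field names and example values invented for illustration.

```python
# Hypothetical layout of the configuration information for one acquisition task.
# All field names and values are invented for illustration; only the presence of
# event type, event identification, expected execution time and instruction
# template is taken from the text.
acquisition_task_config = {
    "task_id": "task-001",
    "events": [
        {
            "event_type": "harsh_braking",       # type of dangerous driving behavior
            "event_id": "evt-01",                # event identification
            "expected_execution_time_s": 10,     # expected execution time
            "instruction_template": "Accelerate to {speed} km/h, then brake firmly.",
        },
        {
            "event_type": "sharp_turn",
            "event_id": "evt-02",
            "expected_execution_time_s": 15,
            "instruction_template": "Turn through the bend at {speed} km/h without slowing down.",
        },
    ],
}

# The mobile terminal would render the event description information from the
# template, e.g.:
#   acquisition_task_config["events"][0]["instruction_template"].format(speed=40)
```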
Then, in step S530, based on the trip identification, the server 150 acquires the first trip data from the data acquisition device 110 and the second trip data from the mobile terminal 140, respectively.
According to one embodiment, the server 150 obtains the first trip data from the data acquisition device 110 every second duration. The first trip data is data of the vehicle, collected by the data acquisition device 110 starting from receipt of the trip identification, while the acquisition task is performed. The first trip data includes: vehicle state data acquired by the in-vehicle apparatus 130 and image data acquired by the at least one camera 120 while the acquisition task is performed.
Meanwhile, when the acquisition task is completed, the second trip data from the mobile terminal 140 is acquired. The second trip data is data of the vehicle, collected by the mobile terminal 140 since receipt of the trip identification, while the acquisition task is performed. The second trip data includes: the driving state data of the vehicle and the actual execution time of each event obtained while the acquisition task is performed.
Thereafter, based on the trip identification, the server 150 may store the first trip data and the second trip data in association.
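A minimal sketch of such associated storage is given below, assuming an in-memory store keyed by the trip identification; the class name TripStore and its methods are hypothetical, and a real server would likely persist the data in a database.

```python
# Minimal sketch, assuming an in-memory store keyed by the trip identification;
# TripStore and its method names are hypothetical.
from collections import defaultdict


class TripStore:
    def __init__(self):
        # trip_id -> {"first": chunks from the data acquisition device,
        #             "second": uploads from the mobile terminal}
        self._store = defaultdict(lambda: {"first": [], "second": []})

    def append_first_trip_data(self, trip_id: str, chunk: dict) -> None:
        """Called every second-duration interval with vehicle state data and image references."""
        self._store[trip_id]["first"].append(chunk)

    def append_second_trip_data(self, trip_id: str, payload: dict) -> None:
        """Called when the acquisition task completes, with driving state data and event times."""
        self._store[trip_id]["second"].append(payload)

    def get_trip(self, trip_id: str) -> dict:
        """Both data sources remain associated through the shared trip identification."""
        return self._store[trip_id]
```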
Then in step S540, a time interval corresponding to the acquisition task is determined from the configuration information.
In one embodiment, the expected execution time of each event is determined from the configuration information. It is then judged whether the expected execution time of each event is consistent with its actual execution time. If the expected execution time is consistent with the actual execution time, the actual execution time is taken as the time interval corresponding to that event. If they are inconsistent, the task data acquired this time is considered invalid and needs to be acquired again.
As described above, the actual execution time of an event is generated based on the user's input (i.e., the user's interaction with the mobile terminal 140). Specifically, a start time of the event is generated in response to the user's selection of the event in the acquisition task, and an end time of the event is generated in response to the user's confirmation operation after the event has been executed.
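The following sketch shows how the consistency check of step S540 might look, under the assumption that "consistent" means the actual duration falls within a fixed tolerance of the expected execution time; the tolerance value and the function name are illustrative.

```python
# Sketch of the consistency check in step S540. The tolerance is an assumption
# of this sketch; the text only requires expected and actual execution times
# to be consistent.
from typing import Optional, Tuple


def event_time_interval(
    expected_duration_s: float,
    start_time_s: float,   # generated when the user selects the event
    end_time_s: float,     # generated when the user confirms completion
    tolerance_s: float = 2.0,
) -> Optional[Tuple[float, float]]:
    """Return (start, end) as the event's time interval, or None if re-acquisition is needed."""
    actual_duration_s = end_time_s - start_time_s
    if abs(actual_duration_s - expected_duration_s) <= tolerance_s:
        return (start_time_s, end_time_s)
    return None  # acquired task data is treated as invalid and collected again
```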
Then in step S550, the first trip data and the second trip data are respectively processed according to the time interval, so as to obtain corresponding third trip data and fourth trip data.
According to one embodiment, data corresponding to the time interval is extracted from the vehicle state data and the image data, respectively, as the third trip data, and data corresponding to the time interval is extracted from the driving state data as the fourth trip data.
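A possible implementation of this clipping step is sketched below; it assumes each record carries a numeric "timestamp" field, which is an assumption about the data layout rather than something specified here.

```python
# Sketch of extracting the samples that fall inside an event's time interval.
# Each record is assumed to be a dict with a numeric "timestamp" field.
from typing import Iterable, List, Tuple


def clip_to_interval(records: Iterable[dict], interval: Tuple[float, float]) -> List[dict]:
    start, end = interval
    return [r for r in records if start <= r["timestamp"] <= end]


# third_trip_data:  clip_to_interval(vehicle_state_data, interval) and the image data clipped likewise
# fourth_trip_data: clip_to_interval(driving_state_data, interval)
```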
Then, in step S560, the third trip data and the fourth trip data are aligned based on the acquisition frequencies of the first trip data and the second trip data, and the aligned third trip data and the aligned fourth trip data are obtained as annotation data.
According to one embodiment, an alignment frequency is determined based on the acquisition frequencies of the first trip data and the second trip data. The third trip data and the fourth trip data are then sampled based on the alignment frequency to obtain the aligned third trip data and the aligned fourth trip data as annotation data.
Considering that the acquired data may suffer from missing samples, the step of sampling the third trip data and the fourth trip data based on the alignment frequency further includes: sampling the third trip data and the fourth trip data, respectively, based on the alignment frequency to obtain sampled third trip data and sampled fourth trip data; and, if data is missing in the sampled third trip data and/or the sampled fourth trip data, supplementing the missing data through interpolation to obtain the aligned third trip data and/or the aligned fourth trip data. It should be appreciated that embodiments of the present invention are not limited in the manner in which interpolation is performed.
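One way to realize the alignment and interpolation is sketched below, assuming each stream is reduced to a single numeric channel with sorted timestamps and that the alignment frequency is taken as the lower of the two acquisition frequencies; linear interpolation is used here purely as an example, since any interpolation method would do.

```python
# Minimal sketch, assuming each stream is a 1-D numeric channel with sorted
# timestamps. One possible choice of alignment frequency is the lower of the
# two acquisition frequencies, e.g. align_hz = min(f_first, f_second).
import numpy as np


def align_streams(t3, v3, t4, v4, align_hz: float):
    """Resample third/fourth trip data (timestamps t*, values v*) onto a common grid."""
    t_start = max(t3[0], t4[0])
    t_end = min(t3[-1], t4[-1])
    grid = np.arange(t_start, t_end, 1.0 / align_hz)
    # np.interp evaluates every grid point, so gaps in either stream are
    # supplemented by linear interpolation between neighbouring samples.
    aligned_v3 = np.interp(grid, t3, v3)
    aligned_v4 = np.interp(grid, t4, v4)
    return grid, aligned_v3, aligned_v4
```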
According to still further embodiments, after the aligned third trip data and/or the aligned fourth trip data are obtained, the method further comprises the following step:
The configuration information is combined with the aligned third trip data and/or the aligned fourth trip data to check whether the user's action during execution of the acquisition task meets a preset requirement. The preset requirement is, for example, that the action performed by the driver meets requirements on speed and duration, but is not limited thereto. If the preset requirement is met, the third trip data and/or the fourth trip data are used as annotation data indicating dangerous driving behaviors.
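A sketch of this final validity check is given below; the threshold values and field names (min_speed_kmh, min_duration_s) are assumptions used for illustration, since the concrete preset requirements are left open here.

```python
# Sketch of the final validity check; threshold values and field names are
# assumptions, since the concrete preset requirements are left open here.
from typing import Sequence


def meets_preset_requirement(aligned_speeds_kmh: Sequence[float],
                             duration_s: float,
                             event_cfg: dict) -> bool:
    """Check that the driver's action reached the required speed and lasted long enough."""
    min_speed = event_cfg.get("min_speed_kmh", 30.0)
    min_duration = event_cfg.get("min_duration_s", 5.0)
    return max(aligned_speeds_kmh) >= min_speed and duration_s >= min_duration


# Only events passing this check are kept as annotation data for dangerous driving behavior.
```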
According to the data acquisition method 500 of the present invention, acquisition tasks and their corresponding configuration information are generated by combining different types of events. The configuration information of an event can be understood as finer-grained annotation information, which is extremely valuable for researching and optimizing dangerous driving behavior recognition.
Meanwhile, a professional driving-school driver actually executes the dangerous driving behaviors according to the instruction templates in the configuration information to complete the acquisition task; compared with post-hoc image annotation, the resulting annotation quality is higher.
In addition, according to the data acquisition method 500 of the present invention, the acquired multi-source data are coupled in time and space, which ensures the alignment and quality of the data.
The invention also discloses:
a5, the method of A4, wherein the data acquisition device is respectively coupled with the vehicle-mounted device and at least one camera, and the first travel data comprises: when an acquisition task is executed, vehicle state data acquired through the vehicle-mounted equipment and image data acquired through at least one camera; the second trip data includes: the vehicle driving state data and the actual execution time of each event are obtained when the acquisition task is executed. A6, the method of A5, wherein the step of determining the time interval corresponding to the acquisition task from the configuration information comprises the following steps: determining the expected execution time of each event from the configuration information; judging whether the expected execution time of each event is consistent with the actual execution time; and if the expected execution time is consistent with the actual execution time, taking the actual execution time as a time interval corresponding to each event. A7, the method of A5, wherein the actual execution time of the event is a time generated based on user input, comprising: responsive to a user selection of an event in the acquisition task, a start time of the generated event; and responding to the confirmation operation of the user after the event is executed, and generating the ending time of the event. A8, the method of any one of A5-7, wherein the step of respectively processing the first travel data and the second travel data according to the time interval to obtain corresponding third travel data and fourth travel data comprises the following steps: respectively intercepting corresponding data in the time interval from the vehicle state data and the image data as third journey data; and cutting out the data corresponding to the time interval from the driving state data to be used as fourth journey data. A9, the method of any of A1-8, wherein the step of aligning third trip data and fourth trip data based on the collection frequency of the first trip data and the second trip data to obtain aligned third trip data and aligned fourth trip data, and the step of using the aligned third trip data and the aligned fourth trip data as labeling data includes: determining an alignment frequency based on the acquisition frequencies of the first and second travel data; and sampling the third stroke data and the fourth stroke data based on the alignment frequency to obtain aligned third stroke data and aligned fourth stroke data as labeling data. A10, the method of A9, wherein the step of sampling the third and fourth stroke data based on the alignment frequency to obtain aligned third and fourth stroke data, further comprises: based on the alignment frequency, respectively sampling the third stroke data and the fourth stroke data to obtain sampled third stroke data and sampled fourth stroke data; if the data is missing in the sampled third stroke data and/or the sampled fourth stroke data, the missing data is supplemented through interpolation, so that aligned third stroke data and/or aligned fourth stroke data are obtained. 
A11, the method of A10, further comprising, after the step of sampling the third trip data and the fourth trip data based on the alignment frequency to obtain aligned third trip data and aligned fourth trip data, the steps of: checking, by combining the configuration information with the aligned third trip data and the aligned fourth trip data, whether the user's action during execution of the acquisition task meets a preset requirement; and, if the preset requirement is met, using the third trip data and the fourth trip data as annotation data indicating dangerous driving behaviors.
B16, the data acquisition system of B15, wherein the mobile terminal is further adapted to generate a start time of an event in the acquisition task in response to a user's selection of the event, and to generate an end time of the event in response to the user's confirmation operation after the event is executed.
B17, the data acquisition system of any one of B12-B16, wherein the data acquisition device further comprises a two-dimensional code image, so that the mobile terminal can be bound with the data acquisition device by scanning the two-dimensional code image.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods or combinations of method elements that may be implemented by a processor of a computer system or by other means of performing the functions. Thus, a processor with the necessary instructions for implementing the described method or method element forms a means for implementing the method or method element. Furthermore, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is for carrying out the functions performed by the elements for carrying out the objects of the invention.
As used herein, unless otherwise specified, the use of the ordinal terms "first," "second," "third," etc., to describe a general object merely denotes different instances of like objects, and is not intended to imply that the objects so described must have a given order, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (19)

1. A method for generating annotation data for dangerous driving behaviors, comprising the following steps:
in response to a binding request from a mobile terminal, binding the mobile terminal with a data acquisition device disposed on a vehicle via a trip identification;
transmitting configuration information corresponding to an acquisition task to the mobile terminal, so that the mobile terminal can output event description information of each event based on the configuration information to guide a user to execute each event according to the event description information, wherein the acquisition task comprises a plurality of different types of events pointing to dangerous driving behaviors;
based on the trip identification, respectively acquiring first trip data from the data acquisition device and second trip data from the mobile terminal;
determining a time interval corresponding to the acquisition task from the configuration information;
respectively processing the first trip data and the second trip data according to the time interval to obtain corresponding third trip data and fourth trip data; and
aligning the third trip data and the fourth trip data based on the acquisition frequencies of the first trip data and the second trip data to obtain aligned third trip data and aligned fourth trip data serving as annotation data.
2. The method of claim 1, wherein the step of binding the mobile terminal with a data acquisition device disposed on a vehicle via a trip identification in response to a binding request from the mobile terminal comprises:
when a request for binding with the data acquisition device is received from the mobile terminal, generating a trip identification and returning the trip identification to the mobile terminal; and
after receiving a heartbeat signal from the data acquisition device, transmitting the trip identification to the data acquisition device, wherein the heartbeat signal is transmitted to a server at intervals of a first duration after the data acquisition device is started, so that the server can monitor the networking state of the data acquisition device.
3. The method of claim 1 or 2, wherein the configuration information comprises at least: the event type, event identification, expected execution time, and instruction template corresponding to the event type of each event in the acquisition task.
4. The method of claim 1, wherein the step of separately acquiring first trip data from the data acquisition device and second trip data from the mobile terminal based on the trip identification comprises:
acquiring the first trip data from the data acquisition device every second duration, wherein the first trip data is data of the vehicle, collected by the data acquisition device after receiving the trip identification, while the acquisition task is executed;
acquiring the second trip data from the mobile terminal when the acquisition task is completed, wherein the second trip data is data of the vehicle, collected by the mobile terminal since receiving the trip identification, while the acquisition task is executed; and
storing the first trip data and the second trip data in association based on the trip identification.
5. The method of claim 4, wherein,
the data acquisition device is respectively coupled with the vehicle-mounted device and at least one camera, and the first trip data comprises: vehicle state data acquired through the vehicle-mounted device and image data acquired through the at least one camera while the acquisition task is executed;
the second trip data comprises: driving state data of the vehicle and the actual execution time of each event obtained while the acquisition task is executed.
6. The method of claim 5, wherein the step of determining a time interval corresponding to the acquisition task from the configuration information comprises:
determining the expected execution time of each event from the configuration information;
judging whether the expected execution time of each event is consistent with the actual execution time;
and if the expected execution time is consistent with the actual execution time, taking the actual execution time as a time interval corresponding to each event.
7. The method of claim 5, wherein the actual execution time of the event is a time generated based on user input, comprising: generating a start time of the event in response to a user's selection of the event in the acquisition task; and generating an end time of the event in response to the user's confirmation operation after the event is executed.
8. The method according to any one of claims 5-7, wherein the step of processing the first trip data and the second trip data according to the time interval, respectively, to obtain corresponding third trip data and fourth trip data comprises:
extracting data corresponding to the time interval from the vehicle state data and the image data, respectively, as the third trip data; and
extracting data corresponding to the time interval from the driving state data as the fourth trip data.
9. The method of claim 1, wherein the step of aligning the third trip data and the fourth trip data based on the acquisition frequencies of the first trip data and the second trip data to obtain aligned third trip data and aligned fourth trip data includes:
determining an alignment frequency based on the acquisition frequencies of the first trip data and the second trip data; and
sampling the third trip data and the fourth trip data based on the alignment frequency to obtain the aligned third trip data and the aligned fourth trip data as annotation data.
10. The method of claim 9, wherein the step of sampling the third and fourth trip data based on the alignment frequency to obtain aligned third and fourth trip data further comprises:
sampling the third trip data and the fourth trip data, respectively, based on the alignment frequency to obtain sampled third trip data and sampled fourth trip data; and
if data is missing in the sampled third trip data and/or the sampled fourth trip data, supplementing the missing data through interpolation to obtain the aligned third trip data and/or the aligned fourth trip data.
11. The method of claim 10, further comprising, after the step of sampling the third and fourth trip data based on the alignment frequency to obtain aligned third and fourth trip data, the step of:
checking, by combining the configuration information with the aligned third trip data and the aligned fourth trip data, whether the user's action during execution of the acquisition task meets a preset requirement; and
if the preset requirement is met, using the third trip data and the fourth trip data as annotation data indicating dangerous driving behaviors.
12. A data acquisition system, comprising:
a vehicle-mounted device adapted to collect vehicle state data during execution of an acquisition task, wherein the acquisition task comprises a plurality of different types of events pointing to dangerous driving behaviors;
at least one camera adapted to collect image data during execution of the acquisition task;
a mobile terminal disposed on the vehicle, adapted to collect driving state data during execution of the acquisition task and to be bound with the data acquisition device through the server;
a data acquisition device disposed on the vehicle, coupled with the vehicle-mounted device and the at least one camera, respectively, to acquire the vehicle state data and the image data;
a server adapted to perform the method according to any one of claims 1-11, and to correlate said driving state data, said vehicle state data and said image data to obtain annotation data indicating dangerous driving behaviors.
13. The data acquisition system of claim 12, wherein the data acquisition device is further adapted to be powered on and started automatically through the vehicle-mounted device.
14. A data acquisition system as claimed in claim 12 or 13, wherein the data acquisition device is further adapted to,
after starting, send heartbeat signals to the server at intervals of a first duration so that the server can monitor the networking state of the data acquisition device; and
after receiving the trip identification from the server, start acquiring the vehicle state data and the image data as first trip data, and send the first trip data to the server every second duration.
15. The data acquisition system of claim 12, wherein the mobile terminal is further adapted to,
after determining an acquisition task, acquire configuration information corresponding to the acquisition task from the server, wherein the configuration information comprises at least: the event type, event identification, expected execution time, and instruction template of each event in the acquisition task; and
in response to a user's selection of one of the events, generate and output event description information of the event based on the instruction template corresponding to the event, so as to guide the user to execute the event according to the event description information.
16. The data acquisition system of claim 15, wherein the mobile terminal is further adapted to,
generate a start time of an event in response to a user's selection of the event in the acquisition task; and
generate an end time of the event in response to the user's confirmation operation after the event is executed.
17. The data acquisition system of claim 12, wherein,
the data acquisition device further comprises a two-dimensional code image, so that the mobile terminal can be bound with the data acquisition device by scanning the two-dimensional code image.
18. A computing device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-11.
19. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the method of any of claims 1-11.
CN202110895907.9A 2021-08-05 2021-08-05 Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system Active CN113591744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110895907.9A CN113591744B (en) 2021-08-05 2021-08-05 Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110895907.9A CN113591744B (en) 2021-08-05 2021-08-05 Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system

Publications (2)

Publication Number Publication Date
CN113591744A CN113591744A (en) 2021-11-02
CN113591744B true CN113591744B (en) 2024-03-22

Family

ID=78255397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110895907.9A Active CN113591744B (en) 2021-08-05 2021-08-05 Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system

Country Status (1)

Country Link
CN (1) CN113591744B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724366B (en) * 2022-03-29 2023-06-20 北京万集科技股份有限公司 Driving assistance method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077819A (en) * 2014-06-17 2014-10-01 深圳前向启创数码技术有限公司 Remote monitoring method and system based on driving safety
CN107784587A (en) * 2016-08-25 2018-03-09 大连楼兰科技股份有限公司 A kind of driving behavior evaluation system
CN109816811A (en) * 2018-10-31 2019-05-28 杭州云动智能汽车技术有限公司 A kind of nature driving data acquisition device
CN110447214A (en) * 2018-03-01 2019-11-12 北京嘀嘀无限科技发展有限公司 A kind of system, method, apparatus and storage medium identifying driving behavior

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019518287A (en) * 2016-06-13 2019-06-27 ジーボ インコーポレーテッドXevo Inc. Method and system for car parking space management using virtual cycle

Also Published As

Publication number Publication date
CN113591744A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US11568492B2 (en) Information processing apparatus, information processing method, program, and system
US11142190B2 (en) System and method for controlling autonomous driving vehicle
US20200410789A1 (en) Recording device, recording method, and computer program
CN112544071B (en) Video splicing method, device and system
US10899358B2 (en) Vehicle driver monitoring system and method for capturing driver performance parameters
CN107305561B (en) Image processing method, device and equipment and user interface system
JP6603506B2 (en) Parking position guidance system
CN110875937A (en) Information pushing method and system
KR20130082874A (en) Support system for road drive test and support method for road drive test usgin the same
CN107146439A (en) Restricted driving reminding method, restricted driving prompt system and car-mounted terminal
CN113591744B (en) Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system
CN110509931B (en) Information display method, device and system for voice question answering
CN113611007B (en) Data processing method and data acquisition system
CN112164224A (en) Traffic information processing system, method, device and storage medium for information security
CN111721315A (en) Information processing method and device, vehicle and display equipment
CN114935334A (en) Method and device for constructing topological relation of lanes, vehicle, medium and chip
CN113628360B (en) Data acquisition method and system
JP6619316B2 (en) Parking position search method, parking position search device, parking position search program, and moving object
CN112185157B (en) Roadside parking space detection method, system, computer equipment and storage medium
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN114880408A (en) Scene construction method, device, medium and chip
JP6861562B2 (en) Image sharing system, image sharing server and image sharing method
KR100938549B1 (en) Accident verification system and method based on black box
CN111475233A (en) Information acquisition method, graphic code generation method and device
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant