CN113628360A - Data acquisition method and system - Google Patents

Data acquisition method and system

Info

Publication number
CN113628360A
Authority
CN
China
Prior art keywords
data
event
vehicle
acquisition
events
Prior art date
Legal status
Granted
Application number
CN202110896379.9A
Other languages
Chinese (zh)
Other versions
CN113628360B (en)
Inventor
张源源
苏锦华
汪磊
唐锐猊
张鸿飞
Current Assignee
Beijing Minmin Car Service Network Technology Co ltd
Original Assignee
Beijing Minmin Car Service Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Minmin Car Service Network Technology Co ltd filed Critical Beijing Minmin Car Service Network Technology Co ltd
Priority to CN202110896379.9A priority Critical patent/CN113628360B/en
Publication of CN113628360A publication Critical patent/CN113628360A/en
Application granted granted Critical
Publication of CN113628360B publication Critical patent/CN113628360B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/008 - Registering or indicating the working of vehicles communicating information to a remotely located station
    • G07C5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 - Registering performance data
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a data acquisition system, comprising: a vehicle-mounted device adapted to acquire vehicle state data while an acquisition task is executed, the acquisition task comprising a plurality of events of different types, each pointing to a dangerous driving behavior; at least one camera adapted to acquire image data while the acquisition task is executed; a mobile terminal arranged in the vehicle, adapted to collect driving state data while the acquisition task is executed, and further adapted to be bound with a data acquisition device through a server; the data acquisition device, arranged on the vehicle and coupled to the vehicle-mounted device and the at least one camera respectively, so as to acquire the vehicle state data and the image data; and the server, adapted to associate the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors. With the data acquisition system, high-quality annotation data for dangerous driving behaviors can be obtained.

Description

Data acquisition method and system
Technical Field
The invention relates to the technical field of computers, in particular to a data acquisition method and a data acquisition system.
Background
With the continuous development of the automobile industry and people's rising expectations for the driving experience, the demand for avoiding traffic risks grows day by day. Recognizing dangerous driving behavior is an important means of evaluating a driver's risk and preventing traffic accidents.
Meanwhile, dangerous driving behavior recognition has a wide range of application scenarios. For example, with the development of technologies such as the Internet of Vehicles and automatic driving, applications such as big-data monitoring, fleet monitoring, driving behavior risk assessment and driving assistance services are all quietly building out travel monitoring technology, and among travel monitoring technologies, dangerous driving behavior recognition is currently the most technically demanding and has the highest practical application value. For another example, freight truck fleets are a group with a high incidence of traffic accidents, and how to monitor and avoid the traffic risks of a freight fleet is a problem the freight industry urgently needs to solve.
On the other hand, dangerous driving behavior is complex and uncertain. In the prior art, it can be defined simply by the variation patterns of kinematic quantities such as acceleration and angular velocity, or determined by the probability that a driving behavior will induce a future accident. However, because of differences in personal perception, the perceived risk of the same dangerous driving behavior differs across groups; meanwhile, because of the complexity of drivers' habits and road conditions, the objective risk corresponding to the same dangerous driving behavior also differs. To identify these differences accurately, a large amount of high-quality annotation data on dangerous driving behavior is required.
Therefore, a solution capable of acquiring high-quality dangerous driving behavior data is needed.
Disclosure of Invention
The present invention provides a data acquisition method and system in an attempt to solve or at least alleviate at least one of the problems identified above.
According to an aspect of the present invention, there is provided a data acquisition method, performed on a mobile terminal disposed in a vehicle, comprising the steps of: determining a set of collection tasks, wherein the set of collection tasks comprises a plurality of events of different types, and the events point to dangerous driving behaviors; sequentially displaying the plurality of events in a first display mode on a collection task interface; in response to the selection of one event by the user, outputting event description information of the event to guide the user to execute the event according to the event description information; collecting driving state data while the user executes the event; within a predetermined time length, in response to the user's confirmation operation after completing the event, returning to the collection task interface; and iteratively repeating the output step, the collection step and the return step until all events in the set of collection tasks have been executed.
Optionally, the method according to the invention further comprises the step of: binding, through the server, with a data collection device arranged on the vehicle, so that the server sends a trip identifier to the mobile terminal and the data collection device respectively. The step of binding, through the server, with the data collection device arranged on the vehicle further comprises: receiving configuration information corresponding to the collection task from the server, wherein the configuration information at least comprises: the event type, event identifier and expected execution time of each event in the collection task, and the instruction template corresponding to each event type, the instruction template being adapted to prompt the user with action instructions when executing the corresponding event.
Optionally, in the method according to the present invention, the step of outputting event description information of one of the events in response to a selection of the event by a user includes: responding to the selection of a user for one event, and generating event description information of the event according to the instruction template; displaying the event description information; and playing the event description information through a voice instruction.
Optionally, in the method according to the present invention, the step of generating event description information of the event according to the instruction template includes: determining an instruction template corresponding to the event based on the configuration information; writing operation data in the determined instruction template according to the event, wherein the operation data at least comprises at least one of the following data as description information of the event: speed, distance, time, direction, execution action.
Optionally, in the method according to the present invention, the step of sequentially displaying the plurality of events in the first display mode on the collection task interface further includes: determining an execution order of the plurality of events in the set of collection tasks.
Optionally, in the method according to the present invention, the step of determining an execution order of the plurality of events in the set of acquisition tasks includes: and determining the execution sequence of a plurality of events in a set of collection tasks based on road information, wherein the road information comprises static and/or dynamic information of all or part of objects in a road range.
Optionally, in the method according to the present invention, the step of determining an execution order of the plurality of events in the set of acquisition tasks includes: in response to a user selection of an execution order of the plurality of events, the execution order of the plurality of events is determined.
Optionally, in the method according to the present invention, the step of outputting, in response to a selection of one of the events by the user, event description information of the event to guide the user to execute the event according to the event description information further comprises: generating a start time of the event in response to the user's selection of the event.
Optionally, in the method according to the present invention, the step of returning to the collection task interface, within the predetermined time length, in response to the user's confirmation operation after completing the event further includes: generating an end time of the event in response to the user's confirmation operation after completing the event.
Optionally, the method according to the invention further comprises the steps of: determining the actual execution time of each event based on the start time and the end time of that event; and sending the collected driving state data and the actual execution time of each event to the server as second trip data, together with the trip identifier, so that the server associates the second trip data with the first trip data acquired by the data collection device based on the trip identifier.
Optionally, in the method according to the present invention, the step of returning to the collection task interface, within the predetermined time length, in response to the user's confirmation operation after completing the event includes: displaying the executed event on the collection task interface in a second display mode different from the first display mode.
According to another aspect of the present invention, there is provided a data acquisition system comprising: a vehicle-mounted device adapted to acquire vehicle state data while the acquisition task is executed; at least one camera adapted to acquire image data while the acquisition task is executed; a mobile terminal arranged on the vehicle, adapted to execute the above method to collect second trip data pointing to dangerous driving behaviors, and further adapted to be bound with a data acquisition device through a server; the data acquisition device, arranged on the vehicle and coupled to the vehicle-mounted device and the at least one camera respectively, so as to acquire the vehicle state data and the image data as first trip data; and the server, adapted to generate a trip identifier when the mobile terminal is bound with the data acquisition device and send the trip identifier to the mobile terminal and the data acquisition device respectively, and further adapted to associate the first trip data with the second trip data based on the trip identifier.
Optionally, in the system according to the present invention, the data acquisition device is further adapted to: send a heartbeat signal to the server at intervals of a first duration to obtain the trip identifier, where the trip identifier is generated by the server upon receiving a request from the mobile terminal to bind with the data acquisition device; acquire vehicle state data from the vehicle-mounted device and image data from the at least one camera; and send the acquired vehicle state data and image data, together with the trip identifier, to the server at intervals of a second duration, until the trip identifier is no longer obtained.
According to yet another aspect of the present invention, there is provided a computing device comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
According to a further aspect of the invention there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods described above.
In summary, according to the scheme of the present invention, different event description information is generated for different events to guide a professional to actually execute the dangerous driving behaviors; compared with annotating images after the fact, the quality of the obtained annotation data is higher.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a data acquisition system 100 according to some embodiments of the invention;
FIG. 2 illustrates a workflow diagram of the data acquisition device 110 according to one embodiment of the invention;
FIG. 3A illustrates a schematic diagram of a collection task interface according to one embodiment of the invention;
FIG. 3B is a diagram illustrating a display interface of event description information according to one embodiment of the invention;
FIG. 4 illustrates a schematic diagram of a computing device 400 according to some embodiments of the invention;
FIG. 5 shows a flow diagram of a data acquisition method 500 according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the development of mobile terminal applications and sensor hardware, and given that mobile terminals (e.g., mobile phones) are directly bound to people, mobile terminals have become monitoring devices with great potential for monitoring dangerous driving behaviors. However, although a mobile terminal is convenient for collecting information, the accuracy and quality of its sensors are not as good as those of vehicle-mounted recording devices (such as a driving recorder or an On-Board Diagnostics (OBD) system). Monitoring driving behavior with a mobile terminal is like perceiving the world with limited sensing capability, and identifying subtle behaviors that may conceal deep accident risks is a challenging task. In addition, the sensor noise of the mobile terminal and the hardware differences between models limit the mobile terminal's ability to perceive the objective, real motion state of the vehicle, which in turn hinders the identification of dangerous driving behaviors.
Both the driving recorder and the OBD system are accurate acquisition channels bound to the vehicle, but in practical scenarios it is difficult to obtain OBD data directly for identifying dangerous driving behaviors. Each automobile manufacturer develops its OBD system independently: on the one hand, because of hardware differences, the OBD data obtainable from different vehicle models differ; on the other hand, different vehicle families (European, Japanese, American) follow different OBD protocols, and some niche manufacturers even follow proprietary protocols, which makes decrypting and parsing OBD data a key difficulty. Thus, the diversity of OBD data and protocols poses challenges for collecting driving behavior data across different vehicles.
In view of the above, according to an embodiment of the present invention, a data collection system 100 is provided to collect, from multiple aspects, the state data of a vehicle and other key data of the vehicle during driving. The multi-source data are then processed to analyze the driving behavior patterns behind them, and the data capable of characterizing dangerous driving behaviors are determined. These data can serve as annotation data for subsequent analysis of dangerous driving behaviors. According to an embodiment of the invention, the dangerous driving behaviors at least include: rapid acceleration, rapid deceleration, rapid turning, playing with a mobile phone, making phone calls, and the like.
In one embodiment, an insurance company prices insurance differently for different users according to the users' risky behaviors when driving and the usage of their vehicles. This insurance model (algorithm) depends heavily on how well dangerous driving behaviors are recognized, so collecting high-quality labeled data for dangerous driving behaviors is key to improving the effect of the algorithm and the model.
In yet another embodiment, an electronic map navigation application provides a driving scoring function that scores each self-driving navigation trip of the user. The driving score is computed around the "dangerous driving behavior" events identified by the algorithm. Therefore, high-quality labeled data for dangerous driving behaviors is also key to computing accurate scores.
FIG. 1 shows a schematic diagram of a data acquisition system 100 according to one embodiment of the invention. As shown in fig. 1, the data acquisition system 100 includes: the system comprises a data acquisition device 110, at least one camera 120, a vehicle-mounted device 130, a mobile terminal 140 and a server 150. According to one implementation, the data acquisition device 110 is coupled to the camera 120 and the in-vehicle device 130, respectively. In addition, the mobile terminal 140 may be bound with the data collection device 110 through the server 150.
The onboard device 130 is, for example, an OBD box disposed on the vehicle for collecting vehicle state data. The vehicle state data includes at least one or more of the following: vehicle model, average fuel consumption, instantaneous fuel consumption, remaining driving range, vehicle speed, engine speed, light state, handbrake state, seat belt state, door state, window state, steering angle, battery voltage, water temperature, engine oil temperature, fuel percentage and battery percentage.
The mobile terminal 140 is generally disposed on the vehicle and acquires driving state data through the various sensors arranged in it, including positioning data (e.g., GNSS (Global Navigation Satellite System) data), IMU (Inertial Measurement Unit) data (e.g., acceleration, rotation angle, etc.), proximity (measured by a proximity sensor to obtain the distance between the mobile terminal 140 and an obstacle in front of it), motion state, the orientation of the phone, whether the driver is taking a call, ambient light intensity, and the like.
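Purely for illustration, the following sketch shows one possible way to organize a driving state sample collected by the mobile terminal 140 and a vehicle state sample read from the vehicle-mounted device 130; the field names, units and types are assumptions and are not prescribed by the embodiment.

    # Illustrative sketch only; field names, units and types are assumptions,
    # not a required data format of the embodiment.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DrivingStateSample:               # collected by the mobile terminal 140
        timestamp: float                    # Unix time, seconds
        lat: float                          # GNSS latitude
        lon: float                          # GNSS longitude
        speed_gnss: Optional[float]         # m/s, from GNSS
        accel: tuple = (0.0, 0.0, 0.0)      # IMU acceleration, m/s^2
        gyro: tuple = (0.0, 0.0, 0.0)       # IMU rotation rate, rad/s
        proximity: Optional[float] = None   # distance to obstacle ahead, m
        phone_orientation: str = "unknown"  # e.g. "flat", "upright"
        in_call: bool = False               # whether the driver is taking a call
        light_intensity: Optional[float] = None  # ambient light, lux

    @dataclass
    class VehicleStateSample:               # read from the vehicle-mounted device 130 (OBD box)
        timestamp: float
        speed_kmh: float
        engine_rpm: float
        steering_angle: float               # degrees
        handbrake_on: bool
        seatbelt_fastened: bool
        instantaneous_fuel: Optional[float] = None  # L/100 km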
In addition, a data collection application may be installed on the mobile terminal 140; the user selects the collection task and events by operating the application (APP) and performs inputs according to the relevant prompts, thereby realizing human-computer interaction between the user and the mobile terminal 140.
According to one embodiment of the present invention, the data acquisition system 100 comprises at least 2 cameras 120, as shown in fig. 1. One of them is disposed near the brake pedal for collecting video image data when the driver operates the brake pedal (e.g., depresses or releases the brake pedal); the other is arranged near the driver's seat for capturing video image data containing the driver's face. It should be noted that this is only an example, and the embodiment of the present invention does not limit the cameras 120. Those skilled in the art can increase or decrease the number of cameras 120, or adjust their installation positions and collection targets, according to the requirements of the collection scene.
In one embodiment, the data collection device 110 is provided as hardware external to the vehicle-mounted device 130 and is powered by the vehicle-mounted device 130. According to one embodiment of the invention, the data acquisition device 110 is secured within the vehicle near the cigarette lighter. Preferably, a plurality of screw holes are disposed at the edge of the data collecting apparatus 110 to fix it. It should be appreciated that the data collection device 110 is typically disposed around the center console of the vehicle, and embodiments of the present invention are not limited thereto.
In one embodiment, the data acquisition device 110 is implemented as a micro computing and storage device carrying a Rockchip RK3288 processor, in the form of a metal box with multiple communication interfaces. According to the embodiment of the invention, the data acquisition device 110 establishes a connection with each camera 120 through a USB communication interface, and establishes a connection with the vehicle-mounted device 130 through a CAN communication interface.
In addition, the mainboard inside the data acquisition device 110 carries multiple kinds of network communication hardware, supporting functions such as Wi-Fi, 4G and Bluetooth. Meanwhile, a detachable signal-amplifying transmitter is arranged outside the data acquisition device 110.
Further, in an embodiment according to the present invention, the data collection device 110 is loaded with a Linux operating system and is installed with a corresponding application, enabling communication with the in-vehicle device 130 and parsing the acquired data. It should be understood that the operating system carried by the data acquisition device 110 may also be a known or future known operating system such as Android or alic, which is not limited in this embodiment of the present invention.
Further, outside the data collection device 110, a two-dimensional code image is arranged (for example, without limitation, the two-dimensional code image is pasted on the data collection device 110), and the two-dimensional code, as an identifier of the data collection device 110, can be bound with the mobile terminal 140 to establish a communication connection between the mobile terminal 140 and the data collection device 110.
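As a rough illustration of this binding step from the mobile terminal side, the following sketch sends the scanned device identifier to the server to request binding and receives a trip identifier in return; the endpoint path, payload fields and use of the requests library are assumptions made for the example, not part of the disclosed protocol.

    # Hypothetical client-side sketch; the URL, payload and response format
    # are assumptions for illustration only.
    from typing import Optional
    import requests

    SERVER_URL = "https://example-server/api"  # placeholder address

    def request_binding(device_id: str, task_id: str, user_token: str) -> Optional[str]:
        """Send the QR-code device identifier to the server and return the trip
        identifier if binding succeeds, otherwise None."""
        resp = requests.post(
            f"{SERVER_URL}/bind",
            json={"device_id": device_id, "task_id": task_id},
            headers={"Authorization": f"Bearer {user_token}"},
            timeout=5,
        )
        resp.raise_for_status()
        # The server only returns a trip identifier when the device's heartbeat
        # shows it is in a normal networked state (see the binding check below).
        return resp.json().get("trip_id")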
In addition, the data collection device 110 also has a temporary storage module to store vehicle state data from the in-vehicle device 130 and various image data from the camera 120.
FIG. 2 illustrates a flow diagram of the operation of the data acquisition device 110 according to one embodiment of the present invention.
According to the embodiment of the invention, when the data acquisition device 110 is connected to the vehicle-mounted device 130, it is powered by the vehicle-mounted device 130, starts up automatically, and automatically connects to the network after starting. After startup, the data acquisition device 110 sends a heartbeat signal to the server 150 every first duration (e.g., 5 seconds) so that the server 150 monitors the heartbeat status of the data acquisition device 110.
When a user (e.g., a driver) selects a collection task on the mobile terminal 140 and scans the two-dimensional code image on the data acquisition device 110, the server 150 generates a trip identifier and returns it to the mobile terminal 140. Meanwhile, the server 150 returns the trip identifier to the data collection device 110 when it receives a new heartbeat signal; the data acquisition device 110 therefore also listens for the trip identifier via the heartbeat signal. While a collection task is being executed, the data collection device 110 keeps sending heartbeat signals to the server 150 every first duration, and, as a response to the heartbeat detection, the server 150 keeps returning the trip identifier to the data collection device 110 until the collection task is finished, at which point the data collection device 110 receives a response from the server 150 in which the trip identifier is empty.
The data acquisition device 110 monitors the trip identifier; as long as the trip identifier is not empty, the data acquisition device 110 acquires the vehicle state data from the in-vehicle device 130 and the image data from the camera 120 and caches them as the first trip data, and the first trip data is transmitted to the server 150 every second duration (e.g., 2 minutes). The first duration and the second duration are not particularly limited; in some preferred embodiments, the second duration is greater than the first duration. When the trip identifier is empty, the data acquisition device 110 stops acquisition.
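The workflow of fig. 2 can be pictured roughly as the following sketch, in which send_heartbeat(), read_obd(), read_cameras() and upload() are hypothetical placeholders for the actual heartbeat request, OBD/camera reads and upload call; the 5-second and 2-minute values are just the example durations mentioned above.

    # Simplified sketch of the data acquisition device 110 loop; the helper
    # callables are hypothetical placeholders, not a real device API.
    import time

    FIRST_DURATION = 5        # heartbeat interval, seconds (example value)
    SECOND_DURATION = 120     # upload interval, seconds (example value)

    def acquisition_loop(send_heartbeat, read_obd, read_cameras, upload):
        buffer, current_trip = [], None           # temporary storage module
        last_upload = time.monotonic()
        while True:
            trip_id = send_heartbeat()            # the heartbeat response carries
            if trip_id:                           # the current trip identifier, if any
                current_trip = trip_id
                buffer.append({"t": time.time(),
                               "obd": read_obd(),
                               "frames": read_cameras()})
                if time.monotonic() - last_upload >= SECOND_DURATION:
                    upload(current_trip, buffer)  # first trip data + trip identifier
                    buffer, last_upload = [], time.monotonic()
            elif current_trip:
                upload(current_trip, buffer)      # flush the rest, then stop this trip
                buffer, current_trip = [], None
            time.sleep(FIRST_DURATION)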
In addition, after the data acquisition device 110 starts, it also periodically (e.g., every 10 seconds) checks for a new version, and updates itself when a new version is detected.
With reference to fig. 1 and fig. 2, a brief description will be given below of how the data acquisition system 100 according to the present application generates annotation data indicating dangerous driving behavior based on the acquired data, by taking an acquisition task as an example.
It should be noted that, according to the embodiment of the present invention, when the data acquisition system 100 is used for data acquisition, an open training ground (e.g., a training ground of a driving school) is usually selected, and as many road conditions (e.g., straight road sections, curves, ramps, etc.) as possible are included in the training ground. Meanwhile, in order to ensure that the collection process is safely and effectively carried out, drivers with abundant driving experiences (such as coaches in driving schools) are selected to drive the vehicles, and designated operation in the collection task is completed.
Before the acquisition process begins, a user (e.g., a driver) logs into the data acquisition application disposed on the mobile terminal 140 and selects a set of collection tasks. A set of collection tasks is a collection of many different types of dangerous driving behavior events. Dangerous driving behaviors include rapid acceleration, rapid deceleration, rapid turning, playing with a mobile phone, making phone calls and the like, and each dangerous driving behavior can comprise various situations; for example, the rapid-acceleration type includes the following events: low-speed rapid acceleration, medium-speed rapid acceleration, high-speed rapid acceleration, rapid acceleration at a traffic light, rapid acceleration after turning, and rapid acceleration when starting. Thus, a set of collection tasks can be represented as: {3 rapid accelerations, 2 rapid turns, 2 rapid decelerations}, where rapid deceleration, rapid turning and the like are different event types, and the specific events within the same event type have different requirements, which generally refer to requirements on driving states such as vehicle speed and execution time.
After selecting a set of collection tasks, the user also needs to scan the two-dimensional code image on the data acquisition device 110 with the mobile terminal 140, and the mobile terminal 140 sends it to the server 150 to request binding. After receiving the binding request, the server 150 checks whether a binding environment exists; if it does, the server generates a trip identifier and distributes it to the mobile terminal 140 and the data acquisition device 110. At this point, the mobile terminal 140 and the data collection device 110 are in a bound state and the collection task begins. In one embodiment, checking the binding environment means verifying whether the data acquisition device 110 is in a normal networked state in which data can be uploaded; this is verified through the heartbeat detection described above: if the server 150 has received a heartbeat signal from the data acquisition device 110 within the first duration, the data acquisition device 110 is in a normal networked state. After receiving the trip identifier, the data collection device 110 instructs the in-vehicle device 130 and the camera 120 to start collecting data, and the first trip data is transmitted to the server 150 every second duration. For this process, reference can be made to the description of fig. 2, which is not repeated here.
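The binding check on the server side can be pictured roughly as follows; the in-memory dictionaries and the uuid-based trip identifier are assumptions made only for illustration.

    # Hypothetical server-side sketch of the binding check; last_heartbeat and
    # pending_trips stand in for whatever storage the server 150 actually uses.
    import time
    import uuid
    from typing import Optional

    FIRST_DURATION = 5                     # seconds, example heartbeat interval
    last_heartbeat: dict = {}              # device_id -> time of last heartbeat
    pending_trips: dict = {}               # device_id -> trip identifier to return

    def handle_bind_request(device_id: str) -> Optional[str]:
        """Bind a mobile terminal to a data collection device.

        A trip identifier is generated only if the device's latest heartbeat is
        recent enough, i.e. the device is in a normal networked state."""
        last = last_heartbeat.get(device_id)
        if last is None or time.time() - last > FIRST_DURATION:
            return None                    # no binding environment; refuse to bind
        trip_id = uuid.uuid4().hex
        pending_trips[device_id] = trip_id # delivered with the next heartbeat reply
        return trip_id                     # also returned to the mobile terminal

    def handle_heartbeat(device_id: str) -> Optional[str]:
        """Record a heartbeat and answer with the current trip identifier, if any."""
        last_heartbeat[device_id] = time.time()
        return pending_trips.get(device_id)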
In addition, while returning the trip identifier to the mobile terminal 140, the server 150 sends configuration information corresponding to the collection task to the mobile terminal 140. The configuration information at least includes: the event type, event identifier and expected execution time of each event in the collection task, and the instruction template corresponding to each event type. The instruction template is adapted to prompt the user with action instructions when executing the corresponding event.
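Purely as an illustration, the configuration information and a set of collection tasks could be organized along the following lines; the field names and example values are assumptions rather than a prescribed format.

    # Illustrative sketch; field names and example values are assumptions.
    from dataclasses import dataclass

    @dataclass
    class EventConfig:
        event_type: str           # e.g. "rapid_acceleration"
        event_id: str             # identifier of the specific event
        expected_duration_s: int  # expected execution time
        template: str             # instruction template with "____" blanks

    # A set of collection tasks such as {3 rapid accelerations, 2 rapid turns,
    # 2 rapid decelerations} is then just a list of per-event configurations:
    collection_task = [
        EventConfig("rapid_acceleration", "acc_low_speed",     120,
                    "Please bring the initial speed of the vehicle to ____ km/h ..."),
        EventConfig("rapid_acceleration", "acc_mid_speed",     120, "..."),
        EventConfig("rapid_acceleration", "acc_traffic_light", 120, "..."),
        EventConfig("rapid_turn",         "turn_left",          90, "..."),
        EventConfig("rapid_turn",         "turn_right",         90, "..."),
        EventConfig("rapid_deceleration", "dec_low_speed",     120, "..."),
        EventConfig("rapid_deceleration", "dec_mid_speed",     120, "..."),
    ]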
In one embodiment, the configuration information is pre-generated by the server 150. Taking the instruction template as an example, it may include a voice instruction template and a text instruction template, whose contents may be the same. The text instruction template is displayed on the mobile terminal 140 as text to prompt the user, while the voice instruction template prompts the user through voice playback. Most of the fixed content in the voice instruction template is recorded by real people to ensure the continuity and clarity of the voice, while the variable content (mainly numbers and scene states) satisfies the requirement for collection diversity.
In one embodiment, since rapid acceleration and rapid deceleration are both based on pedal operation, their instruction templates are similar, while the instruction templates for playing with a mobile phone and making phone calls highlight more complicated state scenes, and so on. Thus, the specific data and scenes of the instruction templates differ for different events, which highlights the differences in degree of danger and is extremely valuable for training a dangerous driving behavior recognition algorithm. Several example sets of instruction templates according to embodiments of the present invention are shown below, without limitation. The blanks indicated by the horizontal line "____" are the operation data to be written when the event description information is generated for a specific event during subsequent execution (the operation data at least includes at least one of: speed, distance, time, direction, execution action), and are not described again here.
a. Instruction template for rapid-acceleration and rapid-deceleration related events
Please fasten the seat belt and ensure a safe distance of ____ m straight ahead. Please bring the initial speed of the vehicle to ____ km/h (the initial speed is 0 for rapid acceleration when starting or rapid acceleration at a traffic light; in that case please keep the vehicle still). Now, after a 5-second countdown, please ______ (filled with actions such as step hard on the accelerator, step hard on the brake, etc.).
b. Instruction template for rapid-turn related events
Please fasten the seat belt and ensure there is a curve _____ meters ahead. Please bring the initial speed of the vehicle to ____ km/h. Now, after a 5-second countdown, please turn the steering wheel to the ___ (left or right) to pass through the curve.
c. Instruction template for phone-playing related events
Please fasten the seat belt; the phone-playing action is performed by the front passenger. Please bring the vehicle speed to ____ km/h. After the 5-second countdown, the front passenger places the mobile phone at _____ and begins playing with it (watching a video, playing a game, etc.), and keeps doing so for ____ seconds.
d. Instruction template for phone-call related events
Please fasten the seat belt; the call-answering action is performed by the front passenger. Please bring the vehicle speed to ____ km/h and place the mobile phone on the phone mount. After the 5-second countdown a test call will be made; let the phone ring for ____ seconds. Please __________ (answer the call for _____ seconds; stop the vehicle while holding the phone for ____ seconds; drive normally until the other party hangs up; drive normally and hang up; stop the vehicle, hold the phone and hang up).
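To make the template-filling step concrete, the following sketch fills the blanks of a simplified rapid-acceleration template with operation data; the {placeholder} syntax and the fill_template helper are illustrative assumptions, the actual templates being the recorded voice/text templates described above.

    # Hypothetical sketch of writing operation data into an instruction template.
    # The {placeholders} stand for the "____" blanks in the templates above.
    RAPID_ACCEL_TEMPLATE = (
        "Please fasten the seat belt and ensure a safe distance of {distance} m "
        "straight ahead. Please bring the initial speed of the vehicle to "
        "{speed} km/h. Now, after a 5-second countdown, please {action}."
    )

    def fill_template(template: str, **operation_data) -> str:
        """Write operation data (speed, distance, time, direction, action, ...)
        into the instruction template to obtain the event description."""
        return template.format(**operation_data)

    # Event description for a "low-speed rapid acceleration" event:
    description = fill_template(RAPID_ACCEL_TEMPLATE,
                                distance=100, speed=20,
                                action="step hard on the accelerator")
    print(description)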
After the collection is started, the mobile terminal 140 determines an execution sequence of a plurality of events in the collection task, and then, the collection task interface of the mobile terminal 140 sequentially displays the plurality of events corresponding to the collection task in a first display mode.
FIG. 3A shows a schematic diagram of a collection task interface according to one embodiment of the invention, illustrating the rapid-acceleration type events in one collection task. On the collection task interface, the basic information corresponding to each event, such as initial speed, training action, execution time and number of executions, may be displayed. As shown in fig. 3A, the "medium-speed rapid acceleration training" and "traffic light rapid acceleration training" events are displayed in the first display mode, while the "low-speed rapid acceleration training" event is displayed in the second display mode. The second display mode is different from the first display mode and is used for displaying executed events.
In one embodiment, the execution order of the plurality of events in the collection task is determined based on the road information. The road information comprises static and/or dynamic information of all or part of the objects within the road. For example, whether the road is wide or not and whether there is a curve or not may be determined, and whether there is an obstacle or a moving object in the road range or not may be determined. The road information may be obtained by V2X (Vehicle to X) technology, which is not limited by the embodiment of the present invention.
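Purely as an illustration, one possible heuristic for ordering events by road information is sketched below; the road_info fields and the priority rule are assumptions, since the embodiment does not prescribe a concrete ordering strategy.

    # Hypothetical ordering heuristic; the road_info fields are assumptions.
    def order_events(events, road_info):
        """Put straight-road events first when a long open straight section is
        available, and curve-dependent events first when a curve is close by."""
        needs_curve = {"rapid_turn"}
        def priority(event):
            if event.event_type in needs_curve:
                return 0 if road_info.get("curve_ahead") else 1
            return 0 if road_info.get("long_straight") else 1
        return sorted(events, key=priority)

    # ordered = order_events(collection_task,
    #                        {"long_straight": True, "curve_ahead": False})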
In another embodiment, the execution order of the plurality of events may also be selected by the driver according to the actual road conditions. For example, on an open road segment, the driver may choose to perform a rapid-acceleration event on a long, open straight section, and then perform a rapid-turn event before reaching the intersection.
The driver then selects one of the events on the collection task interface. Generally, the driver selects the events to perform according to the display order, but is not limited thereto. The mobile terminal 140 outputs event description information of the selected event to guide the user to execute the event according to the event description information. Specifically, the mobile terminal 140 generates event description information of the event according to the instruction template corresponding to the event type. After that, the event description information is displayed on the interface, and fig. 3B illustrates a schematic diagram of a display interface of the event description information according to an embodiment of the present invention. The event description information may include safety precautions at the time of collection, driving detail requirements, and the like. As shown in FIG. 3B, the substeps illustrate specific operating requirements in the course of executing a "low speed rapid acceleration" event. Meanwhile, the mobile terminal 140 plays the event description information through a voice command.
In one embodiment, after the collection of a specific event is started, a 30-second voice broadcast prompts the safety precautions and driving details during collection, followed by a countdown of a certain duration (the duration is determined by the collection requirement of the specific event, and this predetermined duration can also be part of the event description information). The driver performs the operation according to the event description information during the countdown, and after completing the requirement can perform a confirmation operation to indicate that the event has been fully executed. Continuing with FIG. 3B, the driver can slide the "training completed" control on the interface, and the mobile terminal 140 receives the driver's confirmation.
During the course of the driver performing the event, the corresponding sensors in the mobile terminal 140 collect driving state data of the vehicle.
Meanwhile, in this process, in response to the selection of the event by the user, the mobile terminal 140 records the current time as the start time of the event; in response to the user's confirmation after completing the event, the mobile terminal 140 records the current time as the end time of the event. And then, the period of time corresponding to the starting time and the ending time of the event is used as the actual execution time of the event.
Optionally, the mobile terminal 140 stores the event identifier, the driving state data, and the actual execution time corresponding to the event in an associated manner.
After an event is completed, the mobile terminal 140 returns to the collection task interface and displays the executed event in a second display mode different from the first display mode, the executed event being in an unselectable state. As shown in fig. 3A, the "low-speed rapid acceleration" event is an executed event and is in an unselectable state. The driver then selects other events for collection. If the collection countdown ends before the driver has completed the corresponding collection requirement, the driver needs to mark the event as incomplete or failed, and failed data will not be uploaded. The driver can select the incomplete event again under suitable conditions and collect it again.
The driver repeats the above process until all specific events have been collected, and can end the collection task after all events have been executed. When the collection task is completed, the mobile terminal 140 sends the event identifier, the driving state data and the actual execution time of each event to the server 150 as second trip data.
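As a rough illustration only, the second trip data assembled by the mobile terminal 140 at the end of the collection task might look like the following; the field names and payload layout are assumptions rather than a mandated format.

    # Illustrative sketch; the payload layout is an assumption, not a mandated format.
    def build_second_trip_data(trip_id, event_records, driving_state_samples):
        """event_records: list of dicts with event_id, start_time and end_time,
        recorded at event selection and at the user's confirmation respectively."""
        events = []
        for rec in event_records:
            events.append({
                "event_id": rec["event_id"],
                "start_time": rec["start_time"],
                "end_time": rec["end_time"],
                # actual execution time = interval between start and end
                "actual_execution_time": rec["end_time"] - rec["start_time"],
            })
        return {
            "trip_id": trip_id,                  # lets the server associate this
            "events": events,                    # payload with the first trip data
            "driving_state": driving_state_samples,
        }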
On the server 150 side, the first trip data is received from the data acquisition device 110 every second duration, and the second trip data is received from the mobile terminal 140 when the collection task ends. In this way, the server 150 can associate the first trip data with the second trip data based on the trip identifier of the current collection task.
Within one collection task, since there is a certain time interval (usually less than 5 seconds) between the moment the mobile terminal 140 receives the trip identifier and the moment the data collection device 110 receives it, the collection start times of the first trip data and the second trip data may differ. Likewise, the mobile terminal 140 ends collection after receiving the user's confirmation operation, while the data acquisition device 110 ends collection when it no longer receives the trip identifier, so the collection end times of the first trip data and the second trip data also differ. In addition, constrained by network transmission, the data may be split into multiple segments for uploading to the server 150. Furthermore, the first trip data and the second trip data involve multi-modal data, including but not limited to GPS positioning data, OBD data, video image data and instruction template data, and the acquisition frequencies of different types of data are not consistent. Therefore, in the embodiment according to the present invention, aligning the first trip data and the second trip data in time and space is very important for obtaining high-quality annotation data.
In one embodiment, considering that the actual execution time of the dangerous driving behavior is controlled by the instructions (i.e., the data of the instruction template), which is reflected in the human-machine interaction between the mobile terminal 140 and the driver, the server 150 first determines the time interval corresponding to the execution of the collection task. In one embodiment, the server 150 determines the expected execution time of each event from the configuration information, then judges whether the expected execution time of each event is consistent with its actual execution time; if they are consistent, the actual execution time is taken as the time interval corresponding to that event.
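By way of illustration, this consistency check can be read as follows: if the actual execution time of an event is close enough to its expected execution time, that interval is used to cut the trip data. The 20% tolerance in the sketch is an assumption; the embodiment only requires that the two times be consistent.

    # Illustrative sketch; the 20% tolerance is an assumption made for the example.
    def event_time_interval(expected_duration_s, start_time, end_time,
                            tolerance=0.2):
        """Return (start_time, end_time) if the actual duration is consistent
        with the expected one, otherwise None (the event is not used)."""
        actual = end_time - start_time
        if abs(actual - expected_duration_s) <= tolerance * expected_duration_s:
            return (start_time, end_time)
        return None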
Then, the first trip data and the second trip data are processed respectively according to the determined time interval to obtain corresponding third trip data and fourth trip data. In one embodiment, the data falling within the time interval is cut from the vehicle state data and the image data respectively to serve as the third trip data, and the data falling within the time interval is cut from the driving state data to serve as the fourth trip data.
Then, the third trip data and the fourth trip data are aligned based on the acquisition frequencies of the first trip data and the second trip data, and the aligned third trip data and fourth trip data are obtained as the annotation data. In one embodiment, among the sensors of the mobile terminal 140, the GNSS acquisition frequency is 1 Hz and the IMU acquisition frequency is up to 10 Hz; the acquisition frequency of the camera 120 is typically 20-30 Hz; and the acquisition frequency of the in-vehicle device 130 is 10 Hz. Based on the above acquisition frequencies, the alignment frequency is determined to be 1 Hz. The third trip data and the fourth trip data are then sampled at the alignment frequency to obtain the aligned third trip data and fourth trip data as the annotation data.
In other embodiments, considering situations such as uncertain acquisition frequencies or missing data caused by GPS signal loss and other problems, after the third trip data and the fourth trip data are obtained through sampling, the missing data is supplemented by interpolation, and the interpolated data is used as the final annotation data.
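As a sketch of this alignment step, assuming each data stream is a list of (timestamp, value) pairs, the trip data can be cut to the event interval, resampled on a regular grid at the 1 Hz alignment frequency, and linearly interpolated where samples are missing; the helper below is an illustration under those assumptions, not the processing chain of the embodiment itself.

    # Illustrative alignment sketch; the (timestamp, value) stream format and
    # linear interpolation are assumptions made for the example.
    def align_stream(stream, start, end, freq_hz=1.0):
        """Cut a (timestamp, value) stream to [start, end] and resample it on a
        regular grid at freq_hz, filling gaps by linear interpolation."""
        step = 1.0 / freq_hz
        clipped = [(t, v) for t, v in stream if start <= t <= end]
        if not clipped:
            return []
        aligned = []
        t = start
        while t <= end:
            # find the neighbouring samples around t and interpolate between them
            before = max((p for p in clipped if p[0] <= t), default=clipped[0],
                         key=lambda p: p[0])
            after = min((p for p in clipped if p[0] >= t), default=clipped[-1],
                        key=lambda p: p[0])
            if after[0] == before[0]:
                value = before[1]
            else:
                w = (t - before[0]) / (after[0] - before[0])
                value = before[1] + w * (after[1] - before[1])
            aligned.append((t, value))
            t += step
        return aligned

    # third_trip = align_stream(obd_speed_stream, start, end)    # 10 Hz down to 1 Hz
    # fourth_trip = align_stream(gnss_speed_stream, start, end)  # 1 Hz, gaps filled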
According to the data acquisition system 100 of the present invention, in addition to the external devices (e.g., the data acquisition device 110 and the on-board device 130 and the camera 120 coupled thereto) connected to the vehicle central control system for analyzing and uploading OBD and other data, the mobile terminal 140 is added, and data change characteristics corresponding to dangerous driving behaviors are completely depicted from multiple angles. Moreover, the whole set of multi-source data acquisition process is simplified, and the understanding cost and the communication cost of an acquirer are reduced.
In addition, according to the data acquisition system 100 of the present invention, instruction templates are generated from different event configurations, and drivers from a professional driving school actually execute the dangerous driving behaviors according to the instruction templates; compared with annotating images after the fact, the quality of the obtained annotation data is higher. Meanwhile, the configuration information of the events can be understood as finer-grained annotation information, which is extremely valuable for researching and optimizing dangerous driving behavior recognition.
In addition, according to the data acquisition system 100 of the present invention, the acquired data are coupled in time and space, which ensures the alignment and quality verification of the data.
According to one embodiment of the invention, the data acquisition system 100 and portions thereof may be implemented by one or more computing devices. FIG. 4 shows a schematic block diagram of a computing device 400 according to one embodiment of the invention.
As shown in FIG. 4, in a basic configuration 402, a computing device 400 typically includes a system memory 406 and one or more processors 404. A memory bus 408 may be used for communicating between the processor 404 and the system memory 406.
Depending on the desired configuration, processor 404 may be any type of processing, including but not limited to: a microprocessor (μ P), a microcontroller (μ C), a Digital Signal Processor (DSP), or any combination thereof. Processor 404 may include one or more levels of cache, such as a level one cache 410 and a level two cache 412, a processor core 414, and registers 416. The example processor core 414 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 418 may be used with the processor 404, or in some implementations the memory controller 418 may be an internal part of the processor 404.
Depending on the desired configuration, system memory 406 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 406 may include an operating system 420, one or more applications 422, and data 424. In some implementations, the application 422 can be arranged to execute instructions on an operating system with the data 424 by one or more processors 404.
Computing device 400 also includes storage 432, storage 432 including removable storage 436 and non-removable storage 438, each of removable storage 436 and non-removable storage 438 connected to a storage interface bus 434.
Computing device 400 may also include an interface bus 440 that facilitates communication from various interface devices (e.g., output devices 442, peripheral interfaces 444, and communication devices 446) to the basic configuration 402 via bus/interface controller 430. The example output device 442 includes a graphics processing unit 448 and an audio processing unit 450. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 452. Example peripheral interfaces 444 may include a serial interface controller 454 and a parallel interface controller 456, which may be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 458. An example communication device 446 may include a network controller 460, which may be arranged to facilitate communications with one or more other computing devices 462 over a network communication link via one or more communication ports 464.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media, such as carrier waves or other transport mechanisms, in a modulated data signal. A "modulated data signal" may be a signal that has one or more of its data set or its changes made in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or private-wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In general, computing device 400 may be implemented as part of a small-form factor portable (or mobile) electronic device such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset, an application specific device, or a hybrid device that include any of the above functions. In one embodiment according to the invention, the computing device 400 may also be implemented as a micro-computing module or the like. The embodiments of the present invention are not limited thereto.
In an embodiment in accordance with the invention, the computing device 400 is configured to perform a data acquisition method, and/or a data processing method in accordance with the invention. Among other things, application 422 of computing device 400 includes a plurality of program instructions that implement the above-described methods according to the present invention.
FIG. 5 shows a flow diagram of a method 500 for generating annotation data for dangerous driving behavior according to one embodiment of the invention. The method 500 is suitable for execution in the mobile terminal 140. It should be noted that the method 500 is complementary to the foregoing, and repeated portions are not described in detail.
As shown in fig. 5, the method 500 begins at step S510. In step S510, a set of acquisition tasks is determined. The set of collection tasks comprises a plurality of different types of events, and the different events point to different dangerous driving behaviors.
In one embodiment, when a user successfully logs into a data collection application, a plurality of collection tasks are displayed to the user on a task list interface of the mobile terminal 140. From which a set of acquisition tasks is selected for execution by the user.
As mentioned above, after determining the acquisition task, the method further comprises the steps of: the mobile terminal 140 is bound with the data collection device 110 disposed on the vehicle through the server 150, so that the server 150 transmits the trip identification to the mobile terminal 140 and the data collection device 110, respectively.
Then, the mobile terminal 140 receives configuration information corresponding to the collection task from the server 150, where the configuration information at least includes: the event type, event identifier and expected execution time of each event in the collection task, and the instruction template corresponding to each event type. The instruction template is adapted to prompt the user with action instructions when executing the corresponding event.
Subsequently, in step S520, a plurality of events are sequentially displayed in a first display manner on the collection task interface.
As shown in fig. 3A, the display mode for the "middle-speed and rapid-acceleration training" and "traffic light and rapid-acceleration training" events is the first display mode. Meanwhile, the display mode of the event of 'low-speed rapid acceleration training' is the second display mode. The second display mode is different from the first display mode and is used for displaying the executed events.
According to the embodiment of the invention, the following ways are provided for determining the execution sequence of a plurality of events in a set of acquisition tasks.
In one embodiment, an order of execution of a plurality of events in a set of collection tasks is determined based on road information. The road information comprises static and/or dynamic information of all or part of the objects within the road.
In yet another embodiment, the order of execution of the plurality of events is determined in response to a user selection of the order of execution of the plurality of events.
Subsequently, in step S530, in response to the selection of one of the events by the user, event description information of the event is output to guide the user to execute the event according to the event description information.
First, in response to a user's selection of one of the events, event description information of the event is generated according to an instruction template.
In one embodiment, based on the configuration information, an instruction template corresponding to each event is determined. Then, according to the event, writing operation data in the determined instruction template, wherein the operation data at least comprises at least one of the following data: speed, distance, time, direction, perform an action, etc. For the instruction templates, refer to the related description in the foregoing, and take the instruction templates for the events related to rapid acceleration and rapid deceleration as an example, the place indicated by the horizontal line "____", that is, the operation data that needs to be written, is not described herein again.
Next, the event description information is displayed through the display screen of the mobile terminal 140; and simultaneously, playing the event description information through a voice instruction. Referring to FIG. 3B, event description information is shown, according to one embodiment of the present invention.
Further, in the embodiment according to the present invention, in response to the user' S selection of one of the events, the mobile terminal 140 also generates a start time of the event when step S530 is performed.
Subsequently, in step S540, driving state data at the time when the user performs the event is collected.
For the description of the driving state data, reference may be made to the description related to fig. 1, and details are not repeated here.
Then, in step S550, in response to the user's confirmation operation after the event is completed, the process returns to the collection task interface.
As shown in fig. 3B, after completing the event, the user performs the 'slide to complete training' operation indicated on the interface, and the mobile terminal thereby receives the user's confirmation operation. According to an embodiment of the present invention, the mobile terminal 140 generates the end time of the event in response to this confirmation operation.
As described in step S520, the executed event is displayed in a second display manner different from the first display manner on the collection task interface.
In addition, the predetermined time length is generally the countdown time length set in the event description information, and in some embodiments it may be shown on the display interface of the event description information. As shown in fig. 3B, the predetermined time length is displayed on the interface as 'execution time: 2 minutes'.
Subsequently, in step S560, the output step (i.e., step S530), the collection step (i.e., step S540) and the return step (i.e., step S550) are repeated iteratively until all the events in the set of collection tasks have been executed.
After all the events in the set of collection tasks have been executed, the method further comprises the following steps: determining the actual execution time of each event based on the start time and the end time of that event; specifically, the difference between the end time and the start time is taken as the actual execution time of the event. The collected driving state data and the actual execution time of each event are then sent to the server 150, as second trip data, together with the trip identifier. In this way, the server 150 may associate the second trip data with the first trip data acquired via the data collection device 110 based on the trip identifier.
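The post-task bookkeeping above amounts to a subtraction per event plus packaging. The sketch below is illustrative only; all field names are assumptions.

```python
import json
import time


def finish_task(events, driving_state_data, trip_id):
    """Derive each event's actual execution time (end - start) and bundle it
    with the driving state data and the trip identifier as second trip data
    (hypothetical field names)."""
    durations = {e["event_id"]: e["end_time"] - e["start_time"] for e in events}
    second_trip_data = {
        "trip_id": trip_id,
        "driving_state_data": driving_state_data,
        "actual_execution_time": durations,
    }
    return json.dumps(second_trip_data)


# Example: one event that took roughly two minutes.
now = time.time()
events = [{"event_id": "E1", "start_time": now - 120, "end_time": now}]
payload = finish_task(events, {"accel": [0.1, 0.4]}, trip_id="abc123")
```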
According to the data acquisition method 500 of the present invention, sensor data from the mobile terminal 140 is added to the data from the external devices connected to the vehicle central control system (such as the data acquisition device 110 and the on-board device 130 and camera 120 coupled to it) that parse and upload OBD and other data, so that the data change characteristics corresponding to dangerous driving behaviors are depicted completely and from multiple angles. Moreover, the whole multi-source data acquisition process is simplified, reducing the understanding and communication costs for the acquisition personnel.
In addition, according to the data acquisition method 500 of the present invention, different event description information is generated for different events to guide professionals in actually performing the dangerous driving behaviors, and the resulting data annotation quality is higher than that obtained by annotating the images afterwards.
The invention also discloses:
a7, the method of a6, wherein the step of determining an order of execution of a plurality of events in a set of acquisition tasks comprises: and determining the execution sequence of a plurality of events in the set of collection tasks based on road information, wherein the road information comprises static and/or dynamic information of all or part of objects in a road range. A8, the method as claimed in a6 or 7, wherein the step of determining an execution order of a plurality of events in a set of acquisition tasks comprises: determining an execution order of the plurality of events in response to a user selection of the execution order of the plurality of events. A9, the method according to any one of a1-8, wherein the step of outputting event description information of an event in response to a user's selection of the event to guide the user to execute the event according to the event description information further comprises: in response to a user selection of one of the events, a start time for the event is generated. A10, the method as in A9, wherein the step of returning to the collection task interface in response to the user confirming after completing the event within the predetermined time period further comprises: and responding to the confirmation operation of the user after the event is finished, and generating the end time of the event. A11, the method according to A10, wherein after the group of collection tasks are all executed, the method further comprises the following steps: correspondingly determining the actual execution time of each event based on the starting time and the ending time of each event; and sending the acquired driving state data and the actual execution time of each event as second travel data and the travel identifier to the server, so that the server can associate the second travel data with the first travel data acquired by the data acquisition equipment based on the travel identifier. A12, the method of any one of A1-11, wherein the step of returning to the collection task interface in response to the user performing a confirmation operation after completing the event within the predetermined length of time comprises: displaying the executed events on the collection task interface in a second display mode different from the first display mode.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of carrying out the described functions. A processor having the necessary instructions for carrying out such a method or method element thus forms a means for carrying out the method or method element. Further, the elements of the apparatus embodiments described herein are examples of apparatus for implementing the functions performed by those elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A data collection method performed on a mobile terminal disposed in a vehicle, the method comprising the steps of:
determining a group of acquisition tasks, wherein the group of acquisition tasks comprises a plurality of events of different types, the events being directed to dangerous driving behaviors;
sequentially displaying the plurality of events in a first display mode on an acquisition task interface;
in response to a user's selection of one of the events, outputting event description information of the event to guide the user to execute the event according to the event description information;
collecting driving state data when a user executes the event;
returning to the collection task interface in response to a confirmation operation performed by the user after completing the event within a preset time length; and
iteratively repeating the outputting step, the collecting step and the returning step until all the events in the group of acquisition tasks have been executed.
2. The method of claim 1, wherein, after the step of determining a group of acquisition tasks and before the step of sequentially displaying the plurality of events, the method further comprises the following step:
and binding the travel identifier with data acquisition equipment arranged on the vehicle through a server so that the server respectively sends the travel identifier to the mobile terminal and the data acquisition equipment.
3. The method of claim 2, wherein the step of binding, by the server, with a data collection device disposed on the vehicle further comprises:
receiving configuration information corresponding to the collection task from the server, wherein the configuration information at least comprises the event type, the event identifier, the expected execution time and the instruction template corresponding to the event type of each event in the collection task, the instruction template being adapted to prompt the user with action instructions when the corresponding event is executed.
4. The method of claim 3, wherein the step of outputting event description information of one of the events in response to a user's selection of the event comprises:
in response to a user's selection of one of the events, generating event description information of the event according to the instruction template;
displaying the event description information; and
playing the event description information as a voice prompt.
5. The method of claim 4, wherein the generating of the event description information of the event according to the instruction template comprises:
determining an instruction template corresponding to the event based on the configuration information;
writing operation data into the determined instruction template according to the event, as the description information of the event, wherein the operation data comprises at least one of the following: speed, distance, time, direction and execution action.
6. The method of any of claims 1-5, wherein the step of sequentially displaying a plurality of events in a first display mode on the collection task interface further comprises: a step of determining an execution order of a plurality of events in the set of acquisition tasks.
7. A data acquisition system comprising:
the vehicle-mounted equipment is suitable for acquiring vehicle state data in the process of executing the acquisition task;
the camera is suitable for acquiring image data in the process of executing the acquisition task;
a mobile terminal, arranged on a vehicle, adapted to perform the method according to any one of claims 1 to 6 so as to collect second travel data directed to dangerous driving behaviors, and further adapted to be bound with a data acquisition device by means of a server;
a data acquisition device disposed on the vehicle and coupled to the vehicle-mounted equipment and the camera, respectively, to acquire the vehicle state data and the image data as first travel data;
the server is suitable for generating a travel identifier when the mobile terminal is bound with the data acquisition device and sending the travel identifier to the mobile terminal and the data acquisition device, respectively, and is further suitable for associating the first travel data with the second travel data based on the travel identifier.
8. The data acquisition system of claim 7, wherein the data acquisition device is further adapted to,
sending a heartbeat signal to the server at intervals of a first time length to obtain a travel identifier, wherein the travel identifier is generated when the server receives, from the mobile terminal, a request to be bound with the data acquisition device;
acquiring vehicle state data from the vehicle-mounted equipment and the image data from the at least one camera;
and sending the acquired vehicle state data and image data, together with the travel identifier, to the server at intervals of a second time length, until the travel identifier can no longer be obtained.
9. A computing device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-6.
10. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-6.
CN202110896379.9A 2021-08-05 2021-08-05 Data acquisition method and system Active CN113628360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110896379.9A CN113628360B (en) 2021-08-05 2021-08-05 Data acquisition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110896379.9A CN113628360B (en) 2021-08-05 2021-08-05 Data acquisition method and system

Publications (2)

Publication Number Publication Date
CN113628360A true CN113628360A (en) 2021-11-09
CN113628360B CN113628360B (en) 2023-05-26

Family

ID=78382902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110896379.9A Active CN113628360B (en) 2021-08-05 2021-08-05 Data acquisition method and system

Country Status (1)

Country Link
CN (1) CN113628360B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0231607A1 (en) * 1986-01-15 1987-08-12 James V. Zaleski Apparatus and method for testing auto electronics systems
CN103164885A (en) * 2011-12-16 2013-06-19 上海博泰悦臻电子设备制造有限公司 Driving behavior control system
CN104504922A (en) * 2014-12-31 2015-04-08 北京赛维安讯科技发展有限公司 Traffic information sharing method, vehicle-mounted equipment and system
US20150363797A1 (en) * 2014-06-13 2015-12-17 Atieva, Inc. Vehicle Test System
KR20160109616A (en) * 2015-03-12 2016-09-21 주식회사 에코트루먼트 System For Collecting And Analyzing Big Data By Monitoring Car's And Road's Conditions
CN106441340A (en) * 2015-08-06 2017-02-22 平安科技(深圳)有限公司 Running track prompt method, vehicle and electronic equipment
US20180017950A1 (en) * 2016-07-15 2018-01-18 Baidu Online Network Technology (Beijing) Co., Ltd . Real vehicle in-the-loop test system and method
CN207955524U (en) * 2018-03-12 2018-10-12 北京汽车研究总院有限公司 A kind of vehicle
CN110406541A (en) * 2019-06-12 2019-11-05 天津五八到家科技有限公司 Driving data processing method, equipment, system and storage medium
JP2020021408A (en) * 2018-08-03 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Dangerous vehicle information collection method, dangerous vehicle information collection system, dangerous vehicle information collection program
CN112596972A (en) * 2020-12-23 2021-04-02 文思海辉智科科技有限公司 Vehicle-mounted equipment testing method, device and system and computer equipment


Also Published As

Publication number Publication date
CN113628360B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
US11568492B2 (en) Information processing apparatus, information processing method, program, and system
US10504304B1 (en) Crowd-sourced driver grading
US10032360B1 (en) In-vehicle apparatus for early determination of occupant injury
CN106573546A (en) Presenting routing information for electric vehicles
CN106161502A (en) Mobile communication system and control method, auxiliary terminal and vehicle
CN104494534A (en) Vehicle transportation management system and method
CN110875937A (en) Information pushing method and system
JP6603506B2 (en) Parking position guidance system
KR20130082874A (en) Support system for road drive test and support method for road drive test usgin the same
CN113611007B (en) Data processing method and data acquisition system
CN112164224A (en) Traffic information processing system, method, device and storage medium for information security
CN113591744B (en) Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system
CN114935334A (en) Method and device for constructing topological relation of lanes, vehicle, medium and chip
JP2009090927A (en) Information management server, parking assist device, navigation system equipped with parking assist device, information management method, parking assist method, information management program, parking assist program, and record medium
JP6619316B2 (en) Parking position search method, parking position search device, parking position search program, and moving object
CN113329079A (en) Vehicle unlocking method, server and terminal
US11734967B2 (en) Information processing device, information processing method and program
JP2013109704A (en) Real estate patrol management system and real estate patrol management method
CN113628360B (en) Data acquisition method and system
CN109916420B (en) Vehicle navigation method and related device
CN113706915A (en) Parking prompting method, device, equipment and storage medium
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
JP4866061B2 (en) Information recording apparatus, information recording method, information recording program, and computer-readable recording medium
CN114880408A (en) Scene construction method, device, medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant