CN113611007B - Data processing method and data acquisition system - Google Patents

Data processing method and data acquisition system

Info

Publication number
CN113611007B
Authority
CN
China
Prior art keywords
data
travel
acquisition
vehicle
aligned
Prior art date
Legal status
Active
Application number
CN202110896380.1A
Other languages
Chinese (zh)
Other versions
CN113611007A (en)
Inventor
张源源
苏锦华
汪磊
唐锐猊
张鸿飞
Current Assignee
Beijing Minmin Car Service Network Technology Co ltd
Original Assignee
Beijing Minmin Car Service Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Minmin Car Service Network Technology Co ltd
Priority to CN202110896380.1A
Publication of CN113611007A
Application granted
Publication of CN113611007B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/008 - Registering or indicating the working of vehicles communicating information to a remotely located station
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Abstract

The invention discloses a data acquisition system, comprising: a vehicle-mounted device adapted to acquire vehicle state data while an acquisition task is executed, the acquisition task comprising a plurality of different types of events directed to dangerous driving behaviors; at least one camera adapted to acquire image data while the acquisition task is executed; a mobile terminal arranged on the vehicle, adapted to collect driving state data while the acquisition task is executed, and further adapted to be bound with a data acquisition device through a server; the data acquisition device, arranged on the vehicle and coupled with the vehicle-mounted device and the at least one camera respectively, to acquire the vehicle state data and the image data; and the server, adapted to associate the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors. With this data acquisition system, high-quality annotation data for dangerous driving behaviors can be obtained.

Description

Data processing method and data acquisition system
Technical Field
The invention relates to the technical field of computers, in particular to a data processing method and a data acquisition system.
Background
With the continuous development of the automobile industry and people's rising expectations for the driving experience, the demand for avoiding traffic risks grows by the day. Recognizing dangerous driving behavior is an important means of evaluating a driver's risk and preventing traffic accidents.
Meanwhile, the application scenarios for identifying dangerous driving behaviors are broad. For example, with the development of technologies such as the Internet of Vehicles, automatic driving and big-data monitoring, services such as fleet monitoring, driving-behavior risk assessment and driving assistance are all quietly building on travel monitoring technology, and within travel monitoring, dangerous driving behavior recognition is currently the most technically sensitive part and the one with the highest practical application value. For another example, freight trucks are a group with a high incidence of traffic accidents, and how to monitor and avoid the traffic risks of a freight fleet is an urgent problem to be solved in the freight industry.
On the other hand, dangerous driving behavior is a complex and uncertain behavior. In the prior art, it can be defined simply by the variation patterns of kinematic physical quantities such as acceleration and angular velocity, or determined by the probability that a driving behavior may induce a future accident. However, due to differences in personal perception, the perceived risk of the same dangerous driving behavior differs across groups of people; meanwhile, due to the complexity of drivers' habits and road conditions, the objective risk corresponding to the same dangerous driving behavior also differs. Accurately identifying these differences requires a large amount of high-quality dangerous driving behavior annotation data.
Therefore, a solution capable of acquiring high-quality dangerous driving behavior data is needed.
Disclosure of Invention
The present invention provides a data processing method and a data acquisition system in an attempt to solve or at least alleviate at least one of the problems identified above.
According to an aspect of the present invention, there is provided a data processing method, executed in a server, comprising the steps of: in response to a binding request from a mobile terminal, binding the mobile terminal with a data acquisition device arranged on a vehicle through a travel identifier; acquiring first travel data from the data acquisition device every second time period, wherein the first travel data are data of the vehicle acquired by the data acquisition device, since receiving the travel identifier, while an acquisition task is executed; acquiring second travel data from the mobile terminal, wherein the second travel data are data of the vehicle acquired by the mobile terminal, since receiving the travel identifier, while the acquisition task is executed; acquiring configuration information of the acquisition task corresponding to the travel identifier; determining, from the configuration information, a time interval corresponding to the execution of the acquisition task; processing the first travel data and the second travel data respectively according to the time interval to obtain corresponding third travel data and fourth travel data; and aligning the third travel data and the fourth travel data based on the acquisition frequencies of the first travel data and the second travel data to obtain aligned third travel data and aligned fourth travel data.
Optionally, the method according to the invention further comprises the steps of: when a request from the mobile terminal to be bound with the data acquisition device is received, generating a travel identifier and returning the travel identifier to the mobile terminal; and after receiving a heartbeat signal from the data acquisition device, sending the travel identifier to the data acquisition device, wherein the heartbeat signal is sent to the server every first time period after the data acquisition device is started, so that the server monitors the networking state of the data acquisition device.
Optionally, in the method according to the present invention, the acquisition task includes a plurality of events of different types, the events being directed to dangerous driving behaviors; and the configuration information at least comprises the event type, the event identifier and the expected execution time of each event in the acquisition task.
Optionally, in the method according to the present invention, the data acquisition device is respectively coupled to the vehicle-mounted device and the camera, and the first travel data include vehicle state data acquired through the vehicle-mounted device and image data acquired through the camera while the acquisition task is executed; the second travel data include the driving state data of the vehicle and the actual execution time of each event while the acquisition task is executed.
Optionally, in the method according to the present invention, the step of determining, from the configuration information, a time interval corresponding to the execution of the acquisition task includes: determining the expected execution time of each event from the configuration information; judging whether the expected execution time of each event is consistent with the actual execution time; and if the expected execution time is consistent with the actual execution time, taking the actual execution time as a time interval corresponding to each event.
Optionally, in the method according to the present invention, the step of respectively processing the first travel data and the second travel data according to the time interval to obtain corresponding third travel data and fourth travel data includes: extracting the data falling within the time interval from the vehicle state data and the image data, respectively, as the third travel data; and extracting the data falling within the time interval from the driving state data as the fourth travel data.
Optionally, in the method according to the present invention, the step of aligning the third travel data and the fourth travel data based on the acquisition frequencies of the first travel data and the second travel data to obtain aligned third travel data and aligned fourth travel data includes: determining an alignment frequency based on the acquisition frequencies of the first travel data and the second travel data; and sampling the third travel data and the fourth travel data based on the alignment frequency to obtain the aligned third travel data and aligned fourth travel data.
Optionally, in the method according to the present invention, the step of sampling the third travel data and the fourth travel data based on the alignment frequency to obtain aligned third travel data and aligned fourth travel data further includes: sampling the third travel data and the fourth travel data based on the alignment frequency to obtain sampled third travel data and sampled fourth travel data; and, if data are missing in the sampled third travel data and/or the sampled fourth travel data, supplementing the missing data by interpolation to obtain the aligned third travel data and aligned fourth travel data.
Optionally, the method according to the invention further comprises the steps of: checking, in combination with the configuration information, the aligned third travel data and the aligned fourth travel data, whether the driver's actions during execution of the acquisition task meet the preset requirements; and if the preset requirements are met, using the aligned third travel data and the aligned fourth travel data as annotation data indicating dangerous driving behaviors.
According to another aspect of the present invention, there is provided a data acquisition system comprising: the vehicle-mounted equipment is suitable for acquiring vehicle state data in the process of executing the acquisition task by the user; the camera is suitable for acquiring image data in the process of executing the acquisition task by the user; the mobile terminal is arranged on the vehicle and is suitable for acquiring driving state data pointing to dangerous driving behaviors; the data acquisition equipment is arranged on the vehicle, is respectively coupled with the vehicle-mounted equipment and the at least one camera to acquire the vehicle state data and the image data, and is also suitable for being bound with the mobile terminal through a server; and the server is suitable for executing the method, and performing correlation processing on the driving state data, the vehicle state data and the image data to obtain annotation data indicating dangerous driving behaviors.
According to yet another aspect of the present invention, there is provided a computing device comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods above.
According to a further aspect of the invention there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods described above.
In summary, the scheme of the present invention takes into account that the first travel data and the second travel data differ in collection start time and collection end time, and that, limited by the objective conditions of network transmission, the data are divided into multiple segments and uploaded to the server. In addition, the first travel data and the second travel data involve multi-modal data, including but not limited to GPS positioning data, OBD data, video image data and instruction template data, whose acquisition frequencies differ; the acquired multi-source data are therefore coupled in time and space, ensuring the alignment and the quality of the data.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 illustrates a schematic diagram of a data acquisition system 100 according to some embodiments of the invention;
FIG. 2 illustrates a workflow diagram of the data acquisition device 110 according to one embodiment of the invention;
FIG. 3A illustrates a schematic diagram of a collection task interface according to one embodiment of the invention;
FIG. 3B is a diagram illustrating a display interface of event description information according to one embodiment of the invention;
FIG. 4 illustrates a schematic diagram of a computing device 400 according to some embodiments of the invention;
FIG. 5 shows a flow diagram of a data processing method 500 according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the development of mobile terminal applications and sensor hardware, and given that mobile terminals (e.g., mobile phones) are directly bound to people, they have become monitoring devices with great potential for monitoring dangerous driving behaviors. However, although a mobile terminal is convenient for collecting information, the accuracy and quality of its sensors are not as good as those of vehicle-mounted recording devices (such as a driving recorder or an On-Board Diagnostics (OBD) system). Monitoring driving behavior with a mobile terminal is like perceiving the world with limited sensing capability: identifying, from weak signals, behaviors that may conceal deep accident risks is a challenging task. In addition, sensor noise and hardware differences between phone models limit the mobile terminal's ability to perceive the objective, real motion state of the vehicle, which further hinders the identification of dangerous driving behaviors.
Both the driving recorder and the OBD system are accurate acquisition means bound to the vehicle, but in practical scenarios it is difficult to directly acquire OBD data to identify dangerous driving behaviors. The OBD system is developed independently by each automobile manufacturer: on the one hand, the OBD data that can be acquired differ between vehicle models because of hardware differences; on the other hand, different vehicle families (European, Japanese, American) follow different OBD protocols, and some niche manufacturers even follow proprietary protocols, where encryption is the key obstacle to parsing OBD data. Thus, the diversity of OBD data and protocols presents challenges for collecting driving behavior data across different vehicles.
In view of the above, according to the embodiment of the present invention, a data collection system 100 is provided to collect, from multiple aspects, the state data of the vehicle and other key data during driving. The multi-source data are then processed to analyze the driving behavior patterns behind them, and the data that can represent dangerous driving behaviors are determined. These data can serve as annotation data for subsequent analysis of dangerous driving behaviors. According to an embodiment of the invention, the dangerous driving behaviors at least include rapid acceleration, rapid deceleration, rapid turning, playing with a mobile phone, making phone calls, and the like.
In one embodiment, an insurance company prices insurance differently for different users according to the risk behaviors they exhibit when driving and the usage of their vehicles. This insurance model (algorithm) depends heavily on how well dangerous driving behaviors are recognized, and collecting high-quality annotation data for dangerous driving behaviors is the key to improving the effect of the algorithm and the model.
In yet another embodiment, an electronic map navigation application provides a driving scoring function that scores each self-driven navigation route of the user. The driving score is computed around the "dangerous driving behavior" events identified by the algorithm. Therefore, high-quality annotation data for dangerous driving behaviors is the key to computing accurate scores.
FIG. 1 shows a schematic diagram of a data acquisition system 100 according to one embodiment of the invention. As shown in fig. 1, the data acquisition system 100 includes: the system comprises a data acquisition device 110, at least one camera 120, a vehicle-mounted device 130, a mobile terminal 140 and a server 150. According to one implementation, the data acquisition device 110 is coupled to the camera 120 and the vehicle-mounted device 130, respectively. In addition, the mobile terminal 140 may be bound with the data collection device 110 through the server 150.
The onboard device 130 is, for example, an OBD box arranged on the vehicle for collecting vehicle state data. The vehicle state data include at least one or more of the following: vehicle model, average fuel consumption, instantaneous fuel consumption, remaining range, vehicle speed, engine speed, light state, hand brake state, seat belt state, door state, window state, steering angle, battery voltage, water temperature, engine oil temperature, fuel percentage and battery percentage.
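Purely as an illustration of how such a sample could be represented in software, the sketch below models one OBD record; all field names and types are assumptions of this sketch and are not prescribed by the patent.

```python
from dataclasses import dataclass


@dataclass
class VehicleStateRecord:
    """One OBD sample reported by the vehicle-mounted device 130 (illustrative field names)."""
    timestamp: float            # Unix time at which the sample was taken
    speed_kmh: float            # vehicle speed
    engine_rpm: float           # engine speed
    steering_angle_deg: float   # steering angle
    hand_brake_on: bool         # hand brake state
    seatbelt_fastened: bool     # safety belt state
    doors_closed: bool          # vehicle door state
    fuel_pct: float             # fuel percentage
    battery_voltage: float      # battery voltage
    water_temp_c: float         # water temperature
```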
The mobile terminal 140 is generally placed in the vehicle and acquires driving state data through the various sensors it carries, including positioning data (e.g., GNSS (Global Navigation Satellite System) data), IMU (Inertial Measurement Unit) data (e.g., acceleration, rotation angle, etc.), proximity (measured by a proximity sensor to obtain the distance between the mobile terminal 140 and an obstacle in front of it), motion state, orientation of the phone, the state of the driver receiving calls, ambient light intensity, and the like.
In addition, a data acquisition application may be installed on the mobile terminal 140; the user selects an acquisition task and an event by operating the application (APP) and performs input according to the relevant prompts, thereby implementing human-computer interaction between the user and the mobile terminal 140.
According to one embodiment of the present invention, the data acquisition system 100 comprises at least 2 cameras 120, as shown in fig. 1. One is arranged near the brake pedal to collect video image data while the driver operates the brake pedal (e.g., depresses or releases it); the other is arranged near the driver's seat to capture video image data containing the driver's face. It should be noted that this is only an example, and the embodiment of the present invention does not limit the cameras 120. Those skilled in the art can increase or decrease the number of cameras 120, or adjust their installation positions and collection targets, according to the requirements of the collection scene.
In one embodiment, the data collection device 110 is provided as hardware external to the vehicle-mounted device 130 and is powered by the vehicle-mounted device 130. According to one embodiment of the invention, the data acquisition device 110 is secured within the vehicle near the cigarette lighter. Preferably, a plurality of screw holes are provided at the edge of the data collection device 110 for fixing it. It should be appreciated that the data collection device 110 is typically arranged near the center console of the vehicle, and embodiments of the present invention are not limited in this respect.
In one embodiment, the data acquisition device 110 is implemented as a micro computing-and-storage device carrying a Rockchip RK3288 processor, in the form of a metal box with multiple communication interfaces. According to the embodiment of the invention, the data acquisition device 110 establishes a connection with each camera 120 through a USB communication interface, and establishes a connection with the in-vehicle device 130 through a CAN communication interface.
In addition, the mainboard inside the data acquisition device 110 carries multiple kinds of network communication hardware and supports WiFi, 4G, Bluetooth and other functions. Meanwhile, a detachable signal-amplifying transmitter is arranged outside the data acquisition device 110.
Furthermore, in an embodiment according to the present invention, the data collection device 110 runs a Linux operating system and is installed with a corresponding application, enabling communication with the in-vehicle device 130 and parsing of the acquired data. It should be understood that the operating system carried by the data acquisition device 110 may also be Android, AliOS, or another known or future operating system, which is not limited in this embodiment of the present invention.
Further, a two-dimensional code image is arranged on the outside of the data collection device 110 (for example, without limitation, pasted onto the device). The two-dimensional code serves as an identifier of the data collection device 110 and can be used to bind the device with the mobile terminal 140, thereby establishing a communication connection between the mobile terminal 140 and the data collection device 110.
In addition, the data collection device 110 also has a temporary storage module to store vehicle state data from the in-vehicle device 130 and various image data from the camera 120.
FIG. 2 illustrates a flow diagram of the operation of the data acquisition device 110 according to one embodiment of the present invention.
According to the embodiment of the invention, when the data acquisition device 110 is connected to the vehicle-mounted device 130, it is powered by the vehicle-mounted device 130, starts up automatically, and automatically connects to the network after start-up. After startup, the data acquisition device 110 sends a heartbeat signal to the server 150 every first time period (e.g., 5 seconds) so that the server 150 can monitor the heartbeat status of the data acquisition device 110.
When a user (e.g., a driver) selects an acquisition task on the mobile terminal 140 and scans the two-dimensional code image on the data acquisition device 110, the server 150 generates a travel identifier and returns it to the mobile terminal 140. Meanwhile, the server 150 returns the travel identifier to the data collection device 110 when it receives a new heartbeat signal; the data acquisition device 110 therefore also learns the travel identifier via the heartbeat response. During the execution of one acquisition task, the data acquisition device 110 keeps sending heartbeat signals to the server 150 at intervals of the first time period, and the server 150, as the response to each heartbeat, keeps returning the travel identifier to the data acquisition device 110, until the acquisition task ends and the data acquisition device 110 receives a response in which the travel identifier is empty.
The data acquisition device 110 monitors the travel identifier; as long as the travel identifier is not empty, it acquires the vehicle state data from the in-vehicle device 130 and the image data from the camera 120, caches them as the first travel data, and transmits the first travel data to the server 150 every second time period (e.g., 2 minutes). The first time period and the second time period are not particularly limited; in some preferred embodiments, the second time period is longer than the first time period. When the travel identifier is empty, the data acquisition device 110 stops acquisition.
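A minimal device-side sketch of this heartbeat and travel-identifier loop is shown below. It assumes an HTTP transport and a placeholder endpoint, neither of which is specified in the patent, and the two helper functions merely stand in for the real caching and upload logic.

```python
import time

import requests  # assumed HTTP transport; the patent does not specify the wire protocol

SERVER_URL = "https://server.example/heartbeat"  # placeholder endpoint
HEARTBEAT_INTERVAL_S = 5                         # the "first time period"


def start_collection(travel_id: str) -> None:
    print(f"start caching OBD and camera data for travel {travel_id}")


def stop_collection(travel_id: str) -> None:
    print(f"stop acquisition for travel {travel_id} and upload remaining cached data")


def heartbeat_loop(device_id: str) -> None:
    """Send a heartbeat every first time period and watch the travel identifier in the reply."""
    travel_id = None
    while True:
        reply = requests.post(SERVER_URL, json={"device_id": device_id}, timeout=3).json()
        new_id = reply.get("travel_id")             # empty / None when no task is running
        if new_id and travel_id is None:            # identifier appeared: a task has started
            travel_id = new_id
            start_collection(travel_id)
        elif not new_id and travel_id is not None:  # identifier empty again: task has ended
            stop_collection(travel_id)
            travel_id = None
        time.sleep(HEARTBEAT_INTERVAL_S)
```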
In addition, after start-up the data acquisition device 110 also periodically (e.g., every 10 seconds) checks for a new software version, and updates itself when a new version is detected.
With reference to fig. 1 and fig. 2, the following is a brief description of how the data acquisition system 100 according to the present application generates annotation data indicating dangerous driving behavior according to the acquired data, taking an acquisition task as an example.
It should be noted that, according to the embodiment of the present invention, when the data acquisition system 100 is used for data acquisition, an open training ground (e.g., a driving school training ground) is usually chosen, containing as many kinds of road conditions as possible (e.g., straight sections, curves, ramps, etc.). Meanwhile, in order to ensure that the acquisition process is carried out safely and effectively, drivers with rich driving experience (such as driving school coaches) are selected to drive the vehicles and complete the designated operations in the acquisition tasks.
Before the acquisition process begins, a user (e.g., a driver) logs into the data acquisition application on the mobile terminal 140 and selects a set of acquisition tasks. One set of acquisition tasks is a collection of many different types of dangerous driving behavior events. Dangerous driving behaviors include rapid acceleration, rapid deceleration, rapid turning, playing with a mobile phone, making phone calls and the like, and each dangerous driving behavior may cover several situations; for example, the rapid acceleration type may include the following events: low-speed rapid acceleration, medium-speed rapid acceleration, high-speed rapid acceleration, traffic-light rapid acceleration, rapid acceleration after a turn, and rapid acceleration from a standstill. Thus, a set of acquisition tasks can be represented as {3 rapid accelerations, 2 rapid turns, 2 rapid decelerations}, where rapid acceleration, rapid turning and the like are different event types; specific events within the same event type carry different requirements, and the requirements generally concern driving states such as vehicle speed and execution time.
After selecting a group of data acquisition tasks, the user also needs to scan the two-dimensional code image on the data acquisition device 110 with the mobile terminal 140, and the mobile terminal 140 sends it to the server 150 to request binding. After receiving the binding request, the server 150 checks whether the binding conditions are satisfied, and if so, generates a travel identifier and distributes it to the mobile terminal 140 and the data acquisition device 110. At this point, the mobile terminal 140 and the data collection device 110 are in a bound state, and the collection task can begin. In one embodiment, checking the binding conditions means verifying whether the data acquisition device 110 is in a normal networking state in which data can be uploaded; this is verified through the heartbeat detection described above: if the server 150 has received a heartbeat signal from the data acquisition device 110 within the last first time period, the data acquisition device 110 is in a normal networking state. After receiving the travel identifier, the data collection device 110 instructs the vehicle-mounted device 130 and the camera 120 to start collecting data, and transmits the first travel data to the server 150 every second time period. The process can refer to the related description of fig. 2 and is not repeated here.
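A minimal server-side sketch of this binding check follows, assuming that "binding conditions satisfied" simply means a heartbeat was received within the last first time period; the function and variable names are illustrative, not taken from the patent.

```python
import time
import uuid

HEARTBEAT_TIMEOUT_S = 5         # the "first time period": a device counts as online
                                # if its last heartbeat arrived within this window

last_heartbeat: dict = {}       # device_id -> time of the last heartbeat
active_travels: dict = {}       # device_id -> current travel identifier (absent when idle)


def handle_heartbeat(device_id: str):
    """Record a heartbeat and answer with the current travel identifier (or None)."""
    last_heartbeat[device_id] = time.time()
    return active_travels.get(device_id)


def handle_binding_request(device_id: str) -> str:
    """Bind a mobile terminal to the data acquisition device whose QR code it scanned."""
    if time.time() - last_heartbeat.get(device_id, 0.0) > HEARTBEAT_TIMEOUT_S:
        raise RuntimeError("device not in a normal networking state, binding refused")
    travel_id = uuid.uuid4().hex            # new travel identifier for this collection task
    active_travels[device_id] = travel_id
    return travel_id                        # sent back to the mobile terminal
```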
In addition, while returning the travel identifier to the mobile terminal 140, the server 150 also sends the configuration information corresponding to the collection task to the mobile terminal 140. The configuration information at least includes, for each event in the collection task, the event type, the event identifier, the expected execution time, and the instruction template corresponding to the event type. The instruction template is adapted to prompt the user with action instructions when the corresponding event is executed.
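Purely for illustration, the configuration information for one collection task might be structured roughly as follows; all keys and values below are assumptions of this sketch.

```python
# Illustrative structure of the configuration information for one collection task.
collection_task_config = {
    "travel_id": "4f2c9c0e0a1b4d5e",        # travel identifier the task is bound to
    "events": [
        {
            "event_type": "rapid_acceleration",
            "event_id": "low_speed_rapid_acceleration_01",
            "expected_execution_time": ("13:00:00", "13:02:00"),   # expected interval
            "instruction_template": (
                "Please fasten the seat belt and ensure a safe distance of {distance} m. "
                "Please bring the vehicle to {initial_speed} km/h. "
                "After the 5 second countdown, please {action}."
            ),
        },
        {
            "event_type": "rapid_turn",
            "event_id": "rapid_turn_01",
            "expected_execution_time": ("13:05:00", "13:07:00"),
            "instruction_template": (
                "Please reach {initial_speed} km/h; after the 5 second countdown, "
                "steer {direction} to pass through the curve."
            ),
        },
    ],
}
```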
In one embodiment, the configuration information is pre-generated by the server 150. Taking the instruction template as an example, it may include a voice instruction template and a text instruction template with the same content. The text instruction template is displayed on the mobile terminal 140 as text to prompt the user, while the voice instruction template prompts the user by voice playback. Most of the fixed content in the voice instruction template is recorded by real people to ensure the continuity and clarity of the voice, while part of the content (mainly numbers and scene states) is varied to meet the requirement of acquisition diversity.
In one embodiment, rapid acceleration and rapid deceleration are both based on pedal operations, so their instruction templates are similar, while the instruction templates for playing with a mobile phone and making phone calls highlight more complex state scenarios. The specific data and scenarios of the instruction templates therefore differ across events, which highlights the differences in danger level and is of great value for training the dangerous driving behavior recognition algorithm. Several example instruction templates according to embodiments of the present invention are shown below, but the invention is not limited thereto. The blanks marked with a horizontal line "____" are the operation data (including at least one of speed, distance, time, direction, action to perform, etc.) that must be filled in when the event description information is generated for each specific event during subsequent execution; a minimal filling sketch follows the templates below.
a. Instruction template for related events of rapid acceleration and rapid deceleration
Please fasten the seat belt and ensure a safe distance of ____ m straight ahead. Please bring the initial speed of the vehicle to ____ km/h (if the event is a standing-start rapid acceleration or a traffic-light rapid acceleration, the initial speed is 0 and the vehicle should be kept still). After the 5 second countdown, please ______ (fill in the action, such as press the accelerator, press the brake, accelerate rapidly, brake hard, etc.).
b. Instruction template for sharp turn related events
Please fasten the seat belt and ensure there is a curve _____ m ahead. Please reach ____ km/h; after the 5 second countdown, please steer ___ (left or right) and rush through the curve or intersection at __.
c. Instruction template for mobile phone playing related events
Please fasten the seat belt; the mobile phone is operated by the front passenger. Please reach ____ km/h. After the 5 second countdown, the front passenger places the phone at _____, starts playing with the phone (watching a video, playing a game, etc.), and keeps doing so for ____ seconds.
d. Instruction templates for call related events
Fastening the seat belt and answering the call are performed by the front passenger. Please reach ____ km/h and place the mobile phone on the phone mount. After the 5 second countdown, perform the call test and let the phone ring for ____ seconds. Please __________ (answer for _____ s, decline after ____ s, talk normally until the other side hangs up, talk normally and then hang up, decline and hang up).
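As announced above, the sketch below shows one possible way to fill a blank-line template with operation data to produce the event description information shown to the driver; the concrete values and the template wording are invented for the example.

```python
# Fill one instruction template with operation data (values are illustrative only).
rapid_acceleration_template = (
    "Please fasten the seat belt and ensure a safe distance of {distance} m straight ahead. "
    "Please bring the initial speed of the vehicle to {initial_speed} km/h. "
    "After the 5 second countdown, please {action}."
)

operation_data = {"distance": 50, "initial_speed": 30, "action": "press the accelerator hard"}

event_description = rapid_acceleration_template.format(**operation_data)
print(event_description)   # displayed as text and read out as the voice prompt on the terminal
```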
After the collection is started, the mobile terminal 140 determines an execution sequence of a plurality of events in the collection task, and then, the collection task interface of the mobile terminal 140 sequentially displays the plurality of events corresponding to the collection task in a first display mode.
FIG. 3A illustrates a schematic diagram of a collection task interface according to one embodiment of the invention; specifically, it shows the rapid acceleration type events in one acquisition task. The collection task interface may display the basic information corresponding to each event, such as initial speed, training action, execution time, number of executions, and the like. As shown in fig. 3A, the "medium-speed rapid acceleration training" and "traffic-light rapid acceleration training" events are shown in the first display mode, while the "low-speed rapid acceleration training" event is shown in the second display mode. The second display mode differs from the first display mode and is used for displaying events that have already been executed.
In one embodiment, the execution order of the plurality of events in the collection task is determined based on the road information. The road information comprises static and/or dynamic information of all or part of the objects within the road. For example, whether the road is wide or not and whether there is a curve or not may be determined, and whether there is an obstacle or a moving object in the road range or not may be determined. The road information may be obtained through a V2X (Vehicle to X) technology, which is not limited in this embodiment of the present invention.
In another embodiment, the execution sequence of the plurality of events may also be selected by the user according to the road section condition. For example, on an open road segment, the driver may choose to perform a sharp acceleration event on a long, open straight road segment, and then perform a sharp turn event before reaching the intersection.
The driver then selects one of the events on the collection task interface. Generally, the driver selects the events to perform according to the display order, but is not limited thereto. The mobile terminal 140 outputs event description information of the selected event to guide the user to execute the event according to the event description information. Specifically, the mobile terminal 140 generates event description information of the event according to the instruction template corresponding to the event type. Then, the event description information is displayed on the interface, and fig. 3B is a schematic diagram illustrating a display interface of the event description information according to an embodiment of the present invention. The event description information may include safety precautions at the time of collection, driving detail requirements, and the like. As shown in FIG. 3B, the substeps illustrate specific operating requirements in the course of executing a "low speed rapid acceleration" event. Meanwhile, the mobile terminal 140 plays the event description information through a voice command.
In one embodiment, after the collection of a specific event is started, a 30-second voice broadcast prompts the safety precautions and driving details for the collection, and then a countdown of a certain duration begins (the duration is determined by the collection requirement of the specific event, and this predetermined duration can also form part of the event description information). The driver performs the operations according to the event description information during the countdown, and after completing the requirements, performs a confirmation operation to indicate that the event has been fully executed. Continuing with fig. 3B, the driver can operate the "slide to complete training" control on the interface, and the mobile terminal thereby receives the driver's confirmation operation.
During the course of the driver performing the event, the corresponding sensors in the mobile terminal 140 collect driving state data of the vehicle.
Meanwhile, in this process, in response to the selection of the event by the user, the mobile terminal 140 records the current time as the start time of the event; in response to the user's confirmation after completing the event, the mobile terminal 140 records the current time as the end time of the event. Then, the period of time corresponding to the start time and the end time of the event is used as the actual execution time of the event.
Optionally, the mobile terminal 140 stores the event identifier, the driving state data, and the actual execution time corresponding to the event in an associated manner.
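A minimal sketch of how the mobile terminal could derive the actual execution time from the driver's select and confirm operations and associate it with the event is shown below; the class and method names are assumptions of the sketch.

```python
import time


class EventTimer:
    """Derives the actual execution time of one event from the driver's taps."""

    def __init__(self, event_id: str):
        self.event_id = event_id
        self.start_time = None
        self.end_time = None

    def on_event_selected(self) -> None:
        self.start_time = time.time()     # driver selects the event on the task interface

    def on_event_confirmed(self) -> None:
        self.end_time = time.time()       # driver confirms completion of the event

    def as_record(self, driving_state_samples: list) -> dict:
        """Associate the event identifier, driving state data and actual execution time."""
        return {
            "event_id": self.event_id,
            "driving_state": driving_state_samples,
            "actual_execution_time": (self.start_time, self.end_time),
        }
```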
After the execution of one event is finished, the mobile terminal 140 returns to the collection task interface and displays the executed event in a second display mode different from the first display mode, the executed event being in a non-selectable state. As shown in fig. 3A, the "low-speed rapid acceleration" event has already been executed and is in a non-selectable state. The driver then selects other events for collection. If the acquisition countdown ends and the driver has not completed the corresponding acquisition requirements, the driver needs to mark the event as incomplete or failed, and the failed data will not be uploaded. The driver can select the incomplete event again under suitable conditions and repeat the collection.
The driver repeats the process until all the specific event tasks are collected, and the driver can finish the collection task after all the events are executed. When the collection task is completed, the mobile terminal 140 sends the event identifier, the driving state data, and the actual execution time of each event to the server 150 as second travel data.
On the server 150 side, the first travel data are acquired from the data acquisition device 110 every second time period, and the second travel data are acquired from the mobile terminal 140 when the acquisition task is finished. In this way, the server 150 can associate the first travel data with the second travel data based on the travel identifier of the current collection task.
Within one collection task, because there is a certain time interval (usually less than 5 seconds) between the moment the mobile terminal 140 receives the travel identifier and the moment the data collection device 110 receives it, the collection start times of the first travel data and the second travel data may differ. Meanwhile, the mobile terminal 140 ends its acquisition after receiving the user's confirmation operation, while the data acquisition device 110 ends its acquisition when it no longer receives the travel identifier, so the collection end times of the first travel data and the second travel data also differ. In addition, limited by the objective conditions of network transmission, the data may be divided into multiple segments for uploading to the server 150. Furthermore, the first travel data and the second travel data involve multi-modal data, including but not limited to GPS positioning data, OBD data, video image data and instruction template data, and the acquisition frequencies of the different types of data are not consistent. Therefore, in the embodiment according to the present invention, aligning the first travel data and the second travel data in time and space is essential for obtaining high-quality annotation data.
In one embodiment, considering that the actual execution time of the dangerous driving behavior is controlled by the instructions (i.e., the data of the instruction template), which are reflected in the human-machine interaction between the mobile terminal 140 and the driver, the server 150 first determines the time interval corresponding to the execution of the collection task. In one embodiment, the server 150 determines the expected execution time of each event from the configuration information, then judges whether the expected execution time of each event is consistent with the actual execution time, and if they are consistent, takes the actual execution time as the time interval corresponding to the event.
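A minimal sketch of this consistency check follows; the patent only requires the expected and actual execution times to be "consistent", so the tolerance used below is an assumption of the sketch.

```python
def determine_time_interval(expected, actual, tolerance_s: float = 5.0):
    """Return the event's time interval (its actual execution time) if it matches the expected one.

    `expected` and `actual` are (start, end) pairs of Unix timestamps; `tolerance_s` is assumed.
    """
    (exp_start, exp_end), (act_start, act_end) = expected, actual
    if abs(act_start - exp_start) > tolerance_s or abs(act_end - exp_end) > tolerance_s:
        return None    # inconsistent: the event's data must be collected again
    return (act_start, act_end)
```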
Then, the first travel data and the second travel data are respectively processed according to the determined time interval to obtain the corresponding third travel data and fourth travel data. In one embodiment, the data falling within the time interval are extracted from the vehicle state data and the image data, respectively, as the third travel data, and the data falling within the time interval are extracted from the driving state data as the fourth travel data.
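A sketch of this extraction step, assuming every sample carries a Unix timestamp:

```python
def clip_to_interval(samples: list, interval: tuple) -> list:
    """Keep only the samples whose timestamps fall inside the event's time interval."""
    start, end = interval
    return [s for s in samples if start <= s["timestamp"] <= end]

# third travel data: vehicle state samples and image frames clipped from the first travel data
# fourth travel data: driving state samples clipped from the second travel data
```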
Then, the third travel data and the fourth travel data are aligned based on the acquisition frequencies of the first travel data and the second travel data, and the aligned third travel data and aligned fourth travel data are used as annotation data. In one embodiment, among the sensors of the mobile terminal 140, the GNSS has an acquisition frequency of 1 Hz and the IMU has an acquisition frequency of up to 10 Hz; the acquisition frequency of the camera 120 is typically 20-30 Hz; and the acquisition frequency of the in-vehicle device 130 is 10 Hz. Based on these acquisition frequencies, the alignment frequency is determined to be 1 Hz. The third travel data and the fourth travel data are then sampled at the alignment frequency to obtain the aligned third travel data and aligned fourth travel data as the annotation data.
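One way to realize this down-sampling with pandas, assuming each source has been loaded into a DataFrame of numeric signal columns indexed by timestamp; taking the per-second mean is an aggregation choice of the sketch, not something prescribed by the patent.

```python
import pandas as pd

ALIGN_FREQ = "1s"   # 1 Hz, the lowest acquisition frequency among the sources (GNSS)


def align_to_frequency(df: pd.DataFrame) -> pd.DataFrame:
    """Down-sample one source (numeric columns, DatetimeIndex) to the common alignment frequency."""
    return df.resample(ALIGN_FREQ).mean()
```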
In other embodiments, considering situations such as uncertain acquisition frequencies or missing data caused by problems such as GPS signal loss, after the third travel data and the fourth travel data are obtained by sampling, the missing data are supplemented by interpolation, and the interpolated data are used as the final annotation data.
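Continuing the same assumed pandas representation, missing samples (for example GPS drop-outs) can be supplemented by interpolation over time, as in the sketch below.

```python
import pandas as pd


def fill_missing(df: pd.DataFrame) -> pd.DataFrame:
    """Supplement missing samples by linear interpolation over time, then pad the edges."""
    return df.interpolate(method="time").ffill().bfill()


# Example: a 10 Hz IMU stream sampled down to 1 Hz and then interpolated.
# aligned_imu = fill_missing(align_to_frequency(imu_df))
```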
According to the data acquisition system 100 of the present invention, in addition to the external devices connected to the vehicle's central control system for parsing and uploading OBD and other data (e.g., the data acquisition device 110 and the vehicle-mounted device 130 and camera 120 coupled to it), the mobile terminal 140 is added, so that the data change characteristics corresponding to dangerous driving behaviors are fully depicted from multiple angles. Moreover, the whole multi-source data acquisition process is simplified, reducing the comprehension cost and communication cost for the collector.
In addition, according to the data acquisition system 100 of the present invention, the instruction templates are generated from the configurations of the different events, and drivers from professional driving schools actually execute the dangerous driving behaviors according to the instruction templates; compared with annotating images afterwards, the resulting annotation quality is higher. Meanwhile, the configuration information of the events can be understood as finer-grained annotation information, which is extremely valuable for studying and optimizing dangerous driving behavior recognition.
In addition, the data acquisition system 100 according to the present invention couples the acquired data in time and space, ensuring alignment and quality verification of the data.
According to one embodiment of the invention, the data acquisition system 100 and portions thereof may be implemented by one or more computing devices. FIG. 4 shows a schematic block diagram of a computing device 400 according to one embodiment of the invention.
As shown in FIG. 4, in a basic configuration 402, a computing device 400 typically includes a system memory 406 and one or more processors 404. A memory bus 408 may be used for communicating between the processor 404 and the system memory 406.
Depending on the desired configuration, processor 404 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a Digital Signal Processor (DSP), or any combination thereof. Processor 404 may include one or more levels of cache, such as a level one cache 410 and a level two cache 412, a processor core 414, and registers 416. The example processor core 414 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 418 may be used with the processor 404, or in some implementations the memory controller 418 may be an internal part of the processor 404.
Depending on the desired configuration, system memory 406 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 406 may include an operating system 420, one or more applications 422, and data 424. In some implementations, the application 422 can be arranged to execute instructions on an operating system with the data 424 by one or more processors 404.
Computing device 400 also includes storage 432, storage 432 including removable storage 436 and non-removable storage 438, each of removable storage 436 and non-removable storage 438 connected to a storage interface bus 434.
Computing device 400 may also include an interface bus 440 that facilitates communication from various interface devices (e.g., output devices 442, peripheral interfaces 444, and communication devices 446) to the basic configuration 402 via bus/interface controller 430. The example output device 442 includes a graphics processing unit 448 and an audio processing unit 450. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 452. Example peripheral interfaces 444 may include a serial interface controller 454 and a parallel interface controller 456, which may be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 458. An example communication device 446 may include a network controller 460, which may be arranged to facilitate communications with one or more other computing devices 462 over a network communication link via one or more communication ports 464.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media, such as carrier waves or other transport mechanisms, in a modulated data signal. A "modulated data signal" may be a signal that has one or more of its data set or its changes made in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or private-wired network, and various wireless media such as acoustic, radio Frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In general, computing device 400 may be implemented as part of a small-form factor portable (or mobile) electronic device such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset, an application specific device, or a hybrid device that include any of the above functions. In one embodiment according to the invention, the computing device 400 may also be implemented as a micro-computing module or the like. The embodiments of the present invention are not limited thereto.
In an embodiment according to the invention, the computing device 400 is configured to perform a data acquisition method, and/or a data processing method according to the invention. Among other things, application 422 of computing device 400 includes a plurality of program instructions for carrying out the above-described methods according to the present invention.
FIG. 5 shows a flow diagram of a method 500 for generating annotation data for dangerous driving behavior according to one embodiment of the invention. The method 500 is suitable for execution in the server 150. It should be noted that the method 500 is complementary to the foregoing, and repeated portions are not described in detail.
As shown in fig. 5, the method 500 begins at step S510. In step S510, in response to a binding request from the mobile terminal, the mobile terminal is bound with a data collection device disposed on the vehicle by the travel identifier.
According to one embodiment, when the server 150 receives a request from the mobile terminal 140 to bind with the data collection device 110, a travel identifier is generated and returned to the mobile terminal 140. Then, after receiving a heartbeat signal from the data acquisition device 110, the travel identifier is sent to the data acquisition device 110, where the heartbeat signal is sent to the server 150 every first time period after the data acquisition device 110 is started, so that the server 150 monitors the networking state of the data acquisition device.
Then, in step S520, the first travel data from the data collecting device 110 are acquired every second time period. The first travel data are data of the vehicle acquired by the data acquisition device 110, since it received the travel identifier, while the acquisition task is executed.
As described above, the collection task includes a plurality of different types of events, and each event is directed to a different dangerous driving behavior.
Also, the data collecting apparatus 110 is coupled to the in-vehicle apparatus 130 and the camera 120, respectively. The first travel data include the vehicle state data collected by the in-vehicle apparatus 130 and the image data collected by the camera 120 while the collection task is executed.
Subsequently, in step S530, the second trip data from the mobile terminal 140 is acquired.
The second travel data are data of the vehicle collected by the mobile terminal 140, since it received the travel identifier, while the collection task was executed. The second travel data include the driving state data of the vehicle and the actual execution time of each event during the execution of the collection task.
Subsequently, in step S540, the configuration information of the collection task corresponding to the travel identifier is acquired.
The configuration information may be generated in advance on the server 150 side. Different events are set for different dangerous driving behaviors, and each event has a corresponding event type, event identifier, instruction template corresponding to the event type, and so on. Different events are combined to form a group of acquisition tasks, and the configuration information of the acquisition task is obtained accordingly. Optionally, the configuration information at least includes the event type, the event identifier and the expected execution time of each event in the acquisition task. The expected execution time may take the form of an interval, such as from 13.
Subsequently, in step S550, a time interval corresponding to the execution of the collection task is determined from the configuration information.
In one embodiment, the expected execution time of each event is first determined from the configuration information. It is then judged whether the expected execution time of each event is consistent with the actual execution time. If they are consistent, the actual execution time is taken as the time interval corresponding to the event. If they are inconsistent, the collected data for the event are considered erroneous and need to be collected again.
As described above, the actual execution time of the event is generated based on the user's input (i.e., the user's interaction with the mobile terminal 140), and specifically: in response to the user selecting an event in the collection task, the start time of the event is generated; and in response to the user's confirmation operation after the event is finished, the end time of the event is generated.
Subsequently, in step S560, the first travel data and the second travel data are respectively processed according to the time interval to obtain the corresponding third travel data and fourth travel data.
In one embodiment, the data falling within the time interval are extracted from the vehicle state data and the image data, respectively, as the third travel data; and the data falling within the time interval are extracted from the driving state data as the corresponding fourth travel data.
Subsequently, in step S570, the third travel data and the fourth travel data are aligned based on the acquisition frequencies of the first travel data and the second travel data, so as to obtain the aligned third travel data and aligned fourth travel data.
In one embodiment, the alignment frequency is determined based on the acquisition frequencies of the first travel data and the second travel data.
Then, based on the alignment frequency, the third travel data and the fourth travel data are respectively sampled to obtain the aligned third travel data and aligned fourth travel data. The aligned third travel data and fourth travel data may be used as annotation data indicating dangerous driving behavior.
In still other embodiments, considering the possible problem of missing data in the collected data, the step of sampling the third travel data and the fourth travel data based on the alignment frequency further includes: based on the alignment frequency, respectively sampling the third travel data and the fourth travel data to obtain sampled third travel data and sampled fourth travel data; and, if data are missing in the sampled third travel data and/or the sampled fourth travel data (for example, image data are missing at a certain sampling time point, but not limited thereto), supplementing the missing data by interpolation to obtain the aligned third travel data and/or aligned fourth travel data. It should be appreciated that embodiments of the present invention do not limit the manner of interpolation.
According to still further embodiments, after the aligned third travel data and the aligned fourth travel data are obtained, the method further includes the following step:
checking, in combination with the configuration information, the aligned third travel data, and the aligned fourth travel data, whether the user's action during execution of the collection task meets a preset requirement. The preset requirement is, for example, whether the action performed by the driver satisfies requirements on speed and duration, but is not limited thereto. If the preset requirement is met, the aligned third travel data and the aligned fourth travel data are used as annotation data indicating dangerous driving behavior. A simple check of this kind is sketched below.
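For illustration, a minimal sketch of such a check; the threshold parameters and the record layout are assumptions, since the patent only names speed and duration as example requirements.

```python
def meets_preset_requirement(aligned_records, interval, min_speed_kmh, min_duration_s):
    """Check whether the driver's action satisfies simple speed/duration requirements.

    aligned_records: list of dicts with a 'speed_kmh' field (assumed layout);
    interval: (start_datetime, end_datetime) of the event.
    """
    start, end = interval
    duration_ok = (end - start).total_seconds() >= min_duration_s
    speed_ok = any(rec["speed_kmh"] >= min_speed_kmh for rec in aligned_records)
    return duration_ok and speed_ok

# Only data that passes such a check would be kept as annotation data
# indicating dangerous driving behavior.
```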
According to the data processing method 500 of the present invention, because the first travel data and the second travel data may differ in collection start time and collection end time, and because of the practical limits of network transmission, the data is divided into multiple segments and uploaded to the server 150. In addition, the first travel data and the second travel data involve multi-modal data, including but not limited to GPS positioning data, OBD data, video image data, and instruction-template data, and different types of data are collected at different frequencies; the collected multi-source data are therefore coupled in time and space, which ensures the alignment and quality of the data. Furthermore, when the multi-source data are aligned, the actual execution time of the dangerous driving behavior is used as the reference and is compared against the expected execution time in the instruction template, further ensuring that the collected data conform to the instructions.
The invention also discloses:
the method of A8, as set forth in A7, wherein the step of sampling the third travel data and the fourth travel data based on the alignment frequency to obtain aligned third travel data and aligned fourth travel data further includes: sampling the third travel data and the fourth travel data based on the alignment frequency to obtain sampled third travel data and sampled fourth travel data; and if data are missing in the sampled third stroke data and/or the sampled fourth stroke data, supplementing the missing data through interpolation to obtain aligned third stroke data and aligned fourth stroke data. A9, the method of any one of A1-8, further comprising the step of: checking whether the action instruction of the driver meets the preset requirement when the acquisition task is executed by combining the configuration information, the aligned third travel data and the aligned fourth travel data; and if the preset requirements are met, using the aligned third travel data and the aligned fourth travel data as marking data for indicating dangerous driving behaviors.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the device in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as a method or combination of method elements that can be implemented by a processor of a computer system or by other means of carrying out the described functions. A processor having the necessary instructions for carrying out such a method or method element thus forms a means for carrying out the method or method element. Further, the elements of the apparatus embodiments described herein are examples of apparatus for implementing the functions performed by those elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (11)

1. A data processing method, executed in a server, comprising the steps of:
in response to a binding request from a mobile terminal, binding the mobile terminal with a data acquisition device arranged on a vehicle through a travel identifier;
acquiring first travel data from the data acquisition device at intervals of a second time period, wherein the first travel data is data of the vehicle collected by the data acquisition device, starting from receipt of the travel identifier, during execution of an acquisition task;
acquiring second travel data from the mobile terminal, wherein the second travel data is data of the vehicle collected by the mobile terminal, starting from receipt of the travel identifier, during execution of the acquisition task;
acquiring configuration information of an acquisition task corresponding to the travel identifier;
determining a time interval corresponding to the execution of the acquisition task from the configuration information;
processing the first travel data and the second travel data respectively according to the time interval to obtain corresponding third travel data and fourth travel data;
aligning the third travel data and the fourth travel data based on the acquisition frequencies of the first travel data and the second travel data to obtain aligned third travel data and aligned fourth travel data;
checking, in combination with the configuration information, the aligned third travel data, and the aligned fourth travel data, whether the action performed by the driver during execution of the acquisition task meets a preset requirement; and
if the preset requirement is met, using the aligned third travel data and the aligned fourth travel data as annotation data indicating dangerous driving behavior.
2. The method of claim 1, wherein the step of binding the mobile terminal with a data acquisition device arranged on a vehicle through a travel identifier in response to a binding request from the mobile terminal comprises:
when a request from the mobile terminal to be bound with the data acquisition device is received, generating a travel identifier and returning the travel identifier to the mobile terminal; and
after receiving a heartbeat signal from the data acquisition device, sending the travel identifier to the data acquisition device, wherein the heartbeat signal is sent to the server at intervals of a first time period after the data acquisition device is started, so that the server monitors the networking state of the data acquisition device.
3. The method of claim 1, wherein,
the acquisition task comprises a plurality of events of different types, and the events point to dangerous driving behaviors; and
the configuration information at least comprises: the event type, the event identifier, and the expected execution time of each event in the acquisition task.
4. The method of claim 1, wherein,
the data acquisition device is coupled with a vehicle-mounted device and a camera respectively, and the first travel data includes: vehicle state data collected by the vehicle-mounted device and image data collected by the camera during execution of the acquisition task;
the second travel data includes: driving state data of the vehicle and the actual execution time of each event during execution of the acquisition task.
5. The method of claim 4, wherein the determining a time interval corresponding to the execution of the acquisition task from the configuration information comprises:
determining the expected execution time of each event from the configuration information;
judging whether the expected execution time of each event is consistent with the actual execution time;
and if the expected execution time is consistent with the actual execution time, taking the actual execution time as a time interval corresponding to each event.
6. The method of claim 4 or 5, wherein the step of processing the first travel data and the second travel data according to the time interval to obtain corresponding third travel data and fourth travel data comprises:
extracting corresponding data in the time interval from the vehicle state data and the image data, respectively, as the third travel data;
and extracting corresponding data in the time interval from the driving state data as the fourth travel data.
7. The method of claim 4, wherein the step of aligning the third travel data and the fourth travel data based on the acquisition frequency of the first travel data and the second travel data to obtain aligned third travel data and aligned fourth travel data comprises:
determining an alignment frequency based on the acquisition frequencies of the first travel data and the second travel data;
sampling the third travel data and the fourth travel data based on the alignment frequency to obtain aligned third travel data and aligned fourth travel data.
8. The method of claim 7, wherein the step of sampling the third travel data and the fourth travel data based on the alignment frequency to obtain aligned third travel data and aligned fourth travel data further comprises:
sampling the third travel data and the fourth travel data based on the alignment frequency to obtain sampled third travel data and sampled fourth travel data;
and if data is missing in the sampled third travel data and/or the sampled fourth travel data, supplementing the missing data by interpolation to obtain the aligned third travel data and the aligned fourth travel data.
9. A data acquisition system comprising:
the vehicle-mounted device, adapted to collect vehicle state data while a user executes an acquisition task;
the camera, adapted to collect image data while the user executes the acquisition task;
the mobile terminal, arranged on the vehicle and adapted to collect driving state data pointing to dangerous driving behaviors;
the data acquisition device, arranged on the vehicle, coupled respectively with the vehicle-mounted device and the at least one camera to acquire the vehicle state data and the image data, and further adapted to be bound with the mobile terminal through a server; and
a server adapted to perform the method according to any one of claims 1-8, and to correlate the driving state data, the vehicle state data and the image data to obtain annotation data indicative of dangerous driving behavior.
10. A computing device, comprising:
one or more processors; and
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-8.
11. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-8.
CN202110896380.1A 2021-08-05 2021-08-05 Data processing method and data acquisition system Active CN113611007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110896380.1A CN113611007B (en) 2021-08-05 2021-08-05 Data processing method and data acquisition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110896380.1A CN113611007B (en) 2021-08-05 2021-08-05 Data processing method and data acquisition system

Publications (2)

Publication Number Publication Date
CN113611007A CN113611007A (en) 2021-11-05
CN113611007B true CN113611007B (en) 2023-04-18

Family

ID=78307086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110896380.1A Active CN113611007B (en) 2021-08-05 2021-08-05 Data processing method and data acquisition system

Country Status (1)

Country Link
CN (1) CN113611007B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116300780B (en) * 2022-09-07 2024-01-23 广州汽车集团股份有限公司 Component configuration method, device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018161774A1 (en) * 2017-03-06 2018-09-13 腾讯科技(深圳)有限公司 Driving behavior determination method, device, equipment and storage medium
CN110765807A (en) * 2018-07-25 2020-02-07 阿里巴巴集团控股有限公司 Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium
CN113056390A (en) * 2018-06-26 2021-06-29 伊泰·卡茨 Situational driver monitoring system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468847B (en) * 2014-12-31 2017-12-15 山东赛维安讯信息科技有限公司 Stroke recording information sharing method, equipment, server and the system of a kind of vehicle
CN110758398A (en) * 2018-07-10 2020-02-07 阿里巴巴集团控股有限公司 Driving risk detection method and device
CN109887124B (en) * 2019-01-07 2022-05-13 平安科技(深圳)有限公司 Vehicle motion data processing method and device, computer equipment and storage medium
CN110329268B (en) * 2019-03-22 2021-04-06 中国人民财产保险股份有限公司 Driving behavior data processing method, device, storage medium and system
KR102198196B1 (en) * 2019-05-24 2021-01-04 (주)두레윈 Black box for vehicle, user terminal and server for collecting image from the same
CN110390557A (en) * 2019-06-17 2019-10-29 深圳壹账通智能科技有限公司 Vehicle premium determines method, apparatus and computer equipment and readable storage medium storing program for executing
CN110739969A (en) * 2019-10-18 2020-01-31 唐智科技湖南发展有限公司 signal synchronous acquisition system
CN111845728B (en) * 2020-06-22 2021-09-21 福瑞泰克智能系统有限公司 Driving assistance data acquisition method and system

Also Published As

Publication number Publication date
CN113611007A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
US11568492B2 (en) Information processing apparatus, information processing method, program, and system
US11142190B2 (en) System and method for controlling autonomous driving vehicle
CN112544071B (en) Video splicing method, device and system
US20150187351A1 (en) Method and system for providing user with information in vehicle
JP6603506B2 (en) Parking position guidance system
CN110875937A (en) Information pushing method and system
KR20130082874A (en) Support system for road drive test and support method for road drive test usgin the same
CN107146439A (en) Restricted driving reminding method, restricted driving prompt system and car-mounted terminal
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN113611007B (en) Data processing method and data acquisition system
CN110509931B (en) Information display method, device and system for voice question answering
CN113591744B (en) Method for generating annotation data aiming at dangerous driving behaviors and data acquisition system
CN111721315A (en) Information processing method and device, vehicle and display equipment
JP6619316B2 (en) Parking position search method, parking position search device, parking position search program, and moving object
US20230351823A1 (en) Information processing device, information processing method and program
CN112185157B (en) Roadside parking space detection method, system, computer equipment and storage medium
CN113628360B (en) Data acquisition method and system
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN113706915A (en) Parking prompting method, device, equipment and storage medium
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
JP4866061B2 (en) Information recording apparatus, information recording method, information recording program, and computer-readable recording medium
CN114880408A (en) Scene construction method, device, medium and chip
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
KR102167252B1 (en) Auto control system for car using old smart-phone and the control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant