CN112287801A - Vehicle-mounted data processing method and device, server and readable storage medium


Info

Publication number
CN112287801A
CN112287801A (application CN202011150412.5A)
Authority
CN
China
Prior art keywords
vehicle
data
driving assistance
algorithm
server
Prior art date
Legal status
Pending
Application number
CN202011150412.5A
Other languages
Chinese (zh)
Inventor
卢美奇
李国镇
杨宏达
李友增
戚龙雨
吴若溪
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202011150412.5A
Publication of CN112287801A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/008 Registering or indicating the working of vehicles communicating information to a remotely located station


Abstract

The embodiment of the invention provides a vehicle-mounted data processing method, a vehicle-mounted data processing device, a server and a readable storage medium, and relates to the technical field of data processing. After event data uploaded by the vehicle-mounted terminals are received, the event data are input into the driving assistance algorithm stored on the server to obtain a first detection result. The driving assistance algorithm stored on the server and on each vehicle-mounted terminal is evaluated based on the first detection result to obtain an evaluation result, the driving assistance algorithm is updated based on the evaluation result, and the updated driving assistance algorithm is issued to each vehicle-mounted terminal. In this way, the deployed performance of the driving assistance algorithm is effectively evaluated, the evaluation result drives updates to the algorithm, and the deployment requirements of the driving assistance algorithm are met.

Description

Vehicle-mounted data processing method and device, server and readable storage medium
Technical Field
The invention relates to the technical field of data processing, in particular to a vehicle-mounted data processing method, a vehicle-mounted data processing device, a server and a readable storage medium.
Background
With the rapid development of intelligent driving technology, the penetration rate of Advanced Driving Assistance System (ADAS) products in the automobile market continues to increase. Besides factory-installed ADAS systems, a large number of aftermarket products that integrate ADAS with dashboard cameras (driving recorders) are also emerging in the market.
Current ADAS products generally use a monocular or binocular camera, an Inertial Measurement Unit (IMU) and a Global Positioning System (GPS) receiver as sensors, and apply deep learning to detect the positions of, and measure the distances to, pedestrians, obstacles and the like in the vehicle's forward field of view, so as to perceive the environment, predict the possibility of a collision, and give the driver an early warning signal.
However, the data, computing power and hardware available under laboratory conditions differ greatly from those in real deployment scenarios. An algorithm developed successfully under laboratory conditions therefore often fails to generalize well to real-world data, and the deployed effectiveness of an ADAS product is difficult to evaluate, so the deployment requirements of an ADAS product are difficult to meet.
Disclosure of Invention
Based on the above research, the present invention provides a vehicle-mounted data processing method, apparatus, server, and readable storage medium to improve the above problems.
Embodiments of the invention may be implemented as follows:
in a first aspect, an embodiment of the present invention provides a vehicle-mounted data processing method, which is applied to a server, where the server is in communication connection with at least one vehicle-mounted terminal, and the server and each vehicle-mounted terminal store a driving assistance algorithm, where the method includes:
receiving event data reported by each vehicle-mounted terminal; the event data are obtained by screening based on a driving assistance algorithm stored in the vehicle-mounted terminal;
inputting the event data into a driving assistance algorithm stored by the server to obtain a first detection result;
evaluating the driving assistance algorithms stored by the server and each vehicle-mounted terminal based on the first detection result to obtain an evaluation result;
and updating the driving assistance algorithm stored in the server based on the evaluation result, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional embodiment, an information detection model is pre-constructed in the server, the driving assistance algorithm includes a front vehicle detection model, the first detection result includes an output result of the front vehicle detection model, and the front vehicle detection model is obtained by compressing or pruning the information detection model;
the step of evaluating the driving assistance algorithms stored in the server and each of the in-vehicle terminals based on the first detection result to obtain an evaluation result includes:
inputting the event data into the information detection model and outputting a second detection result;
comparing the output result with the second detection result, and judging whether the output result is consistent with the second detection result;
and if the two results are not consistent, judging that the detection of the front vehicle detection model is wrong.
In an optional embodiment, the updating the driving assistance algorithm stored in the server based on the evaluation result, and the issuing the updated driving assistance algorithm to each of the vehicle-mounted terminals includes:
acquiring event data with a wrong detection result, and expanding the event data with the wrong detection result into a training sample of the front vehicle detection model;
based on the expanded training sample, the front vehicle detection model is trained again to obtain a target front vehicle detection model;
and updating the preceding vehicle detection model included in the driving assistance algorithm stored in the server based on the target preceding vehicle detection model, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional embodiment, the driving assistance algorithm further comprises a pre-established warning strategy, and the first detection result further comprises a warning result; the step of evaluating the driving assistance algorithms stored in the server and each of the in-vehicle terminals based on the first detection result to obtain an evaluation result includes:
acquiring behavior data of the vehicle-mounted terminal;
analyzing the alarm result, the output result and the behavior data, and judging whether alarm error data exists or not;
and if so, judging that the detection result of the alarm strategy is wrong.
In an optional embodiment, the step of analyzing the alarm result, the output result, and the behavior data and determining whether there is alarm error data includes:
judging whether an alarm is needed or not based on the output result and the behavior data;
and if the alarm is needed and the alarm result is not alarm, or the alarm is not needed and the alarm result is alarm, judging that the alarm is wrong.
In an optional embodiment, the updating the driving assistance algorithm stored in the server based on the evaluation result, and the issuing the updated driving assistance algorithm to each of the vehicle-mounted terminals includes:
marking the behavior data and the data with the alarm error to obtain a marking result, and adjusting the alarm strategy according to the marking result;
and updating the alarm strategy included in the driving assistance algorithm stored in the server based on the adjusted alarm strategy, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
In an alternative embodiment, the driving assistance algorithm comprises a model layer, a filter layer and a strategy layer, the method further comprises the step of obtaining an initial version of the driving assistance algorithm, the step comprising:
acquiring running data of a vehicle and a labeling scene corresponding to the running data;
inputting the driving data into the model layer, and outputting the front vehicle detection result data of the vehicle;
inputting the detection result data of the front vehicle into the filter layer, and smoothing the detection result data of the front vehicle;
inputting the smoothed front vehicle detection result data into the strategy layer, and identifying scenes in the smoothed front vehicle detection result data;
and judging whether the identified scene is consistent with the labeled scene, if not, adjusting the parameters of the model layer, the filtering layer or the strategy layer until the identified scene is consistent with the labeled scene, and obtaining the driving assistance algorithm of the initial version.
In a second aspect, an embodiment of the present invention provides a vehicle-mounted data processing apparatus, which is applied to a server, where the server is in communication connection with at least one vehicle-mounted terminal, the server and each vehicle-mounted terminal store a driving assistance algorithm, and the apparatus includes a data acquisition module, a data processing module, an algorithm evaluation module, and an algorithm update module;
the data acquisition module is used for receiving event data reported by each vehicle-mounted terminal; the event data are obtained by screening based on a driving assistance algorithm stored in the vehicle-mounted terminal;
the data processing module is used for inputting the event data into a driving assistance algorithm stored by the server to obtain a first detection result;
the algorithm evaluation module is used for evaluating the driving assistance algorithms stored by the server and each vehicle-mounted terminal based on the first detection result to obtain an evaluation result;
the algorithm updating module is used for updating the driving assistance algorithm stored in the server based on the evaluation result and sending the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional embodiment, an information detection model is pre-constructed in the server, the driving assistance algorithm includes a front vehicle detection model, the first detection result includes an output result of the front vehicle detection model, and the front vehicle detection model is obtained by compressing or pruning the information detection model; the algorithm evaluation module is configured to:
inputting the event data into the information detection model which is constructed in advance, and outputting a second detection result;
comparing the output result with the second detection result, and judging whether the output result is consistent with the second detection result;
and if the two results are not consistent, judging that the detection of the front vehicle detection model is wrong.
In an alternative embodiment, the algorithm update module is configured to:
acquiring event data with a wrong detection result, and expanding the event data with the wrong detection result into a training sample of the front vehicle detection model;
based on the expanded training sample, the front vehicle detection model is trained again to obtain a target front vehicle detection model;
and updating the preceding vehicle detection model included in the driving assistance algorithm stored in the server based on the target preceding vehicle detection model, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional embodiment, the driving assistance algorithm further comprises a pre-established warning strategy, and the first detection result further comprises a warning result; the algorithm evaluation module is configured to:
acquiring behavior data of the vehicle-mounted terminal;
analyzing the alarm result, the output result and the behavior data, and judging whether alarm error data exists or not;
and if so, judging that the detection result of the alarm strategy is wrong.
In an alternative embodiment, the algorithm evaluation module is configured to:
judging whether an alarm is needed or not based on the output result and the behavior data;
and if the alarm is needed and the alarm result is not alarm, or the alarm is not needed and the alarm result is alarm, judging that the alarm is wrong.
In an alternative embodiment, the algorithm update module is configured to:
marking the behavior data and the data with the alarm error to obtain a marking result, and adjusting the alarm strategy according to the marking result;
and updating the alarm strategy included in the driving assistance algorithm stored in the server based on the adjusted alarm strategy, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional embodiment, the driving assistance algorithm includes a model layer, a filter layer, and a strategy layer, and the vehicle-mounted data processing apparatus further includes an algorithm training module, and the algorithm training module is configured to:
acquiring running data of a vehicle and a labeling scene corresponding to the running data;
inputting the driving data into the model layer, and outputting the front vehicle detection result data of the vehicle;
inputting the detection result data of the front vehicle into the filter layer, and smoothing the detection result data of the front vehicle;
inputting the smoothed front vehicle detection result data into the strategy layer, and identifying scenes in the smoothed front vehicle detection result data;
and judging whether the identified scene is consistent with the labeled scene, if not, adjusting the parameters of the model layer, the filtering layer or the strategy layer until the identified scene is consistent with the labeled scene, and obtaining the driving assistance algorithm of the initial version.
In a third aspect, an embodiment of the present invention provides a server, including: the vehicle-mounted data processing system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the vehicle-mounted data processing method of any one of the preceding embodiments when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a readable storage medium, where a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, the steps of the vehicle-mounted data processing method according to any one of the foregoing embodiments are executed.
According to the vehicle-mounted data processing method, the vehicle-mounted data processing device, the server and the readable storage medium, a data closed loop is constructed. After the event data uploaded by the vehicle-mounted terminals are received, the event data are input into the driving assistance algorithm stored on the server to obtain a first detection result, the driving assistance algorithm stored on the server and on each vehicle-mounted terminal is evaluated based on the first detection result to obtain an evaluation result, the driving assistance algorithm is updated based on the evaluation result, and the updated driving assistance algorithm is issued to each vehicle-mounted terminal.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic block diagram of a server according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a vehicle-mounted data processing method according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a driving assistance algorithm according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart illustrating a sub-step of a vehicle-mounted data processing method according to an embodiment of the present invention.
Fig. 5 is a second flow chart illustrating the substeps of the vehicle-mounted data processing method according to the embodiment of the present invention.
Fig. 6 is a third schematic flow chart illustrating sub-steps of the vehicle-mounted data processing method according to the embodiment of the invention.
Fig. 7 is a fourth schematic flow chart illustrating a substep of the vehicle-mounted data processing method according to the embodiment of the present invention.
Fig. 8 is a block diagram of an on-board data processing apparatus according to an embodiment of the present invention.
Fig. 9 is another block diagram of the vehicle-mounted data processing device according to the embodiment of the present invention.
Reference numerals: 100 - server; 110 - network port; 120 - processor; 130 - communication bus; 140 - storage medium; 150 - vehicle-mounted data processing device; 151 - data acquisition module; 152 - data processing module; 153 - algorithm evaluation module; 154 - algorithm update module; 155 - algorithm training module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that if the terms "upper", "lower", "inside", "outside", etc. indicate an orientation or a positional relationship based on that shown in the drawings or that the product of the present invention is used as it is, this is only for convenience of description and simplification of the description, and it does not indicate or imply that the device or the element referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present invention.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
With the vigorous development of artificial intelligence technology, new research results from academia emerge every year and algorithm applications in industry are continuously innovated. Nevertheless, there is a general consensus that, in the field of artificial intelligence, a large gap exists between the state of practice in industry and the state of the art in academia. The main reason is that the problems in real deployment scenarios are more complex than the clearly defined problems studied in academic research, and each problem must be analyzed specifically. Secondly, the data, computing power and hardware available under laboratory conditions differ greatly from those in real deployment scenarios, so an algorithm developed successfully under laboratory conditions cannot be well generalized to real data and can hardly meet the requirements of deploying the algorithm in low-cost, low-power edge computing scenarios. Moreover, verification of an algorithm's effect under laboratory conditions is based only on a small number of test sets, which is completely different from the long-term verification on massive data required when the algorithm is deployed.
The inventors have found that converting advanced artificial intelligence research results into productive industrial technology requires overcoming the following problems: designing, with the advanced techniques available in the industry, the algorithm best suited to the actual problem; collecting real data from the algorithm's deployment scenario and reproducing the algorithm's deployed effect; iterating the algorithm in a data-driven manner; and verifying the effectiveness of the algorithm on massive data.
Therefore, based on the above research, in order to evaluate the effectiveness of the ADAS algorithm in actual deployment, this embodiment provides a vehicle-mounted data processing method, a device, a server and a readable storage medium. The deployed effectiveness of the algorithm is evaluated by collecting real data from the ADAS algorithm's deployment scenario and reproducing the algorithm's deployed effect, and the ADAS algorithm is driven to update according to the evaluation result, so that the deployment requirements of the ADAS algorithm are met.
Referring to fig. 1, fig. 1 is a block diagram of a server according to an embodiment of the present invention, in this embodiment, a server 100 may be a single server or a server group. The set of servers may be centralized or distributed (e.g., server 100 may be a distributed system). In some embodiments, the server 100 may also be implemented on a cloud platform, which may include, by way of example only, a private cloud, a public cloud, a hybrid cloud, a community cloud (community cloud), a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
The server 100 may include a network port 110 connected to a network, one or more processors 120 for executing program instructions, a communication bus 130, and different forms of storage media 140, such as disks, ROM, or RAM, or any combination thereof. Illustratively, the server 100 may also include program instructions stored in ROM, RAM, or other types of non-transitory storage media, or any combination thereof. The method of the present invention can be implemented according to these program instructions.
In some embodiments, processor 120 may process information and/or data related to a service request to perform one or more of the functions described in this disclosure. In some embodiments, processor 120 may include one or more processing cores (e.g., a single-core processor (S) or a multi-core processor (S)). Merely by way of example, the Processor 120 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller Unit, a Reduced Instruction Set computer (Reduced Instruction Set computer) or a microprocessor, or the like, or any combination thereof.
For ease of illustration, only one processor is depicted in server 100. However, it should be noted that the server 100 in the present invention may also include a plurality of processors, and thus, the steps performed by one processor described in the present invention may also be performed by a plurality of processors in combination or individually. For example, if the processor of the server 100 executes step a and step B, it should be understood that step a and step B may also be executed by two different processors together or executed in one processor separately. For example, a first processor performs step a and a second processor performs step B, or the first processor and the second processor perform steps a and B together.
The network may be used for the exchange of information and/or data. In some embodiments, one or more components in server 100 may send information and/or data to other components. For example, the server 100 may obtain a service request from a user handheld device, such as a cell phone, via a network. In some embodiments, the network may be any type of wired or wireless network, or combination thereof. Merely by way of example, the Network may include a wired Network, a Wireless Network, a fiber optic Network, a telecommunications Network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a Public Switched Telephone Network (PSTN), a bluetooth Network, a ZigBee Network, a Near Field Communication (NFC) Network, or the like, or any combination thereof.
In some embodiments, the network may include one or more network access points. For example, the network may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the server 100 may connect to the network to exchange data and/or information.
A database may be included in server 100 and may store data and/or instructions. In some embodiments, the database may store data obtained from a service requester terminal, such as a user handset. In some embodiments, the database may store data and/or instructions for the exemplary methods described in this disclosure. In some embodiments, the database may include mass storage, removable storage, volatile Read-write Memory, or Read-Only Memory (ROM), among others, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, and the like; volatile read-write Memory may include Random Access Memory (RAM); the RAM may include Dynamic RAM (DRAM), Double data Rate Synchronous Dynamic RAM (DDR SDRAM); static RAM (SRAM), Thyristor-Based Random Access Memory (T-RAM), Zero-capacitor RAM (Zero-RAM), and the like. By way of example, ROMs may include Mask Read-Only memories (MROMs), Programmable ROMs (PROMs), Erasable Programmable ROMs (PERROMs), Electrically Erasable Programmable ROMs (EEPROMs), compact disk ROMs (CD-ROMs), digital versatile disks (ROMs), and the like.
In some embodiments, the database may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, cross-cloud, multi-cloud, elastic cloud, or the like, or any combination thereof.
In some embodiments, the database may be connected to a network to communicate with one or more components in the server 100. One or more components in server 100 may access data or instructions stored in a database via a network. In some embodiments, the database may be directly connected to one or more components in the server 100. Alternatively, in some embodiments, the database may also be part of the server 100.
In some embodiments, one or more components in the server 100 may have access to a database. In some embodiments, one or more components in the server 100 may read and/or modify information related to the service requestor or the public, or any combination thereof, when certain conditions are met. For example, the server 100 may read and/or modify one or more pieces of information in a database after receiving a service request.
It will be appreciated that the configuration shown in fig. 1 is merely illustrative and that the server 100 may include more or fewer components than shown in fig. 1 or may have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
In this embodiment, the server may be connected to a plurality of vehicle-mounted terminals through a network, and the server and each vehicle-mounted terminal each store a driving assistance algorithm, that is, an ADAS algorithm. Each vehicle-mounted terminal uses its driving assistance algorithm to screen real data from actual scenes and uploads the screened data to the server; the server then reproduces the data with its own copy of the driving assistance algorithm and, based on the reproduction result, evaluates the deployed driving assistance algorithm.
It can be understood that the vehicle-mounted data processing method provided by this embodiment may also be applied to the deployment of other artificial intelligence algorithms and is not limited to the driving assistance algorithm described here.
Based on the implementation architecture of fig. 1, please refer to fig. 2 in combination, and fig. 2 shows one of the flowcharts of the vehicle-mounted data processing method according to the embodiment of the present invention. The method is applied to the server 100 shown in fig. 1 and is executed by the server 100 shown in fig. 1. The flow of the on-vehicle data processing method shown in fig. 2 is described in detail below.
Step S10: and receiving the event data reported by each vehicle-mounted terminal.
Step S20: and inputting the event data into a driving assistance algorithm stored by a server to obtain a first detection result.
Step S30: and evaluating the driving assistance algorithms stored by the server and each vehicle-mounted terminal based on the first detection result to obtain an evaluation result.
Step S40: and updating the driving assistance algorithm stored in the server based on the evaluation result, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
The event data represent real data from actual scenes and are obtained by screening with the driving assistance algorithm stored in the vehicle-mounted terminal.
After the server receives the uploaded event data, the event data are input into the locally stored driving assistance algorithm for detection, so that the scene captured by the event data is reproduced and a first detection result is obtained. Based on the first detection result, the driving assistance algorithms on the server and on each vehicle-mounted terminal are evaluated, which amounts to evaluating the performance of the deployed driving assistance algorithm. The driving assistance algorithm stored on the server is then updated based on the evaluation result, and the updated driving assistance algorithm is issued to each vehicle-mounted terminal, so that the deployment requirements of the driving assistance algorithm are met.
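The closed loop of steps S10 to S40 can be sketched roughly as follows. This is a minimal illustration only; all function, method and parameter names (process_uploaded_events, detect, install, the evaluate and update callables) are assumptions introduced for the sketch, not identifiers from this embodiment.

```python
# Minimal sketch of the server-side closed loop (steps S10-S40); names are illustrative.
def process_uploaded_events(server_algorithm, terminals, event_batches, evaluate, update):
    """server_algorithm: the driving assistance algorithm stored on the server.
    evaluate / update: callables standing in for steps S30 and S40."""
    reproduced = []
    for events in event_batches:                              # S10: event data reported by terminals
        for event in events:
            first_result = server_algorithm.detect(event)     # S20: reproduce with the stored algorithm
            reproduced.append((event, first_result))

    evaluation = evaluate(reproduced)                         # S30: evaluate the deployed algorithm
    updated_algorithm = update(server_algorithm, evaluation)  # S40: update based on the evaluation
    for terminal in terminals:                                # S40: issue the update to each terminal
        terminal.install(updated_algorithm)
    return updated_algorithm
```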
Alternatively, as shown in fig. 3, in the present embodiment, the driving assistance algorithm is composed of a model layer, a filter layer, and a strategy layer.
The model layer consists of a front vehicle detection model obtained by anchor-based coordinate regression training. Using deep learning, the model layer detects the position of the vehicle ahead, the distance to it and its braking state, and predicts the vanishing point. Optionally, in this embodiment, the front vehicle detection model uses ShuffleNet V2 as the core network for extracting features from the input image, where the input image is an image of the road ahead captured by a driving recorder (dash camera) device.
The output of the model consists of a classification branch and a regression branch: the classification branch judges whether there is a vehicle ahead, and the regression branch outputs the position of the vehicle ahead and the position of the vanishing point. The regression branch uses the Balanced L1 loss as its loss function to alleviate output jitter.
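For illustration only, a two-branch head of this kind might look like the following sketch. It assumes torchvision's ShuffleNet V2 implementation as the backbone and arbitrarily chosen output sizes (2 classes; 4 box coordinates plus 2 vanishing-point coordinates); it is not the actual network definition of this embodiment, and a smooth L1 loss is used in the comment as a stand-in where a Balanced L1 implementation is not available.

```python
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0


class FrontVehicleDetector(nn.Module):
    """Sketch of a two-branch head on a ShuffleNet V2 backbone (output sizes are assumptions)."""

    def __init__(self):
        super().__init__()
        backbone = shufflenet_v2_x1_0(weights=None)            # random init; torchvision >= 0.13 API
        # keep conv1 ... conv5 as the feature extractor; drop the final fc classifier
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cls_branch = nn.Linear(1024, 2)                    # is there a vehicle ahead?
        self.reg_branch = nn.Linear(1024, 6)                    # box (x1, y1, x2, y2) + vanishing point (x, y)

    def forward(self, image):
        feat = self.pool(self.features(image)).flatten(1)
        return self.cls_branch(feat), self.reg_branch(feat)


# Regression is trained with a Balanced L1 loss in this embodiment;
# nn.SmoothL1Loss() is a close, readily available stand-in for experimentation.
```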
In this embodiment, the vanishing point is the point at which the (parallel) lane lines ahead of the vehicle appear to intersect in the image. After the positions of the vanishing point and the vehicle ahead are obtained by regression, the distance between the host vehicle and the vehicle ahead can be calculated from the vanishing point position and the camera installation parameters; optionally, under the usual flat-road pinhole camera model, the vehicle distance can be calculated with formulas of the following form:
e_p = c_y + f_y · tan α

D = (f_y · H) / (y_b − e_p)

d = D − l_head

where e_p is the image ordinate of the vanishing point, c_y is the ordinate of the camera principal point, f_y is the focal length of the camera (in pixels), α is the pitch angle of the camera, H is the installation height of the camera, y_b is the image ordinate of the bottom edge of the detected vehicle ahead, D is the distance from the camera to the tail of the vehicle ahead, d is the distance from the head of the host vehicle to the tail of the vehicle ahead (i.e. the vehicle distance), and l_head is the distance from the camera mounting position to the vehicle head.
In addition, based on the front vehicle detection model, the relative speed between the vehicle ahead and the host vehicle can be calculated from the vehicle distances output for two adjacent frames and the time interval between those frames.
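A small sketch of the distance and relative-speed computation under the formulas above follows; y_bottom (the image ordinate of the detected vehicle's bounding-box bottom edge) and the flat-road assumption are part of the reconstruction, not explicit in the original text.

```python
import math


def vanishing_point_row(c_y, f_y, pitch_alpha):
    """e_p = c_y + f_y * tan(alpha): image row of the vanishing point."""
    return c_y + f_y * math.tan(pitch_alpha)


def front_vehicle_distance(y_bottom, e_p, f_y, cam_height, l_head):
    """Distance from the host vehicle's head to the tail of the vehicle ahead,
    assuming the flat-road pinhole model; y_bottom must lie below the vanishing point."""
    camera_to_target = f_y * cam_height / (y_bottom - e_p)
    return camera_to_target - l_head


def relative_speed(d_prev, d_curr, dt):
    """Relative speed of the vehicle ahead from two consecutive frames (dt seconds apart)."""
    return (d_curr - d_prev) / dt
```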
In this embodiment, the filter layer mainly addresses the problem that the raw output waveform of the front vehicle detection model is noisy, and feeding it directly into the strategy layer easily causes false alarms. Optionally, the filter layer uses a filter to smooth the vanishing point, the position of the vehicle ahead, the distance to it and the relative speed. Since the camera pitch angle generally does not change and the vanishing point is relatively fixed, a cumulative average is taken for the vanishing point directly; for the other quantities, a stable value is produced by the filter, and the filter parameters can be tuned by parameter search against the recall behaviour of the downstream strategy and optimized continuously. After smoothing, the position of the vehicle ahead, the vehicle distance and the vanishing point vary more smoothly and are closer to the real physical scene.
Alternatively, in the present embodiment, the filter uses a cascaded Kalman filter (Kalman filter).
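A minimal sketch of such a filter for one output channel (for example the vehicle distance) is given below; a cascaded arrangement simply chains filters of this kind, and the noise parameters here are illustrative assumptions rather than values from this embodiment.

```python
import numpy as np


class ScalarKalman:
    """Constant-velocity Kalman filter for smoothing one scalar signal per frame."""

    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(2)                       # state: [value, rate of change per frame]
        self.P = np.eye(2)                         # state covariance
        self.Q = q * np.eye(2)                     # process noise (assumed)
        self.R = np.array([[r]])                   # measurement noise (assumed)
        self.F = np.array([[1.0, 1.0],
                           [0.0, 1.0]])            # constant-velocity transition, dt = 1 frame
        self.H = np.array([[1.0, 0.0]])            # we observe the value only

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                           # predicted value for the current frame

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                           # smoothed value
```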
In this embodiment, the strategy layer identifies hazardous scenes from the output of the filter layer; this capability is ultimately packaged as the alarm function of the driving assistance algorithm, so the alarm strategy is formulated at the strategy layer. The alarm strategy is obtained as follows. A large amount of driver behaviour data, accident (front collision) data and normal driving data is collected; the collected data are then screened with prior rules to remove unreasonable samples, for example samples in which the vehicle width does not match conventional settings or the relative speed exceeds a set range. Features are then extracted from the screened data to obtain the scenes that require an alarm and the scenes that do not, which together constitute the required alarm strategy. Once these scenes are obtained, the data output by the filter layer can be analyzed to judge whether an alarm is needed, and if so, alarm information is output.
Based on the above-described architecture of the driving assistance algorithm, the initial version of the driving assistance algorithm provided in this embodiment can be obtained through the following processes:
and acquiring the driving data of the vehicle and a labeling scene corresponding to the driving data.
And inputting the running data into the model layer, and outputting the front vehicle detection result data of the vehicle.
And inputting the detection result data of the front vehicle into a filter layer, and smoothing the detection result data of the front vehicle.
And inputting the smoothed detection result data of the front vehicle to a strategy layer, and identifying the scene in the smoothed detection result data of the front vehicle.
And judging whether the identified scene is consistent with the labeled scene, if not, adjusting parameters of a model layer, a filter layer or a strategy layer until the identified scene is consistent with the labeled scene, and obtaining the driving assistance algorithm of the initial version.
The driving data comprises a plurality of images in front of the vehicle, and the labeling scene corresponding to the driving data is the labeling scene representing each image in front of the vehicle.
Optionally, in this embodiment, the image in front of the vehicle may be obtained by extracting frames from video captured by the driving recorder. For each such image, the video data within a set time period around it and the vehicle behaviour detected for it by a vehicle behaviour detection algorithm can be acquired and analyzed in order to label the image, yielding its labeled scene. The labeled scene indicates whether the image requires an alarm and serves as the ground truth for verifying whether the output of the driving assistance algorithm is correct. Vehicle behaviour refers to the behaviour data generated by the driver while driving, including but not limited to rapid acceleration, lane changes, rapid deceleration and collisions.
It is to be understood that, in the present embodiment, the running data of the vehicle includes running data of a plurality of vehicles, not limited to running data of a single vehicle.
After the front image of the vehicle is obtained, the front image of the vehicle is input into the model layer, the front image of the vehicle is detected through the front vehicle detection model, and front vehicle detection result data are output. The front vehicle detection result data comprises a front vehicle detection position, a vehicle distance, a relative speed and the like.
And after the detection result data of the front vehicle is obtained, inputting the detection result data of the front vehicle into a filtering layer, performing filtering processing by using a filter, and inputting the data after the filtering processing into a strategy layer.
And after the data after filtering processing is input into the strategy layer, identifying the scene in the smoothed front vehicle detection result data according to a preset alarm strategy and behavior data corresponding to the front vehicle image. The behavior data corresponding to the image in front of the vehicle can be obtained through a vehicle behavior detection algorithm, and can also be obtained through GPS speed and IMU data.
After the scene is identified, comparing the identified scene with a pre-obtained labeled scene, judging whether the identified scene is consistent with the labeled scene, if not, adjusting parameters of a model layer, a filter layer or a strategy layer, namely adjusting parameters of a front vehicle detection model, a filter or an alarm strategy, until the identified scene is consistent with the labeled scene, and obtaining a driving assistance algorithm.
For example, for a certain image in front of a vehicle, the front vehicle detection model detects a vehicle ahead and outputs a vehicle distance d; after filtering and smoothing, the alarm strategy judges the scene. If the behaviour data corresponding to the image indicate rapid deceleration and the distance d is smaller than a set threshold, the identified scene is an alarm scene. If the labeled scene is a no-alarm scene, i.e. the labeled scene is inconsistent with the identified scene, the parameters of the front vehicle detection model, the filter or the alarm strategy need to be adjusted until the two are consistent, yielding the driving assistance algorithm.
It should be noted that, when adjusting the parameters of the front vehicle detection model, the filter or the alarm strategy: the parameters of the front vehicle detection model are adjusted if analysis shows that the model's detection is incorrect and the identified scene is inconsistent with the labeled scene; the parameters of the strategy layer are adjusted if the analysis shows that the alarm issued by the alarm strategy is incorrect; and the parameters of the filter are adjusted if the analysis shows that the filtering is incorrect.
After the initial version of the driving assistance algorithm is obtained through the above process, it can be deployed to the vehicle-mounted terminals. Once deployed, hundreds of thousands of vehicle-mounted terminals run it simultaneously and generate massive amounts of data every day, from which the indexes of the driving assistance algorithm (including alarm accuracy, recall of traffic accidents, whether accidents are effectively reduced, and the like) need to be evaluated. Based on the evaluation results, the driving assistance algorithm can then be driven to update, so that the algorithm deployed on the vehicle-mounted terminals is iterated and its deployment requirements are met.
A major difference between deploying an algorithm in industry and research in academia is that, because of traffic cost, data storage cost and the labour cost of data analysis, massive data cannot be stored and analyzed in full. For example, existing factory-installed or aftermarket products providing driving assistance functions do not form a complete link between the driver's behaviour data, the algorithm's alarm events and traffic accident data; traffic accident data in particular are long-tail data with an extremely low probability of occurrence. Yet these data have a very important influence on the performance of the driving assistance algorithm. Therefore, to improve the performance of the driving assistance algorithm, effective data must be mined from the massive data to the greatest possible extent to support effect evaluation, algorithm simulation and algorithm iteration.
Based on this, in this embodiment the event data mainly include three types of data: dangerous-scene data, long-tail data and potentially abnormal data, all obtained by screening with the driving assistance algorithm. The screening process may be as follows:
the data acquisition of the dangerous scene comprises data acquisition of a positive sample (dangerous scene) and data acquisition of a negative sample (non-dangerous normal scene), wherein a data source of the positive sample comprises two parts: the vehicle-to-vehicle collision accident data is identified through a collision detection algorithm, and the data of potential hidden danger scenes such as sudden braking or sudden lane change of a driver is identified through a vehicle behavior detection algorithm. Because a collision accident usually occurs or is about to occur, if a driver perceives the collision accident in advance, the driver can take an emergency danger avoiding action such as stepping on a brake or steering, and the like, so that the driver can rapidly decelerate and rapidly change the lane, which is a potentially dangerous scene. And the negative samples are normal scene data obtained by random sampling.
The driving assistance algorithm is notified immediately when the collision detection algorithm detects that a collision has occurred and the direction of the collision is toward the vehicle head, and when the vehicle behavior detection algorithm detects that a rapid deceleration or a rapid lane change, etc., has occurred in the current vehicle.
And when the driving assistance algorithm receives the event notification of the collision detection algorithm or the vehicle behavior detection algorithm, whether the current frame (namely the image in front of the current vehicle) has a front vehicle or not is judged according to the result of the front vehicle detection model, and if the front vehicle exists and the vehicle distance is smaller than a set threshold value, the driving assistance algorithm is triggered to report data, so that the screening of positive sample data is realized.
Normal-scene data can be collected by timed random sampling: the driving assistance algorithm is notified after a timed interval, and if it judges that there is a vehicle ahead at a reasonable distance at that moment, the data are uploaded to the server, realizing the screening of negative sample data.
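The positive- and negative-sample triggers described above can be sketched as follows; the event labels, field names and the 10 m threshold are assumptions chosen only for illustration.

```python
DISTANCE_THRESHOLD_M = 10.0   # assumed threshold for a "close" vehicle ahead


def should_report_positive(event_type, detection):
    """Report hazardous-scene data when a collision / hard braking / sudden lane change
    is notified and a close vehicle ahead is detected."""
    if event_type in {"collision", "hard_braking", "sudden_lane_change"}:
        return detection["has_front_vehicle"] and detection["distance_m"] < DISTANCE_THRESHOLD_M
    return False


def should_report_negative(timer_elapsed, detection):
    """Report normal-scene data on a periodic/random timer when a vehicle ahead is
    present at a reasonable distance."""
    return (timer_elapsed
            and detection["has_front_vehicle"]
            and detection["distance_m"] >= DISTANCE_THRESHOLD_M)
```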
After data have accumulated to a certain degree, the data set is dominated by the most common scenes. Although an algorithm trained on such data has some ability to identify dangerous scenes, its recognition accuracy for unusual scenes is limited, and false alarms and missed detections may occur. This embodiment addresses the problem by collecting long-tail data.
In this embodiment, the long-tailed data includes data for erroneously determining the normal scene as the dangerous scene and data for erroneously determining the dangerous scene as the normal scene.
Before reporting, the driving assistance algorithm checks whether it issued an alarm before the event occurred; if it did not, the event is a dangerous scene mistakenly judged as normal, and data reporting is triggered. For example, if the vehicle behaviour detection algorithm detects rapid deceleration while the driving assistance algorithm detects a vehicle ahead at a distance smaller than the set threshold, an alarm should have been issued; if no alarm was issued, data reporting is triggered, and difficult samples not recalled by the driving assistance algorithm can thus be accumulated in a targeted manner.
For data in which a normal scene is mistakenly judged as dangerous, after the driving assistance algorithm issues an alarm, the vehicle behaviour detection algorithm checks whether the vehicle decelerates or changes lanes within a period of time after the alarm. If no such potential behaviour is detected, the current alarm is judged to be a false alarm, and the driving assistance algorithm is triggered to package the alarm data and upload it to the server.
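A sketch of the two long-tail screening checks follows; the behaviour labels and distance threshold are assumed for illustration and do not come from the original text.

```python
def is_missed_alarm(event_type, detection, alarm_issued, distance_threshold_m=10.0):
    """A dangerous scene judged as normal: a hazard event occurred with a close vehicle
    ahead, but no alarm was issued beforehand."""
    hazardous = (event_type in {"hard_braking", "collision"}
                 and detection["has_front_vehicle"]
                 and detection["distance_m"] < distance_threshold_m)
    return hazardous and not alarm_issued


def is_false_alarm(behaviors_after_alarm):
    """A normal scene judged as dangerous: an alarm was issued, but no deceleration
    or lane change followed within the observation window."""
    return not any(b in {"deceleration", "lane_change"} for b in behaviors_after_alarm)
```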
For potentially abnormal data, the driving assistance algorithm judges whether the target vehicle ahead has switched; when such a switch occurs, the output waveforms in all dimensions may exhibit jitter or jumps. If waveform jitter is detected, the data are judged to be potentially abnormal, and the data before and after the current moment are packaged and uploaded to the server.
One way of detecting a switch of the target vehicle ahead is to compare the intersection over union (IoU) of the bounding box of the target in the current frame with that of the target in the previous frame; if the IoU is smaller than a threshold, the target is considered to have switched.
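A minimal sketch of this IoU-based switch check is shown below; the 0.5 threshold is an assumed value, since the embodiment only specifies "a threshold".

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def target_switched(prev_box, curr_box, iou_threshold=0.5):
    """The target ahead is considered switched when the IoU of the current and previous
    bounding boxes falls below the threshold (0.5 is an assumed value)."""
    return iou(prev_box, curr_box) < iou_threshold
```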
Waveform jitter detection is implemented with a Kalman filter. In the prediction phase of the Kalman filter, the likely value of a variable at time t (the Predict Value) is predicted from its values at times t-2 and t-1 (i.e. the position of the vehicle ahead, the vehicle distance and the relative speed). At time t, the front vehicle detection model processes the image captured by the camera (i.e. the driving recorder) and yields the observed value of the variable at the current time (the Measure Value). When the absolute error between the Predict Value and the Measure Value is greater than a set threshold, an abnormal jitter is deemed to have occurred at the current moment. If waveform jitter occurs while no switch of the vehicle ahead has occurred, data reporting is performed.
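Reusing the ScalarKalman sketch from the filter-layer discussion, the predict-versus-measure jitter check could look like this; the error threshold is an assumed parameter.

```python
def detect_jitter(kalman, measured_value, error_threshold):
    """Flag an abnormal jitter when the Kalman-predicted value for the current frame
    deviates from the measured value by more than the threshold."""
    predicted = kalman.predict()
    jitter = abs(predicted - measured_value) > error_threshold
    kalman.update(measured_value)          # keep the filter state up to date either way
    return jitter
```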
When reporting data, the driving assistance algorithm packages the internal video data, the external video data, the IMU data, the GPS data, the algorithm data and the like within a set time when an event occurs and uploads the packaged data to the server.
For example, when a vehicle collides, a sudden brake occurs, and a sudden lane change occurs, the driving assistance algorithm uploads internal video data, external video data, IMU data, GPS data, algorithm data, and the like within a set time period (e.g., 5 minutes before and 5 minutes after the occurrence of the event) to the server.
By screening the actual data generated while vehicles are running with the driving assistance algorithm, the event data most effective for improving the overall algorithm can be mined. Evaluating the driving assistance algorithm on the mined event data makes it possible to analyze precisely where the algorithm falls short and to evaluate its deployed effectiveness.
Optionally, to make it easier for the server to process the event data, the vehicle-mounted terminal may structure the event data before uploading. The server stores the structured data through Hive (a Hadoop-based data warehouse tool) and an object storage system, so that historical data can be queried, the impact of the deployed algorithm can be assessed, abnormal conditions can be monitored, and data are available for simulation and algorithm evaluation.
The server receives the uploaded event data and inputs it into its stored driving assistance algorithm for reproduction, obtaining the first detection result. In this embodiment, the first detection result includes the output of the front vehicle detection model in the driving assistance algorithm and the alarm result produced by the alarm strategy, so the performance of both the front vehicle detection model and the alarm strategy can be evaluated based on the first detection result, realizing an evaluation of the algorithm's deployed effectiveness.
In this embodiment, the server is pre-configured with an information detection model, and the preceding vehicle detection model is obtained by compressing or pruning this information detection model. The detection result output by the information detection model is used as ground truth and compared with the output result of the preceding vehicle detection model, so that the performance of the preceding vehicle detection model in the driving assistance algorithm can be evaluated.
Optionally, referring to fig. 4 in combination, the step of evaluating the driving assistance algorithms stored in the server and each vehicle-mounted terminal based on the first detection result includes steps S31 to S34.
Step S31: and inputting the event data into the information detection model and outputting a second detection result.
Step S32: and comparing the output result with the second detection result, and judging whether the output result is consistent with the second detection result.
If not, go to step S33, and if yes, go to step S34.
Step S33: and judging that the detection of the front vehicle detection model is wrong.
Step S34: And judging that the detection of the front vehicle detection model is correct.
After receiving the uploaded event data, the server inputs the event data into the information detection model and the driving assistance algorithm respectively, obtains the second detection result output by the information detection model and the output result of the front vehicle detection model included in the driving assistance algorithm, and compares the two. If the second detection result is inconsistent with the output result, it is determined that the detection of the front vehicle detection model included in the driving assistance algorithm is wrong and its detection effect needs to be improved.
It should be noted that, since the event data uploaded by the terminal is video data while the information detection model and the driving assistance algorithm operate on images, after receiving the event data the server needs to extract frames from the video to obtain the images to be detected, and then input each image into the information detection model and the driving assistance algorithm respectively for detection.
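A minimal sketch of this frame-by-frame comparison, assuming both models expose a detect() call that returns a preceding-vehicle box or None, and that "consistent" is judged by box overlap (the interface, the threshold and the helper below are assumptions of this sketch):

    def iou(a, b):
        """IoU of two (x1, y1, x2, y2) boxes, as in the earlier sketch."""
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def evaluate_front_vehicle_model(frames, front_vehicle_model, information_model,
                                     iou_threshold=0.5):
        """Return the frames on which the deployed model disagrees with the ground-truth model."""
        errors = []
        for frame in frames:
            truth_box = information_model.detect(frame)      # treated as ground truth
            output_box = front_vehicle_model.detect(frame)   # deployed, compressed model
            if truth_box is None and output_box is None:
                continue                                      # both agree: no preceding vehicle
            if (truth_box is None or output_box is None
                    or iou(truth_box, output_box) < iou_threshold):
                errors.append(frame)                          # detection error on this frame
        return errors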
After it is determined that the detection of the front vehicle detection model is incorrect, the front vehicle detection model needs to be optimized. Therefore, referring to fig. 5 in combination, the steps of updating the driving assistance algorithm stored in the server based on the evaluation result and issuing the updated driving assistance algorithm to each vehicle-mounted terminal include steps S41 to S43.
Step S41: and acquiring event data with wrong detection results, and expanding the event data with wrong detection results into a training sample of the detection model of the front vehicle.
Step S42: and training the front vehicle detection model again based on the expanded training sample to obtain the target front vehicle detection model.
Step S43: and updating the front vehicle detection model included in the driving assistance algorithm stored in the server based on the target front vehicle detection model, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
Because the uploaded event data is video data, the frames extracted from it have high inter-frame similarity. Adding all of the extracted frames to the training data would contribute little to the model's learning, would reduce training efficiency, and could confuse the model during learning so that truly useful features cannot be extracted. Therefore, to improve training efficiency, before extending the event data with wrong detection results into the training samples, redundancy processing is performed on the extracted frames to remove the large number of near-duplicate images, and only the remaining data is extended into the training samples of the front vehicle detection model.
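One possible redundancy-removal scheme (an assumption of this sketch, not specified by the embodiment) is to reduce each grayscale frame to a small average-hash fingerprint and drop frames whose fingerprints are too close to a frame already kept:

    import numpy as np

    def average_hash(gray_frame, hash_size=8):
        """Crudely downsample a grayscale frame and threshold at its mean to get a binary fingerprint."""
        h, w = gray_frame.shape
        ys = np.arange(hash_size) * h // hash_size
        xs = np.arange(hash_size) * w // hash_size
        small = gray_frame[np.ix_(ys, xs)].astype(np.float32)
        return (small > small.mean()).flatten()

    def drop_near_duplicates(gray_frames, max_hamming=5):
        """Keep a frame only if its hash differs enough from every frame kept so far."""
        kept, kept_hashes = [], []
        for frame in gray_frames:
            fh = average_hash(frame)
            if all(np.count_nonzero(fh != kh) > max_hamming for kh in kept_hashes):
                kept.append(frame)
                kept_hashes.append(fh)
        return kept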
Expanding the training samples enriches the detection scenes covered by the front vehicle detection model and improves its generalization. Therefore, the target front vehicle detection model obtained by retraining the front vehicle detection model on the expanded training samples has a better detection effect, and the optimization of the front vehicle detection model is thus realized.
After the target front vehicle detection model is obtained, the front vehicle detection model included in the driving assistance algorithm stored in the server can be updated based on it, thereby updating the driving assistance algorithm. After the driving assistance algorithm is updated, the updated driving assistance algorithm can be issued to the vehicle-mounted terminals, and each vehicle-mounted terminal updates the driving assistance algorithm it stores, thus realizing iteration of the algorithm.
The factors influencing the performance of the driving assistance algorithm include not only the front vehicle detection model but also the formulation of the alarm strategy. Therefore, after the front vehicle detection model is evaluated, the alarm strategy can also be evaluated. To this end, referring to fig. 6, in the present embodiment, the step of evaluating the driving assistance algorithms stored in the server and each vehicle-mounted terminal based on the first detection result may further include steps S35 to S38.
Step S35: and acquiring behavior data of the vehicle-mounted terminal.
Step S36: and analyzing the alarm result, the output result and the behavior data, and judging whether alarm error data exists or not.
If yes, go to step S37, otherwise go to step S38.
Step S37: and judging that the detection result of the alarm strategy is wrong.
Step S38: and judging that the detection result of the alarm strategy is correct.
The behavior data of the vehicle-mounted terminal represents the driving behavior of the driver, including, but not limited to, accelerating, decelerating, going straight, turning, changing lanes and the like. Optionally, in this embodiment, the behavior data may be uploaded to the server by a vehicle behavior detection algorithm deployed in the vehicle-mounted terminal. As another alternative, the server may also derive the behavior data of the vehicle-mounted terminal from the IMU data (acceleration, angular velocity and the like) uploaded by the driving assistance algorithm.
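A minimal sketch of deriving coarse behavior labels from IMU samples on the server side (the thresholds and label names are illustrative assumptions, not values given by the embodiment):

    def classify_behavior(longitudinal_accel_mps2, yaw_rate_radps,
                          brake_thresh=-3.0, accel_thresh=2.0, turn_thresh=0.15):
        """Map one IMU sample to a coarse driving-behavior label."""
        if longitudinal_accel_mps2 <= brake_thresh:
            return "hard_brake"
        if longitudinal_accel_mps2 >= accel_thresh:
            return "accelerate"
        if abs(yaw_rate_radps) >= turn_thresh:
            return "turn_or_lane_change"
        return "normal_driving"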
After the behavior data of the vehicle-mounted terminal is obtained, the alarm result, the output result and the behavior data are analyzed to judge whether alarm error data exist. If data with a wrong alarm exist, the detection result of the alarm strategy is judged to be wrong, meaning that the alarm strategy cannot correctly handle the scene corresponding to that alarm result.
Optionally, in this embodiment, the step of analyzing the alarm result, the output result, and the behavior data and determining whether there is data with an alarm error includes:
and judging whether an alarm is needed or not based on the output result and the behavior data.
And if the alarm is needed and the alarm result is not alarm, or if the alarm is not needed and the alarm result is alarm, judging that the alarm is wrong.
The output result is the detection result data of the front vehicle, which includes the detected position, distance, brake state, relative speed and the like of the front vehicle.
In this embodiment, the output result and the behavior data are jointly analyzed to judge whether the scene needs an alarm, and that judgment is compared with the alarm result: when the judgment is consistent with the alarm result, the alarm is correct; if they are inconsistent, the alarm is wrong. For example, if the behavior data indicates a rapid deceleration, the front vehicle detection model detects a vehicle ahead, and the distance to it is smaller than the set threshold, the current scene is determined to be one that needs an alarm; if the alarm strategy included in the driving assistance algorithm nevertheless identifies the scene as one that does not need an alarm and no alarm is given, the alarm is determined to be wrong (a missed alarm). Conversely, if the behavior data indicates normal driving, the front vehicle detection model detects a vehicle ahead, and the distance is greater than the set threshold, the current scene is determined to need no alarm; if the alarm strategy identifies the scene as one that needs an alarm and an alarm is given, the alarm is determined to be wrong (a false alarm).
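A minimal sketch of this joint check, assuming one aggregated record per event with the fields shown (the field names, the distance threshold and the error labels are illustrative assumptions):

    def check_alarm(behavior, lead_vehicle_detected, lead_distance_m,
                    alarm_given, distance_threshold_m=15.0):
        """Compare the alarm that was given with the alarm that the scene required."""
        alarm_needed = (behavior == "hard_brake"
                        and lead_vehicle_detected
                        and lead_distance_m < distance_threshold_m)
        if alarm_needed and not alarm_given:
            return "missed_alarm"     # should have alarmed but did not
        if not alarm_needed and alarm_given:
            return "false_alarm"      # alarmed although no alarm was needed
        return "correct"

    # Example corresponding to the missed-alarm case described above
    print(check_alarm("hard_brake", True, 8.0, alarm_given=False))  # -> missed_alarm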
According to the vehicle-mounted data processing method provided by this embodiment, the output result of the front vehicle detection model and the acquired behavior data are jointly analyzed, so that data corresponding to false alarms, missed alarms and late alarms can be effectively obtained; based on such data, the causes of the false, missed and late alarms can be analyzed, thereby realizing the evaluation of the driving assistance algorithm.
If it is determined that the detection result of the alarm strategy is wrong, this indicates that the alarm strategy needs to be optimized. Accordingly, referring to fig. 7, the steps of updating the driving assistance algorithm stored in the server based on the evaluation result and issuing the updated driving assistance algorithm to each vehicle-mounted terminal include steps S44 to S45.
Step S44: and marking the behavior data and the data with the alarm errors to obtain a marking result, and adjusting the alarm strategy according to the marking result.
Step S45: and updating the alarm strategy included in the driving assistance algorithm stored in the server based on the adjusted alarm strategy, and issuing the updated driving assistance algorithm to the vehicle-mounted terminal.
The labeling result includes scenes that require an alarm and scenes that do not. After the labeling result is obtained, these scenes can be incorporated into the alarm strategy, thereby realizing an optimized adjustment of the alarm strategy.
As an optional implementation, this embodiment may perform visual simulation on the data with wrong alarms and the behavior data, display a video that renders the front vehicle detection result data, alarm time, speed, driving behavior and the like based on the simulation result, and carry out data labeling based on the displayed video.
As another optional implementation, in this embodiment the data with alarm errors and the behavior data may be analyzed, the features of scenes that require an alarm may be extracted, and the data may be labeled based on the extracted features.
Optionally, in this embodiment, the labels of the scenes to be alarmed include, but are not limited to, road conditions (highway, rural road, etc.), weather and illumination (night, daytime, rain and snow, backlight, etc.), driving behaviors (front vehicle braking, turning, overtaking and lane changing, etc.), and other common false alarm scenes (e.g., road bumps).
According to the vehicle-mounted data processing method provided by this embodiment, by labeling the behavior data and the data with wrong alarms, the precision and recall of the driving assistance algorithm in different scenes can be obtained from the labeling results. For scenes with low precision or recall, the alarm strategy can then be adjusted in a targeted manner, realizing the optimization of the alarm strategy.
After the alarm strategy is adjusted and optimized, the alarm strategy included in the driving assistance algorithm stored in the server can be updated based on the adjusted alarm strategy, so that the driving assistance algorithm is updated. After the driving assistance algorithm is updated, the updated driving assistance algorithm can be issued to the vehicle-mounted terminal, and the vehicle-mounted terminal updates the driving assistance algorithm stored in the vehicle-mounted terminal, so that the iteration of the algorithm is realized, and the landing requirement of the driving assistance algorithm is met.
In this embodiment, when the driving assistance algorithm is issued to the vehicle-mounted terminal, the front vehicle detection model, the filter, the alarm strategy, the model scheduling and the like may be packaged into an independent application package (APK). This independent APK is then integrated into the main system of the vehicle-mounted terminal and communicates with the other algorithm modules in the main system (such as a behavior detection algorithm and a collision detection algorithm), and all algorithms on the vehicle-mounted terminal are uniformly scheduled and orchestrated by the main system. The algorithm APK and the main system are loosely coupled, making deployment flexible; in the process of landing the algorithm on a large scale, this greatly improves development and iteration efficiency, shortens the release cycle, and improves system stability.
Optionally, before the front vehicle detection model is packaged, in order to reduce the computational overhead, graph optimization and bit quantization may be performed on the front vehicle detection model.
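As an illustrative sketch of the quantization step only (the embodiment does not name a framework; PyTorch dynamic quantization is used here purely as an example, the model below is a hypothetical stand-in, and a convolution-heavy detector would more likely go through static quantization or a dedicated deployment toolchain):

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for part of the front vehicle detection model.
    model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 5))

    # Dynamic 8-bit quantization of the linear layers reduces model size and
    # inference cost on the vehicle-mounted terminal.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    print(quantized)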
It can be understood that, in this embodiment, the alarm strategy and the front vehicle detection model may be updated separately or together; that is, in one iteration of the algorithm, only the front vehicle detection model or only the alarm strategy may be updated, or both may be updated at the same time, depending on the actual situation.
The vehicle-mounted data processing method provided by this embodiment collects data in actual scenes and reproduces the driving assistance algorithm on the collected data, so the landing performance of the driving assistance algorithm is effectively evaluated; the iteration of the driving assistance algorithm is then driven by the evaluation result, realizing a closed loop of data and meeting the landing requirements of the driving assistance algorithm.
On this basis, the vehicle-mounted data processing method provided by this embodiment may further divide the vehicle-mounted terminals into an experimental group and a control group, turn on the prompt function of the driving assistance algorithm for the experimental group, turn off the prompt function for the control group, and verify the landing effect of the driving assistance algorithm by collecting and comparing the accident occurrence data of the two groups. For example, if the number of accidents in the experimental group is significantly smaller than in the control group, it indicates that the driving assistance algorithm reduces the accident rate.
Based on the same inventive concept, please refer to fig. 8, the present embodiment further provides a vehicle-mounted data processing apparatus 150, which includes a data obtaining module 151, a data processing module 152, an algorithm evaluating module 153, and an algorithm updating module 154.
The data acquisition module 151 is configured to receive event data reported by each vehicle-mounted terminal; the event data is obtained by screening based on a driving assistance algorithm stored in the vehicle-mounted terminal.
The data processing module 152 is configured to input the event data into a driving assistance algorithm stored in the server to obtain a first detection result.
The algorithm evaluation module 153 is configured to evaluate the driving assistance algorithms stored in the server and each vehicle-mounted terminal based on the first detection result, so as to obtain an evaluation result.
The algorithm updating module 154 is configured to update the driving assistance algorithm stored in the server based on the evaluation result, and issue the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional implementation manner, an information detection model is pre-constructed in the server, the driving assistance algorithm includes a preceding vehicle detection model, the first detection result includes an output result of the preceding vehicle detection model, and the preceding vehicle detection model is obtained by compressing or pruning the information detection model; the algorithm evaluation module 153 is configured to:
and inputting the event data into a pre-constructed information detection model, and outputting a second detection result.
And comparing the output result with the second detection result, and judging whether the output result is consistent with the second detection result.
And if the two results are not consistent, judging that the detection of the front vehicle detection model is wrong.
In an alternative embodiment, referring to fig. 9 in combination, the algorithm update module 154 is configured to:
and acquiring event data with wrong detection results, and expanding the event data with wrong detection results into a training sample of the front vehicle detection model.
And training the front vehicle detection model again based on the expanded training sample to obtain the target front vehicle detection model.
And updating the front vehicle detection model included in the driving assistance algorithm stored in the server based on the target front vehicle detection model, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
In an optional embodiment, the driving assistance algorithm further comprises a pre-established warning strategy, and the first detection result further comprises a warning result; the algorithm evaluation module 153 is configured to:
acquiring behavior data of the vehicle-mounted terminal;
analyzing the alarm result, the output result and the behavior data, and judging whether alarm error data exists or not;
and if so, judging that the detection result of the alarm strategy is wrong.
In an alternative embodiment, the algorithm evaluation module is configured to:
and judging whether an alarm is needed or not based on the output result and the behavior data.
And if the alarm is needed and the alarm result is not alarm, or if the alarm is not needed and the alarm result is alarm, judging that the alarm is wrong.
In an alternative embodiment, referring to fig. 9 in combination, the algorithm update module 154 is configured to:
marking the behavior data and the data with the alarm error to obtain a marking result, and adjusting an alarm strategy according to the marking result;
and updating the alarm strategy included in the driving assistance algorithm stored in the server based on the adjusted alarm strategy, and issuing the updated driving assistance algorithm to the vehicle-mounted terminal.
In an alternative embodiment, the driving assistance algorithm includes a model layer, a filter layer, and a strategy layer. Referring to fig. 9, the vehicle-mounted data processing apparatus 150 further includes an algorithm training module 155 (a sketch of this three-layer flow is given after the following list), and the algorithm training module 155 is configured to:
and acquiring the driving data of the vehicle and a labeling scene corresponding to the driving data.
And inputting the running data into the model layer, and outputting the front vehicle detection result data of the vehicle.
And inputting the detection result data of the front vehicle into a filter layer, and smoothing the detection result data of the front vehicle.
And inputting the smoothed detection result data of the front vehicle to a strategy layer, and identifying the scene in the smoothed detection result data of the front vehicle.
And judging whether the identified scene is consistent with the labeled scene, if not, adjusting parameters of a model layer, a filter layer or a strategy layer until the identified scene is consistent with the labeled scene, and obtaining the driving assistance algorithm of the initial version.
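A minimal sketch of this model layer -> filter layer -> strategy layer flow (the layer interfaces and the simple moving-average filter are assumptions made only for illustration; the embodiment does not prescribe them):

    class DrivingAssistanceAlgorithm:
        """Toy pipeline: model layer -> filter layer -> strategy layer."""

        def __init__(self, model_layer, strategy_layer, window=3):
            self.model_layer = model_layer          # outputs front vehicle detection data per frame
            self.strategy_layer = strategy_layer    # maps smoothed detections to a scene label
            self.window = window
            self.history = []

        def filter_layer(self, detection):
            """Smooth the front vehicle distance with a short moving average."""
            self.history = (self.history + [detection["distance_m"]])[-self.window:]
            detection["distance_m"] = sum(self.history) / len(self.history)
            return detection

        def run(self, frame):
            detection = self.model_layer(frame)
            smoothed = self.filter_layer(detection)
            return self.strategy_layer(smoothed)

    def evaluate_initial_version(algorithm, samples):
        """Compare recognized scenes with labeled scenes; mismatches drive parameter tuning."""
        return [s for s in samples if algorithm.run(s["frame"]) != s["labeled_scene"]]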
Since the vehicle-mounted data processing device 150 in this embodiment solves the problem on a principle similar to that of the vehicle-mounted data processing method in the embodiment of the present invention, the implementation of the vehicle-mounted data processing device 150 may refer to the implementation of the method, and the repeated details are not described again.
The above-described modules of the in-vehicle data processing device 150 may be connected or communicate with each other via a wired connection or a wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, etc., or any combination thereof. The wireless connection may comprise a connection over a LAN, WAN, bluetooth, ZigBee, NFC, or the like, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units.
On the basis of the foregoing, this embodiment further provides a readable storage medium storing a computer program which, when executed by a processor, performs the steps of the vehicle-mounted data processing method according to any one of the foregoing embodiments.
Alternatively, the readable storage medium can be a general storage medium, such as a removable disk, a hard disk, or the like.
The computer program product of the vehicle-mounted data processing method provided in this embodiment includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the vehicle-mounted data processing method in the foregoing method embodiment, which may be specifically referred to in the foregoing method embodiment, and are not described herein again.
In summary, the vehicle-mounted data processing method and device, server and readable storage medium provided by the embodiments of the present invention construct a closed data loop: after event data uploaded by the vehicle-mounted terminals is received, the event data is input into the driving assistance algorithm stored on the server to obtain a first detection result, the driving assistance algorithms stored on the server and each vehicle-mounted terminal are evaluated based on the first detection result to obtain an evaluation result, the driving assistance algorithm is updated based on the evaluation result, and the updated driving assistance algorithm is issued to each vehicle-mounted terminal.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A vehicle-mounted data processing method is applied to a server, the server is in communication connection with at least one vehicle-mounted terminal, the server and each vehicle-mounted terminal store a driving assistance algorithm, and the method comprises the following steps:
receiving event data reported by each vehicle-mounted terminal; the event data are obtained by screening based on a driving assistance algorithm stored in the vehicle-mounted terminal;
inputting the event data into a driving assistance algorithm stored by the server to obtain a first detection result;
evaluating the driving assistance algorithms stored by the server and each vehicle-mounted terminal based on the first detection result to obtain an evaluation result;
and updating the driving assistance algorithm stored in the server based on the evaluation result, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
2. The vehicle-mounted data processing method according to claim 1, wherein an information detection model is pre-constructed in the server, the driving assistance algorithm includes a leading vehicle detection model, the first detection result includes an output result of the leading vehicle detection model, and the leading vehicle detection model is obtained by compressing or pruning the information detection model;
the step of evaluating the driving assistance algorithms stored in the server and each of the in-vehicle terminals based on the first detection result to obtain an evaluation result includes:
inputting the event data into the information detection model and outputting a second detection result;
comparing the output result with the second detection result, and judging whether the output result is consistent with the second detection result;
and if the two results are not consistent, judging that the detection of the front vehicle detection model is wrong.
3. The vehicle-mounted data processing method according to claim 2, wherein the step of updating the driving assistance algorithm stored in the server based on the evaluation result and issuing the updated driving assistance algorithm to each of the vehicle-mounted terminals includes:
acquiring event data with a wrong detection result, and expanding the event data with the wrong detection result into a training sample of the front vehicle detection model;
based on the expanded training sample, the front vehicle detection model is trained again to obtain a target front vehicle detection model;
and updating the preceding vehicle detection model included in the driving assistance algorithm stored in the server based on the target preceding vehicle detection model, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
4. The vehicle-mounted data processing method according to claim 2, wherein the driving assistance algorithm further comprises a pre-established alarm strategy, and the first detection result further comprises an alarm result; the step of evaluating the driving assistance algorithms stored in the server and each of the in-vehicle terminals based on the first detection result to obtain an evaluation result includes:
acquiring behavior data of the vehicle-mounted terminal;
analyzing the alarm result, the output result and the behavior data, and judging whether alarm error data exists or not;
and if so, judging that the detection result of the alarm strategy is wrong.
5. The vehicle-mounted data processing method according to claim 4, wherein the step of analyzing the alarm result, the output result and the behavior data and judging whether alarm error data exists comprises the following steps:
judging whether an alarm is needed or not based on the output result and the behavior data;
and if the alarm is needed and the alarm result is not alarm, or the alarm is not needed and the alarm result is alarm, judging that the alarm is wrong.
6. The vehicle-mounted data processing method according to claim 4, wherein the step of updating the driving assistance algorithm stored in the server based on the evaluation result and issuing the updated driving assistance algorithm to each of the vehicle-mounted terminals includes:
marking the behavior data and the data with the alarm error to obtain a marking result, and adjusting the alarm strategy according to the marking result;
and updating the alarm strategy included in the driving assistance algorithm stored in the server based on the adjusted alarm strategy, and issuing the updated driving assistance algorithm to each vehicle-mounted terminal.
7. The vehicle-mounted data processing method according to claim 1, wherein the driving assistance algorithm comprises a model layer, a filter layer and a strategy layer, the method further comprising a step of obtaining an initial version of the driving assistance algorithm, the step comprising:
acquiring running data of a vehicle and a labeling scene corresponding to the running data;
inputting the driving data into the model layer, and outputting the front vehicle detection result data of the vehicle;
inputting the detection result data of the front vehicle into the filter layer, and smoothing the detection result data of the front vehicle;
inputting the smoothed front vehicle detection result data into the strategy layer, and identifying scenes in the smoothed front vehicle detection result data;
and judging whether the identified scene is consistent with the labeled scene, if not, adjusting the parameters of the model layer, the filtering layer or the strategy layer until the identified scene is consistent with the labeled scene, and obtaining the driving assistance algorithm of the initial version.
8. The vehicle-mounted data processing device is applied to a server, the server is in communication connection with at least one vehicle-mounted terminal, the server and each vehicle-mounted terminal store a driving assistance algorithm, and the device comprises a data acquisition module, a data processing module, an algorithm evaluation module and an algorithm updating module;
the data acquisition module is used for receiving event data reported by each vehicle-mounted terminal; the event data are obtained by screening based on a driving assistance algorithm stored in the vehicle-mounted terminal;
the data processing module is used for inputting the event data into a driving assistance algorithm stored by the server to obtain a first detection result;
the algorithm evaluation module is used for evaluating the driving assistance algorithms stored by the server and each vehicle-mounted terminal based on the first detection result to obtain an evaluation result;
the algorithm updating module is used for updating the driving assistance algorithm stored in the server based on the evaluation result and sending the updated driving assistance algorithm to each vehicle-mounted terminal.
9. A server, comprising: memory, processor and computer program stored on the memory and executable on the processor, which when executed by the processor implements the on-board data processing method of any of claims 1 to 7.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the vehicle-mounted data processing method according to any one of claims 1 to 7.
CN202011150412.5A 2020-10-23 2020-10-23 Vehicle-mounted data processing method and device, server and readable storage medium Pending CN112287801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011150412.5A CN112287801A (en) 2020-10-23 2020-10-23 Vehicle-mounted data processing method and device, server and readable storage medium

Publications (1)

Publication Number Publication Date
CN112287801A true CN112287801A (en) 2021-01-29

Family

ID=74423829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011150412.5A Pending CN112287801A (en) 2020-10-23 2020-10-23 Vehicle-mounted data processing method and device, server and readable storage medium

Country Status (1)

Country Link
CN (1) CN112287801A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103723096A (en) * 2014-01-10 2014-04-16 上海大众汽车有限公司 Driving assistance system with wireless communication function
WO2017074966A1 (en) * 2015-10-26 2017-05-04 Netradyne Inc. Joint processing for embedded data inference
CN107150689A (en) * 2017-03-20 2017-09-12 深圳市保千里电子有限公司 A kind of automobile assistant driving method and system
CN111038522A (en) * 2018-10-10 2020-04-21 哈曼国际工业有限公司 System and method for assessing training data set familiarity of driver assistance systems
CN111731284A (en) * 2020-07-21 2020-10-02 平安国际智慧城市科技股份有限公司 Driving assistance method and device, vehicle-mounted terminal equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287300A (en) * 2020-10-23 2021-01-29 北京嘀嘀无限科技发展有限公司 Data processing method and device, server and storage medium
CN114035960A (en) * 2021-11-16 2022-02-11 京东方科技集团股份有限公司 Edge computing device, interaction method, device, terminal device and storage medium
CN114584584A (en) * 2022-02-28 2022-06-03 福思(杭州)智能科技有限公司 System and method for processing vehicle driving data and storage medium
CN115439954A (en) * 2022-08-29 2022-12-06 上海寻序人工智能科技有限公司 Data closed-loop method based on cloud large model
CN116664964A (en) * 2023-07-31 2023-08-29 福思(杭州)智能科技有限公司 Data screening method, device, vehicle-mounted equipment and storage medium
CN116664964B (en) * 2023-07-31 2023-10-20 福思(杭州)智能科技有限公司 Data screening method, device, vehicle-mounted equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112287801A (en) Vehicle-mounted data processing method and device, server and readable storage medium
US11840239B2 (en) Multiple exposure event determination
US11475770B2 (en) Electronic device, warning message providing method therefor, and non-transitory computer-readable recording medium
CN111038522B (en) Vehicle control unit and method for evaluating a training data set of a driver assistance system
JP5278419B2 (en) Driving scene transition prediction device and vehicle recommended driving operation presentation device
JP6944472B2 (en) Methods, devices, and systems for detecting reverse-way drivers
US11549815B2 (en) Map change detection
CN113165646B (en) Electronic device for detecting risk factors around vehicle and control method thereof
GB2573738A (en) Driving monitoring
US20220388547A1 (en) Method for training a machine learning algorithm for predicting an intent parameter for an object on a terrain
US11645360B2 (en) Neural network image processing
CN116767281A (en) Auxiliary driving method, device, equipment, vehicle and medium
CN115675520A (en) Unmanned driving implementation method and device, computer equipment and storage medium
CN108839615A (en) A kind of driving warning method, device and electronic equipment
EP4038346A1 (en) Crosswalk detection
US10953871B2 (en) Transportation infrastructure communication and control
US12026953B2 (en) Systems and methods for utilizing machine learning for vehicle detection of adverse conditions
Wu et al. Lane-GNN: Integrating GNN for Predicting Drivers' Lane Change Intention
CN117882116A (en) Parameter adjustment and data processing method and device for vehicle identification model and vehicle
JP6732053B2 (en) Method, apparatus, and system for detecting reverse-drive drivers
CN112308434A (en) Traffic safety risk assessment method and system
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle
CN116434041B (en) Mining method, device and equipment for error perception data and automatic driving vehicle
CN117612140B (en) Road scene identification method and device, storage medium and electronic equipment
CN117373263B (en) Traffic flow prediction method and device based on quantum pigeon swarm algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210129