CN114722931A - Vehicle-mounted data processing method and device, data acquisition equipment and storage medium - Google Patents

Vehicle-mounted data processing method and device, data acquisition equipment and storage medium

Info

Publication number
CN114722931A
CN114722931A
Authority
CN
China
Prior art keywords
data
scene
information
vehicle
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210343165.3A
Other languages
Chinese (zh)
Inventor
韩东
周克林
吴斌
陈欣欣
杨坤
赵轩
王鸿
赵雪峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foss Hangzhou Intelligent Technology Co Ltd
Original Assignee
Foss Hangzhou Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foss Hangzhou Intelligent Technology Co Ltd
Priority to CN202210343165.3A
Publication of CN114722931A
Status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a vehicle-mounted data processing method and device, data acquisition equipment and a storage medium. The method comprises the following steps: acquiring scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data; determining scene characteristic information based on the scene data; and identifying a scene type based on the scene characteristic information. By acquiring multi-dimensional scene data, the method further improves the accuracy of vehicle scene type identification and overcomes the defect that scene data acquired through a single channel cannot comprehensively reflect scene information.

Description

Vehicle-mounted data processing method and device, data acquisition equipment and storage medium
Technical Field
The application relates to the technical field of intelligent driving, in particular to a vehicle-mounted data processing method and device, data acquisition equipment and a storage medium.
Background
With the rapid development of intelligent driving technology in recent years, driving scene recognition, which classifies scenes based on collected vehicle data, has been widely applied to intelligent vehicles. In existing driving scene recognition schemes, current vehicle information is generally acquired either by recognizing a driver's voice commands in real time and then performing scene recognition, or by collecting scene data around the vehicle in real time through on-board collecting devices such as body sensors and cameras and classifying that data to realize scene recognition.
However, determining the scene type through voice recognition depends excessively on the driver's subjective judgment, places certain demands on the driver's expertise, and yields vehicle information that is limited to the driver's perspective and therefore not comprehensive. This approach also easily distracts the driver and can lead to dangerous driving. When scene data around the vehicle is collected in real time through on-board image collecting devices, the collected data is limited by the devices themselves: in a low-grade intelligent driving vehicle, for example, insufficient sensors prevent comprehensive recognition of the vehicle's surroundings, and in rainy or snowy weather such information cannot be delivered to the driver in advance so that the driving mode can be switched in advance.
No effective solution has yet been proposed in the related art for further improving the accuracy of vehicle scene type identification.
Disclosure of Invention
In view of the above, it is necessary to provide a vehicle-mounted data processing method and apparatus, a data acquisition device, and a storage medium that can improve the accuracy of vehicle scene type identification.
In a first aspect, the present application provides a vehicle-mounted data processing method. The method comprises the following steps:
acquiring scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data;
determining scene characteristic information based on the scene data;
identifying a scene type based on the scene feature information.
In one embodiment, the scene data includes user input data, sensor data, and third party data, and the acquiring the scene data includes:
collecting user input data through a voice input device;
collecting sensor data by a plurality of sensors, wherein the sensor data includes road data and obstacle data;
the method comprises the steps of determining the position of a vehicle through a global positioning system, and acquiring third-party data through a high-precision map and a network query interface based on the position of the vehicle, wherein the third-party data comprises road condition data and real-time weather data.
In one embodiment, the determining scene characteristic information based on the scene data includes:
converting the user input data into a voice text, and determining key information according to the voice text;
matching the key information with a preset voice instruction library, and if the key information is successfully matched, determining a first scene characteristic according to the key information;
and if the matching is unsuccessful, updating a preset voice instruction library through a cloud training model based on the key information, and generating a corresponding first scene characteristic.
In one embodiment, the determining scene characteristic information based on the scene data further comprises:
determining a target type based on the sensor data;
and fusing sensor data corresponding to the same type of target object to obtain a second scene characteristic.
In one embodiment, the determining scene characteristic information based on the scene data further comprises:
and obtaining a third scene characteristic based on the third party data, wherein the third scene characteristic comprises a road network characteristic and a weather characteristic.
In one embodiment, after determining the scene type, the method further includes:
and matching the scene characteristic information with a preset recording condition, and if the matching is successful, generating a scene recording strip and a corresponding scene label according to the scene characteristic information and the scene type.
In one embodiment, the method further comprises:
feeding back the scene record entry to a user and collecting user feedback information;
and updating the preset recording condition according to the user feedback information and the scene record entry.
In a second aspect, the present application further provides an on-vehicle data processing apparatus, including:
an acquisition module, configured to acquire scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data;
a processing module for determining scene characteristic information based on the scene data;
and the identification module is used for identifying the scene type based on the scene characteristic information.
In a third aspect, the present application further provides a data acquisition device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above vehicle-mounted data processing method when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above-mentioned vehicle-mounted data processing method.
Compared with the prior art, the vehicle-mounted data processing method and apparatus, data acquisition device and storage medium described above acquire multi-dimensional scene data of the scene where the vehicle is located through three channels, namely user input, multiple sensors and third-party software, perform feature extraction on the scene data to determine scene feature information, and identify the scene type according to the scene feature information. The acquired multi-dimensional scene data further improves the accuracy of vehicle scene type identification, overcomes the defect that scene data acquired through a single channel cannot comprehensively reflect scene information, and achieves fast and accurate identification of the scene type. The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
FIG. 1 is a diagram of the application environment of a vehicle-mounted data processing method in one embodiment;
FIG. 2 is a schematic flowchart of a vehicle-mounted data processing method in one embodiment;
FIG. 3 is a flowchart of a vehicle-mounted data processing method in a preferred embodiment;
FIG. 4 is a block diagram of a vehicle-mounted data processing apparatus in one embodiment;
FIG. 5 is a block diagram of a vehicle-mounted data processing apparatus in a preferred embodiment;
FIG. 6 is a diagram showing an internal structure of a data acquisition apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vehicle-mounted data processing method provided in the embodiments of the application can be applied to the application environment shown in fig. 1. Fig. 1 is a diagram of the application environment of the vehicle-mounted data processing method in this embodiment. A terminal 101 may acquire scene data around the vehicle through an internally integrated sensor module, acquire scene data in real time from third-party software 102 through a network interface, and acquire scene data through an internally integrated voice recognition module. A server 103 is connected to the terminal 101 through a communication network. After the terminal 101 acquires the scene data, scene recognition may be completed on an integrated chip inside the terminal, or the scene data may be sent to the server 103 for processing; whether to store the scene information is then determined, and data to be stored is sent to a database 104. Further, the server 103 may be implemented as a stand-alone server or a server cluster composed of multiple servers, and may be used to remotely upgrade the computer program corresponding to the method of the present application.
In one embodiment, as shown in fig. 2, a vehicle-mounted data processing method is provided. Fig. 2 is a schematic flowchart of the method, which includes the following steps:
step S201, scene data is acquired, where the scene data includes at least two of user input data, sensor data, and third-party data.
The user input data is data input by a user that describes the current scene; the sensor data is real-time scene data around the vehicle acquired by sensors; and the third-party data is data related to the city and street where the vehicle is located, acquired according to a Global Positioning System (GPS).
Specifically, a user may input data through an input device built into the data acquisition device corresponding to this embodiment, or through an external input device connected to the data acquisition device; the relevant sensors may all be integrated in the data acquisition device, and the third-party data may be acquired from external software through a network interface.
In step S202, scene feature information is determined based on the scene data.
Because the data acquired in the above step is too discrete and does not describe the scene concisely and intuitively, it must first be preliminarily processed; the data from each channel can determine corresponding scene feature information according to its key information. Specifically, the feature information is a characteristic description of different scenes obtained from different scene data, and it can intuitively indicate the type of the current scene.
In step S203, a scene type is identified based on the scene feature information.
When performing scene recognition, weights may be assigned to different pieces of scene feature information and the scene type determined by weighted summation. For example, one piece of scene feature information may be set as the primary identification basis and the others as secondary bases; alternatively, the scene feature information corresponding to user input data may be placed in a first identification tier, the scene feature information corresponding to third-party data in a second identification tier, and the scene feature information corresponding to sensor data in a third identification tier, with the scene type identified sequentially from the first tier to the third. The specific recognition mode and criteria may be set according to actual conditions and are not limited here.
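As an illustration of the weighted-summation variant described above, the following Python sketch scores candidate scene types from per-channel scene features; the channel weights, feature tuples and confidence values are illustrative assumptions, not values specified in this application.

```python
# Minimal sketch of weighted scene-type scoring (illustrative values only).
from collections import defaultdict

# Assumed per-channel weights: user input > third-party > sensor.
CHANNEL_WEIGHTS = {"user_input": 0.5, "third_party": 0.3, "sensor": 0.2}

def identify_scene_type(features):
    """features: list of (channel, candidate_scene_type, confidence)."""
    scores = defaultdict(float)
    for channel, scene_type, confidence in features:
        scores[scene_type] += CHANNEL_WEIGHTS[channel] * confidence
    # The scene type with the highest weighted score wins.
    return max(scores, key=scores.get)

# Example: voice input and weather data both suggest a rainy-road scene.
print(identify_scene_type([
    ("user_input", "rainy_road", 0.9),
    ("third_party", "rainy_road", 0.8),
    ("sensor", "urban_road", 0.7),
]))  # -> "rainy_road"
```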
Through steps S201 to S203, the vehicle-mounted data processing method of this embodiment acquires multi-dimensional scene data through multiple channels, performs feature extraction on the multi-dimensional vehicle-mounted data to determine corresponding multi-dimensional scene feature information, and finally identifies the scene type based on that feature information. This embodiment combines sensors, external software and user voice input for data acquisition, which on the one hand enriches the diversity and comprehensiveness of the data, further improves the accuracy of scene type identification, broadens the application range of the corresponding apparatus, and allows more scene types to be recognized. On the other hand, it avoids the situation in which a fatigued driver fails to identify the scene promptly and accurately, improving driving safety.
In one embodiment, the scene data includes user input data, sensor data, and third party data, and the acquiring the scene data includes:
collecting user input data through a voice input device;
collecting sensor data by a plurality of sensors, wherein the sensor data includes road data and obstacle data;
the method comprises the steps of determining the position of a vehicle through a global positioning system, and acquiring third-party data through a high-precision map and a network query interface based on the position of the vehicle, wherein the third-party data comprises road condition data and real-time weather data.
Specifically, when the voice input device collects data input by a user, the user may speak a preset wake-up word or voice command sentence to the voice input device before inputting voice information, so as to switch the device on. The voice input device may be an audio receiver installed inside the data acquisition device corresponding to the method of the present application, or an external audio receiver connected to the data acquisition device by wire or wirelessly, for example a Bluetooth headset, a wired headset with a microphone, or an independent microphone.
When collecting sensor data through multiple sensors, the sensors in this embodiment may be a lidar, a millimeter-wave radar, a surround-view camera, a long-focus camera, a GPS and Inertial Measurement Unit (IMU), an ultrasonic radar, and the like. The acquired sensor data includes road data and obstacle data. The road data may include the section type of the road section where the vehicle is currently located, the section grade, whether the section is an intersection, and road surface conditions, for example urban road/rural road/open road, first-grade road/second-grade road, straight road/curve/crossing, highway entrance/exit, number of lanes, light or moderate water accumulation on the road surface, and the like. The obstacle data may be divided into static obstacle data and dynamic obstacle data: static obstacles are objects stationary relative to the road, such as roadblocks, stationary pedestrians or vehicles, and stationary buildings on both sides of the road; dynamic obstacles are objects moving relative to the road, such as moving vehicles and pedestrians. It should be emphasized that the sensors involved in the present application can all be integrated in the corresponding vehicle-mounted data processing apparatus, so no additional sensors need to be installed, avoiding extra data acquisition cost.
Further, in this embodiment, the vehicle position may also be determined by GPS, and third-party data acquired through a high-precision map and a network query interface based on the vehicle position, where the third-party data includes road condition data and real-time weather data. Specifically, the road condition data that can be acquired from the high-precision map based on the vehicle position includes, but is not limited to, lane line types (white dotted line/white solid line/yellow dotted line/yellow solid line), road types (highway/urban road/bicycle lane), lane connection information (preceding and following lanes), lane adjacency information (left lane/right lane/branch lane/merge lane), traffic information (traffic lights/dedicated lanes), speed limits, inclination, and the like. Further, the real-time congestion condition and congestion trend of a specified road or area can also be queried from an operator through the network query interface, for example congestion evaluation (clear, slow, congested, etc.) and congestion distance (e.g., 10 meters).
Furthermore, the longitude and latitude of the current position can be determined from the specific vehicle position provided by GPS, the current city and street obtained from the map according to the longitude and latitude, and the city and street information sent to a cloud center through a weather query interface, where the weather query interface is a query interface provided by an operator; operators that can currently provide complete information include navigation map providers such as Amap, Baidu Map and Tencent Map. The cloud center then returns current real-time weather information including, but not limited to, PM2.5 level (excellent, good, light pollution, heavy pollution, etc.), weather conditions (sunny, cloudy, rainy, thunderstorm, etc.), early warning information (snow on the road, etc.), cloud cover (e.g., 99999999), visibility (e.g., 3471), and update time (e.g., 20200220143500).
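The following Python sketch illustrates the position-to-weather lookup described above; the endpoint URL, parameter names and response fields are hypothetical placeholders, since no concrete operator API is specified here.

```python
# Hypothetical sketch: resolve GPS position to city/street, then query weather.
import requests  # any HTTP client works

WEATHER_API = "https://example-operator.com/weather"  # placeholder endpoint

def query_realtime_weather(latitude, longitude, reverse_geocode):
    # Map longitude/latitude to a city and street (e.g., via the map provider).
    city, street = reverse_geocode(latitude, longitude)
    # Send city/street to the cloud center through the operator's query interface.
    response = requests.get(WEATHER_API, params={"city": city, "street": street})
    data = response.json()
    # Fields mirror the examples in the text; actual names depend on the operator.
    return {
        "weather": data.get("weather"),        # sunny / cloudy / rainy / ...
        "pm25": data.get("pm25"),              # air quality grade
        "warning": data.get("warning"),        # e.g. snow on the road
        "visibility": data.get("visibility"),  # e.g. 3471
        "update_time": data.get("update_time"),
    }
```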
In this embodiment, vehicle-mounted data is acquired through different channels, so that the acquired data can describe the current scene around the vehicle from multiple angles, display the current scene more comprehensively, and adapt to more application scenes, improving the diversity and stability of the scene data and the accuracy with which the current scene is described. For example, in severe weather, sensor data alone cannot accurately describe the current scene, but the other acquisition channels can compensate for it.
In one embodiment, the determining scene characteristic information based on the scene data includes:
converting the user input data into a voice text, and determining key information according to the voice text; matching the key information with a preset voice instruction library, and if the key information is successfully matched, determining a first scene characteristic according to the key information; and if the matching is unsuccessful, updating a preset voice instruction library through a cloud training model based on the key information, and generating a corresponding first scene characteristic.
It can be understood that when a user inputs data while driving, the voice information, namely the user input data, can be acquired through the voice input device and then transferred to the voice module in the data acquisition device. When processing the voice information, the voice information input by the user is first converted into voice text, and key information is determined from that text. Specifically, the audio data may be converted into text data by Automatic Speech Recognition (ASR), and the key information of the text data determined by semantic parsing. The key information may include instruction information; for example, a user may issue an automatic parking instruction, report a traffic accident scene when suddenly encountering an accident with another vehicle while driving, or report the current scene as one of wildlife crossing the road when encountering such a crossing. The key information is then matched against a preset voice instruction library, and if the matching succeeds, the corresponding first scene feature is determined according to the key information. The preset voice instructions may be set by engineers at the factory, added in subsequent product updates, or set by the user according to driving habits. Further, if the matching fails, the preset voice instruction library is updated through a cloud training model based on the key information, and the corresponding first scene feature is generated; in this case, the instruction input by the user is a new one for which the preset library contains no corresponding control instruction and no corresponding scene type exists.
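A minimal Python sketch of the instruction-matching step described above; the library contents and the cloud-update callback are illustrative assumptions.

```python
# Illustrative sketch: match parsed key information against a preset
# voice instruction library; unmatched keys are sent for cloud updating.
PRESET_INSTRUCTIONS = {
    "automatic parking": "parking_scene",
    "traffic accident": "accident_scene",
    "wildlife crossing": "wildlife_crossing_scene",
}

def first_scene_feature(key_info, cloud_update):
    if key_info in PRESET_INSTRUCTIONS:
        # Match succeeded: derive the first scene feature directly.
        return PRESET_INSTRUCTIONS[key_info]
    # Match failed: let the cloud training model propose a new entry,
    # then record it so future inputs match locally.
    new_feature = cloud_update(key_info)  # assumed cloud-model callback
    PRESET_INSTRUCTIONS[key_info] = new_feature
    return new_feature
```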
Optionally, the voice input in this embodiment may be performed after the user receives feedback on the current scene type: when the user finds that the actual scene type is inconsistent with the scene information fed back by the system, a real-time correction can be made by voice input. The user may also actively supplement information when no scene type feedback is received from the system.
In this embodiment, the current scene type can be corrected in real time by recognizing the voice information input by the user to obtain the corresponding user input data, which avoids computer recognition errors in emergencies and makes up for the shortcomings of computer recognition, thereby further improving the accuracy of current scene type identification. When facing an emergency, the scene recognition system can correct or recognize it in time, improving the adaptability of scene recognition.
In one embodiment, the determining scene characteristic information based on the scene data further comprises:
determining a target type based on the sensor data; and fusing sensor data corresponding to the same type of target object to obtain a second scene characteristic.
When acquiring sensor data, because this application uses many sensor acquisition channels, the data acquired through each channel has its own characteristics. For example, a camera has a small detection range but, using computer vision, can distinguish the surrounding environment and objects and judge the distance of an obstacle; its detection accuracy, however, depends on the intensity of light and is easily affected by severe weather. A millimeter-wave radar can sense vehicle movement over a large range but is susceptible to signal interference in some scenes and cannot identify object attributes. A lidar can be applied to obstacle detection, dynamic obstacle detection, identification and tracking, and detection of road surface conditions. Therefore, each type of sensor data covers a number of different obstacles, different types of sensor data contain different amounts of road and obstacle data, and the same road or obstacle data is described differently across sensor types.
Therefore, after the sensor data is determined, it needs to be grouped and associated by target object. For example, if object A appears in the picture taken by the camera, in the data collected by the millimeter-wave radar, and in the point cloud collected by the lidar, the data describing object A must be extracted from these different sensor streams and grouped together. After information extraction and classified association of the sensor data, a fusion algorithm can be used to fuse each group of sensor data to determine traffic flow information. The traffic flow information includes the distance, speed, acceleration and so on of obstacles around the vehicle; taking object A as an example, the traffic flow information may be that object A is at distance K from the vehicle, moving at speed L relative to the road surface, with current acceleration N. Furthermore, all the sensors are integrated in the vehicle-mounted data processing apparatus, so no additional sensors need to be installed, avoiding extra data acquisition cost.
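The group-then-fuse step can be sketched as follows in Python; the per-sensor measurement format and the simple averaging fusion rule are illustrative assumptions (a production system would typically use a tracking filter such as a Kalman filter).

```python
# Illustrative sketch: group detections by target object, then fuse each
# group into one traffic-flow record (distance, speed, acceleration).
from collections import defaultdict
from statistics import mean

def fuse_detections(detections):
    """detections: list of dicts like
    {"sensor": "lidar", "target_id": "A", "distance": 12.1,
     "speed": 4.9, "accel": 0.2}"""
    groups = defaultdict(list)
    for det in detections:
        groups[det["target_id"]].append(det)  # associate by target object
    traffic_flow = {}
    for target_id, group in groups.items():
        # Naive fusion: average each quantity across sensors that saw it.
        traffic_flow[target_id] = {
            "distance": mean(d["distance"] for d in group),
            "speed": mean(d["speed"] for d in group),
            "accel": mean(d["accel"] for d in group),
        }
    return traffic_flow
```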
Furthermore, function trigger information for controlling vehicle travel can also be generated by combining individual sensor data with separate domain controllers. The domain controllers include a chassis domain controller, a body domain controller, a power domain controller, an autonomous driving domain controller, and the like. After the sensors acquire sensor data, the data can be transmitted to the corresponding external control device, which triggers the corresponding functional system to generate a control instruction according to the sensor data. Each domain controller may then generate corresponding function trigger information on receiving a control instruction from the functional system. Specifically, the function trigger information includes an Autonomous Emergency Braking (AEB) trigger, a Lane Keeping Assist (LKA) trigger, a Lane Departure Warning (LDW) trigger, and the like. The traffic flow information and the function trigger information together constitute the second scene feature.
Further, after the scene type is determined, the external control device triggers the corresponding functional system to generate a control instruction according to the scene type, and each domain controller generates corresponding function triggering information according to the control instruction.
Further, after acquiring the multiple sensor data, route planning may be performed in addition to perception fusion control. Specifically, during planning, the user's target location may first be acquired, a travel route then planned according to the target location, the road data and the obstacle data, planning information generated according to the travel route, and function trigger information finally generated based on the planning information. The travel route can be determined by running a planning control algorithm on the chip inside the apparatus; it may be an optimal travel route, and during travel the vehicle's actual route can be compared with the planned route to obtain a corresponding deviation value.
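A minimal Python sketch of the planned-versus-actual route comparison mentioned above; the point format and the nearest-point deviation metric are illustrative assumptions.

```python
# Illustrative sketch: deviation of the actual track from the planned route,
# measured as each actual point's distance to the nearest planned point.
import math

def route_deviation(planned, actual):
    """planned, actual: lists of (x, y) positions in meters."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # Maximum over the track of the distance to the closest planned point.
    return max(min(dist(a, p) for p in planned) for a in actual)

planned = [(0, 0), (10, 0), (20, 0)]
actual = [(0, 0.5), (10, 1.2), (20, 0.8)]
print(route_deviation(planned, actual))  # -> 1.2
```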
In this embodiment, complete sensor data is acquired by combining multiple sensors with different acquisition capabilities, which can satisfy the data acquisition requirements of single-sensor products, L2 products and higher-level products alike. Compared with a traditional Advanced Driver Assistance System (ADAS) algorithm, the algorithm module in the present application has higher perception reliability and can output more reliable traffic flow information and function trigger information. Furthermore, the multiple sensors involved in this embodiment can all be integrated in the same data acquisition device, solving the problem of insufficient sensors in low-level intelligent driving products and improving equipment universality.
In one embodiment, the determining scene characteristic information based on the scene data further comprises:
and obtaining a third scene characteristic based on the third party data, wherein the third scene characteristic comprises a road network characteristic and a weather characteristic. And determining road network information, and determining weather information based on the real-time weather data.
When processing the third-party data, road network information can first be determined based on the road condition data; specifically, the road condition data is obtained through a high-precision map and a navigation map. The high-precision map can acquire road condition data and perform relocation using data collected by sensors such as the lidar, which solves the problem of inaccurate positioning when the GPS signal is interfered with. Furthermore, the positioning information acquired by the high-precision map can also serve as the basis for the navigation system to acquire real-time road condition data. When determining road network information from road condition data, the data can be grouped according to the content it describes. Specifically, the road network information includes the road network, the lane network, safety auxiliary information and traffic facility information. The road network describes road types, such as urban expressways, ordinary urban roads, bridges and tunnels; the lane network describes lane attributes, such as lane boundary lines, lane center lines, reference priorities, traffic directions and traffic states; the safety auxiliary information describes road construction attributes, such as road inclination, curvature and heading; and the traffic facility information describes traffic signs and road facilities, such as road surface markings, traffic lights and guardrails. Further, weather information also needs to be determined based on the real-time weather data, where the weather information includes rain and snow, hail, fog, strong wind, and the like.
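The grouping of road condition data into the four road network categories above can be sketched as follows in Python; the record and field names are illustrative assumptions.

```python
# Illustrative sketch: group road condition records into the four
# road-network categories by the kind of content each record describes.
CATEGORY_BY_KIND = {
    "road_type": "road_network",          # expressway, bridge, tunnel, ...
    "lane_attribute": "lane_network",     # boundary line, center line, ...
    "construction": "safety_auxiliary",   # inclination, curvature, heading
    "facility": "traffic_facility",       # markings, traffic lights, rails
}

def build_road_network_info(records):
    """records: list of dicts like {"kind": "lane_attribute", "value": ...}"""
    info = {cat: [] for cat in set(CATEGORY_BY_KIND.values())}
    for rec in records:
        info[CATEGORY_BY_KIND[rec["kind"]]].append(rec["value"])
    return info
```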
In existing scene recognition techniques, data recording usually stores the road condition data acquired from the high-precision map directly, without specific classification, so developers have to classify the scene data further after receiving it, which greatly increases their workload. Furthermore, weather information is conventionally obtained by recognizing data acquired by sensors to determine the current weather, but in severe weather the accuracy of sensor data is low, so the obtained weather information cannot accurately describe the current scene.
In this embodiment, the real-time road condition query and real-time weather query functions provided by the high-precision map and the operator are combined, so that real-time weather information and road traffic information can be output, making up for the shortcomings of sensor data acquisition and solving the problem that weather and road condition data cannot be classified automatically. Complex scenes that are difficult to recognize can be classified and a complete classification system constructed, making it convenient for developers to screen target data and greatly improving research and development efficiency.
It can be understood that after the scene feature information identification work is completed, the identified scene types need to be filtered to obtain the scenes needed by product developers, and those scenes are then uploaded and stored.
Specifically, the scene feature information can be matched against preset recording conditions; if the matching succeeds, a scene record entry and a corresponding scene tag are generated according to the scene feature information and the scene type. If the matching fails, a scene record entry can be generated according to the scene information, but it is not recorded and no corresponding scene tag is generated.
The preset recording condition may be preset by developers during product design, or added through successive updates during product use. Optionally, a preset recording condition may be expressed as a single piece of scene feature information or a combination of several pieces, for example rain and snow, a school road section with dense obstacles, or rain and snow with water on the road surface. Taking rain-and-snow weather as an example of a preset recording condition: when the weather information in the scene feature information is detected to be rain and snow, the corresponding recording module in the background records the clustered and integrated scene feature information and the corresponding scene type.
For example, in one embodiment, the scene record entry may contain scene feature information and a scene type, and the scene tag may be the real-time recording time, used to mark the entry so that it can be searched by time during subsequent tracing. In another embodiment, the content of the entry may be scene feature information and recording time, with the scene tag generated from the scene type. In yet another embodiment, the entry may contain only scene feature information, with the scene tag generated from the scene type and the recording time. Optionally, in these embodiments the scene tag may further include the location information of the current scene, to facilitate subsequent data searches by location.
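A minimal Python sketch of the first variant above, in which the record entry holds the scene feature information and scene type and the tag is the recording time; the data layout is an illustrative assumption.

```python
# Illustrative sketch: filter by preset recording conditions and build a
# scene record entry tagged with the recording time (first variant above).
from datetime import datetime

PRESET_CONDITIONS = [{"weather": "rain_snow"}]  # e.g. rain-and-snow weather

def maybe_record(scene_features, scene_type):
    for cond in PRESET_CONDITIONS:
        if all(scene_features.get(k) == v for k, v in cond.items()):
            entry = {"features": scene_features, "type": scene_type}
            tag = datetime.now().strftime("%Y%m%d%H%M%S")  # time-based tag
            return entry, tag
    return None  # condition not met: nothing is recorded
```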
In this embodiment, the identified scene types are filtered through the preset recording conditions, so that only the scene types required by developers are recorded and retained, and the scene data corresponding to those types can be stored and uploaded. On the one hand this saves storage space in the recording module; on the other hand it improves transmission efficiency to a certain extent when the data is uploaded through the network interface.
Optionally, after the scene type is determined, the scene record entry needs to be fed back to the user and user feedback information collected, and the preset recording condition is updated according to the user feedback information and the scene record entry.
Specifically, the information can be fed back to the user by displaying the content of the scene record entry on the windshield in front of the user through Head-Up Display (HUD) technology, by voice broadcast, or by showing the entry on the vehicle display screen via Bluetooth transmission. Taking windshield display as an example, the information is projected by virtual imaging, and each item in the scene record entry is converted into a combination of images, numbers and characters and fed back to the driver, so that the user can intuitively grasp the current scene state.
Further, after receiving the feedback information, the user may compare it with the actual situation to decide whether to add, change or delete preset recording conditions. For example, when a preset recording condition is road congestion, the user may flag it as an unnecessary recording condition; when an emergency occurs, the user may feed back the emergency information by voice. Developers can then determine how to update the preset recording conditions according to the user feedback and the collected scene data. Optionally, the method and apparatus support remote updating: a user can query the state and update batch of the corresponding device through a web page from anywhere and choose whether to update the program; during a program update, the new program is sent to the device over the network to realize the remote upgrade, and after the upgrade succeeds the device, now on the latest version, is reported back to the user.
In this embodiment, the scene type is displayed intuitively to the user through the display module, which avoids the distraction caused by the user frequently lowering their head to look at a display while driving, improves the user's concentration during driving, and further ensures safe driving. Furthermore, by collecting user feedback and actual scene data, user needs and actual recording conditions become known, allowing developers to understand the full range of situations, update the product accordingly, and make it better fit users' actual needs and more user-friendly. Meanwhile, updating programs remotely over the network solves the problems that a vehicle far from the end point during data acquisition cannot be upgraded and that a change to an individual classification or screening strategy cannot be deployed, improving research-and-development and data acquisition efficiency.
The present embodiment is described and illustrated below by means of preferred embodiments.
Fig. 3 is a flowchart of the vehicle-mounted data processing method of this preferred embodiment. As shown in fig. 3, the vehicle-mounted data processing method includes the following steps:
step S301, collecting voice data input by a user;
step S302, voice data is identified, and a first scene characteristic is determined;
step S303, collecting sensor data through various sensors;
step S304, fusing, clustering and planning sensor data to determine traffic flow information and function triggering information;
step S305, determining road condition data and weather data through a high-precision map and an external operator;
step S306, determining road network information and weather information according to the road condition data and the weather data.
Step S307, fusing and clustering the first scene characteristics, traffic flow information, function triggering information, road network information and weather information, and identifying corresponding scene types;
and step S308, feeding back the current scene information to the user through a head-up display technology.
In the above steps, scene data during vehicle travel is acquired through multiple data acquisition channels, improving the diversity and comprehensiveness of the scene data. Feature extraction is then performed on the scene data to determine scene feature information, the scene type is determined based on that information, and the current scene information is finally fed back to the user through the head-up display module. Because the scene data is acquired through multiple channels, the situation in which a single channel cannot accurately describe the current scene type is avoided, and the user can immediately supplement or correct the result by voice input when a sudden event occurs, further improving the accuracy of scene type identification. Furthermore, by using voice interaction and head-up display technology to feed back results, the method of this embodiment avoids the distraction of head-down interaction and improves safety during driving.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated otherwise, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may comprise several sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application further provides a vehicle-mounted data processing apparatus for implementing the above vehicle-mounted data processing method. The implementation scheme provided by this apparatus is similar to that described for the method above, so for the specific limitations in the one or more apparatus embodiments below, reference may be made to the limitations on the vehicle-mounted data processing method above, which are not repeated here.
In one embodiment, as shown in fig. 4, a block diagram of an in-vehicle data processing apparatus is provided, including: an acquisition module 41, a processing module 42 and an identification module 43, wherein:
an obtaining module 41, configured to obtain scene data, where the scene data includes at least two of user input data, sensor data, and third-party data;
a processing module 42 for determining scene characteristic information based on the scene data.
And the identifying module 43 identifies the scene type based on the scene characteristic information.
The apparatus acquires multi-dimensional scene data through multiple channels, performs feature extraction on the multi-dimensional vehicle-mounted data to determine corresponding multi-dimensional scene feature information, and finally identifies the scene type based on that feature information. Combining sensors, external software and user voice input for data acquisition, on the one hand, enriches the diversity and comprehensiveness of the data, further improves the accuracy of scene type identification, broadens the application range of the apparatus, and allows more scene types to be recognized; on the other hand, it avoids the situation in which a fatigued driver fails to identify the scene promptly and accurately, improving driving safety.
Further, the obtaining module 41 is further configured to collect user input data through a voice input device; collecting sensor data by a plurality of sensors, wherein the sensor data includes road data and obstacle data; the method comprises the steps of determining the position of a vehicle through a global positioning system, and acquiring third-party data through a high-precision map and a network query interface based on the position of the vehicle, wherein the third-party data comprises road condition data and real-time weather data.
Further, the processing module 42 is further configured to convert the user input data into a voice text, and determine key information according to the voice text; matching the key information with a preset voice instruction library, and if the key information is successfully matched, determining a first scene characteristic according to the key information; and if the matching is unsuccessful, updating a preset voice instruction library through a cloud training model based on the key information, and generating a corresponding first scene characteristic.
Further, the processing module 42 is also configured to determine a type of the target object based on the sensor data; and fusing sensor data corresponding to the same type of target object to obtain a second scene characteristic.
Further, the processing module 42 is further configured to obtain road network information through the high-precision map, and obtain a third scene characteristic based on the third-party data, where the third scene characteristic includes a road network characteristic and a weather characteristic.
Further, the apparatus further includes a recording module 44, configured to match the scene feature information with a preset recording condition and, if the matching is successful, generate a scene record entry and a corresponding scene tag according to the scene feature information and the scene type.
Further, the apparatus further includes a display module 45, configured to feed back the scene record entry to the user and collect user feedback information, and to update the preset recording condition according to the user feedback information and the scene record entry.
Fig. 5 is a block diagram of a vehicle-mounted data processing apparatus according to a preferred embodiment of the present application, and as shown in fig. 5, the apparatus includes a voice input module, a voice recognition module, a sensor module, a network module, a perception fusion planning control module, a map parsing module, a scene recognition module, a scene recording module, a head-up display module, a remote upgrade module, and a storage module.
The modules in the above vehicle-mounted data processing apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or independent of, a processor in the data acquisition device, or stored in software form in a memory of the data acquisition device, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a data acquisition device is provided, which may be a terminal; its internal structure may be as shown in fig. 6. The data acquisition device comprises a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the data acquisition device provides computing and control capabilities. The memory of the data acquisition device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the data acquisition device is used for wired or wireless communication with an external terminal; wireless communication can be realized through Wi-Fi, a mobile cellular network, NFC (Near Field Communication) or other technologies. The computer program, when executed by the processor, implements a vehicle-mounted data processing method. The display screen of the data acquisition device may be a liquid crystal display or an electronic ink display, and the input device may be a voice input device, such as a microphone, a Bluetooth headset or a wired headset, or a sensor device and a network communication interface.
It will be appreciated by those skilled in the art that the configuration shown in fig. 6 is a block diagram of only a portion of the configuration associated with the present application, and does not constitute a limitation on the data acquisition device to which the present application is applied, and a particular data acquisition device may include more or less components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a data acquisition device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data;
determining scene characteristic information based on the scene data;
identifying a scene type based on the scene feature information.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data;
determining scene characteristic information based on the scene data;
identifying a scene type based on the scene feature information.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include a Read-Only Memory (ROM), a magnetic tape, a floppy disk, a flash Memory, an optical Memory, a high-density embedded nonvolatile Memory, a resistive Random Access Memory (ReRAM), a Magnetic Random Access Memory (MRAM), a Ferroelectric Random Access Memory (FRAM), a Phase Change Memory (PCM), a graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (10)

1. A vehicle-mounted data processing method is characterized by comprising the following steps:
acquiring scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data;
determining scene feature information based on the scene data;
identifying a scene type based on the scene feature information.
2. The method of claim 1, wherein the scene data comprises user input data, sensor data and third-party data, and wherein acquiring the scene data comprises:
collecting user input data through a voice input device;
collecting sensor data by a plurality of sensors, wherein the sensor data includes road data and obstacle data;
the method comprises the steps of determining the position of a vehicle through a global positioning system, and acquiring third-party data through a high-precision map and a network query interface based on the position of the vehicle, wherein the third-party data comprises road condition data and real-time weather data.
3. The method of claim 1, wherein the determining scene feature information based on the scene data comprises:
converting the user input data into a voice text, and determining key information according to the voice text;
matching the key information with a preset voice instruction library, and if the key information is successfully matched, determining first scene characteristics according to the key information;
and if the matching is unsuccessful, updating a preset voice instruction library through a cloud training model based on the key information, and generating a corresponding first scene characteristic.
4. The method of claim 1, wherein determining scene characteristic information based on the scene data further comprises:
determining a target type based on the sensor data;
and fusing sensor data corresponding to the same type of target object to obtain a second scene characteristic.
5. The method of claim 1, wherein determining scene characteristic information based on the scene data further comprises:
and obtaining a third scene characteristic based on the third party data, wherein the third scene characteristic comprises a road network characteristic and a weather characteristic.
6. The method of claim 1, wherein after determining the scene type, the method further comprises:
and matching the scene characteristic information with a preset recording condition, and if the matching is successful, generating a scene recording strip and a corresponding scene label according to the scene characteristic information and the scene type.
7. The method of claim 6, further comprising:
feeding back the scene record entry to a user and collecting user feedback information;
and updating the preset recording condition according to the user feedback information and the scene record entry.
8. An in-vehicle data processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire scene data, wherein the scene data comprises at least two of user input data, sensor data and third-party data;
a processing module for determining scene characteristic information based on the scene data;
and the identification module is used for identifying the scene type based on the scene characteristic information.
9. A data acquisition device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210343165.3A 2022-04-02 2022-04-02 Vehicle-mounted data processing method and device, data acquisition equipment and storage medium Pending CN114722931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210343165.3A CN114722931A (en) 2022-04-02 2022-04-02 Vehicle-mounted data processing method and device, data acquisition equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210343165.3A CN114722931A (en) 2022-04-02 2022-04-02 Vehicle-mounted data processing method and device, data acquisition equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114722931A true CN114722931A (en) 2022-07-08

Family

ID=82242164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210343165.3A Pending CN114722931A (en) 2022-04-02 2022-04-02 Vehicle-mounted data processing method and device, data acquisition equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114722931A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593892A (en) * 2024-01-19 2024-02-23 福思(杭州)智能科技有限公司 Method and device for acquiring true value data, storage medium and electronic equipment
CN117593892B (en) * 2024-01-19 2024-04-09 福思(杭州)智能科技有限公司 Method and device for acquiring true value data, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
US11860639B2 (en) Vehicle-based road obstacle identification system
US10990105B2 (en) Vehicle-based virtual stop and yield line detection
US20200341478A1 (en) Dynamic Routing For Autonomous Vehicles
US11023745B2 (en) System for automated lane marking
US11373524B2 (en) On-board vehicle stop cause determination system
US20200211370A1 (en) Map editing using vehicle-provided data
CN109641589B (en) Route planning for autonomous vehicles
US20200210769A1 (en) Using image pre-processing to generate a machine learning model
US20200210696A1 (en) Image pre-processing in a lane marking determination system
US20170241791A1 (en) Risk Maps
US10699141B2 (en) Phrase recognition model for autonomous vehicles
WO2020139355A1 (en) System for automated lane marking
US20160210383A1 (en) Virtual autonomous response testbed
KR20200011593A (en) Sparse map for autonomous vehicle navigation
US11741692B1 (en) Prediction error scenario mining for machine learning models
US20200208991A1 (en) Vehicle-provided virtual stop and yield line clustering
CN111845771A (en) Data collection automation system
US11620987B2 (en) Generation of training data for verbal harassment detection
CN113748448B (en) Vehicle-based virtual stop-line and yield-line detection
US20230252084A1 (en) Vehicle scenario mining for machine learning models
WO2020139356A1 (en) Image pre-processing in a lane marking determination system
WO2021138319A1 (en) Training mechanism of verbal harassment detection systems
WO2020139392A1 (en) Vehicle-based road obstacle identification system
CN114722931A (en) Vehicle-mounted data processing method and device, data acquisition equipment and storage medium
US20230360375A1 (en) Prediction error scenario mining for machine learning models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination