CN114834453A - Data processing method, data processing device, vehicle, equipment and storage medium - Google Patents


Info

Publication number
CN114834453A
Authority
CN
China
Prior art keywords: vehicle, data, type, end data, vehicle end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210358880.4A
Other languages
Chinese (zh)
Inventor
张衡
刘颖楠
马耀昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd, Apollo Zhixing Technology Guangzhou Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202210358880.4A priority Critical patent/CN114834453A/en
Publication of CN114834453A publication Critical patent/CN114834453A/en
Pending legal-status Critical Current


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a data processing method, apparatus, vehicle, device, and storage medium, relating to the field of artificial intelligence and, in particular, to big data, autonomous driving, intelligent cockpits, intelligent transportation, and related technical fields. The data processing method includes: obtaining vehicle-end data, where the vehicle-end data includes type information of the vehicle-end data and data content of the vehicle-end data; generating, based on the data content, a visualization object in a representation form corresponding to the type information; and displaying the visualization object, where the displayed visualization object is used for performing data analysis on the vehicle-end data. The present disclosure can improve the intuitiveness of data analysis.

Description

Data processing method, data processing device, vehicle, equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence and, in particular, to big data, autonomous driving, intelligent cockpit, and intelligent transportation technologies, and more specifically to a data processing method, apparatus, vehicle, device, and storage medium.
Background
An autonomous vehicle (self-driving vehicle), also known as a driverless vehicle, computer-driven vehicle, or wheeled mobile robot, is an intelligent vehicle that achieves unmanned driving through a computer system.
While an autonomous vehicle is driving, components throughout the vehicle, such as its sensors, generate massive amounts of data.
To improve the performance of the autonomous vehicle, this massive data needs to be analyzed.
Disclosure of Invention
The present disclosure provides a data processing method, apparatus, vehicle, device, and storage medium.
According to an aspect of the present disclosure, there is provided a data processing method, including: obtaining vehicle-end data, where the vehicle-end data includes type information of the vehicle-end data and data content of the vehicle-end data; generating, based on the data content, a visualization object in a representation form corresponding to the type information; and displaying the visualization object, where the displayed visualization object is used for performing data analysis on the vehicle-end data.
According to another aspect of the present disclosure, there is provided a data processing apparatus, including: an acquisition module configured to obtain vehicle-end data, where the vehicle-end data includes type information of the vehicle-end data and data content of the vehicle-end data; a generation module configured to generate, based on the data content, a visualization object in a representation form corresponding to the type information; and a display module configured to display the visualization object, where the displayed visualization object is used for performing data analysis on the vehicle-end data.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided an autonomous vehicle including: an electronic device as claimed in any one of the preceding aspects.
According to the technical solution of the present disclosure, the intuitiveness of data analysis can be improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an application scenario for implementing a data processing method of an embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a download interface in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a display interface in accordance with an embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an electronic device for implementing a data processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
An autonomous vehicle involves many modules during driving, and the volume of data generated is huge. If the massive data generated at the vehicle end is displayed directly and the user performs data analysis on that raw display, the poor intuitiveness leads to problems such as a heavy analysis workload.
To analyze the massive data generated at the vehicle end more intuitively, the present disclosure provides the following embodiments.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure, which provides a data processing method, including:
101. Obtain vehicle-end data, where the vehicle-end data includes: type information of the vehicle-end data and data content of the vehicle-end data.
102. Generate, based on the data content, a visualization object in a representation form corresponding to the type information.
103. Display the visualization object, where the displayed visualization object is used for performing data analysis on the vehicle-end data.
The data processing method of the embodiment can be applied to a scene of data analysis of vehicle-end data.
The data processing method of this embodiment can be applied on a display end, such as a Personal Computer (PC), a notebook computer, or a mobile terminal; mobile terminals include cell phones, tablet computers, wearable devices, and the like.
The vehicle-end data refers to data generated by a vehicle end, for example, mass data generated by an automatic driving vehicle in a driving process.
While the vehicle is driving, the data it generates can be recorded in real time in a storage medium at the vehicle end (such as a hard disk), so the vehicle-end data can later be acquired from that storage medium.
The vehicle-side data may include type information and data content.
The type information is used to indicate the type of the vehicle-end data, and may specifically be an ID, a keyword, or the like. For example, one type of vehicle-end data is indicated by ID 0 and another by ID 1; alternatively, one type is represented by the keyword pnc and another by the keyword perc.
The data content refers to specific values of data, for example, if a certain vehicle end data is the running speed of the vehicle, the data content may be 3 (km/h).
Visualization refers to the theory, methods, and techniques of using computer graphics and image processing to convert data into charts, three-dimensional model animations, videos, and the like, display them on a screen, and support interactive processing.
The visualization object is the intuitive object into which the data is converted, such as the charts, three-dimensional model animations, and videos described above.
Visualization objects can take different representation forms; for example, charts, three-dimensional model animations, and videos correspond to three different representation forms.
The type information of the vehicle-end data has a corresponding relation with the expression form of the visual object.
The correspondence between the type of vehicle-end data and the representation form of the visualization object may be configured according to actual requirements. For example, the first type of vehicle-end data generates a visualization object in a first representation form, the second type generates one in a second representation form, and the third type generates one in a third representation form. The visualization object of the first representation form is, for example, a three-dimensional model animation; of the second, a chart; of the third, a video.
The data specifically included in the various types of vehicle-end data can also be configured according to actual requirements.
Therefore, after the display end acquires the vehicle-end data, it can generate a visualization object in the corresponding representation form based on the type information included in the vehicle-end data, for example one or more of a three-dimensional model animation, a chart, and a video, and then display the visualization object on a display interface for the user to view.
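The flow of steps 101 to 103 can be sketched as a small lookup-and-dispatch routine. This is only an illustrative sketch; the names `VIS_FORM_BY_TYPE` and `build_visualization`, and the use of keyword strings as type information, are assumptions made here and are not part of the disclosure.

```python
# Hypothetical correspondence table: type keyword -> representation form.
# The disclosure only states that such a correspondence is configurable.
VIS_FORM_BY_TYPE = {
    "default": "3d_animation",  # e.g. vehicle pose, map, obstacles
    "pnc": "chart",             # planning-and-control curves
    "perc": "video",            # perception streams
}

def build_visualization(vehicle_end_data: dict) -> dict:
    """Generate a visualization object whose form matches the type info."""
    form = VIS_FORM_BY_TYPE[vehicle_end_data["type"]]
    return {"form": form, "content": vehicle_end_data["content"]}

sample = {"type": "pnc", "content": {"speed_kmh": [3, 5, 8]}}
vis = build_visualization(sample)
# vis["form"] == "chart"
```

The key point the sketch shows is that the data content is never displayed raw: the type information selects the representation form first, and the content is then rendered in that form.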
The user can analyze the vehicle-end data through the visual object. For example, the visualized object is a graph including a relationship curve between the traveling speed of the vehicle and time, and the user can know the change of the traveling speed of the vehicle based on the relationship curve, and further can analyze whether the vehicle is traveling normally or not based on the change of the traveling speed of the vehicle.
In this embodiment, the visualization object is generated based on the data content of the vehicle-end data and then displayed, and the displayed visualization object is used for data analysis of the vehicle-end data, which improves the intuitiveness of the analysis. In addition, because the representation form of the visualization object is determined by the type information of the vehicle-end data, a visualization object in a form well matched to the vehicle-end data can be generated, improving the display effect.
To better understand the embodiments of the present disclosure, the data processing method is explained below using an autonomous-driving scenario as an example.
As shown in fig. 2, the devices involved in the autonomous-driving scenario include an autonomous vehicle 201 and a server 202, which may be connected by a communication network; the network may be wireless, for example communicating via signals provided by a base station. In addition, a satellite (not shown) may be included in the scenario to achieve accurate positioning of the autonomous vehicle. The server 202 may be a local server of the vehicle enterprise corresponding to the autonomous vehicle, or a cloud server.
The autonomous vehicle 201 may perform relevant operations based on the control of the server 202 during autonomous driving.
The autopilot function of an autonomous vehicle may be implemented by an automatic driving system. The level of automation achieved by such a system is currently classified into levels L0 to L5. L0 means no automation, i.e., traditional manual driving. L1, also called assisted driving, includes basic functions such as cruise control, automatic parking, and lane keeping. L2, also called semi-automatic driving, adds functions such as automatic driving assistance and hazard-anticipating braking. L3, also called conditional automation, achieves fully automatic driving on normal road sections, but in some emergencies manual braking assistance is still needed. L4 is high automation: the vehicle's overall braking performance and reaction capability reach a level where the driver need not operate the vehicle at all while it drives smoothly. L5 achieves unconditional fully automatic driving, with no concern for road conditions or weather under any circumstances.
Further, as shown in fig. 2, the automatic driving system may include: perception system 2011, decision system 2012, and execution system 2013.
The systems included in an automatic driving system may also be referred to as subsystems, modules, components, or units. These subsystems may be software, hardware, or a combination of the two. Some or all of them may be deployed on the autonomous vehicle and/or on a server.
The perception system 2011 is configured to obtain perception data, which may include obstacle data, the vehicle's own data, and the like. Obstacle data includes, for example, the position and speed of each obstacle; the vehicle's own data includes, for example, the position and speed of the vehicle itself.
The perception system 2011 may include sensors mounted on the vehicle, or sensors plus a computing platform. Where a sensor has computing capability, it can compute the obstacle data, the vehicle's own data, and so on from the data it collects; where it does not, the computing platform processes the collected sensor data to obtain the obstacle data and the vehicle's own data.
The sensors may include cameras, lidar, millimeter-wave radar, ultrasonic radar, Global Positioning System (GPS) receivers, Inertial Measurement Units (IMU), and the like.
The computing platform may be a vehicle-mounted computing chip, or the vehicle may send the acquired sensing data to the server, where a computing platform performs the related computation.
The decision system 2012 is configured to make decisions based on the perception data obtained by the perception system 2011, producing decision data. Decisions may include path planning, behavior planning, trajectory planning, and the like; accordingly, the decision data may include the decided driving behavior data, path data, trajectory data, and so on.
The execution system 2013, which may also be called a control system, may specifically include the vehicle's chassis system. It controls the driving behavior of the vehicle based on the decision data obtained by the decision system 2012, for example by controlling the steering, throttle, and brake devices so that the vehicle travels according to the decided driving behavior, path, and trajectory.
In the embodiment of the present disclosure, to perform data analysis more intuitively, the scenario shown in fig. 2 further includes a display end 203, which may specifically be a PC, a notebook computer, a mobile terminal, or the like. The autonomous vehicle may establish a communication connection with the display end 203. Taking a PC as the display end, the autonomous vehicle and the PC may join the same LAN, so that the PC can obtain vehicle-end data from the autonomous vehicle, generate a visualization object based on that data, and display it.
In combination with the above scenario example, the present disclosure also provides a data processing method.
Fig. 3 is a schematic diagram of a second embodiment of the present disclosure, which provides a data processing method. This embodiment takes the interaction between an autonomous vehicle and a PC as an example. The method includes:
301. The autonomous vehicle generates vehicle-end data and records it.
While driving, the autonomous vehicle can record the data it generates in real time in an on-board storage medium, such as a hard disk. The generated data may be referred to as vehicle-end data.
Further, the vehicle-end data may be recorded in the form of logs; that is, each piece of vehicle-end data may also carry its time information.
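A timestamped log record might look like the following minimal sketch. The field names (`ts`, `type`, `content`) and the use of JSON lines are assumptions for illustration; the disclosure only states that each piece of vehicle-end data carries time information alongside its type and content.

```python
import json
import time
from typing import Optional

def make_record(type_kw: str, content: dict, ts: Optional[float] = None) -> str:
    """Serialize one vehicle-end data item as a timestamped log line."""
    record = {
        "ts": ts if ts is not None else time.time(),  # time information
        "type": type_kw,                              # type information
        "content": content,                           # data content
    }
    return json.dumps(record)

line = make_record("pnc", {"speed_kmh": 3}, ts=1650000000.0)
parsed = json.loads(line)
# parsed carries all three parts: timestamp, type, and content
```

Carrying the timestamp in every record is what makes the date-based slicing described below possible.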
When the user needs to analyze the vehicle-end data, the following process can be executed:
302. The PC displays a download interface to the user; the download interface includes operable items for receiving selection information input by the user.
In the related art, when the PC needs vehicle-end data, the data on the hard disk is copied to the PC wholesale; with this copying approach, data cannot be acquired on demand, and because the volume of data on the hard disk is large, the whole copy takes a long time to transfer.
In this embodiment, the download interface may be displayed to the user, and the user may select data as desired based on the download interface.
As shown in FIG. 4, the operable items within download interface 400 can include an operable item 401 corresponding to type and an operable item 402 corresponding to date.
Accordingly, the selection information may be a type or a date.
The operable item corresponding to type is where the user inputs the type of vehicle-end data to be acquired. Specific types may include: default, perception (perc), planning and control (pnc), and so on. The default type may correspond to the three-dimensional model animation described later; the perception and planning-and-control types may correspond to the charts described later.
The operable item corresponding to date is where the user inputs the date of the vehicle-end data to be acquired; the date may include a start date and an end date. Because data within a given time period can be selected, this can be regarded as slicing the vehicle-end data and acquiring the sliced data the user needs.
It will be appreciated that the input described above may be free entry (when selectable content is not provided) or selection (when it is provided), etc.
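The data slicing described above, selecting only records that match the chosen type and fall within the start/end dates, can be sketched as a simple filter. The record layout here is hypothetical; only the type-plus-date-range selection comes from the disclosure.

```python
from datetime import date

# Hypothetical recorded vehicle-end data, one dict per record
records = [
    {"date": date(2022, 4, 1), "type": "pnc", "content": {}},
    {"date": date(2022, 4, 3), "type": "perc", "content": {}},
    {"date": date(2022, 4, 5), "type": "pnc", "content": {}},
]

def slice_records(records, type_kw, start, end):
    """Return the slice of vehicle-end data matching type and [start, end]."""
    return [r for r in records
            if r["type"] == type_kw and start <= r["date"] <= end]

selected = slice_records(records, "pnc", date(2022, 4, 1), date(2022, 4, 4))
# only the pnc record dated 2022-04-01 falls inside the range
```

Compared with copying the whole hard disk, only the matching slice is transferred, which is the on-demand download the embodiment aims at.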
303. In response to the selection information input in the operable items, the PC sends a download instruction to the vehicle end; the instruction carries the selection information and triggers the vehicle end to acquire the vehicle-end data corresponding to it.
After the user inputs the type and date of the desired vehicle-end data in the download interface, the PC sends the vehicle end a download instruction containing that type and date.
The autonomous vehicle and the PC can join the same local area network, establish a Hyper Text Transfer Protocol (HTTP) connection, and interact over it; that is, the PC may send the download instruction to the autonomous vehicle over the HTTP connection. The HTTP connection also allows data to be downloaded asynchronously.
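One plausible shape for such a download instruction is an HTTP request whose query string carries the type and dates. The endpoint path `/download`, the parameter names, and the host address below are assumptions made for illustration; the disclosure only states that the instruction travels over HTTP and contains the selection information.

```python
import urllib.parse

def build_download_url(vehicle_host: str, type_kw: str,
                       start_date: str, end_date: str) -> str:
    """Assemble a hypothetical download-instruction URL for the vehicle end."""
    query = urllib.parse.urlencode({
        "type": type_kw,     # type selected in operable item 401
        "start": start_date, # date range selected in operable item 402
        "end": end_date,
    })
    return f"http://{vehicle_host}/download?{query}"

url = build_download_url("192.168.1.10:8080", "pnc",
                         "2022-04-01", "2022-04-04")
```

The PC would issue a GET against this URL (asynchronously, since downloads can be slow) and the vehicle end would answer with the matching compressed slice.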
In addition, the connection between the vehicle end and the display end may be established online in real time or offline: an online connection means the vehicle end is connected while driving, and an offline connection means it is connected while stationary.
In the above example, the download interface is displayed to the user, the user's selection information is acquired through it, and the vehicle-end data the user needs is then selected based on that information; vehicle-end data can thus be downloaded on demand, improving the efficiency of data transmission.
304. In response to the download instruction, the autonomous vehicle acquires the vehicle-end data corresponding to the selection information.
After receiving the download instruction, the autonomous vehicle may extract the vehicle-end data corresponding to the selection information, for example the data within a certain time period, from the massive data stored on the hard disk.
305. The autonomous vehicle compresses the vehicle-end data corresponding to the selection information to obtain compressed data.
gRPC, an open-source Remote Procedure Call (RPC) framework, may be used to transfer the data in compressed form.
Because compression reduces the data volume, compressing the data reduces the amount transferred between the vehicle end and the display end, improving transmission efficiency and reducing transmission cost.
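The size reduction is easy to demonstrate: log-style vehicle-end records are highly repetitive, so they compress well. Plain gzip is used below purely to illustrate the effect; the disclosure names gRPC as the transport framework, and the record layout is an assumption.

```python
import gzip
import json

# 1000 repetitive vehicle-end records, as log data tends to be
records = [{"ts": 1650000000 + i, "type": "pnc", "speed_kmh": 3}
           for i in range(1000)]
raw = json.dumps(records).encode("utf-8")

compressed = gzip.compress(raw)            # what the vehicle end would send
restored = json.loads(gzip.decompress(compressed))  # what the PC recovers

# compressed is far smaller than raw, and the round trip is lossless
assert len(compressed) < len(raw)
assert restored[0]["type"] == "pnc"
```

In step 307 below, the PC performs the matching decompression before generating the visualization objects.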
306. The autonomous vehicle sends the compressed data to the PC.
The autonomous vehicle may send the compressed data to the PC over their HTTP connection.
In addition, referring to fig. 4, download progress information, such as the download status shown in fig. 4, may be displayed during downloading, and a list of downloadable data may be provided for the user to view and download conveniently.
The vehicle end can send the sliced data to the display end, or to the cloud, where data analysis, visual display, and other operations are performed.
307. The PC decompresses the compressed data to obtain the vehicle-end data corresponding to the selection information, which includes: type information and data content.
308. The PC generates, based on the data content, a visualization object in the representation form corresponding to the type information.
The vehicle-end data acquired by the PC can be of various types, and different types can correspond to different representation forms.
309. The PC displays the visualization object; the displayed visualization object is used for performing data analysis on the vehicle-end data.
For different types of vehicle-end data, visualization objects in different representation forms can be generated and displayed.
The visualization objects in different representation forms may specifically include: three-dimensional model animations, charts, and videos.
The first type of vehicle-end data can generate a three-dimensional model animation, the second type a chart, and the third type a video. Specifically, the first type may be the default type, the second the pnc type, and the third the perc type.
In the above example, different types of vehicle-end data generate visualization objects in different representation forms, so a better-matched visualization object can be displayed according to the type of the vehicle-end data, improving display performance. In addition, three-dimensional model animations, charts, and videos combine dynamic and static display effects, further improving the display effect.
Illustratively, the first type of vehicle-end data may include one or more of vehicle travel data, high-precision map data, and obstacle data.
The vehicle travel data can include the vehicle's current position and planned travel route, so that the vehicle's current and future motion trajectories can be accurately displayed through visualization.
The high-precision map data may include data required for driving, such as lane lines, sidewalks, and road boundaries, so that the relevant information of the high-precision map can be expressed in detail through visualization.
The obstacle data may include the category of each obstacle, such as pedestrian or vehicle, so that obstacle categories can be accurately displayed through visualization.
For the first type of vehicle-end data, a rendering container may be used to generate a three-dimensional model animation from the corresponding data content; the frame rate of the animation may be a preset value, for example 40 fps. The rendering container may be a canvas element of the browser. Generating the three-dimensional model animation from the data content can be implemented with existing techniques.
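The preset frame rate fixes the render cadence: at 40 fps each frame covers 25 ms of the recorded drive. The scheduling below is only a sketch of that arithmetic; the disclosure fixes the frame rate, not the render loop, and the function name is an assumption.

```python
FPS = 40                      # preset frame rate from the description
FRAME_INTERVAL_S = 1.0 / FPS  # 0.025 s of recorded time per rendered frame

def frames_for_duration(seconds: float) -> int:
    """How many animation frames cover a data slice of the given length."""
    return int(seconds * FPS)

# a 2-second slice of driving data yields 80 animation frames
n = frames_for_duration(2.0)
```

In a browser-canvas implementation the same interval would drive the render timer, advancing the vehicle, obstacle, and map models by one recorded step per tick.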
As shown in fig. 5, a three-dimensional model animation 501 may be generated based on the first type of vehicle-end data; it intuitively reflects the autonomous vehicle, the obstacles, and the high-precision map information, so the user can perform data analysis based on it.
Further, the display viewing angle of the three-dimensional model animation can be adjusted in response to a viewing-angle adjustment instruction.
The rendering container can provide a viewing-angle adjustment function, including a normal view, a top-down view, a near-end view, a map view, and the like, to meet developers' (also called users') need to observe from different angles.
In the above example, adjusting the display viewing angle of the three-dimensional model animation lets the user conveniently view it from each angle and thus perform data analysis better.
Illustratively, the second type of end-of-vehicle data may include: vehicle operating condition data.
Because an autonomous vehicle involves many modules, and correspondingly many kinds of operating condition data, a selection function may be provided so that the user can choose which operating condition data to process as needed.
The vehicle operating condition data refer to real-time parameters of the modules in the vehicle. For example, the vehicle operating condition data may include one or more of the following:
vehicle base parameters, including: the current vehicle state (autonomous driving or manual driving), the current steering wheel angle, the current throttle or brake percentage, the current vehicle position, the current vehicle speed, acceleration, speed limit, and the like. These may be visually displayed in a numerical panel;
control module parameters, including: driving mode, gear, lateral distance to the reference line, lateral change rate, and the like, expressed in the form of real-time curves;
planning module parameters, including: the speed, acceleration, and curvature output by the planning module, expressed in the form of real-time curves;
chassis module parameters, including: the chassis speed, acceleration, throttle fed back by the chassis, and the like, used to verify that the chassis executes commands normally, expressed in the form of real-time curves. The speed curve intuitively reflects the running state of the vehicle; the acceleration reflects whether the vehicle is braking, and the ride feel of the vehicle can be judged from the acceleration.
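The grouping of operating condition parameters above, together with the chart form used for each group, can be sketched as a small schema. All names below, including the `params_for_selection` helper illustrating the user selection function, are hypothetical and not specified by the disclosure.

```python
# Hypothetical grouping of vehicle operating-condition parameters by module,
# with the chart form used to visualize each group.
CONDITION_GROUPS = {
    "vehicle_base": {
        "chart": "numerical_panel",
        "params": ["driving_state", "steering_wheel_angle",
                   "throttle_or_brake_pct", "position", "speed",
                   "acceleration", "speed_limit"],
    },
    "control": {
        "chart": "realtime_curve",
        "params": ["driving_mode", "gear", "lateral_offset",
                   "lateral_change_rate"],
    },
    "planning": {
        "chart": "realtime_curve",
        "params": ["planned_speed", "planned_acceleration", "curvature"],
    },
    "chassis": {
        "chart": "realtime_curve",
        "params": ["chassis_speed", "chassis_acceleration",
                   "throttle_feedback"],
    },
}

def params_for_selection(selected_modules):
    """Return the parameters for the modules a user selected."""
    return {m: CONDITION_GROUPS[m]["params"] for m in selected_modules}
```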
For the second type of vehicle-end data, a rendering container may also be used, and a corresponding chart (such as a numerical panel or a curve) may be generated based on the corresponding data content. Generating a chart from data content may be implemented using existing techniques.
As shown in FIG. 5, a graph 502 may be generated based on the second type of end-of-vehicle data, where the graph 502 intuitively reflects various operating conditions of the autonomous vehicle for a user to perform data analysis based on the graph.
It should be noted that, since the specific parameters and their values are only examples, some of them are blurred in fig. 5, which does not affect understanding of the embodiments of the present disclosure.
Illustratively, the third type of end-of-vehicle data may include: video data.
A camera may be mounted on the autonomous vehicle to capture video data of the environment around the vehicle.
As shown in fig. 5, a video 503 may be generated based on the third type of end-of-vehicle data, and the video 503 visually reflects the surroundings of the autonomous vehicle so that the user can perform data analysis based on the video.
Since the video data is already available, the corresponding video may be displayed by playing it directly.
In some embodiments, the vehicle-end data are multiple types of vehicle-end data, and the multiple types of vehicle-end data correspond to visual objects in multiple expression forms; the displaying the visualization object includes: and respectively displaying the visual objects of the multiple expression forms in different areas of the same display interface.
For example, referring to FIG. 5, a three-dimensional model animation 501, a chart 502, and a video 503 may be displayed in different areas of the display interface 500.
In the above example, by displaying the visual objects in different expression forms in different areas of the same display interface, the user can compare the visual objects in multiple expression forms conveniently, so as to perform data analysis better.
Further, the visualization objects in the three representation forms may be associated through timestamps, so that the three-dimensional model animation, the chart containing operating condition data, and the video are displayed synchronously. In addition, if there are multiple cameras, the videos collected by different cameras can be switched, enabling free switching among the videos of all cameras. Moreover, the position of an obstacle may be identified in the video by means such as visual analysis, and the obstacle information in the video frame may be annotated in real time with bounding boxes of different colors.
In addition, in the offline mode, besides the above visualization data, an adjustable playback speed and a time-point jump function may be provided, so that the three-dimensional model animation and the video can jump to the time point of an anomaly, allowing the cause of the anomaly at that moment to be analyzed in detail.
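A minimal sketch of this offline playback state — adjustable speed plus jumping to a time point — might look like the following; the class and method names are hypothetical.

```python
class OfflinePlayer:
    """Minimal offline playback state: adjustable speed and time-point jumps.

    start and end are timestamps (in seconds) bounding the recorded
    vehicle-end data.
    """

    def __init__(self, start, end, speed=1.0):
        self.start, self.end = start, end
        self.speed = speed
        self.position = start  # current playback time within the record

    def set_speed(self, speed):
        if speed <= 0:
            raise ValueError("playback speed must be positive")
        self.speed = speed

    def jump_to(self, t):
        """Jump directly to a time point, e.g. a known anomaly timestamp."""
        self.position = min(max(t, self.start), self.end)

    def advance(self, wall_clock_dt):
        """Advance playback by wall_clock_dt seconds of real time."""
        self.position = min(self.position + wall_clock_dt * self.speed,
                            self.end)
```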
Fig. 6 is a schematic diagram according to a third embodiment of the present disclosure, which provides a data processing apparatus. As shown in fig. 6, the apparatus 600 includes: an acquisition module 601, a generation module 602 and a display module 603.
The obtaining module 601 is used for obtaining vehicle end data, and the vehicle end data includes: the type information of the vehicle-end data and the data content of the vehicle-end data; the generating module 602 is configured to generate a visualization object of a representation form corresponding to the type information based on the data content; the display module 603 is configured to display the visual object, where the displayed visual object is used to perform data analysis on the vehicle-end data.
In this embodiment, the visualized object is generated based on the data content of the vehicle-side data, and is displayed, and the visualized object is used for performing data analysis on the vehicle-side data, so that the intuitiveness of the data analysis can be improved. In addition, the expression form of the visual object is determined based on the type information of the vehicle-end data, the visual object which is more matched with the vehicle-end data and is in a proper expression form can be generated, and the display effect is improved.
In some embodiments, the obtaining module 601 is further configured to: displaying a download interface to a user, wherein the download interface comprises an operable item which is used for receiving selection information input by the user; responding to selection information input by the user in the operable item, and sending a downloading instruction to a vehicle end, wherein the downloading instruction comprises the selection information and is used for triggering the vehicle end to acquire vehicle end data corresponding to the selection information; and receiving vehicle end data corresponding to the selection information sent by the vehicle end.
In the above example, the download interface is displayed for the user, the selection information input by the user can be acquired through the download interface, and then the vehicle-side data required by the user can be selected based on the selection information, so that the vehicle-side data can be downloaded as required, and the efficiency of data transmission can be improved.
In some embodiments, the obtaining module 601 is further configured to: receiving compressed data sent by a vehicle end, wherein the compressed data is obtained after the vehicle end data is compressed; and decompressing the compressed data to obtain the vehicle-end data.
Because data compression reduces the data volume, compressing the data can reduce the amount of data transmitted between the vehicle end and the display end, improving transmission efficiency and reducing transmission cost.
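A round trip of this compress-then-decompress exchange might be sketched with standard zlib compression. The JSON serialization and function names are assumptions; the disclosure does not specify a compression algorithm.

```python
import json
import zlib

def compress_vehicle_data(records):
    """Vehicle end: serialize and compress vehicle-end data before sending."""
    raw = json.dumps(records).encode("utf-8")
    return zlib.compress(raw, level=6)

def decompress_vehicle_data(payload):
    """Display end: decompress a received payload back into vehicle-end data."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))
```

For repetitive telemetry (many near-identical records), the compressed payload is typically much smaller than the raw serialization.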
In some embodiments, the generating module 602 is further configured to: if the type information is of a first type, generating a three-dimensional model animation based on the data content of the vehicle-end data of the first type; if the type information is of a second type, generating a chart based on the data content of the vehicle-end data of the second type; or if the type information is a third type, generating a video based on the data content of the vehicle-end data of the third type; wherein the first type, the second type, and the third type are different.
In the above example, the visualization objects in different representation forms can be generated by different types of vehicle-end data, so that the visualization objects with more matched effects can be displayed according to the types of the vehicle-end data, and the display performance is improved. In addition, the three-dimensional model animation, the graph, the video and the like can realize the combination of dynamic display effect and static display effect, and further improve the display effect.
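The type-based branching above can be sketched as a small dispatch table; the type tags and renderer stubs below are hypothetical placeholders (the disclosure only requires that the three types differ).

```python
# Hypothetical type tags mapped to renderer stubs for each representation form.
RENDERERS = {
    "first_type": lambda content: f"3d-animation({len(content)} frames)",
    "second_type": lambda content: f"chart({len(content)} points)",
    "third_type": lambda content: f"video({len(content)} frames)",
}

def generate_visualization(type_info, data_content):
    """Pick the representation form that matches the vehicle-end data type."""
    try:
        return RENDERERS[type_info](data_content)
    except KeyError:
        raise ValueError(f"unknown vehicle-end data type: {type_info!r}")
```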
In some embodiments, if the visualization object is the three-dimensional model animation, the apparatus further comprises: and the adjusting module is used for responding to the visual angle adjusting instruction of the three-dimensional model animation and adjusting the display visual angle of the three-dimensional model animation.
In the above example, by adjusting the display view angle of the three-dimensional model animation, the user can conveniently view the three-dimensional model animation at each view angle, thereby better performing data analysis.
In some embodiments, the vehicle-end data are multiple types of vehicle-end data, and the multiple types of vehicle-end data correspond to visual objects in multiple expression forms; the display module 603 is further configured to: and respectively displaying the visual objects of the multiple expression forms in different areas of the same display interface.
In the above example, by displaying the visual objects in different expression forms in different areas of the same display interface, the user can compare the visual objects in multiple expression forms conveniently, so as to perform data analysis better.
It is to be understood that in the disclosed embodiments, the same or similar contents in different embodiments may be mutually referred to.
It is to be understood that "first", "second", and the like in the embodiments of the present disclosure are used for distinction only, and do not indicate the degree of importance, the order of timing, and the like.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, as shown in fig. 7, the present disclosure also provides an autonomous vehicle 700, the autonomous vehicle 700 including: an electronic device 701.
The description of the electronic device may be as follows:
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 executes the respective methods and processes described above, such as the data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When loaded into RAM 803 and executed by the computing unit 801, a computer program may perform one or more steps of the data processing method described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above, reordering, adding or deleting steps, may be used. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (16)

1. A method of data processing, comprising:
obtaining vehicle end data, wherein the vehicle end data comprises: the type information of the vehicle-end data and the data content of the vehicle-end data;
generating a visualization object of a representation form corresponding to the type information based on the data content;
and displaying the visual object, wherein the displayed visual object is used for carrying out data analysis on the vehicle-end data.
2. The method of claim 1, wherein the obtaining vehicle-end data comprises:
displaying a download interface to a user, wherein the download interface comprises an operable item which is used for receiving selection information input by the user;
responding to selection information input by the user in the operable item, and sending a downloading instruction to a vehicle end, wherein the downloading instruction comprises the selection information and is used for triggering the vehicle end to acquire vehicle end data corresponding to the selection information;
and receiving vehicle end data corresponding to the selection information sent by the vehicle end.
3. The method of claim 1, wherein the obtaining vehicle-end data comprises:
receiving compressed data sent by a vehicle end, wherein the compressed data is obtained after the vehicle end data is compressed;
and decompressing the compressed data to obtain the vehicle-end data.
4. The method of any of claims 1-3, wherein the generating a visualization object of a representation corresponding to the type information based on the data content comprises:
if the type information is of a first type, generating a three-dimensional model animation based on the data content of the vehicle-end data of the first type;
If the type information is of a second type, generating a chart based on the data content of the vehicle-end data of the second type; or,
if the type information is of a third type, generating a video based on the data content of the vehicle-end data of the third type;
wherein the first type, the second type, and the third type are different.
5. The method of claim 4, wherein if the visualization object is the three-dimensional model animation, the method further comprises:
and responding to a visual angle adjusting instruction of the three-dimensional model animation, and adjusting the display visual angle of the three-dimensional model animation.
6. The method according to any one of claims 1 to 3,
the vehicle end data are various types of vehicle end data, and the various types of vehicle end data correspond to visual objects in various expression forms;
the displaying the visualization object includes:
and respectively displaying the visual objects of the multiple expression forms in different areas of the same display interface.
7. A data processing apparatus comprising:
the acquisition module is used for acquiring vehicle end data, and the vehicle end data comprise: the type information of the vehicle-end data and the data content of the vehicle-end data;
The generation module is used for generating a visualization object of an expression form corresponding to the type information based on the data content;
and the display module is used for displaying the visual object, and the displayed visual object is used for carrying out data analysis on the vehicle end data.
8. The apparatus of claim 7, wherein the means for obtaining is further for:
displaying a download interface to a user, wherein the download interface comprises an operable item which is used for receiving selection information input by the user;
responding to selection information input by the user in the operable item, and sending a downloading instruction to a vehicle end, wherein the downloading instruction comprises the selection information and is used for triggering the vehicle end to acquire vehicle end data corresponding to the selection information;
and receiving vehicle end data corresponding to the selection information sent by the vehicle end.
9. The apparatus of claim 7, wherein the means for obtaining is further for:
receiving compressed data sent by a vehicle end, wherein the compressed data is obtained after the vehicle end data is compressed;
and decompressing the compressed data to obtain the vehicle-end data.
10. The apparatus of any of claims 7-9, wherein the generating means is further configured to:
if the type information is of a first type, generating a three-dimensional model animation based on the data content of the vehicle-end data of the first type;
if the type information is of a second type, generating a chart based on the data content of the vehicle-end data of the second type; or,
if the type information is of a third type, generating a video based on the data content of the vehicle-end data of the third type;
wherein the first type, the second type, and the third type are different.
11. The apparatus of claim 7, wherein if the visualization object is the three-dimensional model animation, the apparatus further comprises:
and the adjusting module is used for responding to the visual angle adjusting instruction of the three-dimensional model animation and adjusting the display visual angle of the three-dimensional model animation.
12. The apparatus of any one of claims 7-9,
the vehicle end data are various types of vehicle end data, and the various types of vehicle end data correspond to visual objects in various expression forms;
the display module is further to:
And respectively displaying the visual objects of the multiple expression forms in different areas of the same display interface.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
16. An autonomous vehicle comprising: the electronic device of claim 13.
CN202210358880.4A 2022-04-06 2022-04-06 Data processing method, data processing device, vehicle, equipment and storage medium Pending CN114834453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210358880.4A CN114834453A (en) 2022-04-06 2022-04-06 Data processing method, data processing device, vehicle, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210358880.4A CN114834453A (en) 2022-04-06 2022-04-06 Data processing method, data processing device, vehicle, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114834453A true CN114834453A (en) 2022-08-02

Family

ID=82564578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210358880.4A Pending CN114834453A (en) 2022-04-06 2022-04-06 Data processing method, data processing device, vehicle, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114834453A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579761A (en) * 2024-01-16 2024-02-20 苏州映赛智能科技有限公司 Error display control method of road digital twin model
CN117579761B (en) * 2024-01-16 2024-03-26 苏州映赛智能科技有限公司 Error display control method of road digital twin model

Similar Documents

Publication Publication Date Title
WO2016169065A1 (en) Method, device and system for presenting operation information of a mobile platform
CN111931286A (en) Training method, device and equipment of longitudinal dynamics model
US20220229759A1 (en) Method, device, and system for simulation test
CN114661574A (en) Method and device for acquiring sample deviation data and electronic equipment
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
CN114179832A (en) Lane changing method for autonomous vehicle
CN113467875A (en) Training method, prediction method, device, electronic equipment and automatic driving vehicle
CN114834453A (en) Data processing method, data processing device, vehicle, equipment and storage medium
CN113391627A (en) Unmanned vehicle driving mode switching method and device, vehicle and cloud server
US20230391362A1 (en) Decision-making for autonomous vehicle
JP7366180B2 (en) Methods, devices, electronic devices and media for controlling data collection
EP4083336B1 (en) Method and apparatus for detecting operating terrain, and engineering equipment for detecting operating terrain
CN217435657U (en) Electrical system of automatic driving vehicle and automatic driving vehicle
CN115082690B (en) Target recognition method, target recognition model training method and device
CN114228735A (en) Visualization method, device and system for intelligent driving vehicle
CN115556769A (en) Obstacle state quantity determination method and device, electronic device and medium
CN114970112A (en) Method and device for automatic driving simulation, electronic equipment and storage medium
CN115583243B (en) Method for determining lane line information, vehicle control method, device and equipment
CN114877880A (en) Data processing method, data processing device, vehicle, equipment and storage medium
CN114604241A (en) Vehicle driving risk assessment method and device, electronic equipment and edge computing equipment
JP7346638B2 (en) Image data modification method, modification device, electronic equipment, storage medium, computer program and self-driving vehicle
CN115019278B (en) Lane line fitting method and device, electronic equipment and medium
CN114333405B (en) Method for assisting in parking a vehicle
CN115900724A (en) Path planning method and device
CN115952670A (en) Automatic driving scene simulation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination