CN114911630A - Data processing method and device, vehicle, storage medium and chip - Google Patents

Publication number
CN114911630A
CN114911630A (application CN202210827067.7A)
Authority
CN
China
Prior art keywords
data processing
target data
node
cache
processing node
Prior art date
Legal status
Granted
Application number
CN202210827067.7A
Other languages
Chinese (zh)
Other versions
CN114911630B (en)
Inventor
谭哲越
路卫杰
褚向阳
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202210827067.7A
Publication of CN114911630A
Application granted
Publication of CN114911630B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems

Abstract

The disclosure relates to a data processing method, a data processing apparatus, a vehicle, a storage medium, and a chip, and belongs to the field of automatic driving. The method comprises the following steps: determining, through topological sorting, at least one target data loading node on which a target data processing node in a computational graph depends, wherein the target data processing node is any one or more data processing nodes in the computational graph, and the computational graph is a directed acyclic graph comprising a plurality of nodes, the plurality of nodes including the data loading node and the data processing node; in response to determining that the target data loading node has loaded target data into a cache, invoking the target data processing node; and performing data processing on the target data in the cache through the target data processing node to obtain a data processing result.

Description

Data processing method and device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of automatic driving, and in particular, to a data processing method and apparatus, a vehicle, a storage medium, and a chip.
Background
During automatic driving and during autonomous-driving research and development, vehicles collect massive amounts of data that must be processed for efficient screening and use. In the related art, the data are read from a distributed file system or a non-temporary storage medium and fed into a plurality of models for processing. In multi-model inference, the same data may be input into several models, and in mass-data scenarios the repeated reading of the same data greatly reduces overall data processing efficiency.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a data processing method, an apparatus, a vehicle, a storage medium, and a chip.
According to a first aspect of embodiments of the present disclosure, there is provided a data processing method, the method including:
determining at least one target data loading node on which a target data processing node in a computational graph depends through topological sorting, wherein the target data processing node is any one or more data processing nodes in the computational graph, the computational graph is a directed acyclic graph and comprises a plurality of nodes, and the plurality of nodes comprise the data loading node and the data processing node;
in response to determining that the target data loading node has loaded target data into a cache, invoking the target data processing node;
and performing data processing on the target data in the cache through the target data processing node to obtain a data processing result.
Optionally, the method comprises:
storing, by the target data loading node, the target data to the cache; and,
generating a data pointer based on the node name and the data identifier corresponding to the target data loading node;
sending the data pointer to the target data processing node, whereby it is determined that the target data loading node has loaded the data into the cache;
the data processing of the target data in the cache by the target data processing node comprises:
and acquiring the target data in the cache based on the data pointer.
Optionally, after determining a target data loading node on which a target data processing node in the computational graph depends, the method includes:
adding the target data processing node into a queue to be scheduled;
the invoking the target data processing node in response to determining that the target data loading node has loaded target data into the cache comprises:
sequentially scanning the queue to be scheduled, and determining, when the target data processing node is scanned, whether the target data has been loaded into the cache;
adding the target data processing node to a ready queue if it is determined that the target data has been loaded into the cache;
sequentially scanning the ready queue, and invoking the target data processing node when it is scanned; and,
adding the target data processing node back into the queue to be scheduled after it has been invoked.
Optionally, the computational graph includes an output node, each of the data processing nodes is connected to the output node, and after the target data processing node is invoked, the method includes:
adding the target data processing node into a calculation queue;
sequentially scanning the calculation queue, and, when the target data processing node is scanned and has completed processing, storing the data processing result of the target data processing node to the cache; and,
generating a data pointer based on the node name and the data identifier corresponding to the target data processing node;
and sending the data pointer to the output node, and acquiring and outputting the data processing result in the cache through the output node based on the data pointer.
Optionally, the sequentially scanning the queue to be scheduled and determining, after the target data processing node is scanned, whether the target data has been loaded into the cache includes:
and under the condition that the data of the target data loading node is determined not to be ready, calling the target data loading node to load the target data into the cache.
Optionally, after adding the target data processing node to the queue to be scheduled, the method includes:
and calling the target data loading node to load the target data into the cache.
According to a second aspect of the embodiments of the present disclosure, there is provided a data processing apparatus, the apparatus comprising:
a determining module configured to determine, through topology ordering, at least one target data loading node on which a target data processing node in a computational graph depends, where the target data processing node is any one or more data processing nodes in the computational graph, and the computational graph is a directed acyclic graph and includes a plurality of nodes, where the plurality of nodes includes the data loading node and the data processing node;
a calling module configured to call the target data processing node in response to determining that the target data loading node loads target data into a cache;
and the processing module is configured to perform data processing on the target data in the cache through the target data processing node to obtain a data processing result.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining at least one target data loading node on which a target data processing node in a computational graph depends through topological sorting, wherein the target data processing node is any one or more data processing nodes in the computational graph, the computational graph is a directed acyclic graph and comprises a plurality of nodes, and the plurality of nodes comprise the data loading node and the data processing node;
in response to determining that the target data loading node has loaded target data into a cache, invoking the target data processing node;
and performing data processing on the target data in the cache through the target data processing node to obtain a data processing result.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any one of the first aspects of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a processor and an interface; the processor is configured to read instructions to perform the method of any one of the first aspects of the present disclosure.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: the data required by the target data processing node is determined based on a pre-drawn computational graph, and the required data is loaded into a cache through the corresponding data loading node. Zero-copy transfer of data between nodes is achieved based on the cache, so the same data does not need to be fetched multiple times when different data processing modules process it. Computing and storage resource consumption is thereby effectively reduced, and the loading of data is reused to the greatest extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of data processing according to an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating a computational graph according to an exemplary embodiment.
FIG. 3 is another flow chart illustrating a method of data processing according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating a data processing apparatus according to an example embodiment.
FIG. 5 is another block diagram illustrating a data processing apparatus in accordance with an exemplary embodiment.
FIG. 6 is yet another block diagram illustrating a data processing apparatus in accordance with an exemplary embodiment.
FIG. 7 is a functional block diagram schematic of a vehicle shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
Fig. 1 shows a data processing method according to an exemplary embodiment. The method may be applied to a vehicle, a terminal, or a server; the terminal may be an electronic device with data processing capability deployed on the vehicle, or a mobile phone, a personal computer, or the like, which is not specifically limited by this disclosure. The method includes:
s101, determining at least one target data loading node which is depended by a target data processing node in the calculation graph through topological sorting.
The target data processing node is any one or more data processing nodes in the computational Graph, the computational Graph is a Directed Acyclic Graph (DAG) and comprises a plurality of nodes, and the plurality of nodes comprise data loading nodes and data processing nodes.
The computational graph may be configured in advance. Specifically, the computational graph may be drawn according to the data required by a plurality of data processing nodes. For example, a pre-trained neural network model or another program module with data processing capability (e.g., a path planning model, a target detection model, or a data screening model) may serve as a data processing node, and the input parameters required by each data processing node may be provided to the computational graph building module, so that the module builds the corresponding computational graph. In addition, it should be understood that after the initial drawing of the computational graph is completed, nodes may be added to it and the directed connection relationships between nodes may be reconstructed to obtain an updated computational graph.
Illustratively, referring to the computational graph shown in fig. 2, the computational graph includes 3 data loading nodes, the 3 data loading nodes include a camera 1 data loading node, a camera 2 data loading node and a lidar data loading node, and the computational graph further includes 4 data processing nodes, each data processing node may be a pre-trained neural network model or other program module with data processing capability, and the 4 data processing nodes include a model 1 data processing node, a model 2 data processing node, a model 3 data processing node and a model 4 data processing node. In addition, the computation graph may also include two special nodes required for scheduling, an Input node and an Output node.
Taking the target data processing node as model 1 as an example, the target data processing node depends on the camera 1 data loading node and the camera 2 data loading node.
Specifically, the topological sort may select and output a node without predecessors from the computational graph (i.e., a node with an in-degree of 0, such as the Input node) and delete that node and all directed edges starting from it. These steps are repeated until the current computational graph is empty or no predecessor-free vertex remains in it. The dependency relationships among all nodes in the computational graph are thereby obtained through topological sorting.
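The topological-sorting step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the node names, the edge-list graph representation, and the helper `loader_dependencies` are assumptions chosen to mirror the fig. 2 example.

```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm: repeatedly output nodes whose in-degree is 0,
    removing them and their outgoing edges from the graph."""
    indegree = {n: 0 for n in nodes}
    for _, dst in edges:
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for src, dst in edges:
            if src == node:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    queue.append(dst)
    if len(order) != len(nodes):
        raise ValueError("computational graph is not acyclic")
    return order

def loader_dependencies(edges, processing_node, loader_nodes):
    """Data loading nodes that the given processing node directly depends on."""
    return {src for src, dst in edges if dst == processing_node and src in loader_nodes}
```

For a graph shaped like fig. 2, `loader_dependencies` would report that model 1 depends on the camera 1 and camera 2 loading nodes.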
S102, in response to the fact that the target data loading node loads the target data into the cache, calling the target data processing node.
The cache may be a memory cache, and it is understood that the memory cache is a shared cache space of each node in the computation graph.
It is understood that in step S102, the target data processing node is invoked to process the data only after it is determined that the target data loading node has read the target data from the distributed file system or the non-temporary storage medium into the cache. That is to say, each node is scheduled for computation only after its upstream data is ready, which avoids invoking a model when the data is not ready, a situation in which the model cannot work normally and resources are wasted.
It should be noted that when there are a plurality of target data processing nodes, whichever of them first has all of its required data loaded into the cache may be invoked first. For example, referring to the computational graph shown in fig. 2, suppose the target data processing nodes include the model 1 node and the model 3 node. If it is determined that the data loading tasks of the camera 1 and camera 2 data loading nodes are complete and the camera 1 and camera 2 data have been loaded into the cache, the model 1 node may be invoked first to perform its data processing task; then, when the lidar data loading node completes its data loading task and the lidar data is loaded into the cache, the model 3 node is invoked to perform its data processing task.
S103, performing data processing on the target data in the cache through the target data processing node to obtain a data processing result.
In the embodiment of the disclosure, the data required by the target data processing node is determined based on a pre-drawn computational graph, and the required data is loaded into the cache through the corresponding data loading node. Zero-copy transfer of data between nodes is achieved based on the cache, so the same data does not need to be fetched multiple times when different data processing modules process it, which effectively reduces computing and storage resource consumption and reuses the loading of data to the greatest extent.
In some optional embodiments, the method comprises:
storing, by the target data loading node, the target data to the cache; generating a data pointer based on the node name and the data identifier corresponding to the target data loading node; and sending the data pointer to the target data processing node, whereby it is determined that the target data loading node has loaded the data into the cache;
the data processing of the target data in the cache by the target data processing node comprises:
and acquiring the target data in the cache based on the data pointer.
For example, referring to the computational graph shown in fig. 2, take the case in which the target data processing node includes the model 1 node and the corresponding target data loading nodes include the camera 1 and camera 2 data loading nodes. For the camera 1 data loading node, a data pointer { camera 1 }-{ data id } may be generated, where the data id may be a data identifier generated according to a preset rule and is used to locate the output data of the current node. For example, the data identifier may be a random number or a sequence number of the data row; the disclosure is not limited thereto.
With this scheme, after the data loading node loads the data into the cache, a data pointer is generated and sent to the data processing node, so that the data processing node can accurately locate the data it needs in the cache according to the received pointer. Because only the pointer to the data is transferred rather than a copy of the actual data, zero-copy data transfer is achieved, effectively reducing computing and storage resources.
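The pointer-based zero-copy scheme can be sketched as follows. The `SharedCache` class, its method names, and the exact `{node name}-{data id}` string format are illustrative assumptions based on the pointer description above.

```python
class SharedCache:
    """In-memory cache shared by all nodes of the computational graph.
    Data is stored once; nodes exchange only small pointer strings."""

    def __init__(self):
        self._store = {}

    def put(self, node_name, data_id, data):
        # Pointer built from node name and data identifier, mirroring the
        # "{node name}-{data id}" scheme described above (format is assumed).
        pointer = f"{node_name}-{data_id}"
        self._store[pointer] = data
        return pointer

    def get(self, pointer):
        # Returns the stored object itself, so no copy is ever made.
        return self._store[pointer]
```

Two processing nodes that receive the same pointer resolve it to the very same object, which is the zero-copy property the scheme relies on.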
In some further optional embodiments, after determining a target data loading node on which a target data processing node in the computational graph depends, the method further comprises:
adding the target data processing node into a queue to be scheduled;
the invoking the target data processing node in response to determining that the target data loading node has loaded target data into the cache comprises:
sequentially scanning the queue to be scheduled, and determining, when the target data processing node is scanned, whether the target data has been loaded into the cache; adding the target data processing node to a ready queue if it is determined that the target data has been loaded into the cache; sequentially scanning the ready queue, and invoking the target data processing node when it is scanned; and, after the target data processing node has been invoked, adding it back into the queue to be scheduled.
It can be understood that, during the scan of the queue to be scheduled, if the data required by the currently scanned data processing node has not been loaded into the cache (for example, a data pointer from one or more of the data loading nodes it depends on has not been received), that data processing node is skipped. After the scan of the queue to be scheduled completes, if data processing nodes remain in the queue, scanning restarts from the beginning. The ready queue is handled in the same way and is not described again here.
By designing a queue to be scheduled and a ready queue and scheduling pending nodes and ready nodes separately, each node in the computational graph can be effectively managed, data processing abnormalities caused by scheduling errors are avoided, and the overall robustness of data processing is effectively improved.
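One scan cycle over the two queues might look like the following simplified sketch. The function shape, the bookkeeping of received pointers as sets, and all names are assumptions; a real scheduler would repeat this cycle until the queues drain.

```python
from collections import deque

def scan_once(pending, ready, pointers_received, dependencies, invoke):
    """One pass over the queue to be scheduled followed by one pass over the
    ready queue. A node whose input pointers have not all arrived is skipped
    and stays pending; an invoked node is re-queued for its next batch."""
    for _ in range(len(pending)):
        node = pending.popleft()
        if dependencies[node] <= pointers_received.get(node, set()):
            ready.append(node)          # all required data is in the cache
        else:
            pending.append(node)        # skip: data not ready yet
    while ready:
        node = ready.popleft()
        invoke(node)
        pending.append(node)            # back to the queue to be scheduled
    return pending, ready
```

In the fig. 2 example, if model 1 has received both camera pointers but model 3 is still waiting for the lidar pointer, one cycle invokes model 1 and leaves model 3 pending.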
In further alternative embodiments, the computational graph includes an output node, each of the data processing nodes is connected to the output node, and the invoking the target data processing node comprises:
adding the target data processing node into a calculation queue; sequentially scanning the calculation queue, and, when the target data processing node is scanned and has completed processing, storing the data processing result of the target data processing node to the cache; generating a data pointer based on the node name and the data identifier corresponding to the target data processing node; and sending the data pointer to the output node, and obtaining and outputting the data processing result in the cache through the output node based on the data pointer.
Similar to the queue to be scheduled, when the scan of the calculation queue reaches a data processing node that has not completed its processing, that node is skipped; if data processing nodes remain in the calculation queue after the scan completes, the queue is rescanned.
Wherein the Output node may be the Output node shown in fig. 2. The step of obtaining and outputting the data processing result from the cache may be performed in response to the output node being called externally. Specifically, when the output node is called with a calling parameter specifying that it output the data processing result corresponding to model 1, the output node may find the result output by model 1 in the cache based on the data pointer sent by model 1 and send it to the caller.
With this scheme, the data processing nodes that are in the middle of processing are managed through the calculation queue. When a data processing node finishes its data processing, a data pointer is generated and sent to the output node, and the output node obtains the data processing result from the cache based on the received pointer. The data processing nodes can thus be further scheduled, the output node can obtain their results with zero copying, and computing and storage resources are effectively reduced.
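The output node's pointer-based retrieval can be sketched as below. The dict-backed cache, the method names, and the pointer strings are illustrative assumptions, not the patent's implementation.

```python
class OutputNode:
    """Collects data pointers from finished processing nodes and, when
    called externally, resolves a producer's result from the shared cache."""

    def __init__(self, cache):
        self.cache = cache      # shared cache: pointer string -> data
        self.pointers = {}      # producing node name -> data pointer

    def receive_pointer(self, producer, pointer):
        self.pointers[producer] = pointer

    def fetch(self, producer):
        # Resolve the pointer sent by the producer; the result itself is
        # handed back without being copied out of the cache.
        return self.cache[self.pointers[producer]]
```

An external caller asking for model 1's result gets the cached object itself, never a copy.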
For the manner of triggering and calling of the data loading node, the present disclosure provides the following two optional schemes.
In an optional scheme, the sequentially scanning the queue to be scheduled and determining, after the target data processing node is scanned, whether the target data has been loaded into the cache includes:
and under the condition that the data of the target data loading node is determined not to be ready, calling the target data loading node to load the target data into the cache.
That is, in this scheme, after the data processing node joins the queue to be scheduled, the call to the data loading node is issued only when that dependent data processing node is scanned. Because data loading nodes are invoked only as the data processing nodes that depend on them are scanned, the data congestion caused by invoking many data loading nodes at the same time can be effectively avoided.
In another optional scheme, after adding the target data processing node to the queue to be scheduled, the method includes: and calling the target data loading node to load the target data into the cache.
That is, in this scheme, the data loading node is invoked immediately after the data processing node that depends on it joins the queue to be scheduled. Invoking the data loading node immediately allows the data to be loaded sooner, improving the overall data processing speed.
For these two ways of triggering and invoking the data loading node, a person skilled in the art may select between them according to actual network conditions or actual requirements.
To help those skilled in the art understand the scheduling flow of the data processing method provided by the present disclosure, the present disclosure also provides the following flow, based on the model 1 data processing node in the computational graph shown in fig. 2. To save space, the model 1 data processing node, the camera 1 data loading node, and the camera 2 data loading node are hereinafter referred to as model 1, camera 1, and camera 2, respectively:
s301, obtaining the dependency relationship between the nodes through topological sorting to obtain the dependency relationship between the model 1 and the camera 1 and the model 2.
S302, adding the model 1 into a queue to be scheduled, and calling the camera 1 and the camera 2 to start data loading.
S303, when it is determined that camera 1 and camera 2 have loaded the data into the cache, sending the first data pointer to model 1.
It is understood that the first data pointer includes a data pointer corresponding to the camera 1 and a data pointer corresponding to the camera 2.
S304, scanning the queue to be scheduled, and adding the model 1 into the ready queue under the condition that the model 1 is determined to receive the first data pointer.
S305, scanning the ready queue, calling the model 1 to execute data processing according to the first data pointer, and adding the model 1 data processing node into the calculation queue.
S306, when it is determined that the data processing of model 1 is completed, sending the second data pointer to the output node.
S307, the calculation queue is scanned, and if it is determined that the output node receives the second data pointer, the process returns to step S301.
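The S301 to S307 walkthrough above can be condensed into the following outline. The dict-backed cache, the pointer strings, and the stand-in "inference" computation are all illustrative assumptions.

```python
from collections import deque

cache = {}                      # shared in-memory cache: pointer -> data
pointers_at_model1 = set()      # pointers model 1 has received
output_pointers = {}            # pointers received by the Output node

def load(camera, data):
    """S302/S303: a data loading node stores data and returns its pointer
    (pointer format is an assumption)."""
    pointer = f"{camera}-0001"
    cache[pointer] = data
    return pointer

# S301-S302: model 1 depends on camera 1 and camera 2; enqueue it.
pending = deque(["model1"])

# S303: both cameras finish loading; the first data pointer reaches model 1.
pointers_at_model1.add(load("camera1", [10, 20]))
pointers_at_model1.add(load("camera2", [30, 40]))

# S304-S305: scan the pending queue; model 1 has both pointers, so it is
# ready and can be invoked.
node = pending.popleft()
assert node == "model1" and len(pointers_at_model1) == 2
inputs = [cache[p] for p in sorted(pointers_at_model1)]
result = [sum(pair) for pair in zip(*inputs)]   # stand-in for model inference

# S306-S307: store the result and send the second pointer to the Output node.
cache["model1-0001"] = result
output_pointers["model1"] = "model1-0001"
print(cache[output_pointers["model1"]])         # prints [40, 60]
```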
Based on the same inventive concept, fig. 4 is a block diagram illustrating a data processing apparatus according to an exemplary embodiment. Referring to fig. 4, the data processing apparatus 40 includes:
a determining module 41 configured to determine, through topology sorting, at least one target data loading node on which a target data processing node in a computational graph depends, where the target data processing node is any one or more data processing nodes in the computational graph, and the computational graph is a directed acyclic graph and includes a plurality of nodes, where the plurality of nodes includes the data loading node and the data processing node;
a calling module 42 configured to call the target data processing node in response to determining that the target data loading node loads target data into a cache;
a processing module 43 configured to perform data processing on the target data in the cache through the target data processing node to obtain a data processing result.
Optionally, the data processing apparatus 40 is further configured to:
storing, by the target data loading node, the target data to the cache; and,
generating a data pointer based on the node name and the data identifier corresponding to the target data loading node;
sending the data pointer to the target data processing node, whereby it is determined that the target data loading node has loaded the data into the cache;
the processing module 43 is configured to:
and acquiring the target data in the cache based on the data pointer.
Optionally, the data processing apparatus 40 is further configured to:
adding the target data processing node into a queue to be scheduled;
the calling module 42 is configured to:
sequentially scanning the queue to be scheduled, and determining, when the target data processing node is scanned, whether the target data has been loaded into the cache;
adding the target data processing node to a ready queue if it is determined that the target data is loaded to the cache;
sequentially scanning the ready queue, and invoking the target data processing node when it is scanned; and,
adding the target data processing node back into the queue to be scheduled after it has been invoked.
Optionally, the invoking module 42 is configured to:
adding the target data processing node into a calculation queue;
sequentially scanning the calculation queue, and, when the target data processing node is scanned and has completed processing, storing the data processing result of the target data processing node to the cache; and,
generating a data pointer based on the node name and the data identifier corresponding to the target data processing node;
and sending the data pointer to the output node, and acquiring and outputting the data processing result in the cache through the output node based on the data pointer.
Optionally, the data processing apparatus 40 is further configured to:
calling, when it is determined that the data of the target data loading node is not ready, the target data loading node to load the target data into the cache.
Optionally, the data processing apparatus 40 is further configured to:
calling the target data loading node to load the target data into the cache.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
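As a rough illustration of the dependency determination in the method embodiments, the sketch below finds the data loading nodes upstream of a target data processing node in a directed acyclic computational graph. A full scheduler would order all nodes with a topological sort; the dict-based graph representation here is an assumption for illustration.

```python
def upstream_load_nodes(graph, load_nodes, target):
    """Return the data loading nodes that `target` transitively depends on.

    `graph` maps each node to the list of nodes it directly depends on;
    `load_nodes` is the set of data loading nodes in the computational graph.
    """
    seen, stack, found = set(), [target], []
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                if dep in load_nodes:
                    found.append(dep)
                stack.append(dep)
    return found


# Hypothetical graph: processing node "P" depends on loading nodes "L1", "L2".
graph = {"P": ["L1", "L2"], "L1": [], "L2": []}
assert set(upstream_load_nodes(graph, {"L1", "L2"}, "P")) == {"L1", "L2"}
```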
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the data processing method provided by the present disclosure.
FIG. 5 is a block diagram illustrating a data processing apparatus according to an example embodiment. For example, the first data processing apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the first data processing apparatus 500 may include one or more of the following components: a first processing component 502, a first memory 504, a first power component 506, a multimedia component 508, an audio component 510, a first input/output interface 512, a sensor component 515, and a communication component 516.
The first processing component 502 generally controls the overall operation of the first data processing apparatus 500, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The first processing component 502 may include one or more first processors 520 to execute instructions to perform all or a portion of the steps of the method described above. Further, the first processing component 502 can include one or more modules that facilitate interaction between the first processing component 502 and other components. For example, the first processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the first processing component 502.
The first memory 504 is configured to store various types of data to support operation of the first data processing apparatus 500. Examples of such data include instructions for any application or method operating on the first data processing apparatus 500, contact data, phonebook data, messages, pictures, videos, and the like. The first memory 504 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
A first power supply component 506 provides power to the various components of the first data processing apparatus 500. The first power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 comprises a screen providing an output interface between said first data processing means 500 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive an external audio signal when the first data processing apparatus 500 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the first memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The first input/output interface 512 provides an interface between the first processing component 502 and a peripheral interface module, which may be a keyboard, click wheel, button, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 515 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 515 may detect the open/closed state of the apparatus 500, the relative positioning of the components, such as the display and keypad of the apparatus 500, the change in position of the first data processing apparatus 500 or a component of the first data processing apparatus 500, the presence or absence of user contact with the first data processing apparatus 500, the orientation or acceleration/deceleration of the first data processing apparatus 500 and the change in temperature of the first data processing apparatus 500. The sensor assembly 515 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 515 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 515 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the first data processing apparatus 500 and other devices in a wired or wireless manner. The first data processing device 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the first data processing apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described data processing methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the first memory 504 comprising instructions, executable by the first processor 520 of the first data processing apparatus 500 to perform the data processing method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The apparatus may be a part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an integrated circuit (IC) or a chip, where the IC may be a single IC or a collection of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be used to execute executable instructions (or code) to implement the data processing method described above. The executable instructions may be stored in the integrated circuit or chip, or may be retrieved from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory and, when executed by the processor, implement the data processing method described above; alternatively, the integrated circuit or chip may receive the executable instructions through the interface and transmit them to the processor for execution, so as to implement the data processing method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned data processing method when executed by the programmable apparatus.
FIG. 6 is a block diagram illustrating a data processing device according to an example embodiment. For example, the second data processing apparatus 600 may be provided as a server. Referring to fig. 6, the second data processing apparatus 600 includes a second processing component 622, which further includes one or more processors, and memory resources, represented by a second memory 632, for storing instructions, such as applications, executable by the second processing component 622. The application stored in the second memory 632 may include one or more modules, each corresponding to a set of instructions. Further, the second processing component 622 is configured to execute instructions to perform the data processing method described above.
The second data processing apparatus 600 may further comprise a second power supply component 626 configured to perform power management of the second data processing apparatus 600, a wired or wireless network interface 660 configured to connect the second data processing apparatus 600 to a network, and a second input/output interface 668. The second data processing apparatus 600 may operate based on an operating system stored in the second memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Referring to fig. 7, fig. 7 is a functional block diagram of a vehicle 700 according to an exemplary embodiment. The vehicle 700 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 700 may acquire environmental information of its surroundings through the sensing system 720 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 700 may include various subsystems such as infotainment system 710, perception system 720, decision control system 730, drive system 740, and computing platform 750. Alternatively, vehicle 700 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 700 may be interconnected by wire or wirelessly.
In some embodiments, infotainment system 710 may include a communication system 711, an entertainment system 712, and a navigation system 713.
The communication system 711 may include a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EV-DO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols are also possible, such as various vehicular communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 712 may include a display device, a microphone, and speakers. Based on the entertainment system, a user may listen to the radio or play music in the vehicle; alternatively, a mobile phone may communicate with the vehicle so that the phone's screen is projected onto the display device. The display device may be a touch screen, and the user may operate it by touching the screen.
In some cases, a voice signal of the user may be acquired through a microphone, and certain control of the vehicle 700 by the user, such as adjusting the temperature in the vehicle, etc., may be implemented according to the analysis of the voice signal of the user. In other cases, music may be played to the user through a stereo.
The navigation system 713 may include a map service provided by a map provider, so as to provide navigation along the route traveled by the vehicle 700, and the navigation system 713 may be used in conjunction with the global positioning system 721 and the inertial measurement unit 722 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 720 may include several types of sensors that sense information about the environment surrounding the vehicle 700. For example, the perception system 720 may include a global positioning system 721 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 722, a lidar 723, a millimeter-wave radar 724, an ultrasonic radar 725, and a camera 726. The perception system 720 may also include sensors that monitor internal systems of the vehicle 700 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 700.
The global positioning system 721 is used to estimate the geographic location of the vehicle 700.
The inertial measurement unit 722 is used to sense a pose change of the vehicle 700 based on the inertial acceleration. In some embodiments, inertial measurement unit 722 may be a combination of accelerometers and gyroscopes.
Lidar 723 utilizes a laser to sense objects in the environment in which vehicle 700 is located. In some embodiments, lidar 723 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 724 utilizes radio signals to sense objects within the surrounding environment of the vehicle 700. In some embodiments, in addition to sensing objects, the millimeter-wave radar 724 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 725 may sense objects around the vehicle 700 using ultrasonic signals.
The camera 726 is used to capture image information of the surrounding environment of the vehicle 700. The camera 726 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information acquired by the camera 726 may include a still image or video stream information.
The decision control system 730 includes a computing system 731 that makes analytical decisions based on information obtained by the perception system 720. The decision control system 730 further includes a vehicle control unit 732 that controls the powertrain of the vehicle 700, as well as a steering system 733, a throttle 734, and a brake system 735 for controlling the vehicle 700.
The computing system 731 is operable to process and analyze the various information acquired by the perception system 720 in order to identify targets and/or features in the environment surrounding the vehicle 700. A target may comprise a pedestrian or an animal, and the objects and/or features may comprise traffic signals, road boundaries, and obstacles. The computing system 731 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 731 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 731 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle control unit 732 may be used to perform coordinated control of the vehicle's power battery and engine 741 to improve the power performance of the vehicle 700.
The steering system 733 is operable to adjust the heading of the vehicle 700. For example, in one embodiment, the steering system 733 may be a steering wheel system.
The throttle 734 is used to control the operating speed of the engine 741 and thus the speed of the vehicle 700.
The brake system 735 is used to control the deceleration of the vehicle 700. The braking system 735 may use friction to slow the wheels 744. In some embodiments, the braking system 735 may convert kinetic energy of the wheels 744 into electrical current. The braking system 735 may also take other forms to slow the rotational speed of the wheels 744 to control the speed of the vehicle 700.
Drive system 740 may include components that provide powered motion to vehicle 700. In one embodiment, drive system 740 may include an engine 741, an energy source 742, a transmission 743, and wheels 744. The engine 741 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, a hybrid engine of an internal combustion engine and an air compression engine. The engine 741 converts the energy source 742 into mechanical energy.
Examples of energy source 742 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 742 may also provide energy for other systems of the vehicle 700.
The transmission 743 may transmit mechanical power from the engine 741 to the wheels 744. The transmission 743 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 743 may also include other devices, such as a clutch. The drive shaft may include one or more axles that may be coupled to one or more wheels 744.
Some or all of the functions of the vehicle 700 are controlled by the computing platform 750. The computing platform 750 can include at least one third processor 751, the third processor 751 can execute instructions 753 stored in a non-transitory computer-readable medium, such as a third memory 752. In some embodiments, the computing platform 750 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 700 in a distributed manner.
The third processor 751 can be any conventional processor, such as a commercially available CPU. Alternatively, the third processor 751 may also include a processor such as a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 7 functionally illustrates a processor, memory, and other elements of a computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the third processor 751 may perform the above-described data processing method.
In various aspects described herein, the third processor 751 can be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the third memory 752 can contain instructions 753 (e.g., program logic), which can be executed by the third processor 751 to perform various functions of the vehicle 700. The third memory 752 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 710, perception system 720, decision control system 730, drive system 740.
In addition to the instructions 753, the third memory 752 may also store data such as road maps, route information, location, direction, speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 700 and the computing platform 750 during operation of the vehicle 700 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 750 may control functions of vehicle 700 based on inputs received from various subsystems, such as drive system 740, sensing system 720, and decision control system 730. For example, the computing platform 750 may utilize input from the decision control system 730 in order to control the steering system 733 to avoid obstacles detected by the perception system 720. In some embodiments, the computing platform 750 is operable to provide control over many aspects of the vehicle 700 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the vehicle 700. For example, the third memory 752 may be partially or completely separate from the vehicle 700. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 7 should not be construed as limiting the embodiment of the present disclosure.
An autonomous automobile traveling on a road, such as the vehicle 700 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each identified object may be considered independently, and the respective characteristics of each object, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
Optionally, the vehicle 700, or a sensing and computing device associated with the vehicle 700 (e.g., the computing system 731 or the computing platform 750), may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, since the behaviors of the identified objects may depend on one another, the behavior of a single identified object may also be predicted by considering all of the identified objects together. The vehicle 700 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle is able to determine what state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered in determining the speed of the vehicle 700, such as the lateral position of the vehicle 700 in the road being traveled, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 700 to cause the autonomous vehicle to follow a given trajectory and/or to maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 700 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, or a train, and the embodiments of the present disclosure are not particularly limited in this respect.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of data processing, the method comprising:
determining at least one target data loading node on which a target data processing node in a computational graph depends through topological sorting, wherein the target data processing node is any one or more data processing nodes in the computational graph, the computational graph is a directed acyclic graph and comprises a plurality of nodes, and the plurality of nodes comprise the data loading node and the data processing node;
in response to determining that the target data load node loads target data into a cache, invoking the target data processing node;
and performing data processing on the target data in the cache through the target data processing node to obtain a data processing result.
2. The method according to claim 1, characterized in that the method comprises:
storing the target data into the cache through the target data loading node; and
generating a data pointer based on the node name and the data identifier corresponding to the target data loading node; and
sending the data pointer to the target data processing node, thereby determining that the target data loading node has loaded the data into the cache;
wherein the performing data processing on the target data in the cache through the target data processing node comprises:
acquiring the target data from the cache based on the data pointer.
3. The method of claim 1, wherein the determining a target data loading node in the computational graph upon which the target data processing node depends comprises:
adding the target data processing node into a queue to be scheduled;
the invoking the target data processing node in response to determining that the target data loading node loaded target data into cache comprises:
sequentially scanning the queues to be scheduled, and determining whether the target data is loaded to the cache or not under the condition that the target data processing node is scanned;
adding the target data processing node to a ready queue if it is determined that the target data is loaded to the cache;
scanning the ready queues in sequence, and calling the target data processing node under the condition of scanning the target data processing node; and the number of the first and second electrodes,
and adding the target data processing node into the queue to be scheduled after the target data processing node is called.
4. The method of any one of claims 1-3, wherein the computational graph includes output nodes, each of the data processing nodes being connected to the output nodes, and wherein invoking the target data processing node comprises:
adding the target data processing node into a calculation queue;
sequentially scanning the calculation queue and, when the target data processing node is scanned and has completed processing, storing the data processing result of the target data processing node into the cache; and
generating a data pointer based on the node name and the data identifier corresponding to the target data processing node; and
sending the data pointer to the output node, and acquiring and outputting the data processing result from the cache through the output node based on the data pointer.
5. The method of claim 3, wherein the sequentially scanning the queue to be scheduled and determining, when the target data processing node is scanned, whether the target data has been loaded into the cache comprises:
calling, when it is determined that the data of the target data loading node is not ready, the target data loading node to load the target data into the cache.
6. The method of claim 3, wherein after adding the target data processing node to the queue to be scheduled, the method comprises:
calling the target data loading node to load the target data into the cache.
7. A data processing apparatus, characterized in that the apparatus comprises:
a determining module configured to determine, through topology ordering, at least one target data loading node on which a target data processing node in a computational graph depends, where the target data processing node is any one or more data processing nodes in the computational graph, and the computational graph is a directed acyclic graph and includes a plurality of nodes, where the plurality of nodes includes the data loading node and the data processing node;
a calling module configured to call the target data processing node in response to determining that the target data loading node loads target data into a cache;
and the processing module is configured to perform data processing on the target data in the cache through the target data processing node to obtain a data processing result.
8. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining at least one target data loading node on which a target data processing node in a computational graph depends through topological sorting, wherein the target data processing node is any one or more data processing nodes in the computational graph, the computational graph is a directed acyclic graph and comprises a plurality of nodes, and the plurality of nodes comprise the data loading node and the data processing node;
in response to determining that the target data load node loads target data into a cache, invoking the target data processing node;
and performing data processing on the target data in the cache through the target data processing node to obtain a data processing result.
9. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 6.
10. A chip, comprising a processor and an interface, wherein the processor is configured to read instructions to perform the method according to any one of claims 1 to 6.
CN202210827067.7A 2022-07-14 2022-07-14 Data processing method and device, vehicle, storage medium and chip Active CN114911630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210827067.7A CN114911630B (en) 2022-07-14 2022-07-14 Data processing method and device, vehicle, storage medium and chip


Publications (2)

Publication Number Publication Date
CN114911630A true CN114911630A (en) 2022-08-16
CN114911630B CN114911630B (en) 2022-11-04

Family

ID=82772087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210827067.7A Active CN114911630B (en) 2022-07-14 2022-07-14 Data processing method and device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN114911630B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103069385A (en) * 2010-06-15 2013-04-24 起元技术有限责任公司 Dynamically loading graph-based computations
US10310907B1 (en) * 2018-05-11 2019-06-04 Xactly Corporation Computer system providing numeric calculations with less resource usage
CN110377340A (en) * 2019-07-24 2019-10-25 北京中科寒武纪科技有限公司 Operation method, device and Related product
CN111753983A (en) * 2020-06-22 2020-10-09 深圳鲲云信息科技有限公司 Method, system, device and storage medium for customizing neural network model
CN111752691A (en) * 2020-06-22 2020-10-09 深圳鲲云信息科技有限公司 AI (artificial intelligence) calculation graph sorting method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN114911630B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
EP4346247A1 (en) Data interaction method and apparatus, vehicle, readable storage medium and chip
CN114863717B (en) Parking stall recommendation method and device, storage medium and vehicle
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114937351B (en) Motorcade control method and device, storage medium, chip, electronic equipment and vehicle
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN114911630B (en) Data processing method and device, vehicle, storage medium and chip
CN114880408A (en) Scene construction method, device, medium and chip
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN115334109A (en) System architecture, transmission method, vehicle, medium and chip for traffic signal identification
CN114780226B (en) Resource scheduling method and device, computer readable storage medium and vehicle
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN115221260B (en) Data processing method, device, vehicle and storage medium
EP4296132A1 (en) Vehicle control method and apparatus, vehicle, non-transitory storage medium and chip
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN115042813B (en) Vehicle control method and device, storage medium and vehicle
CN115297434B (en) Service calling method and device, vehicle, readable storage medium and chip
CN114802217B (en) Method and device for determining parking mode, storage medium and vehicle
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114771514B (en) Vehicle running control method, device, equipment, medium, chip and vehicle
CN115080788A (en) Music pushing method and device, storage medium and vehicle
CN115620258A (en) Lane line detection method, device, storage medium and vehicle
CN115221261A (en) Map data fusion method and device, vehicle and storage medium
CN115052019A (en) Uploading method and device of disk data, vehicle, storage medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant