CN115056784B - Vehicle control method, device, vehicle, storage medium and chip - Google Patents

Vehicle control method, device, vehicle, storage medium and chip

Info

Publication number
CN115056784B
CN115056784B (application CN202210786945.5A)
Authority
CN
China
Prior art keywords
module
environment sensing
environment
modules
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210786945.5A
Other languages
Chinese (zh)
Other versions
CN115056784A (en)
Inventor
汪能
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202210786945.5A
Publication of CN115056784A
Application granted
Publication of CN115056784B
Status: Active

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02 - Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 - Planning or execution of driving tasks
    • B60W2420/00 - Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 - Photo or light sensitive means, e.g. infrared sensors
    • B60W2420/403 - Image sensing, e.g. optical camera
    • B60W2420/408

Abstract

The present disclosure relates to a vehicle control method, device, vehicle, storage medium, and chip. The method includes: collecting an environment image of the surroundings of a vehicle; obtaining an environment perception result from the environment image and an environment perception model; and controlling the vehicle to travel according to the environment perception result. The environment perception model is pre-built by: determining a plurality of environment perception modules and target configuration information for constructing the model, the target configuration information including association relationships among the modules; and constructing the model from the modules, the target configuration information, and a feature pool, where the feature pool stores the environment perception features output by each module during autonomous driving of the vehicle. This reduces the complexity of building the environment perception model, yields a higher-quality model, and improves the safety of autonomous driving.

Description

Vehicle control method, device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of vehicle technology, and in particular to a vehicle control method and device, a vehicle, a storage medium, and a chip.
Background
Autonomous driving technology acquires images of the environment around a vehicle through cameras, radar sensors, laser rangefinders, and the like, determines a driving trajectory and a driving strategy for the vehicle from those images, and controls the vehicle to drive automatically according to that trajectory and strategy.
In the related art, the environment image is processed by an environment perception model, and the driving trajectory and driving strategy of the vehicle are determined from the perception result, so the environment perception model is central to autonomous driving. However, the environment surrounding a moving vehicle is complex, an environment perception model is built from many environment perception modules, and the interactions among those modules are intricate. As a result, the environment perception model is relatively difficult to construct, a high-quality model is hard to obtain, and the safety of autonomous driving is affected.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a vehicle control method, a device, a vehicle, a storage medium, and a chip.
According to a first aspect of the embodiments of the present disclosure, there is provided a vehicle control method, including:
collecting an environment image of the surroundings of a vehicle;
obtaining an environment perception result from the environment image and an environment perception model;
controlling the vehicle to travel according to the environment perception result;
wherein the environment perception model is pre-built by:
determining a plurality of environment perception modules and target configuration information for constructing the environment perception model, wherein the target configuration information comprises association relationships among the plurality of environment perception modules;
and constructing the environment perception model from the plurality of environment perception modules, the target configuration information, and a feature pool, wherein the feature pool is used to store the environment perception features output by each environment perception module during autonomous driving of the vehicle.
Optionally, determining the plurality of environment perception modules and the target configuration information for constructing the environment perception model includes:
determining the plurality of environment perception modules from a plurality of preset neural network modules according to preset model parameters corresponding to the environment perception model;
and generating the target configuration information according to the plurality of environment perception modules.
Optionally, generating the target configuration information according to the plurality of environment perception modules includes:
determining, for each environment perception module, the parent node module corresponding to its parent node, to obtain the module association relationships of the environment perception model;
acquiring preset location information for storing the environment image;
and generating the target configuration information according to the module association relationships and the preset location information.
Optionally, the environment perception features include first environment perception features and second environment perception features, and obtaining the environment perception result from the environment image and the environment perception model includes:
determining first modules and second modules from the plurality of environment perception modules, wherein the first modules are the environment perception modules that have no parent node and the second modules are the remaining environment perception modules;
acquiring the first environment perception features output by the first modules according to the environment image, and acquiring the second environment perception features output by the second modules according to the first environment perception features;
and determining the environment perception result according to the second environment perception features.
Optionally, acquiring the first environment perception features output by the first modules according to the environment image, and acquiring the second environment perception features output by the second modules according to the first environment perception features, includes:
for each first module, inputting the environment image into the first module to acquire the first environment perception feature output by that module, and storing it in the feature pool;
and for each second module, determining the input features corresponding to the second module from the environment perception features stored in the feature pool, inputting those features into the second module to acquire the second environment perception feature output by that module, and storing it in the feature pool.
Optionally, determining the input features corresponding to the second module from the environment perception features stored in the feature pool includes:
determining the target parent node module corresponding to the parent node of the second module;
and determining, from the environment perception features stored in the feature pool, the target environment perception feature output by the target parent node module, and taking it as the input feature corresponding to the second module.
Optionally, determining the environment perception result according to the second environment perception features includes:
determining third modules from the second modules, wherein the third modules are the second modules that have no child node;
and determining the environment perception result according to the second environment perception features output by the third modules.
According to a second aspect of the embodiments of the present disclosure, there is provided a vehicle control apparatus, including:
a collection module configured to collect an environment image of the surroundings of a vehicle;
an acquisition module configured to obtain an environment perception result from the environment image and an environment perception model;
a control module configured to control the vehicle to travel according to the environment perception result;
wherein the environment perception model is pre-built by:
determining a plurality of environment perception modules and target configuration information for constructing the environment perception model, wherein the target configuration information comprises association relationships among the plurality of environment perception modules;
and constructing the environment perception model from the plurality of environment perception modules, the target configuration information, and a feature pool, wherein the feature pool is used to store the environment perception features output by each environment perception module during autonomous driving of the vehicle.
Optionally, determining the plurality of environment perception modules and the target configuration information for constructing the environment perception model includes:
determining the plurality of environment perception modules from a plurality of preset neural network modules according to preset model parameters corresponding to the environment perception model;
and generating the target configuration information according to the plurality of environment perception modules.
Optionally, generating the target configuration information according to the plurality of environment perception modules includes:
determining, for each environment perception module, the parent node module corresponding to its parent node, to obtain the module association relationships of the environment perception model;
acquiring preset location information for storing the environment image;
and generating the target configuration information according to the module association relationships and the preset location information.
Optionally, the environment perception features include first environment perception features and second environment perception features, and the acquisition module is further configured to:
determine first modules and second modules from the plurality of environment perception modules, wherein the first modules are the environment perception modules that have no parent node and the second modules are the remaining environment perception modules;
acquire the first environment perception features output by the first modules according to the environment image, and acquire the second environment perception features output by the second modules according to the first environment perception features;
and determine the environment perception result according to the second environment perception features.
Optionally, the acquisition module is further configured to:
input, for each first module, the environment image into the first module to acquire the first environment perception feature output by that module, and store it in the feature pool;
and determine, for each second module, the input features corresponding to the second module from the environment perception features stored in the feature pool, input those features into the second module to acquire the second environment perception feature output by that module, and store it in the feature pool.
Optionally, the acquisition module is further configured to:
determine the target parent node module corresponding to the parent node of the second module;
and determine, from the environment perception features stored in the feature pool, the target environment perception feature output by the target parent node module, and take it as the input feature corresponding to the second module.
Optionally, the acquisition module is further configured to:
determine third modules from the second modules, wherein the third modules are the second modules that have no child node;
and determine the environment perception result according to the second environment perception features output by the third modules.
According to a third aspect of embodiments of the present disclosure, there is provided a vehicle comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the vehicle control method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the vehicle control method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a processor and an interface; the processor is configured to read instructions to perform the method of the first aspect of the present disclosure.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects. An environment image of the surroundings of the vehicle is collected; an environment perception result is obtained from the environment image and the environment perception model; and the vehicle is controlled to travel according to that result. The environment perception model is pre-built by determining a plurality of environment perception modules and target configuration information for constructing the model, where the target configuration information comprises the association relationships among the modules, and by constructing the model from those modules, the target configuration information, and a feature pool that stores the environment perception features output by each module during autonomous driving. In other words, the environment perception modules never interact directly: their interaction is mediated by the target configuration information and the feature pool, which decouples the modules from one another. This reduces the complexity of constructing the environment perception model, makes the constructed model higher in quality, and improves the safety of autonomous driving of the vehicle.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a vehicle control method according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method for building an environment perception model according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating another vehicle control method according to an exemplary embodiment;
FIG. 4 is a framework diagram of an environment perception model according to an exemplary embodiment;
FIG. 5 is a block diagram of a vehicle control apparatus according to an exemplary embodiment;
FIG. 6 is a functional block diagram of a vehicle according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the disclosure as recited in the appended claims.
First, an application scenario of the present disclosure is described. At present, an environment perception model can be built on top of an existing deep-learning model-building framework. The toolkit of such a framework provides a variety of common open-source components from which models can be assembled quickly. If a new architecture needs to be realized on top of the framework, for example to add a new algorithm, the architecture logic must be written according to the specification defined by the framework. However, the architectures of existing deep-learning frameworks are complex and their learning cost is high; an environment perception model is built from many environment perception modules, and the interactions among those modules are intricate. Consequently the environment perception model is difficult to construct, a high-quality model is hard to obtain, and the safety of autonomous driving of the vehicle is affected.
To overcome these technical problems in the related art, the present disclosure provides a vehicle control method, a device, a vehicle, a storage medium, and a chip in which interaction among a plurality of environment perception modules is realized through target configuration information and a feature pool, decoupling the modules from one another; this reduces the complexity of constructing the environment perception model, makes the constructed model higher in quality, and thereby improves the safety of autonomous driving of the vehicle.
The present disclosure is described below in connection with specific embodiments.
FIG. 1 is a flowchart illustrating a vehicle control method according to an exemplary embodiment. As shown in FIG. 1, the method may include the following steps.
s101, collecting an environment image of the surrounding environment of the vehicle.
In this step, an environmental image of the surrounding environment of the vehicle may be acquired by a camera, a radar sensor, a laser range finder, or the like mounted on the vehicle.
S102: Obtain an environment perception result from the environment image and the environment perception model.
In this step, after the environment image is collected, it may be input into the environment perception model to obtain the environment perception result output by the model.
The environment perception model is pre-built by:
determining a plurality of environment perception modules and target configuration information for constructing the environment perception model, wherein the target configuration information comprises association relationships among the plurality of environment perception modules;
and constructing the environment perception model from the plurality of environment perception modules, the target configuration information, and a feature pool, wherein the feature pool is used to store the environment perception features output by each environment perception module during autonomous driving of the vehicle. The environment perception features may include first environment perception features and second environment perception features.
S103: Control the vehicle to travel according to the environment perception result.
In this step, after the environment perception result is obtained, a driving trajectory and a driving strategy for the vehicle may be determined from it, and the vehicle's autonomous driving system may control the vehicle to drive automatically according to that trajectory and strategy.
With this method, interaction among the plurality of environment perception modules is realized through the target configuration information and the feature pool, decoupling the modules from one another; this reduces the complexity of constructing the environment perception model, makes the constructed model higher in quality, and improves the safety of autonomous driving of the vehicle.
FIG. 2 is a flowchart illustrating a method for building an environment perception model according to an exemplary embodiment. As shown in FIG. 2, the method may include the following steps.
s21, determining a plurality of environment sensing modules and target configuration information for constructing the environment sensing model.
The target configuration information may include an association relationship between a plurality of the context awareness modules.
In this step, the preset model parameters corresponding to the environment sensing model may be predetermined according to the environment sensing result that needs to be determined, and the plurality of environment sensing modules may be determined from the plurality of preset neural network modules according to the preset model parameters corresponding to the environment sensing model, and the target configuration information may be generated according to the plurality of environment sensing modules.
For example, a model frame may be pre-constructed, where the model frame includes a plurality of preset neural network modules, and different preset neural network modules may have different functions, for example, the preset neural network modules may be imageview encoder, view transformer, temporal Module, and the disclosure is not limited thereto. After determining the preset model parameters, a plurality of environment sensing modules can be determined from a plurality of preset neural network modules of the model framework according to the preset model parameters.
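As a minimal sketch of what such a registry-based framework might look like (the registry, decorator API, and parameter format below are illustrative assumptions, not an API defined by this disclosure), preset neural network modules could be registered under names and instantiated from the preset model parameters:

```python
import torch.nn as nn

# Hypothetical registry of preset neural network modules; the decorator
# API and the module names are assumptions for illustration only.
MODULE_REGISTRY: dict[str, type] = {}

def register_module(name: str):
    """Register a preset neural network module class under a name."""
    def wrap(cls):
        MODULE_REGISTRY[name] = cls
        return cls
    return wrap

@register_module("image_view_encoder")
class ImageViewEncoder(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, kernel_size=3, padding=1)

    def forward(self, image):
        return self.conv(image)

def select_modules(preset_model_params: dict) -> dict:
    """Instantiate the environment perception modules named in the
    preset model parameters from the registry."""
    return {
        name: MODULE_REGISTRY[spec["type"]](**spec.get("args", {}))
        for name, spec in preset_model_params["modules"].items()
    }
```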
Correspondingly, after the plurality of environment perception modules are determined, the parent node module corresponding to the parent node of each environment perception module is determined to obtain the module association relationships of the environment perception model; preset location information for storing the environment image is acquired; and the target configuration information is generated from the module association relationships and the preset location information.
For each environment perception module, at least one parent node corresponding to that module can be determined, and the environment perception module corresponding to each such parent node is taken as a parent node module of that module. Once the parent node module of every environment perception module has been determined, the module association relationships of the environment perception model are obtained. The target configuration information is then generated from the module association relationships and the preset location information, following the configuration format preset by the model framework. The target configuration information may be stored in a file to obtain a target configuration file, from which it can be read while the environment perception model runs.
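For concreteness, a sketch of what such a target configuration might contain follows. The disclosure only states that the configuration records each module's parent node modules and the preset location where the environment image is stored, so the key names here (image_key, parents) and the use of a Python dict rather than the framework's own file format are assumptions:

```python
# Hypothetical target configuration for the model of FIG. 4 (described
# below). "parents" lists each module's parent node modules; an empty
# list marks a first module. "image_key" is the preset location under
# which the environment image is stored.
TARGET_CONFIG = {
    "image_key": "env_image",
    "modules": {
        "view_encoder":   {"parents": []},                # first module
        "view_converter": {"parents": ["view_encoder"]},  # also reads the image
        "temporal":       {"parents": ["view_converter"]},
        "trunk_1":        {"parents": ["view_encoder"]},
        "trunk_2":        {"parents": ["view_converter"]},
        "trunk_3":        {"parents": ["temporal"]},
        "head_1":         {"parents": ["trunk_1"]},
        "head_2":         {"parents": ["trunk_2"]},
        "head_3":         {"parents": ["trunk_3"]},
    },
}
```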
S22: Construct the environment perception model from the plurality of environment perception modules, the target configuration information, and a feature pool.
The feature pool is used to store the environment perception features output by each environment perception module during autonomous driving of the vehicle, and may be represented in the form of a dictionary.
In this step, after the plurality of environment perception modules and the target configuration information have been determined, the module association relationships among the modules can be obtained from the target configuration information, and the structure of the environment perception model determined from those relationships.
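A minimal sketch of this construction follows, assuming the dictionary-form feature pool just described; the class and method names are illustrative, not defined by the disclosure. The forward pass over this structure is sketched under step S1022 below:

```python
import torch
import torch.nn as nn

class FeaturePool:
    """Dictionary-form feature pool: each environment perception module
    writes its output feature under its own name and reads its inputs
    under the names of its parent node modules."""
    def __init__(self):
        self._features: dict[str, torch.Tensor] = {}

    def put(self, name: str, feature: torch.Tensor) -> None:
        self._features[name] = feature

    def get(self, name: str) -> torch.Tensor:
        return self._features[name]

class EnvironmentPerceptionModel(nn.Module):
    """Assembled from the selected modules and the target configuration.
    The modules never reference one another directly; the configuration
    and the feature pool mediate all interaction between them."""
    def __init__(self, modules: dict, config: dict):
        super().__init__()
        self.perception_modules = nn.ModuleDict(modules)
        self.config = config
```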
An environment perception model constructed in this way decouples the environment perception modules from one another, which reduces the complexity of constructing the model and makes the constructed model higher in quality.
Fig. 3 is a flowchart illustrating another vehicle control method according to an exemplary embodiment. As shown in Fig. 3, step S102 may be implemented as follows.
S1021: Determine first modules and second modules from the plurality of environment perception modules.
The first modules are the environment perception modules that have no parent node; the second modules are the remaining environment perception modules.
In this step, after the environment image is collected, the first modules and the second modules may be identified from the module association relationships in the target configuration information.
Illustratively, FIG. 4 is a framework diagram of an environment perception model according to an exemplary embodiment. As shown in FIG. 4, the model includes three level-1 modules: a general backbone module (General Backbone), a trunk module (Trunk), and a head module (Head), and a level-1 module may include a plurality of level-2 modules. Taking the general backbone module as an example, its level-2 modules are a feature-view encoding module, a feature-view conversion module, and a temporal module. As can be seen from FIG. 4, the input of the feature-view encoding module is the environment image; the input of the feature-view conversion module is the environment image together with the environment perception feature output by the encoding module; and the input of the temporal module is the environment perception feature output by the conversion module. Trunk module 1 takes the feature output by the encoding module, trunk module 2 the feature output by the conversion module, and trunk module 3 the feature output by the temporal module; detection heads 1, 2, and 3 take the features output by trunk modules 1, 2, and 3, respectively.
It should be noted that the framework shown in FIG. 4 is merely an example; the present disclosure does not limit the number of modules at each level or the ways they are connected. The environment image may be preprocessed before being input into a module, using any processing method from the prior art, which is not limited here.
Taking the environment perception model shown in FIG. 4 as an example, the only first module is the feature-view encoding module; all the other modules are second modules.
S1022: Acquire the first environment perception features output by the first modules according to the environment image, and acquire the second environment perception features output by the second modules according to the first environment perception features.
In this step, after the first and second modules are determined, the environment image is input into each first module to acquire the first environment perception feature that module outputs, and that feature is stored in the feature pool; then, for each second module, the input features corresponding to the module are determined from the environment perception features stored in the feature pool, input into the module to acquire the second environment perception feature it outputs, and that feature is stored in the feature pool.
In one possible implementation, for each second module, the target parent node module corresponding to the parent node of the second module is determined; the target environment perception feature output by that parent node module is then found among the features stored in the feature pool and used as the input feature corresponding to the second module.
For example, after the first and second modules are determined, each environment perception module in the model may be traversed, and for each module it is determined whether it is a first module. If it is, the environment image is obtained and input into the module, and the first environment perception feature the module outputs is stored in the feature pool. If the module is a second module, its target parent node module is determined from the target configuration information, the target environment perception feature output by that parent node module is retrieved from the feature pool and input into the module, and the second environment perception feature the module outputs is stored in the feature pool.
Continuing with the environment perception model of FIG. 4: for the feature-view encoding module, once it is determined to be a first module, the environment image is input into it, the first environment perception feature it outputs is obtained, and that feature is stored in the feature pool. For the temporal module, once it is determined to be a second module, its parent node module is determined to be the feature-view conversion module; the target environment perception feature output by the conversion module is fetched from the feature pool and input into the temporal module, and the second environment perception feature the temporal module outputs is stored in the feature pool.
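A minimal sketch of this traversal, reusing the illustrative FeaturePool, TARGET_CONFIG, and EnvironmentPerceptionModel from above (all assumptions; details such as modules that read the raw image in addition to a parent feature, like the feature-view conversion module in FIG. 4, are elided):

```python
def run_perception(model, image):
    """Traverse the environment perception modules in dependency order.
    First modules (no parents) consume the environment image; second
    modules read their parents' features from the feature pool. Every
    module writes its output back into the pool and never calls another
    module directly."""
    cfg = model.config
    pool = FeaturePool()
    pool.put(cfg["image_key"], image)  # preset location of the image

    done, pending = set(), dict(cfg["modules"])
    while pending:
        progressed = False
        for name, spec in list(pending.items()):
            if any(p not in done for p in spec["parents"]):
                continue  # wait until all parent features are in the pool
            if spec["parents"]:  # second module: inputs are parent features
                inputs = [pool.get(p) for p in spec["parents"]]
            else:                # first module: input is the environment image
                inputs = [pool.get(cfg["image_key"])]
            pool.put(name, model.perception_modules[name](*inputs))
            done.add(name)
            del pending[name]
            progressed = True
        if not progressed:
            raise ValueError("cycle or missing parent in target configuration")
    return pool
```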
S1023: Determine the environment perception result according to the second environment perception features.
In this step, third modules may be determined from the second modules, a third module being a second module that has no child node, and the environment perception result is determined according to the second environment perception features output by the third modules.
Taking the environment perception model shown in FIG. 4 as an example, detection heads 1, 2, and 3 are the third modules, and the environment perception result can be determined from the second environment perception features they output using a feature-fusion method from the prior art.
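Continuing the same illustrative configuration (and only as a sketch; the disclosure defers the actual fusion to prior-art methods), the third modules can be found as the modules that appear in no other module's parent list:

```python
def perception_result(model, pool):
    """Identify the third modules (second modules with no child node,
    i.e. modules that are not the parent of any other module) and fuse
    their pooled output features into the environment perception result."""
    cfg_modules = model.config["modules"]
    all_parents = {p for spec in cfg_modules.values() for p in spec["parents"]}
    third = [name for name in cfg_modules if name not in all_parents]
    return fuse([pool.get(name) for name in third])

def fuse(head_features):
    # Placeholder for a prior-art feature-fusion method; returning the
    # per-head features unchanged keeps the sketch runnable.
    return head_features
```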
With this method, interaction among the plurality of environment perception modules is realized through the target configuration information and the feature pool, decoupling the modules from one another; this reduces the complexity of constructing the environment perception model, makes the constructed model higher in quality, and improves the safety of autonomous driving of the vehicle. In addition, the environment perception model can pair different detection heads with different environment perception modules and customize the propagation strategy of its forward and backward passes, so developers can optimize the model with different strategies, further improving its quality.
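As a hedged illustration of that last point (nothing below is prescribed by the disclosure; the strategy table and field names are assumptions), a per-head propagation strategy could be expressed as configuration that weights each detection head's loss and decides whether its gradients flow back into the shared modules:

```python
import torch

# Hypothetical per-head propagation strategy; the fields are
# illustrative assumptions, not part of the disclosure.
HEAD_STRATEGY = {
    "head_1": {"loss_weight": 1.0, "stop_backbone_grad": False},
    "head_2": {"loss_weight": 0.5, "stop_backbone_grad": False},
    "head_3": {"loss_weight": 0.2, "stop_backbone_grad": True},
}

def head_input(feature: torch.Tensor, name: str) -> torch.Tensor:
    """Detach the trunk feature for heads whose gradients should not
    propagate back into the shared modules."""
    if HEAD_STRATEGY[name]["stop_backbone_grad"]:
        return feature.detach()
    return feature

def total_loss(head_losses: dict) -> torch.Tensor:
    """Combine per-head losses with the configured weights."""
    return sum(HEAD_STRATEGY[n]["loss_weight"] * l
               for n, l in head_losses.items())
```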
Fig. 5 is a block diagram of a vehicle control apparatus according to an exemplary embodiment. As shown in Fig. 5, the apparatus may include:
a collection module 501 configured to collect an environment image of the surroundings of the vehicle;
an acquisition module 502 configured to obtain an environment perception result from the environment image and the environment perception model;
a control module 503 configured to control the vehicle to travel according to the environment perception result;
wherein the environment perception model is pre-built by:
determining a plurality of environment perception modules and target configuration information for constructing the environment perception model, wherein the target configuration information comprises association relationships among the plurality of environment perception modules;
and constructing the environment perception model from the plurality of environment perception modules, the target configuration information, and a feature pool, wherein the feature pool is used to store the environment perception features output by each environment perception module during autonomous driving of the vehicle.
Optionally, determining the plurality of environment perception modules and the target configuration information for constructing the environment perception model includes:
determining the plurality of environment perception modules from a plurality of preset neural network modules according to preset model parameters corresponding to the environment perception model;
and generating the target configuration information according to the plurality of environment perception modules.
Optionally, generating the target configuration information according to the plurality of environment perception modules includes:
determining, for each environment perception module, the parent node module corresponding to its parent node, to obtain the module association relationships of the environment perception model;
acquiring preset location information for storing the environment image;
and generating the target configuration information according to the module association relationships and the preset location information.
Optionally, the environment perception features include first environment perception features and second environment perception features, and the acquisition module 502 is further configured to:
determine first modules and second modules from the plurality of environment perception modules, wherein the first modules are the environment perception modules that have no parent node and the second modules are the remaining environment perception modules;
acquire the first environment perception features output by the first modules according to the environment image, and acquire the second environment perception features output by the second modules according to the first environment perception features;
and determine the environment perception result according to the second environment perception features.
Optionally, the acquisition module 502 is further configured to:
input, for each first module, the environment image into the first module to acquire the first environment perception feature output by that module, and store it in the feature pool;
and determine, for each second module, the input features corresponding to the second module from the environment perception features stored in the feature pool, input those features into the second module to acquire the second environment perception feature output by that module, and store it in the feature pool.
Optionally, the acquisition module 502 is further configured to:
determine the target parent node module corresponding to the parent node of the second module;
and determine, from the environment perception features stored in the feature pool, the target environment perception feature output by the target parent node module, and take it as the input feature corresponding to the second module.
Optionally, the acquisition module 502 is further configured to:
determine third modules from the second modules, wherein the third modules are the second modules that have no child node;
and determine the environment perception result according to the second environment perception features output by the third modules.
With this apparatus, interaction among the plurality of environment perception modules is realized through the target configuration information and the feature pool, decoupling the modules from one another; this reduces the complexity of constructing the environment perception model, makes the constructed model higher in quality, and improves the safety of autonomous driving of the vehicle.
The specific manner in which each module performs its operations in the apparatus of the above embodiment has been described in detail in the method embodiments and will not be repeated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the vehicle control method provided by the present disclosure.
The apparatus may be a stand-alone electronic device or part of one. For example, in one embodiment the apparatus may be an integrated circuit (IC) or a chip, where the integrated circuit may be a single IC or a collection of ICs; the chip may include, but is not limited to, a GPU (Graphics Processing Unit), CPU (Central Processing Unit), FPGA (Field-Programmable Gate Array), DSP (Digital Signal Processor), ASIC (Application-Specific Integrated Circuit), SoC (System on Chip), and so on. The integrated circuit or chip may execute executable instructions (or code) to implement the vehicle control method described above. The executable instructions may be stored on the integrated circuit or chip, or retrieved from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory and, when executed by the processor, implement the vehicle control method described above; alternatively, the integrated circuit or chip may receive executable instructions through the interface and transmit them to the processor for execution to implement the vehicle control method described above.
Referring to FIG. 6, FIG. 6 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may obtain environmental information of its surroundings through the perception system 620 and derive an autonomous driving strategy based on an analysis of that information to achieve fully autonomous driving, or present the analysis results to the user to achieve partially autonomous driving.
The vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of vehicle 600 may be interconnected via wires or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that communicates wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or may use other wireless protocols such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short-range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and audio equipment. Through the entertainment system a user can listen to broadcasts or play music in the vehicle; alternatively, a mobile phone can communicate with the vehicle and mirror its screen on the display device. The display device may be touch-sensitive, so the user can operate it by touching the screen.
In some cases, the user's voice signal may be acquired through the microphone, and certain controls of the vehicle 600 may be implemented based on an analysis of that signal, such as adjusting the temperature inside the vehicle. In other cases, music may be played to the user through the speakers.
The navigation system 613 may include a map service provided by a map provider to supply a navigation route for the vehicle 600, and may be used together with the global positioning system 621 and the inertial measurement unit 622 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the perception system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 622, a lidar 623, a millimeter-wave radar 624, an ultrasonic radar 625, and a camera 626. The perception system 620 may also include sensors that monitor the internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification is a critical function of the safe operation of the vehicle 600.
The global positioning system 621 is used to estimate the geographic location of the vehicle 600.
The inertial measurement unit 622 is configured to sense a change in the pose of the vehicle 600 based on inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
The lidar 623 uses a laser to sense objects in the environment in which the vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, millimeter-wave radar 624 may be used to sense the speed and/or heading of an object in addition to sensing the object.
The ultrasonic radar 625 may utilize ultrasonic signals to sense objects around the vehicle 600.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. It may include a monocular camera, a binocular camera, a structured-light camera, a panoramic camera, and the like, and the image information it acquires may comprise still images or video streams.
The decision control system 630 includes a computing system 631 that makes analysis decisions based on information acquired by the perception system 620. The decision control system 630 also includes a vehicle controller 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634, and a braking system 635 for controlling the vehicle 600.
The computing system 631 may be operable to process and analyze the various information acquired by the perception system 620 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 600. The targets may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, structure-from-motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the acquired information and derive a control strategy for the vehicle.
The vehicle controller 632 may be configured to coordinate control of the power battery and the engine 641 of the vehicle to enhance the power performance of the vehicle 600.
The steering system 633 is operable to adjust the direction of travel of the vehicle 600; for example, in one embodiment it may be a steering wheel system.
Throttle 634 is used to control the operating speed of engine 641 and thereby the speed of vehicle 600.
The braking system 635 is used to control deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheels 644. In some embodiments, the braking system 635 may convert kinetic energy of the wheels 644 into electrical current. The braking system 635 may take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered movement of the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transfer mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft; in one embodiment it may also include other devices, such as a clutch. The drive shaft may include one or more axles that may be coupled to one or more of the wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one processor 651, and the processor 651 may execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 651 may include, for example, a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 6 functionally illustrates the processor, memory, and other elements of the computer in the same block, it will be understood by those of ordinary skill in the art that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer. Thus, a reference to a processor or computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the calculations related to that component's function.
In the presently disclosed embodiments, the processor 651 may perform the vehicle control methods described above.
In various aspects described herein, the processor 651 can be located remotely from and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle and others are performed by a remote processor, including taking the necessary steps to perform a single maneuver.
In some embodiments, the memory 652 may contain instructions 653 (e.g., program logic) that may be executed by the processor 651 to perform various functions of the vehicle 600. The memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, and the drive system 640.
In addition to instructions 653, memory 652 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control the functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, computing platform 650 is operable to provide control over many aspects of vehicle 600 and its subsystems.
Alternatively, one or more of these components may be mounted separately from or associated with vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
The components above are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 6 should not be construed as limiting the embodiments of the present disclosure.
An autonomous car traveling on a road, such as the vehicle 600 above, may identify objects within its surrounding environment to determine adjustments to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently and based on its respective characteristics, such as its current speed, acceleration, spacing from the vehicle, etc., may be used to determine the speed at which the autonomous car is to adjust.
Alternatively, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., the computing system 631 or the computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Because the behaviors of the identified objects depend on one another, all of the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 600 can adjust its speed based on the predicted behavior of the identified objects. In other words, the autonomous car can determine what stable state the vehicle needs to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of an object. Other factors may also be considered in this process to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road on which it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on a roadway).
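For illustration only (the disclosure itself contains no code): the speed-adjustment logic described above can be sketched as a small decision routine in Python. The `IdentifiedObject` record, the gap thresholds, and the `plan_adjustment` helper are all assumptions introduced here, not names from the disclosure.

```python
from dataclasses import dataclass
from enum import Enum


class Maneuver(Enum):
    ACCELERATE = "accelerate"
    DECELERATE = "decelerate"
    STOP = "stop"


@dataclass
class IdentifiedObject:
    gap: float              # m, spacing between the object and the vehicle
    predicted_speed: float  # m/s, speed predicted from the object's behavior


def plan_adjustment(ego_speed: float, objects: list, min_gap: float = 10.0) -> Maneuver:
    """Choose a coarse maneuver from the predicted behavior of nearby objects."""
    for obj in objects:
        if obj.gap <= 0.3 * min_gap:
            return Maneuver.STOP          # the gap has nearly vanished
        if obj.gap <= min_gap and obj.predicted_speed < ego_speed:
            return Maneuver.DECELERATE    # slower object inside the safety gap
    return Maneuver.ACCELERATE


print(plan_adjustment(15.0, [IdentifiedObject(gap=9.0, predicted_speed=7.0)]))
# -> Maneuver.DECELERATE
```

A production system would of course fold in the additional factors mentioned above (lateral position, road curvature, proximity of static and dynamic objects) rather than a single gap threshold.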
The vehicle 600 may be any of various types of vehicles, such as a car, a truck, a motorcycle, a bus, a ship, an airplane, a helicopter, a recreational vehicle, a train, and the like; embodiments of the present disclosure are not particularly limited in this regard.
In another exemplary embodiment, a computer program product is also provided. The computer program product comprises a computer program executable by a programmable apparatus, and the computer program has code portions for performing the above-described vehicle control method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A vehicle control method characterized by comprising:
collecting an environment image of the surrounding environment of the vehicle;
acquiring an environment sensing result through the environment image and an environment sensing model;
controlling the vehicle to travel according to the environment sensing result;
wherein the environment sensing model is pre-built by:
determining a plurality of environment sensing modules and target configuration information for constructing the environment sensing model, wherein the target configuration information comprises an association relation among the plurality of environment sensing modules;
constructing the environment sensing model through the plurality of environment sensing modules, the target configuration information, and a feature pool, wherein the feature pool is used for storing environment sensing features output by each environment sensing module during autonomous driving of the vehicle;
wherein determining the plurality of environment sensing modules and the target configuration information for constructing the environment sensing model comprises:
determining the plurality of environment sensing modules from a plurality of preset neural network modules according to preset model parameters corresponding to the environment sensing model;
and generating the target configuration information according to the plurality of environment sensing modules.
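For illustration only (not part of the claims): a minimal Python sketch of the build step recited above, under the assumption that the preset neural network modules live in a registry keyed by name and that the preset model parameters simply name the modules to use. All identifiers (`PRESET_MODULES`, `build_perception_model`, the PyTorch layers chosen) are illustrative.

```python
import torch.nn as nn

# Assumed registry of preset neural network modules.
PRESET_MODULES = {
    "backbone": lambda: nn.Conv2d(3, 16, 3, padding=1),
    "lane_head": lambda: nn.Conv2d(16, 4, 1),
    "obstacle_head": lambda: nn.Conv2d(16, 8, 1),
}


def build_perception_model(model_params: dict):
    # Determine the environment sensing modules from the preset registry
    # according to the preset model parameters.
    modules = {name: PRESET_MODULES[name]() for name in model_params["modules"]}
    # Target configuration information: the association relation among modules.
    config = {"association": model_params["association"]}
    # Feature pool: stores each module's output during autonomous driving.
    feature_pool: dict = {}
    return modules, config, feature_pool


modules, config, pool = build_perception_model({
    "modules": ["backbone", "lane_head", "obstacle_head"],
    "association": {"backbone": [], "lane_head": ["backbone"],
                    "obstacle_head": ["backbone"]},
})
```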
2. The method of claim 1, wherein generating the target configuration information according to the plurality of environment sensing modules comprises:
determining a parent node module corresponding to a parent node of each environment sensing module to obtain a module association relation of the environment sensing model;
acquiring preset position information for storing the environment image;
and generating the target configuration information according to the module association relation and the preset position information.
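Again for illustration only: claim 2's configuration step amounts to recording each module's parent(s) as the module association relation and pairing that with the preset location where the environment image is stored. The helper below is an assumed sketch, not the patented implementation.

```python
def generate_target_config(module_parents: dict, image_location: str = "env_image") -> dict:
    # Module association relation: each module mapped to its parent node module(s);
    # an empty list marks a module that will read the raw environment image.
    association = {name: parents for name, parents in module_parents.items()}
    # Combine the association relation with the preset position information
    # for storing the environment image.
    return {"association": association, "image_location": image_location}


config = generate_target_config(
    {"backbone": [], "lane_head": ["backbone"], "obstacle_head": ["backbone"]})
```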
3. The method of claim 1, wherein the environment sensing features comprise first environment sensing features and second environment sensing features, and wherein acquiring the environment sensing result through the environment image and the environment sensing model comprises:
determining a first module and a second module from the plurality of environment sensing modules, wherein the first module comprises an environment sensing module, among the plurality of environment sensing modules, that has no parent node, and the second module comprises an environment sensing module, among the plurality of environment sensing modules, other than the first module;
acquiring the first environment sensing features output by the first module according to the environment image, and acquiring the second environment sensing features output by the second module according to the first environment sensing features;
and determining the environment sensing result according to the second environment sensing features.
4. The method of claim 3, wherein acquiring the first environment sensing features output by the first module according to the environment image and acquiring the second environment sensing features output by the second module according to the first environment sensing features comprises:
for each first module, inputting the environment image into the first module to acquire the first environment sensing features output by the first module, and storing the first environment sensing features into the feature pool;
and for each second module, determining input features corresponding to the second module from a plurality of environment sensing features stored in the feature pool, inputting the input features into the second module to acquire the second environment sensing features output by the second module, and storing the second environment sensing features into the feature pool.
5. The method of claim 4, wherein determining the input features corresponding to the second module from the plurality of environment sensing features stored in the feature pool comprises:
determining a target parent node module corresponding to a parent node of the second module;
and determining target environment sensing features output by the target parent node module from the plurality of environment sensing features stored in the feature pool, and taking the target environment sensing features as the input features corresponding to the second module.
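For illustration only, a sketch covering claims 3 through 5: first modules (no parent node) consume the environment image and write their features to the feature pool; second modules look up their parent node module's features in the pool, run, and write their own output back. Modules are stand-in callables here, and all names are assumptions.

```python
def run_perception(modules: dict, association: dict, env_image) -> dict:
    pool = {"env_image": env_image}          # the feature pool
    pending = dict(modules)
    while pending:
        ran = False
        for name in list(pending):
            # First modules read the image; second modules read parent features.
            keys = association[name] or ["env_image"]
            if all(k in pool for k in keys):
                inputs = [pool[k] for k in keys]
                pool[name] = pending.pop(name)(*inputs)  # store output in pool
                ran = True
        if not ran:
            raise ValueError("association relation has a cycle or missing parent")
    return pool


pool = run_perception(
    modules={"backbone": lambda img: img + 1, "head": lambda f: f * 2},
    association={"backbone": [], "head": ["backbone"]},
    env_image=1,
)
print(pool["head"])  # -> 4
```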
6. The method of claim 3, wherein determining the environment sensing result according to the second environment sensing features comprises:
determining a third module from the second modules, wherein the third module comprises a second module, among the second modules, that has no child node;
and determining the environment sensing result according to the second environment sensing features output by the third module.
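One last illustrative sketch, for claim 6: the third modules are the second modules that no other module lists as a parent (i.e., they have no child node), and the environment sensing result is read from their pooled outputs. Names remain assumptions.

```python
def perception_result(association: dict, pool: dict) -> dict:
    all_parents = {p for parents in association.values() for p in parents}
    # Third modules: second modules (those with a parent) that have no child node.
    third = [m for m, parents in association.items()
             if parents and m not in all_parents]
    return {m: pool[m] for m in third}


print(perception_result({"backbone": [], "head": ["backbone"]},
                        {"env_image": 1, "backbone": 2, "head": 4}))
# -> {'head': 4}
```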
7. A vehicle control apparatus characterized by comprising:
a collection module configured to collect an environment image of the surrounding environment of a vehicle;
an acquisition module configured to acquire an environment sensing result through the environment image and an environment sensing model;
a control module configured to control the vehicle to travel according to the environment sensing result;
wherein the environment sensing model is pre-built by:
determining a plurality of environment sensing modules and target configuration information for constructing the environment sensing model, wherein the target configuration information comprises an association relation among the plurality of environment sensing modules;
constructing the environment sensing model through the plurality of environment sensing modules, the target configuration information, and a feature pool, wherein the feature pool is used for storing environment sensing features output by each environment sensing module during autonomous driving of the vehicle;
wherein determining the plurality of environment sensing modules and the target configuration information for constructing the environment sensing model comprises:
determining the plurality of environment sensing modules from a plurality of preset neural network modules according to preset model parameters corresponding to the environment sensing model;
and generating the target configuration information according to the plurality of environment sensing modules.
8. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
perform the method of any one of claims 1-6.
9. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-6.
10. A chip, comprising a processor and an interface; the processor is configured to read instructions to perform the method of any of claims 1-6.
CN202210786945.5A 2022-07-04 2022-07-04 Vehicle control method, device, vehicle, storage medium and chip Active CN115056784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210786945.5A CN115056784B (en) 2022-07-04 2022-07-04 Vehicle control method, device, vehicle, storage medium and chip

Publications (2)

Publication Number Publication Date
CN115056784A (en) 2022-09-16
CN115056784B (en) 2023-12-05

Family

ID=83204288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210786945.5A Active CN115056784B (en) 2022-07-04 2022-07-04 Vehicle control method, device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN115056784B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116985840A (en) * 2022-09-27 2023-11-03 Tencent Cloud Computing (Beijing) Co., Ltd. Vehicle control method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014118178A1 (en) * 2013-01-30 2014-08-07 Bayerische Motoren Werke Aktiengesellschaft Creation of an environment model for a vehicle
CN106845547A (en) * 2017-01-23 2017-06-13 Chongqing University of Posts and Telecommunications Intelligent automobile positioning and road marking recognition system and method based on a camera
CN108388834A (en) * 2017-01-24 2018-08-10 Ford Global Technologies LLC Object detection using recurrent neural network and concatenated feature map
CN110531754A (en) * 2018-05-24 2019-12-03 GM Global Technology Operations LLC Control system, control method and controller for an autonomous vehicle
WO2020224761A1 (en) * 2019-05-06 2020-11-12 Zenuity Ab Automated map making and positioning
CN112783506A (en) * 2021-01-29 2021-05-11 Spreadtrum Communications (Shanghai) Co., Ltd. Model operation method and related device
CN114240816A (en) * 2022-02-24 2022-03-25 Momenta (Suzhou) Technology Co., Ltd. Road environment sensing method and device, storage medium, electronic equipment and vehicle
CN114255351A (en) * 2022-02-28 2022-03-29 Momenta (Suzhou) Technology Co., Ltd. Image processing method, device, medium, equipment and driving system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11378956B2 (en) * 2018-04-03 2022-07-05 Baidu Usa Llc Perception and planning collaboration framework for autonomous driving

Also Published As

Publication number Publication date
CN115056784A (en) 2022-09-16

Similar Documents

Publication Title
WO2021027568A1 (en) Obstacle avoidance method and device
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN115330923B (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
CN115123257B (en) Pavement deceleration strip position identification method and device, vehicle, storage medium and chip
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN115056784B (en) Vehicle control method, device, vehicle, storage medium and chip
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN115202234B (en) Simulation test method and device, storage medium and vehicle
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN114782638B (en) Method and device for generating lane line, vehicle, storage medium and chip
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115205179A (en) Image fusion method and device, vehicle and storage medium
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN115334109A (en) System architecture, transmission method, vehicle, medium and chip for traffic signal identification
CN114828131A (en) Communication method, medium, vehicle-mounted communication system, chip and vehicle
CN115179930B (en) Vehicle control method and device, vehicle and readable storage medium
CN115407344B (en) Grid map creation method, device, vehicle and readable storage medium
CN115063639B (en) Model generation method, image semantic segmentation device, vehicle and medium
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN115139946B (en) Vehicle falling water detection method, vehicle, computer readable storage medium and chip
CN115649165B (en) Vehicle starting control method and device, vehicle and storage medium
CN115147794B (en) Lane line determining method, lane line determining device, vehicle, medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant