CN115649184A - Vehicle control instruction generation method, device and equipment

Vehicle control instruction generation method, device and equipment

Info

Publication number
CN115649184A
Application number
CN202210632826.4A
Authority
CN (China)
Prior art keywords
perception information
target
lane
map data
information
Legal status
Pending
Other languages
Chinese (zh)
Inventor
谭业辉
Current Assignee / Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202210632826.4A
Publication of CN115649184A

Abstract

The disclosure provides a method, device, equipment, medium and product for generating a vehicle control instruction, and relates to the field of artificial intelligence, in particular to the technical fields of intelligent transportation and unmanned driving. The specific implementation scheme includes the following steps: in response to acquired environment perception information of the vehicle driving environment, determining local map data matched with the environment perception information; screening the environment perception information according to the road topology features and lane topology features indicated by the local map data to obtain target perception information; and generating a vehicle control instruction based on the target perception information.

Description

Vehicle control instruction generation method, device and equipment
Technical Field
The present disclosure relates to the field of artificial intelligence, more particularly to the technical fields of intelligent transportation and unmanned driving, and can be applied to scenarios in which vehicle control instructions are generated.
Background
Generating vehicle control commands is essential for ensuring that a vehicle drives safely and efficiently. In some scenarios, however, the generation process imposes a heavy computational burden and yields commands of poor accuracy.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, medium, and article of manufacture for generating vehicle control commands.
According to an aspect of the present disclosure, there is provided a method of generating a vehicle control instruction, including: in response to the acquired environment perception information of the vehicle running environment, determining local map data matched with the environment perception information; screening the environment perception information according to road topological features and lane topological features indicated by the local map data to obtain target perception information; and generating a vehicle control instruction based on the target perception information.
According to another aspect of the present disclosure, there is provided a vehicle control instruction generation device, including: a first processing module configured to determine, in response to acquired environment perception information of a vehicle driving environment, local map data matched with the environment perception information; a second processing module configured to screen the environment perception information according to the road topology features and lane topology features indicated by the local map data to obtain target perception information; and a third processing module configured to generate a vehicle control instruction based on the target perception information.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the vehicle control instruction generation methods described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of generating vehicle control instructions of any one of the above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of generating vehicle control instructions of any of the above.
According to another aspect of the present disclosure, a cloud control platform is provided, which includes the electronic device described above, and the electronic device is configured to execute the vehicle control instruction generation method described in any one of the above.
According to another aspect of the present disclosure, there is provided an unmanned vehicle comprising an electronic device according to any one of the above, the electronic device being configured to perform the method of generating vehicle control commands.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically shows a system architecture of a method and apparatus for generating vehicle control commands according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of generating vehicle control commands according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of a method of generating vehicle control commands according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a generation process of vehicle control commands according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a diagram of a screening process of environment perception information according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a process of determining candidate lane sections according to an embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of a vehicle control instruction generation apparatus according to an embodiment of the present disclosure;
fig. 8 schematically shows a block diagram of an electronic device for executing a method of generating vehicle control instructions according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
An embodiment of the present disclosure provides a method for generating a vehicle control instruction. The method includes: in response to acquired environment perception information of a vehicle driving environment, determining local map data matched with the environment perception information; screening the environment perception information according to the road topology features and lane topology features indicated by the local map data to obtain target perception information; and generating a vehicle control command based on the target perception information.
Fig. 1 schematically shows a system architecture of a method and apparatus for generating vehicle control commands according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
The system architecture 100 according to this embodiment may include a data collection side 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between data collection end 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The server 103 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud computing, network services, middleware services, and the like.
The data acquisition terminal 101 communicates with the server 103 via the network 102 to receive or transmit data and the like. The data collection terminal 101 may be used to collect environment awareness information of a driving environment of the vehicle.
The server 103 may be a server providing various services, and the server 103 may be integrated in an autonomous vehicle, or may be disposed at a remote end capable of establishing communication with a vehicle-mounted terminal, for example, a cloud-controlled platform server (for example only) that processes environment awareness information. The server 103 may generate a vehicle control instruction for controlling the vehicle to travel based on the acquired environment awareness information.
For example, the server 103 determines local map data matching the environment perception information in response to the environment perception information of the vehicle running environment acquired from the data acquisition terminal 101, screens the environment perception information according to road topology features and lane topology features indicated by the local map data to obtain target perception information, and generates a vehicle control instruction based on the target perception information.
It should be noted that the method for generating the vehicle control instruction provided in the embodiment of the present disclosure may be executed by the server 103. Accordingly, the vehicle control instruction generation device provided by the embodiment of the present disclosure may be provided in the server 103. The method for generating the vehicle control instruction provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 103 and can communicate with the server 103. Correspondingly, the vehicle control instruction generating device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster that is different from the server 103 and is capable of communicating with the server 103.
It should be understood that the number of data collection terminals, networks, and servers in fig. 1 is merely illustrative. There may be any number of data collection terminals, networks, and servers, as desired for implementation.
The embodiment of the present disclosure provides a method for generating a vehicle control command, and the method for generating a vehicle control command according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2 to 3 in conjunction with the system architecture of fig. 1.
Fig. 2 schematically shows a flowchart of a method of generating vehicle control commands according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 for generating a vehicle control instruction according to the embodiment of the present disclosure may include, for example, operations S210 to S230.
In operation S210, local map data matching the environment perception information is determined in response to the acquired environment perception information of the vehicle driving environment.
In operation S220, the environment awareness information is screened according to the road topology characteristics and the lane topology characteristics indicated by the local map data, so as to obtain target awareness information.
In operation S230, a vehicle control command is generated based on the target perception information.
An example flow of each operation of the vehicle control instruction generation method of this embodiment is described below.
For example, the environmental perception information of the driving environment of the vehicle may be acquired through the data acquisition terminal. The data acquisition end may include, for example, a GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit) and an environment sensing module, which may include, for example, a laser radar, a millimeter wave radar, a vehicle-mounted camera, a wheel speed sensor, and the like.
The GNSS/IMU may be used to obtain the absolute position and attitude information of the target vehicle, and the environment sensing module may be used to obtain information such as the relative position, relative speed, and shape of each obstacle. A coordinate-system conversion can then be performed using the absolute position and attitude information of the target vehicle to obtain information such as the absolute position and absolute speed of each obstacle.
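As a concrete illustration of this coordinate-system conversion, the following sketch (with hypothetical names; not taken from the disclosure) rotates an obstacle's relative position and velocity by the vehicle's heading and adds the vehicle's absolute state. It assumes a planar 2D pose and neglects the vehicle's yaw rate.

```python
import math

def obstacle_to_absolute(vehicle_x, vehicle_y, vehicle_heading,
                         vehicle_vx, vehicle_vy,
                         rel_x, rel_y, rel_vx, rel_vy):
    """Convert an obstacle's position and velocity from the vehicle frame
    (x forward, y left) to the world frame, using the vehicle pose
    obtained from GNSS/IMU. vehicle_heading is the yaw angle in radians."""
    cos_h, sin_h = math.cos(vehicle_heading), math.sin(vehicle_heading)
    # Rotate the relative position into the world frame, then translate.
    abs_x = vehicle_x + cos_h * rel_x - sin_h * rel_y
    abs_y = vehicle_y + sin_h * rel_x + cos_h * rel_y
    # Velocities rotate the same way; the vehicle's own velocity is added
    # (the yaw-rate contribution is neglected in this simplified sketch).
    abs_vx = vehicle_vx + cos_h * rel_vx - sin_h * rel_vy
    abs_vy = vehicle_vy + sin_h * rel_vx + cos_h * rel_vy
    return abs_x, abs_y, abs_vx, abs_vy
```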
The obstacles may include other vehicles in the driving environment of the target vehicle, which may be regarded as dynamic obstacles, for example other vehicles within the range covered by the target vehicle's environment sensing at the time the environmental data is acquired.
In one example approach, a target mapping point of the current vehicle position onto the global map data may be determined according to the current vehicle position indicated by the environment perception information. Taking the target mapping point as a center point, the map data within a preset range of the center point in the global map data is used as the local map data matched with the environment perception information. The global map data may be, for example, high-precision map data providing lane-level navigation information, and the current position of the vehicle may be indicated by, for example, the latitude and longitude coordinates of the vehicle's location.
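A minimal sketch of how the local map data could be extracted around the target mapping point, assuming the global map is available as a flat list of elements with reference coordinates (the data layout is an assumption for illustration, not the disclosure's format):

```python
import math

def extract_local_map(global_map_elements, center_x, center_y, preset_range):
    """Return the map elements (road segments, lane sections, ...) whose
    reference point lies within `preset_range` metres of the target mapping
    point taken as the center point."""
    local_map = []
    for element in global_map_elements:
        distance = math.hypot(element["x"] - center_x, element["y"] - center_y)
        if distance <= preset_range:
            local_map.append(element)
    return local_map
```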
The environment perception information is screened according to the road topology features and lane topology features indicated by the local map data to obtain the target perception information. For example, the environment perception information may first be screened at the road level according to the road topology features indicated by the local map data to obtain candidate perception information. The candidate perception information may then be screened at the lane level according to the lane topology features indicated by the local map data to obtain the target perception information.
A vehicle control instruction for controlling the vehicle to drive is then generated based on the target perception information. The vehicle control instruction may, for example, instruct the vehicle to track a sequence of path points over a target time period and indicate the speed information corresponding to each path point in the sequence. The target time period may be, for example, a preset time period whose start time is the current time.
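A possible data layout for such a control instruction is sketched below; the field names and the default duration are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VehicleControlInstruction:
    # Path points (x, y) the vehicle should track within the target time period.
    path_points: List[Tuple[float, float]]
    # Target speed (m/s) associated with each path point.
    speeds: List[float]
    # Start time of the target time period (here: the current time) and its length in seconds.
    start_time: float
    duration: float

def build_control_instruction(path_points, speeds, start_time, duration=5.0):
    """Assemble a control instruction from a planned path-point sequence."""
    assert len(path_points) == len(speeds), "one speed value per path point"
    return VehicleControlInstruction(path_points, speeds, start_time, duration)
```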
According to the embodiment of the disclosure, in response to acquired environment perception information of the vehicle driving environment, local map data matched with the environment perception information is determined, the environment perception information is screened according to the road topology features and lane topology features indicated by the local map data to obtain target perception information, and a vehicle control instruction is generated based on the target perception information. This effectively ensures the accuracy of the generated vehicle control instruction, helps reduce the computational burden of generating it, provides credible decision support for driving assistance control, and effectively ensures the safe driving of the unmanned vehicle.
Fig. 3 schematically shows a schematic diagram of a method of generating a vehicle control instruction according to another embodiment of the present disclosure.
As shown in FIG. 3, method 300 may include operations S310-S340, for example.
In operation S310, in response to the acquired environment awareness information of the vehicle running environment, local map data matching the environment awareness information is determined.
In operation S320, the environment awareness information is subjected to a road-level-based screening process according to the road topology characteristics indicated by the local map data, resulting in candidate awareness information.
In operation S330, a screening process based on a lane hierarchy is performed on the candidate perception information according to the topological feature of the lane indicated by the local map data, resulting in target perception information.
In operation S340, a vehicle control command is generated based on the target perception information.
An example flow of each operation of the vehicle control instruction generation method of this embodiment is described below.
The environment perception information may include, for example, vehicle-side perception information, roadside perception information, and cloud perception information. The environment perception information may indicate the current position of the vehicle, for example as the latitude and longitude coordinates of the vehicle's location. Local map data matched with the environment perception information is determined according to the current position of the vehicle; the local map data includes the map data corresponding to a preset range from the current vehicle position in the global map data. The global map data may be, for example, high-precision map data, which may provide road semantic information at the lane level.
The target road segment where the vehicle is located within the preset range may be determined according to the current vehicle position indicated by the environment perception information. Candidate road segments having a connection relation with the target road segment within the preset range are then determined according to the road topology features indicated by the local map data. The perception information associated with the target road segment and the candidate road segments is selected from the environment perception information to obtain the candidate perception information. The connection relation may include at least one of the following relations: an intersection relation, a split relation, and an import relation.
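A sketch of the road-level screening step under assumed data structures: each perception object has already been matched to a road segment identifier, and the local map supplies the identifiers of the segments connected to the target segment.

```python
def road_level_filter(perception_objects, target_segment_id, connected_segment_ids):
    """Keep only perception objects located on the target road segment or on
    segments that have a connection relation (intersection, split, import)
    with it. The per-object `segment_id` field is an assumption for illustration."""
    kept_segments = {target_segment_id, *connected_segment_ids}
    return [obj for obj in perception_objects if obj["segment_id"] in kept_segments]
```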
Screening the environment perception information at the road level according to the road topology features indicated by the local map data can effectively reduce the computational burden of generating a vehicle control instruction, effectively improve the adaptability of driving assistance control in complex driving scenarios, and provide reliable data support for driving assistance control.
Similarly, the target lane section where the vehicle is located within the preset range may be determined according to the current vehicle position indicated by the environment perception information. A candidate lane section having a lane-change relation with the target lane section within the preset range is determined according to the lane topology features indicated by the local map data. The perception information associated with the target lane section and the candidate lane section is then selected from the candidate perception information to obtain the target perception information.
The lane-change relation is determined by, for example, the lane section position and the lane boundary type. A lane section having a lane-change relation may be, for example, an adjacent lane section. The lane boundary type of a lane section determines whether a lane change is allowed for that section: if the lane boundary line on a given side is a dashed line, a lane change towards that side is permitted; if the boundary line on that side is a solid line, a lane change towards that side is prohibited.
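The lane-change check described above could look like the following sketch, assuming each lane section records its left/right boundary type and its neighbouring sections (the field names are hypothetical):

```python
def lane_change_allowed(lane_section, side):
    """Return True if a lane change towards `side` ('left' or 'right') is
    allowed: a dashed boundary permits the change, a solid boundary forbids it."""
    return lane_section[f"{side}_boundary"] == "dashed"

def candidate_lane_sections(lane_sections, target_id):
    """Collect adjacent lane sections of the target section into which a lane
    change is allowed; `lane_sections` maps section ids to attribute dicts."""
    target = lane_sections[target_id]
    candidates = []
    for side in ("left", "right"):
        neighbour_id = target.get(f"{side}_neighbour")
        if neighbour_id is not None and lane_change_allowed(target, side):
            candidates.append(neighbour_id)
    return candidates
```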
Screening the candidate perception information at the lane level according to the lane topology features indicated by the local map data further reduces the computational burden of generating a vehicle control instruction, can effectively reduce the cost of driving assistance control, and helps ensure the safe driving of the unmanned vehicle.
For example, a candidate lane section having a lane-change relation with the target lane section within the preset range may be determined according to the lane attribute features indicated by the local map data. The lane attribute features may include, for example, a lane direction feature and a lane type feature. The lane direction feature may indicate the permitted traffic direction of a lane section, for example two-way traffic, forward traffic, reverse traffic, or two-way no traffic. The lane type feature may indicate the type of the lane section, for example a general lane, an entrance lane, an exit lane, a connecting lane, a parking lane, or a U-turn lane.
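These lane attribute features could be modelled with simple enumerations such as the following (the values mirror the examples listed above; the class names are assumptions):

```python
from enum import Enum

class LaneDirection(Enum):
    TWO_WAY = "two-way traffic"
    FORWARD = "forward traffic"
    REVERSE = "reverse traffic"
    NO_TRAFFIC = "two-way no traffic"

class LaneType(Enum):
    GENERAL = "general lane"
    ENTRANCE = "entrance lane"
    EXIT = "exit lane"
    CONNECTION = "connecting lane"
    PARKING = "parking lane"
    U_TURN = "U-turn lane"
```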
In one example, a local path may be planned based on the target perception information to obtain a locally planned path, and the vehicle control command may be generated based on the locally planned path. For example, a sequence of path points over a target time period and the speed information corresponding to each path point in the sequence may be determined from the target perception information, and a vehicle control command may be generated based on the path point sequence and the speed information. The path point sequence and the corresponding speed information may constitute the locally planned path.
Generating a vehicle control command for controlling the vehicle based on the target perception information can effectively improve the generation efficiency of the command and reduce the computing resources consumed in generating it, and can effectively improve the real-time performance of driving assistance control in a structured environment while ensuring the safe driving of the unmanned vehicle.
In another example, local path planning may be performed based on the target perception information to obtain a locally planned path. Taking the locally planned path as reference path information, the globally planned path is adjusted according to the reference path information to obtain an adjusted planned path, and a vehicle control command is generated based on the adjusted planned path. The locally planned path includes a sequence of path points over a target time period and the speed information corresponding to each path point in the sequence; the globally planned path is a road-level planned path from a starting path point to a target path point.
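One hedged way to realise this adjustment is to splice the locally planned path into the stretch of the globally planned path it covers; the nearest-point matching below is an assumed criterion for illustration, not the disclosure's method.

```python
import math

def adjust_global_path(global_path, local_path):
    """Replace the portion of the global (road-level) path covered by the
    locally planned path with the local path points. Both paths are lists of
    (x, y) tuples; the local path is assumed to start and end close to the
    global path, and speeds are handled separately."""
    def nearest_index(path, point):
        return min(range(len(path)),
                   key=lambda i: math.hypot(path[i][0] - point[0],
                                            path[i][1] - point[1]))
    start_idx = nearest_index(global_path, local_path[0])
    end_idx = nearest_index(global_path, local_path[-1])
    # Keep the global path before and after the locally replanned stretch.
    return global_path[:start_idx] + list(local_path) + global_path[end_idx + 1:]
```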
Screening the environment perception information according to the road topology features and lane topology features indicated by the local map data to obtain target perception information, and generating a vehicle control instruction based on the target perception information, effectively ensures the reasonableness of the generated instruction and helps ensure the safe driving of the unmanned vehicle. It also improves the generation efficiency of the instruction and the real-time performance of driving assistance control in a structured environment, reduces the computational burden of generating the instruction, helps reduce the scale of the unmanned driving system, and helps improve the adaptability of driving assistance control in complex driving scenarios.
Fig. 4 schematically shows a schematic diagram of a generation process of a vehicle control instruction according to an embodiment of the present disclosure.
As shown in fig. 4, the unmanned system may include a decision module 401, a map data module 402, a context awareness module 403, and a vehicle control module 404. The map data module 402 and the environment sensing module 403 may perform information interaction with the decision module 401 through a V2X vehicle communication module, respectively. The decision module 401 may send a vehicle control instruction to the vehicle control module 404 through an on-vehicle CAN (Controller Area Network) bus.
For example, the environment awareness module 403 may acquire environment awareness information of the driving environment of the vehicle and send the environment awareness information to the decision module 401. The environment awareness information may include, for example, vehicle-side awareness information, road-side awareness information, cloud awareness information, and the like.
The decision module 401, in response to the obtained environment awareness information, determines local map data matching the environment awareness information according to the high-precision map data obtained from the map data module 402. For example, according to the current position of the vehicle indicated by the environment awareness information, the map data corresponding to a preset range from the current position of the vehicle in the high-precision map data is used as the local map data matched with the environment awareness information.
The decision module 401 screens the environment perception information at the road level according to the road topology features indicated by the local map data to obtain candidate perception information, then screens the candidate perception information at the lane level according to the lane topology features indicated by the local map data to obtain target perception information, and finally generates a vehicle control instruction based on the target perception information and sends it to the vehicle control module 404.
This can effectively reduce the computational burden and computing-resource consumption of generating the vehicle control command and reduce the cost of the unmanned driving system, and can improve the real-time performance of driving assistance control in a structured environment while ensuring the safe driving of the unmanned vehicle.
Fig. 5 schematically shows a diagram of a screening process of environment perception information according to an embodiment of the present disclosure.
As shown in fig. 5, the environment perception information 501 acquired by the environment perception module may be unstructured environment perception information. Local map data matching the environment perception information is determined based on the current position of the vehicle indicated by the environment perception information. The unstructured environment perception information 501 is then converted into structured environment perception information 502 according to the road topology features and lane topology features indicated by the local map data.
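The conversion from unstructured to structured perception information can be thought of as tagging each perceived object with the road segment and lane section it falls in. A sketch under an assumed map-matching helper (`local_map.match` is hypothetical, not an API of the disclosure):

```python
def structure_perception_info(perception_objects, local_map):
    """Attach road-segment and lane-section identifiers to each perceived
    object so that later screening can operate on road and lane topology.

    `local_map.match(x, y)` is assumed to return (segment_id, lane_id) for a
    world-frame position, or (None, None) when the position is unmapped."""
    structured = []
    for obj in perception_objects:
        segment_id, lane_id = local_map.match(obj["x"], obj["y"])
        structured.append({**obj, "segment_id": segment_id, "lane_id": lane_id})
    return structured
```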
The target road segment where the vehicle is located within the preset range is determined according to the current position of the target vehicle indicated by the structured environment perception information 502 (the target vehicle may be, for example, the vehicle in the dashed circle). Candidate road segments having a connection relation with the target road segment within the preset range are determined according to the road topology features. The perception information associated with other road segments that have no connection relation with the target road segment is filtered out of the structured environment perception information 502 (as shown at 503), and the perception information associated with the target road segment and the candidate road segments is retained to obtain the candidate perception information 504.
The target lane section where the vehicle is located within the preset range is then determined according to the current position of the target vehicle. A candidate lane section having a lane-change relation with the target lane section within the preset range is determined according to the lane topology features. The perception information associated with other lane sections that have no lane-change relation with the target lane section is filtered out of the candidate perception information 504 (as shown at 505), and the perception information associated with the target lane section and the candidate lane section is retained to obtain the target perception information 506.
Screening the environment perception information according to the road topology features and lane topology features indicated by the local map data effectively ensures the reasonableness of the generated vehicle control instruction, effectively reduces the computational burden of generating it, and helps improve the adaptability of driving assistance control in complex driving scenarios.
Fig. 6 schematically shows a process of determining a candidate lane section according to an embodiment of the present disclosure.
As shown in fig. 6, the target lane section where the vehicle is located within the preset range is determined to be lane section Lane2 according to the current position of the vehicle indicated by the environment perception information. Lane sections Lane1 and Lane3 are each adjacent to lane section Lane2, and the lane boundary types of lane sections Lane1 and Lane3 are dashed lines; a dashed lane boundary indicates that a lane change into the corresponding lane section is allowed.
Lane-change relations exist between lane section Lane1 and lane section Lane2 and between lane section Lane3 and lane section Lane2, respectively, whereas lane section Lane4 has no lane-change relation with lane section Lane2. Lane sections Lane1 and Lane3 therefore constitute the candidate lane sections matching lane section Lane2.
This can effectively reduce the computational burden of generating the vehicle control command, effectively reduce the cost of driving assistance control, and help improve the real-time performance of driving assistance control in a structured environment.
Fig. 7 schematically shows a block diagram of a vehicle control instruction generation apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, a vehicle control instruction generation device 700 according to an embodiment of the present disclosure includes, for example, a first processing module 710, a second processing module 720, and a third processing module 730.
A first processing module 710, configured to determine, in response to the acquired environment awareness information of the vehicle driving environment, local map data that matches the environment awareness information; the second processing module 720 is configured to screen the environmental awareness information according to the road topology features and the lane topology features indicated by the local map data, so as to obtain target awareness information; and a third processing module 730 for generating vehicle control instructions based on the target perception information.
According to the embodiment of the disclosure, in response to acquired environment perception information of the vehicle driving environment, local map data matched with the environment perception information is determined, the environment perception information is screened according to the road topology features and lane topology features indicated by the local map data to obtain target perception information, and a vehicle control instruction is generated based on the target perception information. This effectively ensures the accuracy of the generated vehicle control command, helps reduce the computational burden of generating it, provides credible decision support for driving assistance control, and effectively ensures the safe driving of the unmanned vehicle.
According to an embodiment of the present disclosure, a first processing module includes: a first processing sub-module for determining a target mapping point of the current position of the vehicle based on the global map data, according to the current position of the vehicle indicated by the environment awareness information; and the second processing submodule is used for taking the target mapping point as a central point and taking the map data corresponding to the preset range from the central point in the global map data as local map data matched with the environment perception information.
According to an embodiment of the present disclosure, the second processing module includes: the third processing submodule is used for carrying out screening processing based on road levels on the environment perception information according to the road topological characteristics indicated by the local map data to obtain candidate perception information; and the fourth processing submodule is used for screening the candidate perception information based on the lane level according to the topological characteristic of the lane indicated by the local map data to obtain the target perception information.
According to an embodiment of the present disclosure, the environment perception information indicates a current position of the vehicle, and the local map data includes map data corresponding to a preset range from a center point in the global map data; the third processing submodule includes: the first processing unit is used for determining a target road section where the vehicle is located in the preset range according to the current position of the vehicle; the second processing unit is used for determining candidate road sections having a connection relation with the target road section in the preset range based on the road topological characteristics; and a third processing unit, configured to filter, from the environment perception information, perception information based on the target road section and the candidate road sections to obtain candidate perception information, where the connection relation includes at least one of the following relations: an intersection relationship, a split relationship, and an import relationship.
According to an embodiment of the present disclosure, the fourth processing submodule includes: the fourth processing unit is used for determining a target lane section where the vehicle is located within the preset range according to the current position of the vehicle; the fifth processing unit is used for determining a candidate lane section having a lane change relation with the target lane section in the preset range based on the topological characteristic of the lane; and a sixth processing unit, configured to filter, from the candidate perception information, perception information based on the target lane section and the candidate lane section to obtain the target perception information, where the lane change relationship is determined by a lane section position and a lane boundary type.
According to an embodiment of the present disclosure, the third processing module includes: the fifth processing submodule is used for determining a path point sequence based on a target time period and speed information corresponding to the path points in the path point sequence according to the target perception information; and the sixth processing submodule is used for generating a vehicle control command based on the path point sequence and the speed information.
According to an embodiment of the present disclosure, the third processing module includes: the seventh processing sub-module is used for carrying out local path planning based on the target perception information to obtain reference path information; the eighth processing submodule is used for adjusting the global planned path according to the reference path information to obtain an adjusted planned path; and the ninth processing submodule is used for generating a vehicle control instruction based on the adjusted planned path, the reference path information comprises a path point sequence based on the target time period and speed information corresponding to the path points in the path point sequence, and the global planned path is a road-level planned path based on the initial path point and the target path point.
It should be noted that the technical solutions of the present disclosure, including the processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like, all comply with the regulations of the relevant laws and regulations, and do not violate the customs of the public order.
According to an embodiment of the present disclosure, an electronic device, a readable storage medium, and a computer program product are also provided.
According to an embodiment of the present disclosure, a cloud control platform is further provided, and the cloud control platform includes, for example, the electronic device described above. The electronic device includes at least one processor and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating vehicle control instructions described above.
According to an embodiment of the present disclosure, there is also provided an autonomous vehicle including, for example, the electronic device described above. The electronic device includes at least one processor and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating vehicle control instructions described above.
Fig. 8 schematically shows a block diagram of an electronic device for executing a method of generating vehicle control instructions according to an embodiment of the present disclosure.
FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. The electronic device 800 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 801 executes the respective methods and processes described above, such as the generation method of the vehicle control instruction. For example, in some embodiments, the method of generating vehicle control instructions may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 800 via ROM 802 and/or communications unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the vehicle control instruction generation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method of generating vehicle control instructions in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A method of generating vehicle control commands, comprising:
in response to the acquired environment perception information of the vehicle running environment, determining local map data matched with the environment perception information;
screening the environment perception information according to road topological features and lane topological features indicated by the local map data to obtain target perception information; and
generating a vehicle control instruction based on the target perception information.
2. The method of claim 1, wherein the determining, in response to the obtained environmental awareness information of the vehicle's driving environment, local map data that matches the environmental awareness information comprises:
determining a target mapping point of the vehicle current position based on global map data according to the vehicle current position indicated by the environment perception information; and
taking the target mapping point as a central point, and using the map data corresponding to a preset range from the central point in the global map data as the local map data matched with the environment perception information.
3. The method according to claim 1, wherein the screening the environmental awareness information according to road topology features and lane topology features indicated by the local map data to obtain target awareness information comprises:
screening the environmental perception information based on a road hierarchy according to road topological features indicated by the local map data to obtain candidate perception information; and
screening the candidate perception information based on lane levels according to the topological characteristics of the lanes indicated by the local map data to obtain the target perception information.
4. The method of claim 3, wherein,
the environment perception information indicates the current position of the vehicle, and the local map data comprises map data corresponding to a preset range from the central point in the global map data;
the screening processing based on the road hierarchy is performed on the environment perception information according to the road topological feature indicated by the local map data to obtain candidate perception information, and the screening processing comprises the following steps:
determining a target road section where the vehicle is located in the preset range according to the current position of the vehicle;
determining candidate road sections having a connection relation with the target road section in the preset range based on the road topological characteristics; and
screening perception information based on the target road segment and the candidate road segment from the environment perception information to obtain the candidate perception information,
wherein the connectivity relationship comprises at least one of: an intersection relationship, a split relationship, and an import relationship.
5. The method according to claim 4, wherein the performing lane-level-based screening processing on the candidate perception information according to the lane topology characteristics indicated by the local map data to obtain the target perception information comprises:
determining a target lane section where the vehicle is located within the preset range according to the current position of the vehicle;
determining a candidate lane section having a lane change relation with the target lane section in the preset range based on the lane topological characteristic; and
screening perception information based on the target lane section and the candidate lane section from the candidate perception information to obtain the target perception information,
wherein the lane change relationship is determined by a lane section position and a lane boundary type.
6. The method of any of claims 1-5, wherein the generating vehicle control instructions based on the target perception information comprises:
determining a path point sequence based on a target time period and speed information corresponding to path points in the path point sequence according to the target perception information; and
generating the vehicle control command based on the sequence of path points and the speed information.
7. The method of any of claims 1-5, wherein the generating vehicle control instructions based on the target perception information comprises:
performing local path planning based on the target perception information to obtain reference path information;
adjusting the global planned path according to the reference path information to obtain an adjusted planned path; and
generating the vehicle control command based on the adjusted planned path,
the reference path information comprises a path point sequence based on a target time period and speed information corresponding to path points in the path point sequence, and the global planned path is a road-level planned path based on a starting path point and a target path point.
8. A vehicle control instruction generation device comprising:
a first processing module, configured to determine, in response to acquired environment perception information of a vehicle driving environment, local map data matched with the environment perception information;
a second processing module, configured to screen the environment perception information according to the road topological characteristic and the lane topological characteristic indicated by the local map data to obtain target perception information; and
a third processing module, configured to generate a vehicle control instruction based on the target perception information.
9. The apparatus of claim 8, wherein the first processing module comprises:
a first processing sub-module for determining, according to the current position of the vehicle indicated by the environment perception information, a target mapping point of the current vehicle position on global map data; and
a second processing sub-module for taking the target mapping point as a central point and using the map data corresponding to a preset range from the central point in the global map data as the local map data matched with the environment perception information.
10. The apparatus of claim 8, wherein the second processing module comprises:
a third processing sub-module configured to perform road-level screening on the environment perception information according to the road topological feature indicated by the local map data to obtain candidate perception information; and
a fourth processing sub-module configured to perform lane-level screening on the candidate perception information according to the lane topological feature indicated by the local map data to obtain the target perception information.
11. The apparatus of claim 10, wherein,
the environment perception information indicates the current position of the vehicle, and the local map data comprises map data corresponding to a preset range from the central point in the global map data; the third processing sub-module comprises:
a first processing unit configured to determine, according to the current position of the vehicle, a target road section where the vehicle is located within the preset range;
a second processing unit configured to determine, based on the road topological feature, a candidate road section within the preset range that has a continuation relationship with the target road section; and
a third processing unit configured to screen, from the environment perception information, perception information associated with the target road section and the candidate road section to obtain the candidate perception information,
wherein the continuation relationship comprises at least one of the following relationships: an intersection relationship, a split relationship, and a merge-in relationship.
12. The apparatus of claim 11, wherein the fourth processing submodule comprises:
a fourth processing unit configured to determine, according to the current position of the vehicle, a target lane interval where the vehicle is located within the preset range;
a fifth processing unit configured to determine, based on the lane topological feature, a candidate lane interval within the preset range that has a lane change relationship with the target lane interval; and
a sixth processing unit configured to screen, from the candidate perception information, perception information associated with the target lane interval and the candidate lane interval to obtain the target perception information,
wherein the lane change relationship is determined by a lane interval position and a lane boundary type.
13. The apparatus of any of claims 8 to 12, wherein the third processing module comprises:
a fifth processing sub-module configured to determine, according to the target perception information, a path point sequence for a target time period and speed information corresponding to path points in the path point sequence; and
a sixth processing sub-module configured to generate the vehicle control instruction based on the path point sequence and the speed information.
14. The apparatus of any of claims 8 to 12, wherein the third processing module comprises:
a seventh processing sub-module configured to perform local path planning based on the target perception information to obtain reference path information;
an eighth processing sub-module configured to adjust the global planned path according to the reference path information to obtain an adjusted planned path; and
a ninth processing sub-module configured to generate the vehicle control instruction based on the adjusted planned path,
wherein the reference path information comprises a path point sequence for a target time period and speed information corresponding to path points in the path point sequence, and the global planned path is a road-level planned path based on a starting path point and a target path point.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating vehicle control instructions of any one of claims 1 to 7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of generating vehicle control instructions according to any one of claims 1 to 7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method of generating vehicle control instructions of any one of claims 1 to 7.
18. A cloud control platform comprising the electronic device according to claim 15, the electronic device being configured to execute the method for generating vehicle control instructions according to any one of claims 1 to 7.
19. An unmanned vehicle comprising an electronic device according to claim 15, the electronic device being configured to perform the method of generating a vehicle control instruction according to any one of claims 1 to 7.
CN202210632826.4A 2022-06-06 2022-06-06 Vehicle control instruction generation method, device and equipment Pending CN115649184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210632826.4A CN115649184A (en) 2022-06-06 2022-06-06 Vehicle control instruction generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210632826.4A CN115649184A (en) 2022-06-06 2022-06-06 Vehicle control instruction generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN115649184A true CN115649184A (en) 2023-01-31

Family

ID=85024121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210632826.4A Pending CN115649184A (en) 2022-06-06 2022-06-06 Vehicle control instruction generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN115649184A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
US20190219681A1 (en) * 2016-09-23 2019-07-18 Hitachi Construction Machinery Co., Ltd. Mining work machine and obstacle discrimination device
CN111368641A (en) * 2020-02-11 2020-07-03 北京百度网讯科技有限公司 Obstacle screening method, obstacle screening device, electronic device and storage medium
CN113587951A (en) * 2021-09-30 2021-11-02 国汽智控(北京)科技有限公司 Path planning method, device, system, server, storage medium and product

Similar Documents

Publication Publication Date Title
CN114047760B (en) Path planning method and device, electronic equipment and automatic driving vehicle
CN113722342A (en) High-precision map element change detection method, device and equipment and automatic driving vehicle
CN116533987A (en) Parking path determination method, device, equipment and automatic driving vehicle
CN114394111B (en) Lane changing method for automatic driving vehicle
CN113276888B (en) Riding method, device, equipment and storage medium based on automatic driving
CN113119999B (en) Method, device, equipment, medium and program product for determining automatic driving characteristics
CN115583254A (en) Path planning method, device and equipment and automatic driving vehicle
CN114779705A (en) Method, device, electronic equipment and system for controlling automatic driving vehicle
CN115649184A (en) Vehicle control instruction generation method, device and equipment
CN114299758A (en) Vehicle control method and apparatus, device, medium, and product
CN114689061A (en) Navigation route processing method and device of automatic driving equipment and electronic equipment
CN114620039A (en) Trajectory correction method and equipment, cloud control platform and automatic driving vehicle
CN113587937A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN114581869A (en) Method and device for determining position of target object, electronic equipment and storage medium
CN113610059A (en) Vehicle control method and device based on regional assessment and intelligent traffic management system
CN113946729A (en) Data processing method and device for vehicle, electronic equipment and medium
CN115294764B (en) Crosswalk area determination method, crosswalk area determination device, crosswalk area determination equipment and automatic driving vehicle
CN116842392B (en) Track prediction method and training method, device, equipment and medium of model thereof
CN114379588B (en) Inbound state detection method, apparatus, vehicle, device and storage medium
CN116168366B (en) Point cloud data generation method, model training method, target detection method and device
CN115235487B (en) Data processing method, device, equipment and medium
CN114527758A (en) Path planning method and device, equipment, medium and product
CN117470269A (en) Method, device, equipment and storage medium for planning path of automatic driving vehicle
CN116414845A (en) Method, apparatus, electronic device and medium for updating map data
CN114550131A (en) Electronic map processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230131