CN114154510A - Control method and device for automatic driving vehicle, electronic equipment and storage medium

Info

Publication number
CN114154510A
Authority
CN
China
Prior art keywords
semantic data
data
vehicle
target semantic
target
Prior art date
2021-11-30
Legal status
Pending
Application number
CN202111439754.3A
Other languages
Chinese (zh)
Inventor
陈健
Current Assignee
Jiangsu Intelligent Network Automobile Innovation Center Co ltd
Original Assignee
Jiangsu Intelligent Network Automobile Innovation Center Co ltd
Priority date
2021-11-30
Filing date
2021-11-30
Publication date
2022-03-08
Application filed by Jiangsu Intelligent Network Automobile Innovation Center Co ltd
Priority to CN202111439754.3A
Publication of CN114154510A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/30 - Semantic analysis
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles

Abstract

The embodiment of the application discloses a control method and device for an autonomous vehicle, an electronic device and a storage medium, and relates to the technical field of intelligent driving. The method comprises the following steps: performing data processing on raw data collected by a data acquisition device configured in a vehicle to obtain original semantic data, and grading the original semantic data according to characteristics of the original semantic data to obtain target semantic data of at least two levels; determining, from the target semantic data of the at least two levels, target semantic data of the level corresponding to the current running mode of the vehicle; and controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode. The technical scheme provided by the embodiments of the application maximizes the computing-power utility of automated driving and facilitates the implementation of higher-level automated-driving planning algorithms.

Description

Control method and device for automatic driving vehicle, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of intelligent driving, in particular to a control method and device for an automatic driving vehicle, electronic equipment and a storage medium.
Background
The increasing intelligence of automobiles has driven a revolution in electronic and electrical architecture, and integrated electronic and electrical architectures have become the current development trend: the various control functions of a vehicle are integrated and classified onto domain controllers, so that each function is controlled within a specific domain. In the categorization proposed by Bosch, which is currently mainstream and widely recognized, the domains comprise the power domain, chassis domain, cockpit domain, vehicle body domain and autonomous driving domain.
In the prior art, the complexity of the multi-sensor algorithms in the autonomous driving domain makes the software structure of the autonomous driving domain controller complicated, and the low degree of cooperation between the multiple sensors, together with fuzzy semantic layering, wastes the computing power of the autonomous driving domain controller.
Disclosure of Invention
The embodiments of the present application provide a control method and device for an autonomous vehicle, an electronic device and a storage medium, so as to maximize the computing-power utility of automated driving and facilitate the implementation of higher-level automated-driving planning algorithms.
In a first aspect, an embodiment of the present application provides a control method for an autonomous vehicle, including:
performing data processing on raw data collected by a data acquisition device configured in a vehicle to obtain original semantic data, and grading the original semantic data according to characteristics of the original semantic data to obtain target semantic data of at least two levels;
determining, from the target semantic data of the at least two levels, target semantic data of the level corresponding to the current running mode of the vehicle; and
controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
In a second aspect, an embodiment of the present application provides a control apparatus for an autonomous vehicle, the apparatus including:
the classification module is used for carrying out data processing on original data acquired by data acquisition equipment configured in a vehicle to obtain original semantic data, and classifying the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two levels;
the determining module is used for determining target semantic data of a level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels;
and the control module is used for controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
In a third aspect, an embodiment of the present application provides an electronic device, including:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the control method for an autonomous vehicle according to any embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing a method for controlling an autonomous vehicle according to any of the embodiments of the present application.
The embodiments of the present application provide a control method and device for an autonomous vehicle, an electronic device and a storage medium. The method includes: performing data processing on raw data collected by a data acquisition device configured in a vehicle to obtain original semantic data, and grading the original semantic data according to characteristics of the original semantic data to obtain target semantic data of at least two levels; determining, from the target semantic data of the at least two levels, target semantic data of the level corresponding to the current running mode of the vehicle; and controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode. The method and device can maximize the computing-power utility of automated driving and facilitate the implementation of higher-level automated-driving planning algorithms.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a first flowchart illustrating a control method for an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 2 is a second flowchart of a control method for an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 3 is a hierarchical diagram of semantic data provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a control device of an autonomous vehicle according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing the control method of an autonomous vehicle according to the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
Fig. 1 is a first flowchart of a control method for an autonomous vehicle according to an embodiment of the present disclosure. The method is applicable to the case where raw data collected by different sensors is processed and semantically graded, and the corresponding target semantic data is then called according to the current driving mode to perform unmanned driving. The control method for the autonomous vehicle provided by the embodiment of the present application may be executed by the control device for the autonomous vehicle provided by the embodiment of the present application, and the control device may be implemented by software and/or hardware and integrated in the electronic device executing the method. Preferably, the electronic device in the embodiment of the present application may be a vehicle-mounted terminal.
Referring to fig. 1, the method of the present embodiment includes, but is not limited to, the following steps:
s110, performing data processing on original data acquired by data acquisition equipment configured in a vehicle to obtain original semantic data, and grading the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two grades.
Wherein, data acquisition equipment includes: at least one of a vision sensor, a lidar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and a combination navigation device; the raw data includes at least one of static characteristic information of the vehicle, dynamic characteristic information of the vehicle, road information, and environmental information of an environment in which the vehicle is currently located. The original semantic data refers to data obtained by performing data processing to different degrees on the original data.
In the embodiment of the present application, by configuring different data acquisition devices such as a vision sensor, a laser radar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and a combined navigation device on a vehicle, raw data related to the vehicle (such as static characteristic information of the vehicle, dynamic characteristic information of the vehicle) or related to the current environment (environment information) of the vehicle can be acquired, for example: the image information of the surrounding environment of the vehicle collected by a vision sensor (such as a camera), the point cloud data of the surrounding environment of the vehicle collected by a laser radar sensor, and the electromagnetic wave information of the surrounding environment of the vehicle collected by a millimeter wave radar sensor or an ultrasonic radar sensor. The static characteristic information of the vehicle comprises the color, the category and the size of the vehicle, and the dynamic characteristic information of the vehicle comprises the position, the posture and the speed of the vehicle.
It should be noted that, in this embodiment, the vehicle referred to in the static characteristic information and the dynamic characteristic information may be the vehicle in which the data acquisition device is disposed (i.e., the host vehicle), or may be a nearby vehicle in front of, behind, to the left of, or to the right of the host vehicle (i.e., a front vehicle, rear vehicle, left vehicle or right vehicle).
Specifically, the step of grading the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two levels includes: determining the original data as primary target semantic data; performing data processing on the primary target semantic data to obtain semantic data that has a physical dimension but is unrelated to vehicle driving, and determining that semantic data as secondary target semantic data; performing data processing on the primary target semantic data and the secondary target semantic data to obtain semantic data related to vehicle driving, and determining that semantic data as three-level target semantic data; performing data processing on the three-level target semantic data to obtain semantic data identifying vehicle intention, and determining that semantic data as four-level target semantic data; and combining the four-level target semantic data with at least one of the primary, secondary and three-level target semantic data to obtain five-level target semantic data.
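By way of a non-limiting illustration, the five-level grading described above could be organized in memory as in the following Python sketch; the names (SemanticLevel, SemanticStore) and the per-level list layout are assumptions introduced for illustration only and are not prescribed by this application.

```python
# Illustrative sketch only: one possible in-memory organization of the five
# levels of target semantic data, with one store per level as described below.
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Any, Dict, List


class SemanticLevel(IntEnum):
    PRIMARY = 1     # raw sensor data, stored without processing
    SECONDARY = 2   # physical attributes (units/dimensions), not driving-related
    TERTIARY = 3    # driving-related semantics (lanes, signs, rules)
    QUATERNARY = 4  # intention recognition, behaviour/trajectory prediction
    QUINARY = 5     # combined semantics, e.g. the drivable area


@dataclass
class SemanticStore:
    """One memory region per level of target semantic data."""
    levels: Dict[SemanticLevel, List[Any]] = field(
        default_factory=lambda: {lvl: [] for lvl in SemanticLevel})

    def put(self, level: SemanticLevel, item: Any) -> None:
        self.levels[level].append(item)

    def get(self, level: SemanticLevel) -> List[Any]:
        return list(self.levels[level])
```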
In the embodiment of the application, the target semantic data of at least two levels includes low-level target semantic data and high-level target semantic data, the first-level target semantic data and the second-level target semantic data can be used as the low-level target semantic data, and the third-level target semantic data, the fourth-level target semantic data and the fifth-level target semantic data can be used as the high-level target semantic data. The high-level target semantic data can be obtained by performing data processing or combining on the low-level target semantic data.
In the embodiment of the application, the original data acquired by all the data acquisition devices is used directly, without data processing, as the primary target semantic data and is stored in the memory corresponding to the primary target semantic data. The advantage of this arrangement is that the raw data is retained as a basis for later inspection of the vehicle or for other uses based on the raw data. The primary target semantic data is therefore identical to the original data, and comprises at least one of static characteristic information of the vehicle, dynamic characteristic information of the vehicle, road information and environment information of the environment in which the vehicle is currently located.
In the embodiment of the application, the secondary target semantic data consists of attribute features calculated from the primary target semantic data and biased toward physical attributes; its main characteristic is that it has a measurement unit or dimension but carries no semantic information related to vehicle running. Examples include the speed and attitude of the host vehicle or the preceding vehicle, the position of the host vehicle or the preceding vehicle, the appearance characteristics of the preceding vehicle (color, category or size), the physical attributes of the road (curvature, gradient, width, lane-line attributes), polygon label information, and so on. The secondary target semantic data may be stored in a corresponding memory.
For example, the speed of the preceding vehicle in the secondary target semantic data may be determined as follows: according to the point cloud data of the vehicle's surroundings collected by the lidar sensor, the speed, attitude and acceleration of the preceding vehicle and the relative speed between the preceding vehicle and the host vehicle are estimated by combining the distances observed across multiple frames. The appearance characteristics of the preceding vehicle in the secondary target semantic data are obtained by processing the image information collected by the vision sensor together with the point cloud data of the vehicle's surroundings collected by the lidar sensor. In addition, manual, semi-automatic or automatic polygon labeling can be applied to automobiles, pedestrians or obstacles in the image information collected by the vision sensor, for use by higher-level target semantic data. The vehicle position in the secondary target semantic data can be determined by a combined navigation device, which may be a differential Global Positioning System (GPS) combined with an inertial measurement unit (IMU), with positioning accuracy reaching the meter level. The physical attributes of the road are mainly provided by a high-precision map.
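As a purely illustrative sketch of the multi-frame estimation mentioned above, the relative speed of the preceding vehicle could be approximated from the centroid distance of its point cloud in consecutive frames; the function name, the centroid heuristic and the frame interval dt are assumptions, since the application does not specify the algorithm.

```python
# Illustrative only: approximate range rate of a tracked object from its
# lidar point cloud in two consecutive frames.
import numpy as np


def relative_speed(cloud_prev: np.ndarray, cloud_curr: np.ndarray, dt: float) -> float:
    """cloud_prev, cloud_curr: (N, 3) points of the tracked object in the ego
    frame for two consecutive frames; dt: frame interval in seconds.
    Returns the approximate relative speed in m/s (>0: pulling away, <0: closing in)."""
    d_prev = np.linalg.norm(cloud_prev.mean(axis=0))  # centroid distance, frame k-1
    d_curr = np.linalg.norm(cloud_curr.mean(axis=0))  # centroid distance, frame k
    return (d_curr - d_prev) / dt
```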
In the embodiment of the application, the three-level target semantic data refers to semantic data related to vehicle running; its main characteristics are that it generally has no physical dimension and that a geometric drivable area can be preliminarily judged from it. Examples include the historical trajectory of a vehicle, vehicle behavior, target-lane affiliation, sign semantics (e.g., speed limit), lane-line constraint semantics (e.g., parking space), lane direction (e.g., left turn), and legal and regulatory constraint semantics. The three-level target semantic data may be stored in a corresponding memory.
Illustratively, the historical trajectory in the three-level target semantic data is obtained by applying a multi-frame position determination and superposition algorithm to the vehicle positions in the secondary target semantic data and drawing the vehicle's driving route. Vehicle behavior is judged by comparing data such as vehicle position, attitude and speed in the secondary target semantic data with conventional driving patterns in a scene library, and can also be used to predict the vehicle's future behavior in higher-level target semantic data (such as the four-level target semantic data). The target-lane affiliation is derived by combining instantaneous information with road information. The sign semantics, lane-line constraint semantics and lane direction are obtained by processing the road information captured by the vision sensor and extracting the sign, lane-line constraint and lane-direction semantic information. The legal and regulatory constraint semantics are the applicable limits on the current vehicle speed, lane changing and the like, obtained by combining the road information with real-time laws and regulations in a database.
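A minimal sketch of the multi-frame superposition used for the historical trajectory might look as follows; the class name and the bounded history length are illustrative assumptions.

```python
# Illustrative only: per-cycle ego positions taken from the secondary target
# semantic data are accumulated and re-drawn as a driving route.
from collections import deque


class TrajectoryHistory:
    def __init__(self, max_frames: int = 200):
        self._points = deque(maxlen=max_frames)  # bounded multi-frame history

    def add_frame(self, timestamp: float, x: float, y: float) -> None:
        self._points.append((timestamp, x, y))

    def as_polyline(self) -> list:
        """Return the driving route as (x, y) points ordered by time."""
        return [(x, y) for _, x, y in sorted(self._points)]
```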
In the embodiment of the application, the four-level target semantic data is semantic data identifying vehicle intention, obtained by further processing the three-level target semantic data. It mainly comprises the following types: (1) a combination of two three-level semantics, such as a vehicle in a certain lane; (2) semantics carrying both the time and space dimensions, such as historical trajectory plus trajectory prediction; and (3) intention recognition and judgment, such as current behavior and behavior prediction. The four-level target semantic data may be stored in a corresponding memory.
In the embodiment of the application, the five-level target semantic data is a combination of multiple lower-level target semantic data, for example a drivable area obtained by combining the behavior and trajectory prediction of a target (i.e., four-level target semantic data) with traffic regulations (i.e., three-level target semantic data). The five-level target semantic data may be stored in a corresponding memory.
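By way of a non-limiting example, this five-level combination could be sketched on a boolean occupancy grid: the drivable area is the geometric free space minus the cells excluded by predicted trajectories (four-level semantics) and by traffic-rule constraints (three-level semantics). The grid representation and the function name below are assumptions made for illustration.

```python
# Illustrative only: combining lower-level semantics into a drivable-area mask.
import numpy as np


def drivable_area(free_space: np.ndarray,
                  predicted_occupancy: np.ndarray,
                  rule_restricted: np.ndarray) -> np.ndarray:
    """All inputs are boolean H x W grids in the ego frame: free_space marks
    geometrically free cells; the other two mark cells excluded by trajectory
    prediction and by traffic regulations. Returns the final drivable mask."""
    return free_space & ~predicted_occupancy & ~rule_restricted
```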
In the embodiment of the present application, the target semantic data of at least two levels refers to the five levels of target semantic data described above; the number of levels is not specifically limited in this embodiment, and a person skilled in the art can divide the target semantic data into any number of levels according to actual application requirements. This embodiment divides the application-level software for unmanned driving into different functional modules according to the independence of the unmanned-driving software functions and the uniqueness of their interfaces; the target semantic data is divided according to these functional modules, but it may also be divided in other manners. In this embodiment, the level of the target semantic data deepens step by step, evolving from the initial raw data through preliminary physical information and preliminary driving information to a drivable area. Based on the target semantic data, a linking mechanism between the sensor recognition and analysis algorithms and the functional modules can be established, so that the data from different sensors can be analyzed and called at the software layer after transmission.
And S120, determining target semantic data of a level corresponding to the current running mode of the vehicle from the target semantic data of at least two levels.
In the embodiment of the present application, after the target semantic data of at least two levels is determined in step S110, the target semantic data of the level corresponding to the current driving mode of the vehicle needs to be determined from the target semantic data of at least two levels.
Specifically, because the levels of the target semantic data are obtained by grading according to data characteristics, the data characteristics of the data required for executing the current driving mode can be determined first, and the target semantic data of the level corresponding to the current driving mode can then be determined from the target semantic data of at least two levels according to those data characteristics. For example, suppose the current driving mode is adaptive cruise, whose greatest advantage over the "constant cruise" mode is that the vehicle speed is adaptively adjusted according to the state of the vehicle ahead and the vehicle is automatically stopped and started at appropriate moments. The data characteristics required by this mode are therefore dynamic characteristic data of the vehicles, such as the speed, attitude and position of the preceding vehicle and the host vehicle, so the mode is determined to correspond to the secondary target semantic data.
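This correspondence between driving modes and semantic-data levels can be expressed, purely as an illustrative sketch, by a simple lookup table; the mode names and level assignments below are assumptions rather than part of this application.

```python
# Illustrative lookup from driving mode to the level(s) of target semantic
# data whose data characteristics that mode requires.
REQUIRED_LEVELS = {
    "adaptive_cruise": {2},                 # speed, attitude, position of ego and lead vehicle
    "automatic_parking": {2, 3},            # ego position plus parking-space semantics
    "automatic_emergency_braking": {2, 4},  # physical state plus behaviour prediction
}


def semantic_data_for_mode(mode: str, store: dict) -> dict:
    """store maps level -> list of target semantic data items at that level."""
    return {level: store.get(level, []) for level in REQUIRED_LEVELS[mode]}
```

Called with the adaptive-cruise example above, semantic_data_for_mode("adaptive_cruise", store) would return only the secondary (level-2) target semantic data.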
Optionally, the current driving mode may correspond to target semantic data of one level, or to target semantic data of multiple levels.
And S130, controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
In the embodiment of the application, after the target semantic data of one or more levels corresponding to the current running mode is determined, the vehicle-mounted terminal controls the vehicle to run in the current running mode according to the target semantic data of one or more levels.
According to the technical scheme provided by this embodiment, original semantic data is obtained by performing data processing on raw data acquired by the data acquisition devices configured in the vehicle, and the original semantic data is graded according to its characteristics to obtain target semantic data of at least two levels; target semantic data of the level corresponding to the current running mode of the vehicle is determined from the target semantic data of at least two levels; and the vehicle is controlled to run in the current running mode according to the target semantic data of the level corresponding to the current running mode. By dividing the information perception of automated driving into clearly defined semantic-data levels, low-level target semantic data can be reused in the planning decisions of automated driving, while high-level target semantic data is used directly to control the vehicle in the current running mode; this avoids wasting computing power on repeatedly calling raw data and improves the utilization of automated-driving computing power. In addition, defining high-level target semantic data makes the automated-driving planning algorithm more reasonable and simpler, makes the division between information perception and planning decision clearer, and lets each algorithm do only the work within its own scope, which facilitates the implementation of higher-level automated-driving planning algorithms.
Example two
FIG. 2 is a second flowchart of a control method for an autonomous vehicle according to an embodiment of the present disclosure; fig. 3 is a schematic diagram of a hierarchy of semantic data according to an embodiment of the present application. This embodiment is a refinement of the above embodiment; specifically, it explains the process of calling the common semantic data in detail.
Referring to fig. 2, the method of the present embodiment includes, but is not limited to, the following steps:
s210, performing data processing on original data acquired by data acquisition equipment configured in a vehicle to obtain original semantic data, and grading the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two grades.
Wherein, data acquisition equipment includes: at least one of a vision sensor, a lidar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and a combination navigation device; the raw data includes at least one of static characteristic information of the vehicle, dynamic characteristic information of the vehicle, road information, and environmental information of an environment in which the vehicle is currently located. The original semantic data refers to data obtained by performing data processing on the original data to different degrees, wherein the data processing to different degrees includes data labeling, data analysis (such as stream processing, interactive query, batch processing, machine learning, artificial intelligence) and visual data display.
As shown in fig. 3, which is a hierarchical diagram of the semantic data, the primary target semantic data is the raw physical signal without any semantics; the secondary target semantic data is biased toward physical attributes but carries no semantics related to vehicle running; the three-level target semantic data is the first semantic data related to vehicle running; the four-level target semantic data includes further-processed semantics such as preceding-vehicle behavior prediction and trajectory prediction; and the five-level target semantic data is the final drivable-area semantics. Each individual sensor has a different sensing capability and describes certain attributes of the environment inside and outside the vehicle. If an ideal, completely accurate model is established for the entire perception system in automated driving, each sensor is equivalent to a filter of that model, expressing the attributes of one aspect of the ideal model. The perception of automated driving is in fact an approximation of that ideal model using as many sensors as possible plus algorithms: the more sensors available and the more accurate the algorithms, the smaller the deviation from the ideal model.
Optionally, in the semantic grading model, higher-level semantic algorithms can access the lower-level target semantic data, so that the lower-level target semantic data can be reused multiple times, computing power is saved, and a realistic approximation of the ideal perception model can finally be assembled.
And S220, extracting common semantic data shared among different driving modes from the target semantic data of at least two levels.
In the embodiment of the present application, the driving modes of the vehicle include automatic parking, adaptive cruise and automatic emergency braking. The target semantic data that can be shared between automatic parking and adaptive cruise, between adaptive cruise and automatic emergency braking, and between automatic parking and automatic emergency braking, namely the common semantic data, is extracted from the target semantic data. The advantage of extracting the common semantic data is that the autonomous driving domain controller can coordinate and schedule its computing resources, maximizing the utility of the computing power.
For example, the automatic parking mode, the adaptive cruise mode and the automatic emergency braking mode are not active at the same time: once the vehicle speed exceeds 20 km/h, the automatic parking mode stops executing while the adaptive cruise mode and the automatic emergency braking mode start, leaving the computing power allocated to automatic parking idle. The common semantic data can therefore be calculated by the same electronic control unit, further saving computing power.
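As a non-limiting sketch, extracting the common semantic data shared by two driving modes can be viewed as intersecting the sets of semantic items each mode consumes; the mode names and data items below are placeholders, not taken from this application.

```python
# Illustrative only: common semantic data = intersection of the semantic
# items consumed by two driving modes.
MODE_INPUTS = {
    "automatic_parking": {"ego_position", "obstacle_distance", "parking_space"},
    "adaptive_cruise": {"ego_position", "obstacle_distance", "lead_vehicle_speed"},
    "automatic_emergency_braking": {"obstacle_distance", "lead_vehicle_speed"},
}


def common_semantic_data(mode_a: str, mode_b: str) -> set:
    """Semantic items the two modes can share instead of computing twice."""
    return MODE_INPUTS[mode_a] & MODE_INPUTS[mode_b]


# common_semantic_data("adaptive_cruise", "automatic_emergency_braking")
# -> {"obstacle_distance", "lead_vehicle_speed"}
```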
And S230, acquiring the current running mode of the vehicle, and judging whether the current running mode needs to share the common semantic data with other running modes.
Wherein the current driving mode of the vehicle includes: automatic parking, adaptive cruise or automatic emergency braking.
In the embodiment of the application, the vehicle-mounted terminal may acquire the current running mode of the vehicle from the controller corresponding to the running mode, or may acquire the current running mode of the vehicle in other manners. Then the vehicle-mounted controller judges whether the current driving mode needs to share the common semantic data with other driving modes, if so, the step S240 is executed; if not, step S250 is executed.
And S240, if needed, calling the common semantic data.
In the embodiment of the application, if the current driving mode needs to share the common semantic data with other driving modes, indicating that shared common semantic data exists between the current driving mode and those other driving modes, the vehicle-mounted terminal calls the common semantic data and controls the vehicle to drive in the current driving mode according to it. The advantage of this arrangement is that common semantic data can be shared effectively between the independent controllers, reinventing the wheel is avoided, each controller can realize its functions by reusing the perception results of other controllers, and resource waste is avoided.
Optionally, part of the data required by the current driving mode may be obtained by calling the common semantic data, while another part may need to be determined as target semantic data of the corresponding level from the target semantic data of at least two levels.
And S250, if not needed, determining the target semantic data of the level corresponding to the current running mode of the vehicle from the target semantic data of at least two levels.
In the embodiment of the application, if the current driving mode does not need to share common semantic data with other driving modes, indicating that no shared common semantic data exists between the current driving mode and the other driving modes, the target semantic data of the level corresponding to the current driving mode of the vehicle is determined from the target semantic data of at least two levels.
Specifically, because the levels of the target semantic data are obtained by grading according to data characteristics, the data characteristics of the data required for executing the current driving mode can be determined according to the current driving mode and the current environment of the vehicle, and the target semantic data of the level corresponding to the current driving mode is then determined from the target semantic data of at least two levels according to those data characteristics. For example, if the current driving mode is the automatic parking mode and the required data characteristics are dynamic characteristic data of the vehicle (such as position information) and parking-space information, it is determined that the mode corresponds to the secondary and tertiary target semantic data.
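The decision of steps S230 to S250 could be sketched as follows; the dictionaries, level numbers and function name are illustrative assumptions, and shared_cache stands in for perception results already computed by another controller.

```python
# Illustrative only: call the common semantic data when the current mode shares
# any with other modes, otherwise fall back to the per-level target semantic
# data. Tables are repeated here so the sketch stays self-contained.
SHARED_KEYS = {  # common semantic data each mode shares with other modes
    "adaptive_cruise": {"obstacle_distance", "lead_vehicle_speed"},
    "automatic_emergency_braking": {"obstacle_distance", "lead_vehicle_speed"},
    "automatic_parking": set(),  # nothing shared in this illustration
}

MODE_LEVELS = {  # levels of target semantic data each mode needs
    "adaptive_cruise": {2},
    "automatic_emergency_braking": {2, 4},
    "automatic_parking": {2, 3},
}


def fetch_inputs(mode: str, shared_cache: dict, per_level_store: dict) -> dict:
    keys = SHARED_KEYS.get(mode, set())
    if keys:  # S240: reuse perception results another controller already computed
        return {k: shared_cache[k] for k in keys if k in shared_cache}
    # S250: no shared data, select the graded target semantic data by level
    return {lvl: per_level_store.get(lvl, []) for lvl in MODE_LEVELS.get(mode, set())}
```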
And S260, controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
In the embodiment of the application, after the target semantic data of one or more levels corresponding to the current running mode is determined, the vehicle-mounted terminal controls the vehicle to run in the current running mode according to the target semantic data of one or more levels.
According to the technical scheme provided by this embodiment, original semantic data is obtained by performing data processing on raw data acquired by the data acquisition devices configured in the vehicle, and the original semantic data is graded according to its characteristics to obtain target semantic data of at least two levels; common semantic data shared among different driving modes is extracted from the target semantic data of at least two levels; the current driving mode of the vehicle is acquired, and it is judged whether the current driving mode needs to share the common semantic data with other driving modes; if so, the common semantic data is called; if not, the target semantic data of the level corresponding to the current driving mode of the vehicle is determined from the target semantic data of at least two levels; and the vehicle is controlled to run in the current running mode according to the target semantic data of the level corresponding to the current running mode. By extracting the common semantic data, this method allows the common semantic data to be shared effectively and enables the autonomous driving domain controller to coordinate and schedule its computing resources, so that the utility of the computing power is maximized and the implementation of higher-level autonomous-driving planning algorithms is facilitated.
EXAMPLE III
Fig. 4 is a schematic structural diagram of a control device of an autonomous vehicle according to an embodiment of the present disclosure, and as shown in fig. 4, the device 400 may include:
the classification module 410 is configured to perform data processing on original data acquired by data acquisition equipment configured in a vehicle to obtain original semantic data, and classify the original semantic data according to characteristics of the original semantic data to obtain target semantic data of at least two levels.
A determining module 420, configured to determine, from the at least two levels of target semantic data, target semantic data of a level corresponding to the current driving mode of the vehicle.
And the control module 430 is configured to control the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
Further, the aforementioned classification module 410 may be specifically configured to: determine the original data as primary target semantic data; perform data processing on the primary target semantic data to obtain semantic data which has a physical dimension and is irrelevant to vehicle driving, and determine the semantic data which has the physical dimension and is irrelevant to vehicle driving as secondary target semantic data; perform data processing on the primary target semantic data and the secondary target semantic data to obtain semantic data related to vehicle running, and determine the semantic data related to vehicle running as tertiary target semantic data; perform data processing on the tertiary target semantic data to obtain semantic data identifying vehicle intention, and determine the semantic data identifying vehicle intention as four-level target semantic data; and combine the four-level target semantic data with at least one of the first-level target semantic data, the second-level target semantic data and the third-level target semantic data to obtain five-level target semantic data.
Further, the control device for an autonomous vehicle may further include: a data calling module;
the data calling module is used for extracting common semantic data shared among different driving modes from the target semantic data of the at least two levels before determining the target semantic data of the level corresponding to the current driving mode of the vehicle from the target semantic data of the at least two levels; acquiring the current driving mode of the vehicle; judging whether the current driving mode needs to share the common semantic data with other driving modes; if so, calling the common semantic data; and if not, triggering and executing the step of determining the target semantic data of the level corresponding to the current driving mode of the vehicle from the target semantic data of the at least two levels.
Further, the determining module 420 may include a feature determining unit and a data determining unit;
the characteristic determining unit is used for determining the data characteristics of the data required for executing the current running mode.
And the data determining unit is used for determining the target semantic data of the level corresponding to the current driving mode from the target semantic data of the at least two levels according to the data characteristics.
Further, the feature determining unit may be further specifically configured to: and determining the data characteristics of the data required for executing the current running mode according to the current running mode and the current environment of the vehicle.
Optionally, the current driving mode of the vehicle includes: automatic parking, adaptive cruise or automatic emergency braking.
Optionally, the data acquisition device includes: at least one of a vision sensor, a lidar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and a combination navigation device; the raw data includes at least one of static characteristic information of the vehicle, dynamic characteristic information of the vehicle, road information, and environmental information of an environment in which the vehicle is currently located.
The control device of the autonomous vehicle provided by the embodiment can be applied to the control method of the autonomous vehicle provided by any embodiment, and has corresponding functions and beneficial effects.
Example four
Fig. 5 is a block diagram of an electronic device for implementing a control method of an autonomous vehicle according to an embodiment of the present application, and fig. 5 shows a block diagram of an exemplary electronic device suitable for implementing an embodiment of the present application. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application. The electronic device can be a smart phone, a tablet computer, a notebook computer, a vehicle-mounted terminal, a wearable device and the like. Preferably, the electronic device in the embodiment of the present application may be a vehicle-mounted terminal.
As shown in fig. 5, the electronic device 500 is embodied in the form of a general purpose computing device. The components of the electronic device 500 may include, but are not limited to: one or more processors or processing units 516, a memory 528, and a bus 518 that couples the various system components including the memory 528 and the processing unit 516.
Bus 518 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 500 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 500 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 528 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 530 and/or cache memory 532. The electronic device 500 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 540 having a set (at least one) of program modules 542, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored in, for example, the memory 528, each of which examples or some combination may include an implementation of a network environment. The program modules 542 generally perform the functions and/or methods described in embodiments herein.
The electronic device 500 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 500 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 522. Also, the electronic device 500 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 520. As shown in FIG. 5, the network adapter 520 communicates with the other modules of the electronic device 500 via the bus 518. It should be appreciated that although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 516 executes various functional applications and data processing by executing programs stored in the memory 528, for example, to implement the control method of the autonomous vehicle provided in any embodiment of the present application.
EXAMPLE five
A fifth embodiment of the present application further provides a computer-readable storage medium, on which a computer program (or referred to as computer-executable instructions) is stored, where the computer program, when executed by a processor, can be used to execute the method for controlling an autonomous vehicle provided in any of the above embodiments of the present application.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

Claims (10)

1. A control method for an autonomous vehicle, the method comprising:
performing data processing on original data collected by a data acquisition device configured in a vehicle to obtain original semantic data, and grading the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two levels;
determining target semantic data of a level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels;
and controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
2. The control method of an autonomous vehicle according to claim 1, wherein the grading the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two levels comprises:
determining the original data as primary target semantic data;
performing data processing on the primary target semantic data to obtain semantic data which has a physical dimension and is irrelevant to vehicle driving, and determining the semantic data which has the physical dimension and is irrelevant to vehicle driving as secondary target semantic data;
performing data processing on the primary target semantic data and the secondary target semantic data to obtain semantic data related to vehicle running, and determining the semantic data related to vehicle running as tertiary target semantic data;
performing data processing on the three-level target semantic data to obtain semantic data identified by the vehicle intention, and determining the semantic data identified by the vehicle intention as four-level target semantic data;
and combining the four-level target semantic data with at least one of the first-level target semantic data, the second-level target semantic data and the third-level target semantic data to obtain five-level target semantic data.
3. The control method of an autonomous vehicle according to claim 1, further comprising, before determining target semantic data of the level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels:
extracting common semantic data shared among different driving modes from the target semantic data of at least two levels;
acquiring a current running mode of the vehicle;
judging whether the current running mode needs to share the common semantic data with other running modes;
if so, calling the common semantic data;
and if not, triggering and executing the step of determining the target semantic data of the level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels.
4. The control method of an autonomous vehicle as claimed in claim 1, wherein the determining target semantic data of a level corresponding to a current driving mode of the vehicle from the target semantic data of the at least two levels comprises:
determining data characteristics of data required for executing the current driving mode;
and determining the target semantic data of the level corresponding to the current driving mode from the target semantic data of the at least two levels according to the data characteristics.
5. The control method of an autonomous vehicle according to claim 4, wherein the determining the data characteristic of the data required to execute the current running mode includes:
and determining the data characteristics of the data required for executing the current running mode according to the current running mode and the current environment of the vehicle.
6. The control method of an autonomous vehicle according to claim 3, characterized in that the current running mode of the vehicle includes: automatic parking, adaptive cruise or automatic emergency braking.
7. The control method of an autonomous vehicle as claimed in claim 1, characterized in that the data acquisition device comprises: at least one of a vision sensor, a lidar sensor, a millimeter wave radar sensor, an ultrasonic radar sensor, and a combination navigation device;
the raw data includes at least one of static characteristic information of the vehicle, dynamic characteristic information of the vehicle, road information, and environmental information of an environment in which the vehicle is currently located.
8. A control apparatus of an autonomous vehicle, characterized in that the apparatus comprises:
the classification module is used for carrying out data processing on original data acquired by data acquisition equipment configured in a vehicle to obtain original semantic data, and classifying the original semantic data according to the characteristics of the original semantic data to obtain target semantic data of at least two levels;
the determining module is used for determining target semantic data of a level corresponding to the current running mode of the vehicle from the target semantic data of the at least two levels;
and the control module is used for controlling the vehicle to run in the current running mode according to the target semantic data of the level corresponding to the current running mode.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the control method of an autonomous vehicle as recited in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a control method of an autonomous vehicle as claimed in any one of claims 1 to 7.
CN202111439754.3A 2021-11-30 2021-11-30 Control method and device for automatic driving vehicle, electronic equipment and storage medium Pending CN114154510A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111439754.3A CN114154510A (en) 2021-11-30 2021-11-30 Control method and device for automatic driving vehicle, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111439754.3A CN114154510A (en) 2021-11-30 2021-11-30 Control method and device for automatic driving vehicle, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114154510A true CN114154510A (en) 2022-03-08

Family

ID=80455033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111439754.3A Pending CN114154510A (en) 2021-11-30 2021-11-30 Control method and device for automatic driving vehicle, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114154510A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866586A (en) * 2022-04-28 2022-08-05 岚图汽车科技有限公司 SOA architecture-based intelligent driving system, method, equipment and storage medium
CN114866586B (en) * 2022-04-28 2023-09-19 岚图汽车科技有限公司 Intelligent driving system, method, equipment and storage medium based on SOA architecture


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination