CN117775022A - Vehicle control method, device, system, vehicle, electronic device and storage medium


Info

Publication number
CN117775022A
Authority
CN
China
Prior art keywords
vehicle; target; user; control; target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311716751.9A
Other languages
Chinese (zh)
Inventor
赵文伯
应东平
高弈
李子伦
吴静涛
苏建业
卢万成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United Automotive Electronic Systems Co Ltd
Original Assignee
United Automotive Electronic Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by United Automotive Electronic Systems Co Ltd
Priority to CN202311716751.9A
Publication of CN117775022A


Abstract

The invention provides a vehicle control method, apparatus, system, vehicle, electronic device, and storage medium. The method comprises: obtaining perception data of the traffic environment while the vehicle is driving; performing target detection on the perception data to obtain a target detection result for each object to be detected; if a result-missing object exists, highlighting it to prompt an in-vehicle user to identify it manually, and receiving the in-vehicle user's manual identification result as the target label of the result-missing object, until every target detection result comprises at least a target position and a target label; determining the traffic environment state from all target detection results after manual identification; if the traffic environment state is a risk state, generating risk prompt information to prompt the in-vehicle user for instruction feedback; and, if a user instruction is received, controlling the vehicle based on the user instruction. By assisting target detection with manual identification and controlling the vehicle according to user instructions, the method handles perceived edge cases.

Description

Vehicle control method, device, system, vehicle, electronic device and storage medium
Technical Field
The present application relates to the technical field of automatic driving, and in particular to a vehicle control method, apparatus, system, vehicle, electronic device, and storage medium.
Background
With the growing range of sensors fitted to intelligent vehicles and the continuous improvement of on-board chip computing power, automatic driving technology has developed rapidly and vehicle automation levels keep rising. However, fully automatic driving is still immature, and autonomous vehicles face great challenges when handling corner cases. Edge cases include behavioral edge cases and perceived edge cases; a perceived edge case is a road traffic scene containing object types that are difficult to identify, such as a truck with a raised bed, a rolled-over vehicle, an occluded stop sign, an animal breaking into the road, tire fragments scattered across the lane, or cones placed temporarily for road maintenance. Because their occurrence probability is very low, such scenes are usually missing from data sets. Building a data set for a deep-learning autonomous driving algorithm is essentially an exercise in enumeration, yet real traffic scenes can produce new samples at any time, so all target object types cannot be exhausted when training in advance.
Although perceived edge cases occur with low probability, they are unavoidable. Once a perceived edge case occurs, especially one affecting traffic safety, the automatic driving system may fail to work normally because it cannot identify the target object, causing the automatic driving function to disengage. This degrades the automatic driving experience and may even cause traffic accidents, endangering the lives of drivers and passengers.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present application provides a vehicle control method, apparatus, system, vehicle, electronic device, and storage medium, to solve the technical problem that, when a perceived edge case occurs during automatic driving, the automatic driving system cannot work normally because the target object cannot be identified, so the automatic driving function disengages, the automatic driving experience degrades, and traffic accidents may even occur, endangering the lives of drivers, passengers, or pedestrians.
The present application provides a vehicle control method, comprising the following steps: obtaining perception data of the traffic environment during driving of the vehicle, and performing target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, wherein each target detection result comprises at least a target position; if a result-missing object exists, highlighting the result-missing object in a perceived image according to its target position to prompt an in-vehicle user of the vehicle to manually identify it, and receiving the in-vehicle user's manual identification result as the target label of the result-missing object, until the target detection results of all objects to be detected comprise at least a target position and a target label, wherein the perceived image is generated from the perception data, and a result-missing object is an object to be detected whose target detection result comprises a target position but lacks a target label; determining a traffic environment state according to the target detection results of all objects to be detected after manual identification, and, if the traffic environment state is a risk state, generating risk prompt information and prompting the in-vehicle user to provide instruction feedback based on the risk prompt information; and, if a user instruction fed back by the in-vehicle user is received, controlling the vehicle based on the user instruction.
In an embodiment of the present application, controlling the vehicle based on the user instruction includes at least one of the following: if the received user instruction is a manual driving instruction, controlling the vehicle to enter a manual driving mode; and, if the received user instruction is a target interaction instruction, obtaining target control information by identifying the target interaction instruction, so as to control the vehicle based on the target control information.
In an embodiment of the present application, obtaining the target control information by identifying the target interaction instruction includes: matching the target control type to which a target voice instruction belongs according to a preset voice instruction-control type correspondence, wherein the target interaction instruction comprises the target voice instruction; and performing semantic recognition on the target voice instruction according to the target control type to obtain the target control information.
In an embodiment of the present application, before controlling the vehicle based on the target control information, the method includes: obtaining the number of in-vehicle users, and, if there are a plurality of in-vehicle users, identifying the target in-vehicle user who issued the target interaction instruction; if the target in-vehicle user is the main driving user, controlling the vehicle based on the target control information, wherein the in-vehicle users include the main driving user; and, if the target in-vehicle user is the co-driver user, matching the target control level corresponding to the target control information according to a preset control information-control level correspondence, and controlling the vehicle based on the target control information when the preset co-driver permission level is higher than or equal to the target control level, wherein the in-vehicle users further include the co-driver user.
In an embodiment of the present application, when the preset co-driver permission level is lower than the target control level, the method includes: generating and issuing co-driver permission level upgrade request information based on the target control level, to prompt the main driving user to confirm the co-driver permission level upgrade request information; if request confirmation information from the main driving user for the co-driver permission level upgrade request information is received and indicates agreement, upgrading the preset co-driver permission level so that the upgraded co-driver permission level is higher than or equal to the target control level; and controlling the vehicle based on the target control information.
In an embodiment of the present application, after prompting the main driving user to confirm the co-driver permission level upgrade request information, the method further includes at least one of the following: if the request confirmation information is received and indicates refusal, performing risk avoidance control on the vehicle and prompting the in-vehicle user that execution failed; and, if no request confirmation information is received within a preset request confirmation time, performing risk avoidance control on the vehicle and prompting the in-vehicle user that execution failed.
In an embodiment of the present application, before matching the target control type to which the target voice instruction belongs, the method further includes: obtaining and displaying a plurality of control types; determining a control type to be set from the plurality of control types according to control type selection information from the in-vehicle user, receiving at least one custom voice instruction from the in-vehicle user, and configuring the correspondence between each custom voice instruction and the control type to be set, so as to complete the voice instruction classification setting for that control type; and, after the voice instruction classification setting of all control types is completed, taking the configured correspondences between all custom voice instructions and control types as the preset voice instruction-control type correspondence.
In an embodiment of the present application, if the traffic environment state is a risk state, the method further includes: determining the traffic environment risk level according to the target detection results of all the objects to be detected after manual identification; and if the traffic environment risk level is lower than or equal to a preset risk level, generating the risk prompt information according to the traffic environment risk level, and prompting the in-vehicle user to perform instruction feedback based on the risk prompt information.
In an embodiment of the present application, if the traffic environment risk level is higher than the preset risk level, the method includes: and generating the risk prompt information according to the traffic environment risk level, and controlling the vehicle to enter an active safety mode.
In an embodiment of the present application, controlling the vehicle based on the target control information includes: obtaining vehicle condition information of the vehicle; determining the execution permission state of the target control information according to the vehicle condition information, the traffic environment risk level, the target control information, and the target detection results of all objects to be detected after manual identification; if the execution permission state of the target control information is allowed, executing the target control information to control the vehicle; and, if the execution permission state of the target control information is forbidden, performing risk avoidance control on the vehicle and prompting the in-vehicle user that execution failed.
In an embodiment of the present application, if no result-missing object exists, the method includes: determining the traffic environment state according to the target detection results of all objects to be detected; if the traffic environment state is a risk state, generating the risk prompt information and prompting the in-vehicle user to provide instruction feedback based on the risk prompt information; and, if the user instruction is received, controlling the vehicle based on the user instruction.
In an embodiment of the present application, after prompting the in-vehicle user to provide instruction feedback based on the risk prompt information, the method includes: if no user instruction is received within a preset instruction feedback time, performing risk avoidance control on the vehicle.
In an embodiment of the present application, after the target detection results of all objects to be detected comprise at least a target position and a target label, the method further includes: storing the perception data and the target detection results of all objects to be detected after manual identification as edge case data; and counting the number of stored edge case data items and, if the number is larger than a preset threshold, iteratively training the current target detection model on the stored edge case data to obtain an iterated target detection model, wherein the target detection result of each object to be detected is obtained by the current target detection model performing target detection on each object to be detected in the perception data.
In an embodiment of the present application, there is also provided a vehicle control apparatus, including: an acquisition module, configured to obtain perception data of the traffic environment during driving of the vehicle; a target detection module, configured to perform target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, wherein each target detection result comprises at least a target position; an auxiliary recognition module, configured to highlight a result-missing object in a perceived image according to its target position, to prompt an in-vehicle user of the vehicle to manually identify the result-missing object, and to receive the in-vehicle user's manual identification result as the target label of the result-missing object, until the target detection results of all objects to be detected comprise at least a target position and a target label, wherein the perceived image is generated from the perception data, and a result-missing object is an object to be detected whose target detection result comprises a target position but lacks a target label; a risk determination module, configured to determine the traffic environment state according to the target detection results of all objects to be detected after manual identification, and, if the traffic environment state is a risk state, to generate risk prompt information and prompt the in-vehicle user to provide instruction feedback based on the risk prompt information; and an instruction response module, configured to control the vehicle based on a user instruction fed back by the in-vehicle user, if such an instruction is received.
In an embodiment of the present application, there is further provided a vehicle control system, including a multi-sensor module, a determination module, a display module, and an interaction module, the determination module including a sensing unit, a decision unit, and a control unit. The multi-sensor module is configured to obtain perception data of the traffic environment during driving of the vehicle. The sensing unit is configured to perform target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, each target detection result comprising at least a target position. The display module is configured to highlight a result-missing object, if one exists, in the perceived image according to its target position. The interaction module is configured to prompt an in-vehicle user of the vehicle to manually identify the result-missing object, and to receive the in-vehicle user's manual identification result as the target label of the result-missing object, until the target detection results of all objects to be detected comprise at least a target position and a target label. The decision unit is configured to determine the traffic environment state according to the target detection results of all objects to be detected after manual identification. The interaction module is further configured to generate risk prompt information if the traffic environment state is a risk state, and to prompt the in-vehicle user to provide instruction feedback based on the risk prompt information, so as to receive a user instruction fed back by the in-vehicle user. The control unit is configured to control the vehicle based on the user instruction, if a user instruction fed back by the in-vehicle user is received.
In one embodiment of the present application, there is also provided a vehicle including the vehicle control system as described above.
In an embodiment of the present application, there is further provided a vehicle-cloud system, including a cloud end and a vehicle. The vehicle is configured to: obtain perception data of the traffic environment during driving, and perform target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, each target detection result comprising at least a target position; if a result-missing object exists, highlight the result-missing object in a perceived image according to its target position to prompt an in-vehicle user to manually identify it, and receive the in-vehicle user's manual identification result as the target label of the result-missing object, until the target detection results of all objects to be detected comprise at least a target position and a target label, wherein the perceived image is generated from the perception data, and a result-missing object is an object to be detected whose target detection result comprises a target position but lacks a target label; determine the traffic environment state according to the target detection results of all objects to be detected after manual identification, and, if the traffic environment state is a risk state, generate risk prompt information and prompt the in-vehicle user to provide instruction feedback based on the risk prompt information; if a user instruction fed back by the in-vehicle user is received, control the vehicle based on the user instruction; and upload the perception data and the target detection results of all objects to be detected after manual identification to the cloud end as edge case data. The cloud end is configured to: receive and store the edge case data; count the number of stored edge case data items and, if the number is larger than a preset threshold, iteratively train the current target detection model on the stored edge case data to obtain an iterated target detection model, wherein the target detection result of each object to be detected is obtained by a target detection model configured in the vehicle performing target detection on each object to be detected in the perception data, the target detection model configured in the vehicle being the same version as the current target detection model; and issue the iterated target detection model to the vehicle to update the target detection model configured in the vehicle.
In an embodiment of the present application, there is also provided an electronic device including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the vehicle control method as described above.
In an embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the vehicle control method as described above.
The beneficial effects of the invention are as follows. The invention provides a vehicle control method, apparatus, system, vehicle, electronic device, and storage medium. After target detection is performed on perception data of the traffic environment in which the vehicle is located, any result-missing object whose target detection result lacks a target label is highlighted to prompt an in-vehicle user to manually identify it, yielding its target label. The traffic environment state is then determined from the target detection results of all objects to be detected after manual identification; if the traffic environment state is a risk state, risk prompt information is generated and the in-vehicle user is prompted to provide instruction feedback, so that a user instruction fed back by the in-vehicle user can be obtained and the vehicle controlled accordingly. By assisting target detection with manual identification when a perceived edge case occurs during driving, the completeness of the target detection results is ensured, providing a sound basis for determining the traffic environment state and improving the accuracy of that judgment; the user is alerted in time when a traffic risk exists, improving driving safety. Moreover, when a traffic risk exists, the vehicle is controlled through user instructions fed back by the in-vehicle user, so risky perceived edge cases are handled and driving safety is further ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
FIG. 1 is a schematic diagram of an autopilot system functional architecture shown in an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of an environment in which a vehicle control method is implemented, as shown in an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating a vehicle control method according to an exemplary embodiment of the present application;
FIG. 4 is a block diagram of a vehicle control apparatus shown in an exemplary embodiment of the present application;
FIG. 5 is a block diagram of a vehicle control system shown in an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of a human-machine co-driving vehicle steering system according to an embodiment of the present application;
FIG. 7 is a flow chart illustrating learning of automotive semantic instructions according to an embodiment of the present application;
FIG. 8 is a flow chart illustrating a method of handling a co-vehicle in accordance with an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will become readily apparent to those skilled in the art from the disclosure herein, taken in conjunction with the accompanying drawings. The present application may also be embodied or carried out in other specific embodiments, and details herein may be modified or changed from various viewpoints and applications without departing from the spirit of the present application. It should be noted that the following embodiments, and features within them, may be combined with each other where no conflict arises.
It should be noted that, the illustrations provided in the following embodiments merely illustrate the basic concepts of the application by way of illustration, and only the components related to the application are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complex.
It should be noted that, in this application, "first", "second", and the like merely distinguish similar objects and do not imply any order or precedence among them. Terms such as "comprising" and "having" indicate that the listed elements are not exclusive; other elements may also be present.
It should be understood that the various numbers, step numbers, etc. described in this application are for ease of description and are not intended to limit the scope of this application. The size of the reference numerals in this application does not mean the order of execution, and the order of execution of the processes should be determined by their functions and inherent logic.
In the following description, numerous details are set forth to provide a more thorough explanation of the embodiments of the present application. It will be apparent to those skilled in the art, however, that the embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail, to avoid obscuring the embodiments.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating the functional architecture of an automatic driving system according to an exemplary embodiment of the present application. As shown in fig. 1, the automatic driving system comprises a vehicle-mounted sensor module, a sensing module, a decision module, a planning module, and a control module. The vehicle-mounted sensor module includes sensors such as cameras, millimeter-wave radar, and lidar; the sensing module includes a recognition unit, a positioning unit, a prediction unit, and the like. The system detects the road and targets through the vehicle-mounted sensor module; the sensing module recognizes, positions, and predicts the targets from the collected data to obtain sensing information; the decision module and the planning module perform decision judgment and path planning based on the sensing information; and finally the control module controls the vehicle according to the decision and planning results, achieving automatic driving.
At present, object recognition in autonomous vehicles relies mainly on cameras and lidar, and vision systems built from monocular or trinocular cameras are the mainstream of intelligent driving. Their drawback is that recognition and detection are coupled: an object must first be recognized before it can be detected. In the real world, however, the road traffic environment generates new samples constantly; exhausting all object types would make the data set enormous and extremely costly, and is difficult to achieve with the current state of the art. The occurrence of perceived edge cases is therefore unavoidable, and despite their low probability, a perceived edge case, particularly a risky one, may endanger the lives of drivers or pedestrians once it occurs.
Since perceived edge cases are unavoidable, the current mainstream remedy is to collect data on more similar cases and retrain the automatic driving model, iteratively optimizing it. However, collecting and labeling perceived edge cases is expensive, and the collection itself may be very dangerous or even impractical. As a result, a perceived edge case may still cause the automatic driving system to fail because the target object cannot be identified, so the automatic driving function disengages, the automatic driving experience degrades, and traffic accidents may even occur, endangering the lives of drivers and passengers.
To solve these problems, embodiments of the present application propose a vehicle control method, a vehicle control apparatus, a vehicle control system, a vehicle cloud system, an electronic device, a computer-readable storage medium, and a computer program product, respectively, which will be described in detail below.
Referring to fig. 2, fig. 2 is a schematic view of an implementation environment of a vehicle control method according to an exemplary embodiment of the present application.
As shown in fig. 2, the implementation environment may include an autonomous car 210 and a computer device 220, where the autonomous car 210 serves as an example of a vehicle. The computer device 220 may be at least one of a microcomputer, an embedded computer, a neural-network computer, and the like; it may be installed in the autonomous car 210 or be a stand-alone computer device, without limitation here. During driving, the autonomous car 210 collects perception data of the traffic environment through its sensors and provides the data to the computer device 220 for processing. The computer device 220 performs target detection on the perception data, judges the traffic environment state from the target detection results, and, when the traffic environment state is a risk state, generates risk prompt information in order to receive a user instruction, thereby controlling the vehicle based on the user instruction.
Schematically, the method comprises: obtaining perception data of the traffic environment during driving of the vehicle, and performing target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, each target detection result comprising at least a target position; if a result-missing object exists, highlighting the result-missing object in a perceived image according to its target position to prompt an in-vehicle user of the vehicle to manually identify it, and receiving the in-vehicle user's manual identification result as the target label of the result-missing object, until the target detection results of all objects to be detected comprise at least a target position and a target label, wherein the perceived image is generated from the perception data, and a result-missing object is an object to be detected whose target detection result comprises a target position but lacks a target label; determining the traffic environment state according to the target detection results of all objects to be detected after manual identification, and, if the traffic environment state is a risk state, generating risk prompt information and prompting the in-vehicle user to provide instruction feedback based on the risk prompt information; and, if a user instruction fed back by the in-vehicle user is received, controlling the vehicle based on the user instruction. Thus, when a perceived edge case appears during driving, the technical solution of this embodiment assists target detection through manual identification, ensuring the completeness of the target detection results, providing a sound basis for determining the traffic environment state, and improving the accuracy of that judgment, so users can be alerted in time when traffic risks exist and driving safety is improved. Moreover, when a traffic risk exists, the vehicle is controlled through user instructions fed back by the in-vehicle user, so risky perceived edge cases are handled and driving safety is further ensured.
It should be noted that the vehicle control method provided in the embodiments of the present application is generally executed by the computer device 220, and accordingly, the vehicle control apparatus is generally disposed in the computer device 220.
Referring to fig. 3, fig. 3 is a flowchart illustrating a vehicle control method according to an exemplary embodiment of the present application. The vehicle control method may be applied to the implementation environment shown in fig. 2 and specifically executed by the computer device 220 in the implementation environment. It should be understood that the vehicle control method may also be applied to other exemplary implementation environments and be specifically executed by devices in other implementation environments, and the implementation environments to which the vehicle control method is applied are not limited by the present embodiment.
As shown in fig. 3, in an exemplary embodiment, the vehicle control method at least includes steps S310 to S340, and is described in detail as follows:
step S310, obtaining the perception data of the vehicle on the traffic environment in the driving process, and carrying out target detection on a plurality of objects to be detected in the perception data to obtain the target detection result of each object to be detected.
In one embodiment of the present application, since fully automatic driving is not yet mature and cannot be guaranteed, automatic driving will remain for a long time in a complex man-machine co-driving stage, in which the driving task is shared by the human driver and the automatic driving system. Accordingly, the driving course may be in man-machine co-driving mode, including at least one of L1 (assisted driving), L2 (partial automation), L3 (conditional automation), L4 (high automation), and so on. The perception data may be at least one of image data, point cloud data, and the like, and may be collected by sensors such as cameras and radar. Target detection is performed on the perception data to obtain a target detection result for each object to be detected, each result comprising at least a target position; the target position represents the object's location within the perception data. The target detection result may further comprise a target label, which represents the type of the object to be detected. Target detection can be performed in many ways: features may be extracted from the perception data and classified by a classifier, or the perception data may be processed by a target detection model, without limitation here.
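As an illustration only, a target detection result carrying at least a position and an optional label might be represented as in the following Python sketch; the data structure and the model interface are assumptions for this description, not part of the application.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TargetDetection:
    """Target detection result for one object to be detected."""
    position: Tuple[float, float, float, float]  # bounding box (x, y, w, h)
    label: Optional[str] = None  # None marks a result-missing object
    confidence: float = 0.0

def detect_targets(perception_data, model) -> List[TargetDetection]:
    """Run the current target detection model over the perception data.
    An object the model can locate but not classify comes back with
    label=None, i.e. as a result-missing object (perceived edge case)."""
    return [
        TargetDetection(position=box, label=cls, confidence=score)
        for box, cls, score in model.predict(perception_data)  # assumed API
    ]
```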
Step S320, if the result missing object exists, highlighting the result missing object in the perceived image according to the target position of the result missing object to prompt an in-vehicle user of the vehicle to manually identify the result missing object, receiving the manual identification result of the in-vehicle user on the result missing object, and taking the manual identification result as the target label of the result missing object until all target detection results of the objects to be detected at least comprise the target position and the target label.
In one embodiment of the present application, the perceived image is generated from the perception data; a result-missing object is an object to be detected whose target detection result includes a target position but lacks a target label; and the in-vehicle user includes at least one of the driver, a passenger, and the like. After target detection yields a target detection result for each object to be detected, the results are traversed to check whether each contains a target label. If the target label is missing from the result of some object, the system cannot determine that object's type, and a perceived edge case can be considered to have occurred. That object is determined to be a result-missing object; a perceived image is generated from the perception data, the result-missing object is highlighted in the perceived image according to its target position, and the in-vehicle user is prompted to manually identify its type. Illustratively, the result-missing object may be highlighted through an in-vehicle display device, such as the vehicle screen, an AR-HUD (Augmented Reality Head-Up Display), the dashboard, or an electronic rearview mirror, or through a user mobile device such as a phone, tablet, or notebook computer. Highlighting may use a different color, blinking, a bolded recognition frame, or other means; the prompt for manual identification may be a voice message, a text message, an audible-and-visual alarm, and so on, none of which is limited here. If there are several result-missing objects, they can be marked and distinguished in the perceived image, including but not limited to numbering them or labeling them with different colors.
After manually identifying the result-missing object, the in-vehicle user can feed back the manual identification result by voice, text input, or other means. When a manual identification result for the result-missing object is received, it is taken as that object's target label, until the target detection results of all objects to be detected comprise at least a target position and a target label. By having the in-vehicle user participate in and assist with identifying result-missing objects, this technical solution effectively extends the usable scenarios of the automatic driving function: manually identifying a perceived edge case when it appears assists target detection, ensures the completeness of the target detection results, provides a sound basis for subsequent operations, and further improves the safety of automatic driving.
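Continuing the sketch above, the highlight-and-prompt loop for result-missing objects might look as follows; the HMI helper methods are hypothetical placeholders for the display and interaction channels just listed.

```python
def complete_missing_labels(detections, perceived_image, hmi):
    """Find result-missing objects (position present, label absent),
    highlight each one in the perceived image, and fill in its label
    from the in-vehicle user's manual identification."""
    missing = [d for d in detections if d.label is None]
    for idx, det in enumerate(missing, start=1):
        # Number the objects so several missing objects stay distinguishable.
        hmi.highlight_in_image(perceived_image, det.position, tag=str(idx))
    for idx, det in enumerate(missing, start=1):
        # Blocking prompt; the user may answer by voice or text input.
        det.label = hmi.prompt_manual_identification(tag=str(idx))
    return detections  # every result now has at least position and label
```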
In one embodiment of the present application, after the target detection results of all objects to be detected comprise at least a target position and a target label, the method further includes: storing the perception data and the target detection results of all objects to be detected after manual identification as edge case data; and counting the number of stored edge case data items and, if the number is larger than a preset threshold, iteratively training the current target detection model on the stored edge case data to obtain an iterated target detection model, where the target detection result of each object to be detected is obtained by the current target detection model performing target detection on each object to be detected in the perception data.
In this embodiment, the accuracy of the target detection model can be further improved by iteratively optimizing it on the edge case data. Illustratively, the vehicle end can directly use the stored edge case data to iteratively train the current target detection model, or the stored edge case data can be uploaded to the cloud, where the current model is iteratively trained on the received data and the iterated model is issued back to the vehicle end to upgrade its current target detection model; neither approach is limited here.
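A minimal sketch of this store-and-retrain trigger is given below; the storage backend, threshold value, and training callback are all assumptions for illustration.

```python
from typing import Callable, List, Tuple

edge_case_store: List[Tuple[object, list]] = []  # (perception data, detections)
RETRAIN_THRESHOLD = 1000  # preset threshold; the value is an assumption

def record_edge_case(perception_data, detections, current_model,
                     train: Callable):
    """Store the perception data plus the manually completed detection
    results as one edge-case sample; once enough samples accumulate,
    iteratively train the current model to obtain the iterated model."""
    edge_case_store.append((perception_data, detections))
    if len(edge_case_store) > RETRAIN_THRESHOLD:
        return train(current_model, edge_case_store)  # iterated model
    return current_model  # keep the current model for now
```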
In another embodiment of the present application, after prompting the in-vehicle user to manually identify the result missing object, if the in-vehicle user cannot identify the type of the result missing object, or the in-vehicle user autonomously determines the risk of existence of the traffic environment according to the actual road traffic condition, the in-vehicle user may exit the man-machine co-driving mode and take over the vehicle, so that the vehicle enters the manual driving mode.
And step S330, determining the traffic environment state according to the target detection results of all the objects to be detected after manual identification, generating risk prompt information if the traffic environment state is a risk state, and reminding a user in the vehicle to perform instruction feedback based on the risk prompt information.
In one embodiment of the present application, once the target detection results of all objects to be detected comprise at least a target position and a target label, these manually completed results are taken as the sensing result, and whether the road traffic environment is at risk is judged from it; that is, the traffic environment state is determined, the state being either a risk state or a safe state. If the road traffic environment is at risk, i.e. the traffic environment state is a risk state, risk prompt information is generated and the in-vehicle user is prompted to provide instruction feedback based on it. The risk prompt information may be at least one of a voice prompt, a text prompt, an audible-and-visual prompt, and so on; the in-vehicle user may feed back instructions through gestures, touch selection on the in-vehicle control screen, facial muscle activity, eye tracking, voice, text input, and so on, none of which is limited here. If there is no risk, i.e. the traffic environment state is a safe state, no subsequent action is performed and the process returns to step S310.
In one embodiment of the present application, if the traffic environment state is a risk state, the method further includes: determining the traffic environment risk level according to the target detection results of all the objects to be detected after manual identification; if the traffic environment risk level is lower than or equal to the preset risk level, generating risk prompt information according to the traffic environment risk level, and reminding a user in the vehicle to perform instruction feedback based on the risk prompt information.
In this embodiment, after determining that the road traffic environment is at risk, the system cannot afford to wait for a user instruction if the risk is high. The risks are therefore graded, and the traffic environment risk level is determined from the target detection results of all objects to be detected after manual identification. The traffic environment risk level is compared with the preset risk level; if it is lower than or equal to the preset risk level, the road traffic environment carries only a small risk, and the system can wait for a user instruction before controlling the vehicle.
In one embodiment of the present application, if the traffic environment risk level is higher than the preset risk level, the method includes: and generating risk prompt information according to the traffic environment risk level, and controlling the vehicle to enter an active safety mode.
In this embodiment, if the traffic environment risk level is higher than the preset risk level, the road traffic environment is at high risk, and waiting for a user instruction before controlling the vehicle could compromise driving safety. Therefore, risk prompt information is generated according to the traffic environment risk level, so that the in-vehicle user is clearly alerted to the risk and its level, and the vehicle is controlled to enter an active safety mode, ensuring driving safety. Illustratively, the active safety mode may be AEB (Autonomous Emergency Braking), an active cruise control system, or another active safety function, without limitation here.
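The two-way branch on the risk level might be written as in the following sketch; the preset level value and the vehicle interface are assumptions for illustration.

```python
PRESET_RISK_LEVEL = 2  # assumed preset risk-level boundary

def handle_risk_state(risk_level: int, vehicle) -> None:
    """Branch of step S330: at or below the preset level, prompt the
    in-vehicle user and wait for instruction feedback; above it, enter
    the active safety mode (e.g. AEB) without waiting."""
    vehicle.hmi.announce(f"Traffic risk level {risk_level} detected")
    if risk_level > PRESET_RISK_LEVEL:
        vehicle.enter_active_safety_mode()   # high risk: act immediately
    else:
        vehicle.await_user_instruction()     # low risk: the user decides
```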
Step S340, if a user instruction fed back by the user in the vehicle is received, controlling the vehicle based on the user instruction.
In one embodiment of the present application, because humans are good at handling perceived edge cases, having the in-vehicle user participate in and assist with target identification and decision planning can effectively extend the usable scenarios of the automatic driving function and improve safety. Therefore, when the road traffic environment is at risk, the vehicle can be controlled jointly by human and machine: the user feeds back a user instruction, and the automatic driving system then determines a specific control strategy from that instruction to control the vehicle. In this way the human assists the automatic driving system, giving the user a good automatic driving experience while ensuring driving safety. In addition, convenient and efficient human-machine interaction makes the automatic driving behavior clearer and more intuitive.
In another embodiment of the present application, if there is no result missing object, the method includes: determining the traffic environment state according to the target detection results of all the objects to be detected; if the traffic environment state is a risk state, generating risk prompt information, and reminding a user in the vehicle to perform instruction feedback based on the risk prompt information; and if the user instruction is received, controlling the vehicle based on the user instruction.
In this embodiment, when the target detection result of every object to be detected obtained through target detection already comprises at least a target position and a target label, no perceived edge case has occurred, and the traffic environment state can be determined directly from all the target detection results. When the traffic environment state is a risk state, risk prompt information is generated to prompt the in-vehicle user for instruction feedback, and once a user instruction is received the vehicle is controlled based on it. When the traffic environment state is a safe state, the process returns to step S310.
In one embodiment of the present application, controlling the vehicle based on user instructions includes at least one of:
if the received user instruction is a manual driving instruction, controlling the vehicle to enter a manual driving mode;
and if the received user instruction is a target interaction instruction, obtaining target control information by identifying the target interaction instruction so as to control the vehicle based on the target control information.
In this embodiment, the in-vehicle user may choose, based on the risk prompt, to take over the vehicle manually or to stay in automatic driving. If the received user instruction is a manual driving instruction, the automatic driving mode is exited, the main driving user takes over, and the vehicle enters manual driving mode; the manual driving instruction includes at least one of a manual braking instruction given by pressing the brake pedal, a manual acceleration instruction given by pressing the accelerator pedal, a manual control input such as the steering wheel or turn signals, or another manual driving instruction, without limitation here. If the received user instruction is a target interaction instruction, the automatic driving system identifies it to obtain specific target control information, so as to control the vehicle based on that information; the target interaction instruction may be a target voice instruction, a target gesture instruction, a target text instruction, or an instruction fed back through another interaction means, without limitation here.
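Dispatching these two instruction branches could look like the following sketch; the instruction fields, vehicle interface, and identification callback are illustrative assumptions.

```python
from typing import Callable

def on_user_instruction(instruction, vehicle, identify: Callable) -> None:
    """Step S340: either hand the vehicle over to the driver, or identify
    a target interaction instruction and act on the resulting control
    information."""
    if instruction.kind == "manual_driving":        # brake/accelerator/steering
        vehicle.exit_autopilot()
        vehicle.enter_manual_driving_mode()
    elif instruction.kind == "target_interaction":  # voice/gesture/text
        target_control_info = identify(instruction)
        vehicle.execute(target_control_info)
```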
In one embodiment of the present application, obtaining the target control information by identifying the target interaction instruction includes: matching the target control type to which a target voice instruction belongs according to the preset voice instruction-control type correspondence, where the target interaction instruction comprises the target voice instruction; and performing semantic recognition on the target voice instruction according to the target control type to obtain the target control information.
In this embodiment, the in-vehicle user may feed back a target voice instruction through voice interaction; after the target voice instruction is received, semantic recognition is performed on it to obtain the specific target control information. Before semantic recognition, the target control type to which the target voice instruction belongs can be determined from the preset voice instruction-control type correspondence, and semantic recognition is then performed according to that control type, which effectively improves the efficiency and accuracy of semantic recognition. Semantic recognition can be performed in many ways, for instance by a third party such as a voice service provider or by a neural network model, without limitation here.
In one embodiment of the present application, before matching the target control type to which the target voice command belongs, the method further includes: acquiring and displaying a plurality of control types; determining a control type to be set from a plurality of control types according to control type selection information of an in-vehicle user, receiving at least one custom voice instruction of the in-vehicle user, and configuring the corresponding relation between each custom voice instruction and the control type to be set so as to finish voice instruction classification setting of the control type to be set; after finishing the classification setting of the voice instructions of all control types, taking the corresponding relation between all configured custom voice instructions and the control types as the corresponding relation between the preset voice instructions and the control types.
In this embodiment, the voice instruction-control type correspondence may be established in advance by classifying voice instructions per control type. For example, in parking mode the in-vehicle user can retrieve all control types, including steering, speed change, lane change, air conditioning, windows, lighting, and so on, select one as the control type to be set, and record corresponding custom voice instructions according to their own habits, thereby configuring the correspondence between those custom voice instructions and the control type to be set. Repeating this completes the voice instruction classification setting for all control types, and the resulting correspondences between all custom voice instructions and control types are taken as the preset voice instruction-control type correspondence. A single control type may be given several different custom voice instructions; for example, the steering control type could take "turn left", "turn right", "wide turn", "tight turn", and so on.
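The registration and later matching could be held in a plain dictionary, as in this sketch; every command phrase below is an example value, not prescribed by this application.

```python
# Preset voice instruction -> control type correspondence, built in parking mode.
voice_instruction_to_control_type: dict = {}

def register_custom_instruction(phrase: str, control_type: str) -> None:
    """Configure one custom voice instruction for the control type being set."""
    voice_instruction_to_control_type[phrase] = control_type

# One control type may take several custom instructions:
for phrase in ("turn left", "turn right", "wide turn", "tight turn"):
    register_custom_instruction(phrase, "steering")

def match_control_type(target_voice_instruction: str):
    """Match the target control type first, so that semantic recognition
    can be narrowed to that type."""
    return voice_instruction_to_control_type.get(target_voice_instruction)
```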
In one embodiment of the present application, before controlling the vehicle based on the target control information, the method includes: obtaining the number of in-vehicle users and, if there are several, identifying the target in-vehicle user who issued the target interaction instruction; if the target in-vehicle user is the main driving user, controlling the vehicle based on the target control information, where the in-vehicle users include the main driving user; and, if the target in-vehicle user is the co-driver user, matching the target control level corresponding to the target control information according to the preset control information-control level correspondence, and controlling the vehicle based on the target control information when the preset co-driver permission level is higher than or equal to the target control level, where the in-vehicle users further include the co-driver user.
In this embodiment, there may be one or more in-vehicle users. When there is only one, that user can be taken directly as the main driving user. When there are several, because the main driving user and the co-driver can take part in assisted driving while rear-seat passengers do not, the identity of the target in-vehicle user who issued the target interaction instruction must be identified, to prevent the system from responding to interaction instructions from users who are not participating in assisted driving, which could affect driving safety. In addition, the co-driver is usually less informed about, and less focused on, vehicle and road conditions than the main driver, so the co-driver's authority can be limited: a permission level is preset for the co-driver in advance as the preset co-driver permission level. Therefore, before controlling the vehicle based on the target control information, the identity of the target in-vehicle user is identified first, distinguishing the main driving user, the co-driver user, and rear-seat passengers. If the target interaction instruction is a target voice instruction, identity can be recognized through voiceprint recognition; for example, before driving, voice samples are enrolled for the main driving user and the co-driver user, and after a target voice instruction is received, voiceprint features are extracted from the two samples and the instruction and compared, thereby determining the identity of the target in-vehicle user. The position of the user who issued the target voice instruction can also be judged through sound direction localization or sound pressure level detection, which likewise determines identity. If the target interaction instruction is a target gesture or text instruction, image recognition can determine who issued it and where that user is seated, so that identity follows from the position.
A corresponding control level is configured in advance for each piece of control information. If the target in-vehicle user is a co-driving user, the preset co-driving permission level is compared with the target control level corresponding to the target control information; if the target control level is lower than or equal to the preset co-driving permission level, the co-driving user has the authority for the target control information, and the vehicle is controlled based on it. If the target in-vehicle user is the main driving user, whose authority is the highest, the vehicle can be controlled directly based on the target control information. If the target in-vehicle user is neither the main driving user nor the co-driving user, the target interaction instruction is not responded to and the target control information is not executed; instead, the automatic driving system performs risk avoidance control on the vehicle and prompts that execution of the target interaction instruction failed. Illustratively, risk avoidance control includes, but is not limited to, deceleration, pulling over or lane-change avoidance, and the execution-failure prompt may take the form of a text display or a voice broadcast, among others.
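Illustratively, the role and permission-level check described above might be organized as follows; the concrete control levels and the control information-to-level table are hypothetical examples, since the application does not fix their values.

```python
from enum import IntEnum

class ControlLevel(IntEnum):
    COMFORT = 1      # e.g. air conditioning, windows
    LIGHTING = 2     # e.g. interior/exterior lights
    MANEUVER = 3     # e.g. lane change, acceleration
    CRITICAL = 4     # e.g. continuous overtaking, ramp entry/exit

# Hypothetical preset correspondence between control information
# and control level; the real table would be calibrated per vehicle.
CONTROL_LEVELS = {
    "adjust_ac_temperature": ControlLevel.COMFORT,
    "open_window": ControlLevel.COMFORT,
    "change_lane": ControlLevel.MANEUVER,
    "continuous_overtake": ControlLevel.CRITICAL,
}

def may_execute(control_info: str, user_role: str,
                co_driver_permission_level: ControlLevel) -> bool:
    """Decide whether the target control information may be executed,
    following the role/permission rules described above."""
    if user_role == "main_driver":
        return True                      # main driving user: full authority
    if user_role == "co_driver":
        # Unknown control information conservatively maps to the highest level.
        target_level = CONTROL_LEVELS.get(control_info, ControlLevel.CRITICAL)
        return co_driver_permission_level >= target_level
    return False                         # rear-seat passengers: no authority
```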
In one embodiment of the present application, when the preset co-driving permission level is lower than the target control level, indicating that the co-driving user does not have authority for the target control information, the target interaction instruction of the co-driving user may simply not be responded to: the target control information is not executed, the automatic driving system performs risk avoidance control on the vehicle, and a prompt is given that execution of the target interaction instruction failed.
In another embodiment of the present application, when the preset co-driving permission level is lower than the target control level, the method includes: generating and issuing co-driving permission level upgrade request information based on the target control level, to prompt the main driving user to confirm the request; if request confirmation information from the main driving user is received and indicates agreement, upgrading the preset co-driving permission level so that the upgraded level is higher than or equal to the target control level; and controlling the vehicle based on the target control information.
In this embodiment, if the target control level corresponding to the target control information is higher than the preset co-driving permission level, the co-driving user does not have the required authority, and a request may be made to the main driving user to grant a higher co-driving permission level. For example: if the target control level is level 4, upgrade request information such as "Agree to upgrade the co-driving permission level to level 4?" may be generated. The request may be displayed as text on a display device such as the central control screen or an electronic rearview mirror, or issued as voice, to prompt the main driving user to confirm it; neither form is limiting. If request confirmation information is received and indicates agreement, the preset co-driving permission level is updated and the vehicle is controlled based on the target control information.
In one embodiment of the present application, after prompting the main driving user to confirm the co-driving permission level upgrade request information, the method further includes at least one of:
if the request confirmation information is received and indicates refusal, performing risk avoidance control on the vehicle and prompting the in-vehicle user that execution failed;
if the request confirmation information is not received within the preset request confirmation time, performing risk avoidance control on the vehicle and prompting the in-vehicle user that execution failed.
In this embodiment, if the main driving user does not agree to upgrade the co-driving permission level, the co-driving user's target interaction instruction is not responded to; however, continuing to drive according to the original plan might cause a traffic accident and affect driving safety, so the automatic driving system performs risk avoidance control on the vehicle. Likewise, if the main driving user does not feed back request confirmation information for a long time, the vehicle should not simply continue according to the original plan. A timer (or similar mechanism) can therefore be started when the main driving user is prompted to confirm the upgrade request; if the preset request confirmation time elapses without confirmation being received, the co-driving user's target interaction instruction is not responded to and the automatic driving system performs risk avoidance control. Illustratively, the preset request confirmation time may be 2 seconds or another duration, and risk avoidance control may be deceleration, pulling over, lane-change avoidance or the like; neither is limited here. Performing risk avoidance control in these two situations effectively handles the traffic risk and ensures driving safety. When risk avoidance control is performed, an execution-failure prompt for the target interaction instruction is fed back to the in-vehicle users, so that they clearly understand the outcome and do not misinterpret the risk avoidance maneuver as a vehicle fault or loss of control, which improves user experience and comfort. The reason for the failure may also be given with the prompt.
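Illustratively, the upgrade request flow with the preset request confirmation time can be sketched as below; `prompt`, `poll_confirmation`, `risk_avoidance` and `notify_failure` are hypothetical callbacks into the HMI and motion-control stacks, and the 2-second timeout follows the example given above.

```python
import time

REQUEST_CONFIRMATION_TIMEOUT_S = 2.0  # "preset request confirmation time"

def request_permission_upgrade(target_level, prompt, poll_confirmation,
                               risk_avoidance, notify_failure):
    """Sketch of the co-driving permission upgrade flow described above."""
    prompt(f"Upgrade co-driving permission level to {target_level}?")
    deadline = time.monotonic() + REQUEST_CONFIRMATION_TIMEOUT_S
    while time.monotonic() < deadline:
        answer = poll_confirmation()     # None while no reply yet
        if answer == "agree":
            return True                  # caller upgrades the level and executes
        if answer == "refuse":
            break
        time.sleep(0.05)
    # Refused or timed out: avoid risk conservatively and tell the user.
    risk_avoidance()                     # e.g. decelerate, pull over
    notify_failure("Target interaction instruction was not executed")
    return False
```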
In one embodiment of the present application, controlling the vehicle based on the target control information includes: acquiring vehicle condition information of the vehicle; determining the execution permission state of the target control information according to the vehicle condition information, the traffic environment risk level, the target control information and the target detection results of all objects to be detected after manual identification; if the execution permission state of the target control information is allowed, executing the target control information to control the vehicle; and if the execution permission state of the target control information is prohibited, performing risk avoidance control on the vehicle and prompting the in-vehicle user that execution failed.
In this embodiment, driving styles differ considerably between drivers: some are aggressive and give target interaction instruction feedback freely, while others are cautious and conservative in their feedback. Therefore, to ensure driving safety and avoid the traffic risk that executing dangerous target control information would create, the vehicle condition information, the traffic environment risk level, the target control information and the target detection results of all objects to be detected after manual identification can be combined to judge whether executing the target control information would carry a potential risk, and thereby determine its execution permission state. If there is no potential risk, the execution permission state is allowed and the target control information can be executed to control the vehicle; otherwise the state is prohibited, the target control information is not executed, risk avoidance control is performed on the vehicle, and an execution-failure prompt for the target interaction instruction is issued. In addition, if it cannot be determined whether a potential risk exists, the execution permission state is also set to prohibited in order to ensure safe driving. This embodiment better unifies driving styles and enhances the predictability of the target vehicle, thereby improving the safety of automatic driving.
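Illustratively, this risk gate, including the conservative default when the assessment is inconclusive, might look like the following sketch; `assess_risk` is a hypothetical assessment routine whose internals the application does not specify.

```python
from enum import Enum
from typing import Callable, Optional

class ExecState(Enum):
    ALLOWED = "allowed"
    PROHIBITED = "prohibited"

def execution_permission_state(vehicle_condition, risk_level, control_info,
                               detections,
                               assess_risk: Callable[..., Optional[bool]]) -> ExecState:
    """Determine the execution permission state of the target control
    information. `assess_risk` returns True (potential risk), False
    (no risk) or None (inconclusive)."""
    risky = assess_risk(vehicle_condition, risk_level, control_info, detections)
    if risky is None:
        return ExecState.PROHIBITED      # cannot decide, so be conservative
    return ExecState.PROHIBITED if risky else ExecState.ALLOWED
```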
In another embodiment of the present application, after step S330, the method includes: if a user instruction is not received within the preset instruction feedback time, performing risk avoidance control on the vehicle.
In this embodiment, if the in-vehicle user does not feed back a user instruction for a long time, the vehicle does not receive the instruction in time, and continuing to drive according to the original plan may cause a traffic accident and affect driving safety. To avoid this, a timer (or similar mechanism) can be started when the in-vehicle user is reminded to give instruction feedback; if the preset instruction feedback time elapses without a user instruction being received, risk avoidance control is performed on the vehicle, effectively handling the traffic risk and ensuring driving safety. Illustratively, the preset instruction feedback time may be 2 seconds or another duration, without limitation.
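Illustratively, the instruction feedback timeout can be sketched as follows; all callbacks are hypothetical hooks into the HMI and ADAS stacks, and the 2-second value follows the example above.

```python
import time

INSTRUCTION_FEEDBACK_TIMEOUT_S = 2.0     # "preset instruction feedback time"

def await_user_instruction(poll_instruction, choose_maneuver, execute):
    """Wait for the in-vehicle user's feedback after a risk prompt; if
    none arrives in time, fall back to a conservative risk avoidance
    maneuver such as decelerating, pulling over or changing lanes."""
    deadline = time.monotonic() + INSTRUCTION_FEEDBACK_TIMEOUT_S
    while time.monotonic() < deadline:
        instruction = poll_instruction()  # None while no feedback yet
        if instruction is not None:
            return instruction            # handled by the normal command path
        time.sleep(0.05)
    execute(choose_maneuver())            # risk avoidance control
    return None
```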
In one embodiment of the present application, if the traffic environment state is a safe state, the method further includes: monitoring for a user instruction; if a user instruction is detected and it is a manual driving instruction, controlling the vehicle to enter a manual driving mode; and if the detected user instruction is a target interaction instruction, controlling the vehicle based on the target interaction instruction.
Controlling the vehicle based on the target interaction instruction includes: identifying the target interaction instruction to obtain target control information, and acquiring the number of in-vehicle users. If there is one in-vehicle user, that user is regarded as the main driving user, and the target control information is executed to control the vehicle. If there are multiple in-vehicle users, the identity of the target in-vehicle user issuing the target interaction instruction is identified; when that user is the main driving user, the target control information is executed to control the vehicle. When the target in-vehicle user is a co-driving user, the target control level corresponding to the target control information is matched according to the preset correspondence between control information and control levels; when the preset co-driving permission level is higher than or equal to the target control level, the target control information is executed to control the vehicle, and when it is lower, the target control information is not executed, the automatic driving system/ADAS (Advanced Driving Assistance System) controller automatically keeps the vehicle running normally in its current state, and the in-vehicle user is prompted that execution of the target interaction instruction failed.
In another embodiment of the present application, when the preset co-driving permission level is lower than the target control level, the method includes: generating and issuing co-driving permission level upgrade request information based on the target control level, to prompt the main driving user to confirm the request; if request confirmation information from the main driving user is received and indicates agreement, upgrading the preset co-driving permission level so that the upgraded level is higher than or equal to the target control level, and executing the target control information to control the vehicle; if the request confirmation information is received and indicates refusal, leaving the preset co-driving permission level unchanged, not executing the target control information, having the automatic driving system/ADAS controller automatically keep the vehicle running normally in its current state, and prompting the in-vehicle user that execution of the target interaction instruction failed; and if request confirmation information is not received within the preset request confirmation time, not executing the target control information, having the automatic driving system/ADAS controller automatically keep the vehicle running normally in its current state, and prompting the in-vehicle user that execution of the target interaction instruction failed.
Referring to fig. 4, fig. 4 is a block diagram of a vehicle control apparatus according to an exemplary embodiment of the present application. The apparatus may be applied in the implementation environment shown in fig. 2, specifically configured in the computer device 220. The apparatus is also suitable for other exemplary implementation environments and may be configured in other devices; this embodiment does not limit the implementation environment to which it applies.
As shown in fig. 4, the exemplary vehicle control apparatus includes: an acquisition module 410, configured to acquire perception data of the vehicle on the traffic environment during driving; a target detection module 420, configured to perform target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, the target detection result including at least a target position; an auxiliary recognition module 430, configured to, if a result missing object exists, highlight the result missing object in a perceived image according to its target position so as to prompt the in-vehicle user of the vehicle to manually identify it, and to receive the in-vehicle user's manual identification result for the result missing object as its target label, until the target detection results of all objects to be detected include at least a target position and a target label, wherein the perceived image is generated based on the perception data and the result missing object is an object to be detected whose target detection result includes a target position but lacks a target label; a risk determination module 440, configured to determine the traffic environment state according to the target detection results of all objects to be detected after manual identification, generate risk prompt information if the traffic environment state is a risk state, and remind the in-vehicle user to perform instruction feedback based on the risk prompt information; and an instruction response module 450, configured to, if a user instruction fed back by the in-vehicle user is received, control the vehicle based on the user instruction.
Referring to fig. 5, fig. 5 is a block diagram of a vehicle control system according to an exemplary embodiment of the present application. As shown in fig. 5, the exemplary vehicle control system includes a multi-sensor module 510, a determination module 520, a display module 530 and an interaction module 540, the determination module 520 including a sensing unit 521, a decision unit 522 and a control unit 523. The multi-sensor module 510 is used to acquire perception data of the vehicle on the traffic environment during driving. The sensing unit 521 is used to perform target detection on a plurality of objects to be detected in the perception data to obtain a target detection result for each object to be detected, the target detection result including at least a target position. The display module 530 is used to highlight a result missing object in the perceived image according to its target position, if such an object exists. The interaction module 540 is used to prompt the in-vehicle user of the vehicle to manually identify the result missing object and to receive the in-vehicle user's manual identification result as the object's target label. The decision unit 522 is used to determine the traffic environment state according to the target detection results of all objects to be detected after manual identification, once those results all include at least a target position and a target label. The interaction module 540 is further used to generate risk prompt information if the traffic environment state is a risk state and remind the in-vehicle user to perform instruction feedback based on it, so as to receive a user instruction fed back by the in-vehicle user. The control unit 523 is used to control the vehicle based on the user instruction if such an instruction fed back by the in-vehicle user is received.
The present embodiment also provides a vehicle including the vehicle control system provided in the foregoing embodiment.
The embodiment also provides a vehicle cloud system, comprising a cloud end and a vehicle. The vehicle is used to acquire perception data of the traffic environment during driving and to perform target detection on a plurality of objects to be detected in the perception data, obtaining a target detection result for each object to be detected, the result including at least a target position. If a result missing object exists, the vehicle highlights it in a perceived image according to its target position, to prompt the in-vehicle user to identify it manually, and receives the in-vehicle user's manual identification result as the object's target label, until the target detection results of all objects to be detected include at least a target position and a target label; the perceived image is generated based on the perception data, and a result missing object is an object to be detected whose target detection result includes a target position but lacks a target label. The vehicle then determines the traffic environment state according to the target detection results of all objects to be detected after manual identification; if that state is a risk state, it generates risk prompt information and reminds the in-vehicle user to perform instruction feedback based on it; if a user instruction fed back by the in-vehicle user is received, vehicle control is performed based on it. The vehicle also uploads the perception data and the target detection results of all objects to be detected after manual identification to the cloud as edge case data. The cloud is used to receive and store the edge case data and count the number of stored cases; if the number is larger than a preset threshold, it iteratively trains the current target detection model on the stored edge case data to obtain an iterated target detection model, where each object's target detection result is produced by a target detection model configured in the vehicle, and that in-vehicle model is the same version as the current target detection model. Finally, the cloud issues the iterated target detection model to the vehicle to update the target detection model configured in the vehicle.
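Illustratively, the cloud-side accumulation and retraining trigger described above might be organized as below; the threshold value and the `fine_tune`/`ota_dispatch` hooks are hypothetical, as the application does not fix them.

```python
EDGE_CASE_RETRAIN_THRESHOLD = 1000       # hypothetical preset threshold

class EdgeCaseStore:
    """Cloud-side sketch of the edge case pipeline: store uploaded cases
    and, once enough have accumulated, fine-tune the current detection
    model and push it back to the fleet via OTA. `fine_tune` and
    `ota_dispatch` are hypothetical hooks into the training pipeline
    and the OTA service."""

    def __init__(self, current_model, fine_tune, ota_dispatch):
        self.cases = []
        self.current_model = current_model
        self.fine_tune = fine_tune
        self.ota_dispatch = ota_dispatch

    def upload(self, perception_data, labeled_detections):
        """Called by the vehicle with the perception data and the
        manually identified target detection results."""
        self.cases.append((perception_data, labeled_detections))
        if len(self.cases) > EDGE_CASE_RETRAIN_THRESHOLD:
            self.current_model = self.fine_tune(self.current_model, self.cases)
            self.ota_dispatch(self.current_model)   # update in-vehicle models
            self.cases.clear()
```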
Referring to fig. 6, fig. 6 is a schematic structural diagram of a man-machine co-driving vehicle control system according to an embodiment of the present application. As shown in fig. 6, the system includes a cloud server, a vehicle-mounted sensor module, a vehicle-mounted central computing platform, ECUs (Electronic Control Units; "ECUs" here denotes multiple controller modules), a vehicle-mounted display unit and a voice interaction system. The cloud server receives the edge case data, iteratively trains the perception algorithm model, i.e. the target detection model, using that data, and sends the iterated model to the vehicle-mounted central computing platform for an OTA (Over-the-Air) upgrade. The vehicle-mounted sensor module collects perception data of the traffic environment, vehicle condition information and the like, and includes sensors such as cameras, millimeter-wave radars and lidars. The vehicle-mounted central computing platform handles target detection, risk judgment and the like, and includes a perception module, a decision module and a regulation module. The ECUs execute control instructions or control information, and include controllers such as the power controller, the steering controller and the risk avoidance controller. The vehicle-mounted display unit displays information, and includes display devices such as an AR-HUD and the vehicle screen. The voice interaction system includes a voice instruction authority judging module, a vehicle control semantic instruction learning module, an edge case auxiliary recognition module, an event data recording module and the like.
The voice instruction authority judging module verifies the identity of in-vehicle users through the voiceprint recognition system and, when several users are in the vehicle, can automatically recognize which is the main driving user and which the co-driving user. While the vehicle is running, the voice interaction system recognizes and responds to voice instructions issued by the two driving participants, the main driving user and the co-driving user, and assigns the corresponding control authority according to their identities. The module judges whether a voice instruction issued by an in-vehicle user falls within that user's control authority, the main driving user's authority being higher than the co-driving user's; when the system detects that a voice instruction issued by the co-driving user exceeds the co-driving authority, it applies to the main driving user for higher control authority. The module also analyses, in light of the road traffic scene, whether the user's voice instruction could cause a traffic accident or danger, and judges the instruction's execution permission state from the analysis result. When the execution permission state of the user's voice instruction is allowed, the system controls the vehicle to complete the corresponding function according to the instruction and gives a feedback prompt that execution succeeded; when the instruction would cause the vehicle to violate traffic rules or create a safety risk, the voice interaction system gives an execution-failure prompt and asks whether the user should be granted higher voice instruction execution authority. Basic speech recognition and voiceprint recognition services can be provided by a speech service provider, with the semantic information related to automatic driving and vehicle control translated, classified and then input to the voice interaction system through an interaction interface.
The vehicle control semantic instruction learning module allows in-vehicle users to define various voice instructions related to vehicle control functions. Referring to fig. 7, fig. 7 is a flowchart of vehicle control semantic instruction learning according to an embodiment of the present application. As shown in fig. 7, in the parking mode the user can start the voice instruction learning mode, select the type of vehicle control function to learn, i.e. the control type, set corresponding voice instructions according to personal habit, and import the resulting set of custom voice instructions into the vehicle control semantic instruction library. The voice instructions supporting customization cover basic functions such as steering, acceleration/deceleration, lane changing, air-conditioner temperature/airflow adjustment, window opening and closing, and interior and exterior light control, as well as complex functions such as ramp entry and exit and continuous overtaking.
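Illustratively, the custom voice instruction library could be sketched as follows; the class name, phrase normalization and JSON file format are assumptions made only for illustration.

```python
import json

class VehicleControlSemanticLibrary:
    """Sketch of the custom voice instruction library described above:
    in parking mode, the user picks a control type and records phrases
    for it; the mapping is persisted as the preset voice instruction to
    control type correspondence."""

    def __init__(self, path: str = "vc_semantic_library.json"):
        self.path = path
        self.phrase_to_control_type: dict[str, str] = {}

    def learn(self, control_type: str, phrases: list[str]) -> None:
        for p in phrases:
            self.phrase_to_control_type[p.strip().lower()] = control_type

    def match(self, utterance: str) -> str | None:
        return self.phrase_to_control_type.get(utterance.strip().lower())

    def save(self) -> None:
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(self.phrase_to_control_type, f, ensure_ascii=False)

# Example: the user binds two personal phrases to the lane-change type.
lib = VehicleControlSemanticLibrary()
lib.learn("lane_change", ["scoot over left", "slide right one lane"])
assert lib.match("Scoot over left") == "lane_change"
```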
The edge case auxiliary recognition module interacts with the vehicle-mounted central computing platform. When the vehicle is in the automatic driving state and the platform's perception module judges that an unrecognizable target, i.e. a result missing object, is present in the road traffic environment, an edge case has occurred: the unrecognizable target is highlighted on the vehicle-mounted display unit, and an edge case recognition confirmation request is sent to the driving participants, i.e. the main/co-driving users, through the edge case auxiliary recognition module in the voice interaction system. When a user gives clear feedback, the module sends the target object's classification information to the vehicle-mounted central computing platform, stores the edge case and uploads it to the cloud server's database, so that the automatic driving algorithm provider can further iterate and optimize the perception algorithm model, i.e. the target detection model.
The EDR (Event Data Recorder) module records the voice interaction data in a period before and after a collision accident occurs or is about to occur, including the voice instructions issued by all in-vehicle users, the corresponding vehicle control functions produced by the voice instruction translation system after analysis, and the authority associated with each voice instruction as judged by the voice instruction authority judging module. During driving, the event data recording module uploads this data to the cloud server's database while keeping a backup in local memory, and clears the local backup once the cloud confirms receipt. Combined with the vehicle-level EDR data, these records provide an important basis for accurate reconstruction of traffic accidents and determination of liability.
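Illustratively, the rolling-window recording and upload-then-clear behaviour of the event data recording module might look like the sketch below; the window length and the `upload` client are hypothetical.

```python
import time
from collections import deque

class VoiceEDR:
    """Keep a rolling window of voice-interaction records and, around a
    (near-)collision event, upload the window to the cloud, retaining a
    local backup until the cloud confirms receipt."""

    def __init__(self, window_s: float = 30.0):
        self.window_s = window_s
        self.records = deque()
        self.local_backup = []

    def log(self, instruction, translated_function, authority):
        now = time.monotonic()
        self.records.append((now, instruction, translated_function, authority))
        while self.records and now - self.records[0][0] > self.window_s:
            self.records.popleft()       # drop records outside the window

    def on_collision_event(self, upload):
        snapshot = list(self.records)
        self.local_backup.extend(snapshot)
        if upload(snapshot):             # cloud acknowledged receipt
            self.local_backup.clear()    # only then clear the local backup
```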
It should be appreciated that the vehicle-mounted central computing platform may communicate with the voice interaction system, the vehicle-mounted display unit and the ECUs over a variety of in-vehicle communication networks, including but not limited to CAN (Controller Area Network), LIN (Local Interconnect Network), FlexRay and Ethernet. The vehicle-mounted central computing platform is one example of the determination module 520; the determination module 520 may also be one or a combination of controllers with similar functions, such as a ZCU (Zone Control Unit, i.e. a zone controller), an intelligent driving domain controller or a VCU (Vehicle Control Unit, i.e. the whole-vehicle controller).
It should be appreciated that the voice interaction system is one example of the interaction module 540; the interaction module 540 may also be an interaction module using gesture interaction, touch interaction, expression interaction or the like.
Referring to fig. 8, fig. 8 is a flowchart of a man-machine co-driving vehicle control method according to an embodiment of the present application. The method can be applied to the man-machine co-driving vehicle control system provided in the foregoing embodiment, or to other systems; this embodiment does not limit the system to which it applies.
As shown in fig. 8, the man-machine co-driving vehicle control method at least includes the following steps:
1. after starting the vehicle, the user chooses to enable the man-machine co-driving vehicle control system;
2. the voice instruction authority judging module verifies the user's identity, confirms the number and identities of the driving participants (main/co-driving users), and automatically assigns vehicle control voice instruction authorities to all driving participants;
3. perception data of the road traffic environment is acquired through vehicle-mounted sensors such as cameras and radars, and target objects, i.e. the objects to be detected, are recognized and detected through the perception algorithm of the ADAS controller;
4. the target detection results are output on a vehicle-mounted display unit such as the vehicle screen or AR-HUD, and the system judges whether an unrecognizable target object, i.e. a result missing object, exists;
5. if an unrecognizable target exists on the vehicle's driving route, the system highlights it on the vehicle-mounted display unit such as the vehicle screen or AR-HUD and prompts the user, through the voice interaction system, to judge it;
6. according to the actual road traffic conditions, the user may either exit the man-machine co-driving mode and take over the vehicle, or identify the unknown object and confirm its category; the system saves such edge cases and uploads them to the cloud server's database, and the large number of object-identification edge cases supplied by users allows the perception algorithm model to be further optimized;
7. if no unrecognizable target exists on the vehicle's driving route, the system enables the automatic driving function, and the ADAS controller controls the vehicle to drive along the preset navigation route;
8. while the vehicle is running, the system automatically detects whether the user manually takes over the vehicle or issues a valid voice control instruction, i.e. a voice instruction; at the same time, the system judges from the perception results whether the road traffic environment carries a risk;
9. when the road traffic environment carries a risk while the vehicle is running, the automatic driving system judges the risk level (the traffic environment risk level), and the voice interaction system issues a risk prompt;
10. when the risk level is high, i.e. the traffic environment risk level is higher than the preset risk level, the system automatically activates active safety functions such as AEB (Automatic Emergency Braking) in situations such as the vehicle ahead braking suddenly, a vehicle in an adjacent lane cutting in quickly, or a pedestrian or two-wheeler crossing the road;
11. when the risk level is low, i.e. the traffic environment risk level is lower than or equal to the preset risk level, the system waits for the user's feedback; if the user performs a manual takeover action, such as stepping on the brake or accelerator pedal, turning the steering wheel or operating the turn signals, the system exits the man-machine co-driving mode and the user takes over control of the vehicle;
12. if the user does not manually take over the vehicle, the system waits for voice control instruction feedback; when the user (main/co-driver) does not give corresponding voice instruction feedback in time, the ADAS controller steers the vehicle into a conservative, safe response to the traffic risk, such as decelerating, pulling over or changing lanes to avoid;
13. when the user (main/co-driver) gives voice instruction feedback in time, the voice instruction authority judging system judges, according to the road conditions and the road traffic risk level, whether the user's voice instruction carries the corresponding execution authority;
14. if the user's voice instruction has the corresponding execution authority, the system controls the vehicle to complete the corresponding function according to the instruction, i.e. executes the corresponding control information, and gives a feedback prompt that the instruction executed successfully; the prompt may take forms including, but not limited to, voice or a text prompt on a vehicle-mounted display medium;
15. when the execution authority of the user's voice instruction is insufficient, the system gives an execution-failure feedback prompt and applies for whether to grant the user higher voice instruction execution authority;
16. if the corresponding execution authority is obtained, the system controls the vehicle to complete the corresponding function according to the voice instruction; if it is still not obtained, the ADAS controller controls the vehicle and responds to the traffic risk conservatively and safely, for example by decelerating, pulling over or changing lanes to avoid;
17. the system automatically records the locations or traffic scenes where voice instruction intervention or manual takeover was triggered, stores the user's voice instructions and manual-takeover driving maneuvers and uploads them to the cloud server, and, combined with the voice instructions and driving maneuvers of other users at the same locations or in similar scenes, optimizes the regulation and control algorithm of the automatic driving function.
In addition, when the road traffic environment carries no risk while the vehicle is running: if a manual takeover action is detected, the system exits the man-machine co-driving mode and the user takes over control of the vehicle; if neither a manual takeover action nor a voice control instruction is detected, the automatic driving system/ADAS controller automatically keeps the vehicle running normally in its current state; if a voice control instruction is detected, the voice instruction authority judging system judges whether it carries the corresponding execution authority. When the instruction has the corresponding authority, the system controls the vehicle to complete the corresponding function according to it and gives a feedback prompt that execution succeeded. When the authority is insufficient, the system gives an execution-failure prompt and applies for whether to grant the user higher voice instruction execution authority; if the authority is obtained, the system controls the vehicle to complete the corresponding function according to the voice control instruction, and if it is still not obtained, the automatic driving system/ADAS controller automatically keeps the vehicle running normally in its current state.
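Illustratively, the risk-handling branch of the flow above (steps 9 through 16) can be condensed into the following sketch; every callback is a hypothetical hook into the ADAS and interaction stacks, and the preset risk level is an assumed calibration value.

```python
PRESET_RISK_LEVEL = 2                     # hypothetical calibration value

def handle_traffic_risk(risk_level, aeb, wait_for_takeover,
                        wait_for_voice_instruction, has_authority,
                        execute, risk_avoidance):
    """Condensed sketch of steps 9-16 above; all arguments except
    risk_level are hypothetical callbacks."""
    if risk_level > PRESET_RISK_LEVEL:
        aeb()                             # high risk: active safety acts at once
        return
    if wait_for_takeover():               # brake/accelerator/steering input
        return                            # user took over; exit co-driving mode
    instruction = wait_for_voice_instruction()
    if instruction is None:               # no feedback within the preset time
        risk_avoidance()                  # decelerate, pull over or change lanes
        return
    if has_authority(instruction):        # may involve the upgrade request flow
        execute(instruction)              # feedback: executed successfully
    else:
        risk_avoidance()                  # authority refused or not granted
```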
For the specific details of this embodiment, please refer to the descriptions in the foregoing embodiments; they are not repeated here.
When a perceived edge case appears while the vehicle is driving automatically, the driver can intervene in vehicle control through the voice interaction system, helping the automatic driving system recognize and resolve the perceived edge scene and improving the experience of using the system. Once the perceived edge case is recognized, the human driver conveys the driving intention by issuing a voice instruction and the automatic driving system completes the vehicle maneuver, realizing man-machine co-driving; this better unifies driving styles, enhances the predictability of the target vehicle and improves the safety of automatic driving.
The man-machine interaction in this scheme is key to ensuring safe vehicle operation during man-machine co-driving: it provides an interaction channel through which the human (the natural driver) and the machine (the automatic driving system) exchange driving information and operations, making automatic driving behavior clearer and more intuitive and improving the vehicle's driving safety.
According to this scheme, when the in-vehicle user judges that the road traffic conditions meet the corresponding requirements, the man-machine co-driving vehicle control system is enabled and the automatic driving system controls the vehicle's lateral and longitudinal motion. Road environment perception relies on the automatic driving system's rich sensor configuration, and the perception and recognition results are output through vehicle-mounted display media such as the vehicle screen or AR-HUD, compensating for the human driver's blind spots. The main/co-driving user conveys driving intention through the voice interaction system by issuing user instructions, and the ADAS controller completes the vehicle maneuvers, freeing the driver's hands and feet, reducing fatigue on long journeys, and optimizing smoothness, safety and energy consumption during driving. At the same time, the driver can concentrate on observing and judging road conditions, playing to the human strength in handling perceived edge cases and making up for the automatic driving system's weakness in object recognition, which improves vehicle safety and reduces the probability of traffic accidents.
For target objects that the automatic driving system's perception algorithm cannot identify, the perception algorithm can be optimized and iterated through a cloud database built from the large number of object-identification edge cases provided by users. Meanwhile, the system automatically records the locations or traffic scenes where the driver's voice instruction intervention or manual takeover is repeatedly triggered, records and uploads the driver's voice instructions and manual-takeover driving maneuvers to the cloud server, and, combined with the voice instructions and driving maneuvers of other users at the same locations or in similar scenes, optimizes the regulation and control algorithm of the automatic driving function, improving the performance of the automatic driving algorithm.
It should be noted that the apparatus and the system provided in the foregoing embodiments share the same concept as the method provided above; the specific manner in which each module and unit operates has been described in detail in the method embodiments and is not repeated here. In practical applications, the functions of the apparatus and the system may be distributed across different functional modules as needed, i.e. their internal structures may be divided into different functional modules to complete all or part of the functions described above; this is not limited here.
The embodiment also provides an electronic device, including: one or more processors; and a storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the vehicle control method provided in the respective embodiments described above.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. It should be noted that, the electronic device 900 shown in fig. 9 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 9, the electronic device 900 includes a processor 901, a memory 902 and a communication bus 903; the communication bus 903 connects the processor 901 and the memory 902; the processor 901 executes the computer program stored in the memory 902 to implement the methods of one or more of the embodiments described above.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the vehicle control method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment or may exist alone without being incorporated in the electronic device.
The present embodiments also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device executes the vehicle control method provided in the above-described respective embodiments.
The electronic device provided in this embodiment includes a processor, a memory, a transceiver, and a communication interface, where the memory and the communication interface are connected to the processor and the transceiver and perform communication therebetween, the memory is used to store a computer program, the communication interface is used to perform communication, and the processor and the transceiver are used to run the computer program, so that the electronic device performs each step of the above method.
In this embodiment, the memory may include a random access memory (Random Access Memory, abbreviated as RAM), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
Regarding the computer-readable storage medium in this embodiment, those of ordinary skill in the art will appreciate that all or part of the steps of the method embodiments described above may be performed by hardware controlled by a computer program. The aforementioned computer program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the aforementioned storage medium includes various media capable of storing program code, such as ROM (read-only memory), RAM (random access memory), magnetic disks or optical disks.
The above embodiments are merely illustrative of the principles of the present application and its effectiveness and are not intended to limit the present application. Modifications and variations may be made to the above-described embodiments by those of ordinary skill in the art without departing from the spirit and scope of the present application. It is therefore contemplated that the appended claims will cover all such equivalent modifications and changes as fall within the true spirit and scope of the disclosure.

Claims (19)

1. A vehicle control method, characterized in that the method comprises:
the method comprises the steps of obtaining perception data of a vehicle on a traffic environment in the driving process, and carrying out target detection on a plurality of objects to be detected in the perception data to obtain a target detection result of each object to be detected, wherein the target detection result at least comprises a target position;
If a result missing object exists, highlighting the result missing object in a perceived image according to the target position of the result missing object to prompt an in-vehicle user of the vehicle to manually identify the result missing object, receiving a manual identification result of the in-vehicle user on the result missing object and taking the result missing object as a target label of the result missing object until all target detection results of the objects to be detected at least comprise the target position and the target label, wherein the perceived image is generated based on the perceived data, and the result missing object is the object to be detected of which the target detection result comprises the target position and lacks the target label;
determining a traffic environment state according to target detection results of all the objects to be detected after manual identification, generating risk prompt information if the traffic environment state is a risk state, and prompting an in-vehicle user to perform instruction feedback based on the risk prompt information;
and if a user instruction fed back by the user in the vehicle is received, controlling the vehicle based on the user instruction.
2. The vehicle control method according to claim 1, characterized in that the vehicle is controlled based on the user instruction, including at least one of:
If the received user instruction is a manual driving instruction, controlling the vehicle to enter a manual driving mode;
and if the received user instruction is a target interaction instruction, obtaining target control information by identifying the target interaction instruction, so as to control the vehicle based on the target control information.
3. The vehicle control method according to claim 2, characterized in that obtaining target control information by recognizing the target interaction instruction includes:
matching a target control type to which a target voice command belongs according to a preset voice command-control type corresponding relation, wherein the target interaction command comprises the target voice command;
and carrying out semantic recognition on the target voice instruction according to the target control type to obtain the target control information.
4. The vehicle control method according to claim 2, characterized in that before the vehicle is controlled based on the target control information, the method includes:
acquiring the number of the in-vehicle users, and if the number of the in-vehicle users is a plurality of, identifying the in-vehicle users which send out the target interaction instruction;
if the target in-vehicle user is a main driving user, controlling the vehicle based on the target control information, wherein the in-vehicle user comprises the main driving user;
And if the target in-vehicle user is a co-driving user, matching the target control level corresponding to the target control information according to the corresponding relation between the preset control information and the control level, and controlling the vehicle based on the target control information when the preset co-driving permission level is higher than or equal to the target control level, wherein the in-vehicle user also comprises the co-driving user.
5. The vehicle control method according to claim 4, characterized in that when the preset co-driving permission level is lower than the target control level, the method includes:
generating and sending out co-driving permission level upgrade request information based on the target control level, so as to prompt the main driving user to confirm the co-driving permission level upgrade request information;
if request confirmation information of the main driving user for the co-driving permission level upgrade request information is received, and the request confirmation information indicates agreement, upgrading the preset co-driving permission level so that the upgraded co-driving permission level is higher than or equal to the target control level;
and controlling the vehicle based on the target control information.
6. The vehicle control method according to claim 5, characterized in that after prompting the main driving user to confirm the co-driving permission level upgrade request information, the method further includes at least one of:
if the request confirmation information is received and is refused, carrying out risk avoidance control on the vehicle, and prompting failure of execution to the user in the vehicle;
and if the request confirmation information is not received within the preset request confirmation time, carrying out risk avoidance control on the vehicle, and prompting the user in the vehicle that the execution fails.
7. The vehicle control method according to claim 3, characterized in that before matching the target control type to which the target voice instruction belongs, the method further comprises:
acquiring and displaying a plurality of control types;
determining a control type to be set from a plurality of control types according to control type selection information of the in-vehicle user, receiving at least one custom voice instruction of the in-vehicle user, and configuring the corresponding relation between each custom voice instruction and the control type to be set so as to finish voice instruction classification setting of the control type to be set;
After finishing the classification setting of all the voice instructions of the control types, taking the corresponding relation between all the configured custom voice instructions and the control types as the corresponding relation between the preset voice instructions and the control types.
8. The vehicle control method according to any one of claims 2-7, characterized in that if the traffic environment state is a risk state, the method further includes:
determining the traffic environment risk level according to the target detection results of all the objects to be detected after manual identification;
and if the traffic environment risk level is lower than or equal to a preset risk level, generating the risk prompt information according to the traffic environment risk level, and prompting the in-vehicle user to perform instruction feedback based on the risk prompt information.
9. The vehicle control method according to claim 8, characterized in that if the traffic environment risk level is higher than the preset risk level, the method includes:
and generating the risk prompt information according to the traffic environment risk level, and controlling the vehicle to enter an active safety mode.
10. The vehicle control method according to claim 8, characterized in that controlling the vehicle based on the target control information includes:
Acquiring vehicle condition information of the vehicle;
determining the execution permission state of the target control information according to the vehicle condition information, the traffic environment risk level, the target control information and the target detection results of all the manually identified objects to be detected;
if the execution authority state of the target control information is allowed, executing the target control information to control the vehicle;
and if the execution permission state of the target control information is forbidden, carrying out risk avoidance control on the vehicle, and prompting the failure of execution to the user in the vehicle.
11. The vehicle control method according to any one of claims 1 to 7, characterized in that if the result missing object is not present, the method includes:
determining the traffic environment state according to the target detection results of all the objects to be detected;
if the traffic environment state is a risk state, generating the risk prompt information, and prompting the in-vehicle user to perform instruction feedback based on the risk prompt information;
and if the user instruction is received, controlling the vehicle based on the user instruction.
12. The vehicle control method according to any one of claims 1 to 7, characterized in that after reminding the in-vehicle user of instruction feedback based on the risk prompt information, the method includes:
And if the user instruction is not received within the preset instruction feedback time, carrying out risk avoidance control on the vehicle.
13. The vehicle control method according to any one of claims 1 to 7, characterized in that, until after the target detection results of all the objects to be detected include at least the target position and the target tag, the method further comprises:
taking the perception data and target detection results of all the objects to be detected after manual identification as edge case data and storing the edge case data;
counting the number of stored edge case data, and if the number is larger than a preset threshold value, performing iterative training on a current target detection model based on the stored edge case data to obtain an iterated target detection model, wherein a target detection result of each object to be detected is obtained by performing target detection on each object to be detected in the perception data by the current target detection model.
14. A vehicle control apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the perception data of the vehicle on the traffic environment in the driving process;
the target detection module is used for carrying out target detection on a plurality of objects to be detected in the perception data to obtain a target detection result of each object to be detected, wherein the target detection result at least comprises a target position;
The auxiliary recognition module is used for highlighting the result missing object in a perceived image according to the target position of the result missing object, so as to prompt an in-vehicle user of the vehicle to manually recognize the result missing object, receiving the manual recognition result of the in-vehicle user on the result missing object and taking the manual recognition result as a target label of the result missing object until all target detection results of the objects to be detected at least comprise the target position and the target label, wherein the perceived image is generated based on the perceived data, and the result missing object is the object to be detected of which the target detection result comprises the target position and lacks the target label;
the risk determination module is used for determining a traffic environment state according to target detection results of all the objects to be detected after manual identification, generating risk prompt information if the traffic environment state is a risk state, and reminding an in-vehicle user to perform instruction feedback based on the risk prompt information;
and the instruction response module is used for performing vehicle control based on the user instruction if a user instruction fed back by the in-vehicle user is received.
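The module composition of claim 14 could be expressed as a plain composition of collaborators, as in the non-claimed sketch below; every field and method name is a hypothetical stand-in:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class VehicleControlApparatus:
    acquisition: Any  # acquires perception data
    detector: Any     # target detection on objects to be detected
    assist: Any       # manual-identification assistance for missing labels
    risk: Any         # traffic-state assessment and risk prompting
    responder: Any    # user-instruction driven vehicle control

    def step(self) -> None:
        # One processing cycle: acquire, detect, complete labels with the
        # user's help, assess risk, and respond to a user instruction.
        data = self.acquisition.get_perception_data()
        detections = self.detector.detect(data)
        detections = self.assist.complete_labels(detections, data)
        if self.risk.assess(detections) == "risk":
            self.risk.prompt_user()
            instruction = self.responder.await_instruction()
            if instruction is not None:
                self.responder.control(instruction)
```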
15. A vehicle control system, characterized in that the system comprises a multi-sensor module, a determination module, a display module and an interaction module, wherein the determination module comprises a sensing unit, a decision unit and a control unit:
the multi-sensor module is used for acquiring perception data of the vehicle on the traffic environment in the running process;
the sensing unit is used for carrying out target detection on a plurality of objects to be detected in the sensing data to obtain a target detection result of each object to be detected, and the target detection result at least comprises a target position;
the display module is used for, if a result missing object exists, highlighting the result missing object in the perceived image according to the target position of the result missing object;
the interaction module is used for prompting an in-vehicle user of the vehicle to manually identify the result missing object, receiving the manual identification result of the in-vehicle user on the result missing object and taking the manual identification result as a target label of the result missing object;
the decision unit is used for determining the traffic environment state according to the target detection results of all the objects to be detected after manual identification, once the target detection results of all the objects to be detected at least comprise target positions and target labels;
the interaction module is further used for generating risk prompt information if the traffic environment state is a risk state, and prompting the in-vehicle user to perform instruction feedback based on the risk prompt information so as to receive a user instruction fed back by the in-vehicle user;
and the control unit is used for controlling the vehicle based on the user instruction if a user instruction fed back by the in-vehicle user is received.
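As one non-claimed way the display module might highlight a result missing object, the sketch below draws a rectangle onto an H x W x 3 image array; the use of NumPy and the (x1, y1, x2, y2) pixel-box convention are assumptions:

```python
import numpy as np

def highlight_box(image: np.ndarray, box: tuple, thickness: int = 3,
                  color=(255, 0, 0)) -> np.ndarray:
    # Draw a rectangle around the result-missing object so the in-vehicle
    # user can find it in the perceived image; box is (x1, y1, x2, y2).
    x1, y1, x2, y2 = box
    out = image.copy()
    out[y1:y1 + thickness, x1:x2] = color  # top edge
    out[y2 - thickness:y2, x1:x2] = color  # bottom edge
    out[y1:y2, x1:x1 + thickness] = color  # left edge
    out[y1:y2, x2 - thickness:x2] = color  # right edge
    return out
```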
16. A vehicle comprising the vehicle control system of claim 15.
17. A vehicle-cloud system, characterized in that the system comprises a cloud end and a vehicle;
the vehicle is used for acquiring perception data of the traffic environment in the driving process, and carrying out target detection on a plurality of objects to be detected in the perception data to obtain a target detection result of each object to be detected, wherein the target detection result at least comprises a target position; if a result missing object exists, highlighting the result missing object in a perceived image according to the target position of the result missing object, so as to prompt an in-vehicle user to manually identify the result missing object, and receiving the manual identification result of the in-vehicle user on the result missing object as a target label of the result missing object, until the target detection results of all the objects to be detected at least comprise the target position and the target label, wherein the perceived image is generated based on the perception data, and the result missing object is an object to be detected whose target detection result comprises the target position but lacks the target label; determining a traffic environment state according to the target detection results of all the objects to be detected after manual identification, generating risk prompt information if the traffic environment state is a risk state, and prompting the in-vehicle user to perform instruction feedback based on the risk prompt information; performing vehicle control based on the user instruction if a user instruction fed back by the in-vehicle user is received; and taking the perception data and the target detection results of all the objects to be detected after manual identification as edge case data and uploading the edge case data to the cloud end;
the cloud end is used for receiving and storing the edge case data; counting the number of stored edge case data, and if the number is larger than a preset threshold, performing iterative training on the current target detection model based on the stored edge case data to obtain an iterated target detection model, wherein the target detection result of each object to be detected is obtained by a target detection model configured in the vehicle performing target detection on each object to be detected in the perception data, and the target detection model configured in the vehicle is identical in version to the current target detection model; and issuing the iterated target detection model to the vehicle so as to update the target detection model configured in the vehicle.
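A non-claimed sketch of the vehicle-to-cloud exchange in claim 17, assuming a plain HTTP transport; the endpoint URL, routes, and parameter names are placeholders. Tagging each sample with the on-board model version lets the cloud keep its trained model aligned with the vehicle's detector:

```python
import requests  # assumes an HTTP transport between vehicle and cloud end

CLOUD_URL = "https://cloud.example/edge-cases"  # placeholder endpoint

def upload_edge_case(sample: dict, model_version: str) -> None:
    # Vehicle side: ship one edge-case sample tagged with the version of
    # the on-board detector, so the cloud trains the matching model.
    requests.post(CLOUD_URL, timeout=10,
                  json={"model_version": model_version, "sample": sample})

def fetch_model_update(current_version: str):
    # Vehicle side: download iterated model weights when the cloud holds
    # a newer version; returns None when already up to date.
    r = requests.get(f"{CLOUD_URL}/latest-model", timeout=10,
                     params={"have": current_version})
    return r.content if r.status_code == 200 else None
```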
18. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the vehicle control method according to any one of claims 1 to 13.
19. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the vehicle control method according to any one of claims 1 to 13.
CN202311716751.9A 2023-12-13 2023-12-13 Vehicle control method, device, system, vehicle, electronic device and storage medium Pending CN117775022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311716751.9A CN117775022A (en) 2023-12-13 2023-12-13 Vehicle control method, device, system, vehicle, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN117775022A true CN117775022A (en) 2024-03-29

Family

ID=90393631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311716751.9A Pending CN117775022A (en) 2023-12-13 2023-12-13 Vehicle control method, device, system, vehicle, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117775022A (en)


Legal Events

Date Code Title Description
PB01 Publication