WO2022205159A1 - Vehicle control method and device - Google Patents

Vehicle control method and device

Info

Publication number
WO2022205159A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
intent
intention
driver
drag
Prior art date
Application number
PCT/CN2021/084650
Other languages
English (en)
French (fr)
Inventor
许明霞 (Xu Mingxia)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2021/084650 (WO2022205159A1)
Priority to CN202180003366.2A (CN113840766B)
Publication of WO2022205159A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system

Definitions

  • the present application relates to the field of automatic driving, and in particular, to a vehicle control method and device.
  • Autonomous driving is commonly divided into six levels, L0 to L5.
  • In low-level autonomous driving scenarios (L2 and below), the vehicle is driven manually with the support of an advanced driving assistance system (ADAS). L3 is automatic driving performed within a specific area, L4 is highly autonomous driving, and L5 is fully autonomous driving.
  • In L3 to L5 autonomous driving, driving operations are performed by the vehicle itself, and the driver does not need to concentrate on driving. That is to say, automatic driving, especially high-level automatic driving, frees the driver's hands and feet, and even frees the driver from thinking about driving.
  • Existing automatic driving technology based on the driver's driving intention mainly recognizes driving actions on the steering wheel, brake, or accelerator, obtains the driving intention corresponding to the driving action, and then realizes automatic driving. This way of identifying the driving intention is therefore not flexible enough.
  • the present application provides a vehicle control method and device for flexibly identifying a driver's driving intention.
  • the vehicle control method provided by the present application may be executed by an electronic device supporting a vehicle control function.
  • Electronic devices can be abstracted as computer systems.
  • the electronic device supporting the vehicle control function in this application may also be referred to as a vehicle control device.
  • the vehicle control device may be the electronic device as a whole, or a component within the electronic device, for example, a chip related to the vehicle control function, such as a system chip or an image chip.
  • the system chip is also called a system on chip (SoC), or an SoC chip.
  • the vehicle control device may be a terminal device or in-vehicle equipment such as an in-vehicle computer, or it may be a system chip, an image processing chip, a computer system, or another type of chip that can be provided in the vehicle or the in-vehicle equipment.
  • a vehicle control method includes: the vehicle control device can acquire the first gesture operation of the driver, determine the first intention of the driver according to the first gesture operation, and control the driving state of the vehicle according to the first intention.
  • the first gesture operation includes the driver's touch operation on the touch screen and/or the driver's air gesture operation, and the touch operation includes a tap operation or a drag operation.
  • the vehicle control device can flexibly recognize the driving intention of the driver based on the first gesture operation, and control the driving state of the vehicle according to that intention, thereby providing a more flexible driving intention recognition method and a better driving experience.
  • the first intention may include at least one of an overtaking intention, a lane change intention, a steering intention, or a driving trajectory intention. Therefore, various types of driving intentions can be recognized according to the first gesture, and flexible control of the vehicle according to the first gesture operation can be realized.
  • the vehicle control device may also move the display position of the vehicle's icon on the touch screen according to the change of the drag track of the drag operation, and/or display the drag track of the drag operation on the touch screen. Therefore, the drag trajectory can be visualized, improving the user experience.
  • the vehicle control device may control the driving state of the vehicle according to the drag track.
  • when the first intent corresponding to the drag operation is not allowed to be executed, the vehicle control device may further perform at least one of the following operations: sending a first prompt message, where the first prompt message is used to notify that the first intent corresponding to the drag operation is not allowed to be executed; or, displaying the icon of the vehicle at a first display position, where the first display position is the display position of the vehicle's icon before the drag operation was acquired; or, clearing the drag track displayed on the touch screen. Therefore, when the first intent is allowed to be executed, the driving state of the vehicle can be controlled according to it, and when it is not allowed, timely feedback can be given to the driver to improve the user experience.
  • the vehicle control device may correct the drag track according to at least one of traffic rule conditions, safe driving conditions, environmental conditions, or comfort conditions, and display the corrected drag track on the touch screen; the corrected drag track may represent a suggested driving route. Therefore, when the first intention corresponding to the drag operation is not allowed to be executed, the drag trajectory can be corrected according to the first condition, and the correction can be intuitively fed back to the driver.
  • the vehicle control device may also send a second prompt message, where the second prompt message is used to inquire whether to control the driving state of the vehicle according to the revised drag trajectory.
  • the vehicle control device may further acquire a first operation of the driver, where the first operation indicates agreement to control the driving state of the vehicle according to the revised drag trajectory, and then control the driving state of the vehicle accordingly. Therefore, based on the driver's feedback on the second prompt message, the vehicle can be controlled according to the revised drag trajectory, improving the success rate of controlling the driving state of the vehicle according to the driver's driving intention.
  • the vehicle control device may further determine that the first intention does not satisfy the first condition, where the first condition includes at least one of a traffic rule condition, a safe driving condition, an environmental condition or a comfort condition.
  • the vehicle control device may also determine a second intention according to the first intention, the second intention satisfies the first condition, and the execution timing of the second intention is different from the execution timing of the first intention.
  • the vehicle control device may also control the driving state of the vehicle according to the second intention. Therefore, when the first intention is not allowed to be executed, the first intention can be modified according to the first condition and the second intention can be obtained, so controlling the driving state of the vehicle according to the second intention can improve the success rate of driving intention recognition.
  • the vehicle control device may also send a third prompt message, where the third prompt message is used to inquire whether to control the driving state of the vehicle according to the second intention.
  • the vehicle control device may also acquire a second operation of the driver, where the second operation indicates agreement to control the driving state of the vehicle according to the second intention. Therefore, based on the driver's feedback on the third prompt message, the vehicle can be controlled according to the second intention, improving the success rate of controlling the driving state of the vehicle according to the driver's driving intention.
  • the vehicle control device may further determine that the first intention does not satisfy the first condition, where the first condition includes at least one of a traffic rule condition, a safe driving condition or a comfort condition.
  • the vehicle control device may also determine a second intent based on the first condition, the second intent satisfying the first condition.
  • the vehicle control device may also send a third prompt message for inquiring whether to control the driving state of the vehicle according to the second intention.
  • a third operation of the driver is acquired, and the third operation indicates that the driver does not agree to control the driving state of the vehicle according to the second intention.
  • the vehicle control device may also stop controlling the driving state of the vehicle according to the second intention. Therefore, based on the third operation, the vehicle control device can effectively decide whether to control the driving state of the vehicle in accordance with the driver's wishes.
  • the present application provides a vehicle control device, which includes a processing module and an input and output module.
  • the input and output module can be used to obtain the first gesture operation of the driver.
  • the processing module may be configured to determine the first intention of the driver according to the first gesture operation, and control the driving state of the vehicle according to the first intention.
  • the first gesture operation includes at least one of the following operations: a touch operation by the driver on the touch screen, where the touch operation includes a tap operation or a drag operation; and an air gesture operation by the driver.
  • the first intent includes at least one of the following intents: an overtaking intent, a lane change intent, a steering intent, or a driving trajectory intent.
  • the input and output module can also be used to: move the display position of the vehicle's icon on the touch screen according to the change of the drag track of the drag operation; and/or display the drag track of the drag operation on the touch screen.
  • the processing module can also be used to control the driving state of the vehicle according to the drag track when the first intent corresponding to the drag track is allowed to be executed, or, when the first intent corresponding to the drag track is not allowed to be executed, to clear the drag track displayed on the touch screen.
  • the input/output module may also be configured to send a first prompt message, where the first prompt message is used to notify that the first intent corresponding to the drag operation is not allowed to be executed, and/or to display the icon of the vehicle at the first display position, where the first display position is the display position of the vehicle's icon before the drag operation was acquired.
  • when the first intent corresponding to the drag operation includes the driving trajectory intent, the processing module is further configured to correct the drag trajectory according to at least one of traffic rule conditions, safe driving conditions, environmental conditions, or comfort conditions.
  • the input and output module can also be used to: display the corrected dragging track on the touch screen, and the corrected dragging track represents a suggested driving route.
  • the input and output module can also be used to send a second prompt message, where the second prompt message is used to ask whether to control the driving state of the vehicle according to the revised drag trajectory, and to obtain the driver's first operation, where the first operation indicates agreement to control the driving state of the vehicle according to the corrected drag trajectory.
  • the processing module can also be used to control the driving state of the vehicle according to the revised drag trajectory.
  • the processing module can also be used to: determine that the first intention does not meet the first condition, where the first condition includes at least one of traffic rule conditions, safe driving conditions, environmental conditions, or comfort conditions; determine a second intention according to the first intention, where the second intention satisfies the first condition and the execution timing of the second intention differs from that of the first intention; and, specifically, control the driving state of the vehicle according to the second intention.
  • the input and output module can also be used to send a third prompt message, where the third prompt message is used to ask whether to control the driving state of the vehicle according to the second intention.
  • the input and output module can also be used to obtain a second operation of the driver, where the second operation indicates agreement to control the driving state of the vehicle according to the second intention.
  • the processing module may further determine that the first intention does not satisfy the first condition, where the first condition includes at least one of a traffic rule condition, a safe driving condition or a comfort condition.
  • the processing module may also be configured to determine a second intent according to the first condition, and the second intent satisfies the first condition.
  • the input and output module may also be used for: sending a third prompt message, where the third prompt message is used to inquire whether to control the driving state of the vehicle according to the second intention.
  • the input and output module can also be used to obtain a third operation of the driver, the third operation indicates that the driver does not agree to control the driving state of the vehicle according to the second intention.
  • the processing module is also operable to cease controlling the driving state of the vehicle in accordance with the second intent.
  • the present application provides a computing device, including a processor connected to a memory, the memory storing computer programs or instructions, and the processor being configured to execute them so that the computing device performs the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a computer-readable storage medium on which a computer program or instructions are stored; when the computer program or instructions are executed, a computer is enabled to perform the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a computer program product; when a computer executes the computer program product, the computer is caused to perform the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a chip, which is connected to a memory and used to read and execute the computer programs or instructions stored in the memory, so as to implement the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a vehicle, which includes the vehicle control device of the second aspect or any possible implementation of the second aspect, and an execution device, so as to implement the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a vehicle, which includes the chip of the sixth aspect and the execution device, so as to implement the method of the first aspect or any possible implementation manner of the first aspect.
  • the vehicle control device can determine the driver's first intention according to the driver's touch operation and/or air gesture operation, and control the driving state of the vehicle in the automatic driving state according to the first intention; in other words, the driving intention is used to control the driving state of the vehicle, where the driving state refers to a state related to the driving mode of the vehicle. Therefore, the above solution provides a more flexible driving intention recognition method for automatic driving scenarios and can give drivers a better autonomous driving experience.
  • the control of the driving state may be regarded as short-term control or control performed a limited number of times.
  • For example, the vehicle control device detects a touch operation and performs one-time control of the driving state according to that operation.
  • the vehicle control device may display the drag track of the drag operation on the display screen, and/or move the vehicle's icon following the driver's drag operation, so as to visualize the drag operation. If the vehicle control device recognizes that the drag track is not allowed to be executed, it can clear the drag track, restore the position of the vehicle icon, or notify the driver that the drag operation or the corresponding driving intention is not allowed to be executed. In addition, the vehicle control device may also modify the drag track according to at least one of traffic rule conditions, safe driving conditions, environmental conditions, or comfort conditions, and display the corrected drag track on the touch screen.
  • the corrected drag track can represent a suggested driving route, so the correction to the drag track can be intuitively fed back to the driver. The device can also ask the driver whether to control the driving state of the vehicle according to the revised drag trajectory; if the driver agrees through gestures or other human-computer interaction operations, the vehicle control device controls the driving state of the vehicle according to the corrected trajectory, thereby improving the success rate of vehicle control.
  • when the vehicle control device recognizes that the first intention does not satisfy at least one of traffic rule conditions, safe driving conditions, environmental conditions, or comfort conditions, it may prompt the driver that the first intention is not allowed to be executed, thereby providing feedback for the first gesture operation. Further, the vehicle control device may ask the driver whether to execute a second intention, where the second intention is determined from the first intention so as to satisfy the traffic rule, safe driving, environmental, and comfort conditions; for example, the second intention may be the first intention with delayed execution, as sketched below. If the driver agrees to execute the second intention through gestures or other human-computer interaction operations, the vehicle control device can control the driving state of the vehicle according to the second intention, thereby improving the success rate of vehicle control.
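  • A minimal sketch of the delayed-execution variant described above (the names, the search loop, and `allowed_fn`, which stands in for the condition checks, are illustrative, not from the patent):

    def derive_second_intent(first_intent, now_s, allowed_fn, horizon_s=10.0, step_s=1.0):
        """Keep the content of the first intent but search forward in time for
        the first execution moment at which all conditions are satisfied;
        return None if no such moment exists within the horizon."""
        t = now_s + step_s
        while t <= now_s + horizon_s:
            if allowed_fn(first_intent, t):
                return {**first_intent, "execute_at_s": t}
            t += step_s
        return None

    # Toy check: a lane change blocked for the next 3 seconds by a passing car.
    allowed = lambda intent, t: t >= 3.0
    print(derive_second_intent({"type": "change_lane_left"}, 0.0, allowed))
    # {'type': 'change_lane_left', 'execute_at_s': 3.0}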
  • the technical solutions provided by the embodiments of the present application can support the vehicle control device to flexibly identify the driver's driving intention, and improve the user experience in the automatic driving scenario.
  • the vehicle control device can interact with the driver in a timely manner about whether a driving intention is allowed to be executed, so as to ensure driving safety and compliance and to enhance the driver's interactive experience and driving participation.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a vehicle control device according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of another vehicle control device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of another vehicle control device provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a vehicle control method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a vehicle icon display mode provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a drag track provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another drag track provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another drag track provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an interface for asking whether to execute a second intent according to an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of another vehicle control method provided by an embodiment of the present application.
  • the present application provides a vehicle control method and device, which flexibly identify the driver's driving intention according to the driver's gesture operation and, through an automatic driving algorithm, provide safety and compliance guarantees for the vehicle control instructions generated according to that driving intention.
  • the method and the device are based on the same technical concept. Since the principles by which they solve the problem are similar, the implementations of the device and the method can refer to each other, and repeated descriptions are omitted.
  • the vehicle control device may determine the first intention of the driver according to the first gesture operation of the driver, and control the driving state of the vehicle according to the first intention to realize automatic driving of the vehicle.
  • the first gesture operation includes the driver's touch operation on the touch screen or an operation such as an air gesture.
  • the touch operation is, for example, a tap operation or a drag operation.
  • the vehicle is controlled based on the driver's human-computer interaction operation: the vehicle control device recognizes the driver's driving intention from that operation, thereby realizing flexible recognition of the driving intention.
  • the present application can be applied to autonomous vehicles (hereinafter referred to as vehicles), especially those with human-machine interaction (HMI) functions, functions for calculating and judging the driving state of the vehicle through an automatic driving algorithm, and functions for controlling the motion of the vehicle.
  • the vehicle may include at least one automated driving system to support the vehicle in autonomous driving.
  • the vehicle can also be replaced by another type of vehicle, such as a train, an aircraft, or a mobile platform, according to actual needs; this application does not limit this.
  • the vehicle control device can support the function of human-computer interaction, the function of calculating and judging the driving state of the vehicle through the automatic driving algorithm, and the function of controlling the motion of the vehicle.
  • the vehicle control device may be integrated with the vehicle, for example, the vehicle control device may be provided inside the vehicle.
  • alternatively, a separate setup can be used, with remote communication between the vehicle control device and the vehicle.
  • the vehicle control device can be implemented in the form of a terminal device and/or a server.
  • the terminal device here may be, for example, a mobile phone, a computer, a tablet computer, or a vehicle-mounted device.
  • the server may be a cloud server or the like.
  • the terminal device can provide the following functions: collecting the user's gesture operation or human-computer interaction operation, inquiring of or prompting the user, and controlling the driving state of the vehicle according to the gesture operation input by the user.
  • the driving state of the vehicle can be controlled based on the communication between the terminal device and the vehicle, for example, as shown in FIG. 1.
  • the present application can assist in identifying the driver's driving intention through the terminal and control the driving state of the vehicle. It should be understood that input in this application refers to information transmission from the driver to the vehicle control device, and output refers to information transmission from the vehicle control device to the driver.
  • FIG. 2 shows a schematic structural diagram of a possible vehicle control device, and the structure may include a processing module 210 and an input and output module 220 .
  • the structure shown in FIG. 2 may be a terminal device, or a functional component of the vehicle control device described in the present application.
  • the input and output module 220 may include devices for supporting human-computer interaction functions (also called human-computer interaction devices), such as a touch screen, a camera, a speaker, a microphone, or an ultrasonic sensor; alternatively, it may include modules that communicate with such devices.
  • the processing module 210 may be a processor, eg, a central processing unit (CPU).
  • alternatively, the input and output module 220 may be an interface circuit used to connect with and communicate with devices supporting human-computer interaction functions, such as a touch screen, a camera, a millimeter-wave radar, a speaker, a microphone, or an ultrasonic sensor; in this case, the processing module 210 may be a processor.
  • the input/output module 220 may be an input/output interface of the chip, and the processing module 210 may be a processor of the chip, which may include one or more central processing units.
  • processing module 210 in this embodiment of the present application may be implemented by a processor or processor-related circuit components, and the input/output module 220 may be implemented by a transceiver, a device for supporting human-computer interaction functions, or related circuit components.
  • the processing module 210 may be used to perform all operations performed by the vehicle control device in any embodiment of the present application except input and output operations, for example, processing operations such as recognizing the driving intention, judging whether the driving intention is allowed to be executed, and controlling the driving state of the vehicle, and/or other processes supporting the embodiments described herein, such as generating the messages, information, and/or signaling output by the input/output module 220 and processing the messages, information, and/or signaling input by it.
  • the input and output module 220 can be used to perform all input and output operations performed by the terminal device and/or server in any embodiment of the present application, for example, realizing human-computer interaction through a device supporting human-computer interaction functions, communicating with such a device, or communicating with the device that controls the driving state of the vehicle, and/or supporting other processes in the embodiments described herein.
  • the input/output module 220 may be a functional module, which can perform both output operations and input operations.
  • when performing a sending operation, the input/output module 220 can be considered a sending module, and when performing a receiving operation, it can be considered a receiving module; alternatively, the input and output module 220 may comprise two functional modules, a sending module and a receiving module, with "input and output module 220" being their collective name, where the sending module completes sending operations and the receiving module completes receiving operations.
  • FIG. 3 shows a schematic structural diagram of another vehicle control device, which is used to execute the actions performed by the vehicle control device provided by the embodiments of the present application.
  • the vehicle control device may include at least one of a processor, a memory, an interface circuit, or a human-computer interaction device.
  • the processor is mainly used to implement the processing operations provided by the embodiments of the present application, such as controlling the vehicle control device, executing a software program, and processing data of the software program.
  • the memory is mainly used to store software programs and data.
  • human-computer interaction devices can be used to support human-computer interaction, and can include a touch screen, a camera, a millimeter-wave radar, a speaker, a microphone, or an ultrasonic sensor, etc.
  • the functions include but are not limited to: obtaining the gestures input by the driver, etc.
  • the interface circuit can be used to support the communication of the vehicle control device, for example, to communicate with an externally connected human-computer interaction device when the vehicle control device does not itself include one.
  • the interface circuit may include a transceiver or an input-output interface.
  • in an actual vehicle control device, there may be one or more processors and one or more memories.
  • the memory may also be referred to as a storage medium or a storage device or the like.
  • the memory may be set independently of the processor, or may be integrated with the processor, which is not limited in this embodiment of the present application.
  • FIG. 4 shows another vehicle control device according to an embodiment of the present application. The vehicle control device may include a human-computer interaction module, a human-computer interaction control module, a decision planning calculation module, and a vehicle motion control module.
  • the human-computer interaction module can be used to realize the input and output interaction with the driver in the vehicle.
  • the specific form is related to the way of interaction.
  • the human-computer interaction module may include a touch screen.
  • the human-computer interaction module may include a camera, millimeter-wave radar or ultrasonic sensor, etc.
  • the human-computer interaction module may include a microphone and a speaker.
  • the input and output may also be mixed; for example, input through the microphone and output through the touch screen.
  • the input methods may include combinations of touch gestures and air gestures, and the output methods may include displaying on the display screen and playing voice, sound, or music; this application does not specifically limit them.
  • the human-computer interaction module can also be used to notify the human-computer interaction control module of a detected human-computer interaction operation. The human-computer interaction module can identify the driver's human-computer interaction operation according to predefined characteristics corresponding to that operation, and send the detected information about the operation to the human-computer interaction control module, so that the latter can identify the operation from that information.
  • the human-computer interaction module can at least be used to detect the first gesture operation and human-computer interaction operation involved in this application.
  • for example, the human-computer interaction module can send the touch operation information collected by the touch screen to the human-computer interaction control module, including interactive content information such as the touch location, the drag track, the drag duration, and the text and symbols displayed on the display.
  • the human-computer interaction module can send the voice signal to the human-computer interaction control module, or can extract the semantics of the voice signal and send the semantics of the voice signal to the human-computer interaction control module.
  • the human-computer interaction module can send the video signal collected by the camera, the detection signal of the ultrasonic sensor, or the photo signal to the human-computer interaction control module; alternatively, it can extract from these signals the description information corresponding to the air gesture, such as information describing the gesture as a nod or a wave, and send that description information to the human-computer interaction control module.
  • the human-computer interaction module can also be used to output (or present) information to inquire of and/or inform the driver, for example, by displaying symbols or text on a display screen, or by playing voice, sound, or music through a speaker. If the purpose of the output is to query the driver, the human-computer interaction module can also detect the driver's feedback on the query, for example, by detecting, within a period after the query (e.g., 10 or 20 seconds), whether the driver has made a human-computer interaction operation representing feedback on the query. In this way, the human-computer interaction control module can learn the driver's feedback result on the query, and the feedback result may express agreement or disagreement with the queried content.
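  • As an illustration of the query-and-feedback behavior just described, the following is a minimal sketch (not from the patent) of a query loop with a detection window; `show_prompt` and `poll_interaction` are hypothetical stand-ins for the module's actual output and input paths:

    import time

    def show_prompt(text):
        """Hypothetical output path: display the query on the touch screen
        and/or play it through the speaker."""
        print(f"[HMI] {text}")

    def poll_interaction():
        """Hypothetical input path: check the touch screen, microphone, or
        camera for a pending response and map it to 'agree' / 'disagree'.
        This stub reports that no feedback has been detected yet."""
        return None

    def query_driver(text, timeout_s=10.0):
        """Ask the driver a question and wait up to timeout_s for feedback,
        mirroring the 10- or 20-second detection window described above.
        Returns 'agree', 'disagree', or 'no_response'."""
        show_prompt(text)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            feedback = poll_interaction()
            if feedback in ("agree", "disagree"):
                return feedback
            time.sleep(0.1)  # poll at 10 Hz instead of busy-waiting
        return "no_response"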
  • the human-computer interaction control module can be used to realize the recognition of the driving intention input by the human-computer interaction, and forward the recognized intention to the decision planning calculation module.
  • the human-computer interaction control module can also be used to forward information from the decision planning calculation module, which may need to be output (or presented) to the driver/user by the human-computer interaction module.
  • the human-computer interaction control module can identify the driving intention corresponding to the human-computer interaction operation according to the description information of the human-computer interaction operation sent by the human-computer interaction module.
  • for example, the driver can be queried through the human-computer interaction module according to the information from the decision planning calculation module.
  • the decision planning calculation module can be used to judge whether the driving intention determined by the human-computer interaction control module conforms to the rules of automatic driving, and adjust the driving intention if necessary.
  • the decision planning calculation module can also be used to determine a control command according to the driving intention, so that the vehicle motion control module can control the vehicle according to the control command.
  • the human-computer interaction control module may also be called an autonomous driving brain or an autonomous driving system. It may include chips that execute autonomous driving algorithms, such as artificial intelligence (AI) chips, graphics processing unit (GPU) chips, or central processing unit (CPU) chips, or it may be a system composed of multiple chips; the embodiments of the present application do not limit this.
  • the vehicle motion control module can be used to control the driving state of the vehicle according to the control command from the decision planning calculation module, so it can be used to realize the control of the driving state involved in the embodiments of the present application.
  • the vehicle control device in this application can also be replaced by an electronic device, a vehicle, or vehicle-mounted equipment, and the like.
  • the actions performed by the vehicle control device in the present application may also be performed by a main body such as an electronic device, a vehicle, or an in-vehicle device.
  • the method provided by the embodiment of the present application will be described below with reference to FIG. 5 .
  • the method may be performed by a vehicle control device.
  • the vehicle control device may include any one or more of the structures shown in FIGS. 2 to 4 .
  • the processing in the method can be implemented by the processing module 210 shown in FIG. 2, the processor shown in FIG. 3, or the human-computer interaction control module, decision planning calculation module, and vehicle motion control module shown in FIG. 4.
  • the processing actions in the method provided in this example include, but are not limited to, recognizing gestures or human-computer interaction operations, determining the driving intention, and controlling the driving state of the vehicle according to the driving intention.
  • the interaction between the vehicle control device and the driver can be realized by the input and output module shown in FIG. 2, or by the interface circuit or human-computer interaction device shown in FIG. 3, for example, to obtain the driver's gesture or human-computer interaction operation, or to display or present information to the driver, so as to realize actions such as display, notification, or inquiry directed at the driver.
  • S101: The vehicle control apparatus acquires a first gesture operation of the driver.
  • the first gesture operation includes at least one of a touch operation performed by the driver on the touch screen or an air gesture operation.
  • the touch operation may be a tap operation and/or a drag operation performed by the driver on the touch screen.
  • air gesture operations include actions made by the driver with body parts such as the hands or head, including but not limited to waving, making a fist, nodding, blinking, or shaking the head.
  • the first gesture operation may be one or more of the operations exemplified above; for example, it may be a touch operation, an air gesture operation, or various combinations of touch operations and air gesture operations.
  • for example, the vehicle control device may recognize the type of the driver's touch on the touch screen.
  • the touch operation may include a tap-type operation and/or a drag-type operation, and tap operations can be further divided into operations such as single-click and double-click.
  • the touch screen can also identify information such as the track and/or speed of a drag operation. It should be understood that an icon of the vehicle may be displayed on the display screen, and the driver may enter the touch operation by tapping or dragging the icon. As shown in FIG. 6, an icon representing the vehicle may be displayed on the touch screen, together with the current driving state of the vehicle, such as the direction of the vehicle's head, the lane in which the vehicle is located, and/or the driving speed.
  • the driver can perform a first touch operation on the vehicle icon displayed on the touch screen.
  • the vehicle control device can recognize the driver's air gesture operation through a camera, a millimeter-wave radar, or an ultrasonic sensor.
  • different air gesture operations can be preset with different meanings, for example, nodding or making a fist to express agreement, and shaking the head or waving a hand to express disagreement.
  • for example, the camera can collect consecutive multi-frame images, identify feature pixels in the images, and determine the position change characteristics of those feature pixels across the frames; the feature pixels are, for example, edge images (of the face or hand) obtained by an edge detection algorithm. If the position change characteristics match a preset feature, the meaning of the air gesture operation is determined to be the preset meaning corresponding to that feature.
  • for example, if the feature pixels include edge images of the user's fingers and palm, and across the consecutive frames the finger edges become more curved and move closer to the palm, the gesture can be identified as making a fist, and its meaning is the meaning preset for the fist gesture.
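  • A minimal sketch of this fist check, assuming an upstream edge-detection stage has already reduced each frame to a palm center and fingertip coordinates (that stage, and the threshold, are illustrative, not from the patent):

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def looks_like_fist(frames, shrink_ratio=0.5):
        """Classify a fist from per-frame (palm_center, [fingertip, ...]) features:
        the gesture counts as a fist when the mean fingertip-to-palm distance
        in the last frame has shrunk to shrink_ratio of the first frame's."""
        def mean_spread(frame):
            palm, tips = frame
            return sum(dist(t, palm) for t in tips) / len(tips)
        return mean_spread(frames[-1]) <= shrink_ratio * mean_spread(frames[0])

    # Toy example: five fingertips collapsing toward the palm between two frames.
    open_hand = ((0, 0), [(10, 0), (8, 6), (0, 10), (-8, 6), (-10, 0)])
    closed    = ((0, 0), [(3, 0), (2, 2), (0, 3), (-2, 2), (-3, 0)])
    print(looks_like_fist([open_hand, closed]))  # True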
  • S102: The vehicle control device determines the first intention of the driver according to the first gesture operation.
  • the first intention may also be referred to as a driving intention, and is used to represent the driver's intention to control the driving state of the vehicle.
  • the first intent includes at least one of an acceleration intent, a deceleration intent, a parking intent, an overtaking intent, a lane change intent, a steering intent, a driving trajectory intent, a drift intent, a following intent, or an intent to adjust the acceleration.
  • the driver can express the driving intention through the first gesture operation. For example, when the driver wants the vehicle to perform an overtaking action, the driver can perform the first gesture operation corresponding to the overtaking intention, and the vehicle control device recognizes from that operation that the driver's intention is an overtaking intention.
  • the vehicle control device may also ask the driver whether to execute the first intention, so as to avoid errors in the recognition of the driving intention. For example, after obtaining the first intention, the vehicle control device may inquire whether the driver agrees to execute the first intention, or in other words, ask the driver whether to control the driving state of the vehicle according to the first intention.
  • for example, the vehicle control device can display the text corresponding to the first intent on the display screen, or play the voice corresponding to the first intent through the speaker, then detect the driver's human-computer interaction operation and determine from it whether the driver agrees to execute the first intent.
  • human-computer interaction operations expressing agreement to execute the first intention include, for example, the driver touching a virtual button on the touch screen that expresses agreement, answering with a voice containing words such as "agree" or "confirm", or nodding, among other actions and gestures that express consent.
  • human-computer interaction operations expressing disagreement with the execution of the first intention include, for example, the driver touching a virtual button on the touch screen that expresses disagreement, answering with a voice containing words such as "disagree" or "cancel", or shaking the head or waving a hand.
  • the vehicle control device can identify the intention corresponding to the operation according to the position information of the touch operation. For example, if the touch operation is the driver's touch operation on a specific area, the vehicle control device can identify what kind of human-computer interaction operation the touch operation is based on the meaning of the specific area.
  • for example, the touch screen can display at least one virtual key corresponding to a driving intention; when the driver touches a virtual key, the driving intention expressed by the touch operation is the driving intention corresponding to that key.
  • the virtual key may display the driving intention corresponding to the virtual key by means of text or icon display.
  • for example, a virtual key can display information such as "accelerate" and/or an acceleration value such as "5 kilometers per hour (km/h)". For example, an "accelerate" virtual button is displayed on the touch screen; when the driver touches it, a virtual button with an acceleration value of 5 km/h and a virtual button with an acceleration value of 10 km/h can be displayed. When the driver then touches the 5 km/h button, the vehicle control device recognizes that the first intention is an acceleration intention with an acceleration value of 5 km/h.
  • the virtual keys displayed on the touch screen can also correspond to speed change values represented by a number and a sign. For example, when the driver touches a virtual key with the value "-5km/h", it indicates that the first intention is a deceleration intention with a deceleration value of 5 km/h.
  • buttons corresponding to other driving intentions, such as the deceleration intention, parking intention, overtaking intention, drift intention, following intention, and/or lane change intention, can be set in a similar manner; a minimal parsing sketch is given below.
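  • A minimal sketch of parsing such signed speed-delta keys into an intent (the labels and field names are illustrative, not from the patent):

    def intent_from_speed_button(label_kmh, current_kmh):
        """Map a signed speed-delta key such as '-5km/h' or '+5km/h' to an intent:
        the sign selects acceleration vs. deceleration and the magnitude is the
        requested speed change."""
        delta = float(label_kmh.lower().removesuffix("km/h"))
        kind = "accelerate" if delta > 0 else "decelerate"
        return {"type": kind, "delta_kmh": abs(delta), "target_kmh": current_kmh + delta}

    print(intent_from_speed_button("-5km/h", 60.0))
    # {'type': 'decelerate', 'delta_kmh': 5.0, 'target_kmh': 55.0}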
  • the virtual keys can also be replaced by virtual areas, icons, characters, windows, and the like.
  • the input of the driving intention can also be realized by tapping the vehicle icon or the blank display area shown in FIG. 6. For example, according to a set threshold, if the driver taps the blank area ahead of the vehicle icon, the corresponding driving intention is an acceleration intention, and if the driver taps the blank area behind the vehicle icon, it indicates a deceleration intention. It is also possible, through manual setting or predefinition, to determine the driving intention the driver wishes to express by single-clicking and/or double-clicking the vehicle icon, so as to facilitate the input of the driving intention.
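  • A minimal sketch of the tap-position rule, assuming screen coordinates and an illustrative offset threshold (neither is fixed in the passage above):

    import math

    def intent_from_tap(tap_xy, icon_xy, heading_deg=90.0, min_offset=30.0):
        """Read a tap more than min_offset pixels ahead of the vehicle icon
        (along its heading) as an acceleration intent, and a tap behind it
        as a deceleration intent; heading_deg uses the same coordinate
        convention as the screen points."""
        hx = math.cos(math.radians(heading_deg))
        hy = math.sin(math.radians(heading_deg))
        # Signed distance of the tap from the icon along the heading direction.
        along = (tap_xy[0] - icon_xy[0]) * hx + (tap_xy[1] - icon_xy[1]) * hy
        if along > min_offset:
            return "accelerate"
        if along < -min_offset:
            return "decelerate"
        return None  # too close to the icon to read as either intent

    print(intent_from_tap((100, 200), (100, 120)))  # 'accelerate' (tap ahead of the icon)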
  • for a drag operation, the vehicle control device may identify the corresponding intention according to information such as the trajectory and the acceleration of the drag. An acceleration intention, deceleration intention, parking intention, overtaking intention, lane change intention, steering intention, driving trajectory intention, drift intention, following intention, or intention to adjust the acceleration can be determined according to the drag operation.
  • the drag track of the drag operation of the driver may be displayed on the display screen.
  • for example, the display screen may display the drag track along which the icon is dragged.
  • for example, the display screen may display the icon at the first position where the driver's finger is at a first moment and, as the drag operation proceeds, at the second position where the finger is at a second moment, where the first moment differs from the second moment and the first position from the second position. That is to say, the display screen moves the display position of the vehicle icon following the driver's drag track.
  • the following describes the manner in which the vehicle control device recognizes the first intention according to the drag track by using an example.
  • for example, the position touched by the finger on the screen (in coordinate form) is detected by the touch screen controller and sent to the CPU through an interface (such as an RS-232 serial port) to determine the input information.
  • in one example, the vehicle control device determines that the first intention is a lane change intention, that is, an intention to change to the target lane, according to the coordinate position of the lane line displayed on the touch screen, the initial vehicle position coordinates detected by the touch screen, and the position coordinates of the vehicle after dragging.
  • in another example, the CPU can record the coordinates and time points of the detected initial position and drag end position, and obtain the drag speed by dividing the distance between the two coordinate points by the time difference between them. If the drag speed is greater than the current vehicle speed, the first intention is an acceleration intention; conversely, if the drag speed is lower than the current vehicle speed, the first intention is a deceleration intention.
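  • A minimal sketch of that drag-speed computation; the map scale for converting on-screen distance to road distance is an assumption, since the passage above does not fix one:

    import math

    def drag_speed_mps(p0, t0, p1, t1, metres_per_pixel=0.5):
        """Drag speed from the start/end samples reported by the touch screen:
        (p0, t0) and (p1, t1) are the pixel coordinates and timestamps of the
        drag's start and end points."""
        distance_px = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
        return distance_px * metres_per_pixel / (t1 - t0)

    def speed_intent(drag_mps, vehicle_mps):
        """Faster-than-vehicle drags read as acceleration, slower as deceleration."""
        if drag_mps > vehicle_mps:
            return "accelerate"
        if drag_mps < vehicle_mps:
            return "decelerate"
        return None

    v = drag_speed_mps((120, 400), 0.0, (120, 250), 2.0)  # 150 px * 0.5 m/px / 2 s = 37.5 m/s
    print(speed_intent(v, vehicle_mps=27.8))              # 'accelerate' (vehicle at ~100 km/h)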
  • the vehicle control device can also obtain a driving trajectory intent from the driver's drag trajectory on the display screen; the driving trajectory intent is an intent to control the driving trajectory of the vehicle according to the drag trajectory.
  • the vehicle control device may also determine the acceleration intent based on the acceleration of the drag operation. For example, the driver's drag operation indicates that the vehicle is expected to travel with the acceleration of the drag operation, and the intention determined by the vehicle control device is to control the acceleration of the vehicle according to the acceleration of the driver's drag operation.
  • the vehicle control device may also determine an acceleration or deceleration intention according to the drag direction of the drag operation. For example, if the angle between the direction of the drag track and the direction of the vehicle's head is less than 90 degrees, it indicates an acceleration intention; if the angle is greater than 90 degrees, it indicates a deceleration intention. In either case, the acceleration value can be a default value or a value input by the driver through other means.
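  • The 90-degree rule above reduces to the sign of a dot product; a minimal sketch:

    def direction_intent(drag_vec, heading_vec):
        """An angle of less than 90 degrees between the drag direction and the
        vehicle's heading indicates acceleration, more than 90 degrees indicates
        deceleration; the dot product's sign encodes exactly that comparison."""
        dot = drag_vec[0] * heading_vec[0] + drag_vec[1] * heading_vec[1]
        if dot > 0:
            return "accelerate"  # angle < 90 degrees
        if dot < 0:
            return "decelerate"  # angle > 90 degrees
        return None              # exactly perpendicular: ambiguous

    print(direction_intent((0, -1), (0, -1)))  # 'accelerate': drag along the heading
    print(direction_intent((0, 1), (0, -1)))   # 'decelerate': drag against the heading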
  • optionally, the speed and/or acceleration corresponding to the drag operation can also be displayed on the display screen, and the driver controls the increase or decrease of the speed and/or acceleration by changing the drag speed. The displayed speed and/or acceleration is the speed and/or acceleration the driver expects the vehicle to achieve: if the displayed speed is greater than the current vehicle speed, the driving intent corresponding to the drag operation is an acceleration intent, and if it is less, a deceleration intent; if the displayed acceleration is greater than the current vehicle acceleration, the corresponding intent is to increase the acceleration, and if it is less, to decrease it.
  • the drift intention can also be identified according to the shape of the trajectory of the drag operation; a possible implementation is shown in FIG. 8. The driver can draw a line in front of the vehicle on the touch screen: first a small but distinct turn in one direction, then a sharp turn in the other direction, maintained at a relatively fast speed, and finally a sudden stop, like slamming the brakes.
  • the vehicle control device reads the trajectory of the drag operation from the touch screen and identifies three features: A) after turning in one direction, the trajectory quickly turns to the other; B) the angle between the trajectory's final direction and its original direction is in the range of 90 to 120 degrees, toward the other side; C) the trajectory speed drops sharply and finally stops. If these three features are satisfied, the first intention is judged to be a drift intention. After that, a prompt such as "Drift?" may pop up, and after the user clicks "Confirm", the drift intention is confirmed.
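  • A minimal sketch of checking features A to C on a drag track sampled as (x, y, t) points; only the three features come from the passage above, the numeric thresholds are illustrative:

    import math

    def _headings(points):
        return [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(points, points[1:])]

    def _wrap(angle):
        # Wrap an angle difference into (-pi, pi].
        return (angle + math.pi) % (2 * math.pi) - math.pi

    def is_drift_track(samples):
        """samples: time-ordered (x, y, t) drag points.
        A) the track turns one way, then the other;
        B) the final heading is 90-120 degrees away from the initial one;
        C) the drag speed collapses at the end."""
        pts = [(x, y) for x, y, _ in samples]
        hs = _headings(pts)
        signs = {1 if _wrap(b - a) > 0 else -1
                 for a, b in zip(hs, hs[1:]) if _wrap(b - a) != 0}
        feature_a = signs == {1, -1}            # both turn directions occur
        total_deg = abs(math.degrees(_wrap(hs[-1] - hs[0])))
        feature_b = 90.0 <= total_deg <= 120.0
        speeds = [math.hypot(x2 - x1, y2 - y1) / (t2 - t1)
                  for (x1, y1, t1), (x2, y2, t2) in zip(samples, samples[1:])]
        feature_c = speeds[-1] < 0.2 * max(speeds)  # sharp drop at the end
        return feature_a and feature_b and feature_c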
  • the following intention can also be expressed by dragging a track on the touch screen, so that the driving state of the vehicle, such as its trajectory and its acceleration and deceleration, stays similar to that of the vehicle ahead while remaining compliant; for example, the vehicle accelerates, decelerates, or turns at the same intersection as the vehicle ahead.
  • for example, the driver double-clicks the vehicle icon and then, without lifting the finger from the touch screen, drags the icon until it coincides with the icon of the vehicle ahead, as shown in FIG. 9; the driver then lifts the finger (lets go) and double-taps the icon of the vehicle ahead.
  • the vehicle control device reads the user's touch operations from the touch screen and recognizes three features: A) the vehicle icon is double-clicked and dragged; B) when the user lets go, the vehicle icon overlaps the icon of another vehicle; C) a double-click follows the release. In this case, the intention is provisionally judged as following the designated vehicle ahead, and a prompt such as "Follow this vehicle?" pops up; after the user clicks "Confirm", the following intention is confirmed.
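  • A minimal sketch of checking that event sequence; turning raw touches into the three event kinds is assumed to happen upstream, and the overlap tolerance is illustrative:

    import math

    def detect_follow_gesture(events, target_xy, overlap_px=20.0):
        """events: time-ordered (kind, (x, y)) tuples with kind in
        {'double_tap_own_icon', 'drag_release', 'double_tap'}.
        A) double tap on the own-vehicle icon, then a drag;
        B) the drag is released on the lead vehicle's icon at target_xy;
        C) a double tap follows the release."""
        kinds = [kind for kind, _ in events]
        if kinds != ["double_tap_own_icon", "drag_release", "double_tap"]:
            return False  # features A and C: wrong sequence
        _, release_xy = events[1]
        return math.hypot(release_xy[0] - target_xy[0],
                          release_xy[1] - target_xy[1]) <= overlap_px  # feature B

    gesture = [("double_tap_own_icon", (100, 400)),
               ("drag_release", (105, 150)),
               ("double_tap", (105, 150))]
    print(detect_follow_gesture(gesture, target_xy=(100, 150)))  # True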
  • the vehicle control device can recognize the driver's air gesture collected by the camera, millimeter-wave radar, ultrasonic sensor, or the like, and identify the first intent corresponding to that air gesture, so as to obtain the first intent.
  • the driver's gesture collected by the camera includes a gesture indicating acceleration, and the vehicle control device may identify the first intent as an acceleration intent according to the gesture.
  • in a specific implementation, a correspondence between air gestures and driving intents may be configured. Since current air gesture recognition can identify both the shape and the movement trajectory of a gesture, the correspondence between gesture shapes and driving intents can be pre-defined: for example, pointing a finger to the left means changing lanes to the left, pointing to the right means changing lanes to the right, pointing forward means accelerating, and pointing backward means decelerating.
  • Likewise, the correspondence between gesture movement trajectories and driving intents can be pre-defined: for example, moving the palm to the left means changing lanes to the left, moving it to the right means changing lanes to the right, moving it forward means accelerating, and moving it backward means decelerating.
  • As another example, with both hands held in the air in the posture of gripping and turning a steering wheel, turning left means changing lanes to the left and turning right means changing lanes to the right.
  • the technical means of air gesture recognition are not limited here, for example, cameras or millimeter-wave radars can be used for recognition.
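One simple way to realize such a configurable correspondence is a lookup table from recognized gesture features to intents. The entries below merely restate the examples given above; the string keys are hypothetical labels that a gesture recognizer might emit:

```python
# Hypothetical lookup tables pairing recognized gesture features with
# driving intents, mirroring the correspondences listed above.
SHAPE_TO_INTENT = {
    "point_left": "change_lane_left",
    "point_right": "change_lane_right",
    "point_forward": "accelerate",
    "point_backward": "decelerate",
}
MOTION_TO_INTENT = {
    "palm_left": "change_lane_left",
    "palm_right": "change_lane_right",
    "palm_forward": "accelerate",
    "palm_backward": "decelerate",
    "steer_left": "change_lane_left",    # two hands "turning a wheel" left
    "steer_right": "change_lane_right",
}

def gesture_to_intent(shape=None, motion=None):
    """Return the driving intent for a recognized gesture shape or
    movement trajectory, or None if no mapping is configured."""
    if shape in SHAPE_TO_INTENT:
        return SHAPE_TO_INTENT[shape]
    return MOTION_TO_INTENT.get(motion)
```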
  • S103 The vehicle control device controls the driving state of the vehicle according to the first intention.
  • the driving state refers to a state related to how the vehicle travels, rather than a state unrelated to travel, such as in-vehicle entertainment systems (for example, car audio or audio-visual equipment) or lighting control.
  • The manner of travel includes, for example, accelerating, decelerating, approaching, reversing, parking, whether to change the acceleration, whether to change the driving trajectory, whether to overtake, whether to turn, whether to drift, whether to follow another vehicle, or whether to change lanes.
  • the vehicle control device may control the driving state of the vehicle for a limited number of times (eg, 1 time) according to the first intention, or control the vehicle for a limited period of time (eg, 1 minute).
  • the number of times or the effective duration of the first intent can be set.
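The limited validity described above, a bounded number of executions and/or a time window, could be wrapped around a recognized intent roughly as follows; the limits shown are only the examples from the text, not mandated values:

```python
import time

class ActiveIntent:
    """Wrap a recognized intent with limited validity: a maximum number
    of control actions and/or an expiry time."""
    def __init__(self, intent, max_executions=1, valid_seconds=60.0):
        self.intent = intent
        self.remaining = max_executions
        self.expires_at = time.monotonic() + valid_seconds

    def try_execute(self, control_fn):
        if self.remaining <= 0 or time.monotonic() > self.expires_at:
            return False                 # intent no longer in effect
        self.remaining -= 1
        control_fn(self.intent)          # apply one driving-state control
        return True
```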
  • it should be understood that the vehicle control device can ensure safety and compliance while controlling the driving state of the vehicle according to the first intent.
  • For example, in S103 the vehicle control device may control the vehicle according to the first intent while following the rules of automatic driving.
  • with the above flow, the vehicle control device can flexibly recognize the driving intent based on the driver's gesture operation and control the driving state of the vehicle accordingly, providing a more flexible way of recognizing driving intent and a better driving experience.
  • the vehicle control device may determine whether the first intention is allowed to be executed.
  • the first intent that meets the first condition is allowed to be executed, and the first intent that does not meet the first condition is not allowed to be executed.
  • the vehicle control device may determine whether the first intent meets the first condition, and when the first condition is met, the first intent is allowed to be executed, otherwise, the first intent is not allowed to be executed.
  • the first condition includes, but is not limited to, at least one of a traffic-rule condition, a safe-driving condition, or a comfort condition. Therefore, when it is determined that the first intent does not meet the first condition, the vehicle control device may refuse to control the vehicle according to the first intent, so as to ensure driving safety, compliance and comfort.
  • the traffic-rule conditions are, for example, traffic laws and the driving requirements of the road the vehicle is currently on, including but not limited to: no crossing double solid lines, no crossing diversion lines, no riding the lane line for an extended time, and the maximum and minimum speed limit conditions of the road.
  • the safe driving condition is a condition proposed to further improve the driving safety of the vehicle under the premise of satisfying the conditions of the traffic rules.
  • the safe driving conditions may be related to the current vehicle condition information or current road information of the vehicle.
  • safe driving conditions include maintaining a safe driving distance from pedestrians, other vehicles, traffic facilities such as guardrails, and road structures along the road.
  • in addition, to ensure driving safety, a maximum speed limit condition, a minimum speed limit condition, and the like may be set separately for the vehicle.
  • the comfort condition is a driving condition specified to meet the needs of the driver and passengers for a comfortable ride: for example, limiting the vehicle's acceleration to a set acceleration range to avoid the discomfort caused by violent acceleration or deceleration; for another example, limiting the vehicle's speed during steering so that it does not exceed a set speed.
  • the comfort condition may be a condition that meets the general comfort demands of the public, or may be a condition set according to the individual comfort demands of the driver or passengers.
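Pulling the three kinds of conditions together, a simplistic admission check might look like the sketch below. The function duck-types its arguments, and all attribute names and limits are assumptions used only to make the structure concrete:

```python
def check_first_condition(intent, vehicle, road):
    """Return (allowed, reason) by aggregating the traffic-rule,
    safe-driving and comfort conditions described above."""
    target_speed = vehicle.speed + intent.get("delta_speed", 0.0)
    # Traffic-rule condition: road speed limits, no crossing double solid lines
    if not (road.min_speed <= target_speed <= road.max_speed):
        return False, "violates road speed limits"
    if intent.get("crosses_double_solid_line"):
        return False, "violates no-crossing rule"
    # Safe-driving condition: possibly stricter speed cap, safe distance
    if road.safe_speed_cap is not None and target_speed > road.safe_speed_cap:
        return False, "exceeds safe-driving speed cap"
    if vehicle.gap_to_lead_vehicle < vehicle.safe_gap:
        return False, "insufficient safe driving distance"
    # Comfort condition: bound the requested acceleration
    if abs(intent.get("accel", 0.0)) > vehicle.comfort_accel_limit:
        return False, "acceleration outside comfort range"
    return True, "ok"
```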
  • in one possible implementation, when the vehicle control device recognizes that the first intent does not meet the first condition, it stops the execution of the first intent, that is, it stops controlling the driving state of the vehicle according to the first intent. At this time, the vehicle control device may indicate to the driver that it refuses to control the driving state of the vehicle according to the first intent, or that it refuses to execute the control instruction corresponding to the first intent.
  • in this case, the vehicle control device stops controlling the driving state of the vehicle according to the intent. For example, if the driver's drag operation drags the vehicle into an opposing lane, or the drag trajectory crosses a double solid line, the lane-change intent does not meet the first condition; controlling the driving state of the vehicle according to that intent is not allowed, and the vehicle control device may refuse to execute it.
  • the vehicle control device may prompt the driver that the driving state of the vehicle is not allowed to be controlled according to the driver's first intention by playing a voice or displaying on a touch screen.
  • for example, the vehicle control device may stop the execution of the first intent after determining that the first intent does not meet the first condition, and send a first prompt message, where the first prompt message may be used to notify the driver that the first intent is not allowed to be executed.
  • the vehicle control device may send a first prompt message to indicate that the first intent corresponding to the drag operation is not allowed to be executed.
  • it should be noted that when the vehicle control device sends prompt messages (such as the first, second or third prompt message), this can be understood as the vehicle control device outputting (or presenting) the prompt message to the driver, or as the vehicle control device sending the prompt message to a human-computer interaction device such as a display screen, which then outputs (or presents) the prompt message to the driver.
  • the vehicle control device may send the first prompt message to a human-computer interaction device such as a display screen, so the human-computer interaction device notifies the driver that the first intention is not allowed to be executed.
  • sending the first prompt message may also mean that the vehicle control device notifies the driver that the first intention is not allowed to be executed through the human-machine interaction module.
  • the first prompt message may be sent, for example, by displaying a symbol such as "X" or words such as "cannot be executed" on the display screen, indicating that controlling the driving state of the vehicle according to the drag operation is not allowed.
  • in addition, the vehicle control device may clear the drag trajectory of the drag operation displayed on the display screen. And/or, if the vehicle control device moves the display position of the vehicle's icon on the display screen along with the change of the drag trajectory, then, when it is determined that the drag operation is not allowed to be performed, the vehicle control device may display the vehicle's icon at the first display position, where the first display position is the display position of the vehicle's icon before the drag operation was obtained. In other words, the vehicle control device can restore the display position of the vehicle's icon on the display screen to its position before the driver performed the drag operation.
  • in another possible implementation, when the vehicle control device recognizes that the first intent does not meet the first condition, it may determine, according to the first intent, a second intent that meets the first condition. After obtaining the second intent, the vehicle control device may ask the driver, through a third prompt message, whether the driver agrees to execute the second intent, or in other words, whether to control the driving state of the vehicle according to the second intent.
  • one way of obtaining the second intent from the first intent is to change the value, drag trajectory, or other information corresponding to the first intent. For example, if the current vehicle speed is 75 km/h and the driver's first intent is to accelerate by 10 km/h, but the maximum speed limit of the current road is 80 km/h, the vehicle can determine, based on the maximum speed limit, that the second intent is to accelerate by 5 km/h. For another example, the maximum speed limit of the current road section is 80 km/h, but because there are many vehicles on the current road section, the safe-driving conditions may appropriately limit the vehicle's maximum speed to ensure smooth and safe driving, say to 70 km/h; at this time, if the current vehicle speed is 65 km/h and the driver's first intent is to accelerate by 10 km/h, the vehicle can determine that the second intent is to accelerate by 5 km/h.
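The two numeric examples above amount to clamping the requested speed change to whichever ceiling is lower, the legal limit or the safe-driving cap. A sketch, with a dict-based intent representation assumed only for illustration:

```python
def derive_second_intent(first_intent, current_speed, road_max, safe_cap=None):
    """Clamp a requested speed increase to the effective speed ceiling,
    reproducing the 75+10 -> 80 km/h and 65+10 -> 70 km/h examples."""
    ceiling = min(road_max, safe_cap) if safe_cap is not None else road_max
    allowed_delta = max(0.0, ceiling - current_speed)
    if first_intent["delta_speed"] <= allowed_delta:
        return first_intent                              # no correction needed
    return {**first_intent, "delta_speed": allowed_delta}  # second intent

# derive_second_intent({"delta_speed": 10}, 75, 80)              -> +5 km/h
# derive_second_intent({"delta_speed": 10}, 65, 80, safe_cap=70) -> +5 km/h
```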
  • as another example, if the first intent is an acceleration intent whose corresponding acceleration falls outside the allowed acceleration range, the vehicle control device may generate a second intent whose corresponding acceleration belongs to that range.
  • another way is to correct the drag trajectory according to the first condition and obtain the second intent from the corrected drag trajectory.
  • For example, the first intent is a driving trajectory intent, but the drag trajectory input by the driver passes through an opposing lane or another area the vehicle is not allowed to pass through, so driving according to that drag trajectory would not meet the first condition.
  • In this case, the drag trajectory can be modified according to the first condition to remove the reason it fails the first condition, and a second intent, which may also be a driving trajectory intent, can be obtained from the corrected drag trajectory, as sketched below.
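Conceptually, correcting the drag trajectory can be as simple as projecting the offending points back into the allowed region. The sketch assumes two hypothetical callbacks that encapsulate the first-condition check and the projection:

```python
def correct_drag_trajectory(points, is_point_allowed, project_to_allowed):
    """Replace trajectory points that fall in disallowed areas (e.g. the
    opposing lane) with their nearest allowed positions.
    is_point_allowed(p) checks the first condition at point p, and
    project_to_allowed(p) returns the closest compliant point."""
    return [p if is_point_allowed(p) else project_to_allowed(p) for p in points]
```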
  • that is, if the vehicle control device identifies that the first intent is a driving intent that does not satisfy the first condition, it may adjust the first intent to obtain a second intent that satisfies the first condition. Thereafter, the vehicle control device may ask the driver whether to execute the second intent.
  • when asking the driver whether the driver agrees to execute the second intent, the vehicle control device can display the text or the corrected drag trajectory corresponding to the second intent on the display screen, or play the voice corresponding to the second intent to the driver through the speaker, and then detect the driver's human-computer interaction operation and determine from it whether the driver agrees to execute the second intent.
  • a human-computer interaction operation expressing agreement to execute the second intent is, for example, the driver touching a virtual button on the touch screen that expresses agreement, answering with a voice containing words such as "agree" or "confirm", or expressing consent through actions and gestures such as nodding.
  • A human-computer interaction operation expressing disagreement with executing the second intent is, for example, the driver touching a virtual button on the touch screen that expresses disagreement, answering with a voice containing words such as "disagree" or "cancel", or making actions and gestures such as shaking the head or waving a hand.
  • if a human-computer interaction operation expressing agreement to execute the second intent is detected, the vehicle control device may control the driving state of the vehicle according to the second intent. If an operation expressing disagreement is detected, or no operation expressing agreement is received within a set period of time, the vehicle control device can determine that the driver does not agree to execute the second intent and stop its execution; at this time, the vehicle control device may prompt the driver to re-input the driving intent.
  • the query operation in this application can be implemented through a dialog box displayed on the screen of the mobile phone as shown in FIG. 10 .
  • the dialog box may indicate that the vehicle control device has determined a second intent that meets the first condition, and ask the driver whether to execute that intent.
  • the screen of the mobile phone can also display a virtual key (such as a "confirm” key) expressing agreement to execute the second intention and a virtual key (such as a "cancel” key) expressing disagreement with the execution of the second intention.
  • when the driver touches either virtual key, the mobile phone can obtain the touch result, that is, whether the driver agrees to execute the second intent.
  • the dialog box can also display a countdown of a set duration. When the countdown reaches 0, if the mobile phone has not detected the driver's touch on the virtual button expressing agreement to execute the second intent, it is determined that the driver does not agree to execute the second intent.
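The countdown behavior of such a dialog could be sketched as follows; show_dialog and poll_touch stand in for a real UI toolkit's calls, and the 10-second timeout is only an example:

```python
import time

def ask_driver(show_dialog, poll_touch, timeout_s=10.0):
    """Show a confirm/cancel dialog with a countdown, as in FIG. 10.
    Returns True only if "confirm" is touched before the countdown ends."""
    show_dialog("Execute the corrected intent?",
                buttons=("confirm", "cancel"), countdown=timeout_s)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        touched = poll_touch()      # returns "confirm", "cancel" or None
        if touched == "confirm":
            return True
        if touched == "cancel":
            return False
        time.sleep(0.05)
    return False                    # countdown reached 0: treat as "disagree"
```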
  • the following uses a specific example of the first intent to describe the process of modifying the first intent to obtain the second intent and asking the driver whether to execute the second intent.
  • when the vehicle control device recognizes that a driving trajectory intent does not meet the first condition, it can correct the driving trajectory corresponding to that intent to obtain a corrected driving trajectory representing the suggested route (for example, a driving trajectory identified as satisfying the first condition); the driving trajectory intent corresponding to the corrected trajectory is the second intent.
  • the vehicle control device may send a second prompt message for asking whether to control the driving state of the vehicle according to the corrected drag trajectory (or, in other words, for asking whether the driver agrees to execute the second intent, or whether the driver agrees to control the driving state of the vehicle according to the second intent).
  • sending the second prompt message may mean that the vehicle control device sends the second prompt message to a human-computer interaction device such as a display screen, so that the human-computer interaction device asks the driver whether to control the driving state of the vehicle according to the corrected drag trajectory.
  • sending the second prompt message may also mean that the vehicle control device asks the driver through the human-machine interaction module whether to control the driving state of the vehicle according to the corrected drag trajectory.
  • the vehicle control device may also display the corrected drag track through the display screen.
  • if the vehicle control device acquires the driver's first operation, which expresses agreement to control the driving state of the vehicle according to the corrected drag trajectory (or, in other words, agreement to execute the second intent), the vehicle control device may control the driving state of the vehicle according to the corrected drag trajectory, that is, according to the second intent.
  • if the driver's operation indicating disagreement with controlling the driving state of the vehicle according to the corrected drag trajectory is acquired, the second intent is not executed.
  • the vehicle control device may obtain the second intent according to the first intent by delaying the execution time of the first intent, that is, the execution timing of the first intent and the second intent are different.
  • for example, the first intent includes an acceleration intent, but the vehicle's current speed has already reached the maximum speed limit of the current road section. If the vehicle control device finds that accelerating by the value corresponding to the first intent is allowed on a road section the vehicle will pass through after some time, it can prompt the user that the vehicle can accelerate after reaching that road section. In this example, accelerating after reaching the future road section can be regarded as the second intent obtained from the first intent.
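Deferring the intent amounts to searching the planned route for the first upcoming segment whose limit admits the requested speed. A sketch under the assumption that route segments are available as simple records:

```python
def defer_intent(first_intent, route_segments, current_speed):
    """If acceleration is not allowed now, look for an upcoming road
    segment on the planned route whose limit admits it, and turn the
    first intent into a deferred second intent. route_segments is an
    assumed list of dicts like {"id": ..., "max_speed": ...}."""
    target = current_speed + first_intent["delta_speed"]
    for seg in route_segments:
        if target <= seg["max_speed"]:
            return {**first_intent, "execute_on_segment": seg["id"]}
    return None   # no upcoming segment allows the requested speed
```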
  • in this case, the vehicle control device may send a third prompt message to the driver for asking whether to control the driving state of the vehicle according to the second intent (or, in other words, whether the driver agrees to execute the second intent).
  • if the vehicle control device obtains the driver's second operation, it controls the driving state of the vehicle according to the second intent.
  • The second operation may be a human-computer interaction operation expressing agreement to control the driving state of the vehicle according to the second intent (or, in other words, agreement to execute the second intent).
  • If the driver's operation indicating disagreement with controlling the driving state of the vehicle according to the second intent is acquired, the second intent is not executed.
  • the method provided by the embodiment of the present application may include the following steps:
  • S201 The human-computer interaction module acquires the first gesture operation of the driver.
  • S202 The human-computer interaction module notifies the human-computer interaction control module of the first gesture operation.
  • S203 The human-computer interaction control module determines the first intention according to the first gesture operation.
  • S204 The human-computer interaction control module notifies the decision planning calculation module of the first intention.
  • S205 The decision planning calculation module determines whether the first intention satisfies the first condition, and if so, executes S206, and if not, executes S207 and/or S208.
  • S206 The decision planning calculation module generates a control command according to the first intent, and executes the control command through the vehicle motion control module, so as to control the driving state of the vehicle according to the first intent.
  • S207 The decision planning calculation module notifies the driver, through the human-computer interaction module, that controlling the driving state of the vehicle according to the first intent is not allowed.
  • For example, the decision planning calculation module sends an instruction to the human-computer interaction module, where the instruction is used by the human-computer interaction module to notify the driver that the first intent is not allowed to be executed. Thereafter, the human-computer interaction module can notify the driver through the display screen and/or speaker.
  • S208 The decision planning calculation module determines, according to the first intent, a second intent that meets the first condition.
  • S210 The human-computer interaction module asks the driver whether to execute the second intention.
  • S211 The human-computer interaction module acquires the first human-computer interaction operation of the driver.
  • The first human-computer interaction operation is, for example, a touch operation performed by the driver on the touch screen, an air gesture operation, or a voice input operation performed by the driver through speech, which is not specifically limited in this application.
  • S212 The human-computer interaction module notifies the human-computer interaction control module of the driver's first human-computer interaction operation.
  • S213 The human-computer interaction control module identifies whether the first human-computer interaction operation expresses agreement to execute the second intent. If it does, S214-S215 are executed; otherwise, S216 is executed.
  • S214 The human-computer interaction control module notifies the decision planning calculation module that the driver agrees to execute the second intention.
  • S215 The decision planning calculation module generates a control command according to the second intent, and executes the control command through the vehicle motion control module, so as to control the driving state of the vehicle according to the second intent.
  • S216 The human-computer interaction control module notifies the decision planning calculation module that the driver does not agree to execute the second intention. Then, the flow of controlling the driving state of the vehicle according to the first intention and/or the second intention is ended, that is, the driving state of the vehicle is not controlled according to the first intention and the second intention.
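Read end to end, steps S201-S216 describe a message flow among the four modules. The sketch below models that flow with plain objects whose method names are illustrative stand-ins, not interfaces defined by this application:

```python
def handle_gesture(hmi, hmi_ctrl, planner, motion):
    """End-to-end sketch of the S201-S216 flow with the four modules
    modeled as duck-typed objects."""
    gesture = hmi.get_gesture()                        # S201-S202
    first_intent = hmi_ctrl.recognize(gesture)         # S203-S204
    if planner.meets_first_condition(first_intent):    # S205
        motion.execute(planner.plan(first_intent))     # S206
        return
    hmi.notify("The intent is not allowed to be executed.")      # S207
    second_intent = planner.derive_second_intent(first_intent)   # S208
    if second_intent is None:
        return
    hmi.ask("Execute the adjusted intent?")            # S210
    op = hmi.get_interaction()                         # S211-S212
    if hmi_ctrl.expresses_agreement(op):               # S213-S214
        motion.execute(planner.plan(second_intent))    # S215
    # otherwise S216: end the flow without controlling the vehicle
```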
  • the vehicle control method provided by the embodiment of the present application can be implemented by the vehicle control device shown in FIG. 4 .
  • it should be understood that the steps shown in FIG. 11 as implemented by the vehicle control device shown in FIG. 4 are exemplary; according to the vehicle control method provided by the embodiments of the present application, some steps shown in FIG. 11 may be omitted, some steps in FIG. 11 may be replaced by other steps, or the vehicle control device may also perform some steps not shown in FIG. 11.
  • the present application further provides a vehicle control device for implementing the functions of the vehicle control device in the vehicle control method described in the above method embodiments, so it has the beneficial effects of the above method embodiments.
  • the vehicle control device may include any one of the structures in FIGS. 2 to 4 , or be implemented by a combination of any of the structures in FIGS. 2 to 4 .
  • the vehicle control device shown in FIG. 2 may be a terminal or a vehicle, or may be a chip inside the terminal or the vehicle.
  • the vehicle control device can implement the vehicle control method shown in FIG. 5 or FIG. 11 and the above-mentioned optional embodiments.
  • the vehicle control device may include a processing module 210 and an input and output module 220 .
  • the processing module 210 can be used to perform S102 and S103 in the method shown in FIG. 5, S203, S205, S206, S208, S213 or S215 in the method shown in FIG. 11, or any step in the above-mentioned optional embodiments that involves vehicle control, intent recognition, judging whether a driving intent is allowed to be executed, or correcting a driving intent.
  • the input/output module 220 can be used to perform S101 in the method shown in FIG. 5, S201, S207, S210 or S211 in the method shown in FIG. 11, or any step in the above-mentioned optional embodiments that involves human-computer interaction.
  • For details of any step, refer to the detailed description in the method embodiments; details are not repeated here.
  • the input and output module 220 can be used to obtain the first gesture operation of the driver.
  • the processing module 210 may be configured to determine the first intention of the driver according to the first gesture operation, and control the driving state of the vehicle according to the first intention.
  • the first gesture operation includes at least one of the following: a touch operation by the driver on the touch screen, where the touch operation includes a tap operation or a drag operation; and an air gesture operation by the driver.
  • the vehicle control device in the embodiments of the present application may be implemented by software, for example, a computer program or instruction having the above-mentioned functions, and the corresponding computer program or instruction may be stored in the internal memory of the terminal, and read by the processor.
  • the above-mentioned functions of the processing module 210 and/or the input/output module 220 can be realized by the processor reading and executing the corresponding computer programs or instructions in the memory.
  • the vehicle control device in the embodiment of the present application may also be implemented by hardware.
  • for example, the processing module 210 may be a processor (such as a CPU or a processor in a system chip), and the input/output module 220 may include a human-computer interaction device, or an interface supporting communication between the processing module 210 and a human-computer interaction device, such as an interface circuit configured to notify the processing module 210 of the first gesture operation recognized by the human-computer interaction device. If the input/output module 220 includes a human-computer interaction device, it may also include a processor for recognizing the first gesture operation.
  • the vehicle control device in the embodiment of the present application may also be implemented by a combination of a processor and a software module.
  • the first intent includes at least one of the following intents: an overtaking intent, a lane-change intent, a turning intent, or a driving trajectory intent.
  • the input/output module 220 may also be used to: move the display position of the vehicle's icon on the touch screen as the drag trajectory of the drag operation changes; and/or display the drag trajectory of the drag operation on the touch screen.
  • the input/output module 220 may include a touch screen, or an interface connected to the touch screen, so as to make the touch screen display a drag track.
  • the processing module 210 can also be used to control the driving state of the vehicle according to the drag trajectory when the first intent corresponding to the drag trajectory is allowed to be executed, or to clear the drag trajectory displayed on the touch screen when that first intent is not allowed to be executed.
  • In the latter case, the input/output module 220 may include the touch screen, or an interface connected to the touch screen, for clearing the drag trajectory displayed on the touch screen.
  • the input and output module 220 can also be used to send a first prompt message, and the first prompt message is used to notify that the first intent corresponding to the dragging operation is not allowed to be executed.
  • in this case, the input/output module 220 may include a touch screen, a speaker or another human-computer interaction device, or an interface connected to such a device, for notifying the driver, through the human-computer interaction device, that the first intent corresponding to the drag operation is not allowed to be executed; and/or for displaying the vehicle's icon at the first display position, where the first display position is the display position of the vehicle's icon before the drag operation was acquired, in which case the input/output module 220 may include a touch screen, or an interface connected to the touch screen, for displaying the vehicle's icon at the first display position.
  • if the first intent corresponding to the drag operation includes a driving trajectory intent, the processing module 210 is further configured to correct the drag trajectory according to at least one of a traffic-rule condition, a safe-driving condition, an environmental condition, or a comfort condition.
  • the input and output module 220 can also be used to: display the corrected drag track on the touch screen, and the corrected drag track represents the suggested driving route.
  • in this case, the input/output module 220 may include a touch screen, or an interface connected to the touch screen, for displaying the corrected drag trajectory.
  • the input and output module 220 can also be used to send a second prompt message, and the second prompt message is used to ask whether to control the driving state of the vehicle according to the revised drag trajectory.
  • the input/output module 220 can also be used to acquire the driver's first operation, which indicates agreement to control the driving state of the vehicle according to the corrected drag trajectory.
  • the processing module 210 can also be used to control the driving state of the vehicle according to the revised drag trajectory.
  • in this case, the input/output module 220 may include a touch screen, a speaker or another human-computer interaction device, or an interface connected to such a device, for sending the second prompt message to the driver through the human-computer interaction device and obtaining the driver's first operation.
  • the processing module 210 may be further configured to determine that the first intent does not satisfy a first condition, where the first condition includes at least one of a traffic-rule condition, a safe-driving condition, an environmental condition or a comfort condition. The processing module 210 can also be used to determine a second intent according to the first intent, where the second intent satisfies the first condition and the execution timing of the second intent differs from that of the first intent. The processing module 210 can then be specifically used to control the driving state of the vehicle according to the second intent.
  • the input/output module 220 can also be used to send a third prompt message, which asks whether to control the driving state of the vehicle according to the second intent, and to obtain the driver's second operation, which expresses agreement to control the driving state of the vehicle according to the second intent.
  • in this case, the input/output module 220 may include a touch screen, a speaker or another human-computer interaction device, or an interface connected to such a device, for sending the third prompt message to the driver through the human-computer interaction device and obtaining the driver's second operation.
  • the processing module 210 may further determine that the first intention does not satisfy the first condition, where the first condition includes at least one of a traffic rule condition, a safe driving condition or a comfort condition.
  • the processing module 210 may also be configured to determine a second intent according to the first condition, and the second intent satisfies the first condition.
  • the input and output module 220 is further configured to: send a third prompt message, where the third prompt message is used to inquire whether to control the driving state of the vehicle according to the second intention.
  • the input and output module 220 can also be used to obtain a third operation of the driver, which indicates that the driver does not agree to control the driving state of the vehicle according to the second intention.
  • the processing module 210 may also be used to stop controlling the driving state of the vehicle according to the second intent.
  • the vehicle control device shown in FIG. 3 may be a terminal or a vehicle, or may be a chip inside the terminal or the vehicle.
  • the vehicle control device can implement the vehicle control method shown in FIG. 5 or FIG. 11 and the above-mentioned optional embodiments.
  • the vehicle control device may include at least one of a processor, a memory, an interface circuit or a human-computer interaction device. It should be understood that although only one processor, one memory, one interface circuit and one human-computer interaction device are shown in FIG. 3, the vehicle control device may include other numbers of processors, memories, interface circuits and human-computer interaction devices.
  • the interface circuit is used for the vehicle control device to communicate with the terminal or other components of the vehicle, such as a memory or other processors, or a human-computer interaction device.
  • the processor can be used for signal interaction with other components through the interface circuit.
  • the interface circuit may be an input/output interface of the processor.
  • a processor may read computer programs or instructions in a memory coupled to it through the interface circuit, and decode and execute the computer programs or instructions. It should be understood that these computer programs or instructions may include the above-mentioned functional programs of the vehicle control device. When the corresponding functional program is decoded and executed by the processor, the vehicle control device can implement the solution in the vehicle control method provided by the embodiments of the present application.
  • these function programs are stored in a memory outside the vehicle control device, in which case the vehicle control device may not include a memory.
  • when the above-mentioned function programs are decoded and executed by the processor, part or all of them are temporarily stored in the memory.
  • these function programs are stored in a memory inside the vehicle control device.
  • in this case, the vehicle control apparatus may be provided in the vehicle control device of the embodiments of the present application.
  • in another case, parts of these function programs are stored in a memory outside the vehicle control device, and other parts are stored in a memory inside the vehicle control device.
  • the above-mentioned processor may be a chip.
  • the processor may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), or a microcontroller unit (MCU); it can also be a programmable logic device (PLD) or another integrated chip.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the aforementioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logic block diagrams disclosed in the embodiments of this application can be implemented or executed.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus RAM (DR RAM).
  • the computer program or instruction can be stored in the memory, and the computer program or instruction stored in the memory can be executed by the processor.
  • the actions performed by the input/output module 220 when the vehicle control device is implemented by the structure shown in FIG. 2 may also be performed by the interface circuit and/or the human-computer interaction device.
  • the processing module 210 shown in FIG. 2 may be implemented by the processor and memory shown in FIG. 3; in other words, the processing module 210 shown in FIG. 2 includes a processor and a memory, or the actions performed by the processing module 210 are performed by the processor executing the computer programs or instructions in the memory.
  • the input/output module 220 shown in FIG. 2 can be implemented by the interface circuit and/or the human-computer interaction device shown in FIG. 3; in other words, the input/output module 220 shown in FIG. 2 includes the interface circuit and/or human-computer interaction device shown in FIG. 3, or the actions performed by the input/output module 220 shown in FIG. 2 are performed by the interface circuit and/or the human-computer interaction device.
  • when the vehicle control device is implemented through the structure shown in FIG. 4, the actions performed by the processing module 210 (when the device is implemented through the structure shown in FIG. 2) can be performed by the human-computer interaction control module, the decision planning calculation module and the vehicle motion control module, and the actions performed by the input/output module 220 can be performed by the human-computer interaction module. For the actions performed by the human-computer interaction module, the human-computer interaction control module, the decision planning calculation module and the vehicle motion control module, refer to the description of the process shown in FIG. 11; details are not repeated here.
  • the present application provides a computing device, including a processor, the processor is connected to a memory, the memory is used for storing computer programs or instructions, and the processor is used for executing the computer program stored in the memory, so that the computing device executes The method in the above method embodiment.
  • the present application provides a computer-readable storage medium on which a computer program or instructions are stored; when the computer program or instructions are executed, the computing device is caused to execute the method in the above method embodiments.
  • the present application provides a computer program product; when a computer executes the computer program product, the computing device is caused to execute the method in the above method embodiments.
  • the present application provides a chip connected to a memory for reading and executing computer programs or instructions stored in the memory, so that a computing device executes the methods in the above method embodiments.
  • an embodiment of the present application provides an apparatus that includes a processor and an interface circuit; the interface circuit is configured to receive a computer program or instructions and transmit them to the processor, and the processor executes the computer program or instructions to perform the methods in the above method embodiments.
  • each functional module in each embodiment of the present application may be integrated into one processor, or may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, where the instruction means implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

A vehicle control method and apparatus, capable of recognizing a driver's driving intent on the basis of the driver's gesture operations and controlling the driving state of a vehicle according to the driving intent, providing more flexible driving intent recognition and driving control and bringing a better driving experience.

Description

一种车辆控制方法及装置 技术领域
本申请涉及自动驾驶领域,特别涉及一种车辆控制方法及装置。
背景技术
按照美国机动车工程师学会(society of automotive engineers,SAE)关于自动化层级的定义,自动驾驶的L0至L5共6个层级中,在低等级的自动驾驶的场景(如L2及以下)中,采用人工驾驶加上高级驾驶辅助系统(advanced driving assistance system,ADAS),基于根据驾驶员的驾驶意图实现自动驾驶;在高等级的自动驾驶的场景(L3至L5)中,L3为特定区域内执行的自动驾驶,L4为高度自动驾驶,L5级为完全自动驾驶,L3至L5的自动驾驶均由车辆自主完成所有驾驶操作,驾驶员不需要集中注意力用于驾驶操作。也就是说,自动驾驶尤其是高等级自动驾驶解放了驾驶员的手、脚,甚至解放驾驶员对驾驶的思考。
目前基于驾驶员的驾驶意图进行自动驾驶的技术中,主要是对于驾驶员对于方向盘、制动或油门等的驾驶动作进行识别,获得驾驶动作对应的驾驶意图,并进一步实现自动驾驶,因此驾驶意图的识别方式不够灵活。
发明内容
本申请提供一种车辆控制方法及装置,用以对驾驶员的驾驶意图实现灵活识别。
本申请提供的车辆控制方法可以由支持车辆控制功能的电子装置执行。电子装置能够被抽象为计算机系统。本申请中的支持车辆控制功能的电子装置,也可称为车辆控制装置。车辆控制装置可以是该电子装置的整机,也可以是该电子装置中的部分器件,例如:车辆控制功能相关的芯片,如系统芯片或图像芯片。其中,系统芯片也称为片上系统(system on chip,SOC),或称为SoC芯片。具体地,车辆控制装置可以是诸如车辆中的车载电脑、车机等这样的终端装置或车载设备,也可以是能够被设置在车辆或车载设备中的计算机系统中的系统芯片、图像处理芯片或其他类型的芯片。
第一方面,提供一种车辆控制方法。该方法包括:车辆控制装置可获取驾驶员的第一手势操作,根据第一手势操作确定驾驶员的第一意图,并根据第一意图控制车辆的驾驶状态。其中,第一手势操作包括驾驶员在触控屏上的触控操作和/或驾驶员的隔空手势操作,触控操作包括触碰操作或拖拽操作。
采用该方法,车辆控制装置可基于驾驶员的第一手势操作灵活识别驾驶员的驾驶意图,并根据驾驶意图实现对车辆的驾驶状态的进行控制,提供了更加灵活的驾驶意图识别方式,带来更好的驾驶体验。
一种可能的设计中,第一意图可以包括超车意图、变道意图、转向意图,或者行驶轨迹意图中的至少一种。因此能够根据第一手势识别多种类型的额驾驶意图,实现根据第一手势操作对车辆的灵活控制。
一种可能的设计中,如果第一手势操作包括驾驶员对于触控屏上显示的车辆的图标的 拖拽操作,车辆控制装置还可随着拖拽操作的拖拽轨迹的变化,移动车辆的图标在触控屏中的显示位置,和/或,在触控屏中显示拖拽操作的拖拽轨迹。因此可实现拖拽轨迹的可视化,提高用户体验。
一种可能的设计中,在拖拽轨迹对应的第一意图被允许执行时,车辆控制装置可根据拖拽轨迹控制车辆的驾驶状态。在拖拽轨迹对应的第一意图不被允许执行时,车辆控制装置还可执行以下中的至少一个操作:发送第一提示消息,第一提示消息用于通知拖拽操作对应的第一意图不被允许执行;或者,在第一显示位置显示车辆的图标,第一显示位置是获取到拖拽操作之前车辆的图标的显示位置;或者,清除触控屏中显示的拖拽轨迹。因此,在第一意图被允许执行时,可根据第一意图实现对于车辆驾驶状态的控制,并在第一意图不允许执行时,可向驾驶员进行及时反馈,提高用户体验。
一种可能的设计中,如果拖拽操作对应的第一意图包括行驶轨迹意图,在拖拽轨迹对应的第一意图不被允许执行时,车辆控制装置还可根据交通规则条件、安全行驶条件、环境条件或舒适性条件中的至少一个,对拖拽轨迹进行修正,并在触控屏中显示修正后的拖拽轨迹,该修正后的拖拽轨迹可表示建议的行驶路线。因此,在拖拽操作对应的第一意图不被允许执行时,可根据第一条件对拖拽轨迹进行修正并向驾驶员直观反馈对于拖拽轨迹的修正。
一种可能的设计中,车辆控制装置还可发送第二提示消息,第二提示消息用于询问是否根据修正后的拖拽轨迹控制车辆的驾驶状态。车辆控制装置还可获取驾驶员的第一操作,其中,第一操作表示同意根据修正后的拖拽轨迹控制车辆的驾驶状态,并根据修正后的拖拽轨迹控制车辆的驾驶状态。因此可根据驾驶员对于第二提示消息的反馈决定根据修正后的拖拽轨迹控制车辆的驾驶状态,以提高根据驾驶员的驾驶意图进行车辆驾驶状态控制的成功率。
一种可能的设计中,车辆控制装置还可确定第一意图不满足第一条件,第一条件包括交通规则条件、安全行驶条件、环境条件或舒适性条件中的至少一个。车辆控制装置还可根据第一意图确定第二意图,第二意图满足第一条件,第二意图的执行时机与第一意图的执行时机不同。车辆控制装置还可根据第二意图控制车辆的驾驶状态。因此,在第一意图不被允许执行时,可根据第一条件对第一意图进行修正并获得第二意图,因此根据第二意图控制车辆的驾驶状态可提高驾驶意图识别的成功率。
一种可能的设计中,车辆控制装置还可发送第三提示消息,第三提示消息用于询问是否根据第二意图控制车辆的驾驶状态。车辆控制装置还可获取驾驶员的第二操作,第二操作表示同意根据第二意图控制车辆的驾驶状态。因此可根据驾驶员对于第三提示消息的反馈决定根据修正后的第二意图控制车辆的驾驶状态,以提高根据驾驶员的驾驶意图进行车辆驾驶状态控制的成功率。
一种可能的设计中,车辆控制装置还可确定第一意图不满足第一条件,第一条件包括交通规则条件、安全行驶条件或舒适性条件中的至少一个。车辆控制装置还可根据第一条件确定第二意图,第二意图满足第一条件。车辆控制装置还可发送第三提示消息,第三提示消息用于询问是否根据第二意图控制车辆的驾驶状态。获取驾驶员的第三操作,第三操作表示不同意根据第二意图控制车辆的驾驶状态。车辆控制装置还可停止根据第二意图控制车辆的驾驶状态。因此根据该第三操作,可有效根据驾驶员的操作决定是否进行车辆驾驶状态控制。
第二方面,本申请提供一种车辆控制装置,该装置包括处理模块和输入输出模块。输入输出模块可用于获取驾驶员的第一手势操作。处理模块可用于根据第一手势操作确定驾驶员的第一意图,根据第一意图控制车辆的驾驶状态。其中,第一手势操作包括以下操作中的至少一种:驾驶员在触控屏上的触控操作,触控操作包括触碰操作或拖拽操作;驾驶员的隔空手势操作。
一种可能的设计中,第一意图包括以下意图中的至少一种:超车意图、变道意图、转向意图,或者行驶轨迹意图。
一种可能的设计中,若第一手势操作包括驾驶员对于触控屏上显示的车辆的图标的拖拽操作,输入输出模块还可用于:随着拖拽操作的拖拽轨迹的变化,移动车辆的图标在触控屏中的显示位置;和/或,在触控屏中显示拖拽操作的拖拽轨迹。
一种可能的设计中,处理模块还可用于在拖拽轨迹对应的第一意图被允许执行时,根据拖拽轨迹控制车辆的驾驶状态,或用于在拖拽轨迹对应的第一意图不被允许执行时,执行:清除触控屏中显示的拖拽轨迹。在拖拽轨迹对应的第一意图不被允许执行时,输入输出模块还可用于发送第一提示消息,第一提示消息用于通知拖拽操作对应的第一意图不被允许执行,和/或,在第一显示位置显示车辆的图标,第一显示位置是获取到拖拽操作之前车辆的图标的显示位置。
一种可能的设计中,拖拽操作对应的第一意图包括行驶轨迹意图,在拖拽轨迹对应的第一意图不被允许执行时,处理模块还用于:根据交通规则条件、安全行驶条件、环境条件或舒适性条件中的至少一个,对拖拽轨迹进行修正。输入输出模块还可用于:在触控屏中显示修正后的拖拽轨迹,修正后的拖拽轨迹表示建议的行驶路线。
一种可能的设计中,输入输出模块还可用于发送第二提示消息,第二提示消息用于询问是否根据修正后的拖拽轨迹控制车辆的驾驶状态,输入输出模块还可用于获取驾驶员的第一操作,第一操作表示同意根据修正后的拖拽轨迹控制车辆的驾驶状态。处理模块还可用于根据修正后的拖拽轨迹控制车辆的驾驶状态。
一种可能的设计中,处理模块还可用于:确定第一意图不满足第一条件,第一条件包括交通规则条件、安全行驶条件、环境条件或舒适性条件中的至少一个;处理模块还可用于根据第一意图确定第二意图,第二意图满足第一条件,第二意图的执行时机与第一意图的执行时机不同;处理模块具体可用于根据第二意图控制车辆的驾驶状态。
一种可能的设计中,输入输出模块还可用于发送第三提示消息,第三提示消息用于询问是否根据第二意图控制车辆的驾驶状态,输入输出模块还可用于获取驾驶员的第二操作,第二操作表示同意根据第二意图控制车辆的驾驶状态。
一种可能的设计中,处理模块还可用确定第一意图不满足第一条件,第一条件包括交通规则条件、安全行驶条件或舒适性条件中的至少一个。处理模块还可用于根据第一条件确定第二意图,第二意图满足第一条件。输入输出模块还可用于:发送第三提示消息,第三提示消息用于询问是否根据第二意图控制车辆的驾驶状态。输入输出模块还可用于获取驾驶员的第三操作,第三操作表示不同意根据第二意图控制车辆的驾驶状态。处理模块还可用于停止根据第二意图控制车辆的驾驶状态。
第三方面,本申请提供一种计算设备,包括处理器,处理器与存储器相连,存储器存储计算机程序或指令,处理器用于执行存储器中存储的计算机程序或指令,以使得计算设备执行上述第一方面或第一方面的任一种可能的实现方式中的方法。
第四方面,本申请提供一种计算机可读存储介质,其上存储有计算机程序或指令,当该计算机程序或指令被执行时,使得计算机执行上述第一方面或第一方面的任一种可能的实现方式中的方法。
第五方面,本申请提供一种计算机程序产品,当计算机执行计算机程序产品时,使得计算机执行上述第一方面或第一方面的任一种可能的实现方式中的方法。
第六方面,本申请提供一种芯片,芯片与存储器相连,用于读取并执行存储器中存储的计算机程序或指令,以实现上述第一方面或第一方面的任一种可能的实现方式中的方法。
第七方面,本申请提供一种车辆,该车辆包括上述第二方面或第二方面的任一种可能的实现方式中的车载控制装置和执行装置,以实现上述第一方面或第一方面的任一种可能的实现方式中的方法。
第八方面,本申请提供一种车辆,该车辆包括上述第六方面中的芯片和执行装置,以实现上述第一方面或第一方面的任一种可能的实现方式中的方法。
基于本申请所提供的技术方案,本申请可由车辆控制装置根据驾驶员的触控操作和/或隔空手势操作确定驾驶员的第一意图,并根据第一意图控制处于自动驾驶状态的车辆的驾驶状态,或者说,驾驶意图用于控制车辆的驾驶状态,其中,车辆的驾驶状态是指与车辆行驶方式相关的状态,因此上述方案能够为自动驾驶场景提供更加灵活的驾驶意图识别方式,能够向驾驶员提供更好的自动驾驶体验。
其中,对于驾驶状态的控制可视为短时内的控制或有限次数内的控制,例如,车辆控制装置检测到触控操作,根据触控操作执行一次驾驶状态的控制。
进一步的,如果驾驶员的触控操作是在触控屏进行的拖拽操作,车辆控制装置可在显示屏中显示该拖拽操作的拖拽轨迹,和/或随着驾驶员的拖拽操作移动车辆的图标,实现拖拽操作的可视化。如果车辆控制装置识别该拖拽轨迹不允许被执行,则可清除该拖拽轨迹或还原车辆图标所在的位置或向驾驶员通知该拖拽操作或该拖拽操作对应的驾驶意图不允许被执行。另外,车辆控制装置还可根据交通规则条件、安全行驶条件、环境条件或舒适性条件中的至少一个,对拖拽轨迹进行修正,并在触控屏中显示修正后的拖拽轨迹,该修正后的拖拽轨迹可表示建议的行驶路线,因此可以直观地向驾驶员反馈对于拖拽轨迹的修正。此外,还可询问驾驶员是否根据修正后的拖拽轨迹执行车辆驾驶状态的控制,如果驾驶员通过手势或其他人机交互操作表示同意执行修正后的拖拽轨迹,则车辆控制装置可根据该修正后的拖拽轨迹控制车辆的驾驶状态,从而提高车辆控制成功率。
另外,在车辆控制装置识别第一意图不满足交通规则条件、安全行驶条件、环境条件或舒适性条件中的至少一个时,可向驾驶员提示该第一意图不允许被执行,从而向驾驶员提供针对第一手势操作的反馈。进一步的,还可由车辆控制装置向驾驶员询问是否执行第二意图,第二意图可以是根据第一意图确定的满足交通规则条件、安全行驶条件、环境条件和舒适性条件的意图,比如,第二意图可以是延迟执行的第一意图。此后可以向驾驶员询问是否执行第二意图,如果驾驶员通过手势或其他人机交互操作表示同意执行第二意图,则车辆控制装置可根据第二意图控制车辆的驾驶状态,从而提高车辆控制成功率。
因此,本申请实施例提供的技术方案能够支持车辆控制装置灵活识别驾驶员的驾驶意图,提高自动驾驶场景下的用户体验。此外,以上技术方案的执行中,车辆控制装置能够及时就驾驶意图是否允许被执行等情况与驾驶员进行交互,以确保驾驶安全和驾驶合规性, 并增强驾驶员的交互体验和驾驶参与感。
附图说明
图1为本申请实施例提供的一种应用场景示意图;
图2为本申请实施例提供的一种车辆控制装置的结构示意图;
图3为本申请实施例提供的另一种车辆控制装置的结构示意图;
图4为本申请实施例提供的另一种车辆控制装置的结构示意图;
图5为本申请实施例提供的一种车辆控制方法的流程示意图;
图6为本申请实施例提供的一种车辆图标显示方式示意图;
图7为本申请实施例提供的一种拖拽轨迹的示意图;
图8为本申请实施例提供的另一种拖拽轨迹的示意图;
图9为本申请实施例提供的另一种拖拽轨迹的示意图;
图10为本申请实施例提供的一种询问是否执行第二意图的界面示意图;
图11为本申请实施例提供的另一种车辆控制方法的流程示意图。
具体实施方式
本申请提供一种车辆控制方法及装置,用以根据驾驶员的手势操作灵活识别驾驶员的驾驶意图,并通过自动驾驶的算法对根据该驾驶意图生成的车辆控制指令提供驾驶安全和驾驶合规性保障。其中,方法和装置是基于同一技术构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
在本申请实施例提供的方法中,车辆控制装置可以根据驾驶员的第一手势操作确定驾驶员的第一意图,并根据第一意图控制车辆的驾驶状态以实现车辆的自动驾驶。其中,第一手势操作包括驾驶员在触控屏上的触控操作或隔空手势等操作。该触控操作例如触屏操作或拖拽操作等等。以上方法中,车辆的控制是基于驾驶员的人机交互操作进行的,由车辆控制装置根据驾驶员的人机交互操作识别驾驶员的驾驶意图,实现了驾驶意图的灵活识别。
以下,对本申请中的部分用于进行解释说明,以便于本领域技术人员理解。
1、车辆
本申请可适用于自动驾驶车辆(以下简称为车辆),尤其是具有人机交互(human machine interaction,HMI)功能、针对车辆的驾驶状态通过自动驾驶算法进行计算、判断功能,和对车辆进行运动控制功能的车辆。
可选的,车辆可以包含至少一个自动驾驶系统,以支持车辆进行自动驾驶。
应理解,根据实际使用的需要,也可将车辆替换为火车、飞行器、移动平台等其他载具或交通工具。本申请对此不做限定。
2、车辆控制装置
可用于支持车辆实现本申请实施例提供的方法。车辆控制装置可支持人机交互的功能、针对车辆的驾驶状态通过自动驾驶算法进行计算和判断的功能,和对车辆进行运动控制的功能。
可选的,车辆控制装置可与车辆采用一体设置,比如,车辆控制装置可设置于车辆内 部。或者,可采用分离式设置,车辆控制装置与车辆之间支持远程通信。
车辆控制装置可通过终端设备和/或服务器等形式实现。这里的终端设备例如可以是手机、电脑、平板电脑或者车载设备等。服务器可以是云服务器等。以车辆控制装置是终端设备为例,终端设备可提供以下功能:通过终端设备采集用户的手势操作或人机交互操作、向用户进行询问或提示、根据用户输入的手势操作控制车辆的驾驶状态等。其中,若采用分离式设置,控制车辆的驾驶状态可基于终端设备与车辆之间的通信进行。例如图1所述,以车辆控制装置是终端为例,本申请可通过终端协助识别驾驶员的驾驶意图,并控制车辆的行驶状态。应理解,本申请中的输入,是指由驾驶员向车辆控制装置进行信息传输。输出是指车辆控制装置向驾驶员进行信息传输。
示例性的,图2示出了一种可能的车辆控制装置的结构示意图,该结构可包括处理模块210和输入输出模块220。示例性地,图2所示结构可以是终端设备,或具有本申请所示车辆控制装置的功能部件。当该结构是终端设备或其他电子设备时,输入输出模块220可包括触控屏、摄像头、扬声器、麦克风或者超声波传感器等用于支持人机交互功能的装置(或称人机交互装置),或者,支持与用于与触控屏、摄像头、扬声器、麦克风或者超声波传感器等用于支持人机交互功能的装置进行通信的模块。处理模块210可以是处理器,例如,中央处理单元(central processing unit,CPU)。当该结构是具有本申请所示车辆控制装置的功能部件时,输入输出模块220可以是接口电路,用于与触控屏、摄像头、毫米波雷达、扬声器、麦克风或者超声波传感器等用于支持人机交互功能的装置进行通信,处理模块210可以是处理器。当该结构是芯片或芯片系统时,输入输出模块220可以是芯片的输入输出接口、处理模块210可以是芯片的处理器,可以包括一个或多个中央处理单元。应理解,本申请实施例中的处理模块210可以由处理器或处理器相关电路组件实现,输入输出模块220可以由收发器、用于支持人机交互功能的装置或相关的电路组件实现。
例如,处理模块210可以用于执行本申请任一实施例中由车辆控制装置所执行的除了输入输出操作之外的全部操作,例如识别驾驶意图、判断驾驶意图是否被允许执行和控制车辆的驾驶状态等处理操作,和/或用于支持本文所描述的实施例中的其它过程,比如生成由输入输出模块220输出的消息、信息和/或信令,和对由输入输出模块220输入的消息、信息和/或信令进行处理。输入输出模块220可以用于执行本申请任一实施例中由终端识别和/服务器所执行的全部输入和输出操作,例如通过用于支持人机交互功能的装置实现人机交互或与用于支持人机交互功能的装置进行通信,或与用于控制车辆的驾驶状态的装置进行通信,和/或用于支持本文所描述的实施例中的其它过程。
另外,输入输出模块220可以是一个功能模块,该功能模块既能完成输出操作也能完成输入操作,在执行发送操作时,可以认为输入输出模块220是发送模块,而在执行接收操作时,可以认为输入输出模块220是接收模块;或者,输入输出模块220也可以包括两个功能模块,输入输出模块220可以视为这两个功能模块的统称,这两个功能模块分别为发送模块和接收模块,发送模块用于完成发送操作。
图3示出了另一种车辆控制装置的结构示意图,用于执行本申请实施例提供的由车辆控制装置执行的动作。便于理解和图示方便。如图3所示,该车辆控制装置可包括处理器、存储器、接口电路或人机交互装置中的至少一个组件。处理器主要用于实现本申请实施例提供的处理操作,例如对车辆控制装置进行控制,执行软件程序,处理软件程序的数据等。 存储器主要用于存储软件程序和数据。人机交互装置可用于支持人机交互,可包括触控屏、摄像头、毫米波雷达、扬声器、麦克风或者超声波传感器等,功能包括但不限于:获取驾驶员输入的手势等操作、向驾驶员进行信息的输出(或呈现)。接口电路可用于支持车辆控制装置的通信,例如,当人机交互装置采用外接于车辆控制装置的方式(即车辆控制装置不包括人机交互装置)时,用于支持与人机交互装置的通信。接口电路可包括收发器或输入输出接口。
应理解,为便于说明,图3中仅示出了一个存储器和处理器。在实际的车辆控制装置的产品中,可以存在一个或多个处理器和一个或多个存储器。存储器也可以称为存储介质或者存储设备等。存储器可以是独立于处理器设置,也可以是与处理器集成在一起,本申请实施例对此不做限制。
图4所示为本申请实施例提供的另一种车辆控制装置。可见,该车辆控制装置可包括人机交互模块、人机交互控制模块、决策规划计算模块和整车运动控制模块。
其中,人机交互模块可用于实现与车内驾驶员的输入输出交互。具体形式与交互方式有关。比如通过触碰实现的交互方式,人机交互模块可包括触控屏,如果是通过隔空手势识别实现的交互方式,人机交互模块可包括摄像头、毫米波雷达或者超声波传感器等,如果是通过语义识别实现的方式,人机交互模块可包括麦克风和扬声器。当然还可能输入和输出是混合的方式,比如输入通过麦克风,而输出通过触控屏等。也可能存在多种输入输出方式组合,比如,输入方式包括触控手势和隔空手势的结合,和/或,输出方式包括通过显示屏显示和播放语音、声音或音乐等方式中的多种,本申请不予具体限定。
人机交互模块还可用于将检测到的人机交互操作通知给人机交互控制模块,例如,人机交互模块可根据人机交互操作所对应的预定义的特征,识别驾驶员的人机交互操作,并将检测到的人机交互操作的信息发送至人机交互控制模块,以便人机交互控制模块根据人机交互操作的信息识别该人机交互操作。应理解,人机交互模块至少可用于检测本申请中涉及的第一手势操作和人机交互操作。
比如,人机交互操作是触控操作,则人机交互模块可将触控屏采集的触控操作信息发送给人机交互控制模块,触控操作信息可包括触控操作类型、在触控屏的位置、拖拽轨迹、拖拽时长、显示屏上显示的文字、符号等交互内容信息。如果人机交互操作是语音,则人机交互模块可将语音信号发送至人机交互控制模块,或者,可提取语音信号的语义,将语音信号的语义发送至人机交互控制模块。如果人机交互操作是隔空手势,人机交互模块可将摄像头采集的视频信号、超声波传感器的检测信号或者照片信号发送至人机交互控制模块,或者,人机交互模块可根据视频和/或照片信号提取隔空手势对应的描述信息,如用于描述隔空手势是点头或摆手等隔空手势的信息,并将描述信息发送至人机交互控制模块。
人机交互模块还可用于输出(或呈现)信息,输出(或呈现)这些信息的目的是用于向驾驶员进行询问和/或通知,比如,通过显示屏显示符号或文字,或通过扬声器播放语音、声音或音乐等等方式进行输出(或呈现)。如果输出(或呈现)信息的目的是向驾驶员进行询问,则人机交互模块还可检测驾驶员针对该询问进行的反馈,比如,在进行询问后的一段时间(如10秒或20秒等)内,检测驾驶员是否作出了表示该询问的反馈的人机交互操作。并向人机交互控制模块发送该表示该询问的反馈的人机交互操作的信息,以便人机交互控制模块获知驾驶员对该询问的反馈结果,例如,该反馈结果可能用于对询问的内容 表示同意或反对等。
人机交互控制模块可用于实现对人机交互输入的驾驶意图的识别,并转发识别出的意图到决策规划计算模块。可选的,人机交互控制模块还可用于向人机交互模块转发来自于决策规划计算模块的信息,这些信息可能需要由人机交互模块向驾驶员/用户进行输出(或呈现)。
比如,人机交互控制模块可根据人机交互模块发送的人机交互操作的描述信息,识别人机交互操作所对应的驾驶意图。在需要向驾驶员进行询问时,可根据来自于自决策规划计算模块的信息通过人机交互模块对驾驶员进行询问。
决策规划计算模块可用于判断人机交互控制模块确定的驾驶意图是否符合自动驾驶的规则,并在需要进行调整的情况下对驾驶意图进行调整。决策规划计算模块还可用于根据驾驶意图确定控制命令,用于整车运动控制模块根据控制命令对车辆进行控制。
人机交互控制模块又可以称为自动驾驶大脑或自动驾驶系统,其可包括执行自动驾驶算法的芯片,例如人工能(artificial intelligence,AI)芯片、图形处理器(graphics processing unit,GPU)芯片、中央处理器(central processing unit,CPU)等芯片,也可以是由其中多种芯片构成的系统,本申请实施例对此并不多作限制。
整车运动控制模块可用于根据来自于决策规划计算模块的控制命令对车辆的驾驶状态进行控制,因此可用于实现本申请实施例设计的对处理驾驶状态的控制。
应理解,本申请中的车辆控制装置也可替换为电子设备、车辆或车载设备等。或者说,本申请中由车辆控制装置执行的动作,也可替换为由电子设备、车辆或车载设备等主体实施。
下面结合图5对本申请实施例提供的方法进行说明。该方法可由车辆控制装置执行。车辆控制装置可包括图2至图4所示的任意一个或多个结构。在实现该车辆控制方法时,可由图2所示处理模块210或图3所示处理器,或图4所示的人机交互控制模块、决策规划计算模块和整车运动控制模块实现本申请实施例提供的方法中的处理动作,包括但不限于,对手势或人机交互操作的识别、驾驶意图的确定,或者根据驾驶意图进行车辆驾驶状态的控制。还可由图2所示的输入输出模块或图3所示的接口电路或人机交互装置,实现车辆控制装置与驾驶员之间的交互,这些交互包括但不限于:获取驾驶员的手势或人机交互操作,或者,向驾驶员显示或呈现信息,以实现面向驾驶员的显示、通知或询问等动作。
S101:车辆控制装置获取驾驶员的第一手势操作。
其中,第一手势操作包括通过触控屏采集的驾驶员在触控屏上的触控操作或隔空手势操作中的至少一个。其中,触控操作可以是驾驶员在触控屏进行的触碰操作和/或拖拽操作等。隔空手势操作包括驾驶员通过手部或头部等身体部位作出的动作,包括但不限于摆手、握拳、点头、眨眼或摇头等。
应理解,以上第一手势操作可以是以上所举例的操作中的一种或多种,比如,第一手势操作可以是触控操作或者隔空手势输入操作,也可以是触控操作或者隔空手势操作中多种的组合。
如果第一手势操作为触控操作,车辆控制装置可识别驾驶员在触控屏上的触控类型。触控操作可包括触碰类型的操作(或称触碰操作)和/或拖拽类型的操作(或称触碰操作)。触碰操作又可分为单击和双击等操作。对于拖拽类型的操作,触控屏还可识别拖拽操作的 轨迹和/或拖拽速度等信息。应理解,显示屏上可显示该车辆的图标,驾驶员可通过对该图标的触碰或拖拽驶入触控操作。如图6所示,触控屏上可显示代表该车辆的图标,以及显示该车辆当前的驾驶状态,比如显示车辆的车头方向、车辆所在的车道和/或行驶速度等信息。其中,驾驶员可对触控屏上显示的车辆进行第一触控操作。
如果第一手势操作为隔空手势操作,车辆控制装置可通过摄像头、毫米波雷达或超声波传感器等识别驾驶员隔空手势操作。可以预设不同的隔空手势操作具备不同含义,例如,点头或握拳表示同意,摇头或摆手表示不同意。
这里以通过摄像头采集隔空手势操作为例,介绍隔空手势的识别方式。摄像头可采集连续的多帧图像,通过对图像中特征像素点的识别,确定特征像素点在多帧图像中的位置变化特征,比如,特征像素点是通过边缘检测算法获得的用户手指、手掌或面部的边缘图像。如果位置变化特征符合预设的特征,则可确定隔空手势操作具备的含义为预设的该特征对应的含义。比如,特征像素点包括用户手指和手掌的边缘图像,在多帧图像中用户手指的边缘图像呈现更加弯曲并朝向手掌靠拢的趋势,则可识别该隔空手势为握拳,进一步可确定该隔空手势具备的含义为预设的握拳对应的含义。
S102:车辆控制装置根据该第一手势操作确定驾驶员的第一意图。该第一意图也可称为驾驶意图,用于表示驾驶员对于车辆的驾驶状态控制的意图。
可选的,第一意图包括加速意图、减速意图、停车意图、超车意图、变道意图、转向意图、行驶轨迹意图、漂移意图、跟车意图,或者加速度意图中的至少一个。
应理解,在S101中,驾驶员可通过第一手势操作表达驾驶意图,比如,驾驶员希望车辆执行超车动作时,驾驶员可执行超车意图对应的第一手势操作,则在S102中,可由车辆控制装置根据该第一手势操作识别驾驶员的意图为超车意图。
车辆控制装置还可在识别第一意图后,询问驾驶员是否执行该第一意图,以避免驾驶意图识别出错。例如,在获得第一意图后,车辆控制装置可询问驾驶员是否同意执行该第一意图,或者说,询问驾驶员是否根据第一意图控制车辆的驾驶状态。
在询问驾驶员是否同意执行该第一意图时,车辆控制装置可通过显示屏显示该第一意图对应的文字,或通过扬声器向驾驶员播放该第一意图对应的语音,并检测驾驶员的人机交互操作,根据检测到的驾驶员的人机交互操作判断驾驶员是否同意执行第一意图。
其中,表示同意执行第一意图的人机交互操作例如,驾驶员对触控屏上显示的表示同意的虚拟按键进行触控操作、回答包含“同意”或“确认”等词汇的语音或者通过点头等动作、手势表示同意。表示不同意执行第一意图的人机交互操作例如,驾驶员对于触控屏中表示不同意的虚拟按键的触控、回答包含“不同意”或“取消”等词汇的语音,或作出摇头或摆手等动作、手势。
这里通过举例的方式,示例性说明几种人机交互操作和驾驶意图之间的对应关系,以便于理解本申请。
(1)对于触碰操作,车辆控制装置可根据触碰操作的位置信息识别该操作对应的意图。比如,触控操作是驾驶员对于特定区域的触碰操作,则车辆控制装置可结合该特定区域的含义识别该触控操作为何种人机交互操作。
例如,触控屏可显示至少一个对应于驾驶意图的虚拟按键,当用户对触控屏上显示的虚拟按键进行触碰操作时,该触碰操作所表达的驾驶意图为该虚拟按键所对应的驾驶意图。
其中,虚拟按键可通过文字或图标显示等方式显示该虚拟按键所对应的驾驶意图。以加速意图为例,如图6所示,虚拟按键可显示“加速”和/或加速数值“5千米每小时(km/h)”等信息,例如,触控屏中显示“加速”虚拟按键,当驾驶员对“加速”虚拟按键进行触控操作后,可显示加速数值为5km/h的虚拟按键和加速数值为10km/h的虚拟按键等,当驾驶员对加速数值为5km/h的虚拟按键进行触碰操作后,车辆控制装置可识别第一意图为加速数值为5km/h的加速意图。
另外,触控屏所显示的虚拟按键也可对应于数值和正负号表示的速度变化值,如果虚拟按键所显示的数值为正数则表示加速意图,数值为负责表示减速意图。比如,当驾驶员对数值为“-5km/h”的虚拟按键进行触碰操作时,表示第一意图为减速数值为5km/h的减速意图。
Similarly, virtual buttons corresponding to driving intents such as the deceleration intent, stopping intent, overtaking intent, drift intent, car-following intent, and/or lane-change intent may be configured in a similar manner.
It should be understood that, in practical use, the virtual buttons may also be replaced by virtual regions, icons, text, windows, or the like.
As another example, driving intents may also be input by tapping the vehicle icon shown in FIG. 6 or a blank display region. For instance, if the driver taps the blank region on the heading side of the vehicle icon shown in FIG. 6, the corresponding driving intent is to accelerate by a set threshold; conversely, if the driver taps the blank region on the rear side of the vehicle icon, a deceleration intent is indicated. The driving intent the driver wishes to express when single-tapping and/or double-tapping the vehicle icon may also be determined through manual setting or predefinition, to facilitate the input of driving intents.
(2) For a drag operation, the vehicle control apparatus may recognize the intent corresponding to the drag based on information such as its trajectory and drag acceleration. From a drag operation, intents such as the acceleration intent, deceleration intent, stopping intent, overtaking intent, lane-change intent, turning intent, travel trajectory intent, drift intent, car-following intent, or acceleration-rate intent may be determined.
Optionally, as shown in FIG. 7, the display may show the trajectory of the driver's drag operation. For example, when the driver drags the vehicle icon, the display may show the trajectory along which the icon is dragged.
In addition, optionally, after a drag of the icon begins, the display may show the icon at a first position, where the driver's finger is located at a first moment; as the drag proceeds, at a second moment the display may show the icon at a second position, where the finger is then located, the first moment differing from the second moment and the first position from the second position. In other words, the display may move the displayed position of the vehicle icon as the driver's drag trajectory changes.
The way the vehicle control apparatus recognizes the first intent from the drag trajectory is illustrated below by example.
As shown in FIG. 7, when the driver drags the vehicle to a lane other than the current lane, taking a capacitive touchscreen as an example, the position touched by the finger on the screen (in coordinate form) is detected by the touchscreen controller and sent to the CPU through an interface (such as an RS-232 serial port), thereby determining the input information. The vehicle control apparatus judges, from the coordinate positions of the lane lines shown on the touchscreen together with the initial vehicle position coordinates and the post-drag vehicle position coordinates detected by the touchscreen, that the first intent is a lane-change intent, and may perform the lane-change operation according to that intent. As another example, if the driver drags the vehicle in the same direction as the vehicle's direction of travel (i.e., the heading direction), the touchscreen's CPU may record the coordinates and time points of the detected start position and drag end position; the drag speed is obtained by computing the distance between the coordinate points and dividing it by the time difference between the two points. If the drag speed is greater than the current vehicle speed, the first intent is an acceleration intent; conversely, if the determined drag speed is less than the current vehicle speed, the first intent is a deceleration intent.
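A minimal, non-limiting Python sketch of the coordinate-and-time bookkeeping just described follows; the scale factor mapping screen speed to km/h is a hypothetical parameter, since the patent does not specify one.

```python
# Hypothetical sketch: drag speed = distance between sampled points
# divided by elapsed time, then compared with the current vehicle speed.
import math

def drag_speed(p0, p1, t0, t1):
    """Drag speed in screen units per second between two sampled points."""
    dist = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return dist / (t1 - t0)

def speed_intent(p0, p1, t0, t1, scale, current_speed_kmh):
    """Map drag speed (scaled to km/h via the assumed factor `scale`)
    to an acceleration or deceleration intent."""
    v = drag_speed(p0, p1, t0, t1) * scale
    if v > current_speed_kmh:
        return ("accelerate", v - current_speed_kmh)
    if v < current_speed_kmh:
        return ("decelerate", current_speed_kmh - v)
    return ("keep", 0.0)

# Drag of 240 px in 1.2 s, scaled by 0.5 -> 100 km/h vs. current 60 km/h
print(speed_intent((100, 400), (100, 160), 0.0, 1.2, 0.5, 60))
```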
The vehicle control apparatus may also obtain a travel trajectory intent from the driver's drag trajectory on the display; the travel trajectory intent is the driving intent of controlling the vehicle's travel trajectory according to (or based on) the drag trajectory.
The vehicle control apparatus may also determine an acceleration-rate intent from the acceleration of the drag operation. For example, if the driver's drag operation expresses the wish that the vehicle travel with the acceleration of the drag, then the acceleration-rate intent determined by the vehicle control apparatus is to control the vehicle's acceleration according to the acceleration of the driver's drag operation.
The vehicle control apparatus may also determine an acceleration or deceleration intent from the direction of the drag operation. For example, if the angle between the direction of the drag trajectory and the heading direction is less than 90 degrees, an acceleration intent is indicated, and the acceleration value may be a default value or a value the driver inputs by other means. If the angle between the direction of the drag trajectory and the heading direction is greater than 90 degrees, a deceleration intent is indicated, and likewise the value may be a default value or a value the driver inputs by other means.
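An illustrative sketch of this angle test, assuming screen-space vectors and a hypothetical default speed delta:

```python
# Hypothetical sketch: decide accelerate vs. decelerate from the angle
# between the drag direction and the vehicle heading (< 90 degrees
# accelerates, > 90 degrees decelerates), as described above.
import math

def direction_intent(drag_vec, heading_vec, default_delta_kmh=5.0):
    dot = drag_vec[0] * heading_vec[0] + drag_vec[1] * heading_vec[1]
    norm = math.hypot(*drag_vec) * math.hypot(*heading_vec)
    angle = math.degrees(math.acos(dot / norm))
    if angle < 90:
        return ("accelerate", default_delta_kmh)
    if angle > 90:
        return ("decelerate", default_delta_kmh)
    return ("keep", 0.0)

print(direction_intent((0, -1), (0.2, -1)))  # small angle -> accelerate
```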
The display may also show the speed and/or acceleration corresponding to the drag operation, and the driver controls the increase or decrease of the speed and/or acceleration by, for example, changing the drag speed. When the driver ends the drag operation, the speed and/or acceleration shown on the display is the speed and/or acceleration the driver wants the vehicle to reach. If that speed is greater than the vehicle's current travel speed, the driving intent corresponding to the drag operation is an acceleration intent; if it is less than the current travel speed, a deceleration intent; if that acceleration is greater than the vehicle's current acceleration, the corresponding driving intent is an acceleration-rate intent to increase acceleration; if it is less than the current acceleration, an acceleration-rate intent to decrease acceleration.
As another example, a drift intent may also be recognized from the shape of the drag trajectory; one possible implementation is shown in FIG. 8. As can be seen, the driver may draw a line ahead of the ego vehicle on the touchscreen: first a small but distinct turn in one direction, then a rapid turn toward the other direction while maintaining a fairly high speed, and finally an abrupt stop like hard braking. The vehicle control apparatus reads the drag trajectory from the touchscreen and recognizes three features from it: A) after turning in one direction, the trajectory rapidly turns toward the other direction; B) the trajectory finally turns to a direction at an angle of 90 to 120 degrees from the original direction of travel; C) the trajectory speed drops sharply and finally stops. If all three features are satisfied, the first intent is judged to be a drift intent. A prompt such as "Drift?" may then pop up, and the drift intent is confirmed after the user taps "Confirm".
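The following minimal sketch checks the three listed features against precomputed trajectory statistics; the input representation and the 10% stopping threshold are assumptions for illustration only.

```python
# Hypothetical sketch of the three drift-trajectory features above:
# (A) a small turn one way followed by a sharper turn the other way,
# (B) a final heading 90 to 120 degrees from the original direction,
# (C) speed dropping rapidly to a stop. Thresholds are assumptions.
def is_drift_intent(turn_angles, final_heading_change, speeds):
    # A: sign change between an initial small turn and a later sharp turn
    feature_a = (len(turn_angles) >= 2
                 and turn_angles[0] * turn_angles[-1] < 0
                 and abs(turn_angles[-1]) > abs(turn_angles[0]))
    # B: final heading deviates 90..120 degrees from the original direction
    feature_b = 90 <= abs(final_heading_change) <= 120
    # C: trajectory speed collapses to (near) zero at the end
    feature_c = speeds[-1] < 0.1 * max(speeds)
    return feature_a and feature_b and feature_c

print(is_drift_intent([15, -70], 105, [300, 320, 180, 20]))  # True
```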
As another example, a car-following intent may also be realized through a drag trajectory on the touchscreen, so that, while compliance is ensured, driving states such as the vehicle's trajectory and its acceleration and deceleration are controlled to resemble those of the vehicle ahead, for example accelerating or decelerating at positions close to those of the vehicle ahead, or turning at the same intersection. One possible implementation: the driver double-taps the vehicle icon and then drags it without lifting the finger from the touchscreen; the icon is then dragged until it coincides with the icon of the vehicle ahead, as shown in FIG. 9, after which the finger leaves the touchscreen (i.e., releases) and double-taps at the position of the leading vehicle's icon. The vehicle control apparatus reads the user's touch operations from the touchscreen and recognizes three features: A) double-tapping and dragging the ego vehicle's icon; B) upon release, the vehicle icon has moved to coincide with another vehicle's icon; C) a double tap is performed after release. Following of the designated leading vehicle is then tentatively set, a prompt "Follow this car?" pops up, and the car-following intent is confirmed after the user taps "Confirm".
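As a non-authoritative sketch of this gesture sequence, the following subsequence check over an assumed event log illustrates one way the three features could be matched; the event names and tuple format are hypothetical.

```python
# Hypothetical sketch of the follow-car gesture sequence above:
# double-tap own icon -> drag without lifting -> release on another
# vehicle's icon -> double-tap again. Event names are assumptions.
def detect_follow_intent(events):
    """events: list of (kind, target) tuples in chronological order,
    kind in {'double_tap', 'drag', 'release'}."""
    pattern = [("double_tap", "ego"), ("drag", "ego"),
               ("release", "other"), ("double_tap", "other")]
    it = iter(events)  # shared iterator enforces in-order matching
    for want in pattern:
        if not any(e == want for e in it):
            return False
    return True

events = [("double_tap", "ego"), ("drag", "ego"),
          ("release", "other"), ("double_tap", "other")]
print(detect_follow_intent(events))  # True -> ask "Follow this car?"
```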
It should be understood that the above description of recognizing driving intents from drag operations is merely illustrative, and should not be construed as limiting the protection of this application to the above examples.
(3) For air gesture operations, the vehicle control apparatus may recognize the driver's air gesture collected by a camera, millimeter-wave radar, ultrasonic sensor, or the like, and identify the first intent corresponding to the air gesture, thereby obtaining the first intent. For example, if the driver's gesture captured by the camera includes a gesture indicating acceleration, the vehicle control apparatus may recognize from that gesture that the first intent is an acceleration intent.
In a specific implementation, correspondences between air gestures and driving intents may be configured. Since current air-gesture recognition technology can recognize both the shape and the motion trajectory of a gesture, correspondences between gesture shapes and driving intents may be predefined, for example pointing a finger left means change lane to the left, pointing right means change lane to the right, pointing forward means accelerate, and pointing backward means decelerate; and/or correspondences between gesture motion trajectories and driving intents may be predefined, for example moving the palm left means change lane to the left, moving it right means change lane to the right, moving it forward means accelerate, and moving it backward means decelerate. As another example, with both hands held in the air in the posture of gripping and turning a steering wheel, turning left means change lane to the left, turning right means change lane to the right, and so on. The technical means of air-gesture recognition is not limited here; for example, a camera or millimeter-wave radar may be used for recognition. When the vehicle control apparatus recognizes the air gesture corresponding to a driving intent, it recognizes the first intent as the intent corresponding to that air gesture.
S103: The vehicle control apparatus controls the driving state of the vehicle according to the first intent.
Here, the driving state refers to states related to how the vehicle travels, rather than states unrelated to travel, such as in-vehicle entertainment systems (car audio or media devices) or lighting control. Ways of traveling include, for example, accelerating, decelerating, approaching, reversing, stopping, whether to change the acceleration, whether to change the travel trajectory, whether to overtake, whether to turn, whether to drift, whether to follow a car, and whether to change lanes.
Illustratively, the vehicle control apparatus may control the driving state of the vehicle according to the first intent a limited number of times (for example, once), or control the vehicle within a limited period (for example, one minute). For instance, the number of times the first intent takes effect, or its effective duration, may be configured.
It should be understood that, while controlling the vehicle's driving state according to the first intent, the vehicle control apparatus can ensure safety and compliance. For example, in S103 the vehicle control apparatus may control the vehicle according to the first intent while following the rules of automated driving.
With the flow of FIG. 5 above, the vehicle control apparatus can flexibly recognize the driving intent based on the driver's gesture operations and control the vehicle's driving state according to that driving intent, providing a more flexible way of recognizing driving intent and a better driving experience.
Optionally, in S103, the vehicle control apparatus may determine whether the first intent is allowed to be executed. A first intent that satisfies a first condition is allowed to be executed; a first intent that does not satisfy the first condition is not allowed to be executed.
Illustratively, the vehicle control apparatus may judge whether the first intent satisfies the first condition: if it does, the first intent is allowed to be executed; otherwise, it is not. The first condition includes, but is not limited to, at least one of a traffic rule condition, a safe driving condition, or a comfort condition. Accordingly, when it judges that the first intent does not satisfy the first condition, the vehicle control apparatus may refuse to control the vehicle according to the first intent, so as to ensure driving safety, compliance, and comfort.
Traffic rule conditions are, for example, the vehicle travel conditions required by traffic regulations and by the road the vehicle is currently on, including but not limited to no crossing of double solid lines, no crossing of channelizing lines, no prolonged straddling of lane lines, and the road's maximum and minimum speed-limit conditions.
Safe driving conditions are conditions proposed, on the premise that the traffic rule conditions are satisfied, to further improve driving safety. Safe driving conditions may relate to the vehicle's current condition information or to current road information. For example, safe driving conditions include maintaining a safe driving distance from pedestrians, other vehicles, traffic facilities such as guardrails, and curb or road-surface structures. In addition, to ensure driving safety, additional maximum and minimum speed-limit conditions for the vehicle may be configured.
Comfort conditions are travel conditions specified to meet the needs of the driver and passengers for a comfortable ride, for example limiting the vehicle's acceleration to a set acceleration range to avoid the discomfort of violent acceleration or deceleration, or limiting the vehicle's speed during turns to no more than a set speed. Comfort conditions may be conditions that meet the general comfort needs of the public, or conditions configured according to the individual comfort needs of the driver or passengers.
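As a minimal sketch of checking an intent against such a first condition, under the assumption of illustrative numeric limits that do not come from the patent:

```python
# Hypothetical sketch: check a first intent against the first condition
# (traffic rules, safe driving, comfort). All limits are illustrative.
from dataclasses import dataclass

@dataclass
class Limits:
    max_speed_kmh: float = 80.0   # traffic-rule speed cap
    safe_speed_kmh: float = 70.0  # tighter cap under dense traffic
    max_accel_ms2: float = 2.5    # comfort acceleration bound

def intent_allowed(target_speed_kmh, target_accel_ms2, lim: Limits) -> bool:
    return (target_speed_kmh <= lim.max_speed_kmh
            and target_speed_kmh <= lim.safe_speed_kmh
            and abs(target_accel_ms2) <= lim.max_accel_ms2)

print(intent_allowed(75, 1.0, Limits()))  # False: exceeds safe cap of 70
```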
In one possible implementation, when the vehicle control apparatus recognizes that the first intent does not satisfy the first condition, the vehicle control apparatus stops execution of the first intent, i.e., stops controlling the vehicle's driving state according to the first intent. At this point, the vehicle control apparatus may indicate to the driver that it refuses to control the vehicle's driving state according to the first intent, or indicate that it refuses to execute the control instruction corresponding to the first intent.
For example, when the driver expresses a lane-change intent through a drag operation, if the lane-change manner indicated by the drag trajectory is not allowed, the vehicle control apparatus stops controlling the vehicle's driving state according to that intent. For instance, if the driver's drag operation moves the vehicle into a lane of the opposite direction, or the drag trajectory crosses a double solid line, the lane-change intent does not satisfy the first condition; controlling the vehicle's driving state according to that intent is then not allowed, and the vehicle control apparatus may refuse to execute it. Moreover, when the first intent does not satisfy the first condition, the vehicle control apparatus may, by playing speech or through the touchscreen display, prompt the driver that controlling the vehicle's driving state according to the driver's first intent is not allowed.
Optionally, after determining that the first intent does not satisfy the first condition, the vehicle control apparatus may stop execution of the first intent and send a first prompt message, the first prompt message being usable to notify the driver that the first intent is not allowed to be executed. Taking a drag operation as an example, when the first intent corresponding to the drag operation is not allowed to be executed, the vehicle control apparatus may send a first prompt message indicating that the first intent corresponding to the drag operation is not allowed to be executed.
In this application, the vehicle control apparatus sending a prompt message (such as the first, second, or third prompt message) may be understood as the vehicle control apparatus outputting (or presenting) the prompt message to the driver, or as the vehicle control apparatus sending the prompt message to a human-machine interaction device such as the display, which then outputs (or presents) the prompt message to the driver.
Specifically regarding the sending of the first prompt message, it may mean that the vehicle control apparatus sends the first prompt message to a human-machine interaction device such as the display, so that the human-machine interaction device notifies the driver that the first intent is not allowed to be executed. Alternatively, sending the first prompt message may mean that the vehicle control apparatus notifies the driver, through the HMI module, that the first intent is not allowed to be executed.
For example, if the first gesture operation includes a drag operation, then when the vehicle control apparatus recognizes that the driving intent corresponding to the drag operation is not allowed to be executed, the first prompt message may be sent, for example, by displaying a symbol such as "×" or text such as "cannot execute" on the display, indicating that controlling the vehicle's driving state according to the drag operation is not allowed.
In addition, for a drag operation that is not allowed to be executed, the vehicle control apparatus may also clear the drag trajectory of that operation shown on the display. And/or, if the vehicle control apparatus moves the displayed position of the vehicle icon as the drag trajectory changes, then, upon determining that the drag operation is not allowed to be executed, the vehicle control apparatus may display the vehicle icon at a first display position, the first display position being the position at which the vehicle icon was displayed before the drag operation was obtained; in other words, the vehicle control apparatus may restore the vehicle icon to the display position it occupied before the driver performed the drag operation on it.
In another possible implementation, when the vehicle control apparatus recognizes that the first intent does not satisfy the first condition, the vehicle control apparatus may determine, based on the first intent, a second intent that satisfies the first condition. After obtaining the second intent, the vehicle control apparatus may ask the driver, through a third prompt message, whether he or she agrees to execute the second intent, in other words, whether to control the vehicle's driving state according to the second intent.
One way to obtain the second intent from the first intent is, without changing the intent type of the first intent, to change information such as the numeric value or drag trajectory corresponding to the first intent. For example, if the vehicle's current travel speed is 75 km/h, the driver's first intent is an acceleration intent of 10 km/h, but the current road's maximum speed limit is 80 km/h, the vehicle may determine from that maximum limit a second intent of accelerating by 5 km/h. As another example, the current road segment's maximum limit is 80 km/h, but because the segment is busy, the safe driving condition may appropriately cap the vehicle's maximum travel speed, say at 70 km/h; in that case, if the vehicle is currently traveling at 65 km/h and the driver's first intent is an acceleration intent of 10 km/h, the vehicle may determine a second intent of accelerating by 5 km/h. As yet another example, if the first intent is an acceleration-rate intent and the acceleration it indicates exceeds the acceleration range required by the comfort condition, the vehicle control apparatus may generate a second intent whose corresponding acceleration lies within that acceleration range.
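A minimal sketch of this clamping, mirroring the 75 km/h plus 10 km/h versus 80 km/h example above; the dictionary representation of the intent is an assumption for illustration:

```python
# Hypothetical sketch: derive a compliant second intent from a
# non-compliant first intent by clamping the requested speed delta.
def second_intent(current_kmh, requested_delta_kmh, cap_kmh):
    allowed_delta = max(0.0, cap_kmh - current_kmh)
    clamped = min(requested_delta_kmh, allowed_delta)
    return {"type": "accelerate", "delta_kmh": clamped}

print(second_intent(75, 10, 80))  # {'type': 'accelerate', 'delta_kmh': 5.0}
```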
As another example, when the first intent determined from the drag trajectory input by the driver via the touchscreen is not allowed to be executed, the drag trajectory is corrected according to the first condition, and the second intent is obtained from the corrected drag trajectory. For instance, if the first intent is a travel trajectory intent, but the drag trajectory input by the driver passes through the oncoming lane or another region the vehicle is not allowed to traverse, so that traveling along that drag trajectory would not satisfy the first condition, the drag trajectory may be corrected according to the first condition to remove the cause of the non-compliance, and the second intent, which may likewise be a travel trajectory intent, is obtained from the corrected drag trajectory.
It should be understood that, in this application, if the vehicle control apparatus recognizes that the first intent is a driving intent that does not satisfy the first condition, it may adjust the first intent to obtain a second intent that does satisfy the first condition. The vehicle control apparatus may then ask the driver whether to execute the second intent.
When asking the driver whether he or she agrees to execute the second intent, the vehicle control apparatus may display on the screen the text corresponding to the second intent or the corrected drag trajectory, or play through the speaker the speech corresponding to the second intent, and then detect the driver's human-machine interaction operation and judge from it whether the driver agrees to execute the second intent.
Human-machine interaction operations expressing agreement to execute the second intent include, for example, the driver tapping a virtual button on the touchscreen that indicates agreement, answering with speech containing words such as "agree" or "confirm", or expressing agreement through motions or gestures such as nodding. Human-machine interaction operations expressing disagreement with executing the second intent include, for example, the driver tapping a virtual button on the touchscreen indicating disagreement, answering with speech containing words such as "disagree" or "cancel", or making motions or gestures such as shaking the head or waving.
Further, if a human-machine interaction operation expressing the driver's agreement to execute the second intent is detected, the vehicle control apparatus may control the vehicle's driving state according to the second intent. If a human-machine interaction operation expressing the driver's disagreement is detected, or if no operation expressing agreement to execute the second intent is received from the driver within a set period, the vehicle control apparatus may judge that the driver does not agree to execute the second intent and stop execution of the second intent; at this point, the vehicle control apparatus may prompt the driver to input a driving intent again.
Taking a mobile phone with a touchscreen as the vehicle control apparatus as an example, the query operation in this application may be implemented through the dialog box shown on the phone screen in FIG. 10. Taking the second intent as an adjusted acceleration intent as an example, the dialog box may indicate that the vehicle control apparatus has determined a second intent satisfying the first condition and ask the driver whether to execute that intent. Optionally, through the dialog box shown in FIG. 10, the phone screen may also display a virtual button expressing agreement to execute the second intent (such as a "Confirm" button) and a virtual button expressing disagreement (such as a "Cancel" button); when the driver taps either virtual button, the phone learns the tap result, i.e., whether the driver agrees to execute the second intent. Optionally, the dialog box may also display a countdown of the set period; if, once the countdown reaches 0, the phone has not detected the driver tapping the virtual button expressing agreement to execute the second intent, it determines that the driver does not agree to execute the second intent.
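Purely as an illustrative sketch of the confirm-or-timeout behavior, assuming a hypothetical polling interface to the dialog's buttons:

```python
# Hypothetical sketch of the confirm/cancel dialog with a countdown:
# absent a confirmation within the time limit, the second intent is
# treated as declined. The polling interface is an assumption.
import time

def ask_confirmation(poll_button, timeout_s=10.0, step_s=0.1):
    """poll_button() returns 'confirm', 'cancel', or None (no input yet)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        answer = poll_button()
        if answer in ("confirm", "cancel"):
            return answer == "confirm"
        time.sleep(step_s)
    return False  # countdown hit zero: treat as "do not execute"

# Example with a stub that never answers -> declined after timeout
print(ask_confirmation(lambda: None, timeout_s=0.3))  # False
```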
The process of correcting the first intent to obtain the second intent and asking the driver whether to execute the second intent is described below with reference to a specific example of the first intent.
When the vehicle control apparatus recognizes that a travel trajectory intent does not satisfy the first condition, it may correct the travel trajectory corresponding to that intent to obtain a corrected travel trajectory representing a suggested travel trajectory (for example, one marked as satisfying the first condition); the travel trajectory intent corresponding to the corrected travel trajectory is the second intent. The vehicle control apparatus may send a second prompt message, used to ask whether to control the vehicle's driving state according to the corrected drag trajectory (in other words, to ask whether the driver agrees to execute the second intent, or agrees to control the vehicle's driving state according to the second intent). Sending the second prompt message may mean that the vehicle control apparatus sends it to a human-machine interaction device such as the display, so that the human-machine interaction device asks the driver whether to control the vehicle's driving state according to the corrected drag trajectory; alternatively, sending the second prompt message may mean that the vehicle control apparatus asks the driver, through the HMI module, whether to control the vehicle's driving state according to the corrected drag trajectory.
Optionally, the vehicle control apparatus may also display the corrected drag trajectory on the display.
Thereafter, the vehicle control apparatus may obtain the driver's first operation, the first operation expressing agreement to control the vehicle's driving state according to the corrected drag trajectory (in other words, agreement to execute the second intent, or to control the vehicle's driving state according to the second intent); the vehicle control apparatus may then control the vehicle's driving state according to the corrected drag trajectory, that is, according to the second intent. Conversely, if an operation of the driver expressing disagreement with controlling the vehicle's driving state according to the corrected drag trajectory is obtained, the second intent is not executed.
In addition, in this application, the vehicle control apparatus may also obtain the second intent from the first intent by delaying the execution time of the first intent; that is, the first intent and the second intent differ in execution timing. For example, suppose the first intent includes an acceleration intent, but the vehicle's current travel speed has already reached the maximum speed limit of the current road segment; if the vehicle control apparatus finds that a road segment it will pass through some time later allows acceleration by the value corresponding to the first intent, it may prompt the user to accelerate after reaching that segment. In this example, accelerating after reaching the future segment can be regarded as the second intent obtained from the first intent.
Illustratively, after obtaining the second intent from the first intent, the vehicle control apparatus may send the driver a third prompt message, used to ask whether to control the vehicle's driving state according to the second intent (in other words, to ask whether the driver agrees to execute the second intent, or agrees to control the vehicle's driving state according to the second intent). Then, if the vehicle control apparatus obtains the driver's second operation, it controls the vehicle's driving state according to the second intent; the second operation may be a human-machine interaction operation expressing agreement to control the vehicle's driving state according to the second intent (or to execute the second intent). Conversely, if an operation of the driver expressing disagreement with controlling the vehicle's driving state according to the second intent is obtained, the second intent is not executed.
As shown in FIG. 11, when the vehicle control apparatus includes the HMI module, HMI control module, decision planning computation module, and vehicle motion control module shown in FIG. 4, the method provided by the embodiments of this application may include the following steps:
S201: The HMI module obtains the driver's first gesture operation.
S202: The HMI module notifies the HMI control module of the first gesture operation.
S203: The HMI control module determines the first intent according to the first gesture operation.
S204: The HMI control module notifies the decision planning computation module of the first intent.
S205: The decision planning computation module judges whether the first intent satisfies the first condition; if it does, S206 is performed; if it does not, S207 and/or S208 are performed.
S206: The decision planning computation module generates a control command according to the first intent and executes the control command through the vehicle motion control module, thereby controlling the vehicle's driving state according to the first intent.
S207: The decision planning computation module notifies the driver, through the HMI module, that controlling the vehicle's driving state according to the first intent is not allowed.
For example, the decision planning computation module sends an instruction to the HMI module, the instruction being used by the HMI module to notify the driver that the first intent is not allowed to be executed. The HMI module may then notify the driver through the display and/or the speaker.
S208: The decision planning computation module determines, based on the first intent, a second intent that satisfies the first condition.
S209: The decision planning computation module gives notice of the second intent.
S210: The HMI module asks the driver whether to execute the second intent.
S211: The HMI module obtains the driver's first human-machine interaction operation.
The first human-machine interaction operation is, for example, a touch operation performed by the driver on the touchscreen or an air gesture operation, or possibly a speech input operation performed by the driver; this application does not specifically limit it.
S212: The HMI module notifies the HMI control module of the driver's first human-machine interaction operation.
S213: The HMI control module identifies whether the first human-machine interaction operation is an operation expressing agreement to execute the second intent. If it is, S214-S215 are performed; otherwise, if the first human-machine interaction operation does not express agreement to execute the second intent, S216 is performed.
S214: The HMI control module notifies the decision planning computation module that the driver agrees to execute the second intent.
S215: The decision planning computation module generates a control command according to the second intent and executes the control command through the vehicle motion control module, thereby controlling the vehicle's driving state according to the second intent.
S216: The HMI control module notifies the decision planning computation module that the driver does not agree to execute the second intent. The flow of controlling the vehicle's driving state according to the first and/or second intent then ends; that is, the vehicle's driving state is not controlled according to the first intent or the second intent.
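For orientation only, the following sketch compresses the S205-S216 branches into one function; every callable is an illustrative stand-in for a module of FIG. 4, not an interface defined by the patent.

```python
# Hypothetical orchestration sketch of the branches above: execute the
# first intent if allowed; otherwise derive a second intent and execute
# it only after driver confirmation.
def handle_intent(intent, allowed, derive_second, confirm, execute, notify):
    if allowed(intent):                          # S205 -> S206
        execute(intent)
    else:
        second = derive_second(intent)           # S208
        if second is None:
            notify("first intent not allowed")   # S207
        elif confirm(second):                    # S210-S214
            execute(second)                      # S215
        else:
            notify("second intent declined")     # S216

handle_intent({"type": "accelerate", "delta_kmh": 10},
              allowed=lambda i: False,
              derive_second=lambda i: {"type": "accelerate", "delta_kmh": 5},
              confirm=lambda s: True,
              execute=lambda i: print("execute", i),
              notify=print)
```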
According to the flow shown in FIG. 11, the vehicle control method provided by the embodiments of this application can be implemented by the vehicle control apparatus shown in FIG. 4. It should be understood that the steps shown in FIG. 11 as implemented by the apparatus of FIG. 4 are illustrative; according to the vehicle control method provided by the embodiments of this application, some steps shown in FIG. 11 may be omitted, some may be replaced by other steps, or the vehicle control apparatus may perform steps not shown in FIG. 11.
Based on the above content and the same conception, this application further provides a vehicle control apparatus for implementing the functions of the vehicle control apparatus in the vehicle control method introduced in the method embodiments above; it therefore has the beneficial effects of those method embodiments. The vehicle control apparatus may include any one of the structures in FIG. 2 to FIG. 4, or be implemented by a combination of any several of those structures.
The vehicle control apparatus shown in FIG. 2 may be a terminal or a vehicle, or a chip inside a terminal or a vehicle. The vehicle control apparatus can implement the vehicle control method shown in FIG. 5 or FIG. 11 and the optional embodiments above. The vehicle control apparatus may include a processing module 210 and an input/output module 220.
The processing module 210 may be used to perform any of S102 and S103 in the method of FIG. 5, or S203, S205, S206, S208, S213, or S215 in the method of FIG. 11, or any step in the optional embodiments above involving vehicle control, intent recognition, judging whether execution of a driving intent is allowed, or correction of a driving intent. The input/output module 220 may be used to perform any of S101 in the method of FIG. 5, or S201, S207, S210, or S211 in the method of FIG. 11, or any step in the optional embodiments above involving human-machine interaction. For specifics, see the detailed description in the method examples, not repeated here.
The input/output module 220 may be used to obtain the driver's first gesture operation. The processing module 210 may be used to determine the driver's first intent according to the first gesture operation and to control the vehicle's driving state according to the first intent. The first gesture operation includes at least one of the following operations: a touch operation by the driver on the touchscreen, the touch operation including a tap operation or a drag operation; or an air gesture operation by the driver.
It should be understood that the vehicle control apparatus in the embodiments of this application may be implemented in software, for example by a computer program or instructions having the functions above; the corresponding computer program or instructions may be stored in a memory inside the terminal, and a processor reads the corresponding computer program or instructions in that memory to realize the functions of the processing module 210 and/or the input/output module 220 above. Alternatively, the vehicle control apparatus in the embodiments of this application may be implemented in hardware, in which case the processing module 210 may be a processor (such as a CPU or a processor in a system chip), and the input/output module 220 may include a human-machine interaction device, or include an interface supporting communication between the processing module 210 and the human-machine interaction device, such as an interface circuit, used to notify the processing module 210 of the first gesture operation recognized by the human-machine interaction device. If a human-machine interaction device is included, the input/output module 220 may include a processor for recognizing the first gesture operation. Alternatively, the vehicle control apparatus in the embodiments of this application may be implemented by a combination of a processor and software modules.
In an optional implementation, the first intent includes at least one of the following intents: an overtaking intent, a lane-change intent, a turning intent, or a travel trajectory intent.
In an optional implementation, if the first gesture operation includes the driver's drag operation on the vehicle icon displayed on the touchscreen, the input/output module 220 may further be used to: move the displayed position of the vehicle icon on the touchscreen as the drag trajectory of the drag operation changes; and/or display the drag trajectory of the drag operation on the touchscreen. In this case, the input/output module 220 may include the touchscreen, or include an interface connected to the touchscreen, for causing the touchscreen to display the drag trajectory.
In an optional implementation, the processing module 210 may further be used to control the vehicle's driving state according to the drag trajectory when the first intent corresponding to the drag trajectory is allowed to be executed, or, when the first intent corresponding to the drag trajectory is not allowed to be executed, to clear the drag trajectory displayed on the touchscreen; in this case, the input/output module 220 may include the touchscreen, or an interface connected to the touchscreen, for causing the touchscreen to clear the drag trajectory. When the first intent corresponding to the drag trajectory is not allowed to be executed, the input/output module 220 may further be used to send a first prompt message, the first prompt message being used to give notice that the first intent corresponding to the drag operation is not allowed to be executed; in this case, the input/output module 220 may include the touchscreen, a speaker, or another human-machine interaction device, or an interface connected to such a device, for notifying the driver through the human-machine interaction device that the first intent corresponding to the drag operation is not allowed to be executed. And/or, the input/output module 220 may display the vehicle icon at a first display position, the first display position being the position at which the vehicle icon was displayed before the drag operation was obtained; in this case, the input/output module 220 may include the touchscreen, or an interface connected to the touchscreen, for displaying the vehicle icon at the first display position.
In an optional implementation, the first intent corresponding to the drag operation includes a travel trajectory intent, and when the first intent corresponding to the drag trajectory is not allowed to be executed, the processing module 210 is further used to correct the drag trajectory according to at least one of a traffic rule condition, a safe driving condition, an environmental condition, or a comfort condition. The input/output module 220 may further be used to display the corrected drag trajectory on the touchscreen, the corrected drag trajectory representing a suggested travel route; in this case, the input/output module 220 may include the touchscreen, or an interface connected to the touchscreen, for displaying the corrected drag trajectory.
In an optional implementation, the input/output module 220 may further be used to send a second prompt message, the second prompt message being used to ask whether to control the vehicle's driving state according to the corrected drag trajectory, and to obtain the driver's first operation, the first operation expressing agreement to control the vehicle's driving state according to the corrected drag trajectory. The processing module 210 may further be used to control the vehicle's driving state according to the corrected drag trajectory. In this case, the input/output module 220 may include the touchscreen, a speaker, or another human-machine interaction device, or an interface connected to such a device, for sending the driver the second prompt message through the human-machine interaction device and obtaining the driver's first operation.
In an optional implementation, the processing module 210 may further be used to determine that the first intent does not satisfy a first condition, the first condition including at least one of a traffic rule condition, a safe driving condition, an environmental condition, or a comfort condition. The processing module 210 may further be used to determine a second intent from the first intent, the second intent satisfying the first condition, the execution timing of the second intent differing from that of the first intent. The processing module 210 may specifically be used to control the vehicle's driving state according to the second intent.
In an optional implementation, the input/output module 220 may further be used to send a third prompt message, the third prompt message being used to ask whether to control the vehicle's driving state according to the second intent, and to obtain the driver's second operation, the second operation expressing agreement to control the vehicle's driving state according to the second intent. In this case, the input/output module 220 may include the touchscreen, a speaker, or another human-machine interaction device, or an interface connected to such a device, for sending the driver the third prompt message through the human-machine interaction device and obtaining the driver's second operation.
In an optional implementation, the processing module 210 may further be used to determine that the first intent does not satisfy a first condition, the first condition including at least one of a traffic rule condition, a safe driving condition, or a comfort condition. The processing module 210 may further be used to determine a second intent according to the first condition, the second intent satisfying the first condition. The input/output module 220 may further be used to send a third prompt message, the third prompt message being used to ask whether to control the vehicle's driving state according to the second intent, and to obtain the driver's third operation, the third operation expressing disagreement with controlling the vehicle's driving state according to the second intent. The processing module 210 may further be used to stop controlling the vehicle's driving state according to the second intent.
It should be understood that, for the processing details of the vehicle control apparatus in the embodiments of this application, reference may be made to FIG. 5, FIG. 11, and the related statements in the method embodiments of this application, not repeated here.
The vehicle control apparatus shown in FIG. 3 may be a terminal or a vehicle, or a chip inside a terminal or a vehicle. The vehicle control apparatus can implement the vehicle control method shown in FIG. 5 or FIG. 11 and the optional embodiments above. The vehicle control apparatus may include at least one of a processor, a memory, an interface circuit, or a human-machine interaction device. It should be understood that, although FIG. 3 shows only one processor, one memory, one interface circuit, and one (kind of) human-machine interaction device, the vehicle control apparatus may include other numbers of processors and interface circuits.
The interface circuit is used for the vehicle control apparatus to communicate with other components of the terminal or vehicle, such as a memory, another processor, or a human-machine interaction device. The processor may be used to exchange signals with other components through the interface circuit. The interface circuit may be an input/output interface of the processor.
For example, the processor may, through the interface circuit, read computer programs or instructions in the memory coupled to it, and decode and execute those computer programs or instructions. It should be understood that these computer programs or instructions may include the functional programs above, as well as the functional programs of the vehicle control apparatus above. When the corresponding functional program is decoded and executed by the processor, the vehicle control apparatus can be made to implement the solutions in the vehicle control method provided by the embodiments of this application.
Optionally, these functional programs are stored in a memory outside the vehicle control apparatus, in which case the vehicle control apparatus may not include a memory. When the functional programs above are decoded and executed by the processor, part or all of their content is temporarily stored in the memory.
Optionally, these functional programs are stored in a memory inside the vehicle control apparatus. When the memory inside the vehicle control apparatus stores the functional programs above, the vehicle control apparatus may be provided in the vehicle control apparatus of the embodiments of the present invention.
Optionally, part of these functional programs is stored in a memory outside the vehicle control apparatus, and the other part of these functional programs is stored in a memory inside the vehicle control apparatus.
It should be understood that the processor above may be a chip. For example, the processor may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processor unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
It should be noted that the processor in the embodiments of this application may be an integrated circuit chip with signal-processing capability. In implementation, the steps of the method embodiments above may be completed by integrated logic circuits of hardware in the processor or by instructions in software form. The processor above may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and it can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of this application may be embodied directly as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the methods above in combination with its hardware.
It can be understood that the memory in the embodiments of this application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but not be limited to, these and any other suitable types of memory.
It should be understood that, when the vehicle control apparatus is implemented with the structure shown in FIG. 3, the memory may store computer programs or instructions, and the processor may execute the computer programs or instructions stored in the memory to perform the actions performed by the processing module 210 when the apparatus is implemented with the structure shown in FIG. 2. The interface circuit and/or the human-machine interaction device may perform the actions performed by the input/output module 220 when the apparatus is implemented with the structure shown in FIG. 2. Optionally, the processing module 210 shown in FIG. 2 may be implemented by the processor and memory shown in FIG. 3; in other words, the processing module 210 shown in FIG. 2 includes a processor and a memory, or the processor executes the computer programs or instructions stored in the memory to perform the actions of the processing module 210 shown in FIG. 2 above. And/or, the input/output module 220 shown in FIG. 2 may be implemented by the interface circuit and/or human-machine interaction device shown in FIG. 3; in other words, the input/output module 220 shown in FIG. 2 includes the interface circuit and/or human-machine interaction device shown in FIG. 3, or the interface circuit and/or human-machine interaction device performs the actions of the input/output module 220 shown in FIG. 2 above.
When the vehicle control apparatus is implemented with the structure shown in FIG. 4, the HMI control module, decision planning computation module, and vehicle motion control module may perform the actions performed by the processing module 210 when the apparatus is implemented with the structure shown in FIG. 2, and the HMI module may perform the actions performed by the input/output module 220 in that case. For the actions performed respectively by the HMI module, HMI control module, decision planning computation module, and vehicle motion control module when the apparatus is implemented with the structure shown in FIG. 4, reference may be made to the description of the flow shown in FIG. 11, not repeated here.
It should be understood that the structures of the vehicle control apparatus shown in any of FIG. 2 to FIG. 4 may be combined with one another, and the related design details of the vehicle control apparatus shown in any of FIG. 2 to FIG. 4 and of the optional embodiments may be cross-referenced; reference may also be made to the vehicle control method shown in FIG. 5 or FIG. 11 and the related design details of the optional embodiments. This is not repeated here.
Based on the above content and the same conception, this application provides a computing device including a processor connected to a memory, the memory being used to store computer programs or instructions and the processor being used to execute the computer programs stored in the memory, so that the computing device performs the methods in the method embodiments above.
Based on the above content and the same conception, this application provides a computer-readable storage medium on which computer programs or instructions are stored, such that, when the computer programs or instructions are executed, the computing device performs the methods in the method embodiments above.
Based on the above content and the same conception, this application provides a computer program product such that, when a computer executes the computer program product, the computing device performs the methods in the method embodiments above.
Based on the above content and the same conception, this application provides a chip connected to a memory, the chip being used to read and execute the computer programs or instructions stored in the memory, so that the computing device performs the methods in the method embodiments above.
Based on the above content and the same conception, embodiments of this application provide an apparatus including a processor and an interface circuit, the interface circuit being used to receive computer programs or instructions and transmit them to the processor, and the processor running the computer programs or instructions to perform the methods in the method embodiments above.
It should be understood that the division into modules in the embodiments of this application is schematic and is merely a division by logical function; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments of this application may be integrated into one processor, may exist separately and physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and variations to this application without departing from the scope of protection of this application. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their technical equivalents, this application is also intended to encompass these changes and variations.

Claims (21)

  1. A vehicle control method, wherein the vehicle is in an automated driving state, the method comprising:
    obtaining a first gesture operation of a driver;
    determining a first intent of the driver according to the first gesture operation;
    controlling a driving state of the vehicle according to the first intent;
    wherein the first gesture operation comprises one or more of the following operations:
    a touch operation by the driver on a touchscreen, the touch operation comprising a tap operation or a drag operation; or,
    an air gesture operation by the driver.
  2. The method according to claim 1, wherein the first intent comprises at least one of the following intents:
    an overtaking intent, a lane-change intent, a turning intent, or a travel trajectory intent.
  3. The method according to claim 1 or 2, wherein the first gesture operation comprises a drag operation by the driver on an icon of the vehicle displayed on the touchscreen;
    the method further comprising:
    moving a display position of the icon of the vehicle on the touchscreen as a drag trajectory of the drag operation changes; and/or,
    displaying the drag trajectory of the drag operation on the touchscreen.
  4. The method according to claim 3, further comprising:
    when a first intent corresponding to the drag trajectory is allowed to be executed, controlling the driving state of the vehicle according to the drag trajectory;
    when the first intent corresponding to the drag trajectory is not allowed to be executed, performing at least one of the following operations:
    sending a first prompt message, the first prompt message being used to give notice that the first intent corresponding to the drag operation is not allowed to be executed; or,
    displaying the icon of the vehicle at a first display position, the first display position being the display position of the icon of the vehicle before the drag operation was obtained; or,
    clearing the drag trajectory displayed on the touchscreen.
  5. The method according to claim 3 or 4, wherein the first intent corresponding to the drag operation comprises a travel trajectory intent, and when the first intent corresponding to the drag trajectory is not allowed to be executed, the method further comprises:
    correcting the drag trajectory according to at least one of a traffic rule condition, a safe driving condition, an environmental condition, or a comfort condition;
    displaying the corrected drag trajectory on the touchscreen, the corrected drag trajectory representing a suggested travel route.
  6. The method according to claim 5, further comprising:
    sending a second prompt message, the second prompt message being used to ask whether to control the driving state of the vehicle according to the corrected drag trajectory;
    obtaining a first operation of the driver, the first operation expressing agreement to control the driving state of the vehicle according to the corrected drag trajectory;
    wherein controlling the driving state of the vehicle according to the first intent comprises:
    controlling the driving state of the vehicle according to the corrected drag trajectory.
  7. The method according to claim 1 or 2, further comprising:
    determining that the first intent does not satisfy a first condition, the first condition comprising at least one of a traffic rule condition, a safe driving condition, an environmental condition, or a comfort condition;
    determining a second intent according to the first intent, the second intent satisfying the first condition, an execution timing of the second intent differing from an execution timing of the first intent;
    wherein controlling the driving state of the vehicle according to the first intent comprises:
    controlling the driving state of the vehicle according to the second intent.
  8. The method according to claim 7, further comprising:
    sending a third prompt message, the third prompt message being used to ask whether to control the driving state of the vehicle according to the second intent;
    obtaining a second operation of the driver, the second operation expressing agreement to control the driving state of the vehicle according to the second intent.
  9. The method according to claim 1 or 2, further comprising:
    determining that the first intent does not satisfy a first condition, the first condition comprising at least one of a traffic rule condition, a safe driving condition, or a comfort condition;
    determining a second intent according to the first condition, the second intent satisfying the first condition;
    sending a third prompt message, the third prompt message being used to ask whether to control the driving state of the vehicle according to the second intent;
    obtaining a third operation of the driver, the third operation expressing disagreement with controlling the driving state of the vehicle according to the second intent;
    stopping controlling the driving state of the vehicle according to the second intent.
  10. A vehicle control apparatus, wherein the vehicle is in an automated driving state, the apparatus comprising a processing module and an input/output module:
    the input/output module being configured to obtain a first gesture operation of a driver;
    the processing module being configured to determine a first intent of the driver according to the first gesture operation, and to control a driving state of the vehicle according to the first intent;
    wherein the first gesture operation comprises at least one of the following operations:
    a touch operation by the driver on a touchscreen, the touch operation comprising a tap operation or a drag operation; or,
    an air gesture operation by the driver.
  11. The vehicle control apparatus according to claim 10, wherein the first intent comprises at least one of the following intents:
    an overtaking intent, a lane-change intent, a turning intent, or a travel trajectory intent.
  12. The vehicle control apparatus according to claim 10 or 11, wherein the first gesture operation comprises a drag operation by the driver on an icon of the vehicle displayed on the touchscreen;
    the input/output module being further configured to:
    move a display position of the icon of the vehicle on the touchscreen as a drag trajectory of the drag operation changes; and/or,
    display the drag trajectory of the drag operation on the touchscreen.
  13. The vehicle control apparatus according to claim 12, wherein the processing module is further configured to:
    when a first intent corresponding to the drag trajectory is allowed to be executed, control the driving state of the vehicle according to the drag trajectory;
    when the first intent corresponding to the drag trajectory is not allowed to be executed, clear the drag trajectory displayed on the touchscreen;
    and when the first intent corresponding to the drag trajectory is not allowed to be executed, the input/output module is further configured to:
    send a first prompt message, the first prompt message being used to give notice that the first intent corresponding to the drag operation is not allowed to be executed; and/or,
    display the icon of the vehicle at a first display position, the first display position being the display position of the icon of the vehicle before the drag operation was obtained.
  14. The vehicle control apparatus according to claim 12 or 13, wherein the first intent corresponding to the drag operation comprises a travel trajectory intent, and when the first intent corresponding to the drag trajectory is not allowed to be executed, the processing module is further configured to:
    correct the drag trajectory according to at least one of a traffic rule condition, a safe driving condition, an environmental condition, or a comfort condition;
    the input/output module being further configured to:
    display the corrected drag trajectory on the touchscreen, the corrected drag trajectory representing a suggested travel route.
  15. The vehicle control apparatus according to claim 14, wherein the input/output module is further configured to:
    send a second prompt message, the second prompt message being used to ask whether to control the driving state of the vehicle according to the corrected drag trajectory;
    obtain a first operation of the driver, the first operation expressing agreement to control the driving state of the vehicle according to the corrected drag trajectory;
    the processing module being specifically configured to:
    control the driving state of the vehicle according to the corrected drag trajectory.
  16. The vehicle control apparatus according to claim 10 or 11, wherein the processing module is further configured to:
    determine that the first intent does not satisfy a first condition, the first condition comprising at least one of a traffic rule condition, a safe driving condition, an environmental condition, or a comfort condition;
    determine a second intent according to the first intent, the second intent satisfying the first condition, an execution timing of the second intent differing from an execution timing of the first intent;
    the processing module being specifically configured to:
    control the driving state of the vehicle according to the second intent.
  17. The vehicle control apparatus according to claim 16, wherein the input/output module is further configured to:
    send a third prompt message, the third prompt message being used to ask whether to control the driving state of the vehicle according to the second intent;
    obtain a second operation of the driver, the second operation expressing agreement to control the driving state of the vehicle according to the second intent.
  18. The vehicle control apparatus according to claim 10 or 11, wherein the processing module is further configured to:
    determine that the first intent does not satisfy a first condition, the first condition comprising at least one of a traffic rule condition, a safe driving condition, or a comfort condition;
    determine a second intent according to the first condition, the second intent satisfying the first condition;
    the input/output module being further configured to:
    send a third prompt message, the third prompt message being used to ask whether to control the driving state of the vehicle according to the second intent;
    obtain a third operation of the driver, the third operation expressing disagreement with controlling the driving state of the vehicle according to the second intent;
    the processing module being further configured to:
    stop controlling the driving state of the vehicle according to the second intent.
  19. A computing device, comprising a processor connected to a memory, the memory storing computer programs or instructions, the processor being configured to execute the computer programs or instructions stored in the memory, so that the computing device performs the method according to any one of claims 1 to 9.
  20. A computer-readable storage medium, wherein computer programs or instructions are stored in the computer-readable storage medium, and when the computer programs or instructions are executed by a computing device, the computing device performs the method according to any one of claims 1 to 9.
  21. A chip, comprising at least one processor and an interface;
    the interface being configured to provide computer programs, instructions, or data to the at least one processor;
    the at least one processor being configured to execute the computer programs or instructions, so that the method according to any one of claims 1 to 9 is performed.