WO2022267354A1 - Human-computer interaction method, apparatus, electronic device, and storage medium
- Publication number: WO2022267354A1 (application PCT/CN2021/135044)
- Authority: WIPO (PCT)
- Prior art keywords: area, gesture action, target control, control object, touchpad
Classifications
- G06F3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883 — Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F3/04886 — Interaction techniques using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
- G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- B60W50/00 — Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08 — Interaction between the driver and the control system
- B60W50/0098 — Details of control systems ensuring comfort, safety or stability not otherwise provided for
Definitions
- The present disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of automatic driving and intelligent transportation, and more particularly to a human-computer interaction method and apparatus, an electronic device, and a storage medium.
- In the related art, various functions in a vehicle are usually controlled through touch buttons, such as central control buttons, steering wheel buttons, and cabin door buttons.
- This requires the driver to look away from the road ahead to find and select the position of the touch buttons, which poses a considerable safety hazard during driving.
- The present disclosure provides a human-computer interaction method and apparatus, an electronic device, a storage medium, and a computer program product.
- According to one aspect, a human-computer interaction method is provided, including: in response to detecting a first gesture action in a first area of a touchpad, determining a target control object corresponding to the first gesture action in the first area; in response to detecting a second gesture action in a second area of the touchpad, determining a target control mode corresponding to the second gesture action in the second area; and controlling the target control object according to the target control mode.
- According to another aspect, a human-computer interaction apparatus is provided, including: a first determination module configured to, in response to detecting a first gesture action in a first area of a touchpad, determine a target control object corresponding to the first gesture action in the first area; a second determination module configured to, in response to detecting a second gesture action in a second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area; and a control module configured to control the target control object according to the target control mode.
- According to another aspect, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above human-computer interaction method.
- a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the human-computer interaction method as described above.
- a computer program product including a computer program, which when executed by a processor implements the human-computer interaction method as described above.
- Fig. 1 is a schematic flowchart of a human-computer interaction method according to a first embodiment of the present disclosure;
- Fig. 2 is a schematic flowchart of a human-computer interaction method according to a second embodiment of the present disclosure;
- Figs. 3-21 are exemplary diagrams of a human-computer interaction method according to the second embodiment of the present disclosure;
- Fig. 22 is a schematic flowchart of a human-computer interaction method according to a third embodiment of the present disclosure;
- Fig. 23 is a schematic structural diagram of a human-computer interaction apparatus according to a fourth embodiment of the present disclosure;
- Fig. 24 is a schematic structural diagram of a human-computer interaction apparatus according to a fifth embodiment of the present disclosure;
- Fig. 25 is a block diagram of an electronic device for implementing the human-computer interaction method of the embodiments of the present disclosure.
- As noted above, controlling in-car functions through touch buttons requires the driver to look away from the road ahead, which poses a safety hazard during driving. To address this, the present disclosure proposes a human-computer interaction method.
- In the human-computer interaction method, first, in response to detecting a first gesture action in a first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined; then, in response to detecting a second gesture action in a second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined; finally, the target control object is controlled according to the target control mode.
- In this way, the target control object is controlled according to gestures detected in different areas of the touchpad, so that the driver's gaze does not need to leave the road ahead and various in-car functions can be controlled with finger movements alone, improving driving safety.
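- As an illustration of the two-stage flow just described, the following sketch keeps the result of the first gesture as pending state until the second gesture arrives. It is a minimal outline under assumed names; the class, gesture strings, and map layout are hypothetical, not taken from the patent:

```python
# Minimal sketch of the two-stage interaction (illustrative; names are hypothetical).
class TwoStageInteraction:
    def __init__(self, object_map, mode_maps):
        self.object_map = object_map  # (area, gesture) -> control object
        self.mode_maps = mode_maps    # control object -> {(area, gesture) -> control mode}
        self.target_object = None     # pending object between stage 1 and stage 2

    def on_gesture(self, area, gesture):
        if self.target_object is None:
            # Stage 1: the first gesture selects the target control object.
            self.target_object = self.object_map.get((area, gesture))
            return None
        # Stage 2: the second gesture selects the control mode for that object.
        mode = self.mode_maps.get(self.target_object, {}).get((area, gesture))
        obj, self.target_object = self.target_object, None
        return (obj, mode) if mode is not None else None
```

- Because nothing is actuated until both stages complete, an accidental touch alone cannot trigger a function, which matches the false-touch argument made later in the text.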
- the present disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of automatic driving and intelligent transportation.
- artificial intelligence is a discipline that studies the use of computers to simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, planning, etc.), including both hardware-level technology and software-level technology.
- Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
- Autonomous driving refers to an assisted driving system that can assist the driver in steering and keeping the vehicle on the road, and can perform a series of operations such as car following, braking, and lane changing.
- The driver can take control of the vehicle at any time, and in certain circumstances the system will remind the driver to intervene.
- Intelligent transportation is the effective integration and application of advanced information technology, data communication and transmission technology, electronic sensing technology, control technology, and computer technology across the entire ground traffic management system, establishing a comprehensive transportation management system that is large-scale, all-round, real-time, accurate, and efficient, and that consists of two parts: a traffic information service system and a traffic management system.
- Fig. 1 is a schematic flowchart of a human-computer interaction method according to a first embodiment of the present disclosure.
- The human-computer interaction method provided by the embodiments of the present disclosure is executed by a human-computer interaction apparatus.
- The human-computer interaction apparatus can be an electronic device, or can be configured in an electronic device, so that the target control object is controlled according to gestures detected in different areas of the touchpad; the driver's sight does not have to leave the road ahead, and various in-car functions can be controlled with finger movements alone, improving driving safety.
- The electronic device may be any stationary or mobile computing device capable of data processing, such as a mobile computing device like a laptop, smartphone, or wearable device, a stationary computing device like a desktop computer or a server, or the touchpad itself; the present disclosure does not limit this.
- the embodiments of the present disclosure are described by taking a scene in which a driver controls various functions in a car by using a touch panel as an example.
- The touchpad may be composed of electrodes supporting multi-touch, a pressure sensing unit, a control unit, a storage unit, and a connection interface.
- the human-computer interaction device in the embodiments of the present disclosure can be understood as the control unit in the touch panel.
- the human-computer interaction method may include the following steps:
- Step 101: in response to detecting a first gesture action in a first area of the touchpad, determine a target control object corresponding to the first gesture action in the first area.
- the first area may be any area of the touch panel, which is not limited in the embodiments of the present disclosure.
- The first gesture action may be any gesture action, such as a single-finger double tap, drawing a triangle with one finger, or a three-finger single tap; the embodiments of the present disclosure do not limit the touch method, the number of touch points, or the trajectory corresponding to the first gesture action.
- For example, the touch method corresponding to the first gesture action can be a tap, a long press, a drag, etc.; the number of touch points can be one (that is, the driver makes the gesture with one finger), two, three, four, five, etc.; and the trajectory can be a triangle, a straight line, etc.
- The electrodes and pressure sensing unit in the touchpad can detect the first gesture action in the first area of the touchpad and output signals to the control unit; the control unit can convert the acquired signals into coordinate values to form trajectory information, and determine the target control object corresponding to the first gesture action in the first area according to the trajectory information.
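- The conversion from raw samples to a classified gesture can be pictured as below. This is a rough sketch under stated assumptions: the sample format, the tap threshold, and the gesture labels are invented for illustration, and a real control unit would first decode electrode and pressure-sensor signals:

```python
import math

def samples_to_trajectory(samples):
    # Assumes each sample has already been decoded into touch coordinates.
    return [(s["x"], s["y"]) for s in samples]

def classify_swipe(trajectory):
    # Rough direction classifier for a one-finger gesture (illustrative only).
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < 10:  # short trajectory: treat as a tap
        return "tap"
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy < 0 else "swipe_down"  # touch y usually grows downward
```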
- the target control object is an object to be controlled, for example, objects such as rearview mirrors, sunroofs, air conditioners, audio and video, lights, etc. in the car cockpit.
- Step 102: in response to detecting a second gesture action in the second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area.
- the second area may be any area of the touch panel, which is not limited in the embodiments of the present disclosure.
- The second gesture action can be any gesture action, such as sliding up with one finger, triple-tapping with one finger, or drawing an ellipse with one finger; the embodiments of the present disclosure do not limit this.
- The touch method corresponding to the second gesture action can likewise be a tap, a long press, a drag, etc.; the number of touch points can be one, two, three, four, five, etc.; and the trajectory can be a triangle, a straight line, etc.
- Similarly, the electrodes and pressure sensing unit in the touchpad can detect the second gesture action in the second area of the touchpad and output signals to the control unit; the control unit can convert the acquired signals into coordinate values to form trajectory information, and determine the target control mode corresponding to the second gesture action in the second area according to the trajectory information.
- The target control mode is a way of controlling the target control object; for example, it may be turning on the air conditioner, turning off the double-flash lights outside the car, and so on.
- Step 103: control the target control object according to the target control mode.
- After determining the target control object and the target control mode, the control unit of the touchpad can generate a control instruction according to them and send the control instruction to the target control object through the connection interface of the touchpad, so as to control the target control object.
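- A control instruction can be modeled as a small message handed to the connection interface. The sketch below is hypothetical: the dataclass fields and the `bus.send` interface are assumptions standing in for whatever vehicle bus the touchpad actually uses:

```python
from dataclasses import dataclass

@dataclass
class ControlInstruction:
    target: str  # e.g. "air_conditioner"
    mode: str    # e.g. "turn_on"

def dispatch(instruction: ControlInstruction, bus) -> None:
    # `bus` stands in for the touchpad's connection interface (e.g. a CAN link);
    # its send() signature is assumed for illustration.
    bus.send(topic=instruction.target, payload=instruction.mode)
```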
- The touchpad can be installed in the car cabin, so as to control body objects, air conditioning, audio and video, lighting, software, and assisted driving functions in the car through the human-computer interaction method in the embodiments of the present disclosure.
- the driver does not need to take his eyes off the road ahead, and can control various functions in the car with just finger movements, which improves driving safety.
- object control may include controlling body object switches, such as rearview mirror adjustment, opening or closing the sunroof, locking or unlocking, opening the hood or the trunk lid, opening the fuel tank cap or opening the charging port cover, etc.
- Air conditioning control can include controlling and adjusting various modes of air conditioning in the car, such as air conditioning switch, temperature adjustment, wind speed adjustment, wind direction mode selection, defogging switch, internal and external circulation, air filtration, etc.
- Audio-visual control can include controlling and adjusting the functions of the in-car audio-visual system, such as play, pause, previous song, next song, fast forward, fast rewind, loop playback, sequential playback, random playback, etc.
- Lighting control can include controlling the lighting modes inside and outside the car, such as double flashing outside the car, dome lights at the front and rear of the cockpit inside the car, ambient lights and brightness, etc.
- Software control can include control of on-board software switches and common functions in the software, such as navigation entry or exit, incoming call answering or hanging up, Bluetooth connection or disconnection, function confirmation, function cancellation, function return or function exit, etc.
- the assisted driving function control may include controlling the on-board automatic assisted driving mode activation and function settings, such as pilot assisted driving, automatic parking, etc.
- In order to conveniently control various functions in the car, the touchpad may be installed at a position in the car cabin that is convenient for the driver, as required.
- the touchpad can be mounted in any of the following locations on the vehicle: the right-hand console, the surface of the gearshift, the left-hand front door trim, or the center of the steering wheel.
- The right-hand console is located on the front side of the storage box, in front of the area where the driver's right forearm naturally rests.
- Installing the touchpad on the right-hand console makes it convenient for the driver, when driving with the left hand, to use the idle right hand to control various functions in the car through the touchpad in real time.
- The surface of the gear shifter is located at the top of the gear lever, where a certain area can be reserved to install the touchpad.
- Installing the touchpad on the surface of the gear shifter is convenient for drivers who are used to resting their right hand on the gear shifter, allowing them to control various functions in the car through the touchpad in real time.
- The left front door trim is located in front of the window lift buttons, at the front end of where the driver's left forearm naturally rests.
- Installing the touchpad on the left front door trim makes it convenient for the driver, when driving with the right hand, to use the idle left hand to control various functions in the car through the touchpad in real time.
- Installing the touchpad in the inner area of the multi-function steering wheel, below the car logo or in the button area, facilitates control from the steering wheel center when driving with both hands, and is more suitable for drivers who are accustomed to keeping both hands on the wheel.
- The touchpad can be installed as follows: reserve a hole for the touchpad in the shell at the installation position in the cabin, embed the touchpad in the hole, and fix it by means of a bracket with extensions that engage the positioning posts on the shell.
- The human-computer interaction method in the embodiments of the present disclosure essentially divides the control of a control object into two stages.
- In the first stage, the target control object is determined according to the first gesture action detected in the first area of the touchpad, and the trigger mode of the target control object is entered.
- In the second stage, the target control mode is determined according to the second gesture action detected in the second area of the touchpad, and the target control object is then controlled according to the target control mode. Therefore, even if the driver touches the touchpad by mistake, this will not cause misoperation of the various functions in the car, thereby avoiding frequent false touches.
- With the human-computer interaction method of the embodiments of the present disclosure, in response to detecting the first gesture action in the first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined; in response to detecting the second gesture action in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined; and the target control object is controlled according to the target control mode.
- In this way, the target control object is controlled according to gestures detected in different areas of the touchpad, so that the driver's gaze does not need to leave the road ahead and various in-car functions can be controlled with finger movements alone, improving driving safety.
- As can be seen from the above analysis, in the embodiments of the present disclosure, the target control object can be determined according to the first gesture action detected in the first area of the touchpad, and the target control mode can be determined according to the second gesture action detected in the second area of the touchpad.
- Fig. 2 is a schematic flowchart of a human-computer interaction method according to a second embodiment of the present disclosure. As shown in Figure 2, the human-computer interaction method may include the following steps:
- Step 201: in response to detecting the first gesture action in the first area of the touchpad, determine the target control object corresponding to the first gesture action in the first area according to a preset first correspondence among touchpad areas, gesture actions, and control objects.
- In the embodiments of the present disclosure, the first correspondence among touchpad areas, gesture actions, and control objects can be set in advance, so that after the first gesture action in the first area of the touchpad is detected, the first correspondence can be queried to determine the target control object corresponding to the first gesture action in the first area.
- The first correspondence can be set arbitrarily as required.
- In practice, the driver may touch the touchpad by mistake or misremember a gesture. After detecting the first gesture action in the first area of the touchpad, the human-computer interaction apparatus queries the first correspondence among touchpad areas, gesture actions, and control objects; if it determines that there is no control object corresponding to the first gesture action in the first area, the driver can be prompted to re-input the gesture action, so as to guide the driver to correctly control the various functions of the car.
- For example, the driver may be prompted to re-input the gesture action by at least one of a single long vibration of the touchpad, a vehicle-mounted voice announcement, and the like.
- The touchpad can be connected to a display screen, which may be the central control display or another display; the connection can be a CAN (Controller Area Network) bus connection or another connection method. In response to determining, according to the first correspondence among touchpad areas, gesture actions, and control objects, that there is no control object corresponding to the first gesture action in the first area, prompt information for re-inputting the gesture action can be shown on the display screen.
- The prompt information here may be called fourth prompt information.
- The gesture-error prompt can be displayed in the form of text, or the gesture actions corresponding to each control object and their corresponding areas can be displayed in the form of animations; the embodiments of the present disclosure do not limit this.
- Step 202: according to the target control object, determine a second correspondence among touchpad areas, gesture actions, and control modes of the target control object.
- Step 203: in response to detecting the second gesture action in the second area of the touchpad, determine the target control mode corresponding to the second gesture action in the second area according to the second correspondence.
- In the embodiments of the present disclosure, the correspondence among touchpad areas, gesture actions, and control modes of each control object can be set in advance. After the target control object is determined, the second correspondence among touchpad areas, gesture actions, and control modes of the target control object can be determined according to the target control object; then, after the second gesture action in the second area of the touchpad is detected, the target control mode corresponding to the second gesture action in the second area is determined according to the second correspondence.
- The second correspondence can be set arbitrarily as required.
- For example, it can be preset that: for the rearview mirror, sliding with one finger on any area of the touchpad corresponds to adjusting the angle of the rearview mirror; for the hood, sliding up with one finger on any area of the touchpad corresponds to opening the hood; and for the trunk lid, sliding up with one finger on any area of the touchpad corresponds to opening the trunk lid. A sketch of these correspondences as lookup tables is given below.
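- The two correspondences can be pictured as nested lookup tables. The table layout, area names, and gesture strings below are hypothetical placeholders; "any" stands for "any area of the touchpad":

```python
# Hypothetical encoding of the first and second correspondences.
FIRST_CORRESPONDENCE = {
    ("area_301", "one_finger_double_tap"): "left_mirror",
    ("area_302", "one_finger_double_tap"): "right_mirror",
}

SECOND_CORRESPONDENCE = {
    "left_mirror":  {("any", "one_finger_slide"): "adjust_angle"},
    "right_mirror": {("any", "one_finger_slide"): "adjust_angle"},
    "hood":         {("any", "one_finger_swipe_up"): "open"},
    "trunk_lid":    {("any", "one_finger_swipe_up"): "open"},
}

def lookup_object(area, gesture):
    obj = FIRST_CORRESPONDENCE.get((area, gesture))
    if obj is None:
        prompt_reinput()  # "fourth prompt information": ask the driver to retry
    return obj

def lookup_mode(obj, area, gesture):
    table = SECOND_CORRESPONDENCE.get(obj, {})
    mode = table.get((area, gesture)) or table.get(("any", gesture))
    if mode is None:
        prompt_reinput()  # "fifth prompt information"
    return mode

def prompt_reinput():
    print("please re-input the gesture")  # stand-in for vibration/voice/display prompt
```

- Note how the hood and trunk lid share the same one-finger upward swipe: keying the second table by object is what lets one gesture drive different objects, which is the memorization benefit argued below.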
- step 202 may also be executed after detecting the second gesture action in the second area of the touch panel, and the embodiment of the present disclosure does not limit the execution timing of step 202 .
- In practice, the second gesture action detected in the second area of the touchpad may result from an accidental touch or from the driver misremembering a gesture.
- After detecting the second gesture action, the human-computer interaction apparatus queries the second correspondence among touchpad areas, gesture actions, and control modes of the target control object; if it determines that there is no control mode corresponding to the second gesture action in the second area, the driver can likewise be prompted to re-input the gesture action, so as to guide the driver to correctly control the various functions of the target control object.
- For example, the driver may be prompted to re-input the gesture action by at least one of a single long vibration of the touchpad and a voice announcement, or prompt information for re-inputting the gesture action may be shown on the display screen.
- The prompt information here may be called fifth prompt information.
- The gesture-error prompt can be displayed in the form of text, or the gesture actions corresponding to each control mode of the target control object and their corresponding areas can be displayed in the form of animations; the embodiments of the present disclosure do not limit this.
- The preset count threshold can be set as required; when the number of wrong gesture actions reaches it, at least one of two long vibrations of the touchpad, voice playback, and the like may be used to prompt the driver that the gesture is wrong.
- The human-computer interaction method in the embodiments of the present disclosure determines the target control object and its target control mode by combining the gesture action with the touchpad area on which the gesture acts, so that the same gesture action in different areas, or different gesture actions in the same area, can control different control objects or apply different controls to the same control object.
- As a result, the driver needs to memorize fewer gestures.
- Moreover, by setting the second correspondence among touchpad areas, gesture actions, and control modes per control object, the same gesture can be used to control different control objects.
- Step 204: control the target control object according to the target control mode.
- In the embodiments of the present disclosure, after the target control mode of the target control object is determined according to the second gesture action in the second area of the touchpad and the target control object is controlled according to the target control mode, it may still be necessary to further adjust the target control object.
- For example, after the sunroof is controlled to fully open according to the second gesture action in the second area of the touchpad, it may be necessary to further adjust the degree of opening of the sunroof.
- To this end, a third correspondence among touchpad areas, gesture actions, and parameter adjustment modes of the target control object can be preset; in response to detecting a third gesture action in a third area of the touchpad, the parameter adjustment mode corresponding to the third gesture action in the third area is determined according to the third correspondence, and the parameters of the target control object are adjusted according to the parameter adjustment mode, as sketched below.
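- The parameter-adjustment stage can reuse the same table pattern, now mapping to a parameter and a step size. Everything below (table layout, parameter names, step values, clamping to [0, 1]) is an assumption for illustration:

```python
# Hypothetical third correspondence: per-object parameter adjustment gestures.
THIRD_CORRESPONDENCE = {
    "sunroof": {
        ("any", "one_finger_slide_up"):   ("opening_degree", +0.1),
        ("any", "one_finger_slide_down"): ("opening_degree", -0.1),
    },
}

def adjust_parameter(state, obj, area, gesture):
    entry = THIRD_CORRESPONDENCE.get(obj, {}).get((area, gesture))
    if entry is None:
        return state  # no matching adjustment; leave the object unchanged
    param, delta = entry
    state[param] = min(1.0, max(0.0, state.get(param, 0.0) + delta))
    return state

# e.g. nudging a fully open sunroof partly closed:
# adjust_parameter({"opening_degree": 1.0}, "sunroof", "any", "one_finger_slide_down")
# -> {"opening_degree": 0.9}
```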
- It should be noted that, in the embodiments of the present disclosure, the target control object and the target control mode may also be determined through a single gesture detected in a certain area of the touchpad, after which the target control object is controlled according to the target control mode.
- For example, as shown in the left diagram of Fig. 3, it can be set that a single-finger double tap in area 301 of the touchpad corresponds to the left rearview mirror, and a single-finger double tap in area 302 corresponds to the right rearview mirror.
- For the left or right rearview mirror, sliding with one finger on any area of the touchpad adjusts the angle and direction of that rearview mirror accordingly.
- As shown in the left diagram of Fig. 6, it can be set that sliding one finger horizontally from left to right in area 601 of the touchpad corresponds to the hood, and sliding one finger horizontally from left to right in area 602 corresponds to the trunk lid.
- For the hood or trunk lid, as shown in the right diagram of Fig. 6, sliding one finger upward on any area of the touchpad opens the hood or trunk lid accordingly.
- As shown in the left diagram of Fig. 7, it can be set that a single-finger double tap in area 701 of the touchpad corresponds to the fuel tank cap or the charging port cover.
- For the fuel tank cap or charging port cover, as shown in the right diagram of Fig. 7, sliding upward with one finger on any area of the touchpad opens the fuel tank cap or charging port cover accordingly.
- As shown in the left diagrams of Figs. 8-13, it can be set that sliding a clothoid curve to the right with one finger on any area of the touchpad corresponds to the air conditioner.
- For the air conditioner: as shown in the middle diagram of Fig. 8, sliding one finger upward on any area of the touchpad turns on the air conditioner; as shown in the right diagram of Fig. 8, sliding one finger downward turns off the air conditioner; as shown in the middle diagram of Fig. 9, sliding two fingers upward on any area of the touchpad increases the temperature, and sliding two fingers downward decreases the temperature.
- As shown in the first diagram of Fig. 17, it can be set that tapping back and forth twice with the index finger and middle finger in any area of the touchpad (that is, tapping with the index finger, then the middle finger, then the index finger, then the middle finger) turns on the double-flash lights, while tapping in the reverse order (middle finger, index finger, middle finger, index finger) turns them off. As shown in the second diagram of Fig. 17, drawing a lightning-shaped trace from top to bottom with one finger on the upper half of the touchpad turns on the front dome light, and drawing it from bottom to top on the upper half turns the front dome light off; drawing the lightning trace from top to bottom on the lower half of the touchpad turns on the rear dome light.
- As shown in the left diagram of Fig. 18, it can be set that sliding up with one finger on any area of the touchpad and double-tapping with one finger at the end of the trajectory enters navigation; as shown in the right diagram of Fig. 18, sliding down with one finger on any area of the touchpad and double-tapping with one finger at the end of the trajectory exits navigation.
- As shown in the left diagram of Fig. 19, it can be set that sliding right with one finger on any area of the touchpad and double-tapping with one finger at the end of the trajectory opens the Bluetooth connection; as shown in the right diagram of Fig. 19, sliding left with one finger on any area of the touchpad and double-tapping with one finger at the end of the trajectory disconnects the Bluetooth connection.
- As shown in the left diagram of Fig. 21, it can be set that swiping up twice with four fingers on any area of the touchpad enables the assisted driving function; as shown in the middle diagram of Fig. 21, drawing a double wavy line forward with two fingers in any area starts pilot assisted driving; and the gesture shown in the right diagram of Fig. 21 starts the automatic parking function.
- From the gesture actions corresponding to the above control objects and the gesture actions corresponding to the control modes and parameter adjustment modes of each control object, it can be seen that in the embodiments of the present disclosure the same gesture action can be used to control different control objects, which reduces the number of gestures the driver needs to memorize.
- For example, for the rearview mirror, a single-finger slide on any area of the touchpad controls the mirror angle; for the sunroof, a single-finger slide on any area of the touchpad controls opening or closing the sunroof; for the trunk lid or hood, a single-finger slide on any area of the touchpad controls opening it; and for the air conditioner, a single-finger slide on any area of the touchpad controls turning it on, and so on.
- For example, if the human-computer interaction apparatus detects the gesture of drawing a clothoid curve to the right with one finger in any area of the touchpad, it can determine that the target control object is the air conditioner, and can determine the correspondence among touchpad areas, gesture actions, and control modes of the air conditioner.
- Then, after detecting the gesture of sliding up with one finger on any area of the touchpad, it can determine that the control mode is turning on the air conditioner, and thus control the air conditioner to turn on.
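- This air-conditioner flow is a direct instance of the two-stage sketch given earlier; under the same hypothetical names, it would read:

```python
# Worked example of the air-conditioner flow, reusing the illustrative
# TwoStageInteraction class from the earlier sketch.
ui = TwoStageInteraction(
    object_map={("any", "one_finger_clothoid_right"): "air_conditioner"},
    mode_maps={"air_conditioner": {("any", "one_finger_swipe_up"): "turn_on"}},
)
ui.on_gesture("any", "one_finger_clothoid_right")   # stage 1: selects the air conditioner
print(ui.on_gesture("any", "one_finger_swipe_up"))  # -> ("air_conditioner", "turn_on")
```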
- With the human-computer interaction method of the embodiments of the present disclosure, in response to detecting the first gesture action in the first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined according to the preset first correspondence among touchpad areas, gesture actions, and control objects; according to the target control object, the second correspondence among touchpad areas, gesture actions, and control modes of the target control object is determined; in response to detecting the second gesture action in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined according to the second correspondence; and the target control object is controlled according to the target control mode.
- In this way, the target control object is controlled according to gestures detected in different areas of the touchpad, so that the driver's gaze does not need to leave the road ahead and various in-car functions can be controlled with finger movements alone, improving driving safety.
- As can be seen from the above analysis, the control process of a control object can be divided into two stages: the target control object is determined by the first gesture action performed in the first area of the touchpad, and the target control mode of the target control object is determined by the second gesture action performed in the second area of the touchpad.
- Fig. 22 is a schematic flowchart of a human-computer interaction method according to a third embodiment of the present disclosure. As shown in Figure 22, the human-computer interaction method may include the following steps:
- Step 2201: in response to detecting a first gesture action in a first area of the touchpad, determine a target control object corresponding to the first gesture action in the first area.
- Step 2202: display, in a preset manner, first prompt information indicating that the first gesture action in the first area corresponds to the target control object, and second prompt information of the gesture actions corresponding to at least one control mode of the target control object.
- Step 2203: obtain a confirmation instruction for the target control object.
- Step 2204: in response to detecting a second gesture action in the second area of the touchpad, determine the target control mode corresponding to the second gesture action in the second area.
- In the embodiments of the present disclosure, after the target control object corresponding to the first gesture action in the first area is determined, the trigger mode of the target control object can be entered, and the first prompt information indicating that the first gesture action corresponds to the target control object can be presented by at least one of a double vibration of the touchpad, a vehicle-mounted voice broadcast, and the like.
- The second prompt information of the gesture actions corresponding to at least one control mode of the target control object can also be presented, for example through voice playback, so as to guide the driver to use correct gesture actions to control the target control object.
- For example, the target control object may be the hood.
- In the embodiments of the present disclosure, the touchpad can be connected to a display screen, which may be the central control display or another display, through a CAN bus connection or another connection method, so that after the target control object corresponding to the first gesture action in the first area is determined, the first prompt information and the second prompt information can be shown on the display screen. The driver can thus intuitively see which object is currently being controlled and which gesture actions are used to control the target control object.
- The first prompt information and the second prompt information may be displayed in the form of text, or in the form of animations; the embodiments of the present disclosure do not limit this.
- In the embodiments of the present disclosure, after acquiring the confirmation instruction for the target control object, the human-computer interaction apparatus may determine, in response to the second gesture action detected in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area. This avoids the situation where, if the determined target control object is wrong, the wrong control object is controlled according to the target control mode corresponding to the second gesture action in the second area, thereby improving the control accuracy of the various functions in the car.
- The confirmation instruction for the target control object can be issued by the driver through voice, or triggered by the driver through a specific gesture in a specific area of the touchpad; for example, a single-finger swipe in any area of the touchpad can be used to confirm that the target control object is correct. A confirmation instruction triggered in other forms may also be used, which is not limited in the embodiments of the present disclosure.
- In practice, after the driver inputs the first gesture action in the first area of the touchpad, the driver may fail to input further gesture actions on the touchpad because the target control object is wrong or for other temporary reasons.
- In response to no second gesture action being detected in the second area of the touchpad within a first preset time period, third prompt information for inputting a gesture action is displayed, so as to guide the driver to correctly control the target control object.
- In response to still no second gesture action being detected in the second area of the touchpad within a second preset time period after the third prompt information is displayed, the display screen is controlled to exit the interface displaying the first prompt information and the second prompt information.
- the first preset time period and the second preset time period can be set as required, for example, both the first preset time period and the second preset time period can be set to 5 seconds.
- Specifically, in response to no second gesture action being detected in the second area of the touchpad within the first preset time period, the driver can be prompted to input a gesture action by at least one of a single vibration of the touchpad, a vehicle-mounted voice broadcast, and the like, and the prompt information for inputting a gesture action can also be shown on the display screen, so as to guide the driver to correctly control the target control object.
- The prompt information here may be called the third prompt information.
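- The two nested timeouts can be sketched as a simple polling loop. This outline assumes the 5-second example values from the text and invents the three callback names; a production system would likely be event-driven rather than polling:

```python
import time

FIRST_TIMEOUT = 5.0   # first preset time period (example value from the text)
SECOND_TIMEOUT = 5.0  # second preset time period

def await_second_gesture(poll_gesture, show_third_prompt, exit_prompt_interface):
    """Wait for the second gesture with the two-step timeout described above."""
    def wait(seconds):
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            g = poll_gesture()  # returns a gesture or None
            if g is not None:
                return g
            time.sleep(0.05)
        return None

    gesture = wait(FIRST_TIMEOUT)
    if gesture is not None:
        return gesture
    show_third_prompt()  # vibration / voice / on-screen "please input a gesture"
    gesture = wait(SECOND_TIMEOUT)
    if gesture is None:
        exit_prompt_interface()  # leave the first/second prompt interface
    return gesture
```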
- In response to still no second gesture action being detected within the second preset time period, the trigger mode of the target control object can be exited, an exit prompt can be broadcast by voice, and the display screen is controlled to exit the interface displaying the first prompt information and the second prompt information.
- In addition, the driver may be prompted that the second gesture action in the second area has been correctly responded to.
- Step 2205: control the target control object according to the target control mode.
- In the embodiments of the present disclosure, in response to no gesture action being detected on the touchpad within a third preset time period, sixth prompt information for exiting the current round of function control may be presented.
- The third preset time period can be set as required.
- At least one of a double vibration of the touchpad, a voice broadcast, and the like may be used to present the sixth prompt information for exiting the current round of function control.
- the display screen can stay on the control interface of the target control object, so as to facilitate the driver to further control the target control object.
- With the human-computer interaction method of the embodiments of the present disclosure, in response to detecting the first gesture action in the first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined; the first prompt information indicating that the first gesture action in the first area corresponds to the target control object and the second prompt information of the gesture actions corresponding to at least one control mode of the target control object are displayed in a preset manner; a confirmation instruction for the target control object is acquired; in response to detecting the second gesture action in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined; and the target control object is controlled according to the target control mode.
- In this way, the target control object is controlled according to gestures detected in different areas of the touchpad, so that the driver's gaze does not need to leave the road ahead and various in-car functions can be controlled with finger movements alone, improving driving safety.
- the human-computer interaction device provided by the present disclosure will be described below with reference to FIG. 23 .
- Fig. 23 is a schematic structural diagram of a human-computer interaction device according to a fourth embodiment of the present disclosure.
- the human-computer interaction device 2300 includes: a first determination module 2301 , a second determination module 2302 and a control module 2303 .
- Among them, the first determination module 2301 is configured to, in response to detecting a first gesture action in a first area of the touchpad, determine a target control object corresponding to the first gesture action in the first area;
- the second determination module 2302 is configured to determine a target control method corresponding to the second gesture action in the second area of the touchpad in response to detecting the second gesture action in the second area of the touchpad;
- the control module 2303 is configured to control the target control object according to the target control mode.
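- The three modules compose in the obvious way. The sketch below shows illustrative wiring only; the class and callable signatures are assumptions, not the patent's implementation:

```python
# Sketch of the three-module apparatus; each module is modeled as a callable.
class HumanComputerInteractionApparatus:
    def __init__(self, first_determination, second_determination, control):
        self.first_determination = first_determination    # first gesture -> target object
        self.second_determination = second_determination  # (object, second gesture) -> mode
        self.control = control                            # applies the mode to the object

    def handle(self, first_event, second_event):
        target = self.first_determination(first_event)
        mode = self.second_determination(target, second_event)
        self.control(target, mode)
```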
- the human-computer interaction device 2300 provided in this embodiment can execute the human-computer interaction method of the foregoing embodiments.
- The human-computer interaction apparatus can be an electronic device, or can be configured in an electronic device, so that the target control object is controlled according to gestures detected in different areas of the touchpad; the driver's sight does not have to leave the road ahead, and various in-car functions can be controlled with finger movements alone, improving driving safety.
- The electronic device may be any stationary or mobile computing device capable of data processing, such as a mobile computing device like a laptop, smartphone, or wearable device, a stationary computing device like a desktop computer or a server, or the touchpad itself; the present disclosure does not limit this.
- With the human-computer interaction apparatus of the embodiments of the present disclosure, in response to detecting the first gesture action in the first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined; in response to detecting the second gesture action in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined; and the target control object is controlled according to the target control mode.
- In this way, the target control object is controlled according to gestures detected in different areas of the touchpad, so that the driver's gaze does not need to leave the road ahead and various in-car functions can be controlled with finger movements alone, improving driving safety.
- the human-computer interaction device provided by the present disclosure will be described below with reference to FIG. 24 .
- Fig. 24 is a schematic structural diagram of a human-computer interaction device according to a fifth embodiment of the present disclosure.
- the human-computer interaction device 2400 may include: a first determination module 2401 , a second determination module 2402 and a control module 2403 .
- the first determination module 2401 , the second determination module 2402 and the control module 2403 in FIG. 24 have the same function and structure as the first determination module 2301 , the second determination module 2302 and the control module 2303 in FIG. 23 .
- the first determination module 2401 includes:
- The first determination unit is configured to determine the target control object corresponding to the first gesture action in the first area according to the preset first correspondence among touchpad areas, gesture actions, and control objects.
- the second determination module 2402 includes:
- The second determination unit is configured to determine, according to the target control object, the second correspondence among touchpad areas, gesture actions, and control modes of the target control object;
- The third determination unit is configured to determine, according to the second correspondence, the target control mode corresponding to the second gesture action in the second area.
- the human-computer interaction device 2400 further includes:
- The first display module 2404 is configured to display, in a preset manner, first prompt information indicating that the first gesture action in the first area corresponds to the target control object, and second prompt information of the gesture actions corresponding to at least one control mode of the target control object.
- the human-computer interaction device 2400 further includes:
- An acquisition module 2405 configured to acquire a confirmation instruction for the target control object.
- In a possible implementation, the touchpad is connected to a display screen.
- the first display module 2404 includes:
- the first display unit is configured to display the first prompt information and the second prompt information through the display screen.
- the first display module 2404 further includes:
- The second display unit is configured to display third prompt information for inputting a gesture action in response to no second gesture action being detected in the second area of the touchpad within the first preset time period;
- The control unit is configured to, in response to no second gesture action being detected in the second area of the touchpad within the second preset time period after the third prompt information is displayed, control the display screen to exit the interface displaying the first prompt information and the second prompt information.
- the human-computer interaction device 2400 further includes:
- The second display module 2406 is configured to display fourth prompt information for re-inputting a gesture action in response to determining, according to the preset first correspondence among touchpad areas, gesture actions, and control objects, that there is no control object corresponding to the first gesture action in the first area.
- the human-computer interaction device 2400 further includes:
- The third display module 2407 is configured to display fifth prompt information for re-inputting a gesture action in response to determining, according to the second correspondence, that there is no control mode corresponding to the second gesture action in the second area.
- the touchpad is mounted at any one of the following locations in the vehicle: the right-hand console, the gear shifter surface, the left-hand front door trim, or the center of the steering wheel.
- in response to detecting the first gesture action in the first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined; in response to detecting the second gesture action in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined; and the target control object is controlled according to the target control mode.
- in this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, which improves driving safety.
- the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
- FIG. 25 shows a schematic block diagram of an example electronic device 2500 that may be used to implement embodiments of the present disclosure.
- Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
- Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
- the device 2500 includes a computing unit 2501 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 2502 or loaded from a storage unit 2508 into a random-access memory (RAM) 2503. The RAM 2503 can also store various programs and data required for the operation of the device 2500.
- the computing unit 2501, ROM 2502, and RAM 2503 are connected to each other through a bus 2504.
- An input/output (I/O) interface 2505 is also connected to the bus 2504 .
- multiple components of the device 2500 are connected to the I/O interface 2505, including: an input unit 2506, such as a keyboard or a mouse; an output unit 2507, such as various types of displays and speakers; a storage unit 2508, such as a magnetic disk or an optical disc; and a communication unit 2509, such as a network card, a modem, or a wireless communication transceiver.
- the communication unit 2509 allows the device 2500 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
- the computing unit 2501 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 2501 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like.
- the computing unit 2501 executes various methods and processes described above, such as human-computer interaction methods.
- the human-computer interaction method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 2508 .
- part or all of the computer program may be loaded and/or installed on the device 2500 via the ROM 2502 and/or the communication unit 2509.
- when the computer program is loaded into the RAM 2503 and executed by the computing unit 2501, one or more steps of the human-computer interaction method described above can be performed.
- alternatively, in other embodiments, the computing unit 2501 may be configured to execute the human-computer interaction method in any other appropriate way (for example, by means of firmware).
- Various implementations of the systems and techniques described above can be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
- these various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor able to receive data and instructions from a storage system, at least one input device, and at least one output device, and to transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
- Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
- the program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
- more specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
- to provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
- Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
- the systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
- the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
- a computer system may include clients and servers.
- Clients and servers are generally remote from each other and typically interact through a communication network.
- the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
- the server can be a cloud server, also known as a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak business scalability found in traditional physical hosts and VPS ("Virtual Private Server") services.
- the server can also be a server of a distributed system, or a server combined with a blockchain.
- according to the technical solutions of embodiments of the present disclosure, in response to detecting the first gesture action in the first area of the touchpad, the target control object corresponding to the first gesture action in the first area is determined; in response to detecting the second gesture action in the second area of the touchpad, the target control mode corresponding to the second gesture action in the second area is determined; and the target control object is controlled according to the target control mode.
- in this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, which improves driving safety.
- it should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above.
- for example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A human-computer interaction method and apparatus (2300; 2400), an electronic device (2500), and a storage medium. The human-computer interaction method includes: in response to detecting a first gesture action in a first area of a touchpad, determining a target control object corresponding to the first gesture action in the first area; in response to detecting a second gesture action in a second area of the touchpad, determining a target control mode corresponding to the second gesture action in the second area; and controlling the target control object according to the target control mode.
Description
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese Patent Application No. 202110710822.9, filed in China on June 25, 2021, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of artificial intelligence technology, in particular to the fields of autonomous driving and intelligent transportation, and more specifically to a human-computer interaction method and apparatus, an electronic device, and a storage medium.
With the rapid development and widespread adoption of automotive electronics, vehicles offer more and more control functions. The interaction method currently in common use for controlling functions inside and outside the cabin combines physical buttons (such as center-console buttons, steering-wheel buttons, and door buttons) with tap-based touch-screen options. Although this allows direct control of each vehicle function, every operation requires the driver to take their eyes off the road ahead to locate and select the control, posing a considerable safety hazard while driving.
Summary of the Invention
The present disclosure provides a human-computer interaction method and apparatus, an electronic device, a storage medium, and a computer program product.
According to one aspect of the present disclosure, a human-computer interaction method is provided, including: in response to detecting a first gesture action in a first area of a touchpad, determining a target control object corresponding to the first gesture action in the first area; in response to detecting a second gesture action in a second area of the touchpad, determining a target control mode corresponding to the second gesture action in the second area; and controlling the target control object according to the target control mode.
According to another aspect of the present disclosure, a human-computer interaction apparatus is provided, including: a first determination module configured to, in response to detecting a first gesture action in a first area of a touchpad, determine a target control object corresponding to the first gesture action in the first area; a second determination module configured to, in response to detecting a second gesture action in a second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area; and a control module configured to control the target control object according to the target control mode.
According to another aspect of the present disclosure, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the human-computer interaction method described above.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause a computer to perform the human-computer interaction method described above.
According to another aspect of the present disclosure, a computer program product is provided, including a computer program that, when executed by a processor, implements the human-computer interaction method described above.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand from the following description.
The accompanying drawings are provided for a better understanding of this solution and do not constitute a limitation of the present disclosure. In the drawings:
FIG. 1 is a schematic flowchart of a human-computer interaction method according to a first embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a human-computer interaction method according to a second embodiment of the present disclosure;
FIGS. 3-21 are example diagrams of the human-computer interaction method according to the second embodiment of the present disclosure;
FIG. 22 is a schematic flowchart of a human-computer interaction method according to a third embodiment of the present disclosure;
FIG. 23 is a schematic structural diagram of a human-computer interaction apparatus according to a fourth embodiment of the present disclosure;
FIG. 24 is a schematic structural diagram of a human-computer interaction apparatus according to a fifth embodiment of the present disclosure;
FIG. 25 is a block diagram of an electronic device used to implement the human-computer interaction method of embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
It can be understood that, with the rapid development and widespread adoption of automotive electronics, vehicles offer more and more control functions. The interaction method currently in common use for controlling functions inside and outside the cabin combines physical buttons (such as center-console buttons, steering-wheel buttons, and door buttons) with tap-based touch-screen options. Although this allows direct control of each vehicle function, every operation requires the driver to take their eyes off the road ahead to locate and select the control, posing a considerable safety hazard while driving.
To address the above problem, the present disclosure proposes a human-computer interaction method that first, in response to detecting a first gesture action in a first area of a touchpad, determines a target control object corresponding to the first gesture action in the first area; then, in response to detecting a second gesture action in a second area of the touchpad, determines a target control mode corresponding to the second gesture action in the second area; and then controls the target control object according to the target control mode. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, which improves driving safety.
The human-computer interaction method and apparatus, electronic device, non-transitory computer-readable storage medium, and computer program product of embodiments of the present disclosure are described below with reference to the accompanying drawings.
It should be noted that the present disclosure relates to the field of artificial intelligence technology, in particular to the fields of autonomous driving and intelligent transportation.
Artificial intelligence is the discipline of making computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and involves technologies at both the hardware and software levels. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include several major directions such as computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technology.
Autonomous driving refers to driver-assistance systems that can assist the driver in steering and keeping within the road, and can perform a series of operations such as car following, braking, and lane changing; the driver can take control of the vehicle at any time, and in certain specific circumstances the system reminds the driver to intervene.
Intelligent transportation effectively integrates advanced information technology, data communication and transmission technology, electronic sensing technology, control technology, and computer technology into the overall ground traffic management system, establishing a comprehensive transportation management system that operates in real time, accurately, and efficiently over a wide area and in all respects; it consists of a traffic information service system and a traffic management system.
First, the human-computer interaction method provided by the present disclosure is described in detail with reference to FIG. 1.
FIG. 1 is a schematic flowchart of a human-computer interaction method according to a first embodiment of the present disclosure. It should be noted that the human-computer interaction method provided by this embodiment is executed by a human-computer interaction apparatus. The apparatus may be an electronic device or may be configured in an electronic device, so that the target control object is controlled according to gesture actions detected in different areas of the touchpad, allowing the driver to control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
The electronic device may be any stationary or mobile computing device capable of data processing, such as a mobile computing device like a notebook computer, a smartphone, or a wearable device, a stationary computing device such as a desktop computer, a server, a touchpad, or the like; the present disclosure imposes no limitation on this.
It should be noted that the embodiments of the present disclosure are described using, as an example, the scenario in which a driver uses a touchpad to control the various functions of a vehicle. The touchpad may be composed of electrodes supporting multi-touch, a pressure-sensor sensing unit, a control unit, a storage unit, and a connection interface. The human-computer interaction apparatus in embodiments of the present disclosure can be understood as the control unit in the touchpad.
As shown in FIG. 1, the human-computer interaction method may include the following steps:
Step 101: in response to detecting a first gesture action in a first area of the touchpad, determine a target control object corresponding to the first gesture action in the first area.
The first area may be any area of the touchpad; embodiments of the present disclosure impose no limitation on it.
The first gesture action may be any gesture action, such as a one-finger double tap, a one-finger triangle, or a three-finger single tap; embodiments of the present disclosure impose no limitation on the touch manner, the number of touch points, or the trajectory of the first gesture action. For example, the touch manner may be a tap, a long press, a drag, and the like; the number of touch points may be one (that is, the driver makes the gesture with one finger), two, three, four, five, and so on; and the trajectory may be a triangle, a straight line, and the like.
In embodiments of the present disclosure, when the driver makes the first gesture action in the first area of the touchpad, the electrodes and pressure-sensor sensing unit in the touchpad can detect the first gesture action in the first area and output a signal to the control unit; the control unit can convert the acquired signal into coordinate values to form trajectory information and, according to the trajectory information, determine the target control object corresponding to the first gesture action in the first area.
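As a loose illustration of this detection pipeline, the sketch below groups touch samples into per-finger coordinate trajectories before matching. All names are hypothetical, and the actual signal format of the electrodes and pressure sensors is not specified in this disclosure.

```python
# Hypothetical sketch: turning raw touch samples into per-finger
# coordinate trajectories, then handing them to a matcher.

def build_trajectories(samples):
    """samples: iterable of (finger_id, x, y, pressure) tuples, standing in
    for the signals output by the electrodes and pressure sensing unit."""
    trajectories = {}
    for finger_id, x, y, _pressure in samples:
        trajectories.setdefault(finger_id, []).append((x, y))
    return trajectories


def recognize(samples, classify_gesture, area_of):
    """classify_gesture and area_of are placeholders for the trajectory
    matcher and the coordinate-to-preset-area mapping; assumes at least
    one sample was captured."""
    trajectories = build_trajectories(samples)
    start_point = next(iter(trajectories.values()))[0]
    return area_of(start_point), classify_gesture(trajectories)
```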
The target control object is the object to be controlled, for example items in the vehicle cabin such as the rearview mirrors and the sunroof, the air conditioner, the audio-video system, the lights, and so on.
Step 102: in response to detecting a second gesture action in a second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area.
The second area may be any area of the touchpad; embodiments of the present disclosure impose no limitation on it.
The second gesture action may be any gesture action, such as a one-finger upward swipe, a one-finger triple tap, or a one-finger ellipse; embodiments of the present disclosure impose no limitation on the touch manner, the number of touch points, or the trajectory of the second gesture action. For example, the touch manner may be a tap, a long press, a drag, and the like; the number of touch points may be one, two, three, four, five, and so on; and the trajectory may be a triangle, a straight line, and the like.
In embodiments of the present disclosure, when the driver makes the second gesture action in the second area of the touchpad, the electrodes and pressure-sensor sensing unit in the touchpad can detect the second gesture action in the second area and output a signal to the control unit; the control unit can convert the acquired signal into coordinate values to form trajectory information and, according to the trajectory information, determine the target control mode corresponding to the second gesture action in the second area.
The target control mode is the manner in which the target control object is controlled, for example turning on the air conditioner, turning off the exterior hazard lights, and so on.
Step 103: control the target control object according to the target control mode.
In embodiments of the present disclosure, after the target control object and the target control mode are determined, the control unit of the touchpad can generate a control instruction according to them and send the control instruction to the target control object through the connection interface of the touchpad, so as to control the target control object.
In practical applications, the touchpad may be installed in the vehicle cabin, so that items, the air conditioner, the audio-video system, the lights, software, and driver-assistance functions in the vehicle can be controlled through the human-computer interaction method of embodiments of the present disclosure. In this way, the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
Item control may include controlling switches of vehicle-body items, such as adjusting the rearview mirrors, opening or closing the sunroof, locking or unlocking, opening the hood or the trunk lid, opening the fuel tank cap or the charging port cover, and so on.
Air-conditioning control may include controlling and adjusting the various modes of the vehicle air conditioner, such as switching the air conditioner on and off, adjusting the temperature, adjusting the fan speed, selecting the airflow direction mode, switching defogging on and off, switching between internal and external circulation, air filtering, and so on.
Audio-video control may include controlling and adjusting functions of the in-vehicle audio-video system, such as play, pause, previous track, next track, fast forward, rewind, loop playback, sequential playback, shuffle playback, and so on.
Light control may include controlling interior and exterior light modes, such as the exterior hazard lights, the front and rear cabin dome lights, the ambient lights and their brightness, and so on.
Software control may include controlling in-vehicle software switches and common in-software functions, such as entering or exiting navigation, answering or hanging up incoming calls, connecting or disconnecting Bluetooth, function confirmation, function cancellation, function return, function exit, and so on.
Driver-assistance function control may include controlling the activation of in-vehicle automatic driver-assistance modes and their function settings, such as navigation-pilot assisted driving, automatic parking, and so on.
In embodiments of the present disclosure, to enable convenient control of the vehicle's functions, the touchpad may be installed at a position in the cabin that falls readily to the driver's hand. For example, the touchpad may be installed at any one of the following locations of the vehicle: the right-hand console, the gear shifter surface, the left-hand front door trim, or the center of the steering wheel. The right-hand console lies in front of the storage box, at the front of the area where the driver's right forearm rests naturally; mounting the touchpad there lets a driver steering with the left hand use the idle right hand to control the vehicle's functions in real time. The gear shifter surface is at the top of the gear lever, where an area can be reserved for the touchpad; mounting it there suits drivers who habitually keep the right hand on the shifter. The left-hand front door trim lies in front of the window-lift buttons, at the front of the area where the driver's left forearm rests naturally; mounting the touchpad there lets a driver steering with the right hand use the idle left hand. Mounting the touchpad below the vehicle logo or in the button area inside the multifunction steering wheel allows centered operation and better suits drivers who habitually keep both hands on the wheel.
The touchpad may be installed by reserving a touchpad hole in the housing at the chosen cabin position, embedding the touchpad in the hole, and fixing it to the housing via positioning posts using a bracket with an extension.
It can be understood that, when the touchpad is installed where it falls readily to the driver's hand, accidental touches are likely, and frequent accidental touches would disturb the driver. The human-computer interaction method in embodiments of the present disclosure essentially divides the control of a control object into two phases. In the first phase, the target control object is determined according to the detected first gesture action in the first area of the touchpad, upon which a trigger mode for that object can be entered; in the second phase, the target control mode is determined according to the detected second gesture action in the second area, and the target control object is then controlled accordingly. As a result, even if the driver touches the touchpad accidentally, no erroneous operation of the vehicle's functions is caused, which prevents frequent accidental operations.
The human-computer interaction method provided by embodiments of the present disclosure determines, in response to detecting a first gesture action in a first area of the touchpad, the target control object corresponding to the first gesture action in the first area; determines, in response to detecting a second gesture action in a second area of the touchpad, the target control mode corresponding to the second gesture action in the second area; and controls the target control object according to the target control mode. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
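One way to picture this two-phase guard against accidental touches is the following hypothetical state machine; it is a sketch that assumes table lookups of the kind shown earlier, and none of the names come from the disclosure.

```python
# Hypothetical state machine for the two control phases: a stray touch can
# at most enter (or fail to enter) trigger mode; nothing is actuated until
# a valid second gesture arrives in the second phase.

class TwoStageController:
    def __init__(self, lookup_target, lookup_mode, actuate):
        self.lookup_target = lookup_target
        self.lookup_mode = lookup_mode
        self.actuate = actuate
        self.target = None  # None = idle; otherwise trigger mode

    def on_gesture(self, area, gesture):
        if self.target is None:
            # Phase 1: an accidental touch matching no entry is ignored.
            self.target = self.lookup_target(area, gesture)
            return
        # Phase 2: only now can a vehicle function actually be operated.
        mode = self.lookup_mode(self.target, area, gesture)
        if mode is not None:
            self.actuate(self.target, mode)
            self.target = None  # end this round of control
```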
From the above analysis, in embodiments of the present disclosure the target control object can be determined according to the detected first gesture action in the first area of the touchpad, and the target control mode according to the detected second gesture action in the second area. The process of determining the target control object and the target control mode is further described below with reference to FIG. 2.
FIG. 2 is a schematic flowchart of a human-computer interaction method according to a second embodiment of the present disclosure. As shown in FIG. 2, the human-computer interaction method may include the following steps:
Step 201: in response to detecting a first gesture action in a first area of the touchpad, determine the target control object corresponding to the first gesture action in the first area according to a first correspondence among preset areas of the touchpad, gesture actions, and control objects.
In embodiments of the present disclosure, a first correspondence among preset areas of the touchpad, gesture actions, and control objects may be set in advance, so that after the first gesture action in the first area is detected, the corresponding target control object can be determined by querying the first correspondence. The first correspondence may be set arbitrarily as needed.
It should be noted that, in practical applications, an accidental touch or an error in the gesture the driver remembers may cause the human-computer interaction apparatus, after detecting the first gesture action in the first area and querying the first correspondence, to determine that no control object corresponds to the first gesture action in the first area. In this case, the driver can be prompted to re-input the gesture action, so as to guide the driver to control the vehicle's functions correctly.
In embodiments of the present disclosure, the driver may be prompted to re-input the gesture action by at least one of a single long vibration of the touchpad, an in-vehicle voice announcement, and the like.
In addition, the touchpad may be connected to a display screen, where the display screen may be the center-console display or another display, and the connection may be a CAN (Controller Area Network) bus connection or another connection. Thus, in response to determining, according to the first correspondence, that no control object corresponds to the first gesture action in the first area, prompt information for re-inputting a gesture action can be displayed on the display screen. To distinguish it from other prompt information, this prompt information may be called fourth prompt information.
When the fourth prompt information is displayed on the display screen, a message indicating that the gesture action is wrong may be shown in text form, or the gesture actions corresponding to each control object and their corresponding areas may be shown in animated form; embodiments of the present disclosure impose no limitation on this.
Step 202: according to the target control object, determine a second correspondence among preset areas of the touchpad, gesture actions, and control modes of the target control object.
Step 203: in response to detecting a second gesture action in a second area of the touchpad, determine the target control mode corresponding to the second gesture action in the second area according to the second correspondence.
In embodiments of the present disclosure, a correspondence among preset areas of the touchpad, gesture actions, and control modes may be set in advance for each control object. After the target control object is determined, the second correspondence for that object can be selected accordingly; then, after the second gesture action in the second area is detected, the corresponding target control mode can be determined from the second correspondence. The second correspondence may be set arbitrarily as needed.
For example, suppose it is preset that, for the rearview mirrors, a one-finger swipe in any area of the touchpad corresponds to adjusting the mirror angle; for the hood, a one-finger upward swipe in any area corresponds to opening the hood; and for the trunk lid, a one-finger upward swipe in any area corresponds to opening the trunk lid. After the target control object is determined to be the trunk lid, the second correspondence for the trunk lid is selected; when a one-finger upward swipe in any area is then detected as the second gesture action, it can be determined from that second correspondence that the target control mode is opening the trunk lid.
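Because the second correspondence is scoped to the already-selected target object, the same swipe can safely mean different things for different objects. A self-contained, hypothetical illustration of the hood/trunk-lid example (labels assumed):

```python
# Hypothetical: the same second gesture resolves differently depending on
# which target object phase one selected.

SECOND_CORRESPONDENCE = {
    "hood":      {("any", "one_finger_swipe_up"): "open_hood"},
    "trunk_lid": {("any", "one_finger_swipe_up"): "open_trunk_lid"},
}


def resolve(target, area, gesture):
    table = SECOND_CORRESPONDENCE.get(target, {})
    return table.get((area, gesture)) or table.get(("any", gesture))


assert resolve("hood", "any", "one_finger_swipe_up") == "open_hood"
assert resolve("trunk_lid", "any", "one_finger_swipe_up") == "open_trunk_lid"
```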
It should be noted that step 202 may also be executed after the second gesture action in the second area is detected; embodiments of the present disclosure impose no limitation on when step 202 is executed.
It should also be noted that, in practical applications, an accidental touch or an error in the gesture the driver remembers may cause the apparatus, after detecting the second gesture action in the second area and querying the second correspondence, to determine that no control mode corresponds to the second gesture action in the second area. In this case, too, the driver can be prompted to re-input the gesture action, so as to guide the driver to control the functions of the target control object correctly.
In embodiments of the present disclosure, the driver may be prompted to re-input the action by at least one of a single long vibration of the touchpad, a voice announcement, and the like, or prompt information for re-inputting a gesture action may be displayed on the display screen. To distinguish it from other prompt information, this prompt information may be called fifth prompt information.
When the fifth prompt information is displayed on the display screen, a message indicating that the gesture action is wrong may be shown in text form, or the gesture actions corresponding to each control mode of the target control object and their corresponding areas may be shown in animated form; embodiments of the present disclosure impose no limitation on this.
In embodiments of the present disclosure, when the number of times it is determined that no control object corresponds to the first gesture action in the first area exceeds a preset count threshold, or the number of times it is determined that no control mode corresponds to the second gesture action in the second area exceeds the preset count threshold, a message indicating that the gesture action is wrong may further be displayed, and this round of control of the target control object may be exited. The preset count threshold may be set as needed.
In embodiments of the present disclosure, the driver may be notified of the wrong gesture action by at least one of a double long vibration of the touchpad, voice playback, and the like.
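A minimal sketch of this retry threshold, with assumed names and an assumed threshold of 3:

```python
# Hypothetical sketch of the retry threshold for unmatched gestures.
MAX_RETRIES = 3  # preset count threshold (assumed value)


def handle_unmatched(state, prompt_retry, prompt_error_and_exit):
    """state: a dict tracking misses for the current round of control."""
    state["misses"] = state.get("misses", 0) + 1
    if state["misses"] > MAX_RETRIES:
        prompt_error_and_exit()  # e.g. double long vibration + voice prompt
        state.clear()            # exit this round of control
    else:
        prompt_retry()           # fourth/fifth prompt: re-input the gesture
```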
It can be understood that the human-computer interaction method of embodiments of the present disclosure determines the target control object and its target control mode by combining the gesture action with the touchpad area in which the gesture acts. The same gesture action in different areas, or different gesture actions in the same area, can thus control different objects, or control the same object in different ways; compared with methods that determine the target object and mode from the gesture action alone, the driver needs to memorize fewer gestures. Moreover, by separately setting the first correspondence and, for each control object, its own second correspondence, the same gesture action can be reused across control objects: for the hood, a one-finger upward swipe opens the hood, while for the air conditioner, a one-finger upward swipe turns it on. This further reduces the gestures the driver must memorize and increases the flexibility of controlling each object.
Step 204: control the target control object according to the target control mode.
For the specific implementation process and principle of step 204, reference may be made to the description of the above embodiments, which is not repeated here.
It should be noted that, for some control objects, after the target control mode is determined according to the second gesture action and the object is controlled accordingly, further adjustment may be needed. For example, after the sunroof is controlled to open fully according to the second gesture action in the second area, the degree to which it is open may need further adjustment. In that case, in response to a detected third gesture action in a third area of the touchpad, a third correspondence among preset areas of the touchpad, gesture actions, and parameter adjustment modes of the target control object may be determined according to the target control object; the parameter adjustment mode corresponding to the third gesture action in the third area is determined according to the third correspondence; and the parameter of the target control object is then adjusted accordingly.
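The optional third stage can reuse the same table pattern; a hypothetical sketch (names assumed):

```python
# Hypothetical sketch of the third correspondence: after a control mode has
# been applied, a third gesture adjusts a parameter of the same object.

THIRD_CORRESPONDENCE = {
    "sunroof": {("any", "one_finger_long_press"): "adjust_opening_degree"},
}


def adjust(target, area, gesture, apply_adjustment):
    table = THIRD_CORRESPONDENCE.get(target, {})
    adjustment = table.get((area, gesture)) or table.get(("any", gesture))
    if adjustment is not None:
        apply_adjustment(target, adjustment)
```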
In addition, for control objects with relatively simple functions, to enable convenient control, embodiments of the present disclosure may provide that a single detected gesture action in a certain area of the touchpad determines both the target control object and the target control mode, and the object is then controlled accordingly.
With reference to FIGS. 3-21, examples are given below of the first correspondence among preset areas of the touchpad, gesture actions, and control objects; of the second correspondence, for each control object, among preset areas, gesture actions, and that object's control modes; and of the third correspondence among preset areas, gesture actions, and the object's parameter adjustment modes.
Referring to FIG. 3, it may be set that, as shown in the left diagram of FIG. 3, a one-finger double tap in area 301 of the touchpad corresponds to the left rearview mirror as the control object, and a one-finger double tap in area 302 corresponds to the right rearview mirror. For the left or right mirror, as shown in the right diagram of FIG. 3, a one-finger swipe in any area of the touchpad adjusts the angle of the mirror.
Referring to FIG. 4, it may be set that, as shown in the left diagram of FIG. 4, drawing a triangle with one finger in the circular area at the center of the touchpad corresponds to the sunroof as the control object. For the sunroof, as shown in the middle diagram of FIG. 4, a one-finger swipe in any area controls opening or closing: an upward swipe tilts the sunroof open, a downward swipe tilts it closed, a leftward swipe closes it fully, and a rightward swipe opens it fully. As shown in the right diagram of FIG. 4, a one-finger long press in any area adjusts the degree to which the sunroof is open.
Referring to FIG. 5, it may be set that, as shown in the left diagram of FIG. 5, a five-finger inward grab in any area of the touchpad corresponds to the door locks as the control object and locking as the control mode; as shown in the right diagram of FIG. 5, a five-finger tap spreading outward corresponds to the door locks and unlocking.
Referring to FIG. 6, it may be set that, as shown in the left diagram of FIG. 6, a one-finger left-to-right horizontal swipe in area 601 of the touchpad corresponds to the hood as the control object, and the same swipe in area 602 corresponds to the trunk lid. For the hood or the trunk lid, as shown in the right diagram of FIG. 6, a one-finger upward swipe in any area opens it.
Referring to FIG. 7, it may be set that, as shown in the left diagram of FIG. 7, a one-finger double tap in area 701 of the touchpad corresponds to the fuel tank cap or the charging port cover as the control object. As shown in the right diagram of FIG. 7, a one-finger upward swipe in any area opens the fuel tank cap or the charging port cover.
Referring to FIGS. 8-13, it may be set that, as shown in the left diagrams of FIGS. 8-13, a one-finger rightward spiral curve in any area of the touchpad corresponds to the air conditioner as the control object. For the air conditioner: as shown in the middle diagram of FIG. 8, a one-finger upward swipe in any area turns the air conditioner on, and, as shown in the right diagram of FIG. 8, a one-finger downward swipe turns it off. As shown in the middle diagram of FIG. 9, a two-finger upward swipe raises the temperature and a two-finger downward swipe lowers it; as shown in the right diagram of FIG. 9, spreading two fingers (for example the thumb and forefinger) outward increases the fan speed, and pinching two fingers inward decreases it. As shown in the second diagram of FIG. 10, a one-finger rightward swipe over some distance turns on forward airflow; as shown in the third diagram, a one-finger downward swipe over some distance turns on downward airflow; as shown in the fourth diagram, a rightward swipe followed by a downward swipe turns on multi-directional airflow. As shown in the middle diagram of FIG. 11, drawing an elliptical trajectory with one finger turns on the cabin air-conditioning internal circulation; as shown in the right diagram of FIG. 11, drawing the ellipse and then extending outward turns it off. As shown in the middle diagram of FIG. 12, a three-finger rightward stroke along a bent line in the upper half of the touchpad turns on front-windshield defogging, and the same stroke in the lower half turns on rear-windshield defogging; as shown in the right diagram of FIG. 12, the same strokes followed by a downward swipe over some distance turn front- or rear-windshield defogging off, respectively. As shown in the middle diagram of FIG. 13, a three-finger diagonal swipe down and to the left over some distance turns on cabin air filtering; as shown in the right diagram of FIG. 13, the same swipe followed by a further downward swipe turns it off.
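For concreteness, a few of the air-conditioner bindings just described could be expressed as entries of its per-object table; the labels are hypothetical, and the area keys follow the upper-half/lower-half split used for defogging.

```python
# Hypothetical rendering of a few air-conditioner bindings as entries of
# its per-object second correspondence table.

AC_BINDINGS = {
    ("any", "one_finger_swipe_up"):            "ac_on",
    ("any", "one_finger_swipe_down"):          "ac_off",
    ("any", "two_finger_swipe_up"):            "temperature_up",
    ("any", "two_finger_swipe_down"):          "temperature_down",
    ("any", "two_finger_spread"):              "fan_speed_up",
    ("any", "two_finger_pinch"):               "fan_speed_down",
    ("upper_half", "three_finger_bent_right"): "front_defog_on",
    ("lower_half", "three_finger_bent_right"): "rear_defog_on",
}
```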
Referring to FIG. 14, it may be set that, as shown in the left diagram of FIG. 14, tapping in any area of the touchpad with three fingers in sequence (for example first the thumb, then the index finger, then the middle finger) corresponds to the audio-video system as the control object. For audio-video: as shown in the middle diagram of FIG. 14, a one-finger double tap in any area starts playback, and, as shown in the right diagram, a one-finger triple tap pauses playback. In playback mode, as shown in the left diagram of FIG. 15, a two-finger rightward swipe switches to the next track; as shown in the middle diagram, a two-finger leftward swipe switches to the previous track; as shown in the right diagram, a two-finger long press for 2 seconds followed by a slow rightward slide fast-forwards, and the same long press followed by a slow leftward slide rewinds. In playback mode, as shown in the left diagram of FIG. 16, drawing a circle with one finger loops the current track; as shown in the middle diagram, swiping downward three times in succession with one finger plays the list in order; as shown in the right diagram, drawing an infinity symbol with one finger shuffles the list.
Referring to FIG. 17: as shown in the first diagram of FIG. 17, tapping back and forth twice in any area of the touchpad with the index and middle fingers in turn (index, middle, index, middle) turns on the hazard lights, while tapping in the order middle, index, middle, index turns them off. As shown in the second diagram, drawing a lightning-bolt trajectory with one finger from top to bottom in the upper half of the touchpad turns on the front dome light, and drawing it from bottom to top turns it off; the same trajectories in the lower half turn the rear dome light on and off. As shown in the third diagram, tracing a full clockwise circle around the edge of the touchpad turns on the ambient lights, and a full counterclockwise circle turns them off. After a light is turned on, as shown in the fourth diagram, long-pressing for 2 seconds at the end of the gesture and then sliding slowly to the right brightens the light, while sliding slowly to the left dims it.
Referring to FIG. 18, it may be set that, as shown in the left diagram of FIG. 18, a one-finger upward swipe in any area ending with a one-finger double tap at the end of the trajectory enters navigation; as shown in the right diagram, a one-finger downward swipe ending with a double tap exits navigation.
Referring to FIG. 19, it may be set that, as shown in the left diagram of FIG. 19, a one-finger rightward swipe in any area ending with a one-finger double tap at the end of the trajectory opens the Bluetooth connection; as shown in the right diagram, a one-finger leftward swipe ending with a double tap disconnects Bluetooth.
Referring to FIG. 20, for a given function: as shown in the first diagram of FIG. 20, drawing a check mark with one finger in any area confirms a selection; as shown in the second diagram, drawing a cross cancels a selection; as shown in the third diagram, drawing a left bracket returns to the previous page or item; as shown in the fourth diagram, a four-finger leftward swipe exits the function and returns to the home page. For answering an incoming call, the same gesture as the function-confirmation gesture in the first diagram of FIG. 20 may be used; for rejecting a call or hanging up, the same gesture as the function-cancellation gesture in the second diagram may be used.
Referring to FIG. 21, it may be set that, as shown in the left diagram of FIG. 21, swiping upward twice with four fingers in any area turns on the driver-assistance function; as shown in the middle diagram, drawing a forward double wavy line with two fingers starts navigation-pilot assisted driving; as shown in the right diagram, drawing a right-angled double line with two fingers from top to bottom and then to the right starts the automatic parking function.
From the gesture actions corresponding to each control object above, and the gesture actions corresponding to each object's control modes and parameter adjustment modes, it can be seen that in embodiments of the present disclosure the same gesture action can control different control objects, which reduces the gestures the driver needs to memorize. For example, a one-finger swipe in any area of the touchpad can control the mirror angle, open or close the sunroof, open the trunk lid or the hood, or turn on the air conditioner, and so on.
Suppose the human-computer interaction apparatus detects a one-finger rightward spiral curve in any area of the touchpad. It can then determine that the target control object is the air conditioner and select the correspondence among preset areas, gesture actions, and the air conditioner's control modes. After then detecting a one-finger upward swipe in any area, it can determine that the control mode is turning on the air conditioner, and can therefore turn the air conditioner on.
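Traced as data, this worked example is just two lookups (hypothetical labels again):

```python
# Hypothetical, self-contained trace of the example: two lookups.
FIRST = {("any", "one_finger_spiral_right"): "air_conditioner"}
SECOND = {"air_conditioner": {("any", "one_finger_swipe_up"): "ac_on"}}

target = FIRST[("any", "one_finger_spiral_right")]     # stage 1
mode = SECOND[target][("any", "one_finger_swipe_up")]  # stage 2
print(target, mode)  # air_conditioner ac_on
```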
The human-computer interaction method of this embodiment, in response to detecting a first gesture action in a first area of the touchpad, determines the target control object according to a first correspondence among preset areas of the touchpad, gesture actions, and control objects; determines, according to the target control object, a second correspondence among preset areas, gesture actions, and that object's control modes; in response to detecting a second gesture action in a second area, determines the target control mode according to the second correspondence; and controls the target control object according to the target control mode. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
From the above analysis, the control process can be divided into two phases: the target control object is first determined from the detected first gesture action in the first area, and the target control mode is then determined from the detected second gesture action in the second area. In practical applications, it may happen that, after the first gesture action is detected, no second gesture action in the second area is detected. The human-computer interaction method provided by the present disclosure is further described below with reference to FIG. 22.
FIG. 22 is a schematic flowchart of a human-computer interaction method according to a third embodiment of the present disclosure. As shown in FIG. 22, the human-computer interaction method may include the following steps:
Step 2201: in response to detecting a first gesture action in a first area of the touchpad, determine a target control object corresponding to the first gesture action in the first area.
For the specific implementation process and principle of step 2201, reference may be made to the description of the above embodiments, which is not repeated here.
Step 2202: display, in a preset manner, first prompt information indicating that the first gesture action in the first area corresponds to the target control object, and display second prompt information about the gesture action corresponding to at least one control mode of the target control object.
Step 2203: acquire a confirmation instruction for the target control object.
Step 2204: in response to detecting a second gesture action in a second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area.
In embodiments of the present disclosure, after the target control object is determined, its trigger mode can be entered, and the driver can be notified of the determined target control object by at least one of a double vibration of the touchpad, an in-vehicle voice announcement, and the like, so that the driver knows the first gesture action in the first area has been correctly responded to. To distinguish it from other prompt information, this prompt information may be called first prompt information. In addition, second prompt information about the gesture action corresponding to at least one control mode of the target control object may also be presented, for example by voice playback, to guide the driver to use the correct gesture actions. For example, when the target control object is the hood, the voice prompt "Swipe up with one finger in any area of the touchpad to open the hood, or swipe down with one finger in any area of the touchpad to close the hood" may be played.
In embodiments of the present disclosure, the touchpad may be connected to a display screen, where the display screen may be the center-console display or another display and the connection may be a CAN bus connection or another connection, so that after the target control object is determined, the first prompt information and the second prompt information can be displayed on the display screen; the driver can then see intuitively which control object is currently being controlled and which gesture actions to use.
When the first and second prompt information are displayed on the display screen, they may be shown in text form or in animated form; embodiments of the present disclosure impose no limitation on this.
In embodiments of the present disclosure, after the first prompt information is displayed, the human-computer interaction apparatus may determine the target control mode from the detected second gesture action only after acquiring a confirmation instruction for the target control object. This avoids the situation where, if the determined target control object is wrong, the wrong object would be controlled according to the second gesture action, and it improves the accuracy of controlling the vehicle's functions.
The confirmation instruction for the target control object may be a confirmation given by the driver by voice, a confirmation instruction triggered by a specific gesture action in a specific area of the touchpad (for example, the driver drawing a check mark with one finger in any area to confirm that the target control object is correct), or a confirmation instruction triggered in another form; embodiments of the present disclosure impose no limitation on this.
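A hedged sketch of this confirmation gate, assuming voice and touch events arrive on one stream; all names are hypothetical.

```python
# Hypothetical sketch: stage two proceeds only after a confirmation
# instruction, given by voice or by a specific touchpad gesture.

def confirmed(events):
    """events: iterable of (kind, payload) tuples from voice/touch input."""
    for kind, payload in events:
        if kind == "voice" and payload == "confirm":
            return True
        if kind == "touch" and payload == "one_finger_check_mark":
            return True
    return False


def stage_two(target, events, await_second_gesture):
    """Only determine a control mode once the target object is confirmed,
    so a wrongly determined object is never actually controlled."""
    if not confirmed(events):
        return None
    return await_second_gesture(target)
```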
In embodiments of the present disclosure, it may also happen that, after inputting the first gesture action in the first area, the driver inputs no further gesture action on the touchpad, because the target control object is wrong or the driver is momentarily occupied, among other reasons. For this situation, in response to no second gesture action in the second area being detected within a first preset time period after the first and second prompt information are displayed, third prompt information prompting input of a gesture action can be displayed, to guide the driver to control the target control object correctly. In response to no second gesture action being detected within a second preset time period after the third prompt information is displayed, the display screen is controlled to exit the interface displaying the first and second prompt information.
The first preset time period and the second preset time period may be set as needed; for example, both may be set to 5 seconds.
In embodiments of the present disclosure, in response to no second gesture action in the second area being detected within the first preset time period, the driver may be prompted to input a gesture action by at least one of a single vibration of the touchpad, an in-vehicle voice announcement, and the like, or prompt information may be displayed on the display screen. To distinguish it from other prompt information, this prompt information may be called third prompt information.
In response to no second gesture action being detected within the second preset time period after the third prompt information is displayed, the trigger mode of the target control object can be exited, an exit voice announcement can be made, and the display screen can be controlled to exit the interface displaying the first and second prompt information.
In embodiments of the present disclosure, after the target control mode is determined, the driver may also be notified, by a single vibration of the touchpad, a voice announcement, on-screen display, or the like, that the second gesture action in the second area has been correctly responded to.
Step 2205: control the target control object according to the target control mode.
In embodiments of the present disclosure, in response to no gesture action being detected within a third preset time period after the target control object is controlled, sixth prompt information indicating exit from the current round of function control can be presented. The third preset time period may be set as needed.
In embodiments of the present disclosure, the sixth prompt information may be presented by at least one of a double vibration of the touchpad, a voice announcement, and the like. In addition, after the current round of control is exited, the display screen may remain on the control interface of the target control object, to make further control convenient for the driver.
The human-computer interaction method of this embodiment, in response to detecting a first gesture action in a first area of the touchpad, determines the corresponding target control object; displays, in a preset manner, the first prompt information indicating that the first gesture action corresponds to the target control object and the second prompt information about the gesture action corresponding to at least one of its control modes; acquires a confirmation instruction for the target control object; in response to detecting a second gesture action in a second area, determines the corresponding target control mode; and controls the target control object accordingly. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
The human-computer interaction apparatus provided by the present disclosure is described below with reference to FIG. 23.
FIG. 23 is a schematic structural diagram of a human-computer interaction apparatus according to a fourth embodiment of the present disclosure.
As shown in FIG. 23, the human-computer interaction apparatus 2300 provided by the present disclosure includes: a first determination module 2301, a second determination module 2302, and a control module 2303.
The first determination module 2301 is configured to, in response to detecting a first gesture action in a first area of the touchpad, determine a target control object corresponding to the first gesture action in the first area;
the second determination module 2302 is configured to, in response to detecting a second gesture action in a second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area;
the control module 2303 is configured to control the target control object according to the target control mode.
It should be noted that the human-computer interaction apparatus 2300 provided by this embodiment can execute the human-computer interaction method of the foregoing embodiments. The apparatus may be an electronic device or may be configured in an electronic device, so that the target control object is controlled according to gesture actions detected in different areas of the touchpad, allowing the driver to control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
The electronic device may be any stationary or mobile computing device capable of data processing, such as a mobile computing device like a notebook computer, a smartphone, or a wearable device, a stationary computing device such as a desktop computer, a server, a touchpad, or the like; the present disclosure imposes no limitation on this.
It should be noted that the foregoing description of the method embodiments also applies to the human-computer interaction apparatus provided by the present disclosure and is not repeated here.
The human-computer interaction apparatus provided by embodiments of the present disclosure determines, in response to detecting a first gesture action in a first area of the touchpad, the corresponding target control object; determines, in response to detecting a second gesture action in a second area, the corresponding target control mode; and controls the target control object accordingly. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
The human-computer interaction apparatus provided by the present disclosure is described below with reference to FIG. 24.
FIG. 24 is a schematic structural diagram of a human-computer interaction apparatus according to a fifth embodiment of the present disclosure.
As shown in FIG. 24, the human-computer interaction apparatus 2400 may include: a first determination module 2401, a second determination module 2402, and a control module 2403, which have the same functions and structures as the first determination module 2301, the second determination module 2302, and the control module 2303 in FIG. 23.
In embodiments of the present disclosure, the first determination module 2401 includes: a first determination unit configured to determine the target control object corresponding to the first gesture action in the first area according to a first correspondence among preset areas of the touchpad, gesture actions, and control objects.
In embodiments of the present disclosure, the second determination module 2402 includes: a second determination unit configured to determine, according to the target control object, a second correspondence among preset areas of the touchpad, gesture actions, and control modes of the target control object; and a third determination unit configured to determine, according to the second correspondence, the target control mode corresponding to the second gesture action in the second area.
In embodiments of the present disclosure, the apparatus 2400 further includes: a first display module 2404 configured to display, in a preset manner, first prompt information indicating that the first gesture action in the first area corresponds to the target control object, and second prompt information about the gesture action corresponding to at least one control mode of the target control object.
In embodiments of the present disclosure, the apparatus 2400 further includes: an acquisition module 2405 configured to acquire a confirmation instruction for the target control object.
In embodiments of the present disclosure, the touchpad is connected to a display screen, and the first display module 2404 includes: a first display unit configured to display the first prompt information and the second prompt information through the display screen.
In embodiments of the present disclosure, the first display module 2404 further includes: a second display unit configured to display third prompt information prompting input of a gesture action in response to no second gesture action in the second area of the touchpad being detected within a first preset time period; and a control unit configured to control the display screen to exit the interface displaying the first and second prompt information in response to no second gesture action being detected within a second preset time period after the third prompt information is displayed.
In embodiments of the present disclosure, the apparatus 2400 further includes: a second display module 2406 configured to display fourth prompt information prompting re-input of a gesture action in response to determining, according to the first correspondence, that no control object corresponds to the first gesture action in the first area.
In embodiments of the present disclosure, the apparatus 2400 further includes: a third display module 2407 configured to display fifth prompt information prompting re-input of a gesture action in response to determining, according to the second correspondence, that no control mode corresponds to the second gesture action in the second area.
In embodiments of the present disclosure, the touchpad is mounted at any one of the following locations in the vehicle: the right-hand console, the gear shifter surface, the left-hand front door trim, or the center of the steering wheel.
It should be noted that the foregoing description of the method embodiments also applies to the human-computer interaction apparatus provided by the present disclosure and is not repeated here.
The human-computer interaction apparatus provided by embodiments of the present disclosure determines, in response to detecting a first gesture action in a first area of the touchpad, the corresponding target control object; determines, in response to detecting a second gesture action in a second area, the corresponding target control mode; and controls the target control object accordingly. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 25 shows a schematic block diagram of an example electronic device 2500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in FIG. 25, the device 2500 includes a computing unit 2501 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 2502 or loaded from a storage unit 2508 into a random-access memory (RAM) 2503. The RAM 2503 can also store various programs and data required for the operation of the device 2500. The computing unit 2501, the ROM 2502, and the RAM 2503 are connected to one another through a bus 2504. An input/output (I/O) interface 2505 is also connected to the bus 2504.
Multiple components of the device 2500 are connected to the I/O interface 2505, including: an input unit 2506, such as a keyboard or a mouse; an output unit 2507, such as various types of displays and speakers; a storage unit 2508, such as a magnetic disk or an optical disc; and a communication unit 2509, such as a network card, a modem, or a wireless communication transceiver. The communication unit 2509 allows the device 2500 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 2501 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like. The computing unit 2501 executes the methods and processes described above, such as the human-computer interaction method. For example, in some embodiments the human-computer interaction method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 2508. In some embodiments, part or all of the computer program may be loaded and/or installed on the device 2500 via the ROM 2502 and/or the communication unit 2509. When the computer program is loaded into the RAM 2503 and executed by the computing unit 2501, one or more steps of the human-computer interaction method described above can be performed. Alternatively, in other embodiments, the computing unit 2501 may be configured to execute the human-computer interaction method in any other appropriate way (for example, by means of firmware).
Various implementations of the systems and techniques described above can be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor able to receive data and instructions from a storage system, at least one input device, and at least one output device, and to transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
A computer system may include clients and servers. Clients and servers are generally remote from each other and typically interact through a communication network. The relationship of client and server arises from computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also known as a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak business scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server can also be a server of a distributed system, or a server combined with a blockchain.
According to the technical solutions of embodiments of the present disclosure, in response to detecting the first gesture action in the first area of the touchpad, the corresponding target control object is determined; in response to detecting the second gesture action in the second area of the touchpad, the corresponding target control mode is determined; and the target control object is controlled according to the target control mode. In this way, the target control object is controlled according to gesture actions detected in different areas of the touchpad, so that the driver can control the various functions of the vehicle with finger movements alone, without taking their eyes off the road ahead, improving driving safety.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.
Claims (23)
- A human-computer interaction method, comprising: in response to detecting a first gesture action in a first area of a touchpad, determining a target control object corresponding to the first gesture action in the first area; in response to detecting a second gesture action in a second area of the touchpad, determining a target control mode corresponding to the second gesture action in the second area; and controlling the target control object according to the target control mode.
- The method according to claim 1, wherein the determining, in response to detecting a first gesture action in a first area of a touchpad, a target control object corresponding to the first gesture action in the first area comprises: determining the target control object corresponding to the first gesture action in the first area according to a first correspondence among preset areas of the touchpad, gesture actions, and control objects.
- The method according to claim 1 or 2, wherein the determining, in response to detecting a second gesture action in a second area of the touchpad, a target control mode corresponding to the second gesture action in the second area comprises: determining, according to the target control object, a second correspondence among preset areas of the touchpad, gesture actions, and control modes of the target control object; and determining, according to the second correspondence, the target control mode corresponding to the second gesture action in the second area.
- The method according to claim 1 or 2, further comprising: displaying, in a preset manner, first prompt information indicating that the first gesture action in the first area corresponds to the target control object, and displaying second prompt information about the gesture action corresponding to at least one control mode of the target control object.
- The method according to claim 4, further comprising: acquiring a confirmation instruction for the target control object.
- The method according to claim 4, wherein the touchpad is connected to a display screen; and wherein the displaying, in a preset manner, of the first prompt information and of the second prompt information comprises: displaying the first prompt information and the second prompt information through the display screen.
- The method according to claim 6, further comprising: in response to no second gesture action in the second area of the touchpad being detected within a first preset time period, displaying third prompt information prompting input of a gesture action; and in response to no second gesture action in the second area of the touchpad being detected within a second preset time period after the third prompt information is displayed, controlling the display screen to exit the interface displaying the first prompt information and the second prompt information.
- The method according to claim 2, further comprising: in response to determining, according to the first correspondence among preset areas of the touchpad, gesture actions, and control objects, that no control object corresponds to the first gesture action in the first area, displaying fourth prompt information prompting re-input of a gesture action.
- The method according to claim 3, further comprising: in response to determining, according to the second correspondence, that no control mode corresponds to the second gesture action in the second area, displaying fifth prompt information prompting re-input of a gesture action.
- The method according to any one of claims 1-9, wherein the touchpad is mounted at any one of the following locations in a vehicle: a right-hand console, a gear shifter surface, a left-hand front door trim, or the center of the steering wheel.
- A human-computer interaction apparatus, comprising: a first determination module configured to, in response to detecting a first gesture action in a first area of a touchpad, determine a target control object corresponding to the first gesture action in the first area; a second determination module configured to, in response to detecting a second gesture action in a second area of the touchpad, determine a target control mode corresponding to the second gesture action in the second area; and a control module configured to control the target control object according to the target control mode.
- The apparatus according to claim 11, wherein the first determination module comprises: a first determination unit configured to determine the target control object corresponding to the first gesture action in the first area according to a first correspondence among preset areas of the touchpad, gesture actions, and control objects.
- The apparatus according to claim 11 or 12, wherein the second determination module comprises: a second determination unit configured to determine, according to the target control object, a second correspondence among preset areas of the touchpad, gesture actions, and control modes of the target control object; and a third determination unit configured to determine, according to the second correspondence, the target control mode corresponding to the second gesture action in the second area.
- The apparatus according to claim 11 or 12, further comprising: a first display module configured to display, in a preset manner, first prompt information indicating that the first gesture action in the first area corresponds to the target control object, and to display second prompt information about the gesture action corresponding to at least one control mode of the target control object.
- The apparatus according to claim 14, further comprising: an acquisition module configured to acquire a confirmation instruction for the target control object.
- The apparatus according to claim 14, wherein the touchpad is connected to a display screen, and the first display module comprises: a first display unit configured to display the first prompt information and the second prompt information through the display screen.
- The apparatus according to claim 16, wherein the first display module further comprises: a second display unit configured to display third prompt information prompting input of a gesture action in response to no second gesture action in the second area of the touchpad being detected within a first preset time period; and a control unit configured to control the display screen to exit the interface displaying the first prompt information and the second prompt information in response to no second gesture action in the second area of the touchpad being detected within a second preset time period after the third prompt information is displayed.
- The apparatus according to claim 12, further comprising: a second display module configured to display fourth prompt information prompting re-input of a gesture action in response to determining, according to the first correspondence among preset areas of the touchpad, gesture actions, and control objects, that no control object corresponds to the first gesture action in the first area.
- The apparatus according to claim 13, further comprising: a third display module configured to display fifth prompt information prompting re-input of a gesture action in response to determining, according to the second correspondence, that no control mode corresponds to the second gesture action in the second area.
- The apparatus according to any one of claims 11-19, wherein the touchpad is mounted at any one of the following locations in a vehicle: a right-hand console, a gear shifter surface, a left-hand front door trim, or the center of the steering wheel.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-10.
- A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method according to any one of claims 1-10.
- A computer program product, comprising a computer program that, when executed by a processor, implements the method according to any one of claims 1-10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21921647.0A EP4137380A4 (en) | 2021-06-25 | 2021-12-02 | HUMAN-MACHINE INTERACTION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM |
US17/760,344 US20240211126A1 (en) | 2021-06-25 | 2021-12-02 | Human-machine interaction method, electronic device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110710822.9A CN113548061B (zh) | 2021-06-25 | 2021-06-25 | 人机交互方法、装置、电子设备以及存储介质 |
CN202110710822.9 | 2021-06-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022267354A1 (zh) | 2022-12-29 |
Family
ID=78102404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/135044 WO2022267354A1 (zh) | 2021-06-25 | 2021-12-02 | 人机交互方法、装置、电子设备以及存储介质 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240211126A1 (zh) |
EP (1) | EP4137380A4 (zh) |
CN (1) | CN113548061B (zh) |
WO (1) | WO2022267354A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113548061B (zh) * | 2021-06-25 | 2023-04-18 | 北京百度网讯科技有限公司 | 人机交互方法、装置、电子设备以及存储介质 |
CN115027386B (zh) * | 2022-04-29 | 2023-08-22 | 北京龙腾佳讯科技股份公司 | 基于汽车云栈的车载服务控制方法、系统、装置及介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102866803B (zh) * | 2012-08-30 | 2016-02-17 | 浙江大学 | 一种支持盲操作的汽车虚拟中控台手势操控方法及装置 |
US9720591B2 (en) * | 2014-08-20 | 2017-08-01 | Harman International Industries, Incorporated | Multitouch chording language |
CN108131808B (zh) * | 2017-12-08 | 2020-03-27 | 厦门瑞为信息技术有限公司 | 基于分级手势识别的空调控制装置及方法 |
CN110825296A (zh) * | 2019-11-07 | 2020-02-21 | 深圳传音控股股份有限公司 | 应用控制方法、设备及计算机可读存储介质 |
- 2021-06-25 CN CN202110710822.9A patent/CN113548061B/zh active Active
- 2021-12-02 US US17/760,344 patent/US20240211126A1/en active Pending
- 2021-12-02 WO PCT/CN2021/135044 patent/WO2022267354A1/zh active Application Filing
- 2021-12-02 EP EP21921647.0A patent/EP4137380A4/en not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103180812A (zh) * | 2011-08-31 | 2013-06-26 | 观致汽车有限公司 | 车辆交互系统 |
CN105760096A (zh) * | 2016-01-04 | 2016-07-13 | 钟林 | 一种支持盲操作的汽车中控台方位手势操控方法及装置 |
US20170308225A1 (en) * | 2016-04-20 | 2017-10-26 | Samsung Electronics Co., Ltd. | Electronic device and method for processing gesture input |
CN110633044A (zh) * | 2019-08-27 | 2019-12-31 | 联想(北京)有限公司 | 一种控制方法、装置、电子设备及存储介质 |
CN113548061A (zh) * | 2021-06-25 | 2021-10-26 | 北京百度网讯科技有限公司 | 人机交互方法、装置、电子设备以及存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4137380A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN113548061A (zh) | 2021-10-26 |
EP4137380A4 (en) | 2024-01-24 |
US20240211126A1 (en) | 2024-06-27 |
EP4137380A1 (en) | 2023-02-22 |
CN113548061B (zh) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110045825B (zh) | 用于车辆交互控制的手势识别系统 | |
WO2022267354A1 (zh) | 人机交互方法、装置、电子设备以及存储介质 | |
US9261908B2 (en) | System and method for transitioning between operational modes of an in-vehicle device using gestures | |
US10209832B2 (en) | Detecting user interactions with a computing system of a vehicle | |
JP2017076408A (ja) | 自動車用のジェスチャーに基づいた情報およびコマンドの入力 | |
CN105446172B (zh) | 一种车载控制方法、车载控制系统及汽车 | |
CN102866803A (zh) | 一种支持盲操作的汽车虚拟中控台手势操控方法及装置 | |
KR102686009B1 (ko) | 단말기, 그를 가지는 차량 및 그 제어 방법 | |
WO2020087964A1 (zh) | 一种手势触控的多屏操作方法 | |
US20180307405A1 (en) | Contextual vehicle user interface | |
US10671205B2 (en) | Operating apparatus for vehicle | |
KR101806172B1 (ko) | 차량 단말기 조작 시스템 및 그 방법 | |
US11507194B2 (en) | Methods and devices for hand-on-wheel gesture interaction for controls | |
CN105760096A (zh) | 一种支持盲操作的汽车中控台方位手势操控方法及装置 | |
CN109739428A (zh) | 触控交互方法及装置、显示设备及存储介质 | |
WO2022142331A1 (zh) | 车载显示屏的控制方法及装置、电子设备和存储介质 | |
US11853469B2 (en) | Optimize power consumption of display and projection devices by tracing passenger's trajectory in car cabin | |
KR20220098339A (ko) | 차량용 스크린의 제어 방법 및 장치, 전자 기기 및 저장 매체 | |
US10052955B2 (en) | Method for providing an operating device in a vehicle and operating device | |
JP2018501998A (ja) | 自動車の機器を制御するためのシステムおよび方法 | |
CN114537417A (zh) | 一种基于hud和触摸设备的盲操作方法、系统和车辆 | |
CN116204253A (zh) | 一种语音助手显示方法及相关装置 | |
CN103558917A (zh) | 一种基于车载电子设备的盲操作方法及装置 | |
CN113791713B (zh) | 应用于车载智能座舱的多屏幕显示窗口分享方法及装置 | |
JP2016099891A (ja) | 表示操作装置 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | WWE | Wipo information: entry into national phase | Ref document number: 17760344; Country of ref document: US
 | ENP | Entry into the national phase | Ref document number: 2021921647; Country of ref document: EP; Effective date: 20220804
 | NENP | Non-entry into the national phase | Ref country code: DE