CN114987364A - Multi-mode human-vehicle interaction system - Google Patents

Multi-mode human-vehicle interaction system

Info

Publication number
CN114987364A
CN114987364A (application CN202210642131.4A)
Authority
CN
China
Prior art keywords
subsystem
information
interaction
module
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210642131.4A
Other languages
Chinese (zh)
Inventor
吕健安
阿卜杜勒·阿齐兹·扎莱
娜尔·西亚兹瓦尼·宾蒂·梅特·萨利赫
王红生
吴银坤
杨镜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou University
Original Assignee
Huizhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou University
Priority to CN202210642131.4A
Publication of CN114987364A
Legal status: Pending

Classifications

    • B60R 16/02: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; arrangement of elements of such circuits: electric constitutive elements
    • B60R 16/023: Electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • G06V 40/166: Human face detection, localisation or normalisation using acquisition arrangements
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/70: Multimodal biometrics, e.g. combining information from different biometric modalities
    • G10L 15/20: Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise or stress-induced speech
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue

Abstract

The invention relates to a multi-mode human-vehicle interaction system comprising: a speech recognition subsystem, for acquiring semantic information and recognizing and processing it; an image processing subsystem, for acquiring image input signals, the image input signals comprising a face image signal and a pupil image signal; a steering wheel control subsystem, for sending control instructions through physical interaction to complete the corresponding operation items; an information interaction subsystem, for executing received instruction information and carrying out the corresponding operations; and a HUD display subsystem, for displaying the interaction information. The speech recognition subsystem is connected to the image processing subsystem and the information interaction subsystem respectively, the image processing subsystem and the steering wheel control subsystem are each connected to the information interaction subsystem, and the information interaction subsystem is connected to the HUD display subsystem. The invention improves the user's human-computer interaction experience through accurate recognition of input information, accurate semantic understanding, and personalized output.

Description

Multi-mode human-vehicle interaction system
Technical Field
The invention relates to the technical field of intelligent vehicle-mounted equipment, in particular to a multi-mode human-vehicle interaction system.
Background
In modern society, vehicles permeate ever more of people's daily life, study, and work, and the automobile has gradually become a part of everyday life. With the rapid development of the automobile industry, the automobile has become an indispensable means of transport. To ensure safety while driving, the driver must concentrate on driving and cannot perform other operations, such as selecting a radio station. Existing vehicle-mounted devices work independently, each realizing only its own inherent function without cooperating with the others; their functionality is therefore limited and cannot meet users' all-round needs in the in-vehicle environment.
With the rapid development of Internet of Vehicles technology, vehicle interconnection and intelligence have become possible. An important component of this technology is the human-vehicle interaction system. The rise of artificial intelligence has injected new vitality into human-vehicle interaction and set off a new generation of intelligent human-vehicle interaction systems.
The existing human-machine interaction mode is mainly voice interaction: the user issues a voice instruction directly to the target object, and the system reads the user's intent from the instruction and executes the corresponding control operation. This approach has many shortcomings: control that relies on voice alone is easily affected by interference. For example, differences in the volume or direction of each voice command directly affect the accuracy with which the system recognizes the user's intent.
Therefore, a multimodal human-vehicle interaction system is urgently needed that can be operated purely through voice and physical keys, with the operation results displayed on a HUD display screen and a central control display screen.
Disclosure of Invention
The invention provides a multi-mode human-vehicle interaction system in which, during driving, the display screen need not be touched: operation is performed only through voice and physical keys, and the operation result is displayed on the HUD display screen and the central control display screen. The driver can thus view the result through the HUD without looking at the central control display screen, which improves the user's human-computer interaction experience.
In order to achieve the purpose, the invention provides the following scheme:
a multimodal human-vehicle interaction system comprising:
a speech recognition subsystem, for acquiring semantic information and recognizing and processing it;
an image processing subsystem, for acquiring image input signals, the image input signals comprising a face image signal and a pupil image signal;
a steering wheel control subsystem, for sending control instructions through physical interaction to complete the corresponding operation items;
an information interaction subsystem, for executing received instruction information and carrying out the corresponding operations;
a HUD display subsystem, for displaying the interaction information;
wherein the speech recognition subsystem is connected to the image processing subsystem and the information interaction subsystem respectively, the image processing subsystem and the steering wheel control subsystem are each connected to the information interaction subsystem, and the information interaction subsystem is connected to the HUD display subsystem.
Preferably, the speech recognition subsystem comprises a data processing module whose output is connected to the input of the information interaction subsystem, the data processing module being configured to generate a control instruction according to the semantic information and send it to the information interaction subsystem.
Preferably, the speech recognition subsystem further comprises a noise reduction module for reducing noise in the semantic information.
Preferably, the image processing subsystem comprises a photographing module, which performs photographing operations and acquires the driver's face image signal, and a camera module, which performs video capture and collects the driver's pupil image signal.
Preferably, position information of what has attracted the driver's attention can be captured based on the face image signal and the pupil image signal, and an image of that position is photographed on an instruction from the speech recognition subsystem and sent to the HUD display subsystem for storage.
Preferably, the steering wheel control subsystem is installed on the steering wheel and comprises four direction keys and a confirmation key, through which the information interaction subsystem is controlled.
Preferably, the steering wheel control subsystem further comprises an analog-to-digital conversion module, and the analog-to-digital conversion module is configured to convert an analog signal generated by the steering wheel control subsystem into a digital signal and send the digital signal to the information interaction subsystem.
Preferably, the information interaction subsystem is connected to the HUD display subsystem through Bluetooth, and the response information of the information interaction subsystem is displayed in the HUD display subsystem in a differentiated manner.
Preferably, the multimodal human-vehicle interaction system further comprises a communication module, wherein the communication module supports wired communication and wireless communication, and the wireless communication comprises Bluetooth communication or WiFi communication.
Preferably, the multimodal human-vehicle interaction system further comprises a central control display screen connected to the information interaction subsystem and the HUD display subsystem respectively; corresponding operations are selected by touching the central control display screen and are presented in the HUD display subsystem.
The invention has the beneficial effects that:
according to the invention, accurate identification, accurate semantic understanding and personalized output of input information are realized through the multi-mode human-vehicle interaction system, and the control instruction is generated according to the gazing information and the semantic information of the user so as to execute the operation corresponding to the control instruction, so that the multi-modes of the user can be fused, and the human-computer interaction experience of the user is improved. The accuracy of operation can also be improved, and the interaction efficiency is improved.
During driving, the system can be operated purely through voice and physical keys without touching a display screen, and the operation result is displayed on the HUD display screen and the central control display screen. The driver can therefore view the result through the HUD without looking at the central control display screen, avoiding distraction while driving.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments are briefly described below. The drawings in the following description are only some embodiments of the invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a system module connection relationship according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a system work flow in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention provides a multi-modal human-vehicle interaction system. The term "modality" is a biological concept proposed by the German physiologist Helmholtz: a channel through which a living being receives information by means of its sensory organs and experience; a human, for example, acquires external information through the five senses. The concept was initially applied in the human sciences and later extended to computer science, where it refers to a computer's channel to the physical world. "Multi-modal" means that multiple senses are fused: a smart speaker is an Internet-of-Things device with an auditory modality, and a camera loaded with AI analysis capability is an Internet-of-Things device with a visual modality. Multi-modal interaction fuses human vision, hearing, touch, and other senses; the computer responds to input through multiple communication channels, fully simulating interaction between people. It is widely applied in the interaction design of intelligent products: humans input information through voice, gestures, expressions, and so on, and the computer responds through channels such as computer vision and hearing. An air conditioner that combines voice recognition with position-controlled airflow is one example of multi-modal interaction.
As shown in fig. 1, a multimodal human-vehicle interaction system specifically includes:
a speech recognition subsystem, for acquiring semantic information and recognizing it;
an image processing subsystem, for acquiring image input signals, the image input signals comprising a face image signal and a pupil image signal;
a steering wheel control subsystem, for sending control instructions through physical interaction to complete the corresponding operation items;
an information interaction subsystem, for executing received instruction information and carrying out the corresponding operations;
a HUD display subsystem, for displaying the interaction information;
wherein the speech recognition subsystem is connected to the image processing subsystem and the information interaction subsystem respectively, the image processing subsystem and the steering wheel control subsystem are each connected to the information interaction subsystem, and the information interaction subsystem is connected to the HUD display subsystem through Bluetooth.
In a further optimization, the speech recognition subsystem comprises a data processing module whose output is connected to the input of the information interaction subsystem and which generates a control instruction from the semantic information and sends it to the information interaction subsystem. The speech recognition subsystem further comprises a noise reduction module for reducing noise in the semantic information and improving the accuracy of the data processing module's semantic processing.
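The speech path above (noise reduction, then mapping recognized semantics to a control instruction for the information interaction subsystem) can be sketched as follows. The patent discloses no implementation, so every name here (`reduce_noise`, `parse_intent`, `Instruction`, the keyword table) is a hypothetical stand-in for the real modules:

```python
# Hypothetical sketch of the speech path: a noise-reduction stage followed by
# a data-processing stage that maps recognized text to a control instruction.
from dataclasses import dataclass

@dataclass
class Instruction:
    action: str   # e.g. "take_photo", "record_video", "navigate"
    payload: dict

def reduce_noise(samples, threshold=0.02):
    """Crude noise gate standing in for the noise reduction module:
    zero out samples whose magnitude falls below the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Toy keyword table standing in for the real semantic model.
_KEYWORDS = {
    "photograph": "take_photo",
    "record": "record_video",
    "navigate": "navigate",
}

def parse_intent(text):
    """Data processing module: turn recognized text into an instruction,
    or None when no known intent is found."""
    for word, action in _KEYWORDS.items():
        if word in text.lower():
            return Instruction(action=action, payload={"utterance": text})
    return None
```

A real data processing module would use a proper speech recognizer and intent model; the keyword lookup only illustrates the boundary between the two modules.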
In a further optimization, the image processing subsystem comprises a photographing module, which performs photographing operations and acquires the driver's face image signal, and a camera module, which performs video capture and collects the driver's pupil image signal. Position information of what has attracted the driver's attention is captured based on the face image signal and the pupil image signal; an image of that position is photographed on an instruction from the speech recognition subsystem and sent to the HUD display subsystem for storage.
In a further optimization, the HUD display subsystem can be connected to a mobile terminal device through wireless communication, obtain entertainment or navigation information from the mobile terminal device, and present it on the HUD display.
The HUD display subsystem serves as a parallel display system: a multifunction instrument panel that the driver operates blind. It can display the vehicle's speed, navigation information, and simplified versions of the content on the central control display screen. With a head-up display, the driver does not need to shift his or her line of sight to view this information. The driver-centered design makes it more convenient to see the on-board screen and operate the controls while driving; blind operation lets the driver keep his or her eyes on the road as much as possible; and displaying navigation information on the instrument panel diverts the driver's attention as little as possible during navigation.
The HUD is mainly used for displaying the content of the central control display screen and is an extension of it. During driving the driver therefore does not need to touch the display screen; operation is performed only through voice and physical keys, the result is displayed on the HUD display screen and the central control display screen, and the driver can see the result through the HUD without watching the central control display screen.
In this embodiment, the HUD head-up display subsystem runs the Android 8.0 operating system. It supports map display: it receives map data transmitted by the information interaction subsystem over Bluetooth and redraws it on the head-up display for navigation.
In a further optimization, the steering wheel control subsystem (SWC) is installed on the steering wheel and comprises four direction keys and a confirmation key, through which the information interaction subsystem is controlled. The SWC further comprises an analog-to-digital conversion module, which converts the analog signal generated by the steering wheel control subsystem into a digital signal and sends it to the information interaction subsystem.
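Steering-wheel key clusters are commonly wired as a resistor ladder, so one plausible reading of the analog-to-digital conversion module is a voltage-band decoder like the sketch below. The voltage bands are invented for illustration and are not specified by the patent:

```python
# Hypothetical voltage bands for the five SWC keys; a resistor ladder gives
# each key a distinct voltage, and the ADC module maps a sample to a key name.
KEY_BANDS = [
    (0.0, 0.5, "UP"),
    (0.5, 1.0, "DOWN"),
    (1.0, 1.5, "LEFT"),
    (1.5, 2.0, "RIGHT"),
    (2.0, 2.5, "CONFIRM"),
]

def decode_key(voltage):
    """Return the key name for a sampled ADC voltage, or None when no key
    is pressed (open circuit gives a voltage outside every band)."""
    for lo, hi, name in KEY_BANDS:
        if lo <= voltage < hi:
            return name
    return None
```

The decoded key name would then be sent to the information interaction subsystem as the digital signal the description mentions.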
The information interaction subsystem is connected to the HUD display subsystem through Bluetooth; its response information is displayed in the HUD display subsystem in a differentiated manner. The information interaction subsystem can also receive commands sent by the steering wheel control subsystem and carry out the corresponding operations.
The information interaction subsystem runs the Android 8.0 operating system, supports Bluetooth, uses a capacitive touch screen, supports FM broadcasting, GPS, and audio and video playback, and supports common applications such as Google Maps, YouTube, and Twitter. It mainly provides the infotainment function of the vehicle's central control system: it receives instructions from the SWC steering wheel controller, performs the corresponding operations, and transmits control data to the HUD display over Bluetooth.
In some embodiments, the system comprises the steering wheel control subsystem (SWC), the information interaction subsystem, and the HUD display subsystem, and implements the vehicle-mounted infotainment function (as shown in fig. 2). First, the information interaction subsystem starts and performs a self-check; the Bluetooth device is turned on and searches for and connects to the HUD display subsystem. Menu selection can then be performed through the keys of the steering wheel control subsystem, or through the central control display screen of the information interaction subsystem: the screen is touched, navigation data is selected and confirmed, and a navigation operation item event is obtained. The confirmed navigation data is packaged and sent to the HUD display subsystem, which accepts the connection request, receives and draws the navigation map data, and displays the final image.
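The start-up and navigation flow just described can be restated as a linear sequence of steps. The component interfaces (`MockBluetooth`, the JSON packet format) are hypothetical stand-ins; the patent fixes only the order of operations, not any wire format:

```python
# Sketch of the Fig. 2 flow. MockBluetooth and the packet format are invented
# for illustration only.
import json

class MockBluetooth:
    """Minimal stand-in for the Bluetooth link between subsystems."""
    def __init__(self):
        self.sent = []
    def connect(self, target):
        # Search for and connect to the named subsystem.
        return True
    def send(self, packet):
        self.sent.append(packet)

def infotainment_flow(bt, nav_selection):
    # Step 1: the information interaction subsystem starts and self-checks.
    self_check_ok = True
    if not self_check_ok:
        raise RuntimeError("self-check failed")
    # Step 2: connect to the HUD display subsystem over Bluetooth.
    if not bt.connect("HUD"):
        raise ConnectionError("HUD display subsystem not reachable")
    # Step 3: navigation data confirmed via SWC keys or the touch screen is
    # packaged and sent to the HUD, which draws and displays it.
    packet = json.dumps({"type": "nav", "data": nav_selection}).encode()
    bt.send(packet)
    return packet
```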
In some embodiments, the user's gaze information and semantic information are obtained through the speech recognition subsystem and the image processing subsystem, where the gaze information may include one or more of gaze point information (gaze direction), a gaze vector, or gaze depth. The gaze information is obtained by acquiring an eye image of the user, extracting eye feature information from the eye image, and determining the gaze information from the eye feature information.
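The three-step gaze pipeline above (eye image, then eye features, then gaze information) might be sketched as below, with the pupil centroid standing in for the eye feature information. A production system would use a proper eye tracker or a pupil-center/corneal-reflection model; every name here is illustrative:

```python
# Hypothetical gaze pipeline: grayscale eye image -> pupil centroid ->
# coarse gaze information (gaze point, gaze vector, left/right direction).
def extract_eye_features(eye_image):
    """Pupil centroid over dark pixels (value < 64) of a grayscale image
    given as a list of rows."""
    dark = [(x, y) for y, row in enumerate(eye_image)
                   for x, v in enumerate(row) if v < 64]
    if not dark:
        raise ValueError("no pupil found")
    n = len(dark)
    return (sum(x for x, _ in dark) / n, sum(y for _, y in dark) / n)

def gaze_from_features(center, image_size):
    """Map the pupil centroid to gaze information relative to image center."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    dx, dy = center[0] - cx, center[1] - cy
    return {
        "gaze_point": center,
        "gaze_vector": (dx, dy),
        "direction": "left" if dx < 0 else "right",
    }
```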
A control instruction is then generated from the gaze information and the semantic information. The control instruction may include: controlling the photographing module to take a photograph, controlling the camera module to record video, or controlling the data processing module to acquire and process sound. The gaze information and semantic information are analyzed to generate the control instruction, which is sent to the information interaction subsystem for storage.
The corresponding operation is executed according to the control instruction. When the semantic information in the user's voice collected by the speech recognition subsystem is photographing information, the user's gaze information is obtained and the camera is controlled to turn toward the gaze direction and take a photograph. For example, during driving, a driver sees beautiful scenery far outside the vehicle and wants to record it, but cannot shoot manually because he is driving. The driver looks out of the window and says "photograph the scenery I am looking at"; the photographing module in the image processing subsystem then generates a control instruction to photograph the scenery outside the window according to the user's gaze direction and semantic information, and the vehicle control module controls the camera to photograph the scenery outside the vehicle window according to the instruction.
Likewise, when the semantic information in the user's voice collected by the speech recognition module is video-recording information, the user's gaze information is obtained and the camera module controls the camera to turn toward the gaze direction and record. For example, a driver sees a wonderful performance outside the vehicle and wants to record it, but cannot do so manually while driving. The driver looks out of the window and says "record what I am looking at"; the camera module then generates a control instruction to record according to the user's gaze direction and semantic information, and the vehicle control module controls the camera to record the performance outside the window. In this embodiment, the user's gaze information and semantic information are obtained first, a control instruction is then generated from them, and finally the corresponding operation is executed according to the instruction.
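The fusion step in these two examples, combining a recognized action with the gaze direction into a single camera instruction, can be sketched as follows; all field names are illustrative, not the patent's:

```python
# Hypothetical fusion of semantic information (the recognized action) with
# gaze information (where the driver is looking) into one camera instruction.
def generate_control_instruction(gaze, semantic_action):
    """Return a camera instruction for photo/video intents, else None."""
    if semantic_action not in ("take_photo", "record_video"):
        return None  # no camera-related intent recognized
    return {
        "action": semantic_action,
        "pan_to": gaze.get("gaze_vector"),  # turn camera toward gaze direction
        "store_to": "information_interaction_subsystem",
    }
```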
The voice interaction subsystem can also be used to operate the system, for example to dial a number or select a radio station; operation by touching the central control display screen is also possible.
The multi-mode human-vehicle interaction system provided by the embodiments of the invention generates a control instruction according to the user's gaze information and semantic information and executes the corresponding operation, fusing the user's multiple modalities to realize human-computer interaction. During driving, the display screen need not be touched: operation is performed only through voice and physical keys, the result is displayed on the HUD display screen and the central control display screen, and the driver sees the result through the HUD without watching the central control display screen, avoiding distraction while driving.
The above-described embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope. Various modifications and improvements made by those skilled in the art to the technical solutions of the invention without departing from its spirit fall within the protection scope defined by the claims.

Claims (10)

1. A multimodal human-vehicle interaction system, comprising:
a speech recognition subsystem, for acquiring semantic information and recognizing it;
an image processing subsystem, for acquiring image input signals, the image input signals comprising a face image signal and a pupil image signal;
a steering wheel control subsystem, for sending control instructions through physical interaction to complete the corresponding operation items;
an information interaction subsystem, for executing received instruction information and carrying out the corresponding operations; and
a HUD display subsystem, for displaying the interaction information;
wherein the speech recognition subsystem is connected to the image processing subsystem and the information interaction subsystem respectively, the image processing subsystem and the steering wheel control subsystem are each connected to the information interaction subsystem, and the information interaction subsystem is connected to the HUD display subsystem.
2. The multimodal human-vehicle interaction system of claim 1, wherein the speech recognition subsystem comprises a data processing module whose output is connected to the input of the information interaction subsystem and which is configured to generate a control instruction according to the semantic information and send it to the information interaction subsystem.
3. The multimodal human-vehicle interaction system according to claim 2, wherein the speech recognition subsystem further comprises a noise reduction module for reducing noise in the semantic information.
4. The multi-modal human-vehicle interaction system of claim 1, wherein the image processing subsystem comprises a photographing module, which performs photographing operations and acquires the driver's face image signal, and a camera module, which performs video capture and collects the driver's pupil image signal.
5. The multimodal human-vehicle interaction system of claim 4, wherein position information of what has attracted the driver's attention can be captured based on the face image signal and the pupil image signal, and an image of that position is photographed on an instruction from the speech recognition subsystem and sent to the HUD display subsystem for storage.
6. The multimodal human-vehicle interaction system of claim 1, wherein the steering wheel control subsystem is mounted on a steering wheel and comprises four direction buttons and a confirmation button, through which the information interaction subsystem is controlled.
7. The multimodal human-vehicle interaction system of claim 6, wherein the steering wheel control subsystem further comprises an analog-to-digital conversion module, and the analog-to-digital conversion module is configured to convert an analog signal generated by the steering wheel control subsystem into a digital signal and send the digital signal to the information interaction subsystem.
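The analog-to-digital conversion module in claim 7 can be modeled as quantizing a button's analog voltage (for example, from a resistor-ladder keypad) into a digital code. The reference voltage and resolution below are assumed values, not specified in the patent.

```python
# Illustrative ADC model: quantize a voltage in [0, vref] to an n-bit code.
def adc_convert(voltage: float, vref: float = 3.3, bits: int = 10) -> int:
    """Convert an analog voltage to an unsigned digital code."""
    levels = (1 << bits) - 1                    # 1023 levels for a 10-bit ADC
    clamped = min(max(voltage, 0.0), vref)      # clip out-of-range inputs
    return round(clamped * levels / vref)

print(adc_convert(0.0))   # 0
print(adc_convert(3.3))   # 1023
print(adc_convert(1.65))  # mid-scale, 512
```

The resulting code would then be sent to the information interaction subsystem as the claim describes.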
8. The multimodal human-vehicle interaction system of claim 1, wherein the information interaction subsystem is connected to the HUD display subsystem via Bluetooth, and the response information of the information interaction subsystem is shown on the HUD display subsystem in a differentiated manner.
9. The multimodal human-vehicle interaction system of claim 1, further comprising a communication module, wherein the communication module supports wired communication and wireless communication, the wireless communication comprising Bluetooth communication or WiFi communication.
10. The multimodal human-vehicle interaction system of claim 1, further comprising a central control display screen connected to the information interaction subsystem and to the HUD display subsystem, wherein corresponding operations are selected by touching the central control display screen and are presented on the HUD display subsystem.
CN202210642131.4A 2022-06-08 2022-06-08 Multi-mode human-vehicle interaction system Pending CN114987364A (en)


Publications (1)

Publication Number Publication Date
CN114987364A true CN114987364A (en) 2022-09-02

Family

ID=83033011



Similar Documents

Publication Publication Date Title
US9235269B2 (en) System and method for manipulating user interface in vehicle using finger valleys
US8886399B2 (en) System and method for controlling a vehicle user interface based on gesture angle
US8532871B2 (en) Multi-modal vehicle operating device
US20180046255A1 (en) Radar-based gestural interface
KR101490908B1 (en) System and method for providing a user interface using hand shape trace recognition in a vehicle
KR101438615B1 (en) System and method for providing a user interface using 2 dimension camera in a vehicle
CN113302664A (en) Multimodal user interface for a vehicle
US20140168068A1 (en) System and method for manipulating user interface using wrist angle in vehicle
US20180150133A1 (en) Glasses-type terminal and control method therefor
CN105867640A (en) Smart glasses and control method and control system of smart glasses
KR101698102B1 (en) Apparatus for controlling vehicle and method for controlling the same
CN111638786B (en) Display control method, device, equipment and storage medium of vehicle-mounted rear projection display system
CN107548483B (en) Control method, control device, system and motor vehicle comprising such a control device
CN210573658U (en) Vehicle-mounted eye interaction device
CN114987364A (en) Multi-mode human-vehicle interaction system
WO2023036230A1 (en) Execution instruction determination method and apparatus, device, and storage medium
KR20140079025A (en) Method for providing a user interface using leg gesture recognition in a vehicle
CN115793852A (en) Method for acquiring operation indication based on cabin area, display method and related equipment
WO2023272629A1 (en) Interface control method, device, and system
CN105974586A (en) Intelligent glasses and operating method and system therefor
CN112446695A (en) Data processing method and device
CN115033133B (en) Progressive information display method and device, electronic equipment and storage medium
KR101678088B1 (en) Vehicle, and controlling method for vehicle
CN116710979A (en) Man-machine interaction method, system and processing device
EP4328765A1 (en) Method and apparatus for recommending vehicle driving strategy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination