CN110857067A - Human-vehicle interaction device and human-vehicle interaction method - Google Patents

Human-vehicle interaction device and human-vehicle interaction method

Info

Publication number
CN110857067A
CN110857067A (application CN201810974398.7A)
Authority
CN
China
Prior art keywords
information
module
human
dimensional
installation
Prior art date
Legal status
Granted
Application number
CN201810974398.7A
Other languages
Chinese (zh)
Other versions
CN110857067B (en)
Inventor
牛凤海
张冀青
胡俊伟
屠大维
赵威驰
赵其杰
李豪杰
Current Assignee
SAIC Motor Corp Ltd
Beijing Transpacific Technology Development Ltd
Original Assignee
SAIC Motor Corp Ltd
Beijing Transpacific Technology Development Ltd
Priority date
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd, Beijing Transpacific Technology Development Ltd
Priority to CN201810974398.7A
Publication of CN110857067A
Application granted
Publication of CN110857067B
Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/037 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
    • B60R16/0373 Voice control

Abstract

The application discloses a human-vehicle interaction device and a human-vehicle interaction method. The device comprises a multifunctional installation adjusting module, a visual voice acquisition module and an information processing module; the multifunctional installation adjusting module is connected with the visual voice acquisition module, and the visual voice acquisition module is connected with the information processing module. The multifunctional installation adjusting module installs and adjusts the human-vehicle interaction device and the visual voice acquisition module according to the driving posture of the user; the visual voice acquisition module acquires the user's head three-dimensional information, eye two-dimensional image information and voice information and sends them to the information processing module; the information processing module fuses this information to determine the user's sight direction, the corresponding gazing object and auxiliary information, and generates a human-vehicle interaction instruction. In this way, the sight channel serves as the main channel and issues human-vehicle interaction instructions in main-auxiliary cooperation with the other channels; vehicle-mounted equipment is controlled to achieve human-vehicle interaction, which improves the reliability and naturalness of the interaction and reduces the dependence on any single independent channel.

Description

Human-vehicle interaction device and human-vehicle interaction method
Technical Field
The application relates to the technical field of vehicle driving, in particular to a human-vehicle interaction device and a human-vehicle interaction method.
Background
At the present stage, the automobile remains an important means of transportation in daily life. With the rapid development of vehicle electronics and information technology, and with improvements in driving operability, safety and the like, users' expectations for comfort, convenience and intelligent operation when driving a vehicle have grown ever higher.
In the prior art, as voice technology, touch-screen technology and the like have gradually been applied to automobiles, a user can control vehicle driving through voice input, limb operation and similar modes, making the interaction between person and vehicle simpler and more natural.
However, the inventors have found through research that most prior-art human-vehicle interaction modes, such as voice input and limb operation, are unidirectional control modes and not natural enough; moreover, voice input, limb operation and the like each work as a single, independent control channel, so their accuracy and reliability are low and the interactivity between the person and the vehicle is poor.
Disclosure of Invention
The technical problem to be solved by the application is to provide a human-vehicle interaction device and a human-vehicle interaction method in which the sight channel serves as the main channel and issues human-vehicle interaction instructions in main-auxiliary cooperation with other channels; vehicle-mounted equipment is controlled to achieve human-vehicle interaction, which improves the reliability and naturalness of the interaction and reduces the dependence on any single independent channel.
In a first aspect, an embodiment of the present application provides a human-vehicle interaction device, which includes:
the system comprises a multifunctional installation adjusting module, a visual voice acquisition module and an information processing module; the multifunctional installation adjusting module is connected with the visual voice acquisition module, and the visual voice acquisition module is connected with the information processing module;
the multifunctional installation adjusting module is used for installing and adjusting the human-vehicle interaction device and the visual voice acquisition module according to the driving posture of a user;
the visual voice acquisition module is used for acquiring head three-dimensional information, eye two-dimensional image information and voice information of a user and sending the information to the information processing module;
the information processing module is used for fusing the head three-dimensional information and the eye two-dimensional image information to determine the sight direction of the user and the corresponding gazing object, recognizing the voice information to determine auxiliary information, and generating a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
Preferably, the visual voice acquisition module comprises a vision acquisition sub-module and a voice acquisition sub-module; the vision acquisition sub-module comprises a first infrared camera, a second infrared camera, a color camera and a textured laser light source, all installed at the same vertical height, with the first infrared camera and the second infrared camera separated by a preset distance and their optical axes kept parallel; the voice acquisition sub-module comprises a microphone and a loudspeaker.
Preferably, the information processing module is specifically configured to obtain head three-dimensional posture information according to the head three-dimensional information, obtain eye feature information according to the eye two-dimensional image information, determine the gaze direction and the gaze object by fusing the head three-dimensional posture information and the eye feature information, identify the voice information, determine auxiliary information, and generate a human-vehicle interaction instruction according to the gaze direction, the gaze object, and the auxiliary information.
Preferably, the head three-dimensional posture information comprises an x displacement, a y displacement, a z displacement, an α rotation angle, a β rotation angle and a γ rotation angle, and the eye feature information comprises pupil feature information, iris feature information, eyelid feature information and/or eye-corner feature information.
Preferably, the information processing module comprises a communication interface, and the information processing module is further configured to send the human-vehicle interaction instruction to a vehicle electronic control unit by using the communication interface.
Preferably, the multifunctional installation and adjustment module comprises a device installation and adjustment submodule and a visual voice installation and adjustment submodule, and the visual voice installation and adjustment submodule is connected with the visual voice acquisition module and the device installation and adjustment submodule;
the device mounting and adjusting sub-module is used for mounting and adjusting the orientation and the posture of the human-vehicle interaction device according to the driving posture;
and the visual voice installation and adjustment sub-module is used for installing and adjusting the visual voice acquisition module according to the driving posture.
Preferably, the visual voice installation and adjustment sub-module comprises a first knob and a visual voice installation and adjustment support, and the first knob is installed on the visual voice installation and adjustment support;
the first knob is used for responding to the rotation operation of a user to drive the visual voice installation and adjustment bracket;
the visual voice installation and adjustment support is used for adjusting the height of the visual voice acquisition module to adapt to the driving posture according to the rotation of the first knob.
Preferably, the device installation adjustment sub-module comprises a rotating platform, a shell, a universal joint base and a bottom sucker; the rotating table is connected with the shell, the shell is connected with the universal joint base, and the universal joint base is connected with the bottom sucker;
the rotating platform is used for adjusting the direction of the human-vehicle interaction device through horizontal rotation according to the driving posture;
the shell is used for being meshed with the rotating platform through a gear;
the universal joint base is used for adjusting the space attitude of the human-vehicle interaction device through pitching, yawing and/or side-turning rotation according to the driving attitude;
and the bottom sucker is used for being adsorbed on a vehicle instrument desk and fixedly installed with the device.
Preferably, the multifunctional installation and adjustment module further comprises an auxiliary light source module, and the auxiliary light source module is connected with the multifunctional installation and adjustment module and the information processing module;
the auxiliary light source module is used for providing auxiliary light source illumination to adapt to illumination change in the driving environment of the user;
correspondingly, the multifunctional installation and adjustment module further comprises an auxiliary light source installation and adjustment submodule which is connected with the auxiliary light source module and the device installation and adjustment submodule;
and the auxiliary light source installation and adjustment sub-module is used for installing and adjusting the auxiliary light source module according to the driving posture of a user.
Preferably, the auxiliary light source mounting and adjusting submodule comprises a second knob and an auxiliary light source mounting and adjusting bracket, and the second knob is mounted on the auxiliary light source mounting and adjusting bracket;
the second knob is used for responding to the rotation operation of a user to drive the auxiliary light source to install and adjust the bracket;
and the auxiliary light source installation and adjustment support is used for adjusting the height of the auxiliary light source module to adapt to the driving posture according to the rotation of the second knob.
In a second aspect, an embodiment of the present application provides a human-vehicle interaction method, which is applied to the human-vehicle interaction device in any one of the above first aspects, and the method includes:
acquiring head three-dimensional information, eye two-dimensional image information and voice information of a user;
fusing and determining the sight direction of the user and the corresponding gazing object according to the head three-dimensional information and the eye two-dimensional image information;
recognizing the voice information and determining auxiliary information;
and generating a man-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
Preferably, the fusion determination of the gaze direction of the user and the corresponding gaze object according to the head three-dimensional information and the eye two-dimensional image information includes:
obtaining head three-dimensional posture information according to the head three-dimensional information;
obtaining eye feature information according to the eye two-dimensional image information;
and fusing the head three-dimensional posture information and the eye feature information to determine the sight line direction and the gazing object.
Preferably, the head three-dimensional posture information comprises an x displacement, a y displacement, a z displacement, an α rotation angle, a β rotation angle and a γ rotation angle, and the eye feature information comprises pupil feature information, iris feature information, eyelid feature information and/or eye-corner feature information.
Preferably, the method further comprises the following steps:
and sending the human-vehicle interaction instruction to a vehicle electronic control unit.
Compared with the prior art, the present application has the following advantages:
With the technical scheme of the embodiments of the application, the human-vehicle interaction device comprises a multifunctional installation adjusting module, a visual voice acquisition module and an information processing module; the multifunctional installation adjusting module is connected with the visual voice acquisition module, and the visual voice acquisition module is connected with the information processing module. The multifunctional installation adjusting module installs and adjusts the human-vehicle interaction device and the visual voice acquisition module according to the driving posture of the user; the visual voice acquisition module acquires the user's head three-dimensional information, eye two-dimensional image information and voice information and sends them to the information processing module; the information processing module fuses the head three-dimensional information and the eye two-dimensional image information to determine the user's sight direction and the corresponding gazing object, recognizes the voice information to determine auxiliary information, and generates a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
In this way, the visual channel serves as both a perception channel and an effect channel: visual information obtained through it determines the sight direction and the gazing object, while the voice channel supplies the auxiliary information. The sight channel, as the main channel, issues human-vehicle interaction instructions in main-auxiliary cooperation with the other channels, and the vehicle-mounted equipment is controlled to achieve human-vehicle interaction, which improves the reliability and naturalness of human-vehicle interaction and reduces the dependence on any single independent channel.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a human-vehicle interaction device provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of another human-vehicle interaction device provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another human-vehicle interaction device provided in the embodiment of the present application;
FIG. 4 is a schematic structural diagram of another human-vehicle interaction device provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of a human-vehicle interaction method according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a system framework related to an application scenario in an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, they are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are only a part, not all, of the embodiments of the present application; all other embodiments derived by those skilled in the art without creative effort shall fall within the protection scope of the present application.
At present, with the gradual application of voice technology, touch-screen technology and the like to the automobile, a user can control vehicle driving through voice input, limb operation and similar modes. For example, the user may generate an interactive instruction by pressing with a finger or touching a button, a key or a touch screen, or by voice input. Although touch screens, voice input and the like make the interactive communication between the user and the vehicle simple and natural, the inventors have found through research that most such human-vehicle interaction modes are unidirectional control modes and not natural enough; each works as a single, independent control channel, so accuracy and reliability are low and the interactivity between the person and the vehicle is poor. In addition, the visual channel is used only as a perception channel, so its optimal utilization is not realized.
In order to solve this problem, in the embodiments of the application, the human-vehicle interaction device comprises a multifunctional installation adjusting module, a visual voice acquisition module and an information processing module; the multifunctional installation adjusting module is connected with the visual voice acquisition module, and the visual voice acquisition module is connected with the information processing module. The multifunctional installation adjusting module installs and adjusts the human-vehicle interaction device and the visual voice acquisition module according to the driving posture of the user; the visual voice acquisition module acquires the user's head three-dimensional information, eye two-dimensional image information and voice information and sends them to the information processing module; the information processing module fuses the head three-dimensional information and the eye two-dimensional image information to determine the user's sight direction and the corresponding gazing object, recognizes the voice information to determine auxiliary information, and generates a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
In this way, the visual channel serves as both a perception channel and an effect channel: visual information obtained through it determines the sight direction and the gazing object, while the voice channel supplies the auxiliary information. The sight channel, as the main channel, issues human-vehicle interaction instructions in main-auxiliary cooperation with the other channels, and the vehicle-mounted equipment is controlled to achieve human-vehicle interaction, which improves the reliability and naturalness of human-vehicle interaction and reduces the dependence on any single independent channel.
The following describes in detail a specific implementation manner of the human-vehicle interaction device and the human-vehicle interaction method in the embodiments of the present application by way of embodiments with reference to the accompanying drawings.
Exemplary device
Referring to fig. 1, a schematic structural diagram of a human-vehicle interaction device in an embodiment of the present application is shown. In this embodiment, the apparatus may specifically include:
the system comprises a multifunctional installation adjusting module 101, a visual voice acquisition module 102 and an information processing module 103; the multifunctional installation and adjustment module 101 is connected with the visual voice acquisition module 102, and the visual voice acquisition module 102 is connected with the information processing module 103;
the multifunctional installation adjusting module 101 is used for installing and adjusting the human-vehicle interaction device and the visual voice acquisition module 102 according to the driving posture of a user;
the visual voice acquisition module 102 is configured to acquire three-dimensional head information, two-dimensional eye image information, and voice information of a user and send the information to the information processing module 103;
the information processing module 103 is configured to fuse the head three-dimensional information and the eye two-dimensional image information to determine the sight direction of the user and the corresponding gazing object, recognize the voice information to determine auxiliary information, and generate a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
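The data flow between the three modules described above can be sketched as follows. This is an illustrative Python sketch only; the `Observation` container and the `process` function, together with their field and parameter names, are assumptions for the example and do not appear in the patent:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    head_3d: dict   # head three-dimensional information (e.g. depth data or landmarks)
    eye_2d: bytes   # eye two-dimensional image information
    speech: str     # raw utterance from the voice channel

def process(obs, fuse_gaze, recognize_speech):
    """Information-processing step: fuse the visual inputs to get the sight
    direction and gazing object, recognize speech to get auxiliary information,
    and combine the three into an interaction instruction."""
    direction, gazing_object = fuse_gaze(obs.head_3d, obs.eye_2d)
    auxiliary = recognize_speech(obs.speech)  # e.g. "yes" / "no"
    return {"direction": direction, "object": gazing_object, "auxiliary": auxiliary}
```

The two callables stand in for the fusion and recognition algorithms, which the patent does not specify in detail.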
It should be noted that, in order for the device to adapt to the user's driving posture, and for the visual voice acquisition module 102 to adapt to the user's head posture and collect head behavior and the like, the multifunctional installation and adjustment module 101 must provide both the function of installing the visual voice acquisition module 102 and the human-vehicle interaction device and the function of adjusting their orientation and posture. Therefore, in some implementations of this embodiment, the multifunctional installation adjustment module 101 comprises a device installation adjustment submodule and a visual voice installation adjustment submodule; the visual voice installation adjustment submodule is connected with the visual voice acquisition module 102 and the device installation adjustment submodule. The device installation adjustment submodule installs the device and adjusts its orientation and posture according to the driving posture; the visual voice installation adjustment submodule installs and adjusts the visual voice acquisition module 102 according to the driving posture.
In order to fix the whole device on a vehicle and adapt it to the user's driving posture, the device must support rotation to change orientation, steering to adjust posture, and fixed installation. Therefore, in some implementations of this embodiment, the device installation adjustment submodule first has a bottom suction cup for fixing the device, so that it is mounted on the vehicle instrument desk by the suction force of the cup. The suction cup is connected to a universal-joint base whose three degrees of freedom (pitch, yaw and roll) allow the device to be freely turned to a suitable spatial posture according to the user's driving posture. Finally, the base is connected to a housing that carries a gear-engaged rotating table, which rotates horizontally so that the device can be turned to a suitable orientation according to the user's driving posture.
Since the device installation adjustment submodule adjusts the orientation and spatial posture of the whole device, the visual voice installation adjustment submodule is responsible for adjusting the height of the visual voice acquisition module 102. Therefore, in some implementations of this embodiment, the visual voice installation adjustment submodule has a visual voice installation and adjustment support that raises and lowers the visual voice acquisition module 102, and the support carries a rotatable first knob; by turning the first knob the user drives the support up or down and thereby adjusts the visual voice acquisition module 102 to a height suited to the driving posture.
It can be understood that, in practical applications, the visual voice acquisition module 102 employs two separate channels in cooperation: the visual channel serves as the main channel and the voice channel as the auxiliary channel. Accordingly, the visual voice acquisition module 102 can be divided into a vision acquisition sub-module and a voice acquisition sub-module.
If the vision acquisition submodule used a monocular camera, it could only obtain two-dimensional eye images, and a sight direction estimated from two-dimensional eye images alone is not accurate enough to be applied directly to human-vehicle interaction control of vehicle driving. Therefore, in some implementations of this embodiment, two infrared cameras are used, mounted at the same vertical height a fixed distance apart with their optical axes parallel, forming a binocular vision system. In addition, a color camera and a textured laser light source are added: the color camera provides a color image with richer information for subsequently determining the sight direction, while the textured laser light source yields images with abundant texture that facilitate the subsequent analysis. Fusing these sources makes the estimated sight direction of the user more accurate and suitable for human-vehicle interactive control of vehicle driving.
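With the two infrared cameras mounted a fixed baseline apart and their optical axes parallel, the depth of a feature point follows from standard binocular triangulation: depth = focal length × baseline / disparity. A minimal sketch, assuming rectified images and a focal length expressed in pixels (the function name and parameters are illustrative, not from the patent):

```python
def stereo_depth(f_px: float, baseline_m: float, x_left: float, x_right: float) -> float:
    """Depth of a point seen by two parallel-axis cameras.

    f_px: focal length in pixels; baseline_m: separation between the two
    infrared cameras (the preset distance); x_left / x_right: the point's
    horizontal pixel coordinate in the left and right images.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return f_px * baseline_m / disparity
```

For example, with an 800 px focal length, a 6 cm baseline and a 12 px disparity, the point lies 4 m away. This is the geometric principle that lets the binocular pair recover head three-dimensional information where a monocular camera could not.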
The voice acquisition submodule mainly acquires feedback voice information and the like. Thus, in some implementations of this embodiment, the voice acquisition submodule may include, for example, a common microphone and speaker.
For example, fig. 2 shows a structural diagram of another human-vehicle interaction device. On the basis of fig. 1, the multifunctional installation adjustment module 101 comprises a device installation adjustment sub-module 201 and a visual voice installation adjustment sub-module 202, and the visual voice installation adjustment sub-module 202 is connected to the visual voice acquisition module 102 and the device installation adjustment sub-module 201. The device installation adjustment sub-module 201 comprises a rotating table 203, a housing 204, a gimbal base 205 and a bottom suction cup 206; the rotating table 203 is connected to the housing 204, the housing 204 to the gimbal base 205, and the gimbal base 205 to the bottom suction cup 206. The visual voice installation adjustment sub-module 202 comprises a visual voice installation and adjustment bracket 208 and a first knob 207 mounted on the bracket. The visual voice acquisition module 102 comprises a vision acquisition sub-module 209 and a voice acquisition sub-module 210; the vision acquisition sub-module 209 comprises a first infrared camera 211, a second infrared camera 212, a color camera 213 and a textured laser light source 214, all installed at the same vertical height, with the first infrared camera 211 and the second infrared camera 212 separated by a preset distance and their optical axes kept parallel. The voice acquisition sub-module 210 comprises a microphone 215 and a speaker 216.
It should be noted that the head three-dimensional information is acquired to represent the head posture, and the eye two-dimensional image information is acquired to represent the eye features; on the basis of the prior art, the eye features and the head posture are fused and judged comprehensively to obtain a more accurate sight direction of the user, from which the gazing object corresponding to that direction can be determined. The voice information is collected to determine the user's auxiliary information, i.e., information that assists in confirming the gazing object. After the sight direction, the gazing object and the auxiliary information are determined, the sight direction and the gazing object are matched with the auxiliary information to obtain a human-vehicle interaction instruction. Therefore, in some implementations of this embodiment, the information processing module 103 is specifically configured to obtain head three-dimensional posture information from the head three-dimensional information, obtain eye feature information from the eye two-dimensional image information, fuse the two to determine the sight direction and the gazing object, recognize the voice information to determine the auxiliary information, and generate a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
For example, a head rigid body model may be established from the head three-dimensional information and characterized by three rotation angles and three displacements; eye features may be extracted from the eye two-dimensional image information, and common eye features include pupil features, iris features, eyelid features, eye corner features and the like.
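The fusion step can be illustrated with a minimal sketch. It assumes the head posture contributes yaw and pitch angles and that the eye features have already been converted into eye-in-head angles; the function name, coordinate convention and additive combination are illustrative assumptions, not the patent's actual algorithm.

```python
import math

def sight_direction(head_yaw, head_pitch, eye_yaw, eye_pitch):
    """Fuse head posture angles with eye-in-head angles into a unit
    sight-direction vector (x right, y down, z forward).

    All angles are in radians. The additive combination is a
    simplification: the eye angles are treated as small offsets
    measured in the head coordinate frame.
    """
    yaw = head_yaw + eye_yaw        # rotation about the vertical axis
    pitch = head_pitch + eye_pitch  # rotation about the lateral axis
    return (
        math.sin(yaw) * math.cos(pitch),  # x component
        -math.sin(pitch),                 # y component (looking up is negative)
        math.cos(yaw) * math.cos(pitch),  # z component (straight ahead)
    )

# Head and eyes at rest: the vector points straight ahead along +z.
forward = sight_direction(0.0, 0.0, 0.0, 0.0)
```

In a real system the resulting ray would be intersected with a calibrated map of in-cabin surfaces to pick out the gazing object.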
The auxiliary information determined by recognizing the voice information may be, for example, "yes" or "no", which serves as confirmation information for the gazing object.
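The matching of the gazing object with the voice confirmation can be sketched as follows; the instruction format and the "activate" action name are hypothetical examples, not a protocol defined by the patent.

```python
def generate_instruction(gazing_object, auxiliary_info):
    """Match the gazing object with the recognized auxiliary (voice)
    information to form a human-vehicle interaction instruction.

    Returns None when there is no gazed target, when the user says
    "no", or when the utterance is not a recognized confirmation word.
    """
    if gazing_object is None:
        return None
    word = auxiliary_info.strip().lower()
    if word == "yes":  # user confirms the gazed target
        return {"target": gazing_object, "action": "activate"}
    return None        # "no" or anything unrecognized cancels

print(generate_instruction("sunroof_switch", "Yes"))
# → {'target': 'sunroof_switch', 'action': 'activate'}
```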
It should be noted that, after the human-vehicle interaction instruction is generated, in order to enable human-vehicle interaction and to control vehicle driving based on the user's gaze and voice input, the information processing module 103 needs to provide a communication interface so as to communicate with the vehicle electronic control unit through that interface and the vehicle-mounted communication interface. Therefore, in some implementations of this embodiment, the information processing module 103 includes a communication interface, and the information processing module 103 is further configured to send the human-vehicle interaction instruction to the vehicle electronic control unit through the communication interface. By combining with the vehicle-mounted electronic human-machine interface through the communication interface, the device enables the user to control vehicle driving through the sight formed by head and eye behaviors together with voice input. The communication interface of the information processing module 103 may be, for example, a USB interface, a Bluetooth interface or a LIN bus interface, and the vehicle-mounted communication interface may be, for example, a CAN bus interface or a LIN bus interface.
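Before transmission over a classical CAN bus, an instruction must be serialized into the fixed 8-byte data field of a CAN frame. The sketch below shows one possible encoding; the object/action code tables and the field layout are illustrative assumptions (a real vehicle would use the OEM's message definitions), not part of the patent.

```python
import struct

# Hypothetical code tables -- real values would come from the OEM's DBC files.
OBJECT_CODES = {"sunroof_switch": 0x01, "navigation_screen": 0x02}
ACTION_CODES = {"activate": 0x01, "deactivate": 0x00}

def encode_can_payload(target, action, yaw_deg, pitch_deg):
    """Pack an instruction into the 8-byte CAN data field:
    1 byte object code, 1 byte action code, two signed 16-bit sight
    angles in hundredths of a degree, and 2 padding bytes (big-endian).
    """
    return struct.pack(
        ">BBhh2x",
        OBJECT_CODES[target],
        ACTION_CODES[action],
        int(round(yaw_deg * 100)),
        int(round(pitch_deg * 100)),
    )

payload = encode_can_payload("sunroof_switch", "activate", -12.5, 3.25)
print(len(payload))  # → 8 (a classical CAN frame carries at most 8 data bytes)
```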
Combining the description of the information processing module and the communication interface, this embodiment uses the information processing module to extract and fuse the multi-channel information to generate the human-vehicle interaction instruction, and then sends the instruction to the vehicle electronic control unit through the communication interface, thereby realizing human-vehicle interaction control of vehicle driving.
It should be further noted that the illumination in the user's driving environment is likely to change while the vehicle is being driven, and such illumination changes affect the acquisition of the user's head three-dimensional information and eye two-dimensional image information by the visual acquisition sub-module in the visual voice acquisition module 102. To solve this problem, in some implementations of this embodiment, an auxiliary light source module connected to the multifunctional installation adjustment module 101 and the information processing module 103 is added to the device. The auxiliary light source module adapts well to illumination changes in the user's driving environment, preventing them from degrading the quality of the acquired head three-dimensional information and eye two-dimensional image information. The auxiliary light source in the auxiliary light source module and the textured laser light source in the visual acquisition sub-module can work alternately to obtain different kinds of image information for more accurate feature analysis.
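The alternating operation of the two sources can be sketched as a simple frame schedule. The source names and the strict 1:1 alternation are illustrative assumptions; a real device could use other duty cycles or triggering schemes.

```python
from itertools import cycle

def build_frame_schedule(num_frames):
    """Alternate the textured laser source (structured-light frames for
    head three-dimensional information) with the auxiliary light source
    (evenly lit frames for eye two-dimensional images)."""
    sources = cycle(["textured_laser", "auxiliary_light"])
    return [next(sources) for _ in range(num_frames)]

print(build_frame_schedule(4))
# → ['textured_laser', 'auxiliary_light', 'textured_laser', 'auxiliary_light']
```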
Accordingly, to adapt the auxiliary light source module to the user's head posture, the multifunctional installation adjustment module 101 needs both to mount the auxiliary light source module and to adjust its orientation and posture. The multifunctional installation adjustment module 101 should therefore have an auxiliary light source installation adjustment sub-module connected to the auxiliary light source module, so that the auxiliary light source module can be installed and adjusted to a suitable orientation and posture according to the user's driving posture.
For example, as shown in fig. 3, in another structural schematic diagram of a device for generating a driving intention instruction, on the basis of fig. 2, an auxiliary light source module 301 is added. The auxiliary light source module 301 is connected to the multifunctional installation adjustment module 101 and the information processing module 103, and is configured to provide auxiliary illumination to adapt to illumination changes in the user's driving environment. The multifunctional installation adjustment module 101 further includes an auxiliary light source installation adjustment sub-module 302, which is connected to the auxiliary light source module 301 and the device installation adjustment sub-module 201, and is used to install and adjust the auxiliary light source module 301 according to the user's driving posture.
Similarly, it should be noted that, since the device installation adjustment sub-module already adjusts the orientation and spatial posture of the whole device, the auxiliary light source installation adjustment sub-module only needs to adjust the height of the auxiliary light source module. Therefore, in some implementations of this embodiment, the auxiliary light source installation adjustment sub-module should have an auxiliary light source installation adjustment bracket for adjusting the height of the auxiliary light source module by lifting, and a rotatable second knob should be mounted on that bracket, so that the user can raise or lower the bracket by rotating the second knob, thereby adjusting the auxiliary light source module to a height suited to the user's driving posture.
For example, as shown in fig. 4, in another structural schematic diagram of a device for generating a driving intention instruction, on the basis of fig. 3, the auxiliary light source installation adjustment sub-module 302 includes a second knob 401 and an auxiliary light source installation adjustment bracket 402, and the second knob 401 is mounted on the auxiliary light source installation adjustment bracket 402.
As described above, the multifunctional installation adjustment module is provided with multiple installation adjustment sub-modules that separately and precisely adjust the visual voice acquisition module, the auxiliary light source module, the human-vehicle interaction device itself and the like, so as to suit users of different heights, different driving postures and so on.
Through the implementations provided in this embodiment, the human-vehicle interaction device includes a multifunctional installation adjustment module, a visual voice acquisition module and an information processing module; the multifunctional installation adjustment module is connected with the visual voice acquisition module, and the visual voice acquisition module is connected with the information processing module. The multifunctional installation adjustment module installs and adjusts the human-vehicle interaction device and the visual voice acquisition module according to the user's driving posture. The visual voice acquisition module acquires the user's head three-dimensional information, eye two-dimensional image information and voice information, and sends them to the information processing module. The information processing module fuses the head three-dimensional information and the eye two-dimensional image information to determine the user's sight direction and the corresponding gazing object, recognizes the voice information to determine auxiliary information, and generates a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information. In this way, the visual channel serves as both a perception channel and an effector channel: visual information obtained through it determines the sight direction and the gazing object, while the voice channel provides the auxiliary information. With the sight channel as the primary channel cooperating with the other channels in a primary-auxiliary manner, human-vehicle interaction instructions are issued to control the vehicle-mounted equipment, which improves the reliability and naturalness of human-vehicle interaction and reduces the dependence on any single channel.
Exemplary method
Referring to fig. 5, a flowchart of a human-vehicle interaction method in an embodiment of the present application is shown. When applied to the human-vehicle interaction device described in the above embodiments, the method may include the following steps:
Step 501: acquiring head three-dimensional information, eye two-dimensional image information and voice information of a user.
Step 502: fusing the head three-dimensional information and the eye two-dimensional image information to determine the sight direction of the user and the corresponding gazing object.
As can be seen from the above device embodiments, the step 502 may include the following steps:
step 5021: obtaining head three-dimensional posture information according to the head three-dimensional information;
step 5022: obtaining eye feature information according to the eye two-dimensional image information;
step 5023: and fusing the head three-dimensional posture information and the eye feature information to determine the sight line direction and the gazing object.
According to the device embodiments, the head three-dimensional posture information includes an x displacement, a y displacement, a z displacement, an α rotation angle, a β rotation angle and a γ rotation angle, and the eye feature information includes pupil feature information, iris feature information, eyelid feature information and/or eye corner feature information.
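The six quantities (x, y, z, α, β, γ) define a rigid-body transform for the head. As a sketch, they can be assembled into a 4×4 homogeneous matrix; the Z-Y-X Euler convention used here is an assumption, since the patent does not fix one.

```python
import math

def head_pose_matrix(x, y, z, alpha, beta, gamma):
    """Build a 4x4 homogeneous transform from the head posture
    information: translation (x, y, z) plus Z-Y-X Euler rotations
    (alpha about z, beta about y, gamma about x), angles in radians.
    """
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    # R = Rz(alpha) @ Ry(beta) @ Rx(gamma), written out element by element
    return [
        [ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg, x],
        [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg, y],
        [-sb,     cb * sg,                cb * cg,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]
```

With all angles zero the rotation part is the identity and the last column carries the pure displacement, matching the intuition that the six values fully describe head position and orientation.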
Step 503: recognizing the voice information to determine auxiliary information.
Step 504: generating a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
As can be seen from the above apparatus embodiment, after step 504, for example, the method may further include: and sending the human-vehicle interaction instruction to a vehicle electronic control unit.
For example, one scenario of an embodiment of the present application is shown in fig. 6. The scene includes any of the devices for generating a driving intention instruction from the device embodiments, referred to for short as the human-vehicle interaction device 601, and an electronic control unit 602. After the human-vehicle interaction device 601 is adjusted to suit the user's driving posture, it acquires the user's head three-dimensional information, eye two-dimensional image information and voice information. The human-vehicle interaction device 601 then fuses the head three-dimensional information and the eye two-dimensional image information to determine the user's sight direction and the corresponding gazing object, and recognizes the voice information to determine auxiliary information. Finally, the human-vehicle interaction device 601 generates a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information, and sends it to the electronic control unit 602 through the CAN bus, so that the electronic control unit 602 controls vehicle driving according to the instruction. In this way, the user controls and drives the vehicle through sight and voice input, completing the human-vehicle interaction.
It is to be understood that, in the above application scenario, although the actions of the embodiment of the present application are described as being performed by the human-vehicle interaction device 601, the present application is not limited in terms of the subject of execution as long as the actions disclosed in the embodiment of the present application are performed. It should also be understood that the above scenario is only one scenario example provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
Through the implementations provided in this embodiment, head three-dimensional information, eye two-dimensional image information and voice information of a user are acquired; the user's sight direction and the corresponding gazing object are determined by fusing the head three-dimensional information and the eye two-dimensional image information; the voice information is recognized to determine auxiliary information; and a human-vehicle interaction instruction is generated according to the sight direction, the gazing object and the auxiliary information. In this way, the visual channel serves as both a perception channel and an effector channel: visual information obtained through it determines the sight direction and the gazing object, while the voice channel provides the auxiliary information. With the sight channel as the primary channel cooperating with the other channels in a primary-auxiliary manner, human-vehicle interaction instructions are issued to control the vehicle-mounted equipment, which improves the reliability and naturalness of human-vehicle interaction and reduces the dependence on any single channel.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been disclosed above by way of preferred embodiments, those skilled in the art can make numerous possible variations and modifications to the disclosed technical solution, or derive equivalent embodiments, using the methods and technical content disclosed above, without departing from the scope of the claimed solution. Therefore, any simple modification, equivalent change or refinement made to the above embodiments according to the technical essence of the present application, without departing from the content of the technical solution of the present application, still falls within the protection scope of the technical solution of the present application.

Claims (14)

1. A human-vehicle interaction device, comprising: the system comprises a multifunctional installation adjusting module, a visual voice acquisition module and an information processing module; the multifunctional installation adjusting module is connected with the visual voice acquisition module, and the visual voice acquisition module is connected with the information processing module;
the multifunctional installation adjusting module is used for installing and adjusting the human-vehicle interaction device and the visual voice acquisition module according to the driving posture of a user;
the visual voice acquisition module is used for acquiring head three-dimensional information, eye two-dimensional image information and voice information of a user and sending the information to the information processing module;
the information processing module is used for determining the sight direction and the corresponding gazing object of the user according to the head three-dimensional information and the eye two-dimensional image information in a fusion manner, recognizing the voice information to determine auxiliary information, and generating a human-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
2. The apparatus of claim 1, wherein the visual speech acquisition module comprises a visual acquisition sub-module and a speech acquisition sub-module; the vision acquisition sub-module comprises a first infrared camera, a second infrared camera, a color camera and a laser light source with texture, the first infrared camera, the second infrared camera, the color camera and the laser light source with texture are installed at the same vertical height, and the first infrared camera and the second infrared camera are separated by a preset distance to keep optical axes parallel; the voice acquisition submodule comprises a microphone and a loudspeaker.
3. The apparatus according to claim 1, wherein the information processing module is specifically configured to obtain head three-dimensional pose information from the head three-dimensional information, obtain eye feature information from the eye two-dimensional image information, determine the gaze direction and the gaze object by fusing the head three-dimensional pose information and the eye feature information, recognize the voice information to determine auxiliary information, and generate a human-vehicle interaction command from the gaze direction, the gaze object, and the auxiliary information.
4. The apparatus according to claim 3, wherein the head three-dimensional posture information includes an x displacement amount, a y displacement amount, a z displacement amount, an α rotation angle, a β rotation angle, and a γ rotation angle, and the eye characteristic information includes pupil characteristic information, iris characteristic information, eyelid characteristic information, and/or eye corner characteristic information.
5. The device of claim 1, wherein the information processing module comprises a communication interface, and the information processing module is further configured to send the human-vehicle interaction instruction to a vehicle electronic control unit by using the communication interface.
6. The device of claim 1, wherein the multi-function installation adjustment module comprises a device installation adjustment submodule and a visual voice installation adjustment submodule, the visual voice installation adjustment submodule being connected to the visual voice acquisition module and the device installation adjustment submodule;
the device mounting and adjusting sub-module is used for mounting and adjusting the orientation and the posture of the human-vehicle interaction device according to the driving posture;
and the visual voice installation and adjustment sub-module is used for installing and adjusting the visual voice acquisition module according to the driving posture.
7. The apparatus of claim 6, wherein the visual voice-mounted adjustment sub-module comprises a first knob and a visual voice-mounted adjustment bracket, the first knob being mounted on the visual voice-mounted adjustment bracket;
the first knob is used for responding to the rotation operation of a user to drive the visual voice installation and adjustment bracket;
the visual voice installation and adjustment support is used for adjusting the height of the visual voice acquisition module to adapt to the driving posture according to the rotation of the first knob.
8. The apparatus of claim 6, wherein the apparatus mounting adjustment sub-module comprises a rotary table, a housing, a gimbal base, and a bottom suction cup; the rotating table is connected with the shell, the shell is connected with the universal joint base, and the universal joint base is connected with the bottom sucker;
the rotating platform is used for adjusting the direction of the human-vehicle interaction device through horizontal rotation according to the driving posture;
the shell is used for being meshed with the rotating platform through a gear;
the universal joint base is used for adjusting the space attitude of the human-vehicle interaction device through pitching, yawing and/or side-turning rotation according to the driving attitude;
and the bottom suction cup is used for being adsorbed onto a vehicle instrument desk to fixedly mount the device.
9. The device of claim 6, further comprising an auxiliary light source module, wherein the auxiliary light source module is connected with the multifunctional installation adjustment module and the information processing module;
the auxiliary light source module is used for providing auxiliary light source illumination to adapt to illumination change in the driving environment of the user;
correspondingly, the multifunctional installation and adjustment module further comprises an auxiliary light source installation and adjustment submodule which is connected with the auxiliary light source module and the device installation and adjustment submodule;
and the auxiliary light source installation and adjustment sub-module is used for installing and adjusting the auxiliary light source module according to the driving posture of a user.
10. The apparatus of claim 9, wherein the auxiliary light source installation adjustment submodule comprises a second knob and an auxiliary light source installation adjustment bracket, the second knob being mounted on the auxiliary light source installation adjustment bracket;
the second knob is used for responding to the rotation operation of a user to drive the auxiliary light source to install and adjust the bracket;
and the auxiliary light source installation and adjustment bracket is used for adjusting the height of the auxiliary light source module to adapt to the driving posture according to the rotation of the second knob.
11. A human-vehicle interaction method, which is applied to the human-vehicle interaction device of any one of the above claims 1-10, and comprises: acquiring head three-dimensional information, eye two-dimensional image information and voice information of a user;
fusing and determining the sight direction of the user and the corresponding gazing object according to the head three-dimensional information and the eye two-dimensional image information;
recognizing the voice information and determining auxiliary information;
and generating a man-vehicle interaction instruction according to the sight direction, the gazing object and the auxiliary information.
12. The method according to claim 11, wherein the fusion determining of the gaze direction of the user and the corresponding gaze object according to the head three-dimensional information and the eye two-dimensional image information comprises:
obtaining head three-dimensional posture information according to the head three-dimensional information;
obtaining eye feature information according to the eye two-dimensional image information;
and fusing the head three-dimensional posture information and the eye feature information to determine the sight line direction and the gazing object.
13. The method according to claim 12, wherein the head three-dimensional posture information comprises an x displacement amount, a y displacement amount, a z displacement amount, an α rotation angle, a β rotation angle, and a γ rotation angle, and the eye feature information comprises pupil feature information, iris feature information, eyelid feature information, and/or eye corner feature information.
14. The method of claim 11, further comprising:
and sending the human-vehicle interaction instruction to a vehicle electronic control unit.
CN201810974398.7A 2018-08-24 2018-08-24 Human-vehicle interaction device and human-vehicle interaction method Active CN110857067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810974398.7A CN110857067B (en) 2018-08-24 2018-08-24 Human-vehicle interaction device and human-vehicle interaction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810974398.7A CN110857067B (en) 2018-08-24 2018-08-24 Human-vehicle interaction device and human-vehicle interaction method

Publications (2)

Publication Number Publication Date
CN110857067A true CN110857067A (en) 2020-03-03
CN110857067B CN110857067B (en) 2023-04-07

Family

ID=69635545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810974398.7A Active CN110857067B (en) 2018-08-24 2018-08-24 Human-vehicle interaction device and human-vehicle interaction method

Country Status (1)

Country Link
CN (1) CN110857067B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486760A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Object speaking detection method and device, electronic equipment and storage medium
CN113561988A (en) * 2021-07-22 2021-10-29 上汽通用五菱汽车股份有限公司 Voice control method based on sight tracking, automobile and readable storage medium
CN114327051A (en) * 2021-12-17 2022-04-12 北京乐驾科技有限公司 Human-vehicle intelligent interaction method

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0976815A (en) * 1995-09-11 1997-03-25 Mitsubishi Motors Corp Auxiliary device for driving operation
CN102419632A (en) * 2011-11-04 2012-04-18 上海大学 Adjusted sight line tracking man-machine interaction device
CN103076876A (en) * 2012-11-22 2013-05-01 西安电子科技大学 Character input device and method based on eye-gaze tracking and speech recognition
CN103235645A (en) * 2013-04-25 2013-08-07 上海大学 Standing type display interface self-adaption tracking regulating device and method
CN103886307A (en) * 2014-04-15 2014-06-25 王东强 Sight tracking and fatigue early warning method
DE102013003059A1 (en) * 2013-02-22 2014-08-28 Audi Ag Method for controlling functional unit of motor vehicle, involves automatically detecting whether view of vehicle occupant to given area is directed by view detection unit
US20140350942A1 (en) * 2013-05-23 2014-11-27 Delphi Technologies, Inc. Vehicle human machine interface with gaze direction and voice recognition
US20150189241A1 (en) * 2013-12-27 2015-07-02 Electronics And Telecommunications Research Institute System and method for learning driving information in vehicle
US20150314792A1 (en) * 2014-05-05 2015-11-05 GM Global Technology Operations LLC System and method for controlling an automobile using eye gaze data
CN105128862A (en) * 2015-08-18 2015-12-09 上海擎感智能科技有限公司 Vehicle terminal eyeball identification control method and vehicle terminal eyeball identification control system
CN105468145A (en) * 2015-11-18 2016-04-06 北京航空航天大学 Robot man-machine interaction method and device based on gesture and voice recognition
US20160236690A1 (en) * 2015-02-12 2016-08-18 Harman International Industries, Inc. Adaptive interactive voice system
US20160336009A1 (en) * 2014-02-26 2016-11-17 Mitsubishi Electric Corporation In-vehicle control apparatus and in-vehicle control method
KR101709129B1 (en) * 2015-10-08 2017-02-22 국민대학교산학협력단 Apparatus and method for multi-modal vehicle control
US20170060234A1 (en) * 2015-08-26 2017-03-02 Lg Electronics Inc. Driver assistance apparatus and method for controlling the same
CN107230476A (en) * 2017-05-05 2017-10-03 众安信息技术服务有限公司 A kind of natural man machine language's exchange method and system
CN107239139A (en) * 2017-05-18 2017-10-10 刘国华 Based on the man-machine interaction method and system faced
US20180048482A1 (en) * 2016-08-11 2018-02-15 Alibaba Group Holding Limited Control system and control processing method and apparatus
CN108297797A (en) * 2018-02-09 2018-07-20 安徽江淮汽车集团股份有限公司 A kind of vehicle mirrors regulating system and adjusting method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486760A (en) * 2021-06-30 2021-10-08 Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. Object speaking detection method and apparatus, electronic device, and storage medium
CN113561988A (en) * 2021-07-22 2021-10-29 SAIC-GM-Wuling Automobile Co., Ltd. Voice control method based on gaze tracking, automobile, and readable storage medium
CN114327051A (en) * 2021-12-17 2022-04-12 Beijing Lejia Technology Co., Ltd. Human-vehicle intelligent interaction method

Also Published As

Publication number Publication date
CN110857067B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110857067B (en) Human-vehicle interaction device and human-vehicle interaction method
CN107223223B (en) Control method and system for first-person-view flight of an unmanned aerial vehicle, and smart glasses
US9377869B2 (en) Unlocking a head mountable device
KR101824501B1 (en) Device and method for controlling display of the image in the head mounted display
KR102056221B1 (en) Method and apparatus For Connecting Devices Using Eye-tracking
US20180143432A1 (en) Remote control device, remote control product and remote control method
US20140225812A1 (en) Head mounted display, control method for head mounted display, and image display system
CN105700676A (en) Wearable glasses, control method thereof, and vehicle control system
CN109791296A (en) Adjustable nosepiece assembly for a head-worn computer
WO2019234877A1 (en) Portable information terminal
KR20180017674A (en) Mobile terminal and controlling method the same
WO2015156970A1 (en) Method and apparatus for adjusting view mirror in vehicle
CN111873911B (en) Method, device, medium, and electronic apparatus for adjusting rearview mirror
CN103839054A (en) Multi-functional mobile intelligent terminal sensor supporting iris recognition
KR20140072734A (en) System and method for providing a user interface using hand shape trace recognition in a vehicle
JP2019023767A (en) Information processing apparatus
WO2019034045A1 (en) Hand-held stabiliser and method for adjusting camera parameter therefor
CN109968979A (en) Vehicle-mounted projection processing method, device, mobile unit and storage medium
KR20180004112A (en) Eyeglass type terminal and control method thereof
US20060109201A1 (en) Wearable apparatus for converting vision signal into haptic signal, agent system using the same, and operating method thereof
CN110858467A (en) Display screen control system and vehicle
CN105721820A (en) Interactive remote video communication system
CN105446579B (en) Control method and portable electronic device
CN110857064A (en) Device and method for generating driving intention instruction
CN108140124B (en) Prompt message determination method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant