WO2015062320A1 - Human body coupled intelligent information input system and method - Google Patents
Human body coupled intelligent information input system and method
- Publication number
- WO2015062320A1 (PCT/CN2014/083202)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- manipulation
- human body
- information
- mode
- processing unit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Definitions
- the present invention relates to the field of network terminal control, and in particular, to a human body coupled intelligent information input system and method.
- A conventional glasses display is controlled by buttons or a touchpad and is therefore hard to use; similar problems exist in mobile terminals, making precise positioning and complex manipulation of the control interface difficult.
- Sensors such as conventional gyroscopes can be calibrated by GPS, but only in a relatively open, unobstructed place, and only the horizontal heading in two dimensions can be calibrated; the full three-dimensional orientation cannot. Over long periods of use, the accumulated drift of gyroscopes, accelerometers, and similar sensors is large, so the error keeps growing.
- Traditional smart glasses that use touchpads or buttons make it difficult to input complex text such as Chinese characters efficiently.
- Traditional smart glasses lack an efficient user identity authentication mechanism at login; to preserve efficiency, authentication is often omitted, which creates a risk of information leakage.
- Portable headsets for traditional PCs and for mobile smart terminals such as phones and tablets are usually corded earbuds, whose cords snag easily when the device is picked up.
- The object of the present invention is to provide a human body coupled intelligent information input system that dynamically matches orientation, posture, and time information to human body motion, so that spatial and temporal information tightly coupled to the body can be input efficiently and accurately, realizing natural manipulation and precise positioning of the software interface.
- A human body coupled intelligent information input system comprises: a spatial information sensing unit 101, worn on a predetermined part of the human body, for acquiring three-dimensional spatial information of the body and sending it to the processing unit 103; a clock unit 102, connected to the processing unit 103, for providing time information; the processing unit 103, for processing the spatial and time information of the body and outputting corresponding manipulation instructions to the output unit 104 according to that information; and the output unit 104, for sending the manipulation instructions to an external device.
- the spatial information includes orientation information, posture information, and location information of the human body.
- the spatial information sensing unit 101 includes: a compass for acquiring orientation information of the human body; a gyroscope for acquiring posture information of the human body; and/or a wireless signal module for acquiring position information of the human body.
- the wireless signal module acquires location information of the human body through at least one of a satellite positioning system, a mobile phone base station, and a WIFI.
- the spatial information sensing unit 101 further includes at least one of the following: an acceleration sensor, a direction sensor, a magnetic sensor, a gravity sensor, a rotation vector sensor, and a linear acceleration sensor.
- The orientation and posture information of the human body includes: displacement of the head and hand along three spatial dimensions, including front-back, up-down, and left-right displacement, or a combination of these displacements; various angular changes of the head and hand, including left-right horizontal rotation, up-down rotation, and lateral rotation, or a combination of these rotation modes; and/or absolute displacement and relative displacement.
- The system further includes: a voice input unit 105, configured to receive voice commands issued by the user, convert them into voice signals, and send the signals to the processing unit 103; and/or an optical collection unit which, when close to the user's body, collects the user's eye or skin texture information and achieves identity authentication and login by comparison with saved enrollment information.
- The processing unit 103 corrects manipulation error through at least one of a boundary return mode, a manipulation amplification mode, a manipulation acceleration mode, a manipulation lock mode, a positioning-focus passive reset mode, a positioning-focus active reset mode, and a relative displacement manipulation mode, wherein:
- in the boundary return mode, an error boundary is preset on the display interface, the positioning focus of the manipulation device is limited to move within that boundary, and error correction is performed when the manipulation device returns to its home position;
- in the manipulation amplification mode, manipulation error is corrected by amplifying the displacement of the manipulation device on the display interface;
- in the manipulation acceleration mode, the acceleration of the manipulation device is transmitted to the interface positioning focus, whose corresponding accelerated movement achieves the manipulation goal;
- in the manipulation lock mode, the interface positioning focus corresponding to the manipulation device is locked, and the error is corrected by returning the manipulation device to its home position;
- in the positioning-focus passive reset mode, the accelerated return of the manipulation device drives a passive reset of the positioning focus, correcting the error;
- in the positioning-focus active reset mode, the error is corrected by an active reset of the interface positioning focus;
- in the relative displacement manipulation mode, manipulation under motion is achieved by acquiring the relative displacement between multiple manipulation devices.
- In a motion state, the processing unit 103 uses the absolute motions of different sensors to resolve the relative motion between them, calculates the relative displacement of the human body, and performs manipulation using that relative displacement;
- the processing unit 103 can turn off the displacement mode of the spatial information sensing unit 101, detect only changes in the spatial angle of the spatial information sensing unit 101, and perform manipulation using those angle changes;
- The processing unit 103 recognizes and inputs gestures through the spatial information sensing unit 101 disposed in a finger ring, enabling enlargement, reduction, and browsing of images; the processing unit 103 recognizes and inputs rotation and/or movement of the head through the spatial information sensing unit 101 disposed in the smart glasses, enabling enlargement and reduction of images and browsing from various angles; and/or the spatial information sensing unit 101 parses the spatial motion trajectory of the hand into characters, enabling recognition and input of text.
- The processing unit 103 analyzes, from information about the current focus position, the various possible manipulations associated with the control at which the focus is located, and extracts the original corpus corresponding to those manipulations from the basic corpus; the processing unit 103 matches and recognizes the collected voice input signal against that control-specific corpus, implementing voice manipulation of the interface element at the current focus position; and/or the processing unit 103 recognizes and processes the voice input signal of the voice input unit 105 according to the orientation and posture information of the human body.
- A human body coupled intelligent information input method includes the following steps: step S1, acquiring spatial information and time information of the human body; step S2, processing the spatial and time information and outputting corresponding manipulation instructions according to that information; and step S3, sending the manipulation instructions to an external device to implement the corresponding operation.
- The human body coupled intelligent information input system and method of the present invention have the following notable technical effects: (1) precise positioning and complex manipulation of the device can be achieved; (2) the three-dimensional orientation in three-dimensional space can be calibrated; (3) movement of the carrier can be distinguished from movement of the person; (4) the difficulty of speech recognition is reduced, and global manipulation by voice becomes possible; (5) the columnar or drop-shaped audio output device extending from the lower part of the temple of the smart glasses to the external auditory canal is easy to wear and has good sound quality; (6) complex characters such as Chinese characters can be input efficiently; and (7) user identity can be authenticated efficiently.
- FIG. 1 is a schematic structural view of a human body coupled intelligent information input system of the present invention
- FIG. 2 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting a manipulation error through the boundary return mode
- FIG. 3 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting a manipulation error through the manipulation amplification mode
- FIG. 4 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting a manipulation error through the manipulation acceleration mode
- FIG. 5 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting an error through the positioning-focus passive reset mode
- FIG. 6 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting an error through the manipulation lock mode
- FIG. 7 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting errors through the positioning-focus active reset mode
- FIG. 8 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting errors through the relative displacement manipulation mode
- FIG. 9 is a schematic diagram of a voice recognition mode in the human body coupled intelligent information input system of the present invention.
- FIG. 10 is a schematic flow chart of a human body coupled intelligent information input method of the present invention.
- FIG. 1 is a schematic structural view of a human body coupled intelligent information input system of the present invention.
- the human body coupled intelligent information input system of the present invention includes a spatial information sensing unit 101, a clock unit 102, a processing unit 103, and an output unit 104.
- the spatial information sensing unit 101 is worn on a predetermined part of the human body, and is used to acquire spatial three-dimensional information of the human body and send it to the processing unit 103.
- the spatial information sensing unit 101 is connected to the processing unit 103.
- The spatial information sensing unit 101 may be a finger ring worn on the hand and/or smart glasses worn on the head, for acquiring the orientation, posture, and location information of the human body.
- the spatial information sensing unit 101 may include components such as a compass, a gyroscope, an acceleration sensor, a wireless signal module, and the like. Among them, the compass, the gyroscope, and the acceleration sensor are used to acquire the orientation and posture information of the human body.
- The orientation and posture information of the human body includes: displacement of the head and hand in three spatial dimensions (front-back, up-down, and left-right displacement, or a combination of these); various angle changes of the head and hand (left-right horizontal rotation, up-down rotation, and lateral rotation, or a combination of these); and absolute and relative displacement.
- the wireless signal module is configured to receive wireless signals to acquire position information of the human body, and realize human body positioning, for example, acquiring position information of the human body through at least one of a satellite positioning system, a mobile phone base station, and a WIFI.
- the clock unit 102 is configured to provide time information.
- the clock unit 102 is connected to a processing unit 103, which is typically a timer for recording time and providing it to the processing unit 103.
- the clock unit 102 can be worn in the finger ring of the hand and/or in the smart glasses worn on the head.
- the processing unit 103 is configured to process spatial information and time information of the human body, and output corresponding manipulation instructions to the output unit 104 according to the information.
- The processing unit 103 corrects manipulation error through at least one of the boundary return mode, the manipulation amplification mode, the manipulation acceleration mode, the manipulation lock mode, the positioning-focus passive reset mode, the positioning-focus active reset mode, and the relative displacement manipulation mode.
- the output unit 104 is configured to send the manipulation instruction sent by the processing unit 103 to the external device.
- the output unit 104 includes a columnar or drop-shaped audio output device extending from the lower portion of the temple of the smart glasses to the external auditory canal.
- The system of the present invention further includes a voice input unit 105, configured to receive voice commands issued by the user, convert them into voice signals, and send the signals to the processing unit 103.
- The system of the present invention further includes an optical collection unit configured to collect the user's eye or skin texture information when close to the user's body, achieving identity authentication and login by comparison with saved enrollment information.
- the optical acquisition unit is, for example, a camera or an optical scanning device.
- The processing unit 103 processes the spatial information and time information of the human body acquired by the spatial information sensing unit 101 and the clock unit 102, realizing dynamic matching of orientation, posture, and time information with the motion of the human body, so that the spatial and temporal information coupled to the body can be input efficiently and accurately, enabling natural manipulation and precise positioning of the software interface.
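To make this data flow concrete, here is a minimal illustrative sketch, not taken from the patent: units 101-104 are modeled as plain Python functions, and the sensor values, the yaw-rate threshold, and the instruction names are assumptions made only for the example.

```python
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class SpatialSample:
    heading_deg: float   # compass orientation (sensing unit 101)
    pitch_deg: float     # gyroscope posture (sensing unit 101)
    timestamp: float     # time information (clock unit 102)

def sense() -> SpatialSample:
    """Stand-in for sensing unit 101 plus clock unit 102 (fixed demo values)."""
    return SpatialSample(heading_deg=92.0, pitch_deg=-4.5, timestamp=time.time())

def process(cur: SpatialSample, prev: Optional[SpatialSample]) -> str:
    """Processing unit 103: map timed spatial samples to a manipulation instruction."""
    if prev is None:
        return "NOOP"
    dt = max(cur.timestamp - prev.timestamp, 1e-6)
    yaw_rate = (cur.heading_deg - prev.heading_deg) / dt  # deg/s
    # Illustrative rule: a fast horizontal head turn scrolls the interface.
    if yaw_rate > 30.0:
        return "SCROLL_RIGHT"
    if yaw_rate < -30.0:
        return "SCROLL_LEFT"
    return "NOOP"

def output(instruction: str) -> None:
    """Output unit 104: forward the instruction to the external device."""
    print("to external device:", instruction)

prev = None
for _ in range(3):
    cur = sense()
    output(process(cur, prev))
    prev = cur
    time.sleep(0.05)
```

In a real device, sense() would read the compass and gyroscope in the ring or glasses, and output() would drive the wireless link to the external device.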
- FIG. 2 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting a manipulation error through the boundary return mode.
- In the boundary return mode of the processing unit 103, an error boundary is preset on the display interface (for example, a positioning boundary for front-back, left-right, and up-down displacement, or for each direction of rotation). The positioning focus of the manipulation device can only move within this boundary, so the manipulation error is confined within it and can be corrected.
- When the manipulation device keeps moving in the error direction (here, to the right), the positioning focus cannot move beyond the boundary, so the focus stays put even though the device has moved past the right edge of the control interface. When the device is then moved back to the middle of the boundary (the return position), the interface positioning focus also returns to the middle; device and focus are aligned again, and the error is corrected.
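The clamp-then-return behavior can be sketched in a few lines; the one-dimensional coordinates, boundary values, and step sizes below are illustrative assumptions rather than values from the patent.

```python
# Boundary return mode sketch: the focus is confined to a preset error
# boundary; moving the device back to the return position re-synchronizes it.
BOUND_LEFT, BOUND_RIGHT = 0.0, 100.0

def clamp(x: float) -> float:
    return max(BOUND_LEFT, min(BOUND_RIGHT, x))

device = focus = 50.0
for step in (30.0, 30.0, 30.0):      # the device keeps drifting right
    device += step
    focus = clamp(focus + step)      # the focus halts at the boundary

print(device, focus)                 # 140.0 100.0 -> 40 units of error

device = (BOUND_LEFT + BOUND_RIGHT) / 2   # user returns the device to the middle
focus = device                            # focus re-synchronized at the return position
print(device, focus)                      # 50.0 50.0 -> error corrected
```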
- FIG. 3 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting a manipulation error through the manipulation amplification mode.
- The manipulation amplification mode of the processing unit 103 corrects manipulation error by amplifying the displacement of the manipulation device on the interface: the device moves a small distance, and the interface positioning focus moves a correspondingly larger distance, so the interface positioning error stays within a tolerable range.
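As a sketch of the idea, with an assumed gain value (the patent specifies the behavior, not a number):

```python
GAIN = 5.0   # illustrative amplification factor

def focus_delta(device_delta_mm: float) -> float:
    """Amplify a small device displacement into an interface displacement."""
    return GAIN * device_delta_mm

print(focus_delta(2.0))   # a 2 mm hand movement moves the focus 10 units
```

Because only small physical movements are needed to traverse the interface, the residual positioning error stays within the tolerable range described above.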
- FIG. 4 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting a manipulation error through the manipulation acceleration mode.
- In this mode, the acceleration of the manipulation device is transmitted to the interface positioning focus, which moves with corresponding acceleration to achieve the manipulation goal.
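A sketch of that transfer, integrating an assumed device-acceleration trace at an assumed 50 Hz sample rate:

```python
dt = 0.02                                          # 50 Hz updates (assumption)
accels = [0.0, 2.0, 2.0, 0.0, -2.0, -2.0, 0.0]     # device acceleration trace

velocity = focus = 0.0
for a in accels:
    velocity += a * dt        # device acceleration transferred to the focus
    focus += velocity * dt    # focus performs the corresponding accelerated move
    print(f"a={a:+.1f}  v={velocity:+.3f}  focus={focus:+.4f}")
```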
- FIG. 5 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting an error through the positioning-focus passive reset mode.
- In this mode, the accelerated return of the manipulation device drives a passive reset of the positioning focus, correcting the error.
- In Fig. 5a, the manipulation device moves right with a small displacement while the positioning focus moves right with a large displacement, producing a large focus error; the device is then moved back with reverse acceleration, driving the positioning focus to accelerate back in the opposite direction and effectively reducing the error.
- FIG. 6 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting an error through the manipulation lock mode.
- In this mode, the interface positioning focus corresponding to the manipulation device is locked, and the error is corrected by returning the device to its home position.
- In Fig. 6a, after a large positioning error has occurred, the lock operation is performed: the device can be moved while the interface positioning focus stays fixed.
- FIG. 7 is a schematic diagram of the human body coupled intelligent information input system of the present invention correcting an error through the positioning-focus active reset mode.
- In this mode, the error is corrected by an active reset of the interface positioning focus.
- FIG. 8 is a schematic diagram of the human body coupled intelligent information input system of the present invention operating in the relative displacement manipulation mode.
- Manipulation under motion is achieved by acquiring the relative displacement between multiple manipulation devices.
- In the present invention, two or more manipulation devices are connected through the processing unit 103. Each device senses its own displacement change; the processing unit 103 first resolves the absolute displacement of each device and then computes the relative displacement between them, enabling effective manipulation in a motion state.
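The subtraction at the heart of this mode can be sketched as follows; the two devices (say, a ring and the glasses riding the same bus), the carrier motion, and the displacement vectors are illustrative assumptions.

```python
def relative_displacement(hand_abs, head_abs):
    """Subtract shared carrier motion to recover the user's own gesture."""
    return tuple(h - g for h, g in zip(hand_abs, head_abs))

carrier = (5.0, 0.0, 0.0)                              # the bus lurches forward
gesture = (0.1, 0.2, 0.0)                              # intended hand motion
hand = tuple(c + d for c, d in zip(carrier, gesture))  # ring senses carrier + gesture
head = carrier                                         # glasses sense carrier only

print(relative_displacement(hand, head))  # ~(0.1, 0.2, 0.0): the gesture survives
```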
- In a motion state, the processing unit 103 can perform manipulation using the relative displacement of the human body; when the carrier moves vigorously, the processing unit 103 can lock the screen and offer only a few simple operations.
- the processing unit 103 can analyze the relative motion between different sensors by the absolute motion of different sensors, thereby calculating the relative displacement between different parts of the human body.
- the processing unit 103 may turn off the displacement mode of the spatial information sensing unit, detect only the change of the spatial angle of the spatial information sensing unit, and perform manipulation by the change of the angle.
- The system of the present invention recognizes and inputs gestures through the spatial information sensing unit 101 located in the finger ring, such as drawing a check mark, drawing a cross, or drawing a circle; these natural gestures confirm commonly used keys such as "Yes", "Confirm", "No", and "Cancel".
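The patent names these gestures but prescribes no recognizer, so the toy classifier below is purely an illustrative assumption: a trajectory that closes on itself is called a circle, one sharp turn is called a check mark, and anything else is called a cross.

```python
import math

def classify(points):
    """points: (x, y) samples of the ring's motion trajectory."""
    span = max(math.dist(a, b) for a in points for b in points)
    if math.dist(points[0], points[-1]) < 0.2 * span:
        return "circle"                     # trajectory closes on itself
    turns = 0                               # count sharp direction reversals
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 and n2 and (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2) < -0.3:
            turns += 1
    return "check" if turns == 1 else "cross"

check_stroke = [(0, 2), (1, 0), (3, 4)]     # down-right, then sharply up-right
print(classify(check_stroke))               # -> check
```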
- The system of the present invention realizes recognition and input of rotation and/or movement of the head through the spatial information sensing unit 101 located in the smart glasses.
- the system of the present invention can implement an image browsing function.
- Through the spatial information sensing unit 101, the system can detect back-and-forth movement of the head as well as its up, down, left, right, and lateral rotation: moving back and forth naturally enlarges and reduces the image, and when an image cannot be fully shown on the display, head rotation in these directions allows it to be viewed from various angles;
- likewise, the spatial information sensing unit 101 can detect back-and-forth movement of the hand and its up, down, left, right, and lateral rotation: moving back and forth naturally enlarges and reduces the image, and when an image cannot be fully shown on the display, hand rotation in these directions allows it to be viewed from various angles.
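One possible mapping from head (or hand) motion to the view state is sketched below; the gain constants and the zoom clamping range are assumptions for illustration.

```python
def update_view(zoom, pan_x, pan_y, dz, yaw_deg, pitch_deg):
    """Move in/out to zoom; rotate to pan across an image larger than the display."""
    zoom = max(0.25, min(8.0, zoom * (1.0 + 0.5 * dz)))
    pan_x += 4.0 * yaw_deg        # rotate right -> pan right
    pan_y += 4.0 * pitch_deg      # rotate up    -> pan up
    return zoom, pan_x, pan_y

view = (1.0, 0.0, 0.0)
view = update_view(*view, dz=0.2, yaw_deg=3.0, pitch_deg=0.0)
print(view)   # (1.1, 12.0, 0.0): slightly enlarged and panned right
```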
- The system of the present invention can implement a text input function: during character input, the spatial information sensing unit 101 parses the spatial motion trajectory of the hand into characters, realizing natural and efficient character input.
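One classical way to realize such parsing is template matching over trajectory samples; the sketch below is a toy illustration under that assumption (two templates and a naive point-to-point distance), not the patent's method.

```python
import math

TEMPLATES = {
    "1": [(0.0, 1.0), (0.0, 0.5), (0.0, 0.0)],   # vertical stroke
    "7": [(0.0, 1.0), (1.0, 1.0), (0.4, 0.0)],   # top bar, then diagonal
}

def distance(traj, template):
    return sum(math.dist(p, q) for p, q in zip(traj, template))

def recognize(traj):
    """Return the template character closest to the sampled hand trajectory."""
    return min(TEMPLATES, key=lambda ch: distance(traj, TEMPLATES[ch]))

stroke = [(0.05, 1.0), (0.0, 0.45), (0.02, 0.0)]   # nearly vertical stroke
print(recognize(stroke))                            # -> 1
```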
- The user's eye or skin texture information is collected by a camera or optical scanning and compared with the saved enrollment information, achieving efficient identity authentication and quick login.
- FIG. 9 is a schematic diagram of a speech recognition mode in the human body coupled intelligent information input system of the present invention.
- the human body coupled intelligent information input system of the present invention further includes a voice input unit 105 for performing acquisition, conversion, and transmission of voice input signals.
- a conventional speech recognition mode is shown in FIG. 9a.
- In this recognition mode, the input must be compared against and recognized from a large corpus, which consumes substantial resources and yields low efficiency and low recognition accuracy.
- FIG. 9b shows the speech recognition mode of the human body coupled intelligent information input system of the present invention, in which the collected input speech signal is matched only against the corpus related to the current control; this greatly reduces the complexity of speech matching and effectively improves the efficiency and accuracy of speech recognition.
- The various possible manipulations associated with the control at the current focus are analyzed, the original corpus related to that control is extracted from the basic corpus, and the input is matched, compared, and recognized against this corpus before the recognition result is returned.
- The present invention automatically matches the acquired input speech signal with the original corpus associated with the control, implementing voice manipulation of the interface element at the current focus position. Because focus positioning and voice manipulation are realized for all types of controls, global voice manipulation of the software system becomes possible, effectively expanding the breadth and depth of voice control.
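A sketch of this scoping idea, using fuzzy string matching as a stand-in for the acoustic match; the control names, phrase lists, and difflib-based similarity are illustrative assumptions.

```python
import difflib
from typing import Optional

BASIC_CORPUS = {                  # per-control slices of the basic corpus
    "button":  ["confirm", "cancel"],
    "slider":  ["louder", "quieter", "maximum", "minimum"],
    "textbox": ["delete word", "new line", "select all"],
}

def recognize(utterance: str, focused_control: str) -> Optional[str]:
    scoped = BASIC_CORPUS[focused_control]   # only the focused control's phrases
    hits = difflib.get_close_matches(utterance, scoped, n=1, cutoff=0.6)
    return hits[0] if hits else None

# "lowder" is disambiguated among four slider phrases instead of the whole
# vocabulary -- exactly the narrowing that cuts cost and raises accuracy.
print(recognize("lowder", "slider"))   # -> louder
print(recognize("lowder", "button"))   # -> None (out of scope for a button)
```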
- FIG. 10 is a schematic flow chart of a human body coupled intelligent information input method of the present invention.
- the human body coupled intelligent information input method of the present invention comprises the following steps:
- In step S1, the spatial information and time information of the human body are acquired. Specifically, the orientation and posture information of the body and the time information are acquired by a finger ring worn on the hand and/or smart glasses worn on the head.
- The spatial information of the human body includes orientation and posture information, for example displacement of the head and hand along three spatial dimensions: front-back, up-down, and left-right displacement, or a combination of these displacements.
- the spatial information of the human body includes location information, such as human body location information acquired by at least one of a satellite positioning system, a mobile phone base station, and a WIFI.
- In step S2, the spatial information and time information of the human body are processed, and corresponding manipulation instructions are output according to the information.
- In this step, processing the acquired orientation, posture, and time information realizes dynamic matching of that information with the motion of the human body, so that the spatial and temporal information coupled to the body can be input efficiently and accurately, achieving natural manipulation and precise positioning of the software interface. At least one of the boundary return mode, manipulation amplification mode, manipulation acceleration mode, manipulation lock mode, and positioning-focus active/passive reset modes is applied to correct the manipulation error.
- In step S3, the manipulation instructions are sent to the external device to implement the corresponding operation.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (11)
- 1. A human body coupled intelligent information input system, comprising: a spatial information sensing unit (101), worn on a predetermined part of the human body, for acquiring three-dimensional spatial information of the body and sending it to a processing unit (103); a clock unit (102), connected to the processing unit (103), for providing time information; the processing unit (103), for processing the spatial information and time information of the body and outputting corresponding manipulation instructions to an output unit (104) according to said information; and the output unit (104), for sending the manipulation instructions to an external device.
- 2. The system according to claim 1, wherein the spatial information includes orientation information, posture information, and location information of the human body.
- 3. The system according to claim 2, wherein the spatial information sensing unit (101) includes: a compass, for acquiring the orientation information of the human body; a gyroscope, for acquiring the posture information of the human body; and/or a wireless signal module, for acquiring the location information of the human body.
- 4. The system according to claim 3, wherein the wireless signal module acquires the location information of the human body through at least one of a satellite positioning system, a mobile phone base station, and WIFI.
- 5. The system according to claim 3, wherein the spatial information sensing unit (101) further includes at least one of: an acceleration sensor, a direction sensor, a magnetic sensor, a gravity sensor, a rotation vector sensor, and a linear acceleration sensor.
- 6. The system according to claim 2, wherein the orientation and posture information of the human body includes: displacement of the head and hand along three spatial dimensions, including front-back, up-down, and left-right displacement, or a combination of these displacements; various angular changes of the head and hand, including left-right horizontal rotation, up-down rotation, and lateral rotation, or a combination of these rotation modes; and/or absolute displacement and relative displacement.
- 7. The system according to claim 1, further comprising: a voice input unit (105), for receiving voice commands issued by the user, converting them into voice signals, and sending the signals to the processing unit (103); and/or an optical collection unit, for collecting the user's eye or skin texture information when close to the user's body and realizing identity authentication and login by comparison with saved enrollment information.
- 8. The system according to claim 1, wherein the processing unit (103) corrects manipulation error through at least one of a boundary return mode, a manipulation amplification mode, a manipulation acceleration mode, a manipulation lock mode, a positioning-focus passive reset mode, a positioning-focus active reset mode, and a relative displacement manipulation mode, wherein: in the boundary return mode, an error boundary is preset on the display interface, the positioning focus of the manipulation device is limited to move within that boundary, and error correction is performed when the manipulation device returns to its home position; in the manipulation amplification mode, manipulation error is corrected by amplifying the displacement of the manipulation device on the display interface; in the manipulation acceleration mode, the acceleration of the manipulation device is transmitted to the interface positioning focus, whose corresponding accelerated movement achieves the manipulation goal; in the manipulation lock mode, the interface positioning focus corresponding to the manipulation device is locked, and the error is corrected by returning the manipulation device to its home position; in the positioning-focus passive reset mode, the accelerated return of the manipulation device drives a passive reset of the positioning focus to correct the error; in the positioning-focus active reset mode, the error is corrected by an active reset of the interface positioning focus; and in the relative displacement manipulation mode, manipulation in a motion state is achieved by acquiring the relative displacement between multiple manipulation devices.
- 9. The system according to claim 8, wherein, in a motion state, the processing unit (103) resolves the relative motion between different sensors from their respective absolute motions, calculates the relative displacement of the human body, and performs manipulation using that relative displacement; the processing unit (103) turns off the displacement mode of the spatial information sensing unit (101), detects only changes in the spatial angle of the spatial information sensing unit (101), and performs manipulation using those angle changes; the processing unit (103) realizes recognition and input of gestures through the spatial information sensing unit (101) disposed in a finger ring, realizing enlargement, reduction, and multi-angle browsing of images; the processing unit (103) realizes recognition and input of rotation and/or movement of the head through the spatial information sensing unit (101) disposed in the smart glasses, realizing enlargement, reduction, and multi-angle browsing of images; and/or the spatial information sensing unit (101) parses the spatial motion trajectory of the hand into characters, realizing recognition and input of text.
- 10. The system according to claim 7, wherein the processing unit (103), according to information about the current position of the positioning focus, analyzes the various possible manipulations associated with the control at which the focus is located and extracts the original corpus corresponding to those manipulations from the basic corpus; the processing unit (103) matches and recognizes the collected voice input signal against the original corpus of the control-related manipulations, realizing voice manipulation of the interface corresponding to the current focus position; and/or the processing unit (103) recognizes and processes the voice input signal of the voice input unit (105) according to the orientation and posture information of the human body.
- 11. A human body coupled intelligent information input method, comprising the following steps: step S1, acquiring spatial information and time information of the human body; step S2, processing the spatial information and time information of the human body and outputting corresponding manipulation instructions according to said information; and step S3, sending the manipulation instructions to an external device to implement the corresponding operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/033,587 US20160283189A1 (en) | 2013-11-01 | 2014-07-29 | Human Body Coupled Intelligent Information Input System and Method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310529685.4A CN103558915B (zh) | 2013-11-01 | 2013-11-01 | Human body coupled intelligent information input system and method |
CN201310529685.4 | 2013-11-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015062320A1 (zh) | 2015-05-07 |
Family
ID=50013192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/083202 WO2015062320A1 (zh) | Human body coupled intelligent information input system and method | 2013-11-01 | 2014-07-29 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20160283189A1 (zh) |
CN (1) | CN103558915B (zh) |
WO (1) | WO2015062320A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106557170A (zh) * | 2016-11-25 | 2017-04-05 | 三星电子(中国)研发中心 | Method and device for zooming an image on a virtual reality device |
US11068048B2 (en) | 2016-11-25 | 2021-07-20 | Samsung Electronics Co., Ltd. | Method and device for providing an image |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103558915B (zh) * | 2013-11-01 | 2017-11-07 | 王洪亮 | Human body coupled intelligent information input system and method |
CN104133593A (zh) * | 2014-08-06 | 2014-11-05 | 北京行云时空科技有限公司 | Motion-sensing-based text input system and method |
CN104156070A (zh) * | 2014-08-19 | 2014-11-19 | 北京行云时空科技有限公司 | Human body intelligent input motion-sensing manipulation system and method |
CN104200555A (zh) * | 2014-09-12 | 2014-12-10 | 四川农业大学 | Method and device for opening a door with a ring gesture |
CN104166466A (zh) * | 2014-09-17 | 2014-11-26 | 北京行云时空科技有限公司 | Motion-sensing manipulation system and method with auxiliary control |
CN104484047B (zh) * | 2014-12-29 | 2018-10-26 | 北京智谷睿拓技术服务有限公司 | Wearable-device-based interaction method, interaction apparatus, and wearable device |
CN106204431B (zh) * | 2016-08-24 | 2019-08-16 | 中国科学院深圳先进技术研究院 | Display method and device for smart glasses |
CN106325527A (zh) * | 2016-10-18 | 2017-01-11 | 深圳市华海技术有限公司 | Human body motion recognition system |
CN108509048A (zh) * | 2018-04-18 | 2018-09-07 | 黄忠胜 | Manipulation device for a smart device and manipulation method thereof |
EP4163764A4 (en) * | 2020-07-03 | 2023-11-22 | Huawei Technologies Co., Ltd. | IN-VEHICLE AIR GESTURE INTERACTION METHOD, ELECTRONIC DEVICE AND SYSTEM |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1653412A (zh) * | 2002-05-14 | 2005-08-10 | 克瑞斯特·劳瑞尔 | Cursor control device |
CN101807112A (zh) * | 2009-02-16 | 2010-08-18 | 董海坤 | Gesture-recognition-based intelligent PC input system |
CN101968655A (zh) * | 2009-07-28 | 2011-02-09 | 十速科技股份有限公司 | Method for correcting deviation of a cursor position |
WO2011112113A2 (en) * | 2009-10-26 | 2011-09-15 | Softwin S.R.L. | Systems and methods for assessing the authenticity of dynamic handwritten signature |
CN202433845U (zh) * | 2011-12-29 | 2012-09-12 | 海信集团有限公司 | Handheld laser emitting device |
CN103150036A (zh) * | 2013-02-06 | 2013-06-12 | 宋子健 | Information collection system and method, human-computer interaction system and method, and shoe |
CN103369383A (zh) * | 2012-03-26 | 2013-10-23 | 乐金电子(中国)研究开发中心有限公司 | Control method and device for a spatial remote controller, spatial remote controller, and multimedia terminal |
CN103558915A (zh) * | 2013-11-01 | 2014-02-05 | 王洪亮 | Human body coupled intelligent information input system and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8416187B2 (en) * | 2010-06-22 | 2013-04-09 | Microsoft Corporation | Item navigation using motion-capture data |
JP5494423B2 (ja) * | 2010-11-02 | 2014-05-14 | ソニー株式会社 | Display device, position correction method, and program |
CN102023731B (zh) * | 2010-12-31 | 2012-08-29 | 北京邮电大学 | Wireless micro ring mouse suitable for a mobile terminal |
CN102915111B (zh) * | 2012-04-06 | 2017-05-31 | 寇传阳 | Wrist gesture manipulation system and method |
- 2013-11-01 CN CN201310529685.4A patent/CN103558915B/zh not_active Expired - Fee Related
- 2014-07-29 US US15/033,587 patent/US20160283189A1/en not_active Abandoned
- 2014-07-29 WO PCT/CN2014/083202 patent/WO2015062320A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20160283189A1 (en) | 2016-09-29 |
CN103558915A (zh) | 2014-02-05 |
CN103558915B (zh) | 2017-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015062320A1 (zh) | Human body coupled intelligent information input system and method | |
US10621583B2 (en) | Wearable earpiece multifactorial biometric analysis system and method | |
US20240039908A1 (en) | Wireless Dongle for Communications with Wireless Earpieces | |
US20180014102A1 (en) | Variable Positioning of Distributed Body Sensors with Single or Dual Wireless Earpiece System and Method | |
KR102290892B1 (ko) | Mobile terminal and method for controlling the same | |
WO2018103525A1 (zh) | Face key point tracking method and apparatus, and storage medium | |
US10327082B2 (en) | Location based tracking using a wireless earpiece device, system, and method | |
EP2891954B1 (en) | User-directed personal information assistant | |
US10747337B2 (en) | Mechanical detection of a touch movement using a sensor and a special surface pattern system and method | |
US20170308689A1 (en) | Gesture-based Wireless Toggle Control System and Method | |
CN110633018A (zh) | Method for controlling display of a large-screen device, mobile terminal, and first system | |
CN109901698B (zh) | Intelligent interaction method, wearable device, terminal, and system | |
US20210409531A1 (en) | Mobile terminal | |
WO2022252823A1 (zh) | Live video generation method and apparatus | |
WO2014121670A1 (en) | Method, device and storage medium for controlling electronic map | |
US20200029214A1 (en) | A device, computer program and method | |
KR102135378B1 (ko) | Mobile terminal and control method therefor | |
GB2520069A (en) | Identifying a user applying a touch or proximity input | |
CN106873764A (zh) | Mobile phone gesture input system based on a motion-sensing control system | |
WO2017134732A1 (ja) | Input device, input assistance method, and input assistance program | |
US10270963B2 (en) | Angle switching method and apparatus for image captured in electronic terminal | |
US20210149483A1 (en) | Selective image capture based on multi-modal sensor input | |
WO2018068484A1 (zh) | Three-dimensional gesture unlocking method, method for acquiring gesture images, and terminal device | |
KR102130801B1 (ko) | Wrist step detection apparatus and method | |
CN110298305A (zh) | Fingerprint identification method and terminal | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14858870 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15033587 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.09.2016) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14858870 Country of ref document: EP Kind code of ref document: A1 |