CN103376884B - Man-machine interaction method and its device - Google Patents

Man-machine interaction method and its device

Info

Publication number
CN103376884B
CN103376884B (application CN201210117974.9A)
Authority
CN
China
Prior art keywords
target object
man
icon
human
position relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210117974.9A
Other languages
Chinese (zh)
Other versions
CN103376884A (en)
Inventor
武寿昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd filed Critical SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd
Priority to CN201210117974.9A priority Critical patent/CN103376884B/en
Publication of CN103376884A publication Critical patent/CN103376884A/en
Application granted granted Critical
Publication of CN103376884B publication Critical patent/CN103376884B/en


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a man-machine interaction method and a device therefor. The method comprises, in order: step 11), a step of recognizing a target object; step 12), a step of detecting the target object to obtain position relationship information between the target object and a human-computer interaction device; step 13), a step of constructing a scene according to the position relationship information; and step 14), a step of the human-computer interaction device performing an operation. The human-computer interaction device provided by the present invention comprises an identification module, a detecting module, an interactive information processing module, and a performing module. The method realizes contactless human-computer control and avoids symptoms such as wrist stiffness, pain, numbness, and spasm that occur when a keyboard, mouse, or screen is touched for long periods in contact-based interaction.

Description

Man-machine interaction method and its device
Technical field
The present invention relates to the field of telecommunications, and more particularly to the field of human-computer interaction technology.
Background technology
With the development of science and technology, devices such as mobile phones and personal computers have come into wide use, and their human-computer interaction input methods have evolved from tapping a keyboard to clicking a mouse, and on that basis to touch screens. On February 27, 2008, the China Intellectual Property Office disclosed a touch-screen-based human-computer interaction scheme, publication No. CN101133385A, which describes a handheld device with multiple touch-sensing devices. However, this scheme and other existing schemes, whether based on keyboard, mouse, or touch screen, all remain confined to controlling a two-dimensional on-screen image. With long use of a keyboard, mouse, or screen, the wrist must be held at a fixed height and bent back at a certain angle instead of extending naturally; over time this causes stiffness, pain, and numbness in the index and middle fingers and weakness of the thumb muscles, and in severe cases leads to symptoms such as wrist muscle or joint paralysis, swelling, pain, and spasm.
Summary of the invention
It is an object of the invention to provide a man-machine interaction method and a device therefor that achieve contactless human-computer control.
The man-machine interaction method provided by the present invention comprises, in order:
Step 11) a step of recognizing a target object;
Step 12) a step of detecting the target object to obtain position relationship information between the target object and a human-computer interaction device;
Step 13) a step of the human-computer interaction device performing an operation.
The human-computer interaction device provided by the present invention comprises:
an identification module 101 for recognizing a target object and sending an instruction when the target object is recognized;
a detecting module 102 for, upon receiving the instruction sent by the identification module 101, detecting the position relationship information between the target object and the human-computer interaction device and transmitting it;
an interactive information processing module 103 for processing the position relationship information and sending a control instruction according to the processing result;
a performing module 104 for performing an operation according to the control instruction sent by the interactive information processing module 103.
The man-machine interaction method and device provided by the present invention achieve contactless interaction, which not only improves the user experience but also avoids the wrist stiffness, pain, numbness, spasm, and similar symptoms that arise from long-term contact with a keyboard, mouse, or screen in contact-based interaction.
Brief description of the drawings
Fig. 1 is a flowchart of selecting a layer using the man-machine interaction method in embodiment one of the present invention;
Fig. 2 is a flowchart of selecting an icon using the man-machine interaction method in embodiment one of the present invention;
Fig. 3 is a flowchart of controlling a scene using the man-machine interaction method in embodiment one of the present invention;
Fig. 4 is a flowchart of setting and executing a shortcut using the man-machine interaction method in embodiment one of the present invention;
Fig. 5 is a flowchart of reminding the user of eye distance using the man-machine interaction method in embodiment one of the present invention;
Fig. 6 is a structural diagram of the man-machine interaction device in embodiment three of the present invention.
Embodiment
To make the purpose, technical scheme and advantage of the embodiment of the present invention clearer, below in conjunction with the embodiment of the present invention In accompanying drawing, the technical scheme in the embodiment of the present invention is clearly and completely described, it is clear that described embodiment is this Invent a part of embodiment, rather than whole embodiments.Based on the embodiment in the present invention, those of ordinary skill in the art exist The every other embodiment obtained under the premise of creative work is not made, the scope of protection of the invention is belonged to.
Embodiment one
Embodiment one provides a man-machine interaction method comprising, in order:
Step 11) a step of recognizing a target object.
Those skilled in the art will understand that the target object is the object that has established an interactive relationship with the human-computer interaction device. For example, if a first object, a second object, and a third object exist in space and only the second object has established an interactive relationship with the device, then the second object is the target object.
Step 12) a step of detecting the target object to obtain position relationship information between the target object and the human-computer interaction device.
Those skilled in the art will understand that from the position relationship information it can be determined whether, relative to the human-computer interaction device, the target object is stationary or in motion, and if in motion, its direction, speed, and acceleration. The scene refers to a digital scene established with two-dimensional or three-dimensional graphics generation techniques from the position relationship information of the human-computer interaction device and of the target object, including whether the target object is stationary or in motion together with its direction of motion, speed, and acceleration. For a two-dimensional digital scene, distance-measuring elements at two or more different positions simultaneously measure the distance between the target object and the device, from which the position coordinates of the target object relative to the device can be calculated. For a three-dimensional digital scene, distance-measuring elements at three or more different positions measure the distances simultaneously, from which the target object's position coordinates relative to the device can likewise be calculated. This realizes the functions of recognizing the object, detecting its position relative to the device, forming the model of the target object, and constructing the scene.
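The position calculation from multiple ranging elements can be sketched as follows. This is a minimal 2D trilateration example under illustrative assumptions (the sensor layout and the least-squares formulation are not specified by the patent):

```python
import numpy as np

def trilaterate_2d(sensors, distances):
    """Estimate a target's 2D position from distances to known sensors.

    Linearizes the circle equations |x - p_i|^2 = r_i^2 by subtracting
    the first one, then solves the resulting linear system by least squares.
    """
    sensors = np.asarray(sensors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, r0 = sensors[0], d[0]
    # For sensor i > 0: 2*(p_i - p0) . x = r0^2 - r_i^2 + |p_i|^2 - |p0|^2
    A = 2.0 * (sensors[1:] - p0)
    b = (r0**2 - d[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three ranging elements at different positions on the device
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
target = np.array([3.0, 4.0])
distances = [float(np.linalg.norm(target - np.array(s))) for s in sensors]
print(trilaterate_2d(sensors, distances))  # ≈ [3. 4.]
```

With three or more non-collinear sensors the same least-squares form extends directly to the three-dimensional case mentioned in the text.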
Step 13) a step of the human-computer interaction device performing an operation.
Those skilled in the art will understand that the device first recognizes the target object, then detects the object's position relationship information relative to the human-computer interaction device, and from the detected information establishes the model of the target object and constructs the scene. By iterating over these steps continuously, the device obtains whether, relative to it, the target object is stationary or in motion, together with its direction of motion, speed, and acceleration. It then performs the corresponding operation according to the target object's state of rest or motion in the scene and its direction, speed, and acceleration, thereby realizing the function of contactless human-computer interaction.
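The iteration above amounts to differentiating successive positions. A sketch using finite differences over three consecutive detections; the difference scheme and the stillness threshold `still_eps` are illustrative choices, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class MotionState:
    moving: bool
    velocity: tuple      # units per second
    acceleration: tuple  # units per second squared

def estimate_motion(p_prev2, p_prev, p_curr, dt, still_eps=1e-3):
    """Finite-difference motion estimate from three successive 2D positions."""
    vx_prev = (p_prev[0] - p_prev2[0]) / dt
    vy_prev = (p_prev[1] - p_prev2[1]) / dt
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    ax = (vx - vx_prev) / dt
    ay = (vy - vy_prev) / dt
    speed = (vx**2 + vy**2) ** 0.5
    return MotionState(moving=speed > still_eps,
                       velocity=(vx, vy),
                       acceleration=(ax, ay))

state = estimate_motion((0.0, 0.0), (0.25, 0.0), (0.75, 0.0), dt=0.25)
print(state.moving, state.velocity)  # True (2.0, 0.0)
```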
Further, step 12) may also include:
Step 121) a step of detecting the position relationship information of the contour of the target object.
Those skilled in the art will understand that in the digital scene the target object is substituted by a geometric figure. From the position relationship information of the object's contour, a geometric figure that stands in for the target object in the scene can be built with two-dimensional or three-dimensional graphics generation techniques; this geometric figure is the model of the target object. This realizes the function of building the target object model in the scene.
Further, step 13), in which the human-computer interaction device performs an operation, includes:
Step 131) a step of selecting a layer.
Further, step 131) includes:
Step 1311) a step of setting a first time-length threshold;
Step 1312) a step of judging, according to the position relationship information, whether the current position of the target object corresponds to a non-icon region of an unselected layer; if so, step 1313) is performed; if not, no layer-selection operation is performed;
Step 1313) a step of recording, according to the position relationship information, the length of time the target object stays at the position corresponding to the non-icon region of the unselected layer;
Step 1314) a step of comparing the recorded time with the set first time-length threshold, and performing the operation of selecting the layer when the time exceeds the first time-length threshold.
Those skilled in the art will understand that the operating habits of different users differ, so the optimal first time-length threshold usually needs a different value for each user; making the first time-length threshold settable adapts the human-computer interaction device to different users' habits. The operation screen of the device is usually divided into icon regions and non-icon regions, and a user executes the function corresponding to an icon by selecting it. The operation screen usually also provides more than one layer, and the user can switch between layers by selecting one of them. By mapping coordinates between the constructed scene and the operation screen of the device, the target object's position on the operation screen can be obtained. If that position falls within a non-icon region of an unselected layer, the device starts timing how long the object stays in the region; if the dwell time exceeds the set first time-length threshold, the operation of selecting the layer is performed. This realizes the step of selecting a layer.
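The dwell-to-select logic of steps 1311) to 1314) can be sketched as a small state machine. The caller-supplied region test and the injectable clock are illustrative choices, not from the patent:

```python
import time

class DwellSelector:
    """Select a UI element after the pointer dwells on it long enough.

    `threshold_s` plays the role of the "first time-length threshold";
    whether the tracked object is inside the region is decided by the
    caller and passed in on each detection cycle.
    """
    def __init__(self, threshold_s, now=time.monotonic):
        self.threshold_s = threshold_s
        self.now = now          # injectable clock, eases testing
        self._entered_at = None

    def update(self, in_region):
        """Call once per detection cycle; returns True when selection fires."""
        if not in_region:
            self._entered_at = None
            return False
        if self._entered_at is None:
            self._entered_at = self.now()
            return False
        return self.now() - self._entered_at >= self.threshold_s

# Simulated clock: the object enters the region at t=0 and stays there
t = iter([0.0, 0.2, 0.4, 0.9])
sel = DwellSelector(threshold_s=0.8, now=lambda: next(t))
print([sel.update(True) for _ in range(4)])  # [False, False, False, True]
```

The same class, with a second threshold value, covers the icon-selection steps 1331) to 1334) described later.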
Further, step 13), in which the human-computer interaction device performs an operation, may also include, after step 131):
Step 132) a step of moving the layer.
Those skilled in the art will understand that by mapping coordinates between the constructed scene and the operation screen of the device, the motion track of the target object can be converted into a motion track on the operation screen. While a layer is selected, the selected layer can be moved along that track on the operation screen. This realizes the function of moving a layer.
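The scene-to-screen coordinate mapping used when moving a layer can be sketched as a plain linear map; the bounds, clamping, and rounding below are illustrative assumptions, since the patent does not define the calibration:

```python
def scene_to_screen(pos, scene_bounds, screen_size):
    """Map a scene coordinate to a pixel on the operation screen.

    scene_bounds = (xmin, ymin, xmax, ymax) of the tracked region;
    screen_size = (width, height) in pixels. A linear map is used here:
    the actual calibration is left open by the patent.
    """
    xmin, ymin, xmax, ymax = scene_bounds
    w, h = screen_size
    sx = (pos[0] - xmin) / (xmax - xmin) * (w - 1)
    sy = (pos[1] - ymin) / (ymax - ymin) * (h - 1)
    # Clamp so points slightly outside the tracked region stay on screen
    return (min(max(round(sx), 0), w - 1),
            min(max(round(sy), 0), h - 1))

# A motion track in scene coordinates becomes a track in pixels
track = [(0.0, 0.0), (5.0, 4.0), (10.0, 8.0)]
print([scene_to_screen(p, (0, 0, 10, 8), (480, 320)) for p in track])
```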
Further, step 13), in which the human-computer interaction device performs an operation, may also include:
Step 133) a step of selecting an icon.
Further, step 133) includes:
Step 1331) a step of setting a second time-length threshold;
Step 1332) a step of judging, according to the position relationship information, whether the current position of the target object corresponds to an icon region; if so, step 1333) is performed; if not, no icon-selection operation is performed;
Step 1333) a step of recording, according to the position relationship information, the length of time the target object stays at the position corresponding to the icon region;
Step 1334) a step of comparing the recorded time with the set second time-length threshold, and performing the operation of selecting the icon when the time exceeds the second time-length threshold.
Those skilled in the art will understand that the operating habits of different users differ, so the optimal second time-length threshold also usually needs a different value for each user; making the second time-length threshold settable adapts the human-computer interaction device to different users' habits. The operation screen of the device is usually divided into icon regions and non-icon regions, and a user executes the function corresponding to an icon by selecting it. By mapping coordinates between the constructed scene and the operation screen of the device, the target object's position on the operation screen can be obtained. If that position falls within an icon region, the device starts timing how long the object stays in the region; if the dwell time exceeds the set second time-length threshold, the operation of selecting the icon is performed. This realizes the step of selecting an icon.
Further, step 13), in which the human-computer interaction device performs an operation, may also include, after step 133):
Step 134) a step of moving the icon.
Those skilled in the art will understand that by mapping coordinates between the constructed scene and the operation screen of the device, the motion track of the target object can be converted into a motion track on the operation screen. While an icon is selected, the selected icon can be moved along that track on the operation screen. This realizes the function of moving an icon.
Those skilled in the art will understand that the user performs the operations of selecting and moving layers and icons without touching the human-computer interaction device. This contactless interaction not only improves the user experience but also avoids the wrist stiffness, pain, numbness, spasm, and similar symptoms caused by long-term contact with a keyboard, mouse, or screen in contact-based interaction.
Further, step 13), in which the human-computer interaction device performs an operation, may also include:
Step 135) a step of performing control in the scene.
Further, step 135) includes, in order:
Step 1351) a step of building the scene;
Step 1352) a step of building the target object model;
Step 1353) a step of establishing the spatial position relationship between the target object model and the scene.
Further, the scene is a stereoscopic scene.
Those skilled in the art will understand that with the maturing of 3D technology, digitized scenes are finding ever wider application; examples of such stereoscopic scenes include 3D game scenes, 3D video conference rooms, and 3D design studios. Through the step of building the scene, then building the target object model, and on that basis establishing the spatial position relationship between the target object model and the scene, the target object's state of rest or motion track in the constructed scene can be converted into a state of rest or motion track in the stereoscopic scene, thereby realizing control of the stereoscopic scene.
Further, step 13), in which the human-computer interaction device performs an operation, may also include:
Step 136) a step of executing a shortcut.
Further, step 136) includes:
Step 1361) a step of recognizing the target object's state of rest or motion track in the constructed scene.
Further, step 136) may also include:
Step 1362) a step of recognizing the model of the target object in the constructed scene and the changes of that model.
Those skilled in the art will understand that by monitoring whether the target object in the stereoscopic scene is stationary or in motion, together with its direction of motion, speed, and acceleration and the changes of the target object's model, and comparing this information against the stored shortcut settings, the command bound to a matching shortcut is executed; this realizes the function of executing shortcuts. For example, if the motion of drawing a cross with the hand is bound to the device's shutdown command, then when the user again draws a cross with the hand, the device automatically recognizes the command and shuts down. As another example, if the action of the hand closing from an open palm into a fist is bound to the shutdown command, then when the device recognizes that the hand model in the scene has deformed from palm into fist, it automatically recognizes the command and shuts down.
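Matching an observed motion track against stored shortcut gestures could look like the sketch below. The patent leaves the matching method unspecified; the normalization and the mean point-to-point distance used here are deliberately crude, illustrative choices:

```python
def normalize(track):
    """Translate to the start point and scale to unit size, so the same
    gesture drawn at different places and sizes compares equal."""
    x0, y0 = track[0]
    pts = [(x - x0, y - y0) for x, y in track]
    scale = max(max(abs(x), abs(y)) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def match_shortcut(track, shortcuts, tol=0.2):
    """Return the command bound to the closest stored gesture, if any.

    `shortcuts` maps a command name to a recorded template track of the
    same length; mean point-to-point distance decides the match.
    """
    t = normalize(track)
    best, best_err = None, tol
    for command, template in shortcuts.items():
        s = normalize(template)
        err = sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                  for (ax, ay), (bx, by) in zip(t, s)) / len(t)
        if err < best_err:
            best, best_err = command, err
    return best

# A recorded "draw a cross" gesture bound to shutdown (hypothetical data)
shortcuts = {"shutdown": [(0, 0), (1, 1), (2, 2), (2, 0), (1, 1), (0, 2)]}
observed = [(5, 5), (7, 7), (9, 9), (9, 5), (7, 7), (5, 9)]  # same shape, shifted and scaled
print(match_shortcut(observed, shortcuts))  # shutdown
```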
Further, the man-machine interaction method also includes:
Step 15) a step of setting a shortcut.
This realizes the setting of shortcuts. The device records whether the target object in the stereoscopic scene is stationary or in motion, together with its direction of motion, speed, and acceleration, or the changes of the target object's model, and binds this information to a particular device control command. When the target object later reproduces the same or a similar state of rest or motion, or the same model change, and the device recognizes it, the device automatically executes the bound control command; this realizes the setting of man-machine interaction shortcuts. For example, by such a setting the motion of drawing a cross with the hand is bound to the device's shutdown command, so that when the user again draws a cross with the hand, the device automatically recognizes the command and shuts down. As another example, the action of the hand closing from palm into fist is bound to the shutdown command, so that when the device recognizes the hand model in the scene deforming from palm into fist, it automatically recognizes the command and shuts down.
Embodiment two
Embodiment two provides a man-machine interaction method comprising, in order:
Step 21) a step of recognizing the eyes;
Step 22) a step of detecting the current distance L between the eyes and the device;
Step 23) a step of judging whether the distance L is less than a set threshold L0, and performing step 24) when L is less than L0;
Step 24) a step of prompting the user.
Those skilled in the art will understand that the device first recognizes the human eyes and detects their position relationship relative to the man-machine interaction device. The current distance L is compared with the set threshold L0; if L is less than L0, the user is reminded to keep a proper distance between the eyes and the device. This realizes a contactless function of reminding the user to keep distance from the device, avoiding eye fatigue or even impaired vision caused by overusing the eyes.
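The comparison in steps 22) to 24) reduces to a single threshold check. A sketch; the centimeter units and the message text are illustrative assumptions:

```python
def check_eye_distance(distance_l, threshold_l0):
    """Return a reminder message when the eyes are closer than L0, else None."""
    if distance_l < threshold_l0:
        return ("Too close to the screen "
                f"({distance_l:.0f} cm < {threshold_l0:.0f} cm): move back")
    return None

print(check_eye_distance(25.0, 30.0))  # reminder fires
print(check_eye_distance(45.0, 30.0))  # None
```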
Further, the man-machine interaction method also includes:
Step 25) a step of setting the value of the threshold L0.
Those skilled in the art will understand that because people differ in height and eyesight and use the device in different environments, the optimal reminding distance between the eyes and the device also differs from case to case. Making the L0 threshold settable adapts the human-computer interaction device to the habits of different users.
Embodiment three
Embodiment three provides a man-machine interaction device implementing the man-machine interaction methods of embodiment one and embodiment two, comprising:
an identification module 101 for recognizing a target object and sending an instruction when the target object is recognized;
a detecting module 102 for, upon receiving the instruction sent by the identification module 101, detecting the position relationship information between the target object and the human-computer interaction device and transmitting it;
an interactive information processing module 103 for processing the interactive information and sending a control instruction according to the processing result;
a performing module 104 for performing an operation according to the control instruction sent by the interactive information processing module 103.
Those skilled in the art will understand that this enables a user to control the human-computer interaction device by moving an object in front of it, for example the operations of selecting a layer, selecting an icon, moving a layer, and moving an icon described in embodiment one and embodiment two. The user needs no keyboard or touch screen to input control instructions and makes no contact with the controlled human-computer interaction device, reducing its mechanical wear.
Further, the performing module 104 includes:
a modeling unit 1041 for constructing the scene according to the position relationship information;
a display unit 1042 for displaying the scene.
In this way the scene containing the target object and the human-computer interaction device, together with their position relationship, can be simulated and displayed, so that the user observes the target object's control of the device more intuitively and the device is easier to use.
Further, the detecting module 102 includes multiple distance-measuring elements, at least three in number. In this way the three-dimensional position relationship between the target object and the human-computer interaction device can be detected from different directions, and the modeling unit 1041 can construct a three-dimensional scene and display it through the display unit 1042.
Further, the performing module 104 includes an alarm unit for performing a reminding operation according to the control instruction sent by the interactive information processing module 103.
Those skilled in the art will understand that the interactive information processing module 103 can send the instruction that triggers the reminding operation when the position relationship information falls below a set threshold. For example, when the distance between the user's eyes and the human-computer interaction device is less than the set threshold, the reminding operation is performed to prompt the user to take note. The threshold can be set according to the user's needs.
The device can also realize control through previously recorded shortcuts. The interactive information processing module 103 records whether the target object in the stereoscopic scene is stationary or in motion, together with its direction of motion, speed, and acceleration, or the changes of the target object's model, and binds this information to a particular device control command. When the target object later reproduces the same or a similar state of rest or motion, or the same model change, and this is recognized via the distance-measuring elements, the device automatically executes the bound control command. This realizes both the setting of man-machine interaction shortcuts and device control through previously recorded shortcuts. For example, by such a setting the motion of drawing a cross with the hand is bound to the device's shutdown command, so that when the user again draws a cross with the hand, the device automatically recognizes the command and shuts down. As another example, the action of the hand closing from palm into fist is bound to the shutdown command, so that when the device recognizes the hand model in the scene deforming from palm into fist, it automatically recognizes the command and shuts down.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A man-machine interaction method, comprising, in order:
Step 11) a step of recognizing a target object;
Step 12) a step of detecting the target object to obtain real-time position relationship information between the target object and a human-computer interaction device;
Step 13) a step of performing an operation;
characterized in that step 13) includes:
Step 131) a step of selecting a layer;
and characterized in that step 131) includes:
Step 1311) a step of setting a first time-length threshold;
Step 1312) a step of judging, according to the position relationship information, whether the current position of the target object corresponds to a non-icon region of an unselected layer, and if so, performing step 1313);
Step 1313) a step of recording, according to the position relationship information, the length of time the target object stays at the position corresponding to the non-icon region of the unselected layer;
Step 1314) a step of comparing the recorded time with the set first time-length threshold, and performing the operation of selecting the layer when the time exceeds the first time-length threshold.
2. The man-machine interaction method according to claim 1, characterized by further comprising, after step 131):
Step 132) a step of moving the layer.
3. A man-machine interaction method, comprising, in order:
Step 11) a step of recognizing a target object;
Step 12) a step of detecting the target object to obtain real-time position relationship information between the target object and a human-computer interaction device;
Step 13) a step of performing an operation;
wherein step 13) includes:
Step 133) a step of selecting an icon;
characterized in that step 133) includes:
Step 1331) a step of setting a second time-length threshold;
Step 1332) a step of judging, according to the position relationship information, whether the current position of the target object corresponds to an icon region, and if so, performing step 1333);
Step 1333) a step of recording, according to the position relationship information, the length of time the target object stays at the position corresponding to the icon region;
Step 1334) a step of comparing the recorded time with the set second time-length threshold, and performing the operation of selecting the icon when the time exceeds the second time-length threshold.
4. The man-machine interaction method according to claim 3, characterized by further comprising, after step 133):
Step 134) a step of moving the icon.
CN201210117974.9A 2012-04-22 2012-04-22 Man-machine interaction method and its device Expired - Fee Related CN103376884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210117974.9A CN103376884B (en) 2012-04-22 2012-04-22 Man-machine interaction method and its device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210117974.9A CN103376884B (en) 2012-04-22 2012-04-22 Man-machine interaction method and its device

Publications (2)

Publication Number Publication Date
CN103376884A CN103376884A (en) 2013-10-30
CN103376884B true CN103376884B (en) 2017-08-29

Family

ID=49462108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210117974.9A Expired - Fee Related CN103376884B (en) 2012-04-22 2012-04-22 Man-machine interaction method and its device

Country Status (1)

Country Link
CN (1) CN103376884B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577025B (en) * 2013-10-31 2016-05-11 中国电子科技集团公司第四十一研究所 A kind of unitized processing method of instrument man-machine interaction
CN105353871B (en) * 2015-10-29 2018-12-25 上海乐相科技有限公司 The control method and device of target object in a kind of virtual reality scenario
CN108932062B (en) * 2017-05-28 2021-09-21 姚震 Electronic device, input device control method
CN113129340B (en) * 2021-06-15 2021-09-28 萱闱(北京)生物科技有限公司 Motion trajectory analysis method and device for operating equipment, medium and computing equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893934A (en) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 Method and device for intelligently adjusting screen display
CN101918908A (en) * 2007-09-28 2010-12-15 阿尔卡特朗讯 Method for determining user reaction with specific content of a displayed page


Also Published As

Publication number Publication date
CN103376884A (en) 2013-10-30

Similar Documents

Publication Publication Date Title
US8866781B2 (en) Contactless gesture-based control method and apparatus
US9244544B2 (en) User interface device with touch pad enabling original image to be displayed in reduction within touch-input screen, and input-action processing method and program
KR101919169B1 (en) Using movement of a computing device to enhance interpretation of input events produced when interacting with the computing device
KR101861395B1 (en) Detecting gestures involving intentional movement of a computing device
US9696882B2 (en) Operation processing method, operation processing device, and control method
CN105117056B (en) A kind of method and apparatus of operation touch-screen
KR20160003031A (en) Simulation of tangible user interface interactions and gestures using array of haptic cells
EP2676178A1 (en) Breath-sensitive digital interface
KR20170109695A (en) Depth-based user interface gesture control
US20140022171A1 (en) System and method for controlling an external system using a remote device with a depth sensor
CN104866097B (en) The method of hand-held signal output apparatus and hand-held device output signal
CN103376884B (en) Man-machine interaction method and its device
Rekimoto Organic interaction technologies: from stone to skin
KR20190059726A (en) Method for processing interaction between object and user of virtual reality environment
TWI471792B (en) Method for detecting multi-object behavior of a proximity-touch detection device
CN103558913A (en) Virtual input glove keyboard with vibration feedback function
Watanabe et al. Generic method for crafting deformable interfaces to physically augment smartphones
KR101688193B1 (en) Data input apparatus and its method for tangible and gestural interaction between human-computer
CN103885696A (en) Information processing method and electronic device
WO2018042923A1 (en) Information processing system, information processing method, and program
CN104951211A (en) Information processing method and electronic equipment
CN204740560U (en) Handheld signal output device
Matulic et al. Terrain modelling with a pen & touch tablet and mid-air gestures in virtual reality
CN108008819A (en) A kind of page map method and terminal device easy to user's one-handed performance
TWI483162B (en) Method for detecting multi-object behavior of a proximity-touch detection device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170829

Termination date: 20200422