CN106527729A - Non-contact type input method and device - Google Patents
- Publication number
- CN106527729A (application CN201611034173.0A)
- Authority
- CN
- China
- Prior art keywords
- content
- contactless
- user
- input mode
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a non-contact input method and device. The method comprises the following steps: determining, from among multiple non-contact input modes, at least one non-contact input mode as a first non-contact input mode; receiving, according to the first non-contact input mode, content that a user inputs via the first non-contact input mode; and processing the content. With the method provided by the application, human-machine interaction becomes more natural, convenient, and varied in its modes, giving the user a person-to-person feel and greatly improving the user experience.
Description
Technical field
The application relates to the field of electronic information technology, and in particular to a non-contact input method and device.
Background art
As artificial-intelligence technology matures, more and more smart devices appear in daily life, and people increasingly interact with machines to satisfy their different needs, for example through intelligent wearable devices and virtual-reality devices. The naturalness and convenience of human-machine interaction has therefore become a subject of major concern, and the input method, as the entry point of human-machine interaction, directly determines the interaction experience.
Input methods in the related art typically rely on hardware such as a keyboard, mouse, or touch screen: during input, the user clicks, moves a mouse, or touches a screen with a finger, and thus must be in physical contact with the input device. This cannot well satisfy the demand for natural and convenient human-machine interaction. Moreover, even where the related art provides non-contact voice input, that single input mode is too limited.
Summary of the invention
The application aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, one object of the application is to propose a non-contact input method that makes human-machine interaction more natural, convenient, and varied, gives the user a person-to-person feel, and greatly improves the user experience.
A further object of the application is to propose a non-contact input device.
To achieve the above objects, an embodiment of the first aspect of the application proposes a non-contact input method, comprising: determining, from among multiple non-contact input modes, at least one non-contact input mode as a first non-contact input mode; receiving, according to the first non-contact input mode, content input by a user via the first non-contact input mode; and processing the content.
In the non-contact input method proposed by the embodiment of the first aspect, inputting content via a non-contact input mode makes human-machine interaction more natural and convenient, and determining at least one non-contact input mode from among multiple non-contact input modes makes the input mode selectable rather than fixed and single, achieving diversity of input modes and improving flexibility. Human-machine interaction thus becomes more natural, convenient, and varied, giving the user a person-to-person feel and greatly improving the user experience.
To achieve the above objects, an embodiment of the second aspect of the application proposes a non-contact input device, comprising: a determining module, configured to determine, from among multiple non-contact input modes, at least one non-contact input mode as a first non-contact input mode; an input module, configured to receive, according to the first non-contact input mode, content input by a user via the first non-contact input mode; and a processing module, configured to process the content.
In the non-contact input device proposed by the embodiment of the second aspect, inputting content via a non-contact input mode makes human-machine interaction more natural and convenient, and determining at least one non-contact input mode from among multiple non-contact input modes makes the input mode selectable rather than fixed and single, achieving diversity of input modes and improving flexibility. Human-machine interaction thus becomes more natural, convenient, and varied, giving the user a person-to-person feel and greatly improving the user experience.
Additional aspects and advantages of the application will be set forth in part in the following description; they will in part become apparent from that description, or be learned through practice of the application.
Description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and readily understood from the following description of the embodiments taken together with the accompanying drawings, in which:
Fig. 1 is a flow chart of a non-contact input method proposed by one embodiment of the application;
Fig. 2 is a flow chart of a non-contact input method proposed by another embodiment of the application;
Fig. 3 is a schematic diagram of the gesture for starting the in-air handwriting input mode in an embodiment of the application;
Fig. 4 is a schematic diagram of the gesture for stopping the in-air handwriting input mode in an embodiment of the application;
Fig. 5 is a schematic diagram of the single-click gesture in an embodiment of the application;
Fig. 6 is a structural diagram of a non-contact input device proposed by one embodiment of the application;
Fig. 7 is a structural diagram of a non-contact input device proposed by another embodiment of the application.
Detailed description of the embodiments
Embodiments of the application are described in detail below; examples of the embodiments are shown in the drawings, in which identical or similar reference numerals throughout denote identical or similar modules, or modules with identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they serve only to explain the application and are not to be construed as limiting it. On the contrary, the embodiments of the application cover all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
Fig. 1 is a flow chart of a non-contact input method proposed by one embodiment of the application.
As shown in Fig. 1, the method of the present embodiment includes:
S11: From among multiple non-contact input modes, determine at least one non-contact input mode as a first non-contact input mode.
The first non-contact input mode is the input mode that the user subsequently uses to input content.
The multiple non-contact input modes include, for example, at least two of: a voice input mode, an in-air handwriting input mode, a gesture input mode, and a photographing input mode.
In some examples, the machine interacting with the user can determine the first non-contact input mode according to the user's selection. For instance, the machine can receive an input-mode start command from the user, the start command carrying information identifying the first non-contact input mode, so that the first non-contact input mode can be determined from the input-mode start command.
In other examples, the machine interacting with the user can determine the first non-contact input mode automatically; for instance, the machine can treat every non-contact input mode it supports as a first non-contact input mode. In that case, any content detected in the subsequent flow may be content input by the user.
When an input-mode start command is received from the user, the user may have selected at least one of the known non-contact input modes as the first non-contact input mode. In concrete use, the user can learn which non-contact input modes the machine supports from material such as the manufacturer's documentation, and select at least one of those supported modes as the first non-contact input mode. Alternatively, the user can, without consulting any documentation, select any non-contact input mode known to the user as the first non-contact input mode; in the subsequent flow, if the machine supports the selected mode it responds accordingly, and if the machine does not support it, it does not respond. Further, when not responding, the machine may remind the user that the mode is unsupported and prompt the user to select another non-contact input mode.
The input-mode start command received from the user can be input either via a non-contact input mode or via a contact input mode. For example, the user can speak the start command by voice, or click a physical button or virtual button on the machine to input it.
Further, when the user inputs the start command via a non-contact input mode, call the non-contact input mode used to input the start command the second non-contact input mode; the second non-contact input mode may or may not coincide with the first non-contact input mode. For example, if the user selects voice as the first non-contact input mode, i.e. subsequently inputs content by voice, the user can also input the start command by voice, such as by saying "start voice". Alternatively, if the user selects photographing as the first non-contact input mode, i.e. subsequently inputs content by taking pictures, the user can still input the start command by voice, such as by saying "start photographing".
In some examples, the first non-contact input mode can include at least one of: a voice input mode, an in-air handwriting input mode, a gesture input mode, and a photographing input mode. The second non-contact input mode can include at least one of: a voice input mode and a gesture input mode.
S12: According to the first non-contact input mode, receive the content input by the user via the first non-contact input mode.
After the first non-contact input mode is determined, the machine can perform whatever processing the corresponding input mode requires. For example, if the first non-contact input mode is voice and the microphone has not yet been started, the machine now starts the microphone, collects the user's speech data through it, and records the collected speech data for subsequent processing, while disabling data collected by other input modes — for instance, not starting the camera, ignoring images the camera collects, or not further processing images the camera has collected.
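The sensor-gating behaviour just described can be sketched as follows; this is an illustrative assumption about how the machine might keep only the selected mode's data, not an implementation from the patent:

```python
def collect_input(mode: str, mic_data: bytes, camera_frames: list) -> dict:
    """Record only the data belonging to the selected first input mode;
    data collected by other sensors is ignored, as described above."""
    if mode == "voice":
        return {"speech": mic_data}          # keep microphone data, ignore camera
    if mode in ("air_handwriting", "gesture", "photographing"):
        return {"frames": camera_frames}     # keep camera data, ignore microphone
    raise ValueError(f"unsupported mode: {mode}")
```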
S13: Process the content.
In some examples, the content can be displayed on an interactive screen.
The content displayed on the interactive screen can include text data, image data, video data, application programs, and so on.
It should be noted that some received content cannot be displayed directly. For example, when the user inputs content by voice, the received content is speech data; in that case the content that cannot be displayed directly needs to be converted into displayable content before being shown, e.g. by using speech recognition technology to recognize the speech data as text data and then displaying the text data.
The interactive screen can be a common electronic device, such as an electronic display or an electronic touch screen; alternatively, it can be a display based on a new technology or new medium, such as an air screen.
In some examples, the content can be responded to. For example, if the content is a command to open a certain browser, the browser indicated by the content is opened; or, if the content is a single-click command, the user's single-click command is responded to.
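The two-way handling of S13 — display some content, respond to other content — can be sketched like this; the command phrases and the return convention are assumptions made for illustration:

```python
def process_content(content: str):
    """Hypothetical S13 handler: content that is a command is executed,
    other content is returned for display on the interactive screen."""
    if content.startswith("open "):
        target = content[len("open "):]
        return ("execute", f"opening {target}")   # e.g. open the named browser
    if content == "single-click":
        return ("execute", "click performed")     # respond to a click command
    return ("display", content)                   # show everything else
```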
In the present embodiment, inputting content via a non-contact input mode makes human-machine interaction more natural and convenient, and determining at least one non-contact input mode from among multiple non-contact input modes makes the non-contact input mode selectable rather than fixed and single, achieving diversity of input modes and improving flexibility. Human-machine interaction thus becomes more natural, convenient, and varied, giving the user a person-to-person feel and greatly improving the user experience.
Fig. 2 is a flow chart of a non-contact input method proposed by another embodiment of the application.
The present embodiment takes displaying content as an example.
As shown in Fig. 2 the method for the present embodiment includes:
S21: Receive an input-mode start command input by the user via a second non-contact input mode, and determine the first non-contact input mode according to the start command.
S22: According to the first non-contact input mode, receive the content input by the user via the first non-contact input mode.
S23: Display the content on an interactive screen.
The cases where the first non-contact input mode is the voice input mode, the in-air handwriting input mode, the gesture input mode, and the photographing input mode are described in turn below.
Example one: voice input mode.
In this mode, the user speaks the content to be input into a microphone provided on the machine.
In concrete input, the user can first speak the input-mode start command, e.g. say "start voice", and then input content by voice. After collecting the speech data, the system can use near-field or far-field speech recognition technology to recognize it as text data and display the text on the interactive screen — for example, displaying the text corresponding to the speech the user says: "the weather in Hefei today is fine, temperature 20 degrees Celsius".
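The voice-input flow above can be sketched as a small pipeline; no real speech-recognition engine is assumed here — the recognizer is injected as a function, and the stand-in below exists only so the example runs:

```python
from typing import Callable

def voice_input(audio: bytes, recognizer: Callable[[bytes], str]) -> str:
    """Collect speech data, run it through a near- or far-field speech
    recognizer, and return the text to display on the interactive screen."""
    return recognizer(audio)

# Illustrative stand-in recognizer, used for demonstration only.
def fake_recognizer(audio: bytes) -> str:
    return "the weather in Hefei today is fine, temperature 20 degrees Celsius"
```

In a real system the injected recognizer would wrap an actual ASR engine; injecting it keeps the capture-recognize-display flow independent of any particular engine.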
Example two: in-air handwriting input mode.
In this mode, the user writes the content to be input in the air, captured by a camera provided on the machine.
In concrete input, the user can first produce the input-mode start command, for example by gesture: as shown in Fig. 3, the user can first clench a fist and then extend the index finger to indicate starting the in-air handwriting input mode. The user then begins to write the content to be input in the air. During input, the system records the motion track of the user's hand; after median filtering, it obtains the set of track points of the user's hand motion, splices the track points, performs handwriting recognition on the spliced track, and obtains the content input by the user. After the system receives the user's stop-input gesture, it displays the content the user has input. The stop-input gesture can be, for example, the user opening the fist into a flat palm; Fig. 4 is a schematic diagram of the stop-input gesture in the in-air handwriting mode. Other gestures can of course be predefined; the application is not limited in this respect.
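The median-filtering step mentioned above can be sketched as a sliding median over the (x, y) track points; the window size is an assumption, and a real system would tune it to the camera frame rate:

```python
from statistics import median

def median_filter_track(points, k=3):
    """Smooth a hand-motion track with a sliding median filter of odd
    window size k; points are (x, y) tuples. Outliers caused by a noisy
    hand detection are suppressed before the track points are spliced."""
    half = k // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        window = points[lo:hi]
        out.append((median(p[0] for p in window),
                    median(p[1] for p in window)))
    return out
```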
It should be noted that when inputting text content in the in-air handwriting mode, the user can use single-character input, overlapped input, multi-region input, row input, and similar modes; the application is not limited in this respect. In single-character input, the user inputs only one character at a time; after the input is finished the user stops, and the system directly displays the input character. In overlapped input, the user inputs multiple characters continuously in the same region. In multi-region input, the user inputs multiple characters continuously in different regions. In row input, the user inputs the characters in order from left to right.
Example three: gesture input mode.
This mode is similar to handwriting input, but the input is not limited to tracking the motion track of the hand; it can also track the motion track of an arm or another body part, or combined operations of both hands or multiple hands. Since gesture input is less convenient than handwriting input for entering text content, it is generally used to input control commands such as click, move cursor, grab displayed content, confirm, and cancel. Taking a hand gesture as an example, Fig. 5 is a schematic diagram of the user's hand motion for a single-click operation: the user's hand first opens, then clenches into a fist. Taking an arm gesture as an example, crossing both arms can represent cancel.
In concrete input, according to predetermined gesture motions, the motion track of the corresponding part of the user's body is tracked through the camera, gesture recognition is performed, and the system gives the corresponding response according to the recognition result. The gesture recognition method can be any existing technology or any technology arising in the future, and is not described in detail here.
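The mapping from a recognized gesture to a control command, using the two examples given above, can be sketched as a simple lookup; the sequence encoding is an assumption, since the patent leaves the recognition method open:

```python
def gesture_to_command(gesture_sequence):
    """Map a recognized gesture sequence to a control command, following
    the examples above: open palm then fist = single click; crossed
    arms = cancel. Unknown sequences produce no command."""
    if gesture_sequence == ["palm_open", "fist"]:
        return "single_click"
    if gesture_sequence == ["arms_crossed"]:
        return "cancel"
    return "unrecognized"
```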
Example four: photographing input mode.
The user aims the system's external camera at the content to be photographed; after the picture is taken, image recognition is performed on the photographed content to obtain the corresponding input content.
In concrete photographing input, the user first needs to open the corresponding photographing mode. To open it, the user can speak the corresponding open command by voice, such as "open photographing", or make the gesture for opening photographing. After the photographing mode is opened, the system prompts the user to adjust the camera's shooting angle and distance, for example by displaying a spherical operation bar for adjusting the camera on the interactive screen. The user can grab the operation bar by a gesture, such as clenching a fist, control the camera direction by moving the bar up and down, and adjust the focal length by moving it forward and backward; when the adjustment is finished, the user can confirm this by voice or gesture. After aligning the adjusted camera with the article to be photographed, the user can give the photographing command by voice or gesture, e.g. by speaking "placement finished, take the picture". The system finally takes the picture and feeds the recognition result of the photographed article into the system.
It should be noted that the system can open multiple input modes simultaneously to input content, or receive content from only one input mode at a time. For example, the user says by voice "I am going to Beijing today; book me a train ticket to Beijing" while making the crossed-arms gesture; after the system receives the command corresponding to the gesture, it directly cancels the booking. This keeps human-machine interaction natural and convenient, and brings it closer to person-to-person interaction.
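The simultaneous-input behaviour in the train-ticket example can be sketched as a small arbitration rule; the priority of the cancel gesture over a pending voice command is an assumption made for illustration:

```python
def arbitrate(voice_command, gesture_command):
    """Combine simultaneously received inputs: a 'cancel' gesture
    overrides a pending voice command, as in the train-ticket example."""
    if gesture_command == "cancel":
        return "booking cancelled"
    if voice_command:
        return f"executing: {voice_command}"
    return "no input"
```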
S24: Edit the displayed content according to the user's non-contact operations, and display the edited content.
The displayed content can be text data, image data, video data, application programs, and so on.
Editing the displayed content includes, for example, modifying text data, adding or deleting image data, and performing corresponding operations on the various application programs on the interactive screen.
The edit modes can be divided into: delete, insert, and modify, where modify can in turn be divided into delete and insert and is carried out according to the delete and insert flows.
Specifically, the editing includes:
If the edit mode includes delete, determining the content to be deleted and receiving a delete command according to the user's non-contact operations, and deleting the content to be deleted after the delete command is received; or,
If the edit mode includes insert, determining the insertion position and the content to be inserted according to the user's non-contact operations, and inserting the content to be inserted at the insertion position; or,
If the edit mode includes modify, determining the content to be modified and receiving a delete command according to the user's non-contact operations; after the delete command is received, treating the content to be modified as content to be deleted and deleting it; and obtaining the modified content according to the user's non-contact operations, treating the modified content as content to be inserted, and inserting it at the position of the content to be modified.
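The three edit modes above compose naturally: modify is exactly a delete followed by an insert at the same position. A minimal sketch on plain strings (positions here stand in for the regions the user selects by voice or gesture):

```python
def delete_text(text: str, start: int, end: int) -> str:
    """Delete the region [start, end) selected by the user's non-contact operation."""
    return text[:start] + text[end:]

def insert_text(text: str, pos: int, fragment: str) -> str:
    """Insert fragment at the insertion position selected by the user."""
    return text[:pos] + fragment + text[pos:]

def modify_text(text: str, start: int, end: int, fragment: str) -> str:
    """Modify = delete the content to be modified, then insert the
    modified content at the same position, as described above."""
    return insert_text(delete_text(text, start, end), start, fragment)
```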
Further, before editing, an edit start command input by the user via a non-contact input mode can first be received, and editing is started according to the edit start command. The edit start command can be input by voice and/or gesture.
In the delete step of delete or modify, the user can select the content to be deleted by voice and/or gesture, and can input the delete command by voice and/or gesture. In the insert step of insert or modify, the user can select the insertion position by voice and/or gesture, and can input the content to be inserted via at least one of the voice, in-air handwriting, gesture, and photographing input modes.
The editing process is described below, taking the editing of text data as an example.
Editing mainly includes delete, insert, and modify operations on text data. The text data to be edited is displayed on the interactive screen, and the user edits it without touching the screen, as detailed below:
Example one: deleting text data
First, the user opens the delete operation by voice or gesture, e.g. by speaking "I want to delete a text fragment"; then the user selects the text region to be deleted by gesture and/or voice; finally the user confirms the deletion by voice or gesture, e.g. by speaking the voice command "delete", and the selected text region is deleted. The details are as follows:
(1) Deleting a text region by gesture
When a text region is deleted by gesture, the corresponding delete operation is performed mainly after the user's gesture motion is recognized. The details are as follows:
The user first raises a hand to select the text region with an initiating gesture; at this point an icon appears on the interactive screen to mark the cursor position. The icon can be, for example, a hand-shaped icon or an icon of another shape, such as a vertical-line icon; the application is not limited in this respect. The icon follows the motion track of the user's hand. According to the position of the icon, the user moves the cursor by gesture to a point before the text region to be deleted, then starts to select the text to be deleted. Specifically, the user can start the text-selection mode with a fist-clenching action, then move the clenched fist to the tail of the text region to be deleted, selecting the whole text region to be deleted in turn. The selected text region can then be deleted by a deletion gesture, for example opening the clenched fist into a flat palm, completing the text deletion process.
It should be noted that during deletion of a text region by gesture, voice can be combined with the gesture method to complete the deletion. For example, when selecting the text data, a voice command such as "select text" can switch the cursor into selection mode; after the cursor enters selection mode, the cursor is moved by gesture to the tail of the text data to be deleted. The selected text can also be deleted directly by voice, e.g. with the voice command "delete"; or, after all the text data to be deleted has been selected by gesture, the text data is deleted directly by a voice command.
(2) Deleting a text region by voice
Deleting a text region by voice mainly performs the corresponding delete operation after semantic understanding of the user's speech data. The details are as follows:
First, the user's delete speech data is received, such as "delete the weather in Hefei" or "delete the text between weather and temperature"; the delete speech data mainly describes the text data content to be deleted. If only one piece of text data on the interactive screen matches the user's description, that text data can be deleted directly according to the semantic-understanding result. If multiple places on the interactive screen match the user's description, the system can show the user all the matching text data, for example displayed in bold, in a different color font, or in selection mode; the user then needs to select again the text data to be deleted. For example, if five matching pieces of text data are displayed, the user can directly say the label of one of them, such as "the second one", and the system then directly deletes the second matching piece of text data.
It should be noted that during deletion of text data by voice, gesture operations can also be combined; for example, when the interactive screen displays multiple pieces of text data matching the user's description, the user can directly select the text data to be deleted by gesture, and that text data is deleted.
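The ambiguity-resolution flow above — delete directly on a unique match, otherwise ask the user to pick an occurrence by ordinal — can be sketched as follows (plain substring matching stands in for the semantic understanding the patent describes):

```python
def find_matches(text: str, phrase: str):
    """Return the start index of every occurrence of the described phrase."""
    out, i = [], text.find(phrase)
    while i != -1:
        out.append(i)
        i = text.find(phrase, i + 1)
    return out

def delete_by_voice(text: str, phrase: str, choice=None):
    """If the phrase occurs once, delete it directly; if it occurs several
    times, a 1-based ordinal choice (the spoken 'second one') picks which
    occurrence to delete, as in the flow above."""
    matches = find_matches(text, phrase)
    if not matches:
        return text, "not found"
    if len(matches) > 1 and choice is None:
        return text, f"{len(matches)} matches, selection needed"
    start = matches[0] if len(matches) == 1 else matches[choice - 1]
    return text[:start] + text[start + len(phrase):], "deleted"
```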
Example two: inserting text data
First, the user opens the insert-text mode by voice or gesture, e.g. by speaking "I want to insert a text fragment"; then the user selects the position where the text is to be inserted by gesture and/or voice; finally the user inputs the text to be inserted by voice or in-air handwriting. The concrete insertion process is as follows:
(1) Inserting text data by gesture
First, the insert-text mode is opened by gesture. Then the user moves the cursor by gesture to select the position where the text is to be inserted; for example, the user opens the cursor-moving mode by spreading the palm and starting to move, and moves the cursor to the position where the text needs to be inserted. The in-air handwriting mode is then opened by gesture — for example, the user clenches the hand into a fist with the index finger extended — and the user inputs the text data to be inserted. After the input ends, the user stops input by gesture, for example by spreading the palm, and the system displays the inserted text data.
It should be noted that during insertion of text data by gesture, voice can be combined to realize the insert operation; for example, the user moves the cursor directly to the insertion position by voice and inputs the text data to be inserted by in-air handwriting, or, after moving the cursor by gesture to the position where text needs to be inserted, the user directly inputs the text content to be inserted by voice.
(2) Inserting text data by voice
First, the insert-text mode is opened by voice, e.g. "I want to insert text". Then the system receives the user's insert speech data, which selects the insertion position and the corresponding content to insert, such as "insert today before weather". If there is only one "weather" on the current interactive screen, "today" can be inserted directly before "weather". If there are multiple occurrences of "weather" on the current interactive screen, the system displays the multiple candidate insertion positions to the user, and the user selects the required insertion position by voice. For example, if there are five occurrences of "weather" on the current interactive screen and the user directly says "the second one", it can be determined that the insertion position is before the second "weather", and the system directly inserts "today" at the corresponding position.
It should be noted that when inserting text by voice, gesture operations can also be combined; for example, when the system displays multiple candidate insertion positions, the user directly moves the cursor by gesture to the corresponding insertion position and confirms it by voice or gesture, and the system automatically inserts the text to be inserted at that position and displays the text data after insertion.
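The "insert X before the second Y" flow above can be sketched as follows; the separating space added after the inserted fragment is an illustrative assumption:

```python
def insert_before(text: str, anchor: str, fragment: str, choice: int = 1) -> str:
    """Insert fragment before the chosen occurrence of anchor (1-based),
    as in "insert 'today' before the second 'weather'"."""
    positions, i = [], text.find(anchor)
    while i != -1:
        positions.append(i)
        i = text.find(anchor, i + 1)
    pos = positions[choice - 1]
    return text[:pos] + fragment + " " + text[pos:]
```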
Example three: modifying text data
First, the user opens the modify-text mode by voice or gesture, e.g. by speaking "I want to modify text"; then the user selects the text to be modified by gesture and/or voice; finally the user modifies the selected text by voice or in-air handwriting. The concrete modification process is as follows:
(1) Modifying text data by gesture
First, the user opens the modify-text mode by gesture; then the user moves the cursor by gesture to a point before the text to be modified and selects and deletes the text that needs modification, the deletion method being identical to the gesture-based text deletion described above; finally the modified text is inserted at the deletion position by in-air handwriting, the detailed process being identical to the gesture-based text insertion process.
It should be noted that when modifying text data by gesture, voice can also be combined to modify the text data; the concrete combination methods are the same as those for deleting or inserting text described above — for example, after selecting the text to be deleted by gesture, the corresponding text is deleted by voice, or the modified text is inserted by voice.
(2) Modifying text data by voice
The user first opens the modification mode by voice, then specifies by voice the text to be modified and the modified text, for example "change 'Hefei' to 'Nanjing'". If there is only one instance of the text to be modified on the current interactive screen, the modification can be performed directly. If there are multiple instances of the text described by the user, the system automatically displays all candidate instances, and the user selects the one to be modified by voice.
It should be noted that when modifying text by voice, the modification can also be combined with gestures. For example, when the system finds multiple instances of the text described by the user, the user directly selects the instance to be modified with a gesture.
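The voice-driven modification flow above ("change A to B": replace directly when the text occurs once, otherwise display all candidates and let the user choose) can be summarized in a short sketch. The `choose` callback stands in for the user's voice or gesture selection and, like the function name, is an illustrative assumption:

```python
def modify_text(screen_text: str, old: str, new: str, choose=None) -> str:
    """Replace one occurrence of `old` with `new`.

    With a single occurrence the replacement is direct; with several, the
    system would display all candidates and `choose` (a stand-in for the
    user's voice/gesture selection) returns the 0-based occurrence index.
    """
    positions, start = [], 0
    while (i := screen_text.find(old, start)) != -1:
        positions.append(i)
        start = i + len(old)
    if not positions:
        return screen_text                 # text to modify not on screen
    idx = 0 if len(positions) == 1 else choose(len(positions))
    pos = positions[idx]
    # modification = delete the old text, then insert the new text in place
    return screen_text[:pos] + new + screen_text[pos + len(old):]
```

With one occurrence, `modify_text("go to Hefei", "Hefei", "Nanjing")` replaces directly; with several, the callback selects which instance to change.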
Further, the application can also edit image data, video data and application programs displayed on the interactive screen in multiple modes. The specific editing methods are similar to those for text data; that is, voice, gestures, air handwriting and other editing methods can all be used during editing. For example, to delete an image on the interactive screen, the user can move the cursor in front of the image to be deleted with a gesture, select the image, and delete it directly by voice.
In this embodiment, inputting content through non-contact input modes makes human-computer interaction more natural and convenient. By determining at least one non-contact input mode among multiple non-contact input modes, the input mode becomes selectable rather than fixed and single, which achieves diversity of input modes and improves flexibility. Human-computer interaction thus becomes more natural, convenient and diverse, giving the user the feeling of interacting with another person and greatly improving the user experience. Since both starting and editing are performed through non-contact operations, essentially the whole interaction can be contactless, which further improves the user experience.
Fig. 6 is a schematic structural diagram of a non-contact input device proposed by an embodiment of the application.
As shown in Fig. 6, the device 60 of this embodiment includes a determining module 61, an input module 62 and a processing module 63.
The determining module 61 is configured to determine, among multiple non-contact input modes, at least one non-contact input mode as a first non-contact input mode.
The input module 62 is configured to receive, according to the first non-contact input mode, content input by the user in the first non-contact input mode.
The processing module 63 is configured to process the content.
In some embodiments, the determining module 61 is specifically configured to:
receive an input-mode start command input by the user in a second non-contact input mode, and determine the first non-contact input mode according to the start command, the start command containing information on at least one input mode selected by the user among the multiple non-contact input modes.
In some embodiments, the first non-contact input mode includes at least one of the following:
a voice input mode, an air handwriting input mode, a gesture input mode, and a photographing input mode.
In some embodiments, the second non-contact input mode includes at least one of the following:
a voice input mode and a gesture input mode.
In some embodiments, the processing module 63 is specifically configured to:
display the content on an interactive screen.
Referring to Fig. 7, the device 60 further includes:
an editing module 64, configured to edit the displayed content according to the user's non-contact operations and display the edited content.
In some embodiments, the editing module 64 editing the displayed content according to the user's non-contact operations includes:
if the editing mode includes deletion, determining the content to be deleted and receiving a delete command according to the user's non-contact operations, and deleting the content to be deleted after the delete command is received; or,
if the editing mode includes insertion, determining an insertion position and the content to be inserted according to the user's non-contact operations, and inserting the content to be inserted at the insertion position; or,
if the editing mode includes modification, determining the content to be modified and receiving a delete command according to the user's non-contact operations, treating the content to be modified as content to be deleted after the delete command is received and deleting it, and obtaining the modified content according to the user's non-contact operations and inserting the modified content, as content to be inserted, at the position of the content to be modified.
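The three editing branches above (deletion, insertion, and modification realized as deletion plus insertion at the same position) can be condensed into a small dispatcher. The `EditOp` structure and its field names are assumptions introduced for illustration, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class EditOp:
    mode: str           # "delete", "insert", or "modify"
    position: int = 0   # start of the affected span
    length: int = 0     # length of the content to delete/modify
    content: str = ""   # content to insert (or the modified content)

def apply_edit(text: str, op: EditOp) -> str:
    """Apply one non-contact edit operation to the displayed text."""
    if op.mode == "delete":
        # delete the content to be deleted
        return text[:op.position] + text[op.position + op.length:]
    if op.mode == "insert":
        # insert the content to be inserted at the insertion position
        return text[:op.position] + op.content + text[op.position:]
    if op.mode == "modify":
        # modification = delete the content to be modified, then insert
        # the modified content at the same position
        deleted = text[:op.position] + text[op.position + op.length:]
        return deleted[:op.position] + op.content + deleted[op.position:]
    return text
```

Treating modification as a delete followed by an insert matches the third branch above, where the content to be modified becomes the content to be deleted and the modified content becomes the content to be inserted.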
In some embodiments, at least one of the position of the content to be deleted, the delete command and the insertion position is input by the user using at least one of the following:
a voice input mode and a gesture input mode.
In some embodiments, the content to be inserted is input by the user using at least one of the following:
a voice input mode, an air handwriting input mode, a gesture input mode, and a photographing input mode.
It can be understood that the device of this embodiment corresponds to the above method embodiment; for details, refer to the relevant description of the method embodiment, which is not repeated here.
In this embodiment, inputting content through non-contact input modes makes human-computer interaction more natural and convenient. By determining at least one non-contact input mode among multiple non-contact input modes, the input mode becomes selectable rather than fixed and single, which achieves diversity of input modes and improves flexibility. Human-computer interaction thus becomes more natural, convenient and diverse, giving the user the feeling of interacting with another person and greatly improving the user experience.
It can be understood that the same or similar parts of the above embodiments may refer to one another, and content not described in detail in one embodiment may refer to the same or similar content in other embodiments.
It should be noted that in the description of the application, the terms "first", "second" and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance. In addition, in the description of the application, unless otherwise stated, "multiple" means at least two.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, fragment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the application includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order of the functions involved, as should be understood by those skilled in the art to which the embodiments of the application belong.
It should be understood that each part of the application can be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art can be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those skilled in the art can understand that all or part of the steps of the above method embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the application can be integrated in one processing module, each unit can exist separately and physically, or two or more units can be integrated in one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, a description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that specific features, structures, materials or characteristics described in connection with that embodiment or example are included in at least one embodiment or example of the application. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described can be combined in any suitable manner in one or more embodiments or examples.
Although the embodiments of the application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be interpreted as limitations of the application; those of ordinary skill in the art can change, modify, replace and vary the above embodiments within the scope of the application.
Claims (16)
1. A non-contact input method, characterized by comprising:
determining, among multiple non-contact input modes, at least one non-contact input mode as a first non-contact input mode;
receiving, according to the first non-contact input mode, content input by a user in the first non-contact input mode; and
processing the content.
2. The method according to claim 1, characterized in that determining, among multiple non-contact input modes, at least one non-contact input mode as the first non-contact input mode comprises:
receiving an input-mode start command input by the user in a second non-contact input mode, and determining the first non-contact input mode according to the start command, the start command containing information on at least one input mode selected by the user among the multiple non-contact input modes.
3. The method according to claim 1, characterized in that the first non-contact input mode comprises at least one of the following:
a voice input mode, an air handwriting input mode, a gesture input mode, and a photographing input mode.
4. The method according to claim 2, characterized in that the second non-contact input mode comprises at least one of the following:
a voice input mode and a gesture input mode.
5. The method according to claim 1, characterized in that the processing comprises displaying the content on an interactive screen, and the method further comprises:
editing the displayed content according to the user's non-contact operations, and displaying the edited content.
6. The method according to claim 5, characterized in that editing the displayed content according to the user's non-contact operations comprises:
if the editing mode includes deletion, determining the content to be deleted and receiving a delete command according to the user's non-contact operations, and deleting the content to be deleted after the delete command is received; or,
if the editing mode includes insertion, determining an insertion position and the content to be inserted according to the user's non-contact operations, and inserting the content to be inserted at the insertion position; or,
if the editing mode includes modification, determining the content to be modified and receiving a delete command according to the user's non-contact operations, treating the content to be modified as content to be deleted after the delete command is received and deleting it, and obtaining the modified content according to the user's non-contact operations and inserting the modified content, as content to be inserted, at the position of the content to be modified.
7. The method according to claim 6, characterized in that at least one of the position of the content to be deleted, the delete command and the insertion position is input by the user using at least one of the following:
a voice input mode and a gesture input mode.
8. The method according to claim 6, characterized in that the content to be inserted is input by the user using at least one of the following:
a voice input mode, an air handwriting input mode, a gesture input mode, and a photographing input mode.
9. A non-contact input device, characterized by comprising:
a determining module, configured to determine, among multiple non-contact input modes, at least one non-contact input mode as a first non-contact input mode;
an input module, configured to receive, according to the first non-contact input mode, content input by a user in the first non-contact input mode; and
a processing module, configured to process the content.
10. The device according to claim 9, characterized in that the determining module is specifically configured to:
receive an input-mode start command input by the user in a second non-contact input mode, and determine the first non-contact input mode according to the start command, the start command containing information on at least one input mode selected by the user among the multiple non-contact input modes.
11. The device according to claim 8, characterized in that the first non-contact input mode comprises at least one of the following:
a voice input mode, an air handwriting input mode, a gesture input mode, and a photographing input mode.
12. The device according to claim 9, characterized in that the second non-contact input mode comprises at least one of the following:
a voice input mode and a gesture input mode.
13. The device according to claim 8, characterized in that the processing module is specifically configured to:
display the content on an interactive screen;
the device further comprising:
an editing module, configured to edit the displayed content according to the user's non-contact operations and display the edited content.
14. The device according to claim 13, characterized in that the editing module editing the displayed content according to the user's non-contact operations comprises:
if the editing mode includes deletion, determining the content to be deleted and receiving a delete command according to the user's non-contact operations, and deleting the content to be deleted after the delete command is received; or,
if the editing mode includes insertion, determining an insertion position and the content to be inserted according to the user's non-contact operations, and inserting the content to be inserted at the insertion position; or,
if the editing mode includes modification, determining the content to be modified and receiving a delete command according to the user's non-contact operations, treating the content to be modified as content to be deleted after the delete command is received and deleting it, and obtaining the modified content according to the user's non-contact operations and inserting the modified content, as content to be inserted, at the position of the content to be modified.
15. The device according to claim 14, characterized in that at least one of the position of the content to be deleted, the delete command and the insertion position is input by the user using at least one of the following:
a voice input mode and a gesture input mode.
16. The device according to claim 14, characterized in that the content to be inserted is input by the user using at least one of the following:
a voice input mode, an air handwriting input mode, a gesture input mode, and a photographing input mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611034173.0A CN106527729A (en) | 2016-11-17 | 2016-11-17 | Non-contact type input method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611034173.0A CN106527729A (en) | 2016-11-17 | 2016-11-17 | Non-contact type input method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106527729A true CN106527729A (en) | 2017-03-22 |
Family
ID=58357811
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611034173.0A Pending CN106527729A (en) | 2016-11-17 | 2016-11-17 | Non-contact type input method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106527729A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107045419A (en) * | 2017-05-04 | 2017-08-15 | 奇酷互联网络科技(深圳)有限公司 | A kind of input electronic equipment |
CN107436749A (en) * | 2017-08-03 | 2017-12-05 | 安徽智恒信科技有限公司 | Character input method and system based on three-dimension virtual reality scene |
CN108509029A (en) * | 2018-03-09 | 2018-09-07 | 苏州佳世达电通有限公司 | Contactless input method and contactless input system |
CN109119079A (en) * | 2018-07-25 | 2019-01-01 | 天津字节跳动科技有限公司 | voice input processing method and device |
CN109143875A (en) * | 2018-06-29 | 2019-01-04 | 广州市得腾技术服务有限责任公司 | A kind of gesture control smart home method and its system |
CN110427139A (en) * | 2018-11-23 | 2019-11-08 | 网易(杭州)网络有限公司 | Text handling method and device, computer storage medium, electronic equipment |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1449558A (en) * | 2000-09-20 | 2003-10-15 | 国际商业机器公司 | Eye gaze for contextual speech recognition |
CN102629166A (en) * | 2012-02-29 | 2012-08-08 | 中兴通讯股份有限公司 | Device for controlling computer and method for controlling computer through device |
CN102866827A (en) * | 2012-08-21 | 2013-01-09 | 刘炳林 | Document editing method and device for man-machine interaction equipment |
CN103226388A (en) * | 2013-04-07 | 2013-07-31 | 华南理工大学 | Kinect-based handwriting method |
US20130304479A1 (en) * | 2012-05-08 | 2013-11-14 | Google Inc. | Sustained Eye Gaze for Determining Intent to Interact |
CN103577072A (en) * | 2012-07-26 | 2014-02-12 | 中兴通讯股份有限公司 | Terminal voice assistant editing method and device |
CN103885743A (en) * | 2012-12-24 | 2014-06-25 | 大陆汽车投资(上海)有限公司 | Voice text input method and system combining with gaze tracking technology |
CN104793724A (en) * | 2014-01-16 | 2015-07-22 | 北京三星通信技术研究有限公司 | Sky-writing processing method and device |
CN105468145A (en) * | 2015-11-18 | 2016-04-06 | 北京航空航天大学 | Robot man-machine interaction method and device based on gesture and voice recognition |
CN105573489A (en) * | 2014-11-03 | 2016-05-11 | 三星电子株式会社 | Electronic device and method for controlling external object |
CN105653093A (en) * | 2015-12-29 | 2016-06-08 | 广州三星通信技术研究有限公司 | Method and device for adjusting screen brightness in portable terminal |
WO2016151396A1 (en) * | 2015-03-20 | 2016-09-29 | The Eye Tribe | Method for refining control by combining eye tracking and voice recognition |
- 2016-11-17 CN CN201611034173.0A patent/CN106527729A/en active Pending
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1449558A (en) * | 2000-09-20 | 2003-10-15 | 国际商业机器公司 | Eye gaze for contextual speech recognition |
CN102629166A (en) * | 2012-02-29 | 2012-08-08 | 中兴通讯股份有限公司 | Device for controlling computer and method for controlling computer through device |
US20130304479A1 (en) * | 2012-05-08 | 2013-11-14 | Google Inc. | Sustained Eye Gaze for Determining Intent to Interact |
CN103577072A (en) * | 2012-07-26 | 2014-02-12 | 中兴通讯股份有限公司 | Terminal voice assistant editing method and device |
CN102866827A (en) * | 2012-08-21 | 2013-01-09 | 刘炳林 | Document editing method and device for man-machine interaction equipment |
CN103885743A (en) * | 2012-12-24 | 2014-06-25 | 大陆汽车投资(上海)有限公司 | Voice text input method and system combining with gaze tracking technology |
CN103226388A (en) * | 2013-04-07 | 2013-07-31 | 华南理工大学 | Kinect-based handwriting method |
CN104793724A (en) * | 2014-01-16 | 2015-07-22 | 北京三星通信技术研究有限公司 | Sky-writing processing method and device |
CN105573489A (en) * | 2014-11-03 | 2016-05-11 | 三星电子株式会社 | Electronic device and method for controlling external object |
WO2016151396A1 (en) * | 2015-03-20 | 2016-09-29 | The Eye Tribe | Method for refining control by combining eye tracking and voice recognition |
CN105468145A (en) * | 2015-11-18 | 2016-04-06 | 北京航空航天大学 | Robot man-machine interaction method and device based on gesture and voice recognition |
CN105653093A (en) * | 2015-12-29 | 2016-06-08 | 广州三星通信技术研究有限公司 | Method and device for adjusting screen brightness in portable terminal |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107045419A (en) * | 2017-05-04 | 2017-08-15 | 奇酷互联网络科技(深圳)有限公司 | A kind of input electronic equipment |
CN107436749A (en) * | 2017-08-03 | 2017-12-05 | 安徽智恒信科技有限公司 | Character input method and system based on three-dimension virtual reality scene |
CN108509029A (en) * | 2018-03-09 | 2018-09-07 | 苏州佳世达电通有限公司 | Contactless input method and contactless input system |
CN108509029B (en) * | 2018-03-09 | 2021-07-02 | 苏州佳世达电通有限公司 | Non-contact input method and non-contact input system |
CN109143875A (en) * | 2018-06-29 | 2019-01-04 | 广州市得腾技术服务有限责任公司 | A kind of gesture control smart home method and its system |
CN109143875B (en) * | 2018-06-29 | 2021-06-15 | 广州市得腾技术服务有限责任公司 | Gesture control smart home method and system |
CN109119079A (en) * | 2018-07-25 | 2019-01-01 | 天津字节跳动科技有限公司 | voice input processing method and device |
CN109119079B (en) * | 2018-07-25 | 2022-04-01 | 天津字节跳动科技有限公司 | Voice input processing method and device |
CN110427139A (en) * | 2018-11-23 | 2019-11-08 | 网易(杭州)网络有限公司 | Text handling method and device, computer storage medium, electronic equipment |
CN110427139B (en) * | 2018-11-23 | 2022-03-04 | 网易(杭州)网络有限公司 | Text processing method and device, computer storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106527729A (en) | Non-contact type input method and device | |
JP6802876B2 (en) | Real-time handwriting recognition management | |
AU2020267498B2 (en) | Handwriting entry on an electronic device | |
US20230333732A1 (en) | Managing real-time handwriting recognition | |
US10379719B2 (en) | Emoji recording and sending | |
JP5616325B2 (en) | How to change the display based on user instructions | |
CN105830011B (en) | For overlapping the user interface of handwritten text input | |
TWI653545B (en) | Method, system and non-transitory computer-readable media for real-time handwriting recognition | |
CN108897420B (en) | Device, method, and graphical user interface for transitioning between display states in response to a gesture | |
JP2023099007A (en) | Use of confirmation response option in graphical message user interface | |
CN107102723B (en) | Methods, apparatuses, devices, and non-transitory computer-readable media for gesture-based mobile interaction | |
WO2021231494A2 (en) | Interacting with handwritten content on an electronic device | |
CN106687889A (en) | Display-efficient text entry and editing | |
CN108536293B (en) | Man-machine interaction system, man-machine interaction method, computer-readable storage medium and interaction device | |
CN106796810A (en) | On a user interface frame is selected from video | |
CN112541375A (en) | Hand key point identification method and device | |
CN107390881A (en) | A kind of gestural control method | |
JPH09237151A (en) | Graphical user interface | |
Yi et al. | Generating 3D architectural models based on hand motion and gesture | |
JPH06242882A (en) | Information processor | |
Song et al. | HotGestures: Complementing command selection and use with delimiter-free gesture-based shortcuts in virtual reality | |
Chaudhry et al. | Music Recommendation System through Hand Gestures and Facial Emotions | |
US20240061546A1 (en) | Implementing contactless interactions with displayed digital content | |
Sachdeva et al. | A Novel Technique for Hand Gesture Recognition | |
Kim | 3D Multimodal Interaction Design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||