CN105843401A - Screen reading instruction input method and device based on camera - Google Patents
- Publication number
- CN105843401A CN201610313048.7A
- Authority
- CN
- China
- Prior art keywords
- gesture
- reading
- screen reading
- target action
- function operation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the invention discloses a camera-based screen-reading application instruction input method. The method comprises: obtaining a video image sent by a connected camera, and recognizing motion operation data in the video image; obtaining feature data of the motion operation data; searching a preset action/gesture database for a target action/gesture matching the feature data; and determining and executing a screen-reading function operation corresponding to the target action/gesture. An embodiment of the invention further discloses a camera-based screen-reading application instruction input device. With the camera-based screen-reading application instruction input method and device, the convenience of screen-reading application instruction input can be improved.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a camera-based screen-reading application instruction input method and device.
Background technology
With the rapid popularization of computer devices such as smartphones and PCs, mobile Internet applications have emerged in large numbers, and more and more users genuinely experience the convenience and enjoyment that mobile Internet technology brings to daily life. However, certain groups in society also need to use computer devices such as smartphones — namely, people with impairments, such as visually impaired people and, in particular, totally blind users, who operate a computer entirely by listening to sound.
When the accessibility functions of terminal devices such as smartphones, tablets, and PCs are used (including but not limited to screen-reading software such as VoiceOver and TalkBack and other applications with similar functions), user-interface elements and their functions can be extracted and the selected text played back as speech through TTS (Text to Speech), helping the user understand what the phone screen is currently displaying. Furthermore, the user can operate the terminal by tapping, sliding, and similar operations on the touchscreen, which provides a much richer experience and, in particular, allows people with impairments (for example, visually impaired people, the elderly, and other specific groups) to use terminal devices such as smartphones without barriers.
That is to say, the user must hold the smartphone or use an input device such as a mouse in order to operate the terminal. Taking a smartphone as an example: from the user's point of view, even with voice assistance, the user still needs to tap or slide repeatedly on the display with a finger. Moreover, if the phone is not at hand, no operation can be performed at all, so the user's experience with terminal devices such as smartphones is poor. Further, for people with impairments (such as visually impaired people or the elderly), even finding the phone quickly and accurately can be a problem, which results in insufficient convenience of operation when such users use a phone.
That is to say, the operating methods of terminal devices such as smartphones in the prior art suffer from insufficient convenience of operation.
Summary of the invention
Based on this, to solve the technical problem in the conventional art that the operating methods of terminal devices such as smartphones are insufficiently convenient, a camera-based screen-reading application instruction input method is proposed.
A camera-based screen-reading application instruction input method includes:
obtaining a video image sent by a connected camera, and recognizing motion operation data in the video image;
obtaining feature data of the motion operation data, and searching a preset action/gesture database for a target action/gesture matching the feature data;
determining a screen-reading function operation corresponding to the target action/gesture, and executing the screen-reading function operation.
Optionally, in one embodiment, after the step of executing the screen-reading function operation, the method further includes: obtaining an execution result of the screen-reading function operation; searching a preset speech database for a voice prompt message corresponding to the execution result of the screen-reading function operation; and playing the voice prompt message.
Optionally, in one embodiment, after the step of searching the preset action/gesture database for the target action/gesture matching the feature data, the method further includes: obtaining the current display interface of the terminal, and obtaining the screen-reading selection box of the current display interface. The step of determining the screen-reading function operation corresponding to the target action/gesture is then specifically: determining the screen-reading function operation corresponding to the target action/gesture according to the screen-reading selection box of the current display interface.
Optionally, in one embodiment, the screen-reading function operation is a voice application opening operation, and the step of executing the screen-reading function operation is specifically: starting the voice application corresponding to the voice application opening operation.
Optionally, in one embodiment, the step of determining the screen-reading function operation corresponding to the target action/gesture is specifically: sending the target action/gesture to the screen-reading application, where the screen-reading application searches the preset screen-reading function operation database, according to the target action/gesture, for the screen-reading function operation matching the target action/gesture.
Optionally, in one embodiment, before the step of obtaining the video image sent by the connected camera, the method further includes: receiving a screen-reading application opening instruction input by the user, where the screen-reading application opening instruction corresponds to the screen-reading application, and starting the screen-reading application according to the screen-reading application opening instruction.
Optionally, in one embodiment, after the step of searching the preset action/gesture database for the target action/gesture matching the feature data, the method further includes: obtaining feedback information input by the user for the target action/gesture; or obtaining a match reference value between the feature data and the target action/gesture, and generating feedback information for the target action/gesture according to the match reference value. The method further includes: determining update data for the preset action/gesture database according to the feedback information, and refreshing the preset action/gesture database according to the update data.
In addition, to solve the technical problem in the conventional art that the operating methods of terminal devices such as smartphones are insufficiently convenient, a camera-based screen-reading application instruction input device is also proposed.
A camera-based screen-reading application instruction input device includes:
a video image acquisition module, configured to obtain a video image sent by a connected camera and recognize motion operation data in the video image;
a target gesture search module, configured to obtain feature data of the motion operation data and search a preset action/gesture database for a target action/gesture matching the feature data;
an operation determination and execution module, configured to determine a screen-reading function operation corresponding to the target action/gesture and execute the screen-reading function operation.
Optionally, in one embodiment, the device further includes a voice prompt message playback module, configured to: obtain the execution result of the screen-reading function operation; search a preset speech database for the voice prompt message corresponding to the execution result of the screen-reading function operation; and play the voice prompt message.
Optionally, in one embodiment, the device further includes a display interface acquisition module, configured to obtain the current display interface of the terminal and obtain the screen-reading selection box of the current display interface; the operation determination and execution module is further configured to determine the screen-reading function operation corresponding to the target action/gesture according to the screen-reading selection box of the current display interface.
Optionally, in one embodiment, the screen-reading function operation is a voice application opening operation, and the operation determination and execution module is further configured to start the voice application corresponding to the voice application opening operation.
Optionally, in one embodiment, the operation determination and execution module is further configured to: send the target action/gesture to the screen-reading application, where the screen-reading application searches the preset screen-reading function operation database, according to the target action/gesture, for the screen-reading function operation matching the target action/gesture.
Optionally, in one embodiment, the device further includes a screen-reading application opening module, configured to receive a screen-reading application opening instruction input by the user, the screen-reading application opening instruction corresponding to the screen-reading application, and to start the screen-reading application according to the screen-reading application opening instruction.
Optionally, in one embodiment, the device further includes a feedback information acquisition module and a database update module, wherein the feedback information acquisition module is configured to: obtain feedback information input by the user for the target action/gesture, or obtain a match reference value between the feature data and the target action/gesture and generate feedback information for the target action/gesture according to the match reference value; and the database update module is configured to: determine update data for the preset action/gesture database according to the feedback information, and refresh the preset action/gesture database according to the update data.
Implementing the embodiments of the present invention brings the following beneficial effects:
After the above camera-based screen-reading application instruction input method and device are adopted, the user can input instructions through motion operation data such as hand actions or body actions. Within the shooting range of the camera connected to the terminal, the user's hand or body actions can be recognized and converted into corresponding actions/gestures, so as to determine the operational instruction the user intends to input to the screen-reading application; the instruction input is thereby completed, and the corresponding operational instruction is sent to and executed by the screen-reading software of the computer. In this way, the user inputs instructions to the terminal's screen-reading application through motion operation data, and the screen-reading software produces the corresponding output. Compared with the prior-art scheme in which the user must input instructions through the terminal's physical buttons or touchscreen, the convenience of instruction input is improved. Moreover, this scheme also allows the terminal to be operated, and instructions to be sent to the screen-reading application, without the user holding the terminal: as long as the user is within the shooting range of the terminal's camera, the terminal can be operated remotely, further improving the convenience of using the screen-reading application. For people with impairments (for example, visually impaired people, the elderly, and other specific groups), the improvement in operating convenience is particularly evident.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
In the drawings:
Fig. 1 is a flowchart of a camera-based screen-reading application instruction input method in one embodiment;
Fig. 2 is a flowchart of a camera-based screen-reading application instruction input method in another embodiment;
Fig. 3 is a flowchart of a feedback-based update method for the action/gesture database in one embodiment;
Fig. 4 is a structural diagram of a camera-based screen-reading application instruction input device in one embodiment;
Fig. 5 is a structural diagram of a computer device running the aforementioned camera-based screen-reading application instruction input method in one embodiment.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To solve the technical problem in the conventional art that the operating methods of terminal devices such as smartphones are insufficiently convenient, this embodiment proposes a camera-based screen-reading application instruction input method. The method may be implemented as a computer program running on a von Neumann computer system. The computer program may be a screen-reading application with a camera-based motion-sensing recognition function, similar in purpose to VoiceOver or TalkBack, or a motion-sensing/action-recognition application associated with such accessibility functions (including but not limited to screen-reading software such as VoiceOver and TalkBack and applications with similar functions) and with the camera. The computer system may be a computer device connected to an external camera device, such as a smartphone, tablet, PDA, laptop, or personal computer.
It should be noted that some of a terminal's screen-reading applications are installed by the user, while others may be built in: for example, VoiceOver on iOS is built in, whereas NVDA on Windows is third-party software installed by the user.
Specifically, as shown in Fig. 1, the above camera-based screen-reading application instruction input method includes the following steps:
Step S101: obtain the video image sent by the connected camera, and recognize the motion operation data in the video image.
The camera mentioned in this embodiment is an external camera connected to the terminal, for example a camera fixedly installed somewhere in a room. The camera contains a communication module, through which a communication connection is established with the terminal (a wired connection, or a wireless connection such as WiFi, Bluetooth, or another wireless communication mode), and the collected video data is sent to the terminal over that wired or wireless connection. Furthermore, in this embodiment, the camera may be a monocular camera or multiple cameras; for example, video images collected by multiple cameras from different angles may be used to obtain a corresponding depth map or a corresponding 3D image.
In this step, after the terminal receives the video image collected by the camera, it performs image processing on the video image to obtain the motion operation data in the video image; the process of obtaining the motion operation data is a process of image recognition.
Specifically, in this embodiment, the motion operation data may be the user's hand action information (i.e., a gesture operation) or the user's limb action information (i.e., a body action); in other embodiments, the motion operation data may also be other data or information that can represent the user's corresponding motion operation.
In one embodiment, the image-recognition process for obtaining motion operation data may be: for each image frame of the video image, obtain the hand region or portrait region in the frame and the outer contour corresponding to that region; then obtain the change of the hand region or portrait region and its corresponding outer contour between frames through inter-frame coding, thereby obtaining the movement trajectory of the user's hand or the limb action.
In another embodiment, the image-recognition process for obtaining motion operation data may instead be: obtain each image frame of the video image; determine, according to a preset hand-structure sample, the hand feature points to be measured in each frame; and, from the hand feature points to be measured across the frames of the video image, determine the movement trajectory/action name and/or position of the user's hands.
It should be noted that in this embodiment, the specific implementation of the image-recognition process is not limited to the concrete image-recognition methods given above; any image-recognition algorithm that can extract the user's hand actions or limb actions from the video image may also be used, and they are not enumerated here.
Step S102: obtain the feature data of the motion operation data, and search the preset action/gesture database for the target action/gesture matching the feature data.
In this embodiment, matching the motion operation data against the preset parameters is done by comparing the characteristic parameters of the motion operation data. For example, when the motion operation data is the movement trajectory of the user's hand, the positions of a predetermined number of feature points of the trajectory and the formation duration of the trajectory need to be compared. Therefore, after the motion operation data is obtained, data analysis and/or feature extraction must be performed on it to obtain the characteristic parameters corresponding to the motion operation data.
For example, the feature data may be the position information of the feature points corresponding to the movement trajectory of the user's hand or fingertip, the formation duration of the movement trajectory, or information such as the amplitude of the action. The feature data is used for comparison with the corresponding parameters of the action/gesture samples in the preset action/gesture database.
In this embodiment, the gestures recognizable by the screen-reading application include but are not limited to swiping left, swiping right, swiping up and down, double-tapping, and tapping; correspondingly, the recognizable actions are not limited either, and the user can set up multiple actions/gestures as needed.
The action/gesture database holds the operational actions/gestures preset in the screen-reading application. That is, when an operational action/gesture matching one of the operational actions/gestures in the action/gesture database is detected, that operational action/gesture is judged to be a valid action/gesture; otherwise it is judged to be an invalid operational action/gesture.
In this embodiment, the correspondence between the operational actions/gestures in the action/gesture database and the user's motion operation data recognized through the camera needs to be preset — that is, the correspondence between the operational actions/gestures in the action/gesture database and the feature data of the motion operation data recognized in step S101. After this correspondence is established, the target action/gesture corresponding to the feature data can be looked up in the preset action/gesture database through it, i.e., the target action/gesture corresponding to the user-related motion operation data in the video image collected by the camera.
It should be noted that in this embodiment, the matching relationship between the target action/gesture and the feature data is determined according to the feature data. For example, when the feature data is feature-point information, the matching relationship may be that the degree of match between the feature points in the movement trajectory data and the feature points on the preset trajectory corresponding to the target action/gesture exceeds a preset value; in other embodiments, when the feature data is information such as the amplitude, length, or duration of the movement trajectory, the matching relationship may also be that parameters such as the amplitude, length, and duration of the movement trajectory meet the preset values corresponding to the target action/gesture.
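The feature-point matching rule described above — the degree of match against a preset trajectory must exceed a preset value — can be sketched as follows. The tolerance and threshold values, and all names, are illustrative assumptions.

```python
def match_degree(feature_pts, template_pts, tol=10.0):
    """Fraction of feature points lying within `tol` of the
    corresponding template point (both are lists of (x, y))."""
    if len(feature_pts) != len(template_pts):
        return 0.0
    hits = sum(
        1 for (ax, ay), (bx, by) in zip(feature_pts, template_pts)
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= tol
    )
    return hits / len(template_pts)

def find_target_gesture(feature_pts, gesture_db, threshold=0.8):
    """Step S102 sketch: return the best gesture whose match degree
    reaches the preset value, else None (invalid action/gesture)."""
    best, best_deg = None, threshold
    for name, template in gesture_db.items():
        deg = match_degree(feature_pts, template)
        if deg >= best_deg:
            best, best_deg = name, deg
    return best

db = {"swipe_right": [(0, 0), (50, 0), (100, 0)],
      "swipe_up": [(0, 100), (0, 50), (0, 0)]}
print(find_target_gesture([(2, 1), (48, 3), (97, 2)], db))  # swipe_right
```

Real feature data would also carry duration and amplitude, compared against per-gesture preset values in the same way.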
Step S103: determine the screen-reading function operation corresponding to the target action/gesture, and execute the screen-reading function operation.
In a specific implementation, each operational action/gesture in the screen-reading application corresponds to a concrete operational instruction. For example, a "tap" can be set to select the button/area where the touch point is located and play the speech message corresponding to that button/area, and a "double-tap" can be set to open the link/page corresponding to the current selection box.
In this embodiment, the screen-reading function operations corresponding to target actions/gestures include but are not limited to opening the screen-reading application, opening a certain page, and changing the position of the screen-reading selection box. Further, the screen-reading function operation corresponding to a target action/gesture can be determined according to a preset correspondence. For example, when the motion operation data input in step S101 is shaking twice, the corresponding target action/gesture may be double-tapping, and the corresponding screen-reading function operation is opening the page/link corresponding to the current screen-reading selection box.
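The preset correspondence between target actions/gestures and screen-reading function operations amounts to a dispatch table. A sketch with hypothetical handler names and result strings:

```python
def open_selected_link():
    return "opened page/link under the current selection box"

def open_screen_reader():
    return "screen-reading application opened"

def move_selection(direction):
    return "selection box moved " + direction

# Preset correspondence: target action/gesture -> screen-reading operation.
OPERATIONS = {
    "double_tap": open_selected_link,        # the example in the text above
    "swipe_right": lambda: move_selection("right"),
    "open_gesture": open_screen_reader,      # hypothetical opening gesture
}

def execute(gesture):
    """Step S103 sketch: look up and execute the function operation."""
    op = OPERATIONS.get(gesture)
    return op() if op else "invalid action/gesture"

print(execute("double_tap"))  # opened page/link under the current selection box
```

Keeping the table as data means the user can add or remap gestures without changing the dispatch logic, matching the claim that users may set up multiple actions/gestures as needed.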
Optionally, in one embodiment, after step S103 the user can also learn the execution result of the relevant instruction through speech playback, so that, without checking the related content on the terminal's display screen, the user knows by sound how the screen-reading function operation was carried out.
Specifically, as shown in Fig. 2, after the screen-reading function operation is executed in step S103, the method further includes the following steps:
Step S104: obtain the execution result of the screen-reading function operation;
Step S105: search the preset speech database for the voice prompt message corresponding to the execution result of the screen-reading function operation;
Step S106: play the voice prompt message.
The execution result of a screen-reading function operation includes success and failure, and a success further includes the specific result of the screen-reading function instruction — for example, that a certain control or picture of the current display interface has been selected, or that the selection box has been moved to a new position in the current display interface, or that a new operation page has been opened.
For example, when the screen-reading function operation opens the operation page of QQ Music, the voice prompt message may be: "QQ Music is opened." As another example, when the execution result of the screen-reading function operation is that the "Delete" button in the current display interface has been selected, the voice prompt message may be: "The 'Delete' button is selected."
In one embodiment, the specific content of the screen-reading function operation executed in step S103 can be determined according to the specific content on the terminal's display interface.
Specifically, after the step of searching the preset action/gesture database for the target action/gesture matching the feature data, the method further includes: obtaining the current display interface of the terminal, and obtaining the screen-reading selection box of the current display interface. The step of determining the screen-reading function operation corresponding to the target action/gesture is then specifically: determining the screen-reading function operation corresponding to the target action/gesture according to the screen-reading selection box of the current display interface.
The current display interface of the terminal is the related content displayed on the terminal's display interface. The screen-reading selection box is the selection box on the display interface corresponding to the screen-reading application; in general, there is one screen-reading selection box on the terminal's display interface, and it may correspond to a button, an icon, a control, a passage of text, a link, and so on.
It should be noted that in this embodiment, if the screen-reading selection box corresponds to an operable button, the operations available for that selection box include but are not limited to tapping to enter, moving the selection box, returning to the parent directory, and so on; if the screen-reading selection box corresponds to a piece of inoperable text, the operation available for that selection box may be moving the selection box, but not tapping to enter. Therefore, as the specific content corresponding to the screen-reading selection box differs, the operational instruction corresponding to the target action/gesture changes accordingly.
Specifically, the screen-reading function operation corresponding to the target action/gesture is determined according to the screen-reading selection box of the current display interface, and that screen-reading function operation is executed in step S103.
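The dependence of the available operations on what the selection box currently covers — an operable button versus inoperable text — can be sketched like this; the content-type names and allowed-operation sets are assumptions:

```python
# Operations allowed for each kind of content the selection box may cover.
ALLOWED_OPS = {
    "button": {"tap_enter", "move_selection", "parent_directory"},
    "text": {"move_selection"},  # inoperable text: cannot tap to enter
}

def resolve_operation(gesture, focus_type):
    """Map a target gesture to an operation, constrained by what the
    screen-reading selection box currently covers (step S103 variant)."""
    wanted = {"double_tap": "tap_enter",
              "swipe_right": "move_selection"}.get(gesture)
    if wanted and wanted in ALLOWED_OPS.get(focus_type, set()):
        return wanted
    return None  # gesture has no valid operation for this selection box

print(resolve_operation("double_tap", "button"))  # tap_enter
print(resolve_operation("double_tap", "text"))    # None
```

The same gesture thus yields different (or no) instructions depending on the selection box's content, as the paragraph above describes.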
In another embodiment, the specific content of the screen-reading function operation is determined by the screen-reading application. Specifically, the step of determining the screen-reading function operation corresponding to the target action/gesture is specifically: sending the target action/gesture to the screen-reading application, where the screen-reading application searches the preset screen-reading function operation database, according to the target action/gesture, for the screen-reading function operation matching the target action/gesture.
That is to say, after the target action/gesture is found in step S102, it is sent to the screen-reading application; upon receiving the target action/gesture, the screen-reading application searches its screen-reading function operation database, according to the correspondence between operation actions/gestures and screen-reading function operations, for the screen-reading function operation corresponding to the target action/gesture. It should be noted that, in this embodiment, the screen-reading function operation corresponding to a target action/gesture is unique; if no corresponding operation is found, the target action/gesture is judged to be an invalid operation action/gesture. In other words, from the perspective of the screen-reading application, the solution disclosed in this embodiment does not require changing the screen-reading application itself: it only needs an interface for receiving target actions/gestures, while the reception and extraction of the related input data can all be completed by other applications.
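The unique gesture-to-operation lookup with an invalid-gesture fallback can be sketched as a simple table lookup; the table contents below are assumptions for illustration only:

```python
# Minimal sketch of the lookup inside the screen-reading application,
# assuming a one-to-one gesture -> operation table (entries illustrative).
OPERATION_DB = {
    "single_tap": "announce_focused_item",
    "double_tap": "activate_focused_item",
    "swipe_up":   "previous_item",
    "swipe_down": "next_item",
}

def handle_gesture(gesture):
    op = OPERATION_DB.get(gesture)
    if op is None:
        # No match found: judged to be an invalid operation action/gesture.
        return "invalid"
    return op
```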
In one embodiment, the screen-reading function operation is an audio application opening operation; the step of executing the screen-reading function operation comprises: starting, according to the audio application opening operation, the audio application corresponding to that opening operation.
That is to say, the user may set the opening condition of the audio application in advance, namely the target action/gesture corresponding to the audio application opening operation and the requirement that the feature data of the corresponding motion operation data should meet, or the correspondence between that target action/gesture and the feature data of the motion operation data. In this way, the user can open the audio application directly from any interface, which, for visually impaired users, makes it easier to open a frequently used audio application and improves the convenience of operation.
In this embodiment, as shown in Fig. 2, the method optionally further includes, before step S101 (obtaining the video image sent by the connected camera), step S100: receiving a screen-reading application opening instruction input by the user, the opening instruction corresponding to the screen-reading application, and starting the screen-reading application according to the opening instruction.
The screen-reading application opening instruction is the opening instruction corresponding to the screen-reading application installed on the terminal. For visually impaired users, the first step of using a mobile phone is to open the screen-reading application, so that the phone is set to screen-reading mode for convenient use. The concrete operation of the opening instruction may be preset by the user, or configured by the system, for example clicking the Home key three times in succession. When the terminal detects the screen-reading application opening instruction input by the user, it starts the screen-reading application according to the detected instruction and sets the terminal to screen-reading mode.
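The "three clicks of the Home key in succession" trigger can be sketched as a small click-window detector; the 0.5 s window is an assumption, since the patent does not specify timing:

```python
import time

# Hedged sketch: detect N Home-key clicks within a short time window as the
# screen-reading application opening instruction. Window length is assumed.
class TripleClickDetector:
    def __init__(self, window=0.5, count=3):
        self.window, self.count, self.clicks = window, count, []

    def on_home_click(self, now=None):
        """Record one click; return True when the opening instruction fires."""
        now = time.monotonic() if now is None else now
        # Keep only clicks still inside the window, then add this one.
        self.clicks = [t for t in self.clicks if now - t <= self.window] + [now]
        if len(self.clicks) >= self.count:
            self.clicks = []      # reset, then start the screen-reading app
            return True
        return False
```

Three quick clicks fire the instruction; clicks spread further apart than the window do not accumulate.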
Optionally, in one embodiment, the database may further be corrected according to each motion operation input by the user. That is, a personalized action/gesture database corresponding to the user is established according to each motion operation the user inputs and the user's input habits, so that on the next input operation the comparison against the sample database is faster and more accurate.
Specifically, after the step of searching the preset action/gesture database for the target action/gesture matching the feature data, as shown in Fig. 3, the method further includes:
Step S2011: obtaining feedback information, input by the user, for the target action/gesture; or Step S2012: obtaining a matching reference value between the feature data and the target action/gesture, and generating the feedback information for the target action/gesture according to the matching reference value;
Step S202: determining update data for the preset action/gesture database according to the feedback information;
Step S203: updating the preset action/gesture database according to the update data.
The feedback information includes feedback that the user inputs on a feedback page when the search result for the target action/gesture does not match the user's expectation, and also includes feedback on the user's input habits determined from the degree of match between the concrete operation parameters of the motion operation input by the user and the sample parameters in the preset action/gesture database. That is, the feedback information may be fed back manually by the user, or determined by the terminal according to the user's operating habits.
After the feedback information is obtained, the sample parameters in the preset action/gesture database, and the correspondence between sample parameters and operation parameters, are corrected according to the specific content of the feedback information, and the action/gesture database is then updated, so that the database the user uses when next inputting the corresponding operation is the updated one.
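One plausible form of this correction, not specified by the patent, is to blend each stored sample parameter toward the values the user actually produces; the blending factor below is an assumption:

```python
# Illustrative sketch: nudge a stored gesture template toward the operation
# parameters the user actually produced, so later matching reflects the
# user's habits. The blending factor alpha is an assumption.
def update_template(template, observed, alpha=0.2):
    """Blend each sample parameter toward the observed operation parameter."""
    return {k: (1 - alpha) * template[k] + alpha * observed.get(k, template[k])
            for k in template}

template = {"duration_s": 0.30, "amplitude_px": 120.0}
observed = {"duration_s": 0.40, "amplitude_px": 100.0}
template = update_template(template, observed)
```

After one update, a user who consistently swipes more slowly and less widely than the stock template will see the template drift toward their own timing and amplitude.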
In addition, to solve the technical problem in the conventional art that the operation modes of terminal devices such as smartphones are insufficiently convenient, in one embodiment a camera-based screen-reading application instruction input device is also proposed. As shown in Fig. 4, the device includes a video image acquisition module 101, a target action/gesture search module 102, and an operation determination and execution module 103.
Specifically, the video image acquisition module 101 is configured to obtain the video image sent by the connected camera and to identify the motion operation data in the video image.
The camera mentioned in this embodiment refers to an external camera connected to the terminal, for example a camera fixedly mounted somewhere in a room. The camera contains a communication module, through which a communication connection is established with the terminal (a wired communication connection, or a wireless one such as WiFi, Bluetooth, or another wireless connection mode), and the collected video data is sent to the terminal over that connection. Further, in this embodiment, the camera may be a monocular camera or a multi-camera arrangement; for example, multiple cameras may collect video images from different angles to obtain a corresponding depth map or a corresponding three-dimensional image.
After receiving the video image sent by the camera, the video image acquisition module 101 performs image processing on the video image to obtain the motion operation data in it; the process of obtaining the motion operation data is a process of image recognition.
Specifically, in this embodiment, the motion operation data may be hand motion information of the user (i.e., gesture operations) or limb motion information of the user (i.e., body actions); in other embodiments, the motion operation data may also be any other data or information that can represent the user's motion operations.
In one embodiment, the image-recognition process for obtaining the motion operation data may be as follows: the video image acquisition module 101 obtains, for each image frame of the video image, the hand region or portrait region and the outer contour corresponding to that region, and obtains through inter-frame comparison how the region and its outer contour change between image frames, thereby obtaining the movement trajectory of the user's hand or the user's limb actions.
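The inter-frame idea above can be illustrated with a deliberately simplified, library-free sketch (a real implementation would use an image-processing library and contour extraction; the change threshold here is an assumption):

```python
# Pure-Python sketch: find which pixels changed between consecutive frames
# and track the centroid of the changed region as a rough movement trajectory.
def moving_centroid(prev, curr, thresh=30):
    """prev/curr: 2-D lists of grayscale values. Return the (row, col)
    centroid of pixels changed by more than `thresh`, or None if none did."""
    pts = [(r, c)
           for r, row in enumerate(curr)
           for c, v in enumerate(row)
           if abs(v - prev[r][c]) > thresh]
    if not pts:
        return None
    return (sum(r for r, _ in pts) / len(pts),
            sum(c for _, c in pts) / len(pts))

def trajectory(frames):
    """Centroid of inter-frame change for each consecutive frame pair."""
    return [p for a, b in zip(frames, frames[1:])
            if (p := moving_centroid(a, b)) is not None]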
In another embodiment, the image-recognition process for obtaining the motion operation data may also be as follows: the video image acquisition module 101 obtains each image frame of the video image, determines the hand feature points to be measured in each frame according to a preset hand-structure sample, and determines the movement trajectory/action name and/or position of the user's hands according to the hand feature points of each frame.
It should be noted that, in this embodiment, the specific implementation of the image-recognition process is not limited to the concrete modes given above; it may also include any other image-recognition algorithm capable of extracting the user's hand motions or limb motions from the video image, which are not enumerated here.
The target gesture search module 102 is configured to obtain the feature data of the motion operation data and to search a preset gesture database for the target gesture matching the feature data.
In this embodiment, when the target gesture search module 102 matches the motion operation data against the preset parameters, it does so by comparing the characteristic parameters of the motion operation data. For example, when the motion operation data is the movement trajectory of the user's hand, a preset number of feature-point positions of the trajectory and the formation duration of the trajectory need to be compared. Therefore, after the video image acquisition module 101 obtains the motion operation data, the target gesture search module 102 needs to perform data analysis and/or feature extraction on the motion operation data to obtain the characteristic parameters corresponding to it.
For example, the feature data may be the movement trajectory of the user's hand motion, the position information of the feature points corresponding to the movement trajectory of a fingertip action, the formation duration of the trajectory, or information such as the amplitude of the action. The feature data is used for comparison with the relevant parameters of the action/gesture samples in the preset action/gesture database.
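A minimal sketch of this feature-extraction step, reducing a timestamped trajectory to the characteristic parameters just listed (the parameter names are illustrative):

```python
# Hedged sketch: reduce a list of timestamped trajectory points to the
# characteristic parameters mentioned above (number of feature points,
# formation duration, action amplitude).
def extract_features(traj):
    """traj: list of (t, x, y) points. Returns feature data for matching."""
    xs = [x for _, x, _ in traj]
    ys = [y for _, _, y in traj]
    return {
        "n_points": len(traj),
        "duration": traj[-1][0] - traj[0][0],
        "amplitude": max(max(xs) - min(xs), max(ys) - min(ys)),
    }
```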
In this embodiment, the gestures recognizable by the screen-reading application include, but are not limited to, swiping left, swiping right, swiping up and down, double-tapping, and tapping; correspondingly, the recognizable actions are not limited, and the user can set up multiple actions/gestures as required.
The action/gesture database contains the operation actions/gestures preset in the screen-reading application. That is to say, when the video image acquisition module 101 detects an operation action/gesture that matches one in the action/gesture database, that operation action/gesture is judged to be a valid action/gesture; otherwise it is judged to be an invalid operation action/gesture.
In this embodiment, the correspondence between the operation actions/gestures in the action/gesture database and the motion operation data of the user identified through the camera needs to be preset, namely the correspondence between the operation actions/gestures in the action/gesture database and the feature data of the motion operation data identified by the video image acquisition module 101. After this correspondence is established, it can be used to search the preset action/gesture database for the target action/gesture corresponding to the feature data, i.e., the target action/gesture corresponding to the user-related motion operation data in the video image collected by the camera.
It should be noted that, in this embodiment, the matching relationship between a target action/gesture and the feature data is determined according to the feature data. For example, when the feature data is feature-point information, the matching relationship may be that the degree of match between the feature points on the preset trajectory corresponding to the target action/gesture and the feature points in the movement trajectory data exceeds a preset value; in other embodiments, when the feature data is information such as the amplitude, length, or duration of the movement trajectory, the matching relationship may also be that parameters such as the amplitude, length, and duration of the trajectory meet the preset values corresponding to the target action/gesture.
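The "match degree exceeds a preset value" rule can be sketched as follows; the distance tolerance and threshold are assumptions, since the patent leaves them unspecified:

```python
# Illustrative sketch: score how many feature points of the preset trajectory
# have a nearby observed feature point, and accept the gesture when the
# ratio clears a threshold. Tolerance and threshold values are assumed.
def match_degree(preset_pts, observed_pts, tol=10.0):
    hits = sum(
        1 for (px, py) in preset_pts
        if any((px - ox) ** 2 + (py - oy) ** 2 <= tol ** 2
               for (ox, oy) in observed_pts)
    )
    return hits / len(preset_pts)

def is_match(preset_pts, observed_pts, threshold=0.8):
    return match_degree(preset_pts, observed_pts) >= threshold
```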
The operation determination and execution module 103 is configured to determine the screen-reading function operation corresponding to the target gesture and to execute that operation.
In a concrete implementation, each operation action/gesture in the screen-reading application corresponds to a concrete operation instruction. For example, a "tap" may be set to select the button/area where the touch point is located and play the voice message corresponding to that button/area, and a "double-tap" may be set to open the link/page corresponding to the current selection box.
In this embodiment, the screen-reading function operations corresponding to target actions/gestures include, but are not limited to, opening the screen-reading application, opening a certain page, and changing the position of the screen-reading selection box. Further, the screen-reading function operation corresponding to a target action/gesture may be determined according to a preset correspondence; for example, when the motion operation data obtained by the video image acquisition module 101 is shaking twice, the corresponding target action/gesture may be a double-tap, and the corresponding screen-reading function operation is opening the page/link corresponding to the current screen-reading selection box.
Optionally, in one embodiment, as shown in Fig. 4, the device further includes a voice prompt message playing module 104, configured to: obtain the execution result of the screen-reading function operation; search the preset voice database for the voice prompt message corresponding to the execution result of the screen-reading function operation; and play the voice prompt message.
The execution result of the screen-reading function operation includes success and failure, and success further includes the specific result of the screen-reading function instruction, for example information such as which control or picture of the current display interface has been selected, or that the selection box has been moved to a new position in the current display interface, or that a new operation page has been opened.
For example, when the screen-reading function operation opens the operation page of QQ Music, the voice prompt message played by the voice prompt message playing module 104 may be "QQ Music has been opened". For another example, when the execution result of the screen-reading function operation is that the "Delete" button in the current display interface has been selected, the voice prompt message played may be "the Delete button has been selected".
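The voice-prompt lookup amounts to mapping an execution result to a preset prompt string before handing it to a text-to-speech engine; the keys and message texts below are illustrative, and the TTS call is omitted:

```python
# Minimal sketch of the voice-prompt lookup in the preset voice database.
# Keys and messages are assumed examples, not taken from the patent.
VOICE_DB = {
    ("open_app", "QQMusic"): "QQ Music has been opened",
    ("select", "Delete"):    "the Delete button has been selected",
    ("failure", None):       "operation failed",
}

def prompt_for(result):
    """Return the prompt text for an execution result (with a generic
    fallback), ready to be passed to a text-to-speech engine."""
    return VOICE_DB.get(result, "operation completed")
```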
In one embodiment, the specific content of the screen-reading function operation executed by the operation determination and execution module 103 may be determined according to the specific content on the terminal's display interface.
Specifically, as shown in Fig. 4, the device further includes a display interface acquisition module 105, configured to obtain the current display interface of the terminal and the screen-reading selection box of the current display interface; the operation determination and execution module 103 is further configured to determine, according to the screen-reading selection box of the current display interface, the screen-reading function operation corresponding to the target action/gesture.
The current display interface of the terminal is the content currently shown on the terminal's display. The screen-reading selection box is the selection box on the display interface corresponding to the screen-reading application. In general, there is one screen-reading selection box on the terminal's display interface at a time, and it may correspond to a button, an icon, a control, a passage of text, a link, and so on.
It should be noted that, in this embodiment, if the screen-reading selection box corresponds to an operable button, the operations available for that selection box include, but are not limited to, clicking to enter, moving the selection box, returning to the parent directory, and so on; if the selection box corresponds to a passage of non-operable text, the available operations may include moving the selection box, but not clicking to enter. Therefore, as the content corresponding to the screen-reading selection box differs, the operation instruction corresponding to a given target action/gesture changes accordingly.
Specifically, the display interface acquisition module 105 determines, according to the screen-reading selection box of the current display interface, the screen-reading function operation corresponding to the target action/gesture, and the operation determination and execution module 103 executes that operation.
In another embodiment, the specific content of the screen-reading function operation is determined by the screen-reading application. Specifically, the operation determination and execution module 103 is further configured to send the target action/gesture to the screen-reading application, the screen-reading application being configured to search, according to the target action/gesture, the preset screen-reading function operation database for the screen-reading function operation matching the target action/gesture.
That is to say, after the target gesture search module 102 finds the target action/gesture, the operation determination and execution module 103 sends it to the screen-reading application; upon receiving the target action/gesture, the screen-reading application searches its screen-reading function operation database, according to the correspondence between operation actions/gestures and screen-reading function operations, for the screen-reading function operation corresponding to the target action/gesture.
It should be noted that, in this embodiment, the screen-reading function operation corresponding to a target action/gesture is unique; if no corresponding operation is found, the target action/gesture is judged to be an invalid operation action/gesture. In other words, from the perspective of the screen-reading application, the solution disclosed in this embodiment does not require changing the structure of the screen-reading application itself: it only needs an interface for receiving target actions/gestures, while the reception and extraction of the related input data can all be completed by other applications.
In one embodiment, the screen-reading function operation is an audio application opening operation; the operation determination and execution module 103 is further configured to start, according to the audio application opening operation, the audio application corresponding to that opening operation.
That is to say, the user may set the opening condition of the audio application in advance, namely the target action/gesture corresponding to the audio application opening operation and the requirement that the feature data of the corresponding motion operation data should meet, or the correspondence between that target action/gesture and the feature data of the motion operation data. In this way, the user can open the audio application directly from any interface, which, for visually impaired users, makes it easier to open a frequently used audio application and improves the convenience of operation.
In this embodiment, optionally, as shown in Fig. 4, the device further includes a screen-reading application opening module 106, configured to receive a screen-reading application opening instruction input by the user, the opening instruction corresponding to the screen-reading application, and to start the screen-reading application according to the opening instruction.
The screen-reading application opening instruction is the opening instruction corresponding to the screen-reading application installed on the terminal. For visually impaired users, the first step of using a mobile phone is to open the screen-reading application, so that the phone is set to screen-reading mode for convenient use. The concrete operation of the opening instruction may be preset by the user, or configured by the system, for example clicking the Home key three times in succession. When the screen-reading application opening module 106 detects the opening instruction input by the user, it starts the screen-reading application according to the detected instruction and sets the terminal to screen-reading mode.
Optionally, in one embodiment, the database may further be corrected according to each motion operation input by the user. That is, a personalized action/gesture database corresponding to the user is established according to each motion operation the user inputs and the user's input habits, so that on the next input operation the comparison against the sample database is faster and more accurate.
Specifically, as shown in Fig. 4, the device further includes a feedback information acquisition module 107 and a database update module 108, wherein: the feedback information acquisition module 107 is configured to obtain feedback information, input by the user, for the target action/gesture, or to obtain a matching reference value between the feature data and the target action/gesture and generate the feedback information for the target action/gesture according to the matching reference value; the database update module 108 is configured to determine update data for the preset action/gesture database according to the feedback information, and to update the preset action/gesture database according to the update data.
The feedback information includes feedback that the user inputs on a feedback page when the search result for the target action/gesture does not match the user's expectation, and also includes feedback on the user's input habits determined from the degree of match between the concrete operation parameters of the motion operation input by the user and the sample parameters in the preset action/gesture database. That is, the feedback information obtained by the feedback information acquisition module 107 may be fed back manually by the user, or determined by the terminal according to the user's operating habits.
After the feedback information is obtained, the database update module 108 corrects, according to its specific content, the sample parameters in the preset action/gesture database and the correspondence between sample parameters and operation parameters, and then updates the action/gesture database, so that the database the user uses when next inputting the corresponding operation is the updated one.
Implementing the embodiments of the present invention yields the following beneficial effects.
With the above camera-based screen-reading application instruction input method and device, the user can, within the coverage of the camera connected to the terminal, input motion operation data such as hand motions or body actions; the user's hand or body actions are identified and converted into the corresponding action/gesture operations, thereby determining the operation instruction that the user needs to input to the screen-reading application, completing the instruction input, and having the screen-reading software execute the corresponding operation instruction. This realizes inputting instructions to the terminal's screen-reading application through motion operation data and having the screen-reading software execute the operation instructions. Compared with conventional schemes in which the user must input instructions through the terminal's physical buttons or touch screen, this improves the operation convenience of instruction input. Further, the scheme also enables the user to operate the terminal, i.e., send instructions to the screen-reading application, without holding it: as long as the user is within the coverage of the terminal's camera, the terminal can be operated remotely, further improving the convenience of using the screen-reading application. This improvement in convenience is especially evident for groups with limited mobility, such as visually impaired users and the elderly.
In one embodiment, Fig. 5 illustrates a terminal running the above camera-based screen-reading application instruction input method, implemented as a computer system based on the von Neumann architecture. The computer system may be a terminal device connected to an external camera, such as a smartphone, tablet computer, handheld computer, notebook computer, or personal computer. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003, and an output interface 1004 connected through a system bus. The external input interface 1001 may optionally at least include a network interface 10012. The memory 1003 may include an external memory 10032 (such as a hard disk, optical disc, or floppy disk) and an internal memory 10034. The output interface 1004 may at least include a device such as a display screen 10042.
In this embodiment, the method runs on the basis of a computer program, the program file of which is stored in the external memory 10032 of the above von Neumann-based computer system, is loaded into the internal memory 10034 at run time, is compiled into machine code, and is then transferred to the processor 1002 for execution, so that the von Neumann-based computer system logically forms the video image acquisition module 101, the target gesture search module 102, and the operation determination and execution module 103. During execution of the above camera-based screen-reading application instruction input method, the input parameters are all received through the external input interface 1001 and transferred to the memory 1003 for caching, then input into the processor 1002 for processing; the resulting data is either cached in the memory 1003 for subsequent processing or transferred to the output interface 1004 for output.
The above disclosure is only the preferred embodiments of the present invention and certainly cannot limit the scope of rights of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.
Claims (14)
1. A camera-based screen-reading application instruction input method, characterized by comprising:
obtaining a video image sent by a connected camera, and identifying motion operation data in the video image;
obtaining feature data of the motion operation data, and searching a preset action/gesture database for a target action/gesture matching the feature data;
determining a screen-reading function operation corresponding to the target action/gesture, and executing the screen-reading function operation.
2. The method according to claim 1, characterized in that, after the step of executing the screen-reading function operation, the method further comprises:
obtaining an execution result of the screen-reading function operation;
searching a preset voice database for a voice prompt message corresponding to the execution result of the screen-reading function operation;
playing the voice prompt message.
3. The method according to claim 1, characterized in that, after the step of searching the preset action/gesture database for the target action/gesture matching the feature data, the method further comprises:
obtaining a current display interface of a terminal, and obtaining a screen-reading selection box of the current display interface;
the step of determining the screen-reading function operation corresponding to the target action/gesture comprising:
determining, according to the screen-reading selection box of the current display interface, the screen-reading function operation corresponding to the target action/gesture.
4. The method according to claim 1, characterized in that the screen-reading function operation is an audio application opening operation;
the step of executing the screen-reading function operation comprising:
starting, according to the audio application opening operation, the audio application corresponding to the audio application opening operation.
5. The method according to claim 1, characterized in that the step of determining the screen-reading function operation corresponding to the target action/gesture comprises:
sending the target action/gesture to a screen-reading application, the screen-reading application being configured to search, according to the target action/gesture, a preset screen-reading function operation database for the screen-reading function operation matching the target action/gesture.
6. The method according to claim 5, characterized in that, before the step of obtaining the video image sent by the connected camera, the method further comprises:
receiving a screen-reading application opening instruction input by a user, the screen-reading application opening instruction corresponding to the screen-reading application, and starting the screen-reading application according to the screen-reading application opening instruction.
7. The method according to any one of claims 1 to 6, characterized in that, after the step of searching the preset action/gesture database for the target action/gesture matching the feature data, the method further comprises:
acquiring feedback information, input by a user, for the target action/gesture;
or
acquiring a matching reference value between the feature data and the target action/gesture, and generating the feedback information for the target action/gesture according to the matching reference value;
the method further comprises:
determining update data for the preset action/gesture database according to the feedback information;
updating the preset action/gesture database according to the update data.
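Claim 7's feedback loop can be sketched as follows. The acceptance threshold and the exponential-average template update are assumptions made for illustration, not the patent's method:

```python
# Hypothetical sketch of the feedback loop: a matching reference value
# yields feedback, and positive feedback drives an update of the preset
# action/gesture database (here, one feature template per gesture).

templates = {"swipe_up": [0.0, 1.0]}  # the "database"

def feedback_from_reference(match_reference, accept_at=0.8):
    """Generate feedback from the matching reference value between the
    feature data and the matched gesture (threshold is illustrative)."""
    return "correct" if match_reference >= accept_at else "incorrect"

def update_database(gesture, feature, feedback, rate=0.2):
    """On positive feedback, blend the observed feature data into the
    stored template so the database adapts to this user over time."""
    if feedback != "correct":
        return  # negative feedback: leave the template unchanged
    template = templates[gesture]
    templates[gesture] = [(1 - rate) * t + rate * f
                          for t, f in zip(template, feature)]

update_database("swipe_up", [0.1, 0.9], feedback_from_reference(0.9))
print([round(x, 2) for x in templates["swipe_up"]])  # → [0.02, 0.98]
```

Either branch of the claim (explicit user feedback, or feedback derived from the matching reference value) produces the same kind of signal, so one update routine serves both.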
8. A camera-based screen-reading application instruction input device, characterized by comprising:
a video image acquisition module, configured to acquire a video image sent by a connected camera and to identify motion action data in the video image;
a target action/gesture search module, configured to acquire feature data of the motion action data and to search a preset action/gesture database for a target action/gesture matching the feature data;
an operation determination and execution module, configured to determine a screen-reading function operation corresponding to the target action/gesture and to perform the screen-reading function operation.
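The three modules of claim 8 form a pipeline from camera to executed operation. A sketch of their composition, with all class, method, and gesture names hypothetical, and the camera and feature extraction stubbed out:

```python
# Hypothetical composition of claim 8's three modules. Real camera I/O
# and feature extraction are stubbed out for illustration.

class VideoImageAcquisitionModule:
    def acquire(self, camera):
        """Acquire a video image from the connected camera and identify
        its motion action data (stub: the camera yields it directly)."""
        return camera["motion_data"]

class TargetGestureSearchModule:
    def __init__(self, database):
        self.database = database  # preset action/gesture database

    def search(self, motion_data):
        """Extract feature data and search the preset database for the
        matching target gesture (stub: exact-match lookup)."""
        feature = tuple(motion_data)
        return self.database.get(feature)

class OperationDeterminationExecutionModule:
    OPERATIONS = {"swipe_down": lambda: "reading next item"}

    def run(self, gesture):
        """Determine and perform the screen-reading function operation
        corresponding to the target gesture; None if unrecognized."""
        op = self.OPERATIONS.get(gesture)
        return op() if op else None

db = {(0, -1): "swipe_down"}
camera = {"motion_data": [0, -1]}
acq = VideoImageAcquisitionModule()
search = TargetGestureSearchModule(db)
execute = OperationDeterminationExecutionModule()
print(execute.run(search.search(acq.acquire(camera))))  # → reading next item
```

Each class maps one-to-one onto a claimed module, so the claim's data flow (image → action data → feature data → gesture → operation) is visible in the call chain of the last line.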
9. The device according to claim 8, characterized in that the device further comprises a voice prompt message playing module, configured to:
acquire an execution result of the screen-reading function operation;
search a preset speech database for a voice prompt message corresponding to the execution result of the screen-reading function operation;
play the voice prompt message.
10. The device according to claim 8, characterized in that the device further comprises a display interface acquisition module, configured to acquire a current display interface of a terminal and to acquire screen-reading function check boxes of the current display interface;
the operation determination and execution module is further configured to determine, according to the screen-reading function check boxes of the current display interface, the screen-reading function operation corresponding to the target action/gesture.
11. The device according to claim 8, characterized in that the screen-reading function operation is an audio application opening operation;
the operation determination and execution module is further configured to start, according to the audio application opening operation, the audio application corresponding to the audio application opening operation.
12. The device according to claim 8, characterized in that the operation determination and execution module is further configured to send the target action/gesture to a screen-reading application, the screen-reading application being configured to search, according to the target action/gesture, a preset screen-reading function operation database for the screen-reading function operation matching the target action/gesture.
13. The device according to claim 12, characterized in that the device further comprises a screen-reading application opening module, configured to receive a screen-reading application opening instruction input by a user, the screen-reading application opening instruction corresponding to the screen-reading application, and to start the screen-reading application according to the screen-reading application opening instruction.
14. The device according to any one of claims 8 to 13, characterized in that the device further comprises a feedback information acquisition module and a database updating module, wherein:
the feedback information acquisition module is configured to:
acquire feedback information, input by a user, for the target action/gesture;
or
acquire a matching reference value between the feature data and the target action/gesture, and generate the feedback information for the target action/gesture according to the matching reference value;
the database updating module is configured to:
determine update data for the preset action/gesture database according to the feedback information;
update the preset action/gesture database according to the update data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610313048.7A CN105843401A (en) | 2016-05-12 | 2016-05-12 | Screen reading instruction input method and device based on camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105843401A (en) | 2016-08-10 |
Family
ID=56592224
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610313048.7A Pending CN105843401A (en) | 2016-05-12 | 2016-05-12 | Screen reading instruction input method and device based on camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105843401A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107678547A (en) * | 2017-09-27 | 2018-02-09 | 维沃移动通信有限公司 | A kind of processing method and mobile terminal of information notice |
WO2019023999A1 (en) * | 2017-08-02 | 2019-02-07 | 深圳传音通讯有限公司 | Operation method and operation apparatus for smart device |
CN111899442A (en) * | 2020-08-17 | 2020-11-06 | 中国银行股份有限公司 | Method and device for blind self-service handling of banking business |
CN112270210A (en) * | 2020-10-09 | 2021-01-26 | 珠海格力电器股份有限公司 | Data processing method, data processing device, operation instruction identification method, operation instruction identification device, equipment and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101311882A (en) * | 2007-05-23 | 2008-11-26 | 华为技术有限公司 | Eye tracking human-machine interaction method and apparatus |
US20090186321A1 (en) * | 2007-11-30 | 2009-07-23 | Beyo Gmgh | Reading Device for Blind or Visually Impaired Persons |
CN102662462A (en) * | 2012-03-12 | 2012-09-12 | 中兴通讯股份有限公司 | Electronic device, gesture recognition method and gesture application method |
CN102713794A (en) * | 2009-11-24 | 2012-10-03 | 奈克斯特控股公司 | Methods and apparatus for gesture recognition mode control |
CN102789312A (en) * | 2011-12-23 | 2012-11-21 | 乾行讯科(北京)科技有限公司 | User interaction system and method |
CN102819751A (en) * | 2012-08-21 | 2012-12-12 | 长沙纳特微视网络科技有限公司 | Man-machine interaction method and device based on action recognition |
CN103176595A (en) * | 2011-12-23 | 2013-06-26 | 联想(北京)有限公司 | Method and system for information prompt |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10957315B2 | | Mobile terminal and method for controlling mobile terminal using machine learning |
CN104717360B | | Call recording method and terminal |
CN107329743A | | Application page display method and device, and storage medium |
CN105788597A | | Voice recognition-based screen reading application instruction input method and device |
CN106201177B | | Operation execution method and mobile terminal |
CN105824429A | | Screen reading application instruction input method and device based on infrared sensor |
CN105843401A | | Screen reading instruction input method and device based on camera |
CN106164808A | | Apparatus and method for a ring computing device |
CN105867641A | | Screen reading application instruction input method and device based on brain waves |
CN101641660A | | Apparatus, method and computer program product providing a hierarchical approach to command-control tasks using a brain-computer interface |
CN108958503A | | Input method and device |
US10685650B2 | | Mobile terminal and method of controlling the same |
CN105446489B | | Voice dual-mode control method and device, and user terminal |
CN108345442B | | Operation recognition method and mobile terminal |
US10770077B2 | | Electronic device and method |
CN106775666A | | Application icon display method and terminal |
CN105843402A | | Screen reading application instruction input method and device based on wearable device |
WO2020240838A1 | | Conversation control program, conversation control method, and information processing device |
CN109144376A | | Operation preparation method and terminal |
CN104598133B | | Object specification generation method and device |
CN111158487A | | Human-machine interaction method for interacting with an intelligent terminal using a wireless earphone |
CN109344592A | | Project kanban system method and apparatus based on biometric identification |
CN105843404A | | Screen reading application instruction input method and device |
CN107918509A | | Software shortcut prompt setting method and device, and readable storage medium |
CN106789949B | | Voice data sending method, device and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20160810 |