CN109407946A - Graphical interfaces target selecting method based on speech recognition - Google Patents
- Publication number: CN109407946A
- Application number: CN201811056705.XA
- Authority
- CN
- China
- Prior art keywords
- tagged words
- user
- smart machine
- screen
- circle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to a graphical-interface target selection method based on speech recognition, and belongs to the field of voice-based object selection. The method comprises the following steps: create a voice-tag dictionary and mark points; associate the mark points with tag words; the user speaks a tag word and sustains the voicing, generating a circle centered on the corresponding mark point whose radius keeps growing while the user keeps voicing; the circle is divided into several arc segments; the arc segments are associated with tag words; the user speaks a tag word, and a circle is generated centered on the midpoint of the corresponding arc segment, with radius equal to the distance to the adjacent arc segment; this circle is divided into several regions; the regions are associated with tag words; the user speaks a tag word, and target selection is performed with the center point of the corresponding region as the selection point. The present invention focuses on giving the user intuitive visual feedback, so that the user always knows clearly which command to speak, without having to learn multiple voice commands, which greatly facilitates the use of a smart device.
Description
Technical field
The present invention relates to the field of voice-based object selection, and more particularly to a graphical-interface target selection method based on speech recognition.
Background technique
After many years of development, speech recognition technology has gradually moved from the laboratory into practical use and become a significant technology in the information industry, entering daily life through human-computer interaction applications: smart phones, tablet computers, smart televisions, in-vehicle panels, smart bracelets, smart watches and the like are now usually equipped with a speech recognition function. Speech recognition makes it possible to develop applications that control a smart device by voice, so that users do not need to operate physical buttons; operations on the device can be performed purely through voice commands, which is of great significance for users with disabilities. However, existing voice-based object selection methods can only select targets that have been defined in advance; they cannot select an arbitrary point appearing on the screen, and they give the user no intuitive visual feedback during selection, so the user neither knows whether the command just spoken was effective nor which commands can be spoken next. For example, when several identical targets appear on the screen and the user says "select so-and-so", a voice selection system typically highlights all matching targets, assigns each a different designation, generates new selectable command statements, and then waits for the user to speak a new command. At that moment the user cannot clearly know which commands are available, nor when the previous command ended, which degrades the user experience.
Summary of the invention
The technical problem to be solved by the present invention is to provide a voice target selection method that is more intuitive, more efficient in selection, and more convenient to use.
The technical scheme of the present invention is a graphical-interface target selection method based on speech recognition, comprising the following steps:
Step 1: create a voice-tag dictionary on the smart device, the dictionary containing at least one class of tag words, each class containing at least one tag word;
Step 2: create several mark points on the screen of the smart device and display them on the screen;
Step 3: associate the mark points on the screen with one class of tag words from the voice-tag dictionary, and display each tag word of that class next to its corresponding mark point;
Step 4: determine whether the user speaks one of the tag words displayed in Step 3, and at the same time determine whether the user sustains the voicing; if the user has not yet spoken a tag word, the system waits, without judging sustained voicing, until the user speaks one; if the user speaks a tag word and the target to be selected lies exactly at the corresponding mark point, the user stops voicing and target selection is performed with that mark point as the selection point; if the user speaks a tag word but the target lies away from the corresponding mark point, the user sustains the voicing, and a circle is generated centered on that mark point whose radius keeps growing while the user keeps voicing;
Step 5: cancel the association of Step 3 between the tag words and the mark points, remove the tag words displayed on the screen in Step 3, and remove the mark points created on the screen in Step 2;
Step 6: divide the circle generated on the screen in Step 4 into several arc segments, associate the arc segments with one class of tag words from the voice-tag dictionary, and display each tag word of that class next to its corresponding arc segment;
Step 7: determine whether the user speaks one of the tag words displayed in Step 6; if not, the system waits until the user speaks one, and returns to Step 2 if the wait times out; if the user speaks a tag word, a circle is generated centered on the midpoint of the corresponding arc segment, with radius equal to the distance from that midpoint to the intersection with the adjacent arc segment;
Step 8: cancel the association of Step 6 between the tag words and the arc segments, remove the tag words displayed on the screen in Step 6, and remove the circle generated on the screen in Step 4;
Step 9: divide the circle generated on the screen in Step 7 into several regions, associate each region with one class of tag words from the voice-tag dictionary, and display each tag word of that class inside its corresponding region;
Step 10: determine whether the user speaks one of the tag words displayed in Step 9; if not, the system waits until the user speaks one, and returns to Step 2 if the wait times out; if the user speaks a tag word, target selection is performed with the center point of the corresponding region as the selection point.
Specifically, the smart device in the above method refers to a computer or smart phone equipped with a speech recognition function.
Specifically, the voice-tag dictionary in Step 1 includes numeric tag words, alphabetic tag words and text tag words, or user-defined tag words.
Specifically, the creation of mark points in Step 2 refers to dividing the screen of the smart device into several blocks and taking the center point of each block as a mark point.
Specifically, sustained voicing in Step 4 refers to repeatedly voicing the same tag word; when the generated circle approaches the target, the user stops voicing and the radius of the circle stops growing.
Specifically, the associations with a class of tag words in Steps 3, 6 and 9 are all random, i.e. the class of tag words associated in these three steps may be the same or different, and the choice of tag words within a class is likewise random.
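As a sketch, the random association of Steps 3, 6 and 9 can be implemented as follows. This is a minimal illustration in Python; the concrete tag class and the screen coordinates of the mark points are assumed examples, not values taken from the patent:

```python
import random

def associate(tags, items, rng=random):
    """Randomly associate one tag word with each on-screen item.

    `tags` is one class of tag words from the voice-tag dictionary;
    `items` is the list of mark points / arc segments / regions.
    Both the subset of tag words and their order are random, matching
    the random association the method describes.
    """
    if len(tags) < len(items):
        raise ValueError("not enough tag words for the items")
    chosen = rng.sample(tags, len(items))   # random subset, random order
    return dict(zip(chosen, items))

# Example: nine mark points labeled with the "number" class.
number_class = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"]
marks = [(x, y) for y in (180, 540, 900) for x in (320, 960, 1600)]
mapping = associate(number_class, marks)
```

Because the association is re-created at every step, the same class of tag words can be reused for the mark points, the arc segments and the regions without ambiguity.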
The beneficial effects of the present invention are as follows: the graphical-interface target selection method based on speech recognition focuses on giving the user intuitive visual feedback. The position of the target point is acquired by letting the user sustain voicing to grow an ever-expanding circle, and a further circle based on that position is generated to carry out target selection. The user thus receives intuitive feedback, always knows clearly which command to speak at each moment, and does not need to learn multiple voice commands, which facilitates the use of the smart device, shortens the time of target selection and improves its precision.
Specific embodiments
The present invention is described in further detail below with reference to specific embodiments.
Embodiment 1: a graphical-interface target selection method based on speech recognition, comprising the following steps:
Step 1: create a voice-tag dictionary on the smart device, the dictionary containing at least one class of tag words, each class containing at least one tag word; the dictionary includes numeric tag words, alphabetic tag words and text tag words, or user-defined tag words;
Step 2: create several mark points on the screen of the smart device by dividing the screen into several blocks and taking the center point of each block as a mark point, and display the mark points on the screen;
Step 3: associate the mark points on the screen with one class of tag words from the voice-tag dictionary, and display each tag word of that class next to its corresponding mark point;
Step 4: determine whether the user speaks one of the tag words displayed in Step 3, and at the same time determine whether the user sustains the voicing; if the user has not yet spoken a tag word, the system waits, without judging sustained voicing, until the user speaks one; if the user speaks a tag word and the target to be selected lies exactly at the corresponding mark point, the user stops voicing and target selection is performed with that mark point as the selection point; if the user speaks a tag word but the target lies away from the corresponding mark point, the user sustains the voicing, i.e. repeatedly voices the same tag word, and a circle is generated centered on that mark point whose radius keeps growing while the user keeps voicing; when the generated circle approaches the target, the user stops voicing and the radius stops growing.
Step 5: cancel the association of Step 3 between the tag words and the mark points, remove the tag words displayed on the screen in Step 3, and remove the mark points created on the screen in Step 2;
Step 6: divide the circle generated on the screen in Step 4 into several arc segments, associate the arc segments with one class of tag words from the voice-tag dictionary, and display each tag word of that class next to its corresponding arc segment;
Step 7: determine whether the user speaks one of the tag words displayed in Step 6; if not, the system waits until the user speaks one, and returns to Step 2 if the wait times out; if the user speaks a tag word, a circle is generated centered on the midpoint of the corresponding arc segment, with radius equal to the distance from that midpoint to the intersection with the adjacent arc segment;
Step 8: cancel the association of Step 6 between the tag words and the arc segments, remove the tag words displayed on the screen in Step 6, and remove the circle generated on the screen in Step 4;
Step 9: divide the circle generated on the screen in Step 7 into several regions, associate each region with one class of tag words from the voice-tag dictionary, and display each tag word of that class inside its corresponding region;
Step 10: determine whether the user speaks one of the tag words displayed in Step 9; if not, the system waits until the user speaks one, and returns to Step 2 if the wait times out; if the user speaks a tag word, target selection is performed with the center point of the corresponding region as the selection point.
The smart device in the above method refers to a computer or smart phone equipped with a speech recognition function. The associations with a class of tag words in Steps 3, 6 and 9 are all random, i.e. the class of tag words associated in these three steps may be the same or different, and the choice of tag words within a class is likewise random.
Embodiment 2: the graphical-interface target selection method based on speech recognition of the present invention is described in further detail below, taking as an example a user who uses a computer equipped with a speech recognition function to select, by voice, a target folder appearing at a random position on the screen.
Step 1: create a voice-tag dictionary on the smart device, and set up the "number" class of tag words in the dictionary: "1", "2", "3", "4", "5", "6", "7", "8", "9", "0";
Step 2: divide the screen of the smart device horizontally and vertically into three parts each, yielding nine rectangular areas of equal size; take the intersection of the diagonals of each rectangle as a mark point, and display these 9 mark points on the screen of the smart device;
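The nine mark points of this step can be computed directly: for an equal 3x3 partition of the screen, the diagonal intersection of each rectangle is simply its center. A minimal sketch, in which the 1920x1080 screen size is an assumed example:

```python
def grid_mark_points(width, height, rows=3, cols=3):
    """Centers (diagonal intersections) of an equal rows x cols partition."""
    cell_w, cell_h = width / cols, height / rows
    return [((c + 0.5) * cell_w, (r + 0.5) * cell_h)
            for r in range(rows) for c in range(cols)]

# Nine mark points for an assumed 1920x1080 screen, row by row.
points = grid_mark_points(1920, 1080)
```

The first entry is the center of the top-left rectangle and the last the center of the bottom-right one, matching the row-major numbering used by the "number" tag words in the next step.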
Step 3: associate the 9 mark points of the screen with the "number" tag words "1", "2", "3", "4", "5", "6", "7", "8", "9" of the voice-tag dictionary, and display the tag words as the backgrounds of the rectangular areas dividing the screen;
Step 4: suppose the folder appearing at a random position on the screen lies exactly under the mark point corresponding to tag word "3" in the upper right corner; the user then speaks the tag word "3" and pauses, and target selection is performed with the mark point corresponding to "3" as the selection point. Suppose instead the folder does not lie under the mark point corresponding to "3" but lies within the region of that mark point; the user then speaks the tag word "3" and sustains the voicing, i.e. says "333333…" in one breath, and a circle is generated centered on the mark point corresponding to "3" whose radius keeps growing while the user keeps voicing; when the arc of the circle approaches the target folder, the user stops voicing and the radius of the circle stops growing;
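The growing circle can be modeled as a radius that increases at a fixed rate for as long as voiced frames of the repeated tag word keep arriving from the recognizer. The sketch below is an illustration only; the growth rate, the frame length and the boolean frame stream are assumptions, not parameters given in the patent:

```python
def grow_radius(voicing_frames, rate_px_per_s=120.0, frame_s=0.05):
    """Return (final radius, radius trace) while the user sustains voicing.

    `voicing_frames` is a sequence of booleans, one per audio frame:
    True while the tag word is being repeated ("333333..."), False once
    the user falls silent. The radius grows while frames are voiced and
    freezes at the first silent frame, as the method describes.
    """
    radius, trace = 0.0, []
    for voiced in voicing_frames:
        if not voiced:
            break                       # user stopped: radius stops growing
        radius += rate_px_per_s * frame_s
        trace.append(radius)
    return radius, trace

# One second of sustained voicing followed by silence:
final_r, trace = grow_radius([True] * 20 + [False] * 5)
```

The trace would drive the on-screen redraw of the circle, which is exactly the intuitive visual feedback the invention emphasizes.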
Step 5: cancel the association of Step 3 between the tag words and the mark points, remove the tag words displayed on the screen in Step 3, and remove the mark points created on the screen in Step 2;
Step 6: taking as starting point the intersection of the circle generated in Step 4 with the vertical line through its center, divide the arc of the circle into nine segments; associate the arc segments with the "number" tag words "1", "2", "3", "4", "5", "6", "7", "8", "9" of the voice-tag dictionary, and display each tag word above its corresponding arc segment;
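Each of the nine arc segments then spans 40 degrees, measured clockwise from the top intersection with the vertical line. Their midpoints, which the next step uses as centers of the follow-up circle, can be computed as below; screen coordinates with y growing downward are an assumption of this sketch:

```python
import math

def arc_midpoints(cx, cy, r, segments=9):
    """Midpoint of each arc segment, numbered clockwise from the top.

    Segment k covers [k*360/segments, (k+1)*360/segments) degrees,
    measured clockwise from the topmost point of the circle.
    """
    step = 360.0 / segments
    mids = []
    for k in range(segments):
        a = math.radians((k + 0.5) * step)      # mid-angle of segment k
        # clockwise from "up" in screen coordinates (y grows downward)
        mids.append((cx + r * math.sin(a), cy - r * math.cos(a)))
    return mids

mids = arc_midpoints(0.0, 0.0, 100.0)
```

Under this geometry the chord from a midpoint to the boundary of the adjacent segment subtends 20 degrees, so the radius of the follow-up circle in Step 7 would be 2·r·sin(10°).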
Step 7: suppose the folder appearing at a random position on the screen lies under the 2nd arc segment of the circle of Step 6; the user speaks the tag word "2" displayed on the screen, and a circle is generated centered on the midpoint of the corresponding arc segment, with radius equal to the distance from that midpoint to the intersection with the adjacent arc segment;
Step 8: cancel the association of Step 6 between the tag words and the arc segments, remove the tag words displayed on the screen in Step 6, and remove the circle generated on the screen in Step 4;
Step 9: rotate the left horizontal radius of the circle generated in Step 7 clockwise by 40 degrees, nine times in succession, dividing the circle into nine equal sectors; intersect the sectors with a concentric circle whose radius is 1/3 of the circle's radius and remove the intersection, obtaining an annulus divided into nine regions; adding the inner concentric circle, the circle is thus divided into ten regions. Associate the regions with the "number" tag words "1", "2", "3", "4", "5", "6", "7", "8", "9", "0" of the voice-tag dictionary, and display the tag words as the backgrounds of the regions of the circle;
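The ten regions of this step, nine 40-degree sectors of the annulus between radius r/3 and r plus the inner disc, each need a selection point. Placing each annular region's point at the middle radius 2r/3 on the sector's mid-angle is an illustrative choice of this sketch; the patent only requires a center point per region:

```python
import math

def region_centers(cx, cy, r, sectors=9):
    """Selection points for the nine annulus regions plus the inner disc.

    The annulus between r/3 and r is cut into `sectors` equal sectors,
    numbered from the left horizontal radius; in screen coordinates
    (y grows downward) increasing the standard angle sweeps clockwise
    visually. The tenth region is the inner concentric disc, whose
    selection point is the circle center itself.
    """
    step = 360.0 / sectors
    mid_r = 2.0 * r / 3.0                       # midway between r/3 and r
    centers = []
    for k in range(sectors):
        a = math.radians(180.0 + (k + 0.5) * step)  # mid-angle of sector k
        centers.append((cx + mid_r * math.cos(a), cy + mid_r * math.sin(a)))
    centers.append((cx, cy))                    # inner concentric disc
    return centers

centers = region_centers(0.0, 0.0, 90.0)
```

With r = 90 the annular selection points all lie at distance 60 from the center, one per 40-degree sector, and the tenth point is the center itself.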
Step 10: suppose the folder appearing at a random position on the screen lies in the region of the circle corresponding to tag word "1" in Step 9; the user speaks the tag word "1" displayed on the screen, and target selection is performed with the center point of the corresponding region as the selection point.
Target selection works as follows: after the tag word is spoken, the coordinates of the corresponding mark point, or of the center point of the corresponding region, are obtained; the system receives the coordinate information and moves the cursor to that position. The method can therefore also be used when no target appears on the graphical interface, for example to move the cursor to a blank area and perform a right-click there. Because selection proceeds by progressively subdividing regions, the method improves the accuracy of target selection and saves selection time.
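The coordinate hand-off described above can be sketched as a small dispatcher. Here `move_cursor` is a hypothetical platform hook (on a real system it would wrap an OS cursor API), not an interface named in the patent:

```python
def select_target(tag, mapping, move_cursor):
    """Resolve a spoken tag word to screen coordinates and move the cursor.

    `mapping` associates tag words with selection points (mark points or
    region centers); `move_cursor` is a platform-specific callback that
    receives the final (x, y) position.
    """
    if tag not in mapping:
        return None                    # unrecognized tag: keep waiting
    x, y = mapping[tag]
    move_cursor(x, y)                  # system moves the cursor to the point
    return (x, y)

# Usage with a stub cursor hook, e.g. region "1" centered at (500.0, 300.0):
moved = []
pos = select_target("1", {"1": (500.0, 300.0)},
                    lambda x, y: moved.append((x, y)))
```

Keeping the cursor hook as a callback separates the selection logic from the platform, which matches the claim that the method applies to any computer or smart phone with speech recognition.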
The graphical-interface target selection method based on speech recognition of this embodiment is suited to users with disabilities, or users whose hands are occupied, for selecting a target at an arbitrary random position on the screen.
The above embodiments only describe preferred embodiments of the present invention and do not limit its scope. Without departing from the spirit of the design of the present invention, any changes and improvements that a person of ordinary skill in the art makes to the technical solution of the present invention shall fall within the protection scope determined by the claims of the present invention.
Claims (6)
1. A graphical-interface target selection method based on speech recognition, characterized in that it comprises the following steps:
Step 1: create a voice-tag dictionary on the smart device, the dictionary containing at least one class of tag words, each class containing at least one tag word;
Step 2: create several mark points on the screen of the smart device and display them on the screen;
Step 3: associate the mark points on the screen with one class of tag words from the voice-tag dictionary, and display each tag word of that class next to its corresponding mark point;
Step 4: determine whether the user speaks one of the tag words displayed in Step 3, and at the same time determine whether the user sustains the voicing; if the user has not yet spoken a tag word, the system waits, without judging sustained voicing, until the user speaks one; if the user speaks a tag word and the target to be selected lies exactly at the corresponding mark point, the user stops voicing and target selection is performed with that mark point as the selection point; if the user speaks a tag word but the target lies away from the corresponding mark point, the user sustains the voicing, and a circle is generated centered on that mark point whose radius keeps growing while the user keeps voicing;
Step 5: cancel the association of Step 3 between the tag words and the mark points, remove the tag words displayed on the screen in Step 3, and remove the mark points created on the screen in Step 2;
Step 6: divide the circle generated on the screen in Step 4 into several arc segments, associate the arc segments with one class of tag words from the voice-tag dictionary, and display each tag word of that class next to its corresponding arc segment;
Step 7: determine whether the user speaks one of the tag words displayed in Step 6; if not, the system waits until the user speaks one, and returns to Step 2 if the wait times out; if the user speaks a tag word, a circle is generated centered on the midpoint of the corresponding arc segment, with radius equal to the distance from that midpoint to the intersection with the adjacent arc segment;
Step 8: cancel the association of Step 6 between the tag words and the arc segments, remove the tag words displayed on the screen in Step 6, and remove the circle generated on the screen in Step 4;
Step 9: divide the circle generated on the screen in Step 7 into several regions, associate each region with one class of tag words from the voice-tag dictionary, and display each tag word of that class inside its corresponding region;
Step 10: determine whether the user speaks one of the tag words displayed in Step 9; if not, the system waits until the user speaks one, and returns to Step 2 if the wait times out; if the user speaks a tag word, target selection is performed with the center point of the corresponding region as the selection point.
2. The graphical-interface target selection method based on speech recognition according to claim 1, characterized in that the smart device in Steps 1 to 10 refers to a computer or smart phone equipped with a speech recognition function.
3. The graphical-interface target selection method based on speech recognition according to claim 1 or 2, characterized in that the voice-tag dictionary in Step 1 includes numeric tag words, alphabetic tag words and text tag words.
4. The graphical-interface target selection method based on speech recognition according to claim 1 or 2, characterized in that the creation of mark points in Step 2 refers to dividing the screen of the smart device into several blocks and taking the center point of each block as a mark point.
5. The graphical-interface target selection method based on speech recognition according to claim 1 or 2, characterized in that sustained voicing in Step 4 refers to repeatedly voicing the same tag word until the generated circle approaches the target, whereupon the user stops voicing and the radius of the circle stops growing.
6. The graphical-interface target selection method based on speech recognition according to claim 1 or 2, characterized in that the associations with a class of tag words in Steps 3, 6 and 9 are all random, i.e. the class of tag words associated in these three steps may be the same or different, and the choice of tag words within a class is likewise random.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811056705.XA CN109407946B (en) | 2018-09-11 | 2018-09-11 | Graphical interface target selection method based on voice recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109407946A true CN109407946A (en) | 2019-03-01 |
CN109407946B CN109407946B (en) | 2021-05-14 |
Family
ID=65464748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811056705.XA Active CN109407946B (en) | 2018-09-11 | 2018-09-11 | Graphical interface target selection method based on voice recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109407946B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113539253A (en) * | 2020-09-18 | 2021-10-22 | 厦门市和家健脑智能科技有限公司 | Audio data processing method and device based on cognitive assessment |
CN115248650A (en) * | 2022-06-24 | 2022-10-28 | 南京伟柏软件技术有限公司 | Screen reading method and device |
CN113539253B (en) * | 2020-09-18 | 2024-05-14 | 厦门市和家健脑智能科技有限公司 | Audio data processing method and device based on cognitive assessment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1667700A (en) * | 2004-03-10 | 2005-09-14 | 微软公司 | New-word pronunciation learning using a pronunciation graph |
US20080158261A1 (en) * | 1992-12-14 | 2008-07-03 | Eric Justin Gould | Computer user interface for audio and/or video auto-summarization |
CN102547463A (en) * | 2011-12-15 | 2012-07-04 | Tcl集团股份有限公司 | Method and device for locating interface focus of TV set, and TV set |
CN103680498A (en) * | 2012-09-26 | 2014-03-26 | 华为技术有限公司 | Speech recognition method and speech recognition equipment |
CN103905636A (en) * | 2014-03-03 | 2014-07-02 | 联想(北京)有限公司 | Information processing method and electronic device |
CN105100460A (en) * | 2015-07-09 | 2015-11-25 | 上海斐讯数据通信技术有限公司 | Method and system for controlling intelligent terminal by use of sound |
Non-Patent Citations (1)
Title |
---|
Ding Huaidong, Yin Jibin: "Research on a multifunctional stylus whiteboard system for video conferencing and collaborative work", Journal of Kunming University of Science and Technology (Natural Science Edition) *
Also Published As
Publication number | Publication date |
---|---|
CN109407946B (en) | 2021-05-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||