CN102426483A - Multi-channel accurate target positioning method for touch equipment - Google Patents

Multi-channel accurate target positioning method for touch equipment

Info

Publication number
CN102426483A
CN102426483A (application CN201110445164A; granted as CN102426483B)
Authority
CN
China
Prior art keywords
touch
voice input
confidence
control
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104451641A
Other languages
Chinese (zh)
Other versions
CN102426483B (en)
Inventor
姜映映 (Jiang Yingying)
田丰 (Tian Feng)
王宏安 (Wang Hongan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN 201110445164 priority Critical patent/CN102426483B/en
Publication of CN102426483A publication Critical patent/CN102426483A/en
Application granted granted Critical
Publication of CN102426483B publication Critical patent/CN102426483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention belongs to the field of human-computer interaction and particularly relates to a multi-channel accurate target positioning method for touch devices. The method uses touch input and voice input to position the target: the touch input provides a candidate set of possible target objects, and the voice input provides another candidate set of possible target objects; a multi-channel fusion algorithm then resolves these candidate sets to an exact target object. The method supports two natural input channels, touch and voice, and thus two natural interaction modes; it enables more accurate target positioning and improves the efficiency of positioning-related tasks on touch devices.

Description

A multi-channel accurate target positioning method for touch devices
Technical field
The invention belongs to the field of human-computer interaction, and specifically relates to a multi-channel accurate target positioning method for touch devices.
Background art
On touch devices, many tasks require the user to first locate a target object before performing a subsequent operation. For example, in text editing the cursor must first be positioned at the target location before text can be inserted or deleted; in map navigation the user must first locate a target position before zooming in to inspect it; and to operate on a single object among many small objects, the target object must first be selected. However, because the contact area between a finger and a touch screen is large (reference: Fat finger worries: How older and younger users physically interact with PDAs. In Proc. INTERACT 2005, 267-280.), users often cannot accurately select or locate the target object. Providing users with efficient and accurate target positioning techniques is therefore particularly important.
High-precision touch screen interaction has been widely studied. Vogel and Baudisch (reference: Shift: A technique for operating pen-based interfaces using touch. In Proc. CHI 2007, 657-666.) use a cursor-offset technique to address the problem of coarse finger input in touch interaction. Olwal et al. (reference: Rubbing and tapping for precise and rapid selection on touch-screen displays. In Proc. CHI 2008, 295-304.), Roudaut et al. (reference: TapTap and MagStick: Improving one-handed target acquisition on small touch-screens. In Proc. AVI 2008, 146-153.), Yang et al. (reference: TouchCuts and TouchZoom: Enhanced target selection for touch displays using finger proximity sensing. In Proc. CHI 2011, 2585-2594.) and Käser et al. (reference: FingerGlass: Efficient multiscale interaction on multitouch screens. In Proc. CHI 2011, 1343-1352.) use zooming techniques to support accurate interaction. Yatani et al. (reference: Escape: A target selection technique using visually-cued gestures. In Proc. CHI 2008, 285-294.) use gestures with visual cues to speed up the selection of target objects. Benko et al. (reference: Precise selection techniques for multi-touch screens. In Proc. CHI 2006, 1263-1272.) and Olwal et al. (reference: Rubbing and tapping for precise and rapid selection on touch-screen displays. In Proc. CHI 2008, 295-304.) use two-finger selection to assist accurate positioning. Albinsson and Zhai (reference: High precision touch screen interaction. In Proc. CHI 2003, 105-112.) designed interactive widgets to support accurate interaction. Froehlich et al. (reference: Barrier pointing: Using physical edges to assist target acquisition on mobile device touch screens. In Proc. ASSETS 2007, 19-26.) use the physical bezel of a mobile device to assist target acquisition. Although these techniques support more accurate interaction on touch devices, they usually require additional adjustment operations after the initial touch.
Multiple input channels can provide complementary and redundant information, and can be used to design new interaction techniques. For example, Hinckley and Song (reference: Sensor synaesthesia: Touch in motion, and motion in touch. In Proc. CHI 2011, 801-810.) combine the strengths of multi-touch and motion sensing to support richer touch interaction. Wigdor and Balakrishnan (reference: TiltText: Using tilt for text input to mobile phones. In Proc. UIST 2003, 81-90.) designed TiltText, which lets the user tilt the phone while pressing a key to enter the target character. Jiang et al. (reference: Multimodal Chinese text entry with speech and keypad on mobile devices. In Proc. IUI 2008, 341-344.) combine voice and keypad input to support more efficient Chinese text entry on mobile devices. These methods are useful for providing richer interaction techniques and improving efficiency, but they were not designed for the accurate target positioning problem and cannot be applied to it directly.
Summary of the invention
The object of the invention is to use multi-channel input to provide a target positioning method for touch devices, thereby improving the accuracy of touch-input target positioning. The method fuses touch input and voice input: the imprecise touch input provides a set of possible target positions, and the voice input is used to determine the exact target position; the final target object is located through multi-channel fusion of the touch input and the voice input, as shown in Fig. 1.
Specifically, the technical scheme adopted by the invention is as follows:
A multi-channel accurate target positioning method for touch devices, comprising the steps of:
1) collecting touch input and voice input on the touch device, and obtaining a target candidate set corresponding to the touch input and a target candidate set corresponding to the voice input;
2) computing the confidence of said target candidates with a multi-channel fusion algorithm, and taking the position of the candidate with the highest confidence as the final position.
Further, the touch device must support both touch input and voice input. It may be a touch phone (e.g. iPhone, Google Phone), a touch-screen tablet (e.g. iPad, LePad), or a touch tabletop. The target positioning method of the invention is not limited to any particular positioning widget, such as a cursor or a small button.
Further, the touch input indicates the position of the target object. Touch input can be collected through the interfaces provided by existing touch operating systems (e.g. Android, iOS, Windows Phone). The collected information includes the touch-point position computed by the system, the area of the contact region, etc. The region covered by the finger contains the possible target positions, which form the target object candidate set. The confidence of each target candidate is inversely proportional to the distance between that candidate position and the touch point detected by the system: the closer a candidate is to the touch point, the higher its confidence. Subject to this principle, the concrete definitions of the candidate set and of the confidence are left to the developer. For example, for cursor positioning, the cursor positions surrounding the system-detected cursor position can be collected to form the cursor-position candidate set, with the confidence of each candidate decreasing as its distance from the system-detected position grows.
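Since the concrete candidate-set and confidence definitions are left to the developer, the following is only a minimal sketch of one distance-based scheme satisfying the inverse-proportionality principle; the contact radius and the 1/(1+d) falloff are illustrative assumptions, not taken from the patent:

```python
import math

def touch_candidates(touch_xy, positions, radius=40.0):
    """Candidate set T for the touch channel.

    Keeps every candidate position within the assumed finger-contact
    radius of the detected touch point and gives it a confidence that
    falls off with distance from that point.  Confidences are
    normalized to sum to 1.
    """
    tx, ty = touch_xy
    cands = {}
    for pos in positions:
        d = math.hypot(pos[0] - tx, pos[1] - ty)
        if d <= radius:
            cands[pos] = 1.0 / (1.0 + d)  # closer -> higher confidence
    if not cands:
        return {}
    total = sum(cands.values())
    return {pos: w / total for pos, w in cands.items()}
```

Any other monotonically decreasing falloff would satisfy the same principle; normalizing keeps the touch confidences comparable with the voice confidences during fusion.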
Further, the content of the voice input is speech related to the target object; for example, it can be the name of the target object or another of its attributes. The voice input closest in time to a touch input is taken as the voice input corresponding to that touch input. The voice input is recognized by a speech recognition engine, which yields a number of candidate recognition results together with their confidences. The speech recognition engine can be one provided by the touch operating system (e.g. the Google Speech API) or a third-party engine (e.g. iSpeech).
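When the engine supplies scores, those can be used directly; otherwise the confidence of each recognition candidate can be derived from its rank in the engine's n-best list, as the description allows. A minimal sketch, in which the linear rank weighting is an illustrative assumption:

```python
def speech_candidates(ranked_results):
    """Candidate set S for the voice channel, built from a speech
    recognizer's n-best list.  Confidence is derived from rank alone:
    candidates earlier in the list get higher confidence, which is
    the only principle the method requires.  Weights are linear in
    rank and normalized to sum to 1."""
    n = len(ranked_results)
    total = n * (n + 1) / 2  # n + (n-1) + ... + 1
    return {r: (n - i) / total for i, r in enumerate(ranked_results)}
```

For the three-candidate list `['s', 'f', 'a']` this assigns confidences 3/6, 2/6 and 1/6, preserving the engine's ordering.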
Further, the multi-channel fusion algorithm computes:
p(c_i) = a*p(c_i|T) + (1-a)*p(c_i|S)    (1)
where T is the target candidate set corresponding to the touch input; S is the target candidate set corresponding to the voice input; c_i is the i-th candidate in T; p(c_i|T) is the confidence of target position c_i given the touch input; p(c_i|S) is the confidence of target position c_i given the voice input; p(c_i) is the confidence of c_i as the final target object; and a and (1-a) are the weights of the touch input and the voice input in determining the target object, which can be adjusted for different types of users. The position of the candidate with the highest confidence computed by formula (1) is taken as the final position, as shown in Fig. 2.
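Formula (1) can be sketched directly in code. Candidates are keyed by an identifier shared between the two channels; treating a candidate absent from the voice set as having p(c|S) = 0 is an assumption about how unmatched candidates are handled:

```python
def fuse(touch_conf, speech_conf, a=0.5):
    """Multi-channel fusion, formula (1):
        p(c) = a * p(c|T) + (1 - a) * p(c|S)
    touch_conf maps each candidate in T to p(c|T); speech_conf
    supplies p(c|S), defaulting to 0 for candidates the recognizer
    did not return.  Returns the highest-confidence candidate and
    the full table of fused scores."""
    scores = {c: a * pt + (1 - a) * speech_conf.get(c, 0.0)
              for c, pt in touch_conf.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

For example, with touch confidences `{'h': 0.3, 's': 0.2, 'c': 0.5}` and voice confidences `{'s': 0.8, 'f': 0.2}` at a = 0.5, the fused score of 's' is 0.5·0.2 + 0.5·0.8 = 0.5, which beats 'c' (0.25) and 'h' (0.15), so 's' wins even though it ranks last on the touch channel alone.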
Compared with the prior art, the advantages and beneficial effects of the invention are as follows:
1) The invention supports more accurate target positioning, and can therefore improve the efficiency of target positioning on touch devices.
2) The invention uses two natural input channels, touch and voice, and can therefore support natural interaction.
3) The invention uses voice input together with touch input; since most touch devices already support voice input, the technique can easily be adopted on touch devices.
Description of drawings
Fig. 1 is a schematic diagram of the multi-channel accurate target positioning method for touch devices according to the invention.
Fig. 2 is a schematic diagram of the multi-channel fusion algorithm for touch input and voice input according to the invention.
Fig. 3 is a schematic diagram of the cursor positioning method in text editing according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a cursor positioning example in an embodiment of the invention (positioning the cursor between the characters 'h' and 's' in "touchscreen"), in which: (a) the user touches the target location while speaking the first character after it ('s'); (b) the finger covers a large region; (c) the possible cursor positions; (d) the exact cursor position is determined through the voice input.
Embodiments
To make the objects, features and advantages of the invention more apparent and understandable, the invention is described in detail below through specific embodiments, with reference to the accompanying drawings.
Touch-based interaction is now very common on mobile devices, where users frequently use applications such as SMS, mail and notepads. In these applications, the cursor is used to specify the exact location of text input and editing tasks. Because the text on a mobile device is relatively small while the finger used for touch interaction is relatively large, the contact area of the finger may cover several characters across different rows and columns, which makes positioning the cursor at the target location inefficient and error-prone. Moreover, the characters on a touch screen are often small and closely spaced, and positioning the cursor between two adjacent narrow characters (such as "ij") becomes particularly difficult. The method of the invention provides a solution to this problem.
This embodiment supports more efficient and accurate cursor positioning on a touch phone. It was implemented on a Google Nexus S phone with a 4.0-inch screen, a resolution of 480 x 800, and the Android 2.3 operating system.
Fig. 3 shows a schematic diagram of this embodiment. To position the cursor, the user touches the target cursor position on the screen while speaking the character after the target cursor position. In this embodiment the content of the voice input is defined as the first character after the target cursor position, but the invention is not limited to this; other settings (such as the character before the target cursor position) have a similar effect. The Android system detects the cursor position from the finger touch, collects the characters around that position (above, below, left and right), and forms the cursor-position candidate set; the character candidate set of the voice input is obtained through a speech recognition engine based on the Google Speech API. By fusing the touch-input candidate set and the voice candidate set (with a set to 0.5 in the formula for p(c_i)), the confidence of each cursor position in the touch candidate set is obtained, and the position with the highest confidence is taken as the final cursor position.
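The candidate-set construction of this embodiment — cursor slots above, below, left and right of the system-detected slot, each identified by the character that follows it — can be sketched as follows; the window size and the text-as-list-of-lines layout are illustrative assumptions:

```python
def neighbor_slots(lines, row, col, span=2):
    """Cursor-slot candidates around the system-detected slot
    (row, col): up to `span` columns left and right on the touched
    line and on the lines directly above and below, matching the
    up/down/left/right neighborhood of the embodiment.  Each slot
    is paired with the character that follows it, which is the
    character the voice channel names in this embodiment."""
    slots = []
    for r in (row - 1, row, row + 1):
        if 0 <= r < len(lines):
            lo = max(0, col - span)
            hi = min(len(lines[r]), col + span + 1)
            for c in range(lo, hi):
                slots.append(((r, c), lines[r][c]))  # slot precedes lines[r][c]
    return slots
```

For a one-line text `["touchscreen"]` with a detected slot at column 5, this yields five candidate slots preceding 'c', 'h', 's', 'c' and 'r' respectively.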
Fig. 4 shows an example of cursor positioning in the above embodiment. When the user tries to position the cursor between 'h' and 's' in "touchscreen", the user taps that location with a finger while saying 's' (Fig. 4a). The finger covers several characters (Fig. 4b and Fig. 4c). The cursor position derived by the Android system from the touch input alone lies between 'c' and 'h' of "touchscreen", which does not match the user's intention. By performing multi-channel fusion with the voice input, the cursor is correctly positioned between 'h' and 's' (Fig. 4d).
The multi-channel fusion algorithm is further illustrated with the example of Fig. 4. The touch input generates 12 candidate cursor positions: the cursor may be placed before 'h', 'c', 's', 'u', 'c', ' ', 'd', 'l', 'e', 'e', 'e', 'e', listed here in descending order of confidence. The confidence decreases as the distance between the candidate cursor position and the system-detected cursor position grows. The speech recognition engine returns the recognition candidates 's', 'f' and 'a', also in descending order of confidence. The confidence of the voice input can be provided directly by the speech recognition engine; alternatively, the developer can define it from the order of the candidate results returned by the engine, subject to the principle that candidates earlier in the result sequence have higher confidence. Applying the fusion algorithm, i.e. formula (1), the candidate cursor positions re-ranked in descending order of fused confidence are: 's', 'h', 'c', 'u', 'c', ' ', 'd', 'l', 'e', 'e', 'e', 'e'. The position with the highest confidence, before the character 's', is therefore taken as the final cursor position, i.e. the cursor position the user intended.
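This example can be reproduced numerically. The rank-derived confidence values below are illustrative assumptions — the text gives only the orderings and a = 0.5, not the actual numbers:

```python
# Touch candidates from Fig. 4, each identified by the character the
# cursor slot precedes, in descending order of touch confidence.
touch_rank = ['h', 'c', 's', 'u', 'c', ' ', 'd', 'l', 'e', 'e', 'e', 'e']
n = len(touch_rank)
total_t = n * (n + 1) / 2
p_touch = [(ch, (n - i) / total_t) for i, ch in enumerate(touch_rank)]

# Voice candidates in descending order of confidence, weighted the
# same rank-linear way (an assumed confidence scheme).
speech_rank = ['s', 'f', 'a']
m = len(speech_rank)
p_speech = {ch: (m - i) / (m * (m + 1) / 2) for i, ch in enumerate(speech_rank)}

# Formula (1) with equal channel weights, as in the embodiment.
a = 0.5
fused = [(ch, a * pt + (1 - a) * p_speech.get(ch, 0.0)) for ch, pt in p_touch]
best = max(fused, key=lambda t: t[1])[0]
```

Under these assumed values the slot before 's' (fused confidence about 0.31) overtakes the touch channel's top choice 'h' (about 0.08), matching the re-ranking described in the text.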
In the above embodiment, when applying formula (1), a and (1-a) are the weights of the touch input and the voice input in determining the target object, and can be adjusted for different types of users. When the user's speech is less standard or the environment is noisy, speech recognition accuracy drops, and a can be set larger so that the positioning method relies more on the touch input; conversely, when the speech is standard and the environment is quiet, the contribution of the voice input can be increased by setting a smaller. In this embodiment the touch input and the voice input are weighted equally, with a set to 0.5.
The multi-channel accurate target positioning method for touch devices according to the invention has been described in detail above through embodiments, but the concrete implementation of the invention is not limited thereto. Those of ordinary skill in the art may make various obvious changes and modifications without departing from the spirit and principle of the method of the invention. The protection scope of the invention shall be defined by the appended claims.

Claims (10)

1. A multi-channel accurate target positioning method for touch devices, comprising the steps of:
1) collecting touch input and voice input on the touch device, and obtaining a target candidate set corresponding to said touch input and a target candidate set corresponding to said voice input;
2) computing the confidence of said target candidates with a multi-channel fusion algorithm, and taking the position of the target candidate with the highest confidence as the final position; the multi-channel fusion algorithm computes:
p(c_i) = a*p(c_i|T) + (1-a)*p(c_i|S)
wherein T is the target candidate set corresponding to the touch input, S is the target candidate set corresponding to the voice input, c_i is the i-th candidate in T, p(c_i|T) is the confidence of target position c_i given the touch input, p(c_i|S) is the confidence of target position c_i given the voice input, p(c_i) is the confidence of c_i as the final target object, and a and (1-a) are the weights of the touch input and the voice input in determining the target object.
2. The method of claim 1, wherein the touch device is a touch phone, a touch-screen tablet or a touch tabletop that supports both touch input and voice input.
3. The method of claim 2, wherein the touch device performs said positioning with a cursor or a small button.
4. The method of claim 2, wherein the touch input is obtained through an interface provided by a touch operating system, the touch operating system comprising Android, iOS and Windows Phone.
5. The method of claim 4, wherein the information obtained through the touch input comprises the position of the touch point and the area of the contact region.
6. The method of claim 1, wherein the confidence of each target candidate in the target candidate set corresponding to the touch input is inversely proportional to the distance between that target position and the touch point detected by the system.
7. The method of claim 1, wherein the content of the voice input is related to the target object.
8. The method of claim 7, wherein the content of the voice input is the name of the target object.
9. The method of claim 1, wherein the voice input is recognized by a speech recognition engine, which yields the confidence of the voice input.
10. The method of claim 9, wherein the speech recognition engine is a speech recognition engine provided by an existing touch operating system, comprising the Google Speech API, or a third-party speech recognition engine, comprising iSpeech.
CN 201110445164 2011-12-27 2011-12-27 Multi-channel accurate target positioning method for touch equipment Active CN102426483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110445164 CN102426483B (en) 2011-12-27 2011-12-27 Multi-channel accurate target positioning method for touch equipment


Publications (2)

Publication Number Publication Date
CN102426483A 2012-04-25
CN102426483B 2013-12-25

Family

ID=45960479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110445164 Active CN102426483B (en) 2011-12-27 2011-12-27 Multi-channel accurate target positioning method for touch equipment

Country Status (1)

Country Link
CN (1) CN102426483B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1397866A (en) * 2002-08-02 2003-02-19 北京南山高科技有限公司 Method and device for inputting Chinese characters by speech and equipment with simplified keyboard
CN1635453A (en) * 2003-12-29 2005-07-06 中国科学院自动化研究所 Audio control device as computer peripheral
US20090182562A1 (en) * 2008-01-14 2009-07-16 Garmin Ltd. Dynamic user interface for automated speech recognition


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜映映 (Jiang Yingying) et al.: "基于语音和笔的手写数学公式纠错方法" (A speech- and pen-based method for correcting errors in handwritten mathematical formulas), 《计算机研究与发展》 (Journal of Computer Research and Development) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104932810A (en) * 2014-03-18 2015-09-23 香港城市大学 Target acquisition system for use in touch screen graphical interface
CN104932810B (en) * 2014-03-18 2019-12-13 香港城市大学 Object acquisition system for use in touch screen graphical interfaces
WO2016045445A1 (en) * 2014-09-28 2016-03-31 厦门幻世网络科技有限公司 Target position positioning method and device thereof based on touch screen
CN106990843A (en) * 2017-04-01 2017-07-28 维沃移动通信有限公司 A kind of parameter calibrating method and electronic equipment of eyes tracking system
CN106990843B (en) * 2017-04-01 2021-01-08 维沃移动通信有限公司 Parameter calibration method of eye tracking system and electronic equipment
CN108829248A (en) * 2018-06-01 2018-11-16 中国科学院软件研究所 A kind of mobile target selecting method and system based on the correction of user's presentation model
CN108829248B (en) * 2018-06-01 2020-11-20 中国科学院软件研究所 Moving target selection method and system based on user performance model correction
CN109121081A (en) * 2018-09-11 2019-01-01 电子科技大学 A kind of indoor orientation method based on position Candidate Set Yu EM algorithm
CN109121081B (en) * 2018-09-11 2020-12-29 电子科技大学 Indoor positioning method based on position candidate set and EM algorithm

Also Published As

Publication number Publication date
CN102426483B (en) 2013-12-25

Similar Documents

Publication Publication Date Title
US9665276B2 (en) Character deletion during keyboard gesture
RU2635285C1 (en) Method and device for movement control on touch screen
US9032338B2 (en) Devices, methods, and graphical user interfaces for navigating and editing text
US20130215018A1 (en) Touch position locating method, text selecting method, device, and electronic equipment
US11340755B2 (en) Moving a position of interest on a display
US8456433B2 (en) Signal processing apparatus, signal processing method and selection method of user interface icon for multi-touch panel
CN108733303B (en) Touch input method and apparatus of portable terminal
US20140109016A1 (en) Gesture-based cursor control
US20120188164A1 (en) Gesture processing
US20150242114A1 (en) Electronic device, method and computer program product
JP4851547B2 (en) Mode setting system
WO2012061564A2 (en) Device, method, and graphical user interface for manipulating soft keyboards
US9778780B2 (en) Method for providing user interface using multi-point touch and apparatus for same
CN102426483B (en) Multi-channel accurate target positioning method for touch equipment
US9836211B2 (en) Device, method, and graphical user interface for selection of views in a three-dimensional map based on gesture inputs
US10416868B2 (en) Method and system for character insertion in a character string
CN104199607A (en) Candidate selection method and device based on input method
US9182908B2 (en) Method and electronic device for processing handwritten object
CN105335089A (en) Page switch method and device based on intelligent terminal
US20120050184A1 (en) Method of controlling driving of touch panel
US20150355769A1 (en) Method for providing user interface using one-point touch and apparatus for same
US20160139802A1 (en) Electronic device and method for processing handwritten document data
KR101436585B1 (en) Method for providing user interface using one point touch, and apparatus therefor
CN101794182B (en) Method and equipment for touch input
US20140035876A1 (en) Command of a Computing Device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant