CN104866201A - Intelligent device and method for triggering editing function of application - Google Patents

Intelligent device and method for triggering editing function of application

Info

Publication number
CN104866201A
CN104866201A (application CN201510317937.6A)
Authority
CN
China
Prior art keywords
application
user
operation trace
action
editing interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510317937.6A
Other languages
Chinese (zh)
Inventor
谢根英
张国梁
卢家顺
陈建威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201510317937.6A
Publication of CN104866201A
Legal status: Pending

Landscapes

  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an intelligent device comprising a touch screen, an action acquisition module, and an action recognition and display module. The action acquisition module obtains the user's operation trajectory information via the touch screen and sends it to the action recognition and display module, which determines the application editing interface to be shown according to the received trajectory information and presents the determined editing interface to the user. The invention also discloses a method for triggering the editing function of an application. With the disclosed intelligent device and method, the editing function of an application can be triggered conveniently and quickly.

Description

A smart device and a method for triggering the editing function of an application
Technical field
The present invention relates to computer application technology, and in particular to a smart device and a method for triggering the editing function of an application.
Background technology
At present, smart devices with touch screens, such as mobile phones, are widely used, and the home screen of such a device usually displays icons for multiple different applications (apps).
In existing approaches, when a user needs to trigger the editing function of an application that has one, he or she usually has to open several levels of menus to reach the required editing interface. The operation is cumbersome, and opening each menu level takes a certain amount of time. For applications based on network services in particular, interface loading is time-consuming and loading waits often occur, which adds time cost and reduces efficiency.
Summary of the invention
In view of this, the present invention provides a smart device and a method for triggering the editing function of an application, which make it possible to trigger an application's editing function quickly and conveniently.
To achieve the above object, the technical solution of the present invention is implemented as follows:
A smart device, comprising: a touch screen, an action acquisition module, and an action recognition and display module;
the action acquisition module is configured to obtain the user's operation trajectory information via the touch screen and send it to the action recognition and display module;
the action recognition and display module is configured to determine, according to the received operation trajectory information, the editing interface of the application to be shown, and to show the determined editing interface to the user.
A method for triggering the editing function of an application, the method being applicable to a smart device with a touch screen, comprising:
obtaining the user's operation trajectory information via the touch screen;
determining, according to the obtained operation trajectory information, the editing interface of the application to be shown, and showing the determined editing interface to the user.
It can be seen that, with the solution of the present invention, the user only needs to perform a single operational action to trigger the editing function of an application. This avoids the cumbersome operation and added time cost of opening multiple menu levels in the prior art; that is, the solution of the present invention is not only simple to operate but also saves time, so the editing function of an application can be triggered quickly and conveniently.
Accompanying drawing explanation
Fig. 1 is a schematic structural diagram of an embodiment of the smart device of the present invention.
Fig. 2 is a flowchart of an embodiment of the method of the present invention for triggering the editing function of an application.
Fig. 3 is a flowchart of the implementation of step 22 shown in Fig. 2.
Embodiment
To make the technical solution of the present invention clearer and easier to understand, the solution is described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a schematic structural diagram of an embodiment of the smart device of the present invention. As shown in Fig. 1, the device comprises: a touch screen (not shown, to simplify the drawing), an action acquisition module, and an action recognition and display module.
The action acquisition module obtains the user's operation trajectory information via the touch screen and sends it to the action recognition and display module.
The action recognition and display module determines, according to the received operation trajectory information, the editing interface of the application to be shown, and shows the determined editing interface to the user.
The specific functions of the action acquisition module and the action recognition and display module are described separately below.
1) action acquisition module
In practice, the user operation trajectory information obtained by the action acquisition module may be stylus movement trajectory information, finger movement trajectory information, or the like.
For example, when the user needs to trigger a certain editing function of a certain application, he or she can use a stylus to draw, while hovering, a circle over that application's icon shown on the touch screen; accordingly, the action acquisition module obtains, through electromagnetic induction technology, a group of continuously sensed stylus movement trajectory points. The implementation is prior art.
As another example, when the user needs to trigger a certain editing function of a certain application, he or she can touch that application's icon shown on the touch screen with a finger and slide upward; accordingly, the action acquisition module obtains the user's finger movement trajectory information by monitoring the coordinates of the finger's press (Down), move (Move), and lift (Up) events on the touch screen. The specific implementation is likewise prior art.
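The Down/Move/Up event flow just described can be sketched in a few lines of Python. This is purely illustrative: the class name, event-handler names, and `Trajectory` container are assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    points: list = field(default_factory=list)  # sampled (x, y) coordinates
    complete: bool = False

class ActionAcquisitionModule:
    """Assembles one finger trajectory from press/move/lift events."""

    def __init__(self):
        self._current = None

    def on_down(self, x, y):
        # A press (Down) event starts a new trajectory.
        self._current = Trajectory(points=[(x, y)])

    def on_move(self, x, y):
        # Each move (Move) event appends one coordinate sample.
        if self._current is not None:
            self._current.points.append((x, y))

    def on_up(self, x, y):
        # The lift (Up) event closes the trajectory, which could then be
        # forwarded to a recognition-and-display module.
        if self._current is None:
            return None
        self._current.points.append((x, y))
        self._current.complete = True
        done, self._current = self._current, None
        return done
```

An upward slide would then arrive as a sequence such as `on_down(10, 40)`, several `on_move` calls, and a final `on_up(10, 10)` returning the completed trajectory.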
2) action recognition and display module
As shown in Fig. 1, this module may consist of an action recognition submodule and a display submodule.
The action recognition submodule determines the action identity and application identity corresponding to the received operation trajectory information, and sends the determined action identity and application identity to the display submodule.
The display submodule shows the user the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity.
The action recognition submodule may further comprise a first recognition unit and a second recognition unit.
The first recognition unit determines the user's operational action according to the received operation trajectory information and sends the corresponding action identity to the second recognition unit; it also determines the center point coordinate of the operation trajectory and sends it to the second recognition unit.
The second recognition unit determines, among the application icons shown on the touch screen, the application icon closest to the received center point coordinate, and sends the application identity of that icon together with the received action identity to the display submodule.
Accordingly, the display submodule can, according to a preset correspondence between action identities and application editing interfaces, show the user the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity.
How the first recognition unit determines the user's operational action from the received operation trajectory information is prior art; the operational action may be drawing a circle with a stylus, sliding a finger upward, sliding a finger downward, and so on. A unique action identity can be preset for each different operational action, so that after determining the user's operational action, the first recognition unit can send the corresponding action identity to the second recognition unit. In addition, the first recognition unit can calculate the center point coordinate P0(X0, Y0) of the operation trajectory and likewise send it to the second recognition unit, where X0 = (x1 + x2 + ... + xn)/n and Y0 = (y1 + y2 + ... + yn)/n, xi (i = 1, ..., n) is the abscissa of the i-th pixel in the operation trajectory, yi is its ordinate, and n is the number of pixels the operation trajectory comprises.
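The center point P0(X0, Y0) defined above is simply the arithmetic mean of the trajectory's pixel coordinates; a minimal Python sketch:

```python
def center_point(points):
    """Return (X0, Y0): the mean of all (x, y) pixels in a trajectory."""
    n = len(points)
    x0 = sum(x for x, _ in points) / n  # X0 = (x1 + ... + xn) / n
    y0 = sum(y for _, y in points) / n  # Y0 = (y1 + ... + yn) / n
    return (x0, y0)
```

For a closed circular stroke, this mean lands near the circle's center, which is why it works as a proxy for "which icon the gesture was drawn around".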
Since the coordinate position of each application icon shown on the touch screen is known, after receiving the action identity and center point coordinate sent by the first recognition unit, the second recognition unit can first determine, among the application icons shown on the touch screen, the one closest to the received center point coordinate, and then send the application identity of that icon together with the received action identity to the display submodule. Here, the distance may refer to the distance between the center point coordinate of an application icon and the received center point coordinate.
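The nearest-icon selection could be sketched as follows, assuming a hypothetical table mapping each application identity to its icon's center coordinate (the table and identifiers are invented for illustration):

```python
import math

def nearest_icon(icons, p0):
    """Return the application identity whose icon center is closest to p0.

    icons: dict mapping app identity -> (x, y) icon center coordinate
    p0:    (x, y) center point of the user's operation trajectory
    """
    def dist(app_id):
        cx, cy = icons[app_id]
        return math.hypot(cx - p0[0], cy - p0[1])  # Euclidean distance
    return min(icons, key=dist)
```

Usage: with `icons = {"sms": (50, 100), "mail": (150, 100)}`, a trajectory centered at `(60, 95)` selects `"sms"`.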
In addition, for any application with an editing function, the operational actions corresponding to that application's different editing interfaces can be preset; as mentioned above, each different operational action has a unique corresponding action identity, so a correspondence between editing interfaces and action identities can be recorded. In this way, after receiving the application identity and action identity sent by the second recognition unit, the display submodule can, according to this correspondence, show the user the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity. After the user finishes editing content on the editing interface, the edited content can be saved or published directly.
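The preset correspondence between (application identity, action identity) pairs and editing interfaces might be represented as a simple lookup table. All identifiers below ("sms", "swipe_up", "compose_message", and so on) are invented for illustration:

```python
# Hypothetical preset correspondence: (app identity, action identity) -> editing interface.
EDIT_INTERFACES = {
    ("sms", "swipe_up"): "compose_message",
    ("sms", "circle"): "edit_draft",
    ("notes", "swipe_up"): "new_note",
}

def editing_interface(app_id, action_id):
    """Look up which editing interface to show; None if no mapping is preset."""
    return EDIT_INTERFACES.get((app_id, action_id))
```

A table keyed by the pair keeps the design open: the same gesture can map to different editing interfaces in different applications.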
It should be noted that, in practice, a system control module can also be provided in the smart device in addition to the components shown in Fig. 1. When the user does not want the smart device to perform the function of the present invention, the system control module can be used to disable the action acquisition module and the action recognition and display module; otherwise, they can be enabled again.
Based on the above description, the present invention also provides a method for triggering the editing function of an application. Fig. 2 is a flowchart of an embodiment of this method; the method is applicable to a smart device with a touch screen and comprises the following steps 21-22.
Step 21: obtain the user's operation trajectory information via the touch screen.
The operation trajectory may include a stylus movement trajectory or a finger movement trajectory.
Accordingly, the user's operation trajectory information may be obtained by:
obtaining the user's stylus movement trajectory information through electromagnetic induction technology; or
obtaining the user's finger movement trajectory information by monitoring the coordinates of the finger's press, move, and lift events on the touch screen.
Step 22: determine, according to the obtained operation trajectory information, the editing interface of the application to be shown, and show the determined editing interface to the user.
Fig. 3 is a flowchart of the implementation of step 22 shown in Fig. 2; as shown in Fig. 3, it comprises the following steps 31-32.
Step 31: determine the action identity and application identity corresponding to the operation trajectory information.
In this step, the user's operational action can be determined according to the operation trajectory information and the corresponding action identity obtained; the center point coordinate of the user's operation trajectory can also be determined, the application icon shown on the touch screen that is closest to this center point coordinate then determined, and the application identity of that icon taken as the application identity corresponding to the operation trajectory information.
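As the description notes, recognizing the operational action from a trajectory is prior art; purely for illustration, a crude classifier for the actions mentioned in this document (upward slide, downward slide, circle) might look like the following. The thresholds and labels are arbitrary assumptions:

```python
def classify_action(points, closed_tol=20.0):
    """Crudely classify a trajectory as 'circle', 'swipe_up', or 'swipe_down'.

    points: list of (x, y) samples in screen coordinates (y grows downward).
    """
    (x1, y1), (xn, yn) = points[0], points[-1]
    # A long path whose start and end nearly coincide is treated as a circle.
    if abs(xn - x1) <= closed_tol and abs(yn - y1) <= closed_tol and len(points) > 8:
        return "circle"
    # Screen y grows downward, so an upward slide has a decreasing y.
    return "swipe_up" if yn < y1 else "swipe_down"
```

Real gesture recognition would be far more robust (resampling, rotation invariance, template matching); this only shows where an action identity could come from.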
The center point coordinate of the operation trajectory may be P0(X0, Y0), where X0 = (x1 + x2 + ... + xn)/n and Y0 = (y1 + y2 + ... + yn)/n, xi (i = 1, ..., n) is the abscissa of the i-th pixel in the operation trajectory, yi is its ordinate, and n is the number of pixels the operation trajectory comprises.
Step 32: show the user the editing interface that, within the application corresponding to the determined application identity, corresponds to the determined action identity.
In this step, according to a preset correspondence between action identities and application editing interfaces, the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity is shown to the user. After the user finishes editing content on the editing interface, the edited content can be saved or published directly.
For the specific implementation of the steps shown in Fig. 2 and Fig. 3, refer to the related description of the embodiment shown in Fig. 1; it is not repeated here.
Summarizing the above, when the solution of the present invention is applied to a messaging (SMS) application, a specific implementation may be as follows:
The user sets the operational action corresponding to the "create new message" editing interface; this action may be drawing a circle with a hovering stylus, sliding a finger upward, or the like.
Suppose the configured operational action is an upward finger slide. When the user needs to create a new message, the user's finger can land below the messaging icon and slide upward through it; the finger can stop at any position, as long as the center point between the start and end of the slide is near the messaging icon.
Suppose the configured operational action is drawing a circle with a hovering stylus. When the user needs to create a new message, the user can take out the stylus and bring it close enough for the touch screen to sense it (i.e., a stylus induction point appears on the screen), then draw a circle with the stylus; as long as the center point of the traced trajectory is near the messaging icon, the user can lift the stylus after completing the circle so the touch screen no longer senses it.
After the user completes the above operation, the messaging application is opened and the message editing interface is entered directly, i.e., the "create new message" interface is entered directly.
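The upward-slide example above can be tied together in one self-contained sketch. The icon positions, identifiers, and interface names are all illustrative assumptions:

```python
import math

# Hypothetical home-screen layout and preset correspondence table.
ICONS = {"sms": (60, 200), "camera": (180, 200)}
EDIT_INTERFACES = {("sms", "swipe_up"): "compose_message"}

def trigger(points):
    """Map one finger trajectory to the editing interface to open (or None)."""
    # Center point of the trajectory (mean of all sampled pixels).
    x0 = sum(x for x, _ in points) / len(points)
    y0 = sum(y for _, y in points) / len(points)
    # Crude action recognition: decreasing screen y means an upward slide.
    action = "swipe_up" if points[-1][1] < points[0][1] else "other"
    # Nearest icon to the trajectory's center point.
    app = min(ICONS, key=lambda a: math.hypot(ICONS[a][0] - x0, ICONS[a][1] - y0))
    return EDIT_INTERFACES.get((app, action))
```

An upward slide through the messaging icon, e.g. `trigger([(60, 230), (60, 200), (60, 170)])`, resolves to the compose interface, while the same gesture elsewhere resolves to nothing.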
In short, with the solution of the present invention, the user only needs to perform a single operational action to trigger the editing function of an application. This avoids the cumbersome operation and added time cost of opening multiple menu levels in the prior art; that is, the solution of the present invention is not only simple to operate but also saves time, so the editing function of an application can be triggered quickly and conveniently.
In summary, the above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A smart device, characterized by comprising: a touch screen, an action acquisition module, and an action recognition and display module;
the action acquisition module is configured to obtain the user's operation trajectory information via the touch screen and send it to the action recognition and display module;
the action recognition and display module is configured to determine, according to the received operation trajectory information, the editing interface of the application to be shown, and to show the determined editing interface to the user.
2. The smart device according to claim 1, characterized in that
the operation trajectory comprises a stylus movement trajectory or a finger movement trajectory;
the action acquisition module obtains the user's stylus movement trajectory information through electromagnetic induction technology; and
the action acquisition module obtains the user's finger movement trajectory information by monitoring the coordinates of the user's finger press, move, and lift events on the touch screen.
3. The smart device according to claim 1 or 2, characterized in that
the action recognition and display module comprises an action recognition submodule and a display submodule;
the action recognition submodule is configured to determine the action identity and application identity corresponding to the received operation trajectory information, and to send the determined action identity and application identity to the display submodule; and
the display submodule is configured to show the user the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity.
4. The smart device according to claim 3, characterized in that
the action recognition submodule comprises a first recognition unit and a second recognition unit;
the first recognition unit is configured to determine the user's operational action according to the received operation trajectory information and send the corresponding action identity to the second recognition unit, and to determine the center point coordinate of the operation trajectory and send it to the second recognition unit;
the second recognition unit is configured to determine, among the application icons shown on the touch screen, the application icon closest to the received center point coordinate, and to send the application identity of that icon together with the received action identity to the display submodule; and
the display submodule shows the user, according to a preset correspondence between action identities and application editing interfaces, the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity.
5. The smart device according to claim 4, characterized in that
the center point coordinate is P0(X0, Y0), where X0 = (x1 + x2 + ... + xn)/n and Y0 = (y1 + y2 + ... + yn)/n;
and xi (i = 1, ..., n) is the abscissa of the i-th pixel in the operation trajectory, yi is its ordinate, and n is the number of pixels the operation trajectory comprises.
6. A method for triggering the editing function of an application, the method being applicable to a smart device with a touch screen, characterized by comprising:
obtaining the user's operation trajectory information via the touch screen; and
determining, according to the obtained operation trajectory information, the editing interface of the application to be shown, and showing the determined editing interface to the user.
7. The method according to claim 6, characterized in that
the operation trajectory comprises a stylus movement trajectory or a finger movement trajectory; and
obtaining the user's operation trajectory information via the touch screen comprises:
obtaining the user's stylus movement trajectory information through electromagnetic induction technology; or
obtaining the user's finger movement trajectory information by monitoring the coordinates of the user's finger press, move, and lift events on the touch screen.
8. The method according to claim 6 or 7, characterized in that
determining, according to the obtained operation trajectory information, the editing interface of the application to be shown, and showing the determined editing interface to the user comprises:
determining the action identity and application identity corresponding to the operation trajectory information; and
showing the user the editing interface that, within the application corresponding to the determined application identity, corresponds to the determined action identity.
9. The method according to claim 8, characterized in that
determining the action identity and application identity corresponding to the operation trajectory information comprises:
determining the user's operational action according to the operation trajectory information and obtaining the corresponding action identity; and
determining the center point coordinate of the user's operation trajectory, then determining the application icon shown on the touch screen that is closest to this center point coordinate, and taking the application identity of that icon as the application identity corresponding to the operation trajectory information;
and showing the user the editing interface that, within the application corresponding to the determined application identity, corresponds to the determined action identity comprises:
showing the user, according to a preset correspondence between action identities and application editing interfaces, the editing interface that, within the application corresponding to the received application identity, corresponds to the received action identity.
10. The method according to claim 9, characterized in that
the center point coordinate is P0(X0, Y0), where X0 = (x1 + x2 + ... + xn)/n and Y0 = (y1 + y2 + ... + yn)/n;
and xi (i = 1, ..., n) is the abscissa of the i-th pixel in the operation trajectory, yi is its ordinate, and n is the number of pixels the operation trajectory comprises.
CN201510317937.6A 2015-06-10 2015-06-10 Intelligent device and method for triggering editing function of application Pending CN104866201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510317937.6A CN104866201A (en) 2015-06-10 2015-06-10 Intelligent device and method for triggering editing function of application


Publications (1)

Publication Number Publication Date
CN104866201A true CN104866201A (en) 2015-08-26

Family

ID=53912068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510317937.6A Pending CN104866201A (en) 2015-06-10 2015-06-10 Intelligent device and method for triggering editing function of application

Country Status (1)

Country Link
CN (1) CN104866201A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017035794A1 (en) * 2015-09-01 2017-03-09 华为技术有限公司 Method and device for operating display, user interface, and storage medium
CN107533431A (en) * 2015-09-01 2018-01-02 华为技术有限公司 Method, apparatus, user interface and the storage medium of display operation
CN106020615A (en) * 2016-05-26 2016-10-12 珠海市魅族科技有限公司 Terminal control method and device
CN106020629A (en) * 2016-06-12 2016-10-12 紫蛙科技(上海)有限公司 Triggering method and device of application program selection menu

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930282A (en) * 2009-06-27 2010-12-29 英华达(上海)电子有限公司 Mobile terminal and mobile terminal-based input method
CN102841682A (en) * 2012-07-12 2012-12-26 宇龙计算机通信科技(深圳)有限公司 Terminal and gesture manipulation method
CN103076942A (en) * 2011-08-30 2013-05-01 三星电子株式会社 Apparatus and method for changing an icon in a portable terminal
CN103631514A (en) * 2012-08-24 2014-03-12 三星电子株式会社 Method for operation of pen function and electronic device supporting the same
US20140325410A1 (en) * 2013-04-26 2014-10-30 Samsung Electronics Co., Ltd. User terminal device and controlling method thereof



Similar Documents

Publication Publication Date Title
CN104756060B (en) Cursor control based on gesture
EP2565760A1 (en) Method and mobile terminal for automatically identifying rotary gesture
CN104360816A (en) Screen capture method and system
CN104205047A (en) Apparatus and method for providing for remote user interaction
CN103037102A (en) Free screen shot method of touch screen cellphone and cellphone
CN103324348A (en) Windows desktop control method based on intelligent mobile terminals
CN103645897A (en) Mobile terminal and operation method thereof
CN103246445A (en) Method and communication terminal for switching application programs
CN103713848A (en) Mobile terminal and operation method thereof
CN102830930B (en) The processing method of a kind of keyboard, device and multimedia terminal
CN105302464B (en) System and method for scribing flow type document
CN102609191A (en) Browsing interaction method for incoming messages of touchscreen cellphones
CN103530043A (en) Operation method and device for touch screen application program
CN105824531A (en) Method and device for adjusting numbers
CN110928614B (en) Interface display method, device, equipment and storage medium
CN103809852A (en) Method and device for displaying multiple application programs on same screen at same time
CN113194024B (en) Information display method and device and electronic equipment
CN108958861A (en) Object displaying method, equipment and storage medium based on text control
CN107357515A (en) The method and its system that multiple utility program picture is presented simultaneously
CN103294375A (en) Terminal and touch screen manipulation method
CN104866201A (en) Intelligent device and method for triggering editing function of application
CN104615362A (en) Touch device capable of realizing on the basis of sliding gesture switching program and method thereof
CN102402361A (en) Method and device for controlling on computer based on movement track of mouse
CN101533314B (en) Method for positioning terminal by cursor, system thereof and terminal controller
CN105760077A (en) Game control method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150826