KR20170067547A - Method for implementing active inversion emoji based on force touch, and terminal for the same - Google Patents
- Publication number
- KR20170067547A (Application No. KR1020150174325A)
- Authority
- KR
- South Korea
- Prior art keywords
- active
- force
- shape
- control unit
- touch screen
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0414—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to a force touch-based active transformation emoji implementing method and a terminal device for implementing the same, comprising: a first step in which the control unit (12) receives a summoning command for one of the active emoji stored in the storage unit (13) into a character input window of the touch screen (11); a second step in which the control unit (12) receives a force touch input to the active emoji on the touch screen (11); a third step in which the control unit (12) detects a force step according to the force touch input; and a fourth step in which the control unit (12) extracts at least one of the shape, form, and facial expression matched with the detected force step and expresses the active motion matched with the extracted element as an active emoji.
As a result, the user can express active emoji whose shape, form, and facial expression are transformed step by step according to the pressure of the user's touch, and can input or transmit them as a message as they are by dragging them to the character input window or the message transmission window, which has the effect of providing an intuitive interface.
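The stepwise flow summarized above can be pictured with a short sketch: a force-touch pressure value is quantized into one of n force steps, and each step is matched to an expression variant of the summoned active emoji. This is a minimal illustration, not the patented implementation; the function names, thresholds, and expression table are all assumptions.

```python
# Minimal sketch of the claimed flow: quantize a force-touch pressure
# into one of n force steps, then look up the shape/form/expression
# element matched with that step. Thresholds and names are illustrative.

def detect_force_step(pressure: float, n_steps: int = 3) -> int:
    """Quantize a normalized pressure (0.0-1.0) into force steps 1..n."""
    pressure = min(max(pressure, 0.0), 1.0)
    # Evenly spaced thresholds; step 1 covers the lightest touches.
    step = int(pressure * n_steps) + 1
    return min(step, n_steps)

# Hypothetical setting signal: force step -> matched expression element.
EXPRESSION_TABLE = {1: "smile", 2: "grin", 3: "laugh"}

def express_active_emoji(pressure: float) -> str:
    step = detect_force_step(pressure, n_steps=len(EXPRESSION_TABLE))
    return EXPRESSION_TABLE[step]

print(express_active_emoji(0.1))   # light touch -> "smile"
print(express_active_emoji(0.95))  # hard press -> "laugh"
```

A real implementation would read the pressure from the platform's touch event rather than a bare float, but the step-to-element lookup is the essence of the described behaviour.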
Description
The present invention relates to a force touch-based active transformation emoji implementing method and a terminal device for implementing the same. More particularly, the present invention relates to a method for expressing a user's input and emotional state through an intuitive UI, and a terminal device for implementing the same.
Emoji (絵文字) is a word that combines e (絵), which means a picture in Japanese, and moji (文字), which means a character.
On the other hand, force touch refers to a function by which a mobile terminal or the like processes a task more intuitively according to the force literally applied to an app or a system.
[Related Technical Literature]
APPARATUS AND METHODS OF MAKING USER EMOTICON (Patent Application No. 10-201-20115473)
SUMMARY OF THE INVENTION The present invention has been made to solve the above problems, and it is an object of the present invention to provide a force touch-based active transformation emoji implementing method for expressing an active emoji whose shape, form, and facial expression are transformed step by step according to the pressure of the user's touch, and a terminal device for implementing the method.
However, the objects of the present invention are not limited to the above-mentioned objects, and other objects not mentioned can be clearly understood by those skilled in the art from the following description.
In order to achieve the above object, a force touch-based active transformation emoji implementing method according to an embodiment of the present invention includes a first step in which the control unit 12 receives a summoning command for one of the active emoji stored in the storage unit 13 into a character input window of the touch screen 11.
In order to achieve the above object, a terminal device for implementing a force touch-based active transformation emoji method according to an embodiment of the present invention includes: an emoji input module 12a for outputting an active emoji stored in the storage unit 13 to the touch screen 11 and outputting the active motion of the active emoji output to the touch screen 11 as divided operations; and a force touch setting module 12b for receiving, from the user via the touch screen 11, a setting signal for at least one of the shape, form, and facial expression classified by the divided operations according to the force touch input step (first to n-th steps, n being a natural number of 2 or more), and storing the active emoji in the storage unit 13.
According to the force touch-based active transformation emoji implementing method and the terminal device for implementing the same of an embodiment of the present invention, an active emoji whose shape, form, and facial expression are transformed step by step according to the pressure of the user's touch can be expressed, and the present invention has the effect of providing an intuitive interface for inputting or transmitting the emoji as a message as it is by dragging it to a character input window, a message transmission window, or the like.
FIG. 1 is a diagram illustrating a configuration of a terminal device 100 for implementing a force touch-based active transformation emoji method according to an embodiment of the present invention.
FIG. 2 is a view showing an active emoji implemented on the touch screen 11 of the terminal device 100 of FIG. 1.
FIG. 3 is a flowchart illustrating a setting process for implementing the force touch-based active transformation emoji method according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating an active emoji implementation process of the force touch-based active transformation emoji method according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description, detailed descriptions of known functions and configurations incorporated herein will be omitted when they may unnecessarily obscure the subject matter of the present invention.
In the present specification, when any one element 'transmits' data or signals to another element, the element can transmit the data or signals to the other element directly, or can transmit them to the other element through at least one further element.
FIG. 1 is a diagram illustrating a configuration of a terminal device 100 for implementing a force touch-based active transformation emoji method according to an embodiment of the present invention. FIG. 2 is a view showing an active emoji implemented on the touch screen 11 of the terminal device 100 of FIG. 1.
Referring to FIG. 1, a terminal device 100 for implementing a force touch-based active transformation emoji method includes a touch screen 11, a control unit 12, a storage unit 13, and a transmitting/receiving unit 14.
The control unit 12 includes an emoji input module 12a, a force touch setting module 12b, a force touch input module 12c, and an emoji active transformation module 12d.
The emoji input module 12a outputs the active emoji stored in the storage unit 13 to the touch screen 11.
Then, the emoji input module 12a outputs the active motion of the active emoji output to the touch screen 11 as divided operations. To this end, the emoji input module 12a divides the active motion of one active emoji called from the storage unit 13 into a plurality of divided operations.
The force touch setting module 12b receives, from the user via the touch screen 11, a setting signal for at least one of the shape, form, and facial expression classified by the divided operations according to the force touch input step (first to n-th steps, n being a natural number of 2 or more), and stores the setting signal in the storage unit 13.
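The setting process handled by the force touch setting module can be illustrated by a small store that binds each force step to the elements the user selected. The class and field names below are assumptions for illustration only, not the patent's actual data structure.

```python
# Illustrative store for the setting signals: each force step (1..n)
# is bound to at least one of the shape / form / facial-expression
# elements classified by the divided operations. Names are hypothetical.

class ForceTouchSettings:
    def __init__(self, n_steps: int):
        # The description specifies n as a natural number of 2 or more.
        assert n_steps >= 2, "n must be a natural number >= 2"
        self.n_steps = n_steps
        self._table = {}  # force step -> dict of selected elements

    def store(self, step: int, *, shape=None, form=None, expression=None):
        if not (1 <= step <= self.n_steps):
            raise ValueError("force step out of range")
        signal = {k: v for k, v in
                  dict(shape=shape, form=form, expression=expression).items()
                  if v is not None}
        if not signal:
            raise ValueError("at least one element must be set")
        self._table[step] = signal

    def lookup(self, step: int) -> dict:
        return self._table[step]

settings = ForceTouchSettings(n_steps=2)
settings.store(1, expression="smile")
settings.store(2, shape="bounce", expression="laugh")
print(settings.lookup(2))
```

Requiring at least one element per step mirrors the "at least one of a shape, form, and facial expression" wording in the setting signal.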
The force touch input module 12c receives a force touch input to the active emoji output on the touch screen 11.
Accordingly, the force touch input module 12c detects the force step (one of the first to n-th steps) corresponding to the pressure of the force touch input.
The emoji active transformation module 12d extracts at least one of the shape, form, and facial expression matched with the force step detected by the force touch input module 12c, and expresses the active motion matched with the extracted element as an active emoji on the touch screen 11.
Thereafter, the emoji active transformation module 12d applies the active emoji performing the active motion to the coordinates of the character input window matched according to a drag or touch input on the touch screen 11. That is, the emoji active transformation module 12d enables the user to send a message as it is by dragging the active emoji, whose shape, form, and facial expression are transformed step by step according to the pressure of the touch, to the character input window according to the user's desired usage.
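The drag-to-send behaviour described above amounts to a hit test: when the drag release point falls inside the character input window's rectangle, the active emoji is attached to the message. The sketch below is a hedged illustration; the rectangle values and function names are invented, not taken from the patent.

```python
# Hedged sketch of matching a drag release point against the character
# input window's coordinates. The window rectangle is an assumed layout.

def point_in_rect(x, y, rect):
    """rect = (left, top, width, height) in screen coordinates."""
    left, top, width, height = rect
    return left <= x < left + width and top <= y < top + height

CHAR_INPUT_WINDOW = (0, 1800, 1080, 120)  # hypothetical screen layout

def on_drag_release(x, y, emoji):
    # Attach the transformed emoji to the message only when the drag
    # ends inside the character input window.
    if point_in_rect(x, y, CHAR_INPUT_WINDOW):
        return f"attach {emoji} to message"
    return "drop ignored"

print(on_drag_release(500, 1850, "laugh-emoji"))  # inside the window
print(on_drag_release(500, 100, "laugh-emoji"))   # outside the window
```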
FIG. 3 is a flowchart illustrating a setting process for implementing the force touch-based active transformation emoji method according to an embodiment of the present invention. Referring to FIG. 3, the control unit 12 outputs an active emoji stored in the storage unit 13 to the touch screen 11 (S110).
After step S110, the control unit 12 outputs the active motion of the active emoji output to the touch screen 11 in step S110 as divided operations, then receives from the user via the touch screen 11 a setting signal for at least one of the shape, form, and facial expression classified by the divided operations according to the force touch input step, and stores the setting signal in the storage unit 13 (S120).
FIG. 4 is a flowchart illustrating an active emoji implementation process of the force touch-based active transformation emoji method according to an embodiment of the present invention. Referring to FIG. 4, after step S120 of FIG. 3, the control unit 12 receives a summoning command for one of the active emoji stored in the storage unit 13 into the character input window of the touch screen 11 (S210).
After step S210, the control unit 12 receives the force touch input to the active emoji on the touch screen 11 (S220).
After step S220, the control unit 12 detects the force step according to the force touch input (S230).
After step S230, the control unit 12 extracts at least one of the shape, form, and facial expression matched with the detected force step, and expresses the active motion matched with the extracted element as an active emoji (S240).
More specifically, after outputting the active motion of the active emoji output to the touch screen 11 in step S120 as divided operations, the control unit 12 extracts the divided operation classified by at least one of the shape, form, and facial expression matched with the force step in the setting signal, and can then repeatedly express the state of the emoji before the divided operation and its state after the divided operation.
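The repeated expression of the emoji's state before and after the divided operation can be modelled as a simple two-frame loop; the frame names here are illustrative assumptions.

```python
from itertools import cycle, islice

# The description says the emoji can repeatedly alternate between its
# state before the divided operation and its state after it; a minimal
# model is a cycling two-frame sequence (frame names are assumed).

def animation_frames(before: str, after: str, n_frames: int):
    return list(islice(cycle([before, after]), n_frames))

print(animation_frames("neutral", "laugh", 5))
```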
After step S240, the control unit 12 applies the active emoji performing the active motion of step S240 to the coordinates of the character input window matched with a drag or touch input on the touch screen 11 (S250).
That is, the control unit 12 enables the user to send the active emoji, whose shape, form, and facial expression are transformed step by step according to the set stepwise pressure, as a message as it is by dragging it to the character input window according to the user's desired usage.
The present invention can also be embodied as computer-readable codes on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored.
Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and the medium may also be implemented in the form of a carrier wave (for example, transmission over the Internet).
The computer-readable recording medium may also be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, codes, and code segments for implementing the present invention can be easily inferred by programmers skilled in the art to which the present invention pertains.
As described above, preferred embodiments of the present invention have been disclosed in the present specification and drawings. Although specific terms have been used, they have been used only in a general sense to easily describe the technical contents of the present invention and to facilitate understanding of the invention, and are not intended to limit the scope of the present invention. It will be apparent to those skilled in the art that other modifications based on the technical idea of the present invention are possible in addition to the embodiments disclosed herein.
11: Touch screen
12: Control unit
12a: Emoji input module
12b: force touch setting module
12c: Force-touch input module
12d: Emoji active transformation module
13: Storage unit
14: Transmitting/receiving unit
Claims (2)
A first step of the control unit (12) receiving a summoning command for one of the active emoji stored in the storage unit (13) into a character input window of the touch screen (11);
A second step of the control unit (12) receiving a force touch input to the active emoji on the touch screen (11);
A third step of the control unit 12 detecting a force step according to the force touch input; And
A fourth step of the control unit 12 extracting at least one of the shape, form, and facial expression matched with the detected force step and expressing the active motion matched with the extracted element as an active emoji,
wherein the method is a force touch-based active transformation emoji implementing method.
A force touch setting module (12b) for receiving, from the user via the touch screen (11), a setting signal for at least one of the shape, form, and facial expression classified by the divided operations according to the force touch input step (first to n-th steps, n being a natural number of 2 or more), storing the setting signal in the storage unit (13), and completing a setting process for executing the active transformation emoji implementing method,
wherein the terminal device is a terminal device for implementing a force touch-based active transformation emoji method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150174325A KR20170067547A (en) | 2015-12-08 | 2015-12-08 | Method for implementing active inversion emoji based on force touch, and terminal for the same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150174325A KR20170067547A (en) | 2015-12-08 | 2015-12-08 | Method for implementing active inversion emoji based on force touch, and terminal for the same |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20170067547A true KR20170067547A (en) | 2017-06-16 |
Family
ID=59278686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150174325A KR20170067547A (en) | 2015-12-08 | 2015-12-08 | Method for implementing active inversion emoji based on force touch, and terminal for the same |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20170067547A (en) |
- 2015-12-08: KR KR1020150174325A patent/KR20170067547A/en unknown
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020116868A1 (en) * | 2018-12-04 | 2020-06-11 | Samsung Electronics Co., Ltd. | Electronic device for generating augmented reality emoji and method thereof |
KR20200067593A (en) * | 2018-12-04 | 2020-06-12 | 삼성전자주식회사 | Electronic device for generating augmented reality emoji and method thereof |
US11029841B2 (en) | 2018-12-04 | 2021-06-08 | Samsung Electronics Co., Ltd. | Electronic device for generating augmented reality emoji and method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6751122B2 (en) | Page control method and apparatus | |
US10664060B2 (en) | Multimodal input-based interaction method and device | |
JP6916167B2 (en) | Interactive control methods and devices for voice and video calls | |
US20190306277A1 (en) | Interaction between devices displaying application status information | |
CN109429522A (en) | Voice interactive method, apparatus and system | |
CN108475221B (en) | Method and apparatus for providing a multitasking view | |
US20190051147A1 (en) | Remote control method, apparatus, terminal device, and computer readable storage medium | |
CN111984115A (en) | Message sending method and device and electronic equipment | |
US20190251990A1 (en) | Information processing apparatus and information processing method | |
EP3141986A3 (en) | Digital device and method of processing data the same | |
CN105379236A (en) | User experience mode transitioning | |
CN110472558B (en) | Image processing method and device | |
US10942622B2 (en) | Splitting and merging files via a motion input on a graphical user interface | |
US10628031B2 (en) | Control instruction identification method and apparatus, and storage medium | |
US9769596B2 (en) | Mobile device output to external device | |
KR20170067547A (en) | Method for implementing active inversion emoji based on force touch, and terminal for the same | |
US20150355788A1 (en) | Method and electronic device for information processing | |
CN111638787A (en) | Method and device for displaying information | |
CN111010335A (en) | Chat expression sending method and device, electronic equipment and medium | |
US11662886B2 (en) | System and method for directly sending messages with minimal user input | |
EP2781990A1 (en) | Signal processing device and signal processing method | |
US9122548B2 (en) | Clipboard for processing received data content | |
CN112866475A (en) | Image sending method and device and electronic equipment | |
JP2017033397A (en) | Operation information sharing device, application execution device, operation information sharing system, information processing device and program | |
CN112187628A (en) | Method and device for processing identification picture |