KR101789279B1 - Command generating method and display apparatus using the same


Info

Publication number
KR101789279B1
Authority
KR
South Korea
Prior art keywords
touch
command
input
clipboard
gesture
Prior art date
Application number
KR1020100005971A
Other languages
Korean (ko)
Other versions
KR20110086309A (en)
Inventor
윤여준
천가원
양필승
Original Assignee
삼성전자주식회사 (Samsung Electronics Co., Ltd.)
Application filed by Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority to KR1020100005971A
Publication of KR20110086309A
Application granted
Publication of KR101789279B1


Abstract

A command generation method and a display device using the same are disclosed. The command generation method includes receiving a first touch, recognizing as a gesture a second touch that satisfies a preset condition, and generating a command based on the gesture. This makes it possible to execute commands in an easier and simpler manner in a touch screen environment.

Description

TECHNICAL FIELD [0001] The present invention relates to a command generation method and a display apparatus using the same.

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a command generation method for generating a command using a gesture and a display device using the same, and more particularly, to a command generation method for generating a command using a gesture in a touch screen environment and a display device using the command generation method.

Techniques that use a touch screen to select GUI items such as icons, menus, or anchors on the screen are already common. As a method of executing a command in a touch screen environment, selecting the command from a menu is generally used.

However, in order to execute a command from a menu, several operations are required: an operation for making a menu screen callable, an operation for calling the menu screen, an operation for moving to the screen on which the desired command is displayed, and an operation for selecting the command. In addition, a command that is not frequently used must be executed through additional operations such as entering a detailed menu.

Since a plurality of operations are required to execute a single command in this way, the touch screen offers the convenience of directly touching the screen but also this inconvenience.

Of course, when items such as shortcut icons for executing commands are displayed on the touch screen in advance, many operations are not required. In this case, however, the shortcut items cover the screen of the touch screen and obstruct the displayed content.

Accordingly, there is a need for a method of executing commands in an easier and more convenient manner.

SUMMARY OF THE INVENTION It is an object of the present invention to provide a command generation method and a display apparatus using the same for executing a command with only a simple operation in a touch screen environment.

According to an aspect of the present invention, there is provided a command generation method including: receiving a first touch; recognizing the second touch, or an operation following the second touch, as a gesture when a second touch satisfying a preset condition is input after the first touch is input; and generating a command based on the gesture.

Here, the predetermined condition may be a condition in which the first touch is maintained.

When the touch screen operates in a resistive (pressure-sensitive) manner, the recognizing step may include comparing the difference in distance between the point at which the first touch is sensed and the point at which the second touch is sensed with the difference in time between the moments at which the two touches are sensed; and determining, according to the comparison result, whether the second touch is input while the first touch is maintained.

The determining step may determine that the second touch is input while the first touch is maintained when the time difference relative to the distance difference is within a predetermined range.

When the touch screen operates in an infrared manner, the recognizing step may include determining whether the second touch is input in a state in which the first touch is not released, or whether the time difference between the moment at which sensing of the first touch is released and the moment at which the second touch is sensed is less than a predetermined range; and determining, according to the determination result, whether the second touch is input while the first touch is maintained.

Here, the predetermined condition may be a condition in which the second touch is input in an area set based on a point where the first touch is input.

In addition, when the touch screen operates in a resistive manner, the command generation method according to an embodiment of the present invention may further include extending the vector connecting the point at which the first touch is sensed to the point at which the second touch is sensed, and setting the end point of the extended vector as the point at which the second touch is input.

When the touch screen operates in an infrared manner, the command generation method according to an embodiment of the present invention may further include setting the point at which the second touch is sensed as the point at which the second touch is input if the difference between the time when the first touch is sensed and the time when the second touch is sensed is less than a predetermined range.

In addition, the area may be set differently according to a point where the first touch is input on the touch screen.

The generating step may generate a different command according to the point at which the second touch is input relative to the point at which the first touch is input.

According to another embodiment of the present invention, the command generation method may further include displaying the set area so that it is distinguished from the area other than the set area.

Here, the recognizing step may not recognize a touch input in an area other than the set area or an operation in response to the touch as a gesture.

A method of generating a command according to an embodiment of the present invention includes: providing a guide item for inputting an ID; And searching the clipboard corresponding to the ID for the generated command if the ID is input.

In addition, the ID may be input by a third touch of the user or by an operation following the third touch, and the guide item may be an item that guides the position of the user's third touch or the pattern of the operation following the third touch.

According to another aspect of the present invention, there is provided a command generation method comprising: clipping a generated command to the clipboard when the clipboard is searched; And executing the generated command with reference to the command clipped to the clipboard.

In the executing step, when the pre-clipped command is a copy command or a cut command and the generated command is a paste command, the paste command may be executed so that the target item corresponding to the copy command or the cut command is pasted.

The clipping step may delete the pre-clipped command and clip the generated command if the pre-clipped command and the generated command are both a copy command or a cut command.

According to another aspect of the present invention, there is provided a command generation method comprising: generating a clipboard corresponding to the ID when the clipboard is not searched, and clipping the command to the generated clipboard; And executing the command.

The retrieving step may include: a first retrieving step of retrieving a clipboard stored internally; and a second retrieving step of retrieving a clipboard stored in an external device if no internally stored clipboard exists.

In addition, the second retrieving step may include broadcasting a message inquiring whether the clipboard exists, and receiving an access address for the external device that is unicast from the external device in which the clipboard resides.

The command generation method according to an embodiment of the present invention may further include unicasting, to the external device, a message requesting the clipboard stored in the external device, or unicasting the generated command to the external device so that the generated command is clipped to the clipboard stored in the external device.

The unicasting step may unicast the generated command to the external device so that it is clipped to the clipboard stored in the external device when the generated command is a copy command or a cut command, and may unicast the message requesting the clipboard stored in the external device when the generated command is a paste command.

According to another aspect of the present invention, the command generation method may further include receiving and storing a clipboard unicast from the external device when the generated command is a paste command, and executing the command so that the target item corresponding to the copy command or cut command clipped to the stored clipboard is pasted.
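The clipboard discovery and exchange described above can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical illustration only: the patent does not specify a transport or message format, so the UDP broadcast/unicast scheme, the message names such as CLIPBOARD_QUERY, and the port numbers are all assumptions.

import json
import socket

BCAST_PORT = 50000     # hypothetical discovery port
CMD_PORT = 50001       # hypothetical command port
local_clipboards = {}  # ID -> list of clipped commands

def unicast(address, message, port=CMD_PORT):
    # Send a single message directly to one device.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(json.dumps(message).encode(), (address, port))

def find_clipboard(user_id, timeout=2.0):
    # Broadcast a query asking whether any device holds a clipboard for
    # this ID; the holding device unicasts back its access address.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        query = json.dumps({"type": "CLIPBOARD_QUERY", "id": user_id})
        s.sendto(query.encode(), ("255.255.255.255", BCAST_PORT))
        try:
            reply, _ = s.recvfrom(1024)              # unicast reply
            return json.loads(reply)["access_address"]
        except socket.timeout:
            return None                              # no clipboard found

def dispatch(user_id, command):
    # Search internally first, then externally; create a clipboard if
    # none is found, as described in the text above.
    if user_id in local_clipboards:
        local_clipboards[user_id].append(command)
        return
    address = find_clipboard(user_id)
    if address is None:
        local_clipboards[user_id] = [command]        # create and clip
    elif command["name"] in ("copy", "cut"):
        unicast(address, {"type": "CLIP", "command": command})
    elif command["name"] == "paste":
        unicast(address, {"type": "CLIPBOARD_REQUEST", "id": user_id})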

The gesture may be input by the second touch or by a drag operation following the second touch, and a gesture input by a drag operation may be recognized as a different gesture according to the pattern of the drag operation.

In addition, the command generation method according to an embodiment of the present invention may further include matching the pattern of the second touch or drag operation with the command and storing the match.

The generating step may generate a command based on the gesture when the first touch is held until input of the gesture is completed.

In addition, the generating step may generate the command when the operation of the second touch or the second touch is completed.

And, when the first touch is input for a specific item, the command may be a command for the specific item.

Also, the first touch and the second touch may be input together on one touch screen.

When a plurality of devices operate in cooperation with each other, a second command based on a second gesture input to the touch screen of a second device may be generated with reference to a first command generated based on a first gesture input to the touch screen of a first device.

According to another aspect of the present invention, there is provided a display device including: a touch screen for receiving a touch or a gesture; and a controller for recognizing the second touch, or an operation following the second touch, as a gesture and generating a command based on the gesture when a second touch satisfying a predetermined condition is input after the first touch is input.

Here, the predetermined condition may be a condition in which the first touch is maintained.

When the touch screen operates in a resistive manner, the controller compares the difference in distance between the point at which the first touch is sensed and the point at which the second touch is sensed with the difference in time between the moments at which the two touches are sensed, and determines, according to the comparison result, whether the second touch is input while the first touch is maintained.

The controller may determine that the second touch is input while the first touch is maintained when the time difference relative to the distance difference is within a predetermined range.

When the touch screen operates in an infrared manner, the controller may determine whether the second touch is input while the first touch is not released, or whether the time difference between the moment at which sensing of the first touch is released and the moment at which the second touch is sensed is less than a predetermined range, and determine, according to the determination result, whether the second touch is input while the first touch is maintained.

The predetermined condition may be a condition in which the second touch is input in an area set based on a point at which the first touch is input.

Here, when the touch screen operates in a resistive manner, the controller may extend the vector connecting the point at which the first touch is sensed to the point at which the second touch is sensed, and set the end point of the extended vector as the point at which the second touch is input.

If the touch screen operates in an infrared manner, the controller may set the point at which the second touch is sensed as the point at which the second touch is input when the difference between the time when the first touch is sensed and the time when the second touch is sensed is less than a predetermined range.

In addition, the controller may set the areas differently according to a point at which the first touch is input on the touch screen.

The control unit may generate different commands according to a point at which the second touch is input based on a point at which the first touch is input.

In addition, the controller may display the set area on the touch screen so that it is distinguished from the area other than the set area.

The control unit may not recognize a touch input in an area other than the set area or an operation in response to the touch as a gesture.

Further, the display device according to an embodiment of the present invention may include: a GUI generating unit for providing a guide item for inputting an ID; and a storage unit for storing a clipboard corresponding to the input ID. When the ID is input, the controller can search for the clipboard corresponding to the ID for the generated command.

In addition, the ID may be input by a third touch of the user or by an operation following the third touch, and the guide item may be an item that guides the position of the user's third touch or the pattern of the operation following the third touch.

In addition, when the clipboard is searched, the control unit may clip the generated command to the clipboard, and may execute the generated command by referring to the command that is clipped to the clipboard.

When the generated command is a paste command and the pre-clipped command is a copy command or a cut command, the controller causes the paste command to be executed so that the target item corresponding to the copy command or the cut command is pasted.

In addition, when the pre-clipped command and the generated command are both a copy command or a cut command, the control unit deletes the pre-clipped command and clips the generated command.

If the clipboard is not searched, the control unit generates a clipboard corresponding to the ID, stores the generated clipboard in the storage unit, and clips the command to the generated clipboard.

The display device may further include a communication interface for communicating with an external device, wherein the controller searches for a clipboard stored in the storage unit and, if none exists, searches for a clipboard stored in the external device through communication over the communication interface.

The controller may broadcast, through the communication interface, a message inquiring whether the clipboard exists, and receive an access address for the external device that is unicast from the external device in which the clipboard exists.

In addition, the controller may unicast, to the external device through the communication interface, a message requesting the clipboard stored in the external device, or unicast the generated command to the external device so that the generated command is clipped to the clipboard stored in the external device.

The controller may unicast the generated command to the external device so that it is clipped to the clipboard stored in the external device when the generated command is a copy command or a cut command, and may unicast the message requesting the clipboard stored in the external device when the generated command is a paste command.

If the generated command is a paste command, the controller may receive a clipboard unicast from the external device, store it in the storage unit, and execute the command so that the target item corresponding to the copy command or cut command previously clipped to the stored clipboard is pasted.

The gesture may be input by the second touch or by a drag operation following the second touch, and a gesture input by a drag operation may be recognized as a different gesture according to the pattern of the drag operation.

The display device according to an embodiment of the present invention may further include a storage unit for storing the second touch operation and the pattern of the drag operation and the command in a matching manner.

The control unit may generate a command based on the gesture when the first touch is held until input of the gesture is completed.

In addition, the control unit may generate the command when the operation of the second touch or the second touch is completed.

And, when the first touch is input for a specific item, the command may be a command for the specific item.

Also, the first touch and the second touch may be input together on one touch screen.

When the display device operates in conjunction with an external device, the controller may cause a second command based on a second gesture input to the touch screen of the external device to be generated with reference to a first command generated based on a first gesture input to the touch screen of the display device.

This makes it easier and simpler to execute commands using gestures in a touch screen environment.

FIG. 1 is a diagram for explaining the types of user input operations;
FIG. 2 is a view showing the types of gestures;
FIG. 3 is a diagram for explaining the conditions under which a user's input operation is recognized as a gesture;
FIGS. 4A to 4C are diagrams for explaining cases where a user's input operation is not recognized as a gesture because the first condition is violated;
FIGS. 5A and 5B are diagrams for explaining another condition under which a user's input operation is recognized as a gesture;
FIGS. 6A to 6C are diagrams for explaining cases where a user's input operation is recognized as a gesture;
FIGS. 7A and 7B are views showing another embodiment of the set region;
FIGS. 8A and 8B are diagrams for explaining cases where different commands are generated according to the point where a gesture is input;
FIG. 9 is a diagram for explaining a case where different commands are generated according to the object on which a first touch is input;
FIG. 10 is a diagram for explaining a touch sensing method according to the capacitive (electrostatic) type;
FIG. 11 is a view for explaining a touch sensing method according to the resistive (pressure-sensitive) type;
FIG. 12 is a diagram for explaining a touch sensing method according to the infrared type;
FIG. 13 is a diagram for explaining the gesture recognition and determination method when the resistive type is adopted;
FIG. 14 is a diagram for explaining the gesture recognition and determination method when the infrared type is adopted;
FIG. 15 is a flowchart for explaining a gesture recognition method and a command generation method in a display device employing the capacitive type;
FIGS. 16A and 16B are flowcharts for explaining a gesture recognition method and a command generation method in a display device employing the resistive type;
FIG. 17 is a flowchart for explaining a gesture recognition method and a command generation method in a display device employing the infrared type;
FIG. 18 is a diagram for explaining the need to distinguish commands for each user;
FIGS. 19A to 19F are diagrams for explaining a method of identifying a command according to a gesture input for each user;
FIGS. 20A to 20C illustrate additional examples of guide items;
FIG. 21 is a diagram for describing a gesture input in conjunction with an external device;
FIG. 22 is a diagram for describing a gesture and an ID input in conjunction with an external device;
FIG. 23 is a view for explaining a case where, when a gesture and an ID are input by a user, a clipboard corresponding to the input ID is stored in neither the display device nor the external device;
FIG. 24 is a diagram for explaining a case where a gesture for a copy command is input by a user;
FIG. 25 is a diagram for explaining a case where a gesture for a paste command is input by a user; and
FIG. 26 is a diagram showing the configuration of a display device.

Hereinafter, the present invention will be described in detail with reference to the drawings. The contents to be described below are as follows.

1. Types of input operations and basic description of gestures (FIGS. 1 and 2)

2. Condition 1 for recognizing a gesture (FIGS. 3 to 4C)

3. Condition 2 for recognizing a gesture (FIGS. 5A to 7B)

4. Command generation methods according to other embodiments (FIGS. 8A to 9)

5. Sensing methods of the display device (FIGS. 10 to 12)

6. Gesture recognition and determination by sensing method (FIGS. 13 and 14)

7. Operation flow for each sensing method (FIGS. 15 to 17)

8. User-specific command storage (FIGS. 18 to 20C)

9. Interworking with an external device (FIGS. 21 and 22)

10. Concrete interworking method with an external device (FIGS. 23 to 25)

11. Configuration of the display device (FIG. 26)

<1. Basic actions for gesture input>

Hereinafter, with reference to FIGS. 1 and 2, the types of user input operations to a display device in a touch screen environment and the types of gestures for command generation will be described.

FIG. 1 is a view for explaining the types of user input operations. The display device 100 operates in a touch screen manner, so the user can operate the display device 100 by touching the screen. That is, as shown in the drawing, a touch is input by pressing the screen, and a pointing operation, such as selecting a target desired by the user, is performed on the display device 100 by the input touch 110.

In addition, the user may operate the display device 100 by inputting a gesture on the screen. That is, as shown in the drawing, a gesture is input by a specific motion using a drag operation 150 while the screen is touched, and the operation desired by the user, such as target selection, is performed on the display device 100 by the input gesture.

A touch and a gesture generate and execute different commands even when they are performed on the same part of the screen.

For example, when a user touches a specific item displayed on the screen, the touch may be a command for executing an application corresponding to the specific item, whereas when a user touches the specific item to input a gesture, the gesture may be a command for copying the application corresponding to the specific item.

Likewise, when the user touches the background area displayed on the screen, the touch may be a command for calling up attribute information for the display device 100, whereas when the user touches the background area to input a gesture, the gesture may be a command for pasting a previously copied item into the background area.

FIG. 2 is a view showing the types of gestures.

As shown in the figure, when different gestures are input, different matched commands are generated and executed. For example, when a gesture in the form of 'C' is input, a command for 'COPY' is generated; when a gesture in the form of '/' is input, a command for 'CUT' is generated; and a command for 'PASTE' can likewise be generated by another gesture form.
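A matching table of this kind reduces to a simple lookup keyed by the recognized shape. The sketch below (Python) is only illustrative; the 'C' and '/' entries follow the examples in the text, while the 'V' entry for PASTE is a hypothetical user-defined match, since the source does not name the PASTE shape.

# Matching table from recognized gesture shape to command name.
GESTURE_COMMANDS = {
    "C": "COPY",
    "/": "CUT",
}

def register_gesture(shape, command):
    # Let the user match a gesture shape to a command directly,
    # as suggested at the end of this section.
    GESTURE_COMMANDS[shape] = command

def command_for(shape):
    # Returns None when the shape is not a known gesture.
    return GESTURE_COMMANDS.get(shape)

register_gesture("V", "PASTE")  # hypothetical user-defined match
print(command_for("C"))         # -> COPY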

Needless to say, the illustrated commands need not always be generated by gestures; they may also be generated using a simple touch or separate operating means. However, since a gesture can take various forms, it can be matched to various commands, whereas a touch has only one form and thus cannot be matched to various commands.

A touch can express diversity through the position at which it is input, but a menu screen must be provided in order to generate various commands according to the touch position, which has the problem that the menu obstructs the content the user wants to view while a desired command is being generated. By contrast, a gesture can be matched to various commands and executed without interfering with the user's viewing of the content.

Of course, the above is merely one example of a description of gestures, and the shape of each gesture and the command corresponding to it may be matched differently from the above description. It goes without saying that the shape of each gesture and the corresponding command may also be set to be matched directly by the user.

<2. Condition 1 for recognizing a gesture>

It was mentioned above that a gesture is input by a specific motion using a drag operation while the screen is touched. That is, a gesture is not input merely by touching the screen, or by touching and dragging; the drag needs to be a specific motion. Accordingly, when the input is not a specific motion, a simple touch operation or a drag operation by touch is performed instead of a gesture operation.

Meanwhile, so that the user can easily input a desired command, the specific motion used for a gesture should not have a complicated form. However, a gesture of uncomplicated form is difficult to distinguish from a simple touch or drag. Furthermore, even when the user intends a simple drag, such a drag may be accepted as a gesture if it resembles a specific motion.

Accordingly, it is necessary to make clear when a user's input operation is recognized as a simple touch or a simple drag, and when it is recognized as a gesture.

FIG. 3 is a diagram for explaining the conditions under which a user's input operation is recognized as a gesture.

The first condition for a user's input operation to be recognized as a gesture is that the first touch 110 is maintained while the second touch 130 is input after the first touch 110, while the drag operation 150 following the second touch 130 is input as shown in the lower left part, and until the drag operation 150 is finished as shown in the lower right part.

When the first touch 110 is maintained and a specific motion is input by the drag 150 following the second touch 130, the display device 100 recognizes the drag 150 as a gesture, and generates and executes a command corresponding to the recognized gesture.

That is, a command corresponding to the gesture 'Z' is generated and executed on the display device 100.

FIGS. 4A to 4C are diagrams for explaining cases where a user's input operation is not recognized as a gesture because the first condition is violated.

FIG. 4A shows a case where the first touch 110 is released before the second touch 130 is input. As described above, for a user's input operation to be recognized as a gesture, the first touch 110 must be continuously held.

As shown in FIG. 4A, when the first touch 110 is released before the second touch 130 is input, the display device 100 does not recognize the user's input operation as a gesture but as simple touch operations. As a result, the command corresponding to the gesture is not executed; instead, the commands corresponding to the first touch 110 and the second touch 130 are executed sequentially.

Next, FIG. 4B illustrates a case where the first touch 110 is released before the drag following the second touch 130 is input. As described above, for a user's input operation to be recognized as a gesture, the first touch 110 must be continuously held.

As shown in FIG. 4B, when the first touch 110 is released before the drag following the second touch 130, the display device 100 does not recognize the user's input operation as a gesture but as simple touch operations. Instead of the command corresponding to the gesture, the commands corresponding to the first touch 110 and the second touch 130 are executed sequentially, only one of them is executed, or a command corresponding to a touch input at an intermediate point between the point where the first touch 110 is input and the point where the second touch 130 is input is executed.

FIG. 4C illustrates a case where the first touch 110 is released before the drag 150 following the second touch 130 is released. As described above, for a user's input operation to be recognized as a gesture, the first touch 110 must be continuously held.

As shown in FIG. 4C, when the first touch 110 is released before the drag 150 following the second touch 130 is released, the display device 100 does not recognize the user's input operation as a gesture but as a simple touch or simple drag operation. The commands corresponding to the first touch 110 and the drag 150 may be executed sequentially, only one of them may be executed, or a command corresponding to a drag input at an intermediate point between the point where the first touch 110 is input and the point where the drag 150 is input may be executed.

In this way, since the second touch 130 or the drag operation 150 following the second touch 130 is recognized as a gesture only when the first touch 110 is maintained until the second touch 130 or the drag 150 is completed, a simple touch or simple drag can be clearly distinguished from a gesture.
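Condition 1 can be expressed as a small state check over touch events: the candidate gesture is committed only if the first touch is still held when the drag following the second touch completes. A minimal sketch under that assumption (the event names are illustrative, not from the patent):

class Condition1Recognizer:
    # Recognize the second touch plus drag as a gesture only while the
    # first touch is continuously held (condition 1).
    def __init__(self):
        self.first_down = False
        self.drag_points = []

    def on_first_touch(self, down):
        self.first_down = down
        if not down:
            self.drag_points.clear()  # releasing the anchor cancels the gesture

    def on_second_touch_move(self, point):
        if self.first_down:
            self.drag_points.append(point)

    def on_second_touch_up(self):
        if self.first_down and self.drag_points:
            return self.drag_points   # completed trace -> classify its shape
        return None                   # simple touch / simple drag instead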

<3. Condition 2 for recognizing a gesture>

FIGS. 5A and 5B are diagrams for explaining another condition under which a user's input operation is recognized as a gesture.

The second condition for a user's input operation to be recognized as a gesture is that the second touch 130 is input within the region 510 set based on the point where the first touch 110 is input, as shown in FIG. 5A.

When a specific motion is input by the drag 150 following a second touch within a specific radius of the point where the first touch 110 is input, the display device 100 recognizes the drag 150 as a gesture and generates a command corresponding to the recognized gesture.
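Checking condition 2 is a plain distance test against the set radius; a minimal sketch, with the radius value as a device-dependent assumption:

import math

GESTURE_RADIUS = 200  # pixels; device-dependent assumption

def in_set_area(first_touch, second_touch, radius=GESTURE_RADIUS):
    # True if the second touch falls inside the circular area 510
    # set around the first touch (condition 2).
    dx = second_touch[0] - first_touch[0]
    dy = second_touch[1] - first_touch[1]
    return math.hypot(dx, dy) <= radius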

As shown in FIG. 5A, the set area may not be displayed on the touch screen. However, for the convenience of the user's gesture input, the device may be implemented such that the set area 530 is displayed on the screen of the touch screen.

As shown in FIG. 5B, when the area 530 set on the touch screen is displayed, the user's input operation is more convenient than when the set area 510 is not displayed as in FIG. 5A, but there is a disadvantage in that the visibility of already displayed content may be hindered.

FIGS. 6A to 6C are diagrams for explaining cases where a user's input operation is recognized as a gesture.

As shown in FIG. 6A, when the second touch is input within the area 510 set based on the first touch 110 and the entire specific motion by the drag 150 is performed within the set area 510, the input is of course recognized as a gesture.

As shown in FIG. 6B, when the second touch is input within the area 510 set based on the first touch 110 and a specific motion is input by the drag 150 following the second touch, the input is recognized as a gesture even if the drag 150 leaves the set area 510.

In addition, as shown in FIG. 6C, when the second touch is input within the area 510 set based on the first touch 110, a specific motion is input by the drag 150 following the second touch, and another touch 170 is input outside the set area 510, the drag 150 following the second touch is recognized as a gesture regardless of the presence of the other touch 170.

In this case, the other touch 170 is recognized as a simple touch separately from the gesture, and the drag 150 is recognized as a gesture separately from the simple touch. That is, the display device 100 can receive a plurality of separate operations according to area.

As described above, since an input is recognized as a simple touch, a simple drag, or a gesture based on whether the second touch is input within the area set around the first touch 110, a simple touch or simple drag can be clearly distinguished from a gesture.

In the above description, a radius around the first touch 110 was assumed as the set area, but this is only an example for convenience of explanation. The present invention can therefore also be applied when a different area is set, as explained with reference to FIGS. 7A and 7B.

FIGS. 7A and 7B are views showing another embodiment of the set area.

The area set in the examples shown in FIGS. 5A to 6C has a circular shape, but as shown in FIGS. 7A and 7B, the area set based on the first touch 110 may have a rectangular shape. Of course, the rectangular shape is also only an example.

Meanwhile, the set area can be determined based on the point where the first touch 110 is input. In FIG. 7A, since the first touch 110 is input at the upper left of the touch screen, the region 710 is set so that the gesture can be input to the right of and below it. In FIG. 7B, since the first touch 110 is input at the lower right of the touch screen, the area 730 is set so that the gesture can be input to the left of and above it.

Thus, by setting the area in consideration of the point of the user's first touch 110, the user can input the gesture more conveniently.
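Placing a rectangular area on the side of the first touch with the most remaining screen space, as in FIGS. 7A and 7B, can be sketched as follows; the screen and region dimensions are illustrative assumptions:

def set_region(first_touch, screen_w, screen_h, region_w=400, region_h=300):
    # Return a rectangle (x, y, w, h) anchored at the first touch but
    # extended toward the side of the screen with more room, so the
    # gesture area stays fully on-screen (cf. FIGS. 7A and 7B).
    x, y = first_touch
    rx = x if screen_w - x >= x else x - region_w  # grow right or left
    ry = y if screen_h - y >= y else y - region_h  # grow down or up
    rx = max(0, min(rx, screen_w - region_w))      # clamp to the screen
    ry = max(0, min(ry, screen_h - region_h))
    return (rx, ry, region_w, region_h)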

Meanwhile, in order for the display device 100 to recognize a user's touch or drag as a gesture, it is not necessary that both the first condition described with reference to FIGS. 3 to 4C and the second condition described with reference to FIGS. 5A to 6C be satisfied; the present invention applies even if only one condition is satisfied.

In addition, the two conditions described above are merely exemplary for convenience of description, and the technical idea of the present invention can be applied even when a gesture is distinguished from a simple touch or simple drag through different conditions.

In particular, the first and second conditions can be applied both when the display device 100 is a large format display (LFD) and when it is a small mobile device. In the case of a small mobile device, however, the second condition may be omitted in consideration of the screen size.

For the second condition, the type of the display device 100 may be considered. For example, when the display device 100 is an LFD, considering that the user operates it with the fingers of the left and right hands with arms spread, an area about as wide as the user's shoulder width may be appropriate as the area set based on the first touch 110.

When the display device 100 is a small mobile device, considering that the user operates it with the fingers of both hands while grasping it with both hands, an area of about 5 cm may be suitable.

<4. Command generation methods according to other embodiments>

Hereinafter, other embodiments for generating a command with a gesture will be described with reference to FIGS. 8A to 9. As described above, in order to generate a command by a gesture, the display apparatus 100 receives the user's input operation, judges whether to recognize the received input operation as a simple touch, a simple drag, or a gesture, and executes the corresponding operation. In addition, when different gestures are input, different matched commands are generated.

On the other hand, it is needless to say that even when a gesture having the same motion operation is input, different commands may be generated depending on the situation.

FIGS. 8A and 8B are diagrams for explaining cases where different commands are generated according to the point at which a gesture is input.

As shown in FIG. 8A, when, after the first touch 110 is input, a 'Z'-shaped gesture 150 is input at a position higher than the horizontal line 810 based on the point where the first touch 110 is input, it can be recognized as a gesture for generating and executing a first command.

As shown in FIG. 8B, when, after the first touch 110 is input, a 'Z'-shaped gesture 150 is input at a position lower than the horizontal line 810 based on the point where the first touch 110 is input, it can be recognized as a gesture for generating and executing a second command different from the first command.

As described above, by mapping gestures of the same form to different commands according to the input point, the number of gesture forms the user must memorize is reduced and more gestures can be utilized.

FIG. 9 is a diagram for explaining a case where different commands are generated depending on the object on which the first touch is input.

As shown in the figure, assume that a plurality of items 910, 930, and 950 are displayed on the touch screen, that a first touch 110 is input on a specific item 950 rather than on the background region, and that a gesture 150 is then input. This is recognized as a command for the specific item 950, rather than a command for the entire display device 100 or for the other items 910 and 930.

For example, as shown, when the first touch 110 is input on a particular item 950 and a 'C'-shaped gesture 150 is input, this may be a command requesting a copy of the particular item 950.

When the same operation is performed on the other items 910 and 930, it likewise becomes a command requesting a copy of each of those items.

Also, if the 'C'-shaped gesture 150 is input after the first touch 110 is input on the background area rather than on the items 910, 930, and 950, it may be a command different from copy.

In the above description, it is assumed that different commands are generated depending on the object on which the first touch is input, but this is not necessarily limited to a touch. For example, if the first touch is replaced by a touch-and-drag that sets a region, and the region includes all three items 910, 930, and 950, this may be a command for copying all three items.

In this manner, by generating different commands according to the point where the first touch is input, the number of gesture forms the user must memorize is reduced and more gestures can be utilized.

<5. Sensing methods of the display device>

Meanwhile, the display device 100 may be classified as single-touch, dual-touch, or multi-touch according to its sensing method. With the dual-touch and multi-touch methods there is no problem in sensing the second touch and the drag that follows it, but the single-touch method has difficulty detecting two or more touches.

However, the display device 100 according to the present invention recognizes the user's gesture by determining the second touch and the drag after the second touch even when the single-touch method is used.

To explain this, the sensing methods that can be employed in the display apparatus 100 are first described.

FIG. 10 is a diagram for explaining a touch sensing method according to the capacitive (electrostatic) type.

As shown, the touch screen includes sensors 1010 at its four corners to sense a touch capacitively. To sense a touch in the capacitive manner, a current flows continuously through the touch screen, and the sensors 1010 accumulate charge in the horizontal and vertical directions.

When the user touches the touch screen with a finger, part of the current flowing through the touch screen is drawn away by the finger, and the sensors 1010 detect this change in current to measure the coordinates of the touched point.

As described above, since the capacitive method measures the coordinates of the touched point by sensing the change in current, multi-touch can be implemented.

FIG. 11 is a view for explaining a touch sensing method according to the resistive (pressure-sensitive) type.

As shown, the touch screen is composed of a plurality of layers made of transparent materials in order to sense a touch resistively. In particular, it is composed of a stabilizing layer 1110, a first conductive layer 1130, a second conductive layer 1150, and dot spacers 1170.

The stabilizing layer 1110 is made of an insulating material so that current flowing on the screen does not affect the operation of the touch screen, and it prevents the liquid crystal from being damaged by being pressed when the user touches the touch screen. Glass or plastic is mainly used for the stabilizing layer 1110.

The first conductive layer 1130 and the second conductive layer 1150 are made of a material having electrical resistance and conductivity in order to sense the touch point. They are separated by a very thin, air-filled, insulated gap. When the user touches the touch screen, the first conductive layer 1130 and the second conductive layer 1150 come into contact, a change in resistance and current occurs, and the coordinates of the touched point are measured based on the signal related to this change in resistance and current.

The dot spacers 1170 are formed of a nonconductive material and prevent the first conductive layer 1130 and the second conductive layer 1150 from being electrically connected while no touch is being input.

Meanwhile, since the resistive type measures the coordinates of the touched point based on the signal related to the change in resistance and current, it is difficult to detect a multi-touch in which a plurality of points are touched.

FIG. 12 is a diagram for explaining a touch sensing method according to the infrared type.

To sense a touch in the infrared manner, the touch screen includes means for emitting and receiving infrared rays along its four edges.

The infrared method uses the property that infrared rays travel in straight lines and cannot pass through obstacles. That is, the infrared method detects the user's touch point by reading the X and Y coordinates of the blocked portion when infrared rays emitted in the horizontal and vertical directions are blocked by the touched portion of the touch screen.

To form this infrared grid, infrared rays are emitted from two adjacent edges of the touch screen and received at the opposite edges.

Since the infrared rays are emitted at a position slightly above the surface of the touch screen, the infrared type can sense a touch even if the user's finger does not directly contact the touch screen.

Meanwhile, since the infrared method detects the touch point by reading the X and Y coordinates of the most recently blocked portion, it is difficult to detect a multi-touch in which a plurality of points are touched.

<6. Gesture recognition and determination by sensing method>

As described above, multi-touch is easy to implement with the capacitive type, but not easy with the resistive and infrared types. Therefore, when the display device 100 employs the capacitive type, the gesture recognition described above poses no great problem.

Meanwhile, since the display device 100 according to various embodiments of the present invention recognizes a gesture by a specific motion, it does not need to calculate the exact coordinates at which a multi-touch is input; it can be implemented so that a command corresponding to a gesture can be executed as long as the shape of the gesture is known.

Hereinafter, gesture recognition and determination methods for the resistive and infrared types will be described with reference to FIGS. 13 and 14.

FIG. 13 is a diagram for explaining the gesture recognition and determination method when the resistive type is adopted.

One of the conditions for a gesture to be recognized is that the first touch must be maintained. As another example of a condition for recognizing a gesture, it was described that the gesture should be input in a predetermined area based on the first touch.

In the display device 100 employing the resistive type, the following methods are used to determine whether the first touch is continuously maintained, to check the condition on the point where the gesture is input, and to determine the shape of the gesture.

First, when the second touch 130 is input while the first touch 110 is held, as shown in the upper left, a straight line is generated from the point of the first touch 110 covering half of the distance from the first touch 110 to the second touch 130.

As described above, since the resistive type measures the coordinates of the touched point based on the signal related to the change in resistance and current, when the first touch 110 and the second touch 130 are input together, the measured resistance and current values change to those corresponding to the midpoint between the first touch 110 and the second touch 130.

Therefore, the point at twice the length of the straight line generated from the point where the first touch 110 is input is actually the point where the second touch 130 is input. Here, the end point of the generated straight line can be regarded as the point where the second touch 130 is sensed.

That is, the display device 100 doubles the vector from the point where the first touch 110 is sensed to the point where the second touch 130 is sensed, and regards the end point of the doubled vector as the point where the second touch 130 is actually input.
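This midpoint correction amounts to doubling the vector from the first touch to the sensed point, as in the following sketch:

def actual_second_touch(first, sensed):
    # A resistive panel reports two simultaneous touches as their
    # midpoint, so doubling the vector from the first touch to the
    # sensed point recovers the real position of the second touch.
    fx, fy = first
    sx, sy = sensed
    return (fx + 2 * (sx - fx), fy + 2 * (sy - fy))

# e.g. first touch at (100, 100) and a sensed point of (160, 130)
# imply the second touch is actually at (220, 160).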

The display device 100 then determines, based on the calculated input point of the second touch 130, whether the second touch 130 is input within the area set based on the first touch 110. In this way, it can be known whether the gesture satisfies the condition on the input point.

When the drag 150 following the second touch 130 is input while the first touch 110 is held, as shown in the lower left, the display device 100 senses the drag 150 at the midpoint between the point where the first touch 110 is input and the point of the drag 150 following the second touch 130. Accordingly, the gesture 1310 appears at a position between the point where the first touch 110 is input and the point where the drag 150 following the second touch 130 is input.

Since the shape of the gesture 1310 at which the drag 150 is actually sensed is similar to the shape of the drag 150 itself, the display device 100 employing the resistive type has no difficulty in determining the gesture.

Meanwhile, when the drag 150 following the second touch 130 is completed, a straight line is formed from the point where the input of the drag 150 ended back to the point where the first touch 110 is input. This is because, as described above, the measured resistance and current values revert to those corresponding to the point where the first touch 110 is input.

Based on the straight line formed from the point where the input of the drag 150 ended back to the point where the first touch 110 is input, the display device 100 can confirm that the condition that the first touch 110 is continuously maintained until the drag 150 following the second touch 130 is completed and released is satisfied.

FIG. 14 is a diagram for explaining the gesture recognition and determination method when the infrared type is adopted.

One of the conditions for a gesture to be recognized is that the first touch must be maintained. In addition, as another condition, it was described that the gesture must be input in the area set based on the first touch.

In the display device 100 employing the infrared type, the following methods are used to determine whether the first touch is continuously maintained, to check the condition on the point where the gesture is input, and to determine the shape of the gesture.

First, when the second touch 130 is input after the first touch 110, as shown in the upper left, the first touch 110 is no longer sensed by the display device 100.

As described above, the infrared method detects the touch point by reading the X and Y coordinates of the most recently blocked portion.

Meanwhile, when the first touch 110 is sensed and then the second touch 130 is sensed, it is necessary to determine whether the second touch 130 was input while the first touch 110 was maintained or after the first touch 110 was released.

If the second touch 130 is input while the first touch 110 is held, the second touch 130 or the drag following it can be regarded as a gesture; but if the second touch 130 is input after the first touch 110 is released, the second touch 130 should be regarded as a simple touch and the drag following it as a simple drag.

Accordingly, when the difference between the time when the first touch 110 stops being sensed and the time when the second touch 130 is sensed is less than a preset range, the display device 100 determines that the second touch 130 was input while the first touch 110 was maintained.
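On an infrared panel this check reduces to comparing two timestamps. A sketch, with the threshold as an assumed tunable value:

HANDOVER_THRESHOLD = 0.05  # seconds; assumed tuning value

def second_touch_while_held(first_lost_at, second_sensed_at,
                            threshold=HANDOVER_THRESHOLD):
    # On an infrared grid the first touch stops being sensed the moment
    # the second touch is input, so a near-zero gap between losing the
    # first touch and sensing the second implies the first is still held.
    return (second_sensed_at - first_lost_at) < threshold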

Meanwhile, in the display device 100 employing the infrared type, the point where the second touch 130 is input is the same as the point where it is sensed, so the device determines, based on the sensed point, whether the second touch 130 was input within the area set based on the first touch 110. In this way, it can be known whether the gesture satisfies the condition on the input point.

When the drag 150 following the second touch 130 is input while the first touch 110 is held, as shown in the lower left, the display device 100 determines the shape of the gesture based on the sensed shape of the drag 150.

When the first touch 110 is held and the drag 150 following the second touch 130 is completed, the first touch 110 is sensed again.

Based on the fact that the first touch 110 is sensed as soon as the input of the gesture by the drag 150 ends, the display device 100 can confirm that the condition that the first touch 110 is continuously maintained during the drag 150 following the second touch 130 is satisfied.

As described above, the display device 100 employing the resistive or infrared type can recognize and determine a gesture input by the user, and thus generate and execute a command, without a separate hardware configuration for capacitive touch sensing or multi-touch detection.

Meanwhile, the above description is merely an example of a method of determining a gesture when dual-touch or multi-touch is not available. It goes without saying that a gesture can also be recognized and determined, and a command generated and executed, in single-touch methods other than the resistive and infrared types described above.

<7. Operation flow by sensing method>

Hereinafter, the operation flow for recognizing a gesture in each sensing method will be described with reference to FIGS. 15 to 17.

FIG. 15 is a flowchart for explaining a gesture recognition method and a command generation method in the display device 100 employing the capacitive type. As described above, since the display device 100 employing the capacitive type easily implements dual-touch and multi-touch, it can execute a command according to a gesture even when a dual-touch or multi-touch is input.

When the first touch is input to the display device 100 employing the capacitive type (S1500-Y), an area is set based on the point where the first touch is input (S1510).

Then, the display device 100 determines whether the first touch is released (S1520). If the first touch is released (S1520-Y), the first touch is recognized as a simple touch rather than a gesture (S1570), a command corresponding to the recognized touch is generated (S1580), and the generated command is executed (S1560).

On the other hand, if the first touch is not released (S1520-N), it is determined whether a second touch is input within the set area (S1530).

If the second touch is input within the set area (S1530-Y), the display device 100 recognizes the second touch or the drag following it as a gesture (S1540), generates a command corresponding to the recognized gesture (S1550), and executes the generated command (S1560).

On the other hand, if the second touch is input outside the set area (S1530-N), the display device 100 recognizes the first touch and the second touch input outside the set area as simple touches (S1570) and generates corresponding commands (S1580).

As described above, since the display device 100 employing the capacitive type easily implements dual-touch and multi-touch, it can execute a command according to a gesture by treating the sensed point directly as the input point.
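The flow of FIG. 15 can be condensed into a few branches, since capacitive coordinates can be used directly. The sketch below is a simplified rendering of the S1500 to S1580 decisions, not the patent's implementation; the inputs are assumed to have been gathered already by the touch driver.

def capacitive_classify(first_held, second_point, area_test, drag_shape):
    # Condensed decision logic of FIG. 15: on a capacitive panel the
    # sensed points are the input points, so only the two gesture
    # conditions need checking.
    if not first_held:                                   # S1520-Y
        return "simple touch"                            # -> S1570/S1580
    if second_point is not None and area_test(second_point):  # S1530-Y
        return "gesture: " + drag_shape                  # S1540 -> S1550/S1560
    return "simple touch"                                # S1530-N -> S1570/S1580

# Example: first touch held, second touch inside the set area, 'Z' drag.
print(capacitive_classify(True, (120, 80), lambda p: True, "Z"))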

16A and 16B are flowcharts for explaining a gesture recognition method and a command generation method in the display device 100 employing the depressurization type. As described above, since the display device 100 employing the depressurization type is not easy to implement dual touch or multi-touch, when a dual-touch or multi-touch is input, the input point is calculated based on the detected point And executes the command according to the calculated result.

In the above description, the condition for recognizing the gesture as a condition for the first touch to be maintained and the condition for inputting the second touch in the predetermined area have been described. It will be appreciated by those skilled in the art that the first touch may be maintained by referring to the description relating to FIG. 15. Hereinafter, the process of determining whether or not a second touch is input in the predetermined area in the display device 100 employing the pressure- .

First, the display device 100 employing the pressure reduction type determines whether or not the first touch is input (S1600). If it is determined that the first touch is input (S1600-Y) (S1605).

Then, the display device 100 determines whether the first touch is released (S1610). If the first touch is released (S1610-Y), the display device 100 recognizes the first touch as a simple touch (S1665) Command (S1670).

On the other hand, when the first touch is not released (S1610-N) and the second touch is sensed (S1615), the display device 100 calculates the difference between the time when the first touch is sensed and the time when the second touch is sensed ), A distance difference between the point where the first touch is sensed and the point where the second touch is sensed is calculated (S1625).

If the time difference for the distance difference is within the predetermined range (a) (S1630-Y), the display device 100 generates a connected vector from the point where the first touch is sensed to the point where the second touch is sensed (S1635) , And the generated vector is extended to regard the end point of the extended vector as a point at which the second touch is input (S1640).

Based on this regarded input point, the display device 100 then determines whether the second touch was input in the set area (S1645). If the second touch is determined to have been input in the set area (S1645-Y), the display device 100 recognizes the second touch, or the drag following the second touch, as a gesture (S1650).

Thereafter, the display apparatus 100 generates a command corresponding to the recognized gesture (S1655), and executes the generated command (S1660).

On the other hand, if the time difference relative to the distance difference is not within the preset range a (S1630-N), or if the second touch is not determined to have been input in the set area (S1645-N), the display device 100 recognizes the input touches as simple touches (S1665), and generates a command corresponding to the recognized touch (S1670).

In this way, although the display device 100 employing the pressure-sensitive method cannot easily implement dual touch or multi-touch, it can calculate the input point based on the sensed point and execute the command according to the calculated result.
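
The extrapolation described for FIGS. 16A and 16B can be sketched as follows. On a resistive panel a second finger shifts the single reported point, so the true second-touch point must be estimated; the ratio bound for the predetermined range a and the extension distance below are assumed placeholder values, since the patent leaves them unspecified:

```python
# A minimal sketch of the pressure-sensitive estimation of FIGS. 16A and 16B;
# RANGE_A and extend_by are illustrative assumptions.
import math

RANGE_A = 0.05  # s/px; assumed bound on (time difference / distance difference)

def estimate_second_touch(p1, t1, sensed, t2, extend_by):
    """p1, t1: point/time of the first touch; sensed, t2: point/time at which
    the reported point jumped; extend_by: previously calculated distance."""
    dist = math.hypot(sensed[0] - p1[0], sensed[1] - p1[1])
    if dist == 0 or (t2 - t1) / dist > RANGE_A:
        return None                    # S1630-N: treat as a simple touch
    ux, uy = (sensed[0] - p1[0]) / dist, (sensed[1] - p1[1]) / dist
    # S1635/S1640: extend the vector; its end point is regarded as the point
    # at which the second touch was actually input
    return (sensed[0] + ux * extend_by, sensed[1] + uy * extend_by)

print(estimate_second_touch((0, 0), 0.00, (50, 0), 0.01, 50))  # -> (100.0, 0.0)
```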

FIG. 17 is a flowchart for explaining a gesture recognition method and a command generation method in the display device 100 employing the infrared method. As described above, since an infrared panel cannot easily implement dual touch or multi-touch, when a dual touch or multi-touch is input, the display device 100 determines whether a gesture is input based on the time difference between the touches, and executes a command according to the determined result.

As before, the conditions for recognizing a gesture are that the first touch is maintained and that the second touch is input in the predetermined area. Since whether the first touch is maintained can be determined as in the description relating to FIG. 15, the following description focuses on how the display device 100 employing the infrared method determines whether a second touch is input in the predetermined area.

When the first touch is input in the display device 100 employing the infrared method (S1700-Y), an area is set based on the point where the first touch is input (S1705).

Thereafter, the display device 100 determines whether the first touch is released (S1710) and whether a second touch is sensed in the area (S1715). If the first touch is released (S1710-Y) and a second touch is sensed in the area (S1715-Y), the display device 100 calculates the difference between the time when the first touch was released and the time when the second touch was sensed. If the time difference is within the predetermined range b (S1725-Y), the display device 100 recognizes the second touch, or the drag following the second touch, as a gesture (S1730).

Thereafter, the display device 100 generates a command corresponding to the recognized gesture (S1735), and executes the generated command (S1740).

On the other hand, if no second touch is sensed in the area (S1715-N), or if the time difference is not within the preset range b (S1725-N), the display device 100 recognizes the first-input touch as a simple touch (S1745), and generates a command corresponding to the recognized touch (S1750).

In this way, although the display device 100 employing the infrared method cannot easily implement dual touch or multi-touch, it determines whether a gesture is input based on the input time difference between the first touch and the second touch, and executes the command accordingly.
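
The infrared decision of FIG. 17 reduces to a simple timing and area test, sketched below; the window value for the predetermined range b is an assumption:

```python
# A minimal sketch of the infrared decision of FIG. 17: the second touch counts
# as a gesture only if it lands inside the set area within a short window after
# the first touch is released. RANGE_B is an illustrative assumption.
RANGE_B = 0.3  # s; assumed value of the predetermined range b

def infrared_is_gesture(first_release_time, second_touch_time, in_set_area):
    # S1715/S1725: inside the area set around the first touch, and within the
    # allowed time difference after the first touch's release
    return in_set_area and (second_touch_time - first_release_time) <= RANGE_B

print(infrared_is_gesture(1.00, 1.15, True))   # -> True: recognized as gesture
print(infrared_is_gesture(1.00, 1.60, True))   # -> False: simple touch instead
```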

<8. User-specific command storage>

 Hereinafter, a method of storing a command for each user will be described with reference to FIGS. 18 to 20C.

FIG. 18 is a diagram for explaining the need to distinguish commands by user. When the display apparatus 100 is an LFD (Large Format Display), as shown in FIG. 18, operations can be input to one display apparatus 100 by a plurality of users at once.

FIG. 18 shows a state in which two cut gestures are input together by two users U1 and U2. In this case, even though two cut gestures are input for two items, only one of the items 1810 and 1820 is cut and stored. For example, if T1 < T2, only the item 1820 corresponding to the cut gesture input by user U2 at time T2 is cut.

Of course, if the same user input the two cut gestures for the two different items 1810 and 1820, cutting the item 1820 through the final cut gesture matches the user's actual intent, so this may not be a problem.

However, if the different users U1 and U2 input the two cut gestures and commands are not distinguished by user, there is a problem in that the cut gesture input by user U1 at time T1 is simply ignored, and the corresponding item 1810 is deleted without actually being cut.

Now suppose that in FIG. 18 two paste gestures are simultaneously input by the two users U1 and U2. In this case, the item corresponding to the last stored cut or copy command is pasted twice. Of course, if the same user input the two paste gestures, pasting the same item twice may match the user's actual intent, so this may not be a problem.

However, if the intent of users U1 and U2 is that each of them paste the item he or she copied or cut, and commands are not distinguished by user, there is a problem in that one user's item is not pasted.

Thus, in an LFD environment, commands need to be stored separately for each user. Of course, this need is greater in an LFD environment used by several users at once; however, this does not mean that no such need exists for a small mobile device, and the technical idea of the present invention can be applied equally to a mobile device as well as to an LFD.

FIGS. 19A to 19F are views for explaining a method of identifying, for each user, the command according to an input gesture. Referring to FIGS. 19A to 19F, a user can mark the command corresponding to an input gesture as his or her own command by inputting an ID.

As shown in FIG. 19A, when the user holds the first touch 1910 on the item 1900 and inputs the cut gesture 1920 through the drag operation following the second touch, a guide item 1930 that guides the input of the user's ID is generated on the screen, as shown in FIG. 19B.

As shown in the figure, the guide item 1930 has a shape similar to the up, down, left, and right direction keys of a remote controller, and the user can input his or her own ID by dragging in a specific pattern along the guide item displayed on the screen. That is, when the drag operation following the second touch is completed, an ID of a specific pattern is input through a drag operation following a third touch.

In this case, since the gesture input is already completed, the condition that the first touch be maintained no longer needs to be satisfied. It is, however, equally possible to implement the input so that the first touch must still be maintained.

FIG. 19D shows the input of the user's ID 1940. According to FIG. 19D, the user's ID 1940 is a pattern of dragging sequentially from the top of the guide item to the left, and then from the left to the right.

In this way, the command according to each user's gesture is identified. To paste the cut item 1900, the same ID 1940 as the one input earlier must be input again.

As shown in FIG. 19E, when the first touch 1950 is input and maintained, and the paste gesture 1920 is input through the drag operation following the second touch, the guide item 1930 guiding the input of the user's ID 1940 is displayed again.

Thereafter, the user inputs his or her own ID 1940 again, so that the item 1900 the user previously cut is restored on the screen.

In this manner, the user can mark the command corresponding to an input gesture as his or her own command by inputting an ID.
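
The idea of reading the drag pattern along the guide item as an ID can be sketched as follows; the direction labels, the jitter threshold, and the encoding are illustrative assumptions, not the patent's scheme:

```python
# A minimal sketch of reading an ID from the drag pattern along the guide item
# (FIGS. 19A to 19F); quantization parameters are assumptions.
def drag_to_id(points, min_step=30):
    """Quantize a drag (a list of (x, y) samples) into a direction string such
    as 'U-L-R', which then serves as the user's ID."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_step and abs(dy) < min_step:
            continue                       # ignore jitter below the threshold
        if abs(dx) >= abs(dy):
            d = "R" if dx > 0 else "L"
        else:
            d = "D" if dy > 0 else "U"     # screen y grows downward
        if not dirs or dirs[-1] != d:      # collapse repeated directions
            dirs.append(d)
    return "-".join(dirs)

# a drag going up, then left, then right might encode the ID of FIG. 19D
print(drag_to_id([(0, 0), (0, -40), (-40, -40), (40, -40)]))  # -> 'U-L-R'
```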

FIGS. 20A to 20C are diagrams showing additional examples of guide items. Although an item similar to the up, down, left, and right direction keys of a remote controller has been assumed as an example, the guide item is not limited thereto. For example, the guide item may lack the connection lines between up, down, left, and right, as shown in FIG. 20A, or may have a shape other than the up, down, left, and right keys, as shown in FIG. 20B.

Furthermore, as shown in FIG. 20C, even when there is no guide item at all, the user can input his or her own ID or signature freehand, and the command corresponding to the gesture can still be distinguished as that user's own command.

Meanwhile, in order to distinguish commands by user, user-specific commands must be clipped to a clipboard provided for each user. In particular, when a gesture is input by a specific user, the display apparatus 100 generates a command corresponding to the input gesture, searches for the clipboard corresponding to that user, and executes the command corresponding to the input gesture with reference to the command already clipped to the retrieved clipboard.

For example, suppose a copy command for an item a is clipped to the clipboard of user A, and a paste gesture is then input by the same user. The display device generates a paste command, refers to the ID input together with the paste gesture to find the clipboard of user A, refers to the copy command for item a already clipped there, and executes the paste command for item a.
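
A per-user clipboard of this kind can be sketched as follows; the data layout and names are illustrative, not the patent's:

```python
# A minimal sketch of per-user clipboards: each copy/cut command is clipped
# under the entered ID, and a later paste from the same ID refers back to it.
class ClipboardStore:
    def __init__(self):
        self.boards = {}  # ID -> last clipped ("copy" | "cut", item)

    def clip(self, user_id, command, item):
        # a new copy/cut overwrites the previously clipped command
        self.boards[user_id] = (command, item)

    def paste(self, user_id):
        clipped = self.boards.get(user_id)
        if clipped is None:
            return None            # no clipboard exists for this ID
        _, item = clipped
        return item                # the target item of the copy/cut is pasted

store = ClipboardStore()
store.clip("U-L-R", "cut", "item_1900")      # cut gesture plus user U1's ID
print(store.paste("U-L-R"))                  # -> 'item_1900' (restored)
print(store.paste("other-id"))               # -> None (a different user's ID)
```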

<9. Interlocking with external device>

FIG. 21 is a diagram for describing gesture input interlocked with an external device. In FIG. 21, the display device 100 and the external device 2100 are shown together.

The display device 100 may execute a command generated based on an input gesture, but it may also generate and execute a command based on a command generated in the external device 2100. Likewise, the external device 2100 may generate and execute a command based on a command generated in the display device 100.

Both are performed by the same principle, and the latter will be described below.

When the first touch 2110 is input on the item 2120 displayed on the screen of the display device 100, and the drag 2130 satisfying the above conditions is input as shown in the upper left part, it is recognized as a gesture. In the present embodiment, it is assumed that the gesture is input for a cut command for the item 2120.

The cut item 2120 may be pasted back to the display device 100, but it may also be pasted to the external device 2100 through wireless communication with the external device 2100.

That is, when the first touch 2140 is input on a portion of the screen of the external device 2100, and a drag 2130 satisfying the above-described conditions is input as shown in the upper right part, it is recognized as a gesture. In the present embodiment, it is assumed that the gesture is input for a paste command for the item 2120.

Accordingly, the item 2120 cut in the display device 100 is pasted at the point where the first touch 2140 is input in the external device 2100.

In this way, a command can be generated even in cooperation with the external device 2100.

FIG. 22 is a diagram for explaining gesture and ID input interlocked with an external device. In FIG. 22, the display device 100 and the external device 2100 are shown together.

As described above, the display apparatus 100 can execute a new command with reference to a command clipped to its internal clipboard, but it may also refer to a command clipped to a clipboard stored in the external device 2100. Likewise, the external device 2100 may execute a new command with reference to a command clipped to a clipboard inside the display device 100.

Both are performed by the same principle, and the latter will be described below.

When the first touch 2210 is input on the item 2220 displayed on the screen of the display device 100, and the drag 2230 satisfying the above conditions is input as shown in the upper left part, it is recognized as a gesture. In this embodiment, it is assumed that the gesture is input for a cut command for the item 2220.

The cut item 2220 may be pasted back to the display device 100, but it may also be pasted to the external device 2100 through wireless communication with the external device 2100.

That is, when the first touch 2260 is input on a part of the screen of the external device 2100, and a drag 2270 satisfying the above-described conditions is input as shown in the upper right part, it is recognized as a gesture. In the present embodiment, it is assumed that the gesture is input for a paste command for the item 2220.

Accordingly, the cut item 2220 in the display device 100 is pasted at the point where the first touch 2260 is input from the external device 2100.

In this manner, even when the external device 2100 is interlocked, a command can be generated by inputting the gesture and the ID.

<10. Specific interworking with external devices>

Hereinafter, a specific method of interworking with the external device 2100 will be described with reference to FIGS. 23 to 25. The case where the clipboard corresponding to the ID input to the display device 100 is stored in the display device 100 has already been described above; therefore, FIGS. 23 to 25 address the case where the clipboard corresponding to the input ID is not stored in the display device 100.

FIG. 23 is a diagram for explaining the case where, when a gesture and an ID are input by the user, the clipboard corresponding to the input ID is stored in neither the display device 100 nor the external device 2100.

First, when a gesture is input by the user, the control unit 101 of the display device 100 generates a command based on the input gesture. When the ID A is then input by the user, the control unit 101 broadcasts a message inquiring whether a clipboard A corresponding to the input ID exists (S2300).

The control unit 101 of the display apparatus 100 searches its internal clipboard storage unit 106 for the clipboard A (S2310), and the control unit 2101 of the external device 2100, having received the broadcast message from the display apparatus 100, searches the clipboard storage unit 2106 of the external device 2100 for the clipboard A (S2310).

If it is determined that the clipboard A exists in neither the display device 100 nor the external device 2100 (S2320), the control unit 101 of the display device 100 directly creates the clipboard A (S2330), stores it in the clipboard storage unit 106, and clips the generated command to it (S2340).

The display device 100 can determine that the clipboard A does not exist in the external device 2100 either when no response at all is received from the external device 2100, or when the external device 2100 responds that the clipboard A is not present.
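
The discovery exchange of FIG. 23 can be sketched as a simple message-passing model; `Peer.query` is an illustrative stand-in, as the patent does not specify a wire protocol:

```python
# A minimal sketch of the clipboard discovery of FIG. 23; all names assumed.
def find_or_create_clipboard(local_boards, peers, board_id):
    """S2300: broadcast 'does clipboard board_id exist?'; S2310: search locally
    and on each peer; S2320-S2340: if it exists nowhere, create it locally."""
    if board_id in local_boards:
        return ("local", None)
    for peer in peers:
        addr = peer.query(board_id)    # a peer holding the board unicasts its
        if addr is not None:           # access address; None means no response
            return ("remote", addr)
    local_boards[board_id] = []        # clipboard A is created and stored here
    return ("created", None)

class Peer:
    def __init__(self, boards, addr):
        self.boards, self.addr = boards, addr
    def query(self, board_id):
        return self.addr if board_id in self.boards else None

print(find_or_create_clipboard({}, [Peer({"A": []}, "10.0.0.7")], "A"))
# -> ('remote', '10.0.0.7')
```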

FIGS. 24 and 25 show the case where, when a gesture and an ID are input by the user, the clipboard corresponding to the input ID is not stored in the display device 100 but is stored in the external device 2100.

In particular, FIG. 24 is a diagram for explaining a case where a gesture for a copy command is input from a user.

First, when a gesture for a copy is input by the user, the control unit 101 of the display device 100 generates a copy command based on the input gesture. When the ID A is then input by the user, the control unit 101 broadcasts a message requesting a search for the corresponding clipboard A, that is, a message inquiring whether the clipboard A exists (S2400). Of course, this message may include an access address for the display device 100.

The control unit 101 of the display device 100 searches its internal clipboard storage unit 106 for the clipboard A (S2410), and the control unit 2101 of the external device 2100, having received the broadcast message from the display device 100, searches the clipboard storage unit 2106 of the external device 2100 for the clipboard A (S2410).

If the clipboard A is not present in the display device 100 but is determined to be present in the external device 2100 (S2420), the control unit 2101 of the external device 2100 unicasts an access address for the external device 2100, where the clipboard A exists, to the display device 100 (S2430). Of course, the unicast message may include not only the access address of the external device 2100 but also information indicating that the external device 2100 stores the clipboard A.

Upon receiving the access address for the external device 2100, the display device 100 unicasts the copy command generated by the gesture inputted by the user to the external device 2100 (S2440).

Accordingly, the external device 2100 clips the received copy command to the clipboard A and stores it (S2450). In particular, if a copy command or a cut command was previously clipped to the clipboard A, the external device 2100 clips the received copy command over it, overwriting the previously clipped command.

Meanwhile, apart from storing the command, the display apparatus 100 executes the copy command generated based on the gesture input by the user.

Next, FIG. 25 is a diagram for explaining a case where a gesture for a paste command is input from a user.

First, when a gesture for a paste is input by the user, the control unit 101 of the display device 100 generates a paste command based on the input gesture. When the ID A is then input by the user, the control unit 101 broadcasts a message requesting a search for the corresponding clipboard A, that is, a message inquiring whether the clipboard A exists (S2500). Of course, this message may include an access address for the display device 100.

The control unit 101 of the display apparatus 100 searches its internal clipboard storage unit 106 for the clipboard A (S2510), and the control unit 2101 of the external device 2100, having received the broadcast message from the display apparatus 100, searches the clipboard storage unit 2106 of the external device 2100 for the clipboard A (S2510).

If the clipboard A is not present in the display device 100 but is determined to be present in the external device 2100 (S2520), the control unit 2101 of the external device 2100 unicasts an access address for the external device 2100, where the clipboard A exists, to the display device 100 (S2530). Of course, the unicast message may include not only the access address of the external device 2100 but also information indicating that the external device 2100 stores the clipboard A.

Upon receiving the access address for the external device 2100, the display device 100 unicasts the paste command generated by the gesture inputted by the user to the external device 2100 (S2540).

Also, the external device 2100 unicasts information about the command clipped to the clipboard A to the display device 100. Here, the previously clipped command is a copy command or a cut command. Thereafter, the display apparatus 100 executes the generated paste command based on this previously clipped copy or cut command; that is, the target item of the copy command or of the cut command is pasted.

On the other hand, the external device 2100 clips the command received from the display device 100 to the clipboard A and stores it (S2560).
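
Combining FIGS. 24 and 25, the push of a copy/cut command to the peer holding clipboard A, and the pull performed by a paste, can be sketched as follows; all names are illustrative assumptions:

```python
# A minimal sketch of the interworking of FIGS. 24 and 25.
class RemoteClipboard:
    def __init__(self):
        self.clipped = {}  # board_id -> last clipped (command, item)

    def clip(self, board_id, entry):     # S2450/S2560: overwrite and store
        self.clipped[board_id] = entry

    def fetch(self, board_id):           # the peer unicasts the clipped command
        return self.clipped.get(board_id)

def dispatch(command, item, peer, board_id):
    if command in ("copy", "cut"):
        peer.clip(board_id, (command, item))   # S2440: unicast to the peer
        return None
    if command == "paste":
        prev = peer.fetch(board_id)            # previously clipped copy/cut
        peer.clip(board_id, (command, item))   # S2560: the paste is clipped too
        if prev and prev[0] in ("copy", "cut"):
            return prev[1]                     # paste this target item locally
    return None

peer = RemoteClipboard()
dispatch("copy", "item_a", peer, "A")
print(dispatch("paste", None, peer, "A"))      # -> 'item_a'
```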

<11. Configuration for display device>

FIG. 26 is a diagram showing a configuration of the display device 100. The display device 100 according to the present embodiment distinguishes between a touch and a gesture, and executes a command according to an input gesture and ID.

The display device 100 includes a control unit 101, a multimedia function block 102, a GUI (Graphical User Interface) generating unit 103, a communication interface 104, a storage unit 105, and a touch screen 108.

The multimedia function block 102 displays a screen according to the user's operation. In particular, to do so it reproduces content such as moving pictures, still images, music, and text, and executes and runs various applications. Which functions the multimedia function block 102 can perform depends on what kind of device the display device 100 is.

The GUI generating unit 103 generates GUIs such as items, and adds the generated GUI to the content reproduced, or the application executed, by the multimedia function block 102.

The touch screen 108 displays the image played, or the application executed, by the multimedia function block 102. The GUI generated by the GUI generating unit 103 is also displayed on the touch screen 108.

In particular, the touch screen 108 comprises a display unit (not shown) and a sensing unit (not shown), and the image or application described above is displayed on the display unit.

The sensing unit senses user operations such as touch and drag-and-drop input on the display unit. In particular, the sensing unit may be implemented by the electrostatic, pressure-sensitive, or infrared method described above, or in other ways as well.

The control unit 101 controls the functioning of the multimedia function block 102 according to the user's operation input through the touch screen 108 or a separate operation means (not shown). The control unit 101 also controls the GUI generating unit 103 such that a GUI corresponding to the user's operation is displayed on the display unit of the touch screen 108.

As described above, since the sensing unit can be implemented by any one of a plurality of sensing methods, the control unit 101 calculates the point at which a touch or drag is input in a manner that depends on the sensing method employed by the display device 100.

The communication interface 104 communicates with the external device to receive information on the command from the external device or to transmit information on the command generated in the display device 100 to the external device.

The storage unit 105 stores various programs and applications for operating the display device 100, and a clipboard. In particular, the storage unit 105 includes a clipboard storage unit 106 for storing a clipboard for each user's ID, and a program storage unit 107 for storing programs and applications.
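
The component layout of FIG. 26 can be summarized in code as plain data; the fields below are illustrative stand-ins for the hardware blocks described above:

```python
# A minimal sketch of the block diagram of FIG. 26 as a dataclass.
from dataclasses import dataclass, field

@dataclass
class DisplayApparatus:
    # reference numerals follow FIG. 26
    controller: object = None        # 101: interprets touches, routes commands
    multimedia_block: object = None  # 102: plays content and runs applications
    gui_generator: object = None     # 103: produces GUI items for the screen
    comm_interface: object = None    # 104: exchanges commands with peers
    clipboards: dict = field(default_factory=dict)  # 106: one board per ID
    programs: dict = field(default_factory=dict)    # 107: programs and apps
    sensing_method: str = "electrostatic"  # 108: the touch screen's sensing type
```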

This makes it possible to execute a command by inputting a user command in an easier and simpler manner in a touch screen environment.

Although the methods for generating commands have been described above in an LFD environment, this is merely an example for convenience of explanation. Therefore, it goes without saying that the technical idea of the present invention can be applied as-is to small mobile devices.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention.

101: control unit 102: multimedia function block
103: GUI generation unit 104: communication interface
105: storage unit 108: touch screen

Claims (60)

  1. Receiving a first touch;
    Recognizing, when a second touch satisfying a preset condition is input after the first touch is input, the second touch or an operation following the second touch as a gesture; And
    And generating a command based on the gesture,
    Wherein the predetermined condition is a condition that the second touch is input in an area set based on a point where the first touch is inputted in a state where the input first touch is maintained.
  2. delete
  3. [Claim 3 is abandoned upon payment of the registration fee.]
    The method according to claim 1,
    Wherein,
    When the touch screen is operated in a depressurized manner, comparing a distance difference between a point at which the first touch is sensed and a point at which the second touch is sensed with a time difference between a time at which the first touch is sensed and a time at which the second touch is sensed; And
    And determining whether the first touch is maintained and whether the second touch is input according to the comparison result.
  4. [Claim 4 is abandoned upon payment of the registration fee.]
    The method of claim 3,
    Wherein,
    And when the time difference with respect to the distance difference is within a predetermined range, the first touch is maintained and the second touch is determined to have been input.
  5. [Claim 5 is abandoned upon payment of registration fee.]
    The method according to claim 1,
    Wherein,
    When the touch screen is operated in an infrared mode, determining whether the second touch is input while the first touch is not released, or whether the time difference between the time when the detection of the first touch is released and the time when the second touch is detected is less than a predetermined range; And
    And determining whether the first touch is maintained and whether the second touch is input according to the determination result.
  6. delete
  7. [Claim 7 is abandoned upon payment of the registration fee.]
    The method according to claim 1,
    Further comprising, when the touch screen is operated in a depressurized manner, setting an end point of a vector, obtained by extending a vector connecting a point at which the first touch is sensed to a point at which the second touch is sensed by a previously calculated distance, as a point at which the second touch is input.
  8. [Claim 8 is abandoned upon payment of the registration fee.]
    The method according to claim 1,
    Further comprising, when the touch screen is operated in the infrared system, setting a point at which the second touch is sensed as a point at which the second touch is input, when the difference between the time when the first touch is sensed and the time when the second touch is sensed is less than a preset range.
  9. The method according to claim 1,
    Wherein the area is set differently according to a point at which the first touch is input.
  10. The method according to claim 1,
    Wherein the generating comprises:
    And generates a different command according to a point at which the second touch is input based on a point at which the first touch is input.
  11. The method according to claim 1,
    And displaying the set area so that the set area is distinguished from an area other than the set area.
  12. The method according to claim 1,
    Wherein,
    Wherein a touch input in an area other than the set area or an operation in response to the touch is not recognized as a gesture.
  13. [Claim 13 is abandoned upon payment of the registration fee.]
    The method according to claim 1,
    Providing a guide item for input of an ID; And
    And searching for the generated command in the clipboard corresponding to the ID when the ID is input.
  14. [Claim 14 is abandoned upon payment of the registration fee.]
    14. The method of claim 13,
    The ID is input by a third touch of the user or an operation following the third touch,
    Wherein the guide item is an item for guiding a position of the third touch of the user or a pattern of the operation following the third touch.
  15. [Claim 15 is abandoned upon payment of registration fee]
    14. The method of claim 13,
    Clipping the generated command to the clipboard when the clipboard is searched; And
    And executing the generated command with reference to a command clipped to the clipboard.
  16. [Claim 16 is abandoned upon payment of registration fee.]
    16. The method of claim 15,
    Wherein,
    Wherein, when the pre-clipped command is a copy command or a cut command and the generated command is a paste command, the paste command is executed such that the target item corresponding to the copy command or the cut command is pasted.
  17. [Claim 17 is abandoned upon payment of registration fee.]
    16. The method of claim 15,
    Wherein the clipping step comprises:
    Wherein the pre-clipped command is deleted and the generated command is clipped when the pre-clipped command and the generated command are each a copy command or a cut command.
  18. [Claim 18 is abandoned upon payment of registration fee.]
    14. The method of claim 13,
    Further comprising: generating a clipboard corresponding to the ID when the clipboard is not searched, and clipping the command to the generated clipboard; And
    Executing the command.
  19. [Claim 19 is abandoned upon payment of the registration fee.]
    14. The method of claim 13,
    The retrieving step comprises:
    A first retrieving step of retrieving a clipboard stored therein; And
    And a second retrieving step of retrieving a clipboard stored in the external device when the internal stored clipboard does not exist.
  20. [Claim 20 is abandoned upon payment of the registration fee.]
    20. The method of claim 19,
    Wherein the second searching step comprises:
    Broadcasting a message inquiring whether or not the clipboard exists; And
    And receiving an access address for the external device that is unicast from an external device in which the clipboard exists.
  21. [Claim 21 is abandoned upon payment of the registration fee.]
    21. The method of claim 20,
    Further comprising the step of: unicasting a message requesting the clipboard stored in the external device to the external device, or unicasting the generated command to the external device so that the generated command is clipped to the clipboard stored in the external device.
  22. [Claim 22 is abandoned upon payment of the registration fee.]
    22. The method of claim 21,
    Wherein the unicast step comprises:
    Unicasts the generated command to the external device so that the generated command is clipped to the clipboard stored in the external device when the generated command is a copy command or a cut command,
    And when the generated command is a paste command, unicasts a message requesting a clipboard stored in the external device to the external device.
  23. [Claim 23 is abandoned upon payment of the registration fee.]
    23. The method of claim 22,
    Receiving and storing a unicast clipboard from the external device when the generated command is a paste command; And
    And executing the command so that the target item corresponding to the copy command or the cut command is pasted, with reference to a copy command or a cut command clipped to the stored clipboard.
  24. The method according to claim 1,
    Wherein the gesture is input by a drag operation subsequent to the second touch operation or the second touch,
    Wherein the gesture input in accordance with the drag operation is recognized as a different gesture according to the pattern of the drag operation and is input.
  25. 25. The method of claim 24,
    And a step of matching and storing the pattern of the second touch operation and the drag operation with the command.
  26. The method according to claim 1,
    Wherein the generating comprises:
    And generates a command based on the gesture when the first touch is held until input of the gesture is completed.
  27. The method according to claim 1,
    Wherein the generating comprises:
    And generating the command when the operation of the second touch or the second touch is completed.
  28. The method according to claim 1,
    And when the first touch is input for a specific item, the command is a command for the specific item.
  29. The method according to claim 1,
    Wherein the first touch and the second touch are input together on one touch screen.
  30. [Claim 30 is abandoned upon payment of registration fee.]
    The method according to claim 1,
    Wherein a second command based on a second gesture input on a touch screen provided in a second device is generated by referring to a first command generated based on a first gesture input on a touch screen provided in a first device.
  31. A touch screen for receiving touch or gesture input; And
    A control unit for recognizing the operation of the second touch or the second touch as a gesture and generating a command based on the gesture when a second touch satisfying a preset condition is inputted after the first touch is inputted Including,
    Wherein the predetermined condition is a condition that the second touch is input in an area set based on a point where the first touch is inputted in a state where the first touch is maintained.
  32. delete
  33. [Claim 33 is abandoned upon payment of the registration fee.]
    32. The method of claim 31,
    When the touch screen is operated in a depressurized manner,
    Wherein,
    Compares a distance difference between a point at which the first touch is sensed and a point at which the second touch is sensed with a time difference between a time at which the first touch is sensed and a time at which the second touch is sensed, and determines whether the first touch is maintained and whether the second touch is input according to the comparison result.
  34. [Claim 34 is abandoned upon payment of registration fee.]
    34. The method of claim 33,
    Wherein,
    Wherein when the time difference for the distance difference is within a predetermined range, the first touch is maintained and the second touch is determined to be input.
  35. [Claim 35 is abandoned upon payment of registration fee.]
    32. The method of claim 31,
    When the touch screen is operated in an infrared mode,
    Wherein,
    Determines whether the second touch is input while the first touch is not released, or whether the time difference between the time when the detection of the first touch is released and the time when the second touch is detected is less than a preset range, and determines whether the first touch is maintained and whether the second touch is input according to the determination result.
  36. delete
  37. [Claim 37 is abandoned upon payment of registration fee.]
    32. The method of claim 31,
    When the touch screen is operated in a depressurized manner,
    Wherein,
    And sets an end point of a vector obtained by extending a connected vector from a point at which the first touch is sensed to a point at which the second touch is sensed by a previously calculated distance as a point at which the second touch is input.
  38. [Claim 38 is abandoned upon payment of registration fee.]
    32. The method of claim 31,
    When the touch screen is operated in an infrared mode,
    Wherein,
    And sets a point at which the second touch is sensed to a point at which the second touch is input when a difference between a point of time when the first touch is sensed and a point of time when the second touch is sensed is less than a predetermined range.
  39. 32. The method of claim 31,
    Wherein,
    And sets the areas differently according to a point where the first touch is input on the touch screen.
  40. 32. The method of claim 31,
    Wherein,
    And generates different commands according to a point at which the second touch is input based on a point at which the first touch is input.
  41. 32. The method of claim 31,
    Wherein,
    Wherein the touch screen displays the set area and the area other than the set area in a distinguishable manner.
  42. 32. The method of claim 31,
    Wherein,
    Wherein the touch input or the touch operation input in an area other than the set area is not recognized as a gesture.
  43. [Claim 43 is abandoned upon payment of the registration fee.]
    32. The method of claim 31,
    A GUI generating unit for providing a guide item for inputting an ID; And
    And a storage unit for storing a clipboard corresponding to the input ID,
    Wherein,
    And if the ID is input, searches the clipboard corresponding to the ID with the generated command.
  44. [Claim 44 is abandoned upon payment of the registration fee.]
    44. The method of claim 43,
    The ID is input by a third touch of the user or an operation following the third touch,
    Wherein the guide item is an item for guiding the position of the third touch of the user or the pattern of the operation following the third touch.
  45. [Claim 45 is abandoned upon payment of the registration fee.]
    44. The method of claim 43,
    Wherein,
    Wherein when the clipboard is searched, the generated command is clipped to the clipboard, and the generated command is executed by referring to the command clipped to the clipboard.
  46. [Claim 46 is abandoned upon payment of the registration fee.]
    46. The method of claim 45,
    Wherein,
    And the paste command is executed so that the target item corresponding to the copy command or the cut command is pasted when the pre-clipped command is a copy command or a cut command and the generated command is a paste command.
  47. [Claim 47 is abandoned upon payment of the registration fee.]
    46. The method of claim 45,
    Wherein,
    Wherein the control unit deletes the pre-clipped command and clips the generated command when the pre-clipped command and the generated command are each a copy command or a cut command.
  48. [Claim 48 is abandoned upon payment of registration fee.]
    44. The method of claim 43,
    Wherein,
    If the clipboard is not searched, a clipboard corresponding to the ID is generated and stored in the storage unit, and the command is clipped to the generated clipboard.
  49. [Claim 49 is abandoned upon payment of the registration fee.]
    44. The method of claim 43,
    Further comprising: a communication interface for communicating with an external device,
    Wherein,
    Searches the clipboard stored in the storage unit, and searches the clipboard stored in the external device through communication with the external device when the clipboard stored in the storage unit does not exist.
  50. [Claim 50 is abandoned upon payment of the registration fee.]
    50. The method of claim 49,
    Wherein,
    Broadcasts a message inquiring whether the clipboard exists through the communication interface, and receives an access address for the external device, unicast from the external device in which the clipboard exists.
  51. [Claim 51 is abandoned upon payment of the registration fee.]
    51. The method of claim 50,
    Wherein,
    Unicasts, to the external device via the communication interface, a message requesting the clipboard stored in the external device, or unicasts the generated command to the external device so that the generated command is clipped to the clipboard stored in the external device.
  52. [Claim 52 is abandoned upon payment of the registration fee.]
    52. The method of claim 51,
    Wherein,
    Unicasts the generated command to the external device so that the generated command is clipped to the clipboard stored in the external device when the generated command is a copy command or a cut command,
    And unicasts a message requesting a clipboard stored in the external device to the external device when the generated command is a paste command.
  53. [Claim 53 is abandoned upon payment of the registration fee.]
    53. The method of claim 52,
    Wherein,
    Receiving the unicast clipboard from the external device when the generated command is a paste command, storing the unicast clipboard in the storage unit,
    And the command is executed so that the target item corresponding to the copy command or the cut command is pasted, with reference to the copy command or the cut command previously clipped to the stored clipboard.
  54. 32. The method of claim 31,
    Wherein the gesture is input by a drag operation subsequent to the second touch operation or the second touch,
    Wherein the gesture input in accordance with the drag operation is recognized as a different gesture according to the pattern of the drag operation and is input.
  55. 55. The method of claim 54,
    And a storage unit for storing and matching the second touch operation and the pattern of the drag operation with the command.
  56. 32. The method of claim 31,
    Wherein,
    And generates a command based on the gesture when the first touch is held until input of the gesture is completed.
  57. 32. The method of claim 31,
    Wherein,
    And generates the command when the operation of the second touch or the second touch is completed.
  58. 32. The method of claim 31,
    Wherein when the first touch is input for a specific item, the command is a command for the specific item.
  59. 32. The method of claim 31,
    Wherein the first touch and the second touch are input together on one touch screen.
  60. [Claim 60 is abandoned upon payment of the registration fee.]
    32. The method of claim 31,
    Wherein,
    Wherein, when the display device operates in cooperation with an external device, a second command based on a second gesture input on the touch screen provided on the external device is generated by referring to a first command generated based on a first gesture input on the touch screen provided on the display device.
KR1020100005971A 2010-01-22 2010-01-22 Command generating method and display apparatus using the same KR101789279B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020100005971A KR101789279B1 (en) 2010-01-22 2010-01-22 Command generating method and display apparatus using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100005971A KR101789279B1 (en) 2010-01-22 2010-01-22 Command generating method and display apparatus using the same

Publications (2)

Publication Number Publication Date
KR20110086309A KR20110086309A (en) 2011-07-28
KR101789279B1 true KR101789279B1 (en) 2017-10-23

Family

ID=44922735

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100005971A KR101789279B1 (en) 2010-01-22 2010-01-22 Command generating method and display apparatus using the same

Country Status (1)

Country Link
KR (1) KR101789279B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103713755B (en) * 2012-09-29 2017-02-08 北京汇冠新技术股份有限公司 Touch recognizing device and recognizing method
CN107608506A (en) * 2017-09-01 2018-01-19 北京小米移动软件有限公司 Image processing method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100748469B1 (en) 2006-06-26 2007-08-06 삼성전자주식회사 User interface method based on keypad touch and mobile device thereof
JP2008009668A (en) 2006-06-29 2008-01-17 Syn Sophia Inc Driving method and input method for touch panel
KR100914438B1 (en) * 2008-05-20 2009-08-28 엘지전자 주식회사 Electronic device with touch device and method of executing functions thereof
US20090327964A1 (en) * 2008-06-28 2009-12-31 Mouilleseaux Jean-Pierre M Moving radial menus

Also Published As

Publication number Publication date
KR20110086309A (en) 2011-07-28

Similar Documents

Publication Publication Date Title
US8004501B2 (en) Hand-held device with touchscreen and digital tactile pixels
US9836201B2 (en) Zoom-based gesture user interface
US9733752B2 (en) Mobile terminal and control method thereof
JP4450657B2 (en) Display device
AU2012281308B2 (en) Method and apparatus for controlling content using graphical object
US9395905B2 (en) Graphical scroll wheel
US9891732B2 (en) Selective rejection of touch contacts in an edge region of a touch surface
EP2682853B1 (en) Mobile device and operation method control available for using touch and drag
EP2184673A1 (en) Information processing apparatus, information processing method and program
KR20130058752A (en) Apparatus and method for proximity based input
US20100277429A1 (en) Operating a touch screen control system according to a plurality of rule sets
US20070277124A1 (en) Touch screen device and operating method thereof
EP2669786A2 (en) Method for displaying item in terminal and terminal using the same
JP2008505381A (en) Method and apparatus for preventing contamination of display device
KR20100055716A (en) Method for controlling map and mobile terminal using the same
KR20120004978A (en) Detecting touch on a curved surface
US10386992B2 (en) Display device for executing a plurality of applications and method for controlling the same
KR20130054073A (en) Apparatus having a touch screen processing plurality of apllications and method for controlling thereof
CN101114204B (en) Remote input device and electronic apparatus using the same
US9104239B2 (en) Display device and method for controlling gesture functions using different depth ranges
US20100257447A1 (en) Electronic device and method for gesture-based function control
US8217905B2 (en) Method and apparatus for touchscreen based user interface interaction
US8593398B2 (en) Apparatus and method for proximity based input
KR20120037366A (en) Detecting touch on a curved surface
US20140104240A1 (en) Light-based proximity detection system and user interface

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant