KR20150068001A - Apparatus and method for recognizing gesture using sensor

Apparatus and method for recognizing gesture using sensor

Info

Publication number
KR20150068001A
Authority
KR
South Korea
Prior art keywords
sensor
value
object
values
method
Prior art date
Application number
KR1020130153692A
Other languages
Korean (ko)
Inventor
정문식
이성오
최성도
차현희
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사
Priority to KR1020130153692A
Publication of KR20150068001A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00335 Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
    • G06K9/00355 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041 Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04101 2.5D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface and also measures the distance of the input means within a short range in the Z direction, possibly with a separate measurement setup

Abstract

The present invention provides a method for recognizing a gesture using a sensor, comprising the steps of: continuously recognizing sensor values for an object with a plurality of sensors having different operating ranges; fusing the sensor values to generate a control value; and performing a function corresponding to the control value.

Description

APPARATUS AND METHOD FOR RECOGNIZING GESTURE USING SENSOR

The present invention relates to a method for recognizing an operation using a sensor.

Description of the Related Art

In general, a portable terminal detects an operation either by recognizing up, down, left, and right movements or by detecting a specific operation using a camera. Among such techniques, hand gesture recognition refers to recognizing hand-related movements, such as hand motions and hand shapes, using various sensors. Conventional hand motion recognition methods recognize hand motion using various sensors such as a touch sensor, a proximity sensor, and a camera sensor.

The conventional motion recognition method recognizes an operation using a plurality of sensors, yet fails to recognize the operation of an object when the distance to the object changes. This is because each sensor has a different operating range and characteristics: once the object leaves the operating range of a first sensor, the operation of the object must be recognized by a second sensor instead, but the conventional method does not recognize the object's motion continuously across the two sensors. That is, the conventional motion recognition method has the disadvantage that, when the first sensor can no longer recognize the motion of the object, it cannot continue to recognize that motion with the second sensor.

An object of the present invention is to provide a method and apparatus for recognizing an operation using sensors, capable of continuously recognizing the operation of an object, without interruption, across a plurality of sensors having different operating ranges.

A method of recognizing an operation using a sensor according to an embodiment of the present invention includes continuously recognizing sensor values for an object in a plurality of sensors having different operating ranges, generating a control value by fusing the sensor values, and performing a function corresponding to the control value.
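
To make the shape of this three-step method concrete, the following is a minimal, hedged sketch in Python; the sensor interface, the fuse() strategy, and the function table are our placeholder assumptions, not an API defined by the patent.

    # Hedged sketch of the claimed recognize -> fuse -> execute pipeline.
    # sensor.read(), fuse(), and the functions dict are illustrative assumptions.
    def recognize_gesture(sensors, fuse, functions):
        # Step 110: continuously recognize sensor values for the object.
        values = [sensor.read() for sensor in sensors]
        # Step 120: fuse the recognized values into a single control value.
        control_value = fuse([v for v in values if v is not None])
        # Step 130: execute the function mapped to that control value.
        functions.get(control_value, lambda: None)()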

A motion recognition apparatus using a sensor according to an embodiment of the present invention includes a sensor unit for continuously recognizing sensor values for an object, and a controller for generating a control value by combining the sensor values and executing a function corresponding to the control value.

According to an embodiment of the present invention, a plurality of sensors having different operation ranges can continuously recognize an operation of an object without interruption.

FIG. 1 is a flowchart illustrating an operation recognition method using a sensor according to an embodiment of the present invention.
FIG. 2 is a view showing the operating ranges of a plurality of sensors according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of recognizing a depth value according to an embodiment of the present invention.
FIGS. 4A to 4C are diagrams illustrating an example of recognizing a pointer value according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an example of tracking an object according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an example of recognizing a swipe value according to an embodiment of the present invention.
FIGS. 7A and 7B are diagrams illustrating an example of generating a control value by combining sensor values of the same type according to an embodiment of the present invention.
FIGS. 8A and 8B are diagrams illustrating an example of generating a control value by combining sensor values of different types according to an embodiment of the present invention.
FIGS. 9A and 9B are diagrams illustrating an example of generating a control value by combining sensor values of the same type with sensor values of a different type according to an embodiment of the present invention.
FIG. 10 is a block diagram illustrating the configuration of a motion recognition apparatus using a sensor according to an embodiment of the present invention.

Hereinafter, various embodiments will be described in detail with reference to the accompanying drawings. Note that, in the drawings, the same components are denoted by the same reference numerals wherever possible. Detailed descriptions of well-known functions and constructions that could obscure the gist of the present invention are omitted. In the following description, only the parts necessary for understanding operations according to various embodiments of the present invention are described; descriptions of other parts are omitted so as not to obscure the gist of the invention.

The motion recognition device using the sensor of the present invention may be included in an "electronic device".

The electronic device according to the present invention may be an apparatus including a communication function. For example, the electronic device can be a smartphone, a tablet personal computer (PC), a mobile phone, a videophone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, or a wearable device such as a head-mounted device (HMD) like electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, an electronic appcessory, an electronic tattoo, or a smartwatch.

According to some embodiments, the electronic device may be a smart home appliance with a communication function. The smart home appliance may include, for example, at least one of a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console, an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame.

According to some embodiments, the electronic device may be one of various medical devices (e.g., a magnetic resonance angiography (MRA) device, a magnetic resonance imaging (MRI) device, or a computed tomography (CT) device), a global positioning system (GPS) receiver, an event data recorder (EDR), a flight data recorder (FDR), an automotive infotainment device, marine electronic equipment (e.g., a marine navigation device and a gyro compass), avionics, a security device, or an industrial or home robot.

According to some embodiments, the electronic device may be a piece of furniture or a part of a building/structure including a communication function, an electronic board, an electronic signature receiving device, a projector, or various measuring instruments (e.g., meters for water, electricity, gas, or radio waves). An electronic device according to the present invention may be one or a combination of the various devices described above. It should also be apparent to those skilled in the art that the electronic device according to the present invention is not limited to the above-described devices.

FIG. 1 is a flowchart illustrating an operation recognition method using a sensor according to an embodiment of the present invention. The "motion recognition method using a sensor" of the present invention can be performed by a motion recognition apparatus using a sensor.

Referring to FIG. 1, in step 110, a motion recognition device using a sensor (hereinafter referred to as "motion recognition device") continuously recognizes sensor values for the object with a plurality of sensors having different operating ranges. Here, the object is the target whose operation the motion recognition apparatus recognizes, for example, a user's hand.

Conventionally, a plurality of sensors are used to recognize an operation; however, because each sensor has a different operating range and characteristics, the operation of the object is not recognized continuously when the distance to the object changes. That is, if the object leaves the operating range of the first sensor, the operation of the object should be recognized by the second sensor instead; conventionally, however, the first sensor and the second sensor did not recognize the operation of the object continuously.

The present invention therefore makes it possible to recognize the operation of an object seamlessly and continuously across a plurality of sensors having different operating ranges.

In an embodiment, the motion recognition device may use at least one of a touch sensor, a proximity sensor, a camera sensor, and a dynamic vision sensor as sensors for recognizing sensor values for the object. The plurality of sensors have different operating ranges and characteristics.

Table 1

Sensor type            Distance (mm)   Fps           Power (mW)    Resolution   X/Y   Z     Luminance (Lux)
Proximity sensor       0 to 150        -             Low (≤5)      -            HW    HW    High
Dynamic vision sensor  0 to 300        High (≥120)   Low (≤10)     120x120      SW    SW    -
Camera sensor          0 to 500        30            High (≥100)   ≥QVGA        SW    SW    Low (≥20)

FIG. 2 is a view showing the operating ranges of a plurality of sensors according to an embodiment of the present invention.

Referring to FIG. 2 and Table 1, when the sensors are arranged on the basis of operating distance (mm), the contact sensor 210 covers a distance (or vertical distance) from one surface of the motion recognition device to the object of 0 to 50 (a), the proximity sensor 220 covers 0 to 150 (b), the dynamic vision sensor 240 covers 0 to 300 (c), and the camera sensor 230 covers 0 to 500. However, the operating angle of the proximity sensor 220 is greater than the 70° operating angles of the camera sensor 230 and the dynamic vision sensor 240.

In addition, the frame rate (fps) of the dynamic vision sensor 240 is higher than that of the camera sensor 230. The operating power (mW) is lowest for the proximity sensor 220 and highest for the camera sensor 230. The resolution of the camera sensor 230 is higher than that of the dynamic vision sensor 240; for reference, Quarter Video Graphics Array (QVGA) is a resolution of 320x240. As for luminance (Lux), the proximity sensor 220 operates at high luminance, while the luminance figure of the camera sensor 230 is low (on the order of 20).

That is, the plurality of sensors have different operating ranges and characteristics. Accordingly, each sensor can output sensor values at a data level, a feature level, or a decision level, depending on the level of its data. The motion recognition apparatus can therefore generate a comprehensive control value using the data-level, feature-level, and decision-level outputs of the individual sensors, and so implement motion recognition technology that is not limited by the operating range and characteristics of any single sensor.
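
As a rough illustration of the three output levels, the sketch below tags each reading with the level at which it was produced; the class names and example values are ours, since the patent does not prescribe a data format.

    from dataclasses import dataclass
    from enum import Enum

    class Level(Enum):
        DATA = "data-level"          # raw reading, e.g. a distance in mm
        FEATURE = "feature-level"    # derived quantity, e.g. an x/y/z pointer
        DECISION = "decision-level"  # discrete outcome, e.g. a recognized swipe

    @dataclass
    class SensorOutput:
        sensor: str    # "touch", "proximity", "camera", or "dvs"
        level: Level
        value: object  # mm distance, (x, y, z) tuple, or gesture label

    # A comprehensive control value can draw on whichever level each sensor emits:
    readings = [
        SensorOutput("proximity", Level.DATA, 120.0),
        SensorOutput("camera", Level.FEATURE, (0.4, 0.2, 0.7)),
        SensorOutput("dvs", Level.DECISION, "swipe_left"),
    ]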

In an embodiment, the plurality of sensors may recognize at least one of a depth value, a pointer value, and a swipe value as the sensor value.

FIG. 3 is a diagram illustrating an example of recognizing a depth value according to an embodiment of the present invention.

Referring to FIG. 3, the motion recognition device can recognize, as a depth value, the distance from one surface (the screen) of the device as the object is pushed toward and pulled away from that surface. The proximity sensor 320 can recognize distances up to 150 mm, and the camera sensor 330 and the dynamic vision sensor 340 can recognize distances up to 300 mm. Therefore, even if the distance of the object changes from 0 to 300, the motion recognition device continuously recognizes the object's depth value across the contact sensor 310, the proximity sensor 320, the camera sensor 330, and the dynamic vision sensor 340, and may perform a function corresponding to the depth value.
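
A small sketch of how overlapping operating ranges permit seamless depth recognition follows; the range table tracks Table 1 and FIG. 2 (the touch sensor range of 0-50 mm is taken from FIG. 2), while the function names are our own.

    # Operating ranges in mm, per Table 1 / FIG. 2 (touch sensor assumed 0-50).
    RANGES = {"touch": (0, 50), "proximity": (0, 150), "dvs": (0, 300), "camera": (0, 500)}

    def active_sensors(depth_mm: float) -> list[str]:
        """Return the sensors whose operating range still covers the object."""
        return [name for name, (lo, hi) in RANGES.items() if lo <= depth_mm <= hi]

    # As the object recedes, recognition hands over without interruption:
    for depth in (40, 120, 250, 400):
        print(depth, active_sensors(depth))
    # 40 -> all four; 120 -> proximity/dvs/camera; 250 -> dvs/camera; 400 -> camera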

FIGS. 4A to 4C are diagrams illustrating an example of recognizing a pointer value according to an embodiment of the present invention.

Referring to FIG. 4A, the motion recognition apparatus recognizes a pointer value from the sensor values by using the object as a pointer in space. For example, the pointer value may include x, y, and z coordinate values in space. The plurality of sensors that recognize the pointer value (a touch sensor, a proximity sensor, a camera sensor, and a dynamic vision sensor) have different operating ranges; nevertheless, they may recognize the pointer value for the object continuously.

Referring to FIG. 4B, the motion recognition apparatus may recognize a sign from the pointer value. For example, the sign may include a fist 401, a palm 402, an OK sign 403, a victory sign 404, a spread palm 405, a fingertip side 406, a thumb-up 407, a raised finger 408, a raised finger with the thumb extended 409, and the like.

Referring to FIG. 4C, the motion recognition apparatus can perform a function corresponding to the sign. For example, if the sign is OK, the motion recognition apparatus can assign it a "magnify" function and enlarge or reduce the screen of the motion recognition apparatus.
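
A hedged sketch of mapping a recognized sign to a device function, in the spirit of the OK-sign example, is shown below; the dictionary keys and the non-OK actions are illustrative, not taken from the patent.

    # Illustrative sign-to-function dispatch table; "ok" -> magnify per the example.
    SIGN_ACTIONS = {
        "ok": lambda: print("magnify screen"),
        "fist": lambda: print("grab object"),
        "victory": lambda: print("take screenshot"),
    }

    def on_sign(sign: str) -> None:
        action = SIGN_ACTIONS.get(sign)
        if action is not None:
            action()  # perform the function corresponding to the sign

    on_sign("ok")  # -> magnify screen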

FIG. 5 is a diagram illustrating an example of tracking an object according to an embodiment of the present invention.

Referring to FIG. 5, the motion recognition apparatus may determine the mounting position of at least one of a contact sensor, a proximity sensor, a camera sensor, and a dynamic vision sensor in consideration of each sensor's operating range. For example, when the motion recognition device is a smartphone, a contact sensor 510 may be mounted on the screen, a proximity sensor 520 and a camera sensor 530 at the top of the screen, and a dynamic vision sensor 540 at the bottom of the screen.

Thus, after each sensor is mounted, the motion recognition device can track the object using the sensor values, taking into account each sensor's operating range and characteristics. For example, since the proximity sensor 520 and the camera sensor 530 mounted at the top of the screen can easily recognize the pointer value, the motion recognition device can track the object based on the pointer value (501). Alternatively, since the dynamic vision sensor 540 mounted at the bottom of the screen readily recognizes motion of the object, the motion recognition device can track the object based on the motion value (502). Here, the motion value can be interpreted as a swipe value.

Accordingly, the motion recognition device can track the motion of the object, and thereby recognize its operation, even in the area (d in FIG. 2) that the touch sensor 510, the proximity sensor 520, the camera sensor 530, and the dynamic vision sensor 540 cannot individually recognize.

FIG. 6 is a diagram illustrating an example of recognizing a swipe value according to an embodiment of the present invention.

Referring to FIG. 6, the motion recognition apparatus recognizes a swipe value using swipe recognition, a hand-motion recognition technique. The swipe value is a sensor value in which the depth value or the pointer value of the object changes with time. Accordingly, the motion recognition apparatus can recognize the swipe value from the motion of the object, or its velocity, over a predetermined time. Even if the depth value of the object changes during this time, the plurality of sensors can continuously recognize the motion of the object, so the motion recognition apparatus can recognize the object's motion seamlessly, irrespective of the limited operating range of each sensor.
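
Below is a minimal sketch of deriving a swipe value from how a pointer coordinate changes over a predetermined time; the window format and the speed threshold are our assumptions, since the patent leaves the exact criterion open.

    # Hedged sketch: classify a swipe from pointer samples over a time window.
    def swipe_value(samples: list[tuple[float, float]], min_speed: float = 0.5):
        """samples: (timestamp_seconds, x_position) pairs within the window."""
        if len(samples) < 2:
            return None
        (t0, x0), (t1, x1) = samples[0], samples[-1]
        speed = (x1 - x0) / (t1 - t0)  # displacement per second
        if abs(speed) < min_speed:
            return None  # too slow to count as a swipe
        return "swipe_right" if speed > 0 else "swipe_left"

    print(swipe_value([(0.0, 0.1), (0.2, 0.6)]))  # -> swipe_right (2.5 units/s)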

Referring again to step 120 of FIG. 1, the motion recognition device fuses the sensor values to generate a control value. The sensor values recognized by the plurality of sensors may be of the same type or of different types; accordingly, the motion recognition apparatus may combine sensor values of the same type, or sensor values of different types, to generate a control value for performing the function.

In an embodiment, when the types of the sensor values are the same, the motion recognition device can generate the control value by combining the absolute values of the sensor values or the relative values of the sensor values in consideration of the operation range of each sensor. For example, the sensor values recognized by the touch sensor and the proximity sensor have absolute values, and the sensor values recognized by the camera sensor and the dynamic vision sensor may have relative values.

In another embodiment, the motion recognition apparatus may update the sensor values when the types of the sensor values are different, and may combine the updated sensor values to generate the control value.

In another embodiment, for sensor values of the same type, the motion recognition apparatus generates a first combined value by combining the absolute values or the relative values of the sensor values, taking into account the operating range of each sensor; for sensor values of different types, it updates the sensor values and combines the updated values to generate a second combined value; it then combines the first combined value and the second combined value to generate a control value.

In step 130, the motion recognition device executes a function corresponding to the control value.

FIGS. 7A and 7B are diagrams illustrating an example of generating a control value by combining sensor values of the same type according to an embodiment of the present invention.

Referring to FIG. 7A, for convenience of explanation, the depth value recognized by the first sensor 701 is referred to as a first depth value, and the depth value recognized by the second sensor 702 is referred to as a second depth value.

In step 703, the motion recognition apparatus can check whether the first sensor 701 has recognized the first depth value. If it has, in step 704, the motion recognition apparatus can check whether the second sensor 702 has recognized the second depth value. In step 705, the motion recognition device may combine the first depth value and the second depth value using a weighting technique or a filter technique. For example, the motion recognition apparatus applies the first sensor's weight to the first depth value, applies the second sensor's weight to the second depth value, and then calculates the weighted sum of the two weighted depth values.
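
A minimal sketch of this branching weighted-sum fusion (steps 703-710) follows; the specific weights are our assumption, since the patent leaves the weighting scheme open.

    # Steps 703-710 as a hedged sketch: weighted sum when both depths exist,
    # otherwise fall back to whichever depth value was recognized.
    def fuse_depths(d1, d2, w1: float = 0.7, w2: float = 0.3):
        if d1 is not None and d2 is not None:
            return (w1 * d1 + w2 * d2) / (w1 + w2)  # step 705: weighted sum
        if d1 is not None:
            return d1  # step 706: only the first sensor recognized a value
        return d2      # step 708: only the second sensor recognized a value

    print(fuse_depths(100.0, 110.0))  # -> 103.0
    print(fuse_depths(None, 110.0))   # -> 110.0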

In step 710, the motion recognition apparatus may generate the output value of step 705 as the control value.

Alternatively, if the second sensor 702 does not recognize the second depth value, then at step 706 and step 710, the motion recognition device can generate the control value with only the first depth value.

Alternatively, if the first sensor 701 has not recognized the first depth value, then in step 707, the motion recognition apparatus can check whether the second sensor 702 has recognized the second depth value. In steps 708 and 710, the motion recognition device may generate a control value based only on the second depth value.

Alternatively, in step 709, neither the first depth value nor the second depth value may be recognized.

Referring to FIG. 7B, in step 711a, the motion recognition apparatus can recognize the first depth value using the touch sensor 711. In step 712a, the motion recognition device may recognize the second depth value using the proximity sensor 712. In step 713a, the motion recognition device can recognize the third depth value using the camera sensor 713. For reference, the depth values recognized by the touch sensor 711 and the proximity sensor 712 are absolute values, while the depth values recognized by the camera sensor 713 and the dynamic vision sensor may be relative values.

In step 714, the motion recognition apparatus may combine the first to third depth values. For example, the motion recognition apparatus may combine the absolute values of the first and second depth values using a weighting technique or a filtering technique, map the absolute value of the second depth value to the relative value of the third depth value, and map the absolute value of the first depth value to the relative value of the third depth value.
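
The "mapping" of absolute and relative depth values is not spelled out in the patent; one plausible reading, sketched below under a simple linear assumption of ours, is to calibrate the relative (camera/DVS) scale against an absolute (touch/proximity) reading while both sensors still see the object.

    # Hedged sketch: map a relative depth onto the absolute scale by estimating
    # a scale factor in the overlap region. The linear model is our assumption.
    class RelativeToAbsolute:
        def __init__(self) -> None:
            self.scale = 1.0

        def calibrate(self, absolute_mm: float, relative: float) -> None:
            if relative != 0:
                self.scale = absolute_mm / relative  # both sensors see the object

        def to_absolute(self, relative: float) -> float:
            return self.scale * relative  # usable once only the camera remains

    mapper = RelativeToAbsolute()
    mapper.calibrate(absolute_mm=140.0, relative=0.35)  # overlap at 140 mm
    print(mapper.to_absolute(0.6))  # -> 240.0 mm, beyond the proximity range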

In step 715, the motion recognition device may generate a control value with the combined depth values.

In step 716, the motion recognition device may perform a function corresponding to the control value. The motion recognition apparatus may set the function corresponding to each control value in advance and store the function for each control value in a storage unit (not shown). For example, the motion recognition apparatus may set "DMB" for a first control value, "screen enlargement" for a second control value, and "screen reduction" for a third control value.
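
The stored control-value-to-function table of step 716 could look like the following sketch; the key names are illustrative, while the three functions follow the example in the text.

    # Step 716 sketch: a stored table mapping control values to preset functions.
    FUNCTIONS = {
        "control_1": lambda: print("launch DMB"),
        "control_2": lambda: print("enlarge screen"),
        "control_3": lambda: print("reduce screen"),
    }

    def execute(control_value: str) -> None:
        FUNCTIONS.get(control_value, lambda: None)()  # ignore unknown values

    execute("control_2")  # -> enlarge screen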

FIGS. 8A and 8B are diagrams illustrating an example of generating a control value by combining sensor values of different types according to an embodiment of the present invention.

Referring to FIG. 8A, in step 803, the motion recognition apparatus can check whether the first sensor 801 has recognized the first depth value. If it has, in step 804, the motion recognition apparatus can check whether the second sensor 802 has recognized the second pointer value. In step 805, the motion recognition apparatus may update the first depth value and the second pointer value; for example, it can check whether the first depth value or the second pointer value has changed.

In step 810, the motion recognition apparatus may generate the output value of step 805 as a control value.

Alternatively, if the second sensor 802 does not recognize the second pointer value, then in step 806, the motion recognition device may update the first depth value. In step 810, the motion recognition apparatus may generate the control value based only on the first depth value.

Alternatively, if the first sensor 801 does not recognize the first depth value, in step 807, the motion recognition apparatus can confirm whether the second sensor 802 has recognized the second pointer value. In step 808, the motion recognition device may update the second pointer value. In this case, in step 810, the motion recognition apparatus can generate the control value only by the second pointer value.

Alternatively, if the first sensor 801 does not recognize the first depth value and the second sensor 802 does not recognize the second pointer value, then in step 809, the motion recognition apparatus may retrieve the previous first depth value and the previous second pointer value. In step 810, the motion recognition device may combine the previous first depth value and the previous second pointer value to generate a control value. For reference, the previous first depth value refers to a depth value recognized temporally prior to the first depth value.
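
A short sketch of this step 809 fallback follows: keep the most recent readings so that, when neither sensor currently recognizes a value, the previous first depth value and previous second pointer value can still be combined. The class and method names are ours.

    # Hedged sketch of step 809: fall back to temporally prior sensor values.
    class SensorHistory:
        def __init__(self):
            self.prev_depth = None
            self.prev_pointer = None

        def update(self, depth, pointer):
            if depth is not None:
                self.prev_depth = depth      # remember the prior depth value
            if pointer is not None:
                self.prev_pointer = pointer  # remember the prior pointer value

        def current_or_previous(self, depth, pointer):
            return (depth if depth is not None else self.prev_depth,
                    pointer if pointer is not None else self.prev_pointer)

    history = SensorHistory()
    history.update(120.0, (0.3, 0.4, 0.5))
    print(history.current_or_previous(None, None))  # -> (120.0, (0.3, 0.4, 0.5))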

Referring to FIG. 8B, in step 811a, the motion recognition apparatus can recognize the first pointer value using the touch sensor 811. In step 812a, the motion recognition apparatus can recognize the second pointer value using the proximity sensor 812. In steps 813a and 813b, the motion recognition device can recognize the third pointer value and the third swipe value using the camera sensor 813. For reference, the pointer values recognized by the touch sensor 811 and the proximity sensor 812 are absolute values, while the pointer values recognized by the camera sensor 813 and the dynamic vision sensor may be relative values.

In step 814, the motion recognition apparatus may combine the first to third pointer values to generate a first combined value. For example, the motion recognition apparatus may combine the absolute values of the first and second pointer values using a weighting technique or a filter technique, map the absolute value of the second pointer value to the relative value of the third pointer value, and map the absolute value of the first pointer value to the relative value of the third pointer value, thereby generating the first combined value.

In step 815, the motion recognition device may generate a second combined value by combining the third swipe value with the first combined value (the combined pointer value). At this time, the motion recognition apparatus may update the first combined value, update the third swipe value, and combine the updated first combined value and the updated third swipe value to generate the second combined value.

In step 816, the motion recognition device may generate the second combined value as a control value.

In step 817, the motion recognition apparatus may execute a function corresponding to the control value.

FIGS. 9A and 9B illustrate an example of generating a control value by combining sensor values of the same type with sensor values of a different type according to an embodiment of the present invention.

Referring to FIG. 9A, in steps 903 and 904, the motion recognition apparatus can recognize the first depth value and the first pointer value using the first sensor 901. In steps 905 and 906, the motion recognition device may recognize the second depth value and the second swipe value using the second sensor 902.

In step 907, the motion recognition apparatus may generate the first combined value by combining the absolute value of the first depth value and the absolute value of the second depth value using a weighting technique or a filter technique.

In step 908, the motion recognition device may combine the first pointer value and the second swipe value to generate a second combined value. For example, the motion recognition apparatus may update the first pointer value, update the second swipe value, and combine the updated first pointer value and the updated second swipe value to generate the second combined value.

In step 909, the motion recognition apparatus may combine the first combined value and the second combined value to generate a control value.

Referring to FIG. 9B, in steps 911a and 911b, the motion recognition apparatus can recognize the first swipe value and the first depth value using the touch sensor 911. In steps 912a and 912b, the motion recognition apparatus can recognize the second swipe value and the second depth value using the proximity sensor 912. In steps 913a and 913b, the motion recognition apparatus can recognize the third swipe value and the third depth value using the camera sensor 913. For reference, the swipe values recognized by the touch sensor 911 and the proximity sensor 912 are absolute values, while the swipe values recognized by the camera sensor 913 and the dynamic vision sensor may be relative values.

In step 914, the motion recognition apparatus may combine the first to third swipe values to generate a first combined value. For example, the motion recognition apparatus may combine the absolute values of the first and second swipe values using a weighting technique or a filter technique, map the absolute value of the second swipe value to the relative value of the third swipe value, and map the absolute value of the first swipe value to the relative value of the third swipe value, thereby generating the first combined value.

In step 915, the motion recognition apparatus may generate the second combined value by combining the first depth value and the third depth value.

In step 916, the motion recognition device may combine the first combined value and the second combined value. The motion recognition apparatus may update the first combined value, update the second combined value, and combine the updated first combined value and the updated second combined value.

In step 917, the motion recognition apparatus may generate the output value of step 916 as a control value.

In step 918, the motion recognition apparatus may execute a function corresponding to the control value.

FIG. 10 is a block diagram illustrating the configuration of a motion recognition apparatus using a sensor according to an embodiment of the present invention.

Referring to FIG. 10, the motion recognition apparatus 1000 includes a sensor unit 1010 and a control unit 1060.

In an embodiment, the sensor unit 1010 may include at least one of a touch sensor 1020, a proximity sensor 1030, a camera sensor 1040, and a dynamic vision sensor 1050. The control unit 1060 may determine the mounting position of at least one of the touch sensor 1020, the proximity sensor 1030, the camera sensor 1040, and the dynamic vision sensor 1050 in consideration of the operating range of each sensor.

The sensor unit 1010 continuously recognizes sensor values for the object. For example, the sensor unit 1010 may recognize, as the sensor value, at least one of a depth value, a pointer value, and a swipe value for the object (see FIG. 2 to FIG. 5). The sensor unit 1010 may recognize the swipe value from the motion of the object, or its velocity, over a predetermined time, and may control the touch sensor 1020, the proximity sensor 1030, the camera sensor 1040, and the dynamic vision sensor 1050 to continuously recognize the motion of the object as its depth value changes.

The controller 1060 combines the sensor values to generate a control value, and executes a function corresponding to the control value. The control unit 1060 may set a function corresponding to the control value in advance and store the function for each control value in a storage unit (not shown). For example, the control unit 1060 may set "DMB" for the first control value, set "screen enlargement" for the second control value, and set "screen reduction" for the third control value.

In an embodiment, when the types of the sensor values are the same, the controller 1060 can generate the control value by combining the absolute values of the sensor values or the relative values of the sensor values in consideration of the operation range of each sensor. For example, the sensor values recognized by the touch sensor and the proximity sensor have absolute values, and the sensor values recognized by the camera sensor and the dynamic vision sensor may have relative values.

In another embodiment, the controller 1060 may update sensor values when the types of sensor values are different, and may combine the updated sensor values to generate control values.

In another embodiment, for sensor values of the same type, the control unit 1060 generates a first combined value by combining the absolute values or the relative values of the sensor values, taking into account the operating range of each sensor; for sensor values of different types, it updates the sensor values and combines the updated values to generate a second combined value; it then combines the first combined value and the second combined value to generate a control value.

It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide further explanation of the invention as claimed. Accordingly, all changes or modifications derived from the technical idea of the present invention should be construed as falling within the scope of the present invention.

1000: Motion recognition device using sensor
1010: Sensor unit
1020: Contact sensor 1030: Proximity sensor
1040: Camera sensor 1050: Dynamic vision sensor
1060: Control unit

Claims (20)

  1. A method of recognizing a gesture using a sensor, the method comprising the steps of:
    recognizing sensor values for an object continuously in a plurality of sensors having different operating ranges;
    generating a control value by fusing the sensor values; and
    executing a function corresponding to the control value.
  2. The method according to claim 1,
    Wherein the recognizing comprises:
    Determining a mounting position of at least one of a contact sensor, a proximity sensor, a camera sensor, and a dynamic vision sensor in consideration of an operation range of the sensor.
  3. The method according to claim 1,
    Wherein the recognizing comprises:
    And recognizing a depth value for the object as the sensor value.
  4. The method according to claim 1,
    Wherein the recognizing comprises:
    And recognizing a pointer value for the object as the sensor value.
  5. The method according to claim 1,
    Wherein the recognizing comprises:
    And recognizing a swipe value for the object as the sensor value.
  6. The method according to claim 5,
    Wherein the recognizing comprises:
    Recognizing the swipe value according to the motion of the object or the speed of the object for a predetermined time.
  7. The method according to claim 6,
    Wherein the recognizing comprises:
    Wherein the plurality of sensors continuously recognize the motion of the object according to the change of the depth value for the object.
  8. The method according to claim 1,
    Wherein the generating comprises:
    And generating a control value by combining an absolute value of each sensor value or a relative value of each sensor value in consideration of an operation range of each sensor when the types of the sensor values are the same.
  9. The method according to claim 1,
    Wherein the generating comprises:
    Updating sensor values when the types of sensor values are different; And
    Combining the updated sensor values to generate a control value
    The method comprising the steps of:
  10. The method according to claim 1,
    Wherein the generating comprises:
    Generating a first combined value by combining an absolute value of each sensor value or a relative value of each sensor value in consideration of an operation range of each sensor for sensor values having the same type;
    Updating the sensor values and combining the updated sensor values for different sensor values to generate a second combined value; And
    Combining the first combined value and the second combined value to generate a control value
    The method comprising the steps of:
  11. A motion recognition apparatus using a sensor, the apparatus comprising:
    a sensor unit that continuously recognizes sensor values for an object; and
    a controller for generating a control value by combining the sensor values and executing a function corresponding to the control value.
  12. The apparatus according to claim 11,
    Wherein the control unit determines a mounting position of at least one of a contact sensor, a proximity sensor, a camera sensor, and a dynamic vision sensor in consideration of the operating range of the sensor.
  13. The apparatus according to claim 11,
    The sensor unit includes:
    And recognizes a depth value for the object as the sensor value.
  14. The apparatus according to claim 11,
    The sensor unit includes:
    And recognizes a pointer value for the object as the sensor value.
  15. The apparatus according to claim 11,
    The sensor unit includes:
    And recognizes a swipe value for the object as the sensor value.
  16. The apparatus according to claim 15,
    The sensor unit includes:
    And recognizes the swipe value according to the motion of the object or the speed of the object for a predetermined time.
  17. The apparatus according to claim 16,
    The sensor unit includes:
    Wherein the plurality of sensors continuously recognize the motion of the object as the depth value of the object is changed.
  18. The apparatus according to claim 11,
    Wherein,
    Wherein, when the types of the sensor values are the same, an absolute value of each sensor value or a relative value of each sensor value is combined in consideration of an operating range of each sensor to generate the control value.
  19. The apparatus according to claim 11,
    Wherein,
    And updates the sensor values when the types of sensor values are different, and combines the updated sensor values to generate a control value.
  20. The apparatus according to claim 11,
    Wherein,
    The absolute value of each sensor value or the relative value of each sensor value is combined, in consideration of the operating range of each sensor, to generate a first combined value for sensor values of the same type; sensor values of different types are updated and the updated sensor values are combined to generate a second combined value; and the first combined value and the second combined value are combined to generate the control value.
KR1020130153692A 2013-12-11 2013-12-11 Apparatus and method for recognizing gesture using sensor KR20150068001A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130153692A KR20150068001A (en) 2013-12-11 2013-12-11 Apparatus and method for recognizing gesture using sensor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020130153692A KR20150068001A (en) 2013-12-11 2013-12-11 Apparatus and method for recognizing gesture using sensor
US14/564,762 US9760181B2 (en) 2013-12-11 2014-12-09 Apparatus and method for recognizing gesture using sensor

Publications (1)

Publication Number Publication Date
KR20150068001A (en) 2015-06-19

Family

ID=53271134

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130153692A KR20150068001A (en) 2013-12-11 2013-12-11 Apparatus and method for recognizing gesture using sensor

Country Status (2)

Country Link
US (1) US9760181B2 (en)
KR (1) KR20150068001A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190074836A (en) * 2017-12-20 2019-06-28 (주)에이텍티앤 User interface control device and method thereof of portable settler

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170050293A (en) 2015-10-30 2017-05-11 삼성전자주식회사 Method and apparatus of detecting gesture recognition error
US10260862B2 (en) 2015-11-02 2019-04-16 Mitsubishi Electric Research Laboratories, Inc. Pose estimation using sensors
USD780222S1 (en) * 2015-11-09 2017-02-28 Naver Corporation Display panel with icon

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008132546A1 (en) * 2007-04-30 2008-11-06 Sony Ericsson Mobile Communications Ab Method and algorithm for detecting movement of an object
US9305191B2 (en) * 2009-11-17 2016-04-05 Proventix Systems, Inc. Systems and methods for using a hand hygiene compliance system to improve workflow
US8704767B2 (en) * 2009-01-29 2014-04-22 Microsoft Corporation Environmental gesture recognition
JP5282661B2 (en) * 2009-05-26 2013-09-04 ソニー株式会社 Information processing apparatus, information processing method, and program
KR20110010906A (en) * 2009-07-27 2011-02-08 삼성전자주식회사 Apparatus and method for controlling of electronic machine using user interaction
US20110267264A1 (en) * 2010-04-29 2011-11-03 Mccarthy John Display system with multiple optical sensors
CN103154880B (en) * 2010-10-22 2016-10-19 惠普发展公司,有限责任合伙企业 Assess the input relative to display
US20130009875A1 (en) * 2011-07-06 2013-01-10 Fry Walter G Three-dimensional computer interface
KR20130116013A (en) 2012-04-13 2013-10-22 삼성전자주식회사 Camera apparatus and method for controlling thereof
JP6316540B2 (en) 2012-04-13 2018-04-25 三星電子株式会社Samsung Electronics Co.,Ltd. Camera device and control method thereof
WO2014140827A2 (en) * 2013-03-14 2014-09-18 Eyesight Mobile Technologies Ltd. Systems and methods for proximity sensor and image sensor based gesture detection
JP5802247B2 (en) * 2013-09-25 2015-10-28 株式会社東芝 Information processing device

Also Published As

Publication number Publication date
US20150160737A1 (en) 2015-06-11
US9760181B2 (en) 2017-09-12

Legal Events

Date Code Title Description
A201 Request for examination