WO2013055137A1 - Apparatus and method for motion recognition using an event-based vision sensor - Google Patents
Apparatus and method for motion recognition using an event-based vision sensor
- Publication number
- WO2013055137A1 PCT/KR2012/008287 KR2012008287W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- movement
- motion
- event
- vision sensor
- reception
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/47—Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data
Definitions
- the present invention relates to a gesture recognition apparatus and a method thereof, and may use a vision sensor that senses a portion in which a movement occurs to track a movement trace of a finger, track a movement direction of a hand, and control an operation of a device.
- examples of conventional user interface devices include a keyboard, a mouse, and a touch panel.
- with the touch technology used in a touch panel, the user touches the screen to operate the UI.
- with repeated direct contact, however, the screen of the panel may be damaged, and users may find the direct contact unhygienic. Accordingly, there is a need to improve user convenience by providing an intuitive interfacing technology with natural, enhanced interaction between humans and electronic devices.
- In one aspect, there is provided an apparatus for recognizing motion using an event-based vision sensor, including:
- a vision sensor that senses the portion in which movement occurs and outputs events;
- a movement type determination unit that determines the type of movement using the occurrence frequency of the events output through the vision sensor;
- a first movement determination unit that, when the movement is determined to be a small movement, tracks the movement trajectory of the portion in which the movement occurs and determines a movement pattern according to the movement trajectory;
- a second movement determination unit that, when the movement is determined to be a large movement, determines the direction in which the object moves using the events; and
- a movement control unit that outputs a control command for controlling a device with reference to the movement pattern determined by the first movement determination unit or the movement direction determined by the second movement determination unit.
- the movement type determination unit calculates the occurrence frequency of the events output through the vision sensor and compares the occurrence frequency with a preset threshold: when the frequency is less than the threshold, it may determine that only a part of the object moved, that is, a small movement; when the frequency is greater than or equal to the threshold, it may determine that the entire object moved, that is, a large movement.
- the first movement determination unit may include spatiotemporal correlators, corresponding to each of the preset reception fields of the vision sensor, that receive the events output from the reception fields and calculate the spatiotemporal correlation of each reception field; a movement trajectory tracking unit that tracks the movement trajectory of the portion where the movement occurs using the degree of spatiotemporal correlation of each of the correlators; and a movement pattern determination unit that determines the movement pattern of that portion from the tracked movement trajectory.
- the reception field may be a divided region of the vision sensor, and the reception field may overlap an area with another neighboring reception field.
- the movement trajectory tracking unit may generate a cluster by grouping spatiotemporal correlators having high spatiotemporal correlation, set a predetermined region including the cluster as a clustering region, calculate the center of the clustering region as the position of the object, and track the movement trajectory by linking the calculated position with previously calculated positions of the object.
- when there are no spatiotemporal correlators having high spatiotemporal correlation, the movement trajectory tracking unit may recalculate the center of the previous clustering region using the internal state values of the spatiotemporal correlators within that region, take the recalculated center as the position of the object, and track the movement trajectory by linking it with previously calculated positions of the object.
- the second movement determination unit may include first direction detection filters that receive the events output from predetermined reception fields of the vision sensor and detect the movement direction of the object for each of the reception fields, and a second direction detection filter that determines the final movement direction of the object using the movement directions detected for the reception fields.
- the reception field is a divided area of the vision sensor; a reception field overlaps an area with surrounding reception fields and may be formed in a circular shape.
- the first direction detection filter may include at least one winner-takes-all circuit corresponding to each of the reception fields, and the winner-takes-all circuit may output the movement direction of the object for its corresponding reception field using as many neurons as the number of directions to be sensed.
- the second direction detection filter may include a winner-takes-all circuit that determines the final movement direction of the object using the movement directions detected for each of the reception fields, and this winner-takes-all circuit may output the final movement direction of the object using as many neurons as the number of directions to be determined.
- the second direction detecting filter may determine the final moving direction of the object by vector-adding the moving directions detected for each of the reception fields.
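The vector-addition step above can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation; the eight-direction encoding (index 0 along +x, counter-clockwise in 45-degree steps) is an assumption:

```python
import math

DIRECTIONS = 8  # assumed number of quantized directions

def final_direction(field_directions):
    """Vector-add per-reception-field direction indices; return the
    index of the resulting final movement direction."""
    vx = sum(math.cos(2 * math.pi * d / DIRECTIONS) for d in field_directions)
    vy = sum(math.sin(2 * math.pi * d / DIRECTIONS) for d in field_directions)
    # angle of the summed vector, mapped back onto the direction grid
    angle = math.atan2(vy, vx) % (2 * math.pi)
    return round(angle / (2 * math.pi / DIRECTIONS)) % DIRECTIONS
```

Summing unit vectors lets many noisy per-field estimates average out into one robust final direction.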
- the events output through the vision sensor are divided into ON events and OFF events, and the apparatus may further include a third movement determination unit that calculates the ratio of OFF events to ON events and determines whether the object moves forward or backward by comparing the ratio with at least one preset reference value.
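The ratio comparison of the third movement determination unit can be sketched as follows; the function name and the two reference values are assumptions for illustration, not values from the patent:

```python
def classify_depth_motion(on_count, off_count,
                          forward_ref=0.8, backward_ref=1.2):
    """Classify forward/backward movement from the OFF/ON event ratio,
    compared against two assumed preset reference values."""
    if on_count == 0:
        return "unknown"
    ratio = off_count / on_count
    if ratio < forward_ref:
        return "forward"
    if ratio > backward_ref:
        return "backward"
    return "none"
```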
- In another aspect, there is provided an apparatus for recognizing motion using an event-based vision sensor, including a vision sensor that senses the portion in which movement occurs and outputs events, first direction detection filters that receive the events output from predetermined reception fields of the vision sensor and detect the movement direction of the object for each of the reception fields, a second direction detection filter that determines the final movement direction of the object using the movement directions detected for the reception fields, and a movement control unit that outputs a control command for controlling a device with reference to the determined movement direction.
- the first direction detection filter may include at least one winner-takes-all circuit corresponding to each of the reception fields, and the winner-takes-all circuit may output the movement direction of the object for its corresponding reception field using as many neurons as the number of directions to be sensed.
- the second direction detection filter may include a winner-takes-all circuit that determines the final movement direction of the object using the movement directions detected for each of the reception fields, and this winner-takes-all circuit may output the final movement direction of the object using as many neurons as the number of directions to be determined.
- In still another aspect, there is provided a method of a motion recognition apparatus, including receiving events corresponding to the portion where movement occurs from a vision sensor, determining the type of movement using the event occurrence frequency, tracking the movement trajectory of that portion and determining a movement pattern according to the trajectory when the movement is determined to be a small movement, determining the direction in which the object moves using the events when the movement is determined to be a large movement, and controlling a device with reference to the determined movement pattern or the determined movement direction.
- In yet another aspect, there is provided a method of a motion recognition apparatus, including receiving events output from preset reception fields of a vision sensor, detecting the movement direction of the object for each of the reception fields, determining the final movement direction of the object using the detected movement directions, and controlling a device with reference to the determined movement direction.
- In a further aspect, there is provided a motion recognition apparatus using an event-based vision sensor, including a vision sensor that senses the portion in which movement occurs and outputs events, and a plurality of direction detection filters that receive the events output from predetermined reception fields of the vision sensor and recognize the direction of movement.
- the type of movement is determined using the event occurrence frequency, and the device is controlled with reference to the determined movement pattern or the determined movement direction. Because the apparatus recognizes motion from events rather than full frames, it can recognize the movement of an object at very high speed (on the order of 1 ms), has low power consumption, and is unaffected by ambient brightness.
- FIG. 1 shows an example of the configuration of a gesture recognition apparatus.
- FIG. 2 illustrates a method of determining a type of motion using an event occurrence frequency.
- FIG. 4 is a diagram illustrating an example of an array and sensing fields of sensing units forming a vision sensor.
- FIG. 5 is a diagram for describing spatial associations between first to nth spatiotemporal correlators.
- FIG. 6 is a diagram illustrating an example of generating first to third clusters and setting a clustering region by grouping first to n-th spatiotemporal correlators.
- FIG. 7 is a diagram for describing an example of tracking a motion trajectory of a portion in which a motion occurs using the generated clustering region.
- FIG. 11 is a diagram illustrating a shape of a reception field for sensing a movement direction.
- FIG. 12 is a diagram illustrating the sensing directions when the winner-takes-all circuit senses eight directions.
- FIG. 13 is a diagram illustrating an example of the weights given to each of the neurons of the winner-takes-all circuit included in the first direction detection filter.
- FIG. 14 illustrates an example of a configuration of the winner-takes-all circuit included in the second direction detection filter.
- FIG. 15 is a diagram illustrating an example of the weights given to each of the neurons of the winner-takes-all circuit included in the second direction detection filter.
- FIG. 16 shows an example of configuring a winner-takes-all circuit according to the type of event received through the vision sensor.
- FIG. 18 illustrates a flow of recognizing a gesture of a finger.
- FIG. 19 shows a flow of detecting a direction of movement of a hand.
- FIG. 1 illustrates an example of a configuration of a gesture recognition apparatus 100.
- the gesture recognition apparatus 100, which recognizes a movement pattern of a small movement such as a finger movement or recognizes the movement direction of a large movement such as a palm movement, may include a vision sensor 110, a motion type determiner 120, a first motion determiner 130, a second motion determiner 140, a third motion determiner 145, and a motion controller 150.
- the vision sensor 110 senses a part in which a movement occurs among the subjects and outputs events.
- the vision sensor 110 includes a plurality of sensing units arrayed as shown in FIG. 4 to be described later.
- the sensing units may be provided in pixel units of an image. For example, when a touch screen for an interface outputs an image of 60 ⁇ 60, the sensing units may be provided in the form of 60 ⁇ 60.
- the sensing units may be light receiving elements.
- when the subject or a part of the subject moves, the intensity of light sensed by the sensing units of the vision sensor 110 changes.
- target sensing units in which a change in intensity of light is sensed among the sensing units output an event. That is, the target sensing units corresponding to the portion in which the movement occurs among the sensing units output an event.
- the target sensing unit is a sensing unit that outputs an event.
- the event includes information such as ⁇ time of occurrence of an event, position of a sensing unit outputting the event, polarity ⁇ , and the like.
- the polarity is 'on' when an event occurs due to an increase in the intensity of light received from the sensing unit, and 'off' when an event occurs due to a decrease in intensity of the received light.
- each sensing unit may output an event when the amount of change in intensity of the light is larger or smaller than the set reference value.
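The per-pixel event generation described above can be sketched with a simple model; the logarithmic intensity scale and the 0.1 reference value are assumptions for illustration, not values from the patent:

```python
import math

THRESHOLD = 0.1  # assumed log-intensity change reference value

def pixel_events(intensities, threshold=THRESHOLD):
    """Return (sample_index, polarity) events for one sensing unit:
    'on' when log intensity rises past the reference value since the
    last event, 'off' when it falls past it."""
    events = []
    last = math.log(intensities[0])
    for i, value in enumerate(intensities[1:], start=1):
        level = math.log(value)
        if level - last > threshold:
            events.append((i, "on"))
            last = level
        elif last - level > threshold:
            events.append((i, "off"))
            last = level
    return events
```

Only changes are reported, which is why a static scene produces no output at all.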
- the motion type determiner 120 determines the type of motion based on an event occurrence frequency output through the vision sensor 110.
- a representative example of a small movement is a forward/backward movement of the object (hand) or a finger movement, and a representative example of a large movement is a movement of the entire hand, such as the palm.
- the motion type determiner 120 may detect only one of the forward/backward movement of the object (hand) and the finger movement according to a preset user request, or may distinguish between the forward/backward movement and the finger movement using a more detailed criterion.
- in the following description, a small movement is represented by a finger movement and a large movement by a hand movement.
- the motion type determiner 120 may determine the type of motion through the example of FIG. 2 below.
- FIG. 2 illustrates a method of determining a type of motion using an event occurrence frequency.
- an event occurrence frequency of the vision sensor 110 is low when only a finger moves and an event occurrence frequency of the vision sensor 110 is high when a hand moves. Accordingly, the movement type determiner 120 may determine whether only a finger moves or the entire hand moves based on an appropriate threshold.
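The decision of FIG. 2 can be sketched in a few lines; the window length and threshold below are invented values for illustration, since the patent only requires "an appropriate threshold":

```python
def movement_type(event_times, window=0.01, threshold=50, now=None):
    """Count events inside a short recent window and compare the count
    with a threshold: few events -> small (finger) movement, many
    events -> large (hand) movement."""
    if now is None:
        now = max(event_times)
    recent = sum(1 for t in event_times if now - t <= window)
    return "large" if recent >= threshold else "small"
```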
- the first motion determiner 130 tracks the motion trajectory of the portion where a motion occurs using the spatiotemporal correlation between the events output from the vision sensor 110, and determines a motion pattern according to the motion trajectory.
- FIG. 3 illustrates an example of the configuration of the first motion determiner 130.
- the first motion determiner 130 may include first to nth spatiotemporal correlators 312, 314, and 316, a motion trajectory tracker 320, a pattern storage unit 330, and a motion pattern determiner 340.
- the space-time correlators 312, 314, and 316 may calculate space-time correlations between the target sensing units using events input from the target sensing units, respectively.
- the space-time correlators 312, 314, and 316 may be implemented as spiking neurons.
- the reception field will be described first with reference to FIG. 4, taking the first spatiotemporal correlator 312 as an example.
- FIG. 4 is a diagram illustrating an example of an array and sensing fields of sensing units forming a vision sensor.
- the first space-time correlator 312 receives an event output from sensing units constituting a specific region of the vision sensor 110.
- the specific area means the reception field.
- the area occupied by the sensing units that are electrically connected to the first space-time correlator 312 and output the event to the first space-time correlator 312 is called a reception field.
- the reception field may have a size of m ⁇ m (m is a positive number). Accordingly, the first space-time correlator 312 may receive an event from at least one of the sensing units of the reception field.
- the second to nth spatiotemporal correlators 314 and 316 may also be connected to sensing units of a corresponding reception field to receive an event.
- the first to nth spatiotemporal correlators 312, 314, and 316 each have an internal state value representing the current spatiotemporal correlation.
- Each internal state value may be the same as or different from each other, and may be, for example, a voltage value.
- the internal state value may be determined by the current internal state value and a newly input event. When an event is input, the internal state value increases, and when there is no input of the event, the internal state value may decrease as the set time elapses. Reducing the internal state value can minimize the bandwidth load of the memory that stores the internal state value.
- the first to n-th spatiotemporal correlators 312, 314, and 316 increase an internal state value each time an event is input from the corresponding vision sensor 110, and increases the internal state value and the set threshold value. By comparing this, we can determine the height of the spatiotemporal correlation.
- the spatiotemporal correlation refers to temporal and spatial correlations between events input to each of the first to nth spatiotemporal correlators 312, 314, and 316.
- Equation 1 is the relational expression for calculating the output value of a spatiotemporal correlator from an input event:

  Qn(t) = f(Qn(tprev), e(t))

  outn(t) = 1 if Qn(t) > φ, and outn(t) = 0 otherwise

- here, Qn(t) is the internal state value of the n-th spatiotemporal correlator at time t, Qn(tprev) is its internal state value at the previous time (that is, the current internal state value), tprev is the time at which the most recent event set occurred, e(t) is the event set input at time t, outn(t) is the output value of the n-th spatiotemporal correlator, and φ is the threshold.
- the nth spatiotemporal correlator increases the previous internal state value Qn (tprev) when an event is input from one of the sensing units of the sensing units of the corresponding reception field.
- the degree of increase is affected by the weight set in the target sensing unit that generated the event. Therefore, when a plurality of events are input at the same time, the speed of increase also increases.
- the weight may be set differently for each sensing unit.
- the first to nth spatiotemporal correlators 312, 314, and 316 output different output values to the motion trajectory tracker 320 according to the degree of spatiotemporal correlation. That is, when its internal state value Qn(t) exceeds the set threshold φ, the n-th spatiotemporal correlator determines that the spatiotemporal correlation between the target sensing units outputting events to it is high, and outputs 1; when Qn(t) is less than or equal to φ, it determines that the spatiotemporal correlation is low and outputs 0.
- after outputting 1, the n-th spatiotemporal correlator may decrease its internal state value by a predetermined amount.
- the internal state value Qn (t) calculated by the nth space-time correlator may represent spatial and temporal association of events input to the n-th space-time correlator from the target sensing units. For example, when events are continuously input to the n-th time-space correlator from one target sensing unit, the internal state value Qn (t) may indicate a temporal relationship between the input events.
- when two events are simultaneously input to the n-th spatiotemporal correlator from two target sensing units, that is, when two target sensing units that are close to each other are connected to the same n-th spatiotemporal correlator and generate events, the two events have high spatial correlation; because they are input simultaneously, the two events also have temporal correlation.
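The behavior described above (integrate on input events, leak over time, output 1 when the internal state exceeds a threshold, then decrease the state by a predetermined amount) can be sketched as one correlator; the numeric parameter values are assumptions for illustration:

```python
class SpatiotemporalCorrelator:
    """Leaky-integrator sketch of one spatiotemporal correlator."""

    def __init__(self, threshold=3.0, decay=1.0, weight=1.0):
        self.q = 0.0            # internal state value Q_n
        self.t_prev = 0.0       # time of the most recent event set
        self.threshold = threshold
        self.decay = decay      # leak per unit time with no input
        self.weight = weight    # weight of a target sensing unit

    def on_events(self, t, num_events):
        """Integrate num_events arriving at time t; return 1 when the
        internal state exceeds the threshold (high correlation), else 0."""
        # leak the state for the time elapsed since the last event set
        self.q = max(0.0, self.q - self.decay * (t - self.t_prev))
        self.q += self.weight * num_events
        self.t_prev = t
        if self.q > self.threshold:
            self.q -= self.threshold  # decrease by a predetermined amount
            return 1
        return 0
```

Simultaneous events raise the state faster than scattered ones, which is exactly why the output encodes spatiotemporal correlation.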
- first to nth spatiotemporal correlators 312, 314, and 316 may have a spatial relationship with each other.
- FIG. 5 is a diagram for describing spatial associations between first to nth spatiotemporal correlators.
- the vision sensor 110 may be logically divided into a plurality of reception fields.
- the divided reception fields overlap with at least one peripheral area.
- the first through n-th spatiotemporal correlators 312, 314, and 316 may be mapped with the divided reception fields of the vision sensor 110, respectively.
- the sensing units located in each reception field output an event to a corresponding space-time correlator of the first to n-th time-space correlators 312, 314, and 316.
- C (i, j) is the center coordinate of the reception field located at the center of the vision sensor 110.
- C (i-1, j) is the center coordinate of the reception field in which the center coordinate is moved by '1' in the x-axis direction (ie, the horizontal direction).
- here, a shift of '1' corresponds to a spacing of three pixels, and this spacing is changeable.
- some overlap of reception fields means that the same event may be output to at least two space-time correlators at the same time.
- the spatial correlation between the first through n-th spatiotemporal correlators 312, 314, and 316 or the reception fields may be given.
- the spatial correlation between the first through n-th spatiotemporal correlators 312, 314, and 316 may affect the tracking of the motion trajectory.
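The overlapping logical division described above can be sketched as follows. The field size is an invented example value; the default stride of three pixels mirrors the spacing mentioned in the text:

```python
def receptive_fields(sensor_size, field_size=10, stride=3):
    """Return (x0, y0, x1, y1) reception fields whose centers step by a
    stride smaller than the field size, so neighboring fields overlap."""
    fields = []
    for y in range(0, sensor_size - field_size + 1, stride):
        for x in range(0, sensor_size - field_size + 1, stride):
            fields.append((x, y, x + field_size, y + field_size))
    return fields

def fields_for_event(x, y, fields):
    """Indices of every reception field (correlator) an event at (x, y)
    feeds; overlap means one event can feed several correlators."""
    return [i for i, (x0, y0, x1, y1) in enumerate(fields)
            if x0 <= x < x1 and y0 <= y < y1]
```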
- the motion trajectory tracker 320 may track the motion trajectory of the portion where the motion occurs using the output values indicating the spatiotemporal correlations calculated by the first to nth spatiotemporal correlators 312, 314, and 316, and the internal state values of the correlators 312, 314, and 316.
- the motion trajectory tracking unit 320 generates a cluster by grouping the spatiotemporal correlators having high spatiotemporal correlation among the first to nth spatiotemporal correlators 312, 314, and 316.
- the motion trace tracking unit 320 groups the spatiotemporal correlators outputting '1' among the first to nth spatiotemporal correlators 312, 314, and 316 based on the overlap of the reception fields. That is, the motion trace tracking unit 320 generates a cluster by grouping space-time correlators that overlap each other. Outputting '1' means that it has a high spatiotemporal correlation.
- the motion trajectory tracking unit 320 generates a cluster by grouping the spatiotemporal correlators whose output is 1, and sets a predetermined region including the cluster as a clustering region.
- FIG. 6 is a diagram illustrating an example of generating first to third clusters and setting a clustering region by grouping first to n-th spatiotemporal correlators.
- the first cluster 1 is generated by grouping two reception fields 601 and 602 overlapping each other.
- the second cluster 2 is generated by grouping 10 reception fields that overlap each other.
- the third cluster 3 is generated by grouping seven reception fields that overlap each other.
- the first clustering region 610 is a predetermined region including the first cluster.
- the second clustering region 620 is a predetermined region including the second cluster.
- the third clustering region 630 is a predetermined region including the third cluster. In this case, the clustering region may be generated based on the center of the cluster.
- the motion trajectory tracking unit 320 calculates the center of the clustering area as the position of the moving object.
- the motion trajectory tracking unit 320 may calculate the center of the clustering region by using internal state values of the spatiotemporal correlators in the clustering region.
- when there are no spatiotemporal correlators whose output is 1, the motion trajectory tracking unit 320 recalculates the center of the previous clustering region using the internal state values of the spatiotemporal correlators in the previous clustering region, and calculates the recalculated center as the position of the moving object.
- the motion trajectory tracking unit 320 may track the motion trajectory by linking the position of the calculated object with the position of the previously calculated object.
- the cluster may have a shape corresponding to the portion where the movement occurs.
- the motion trajectory tracking unit 320 may multiply the position and the constant of each spatiotemporal correlator included in the clustering region, and determine an average value of the multiplication results as the center position of the clustering region.
- the position of the space-time correlator may be a position representing the area covered by the space-time correlator.
- the constant may be any of various values, such as the internal state value Q(t) calculated by the space-time correlators, or the number of events input to the space-time correlators during a specific time interval.
- for example, suppose the clustering region is created by grouping the first and second spatiotemporal correlators 312 and 314, the internal state value of the first spatiotemporal correlator 312 is Q1, and the internal state value of the second spatiotemporal correlator 314 is Q2.
- the position representing the first spatiotemporal correlator 312 is (x1, y1), and the position representing the second spatiotemporal correlator 314 is (x2, y2).
- the motion trajectory tracking unit 320 may calculate the center position of the clustering region by using Equation 2.
- Equation 2: x' = (Q1·x1 + Q2·x2) / (Q1 + Q2), y' = (Q1·y1 + Q2·y2) / (Q1 + Q2), where x' is the x coordinate of the center position of the clustering region and y' is the y coordinate of the center position of the clustering region.
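As a sketch, the weighted-center computation described above (each correlator position multiplied by a constant such as its internal state value, then averaged) might look like the following; the function name and the (x, y, q) data layout are illustrative assumptions, not from the patent:

```python
def clustering_center(correlators):
    """Weighted center of a clustering region.

    `correlators` is a list of (x, y, q) tuples, where (x, y) is the
    representative position of a spatiotemporal correlator and q is the
    constant it is multiplied by (e.g. its internal state value Q(t)).
    """
    total = sum(q for _, _, q in correlators)
    cx = sum(x * q for x, _, q in correlators) / total
    cy = sum(y * q for _, y, q in correlators) / total
    return cx, cy
```

With two correlators this reduces to the weighted average of the two correlator positions, as in Equation 2.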
- FIG. 7 is a diagram for describing an example of tracking a motion trajectory of a portion in which a motion occurs using the generated clustering region.
- the motion trajectory tracking unit 320 generates a clustering region or calculates the center position of the clustering region at predetermined time intervals. Referring to FIG. 7, the motion trajectory tracking unit 320 sequentially generates clustering regions at times t1, t2, and t3, and calculates the center position of each clustering region. The motion trajectory tracking unit 320 then connects the calculated center positions to track the motion trajectory. That is, the motion trajectory of the portion where the motion occurred can be tracked by calculating the positions of the grouped spatiotemporal correlators (i.e., the grouped reception fields).
- the motion pattern determiner 340 may determine the motion pattern of the portion where the motion is generated from the motion trajectory tracked by the motion trajectory tracker 320.
- the motion pattern determination unit 340 obtains feature components for expressing the motion pattern from the tracked motion trajectory, compares the feature components with the motion patterns stored in the pattern storage unit 330, and determines the motion pattern of the portion where the motion occurred.
- the feature components may include various parameters such as the position of the cluster, the direction in which the cluster moves, and the angle of movement of the cluster.
- the pattern storage unit 330 may store values of a plurality of feature components and an operation pattern corresponding thereto.
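A minimal sketch of extracting one such feature component — the per-segment movement angle along the tracked trajectory; the function name is an illustrative assumption, not from the patent:

```python
import math

def trajectory_features(points):
    """Per-segment movement direction (angle in degrees, counterclockwise
    from the +x axis) along a tracked trajectory of (x, y) positions.
    These angles are one example of feature components that could be
    compared against stored motion patterns."""
    angles = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angles.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    return angles
```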
- the second motion determination unit 140 detects the direction in which the object moves by using the events output from the vision sensor 110.
- FIG. 8 illustrates an example of the configuration of the second motion determination unit 140.
- the second motion determination unit 140 may include a first direction detection filter 810 and a second direction detection filter 820.
- the first direction detection filter 810 detects a moving direction of an object for each reception field by using events output from the vision sensor 110.
- the first direction detection filter 810 includes at least as many winner-take-all circuits 812, 814, and 816 as the number of reception fields.
- each of the winner-take-all circuits 812, 814, and 816 receives events from a corresponding reception field and senses the movement direction, which is the direction in which the object moves within the corresponding reception field.
- the second direction detection filter 820 detects the final movement direction of the object by using the movement directions detected for each reception field by the first direction detection filter 810.
- the second direction detection filter 820 includes at least one final winner-take-all circuit 822.
- the final winner-take-all circuit 822 receives the movement direction information of the object in each reception field, sensed by the winner-take-all circuits 812, 814, and 816, and determines the final movement direction relative to the vision sensor 110.
- the second direction detection filter 820 may also be implemented without a winner-take-all circuit.
- the second direction detection filter 820 may vector-sum the movement direction information of the object in each reception field, detected by the winner-take-all circuits 812, 814, and 816 and input from the first direction detection filter 810, to calculate the final movement direction relative to the vision sensor 110.
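The vector-sum variant can be sketched as follows, assuming the eight 45°-spaced direction indices of FIG. 12; the function name and the index-to-angle convention are illustrative assumptions:

```python
import math

def final_direction(field_indices):
    """Vector-sum per-reception-field direction indices (0-7, spaced
    45 degrees apart) into a single final movement direction index.
    Each index is mapped to a unit vector, the vectors are summed,
    and the summed angle is snapped back to the nearest index."""
    vx = sum(math.cos(math.radians(45 * i)) for i in field_indices)
    vy = sum(math.sin(math.radians(45 * i)) for i in field_indices)
    angle = math.degrees(math.atan2(vy, vx)) % 360
    return round(angle / 45) % 8
```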
- the second motion determination unit 140 is implemented here with two direction detection filters in a hierarchical structure, but it may be implemented with more direction detection filters having a hierarchical structure.
- the second motion determination unit 140 here detects only the movement direction using direction detection filters having a hierarchical structure, but by diversifying the filter structure (for example, by using filters that can detect a particular shape), it can also detect other kinds of motion.
- the winner-take-all circuit of FIG. 9 may include four neurons 910, 920, 930, and 940 that sense directions, and an inhibitory neuron 950.
- each of the neurons 910, 920, 930, and 940 receives events input from a corresponding reception field, applies a greater weight to events in the direction it detects, and accumulates the weighted events into an internal state value.
- each of the neurons 910, 920, 930, and 940 outputs a signal indicating that motion is detected in the corresponding direction when its internal state value exceeds a preset threshold.
- when an output occurs among the neurons 910, 920, 930, and 940, the inhibitory neuron 950 suppresses the other neurons so that no neuron other than the one that generated the first output produces an output.
- each of the neurons 910, 920, 930, and 940 detects one direction, so the four neurons can detect four directions.
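A minimal sketch of the winner-take-all behavior described above: direction-dependent weights accumulate into per-neuron internal states, and the first neuron to cross the threshold wins while the others are suppressed. The weight values, threshold, and reset-style inhibition are illustrative assumptions, not the patent's circuit:

```python
class WinnerTakeAll:
    """One accumulator per direction; each incoming event (already
    tagged here with a coarse direction estimate, a simplification)
    adds direction-dependent weight to every neuron's internal state.
    The first neuron whose state crosses the threshold wins, and all
    states are reset to model the mutual inhibition."""

    def __init__(self, weights, threshold):
        self.weights = weights      # weights[direction][event_kind]
        self.threshold = threshold
        self.state = [0.0] * len(weights)

    def feed(self, event_kind):
        """Feed one event; return the winning direction index, or None."""
        for d, w in enumerate(self.weights):
            self.state[d] += w[event_kind]
        for d, s in enumerate(self.state):
            if s > self.threshold:
                self.state = [0.0] * len(self.weights)  # inhibit/reset
                return d
        return None
```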
- the winner-take-all circuit of FIG. 10 may include four neurons 1010, 1020, 1030, and 1040 that sense directions.
- each of the neurons 1010, 1020, 1030, and 1040 receives events input from a corresponding reception field, applies a greater weight to events in the direction assigned to it, and calculates its internal state value by accumulating the weighted events.
- each of the neurons 1010, 1020, 1030, and 1040 outputs a signal indicating that motion is detected in the corresponding direction when its internal state value exceeds a preset threshold.
- when one of the neurons 1010, 1020, 1030, and 1040 outputs a signal indicating that movement is detected in its direction, the other neurons receive that output and are subsequently suppressed.
- the neurons 1010, 1020, 1030, and 1040 thus suppress each other so that no neuron other than the one that generated the first output produces an output.
- each of the neurons 1010, 1020, 1030, and 1040 detects one direction, so the four neurons 1010, 1020, 1030, and 1040 can sense four directions.
- the reception field, which is the unit for providing events to each of the winner-take-all circuits 812, 814, and 816, may be configured in various forms, but is preferably set as shown in FIG. 11.
- FIG. 11 is a diagram illustrating the shape of a reception field for sensing a movement direction.
- the shape of the reception field is set to be circular, because a bias toward a specific direction may otherwise occur due to the shape of the reception field. Therefore, the reception field of the events input to the winner-take-all circuits 812, 814, and 816 is preferably set to a circular reception field.
- the winner-take-all circuits 812, 814, and 816 and the final winner-take-all circuit 822 may each include eight neurons that sense different directions, to detect movement in the eight directions shown in FIG. 12 below. However, the directions detected by the winner-take-all circuits 812, 814, and 816 and the final winner-take-all circuit 822 can be determined arbitrarily.
- FIG. 12 is a diagram illustrating the sensing directions when the winner-take-all circuit senses eight directions.
- referring to FIG. 12, the winner-take-all circuits 812, 814, and 816 and the final winner-take-all circuit 822 may detect movement in eight directions represented by indices 0 to 7.
- a weight applied to the reception field connected to each of the neurons detecting each direction may be set as shown in FIG. 13.
- FIG. 13 is a diagram illustrating an example of the weights given to each of the neurons of the winner-take-all circuit included in the first direction detection filter.
- in FIG. 13, the shading indicates the magnitude of the weight, and the weight applied to the reception field connected to each direction-specific neuron, corresponding to an index of FIG. 12, may have a gradient value according to the direction.
- the final winner-take-all circuit 822 may be configured as shown in FIG. 14 below.
- FIG. 14 illustrates an example of the configuration of the winner-take-all circuit included in the second direction detection filter.
- the final winner-take-all circuit 822 may consist of nine neurons 1410, 1420, and 1430.
- each of the neurons 1410, 1420, and 1430 receives the movement direction information sensed by each of the winner-take-all circuits 812, 814, and 816, and calculates its internal state value by applying a greater weight to the direction assigned to it.
- each of the neurons 1410, 1420, and 1430 outputs a signal indicating that motion is detected in the corresponding direction when its internal state value exceeds a preset threshold.
- the neurons 1410, 1420, and 1430 suppress each other so that no neuron other than the one that generated the first output produces an output.
- the zeroth neuron 1410 to the seventh neuron 1420 are neurons that sense movement in the directions corresponding to the indices shown in FIG. 12.
- the neuron 1430 detects the case where the occurrence frequency of events is greater than a reference value but the movement is not in any specific direction.
- for example, the hand may move back and forth or draw a circle in front of the vision sensor 110.
- when the final winner-take-all circuit 822 detects the eight directions shown in FIG. 12, the weights applied to the movement direction information detected by each of the winner-take-all circuits 812, 814, and 816 and input to the neurons detecting each direction may be set as shown in FIG. 15.
- FIG. 15 is a diagram illustrating an example of the weights given to each of the neurons of the winner-take-all circuit included in the second direction detection filter.
- for the weights applied to the movement direction information detected by each of the winner-take-all circuits 812, 814, and 816 and input to the neurons 1410, 1420, and 1430 of the final winner-take-all circuit 822, the more similar the detected direction is to a neuron's direction, the higher the weight, and the more opposite the direction, the lower the weight.
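One way to realize such a direction-similarity weight profile is a cosine gradient between a neuron's preferred direction index and the incoming direction index (0 to 7, 45° apart as in FIG. 12). This cosine profile and the function name are illustrative assumptions, not the patent's actual weight values:

```python
import math

def direction_weight(preferred_idx, incoming_idx, lo=0.0, hi=1.0):
    """Weight between a neuron's preferred direction and an incoming
    direction: maximal (hi) when they match, minimal (lo) when they
    are opposite, varying smoothly in between."""
    diff = math.radians(45 * (preferred_idx - incoming_idx))
    return lo + (hi - lo) * (math.cos(diff) + 1) / 2
```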
- the winner-take-all circuits included in the second motion determination unit 140 may be configured according to the types of events output from the vision sensor 110.
- the second motion determination unit 140 may have separate configurations for detecting the movement direction using ON events and for detecting the movement direction using OFF events, and may determine the final movement direction by combining the direction detected using the ON events with the direction detected using the OFF events.
- FIG. 16 shows an example of configuring winner-take-all circuits according to the type of event received through the vision sensor.
- ON events of the reception field are input to the ON-event winner-take-all circuit 1610, and OFF events of the reception field are input to the OFF-event winner-take-all circuit 1620.
- the third motion determination unit 145 calculates the ratio of OFF events to ON events and compares the ratio with at least one reference value to determine whether the object is moving forward or backward.
- in a state where the amount of light is increasing, the third motion determination unit 145 determines that the object is moving forward when (occurrence frequency of ON events / occurrence frequency of OFF events) is greater than the first reference value, and that the object is moving backward when the ratio is less than the second reference value. At this time, the first reference value is greater than 1 and the second reference value is less than 1.
- in a state where the amount of light is decreasing, the third motion determination unit 145 determines that the object is moving backward when (occurrence frequency of ON events / occurrence frequency of OFF events) is greater than the first reference value, and that the object is moving forward when the ratio is less than the second reference value. At this time, the first reference value is greater than 1 and the second reference value is less than 1.
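The ON/OFF-ratio rule can be sketched as follows; the function name, the reference values, and the `light_increasing` flag (standing in for the increasing/decreasing light condition) are illustrative assumptions:

```python
def radial_motion(on_count, off_count, light_increasing=True,
                  first_ref=1.2, second_ref=0.8):
    """Classify forward/backward motion from the ON/OFF event ratio.
    first_ref > 1 and second_ref < 1, as in the description; the
    forward/backward mapping flips with the lighting condition."""
    ratio = on_count / off_count
    if light_increasing:
        if ratio > first_ref:
            return "forward"
        if ratio < second_ref:
            return "backward"
    else:
        if ratio > first_ref:
            return "backward"
        if ratio < second_ref:
            return "forward"
    return "none"
```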
- both the first motion determination unit 130 and the third motion determination unit 145 are used for small movements, and only one of the two may operate according to the user's setting.
- the motion control unit 150 may output a control command for controlling a device (not shown) by referring to the motion pattern determined by the first motion determination unit 130 or the movement direction determined by the second motion determination unit 140.
- the device may be located where wired/wireless communication with the motion recognition apparatus 100 is available, or the motion recognition apparatus 100 may be included in the device.
- FIG. 17 illustrates the operation flow of the motion recognition apparatus 100.
- in operation 1710, the vision sensor included in the motion recognition apparatus 100 outputs events corresponding to the portion where motion occurs.
- the motion recognition apparatus 100 checks the occurrence frequency of the events.
- the motion recognition apparatus 100 compares the occurrence frequency of the events with a preset threshold to determine whether the movement is a small movement, such as a finger movement.
- the movement may be determined to be a finger movement when the occurrence frequency of the events is less than the threshold, and a movement of the entire hand when the occurrence frequency is greater than or equal to the threshold.
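A sketch of this event-frequency classification; the threshold value and the names are illustrative assumptions:

```python
def movement_type(event_count, window_s, threshold_hz=5000.0):
    """Classify movement as 'small' (e.g. a finger) or 'large' (e.g.
    the whole hand) by comparing the event rate over a time window
    against a preset threshold."""
    rate = event_count / window_s
    return "small" if rate < threshold_hz else "large"
```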
- in operation 1716, the motion recognition apparatus 100 calculates the movement trajectory of the finger by tracking the position of the finger.
- the motion recognition apparatus 100 determines the motion pattern of the finger, that is, of the portion where the movement occurred, from the movement trajectory of the finger.
- in operation 1720, the motion recognition apparatus 100 tracks the movement direction of the hand.
- after operation 1718 or 1720, in operation 1722 the motion recognition apparatus 100 outputs a control command for controlling the device with reference to the determined motion pattern or the determined movement direction.
- FIG. 18 illustrates a flow of recognizing a gesture of a finger.
- the gesture recognition apparatus 100 receives an event from the vision sensor 110.
- the motion recognition apparatus 100 calculates the spatiotemporal correlations of the spatiotemporal correlators.
- the gesture recognition apparatus 100 checks whether a spatiotemporal correlator whose output is 1 exists.
- if a spatiotemporal correlator whose output is 1 exists, the gesture recognition apparatus 100 generates a cluster in operation 1816 by grouping the spatiotemporal correlators whose output is 1.
- the gesture recognition apparatus 100 sets a clustering area from the location of the cluster.
- the clustering region may be a predetermined region including the cluster, for example a region of predetermined size around the center of the cluster.
- the gesture recognition apparatus 100 calculates the center of the clustering region using the internal state values of the spatiotemporal correlators within the clustering region, and takes the center of the clustering region as the position of the moving object.
- otherwise, in operation 1820, the gesture recognition apparatus 100 recalculates the center of the previous clustering region using the internal state values of the spatiotemporal correlators within the previous clustering region, and takes the recalculated center as the position of the moving object. Here, the previous clustering region refers to the most recently set clustering region.
- in operation 1824, the gesture recognition apparatus 100 tracks the motion trajectory by linking the calculated position of the object with its previously calculated positions.
- the gesture recognition apparatus 100 extracts feature components for representing the motion pattern from the tracked motion trajectory.
- the feature components may include various parameters, such as the position of the cluster, the direction in which the cluster moves, and the movement angle of the cluster.
- the gesture recognition apparatus 100 compares the feature components with the stored motion patterns, determines the motion pattern of the portion where the movement occurred, and outputs a control command for controlling the device with reference to the determined motion pattern.
- FIG. 19 shows the flow of detecting the movement direction of a hand.
- in operation 1910, the gesture recognition apparatus 100 receives events from the vision sensor 110.
- in operation 1912, the gesture recognition apparatus 100 detects the movement direction of the object for each reception field by using the events output from the vision sensor 110.
- the reception field is preferably set to a circular reception field, to exclude directional bias caused by the shape of the reception field.
- the gesture recognition apparatus 100 detects the final movement direction, that is, the movement direction of the object relative to the vision sensor 110, using the direction information detected for each reception field.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Biodiversity & Conservation Biology (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Geophysics And Detection Of Objects (AREA)
- User Interface Of Digital Computer (AREA)
Claims (23)
- A motion recognition apparatus using an event-based vision sensor, comprising: a vision sensor configured to sense a portion where movement occurs and output events; a movement type determination unit configured to determine the type of movement using the occurrence frequency of the events output through the vision sensor; a first motion determination unit configured to, when the movement type determination unit determines the movement to be a small movement, track the movement trajectory of the portion where the movement occurred and determine a motion pattern according to the movement trajectory; a second motion determination unit configured to, when the movement type determination unit determines the movement to be a large movement, determine the direction in which the object moved using the events; and a motion control unit configured to output a control command for controlling a device with reference to the motion pattern determined by the first motion determination unit or the movement direction determined by the second motion determination unit.
- The apparatus of claim 1, wherein the movement type determination unit calculates the occurrence frequency of the events output through the vision sensor, compares the occurrence frequency with a preset threshold, determines the movement to be a small movement in which only a part of the object moves when the occurrence frequency is less than the threshold, and determines the movement to be a large movement in which the entire object moves when the occurrence frequency is greater than or equal to the threshold.
- The apparatus of claim 1, wherein the first motion determination unit comprises: spatiotemporal correlators, corresponding to preset reception fields of the vision sensor, configured to receive the events output from the reception fields and calculate a spatiotemporal correlation for each of the reception fields; a movement trajectory tracking unit configured to track the movement trajectory of the portion where the movement occurred using the high and low levels of the spatiotemporal correlations of the spatiotemporal correlators; and a motion pattern determination unit configured to determine the motion pattern of the portion where the movement occurred from the tracked movement trajectory.
- The apparatus of claim 3, wherein the reception field is a divided region of the vision sensor, and the reception field overlaps with neighboring reception fields.
- The apparatus of claim 3, wherein the movement trajectory tracking unit generates a cluster by grouping the spatiotemporal correlators having high spatiotemporal correlation, sets a predetermined region including the cluster as a clustering region, calculates the center of the clustering region as the position of the moving object, and tracks the movement trajectory by linking the position with the previously calculated positions of the object.
- The apparatus of claim 5, wherein, when there are no spatiotemporal correlators having high spatiotemporal correlation, the movement trajectory tracking unit recalculates the center of the previous clustering region using the internal state values of the spatiotemporal correlators within the previous clustering region, calculates the recalculated center of the previous clustering region as the position of the object, and tracks the movement trajectory by linking the position with the previously calculated positions of the object.
- The apparatus of claim 1, wherein the second motion determination unit comprises: a first direction detection filter configured to receive the events output from preset reception fields of the vision sensor and detect, for each of the reception fields, the direction in which the object moved; and a second direction detection filter configured to determine the final movement direction of the object using the movement directions detected for the reception fields.
- The apparatus of claim 7, wherein the reception field is a divided region of the vision sensor, overlaps with neighboring reception fields, and is circular.
- The apparatus of claim 7, wherein the first direction detection filter includes at least one winner-take-all circuit corresponding to each of the reception fields, and the winner-take-all circuit outputs the movement direction of the object in the corresponding reception field using as many neurons as the number of directions that the second motion determination unit can determine.
- The apparatus of claim 7, wherein the second direction detection filter includes a winner-take-all circuit configured to determine the final movement direction of the object using the movement directions detected for the reception fields, and the winner-take-all circuit outputs the final movement direction of the object from the movement directions detected for the reception fields using as many neurons as the number of directions that the second motion determination unit can determine.
- The apparatus of claim 7, wherein the second direction detection filter determines the final movement direction of the object by vector-summing the movement directions detected for the reception fields.
- The apparatus of claim 1, further comprising a third motion determination unit configured to, when the movement type determination unit determines the movement to be a small movement, classify the events output through the vision sensor into ON events and OFF events, calculate the ratio of the OFF events to the ON events, and compare the ratio with at least one preset reference value to determine whether the object is moving forward or backward.
- A motion recognition apparatus using an event-based vision sensor, comprising: a vision sensor configured to sense a portion where movement occurs and output events; a first direction detection filter configured to receive the events output from preset reception fields of the vision sensor and detect, for each of the reception fields, the direction in which the object moved; a second direction detection filter configured to determine the final movement direction of the object using the movement directions detected for the reception fields; and a motion control unit configured to output a control command for controlling a device with reference to the determined movement direction.
- The apparatus of claim 13, wherein the reception field is a divided region of the vision sensor, overlaps with neighboring reception fields, and is circular.
- The apparatus of claim 13, wherein the first direction detection filter includes at least one winner-take-all circuit corresponding to each of the reception fields, and the winner-take-all circuit outputs the movement direction of the object in the corresponding reception field using as many neurons as the number of directions that the second motion determination unit can determine.
- The apparatus of claim 13, wherein the second direction detection filter includes a winner-take-all circuit configured to determine the final movement direction of the object using the movement directions detected for the reception fields, and the winner-take-all circuit outputs the final movement direction of the object from the movement directions detected for the reception fields using as many neurons as the number of directions that the second motion determination unit can determine.
- A recognition method of a motion recognition apparatus, comprising: receiving, from a vision sensor, events corresponding to a portion where movement occurs; determining the type of movement using the occurrence frequency of the events; when the movement is determined to be a small movement, tracking the movement trajectory of the portion where the movement occurred and determining a motion pattern according to the movement trajectory; when the movement is determined to be a large movement, determining the direction in which the object moved using the events; and controlling a device with reference to the determined motion pattern or the determined movement direction.
- The method of claim 17, wherein determining the motion pattern comprises: receiving the events output from preset reception fields of the vision sensor and calculating the spatiotemporal correlations of spatiotemporal correlators corresponding to each of the reception fields; tracking the movement trajectory of the portion where the movement occurred using the high and low levels of the spatiotemporal correlations of the spatiotemporal correlators; and determining the motion pattern of the portion where the movement occurred from the tracked movement trajectory.
- The method of claim 18, wherein tracking the movement trajectory comprises generating a cluster by grouping the spatiotemporal correlators having high spatiotemporal correlation, setting a predetermined region including the cluster as a clustering region, calculating the center of the clustering region as the position of the moving object, and tracking the movement trajectory by linking the position with the previously calculated positions of the object.
- The method of claim 19, wherein tracking the movement trajectory comprises, when there are no spatiotemporal correlators having high spatiotemporal correlation, recalculating the center of the previous clustering region using the internal state values of the spatiotemporal correlators within the previous clustering region, calculating the recalculated center of the previous clustering region as the position of the object, and tracking the movement trajectory by linking the position with the previously calculated positions of the object.
- The method of claim 17, wherein determining the movement direction comprises: detecting, for each of preset reception fields of the vision sensor, the direction in which the object moved using the events output from the reception fields; and determining the final movement direction of the object using the movement directions detected for the reception fields.
- A recognition method of a motion recognition apparatus, comprising: receiving events output from preset reception fields of a vision sensor; detecting, for each of the reception fields, the direction in which the object moved; determining the final movement direction of the object using the movement directions detected for the reception fields; and controlling a device with reference to the determined movement direction.
- A motion recognition apparatus using an event-based vision sensor, comprising: a vision sensor configured to sense a portion where movement occurs and output events; and a plurality of direction detection filters having a hierarchical structure, configured to receive the events output from preset reception fields of the vision sensor and recognize a movement direction.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201280050556.0A CN103858075B (zh) | 2011-10-14 | 2012-10-12 | 利用基于事件的视觉传感器识别动作的设备和方法 |
EP12839526.6A EP2767889B1 (en) | 2011-10-14 | 2012-10-12 | Apparatus and method for recognizing motion by using event-based vision sensor |
US14/351,806 US9389693B2 (en) | 2011-10-14 | 2012-10-12 | Apparatus and method for recognizing motion by using an event-based vision sensor |
JP2014535648A JP5982000B2 (ja) | 2011-10-14 | 2012-10-12 | イベント基盤ビジョンセンサを用いた動作認識装置及び方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2011-0105341 | 2011-10-14 | ||
KR1020110105341A KR101880998B1 (ko) | 2011-10-14 | 2011-10-14 | 이벤트 기반 비전 센서를 이용한 동작 인식 장치 및 방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013055137A1 true WO2013055137A1 (ko) | 2013-04-18 |
Family
ID=48082099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2012/008287 WO2013055137A1 (ko) | 2011-10-14 | 2012-10-12 | 이벤트 기반 비전 센서를 이용한 동작 인식 장치 및 방법 |
Country Status (6)
Country | Link |
---|---|
US (1) | US9389693B2 (ko) |
EP (1) | EP2767889B1 (ko) |
JP (1) | JP5982000B2 (ko) |
KR (1) | KR101880998B1 (ko) |
CN (2) | CN103858075B (ko) |
WO (1) | WO2013055137A1 (ko) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015126672A1 (en) * | 2014-02-19 | 2015-08-27 | Enlighted, Inc. | Motion tracking |
EP2838069A3 (en) * | 2013-07-29 | 2015-10-07 | Samsung Electronics Co., Ltd | Apparatus and method for analyzing image including event information |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102022970B1 (ko) * | 2013-04-30 | 2019-11-04 | 삼성전자주식회사 | 시각 센서에 기반하여 공간 정보를 감지하는 장치 및 방법 |
KR102129916B1 (ko) * | 2013-07-29 | 2020-08-05 | 삼성전자주식회사 | 이벤트 정보를 포함하는 영상을 분석하는 장치 및 방법 |
KR102091473B1 (ko) | 2014-01-07 | 2020-04-14 | 삼성전자주식회사 | 이벤트 기반 센서를 이용한 피사체의 움직임 분석 방법 |
KR102224932B1 (ko) * | 2014-02-19 | 2021-03-08 | 삼성전자주식회사 | 비전 센서를 이용한 사용자 입력 처리 장치 및 사용자 입력 처리 방법 |
KR20150120124A (ko) * | 2014-04-17 | 2015-10-27 | 삼성전자주식회사 | 다이내믹 비전 센서 및 이를 포함하는 모션 인식 장치 |
US9761009B2 (en) * | 2014-10-20 | 2017-09-12 | Sesame Enable Ltd. | Motion tracking device control systems and methods |
KR102347249B1 (ko) | 2014-10-21 | 2022-01-04 | 삼성전자주식회사 | 외부 물체의 움직임과 연관된 이벤트에 응답하여 화면을 디스플레이하는 장치 및 방법 |
CN105844659B (zh) * | 2015-01-14 | 2019-04-26 | 北京三星通信技术研究有限公司 | 运动部件的跟踪方法和装置 |
CN105844128B (zh) * | 2015-01-15 | 2021-03-02 | 北京三星通信技术研究有限公司 | 身份识别方法和装置 |
KR102402678B1 (ko) | 2015-03-18 | 2022-05-26 | 삼성전자주식회사 | 이벤트 기반 센서 및 프로세서의 동작 방법 |
KR20160121287A (ko) * | 2015-04-10 | 2016-10-19 | 삼성전자주식회사 | 이벤트에 기반하여 화면을 디스플레이하는 방법 및 장치 |
KR102407274B1 (ko) | 2015-07-31 | 2022-06-10 | 삼성전자주식회사 | 임계 전압 제어 방법 및 임계 전압 제어 장치 |
KR102523136B1 (ko) | 2015-09-01 | 2023-04-19 | 삼성전자주식회사 | 이벤트 기반 센서 및 이벤트 기반 센서의 픽셀 |
KR102399017B1 (ko) * | 2015-11-16 | 2022-05-17 | 삼성전자주식회사 | 이미지 생성 방법 및 장치 |
CN105511631B (zh) * | 2016-01-19 | 2018-08-07 | 北京小米移动软件有限公司 | 手势识别方法及装置 |
CN105718056B (zh) * | 2016-01-19 | 2019-09-10 | 北京小米移动软件有限公司 | 手势识别方法及装置 |
US10198660B2 (en) * | 2016-01-27 | 2019-02-05 | Samsung Electronics Co. Ltd. | Method and apparatus for event sampling of dynamic vision sensor on image formation |
KR20180014992A (ko) | 2016-08-02 | 2018-02-12 | 삼성전자주식회사 | 이벤트 신호 처리 방법 및 장치 |
KR102621725B1 (ko) * | 2016-09-01 | 2024-01-05 | 삼성전자주식회사 | 데이터 출력 장치 |
US20180146149A1 (en) | 2016-11-21 | 2018-05-24 | Samsung Electronics Co., Ltd. | Event-based sensor, user device including the same, and operation method of the same |
KR102503543B1 (ko) | 2018-05-24 | 2023-02-24 | 삼성전자주식회사 | 다이나믹 비전 센서, 전자 장치 및 이의 데이터 전송 방법 |
EP3595295B1 (en) * | 2018-07-11 | 2024-04-17 | IniVation AG | Array of cells for detecting time-dependent image data |
JP7023209B2 (ja) | 2018-10-04 | 2022-02-21 | 株式会社ソニー・インタラクティブエンタテインメント | 電子機器、アクチュエータの制御方法およびプログラム |
JP7369517B2 (ja) | 2018-10-04 | 2023-10-26 | 株式会社ソニー・インタラクティブエンタテインメント | センサモジュール、電子機器、被写体の検出方法およびプログラム |
JP7251942B2 (ja) * | 2018-10-17 | 2023-04-04 | 株式会社ソニー・インタラクティブエンタテインメント | センサの校正システム、表示制御装置、プログラム、およびセンサの校正方法 |
JP7152285B2 (ja) | 2018-12-05 | 2022-10-12 | 株式会社ソニー・インタラクティブエンタテインメント | 電子機器、補正方法およびプログラム |
JP7175730B2 (ja) | 2018-12-05 | 2022-11-21 | 株式会社ソニー・インタラクティブエンタテインメント | 信号処理装置、電子機器、センサ装置、信号処理方法およびプログラム |
WO2020120782A1 (en) | 2018-12-13 | 2020-06-18 | Prophesee | Method of tracking objects in a scene |
JPWO2020255399A1 (ko) | 2019-06-21 | 2020-12-24 | ||
CN114073070A (zh) * | 2019-06-25 | 2022-02-18 | 索尼互动娱乐股份有限公司 | 系统、信息处理设备、信息处理方法和程序 |
EP3993398A4 (en) | 2019-06-26 | 2023-01-25 | Sony Interactive Entertainment Inc. | INFORMATION PROCESSING SYSTEM, DEVICE AND METHOD, AND PROGRAM |
EP3993403A4 (en) | 2019-06-27 | 2023-04-26 | Sony Interactive Entertainment Inc. | SENSOR CONTROL UNIT, SENSOR CONTROL METHOD AND PROGRAM |
EP4020962A4 (en) | 2019-08-20 | 2023-05-10 | Sony Interactive Entertainment Inc. | IMAGE PROCESSING DEVICE, IMAGE CAPTURE DEVICE, IMAGE PROCESSING METHOD AND PROGRAM |
WO2021033251A1 (ja) | 2019-08-20 | 2021-02-25 | 株式会社ソニー・インタラクティブエンタテインメント | 画像処理装置、画像処理方法およびプログラム |
WO2021033252A1 (ja) | 2019-08-20 | 2021-02-25 | 株式会社ソニー・インタラクティブエンタテインメント | 転送制御装置、画像処理装置、転送制御方法およびプログラム |
JP7280860B2 (ja) | 2020-11-17 | 2023-05-24 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置、システム、情報処理方法および情報処理プログラム |
WO2023188184A1 (ja) * | 2022-03-30 | 2023-10-05 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置、システム、情報処理方法、情報処理プログラム、およびコンピュータシステム |
WO2023188183A1 (ja) * | 2022-03-30 | 2023-10-05 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置、システム、情報処理方法、情報処理プログラム、およびコンピュータシステム |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030030232A (ko) * | 2001-10-09 | 2003-04-18 | 한국과학기술원 | 컴퓨터 시각을 기반한 연속적인 수화 인식 방법 및 시스템 |
WO2010008835A1 (en) * | 2008-06-23 | 2010-01-21 | Gesturetek, Inc. | Enhanced character input using recognized gestures |
KR20110022057A (ko) * | 2008-06-18 | 2011-03-04 | 오블롱 인더스트리즈, 인크 | 차량 인터페이스를 위한 제스처 기반 제어 시스템 |
KR20110040165A (ko) * | 2009-10-13 | 2011-04-20 | 한국전자통신연구원 | 비접촉 입력 인터페이싱 장치 및 이를 이용한 비접촉 입력 인터페이싱 방법 |
JP2011513847A (ja) * | 2008-02-27 | 2011-04-28 | ジェスチャー テック,インコーポレイテッド | 認識されたジェスチャーを用いた高機能化入力 |
JP2011170747A (ja) * | 2010-02-22 | 2011-09-01 | Brother Industries Ltd | 情報入力装置 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9025797D0 (en) * | 1990-11-28 | 1991-01-09 | Hitachi Europ Ltd | Motion target tracking system |
US5454043A (en) * | 1993-07-30 | 1995-09-26 | Mitsubishi Electric Research Laboratories, Inc. | Dynamic and static hand gesture recognition through low-level image analysis |
WO1996034332A1 (fr) * | 1995-04-28 | 1996-10-31 | Matsushita Electric Industrial Co., Ltd. | Dispositif d'interface |
JP3321053B2 (ja) | 1996-10-18 | 2002-09-03 | 株式会社東芝 | 情報入力装置及び情報入力方法及び補正データ生成装置 |
JP5048890B2 (ja) | 1998-10-13 | 2012-10-17 | ソニー エレクトロニクス インク | 動作検知インターフェース |
JP2001216069A (ja) | 2000-02-01 | 2001-08-10 | Toshiba Corp | 操作入力装置および方向検出方法 |
JP3732757B2 (ja) | 2001-06-08 | 2006-01-11 | 株式会社東芝 | 画像認識方法および画像認識装置 |
US10242255B2 (en) * | 2002-02-15 | 2019-03-26 | Microsoft Technology Licensing, Llc | Gesture recognition system using depth perceptive sensors |
JP2004094653A (ja) * | 2002-08-30 | 2004-03-25 | Nara Institute Of Science & Technology | 情報入力システム |
US7671916B2 (en) * | 2004-06-04 | 2010-03-02 | Electronic Arts Inc. | Motion sensor using dual camera inputs |
SG162756A1 (en) | 2005-06-03 | 2010-07-29 | Universitaet Zuerich | Photoarray for detecting time-dependent image data |
KR100642499B1 (ko) * | 2005-12-02 | 2006-11-10 | ATLab Inc. | Optical navigation device and operating method thereof |
JP4569555B2 (ja) * | 2005-12-14 | 2010-10-27 | Victor Company of Japan, Ltd. | Electronic device |
JP4607797B2 (ja) * | 2006-03-06 | 2011-01-05 | Toshiba Corp. | Behavior discrimination apparatus, method, and program |
JP4579191B2 (ja) * | 2006-06-05 | 2010-11-10 | Honda Motor Co., Ltd. | Collision avoidance system, program, and method for moving body |
JP4650381B2 (ja) * | 2006-09-08 | 2011-03-16 | Victor Company of Japan, Ltd. | Electronic device |
JP4992618B2 (ja) * | 2007-09-05 | 2012-08-08 | Casio Computer Co., Ltd. | Gesture recognition apparatus and gesture recognition method |
CN101436301B (zh) * | 2008-12-04 | 2012-01-18 | Shanghai University | Detection method for characteristic motion regions in video coding |
JP5483899B2 (ja) * | 2009-02-19 | 2014-05-07 | Sony Computer Entertainment Inc. | Information processing apparatus and information processing method |
US20110223995A1 (en) * | 2010-03-12 | 2011-09-15 | Kevin Geisner | Interacting with a computer based application |
US20110254765A1 (en) * | 2010-04-18 | 2011-10-20 | Primesense Ltd. | Remote text input using handwriting |
US8698092B2 (en) * | 2010-09-10 | 2014-04-15 | Samsung Electronics Co., Ltd. | Method and apparatus for motion recognition |
KR101792866B1 (ko) * | 2011-04-06 | 2017-11-20 | Samsung Electronics Co., Ltd. | Motion recognition apparatus using event sensor and color sensor, and method thereof |
- 2011
  - 2011-10-14 KR KR1020110105341A patent/KR101880998B1/ko active IP Right Grant
- 2012
  - 2012-10-12 CN CN201280050556.0A patent/CN103858075B/zh active Active
  - 2012-10-12 EP EP12839526.6A patent/EP2767889B1/en active Active
  - 2012-10-12 WO PCT/KR2012/008287 patent/WO2013055137A1/ko active Application Filing
  - 2012-10-12 US US14/351,806 patent/US9389693B2/en active Active
  - 2012-10-12 CN CN201710325207.XA patent/CN107066117A/zh not_active Withdrawn
  - 2012-10-12 JP JP2014535648A patent/JP5982000B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030030232A (ko) * | 2001-10-09 | 2003-04-18 | Korea Advanced Institute of Science and Technology | Method and system for continuous sign language recognition based on computer vision |
JP2011513847A (ja) * | 2008-02-27 | 2011-04-28 | Gesturetek, Inc. | Enhanced input using recognized gestures |
KR20110022057A (ko) * | 2008-06-18 | 2011-03-04 | Oblong Industries, Inc. | Gesture-based control system for vehicle interfaces |
WO2010008835A1 (en) * | 2008-06-23 | 2010-01-21 | Gesturetek, Inc. | Enhanced character input using recognized gestures |
KR20110040165A (ko) * | 2009-10-13 | 2011-04-20 | Electronics and Telecommunications Research Institute | Contactless input interfacing apparatus and contactless input interfacing method using the same |
JP2011170747A (ja) * | 2010-02-22 | 2011-09-01 | Brother Industries Ltd | Information input device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2838069A3 (en) * | 2013-07-29 | 2015-10-07 | Samsung Electronics Co., Ltd | Apparatus and method for analyzing image including event information |
US9767571B2 (en) | 2013-07-29 | 2017-09-19 | Samsung Electronics Co., Ltd. | Apparatus and method for analyzing image including event information |
WO2015126672A1 (en) * | 2014-02-19 | 2015-08-27 | Enlighted, Inc. | Motion tracking |
CN106030466A (zh) * | 2014-02-19 | 2016-10-12 | Enlighted, Inc. | Motion tracking |
US9671121B2 (en) | 2014-02-19 | 2017-06-06 | Enlighted, Inc. | Motion tracking |
US10520209B2 (en) | 2014-02-19 | 2019-12-31 | Enlighted, Inc. | Motion tracking |
Also Published As
Publication number | Publication date |
---|---|
JP5982000B2 (ja) | 2016-08-31 |
US20140320403A1 (en) | 2014-10-30 |
EP2767889A4 (en) | 2016-10-19 |
EP2767889B1 (en) | 2023-09-13 |
KR101880998B1 (ko) | 2018-07-24 |
EP2767889A1 (en) | 2014-08-20 |
CN103858075A (zh) | 2014-06-11 |
CN103858075B (zh) | 2017-06-06 |
US9389693B2 (en) | 2016-07-12 |
JP2014535098A (ja) | 2014-12-25 |
KR20130040517A (ko) | 2013-04-24 |
CN107066117A (zh) | 2017-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013055137A1 (ko) | Motion recognition apparatus and method using event-based vision sensor | |
WO2013051752A1 (ko) | Touch sensing apparatus and method | |
WO2018131884A1 (en) | Moving robot and control method thereof | |
WO2012053792A2 (ko) | Input device and method for detecting touch position of the device | |
WO2016052876A1 (en) | Display apparatus and controlling method thereof | |
KR20120113847A (ko) | Motion recognition apparatus using event sensor and color sensor, and method thereof | |
WO2016182181A1 (ko) | Wearable device and feedback providing method of wearable device | |
WO2020153810A1 (en) | Method of controlling device and electronic device | |
WO2017135774A1 (ko) | Touch input device | |
WO2020059939A1 (ko) | Artificial intelligence device | |
WO2017126741A1 (ko) | HMD device and control method thereof | |
WO2016129923A1 (ko) | Display apparatus, display method, and computer-readable recording medium | |
WO2009157654A2 (ko) | Moving touch sensing method, apparatus, and recording medium on which a program for executing the method is recorded | |
WO2015182811A1 (ko) | Apparatus and method for providing a user interface | |
WO2014051362A1 (ko) | Proximity sensor and proximity sensing method using event-based vision sensor | |
WO2016192438A1 (zh) | Activation method for a motion-sensing interaction system, and motion-sensing interaction method and system | |
WO2016195308A1 (ko) | Sensitivity correction method for a touch input device detecting touch pressure, and computer-readable recording medium | |
WO2012093873A2 (ko) | Touch position detection method for a touchscreen and touchscreen using the method | |
WO2020153657A1 (ko) | Touch device and touch detection method thereof | |
WO2017164582A1 (ko) | Mobile terminal facilitating screen capture and screen capture method | |
WO2017135707A9 (ko) | Touch pressure sensitivity correction method and computer-readable recording medium | |
WO2018124624A1 (en) | Method, device, and system for processing multimedia signal | |
WO2021049730A1 (ko) | Electronic device for training image recognition model and operation method thereof | |
WO2022086161A1 (ko) | Robot and control method thereof | |
WO2020171510A1 (ko) | Electronic device including touch circuit and operation method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12839526; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2014535648; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 14351806; Country of ref document: US |
| | WWE | Wipo information: entry into national phase | Ref document number: 2012839526; Country of ref document: EP |