KR20160141023A - The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents - Google Patents

The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents Download PDF

Info

Publication number
KR20160141023A
Authority
KR
South Korea
Prior art keywords
gesture
unit
smart
region
triangle
Prior art date
Application number
KR1020150073637A
Other languages
Korean (ko)
Inventor
박구만
전지혜
양지희
박종화
김경만
Original Assignee
주식회사 에보시스
동신대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에보시스 and 동신대학교산학협력단
Priority to KR1020150073637A
Publication of KR20160141023A

Links

Images

Classifications

    • G06K9/00335
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G06K9/00342

Abstract

Conventional gesture recognition devices include data gloves and vision-based recognizers; however, because a separate transceiver and camera must be installed, cost is high and utilization is low. Gesture types are so varied that dynamic and static gestures are recognized differently, keeping the gesture recognition rate low, and a non-contact gesture recognition technology that can interlock with realistic media content is urgently required. To solve these problems, the present invention comprises a triangle-type infrared sensor unit (100), a gesture camera unit (200), and a smart gesture control unit (300). A point scan can be performed through a triangle-type infrared ray consisting of a first infrared point, a second infrared point, and a third infrared point, so that the initial gesture scan speed is improved by 70% compared with conventional methods. The gesture made by a person's current movement is compared and analyzed against a reference gesture model through the smart gesture control unit, so a specific gesture can be recognized in real time. Gestures can be recognized not only on a general small screen, such as a PC, but also on a large screen, increasing the applicability and range of use of the apparatus by 80%. After a specific gesture is recognized, an action event or a media content interlocking function can be controlled through gestures alone, and the technology can be extended to various industrial fields such as educational content, media art, exhibition works, and publicity, helping to build the non-contact gesture recognition market. The object of the invention is to provide a non-contact smart gesture recognition apparatus and method that performs action events and realistic media content interlocking.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention. The present invention relates to a contactless smart gesture recognition apparatus and method for performing action event and realistic media content interlocking.

In the present invention, only the gesture made by a person's movement is point-focused through the triangle-type infrared sensor unit, the gesture camera unit, and the smart gesture control unit; the gesture made by the person's current movement is then compared with a reference gesture model so that a specific gesture can be recognized in real time. The invention thus concerns a non-contact smart gesture recognition apparatus and method that controls an action event or a media content interlocking function on a device through a person's movement alone, without attaching a separate gesture recognition device to the body.

User interface (UI) technology for computers was popularized with tools such as the mouse and keyboard; since then, touching 2D images through a screen and motion recognition devices such as the Nintendo Wii and Microsoft Kinect have become commercialized, offering convenience and immersion.

Depending on the application, however, the device may be a certain distance away from the body, or it may be difficult for the hand to reach the mouse or the screen.

In this case, it is necessary to transmit commands to a computer or a smart device through a hand gesture.

As conventional gesture recognition devices, data gloves and vision-based recognizers have been proposed; however, a separate transceiver and camera must be installed, so cost is high and utilization is low.

In addition, because there are so many kinds of gestures, no standardized gesture reference model has been proposed, and the recognition rate suffers from the differences between dynamic and static gesture recognition. In particular, to activate the non-contact gesture recognition market, it is necessary to develop a non-contact gesture recognition technology that can interlock with realistic media content.

Patent Registration No. 10-1114989 (published on Mar. 06, 2012)

To achieve the above object, in the present invention,

The gesture made by a person's movement can be focus-scanned, with the body as a reference point, through a triangle-type infrared ray consisting of a first infrared point, a second infrared point, and a third infrared point, and can be compared and analyzed against a reference gesture model through a smart gesture control unit so that a specific gesture is recognized in real time; after a specific gesture is recognized, control of an action event or a media content interlocking function on a device is realized through gestures alone. The present invention is directed to such a non-contact smart gesture recognition apparatus and method.

In order to attain the above object, the gesture recognition and realistic media content interface apparatus using a depth camera according to the present invention comprises:

an initial motion input unit 100 which inputs a person's hand motion using a depth camera and measures the three-dimensional data flow in real time by geometric analysis to obtain the position and direction of the motion;

a gesture determination unit 200 which generates a depth map so that the gesture type can be determined mathematically, covering static gestures, which recognize the type and shape of a finger and the rotation and direction of a hand, and dynamic gestures, which are recognized from the movement of the arm;

a gesture recognition unit 300 which reads the coordinates, shape, and motion of the corresponding expression within the gesture type classified by the gesture determination unit; and

a content interlocking unit 400 interfaced with the media content so as to perform the operation corresponding to the commanded gesture, or to display the result, according to the gesture recognition result.

In addition, the contactless smart gesture recognition method for performing action event and realistic media content interlocking according to the present invention comprises:

a step (S100) of shooting a triangle-shaped infrared ray through the triangle-type infrared sensor unit and focus-scanning the human body with the triangle shape as a reference point;

a step (S200) of acquiring a gesture image of a person's movement by capturing, through the gesture camera unit, the triangle-type infrared ray reflected back from the human body;

a step (S300) of extracting, in the gesture region extraction unit of the smart gesture control unit, only the gesture region, excluding the background region, from the gesture image acquired by the gesture camera unit;

a step (S400) in which the smart gesture recognition unit of the smart gesture control unit classifies the extracted gesture region into a static gesture region or a dynamic gesture region, then compares and analyzes the classified regions against the reference gesture model and recognizes which specific gesture is matched;

a step (S500) in which the action event control unit of the smart gesture control unit fetches the action event preset for the specific gesture and outputs an action event control signal to the device; and

a step (S600) in which the realistic media content interface unit of the smart gesture control unit interfaces the realistic media content preset for the specific gesture and displays it on the device side.

As described above, according to the present invention, the initial gesture scan speed can be improved by 70% compared with conventional methods, and the smart gesture control unit can compare the gesture made by a person's current movement with the reference gesture model to recognize a specific gesture in real time. Because gestures can be recognized on a large screen as well as on a general small screen such as a PC, the applicability and range of use of the equipment increase by 80%. By providing an interaction technology that can interlock with devices in fields such as educational content, media art, exhibition works, and publicity, the invention improves the user experience and increases the efficiency of the system.

FIG. 1 is a block diagram showing the components of a non-contact smart gesture recognition apparatus 1 performing action event and realistic media content interlocking according to the present invention.
FIG. 2 is a block diagram showing the components of the triangle-type infrared sensor unit according to the present invention.
FIG. 3 is an internal cross-sectional view showing the components of the gesture camera unit according to the present invention.
FIG. 4 is a block diagram showing the components of the smart gesture control unit according to the present invention.
FIG. 5 is a block diagram showing the components of the smart gesture recognition unit according to the present invention.
FIG. 6 shows, in one embodiment, (a) a gesture image acquired from the gesture camera unit, (b) extraction of the gesture region through the RGB histogram engine module, and (c) generation of a three-dimensional X-, Y-, and Z-axis histogram based on H (hue), S (saturation), and V (value) through the HSV histogram generator according to the present invention.
FIG. 7 is a flowchart showing, in one embodiment, gesture patterns according to a user's movement being set in database form as the reference gesture model through the reference gesture modeling setting unit according to the present invention.
FIG. 8 shows, in one embodiment, the first infrared point of the first infrared point sensor unit, the second infrared point of the second infrared point sensor unit, and the third infrared point of the third infrared point sensor unit being varied to form one triangle shape, after which the human body is focus-scanned with the triangle shape as a reference point.
FIG. 9 is a flowchart showing an action event preset for the specific gesture recognized by the smart gesture recognition unit being called through the action event control unit according to the present invention and an action event control signal being output to the device.
FIG. 10 is a flowchart showing the contactless smart gesture recognition method performing action event and realistic media content interlocking according to the present invention.

Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.

FIG. 1 is a block diagram showing the components of a contactless smart gesture recognition apparatus 1 for performing action event and realistic media content interlocking according to an embodiment of the present invention, which includes a triangle-type infrared sensor unit (100), a gesture camera unit (200), and a smart gesture controller (300).

First, the triangle-type infrared sensor unit 100 according to the present invention will be described.

The triangle-type infrared sensor unit 100 is installed at one side of the apparatus; it shoots a triangle-type infrared ray consisting of a first infrared point, a second infrared point, and a third infrared point toward the front and focus-scans the human body with the triangle shape as a reference point.

As shown in FIG. 2, it consists of a first infrared point sensor unit 110, a second infrared point sensor unit 120, a third infrared point sensor unit 130, and a triangle scan control unit 140.

The first infrared point sensor unit 110 forms the first infrared point by projecting infrared light onto one side of the human body.

It consists of a straight-line infrared projector, with a rotary motor at one side of its rear end that rotates the projector from 1° to 360°.

The second infrared point sensor unit 120 forms the second infrared point by projecting infrared light onto another side of the human body.

It likewise consists of a straight-line infrared projector, with a rotary motor at one side of its rear end that rotates the projector from 1° to 360°.

The third infrared point sensor unit 130 forms the third infrared point by projecting infrared light onto a further side of the human body.

It likewise consists of a straight-line infrared projector, with a rotary motor at one side of its rear end that rotates the projector from 1° to 360°.

The triangle scan control unit 140 varies the first infrared point of the first infrared point sensor unit, the second infrared point of the second infrared point sensor unit, and the third infrared point of the third infrared point sensor unit to form one triangle shape, and controls the focus scan of the human body with the triangle shape as a reference point.

Here, the human body includes all of the face, arms, fingers, torso, and legs.

And, the movement of a person includes all things that move from a person's face, arms, fingers, trunk, and legs.
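By way of illustration, the reference point for the focus scan can be taken as the centroid of the triangle formed by the three infrared points. A minimal sketch, assuming each point is available as a 3D coordinate; the helper name and the centroid choice are assumptions, since the patent does not spell out the geometry:

```python
import numpy as np

def triangle_reference_point(p1, p2, p3):
    """Centroid of the triangle formed by the three infrared points,
    used here as the focus-scan reference point (an assumption)."""
    return np.mean(np.array([p1, p2, p3], dtype=float), axis=0)

# Example following the FIG. 8 layout: face, right-hand finger,
# left-hand finger (coordinates in metres, camera frame; illustrative).
face = (0.00, 0.40, 1.20)
right_finger = (0.35, 0.00, 1.15)
left_finger = (-0.35, 0.00, 1.10)
print(triangle_reference_point(face, right_finger, left_finger))
```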

The human body is scanned with the triangle shape as a reference point because, when a person's movement is photographed with a camera alone, the background is captured together with the person; the space occupied by frames in the background region delays gesture extraction and lowers the gesture recognition rate.

That is, the gesture part corresponding to the person's movement, whether face, arm, finger, torso, or leg, is made the reference point of the triangle shape, so the scan can be focused on any one of the face, arm, and finger.

For example, as shown in FIG. 8, when the face and fingers are set as the gesture parts for a person's movement, the face is set as the first infrared point of the first infrared point sensor unit, the right-hand finger as the second infrared point of the second infrared point sensor unit, and the left-hand finger as the third infrared point of the third infrared point sensor unit, forming one triangular infrared ray.

At this time, the face, left-hand finger, and right-hand finger on which the triangle-shaped infrared ray is formed are made the reference points for the focus scan.

Then, the gesture camera unit photographs the focus-scanned face, left-hand finger, and right-hand finger.

Next, the gesture camera unit 200 according to the present invention will be described.

The gesture camera unit 200 is located at one side of the triangle-type infrared sensor unit; it captures the triangle-type infrared ray reflected back from the human body and acquires a gesture image of the person's movement.

It consists of a depth camera.

The depth camera is a device that acquires depth information about the scene or object being photographed in order to produce a stereoscopic image; it calculates the depth of the object from the return time of the infrared rays generated by the triangle-type infrared sensor unit.

At this time, the obtained depth information has higher accuracy than the depth information obtained by the stereo matching method.
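The depth computation itself reduces to time-of-flight arithmetic: the infrared ray travels to the object and back, so depth is half the round-trip distance. A minimal sketch, assuming the camera reports the round-trip time of each reflected pulse:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth from the return time of the reflected infrared ray:
    halve the round-trip path length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~8 ns corresponds to a depth of ~1.2 m.
print(f"{tof_depth(8e-9):.3f} m")
```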

The gesture camera unit according to the present invention includes a 3D coordinate setting unit 210, as shown in FIG.

When a gesture image of a person's movement is acquired, the 3D coordinate setting unit expresses each axis of the gesture position as coordinates running from a minus (-) value to a plus (+) value relative to the front of the camera: the X axis captures front and back gestures, the Y axis up and down gestures, and the Z axis left and right gestures.

As a result, noise generated in the gesture image is removed, and an accurate gesture image composed of the X axis, the Y axis, and the Z axis can be obtained.
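A sketch of how such signed coordinates could be derived from a depth frame, assuming a standard pinhole back-projection with known camera intrinsics (fx, fy, cx, cy); the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def to_signed_gesture_coords(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its measured depth into the signed
    axes described above: X front/back (distance before the camera),
    Y up/down, Z left/right, each running from minus to plus values."""
    x = depth_m                        # front/back of the camera
    y = -(v - cy) * depth_m / fy       # image v grows downward, so flip
    z = (u - cx) * depth_m / fx        # left (-) to right (+)
    return np.array([x, y, z])
```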

Next, the smart gesture controller 300 according to the present invention will be described.

The smart gesture control unit 300 extracts only the gesture region, excluding the background region, from the gesture image acquired by the gesture camera unit, recognizes a specific gesture by comparing and analyzing the extracted region against the reference gesture model, and controls the device to output an action event or perform media content interlocking.

As shown in FIG. 4, it consists of a gesture region extraction unit 310, an HSV histogram generation unit 320, a reference gesture modeling setting unit 330, a smart gesture recognition unit 340, an action event control unit 350, and a realistic media content interface unit 360.

First, a gesture region extracting unit 310 according to the present invention will be described.

The gesture region extraction unit 310 extracts only the gesture region, excluding the background region, from the gesture image acquired by the gesture camera unit, using a preset RGB histogram.

It is configured to include the RGB histogram engine module 311.

As shown in FIG. 6, the RGB histogram engine module 311 extracts only the gesture region excluding the background region through the RGB histogram.

The RGB histogram engine module constructs a three-dimensional X-, Y-, and Z-axis histogram of the skin color and background color as prior information on skin color.

That is, when continuous color detection is performed using RGB color, the gesture image acquired from the gesture camera unit can be used as-is, which improves execution speed.

The RGB histogram engine module is performed through the following process.

First, only the gesture region excluding the background region is masked through the RGB histogram of the input gesture image region.

At this time, the mask is set to a semi-circular shape.

Then, the noise generated in the masked gesture region is filtered and removed through the filtering unit.

Finally, the filtered RGB gesture region is passed to the HSV histogram setting unit.
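A minimal OpenCV sketch of this masking-and-filtering sequence, assuming the skin-color prior arrives as an image of sampled skin pixels; the bin counts and threshold are assumed values:

```python
import cv2
import numpy as np

def extract_gesture_region(frame_bgr, skin_samples_bgr):
    """Mask the gesture region via RGB-histogram back-projection,
    filter the noise morphologically, and return the masked frame."""
    hist = cv2.calcHist([skin_samples_bgr], [0, 1, 2], None,
                        [16, 16, 16], [0, 256, 0, 256, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    backproj = cv2.calcBackProject([frame_bgr], [0, 1, 2], hist,
                                   [0, 256, 0, 256, 0, 256], scale=1)
    _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # noise filtering
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
```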

Second, the HSV histogram generator 320 according to the present invention will be described.

The HSV histogram generator 320 generates the HSV histogram using the color information in the gesture region extracted by the gesture region extracting unit.

This converts the RGB gesture region generated by the gesture region extraction unit into the HSV gesture region.

As shown in FIG. 6, the HSV histogram generator generates a three-dimensional histogram of X-axis, Y-axis, and Z-axis based on H (color), S (saturation), and V (brightness).
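A short sketch of the conversion, assuming the extracted region is a BGR image; the bin counts are illustrative (note that OpenCV stores hue on a 0-179 scale):

```python
import cv2

def hsv_histogram(gesture_region_bgr, bins=(30, 32, 32)):
    """Convert the RGB (BGR) gesture region to HSV and bin H, S, V
    into a three-dimensional histogram, as in FIG. 6(c)."""
    hsv = cv2.cvtColor(gesture_region_bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
```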

Third, the reference gesture modeling setting unit 330 according to the present invention will be described.

As shown in FIG. 7, the reference gesture modeling setting unit 330 sets the gesture pattern according to the movement of a person as a DB, and sets the gesture pattern as a reference gesture model in advance.

All movements of a person's face, arms, fingers, torso, and legs are modeled as gestures.

In the present invention, the face and the finger are set as the reference gesture model.

That is, each gesture image is converted into a binary image $B_{i,j,n}(x,y)$, in which a pixel belonging to the gesture region (white) has the value 1 and a pixel belonging to the background region (black) has the value 0.

After constructing $N_i$ images for each of the $J$ gesture patterns, a reference gesture model is created by averaging them as in Equation (1):

$$DM_{i,j}(x,y) = \frac{1}{N_i} \sum_{n=1}^{N_i} B_{i,j,n}(x,y) \qquad (1)$$

Here, $DM_{i,j}$ is the model for the $j$-th gesture pattern of the $i$-th user, and $B_{i,j,n}(x,y)$ denotes the $n$-th gesture image of that pattern, taking the value 1 when pixel $(x,y)$ belongs to the gesture region and 0 when it belongs to the background region.

$DM_{i,j}$ corresponds to a user-dependent model, since a different model is created for each user.

A user-independent model is therefore constructed as in Equation (2):

$$IM_j(x,y) = \frac{1}{U} \sum_{i=1}^{U} DM_{i,j}(x,y) \qquad (2)$$

where $U$ is the number of users. $IM_j$ is the user-independent model obtained by averaging over the gesture images of all users.
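In code, both averages are element-wise means over stacked binary images. A minimal sketch under the notation above, assuming each gesture image is supplied as a NumPy array of 0s and 1s:

```python
import numpy as np

def user_dependent_model(binary_images):
    """DM_{i,j}: element-wise mean of user i's N_i binary gesture images
    (1 = gesture pixel, 0 = background) for pattern j, as in Eq. (1)."""
    return np.mean(np.stack(binary_images), axis=0)

def user_independent_model(user_models):
    """IM_j: element-wise mean of the user-dependent models DM_{i,j}
    over all U users, as in Eq. (2)."""
    return np.mean(np.stack(user_models), axis=0)
```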

Fourth, the smart gesture recognition unit 340 according to the present invention will be described.

The smart gesture recognition unit 340 classifies the extracted gesture region into a static gesture region or a dynamic gesture region, then compares and analyzes the classified static and dynamic gesture regions against the reference gesture model and recognizes which specific gesture is matched.

As shown in FIG. 5, it consists of a gesture classifying unit 341, a gesture comparison and analysis unit 342, and a gesture recognition unit 343.

The gesture classifying unit 341 classifies the extracted gesture region into a static gesture region or a dynamic gesture region.

The classification standard first divides gestures into facial, arm, hand, finger, body, and leg movements; a region that does not move is then set as a static gesture region, and a region that moves as a dynamic gesture region.

The gesture comparison and analysis unit 342 compares and analyzes the reference gesture model with the static gesture region and the dynamic gesture region classified through the gesture classifying unit.

Here, the reference gesture model covers all movements of a person's face, arms, fingers, torso, and legs.

The gesture recognition unit 343 recognizes which gesture type the specific gesture compared and analyzed by the gesture comparison and analysis unit matches.

Here, the gesture types are stored in advance as a SETUP table according to the kind of gesture (facial motion, arm motion, hand motion, finger motion, body motion, leg motion) and the number of gestures.
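A simplified sketch of the classify-then-match flow, assuming binary region frames and the averaged reference models from Equations (1) and (2); the motion threshold and the mean-absolute-difference score are assumptions, as the patent does not name a distance measure:

```python
import numpy as np

MOTION_THRESHOLD = 0.02  # fraction of pixels changed between frames; assumed

def classify_region(frames):
    """Label a tracked gesture region 'static' or 'dynamic' from
    inter-frame change, per the classification standard above."""
    if len(frames) < 2:
        return "static"
    diffs = [np.mean(a != b) for a, b in zip(frames, frames[1:])]
    return "dynamic" if max(diffs) > MOTION_THRESHOLD else "static"

def best_matching_gesture(region, reference_models):
    """Compare an observed binary region with each reference gesture
    model (e.g. the IM_j averages) and return the closest match by
    mean absolute difference."""
    scores = {name: float(np.mean(np.abs(region - model)))
              for name, model in reference_models.items()}
    return min(scores, key=scores.get)
```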

Fifth, an action event control unit 350 according to the present invention will be described.

The action event control unit 350 invokes a predetermined action event according to a specific gesture based on the specific gesture recognized by the smart gesture recognition unit and outputs an action event control signal to the device.

As shown in FIG. 9, each non-contact specific gesture recognized by the smart gesture recognition unit calls its preset action event and outputs the corresponding action event control signal to the device: a "one-point click" calls the action event "select location"; a "two-point double-click" calls "enlarge (fixed value)"; a "two-point click", "three-point click", or "four-point click" calls "reduce (fixed value)"; a "one-point double-click" calls its matching fixed-value action event; a "one-point move" calls "move"; a "three-point move up" calls "3D space forward"; and a "three-point move down" calls "3D space backward".

The gesture types shown in FIG. 9 are non-contact specific gestures, that is, gestures formed by the user's movement in front of the gesture camera unit.
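A minimal sketch of such a lookup, using only the gesture-to-event pairs recoverable from FIG. 9; the dictionary form and the no-op fallback are assumptions:

```python
# Gesture-to-action-event pairs recoverable from FIG. 9.
ACTION_EVENTS = {
    "one-point click": "select location",
    "two-point double-click": "enlarge (fixed value)",
    "two-point click": "reduce (fixed value)",
    "three-point click": "reduce (fixed value)",
    "four-point click": "reduce (fixed value)",
    "one-point move": "move",
    "three-point move up": "3D space forward",
    "three-point move down": "3D space backward",
}

def action_event_control(gesture: str) -> str:
    """Fetch the preset action event for a recognized gesture and
    return the control signal sent to the device (step S500)."""
    return ACTION_EVENTS.get(gesture, "no-op")
```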

Sixth, the realistic media content interface unit 360 according to the present invention will be described.

The realistic media content interface unit 360 interfaces the realistic media content preset for the specific gesture recognized by the smart gesture recognition unit, so that the media content corresponding to that gesture is displayed on the device side.

It consists of a gestural user interface.

The gestural user interface 361 interfaces the realistic media content to a device having a display function by using body movement.

For example, pointing is quite straightforward because it uses the fingers directly, and because the interface uses familiar movements as gestures, even a first-time user can adopt it easily.

The gesture user interface according to the present invention covers both categories: interfaces using a touch screen and interfaces using free body motion.

It is configured according to context, and gesture user interface control is based on the user's intended movement.

It is also designed with knowledge of where each gesture scheme works best and where it works worst, so the user can select the input method that takes minimal effort in a given situation.

New interactions can also be added or updated.

The realistic media contents include animation characters, avatars, game characters, and the like.

As shown in Table 1, the realistic media content interface unit 360 according to the present invention includes a realistic media content DB unit 362 storing the content matched to each specific gesture.

When the specific gesture recognized by the smart gesture recognition unit is "move forward, move backward", the realistic media content interface unit 360 calls the realistic media content "character forward, backward" from the realistic media content DB unit 362 and interfaces it so that it is expressed on the device side. In the same way, a "right hand movement" calls "change character view"; a "jump" calls "character jump"; a "sit" calls "character sitting"; a "right-hand swing" calls "attack motion"; a "left hand extended ↓ move" calls "character temporary end"; and a "right hand extended ↑ move" calls "character game end".

Likewise, a "sparring posture (left hand and right hand overlapping)" calls the realistic media content "character sparring posture"; "crossing the left and right hands in an 'X' shape" calls "character shield formation"; and "extending the left hand and moving it in a straight line" calls "character knife motion"; each is called from the realistic media content DB unit 362, interfaced, and expressed on the device side.

Gesture type | Realistic media content
Move forward, move backward | Character forward, backward
Right hand movement | Change character view
Jump | Character jump
Sit | Character sitting
Right-hand swing | Attack motion
Left hand extended ↓ move | Character temporary end
Right hand extended ↑ move | Character game end
Sparring posture (left hand, right hand overlapping) | Character sparring posture
Crossing the left and right hands in an "X" | Character shield formation
Left hand extended, straight-line move | Character knife motion
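A parallel sketch for the content interface, mapping a few of the Table 1 gestures to their content commands; the display handler and its render method are placeholders:

```python
# A subset of the gesture-to-content pairs in Table 1.
CONTENT_COMMANDS = {
    "move forward, move backward": "character forward, backward",
    "right hand movement": "change character view",
    "jump": "character jump",
    "sit": "character sitting",
    "sparring posture": "character sparring posture",
    "cross left and right hands in an X": "character shield formation",
}

def interface_media_content(gesture: str, display) -> None:
    """Call the realistic media content preset for the gesture and
    hand it to the device-side display (step S600)."""
    command = CONTENT_COMMANDS.get(gesture)
    if command is not None:
        display.render(command)
```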

Hereinafter, the non-contact smart gesture recognition method performing action event and realistic media content interlocking according to the present invention will be described.

First, as shown in FIG. 10, a triangle-shaped infrared ray is shot through the triangle-type infrared sensor unit, and the human body is focus-scanned with the triangle shape as a reference point (S100).

Next, a gesture image of the person's movement is obtained by capturing, through the gesture camera unit, the triangle-type infrared ray reflected back from the human body (S200).

Next, in the gesture region extraction unit of the smart gesture control unit, only the gesture region excluding the background region is extracted from the gesture image acquired from the gesture camera unit (S300).

Next, the smart gesture recognition unit of the smart gesture control unit classifies the extracted gesture region into a static gesture region or a dynamic gesture region, then compares and analyzes the classified regions against the reference gesture model and recognizes which specific gesture is matched (S400).

Next, the action event control unit of the smart gesture control unit calls an action event preset in accordance with the specific gesture, and outputs an action event control signal to the device (S500).

Finally, the realistic media content interface unit of the smart gesture control unit interfaces the realistic media content preset for the specific gesture and displays it on the device side (S600).
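Taken together, steps S100 to S600 form a pipeline. A high-level sketch, in which every object and method name is hypothetical and stands in for the units described above:

```python
def run_gesture_pipeline(ir_sensor, camera, controller, device):
    """High-level sketch of steps S100-S600; all names are
    placeholders for the units described in this method."""
    ir_sensor.focus_scan()                              # S100: triangle IR scan
    image = camera.capture_reflection()                 # S200: gesture image
    region = controller.extract_gesture_region(image)   # S300: drop background
    gesture = controller.recognize(region)              # S400: match vs. model
    device.apply(controller.action_event(gesture))      # S500: control signal
    device.display(controller.media_content(gesture))   # S600: show content
```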

As described above, the contactless smart gesture recognition method performing action event and realistic media content interlocking according to the present invention can be applied to media art, multi-touch tables, motion tracking systems, interactive design, appliance controllers, and program controllers within an OS.

1: Contactless Smart Gesture Recognition Apparatus 100: Triangle Infrared Sensor Unit
200: gesture camera unit 300: smart gesture control unit

Claims (7)

1. A non-contact smart gesture recognition apparatus performing action event and realistic media content interlocking, comprising:
a triangle-type infrared sensor unit (100) provided at one side of the apparatus, which shoots a triangle-type infrared ray consisting of a first infrared point, a second infrared point, and a third infrared point to the front and focus-scans the human body with the triangle shape as a reference point;
a gesture camera unit (200) positioned at one side of the triangle-type infrared sensor unit, which captures the triangle-type infrared ray reflected back from the human body and acquires a gesture image of a person's movement; and
a smart gesture controller (300) which extracts only the gesture region, excluding the background region, from the gesture image acquired by the gesture camera unit, recognizes a specific gesture by comparing and analyzing the extracted region against a reference gesture model, and controls the device so that an action event is output or media content interlocking is performed.
2. The apparatus according to claim 1, wherein the triangle-type infrared sensor unit (100) comprises:
a first infrared point sensor unit (110) for forming the first infrared point by projecting infrared light onto one side of the human body;
a second infrared point sensor unit (120) for forming the second infrared point by projecting infrared light onto another side of the human body;
a third infrared point sensor unit (130) for forming the third infrared point by projecting infrared light onto a further side of the human body; and
a triangle scan control unit (140) for varying the first infrared point of the first infrared point sensor unit, the second infrared point of the second infrared point sensor unit, and the third infrared point of the third infrared point sensor unit to form one triangle shape, and for controlling the focus scan of the human body with the triangle shape as a reference point.
3. The apparatus according to claim 1, wherein the gesture camera unit (200) comprises a 3D coordinate setting unit which, when a gesture image of a person's movement is acquired, expresses each axis of the gesture position as coordinates from a minus (-) value to a plus (+) value relative to the front of the camera: the X axis for front and back gestures, the Y axis for up and down gestures, and the Z axis for left and right gestures.
4. The apparatus according to claim 1, wherein the smart gesture controller (300) comprises:
a gesture region extraction unit (310) for extracting only the gesture region, excluding the background region, from the gesture image acquired by the gesture camera unit, using a preset RGB histogram;
an HSV histogram generator (320) for generating an HSV histogram using the color information in the gesture region extracted by the gesture region extraction unit;
a reference gesture modeling setting unit (330) for converting gesture patterns of a person's movement into a DB and presetting them as the reference gesture model;
a smart gesture recognition unit (340) for classifying the extracted gesture region into a static gesture region or a dynamic gesture region, then comparing and analyzing the classified regions against the reference gesture model and recognizing which specific gesture is matched;
an action event control unit (350) for calling the action event preset for the specific gesture recognized by the smart gesture recognition unit and outputting an action event control signal to the device; and
a realistic media content interface unit (360) for interfacing the realistic media content preset for the specific gesture recognized by the smart gesture recognition unit and displaying it on the device side.
5. The apparatus according to claim 4, wherein the smart gesture recognition unit (340) comprises:
a gesture classifying unit (341) for classifying the extracted gesture region into a static gesture region or a dynamic gesture region;
a gesture comparison and analysis unit (342) for comparing and analyzing the classified static and dynamic gesture regions against the reference gesture model; and
a gesture recognition unit (343) for recognizing which gesture type the specific gesture compared and analyzed by the gesture comparison and analysis unit matches.
6. The apparatus according to claim 4, wherein the realistic media content interface unit (360) comprises a gestural user interface (361) for interfacing the realistic media content to a device having a display function by using the movement of the body.
7. A contactless smart gesture recognition method performing action event and realistic media content interlocking, comprising:
a step (S100) of shooting a triangle-shaped infrared ray through the triangle-type infrared sensor unit and focus-scanning the human body with the triangle shape as a reference point;
a step (S200) of acquiring a gesture image of a person's movement by capturing, through the gesture camera unit, the triangle-type infrared ray reflected back from the human body;
a step (S300) of extracting, in the gesture region extraction unit of the smart gesture control unit, only the gesture region, excluding the background region, from the gesture image acquired by the gesture camera unit;
a step (S400) in which the smart gesture recognition unit of the smart gesture control unit classifies the extracted gesture region into a static gesture region or a dynamic gesture region, then compares and analyzes the classified regions against the reference gesture model and recognizes which specific gesture is matched;
a step (S500) in which the action event control unit of the smart gesture control unit fetches the action event preset for the specific gesture and outputs an action event control signal to the device; and
a step (S600) in which the realistic media content interface unit of the smart gesture control unit interfaces the realistic media content preset for the specific gesture and displays it on the device side.
KR1020150073637A 2015-05-27 2015-05-27 The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents KR20160141023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150073637A KR20160141023A (en) 2015-05-27 2015-05-27 The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents


Publications (1)

Publication Number Publication Date
KR20160141023A 2016-12-08

Family

ID=57576991

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150073637A KR20160141023A (en) 2015-05-27 2015-05-27 The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents

Country Status (1)

Country Link
KR (1) KR20160141023A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101114989B1 (en) 2010-11-11 2012-03-06 (주)유비쿼터스통신 Sex offender monitoring system using electronic anklet sensing camera for cctv

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710449A (en) * 2018-05-02 2018-10-26 Oppo广东移动通信有限公司 Electronic device
CN108762487A (en) * 2018-05-02 2018-11-06 Oppo广东移动通信有限公司 Electronic device
CN108710449B (en) * 2018-05-02 2022-03-22 Oppo广东移动通信有限公司 Electronic device
CN108985227A (en) * 2018-07-16 2018-12-11 杭州电子科技大学 A kind of action description and evaluation method based on space triangular plane characteristic
CN108985227B (en) * 2018-07-16 2021-06-11 杭州电子科技大学 Motion description and evaluation method based on space triangular plane features
KR102579463B1 (en) 2022-11-28 2023-09-15 주식회사 에스씨크리에이티브 Media art system based on extended reality technology

Similar Documents

Publication Publication Date Title
US10394334B2 (en) Gesture-based control system
KR101184170B1 (en) Volume recognition method and system
CN106598227B (en) Gesture identification method based on Leap Motion and Kinect
Lee et al. 3D natural hand interaction for AR applications
US8648808B2 (en) Three-dimensional human-computer interaction system that supports mouse operations through the motion of a finger and an operation method thereof
US20130335318A1 (en) Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers
CN109145802B (en) Kinect-based multi-person gesture man-machine interaction method and device
WO2014126711A1 (en) Model-based multi-hypothesis target tracker
KR20160141023A (en) The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents
Palleja et al. Implementation of a robust absolute virtual head mouse combining face detection, template matching and optical flow algorithms
Hartanto et al. Real time hand gesture movements tracking and recognizing system
CN114170407A (en) Model mapping method, device, equipment and storage medium of input equipment
Fadzli et al. VoxAR: 3D modelling editor using real hands gesture for augmented reality
KR101525011B1 (en) tangible virtual reality display control device based on NUI, and method thereof
Abdallah et al. An overview of gesture recognition
KR101447958B1 (en) Method and apparatus for recognizing body point
Xu et al. Bare hand gesture recognition with a single color camera
Raees et al. Thumb inclination-based manipulation and exploration, a machine learning based interaction technique for virtual environments
Jaiswal et al. Creative exploration of scaled product family 3D models using gesture based conceptual computer aided design (C-CAD) tool
Verma et al. 7 Machine vision for human–machine interaction using hand gesture recognition
Diaz et al. Preliminary experimental study of marker-based hand gesture recognition system
Ahn et al. A VR/AR Interface Design based on Unaligned Hand Position and Gaze Direction
Chee Real time gesture recognition system for ADAS
Soundari et al. Extension of desktop control to robot control by eye blinks using Support Vector Machine (SVM)
Van den Bergh et al. Perceptive user interface, a generic approach

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right