KR20170067398A - User interface control method and system using triangular mesh model according to the change in facial motion - Google Patents
User interface control method and system using triangular mesh model according to the change in facial motion
- Publication number
- KR20170067398A (application KR1020150174033A)
- Authority
- KR
- South Korea
- Prior art keywords
- unit
- frame
- triangle mesh
- change
- face
- Prior art date
Classifications
- G06K9/00268—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06K9/00281—
- G06K9/00604—
- G06K9/4671—
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
A user interface control method and system using a triangular mesh model according to a change in face motion are disclosed. The user interface control method includes: detecting a face region in each of a first frame and a second frame input after the first frame; extracting feature points from the face region; generating a triangle mesh based on the feature points; comparing the first frame and the second frame to track the change of the triangle mesh; determining whether the change of the triangle mesh is equal to or greater than a threshold; and generating a control event according to the face motion corresponding to a change of the triangle mesh above the threshold.
Description
Embodiments of the present invention relate to a user interface control technique, and more particularly, to a user interface control method and system using a triangular mesh model according to a change in a face motion.
The user interface using conventional face recognition mainly refers to control through pupil recognition. Such conventional control through face recognition can be used only when both eyes of the user's face are recognized by the input device.
However, the conventional control through face recognition is a system for grasping the movement of the pupils of the user's face. It is advantageous in that both hands can be used freely, but in a mobile device such as a mobile terminal, where the camera moves with the user's hand, it is difficult to keep both eyes continuously recognized, and usability is limited.
The object of the present invention is to solve the above problems of control through conventional face recognition, and to provide a user interface control method and system with improved face motion recognition.
According to an aspect of the present invention, there is provided an interface for controlling a system operation through a triangular mesh model according to a user's face motion for a mobile device and a desktop device equipped with a camera.
That is, in one aspect of the present invention, a face region of a user is detected in an image input through an external input device (e.g., a camera), feature points of the user's face are extracted from the region, a triangular mesh is generated from the feature points, and the operation of the system is controlled by tracking the amount of change of the triangular mesh as it changes according to the movement of the user's face.
According to another aspect of the present invention, there is provided a user interface control method in which feature points of the user's facial contour, eyebrows, eyes, pupils, nose, and mouth are extracted from the input image, a triangle mesh is generated through line segments connecting the distances between those points, and the vector components of the generated triangle mesh are traced to determine the gradient that varies according to the mesh model; by calculating trigonometric functions and the moving distance of the distance between the points, the method can output a user interface signal for a device to be controlled or a signal for supporting user interaction.
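By way of illustration only, the following Python sketch shows one way the gradient of a single mesh edge and the moving distance between feature points could be compared across two frames; the function and point names are hypothetical, and the exact formulas used by the invention are not specified in this text.

```python
import math

def edge_change(p1_prev, p2_prev, p1_curr, p2_curr):
    """Compare one mesh edge (p1-p2) across two frames; points are (x, y)."""
    def length(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def gradient(a, b):
        # Gradient of the edge expressed as its angle to the x-axis.
        return math.atan2(b[1] - a[1], b[0] - a[0])

    d_len = length(p1_curr, p2_curr) - length(p1_prev, p2_prev)      # stretch / shrink
    d_ang = gradient(p1_curr, p2_curr) - gradient(p1_prev, p2_prev)  # rotation
    move = length(p1_prev, p1_curr)                                  # displacement of p1
    return d_len, d_ang, move
```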
According to another aspect of the present invention, there is provided a user interface control method comprising the steps of: detecting a face region in each of an input first frame and a second frame input after the first frame; extracting feature points from the face region; generating a triangle mesh based on the feature points; comparing the first frame with the second frame to track the change of the triangle mesh; determining whether the change of the triangle mesh is equal to or greater than a threshold; and generating a control event in accordance with the face motion corresponding to a change of the triangle mesh above the threshold.
According to another aspect of the present invention, there is provided an image processing apparatus including: a detection unit detecting a face region in each of a first frame and a second frame input after the first frame; an extraction unit extracting feature points from the face region; a generating unit generating a triangle mesh based on the feature points; a tracking unit tracking the change of the triangle mesh by comparing the first frame and the second frame; a determining unit determining whether the change of the triangle mesh is equal to or greater than a threshold; and an output unit generating a control event according to the face motion corresponding to the change of the triangle mesh above the threshold.
The user interface control method and apparatus using the triangular mesh model according to the present invention can provide an improved recognition rate because they use a distribution of many feature points across the user's face rather than only the pupils and eyes. Recognition therefore remains possible even in the presence of strong occluding elements (glasses, sunglasses, hats, etc.), and more robust user motion control is achieved.
In addition, when the user interface control method and apparatus of the present invention are used, since the triangular mesh model is generated based on the feature points when tracking the user's face motion, the amount of triangular mesh change can be tracked regardless of the direction of the user's face and the position of the camera. The removal of such constraints allows a wider range of applications. For example, when tracking only eyes or lips, the camera position is limited to the front of the face, but the method and apparatus of the present invention are largely free from that limitation.
In addition, when the user interface control method and apparatus of the present invention are used, the limitations of conventional control systems based on body motion recognition (e.g., Kinect), which support input only from a fixed camera device in order to track the movement of two hands, can be overcome on mobile devices with convenient mobility. That is, according to the present invention, face motion can be recognized and tracked even while the camera is held and moved in the hand, so that a user interface in which the user can freely use both hands can be provided.
In addition, when the user interface control method and apparatus of the present invention are used, since the object of behavior recognition is the user's facial expression and its tracking rather than a hand motion or gesture, the user's degree of freedom is high, and there is the advantage that the interface (particularly, a user interface for a mobile device) can be easily used by physically weak, motion-impaired, or elderly users.
FIG. 1 is a hardware block diagram of a user interface control apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram of program blocks that can be employed in a user interface control apparatus according to another embodiment of the present invention.
FIG. 3 is a flowchart of a user interface control method according to another embodiment of the present invention.
FIG. 4 is a schematic block diagram of a computing device capable of employing the user interface control apparatus of FIG. 1.
FIG. 5 is a configuration diagram of a user interface control device implemented in the computing device of FIG. 4.
FIG. 6 is a configuration diagram of a tracking unit of the user interface control apparatus of FIG. 5.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.
The terms first, second, A, B, etc. may be used to describe various elements, but the elements should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items, or any one of a plurality of related listed items.
It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, the terms "comprises" or "having" and the like indicate the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
Unless otherwise defined herein, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.
Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a hardware block diagram of a user interface control apparatus according to an embodiment of the present invention.
Referring to FIG. 1, a user interface control apparatus according to an embodiment of the present invention includes a processing device and a camera module connected to the processing device. The processing device may include a processor and a memory, and the memory may store training data 200 that is referred to when generating and classifying triangular meshes.

According to the present embodiment, the user interface control apparatus acquires an input stream including a user's face image through the camera module, detects the user's face region in frames of the input stream, generates a triangular mesh on the face region, and tracks changes of the mesh to generate control events.
FIG. 2 is a block diagram of program blocks that can be employed in a user interface control apparatus according to another embodiment of the present invention.
Referring to FIG. 2, the user interface control apparatus according to the present embodiment includes program blocks comprising an input video stream receiving unit, a frame filtering unit, a face region detection unit, a facial feature point extraction unit, a triangle mesh generation unit, a triangle mesh change tracking unit, a threshold comparison unit, a nearest neighbor classification unit, a weighting evaluation unit, and a control event output unit.

The input video stream receiving unit receives a video stream containing the user's face image from a camera module. The frame filtering unit filters the frames of the received video stream according to a predefined preference.

The face region detection unit detects the user's face region in each filtered frame. The facial feature point extraction unit extracts feature points from the detected face region.

The triangle mesh generation unit generates a triangle mesh by connecting at least some of the extracted feature points. The triangle mesh change tracking unit compares the triangle meshes of successive frames to track the change of the mesh.

The threshold comparison unit determines whether the tracked change of the triangle mesh is equal to or greater than a threshold. Upon receiving the frame from the threshold comparison unit, the nearest neighbor classification unit classifies the largest change, or the change in a region of interest, within the triangle mesh change.

The weighting evaluation unit assigns a weight according to the type or position of the changed feature or sub-region. The control event output unit generates a preset control event according to the face motion corresponding to the classified change, as sketched below.
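A minimal sketch of how these program blocks could be chained over an input video stream follows; the function names are illustrative stand-ins (not taken from the patent), and the stage functions are passed in as callables so the sketch stays self-contained.

```python
def process_stream(frames, detect_face, extract_points, build_mesh,
                   track_change, threshold, on_event):
    prev_mesh = None
    for frame in frames:                          # input video stream receiving unit
        face = detect_face(frame)                 # face region detection unit
        if face is None:
            continue                              # detection failed; try next frame
        mesh = build_mesh(extract_points(face))   # feature extraction + mesh generation
        if prev_mesh is not None:
            change = track_change(prev_mesh, mesh)  # triangle mesh change tracking unit
            if change >= threshold:                 # threshold comparison unit
                on_event(change)                    # control event output unit
        prev_mesh = mesh
```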
FIG. 3 is a flowchart of a user interface control method according to another embodiment of the present invention.
The present embodiment concerns the characteristic parts of an interface method for controlling a system according to a change in face motion, using the hardware and software schematically illustrated in FIGS. 1 and 2.
Referring to FIG. 3, in the user interface control method according to the present embodiment, it is first checked whether a camera module is connected to the processing apparatus (S31). The connection of the camera module can be checked a predetermined number of times, and if a successful connection is not confirmed (No in S32), the procedure can be terminated. When a successful connection is confirmed (Yes in S32), the video stream of the camera module is transmitted to the processing device.
Next, the processing device receives the video stream from the camera module or the camera equipped with the camera module (S33). The camera module or camera may be integrally coupled to the processing device, but is not limited thereto.
Next, the processing apparatus filters the video frames (S34). The filtering may include filtering out frames other than specific frames according to a predefined preference.
Next, the processing apparatus detects a face region of the user in one filtered frame (S35). The face region may include one or more sub-regions of the face. The detected user face region may correspond to a facial sub-area for one frame.
Various methods can be used for face region detection. For example, the processing apparatus can detect the face region using region segmentation and position information. This face region detection method detects the user's skin color based on the RGB distribution of skin color in RGB space, and binarizes the image so that the skin-color region becomes a first color (e.g., white) and regions having colors other than the skin color become a second color (e.g., black). In this case, a median filter may be used for edge preservation, and the final face region may be detected by removing first-color regions of relatively small size other than a face estimation region of a predetermined size or larger.
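As a concrete illustration of the skin-color binarization just described, the following sketch uses OpenCV and NumPy; the RGB threshold rule and the median-filter kernel size are placeholder values chosen for the example, not figures given by the patent.

```python
import cv2
import numpy as np

def skin_binary_mask(frame_bgr):
    # Split channels and apply a placeholder skin-colour rule in RGB space.
    b = frame_bgr[:, :, 0].astype(np.int32)
    g = frame_bgr[:, :, 1].astype(np.int32)
    r = frame_bgr[:, :, 2].astype(np.int32)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

    # Binarize: skin pixels -> first colour (white), others -> second colour (black).
    mask = np.where(skin, 255, 0).astype(np.uint8)

    # Median filter for edge preservation, as the text suggests.
    mask = cv2.medianBlur(mask, 5)

    # Remove small white regions, keeping the largest component as the face estimate.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask
```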
As another example of face region detection, the processing apparatus may include a face region detection unit that detects the user's face, or the face region, robustly against variations in skin tone, age, pose (front, left, right, top, or bottom direction), illumination, environment, distance from the camera module, number of faces, image size (resolution), data type (RGB, YCbCr, etc.), and compression format (JPEG, MPEG-1, and MPEG-2). The face region detection unit can be implemented to distinguish the face region from the background region by comparing a geometrically and/or statistically learned model with the model detected in the current image (single frame), in addition to the method using skin color. Euclidean distance and absolute references can be used for the comparison of models. Of course, in addition to the above-described methods, the face region detection unit may detect the face region using an ellipse mask, HSI color information, HUE and difference images, or other suitable methods.
Next, the processing apparatus determines whether the detection of the face region is successful (S36). If the detection is not successful, the processing apparatus may return to the previous step S35 to again detect the face region of the user in a filtered frame, or it may return to step S33 to receive the video stream from the camera module. When the detection is not successful, a predetermined determination step (S36a) may select which of the previous steps S33 and S35 to return to. For example, the determination step may be set with a repetition count (e.g., three): up to three times, the procedure returns to the face region detection step, and after three failures it returns to receiving the video stream.
Next, the processing apparatus extracts facial feature points from the face region (S37). The facial feature points extracted by the facial feature point extraction unit can be output in the form of coordinate points in a predetermined dimension.
Facial feature point detection is necessary for detecting changes that require image processing across two or more frames. The facial feature point detection can be performed using, or based on, edge components and the geometric position information of facial features (eyes, nose, mouth, etc.), but it is not limited thereto; any other suitable feature point detection technique may be used, as illustrated below.
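By way of example only, one widely available way to obtain such feature points is a pre-trained landmark model. The sketch below uses dlib's 68-point shape predictor as a stand-in, since the patent does not name a specific technique; the model file path is an assumption.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# A pre-trained landmark model file is assumed to exist at this local path.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_feature_points(gray_image):
    # Returns feature points as (x, y) coordinate pairs, matching the
    # output form described for the facial feature point extraction unit.
    for rect in detector(gray_image):
        shape = predictor(gray_image, rect)
        return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    return []
```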
Next, the processing device determines whether the facial feature point extraction is successful (S38). If the extraction is not successful, the processing apparatus can return to the facial feature point extraction step. Of course, if the facial feature point extraction is unsuccessful more than a predetermined number of times, the processing apparatus may return to the step of receiving the video stream, or may output an error and terminate the process for the current frame.
Next, the processing apparatus forms a triangular mesh through the triangular mesh model (S39). In this step, the triangle mesh generation unit generates triangle meshes by connecting at least some of the feature points of the face region. The triangle mesh generation method may be as described above. The triangular mesh model can generate triangular meshes in the face region by referring to or using the training data (see 200 in FIG. 1), for example as sketched below.
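For illustration, a two-dimensional Delaunay triangulation of the extracted feature points can be built with SciPy as follows; the training-data lookup mentioned in the text is omitted, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def generate_triangle_mesh(points):
    pts = np.asarray(points, dtype=float)   # (N, 2) feature-point coordinates
    tri = Delaunay(pts)                     # Delaunay: maximizes the minimum interior angle
    return pts, tri.simplices               # simplices: (M, 3) vertex indices per triangle
```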
Next, the processing apparatus can determine whether to perform buffering for triangular mesh tracking (S40). If buffering needs to be performed (Yes in S40), the processing apparatus can buffer the frame in which the triangle mesh was generated (the previous frame, or first frame) (S41). In that case, the processing apparatus may include a buffering unit or a buffer. The buffering unit may store the triangle mesh (first triangle mesh) for the face region of the first frame. Meanwhile, similarly to the case of the first frame, the processing apparatus performs the face region detection, facial feature point extraction, and triangle mesh (second triangle mesh) generation steps for the second frame (current frame) received after the first frame.
On the other hand, if buffering does not need to be performed (No in S40), the processing device can proceed to compare, or track, the first triangle mesh for the face region of the first frame and the second triangle mesh for the face region of the second frame. In this case, the processing apparatus sets up parallel processing for the first frame and the second frame through a parallel processing unit, whereby triangle mesh generation for the plurality of frames can be processed in parallel through a plurality of processes included in the processing apparatus, for example as sketched below.
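A hedged sketch of this parallel-processing idea follows: triangle meshes for the first and second frames are generated in separate processes. The build_mesh_for_frame callable is a hypothetical stand-in for the detection, extraction, and mesh generation stages.

```python
from concurrent.futures import ProcessPoolExecutor

def meshes_in_parallel(build_mesh_for_frame, first_frame, second_frame):
    # One worker process per frame, mirroring the first/second process split.
    with ProcessPoolExecutor(max_workers=2) as pool:
        first = pool.submit(build_mesh_for_frame, first_frame)
        second = pool.submit(build_mesh_for_frame, second_frame)
        return first.result(), second.result()
```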
When the first triangle mesh of the first frame and the second triangle mesh of the second frame are prepared, the processing unit compares the first triangle mesh of the first frame with the second triangle mesh of the second frame, and the change from the first triangle mesh to the second triangle mesh can be tracked (S42).
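The patent text does not fix a concrete change metric for step S42. The sketch below, which assumes the two frames share vertex correspondence and the first frame's simplices, combines mean vertex displacement with mean edge-length change purely for illustration.

```python
import numpy as np

def track_mesh_change(pts_prev, pts_curr, simplices):
    # Mean displacement of the mesh vertices between the two frames.
    move = np.linalg.norm(pts_curr - pts_prev, axis=1).mean()

    # Mean absolute change in the lengths of the triangle edges.
    edge_delta = 0.0
    for a, b, c in simplices:
        for i, j in ((a, b), (b, c), (c, a)):
            d_prev = np.linalg.norm(pts_prev[i] - pts_prev[j])
            d_curr = np.linalg.norm(pts_curr[i] - pts_curr[j])
            edge_delta += abs(d_curr - d_prev)
    edge_delta /= 3 * len(simplices)

    return move + edge_delta    # combined change score compared against the threshold
```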
Next, the processing device compares the change of the triangular mesh with the threshold value to determine whether the change is within or beyond an allowable error range (S43). If the change is within the allowable error range, the processing device may return to the step of comparing and tracking the triangular meshes, or may terminate the present procedure for the first frame. If the change exceeds the allowable error range, a preset control event may be generated in response to the change of the triangular mesh (S44).
On the other hand, when the change exceeds the allowable error range, the processing apparatus may perform a step of selecting one of the preset control events corresponding to the start position, range, shape, direction, or amount of the change. According to such a configuration, the user interface control device can set various control events corresponding to changes in the size, position, direction, or symmetry of a plurality of facial features such as the eyebrows, eyes, pupils, mouth, lips, cheeks, jaw line, and face orientation, or combinations of these features, for example as sketched below.
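A preset control event table like the one sketched below is one way such a selection could be organized; the event names and classification keys are hypothetical and not taken from the patent.

```python
CONTROL_EVENTS = {
    ("mouth", "open"):      "select",
    ("eyebrow", "raise"):   "scroll_up",
    ("face", "turn_left"):  "previous_page",
    ("face", "turn_right"): "next_page",
}

def emit_control_event(feature, motion):
    # Look up the preset control event corresponding to the classified change.
    event = CONTROL_EVENTS.get((feature, motion))
    if event is not None:
        print(f"control event: {event}")   # stand-in for the output control signal
    return event
```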
FIG. 4 is a schematic block diagram of a computing device capable of employing the user interface control apparatus of FIG. 1.
Referring to FIG. 4, the computing device may include a processor, a memory, an input/output device 530, and a network interface.

The processor may execute program instructions stored in the memory to perform the user interface control method according to the present embodiment. The memory may store the program instructions together with the training data used for triangle mesh generation and classification.

The input/output device 530 transfers the input stream to the processor or the memory, and may be coupled to a camera module that captures the user's face image. The network interface may connect the computing device to a network so that it can communicate with a control target device, an interoperable device, or a server.
FIG. 5 is a configuration diagram of a user interface control device implemented in the computing device of FIG. 4.
Referring to FIG. 5, the user interface control device implemented in the computing device includes a detection unit, an extraction unit, a generating unit, a tracking unit, a determining unit, and an output unit.

The detection unit detects a face region in each of a first frame and a second frame input after the first frame. The extraction unit extracts feature points from the detected face region.

The generating unit generates a triangle mesh based on the feature points. The tracking unit tracks the change of the triangle mesh by comparing the first frame and the second frame.

The determining unit determines whether the change of the triangle mesh is equal to or greater than a threshold. The output unit generates a control event according to the face motion corresponding to the change of the triangle mesh above the threshold. Each of these units may have the form of a software module (first to sixth modules) stored in the memory and executed by the processor.

Although not shown in FIG. 5, the user interface control device may further include a classification unit, an evaluation unit, and a providing unit.

The classification unit may include a means or a component for classifying the largest change, or the change in the region of interest (ROI), within the triangular mesh change determined by the determining unit. The classification unit may have the form of a classification module (seventh module) stored in the memory, and may be executed by the processor to operate as a nearest neighbor classification unit.
The evaluating unit may include means for determining a predetermined weight based on a position or a feature in the face region with respect to the largest triangular mesh change classified by the classifying unit or the triangular mesh change in the ROI. The evaluation unit may have the form of an evaluation module (eighth module) stored in the memory, and may be performed by the processor to operate as a weight determination evaluation unit.
The providing unit may include means for generating a preset control event corresponding to the triangular mesh change obtained by at least one of the determining unit, the classification unit, and the evaluation unit. The providing unit may have the form of a providing module (ninth module) stored in the memory.
The first to ninth modules may be software modules stored in a memory and executed by a processor connected to the memory to implement the user interface control method according to the present embodiment.
FIG. 6 is a configuration diagram of a tracking unit of the user interface control apparatus of FIG. 5.
Referring to FIG. 6, in the user interface control apparatus according to the present embodiment, the tracking unit includes a dividing unit, a selecting unit, and a comparing unit.

The dividing unit divides the face region into a plurality of preset sub-regions.

The selecting unit selects, among the sub-regions, a region of interest having the largest change.

The comparing unit tracks the change of the triangle mesh in the selected region of interest.

The region of interest may include an eyebrow, an eye, a pupil, a mouth, a lip, a cheek, a jaw line, a face orientation, or a combination thereof, for example as sketched below.
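As an illustration of this dividing/selecting/comparing flow, the sketch below assumes each sub-region is described by a list of feature-point indices into NumPy arrays of shape (N, 2); the region names and index values shown are hypothetical.

```python
import numpy as np

# Hypothetical mapping from sub-region names to feature-point indices.
SUB_REGIONS = {
    "eyebrow": [17, 18, 19, 20, 21],
    "eye":     [36, 37, 38, 39, 40, 41],
    "mouth":   [48, 49, 50, 51, 52, 53],
}

def select_roi_change(pts_prev, pts_curr):
    # Dividing unit: per-sub-region mean displacement between the two frames.
    changes = {
        name: float(np.linalg.norm(pts_curr[idx] - pts_prev[idx], axis=1).mean())
        for name, idx in SUB_REGIONS.items()
    }
    # Selecting unit: region of interest with the largest change.
    roi = max(changes, key=changes.get)
    # The comparing unit would then track the triangle mesh change within this ROI.
    return roi, changes[roi]
```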
According to the above-described embodiment, a plurality of feature points are extracted in sub-regions of an input face image, and the change of the triangle mesh connecting the feature points is traced to generate a control event according to a preset triangle mesh model. This means that the object of behavior recognition is the user's intention or command conveyed through facial expression and its tracking, rather than the user's hand motion or gesture, and that a control event, control command, or control signal is output accordingly so that a target device (a mobile terminal, a notebook computer, an actuator mounting device, etc.) can be controlled. In addition, this has the advantage that the degree of freedom of the user can be greatly increased, and the usability for the physically weak can be greatly improved.
Meanwhile, in the present embodiment, the components of the user interface control device may be, but are not limited to, functional blocks or modules mounted on a user's mobile terminal or a computer device. The above-described components may be stored in a computer-readable medium (recording medium) in the form of software implementing the series of functions they perform, or may be transmitted to a remote location in the form of a carrier wave to operate in various computer devices. Here, the computer-readable medium may be distributed over a plurality of computer devices or a cloud system connected via a network, and at least one of the plurality of computer devices or the cloud system may store, in its memory system, the programs and source code for performing the user interface control method of the present embodiment.
That is, the computer-readable medium may be embodied in the form of a program command, a data file, a data structure, or the like, alone or in combination. Programs recorded on a computer-readable medium may include those specifically designed and constructed for the present invention or those known and available to those skilled in the computer software arts.
The computer-readable medium may also include a hardware device specifically configured to store and execute program instructions, such as a ROM, a RAM, a flash memory, and the like. Program instructions may include machine language code such as those produced by a compiler, as well as high-level language code that may be executed by a computer using an interpreter or the like. The hardware device may be configured to operate with at least one software module to perform the user interface control method of the present embodiment, and vice versa.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined by the following claims.
Claims (20)
A user interface control method comprising:
Detecting a face region in each of a first frame and a second frame input after the first frame;
Extracting feature points from the face region;
Generating a triangle mesh based on the feature points;
Comparing the first frame and the second frame to track changes in the triangle mesh;
Determining whether the change of the triangle mesh is above a threshold value; And
Generating a control event in accordance with a face motion corresponding to a change in the triangle mesh above the threshold.
Wherein the generating the triangle mesh comprises:
A user interface control method wherein one of the feature points is set as a main axis, the face area is divided into a plurality of areas based on the main axis, and the triangle mesh is generated by using two-dimensional Delaunay triangulation for each divided area.
Wherein the generating the triangle mesh comprises:
And using the Delaunay triangulation to connect the feature points into triangles such that the minimum interior angle of the triangles is maximized when dividing the plane or space.
Wherein the generating the triangle mesh comprises:
Wherein the triangle mesh is generated according to a tree structure in which the feature points are connected so that they can reach one another and the sum of the total distances is minimized.
Prior to tracking the change in the triangle mesh,
Further comprising buffering the first frame including the triangle mesh.
Wherein the first frame and the second frame are processed in different processes of the image processing apparatus.
Wherein tracking the change in the triangle mesh comprises:
Wherein the face region is divided into a plurality of predetermined sub regions, a region of interest having the largest change among the sub regions is selected, and a change of the triangle mesh in the region of interest is compared.
Wherein the region of interest comprises an eyebrow, an eye, a pupil, a mouth, a lip, a cheek, a jaw line, a face orientation, or a combination thereof.
Wherein the step of determining whether the change of the triangle mesh is equal to or greater than a threshold value comprises:
Comparing the variation of the triangle mesh with training data.
Wherein the training data may be stored in a storage unit of a control target device that receives a signal according to the control event, in a storage unit of an interoperable device, or in a server device or cloud server connected to the control target device or the interoperable device through a network.
Wherein the control target device or the interoperable device comprises a mobile terminal and the first frame and the second frame are obtained from a camera module of the mobile terminal.
A user interface control apparatus comprising:
A detection unit detecting a face region in each of a first frame and a second frame input after the first frame;
An extracting unit for extracting feature points from the face region;
A generator for generating a triangle mesh based on the feature points;
A tracking unit for tracking a change of the triangular mesh by comparing the first frame and the second frame;
A determining unit determining whether the change of the triangle mesh is equal to or greater than a threshold value; And
And an output unit generating a control event in accordance with a face motion corresponding to the change of the triangle mesh above the threshold value.
Wherein the generation unit comprises:
Wherein one of the feature points is set as a main axis, the face region is divided into a plurality of regions based on the main axis, and the triangle mesh is generated using two-dimensional Delaunay triangulation for each divided region,
A Delaunay triangulation is used to connect the feature points into triangles dividing the plane or the space such that the minimum interior angle of the triangles is maximized,
Wherein the triangle meshes are generated according to a connected tree structure such that the feature points can reach each other and the sum of the total distances is minimized.
Further comprising a buffering unit for buffering the first frame including the triangle mesh and disposed between the generating unit and the tracking unit.
Further comprising a parallel processing unit for supporting independently performing a first process of processing the first frame and a second process of processing the second frame.
The tracking unit includes:
A dividing unit dividing the face area into a plurality of preset sub-areas;
A selector for selecting a region of interest having the largest change among the sub regions; And
And a comparator for tracking a change of the triangle mesh in the area of interest.
Wherein the region of interest comprises an eyebrow, an eye, a pupil, a mouth, a lip, a chin, a face orientation, or a combination thereof.
Wherein the determination unit compares the change of the triangle mesh with pre-stored training data.
Further comprising a nearest neighbor classifier between the determination unit and the user control unit, wherein the nearest neighbor classifier classifies the largest change, or the change in the region of interest, within the triangle mesh change.
And a weight determination evaluation unit between the nearest neighbor classification unit and the user control unit, wherein the weight determination evaluation unit assigns a weight according to the type or position of the feature or the sub-area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150174033A KR101909326B1 (en) | 2015-12-08 | 2015-12-08 | User interface control method and system using triangular mesh model according to the change in facial motion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150174033A KR101909326B1 (en) | 2015-12-08 | 2015-12-08 | User interface control method and system using triangular mesh model according to the change in facial motion |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170067398A true KR20170067398A (en) | 2017-06-16 |
KR101909326B1 KR101909326B1 (en) | 2018-12-19 |
Family
ID=59278396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150174033A KR101909326B1 (en) | 2015-12-08 | 2015-12-08 | User interface control method and system using triangular mesh model according to the change in facial motion |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101909326B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019134346A1 (en) * | 2018-01-08 | 2019-07-11 | 平安科技(深圳)有限公司 | Face recognition method, application server, and computer-readable storage medium |
KR20190091884A (en) * | 2018-01-29 | 2019-08-07 | 박길주 | Image certificating system for anti-hacking and method of the same |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201501044A (en) | 2013-06-24 | 2015-01-01 | Utechzone Co Ltd | Apparatus, method and computer readable recording medium of generating signal by detecting facial action |
- 2015-12-08: Application KR1020150174033A filed in KR; patent granted as KR101909326B1 (active, IP Right Grant)
Also Published As
Publication number | Publication date |
---|---|
KR101909326B1 (en) | 2018-12-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10198823B1 (en) | Segmentation of object image data from background image data | |
US10394318B2 (en) | Scene analysis for improved eye tracking | |
KR102364993B1 (en) | Gesture recognition method, apparatus and device | |
US10891473B2 (en) | Method and device for use in hand gesture recognition | |
US20130342636A1 (en) | Image-Based Real-Time Gesture Recognition | |
CN103970264B (en) | Gesture recognition and control method and device | |
KR101612605B1 (en) | Method for extracting face feature and apparatus for perforimg the method | |
JP6221505B2 (en) | Image processing apparatus, image processing method, and image processing program | |
US9213897B2 (en) | Image processing device and method | |
KR20170056860A (en) | Method of generating image and apparatus thereof | |
KR101279561B1 (en) | A fast and accurate face detection and tracking method by using depth information | |
KR20220007882A (en) | Representation and extraction of layered motion from monocular still camera video | |
KR20140019950A (en) | Method for generating 3d coordinate using finger image from mono camera in terminal and mobile terminal for generating 3d coordinate using finger image from mono camera | |
JP2016099643A (en) | Image processing device, image processing method, and image processing program | |
KR101909326B1 (en) | User interface control method and system using triangular mesh model according to the change in facial motion | |
JP2016024534A (en) | Moving body tracking device, moving body tracking method, and computer program | |
JP2017033556A (en) | Image processing method and electronic apparatus | |
Xu et al. | Bare hand gesture recognition with a single color camera | |
CN112541418B (en) | Method, apparatus, device, medium and program product for image processing | |
CN112183155B (en) | Method and device for establishing action posture library, generating action posture and identifying action posture | |
US11847823B2 (en) | Object and keypoint detection system with low spatial jitter, low latency and low power usage | |
KR20160107587A (en) | Apparatus and method for gesture recognition using stereo image | |
Thakur | Robust hand gesture recognition for human machine interaction system | |
CN112711324B (en) | Gesture interaction method and system based on TOF camera | |
KR101396098B1 (en) | Method for generating 3d coordinate using finger image from mono camera in terminal and mobile terminal for generating 3d coordinate using finger image from mono camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |