KR20160147326A - Virtual Keyboard Operation System and Control Method using Depth Image Processing - Google Patents

Virtual Keyboard Operation System and Control Method using Depth Image Processing

Info

Publication number
KR20160147326A
Authority
KR
South Korea
Prior art keywords
depth
virtual keyboard
touch sensor
image
module
Prior art date
Application number
KR1020150083918A
Other languages
Korean (ko)
Inventor
권순각
Original Assignee
동의대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 동의대학교 산학협력단 filed Critical 동의대학교 산학협력단
Priority to KR1020150083918A priority Critical patent/KR20160147326A/en
Publication of KR20160147326A publication Critical patent/KR20160147326A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera

Abstract

The present invention relates to a virtual keyboard and a method of operating it using a depth camera. More particularly, it relates to a technique that analyzes depth information captured by a depth camera, free of the constraints of a touch screen, whose limited touch area and the varying hand sizes of users can cause input errors, so that a flat or gently curved surface such as a wall or a desk top can serve as the base of a virtual keyboard.
In the virtual keyboard device and operation method based on depth image processing according to the present invention, an operator installs a depth camera at a desired position and uses a flat surface such as a wall or a desk, or a gently curved surface, as the virtual keyboard base. A virtual keyboard is generated on this base by a video projection module such as a beam projector. When a specific position of the keyboard projected on the flat structure part is touched, the depth camera recognizes the touch, the corresponding key information is extracted from the keyboard information stored in the DB module, and a touch tone is output to the speaker module. These algorithms are applied sequentially, and the individual modules are configured as one system.

Description

TECHNICAL FIELD [0001] The present invention relates to a virtual keyboard apparatus and an operation method based on depth image processing.

More particularly, the present invention relates to a technique of generating a virtual keyboard using a depth camera and analyzing depth information so that a flat or gently curved surface, such as a wall or a desk top, can be used as the virtual keyboard base.

Conventionally, when a human inputs commands to control a computer, it has been common practice to use a physical keyboard-based GUI (Graphical User Interface) as the intermediary of this process. Recently, natural user interfaces (NUI) such as motion recognition are increasingly used instead. A physical keyboard is generally either integrated into a specific device or provided as a separate module; in both cases it is confined to a limited size or must be carried separately, which imposes spatial constraints. In addition, because users' hand sizes differ, the finger-touch process on the limited area of a touch screen-based virtual keyboard may not proceed smoothly and may cause input errors.

By combining NUI with a virtual keyboard implementation based on depth image processing, computer operation and control efficiency can be increased through natural human behavior when fused with a computer device that requires a series of control processes, with the advantage that the technique can be used regardless of place or environment.

A similar prior art disclosing a virtual keyboard device and operation method based on depth image processing is 'Design and Implementation of NUI using Kinect' in the Journal of Digital Contents Society. It discloses a method of extracting the user's hand from the depth information of a Kinect and using it to recognize fingers and gestures, and a technique for implementing a hand-mouse GUI and a virtual keyboard on a computer operating system. By contrast, the virtual keyboard manipulation method based on depth image processing according to the present invention has the advantage that the virtual keyboard is implemented not only on a computer operating system but also on a general flat structure such as a wall or a desk. Other prior art has been filed with the Korean Intellectual Property Office (KIPO) as Korean Patent Application Nos. 10-2013-0112061 and 10-2014-0140095. However, these conventional technologies do not provide a virtual keyboard implementation and operation method that is free of restrictions on the target application range.

KR10-2013-0112061A KR10-2014-0140095A

Lee, Saebom and Jeong, Ilhong, 2014, "Design and Implementation of NUI Using Kinect", Journal of Digital Contents Society, Vol. 15, No. 4, pp. 473-480.

The present invention aims to satisfy the technical needs arising from the background described above.

It is an object of the present invention to overcome the limitations of conventional physical keyboards, which are restricted to a limited size or require separate carrying, and to provide a virtual keyboard device and manipulation method based on depth image processing that overcomes the limitation of touch screen-based virtual keyboards, on which the touch process may not proceed smoothly or may cause input errors.

The technical objects to be achieved by the present invention are not limited to those mentioned above; other technical objects not mentioned will be clearly understood by those skilled in the art from the following description.

According to an aspect of the present invention, a virtual keyboard device and a method of operating it based on depth image processing are provided as follows: a depth camera is installed at a position desired by the operator; a flat surface such as a wall or a desk, or a gently curved surface, is recognized as the virtual keyboard base object; a virtual keyboard is created on it by a video projection module such as a beam projector; when a specific position of the keyboard projected on the flat structure part is touched, the depth camera recognizes the keyboard touch, the corresponding key information is extracted from the virtual keyboard data stored in the DB module, and this key information is transmitted to the control target computer device. These algorithms are applied sequentially, and the individual modules are configured as one system.

As described above, the present invention overcomes the limitations of the conventional physical keyboard, which is confined to a limited size or requires separate carrying and is therefore spatially restricted. It also overcomes the limitation of the touch screen-based virtual keyboard, on which the touch process may not proceed smoothly or may cause input errors. By integrating NUI with a virtual keyboard implementation based on depth image processing, computer operation and control efficiency can be improved through natural human action in fusion with computer devices requiring a series of control processes, with the advantage that the technique can be used regardless of place or environment.

The technical effects of the present invention are not limited to those mentioned above; other technical effects not mentioned will be clearly understood by those skilled in the art from the description of the claims.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the depth information processing analysis applied to a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 2 illustrates installation positions of the depth image photographing module in a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 3 illustrates an example of correcting coordinates before transformation by a one-dimensional linear transformation in a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 4 illustrates touch judgment in the touch sensor area in a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 5 illustrates the touch speed and trajectory of the pointer in a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 6 is a flowchart of the execution of a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 7 illustrates an example of the virtual keyboard shape presented in a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention;
FIG. 8 is a layout diagram of the major modules proposed in a virtual keyboard device and manipulation method based on depth image processing according to an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the invention; the invention is, however, not limited to these embodiments. In the following description, the same components are denoted by the same reference numerals and symbols, and redundant description thereof is omitted.

The concept of the depth information processing analysis using the depth camera will be described with reference to FIG. 1. As shown in FIG. 1(a), the detailed configuration of the virtual touch sensor includes: a depth image photographing module 11 for photographing the touch sensor area to obtain a depth image of the touch sensor area; a spatial coordinate correction unit 12 for correcting spatial coordinate distortion of the image of the touch sensor area taken by the depth image photographing module 11; a depth value calculation unit 13 for calculating a representative depth value of the touch sensor area in the depth image whose spatial coordinate distortion has been corrected by the spatial coordinate correction unit 12; an object detector 14 for grouping and labeling pixels of the depth image whose depth value differs from the representative depth value of the touch sensor area, storing the objects in order with their size values, and detecting the objects; a pointer extracting unit 15 for extracting a pointer from each detected object; and a touch determination unit 16 for determining whether the touch sensor area is touched based on the depth value of the extracted pointer position.
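For concreteness, the following minimal Python sketch runs one iteration of this pipeline for a camera installed on the left side of the sensor area; the frame source, the correction callable, and the threshold values are illustrative assumptions rather than part of the disclosure:

```python
import numpy as np

def virtual_touch_sensor(get_depth_frame, rep_depth, correct, t_obj=20.0, t_touch=5.0):
    """One iteration of the FIG. 1(a) pipeline (assumed helper names)."""
    frame = correct(get_depth_frame())        # module 11 capture + unit 12 correction
    mask = np.abs(frame - rep_depth) > t_obj  # unit 14: pixels departing from background
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                           # no object in front of the sensor area
    i = int(np.argmin(xs))                    # unit 15: leftmost pixel = pointer
    px, py = int(xs[i]), int(ys[i])
    # unit 16: touch when pointer depth is close to the stored surface depth
    if abs(float(frame[py, px]) - float(rep_depth[py, px])) < t_touch:
        return (px, py)
    return None
```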

In addition, the order of the spatial coordinate correction can be changed, as shown in FIG. 1(b). When the correction of the spatial coordinates is performed just before the touch determination, the configuration comprises: a depth image photographing module 21 that photographs the touch sensor area to obtain a depth image of the touch sensor area; a depth value calculator 22 for calculating a representative depth value of the touch sensor area in the depth image; an object detection unit 23 for sequentially grouping and labeling pixels of the depth image whose depth value differs from the representative depth value of the touch sensor area and storing the objects in order with their size values; a pointer extracting unit 24 for extracting a pointer from the detected object; a spatial coordinate correcting unit 25 for correcting spatial coordinate distortion of the image of the touch sensor area; and a touch determination unit 26 that determines whether the touch sensor area is touched based on the depth value of the extracted pointer position.

Here, the spatial coordinate correction is used when absolute spatial coordinates are applied, and not when relative spatial coordinates are applied. One or more depth image photographing modules may be installed in any of the top, bottom, left, right, upper-left, lower-left, upper-right, and lower-right areas of the display monitor, screen, or flat or curved object used as the virtual touch sensor (FIG. 2).

The depth image photographing module 11 can be mounted externally on a hanger or pedestal, fixed inside a wall or floor, installed as a freely attachable and detachable type, or attached to a terminal device in socket form. The depth value calculation unit 13 receives the depth image obtained by the depth image photographing module 11 capturing the touch sensor area and calculates the depth value of each point. It operates only at the beginning of execution and may use the first frame of the depth image. Alternatively, a specific number of frames can be accumulated to obtain a stable image; when frames are accumulated, the average value at each position of the touch sensor is calculated and stored as the representative depth value. Instead of the average, a median or minimum value may also be used. When the position of, or the distance between, the photographing module and the touch sensor area changes, the depth value calculation unit 13 is executed again: the depth value of the touch sensor area is obtained for each frame and compared with the previously stored representative depth value, and if the absolute difference is larger than a predetermined value, the representative depth value is recomputed.
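As a hedged sketch of the representative-depth calculation and the re-execution check just described — the drift threshold and the changed-pixel fraction used to trigger recalibration are assumptions:

```python
import numpy as np

def representative_depth(frames, reducer=np.mean):
    """Reduce accumulated depth frames per pixel; the text also permits a
    median or minimum instead of the average."""
    return reducer(np.stack(frames).astype(np.float32), axis=0)

def needs_recalibration(frame, rep_depth, drift=30.0, changed_ratio=0.5):
    """Assumed criterion: recompute the representative depth when a large
    share of pixels departs from the stored values by more than `drift`."""
    changed = np.abs(frame.astype(np.float32) - rep_depth) > drift
    return float(changed.mean()) > changed_ratio
```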

FIG. 3 illustrates how the coordinates before transformation are corrected by a one-dimensional linear transformation, based on four spatial coordinates mapped under an environment in which the depth image capturing module is installed on the left side of the virtual touch sensor area. The transformation is defined as follows.

X = H_X · (x - x_0) / H_x        Equation (1)

Y = V_Y · (y - y_0) / V_y(x),   where V_y(x) = V_y0 + (V_y1 - V_y0) · (x - x_0) / H_x        Equation (2)

The corrected post-transformation absolute coordinates can be obtained using Equations (1) and (2) above. Here, x (horizontal) and y (vertical) are the coordinates before conversion, and X (horizontal) and Y (vertical) are the coordinates after conversion. H_x (width) and V_y0, V_y1 (heights) describe the touch sensor area before conversion, while H_X (width) and V_Y (height) describe the touch sensor area after conversion. x_0 (horizontal) and y_0 (vertical) are the correction offsets.
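Under the reconstruction of Equations (1) and (2) given above — in particular, the linear interpolation of the region height between V_y0 and V_y1 along x is an inferred detail, since the original equation images are not reproduced here — the correction could be implemented as:

```python
def correct_coords(x, y, H_x, V_y0, V_y1, H_X, V_Y, x_0=0.0, y_0=0.0):
    """Map pre-transformation coordinates (x, y) of the trapezoidal sensor
    region (camera on the left) to corrected coordinates (X, Y)."""
    X = H_X * (x - x_0) / H_x                       # Equation (1)
    V_y = V_y0 + (V_y1 - V_y0) * (x - x_0) / H_x    # interpolated height (inferred)
    Y = V_Y * (y - y_0) / V_y                       # Equation (2)
    return X, Y
```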

Each time a new frame of the depth image is input, the object detecting unit 14 compares it with the representative depth value of the touch sensor area stored in the previous step and determines that an object is detected where the two values differ by more than a predetermined value T_O. That is, if Equation (3) below is satisfied, an object is determined to be detected at horizontal position x and vertical position y, and a new binary image is generated in which the pixel value at that position is stored as '255'.

|d(x, y) - d_s(x, y)| > T_O        Equation (3)

Here, d(x, y) is the depth value of the current frame, d_s(x, y) is the stored representative depth value, and T_O is the predetermined detection threshold.

When object detection is performed on one frame, a binary image is generated having the pixel value '255' in object regions and '0' elsewhere. Pixels with value '255' are then sequentially grouped and labeled: the first object is stored as '1', the second as '2', and so on. Noise can be removed through morphology operations such as erosion, dilation, removal, and filling. Since noise may survive the labeling of detected objects, a labeled object whose pixel count falls below a certain number may be excluded from labeling. The pointer extracting unit 15 extracts a pointer from each detected object; the position closest to the touch sensor area is used as the pointer. The extracted pointer may be a human finger or a pointing instrument. How the position closest to the touch sensor area is found depends on the installation direction of the image capturing module: when the depth imaging module is installed on the left side of the touch sensor area, the leftmost pixel of the object in the depth image of the touch sensor area is used as the pointer; when the module is on the right side, the rightmost pixel; on the upper side, the uppermost pixel; and on the lower side, the lowermost pixel.
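A possible implementation of the thresholding, labeling, morphological noise removal, size filtering, and side-dependent pointer extraction described above, using scipy.ndimage; the threshold and minimum-size values are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_pointer(depth, rep_depth, t_obj=20.0, min_pixels=30, side="left"):
    """Detect objects per Equation (3) and return one pointer per object."""
    binary = np.abs(depth - rep_depth) > t_obj                         # Equation (3)
    clean = ndimage.binary_opening(binary, structure=np.ones((3, 3)))  # morphology
    labels, n = ndimage.label(clean)                                   # group and label 1..n
    pointers = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if xs.size < min_pixels:                                       # too small: noise
            continue
        if side == "left":
            i = int(np.argmin(xs))                                     # leftmost pixel
        elif side == "right":
            i = int(np.argmax(xs))                                     # rightmost pixel
        elif side == "top":
            i = int(np.argmin(ys))                                     # uppermost pixel
        else:
            i = int(np.argmax(ys))                                     # lowermost pixel
        pointers.append((int(xs[i]), int(ys[i])))
    return pointers
```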

FIG. 4 illustrates touch judgment in the touch sensor area, and FIG. 5 illustrates the touch speed and trajectory of the pointer. The touch determination unit 16 determines whether the pointer approaches or contacts the touch sensor. Whether a touch is recognized only when the hand or pointer actually contacts the touch sensor area, or already when it comes within a certain distance, can be chosen according to the convenience of the user. For the contact case, the touch determination unit 16 compares the depth value d_p(x, y) of the pointer with the depth value d_s(x, y) at the same position in the depth image of the touch sensor area, and determines that a touch occurs when the absolute difference between the two values is less than a specific value. That is, when Equation (4) below is satisfied, the pointer is determined to touch the touch sensor area at horizontal position x and vertical position y.

|d_p(x, y) - d_s(x, y)| < T_D        Equation (4)

Here, T_D is a specific value set by the user; on a millimeter scale, 5 is generally suitable. To prevent erroneous judgments, the average of the depth values at several neighboring horizontal positions, e.g., x-1, x, and x+1, may be used instead of the single value at horizontal position x and vertical position y; neighboring pixel values on the left or right diagonal may also be used. When an approach, rather than a contact, is judged as a touch, the position whose depth value is compared differs from the touched position. That is, when the image capturing module is installed on the left side of the touch sensor area, the depth value d_p(x, y) of the pointer position is compared with d_s(x-T, y), and a touch is determined when the difference between the two values is less than the specific value. In this case the touched position is at horizontal position x-T and vertical position y, and when Equation (5) below is satisfied, the pointer is determined to touch the touch sensor area at that position.

|d_p(x, y) - d_s(x - T, y)| < T_D        Equation (5)

Here, T is a specific value set by the user; when the approach distance between the pointer and the touch sensor area is 1 cm, five pixels is suitable. The average of several neighboring depth values may be used rather than the single pointer position. When the image capturing module is installed on the right side of the touch sensor area, d_p(x, y) is compared with d_s(x+T, y) and the touch is judged at position (x+T, y); on the upper side, d_p(x, y) is compared with d_s(x, y-T) and the touch is judged at (x, y-T); on the lower side, d_p(x, y) is compared with d_s(x, y+T) and the touch is judged at (x, y+T). In addition, the horizontal and vertical position and the depth value of the pointer can be obtained for each frame, as shown in FIG. 5. Based on the frame rate, the horizontal velocity and direction, the vertical velocity and direction, and the distance from the photographing module or touch sensor area can thus be determined. One method of detecting the moving direction and speed of the pointer is to continuously compare the previous frame and the current frame of the depth image acquired from the depth image capturing module, thereby obtaining the moving direction, moving speed, and moving distance of the pointer.
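The touch judgment of Equations (4) and (5) and the frame-to-frame velocity estimate might be sketched as follows; the averaging over the three horizontal neighbours follows the text, while the clipping at the image border is an added safeguard:

```python
import numpy as np

def is_touch(depth, rep_depth, px, py, t_d=5.0, offset=0, side="left"):
    """Contact test (Equation (4), offset == 0) or approach test with an
    offset of T pixels toward the camera (Equation (5), offset == T)."""
    if side == "left":
        qx, qy = px - offset, py
    elif side == "right":
        qx, qy = px + offset, py
    elif side == "top":
        qx, qy = px, py - offset
    else:
        qx, qy = px, py + offset
    xs = np.clip([px - 1, px, px + 1], 0, depth.shape[1] - 1)  # neighbour averaging
    d_p = float(np.mean(depth[py, xs]))
    return abs(d_p - float(rep_depth[qy, qx])) < t_d, (qx, qy)

def pointer_velocity(prev_pt, cur_pt, fps):
    """Per-axis pointer velocity in pixels per second between consecutive
    frames, as in FIG. 5."""
    return ((cur_pt[0] - prev_pt[0]) * fps, (cur_pt[1] - prev_pt[1]) * fps)
```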

In the virtual keyboard manipulation method based on depth image processing according to the present invention, a depth camera is installed at a position desired by the virtual keyboard operator, and a flat surface such as a wall or a desk, or a gently curved surface, is recognized as the virtual keyboard base object. A virtual keyboard is then created by a video projection module such as a beam projector; when a specific position of the keyboard projected on the wall is touched, the depth camera recognizes the keyboard touch, the corresponding key information is extracted from the keyboard information stored in the DB module, and an algorithm transmitting this key information to the control target computer device is applied. Referring to FIG. 6, the method comprises the following steps (a code sketch follows the list):

A step (S100) of selecting the flat structure part, i.e., a flat or gently curved surface to serve as the virtual keyboard base, on which depth camera photographing and depth image data acquisition will be performed, and starting the depth image photographing;

A step (S200) of projecting and generating a virtual keyboard image on the flat structure part selected in step S100;

A step (S210) of, when step S200 is completed, extracting background data of the virtual keyboard key area from the virtual keyboard image projected onto the flat structure part;

An object recognizing step (S300) of recognizing the hand of the virtual keyboard operator with the depth camera to acquire depth information on the displacement between the operator's hand and the virtual keyboard image projected in step S200;

A step (S400) of detecting, with the depth camera, the touched target position on the virtual keyboard image;

A step (S500) of converting the target position detected in step S400 into depth information;

A key data synchronization step (S510) of checking and matching the key position converted into depth information in step S500 against the key position data in the DB module;

A step (S600) of transmitting to the DB module a code in which the depth information of the target position and the key data synchronized in step S510 are combined;

A step (S610) of selecting and extracting, among the input keys embedded in the DB module, the key corresponding to the code transmitted in step S600;

And a step (S700) of transmitting the input key extracted in step S610 to the control target computer.
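Pulling the steps together, a hypothetical main loop for S100 through S700 could look like the sketch below. The projector, camera, DB module (here a plain dictionary keyed by keyboard grid cell), and the channel to the controlled computer are stubbed with illustrative names, the 40-pixel key cell size is an assumption, and the helpers sketched earlier are reused:

```python
def run_virtual_keyboard(camera, projector, key_db, send_key, cell=40):
    """Hypothetical S100-S700 control flow (assumed module interfaces)."""
    projector.show_keyboard()                            # S200: project the layout
    background = representative_depth(                   # S210: key-area background
        [camera.frame() for _ in range(30)])
    while True:
        depth = camera.frame()                           # S300: observe the hand
        for px, py in detect_pointer(depth, background): # S400: candidate positions
            touched, (qx, qy) = is_touch(depth, background, px, py)  # S500: depth test
            if touched:
                code = key_db.get((qx // cell, qy // cell))  # S510/S600: cell -> key
                if code is not None:
                    send_key(code)                       # S610/S700: forward the key
```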

In this case, the virtual keyboard shape projected from the beam projector image transmitting module onto the flat structure part is output as an image with the same shape and size as a real keyboard, in order to give the virtual keyboard operator the feeling of actually touching a keyboard. Also, as shown in FIG. 7, the video output can be set to a keyboard layout supporting any of the world's languages, which increases the operator's freedom to select the language arrangement of the virtual keyboard for input according to the situation. In addition, a touch tone can be output to provide an auditory effect during keying of the virtual keyboard.

FIG. 8 is a schematic diagram of the overall system 100 for implementing the virtual keyboard device and operation method based on depth image processing according to the present invention. The system comprises:

A depth camera module (110) for acquiring depth information;

A beam projector image sending module 130 for projecting the virtual keyboard to the flat structure portion 120;

A DB module 140 in which virtual keyboard image data for key position recognition, key data, and tone data are embedded;

And a speaker module 150 for outputting a touch tone.

The speaker module 150 is equipped with a short-range wireless communication module 160 in order to minimize wired connections and increase space utilization; it is preferable that the short-range wireless communication module 160 use a low-power short-range Bluetooth communication method.

While the present invention has been described in connection with what is presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements. Accordingly, the true scope of the present invention should be determined only by the appended claims.

11: Depth image photographing module
12: Spatial coordinate correction unit
13: Depth value calculation unit
14: Object detecting unit
15: Pointer extracting unit
16: Touch determination unit

Claims (12)

In a virtual keyboard device and operation method based on depth image processing, the detailed configuration of the virtual touch sensor comprising:
A depth image photographing module (11) for photographing a touch sensor area to obtain a depth image of the touch sensor area;
A spatial coordinate correction unit (12) for correcting spatial coordinate distortion of the image of the touch sensor area taken by the depth image photographing module (11);
A depth value calculation unit (13) for calculating a representative depth value of the touch sensor area in the depth image whose spatial coordinate distortion has been corrected by the spatial coordinate correction unit (12);
An object detecting unit (14) for sequentially grouping and labeling pixels of the depth image whose depth value differs from the representative depth value of the touch sensor area and storing the objects in order with their size values;
A pointer extracting unit (15) for extracting a pointer from the detected object;
And a touch determination unit (16) for determining whether the touch sensor area is touched based on the depth value of the extracted pointer position.
The method according to claim 1,
Wherein the order of the spatial coordinate correction in the detailed configuration of the virtual touch sensor can be changed, comprising:
A depth image photographing module (21) for photographing the touch sensor area to obtain a depth image of the touch sensor area;
A depth value calculator (22) for calculating a representative depth value of the touch sensor area in the depth image;
An object detection unit (23) for sequentially grouping and labeling pixels of the depth image whose depth value differs from the representative depth value of the touch sensor area and storing the objects in order with their size values;
A pointer extracting unit (24) for extracting a pointer from the detected object;
A spatial coordinate correcting unit (25) for correcting spatial coordinate distortion of the image of the touch sensor area;
And a touch determination unit (26) for determining whether the touch sensor area is touched based on the depth value of the extracted pointer position, whereby the correction order of the spatial coordinates is changed.
3. The method of claim 2,
Wherein the spatial coordinate correction is used when absolute spatial coordinates are applied and not when relative spatial coordinates are applied, and wherein one depth image capturing module or a plurality of depth image capturing modules can be installed in one or more of the top, bottom, left, right, upper-left, lower-left, upper-right, and lower-right areas of the display monitor, screen, or flat or curved object used as the virtual touch sensor.
The method according to claim 1,
Wherein the depth image photographing module (11) can be installed as an external mount type such as a hanger or a pedestal, a fixed type inside a wall or a floor, a removable type that is freely attachable and detachable, or a socket type attached to a terminal device.
The method according to claim 1,
Wherein the depth value calculation unit (13) of the touch sensor area receives the depth image obtained by the depth image photographing module (11) capturing the touch sensor area and calculates the depth value of each point, and wherein, when a specific number of frames are accumulated, the average value is calculated and stored as the representative depth value at each position of the touch sensor.
The method according to claim 1,
Wherein the spatial coordinate distortion of the image of the touch sensor area taken by the depth image photographing module (11) is corrected by:
X = H_X · (x - x_0) / H_x        Equation (1)
Y = V_Y · (y - y_0) / V_y(x),   where V_y(x) = V_y0 + (V_y1 - V_y0) · (x - x_0) / H_x        Equation (2)
Wherein the corrected post-transformation absolute coordinates are obtained using Equations (1) and (2) above, x (horizontal) and y (vertical) being the coordinates before transformation, X (horizontal) and Y (vertical) the coordinates after transformation, H_x (width) and V_y0, V_y1 (heights) the touch sensor area before conversion, H_X (width) and V_Y (height) the touch sensor area after conversion, and x_0 (horizontal) and y_0 (vertical) the correction offsets.
In the execution flow of the virtual keyboard device and operation method based on depth image processing, the method comprising:
A step (S100) of selecting the flat structure part, i.e., a flat or gently curved surface to serve as the virtual keyboard base, on which depth camera photographing and depth image data acquisition will be performed, and starting the depth image photographing;
A step (S200) of projecting and generating a virtual keyboard image on the flat structure part selected in step S100;
A step (S210) of, when step S200 is completed, extracting background data of the virtual keyboard key area from the virtual keyboard image projected onto the flat structure part;
An object recognizing step (S300) of recognizing the hand of the virtual keyboard operator with the depth camera to acquire depth information on the displacement between the operator's hand and the virtual keyboard image projected in step S200;
A step (S400) of detecting, with the depth camera, the touched target position on the virtual keyboard image;
A step (S500) of converting the target position detected in step S400 into depth information;
A key data synchronization step (S510) of checking and matching the key position converted into depth information in step S500 against the key position data in the DB module;
A step (S600) of transmitting to the DB module a code in which the depth information of the target position and the key data synchronized in step S510 are combined;
A step (S610) of selecting and extracting, among the input keys embedded in the DB module, the key corresponding to the code transmitted in step S600;
And a step (S700) of transmitting the input key extracted in step S610 to the control target computer.
8. The method of claim 7,
Wherein the shape of the virtual keyboard projected from the beam projector image transmitting module onto the flat structure part is output as an image with the same shape and size as a real keyboard, in order to give the virtual keyboard operator the feeling of actually touching a keyboard.
9. The method of claim 8,
Wherein the beam projector image transmitting module can output video in a keyboard layout supporting any of the world's languages, in order to increase the freedom of the virtual keyboard operator to select the language arrangement of the virtual keyboard for input according to the situation.
9. The method of claim 8,
Wherein a touch tone output is enabled to provide an auditory effect during keying of the virtual keyboard.
In the virtual keyboard device and operation method based on depth image processing, the detailed configuration of the entire system 100 for implementing it comprising:
A depth camera module (110) for acquiring depth information;
A beam projector image sending module 130 for projecting the virtual keyboard to the flat structure portion 120;
A DB module 140 in which virtual keyboard image data for key position recognition, key data, and tone data are embedded;
And a speaker module (150) for outputting a touch tone.
12. The method of claim 11,
Wherein a short-range wireless communication module 160 is installed in the speaker module 150 in order to minimize wired connections and increase space utilization, and the short-range wireless communication module 160 uses a low-power short-range Bluetooth communication method.
KR1020150083918A 2015-06-15 2015-06-15 Virtual Keyboard Operation System and Control Method using Depth Image Processing KR20160147326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150083918A KR20160147326A (en) 2015-06-15 2015-06-15 Virtual Keyboard Operation System and Control Method using Depth Image Processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150083918A KR20160147326A (en) 2015-06-15 2015-06-15 Virtual Keyboard Operation System and Control Method using Depth Image Processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020170103661A Division KR101808720B1 (en) 2017-08-16 2017-08-16 Virtual Keyboard Control Method using Depth Image Processing

Publications (1)

Publication Number Publication Date
KR20160147326A true KR20160147326A (en) 2016-12-23

Family

ID=57736190

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150083918A KR20160147326A (en) 2015-06-15 2015-06-15 Virtual Keyboard Operation System and Control Method using Depth Image Processing

Country Status (1)

Country Link
KR (1) KR20160147326A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130112061A (en) 2011-01-05 2013-10-11 소프트키네틱 소프트웨어 Natural gesture based user interface methods and systems
KR20140140095A (en) 2012-03-26 2014-12-08 애플 인크. Enhanced virtual touchpad and touchscreen

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lee, Saebom and Jeong, Ilhong, 2014, "Design and Implementation of NUI Using Kinect", Journal of Digital Contents Society, Vol. 15, No. 4, pp. 473-480.

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
E801 Decision on dismissal of amendment
A107 Divisional application of patent