KR101687150B1 - Aged and Feeble Person Friendly Multimedia System Based on Virtual Touch Sensor - Google Patents


Info

Publication number
KR101687150B1
Authority
KR
South Korea
Prior art keywords
depth
touch
touch sensor
pointer
multimedia
Prior art date
Application number
KR1020150136974A
Other languages
Korean (ko)
Inventor
권순각
이동석
Original Assignee
동의대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 동의대학교 산학협력단 filed Critical 동의대학교 산학협력단
Priority to KR1020150136974A priority Critical patent/KR101687150B1/en
Application granted granted Critical
Publication of KR101687150B1 publication Critical patent/KR101687150B1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • H04N13/0459
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an elderly-friendly virtual-touch-type bedside multimedia device and a method of implementing the same. More particularly, it relates to an image-processing technology in which a virtual touch area is created on a flat structure such as a wall surface, and a depth camera analyzes the motion and depth information of an elderly person so that the bedside multimedia device can be operated for everyday tasks such as emergency calls, telephone use, Internet access, indoor lighting control, and TV/radio control.
In the present invention, a depth camera is installed on one side of the ward where the elderly patient's bed is located; an object is recognized using the flat structure of the sickroom side wall; a multimedia graphic interface is created on that flat structure; when the elderly person touches a specific position on the multimedia graphic interface projected on the flat structure part of the sickroom side wall, the depth camera recognizes the touch operation at that position; and an algorithm transmitting the touch information to a multimedia console is applied. These steps are performed sequentially, and the individual modules are configured as a single system.

Description

[0001] The present invention relates to an elderly-friendly virtual-touch-type bedside multimedia device and a method for implementing the same.

More particularly, the present invention relates to an image-processing technology in which a virtual touch area is created on a flat structure such as a wall surface, and a depth camera analyzes the motion and depth information of an elderly person so that the bedside multimedia device can be operated for everyday tasks such as emergency calls, telephone use, Internet access, indoor lighting control, and TV/radio control.

In Korea, due to advances in medicine and the increase in average life expectancy, the proportion of elderly people in the total population is rapidly increasing, and the country is entering a seriously aged society. As a result, interest and investment in the silver industry, the industry serving the elderly, is increasing day by day. One notable area of the silver industry is bed-related technology, where demand for development is concentrated on supporting the daily lives of elderly people who have difficulty moving or are suffering from illness. These elderly people need the help and care of caregivers or family members for most basic daily activities, such as eating, drinking, and toilet use. Therefore, in terms of social security and welfare, engineering solutions are required that help elderly people who are ill or have difficulty moving and that create a basic living environment for them. In response to these social needs, the present invention provides a virtual-touch-type bedside multimedia device and an implementation method that secure accessibility, from the sickbed, to elements close to daily life such as emergency calls, telephone use, Internet access, indoor lighting control, and TV/radio control, and that maximize convenience of use.

Similar prior art disclosing an elderly-friendly virtual-touch-type bedside multimedia device and its implementation method includes KR 10-0527055 (B1), KR 10-0950062 (B1), and KR 10-1318724 (B1), registered with the Korean Intellectual Property Office. None of these conventional techniques provides an image-processing technique that can operate an elderly-friendly bedside multimedia device by utilizing depth information and a virtual touch sensor area at the same time.

KR 10-0527055 (B1) KR 10-0950062 (B1) KR 10-1318724 (B1)

The present invention aims to satisfy the technical needs arising from the background described above.

Specifically, the object of the present invention is to provide, for elderly people who have difficulty moving or are ill, a multifunctional bedside multimedia device for everyday tasks such as emergency calls, telephone use, Internet access, indoor lighting control, and TV/radio control, together with an image-processing technique that operates the bedside multimedia device by creating a virtual touch area on a flat structure such as a wall surface and analyzing the motion and depth information of the elderly person with a depth camera.

The technical objects to be achieved by the present invention are not limited to those mentioned above; other technical objects not mentioned will be clearly understood by those skilled in the art from the following description.

In order to achieve the above object, according to the present invention, there is provided an elderly-friendly virtual-touch-type bedside multimedia device and a method for implementing the same, wherein a depth camera is installed on one side of the ward where the elderly patient's bed is located; after an object is recognized, a multimedia graphic interface is created on the flat structure of the sickroom side wall; when a specific position on the multimedia graphic interface projected on the flat structure part of the sickroom side wall is touched, the depth camera recognizes the touch operation; and an algorithm for transmitting the touch information to a multimedia console is applied sequentially, with the individual modules configured as one system.

As described above, according to the present invention, elderly people who have difficulty moving in their daily lives can use the virtual-touch-type bedside multimedia device to perform routine tasks such as emergency calls, telephone use, Internet access, indoor lighting control, and TV/radio control, securing accessibility to elements close to daily life and maximizing convenience of use.

It is to be understood that the technical advantages of the present invention are not limited to the effects mentioned above, and that other technical effects not mentioned will be clearly understood by those skilled in the art from the description of the claims.

FIG. 1 is a flowchart of the depth information processing analysis applied to the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 2 is an exemplary view illustrating the installation location of the depth image capturing module relative to the virtual touch sensor in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 3 illustrates an example of the process in which coordinates before conversion are corrected by a one-dimensional linear transformation in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating touch judgment in the touch sensor area in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the flow of the touch speed and trajectory of the pointer in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 6 is an exemplary view illustrating the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 7 is a flowchart illustrating an embodiment of the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to the present invention.
FIG. 8 is a layout diagram of the major modules proposed in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating an example of motion recognition in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings; the invention, however, is not limited to the embodiments described. In the following description, the same components are denoted by the same reference numerals and symbols, and redundant description thereof is omitted.

The concept of the depth information processing analysis using the depth camera will be described with reference to FIG. 1. As shown in FIG. 1(a), the detailed configuration of the virtual touch sensor includes: a depth image photographing module 11 for photographing the touch sensor area to obtain a depth image of it; a spatial coordinate correction unit 12 for correcting spatial coordinate distortion of the image with respect to the touch sensor area taken by the depth image photographing module 11; a depth value calculation unit 13 for calculating a representative depth value of the touch sensor region in the depth image whose spatial coordinate distortion has been corrected by the spatial coordinate correction unit 12; an object detector 14 for grouping and labeling pixels of the depth image whose depth values differ from the representative depth value of the touch sensor area, storing the labeled objects with their size values in order, and thereby detecting the object; a pointer extracting unit 15 for extracting a pointer from the detected object; and a touch determination unit 16 for determining whether the touch sensor area is touched based on the depth value of the extracted pointer position.

In addition, the order of the spatial coordinate correction can be changed, as shown in FIG. 1(b). In this arrangement the spatial coordinate correction is performed immediately before the touch determination: a depth image photographing module 21 photographs the touch sensor area to obtain a depth image of it; a depth value calculator 22 calculates a representative depth value of the touch sensor area in the depth image; an object detection unit 23 sequentially groups and labels pixels of the depth image whose depth values differ from the representative depth value of the touch sensor area, storing the objects with their size values in order; a pointer extracting unit 24 extracts a pointer from the detected object; a spatial coordinate correcting unit 25 corrects the spatial coordinate distortion of the image with respect to the touch sensor area; and a touch determination unit 26 determines whether the touch sensor area is touched based on the depth value of the extracted pointer position.

Here, the spatial coordinate correction is used when absolute spatial coordinates are applied, and not when relative spatial coordinates are applied. A single depth image photographing module may be installed in any one of the top, bottom, left, right, upper-left, lower-left, upper-right, and lower-right areas of the display monitor, screen, or flat or curved object used as the virtual touch sensor, or modules may be installed in more than one place (FIG. 2).

The depth image photographing module 11 can be mounted in an external form such as on a hanger or pedestal, in a fixed form inside a wall or floor, in a removable form that is freely attachable and detachable, or as a socket type attached to a terminal device. The depth value calculation unit 13 of the touch sensor area receives the depth image obtained by the depth image capturing module 11 photographing the touch sensor area and calculates the depth value of each point. The depth value calculation unit 13 operates only at the beginning of execution and can use the first screen of the depth image; alternatively, a specific number of screens can be accumulated to obtain a stable image. When screens are accumulated, the average value is calculated and stored as the representative depth value at each position of the touch sensor; an intermediate value, a minimum value, or the like may be used instead of the average. When the position of the photographing module or its distance to the touch sensor area changes, the depth value calculation unit 13 is executed again: the depth value of the touch sensor area is obtained for each screen and compared with the previously stored representative depth value, and the depth value calculation unit 13 is executed again when the absolute difference is larger than a predetermined value.
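The accumulation and recalibration behavior described above can be sketched as follows. This is a minimal illustration assuming millimeter depth units and a simple per-pixel recalibration test; the function names are not from the patent:

```python
import numpy as np

def representative_depth(frames, method="mean"):
    """Reduce several accumulated depth frames of the empty touch
    sensor area to one representative depth value per position.
    Besides the mean, the text allows a median or minimum value."""
    stack = np.stack(frames).astype(float)
    if method == "mean":
        return stack.mean(axis=0)
    if method == "median":
        return np.median(stack, axis=0)
    if method == "min":
        return stack.min(axis=0)
    raise ValueError(f"unknown method: {method}")

def needs_recalibration(frame, rep, threshold=5.0):
    """Re-run the depth value calculation when the camera or surface
    has moved: some depth value in the current frame differs from the
    stored representative value by more than the threshold (mm)."""
    return bool(np.any(np.abs(frame - rep) > threshold))
```

In practice the recalibration test would have to ignore pixels occluded by a detected object, since a hand over the surface also produces large depth differences.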

FIG. 3 illustrates how coordinates before transformation are corrected by a one-dimensional linear transformation, based on four mapped spatial coordinates, under an environment in which the depth image capturing module is installed on the left side of the virtual touch sensor area. The transformation is defined as follows.

[Equations (1) and (2) are reproduced only as images in the original publication; they define the one-dimensional linear transformation from the pre-conversion coordinates (x, y) to the corrected post-transformation coordinates (X, Y) using the widths, heights, and offsets defined below.]

The corrected post-transformation absolute coordinates can be obtained using equations (1) and (2). Here, x (horizontal) and y (vertical) are the coordinates before conversion, and X (horizontal) and Y (vertical) are the coordinates after conversion. H_x (width) and V_y0, V_y1 (heights) describe the touch sensor area before conversion, while H_X (width) and V_Y (height) describe the touch sensor area after conversion. x_0 (horizontal) and y_0 (vertical) are the correction offsets.
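Since equations (1) and (2) themselves appear only as images, their exact form is not recoverable here. The sketch below assumes one plausible form consistent with the variable definitions: uniform horizontal scaling, and vertical scaling against a local height interpolated between V_y0 and V_y1 (two pre-conversion heights suggest a trapezoidal region as seen by a side-mounted camera). It is an illustration, not the patent's actual formula:

```python
def correct_coordinates(x, y, Hx, Vy0, Vy1, HX, VY, x0=0.0, y0=0.0):
    """Map pre-conversion coordinates (x, y) to corrected absolute
    coordinates (X, Y).  Assumed forms of equations (1) and (2)."""
    t = (x - x0) / Hx              # normalized horizontal position
    X = HX * t                     # assumed eq. (1): width scaling
    Vy = Vy0 + (Vy1 - Vy0) * t     # local pre-conversion height
    Y = VY * (y - y0) / Vy         # assumed eq. (2): height scaling
    return X, Y
```

When V_y0 equals V_y1 the correction degenerates to plain rescaling of both axes.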

Each time a new frame of the depth image is input, the object detecting unit 14 compares it with the representative depth value of the touch sensor area stored in the previous step and determines that an object is detected if the two values differ by more than a predetermined range. That is, if the following equation (3) is satisfied, it is determined that an object is detected at the horizontal position x and vertical position y, and a new binary image is generated with the pixel value at that position stored as '255'.

|d(x, y) − d_s(x, y)| > T_i ... (3)

Here, d is the depth value of the currently input depth image, d_s is the representative depth value of the touch sensor area, and T_i is a specific value set by the user; 5 is preferable when depth is measured in millimeters.

When object detection is performed for one screen, a binary image is generated with pixel value '255' in object parts and '0' elsewhere. Pixels with value '255' in the binary image are then sequentially grouped and labeled: in labeled order, the first object is stored as '1', the second object as '2', and so on. Noise can be removed through morphology operations such as erosion, dilation, removal, and filling. Since noise may also survive labeling, a labeled object whose pixel count falls below a certain number may be excluded as noise. The pointer extracting unit 15 extracts a pointer from the detected object by finding the position closest to the touch sensor area; the extracted pointer may correspond to a human finger or a pointing stick. Which position is closest depends on the installation direction of the image sensing module: when the depth imaging module is installed on the left side of the touch sensor area, the leftmost pixel of the object, relative to the direction of the touch sensor area depth image, is used as the pointer; when the module is on the right side, the rightmost pixel; on the upper side, the uppermost pixel; and on the lower side, the lowermost pixel.
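A minimal sketch of the detection, labeling, and pointer-extraction steps. A pure-Python flood fill stands in for whatever labeling and morphology routines the actual system uses, and the function names are illustrative:

```python
import numpy as np
from collections import deque

def detect_objects(depth, rep_depth, Ti=5.0, min_pixels=10):
    """Equation (3): a pixel belongs to an object when |d - d_s| > Ti.
    The binary mask is then labeled with a 4-connected flood fill;
    blobs smaller than min_pixels are discarded as noise."""
    mask = np.abs(depth - rep_depth) > Ti    # the '255'/'0' binary image
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    next_label = 1
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or labels[sy, sx] != 0:
                continue
            q = deque([(sy, sx)])            # BFS over one component
            labels[sy, sx] = next_label
            pixels = [(sy, sx)]
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
                        pixels.append((ny, nx))
            if len(pixels) < min_pixels:     # too small: mark as noise
                for y, x in pixels:
                    labels[y, x] = -1
            else:
                next_label += 1
    labels[labels == -1] = 0
    return labels

def extract_pointer(labels, label, camera_side="left"):
    """The pointer is the object pixel closest to the touch surface;
    which side that is depends on where the camera is mounted."""
    ys, xs = np.nonzero(labels == label)
    i = {"left": xs.argmin, "right": xs.argmax,
         "top": ys.argmin, "bottom": ys.argmax}[camera_side]()
    return int(xs[i]), int(ys[i])
```

A production system would likely use a library labeling routine (e.g. connected-components in an image library) rather than this explicit BFS.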

FIG. 4 shows touch judgment in the touch sensor area, and FIG. 5 shows the flow of the touch speed and trajectory of the pointer. The touch determination unit 16 determines whether the pointer approaches or touches the touch sensor. Whether a touch is recognized only when the hand or pointer actually contacts the touch sensor area, or already when it comes within a certain distance, can be decided according to the convenience of the user. For the contact case, the touch determination unit 16 compares the depth value of the pointer d_p(x, y) with the depth value d_s(x, y) at the same position in the depth image of the touch sensor area, and determines that a touch occurs when the absolute difference between the two values is less than a specific value. That is, when the following equation (4) is satisfied, it is determined that the pointer touches the touch sensor area at the horizontal position x and vertical position y.

|d_p(x, y) − d_s(x, y)| < T_D ... (4)

Here, T_D is a specific value set by the user; when depth is measured in millimeters, 5 is generally suitable. To prevent erroneous judgments, the average of the depth values at several neighboring positions may be used rather than the single position (x, y), and neighboring pixels along the left or right diagonal may also be used. When an approach, rather than a contact, is judged as a touch, the position whose depth value is compared differs from the touched position. That is, when the image pickup module is installed on the left side of the touch sensor area, the depth value of the pointer position d_p(x, y) is compared with d_s(x−T, y), and a touch is determined when the difference between the two values is less than the specified value. In this case the touched position is (x−T, y): if the following equation (5) is satisfied, it is determined that the pointer touches the touch sensor area at that position.

|d_p(x, y) − d_s(x − T, y)| < T_D ... (5)

Here, T is a specific value set by the user; when the approach distance between the pointer and the touch sensor area is 1 cm, five pixels is suitable. The average of several neighboring depth values may be used rather than a single pointer position. When the image capturing module is installed on the right side of the touch sensor area, d_p(x, y) is compared with d_s(x+T, y) and the touched position is (x+T, y); when on the upper side, d_p(x, y) is compared with d_s(x, y−T) and the touched position is (x, y−T); and when on the lower side, d_p(x, y) is compared with d_s(x, y+T) and the touched position is (x, y+T). In addition, the horizontal/vertical position and depth value of the pointer can be obtained for each screen, as shown in FIG. 5. That is, the velocity and direction of the pointer in the horizontal and vertical directions, and its distance from the photographing module or the touch sensor area, can be computed based on the frame rate of the screen. One method of detecting the moving direction and speed of the pointer is to continuously compare the previous frame and the current frame of the depth image acquired from the depth image capturing module, thereby detecting the pointer's moving direction, moving speed, and moving distance.
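The contact and approach touch tests of equations (4) and (5), together with the frame-rate-based velocity estimate, can be sketched as follows. This is a minimal illustration; the offset table and function names are assumptions, not from the patent:

```python
import numpy as np

# Which way the surface-depth comparison shifts for an approach touch,
# keyed by the side on which the depth camera is mounted.
OFFSETS = {"left": (-1, 0), "right": (1, 0), "top": (0, -1), "bottom": (0, 1)}

def is_touch(dp, ds, x, y, TD=5.0, approach_side=None, T=5):
    """Contact touch (eq. 4): |d_p(x, y) - d_s(x, y)| < TD.
    Approach touch (eq. 5): compare against the surface depth T pixels
    toward the camera; the reported touch position shifts by the same
    offset.  dp and ds are pointer and surface depth images in mm."""
    if approach_side is None:
        tx, ty = x, y
    else:
        ox, oy = OFFSETS[approach_side]
        tx, ty = x + ox * T, y + oy * T
    touched = abs(dp[y, x] - ds[ty, tx]) < TD
    return touched, (tx, ty)

def pointer_velocity(prev_pos, cur_pos, fps):
    """Per-axis pointer velocity (pixels/s) from successive frames."""
    return ((cur_pos[0] - prev_pos[0]) * fps,
            (cur_pos[1] - prev_pos[1]) * fps)
```

Averaging dp over a small neighborhood before the comparison, as the text suggests, would make the test more robust to single-pixel depth noise.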

In the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to the present invention, a depth camera is installed on one side of the ward where the elderly patient's bed is located, the flat structure part 120 of the sickroom side wall is set as the multimedia graphic interface, an object is recognized, and a virtual multimedia graphic interface area is created using a video projection module such as a beam projector. When an elderly person touches a specific position on the multimedia graphic interface projected on the sickroom side wall, the depth camera recognizes the touch and transmits the touch information to the multimedia console. Referring to FIG. 7, the method comprises:

A step (S100) of selecting a flat structure portion of the sickroom side wall as the basis of the multimedia graphic interface, performing depth camera photographing and depth image data acquisition, and starting depth image capture;

A step (S200) of generating a multimedia graphic interface image by projecting it onto the flat structure part of the sickroom side wall selected in step S100;

A step (S210) of extracting the background data of the multimedia graphic interface touch area from the interface image projected onto the flat structure part of the sickroom side wall when step S200 is completed;

An object recognizing step (S300) of recognizing the elderly person's hand with the depth camera to acquire depth information about the displacement between the interface image projected in step S200 and the hand;

A step (S400) of detecting, with the depth camera, the touch position at which the hand recognized in step S300 touches the multimedia graphic interface image;

A step (S500) of converting the touch position detected in step S400 into depth information;

A step (S510) of synchronizing multimedia control data, in which the touch position converted into depth information in step S500 is checked and matched against the multimedia control information embedded in the multimedia console;

A step (S600) of transmitting the code synchronized in step S510 between the depth information on the touch position and the multimedia control data to the multimedia console;

A step (S610) of selecting and extracting, for the code transmitted in step S600, the corresponding multimedia control task information embedded in the multimedia console;

And a step (S700) of feeding the multimedia control task information extracted in step S610 back to the multimedia graphic interface and reflecting the multimedia control result.
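Steps S510 through S610 amount to mapping a detected touch position onto an embedded control code. A toy sketch, with a hypothetical rectangular interface layout and control names that do not come from the patent:

```python
def match_control(touch_xy, regions):
    """S510/S600/S610 in miniature: look up which interface region the
    detected touch position falls in and return its control code.
    `regions` maps a control code to an (x0, y0, x1, y1) rectangle on
    the projected interface."""
    x, y = touch_xy
    for code, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return code
    return None  # touch landed outside every control area

# Hypothetical interface layout on the projected wall surface
regions = {"EMERGENCY_CALL": (0, 0, 100, 60), "TV_POWER": (100, 0, 200, 60)}
```

The returned code would then be sent to the multimedia console (S600), which executes the matching task and drives the projected feedback (S700).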

FIG. 8 is a schematic diagram of the overall system 100 for implementing the elderly-friendly virtual-touch-type bedside multimedia device according to the present invention. The system includes:

A depth camera module (110) for acquiring depth information;

A beam projector image sending module 130 for projecting the multimedia graphic interface to the flat structure part 120 of the sickroom side wall;

A multimedia console module 140 in which multimedia graphic interface data for touch location recognition, multimedia control data, and multimedia control announcement voice data are embedded;

And a speaker module 150 for outputting a multimedia control announcement voice.

The speaker module 150 is equipped with a short-range wireless communication module 160 in order to minimize wired connections and increase space utilization. It is preferable that the short-range wireless communication module 160 use the Bluetooth communication method, a low-power short-range wireless technology standard.

FIG. 9 is a diagram illustrating an example of motion recognition in the elderly-friendly virtual-touch-type bedside multimedia device and its implementing method according to an embodiment of the present invention. When a single-touch or multi-touch operation is performed on the virtual touch sensor area 12r implemented on the flat structure part 120 of the sickroom side wall, the depth camera recognizes the coordinate information 10t and 11t of the touch, converts it into depth information, and transmits it to the multimedia console module 140, so that the multimedia graphic event is displayed on the flat structure part of the sickroom side wall, implementing the motion-recognition-based bedside multimedia device.

While the present invention has been described in connection with what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. The true scope of the present invention should be determined by the appended claims.

10t, 11t: coordinate information of touch
12r: Virtual touch sensor area
100: Whole system
120: Flat structure of the side wall of the patient room
140: Multimedia console module

Claims (8)

An elderly-friendly virtual-touch-type bedside multimedia device comprising: a depth image photographing module (11) for obtaining a depth image of a touch sensor area; a spatial coordinate correction unit (12) for correcting spatial coordinate distortion of the image with respect to the touch sensor area; a depth value calculation unit (13) for calculating a representative depth value of the touch sensor area; an object detection unit (14) for sequentially grouping and labeling pixels of the depth image to detect an object; a pointer extracting unit (15) for extracting a pointer from the detected object; a touch determination unit (16) for determining whether the touch sensor area is touched based on the depth value of the extracted pointer position; a beam projector image sending module (130) for projecting the multimedia graphic interface onto the flat structure part (120) of the sickroom side wall; a multimedia console module (140) in which multimedia graphic interface data for touch location recognition, multimedia control data, and multimedia control announcement voice data are embedded; a speaker module (150) for outputting the multimedia control announcement voice; and a short-range wireless communication module (160) for minimizing wired connections,
wherein the short-range wireless communication module 160 uses the Bluetooth communication method, a low-power short-range wireless technology standard; the depth camera recognizes the coordinate information 10t and 11t of a touch, converts it into depth information, and transmits the depth information to the multimedia console module 140 so that the multimedia graphic event is displayed; the spatial coordinate correction is used when absolute spatial coordinates are applied and not when relative spatial coordinates are applied; the image shooting module is installed in any one or more of the top, bottom, left, right, upper-left, lower-left, upper-right, and lower-right areas of the display monitor, screen, or flat or curved surface used as the virtual touch sensor; the depth image photographing module 11 can be mounted in an external form such as on a hanger or pedestal, in a fixed form in a wall or floor, in a detachable form that is easy to attach and detach, or as a socket type attached to a terminal device; the depth value calculation unit 13 of the touch sensor area receives the depth image of the touch sensor area captured by the depth image photographing module 11 and calculates the depth value of each point; when screens are accumulated, an average value is calculated and stored as the representative depth value at each position of the touch sensor; and the spatial coordinate distortion of the image is corrected such that
[Equations (1) and (2) are reproduced only as images in the original publication; they define the one-dimensional linear transformation from the pre-conversion coordinates to the corrected coordinates.]
the corrected post-transformation absolute coordinates are obtained using equations (1) and (2), where x (horizontal) and y (vertical) are the coordinates before transformation; X (horizontal) and Y (vertical) are the coordinates after transformation; H_x (width) and V_y0, V_y1 (heights) describe the touch sensor area before conversion; H_X (width) and V_Y (height) describe the touch sensor area after conversion; and x_0 (horizontal) and y_0 (vertical) are the correction offsets;
each time a new frame of the depth image is input, the object detection unit 14 compares it with the stored representative depth value of the touch sensor area, and if the two values are outside a predetermined range,
|d(x, y) − d_s(x, y)| > T_i ... (3)
if equation (3) is satisfied, it is determined that an object is detected at the horizontal position x and vertical position y, a new binary image is generated, and the pixel value of the corresponding position is stored as '255', where d is the depth value of the currently input depth image, d_s is the representative depth value of the touch sensor area, and T_i is a specific value set by the user, generally 5 in the millimeter unit standard;
when object detection for one screen is complete, a binary image is generated with pixel value '255' in object portions and '0' elsewhere; pixels with value '255' in the binary image are sequentially grouped and labeled, and in labeled order objects are detected by storing the first object as '1', the second object as '2', and so on; noise is removed through morphology operations such as erosion, dilation, removal, and filling, and a labeled object whose pixel count is below a predetermined number is excluded from labeling as noise; the pointer extracting unit 15 extracts a pointer from the detected object by finding the position closest to the touch sensor area, and the extracted pointer may correspond to a human finger or a pointing stick; when the depth imaging module is installed on the left side of the touch sensor area, the leftmost pixel of the object, based on the direction of the touch sensor area depth image, is used as the pointer; when on the right side, the rightmost pixel; when on the upper side, the uppermost pixel; and when on the lower side, the lowermost pixel; the touch determination unit 16 determines whether the pointer approaches or touches the touch sensor, and whether a touch is recognized only on actual contact or already within a certain distance is decided according to the convenience of the user; for the contact case, the touch determination unit 16 compares the depth value of the pointer d_p(x, y) with the depth value d_s(x, y) at the same position in the depth image of the touch sensor area, and determines a touch when the absolute difference between the two values is below a specific value;
|d_p(x, y) - d_s(x, y)| < T_D ... (4)
If equation (4) is satisfied, the pointer is determined to touch the touch-sensor area at the horizontal position x and the vertical position y, where T_D is a specific value set by the user. To prevent misjudgment, instead of a single position, the average of the depth values over the neighboring positions x-1, x, x+1 and y-1, y, y+1 may be compared with the pointer's depth value. When the imaging module is installed on the left side of the touch-sensor area, the depth value of the pointer position d_p(x, y) is compared with d_s(x-T, y), the depth value of the touch-sensor area at the position a predetermined number T of pixels to the left in the depth image; a touch is determined when the difference between the two is below a specific value, and the touched position is then x-T in the horizontal direction and y in the vertical direction,
|d_p(x, y) - d_s(x-T, y)| < T_D ... (5)
If expression (5) is satisfied, the pointer is determined to touch the touch-sensor area at the corresponding position, where T is a specific value set by the user; for example, when the approach distance between the pointer and the touch-sensor area is 1 cm, T may be about 5 pixels. Here too, the average of several neighboring depth values may be used instead of the depth value of the pointer alone. If the imaging module is installed on the right side of the touch-sensor area, d_p(x, y) is compared with d_s(x+T, y); if it is installed above the area, d_p(x, y) is compared with d_s(x, y-T); and if it is installed below, d_p(x, y) is compared with d_s(x, y+T). By continuously comparing the previous frame and the current frame of the depth image acquired from the depth image photographing module, the moving direction and speed of the pointer can be detected: from the pointer's horizontal and vertical displacement and the frame rate of the screen, the velocity, direction, and distance moved by the pointer relative to the photographing module or the touch-sensor area can be obtained. A method of implementing a bedridden-friendly virtual-touch bedside multimedia device that detects the pointer's direction, moving speed, and moving distance in this way comprises:
Selecting a flat structural portion of the side wall of the room based on the multimedia graphic interface and starting depth imaging (S100);
Generating a multimedia graphic interface image by projecting it onto the flat structural portion of the side wall of the room (S200);
Extracting touch-area background data from the multimedia graphic interface image (S210);
An object recognition step of recognizing the elderly person's hand with the depth camera in order to acquire depth information on the displacement between the multimedia graphic interface image and the hand (S300);
Detecting the position touched by the elderly person's hand on the multimedia graphic interface image (S400);
Converting the detected touch position into depth information (S500);
A data synchronization step of synchronizing the converted depth information with the multimedia control information embedded in the multimedia console (S510);
Transmitting the synchronized data code to the multimedia console (S600);
Extracting the transmitted code as the corresponding multimedia control task information embedded in the multimedia console (S610); and
Performing feedback that retransmits the extracted multimedia control task information to the multimedia graphic interface and reflects the multimedia control result (S700).
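The claimed steps S100-S700 can be sketched as a single control loop. This is a hypothetical outline only: every object and method name below (projector, camera, console, recognize_hand, and so on) is invented for illustration and stands in for a subsystem named in the steps.

```python
def run_bedside_loop(projector, camera, console, frames):
    """One pass through steps S100-S700 over a sequence of depth frames.

    All collaborator objects here are hypothetical stand-ins for the
    subsystems the method describes (projector, depth camera, console).
    """
    projector.select_flat_wall_region()                  # S100
    projector.show_graphic_interface()                   # S200
    background = camera.capture_background()             # S210
    results = []
    for frame in frames:
        hand = camera.recognize_hand(frame, background)  # S300
        if hand is None:            # no hand visible in this frame
            continue
        pos = camera.detect_touch_position(hand, background)  # S400
        depth_info = camera.to_depth_info(pos, frame)         # S500
        code = console.synchronize(depth_info)                # S510
        console.transmit(code)                                # S600
        task = console.decode_control_task(code)              # S610
        # S700: execute the task and feed the result back to the display.
        results.append(projector.reflect(console.execute(task)))
    return results
```

In a real device the `frames` argument would be replaced by a live stream from the depth camera, and the loop would run until the console is shut down.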
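The touch tests of equations (4) and (5) and the frame-to-frame motion estimate described above can be sketched as follows. The function names, the list-of-lists depth map, and the (touched, position) return shape are assumptions made for illustration:

```python
import math

def is_touch(d_p, d_s, x, y, t_d, t=0, camera_side=None):
    """Touch test of equations (4) and (5).

    d_p: depth value measured at the pointer pixel (x, y);
    d_s: stored depth map of the touch-sensor area, indexed [row][col].
    With t = 0 this is equation (4): |d_p(x, y) - d_s(x, y)| < T_D.
    With the camera mounted beside the area, the surface point hidden
    behind the pointer is sampled t pixels away (equation (5)).
    Returns (touched, (tx, ty)) with the touched position.
    """
    offsets = {"left": (-t, 0), "right": (t, 0),
               "top": (0, -t), "bottom": (0, t)}
    dx, dy = offsets.get(camera_side, (0, 0))
    tx, ty = x + dx, y + dy
    return abs(d_p - d_s[ty][tx]) < t_d, (tx, ty)

def pointer_motion(prev_pos, cur_pos, fps):
    """Speed (pixels per second) and direction (radians) of the pointer
    from its positions in two consecutive depth frames, using the frame
    rate of the depth stream."""
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    return math.hypot(dx, dy) * fps, math.atan2(dy, dx)
```

Multiplying the per-frame displacement by the frame rate converts it to a speed, and accumulating the displacements over frames gives the distance moved, as the text describes.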
KR1020150136974A 2015-09-25 2015-09-25 Aged and Feeble Person Friendly Multimedia System Based on Virtual Touch Sensor KR101687150B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150136974A KR101687150B1 (en) 2015-09-25 2015-09-25 Aged and Feeble Person Friendly Multimedia System Based on Virtual Touch Sensor


Publications (1)

Publication Number Publication Date
KR101687150B1 true KR101687150B1 (en) 2016-12-15

Family

ID=57572094

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150136974A KR101687150B1 (en) 2015-09-25 2015-09-25 Aged and Feeble Person Friendly Multimedia System Based on Virtual Touch Sensor

Country Status (1)

Country Link
KR (1) KR101687150B1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0397401A (en) * 1989-09-11 1991-04-23 Noriyoshi Saeki Make-up slipper made of paper and its manufacture
JP2004145595A (en) * 2002-10-24 2004-05-20 Toshiba Corp In-hospital message service system
KR100527055B1 (en) 2003-01-17 2005-11-09 오인환 Input unit of computer for disabled person
KR100950062B1 (en) 2008-04-04 2010-03-26 인제대학교 산학협력단 Apparatus and method for monitoring and alarm service for smart home in olders
KR20130050572A (en) * 2011-11-08 2013-05-16 엘지전자 주식회사 Mobile terminal and method for controlling of the same
KR101318724B1 (en) 2012-01-20 2013-10-16 연세대학교 산학협력단 The intelligent medical care transformable bed
KR20150059423A (en) * 2013-11-22 2015-06-01 동의대학교 산학협력단 Virtual Touch Sensor Using Depth Information and Method for controlling the same


Similar Documents

Publication Publication Date Title
JP6137425B2 (en) Image processing system, image processing apparatus, image processing method, and image processing program
WO2016143641A1 (en) Posture detection device and posture detection method
JP2014236896A (en) Information processor, information processing method, and program
JP2014182409A (en) Monitoring apparatus
JP2004246856A (en) Interface device
KR101808714B1 (en) Vehicle Center Fascia Control Method Based On Gesture Recognition By Depth Information And Virtual Touch Sensor
JP2016510144A (en) Detection of natural user input involvement
JP2020086819A (en) Image processing program and image processing device
WO2019003859A1 (en) Monitoring system, control method therefor, and program
US10509967B2 (en) Occupancy detection
JP2015011404A (en) Motion-recognizing and processing device
JP7243725B2 (en) Target object detection program and target object detection device
KR101687150B1 (en) Aged and Feeble Person Friendly Multimedia System Based on Virtual Touch Sensor
JP6791731B2 (en) Posture judgment device and reporting system
JP2017228042A (en) Monitoring device, monitoring system, monitoring method and monitoring program
KR101785781B1 (en) Virtual Piano Event Control Method using Depth Information
KR20150059423A (en) Virtual Touch Sensor Using Depth Information and Method for controlling the same
JP2017041079A (en) Operation recognition device
JP2019021002A (en) Watching support system, and control method thereof
WO2020241057A1 (en) Image processing system, image processing program, and image processing method
CN116457882A (en) Apparatus and method for controlling camera
JP2022010581A (en) Detection device, detection method, image processing method and program
JPWO2016185738A1 (en) Image analysis apparatus, image analysis method, and image analysis program
KR20170035116A (en) Concurrency Game System Based On Gesture Recognition Using Virtual Touch Sensor
KR101775784B1 (en) Karaoke Machine System control method using Virtual Touch Sensor Based On Depth Information

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20191128

Year of fee payment: 4