CN111913577A - Three-dimensional space interaction method based on Kinect
- Publication number: CN111913577A
- Application number: CN202010763675.7A
- Authority
- CN
- China
- Prior art keywords
- color
- kinect
- partition image
- dimensional space
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
The invention relates to a Kinect-based three-dimensional space interaction method comprising the following steps: creating a color partition image with the same resolution as the Kinect depth map; loading the color partition image and storing the coordinates and color information of each pixel; obtaining the coordinates of the center of the current target in the Kinect depth map and, from those coordinates, reading the target's color information in the color partition image; and judging, from the color information and the depth information of the target in the color partition image, whether the target lies within a space that triggers an interactive operation. Because the invention divides regions by the colors of a picture whose pattern can be user-defined, interaction detection in irregular regions is more accurate and flexible.
Description
Technical Field
The invention belongs to the field of computer image processing, relates to three-dimensional space interaction technology, and in particular relates to a Kinect-based three-dimensional space interaction method.
Background
A natural user interface (NUI) is an interaction style grounded in human instinct: users can interact with machines through voice, gestures, eye tracking, body movement, and similar modalities. Because this style is closer to humans' spontaneous behavior, it reduces the user's learning burden. Somatosensory (motion-sensing) interaction is the mainstream trend in NUI and has therefore become the most popular direction in current natural-interaction research.
Somatosensory interaction means that the user interacts with devices or the environment directly through body movement, without any handheld controller, which gives the user an immersive feeling. Somatosensory technology generally requires the support of computer techniques such as gesture recognition, skeleton tracking, and facial expression recognition. Common somatosensory devices include the Wii released by Nintendo of Japan, Microsoft's Kinect, and the LeapMotion somatosensory controller released by Leap Motion; while interacting with such devices, the user issues commands through gestures, postures, or movements, achieving a natural mode of interaction.
The Kinect v2 has three cameras in total. The RGB camera on the left acquires color images at a maximum resolution of 1920 x 1080, at up to 30 frames per second. The other two cameras form the depth sensor: an infrared emitter on the left and an infrared receiver on the right, used to obtain the operator's position. The Kinect also carries a microphone array on its sides to obtain stable speech information, which can be combined with the Microsoft Speech Platform SDK for speech recognition. The base houses a motor that lets the sensor rotate left and right to follow the movement of a subject.
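For concreteness, the sketch below shows one way to acquire Kinect v2 depth frames in Python using the third-party pykinect2 bindings; the library choice is an assumption, since nothing in this document prescribes an SDK.

```python
from pykinect2 import PyKinectV2, PyKinectRuntime

# Open the sensor with both color and depth streams enabled.
kinect = PyKinectRuntime.PyKinectRuntime(
    PyKinectV2.FrameSourceTypes_Color | PyKinectV2.FrameSourceTypes_Depth)

while True:
    if kinect.has_new_depth_frame():
        # The depth frame arrives as a flat uint16 array of millimetre
        # values; reshape it to the 424 x 512 depth-camera resolution.
        depth = kinect.get_last_depth_frame().reshape((424, 512))
        break  # one frame is enough for this sketch
```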
Disclosure of Invention
To address the above technical problems in the prior art, the invention provides a Kinect-based three-dimensional space interaction method comprising the following steps: creating a color partition image with the same resolution as the Kinect depth map; storing the coordinates and color information of each pixel; obtaining the coordinates of the center of the current target in the Kinect depth map and, from those coordinates, reading the target's color information in the color partition image; and judging, from the color information and the depth information of the target in the color partition image, whether the target lies within a space that triggers an interactive operation.
In some embodiments of the invention, the color partition image distinguishes regions carrying different depth information by different colors. Preferably, the color partition image is customizable. Further, the color partition image coincides completely with the Kinect depth map.
In some embodiments of the invention, the color information comprises the name of a color or the numerical value of a pixel. Specifically, the color name can be expressed in Chinese, English, or another language, and the pixel value can be given in common forms such as RGB or YUV.
In some embodiments of the present invention, obtaining the coordinates of the center of the current target in the depth map and its depth information further includes threshold-filtering the depth map.
In some embodiments of the present invention, the coordinates of the center of the current target in the Kinect depth map are obtained with OpenCV.
In some embodiments of the invention, the interaction comprises a somatosensory interaction.
The beneficial effect of the above scheme is that the image's colors divide the area, the pattern of the image can be user-defined, and interaction detection in irregular regions is therefore more accurate and flexible.
Drawings
FIG. 1 is a diagram of the basic steps of a Kinect-based three-dimensional space interaction method according to some embodiments of the present invention;
FIG. 2 is a diagram of the detailed steps of a Kinect-based three-dimensional space interaction method according to some embodiments of the present invention;
FIG. 3a is a color partition image according to some embodiments of the present invention; FIG. 3b is a depth map measured by the Kinect according to some embodiments of the present invention;
FIG. 4 is a diagram illustrating common gesture interactions.
Detailed Description
The principles and features of this invention are described below in conjunction with the drawings; the embodiments are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in FIG. 1, a Kinect-based three-dimensional space interaction method includes the following steps: S1, creating a color partition image with the same resolution as the Kinect depth map; S2, loading the color partition image and storing the coordinates and color information of each pixel; S3, obtaining the coordinates of the center of the current target in the Kinect depth map and, from those coordinates, reading the target's color information in the color partition image; and S4, judging, from the color information and the depth information of the target in the color partition image, whether the target lies within a space that triggers an interactive operation.
It should be noted that the first-generation Kinect (Kinect v1) has an RGB camera resolution of 640 x 480 and a depth camera resolution of 320 x 240, while the second generation (Kinect v2) has an RGB camera resolution of 1920 x 1080 and a depth camera resolution of 512 x 424. The interaction method above applies to, but is not limited to, the first- and second-generation Kinect.
In some embodiments of the present invention, the color partition image distinguishes regions with different depth information by different colors. Preferably, the color partition image is customizable. In the color partition map of FIG. 3a, the partition map is roughly elliptical and divided by color into upper, middle, and lower regions; each region represents different depth information, and the depth values within one region are identical or fall in the same numerical interval. In particular, for convenience of debugging or computation, the color partition image coincides exactly with the Kinect depth map.
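As an illustration, a partition image like the one in FIG. 3a can be generated programmatically. The sketch below (Python with OpenCV) draws an ellipse split into upper, middle, and lower color bands; the specific colors, band boundaries, and output file name are assumptions, since the pattern is user-defined.

```python
import numpy as np
import cv2

H, W = 424, 512                                  # Kinect v2 depth resolution
partition = np.zeros((H, W, 3), np.uint8)        # black background

# Elliptical footprint of the interactive area.
mask = np.zeros((H, W), np.uint8)
cv2.ellipse(mask, (W // 2, H // 2), (220, 180), 0, 0, 360, 255, -1)

# Split the ellipse into three horizontal bands, one BGR color per band.
bands = [(0, H // 3, (0, 0, 255)),               # upper band: red
         (H // 3, 2 * H // 3, (0, 255, 0)),      # middle band: green
         (2 * H // 3, H, (255, 0, 0))]           # lower band: blue
for y0, y1, color in bands:
    inside = mask[y0:y1] > 0
    partition[y0:y1][inside] = color

cv2.imwrite("partition.png", partition)
```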
In some embodiments of the invention, the color information comprises the name of a color or the numerical value of a pixel. Specifically, the color name may be expressed in Chinese, English, or another language, and the pixel value may be expressed in common forms such as RGB, HSI, HSV, CMYK, or YUV.
As shown in FIG. 2 and FIG. 3, in some embodiments of the present invention a Kinect-based three-dimensional space interaction method includes the following specific steps. Step S1: set up the computer and the Kinect hardware and software environment and confirm that the driver and operation are normal. Mount the Kinect facing downward at a height of 4.5 meters, which is within the effective detection range of the Kinect depth camera.
Step S2: load the picture in FIG. 3a and store each pixel in an array such as ColorArray = [(x1, y1, black), (x1, y2, black) … (x512, y424, color)], where each element of the array contains the pixel coordinates x, y and the color information of that pixel.
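A minimal sketch of this step, assuming the partition.png file produced above; next to the flat ColorArray layout from the text, a dictionary keyed by (x, y) is also built for constant-time lookup in later steps.

```python
import cv2

img = cv2.imread("partition.png")        # 424 rows (y) x 512 cols (x), BGR
color_array = [(x, y, tuple(img[y, x]))  # (x, y, (b, g, r)), y varies fastest
               for x in range(img.shape[1])
               for y in range(img.shape[0])]
color_at = {(x, y): c for x, y, c in color_array}
```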
Step S3: acquire each frame of the Kinect infrared depth map, determine whether an object is detected, and store the information of the depth map in an array such as DepthArray = [(x1, y1, 4), (x1, y2, 4) … (x512, y424, depth)].
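A corresponding sketch for the DepthArray, assuming `depth` is the 424 x 512 uint16 frame from the acquisition loop in the Background section; in practice, indexing `depth[y, x]` directly avoids materializing the list.

```python
# Flatten one depth frame into the DepthArray layout described above.
depth_array = [(x, y, int(depth[y, x]))  # depth value in millimetres
               for x in range(depth.shape[1])
               for y in range(depth.shape[0])]
```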
Step S4: when an object (the target) enters the detection area of the Kinect depth camera, appearing for example as the white dot in FIG. 3b, process the depth map with OpenCV to obtain the coordinates of the center of the white dot.
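A sketch of this detection step using OpenCV thresholding and contour moments; the depth band used to isolate the object (500-4500 mm) is an assumption.

```python
import cv2

# Keep only pixels whose depth falls inside an assumed object band (mm).
mask = cv2.inRange(depth, 500, 4500)             # uint8 binary mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
if contours:
    blob = max(contours, key=cv2.contourArea)    # largest blob = the target
    m = cv2.moments(blob)
    if m["m00"] > 0:                             # non-degenerate contour
        cx = int(m["m10"] / m["m00"])            # centroid x
        cy = int(m["m01"] / m["m00"])            # centroid y
```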
Step S5: because the depth map and the color partition map of FIG. 3a share the same coordinates, the white-dot coordinates obtained from FIG. 3b in step S4 locate the same pixel in FIG. 3a; the color of that pixel is then read from the ColorArray built in step S2, and the color identifies the planar region where the current object is located.
Step S6: with the object coordinates from step S4, read the object's depth value from the DepthArray built in step S3 to obtain the spatial region where the object currently is. An interactive operation can therefore be triggered whenever an object enters a specific three-dimensional region.
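Putting steps S5 and S6 together, the sketch below makes the final decision, reusing `color_at`, the centroid (cx, cy), and the `depth` frame from the sketches above; the trigger color and depth window are illustrative assumptions.

```python
TRIGGER_COLOR = (0, 255, 0)            # middle (green) band, BGR; assumption
DEPTH_MIN, DEPTH_MAX = 1000, 2000      # trigger depth window in mm; assumption

def in_trigger_space(cx, cy, depth, color_at):
    """True when the target sits in the trigger region at a trigger depth."""
    region_color = color_at.get((cx, cy))    # S5: planar region via color
    d = int(depth[cy, cx])                   # S6: depth at the same pixel
    return region_color == TRIGGER_COLOR and DEPTH_MIN <= d <= DEPTH_MAX

if in_trigger_space(cx, cy, depth, color_at):
    print("target inside the interactive 3-D space: trigger the operation")
```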
In some embodiments of the invention, the interactive operation comprises somatosensory interaction. In particular, common gesture operations for 3D interaction are shown in FIG. 4.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included in its scope of protection.
Claims (8)
1. A three-dimensional space interaction method based on Kinect is characterized by comprising the following steps:
creating a color partition image, wherein the color partition image has the same resolution as the Kinect depth map;
loading the color partition image and storing coordinates and color information corresponding to each pixel;
acquiring the coordinates of the center position of the current target in the Kinect depth map, and acquiring the color information of the current target in the color partition image according to those coordinates;
and judging whether the target is located in a space that triggers an interactive operation according to the color information and the depth information of the target in the color partition image.
2. The Kinect-based three-dimensional space interaction method as claimed in claim 1, wherein the color partition image distinguishes regions with different depth information by different colors.
3. The Kinect-based three-dimensional space interaction method as recited in claim 1 or 2, wherein the color partition image is customizable.
4. The Kinect-based three-dimensional space interaction method as recited in claim 3, wherein the color partition image coincides completely with the Kinect depth map.
5. The Kinect-based three-dimensional space interaction method as recited in claim 1 or 2, wherein the color information comprises a name of a color or a numerical representation of a pixel.
6. The Kinect-based three-dimensional space interaction method as claimed in claim 1 or 2, wherein obtaining the coordinates of the center position of the current target in the depth map and the depth information further comprises threshold-filtering the depth map.
7. The Kinect-based three-dimensional space interaction method as claimed in claim 1 or 2, wherein the obtaining of the coordinates of the center position of the current target in the Kinect depth map is implemented by OpenCV.
8. The Kinect-based three-dimensional space interaction method as claimed in claim 1 or 2, wherein the interactive operation comprises somatosensory interaction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010763675.7A CN111913577A (en) | 2020-07-31 | 2020-07-31 | Three-dimensional space interaction method based on Kinect |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111913577A true CN111913577A (en) | 2020-11-10 |
Family
ID=73287562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010763675.7A Pending CN111913577A (en) | 2020-07-31 | 2020-07-31 | Three-dimensional space interaction method based on Kinect |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111913577A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103207709A (en) * | 2013-04-07 | 2013-07-17 | 布法罗机器人科技(苏州)有限公司 | Multi-touch system and method |
US20150002419A1 (en) * | 2013-06-26 | 2015-01-01 | Microsoft Corporation | Recognizing interactions with hot zones |
CN105518584A (en) * | 2013-06-26 | 2016-04-20 | 微软技术许可有限责任公司 | Recognizing interactions with hot zones |
CN104360729A (en) * | 2014-08-05 | 2015-02-18 | 北京农业信息技术研究中心 | Multi-interactive method and device based on Kinect and Unity 3D |
US20160191879A1 (en) * | 2014-12-30 | 2016-06-30 | Stephen Howard | System and method for interactive projection |
CN109145802A (en) * | 2018-08-14 | 2019-01-04 | 清华大学 | More manpower gesture man-machine interaction methods and device based on Kinect |
CN109513157A (en) * | 2018-10-16 | 2019-03-26 | 广州嘉影软件有限公司 | Training exercises exchange method and system based on Kinect somatosensory |
CN110045821A (en) * | 2019-03-12 | 2019-07-23 | 杭州电子科技大学 | A kind of augmented reality exchange method of Virtual studio hall |
Non-Patent Citations (2)
Title |
---|
PEI Yijian; HAN Jiajia; YAN Zhe; XUE Duan: "Application of Kinect Color Images in Cursor Movement Control" (Kinect彩色图像在光标移动控制中的应用), Computer Engineering (计算机工程), no. 10, pages 241-245 *
JIA Bingjia; LI Ping: "Recognition Method of Digital Gestures in Human-Computer Interaction" (人机交互过程中数字手势的识别方法), Journal of Huaqiao University (Natural Science) (华侨大学学报(自然科学版)), no. 02 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |