CN111860142A - Projection enhancement oriented gesture interaction method based on machine vision - Google Patents

Projection enhancement oriented gesture interaction method based on machine vision

Info

Publication number
CN111860142A
Authority
CN
China
Prior art keywords
gesture
camera
interaction
image
projector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010523379.XA
Other languages
Chinese (zh)
Other versions
CN111860142B (en)
Inventor
徐光耀 (Xu Guangyao)
李亮 (Li Liang)
白晓亮 (Bai Xiaoliang)
常壮 (Chang Zhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Hover Information Physics Integration Innovation Research Institute Co Ltd
Original Assignee
Nanjing Hover Information Physics Integration Innovation Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Hover Information Physics Integration Innovation Research Institute Co Ltd filed Critical Nanjing Hover Information Physics Integration Innovation Research Institute Co Ltd
Priority to CN202010523379.XA
Priority to PCT/CN2020/109691 (WO2021248686A1)
Publication of CN111860142A
Application granted
Publication of CN111860142B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/11 Hand-related biometrics; Hand pose recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a projection enhancement oriented gesture interaction method based on machine vision, which comprises the following steps: step 1, jointly calibrating a projector and a camera; step 2, turning on the camera to collect gesture-free images; step 3, converting the gesture-free images into a background image and preprocessing it; step 4, eliminating the background information of the current image with a frame difference method to obtain the gesture image, then performing edge contour detection on the gesture information with a Canny operator to obtain the gesture contour; step 5, traversing the acquired gesture contour to obtain the fingertip coordinates; step 6, judging the position and dwell time of the fingertip coordinates and transmitting the judgment result to a computer; and step 7, parsing the judgment result so that the computer executes the corresponding operation. On the assembly site, an assembly worker can interact through simple gestures alone, without wearing any equipment, so that truly natural interaction is achieved and the practicability of the system is improved.

Description

Projection enhancement oriented gesture interaction method based on machine vision
Technical Field
The invention belongs to the technical field of human-computer interaction, and particularly relates to a gesture interaction method based on machine vision and oriented to projection enhancement.
Background
With the rapid development of information technology, interacting with all kinds of computer systems has become unavoidable, so human-computer interaction technology has received increasing attention. New technologies and devices such as VR are springing up rapidly, and operating them efficiently and naturally has become both a technical difficulty and an important research direction. Compared with the mouse, the keyboard and the touch screen, gestures are flexible, intuitive and contact-free; gesture recognition plays an important role in VR, is widely applied in fields such as AR, unmanned aerial vehicle control, smart homes and sign language recognition, and its role keeps growing. In the field of aerospace manufacturing, the assembly of large equipment such as aircraft involves many parts and a great deal of manual work, so wrong assembly and missed assembly occur easily, and such problems greatly endanger the safety of future passengers. An intelligent assembly guidance system based on augmented reality can solve these problems well; in actual assembly, a projection augmented reality system is better suited to the assembly site, but its interaction remains unnatural.
In a projection augmented reality system, the traditional interaction mode is the keyboard and the mouse: assembly guidance information is displayed by pressing keys and clicking the mouse. However, computer equipment cannot be installed on the assembly site, which greatly limits the application of projection augmented reality systems. With the development of computer technology, experts and scholars have proposed gesture interaction based on devices such as the Kinect and the Leap Motion, but these devices must be used at very close range, which interferes with the workers' assembly operations.
The Chinese thesis "Gesture recognition based on vision and interactive application research [D]. Nanjing University of Science and Technology, 2017" studies gesture recognition and interaction based on machine vision. It first proposes on-site skin color modeling to solve skin color detection under complex illumination, then proposes two target re-detection mechanisms that effectively solve the problem of target loss, next improves the traditional frame difference method to raise gesture detection efficiency, and finally uses a Fourier descriptor as the gesture feature, recognizing static gestures with a KNN algorithm and dynamic gestures with a statistical counting method, so that a simulated mouse realizes bare-hand human-computer interaction under monocular vision. Because gesture segmentation in this prior art relies on a skin color model, segmentation becomes inaccurate whenever the hand overlaps the face or another skin-colored region, which seriously degrades the later feature extraction and gesture recognition; moreover, the sets of recognizable static and dynamic gestures are small, the constraints are numerous, only a few mouse functions can be simulated, and the most natural way of interacting with a machine is not achieved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a projection enhancement oriented gesture interaction method based on machine vision, which realizes more natural human-computer interaction in a projection augmented reality system and improves the applicability, the interaction experience and the assembly efficiency of the system.
The technical scheme adopted by the invention is as follows:
a projection enhancement oriented gesture interaction method based on machine vision comprises the following steps:
step 1: the projector and the camera are jointly calibrated, the camera is calibrated by utilizing an entity asymmetric circle calibration plate, then the projector is calibrated by utilizing a non-entity checkerboard calibration plate, so that a non-entity checkerboard calibration object and an entity asymmetric circle calibration object are on the same calibration plate, and the asymmetric circle calibration plate and the checkerboard calibration plate are captured by the camera simultaneously, so that the projector and the camera are in the same coordinate system;
step 2: opening a camera, manually adjusting the aperture size and the exposure time of the camera, then setting the frame rate and the image size of the camera, and then acquiring a gesture-free image;
and step 3: converting the collected gesture-free image into a background image, eliminating noise and irrelevant information in the background image through preprocessing, and recovering useful real information in the background image;
And 4, step 4: acquiring a current image in a virtual scene by using a camera, subtracting the current image from a background image by using a frame difference method, eliminating background information in the current image and obtaining an absolute value of the background information, so as to obtain gesture information in the current image, and then performing edge contour detection on the gesture information by using the existing Canny operator, so as to obtain a gesture contour;
and 5: traversing pixel points of the acquired gesture outline through OpenCV, judging coordinates of the pixel points, and acquiring a highest point so as to extract coordinates of fingertip points in the gesture outline;
step 6: performing position and time judgment on the interaction area coordinate and the interaction button coordinate in the virtual scene and the fingertip point coordinate through matrix conversion and transfer by using a world coordinate system, activating interaction operation when the fingertip point is positioned on one interaction button for a long time, and then transmitting a judgment result to a computer;
and 7: and analyzing the judgment result, and then executing corresponding operation by the computer according to the interaction type of the predefined interaction operation.
In step 1, the calibration principle of the camera is expressed as:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$
where (X_W, Y_W, Z_W) are the world coordinates of a point and (u, v) its pixel coordinates; the remaining symbols are defined below.
in step 1, the projector and the camera are jointly calibrated to obtain an internal reference matrix of the projector and the camera, wherein the internal reference matrix is expressed as:
$$K = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
with one such matrix for the camera and one for the projector, each with its own parameters.
In step 1, the projector and the camera are jointly calibrated to obtain an external parameter matrix of the projector and the camera, wherein the external parameter matrix is represented as:
$$\begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix}$$
with R the rotation matrix and T the translation vector.
in step 3, the specific steps of converting the gesture-free image into the background image are as follows: (1) firstly, acquiring 3-5 gesture-free images through a camera; (2) reading the acquired gesture-free image, converting the gesture-free image into a matrix form, summing by using a related function in OpenCV, and solving an average value; (3) and taking the gesture-free image with the average value as a background image.
A rectangular interaction button is arranged on the background image, so that the projector can conveniently project onto it.
In step 3, the preprocessing comprises graying, geometric transformation and enhancement steps.
In step 2, the frame rate of the camera is greater than or equal to 5 frames per second.
The invention has the following beneficial effects:
(1) on the assembly site, an assembly worker can interact through simple gestures alone, without wearing any equipment, so that truly natural interaction is achieved and the practicability of the system is improved;
(2) by selecting a region of interest, the invention shortens the gesture detection time, improves the speed and precision of the gesture judgment, and shortens the program response time, thereby improving the interaction efficiency.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The projection enhancement oriented gesture interaction method based on machine vision of the present invention is further illustrated by the following description and example.
As shown in fig. 1, the present invention includes:
step 1: jointly calibrate the projector and the camera: first calibrate the camera with a physical asymmetric circle calibration board, then calibrate the projector with a projected (non-physical) checkerboard pattern, so that the projected checkerboard target and the physical asymmetric circle target lie on the same board and are captured by the camera at the same time, placing the projector and the camera in the same coordinate system. The joint calibration provides the data for the subsequent coordinate extraction and gesture judgment; while the asymmetric circle board is calibrating the camera, it is kept at 45 degrees to the camera imaging plane, which helps ensure the calibration precision of the camera.
The specific steps of calibrating the camera with the physical asymmetric circle calibration board are: (1) first acquire images of the board at multiple angles with the camera; (2) obtain the coordinates of the circle-center points in the images with the relevant OpenCV algorithm, and compute the intrinsic matrix of the linear model relating the image coordinates to the world coordinates; (3) compute the distortion parameters of the nonlinear model from the intrinsic matrix, recompute the linear parameters, and iterate to obtain the final intrinsic and distortion parameters, from which the extrinsic matrix is obtained.
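By way of illustration only, the camera half of this calibration might be sketched in Python with OpenCV roughly as follows; the grid size, circle spacing and image file names are assumptions made for the example, not prescriptions of the invention:

    import cv2
    import numpy as np

    PATTERN = (4, 11)      # circles per row x number of rows (assumed grid size)
    SPACING = 20.0         # center-to-center circle spacing in mm (assumed)

    # World coordinates of the circle centers on the physical board (Z = 0 plane),
    # laid out in the staggered pattern of an asymmetric circle grid.
    obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    for i in range(PATTERN[1]):
        for j in range(PATTERN[0]):
            obj[i * PATTERN[0] + j] = ((2 * j + i % 2) * SPACING, i * SPACING, 0)

    obj_pts, img_pts, img_size = [], [], None
    for path in ["calib_%02d.png" % k for k in range(24)]:  # 24-26 views, per below
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        img_size = gray.shape[::-1]
        found, centers = cv2.findCirclesGrid(
            gray, PATTERN, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
        if found:
            obj_pts.append(obj)
            img_pts.append(centers)

    # calibrateCamera alternates linear estimation and non-linear refinement
    # internally, matching the iterate-and-refine procedure described above; it
    # returns the intrinsics K, the distortion coefficients and the per-view
    # extrinsics (rvecs, tvecs).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, img_size, None, None)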
During the joint calibration the camera acquires 24-26 images at multiple angles; the world coordinates of the projected checkerboard corner points are computed with the camera's intrinsic and extrinsic matrices, the checkerboard corner coordinates are computed from the projector's intrinsic and extrinsic matrices, and the two sets are matched, so that the intrinsic and extrinsic parameters of the projector and the camera are solved together and the two devices are placed in the same coordinate system.
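Under the same caveats, the projector half of the joint calibration can be sketched as a back-projection of the checkerboard corners detected in the camera image onto the board plane; the helper below assumes the camera intrinsics K, distortion coefficients dist and board pose rvec, tvec obtained above, and every name in it is illustrative:

    import cv2
    import numpy as np

    def corners_to_board_plane(corners, K, dist, rvec, tvec):
        """Back-project detected corners onto the calibration-board plane Z = 0."""
        R, _ = cv2.Rodrigues(rvec)
        norm = cv2.undistortPoints(corners, K, dist).reshape(-1, 2)
        c = (-R.T @ tvec).reshape(3)          # camera center in board coordinates
        world = []
        for x, y in norm:
            d = R.T @ np.array([x, y, 1.0])   # viewing ray in board coordinates
            s = -c[2] / d[2]                  # scale that lands the ray on Z = 0
            world.append(c + s * d)
        return np.asarray(world, np.float32)

The 3D points recovered this way, paired with the corners' known pixel positions in the projector's own image, can then be passed to cv2.calibrateCamera again to solve the projector's intrinsic and extrinsic parameters, which is what places both devices in one coordinate system.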
Step 2: turn on the camera, manually adjust its aperture and exposure time, then set its frame rate and image size, and collect gesture-free images; the camera is opened by calling the VideoCapture function in OpenCV, and the camera frame rate is greater than or equal to 5 frames per second.
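A minimal sketch of this step, assuming a camera reachable at index 0; the frame-rate and resolution values are illustrative (the method only requires at least 5 frames per second):

    import cv2

    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("camera not available")

    cap.set(cv2.CAP_PROP_FPS, 30)             # frame rate (>= 5 fps per the method)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # image size
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    # Aperture and exposure time are adjusted manually on the lens and camera;
    # where the driver allows it, exposure can also be nudged in software
    # (the value below is driver-dependent and purely an assumption):
    cap.set(cv2.CAP_PROP_EXPOSURE, -6)

    ok, frame = cap.read()                    # one gesture-free image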
Step 3: convert the collected gesture-free images into a background image, eliminate noise and irrelevant information in the background image through preprocessing, and recover its useful real information; the preprocessing comprises graying, geometric transformation and enhancement steps.
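A sketch of the background conversion and preprocessing, assuming the cap object from the previous sketch; the frame count follows the 3-5 range detailed further below, and the Gaussian kernel is one illustrative choice of denoising:

    import cv2
    import numpy as np

    frames = []
    for _ in range(5):                        # 3-5 gesture-free images, per the text
        ok, frame = cap.read()
        if ok:
            frames.append(frame.astype(np.float32))

    background = (sum(frames) / len(frames)).astype(np.uint8)  # per-pixel average
    background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)  # graying
    background = cv2.GaussianBlur(background, (5, 5), 0)       # noise elimination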
Step 4: use the camera to acquire the current image of the virtual scene, subtract the background image from the current image with the frame difference method and take the absolute value, thereby removing the background information and obtaining the gesture information in the current image; then perform edge contour detection on the gesture information with the standard Canny operator to obtain the gesture contour. Here, too, the camera is opened by calling the VideoCapture function in OpenCV.
The current image is differenced against the gesture-free image acquired in step 2; if the difference is 0, there is no gesture information in the current image, and the current image of the virtual scene is acquired again until the differencing succeeds and the gesture information in the current image is obtained.
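The differencing and contour extraction of step 4 might be sketched as follows, assuming cap and background from the earlier sketches; the binarization threshold and the Canny limits are assumptions:

    import cv2

    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, background)        # frame difference, absolute value
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) == 0:
            continue                                # difference is 0: no gesture yet
        edges = cv2.Canny(mask, 50, 150)            # Canny edge detection
        contours, _ = cv2.findContours(
            edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)  # largest contour as the hand
            break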
Step 5: traverse the pixel points of the acquired gesture contour with OpenCV, compare their coordinates, and take the highest point, thereby extracting the fingertip coordinates from the gesture contour.
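Continuing the sketch, the fingertip of step 5 is simply the contour point with the smallest image y coordinate:

    import numpy as np

    pts = hand.reshape(-1, 2)                      # contour points as (x, y) rows
    fingertip = tuple(pts[np.argmin(pts[:, 1])])   # highest point taken as fingertip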
Step 6: using the world coordinate system, and via matrix conversion and transfer, judge the position and dwell time of the fingertip coordinates against the interaction area and interaction button coordinates in the virtual scene; when the fingertip rests on an interaction button long enough, activate the interaction operation and transmit the judgment result to a computer.
the position and time judgment indicates that the position of the pointing point coordinate is in the interactive area and the interactive button area and continues for a certain camera frame rate. The shape of the interactive button is rectangular, and the coordinates of each vertex of the interactive button and the interactive operation of each interactive button are designed and defined in advance.
Step 7: parse the judgment result, after which the computer executes the corresponding operation according to the interaction type of the predefined interaction operation.
In step 1, the calibration principle of the camera is expressed as:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$
wherein f is the focal length of the calibrated camera, f/dx and f/dy are the scale factors of the u axis and the v axis respectively, R is the rotation matrix, T is the translation vector, Z_C is the scale factor, (X_W, Y_W, Z_W) are the world coordinates of a point, and (u_0, v_0) are the coordinates of the camera image center.
In step 1, the projector and the camera are jointly calibrated to obtain an internal reference matrix of the projector and the camera, wherein the internal reference matrix is expressed as:
$$K = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
with one such matrix for the camera and one for the projector, each with its own parameters.
in step 1, the projector and the camera are jointly calibrated to obtain an external parameter matrix of the projector and the camera, wherein the external parameter matrix is represented as:
$$\begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix}$$
with R the rotation matrix and T the translation vector.
In step 3, the specific steps of converting the gesture-free images into the background image are: (1) first collect 3-5 gesture-free images with the camera; (2) read the collected gesture-free images, convert them into matrix form, and sum and average them with the relevant OpenCV functions; (3) take the averaged image as the background image.
Example:
step 1: jointly calibrate the projector and the camera: first calibrate the camera with the physical asymmetric circle calibration board to obtain the camera's intrinsic and extrinsic matrices, then calibrate the projector with the projected (non-physical) checkerboard pattern to obtain the projector's intrinsic and extrinsic matrices, so that the projected checkerboard target and the physical asymmetric circle target lie on the same board and are captured by the camera at the same time, placing the projector and the camera in the same coordinate system;
step 2: open the camera by calling the VideoCapture function in OpenCV, manually adjust the camera's aperture and exposure time, then set its frame rate and image size, and collect gesture-free images;
step 3: read the collected gesture-free images and convert them into matrix form, sum and average them with the relevant OpenCV functions, and take the averaged image as the background image; then preprocess the background image with the graying, geometric transformation and enhancement steps, thereby eliminating its noise and irrelevant information;
step 4: open the camera with the VideoCapture function in OpenCV to acquire the current image of the virtual scene, subtract the background image from the current image with the frame difference method and take the absolute value so as to obtain the gesture information in the current image, then perform edge contour detection on the gesture information with the standard Canny operator to obtain the gesture contour;
step 5: traverse the pixel points of the acquired gesture contour with OpenCV, compare their coordinates, and take the highest point, thereby extracting the fingertip coordinates from the gesture contour;
step 6: using the world coordinate system, and via matrix conversion and transfer, judge the position and dwell time of the fingertip coordinates against the interaction area and interaction button coordinates in the virtual scene; when the fingertip rests on an interaction button long enough, activate the interaction operation and transmit the judgment result to a computer;
step 7: parse the judgment result, and have the computer execute the corresponding operation according to the interaction type of the predefined interaction operation, thereby completing the gesture interaction.
The parts of the invention not described here are the same as in the prior art.

Claims (8)

1. A projection enhancement oriented gesture interaction method based on machine vision, characterized by comprising the following steps:
step 1: jointly calibrate the projector and the camera: first calibrate the camera with a physical asymmetric circle calibration board, then calibrate the projector with a projected (non-physical) checkerboard pattern, so that the projected checkerboard target and the physical asymmetric circle target lie on the same board and are captured by the camera at the same time, placing the projector and the camera in the same coordinate system;
step 2: turn on the camera, manually adjust its aperture and exposure time, then set its frame rate and image size, and collect gesture-free images;
step 3: convert the collected gesture-free images into a background image, eliminate noise and irrelevant information in the background image through preprocessing, and recover its useful real information;
step 4: use the camera to acquire the current image of the virtual scene, subtract the background image from the current image with the frame difference method and take the absolute value, thereby removing the background information and obtaining the gesture information in the current image; then perform edge contour detection on the gesture information with the standard Canny operator to obtain the gesture contour;
step 5: traverse the pixel points of the acquired gesture contour with OpenCV, compare their coordinates, and take the highest point, thereby extracting the fingertip coordinates from the gesture contour;
step 6: using the world coordinate system, and via matrix conversion and transfer, judge the position and dwell time of the fingertip coordinates against the interaction area and interaction button coordinates in the virtual scene; when the fingertip rests on an interaction button long enough, activate the interaction operation and transmit the judgment result to a computer;
step 7: parse the judgment result, after which the computer executes the corresponding operation according to the interaction type of the predefined interaction operation.
2. The projection enhancement oriented gesture interaction method based on machine vision according to claim 1, wherein in step 1 the calibration principle of the camera is expressed as:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$
3. The projection enhancement oriented gesture interaction method based on machine vision according to claim 1, wherein in step 1 the projector and the camera are jointly calibrated to obtain the intrinsic matrix of the projector and of the camera, the intrinsic matrix being expressed as:
$$K = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
4. The projection enhancement oriented gesture interaction method based on machine vision according to claim 1, wherein in step 1 the projector and the camera are jointly calibrated to obtain the extrinsic matrix of the projector and of the camera, the extrinsic matrix being expressed as:
$$\begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix}$$
5. The projection enhancement oriented gesture interaction method based on machine vision according to claim 1, wherein in step 3 the specific steps of converting the gesture-free images into the background image are: (1) first collect 3-5 gesture-free images with the camera; (2) read the collected gesture-free images, convert them into matrix form, and sum and average them with the relevant OpenCV functions; (3) take the averaged image as the background image.
6. The projection enhancement oriented gesture interaction method based on machine vision according to claim 5, wherein a rectangular interaction button is arranged on the background image, so that the projector can conveniently project onto it.
7. The projection enhancement oriented gesture interaction method based on machine vision according to claim 1, wherein in step 3 the preprocessing comprises graying, geometric transformation and enhancement steps.
8. The projection enhancement oriented gesture interaction method based on machine vision according to claim 1, wherein in step 2 the camera frame rate is greater than or equal to 5 frames per second.
CN202010523379.XA 2020-06-10 2020-06-10 Gesture interaction method based on machine vision and oriented to projection enhancement Active CN111860142B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010523379.XA CN111860142B (en) 2020-06-10 2020-06-10 Gesture interaction method based on machine vision and oriented to projection enhancement
PCT/CN2020/109691 WO2021248686A1 (en) 2020-06-10 2020-08-18 Projection enhancement-oriented gesture interaction method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010523379.XA CN111860142B (en) 2020-06-10 2020-06-10 Gesture interaction method based on machine vision and oriented to projection enhancement

Publications (2)

Publication Number Publication Date
CN111860142A true CN111860142A (en) 2020-10-30
CN111860142B CN111860142B (en) 2024-08-02

Family

ID=72987166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010523379.XA Active CN111860142B (en) 2020-06-10 2020-06-10 Gesture interaction method based on machine vision and oriented to projection enhancement

Country Status (2)

Country Link
CN (1) CN111860142B (en)
WO (1) WO2021248686A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393535A (en) * 2021-06-25 2021-09-14 深圳市拓普智造科技有限公司 Projection type operation guiding method and system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494169A (en) * 2022-01-18 2022-05-13 南京邮电大学 Industrial flexible object detection method based on machine vision
CN115712354B (en) * 2022-07-06 2023-05-30 成都戎盛科技有限公司 Man-machine interaction system based on vision and algorithm
CN115880296B (en) * 2023-02-28 2023-07-18 中国建筑第五工程局有限公司 Machine vision-based prefabricated part quality detection method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140661A (en) * 2007-09-04 2008-03-12 杭州镭星科技有限公司 Real time object identification method taking dynamic projection as background
CN102591533A (en) * 2012-03-01 2012-07-18 桂林电子科技大学 Multipoint touch screen system realizing method and device based on computer vision technology
CN103383731A (en) * 2013-07-08 2013-11-06 深圳先进技术研究院 Projection interactive method and system based on fingertip positioning and computing device
CN106201173A (en) * 2016-06-28 2016-12-07 广景视睿科技(深圳)有限公司 The interaction control method of a kind of user's interactive icons based on projection and system
CN106897688A (en) * 2017-02-21 2017-06-27 网易(杭州)网络有限公司 Interactive projection device, the method for control interactive projection and readable storage medium storing program for executing
US20190034714A1 (en) * 2016-02-05 2019-01-31 Delphi Technologies, Llc System and method for detecting hand gestures in a 3d space


Also Published As

Publication number Publication date
WO2021248686A1 (en) 2021-12-16
CN111860142B (en) 2024-08-02

Similar Documents

Publication Publication Date Title
CN111860142B (en) Gesture interaction method based on machine vision and oriented to projection enhancement
CN109344701B (en) Kinect-based dynamic gesture recognition method
CN101593022B (en) Method for quick-speed human-computer interaction based on finger tip tracking
Zhou et al. A novel finger and hand pose estimation technique for real-time hand gesture recognition
CN105528082B (en) Three dimensions and gesture identification tracking exchange method, device and system
Jain et al. Real-time upper-body human pose estimation using a depth camera
CN109359514B (en) DeskVR-oriented gesture tracking and recognition combined strategy method
WO2014126711A1 (en) Model-based multi-hypothesis target tracker
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN110688965A (en) IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN110930411B (en) Human body segmentation method and system based on depth camera
CN102591533A (en) Multipoint touch screen system realizing method and device based on computer vision technology
CN112667078B (en) Method, system and computer readable medium for quickly controlling mice in multi-screen scene based on sight estimation
CN106952312B (en) Non-identification augmented reality registration method based on line feature description
CN110751097B (en) Semi-supervised three-dimensional point cloud gesture key point detection method
CN108305321B (en) Three-dimensional human hand 3D skeleton model real-time reconstruction method and device based on binocular color imaging system
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
CN104167006A (en) Gesture tracking method of any hand shape
CN110764620A (en) Enhanced semi-virtual reality aircraft cabin system
Xu et al. Robust hand gesture recognition based on RGB-D Data for natural human–computer interaction
CN112365578A (en) Three-dimensional human body model reconstruction system and method based on double cameras
CN117133032A (en) Personnel identification and positioning method based on RGB-D image under face shielding condition
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
CN113674395B (en) 3D hand lightweight real-time capturing and reconstructing system based on monocular RGB camera
CN110348344A (en) A method of the special facial expression recognition based on two and three dimensions fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zeng Yalan, Xu Guangyao, Li Liang, Bai Xiaoliang, Chang Zhuang
Inventor before: Xu Guangyao, Li Liang, Bai Xiaoliang, Chang Zhuang

GR01 Patent grant