CN111752379B - Gesture detection method and system - Google Patents


Info

Publication number
CN111752379B
CN111752379B (application CN201910251062.2A)
Authority
CN
China
Prior art keywords
gesture
light spot
reflection
track
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910251062.2A
Other languages
Chinese (zh)
Other versions
CN111752379A (en)
Inventor
刘德建
汪松
郭玉湖
陈宏�
方振华
关胤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tianquan Educational Technology Ltd
Original Assignee
Fujian Tianquan Educational Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tianquan Educational Technology Ltd filed Critical Fujian Tianquan Educational Technology Ltd
Priority to CN201910251062.2A priority Critical patent/CN111752379B/en
Publication of CN111752379A publication Critical patent/CN111752379A/en
Application granted granted Critical
Publication of CN111752379B publication Critical patent/CN111752379B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

A gesture detection method and system are provided. The method comprises the following steps: detecting a reflection light spot on a reflecting surface and detecting the track of a feature point, wherein the reflection light spot is formed when light emitted by a light source arranged on a signaling device is reflected by the reflecting surface, and the feature point is located on the signaling device; when the light spot on the reflecting surface is detected, obtaining the gesture instruction corresponding to the track of the feature point; and executing the preset step corresponding to that gesture instruction. By introducing the reflection light spot, the technical scheme allows a camera to detect a user's gestures over known planar content, and a gesture is judged to be a valid instruction only while the user is writing or drawing close to the plane, which improves both the recognition efficiency and the accuracy of gesture detection.

Description

Gesture detection method and system
Technical Field
The invention relates to the field of optical analysis and design, and in particular to a method and system for detecting gestures by means of optical signals.
Background
In the existing field of video capture and interaction, advances in communication technology make it possible to share a captured picture directly, but there is no means of interacting with the receiver of the shared picture through optical information. If a signaling device could send a signal, such as infrared light, directly to the camera, the camera could pick up the device's optical signal, and the problem of analyzing the relative movement of the handheld signaling device and issuing the corresponding instructions could be solved from that signal.
Disclosure of Invention
For this reason, it is desirable to provide a solution that enables optical gesture command control.
To achieve the above object, the inventor provides a gesture detection method, comprising the steps of,
detecting the reflected light spot of the reflecting surface,
detecting the track of the characteristic points;
the reflecting light spot is formed by reflecting light rays emitted by a light source arranged on the signal device on a reflecting surface, and the characteristic point is positioned on the signal device;
when the light spot of the reflecting surface is detected, obtaining a gesture instruction corresponding to the characteristic point track according to the characteristic point track;
and executing a preset step corresponding to the gesture instruction according to the gesture instruction.
Specifically, the step of detecting the track of the feature point is real-time detection.
Specifically, the step of detecting the track of the feature point is to detect the track of the feature point only after detecting the reflection light spot of the reflection surface.
Specifically, the light emitted by the light source arranged on the signal device is infrared light.
Optionally, the feature point is a preset pattern on the body of the signaling device, for example on a pen body.
Optionally, the image in the motion track of the reflection light spot is selected according to the gesture instruction, and is uploaded to the cloud.
A gesture detection system comprises a detection unit, a signal device, an instruction judgment unit and a preset step execution unit,
the detection unit is used for detecting the reflection light spot of the reflection surface and detecting the track of the characteristic point;
the reflecting light spot is formed by reflecting light rays emitted by a light source arranged on the signal device on a reflecting surface, and the characteristic point is positioned on the signal device;
the command judging unit is used for obtaining a gesture command corresponding to the characteristic point track according to the track of the characteristic point after the light spot of the reflecting surface is detected;
the preset step execution unit is used for executing a preset step corresponding to the gesture instruction according to the gesture instruction.
Specifically, the detection unit detects the track of the feature point in real time.
Specifically, the detection unit detects the track of the feature point when the reflection light spot of the reflection surface is detected.
Optionally, the light emitted by the signaling device is infrared light.
Specifically, the characteristic point is a preset pattern on the signaling device.
Further, the gesture instruction comprises that an image in the motion track of the reflection light spot is selected and uploaded to the cloud.
Different from the prior art, the above technical scheme introduces the reflection light spot so that the camera can detect the user's gestures over known planar content, and a gesture is judged to be a valid instruction only while the user is writing or drawing close to the plane, which improves the recognition efficiency and accuracy in the field of gesture detection.
Drawings
FIG. 1 is a flowchart of a gesture detection method according to an embodiment;
FIG. 2 is a flowchart of a gesture detection method according to an embodiment;
FIG. 3 is a block diagram of a gesture detection system according to an embodiment;
FIG. 4 is a flowchart of a gesture command execution method according to an embodiment.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1, a gesture detection method according to the present invention includes the following steps,
S100, detecting a reflection light spot of the reflecting surface;
S102, detecting the track of the feature point;
wherein the reflection light spot is formed when light emitted by a light source arranged on the signaling device is reflected by the reflecting surface, and the feature point is located on the signaling device;
S104, after the light spot of the reflecting surface is detected, obtaining the gesture instruction corresponding to the track of the feature point according to that track;
and S106, executing the preset step corresponding to the gesture instruction.
In this embodiment, the gesture detection method relies on a video acquisition unit that captures video of a diffusely reflecting plane. The signaling device can be held in the hand and moved in gestures over such a plane (for example a sheet of paper or a table top). The light source arranged on the signaling device satisfies the following condition: its intensity is set to a threshold such that a light spot strong enough to be captured is produced on the diffuse reflection plane if and only if the signaling device is close to it, for example within 1-3 cm; the exact light intensity can be determined experimentally. The feature point may be an optical marker on the surface of the signaling device, such as a red or blue dot, a two-dimensional code, a specific trademark or number, or one or more patterns. As long as the feature point is within the field of view of the video acquisition unit, its image is captured, so step S102 can analyze the acquired video and detect the trajectory of the feature point. In this example, the trajectory of the feature point is detected in real time. The advantage is that monitoring the feature point at all times narrows down the region where the light spot can appear: for example, in the embodiment where the feature point is arranged at one end of the signaling device, the infrared spot on the plane can only appear within about 1-2 cm of the feature point, which substantially reduces the difficulty of video image analysis.
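As an illustration of the spot search constrained by the feature point described above, the following sketch looks for a bright spot only inside a small window around the detected feature point (a minimal NumPy illustration; the window size in pixels and the intensity threshold are assumptions for illustration, not values given in the patent, and would be calibrated experimentally):

```python
import numpy as np

def find_spot_near_feature(frame, feature_xy, window_px=20, intensity_threshold=200):
    """Search for a bright reflection spot only inside a small window
    around the detected feature point (hypothetical parameters)."""
    x, y = feature_xy
    h, w = frame.shape
    x0, x1 = max(0, x - window_px), min(w, x + window_px)
    y0, y1 = max(0, y - window_px), min(h, y + window_px)
    roi = frame[y0:y1, x0:x1]
    ys, xs = np.nonzero(roi >= intensity_threshold)
    if len(xs) == 0:
        return None  # no spot: the device is not pressed close to the plane
    # centroid of the bright pixels, mapped back to frame coordinates
    return (x0 + xs.mean(), y0 + ys.mean())
```

Returning None when no pixel exceeds the threshold corresponds to the case where the device is not close enough to the plane to produce a capturable spot.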
After the light spot is detected, that is, once the user has placed the signaling device close to the diffuse reflection plane, such as paper, to make a gesture, the corresponding gesture is determined from the movement track of the feature point starting from that moment. Tracking of the feature point can continue until the moment the light spot disappears. The advantage of judging from the feature point's track is that the feature point gives a more stable signal than the light spot, which may be lost momentarily as the user's hand moves. In this way, a gesture control instruction can be obtained from the video signal of any plane on which the user operates the signaling device. In the traditional approach of detecting feature points in a video signal, the feature points lie on a two-dimensional plane relative to the camera, and the prior art cannot tell when the user 'intends' to issue a gesture command. With the present scheme, an operation command is extracted for analysis only when the handheld signaling device is close enough to the plane that the light spot is detected, achieving the technical effect of screening out and analyzing only valid gesture commands. This improves the success rate of analysis in video-based gesture control.
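The gating described above, where feature-point positions count toward a gesture only while the spot is visible, can be sketched as a simple segmentation over per-frame observations (a minimal illustration; the boolean visibility stream and the point list are assumed to come from upstream spot and feature-point detection):

```python
def extract_gesture_segments(spot_visible, feature_points):
    """Keep feature-point positions only for frames in which the
    reflection spot is visible; each contiguous visible run becomes one
    candidate gesture track, ending when the spot disappears."""
    segments, current = [], []
    for visible, point in zip(spot_visible, feature_points):
        if visible:
            current.append(point)
        elif current:
            segments.append(current)  # spot disappeared: close the track
            current = []
    if current:
        segments.append(current)
    return segments
```

Each returned segment is one candidate gesture, ready for trajectory analysis.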
In other specific embodiments, the detection of the feature point track in step S102 is performed only after the reflection spot on the reflecting surface has been detected. This approach effectively reduces the computational cost and algorithmic complexity of continuously detecting the feature point in real time, at the cost of some delay in response. It likewise achieves the technical effect of screening out and analyzing valid gesture instructions.
The light emitted by the signaling device may have a preset specific wavelength, such as green, blue-violet, yellow or ultraviolet light. In some preferred embodiments, the light emitted by the light source arranged on the signaling device is infrared. Using invisible light avoids disturbing the user's normal work on the surface, while the camera hardware can still detect the infrared spot, which improves the practicability of the method.
In some optional embodiments, consider an application scenario in which the signaling device is an infrared laser pen and the diffuse reflection surface is paper. The user points the infrared emitting unit of the signaling device at the paper and, pressing close to the surface, circles a passage of text printed on it. An infrared reflection spot that the camera can sense is produced on the paper, and the movement track of the spot's center essentially surrounds the passage; after the movement track of the feature point over this period is analyzed and judged, the back end executes the instruction associated with the circling gesture. Thus, when users come across a question in a book that they do not understand, they can raise it over the network, which makes reading more engaging and improves learning efficiency. In other embodiments, the associated instruction may instead collect the image inside the circled area, for example saving the content to a favorites folder, or perform related operations such as highlighting it in a projection. This scheme makes gesture recognition in video more convenient and yields a better, faster and more accurate interaction.
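A minimal sketch of the collection step in this scenario might crop the region indicated by the circling track from the captured frame (the axis-aligned bounding-box simplification and the row-list image representation are assumptions for illustration, not the patent's method):

```python
def crop_circled_region(image, track):
    """Crop the axis-aligned bounding box of the circling track from the
    captured frame (image given as a list of pixel rows); the crop is
    what the back end would upload, collect or highlight."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    x0, x1 = max(0, min(xs)), max(xs) + 1
    y0, y1 = max(0, min(ys)), max(ys) + 1
    return [row[x0:x1] for row in image[y0:y1]]
```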
In other embodiments, a method for detecting a gesture as shown in FIG. 2 includes the steps of,
S200, detecting a reflection light spot of the reflecting surface;
S202, when the reflection light spot of the reflecting surface is detected, obtaining the gesture instruction corresponding to the movement track of the light spot according to that track, wherein the reflection light spot is formed when light emitted by a light source arranged on the signaling device is reflected by the reflecting surface;
and S204, executing the preset step corresponding to the gesture instruction.
In this embodiment, the light source arranged on the signaling device satisfies the following condition: its intensity is set to a threshold such that a light spot strong enough to be captured is produced on the diffuse reflection plane (such as a paper or table surface) if and only if the signaling device is close to the plane, for example within 1-3 cm; the exact light intensity can be determined experimentally. Detecting the light spot is therefore sufficient to detect whether the signaling device held by the user is close to the diffuse reflection plane, and simply detecting the presence and motion of the light spot achieves accurate detection of the user's gesture.
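Since spot presence stands in for proximity here, the proximity test itself can be sketched as a simple intensity check over the frame (a minimal NumPy illustration; the intensity threshold and minimum pixel count are assumed values that would be calibrated experimentally, as the text notes):

```python
import numpy as np

def device_near_plane(frame, intensity_threshold=200, min_pixels=4):
    """Treat the presence of enough bright pixels as evidence that the
    signaling device is pressed close to the diffuse plane (assumed
    thresholds, to be calibrated against the light source used)."""
    return int((frame >= intensity_threshold).sum()) >= min_pixels
```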
In a further embodiment, the method further comprises detecting a feature point located on the signaling device, detecting the reflection light spot of the reflecting surface within a preset range around the feature point, and recording the movement track of the reflection light spot once it has been detected. Detecting the feature point narrows the range in which the light spot can appear, which improves the detection accuracy of the method and reduces the difficulty of the operation.
The embodiment shown in fig. 3 illustrates a gesture detection system, which includes a detection unit 300, an instruction determination unit 302, and a preset step execution unit 304, where the detection unit is configured to detect a reflection light spot of a reflection surface and also configured to detect a track of a feature point; the reflecting light spot is formed by reflecting light rays emitted by a light source arranged on the signal device on a reflecting surface, and the characteristic point is positioned on the signal device; the command judging unit is used for obtaining a gesture command corresponding to the characteristic point track according to the track of the characteristic point after the light spot of the reflecting surface is detected; the preset step execution unit is used for executing a preset step corresponding to the gesture instruction according to the gesture instruction.
Specifically, the detection unit detects the track of the feature point in real time.
Specifically, the detection unit detects the track of the feature point when the reflection light spot of the reflection surface is detected.
Optionally, the light emitted by the signaling device is infrared light.
Specifically, the characteristic point is a preset pattern on the signaling device.
Further, the gesture instruction comprises that an image in the motion track of the reflection light spot is selected and uploaded to the cloud.
In another embodiment, a gesture detection system includes a detection unit 300, an instruction determination unit 302, and a preset step execution unit 304, where the detection unit is configured to detect a reflection light spot of a reflection surface, the instruction determination unit is configured to obtain a gesture instruction corresponding to a movement track of the light spot according to the movement track of the reflection light spot after detecting the reflection light spot of the reflection surface, the reflection light spot is formed by reflecting light emitted by a light source disposed on a signal device on the reflection surface, and the preset step execution unit is configured to execute a preset step corresponding to the gesture instruction.
In a further embodiment, as shown in fig. 4, in order to respond better to a dedicated gesture, a gesture instruction execution method is further performed, comprising the following steps: S400, pre-segmenting the acquired image to obtain a plurality of segmentation blocks;
S402, a judging step: when the track of the reflection light spot is detected to be a convex curve and the region enclosed by the track together with the line connecting its two endpoints covers more than a preset threshold proportion of some segmentation block, or when the track is detected to be a closed curve and the region it encloses covers more than the preset threshold proportion of some segmentation block, S404 selects that segmentation block. A convex curve is a curve that always lies on the same side of any of its tangent lines; if the user makes a convex-curve gesture with the signaling device and encloses a large enough region, it can cover most of a segmentation block, and that block is then judged to be circled and selected. "Large enough" is defined by the preset proportion threshold: for example, with a threshold of 60%, a block is judged selected if the convex curve and its covered region amount to 60% or more of the block's area. Through these steps, the selection gesture made by the user can be judged and the selection carried out accurately, solving the problem of selecting a specific segmentation block through a gesture instruction.
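The judging step above can be sketched as follows: close the track with the chord joining its endpoints, then estimate what fraction of a rectangular segmentation block falls inside the enclosed region by grid sampling with a point-in-polygon test (pure Python; the rectangular block model, the sampling density and the 60% threshold are illustrative assumptions):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: count horizontal-ray crossings of the polygon's
    edges; the wrap from the last vertex back to the first supplies the
    closing chord between the track's endpoints."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the ray's height
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def block_selected(track, block, threshold=0.6, samples=20):
    """Select the block (x0, y0, x1, y1) when the sampled fraction of it
    lying inside the closed track exceeds the threshold."""
    x0, y0, x1, y1 = block
    inside, total = 0, samples * samples
    for i in range(samples):
        for j in range(samples):
            px = x0 + (i + 0.5) * (x1 - x0) / samples
            py = y0 + (j + 0.5) * (y1 - y0) / samples
            if point_in_polygon((px, py), track):
                inside += 1
    return inside / total >= threshold
```

Grid sampling trades exactness for simplicity; a production system might instead rasterize the track or intersect polygons exactly.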
The method further comprises receiving user instruction selection information, which may be given through the signaling device, through other preset gestures, by voice, or via buttons provided on the signaling device or other equipment. According to the user instruction, the selected segmentation block can be copied to generate another image; the image of the selected area can be uploaded, collected into a folder saved by the user, or annotated with voice or text.
In some further embodiments, the image is pre-segmented after the camera acquires a new image. Take a captured image of a paper textbook as an example: if the image is recognized in a textbook database, and the database has been manually annotated according to the established content of the textbook, such as body text, illustrations and exercises, then the segmentation blocks are read from the database and the image is segmented according to the manually annotated blocks. In other embodiments, when the image captured by the camera is not recognized as content in the database, segmentation can instead be performed with a connected-domain algorithm, which partitions the content-dense regions of the image into blocks such as text, illustrations and exercises. This design solves the problem of block recognition in gesture control.
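The fallback segmentation by connected domains can be sketched as a flood fill over a binarized image, returning one bounding box per content-dense region (pure Python on a list-of-lists binary grid; the 4-connectivity choice and the binarization step, assumed done upstream, are illustrative assumptions):

```python
def connected_blocks(binary):
    """Label 4-connected regions of 'ink' pixels and return the bounding
    box (minx, miny, maxx, maxy) of each region; such boxes stand in for
    text or illustration blocks when no database annotation exists."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                stack = [(sx, sy)]
                seen[sy][sx] = True
                minx = maxx = sx
                miny = maxy = sy
                while stack:  # iterative flood fill over the region
                    x, y = stack.pop()
                    minx, maxx = min(minx, x), max(maxx, x)
                    miny, maxy = min(miny, y), max(maxy, y)
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                boxes.append((minx, miny, maxx, maxy))
    return boxes
```

In practice a morphological dilation before labeling would merge nearby glyphs into paragraph-sized blocks.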
When executed, the computer program performs the following steps: pre-segmenting the acquired image to obtain a plurality of segmentation blocks;
when the track of the reflection light spot is detected to be a convex curve and the region enclosed by the track together with the line connecting its two endpoints covers more than a preset threshold proportion of some segmentation block, or when the track is detected to be a closed curve and the region it encloses covers more than the preset threshold proportion of that block,
selecting the segmentation block.
Specifically, when executed, the computer program further performs the steps of receiving a user instruction and copying, uploading, collecting or annotating the selected segmentation block according to that instruction.
Further, the segmentation blocks are obtained from a database or by a connected-domain algorithm.
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Therefore, based on the innovative concepts of the present invention, the technical solutions of the present invention can be directly or indirectly applied to other related technical fields by making changes and modifications to the embodiments described herein, or by using equivalent structures or equivalent processes performed in the content of the present specification and the attached drawings, which are included in the scope of the present invention.

Claims (12)

1. A gesture detection method is characterized by comprising the following steps,
acquiring video signals on a reflecting surface, wherein the reflecting surface is a diffuse reflection plane, detecting reflection light spots of the reflecting surface,
detecting the track of the characteristic points;
the reflecting light spot is formed by reflecting light rays emitted by a light source arranged on the signal device on a reflecting surface, and the characteristic point is positioned on the signal device; the intensity of the light source is set at a preset threshold value, and if and only if the distance between the signal device and the diffuse reflection plane is in a close position, a light spot which is enough to be captured is generated,
when the light spot of the reflecting surface is detected, obtaining a gesture instruction corresponding to the characteristic point track according to the characteristic point track;
and executing a preset step corresponding to the gesture instruction according to the gesture instruction.
2. The gesture detection method according to claim 1, wherein the step of detecting the trajectory of the feature point is real-time detection.
3. The gesture detection method according to claim 1, wherein the step of detecting the trajectory of the feature point is to perform trajectory detection of the feature point only after detecting the reflection light spot of the reflection surface.
4. The gesture detection method according to claim 1, wherein the light emitted by the light source disposed on the signal device is infrared light.
5. The gesture detection method according to claim 1, wherein the feature point is a preset pattern on a signaling device.
6. The gesture detection method according to claim 1, wherein the gesture instruction comprises selecting an image in the motion trail of the reflection light spot and uploading the image to a cloud.
7. A gesture detection system is characterized by comprising a video detection unit, a detection unit, an instruction judgment unit and a preset step execution unit,
the video detection unit is used for acquiring a video signal on a reflecting surface, the reflecting surface is a diffuse reflection plane,
the detection unit is used for detecting the reflection light spot of the reflection surface and detecting the track of the characteristic point;
the reflecting light spot is formed by reflecting light rays emitted by a light source arranged on the signal device on a reflecting surface, and the characteristic point is positioned on the signal device; the intensity of the light source is set at a preset threshold value, and if and only if the distance between the signal device and the diffuse reflection plane is in a close position, a light spot which is enough to be captured is generated,
the command judging unit is used for obtaining a gesture command corresponding to the characteristic point track according to the track of the characteristic point after the light spot of the reflecting surface is detected;
the preset step execution unit is used for executing a preset step corresponding to the gesture instruction according to the gesture instruction.
8. The gesture detection system according to claim 7, wherein the detection unit detects the trajectory of the feature point in real time.
9. The gesture detection system according to claim 7, wherein the detection unit detects the trajectory of the feature point when the reflection light spot of the reflection surface is detected.
10. The gesture detection system of claim 7, wherein the light emitted by the signaling device is infrared light.
11. The gesture detection system of claim 7, wherein the feature point is a preset pattern on a signaling device.
12. The gesture detection system of claim 7, wherein the gesture instruction comprises selecting an image in the motion trajectory of the reflected light spot and uploading the image to a cloud.
CN201910251062.2A 2019-03-29 2019-03-29 Gesture detection method and system Active CN111752379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910251062.2A CN111752379B (en) 2019-03-29 2019-03-29 Gesture detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910251062.2A CN111752379B (en) 2019-03-29 2019-03-29 Gesture detection method and system

Publications (2)

Publication Number Publication Date
CN111752379A CN111752379A (en) 2020-10-09
CN111752379B true CN111752379B (en) 2022-04-15

Family

ID=72671740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910251062.2A Active CN111752379B (en) 2019-03-29 2019-03-29 Gesture detection method and system

Country Status (1)

Country Link
CN (1) CN111752379B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019466A (en) * 2012-11-16 2013-04-03 厦门大学 Projection interactive system based on infrared detection
CN103777746A (en) * 2012-10-23 2014-05-07 腾讯科技(深圳)有限公司 Human-machine interactive method, terminal and system
CN108363522A (en) * 2018-04-24 2018-08-03 石家庄科达文教用品有限公司 Synchronous writing system and its method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693724B2 (en) * 2009-05-29 2014-04-08 Microsoft Corporation Method and system implementing user-centric gesture control

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777746A (en) * 2012-10-23 2014-05-07 腾讯科技(深圳)有限公司 Human-machine interactive method, terminal and system
CN103019466A (en) * 2012-11-16 2013-04-03 厦门大学 Projection interactive system based on infrared detection
CN108363522A (en) * 2018-04-24 2018-08-03 石家庄科达文教用品有限公司 Synchronous writing system and its method

Also Published As

Publication number Publication date
CN111752379A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN107422949B (en) Projection touch image selection method
US8837780B2 (en) Gesture based human interfaces
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
CN110434853B (en) Robot control method, device and storage medium
CA3058821C (en) Touchless input
Maro et al. Event-based gesture recognition with dynamic background suppression using smartphone computational capabilities
US20110115892A1 (en) Real-time embedded visible spectrum light vision-based human finger detection and tracking method
EP1441514A2 (en) Interactive image projector
JP4627052B2 (en) Audio output method and apparatus linked to image
CN104571482A (en) Digital device control method based on somatosensory recognition
US10078374B2 (en) Method and system enabling control of different digital devices using gesture or motion control
KR102440198B1 (en) VIDEO SEARCH METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
US20200326783A1 (en) Head mounted display device and operating method thereof
CN112445326B (en) Projection interaction method based on TOF camera, system thereof and electronic equipment
KR20150083602A (en) Digital device, server, and method for processing/managing writing data
CN111460858B (en) Method and device for determining finger tip point in image, storage medium and electronic equipment
CN111752379B (en) Gesture detection method and system
CN111752377B (en) Gesture detection method and system
US20210011621A1 (en) Virtual Keyboard Engagement
CN111752378A (en) Gesture instruction execution method and storage medium
CN107567609A (en) For running the method for input equipment, input equipment, motor vehicle
KR20110009614A (en) Apparatus for predicting intention of user using multi modal information and method thereof
JP2019164232A (en) Sharing terminal, method and program, and sharing system and method
TW201832052A (en) Gesture recognition device and man-machine interaction system
Dadiz et al. Go-Mo (Go-Motion): An android mobile application detecting motion gestures for generating basic mobile phone commands utilizing KLT algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant