CN111596766B - Gesture recognition method of head-mounted device and storage medium - Google Patents


Info

Publication number
CN111596766B
Authority
CN
China
Prior art keywords
emitting device
rgb values
light
rgb
light emitting
Prior art date
Legal status
Active
Application number
CN202010443135.0A
Other languages
Chinese (zh)
Other versions
CN111596766A (en)
Inventor
刘德建
陈丛亮
郭玉湖
陈宏�
Current Assignee
Fujian TQ Digital Co Ltd
Original Assignee
Fujian TQ Digital Co Ltd
Priority date
Filing date
Publication date
Application filed by Fujian TQ Digital Co Ltd filed Critical Fujian TQ Digital Co Ltd
Priority to CN202010443135.0A priority Critical patent/CN111596766B/en
Publication of CN111596766A publication Critical patent/CN111596766A/en
Application granted granted Critical
Publication of CN111596766B publication Critical patent/CN111596766B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention provides a gesture recognition method and a storage medium for a head-mounted device, wherein the method comprises the following steps: within a preset first duration, obtaining an unused RGB value from the RGB values of each pixel of each frame of image, and sending the unused RGB value to a light-emitting device; within a preset second duration, recognizing the coordinate position, in the display, of the light output by the light-emitting device according to the received RGB value; and timing the first duration and the second duration alternately and seamlessly. On the basis of guaranteed recognition accuracy, the invention markedly improves recognition efficiency and recognition speed, with a particularly notable effect when applied to head-mounted devices configured with a single camera. Further, the cooperating device (the light-emitting device) is simple in structure, light and portable, so the scheme is also practical and easy to implement.

Description

Gesture recognition method of head-mounted device and storage medium
Technical Field
The invention relates to the field of gesture recognition, in particular to a gesture recognition method and a storage medium of head-mounted equipment.
Background
A prior-art head-mounted device is worn on the head, so it is difficult to control it through touch, as one would a mobile-phone screen. Some headsets already support gesture control; however, because a headset is generally configured with only a single camera, single-camera gesture recognition typically suffers from complex computation, a low recognition rate and poor operating sensitivity.
Accordingly, there is a need for a gesture recognition method and a storage medium for a head-mounted device that can overcome all of the above problems at once.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a gesture recognition method and a storage medium for a head-mounted device that improve both the accuracy and the efficiency of single-camera gesture recognition.
In order to solve the technical problems, the invention adopts the following technical scheme:
within a preset first duration, obtaining an unused RGB value from the RGB values of each pixel of each frame of image, and sending the unused RGB value to a light-emitting device;
within a preset second duration, recognizing the coordinate position, in the display, of the light output by the light-emitting device according to the received RGB value;
and timing the first duration and the second duration alternately and seamlessly.
The other technical scheme provided by the invention is as follows:
a computer readable storage medium having stored thereon a computer program which, when executed by a processor, is capable of carrying out the steps comprised in the gesture recognition method of a head-mounted device as described above.
The invention has the following beneficial effects: by acquiring an unused RGB value, the invention controls the light-emitting device to output light of that RGB value, and then obtains the movement track of the control gesture by identifying the track of that light on the display. The existing approach, in which the operation track of the user's real hand can only be recognized through complex computation over all image pixels, is thus replaced by a recognition mode in which the user's gesture is obtained quickly through a simple analysis of pixels with a specific RGB value. The invention therefore greatly reduces the computational complexity of recognition, improves recognition efficiency, and guarantees recognition accuracy at the same time; in particular, for head-mounted devices with a single camera, gesture recognition efficiency and accuracy are markedly improved.
Drawings
FIG. 1 is a flow chart of a gesture recognition method of a headset according to an embodiment of the present invention;
fig. 2 is a flowchart of a gesture recognition method of a headset according to an embodiment of the invention.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
The key concept of the invention is that only a simple analysis of the pixels with a specific RGB value is needed to obtain the user's gesture quickly.
Referring to fig. 1, the present invention provides a gesture recognition method of a headset device, including:
within a preset first duration, obtaining an unused RGB value from the RGB values of each pixel of each frame of image, and sending the unused RGB value to a light-emitting device;
within a preset second duration, recognizing the coordinate position, in the display, of the light output by the light-emitting device according to the received RGB value;
and timing the first duration and the second duration alternately and seamlessly.
Further, the method further comprises the following steps:
receiving a click signal sent by the light-emitting device, the click signal corresponding to the coordinate position, in the display, of the light currently output by the light-emitting device.
From the above description, it is also possible to simulate the mouse click function in cooperation with gestures.
Further, obtaining the unused RGB values according to the RGB values of each pixel of each frame of image specifically includes:
presetting more than two groups respectively corresponding to different RGB value ranges;
acquiring RGB values of each pixel of each frame of image;
dividing each pixel into a corresponding group according to the RGB value;
calculating the pixel points of each group, and obtaining the group with the minimum pixel points;
and determining the RGB value in the RGB value range corresponding to the group with the minimum pixel point number as an unused RGB value.
As can be seen from the above description, dividing the RGB value ranges of the groups by color helps concentrate the unused pixel colors of the images captured in the first duration into one or a few groups, instead of dispersing them across many groups, which improves the accuracy and efficiency of the subsequent analysis and computation.
Further, if the unused RGB values correspond to more than two groups, sending them to the light-emitting device specifically includes:
respectively calculating RGB difference values of more than two groups corresponding to the unused RGB values and other groups;
acquiring an RGB value range corresponding to the group with the largest difference value with other groups;
and sending the RGB value range to a light emitting device.
As can be seen from the above description, if the unused RGB values are dispersed over more than two groups, the group differing most from all the other groups is further selected and its RGB value range is used as the standard for the light output by the light-emitting device; this further improves how distinguishable that light is on the display screen of the head-mounted device, and thus improves recognition accuracy again.
Further, the RGB values of the light output by the light emitting device are randomly selected from the received RGB value range.
From the above description, the light-emitting device may choose freely within the given range, which improves compatibility with the light-emitting device while ensuring that it outputs light with the required RGB value.
Further, the different RGB value ranges are RGB value ranges corresponding to respective colors.
From the above description, it can be seen that the grouping is directly performed according to the color values corresponding to the respective colors, so as to improve the usability and intuitiveness of the pixel grouping result.
Further, recognizing the coordinate position, in the display, of the light output by the light-emitting device according to the received RGB values specifically includes:
controlling a light emitting device to emit light corresponding to the received RGB values;
searching pixel points in the current frame image, which correspond to RGB values sent to the light emitting equipment, and acquiring coordinate positions of the pixel points;
and acquiring, from the coordinate positions in each frame of image within the second duration, the movement track of the light on the display during the second duration.
As can be seen from the above description, by locating the specific RGB values in the images and combining the results in time order, the control gesture the user makes with the light-emitting device can be obtained.
Further, the head-mounted device and the light-emitting device communicate via a Bluetooth communication link.
From the above description, the light emitting device and the head-mounted device are connected wirelessly, which is more convenient for the user to operate.
Further, the first time period is equal to the second time period.
As can be seen from the above description, the same frequency is used for the analysis processing by the head-mounted device and the light-emitting device, so that the accuracy of the calculation result of the head-mounted device and the accuracy of the output of the light-emitting device can be ensured at the same time.
The other technical scheme provided by the invention is as follows:
a computer readable storage medium having stored thereon a computer program which, when executed by a processor, is capable of implementing the steps comprised in a method of gesture recognition of a headset device comprising:
within a preset first duration, obtaining an unused RGB value from the RGB values of each pixel of each frame of image, and sending the unused RGB value to a light-emitting device;
within a preset second duration, recognizing the coordinate position, in the display, of the light output by the light-emitting device according to the received RGB value;
and timing the first duration and the second duration alternately and seamlessly.
Further, the method further comprises the following steps:
receiving a click signal sent by the light-emitting device, the click signal corresponding to the coordinate position, in the display, of the light currently output by the light-emitting device.
Further, obtaining the unused RGB values according to the RGB values of each pixel of each frame of image specifically includes:
presetting more than two groups respectively corresponding to different RGB value ranges;
acquiring RGB values of each pixel of each frame of image;
dividing each pixel into a corresponding group according to the RGB value;
calculating the pixel points of each group, and obtaining the group with the minimum pixel points;
and determining the RGB value in the RGB value range corresponding to the group with the minimum pixel point number as an unused RGB value.
Further, if the unused RGB values correspond to more than two groups, sending them to the light-emitting device specifically includes:
respectively calculating RGB difference values of more than two groups corresponding to the unused RGB values and other groups;
acquiring an RGB value range corresponding to the group with the largest difference value with other groups;
and sending the RGB value range to a light emitting device.
Further, the RGB values of the light output by the light emitting device are randomly selected from the received RGB value range.
Further, the different RGB value ranges are RGB value ranges corresponding to respective colors.
Further, recognizing the coordinate position, in the display, of the light output by the light-emitting device according to the received RGB values specifically includes:
controlling a light emitting device to emit light corresponding to the received RGB values;
searching pixel points in the current frame image, which correspond to RGB values sent to the light emitting equipment, and acquiring coordinate positions of the pixel points;
and acquiring, from the coordinate positions in each frame of image within the second duration, the movement track of the light on the display during the second duration.
Further, the head-mounted device and the light-emitting device communicate via a Bluetooth communication link.
Further, the first time period is equal to the second time period.
From the foregoing description, those skilled in the art will appreciate that all or part of the above embodiments may be implemented in hardware, or by a computer program; the program may be stored on a computer-readable storage medium and, when executed, performs the steps of the methods described above. After the program is executed by a processor, the beneficial effects corresponding to those methods are achieved.
The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Example 1
Referring to fig. 2, the present embodiment provides a gesture recognition method for a headset device, which can significantly improve gesture recognition efficiency and simultaneously ensure gesture recognition accuracy. The light emitting device may be any device capable of emitting light corresponding to a specified RGB value, such as a ring or a bracelet provided with LEDs.
The method comprises the following steps:
s1: presetting more than two groups respectively corresponding to different RGB value ranges;
that is, one group corresponds to one RGB value range. Preferably, a group corresponds to a range of color values. For example, the three-dimensional image can be divided into 9 groups of red, orange, yellow, green, blue, violet, and black, wherein the "red" group corresponds to a cuboid space of which the RGB values range from (200, 50, 50) to (255, 0), namely a cuboid of which the length and width are 50 and the height is 55.
Of course, the grouping may also be finer, such as grouping each small RGB value interval.
Red: (200, 50, 50) to (255, 0, 0);
Orange: (200, 100, 50) to (255, 50, 0);
Yellow: (200, 150, 50) to (255, 100, 0);
Green: (0, 255, 0) to (50, 200, 50);
Cyan: (0, 255, 50) to (50, 200, 100);
Blue: (0, 0, 255) to (50, 50, 200);
Purple: (50, 0, 255) to (100, 50, 200);
Preferably, the red-orange-yellow-green-cyan-blue-purple partition can also be defined as ranges in the LAB color space and converted back to RGB.
S2: presetting a first time length and a second time length;
Preferably, the first duration and the second duration are equal, e.g. 100 ms.
S3: and in the first time period, the head-mounted device acquires unused RGB values according to the RGB values of each pixel of each frame of image and sends the unused RGB values to the light-emitting device.
Specifically, the method comprises the following steps:
s31: RGB values for each pixel of each frame of image are acquired. The RGB value of each pixel in each frame of image shot by the camera in the first time period is obtained.
S32: each pixel is divided into a corresponding group according to the RGB values. That is, each pixel acquired in step S31 is grouped according to its RGB value, and is divided into groups corresponding to the RGB value ranges.
S33: count the pixels of each group, and obtain the group with the fewest pixels. That is, the number of pixels contained in each group is counted and the group with the smallest count is acquired; if only one such group is obtained, it can be regarded as the group differing most from the others.
In another specific example, more than two groups share the smallest pixel count in step S33; the distinguishability of the light output by the light-emitting device can then be improved by further selecting the one group differing most from all the other groups.
For example, if the pixel counts of the three groups "red", "orange" and "yellow" are all 0 or close to 0, the RGB value ranges corresponding to these three groups are unused RGB values.
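The grouping and counting of steps S31 to S33 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the group names, the (low corner, high corner) form of the ranges (only a subset of groups is shown), and the representation of a frame as a flat list of (R, G, B) tuples are all assumptions.

```python
# Illustrative sketch of steps S31-S33: group pixels by preset RGB
# cuboids, count the pixels per group, and return the least-used group(s).
GROUPS = {
    "red":    ((200, 0, 0),   (255, 50, 50)),
    "orange": ((200, 50, 0),  (255, 100, 50)),
    "green":  ((0, 200, 0),   (50, 255, 50)),
    "blue":   ((0, 0, 200),   (50, 50, 255)),
}

def group_of(pixel):
    """Return the name of the group whose cuboid contains the pixel (S32)."""
    for name, (lo, hi) in GROUPS.items():
        if all(l <= c <= h for l, c, h in zip(lo, pixel, hi)):
            return name
    return None  # the pixel falls outside every preset range

def least_used_groups(frames):
    """Count the pixels of each group over all frames captured in the
    first duration and return the group(s) with the fewest pixels (S33)."""
    counts = {name: 0 for name in GROUPS}
    for frame in frames:
        for pixel in frame:
            name = group_of(pixel)
            if name is not None:
                counts[name] += 1
    fewest = min(counts.values())
    return [name for name, n in counts.items() if n == fewest]
```

With these illustrative ranges, a batch of frames containing no pixels in the "orange" cuboid would yield ["orange"] as the unused group.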
Corresponding to the other specific example described above, the group with the largest difference can be determined therefrom by:
s34: and (3) respectively calculating the RGB difference values of more than two groups corresponding to the unused RGB values acquired in the step (S33) and all other groups to determine.
Take the 9 groups red, orange, yellow, green, cyan, blue, purple, black and white as an example, and suppose step S33 determines that the three groups "red", "orange" and "yellow" have the fewest (and equally many) pixels, so that the unused RGB values correspond to these three groups. To further improve the distinguishability of the light emitted by the light-emitting device, the one of the three groups "red", "orange" and "yellow" that differs most from the other groups can be further computed. The computation may proceed as follows: compare the RGB values of the three groups "red", "orange" and "yellow" with those of the other groups that do contain pixels (the four groups green, cyan, blue and purple), and find the group with the largest difference over the three components (R, G, B). The difference between group 1 and group 2 is d12 = (R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2; the difference dij is computed between each of the 3 groups red, orange and yellow and each of the 4 groups green, cyan, blue and purple, where red, orange, yellow, green, cyan, blue and purple are numbered 1 to 7 respectively. The maximum difference is then max(min(d14, d15, d16, d17), min(d24, d25, d26, d27), min(d34, d35, d36, d37)).
Assume the maximum is attained at d14, i.e. by the "red" group.
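The max-min selection just described can be sketched as follows, assuming one representative RGB value per group (e.g. the centre of its cuboid); the function and variable names and the representative values are illustrative, not taken from the patent.

```python
# Illustrative sketch of step S34: pick the unused group whose nearest
# used group is farthest away (argmax over unused of min over used of d_ij).
def sq_dist(c1, c2):
    """d_ij = (Ri - Rj)^2 + (Gi - Gj)^2 + (Bi - Bj)^2."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def most_distinct(unused, used):
    """Return the name of the unused group maximising the minimum
    difference to the used groups."""
    return max(unused, key=lambda name: min(sq_dist(unused[name], u)
                                            for u in used.values()))

# Hypothetical representative centres for the example's groups.
unused = {"red": (227, 25, 25), "orange": (227, 75, 25), "yellow": (227, 125, 25)}
used = {"green": (25, 227, 25), "cyan": (25, 227, 75),
        "blue": (25, 25, 227), "purple": (75, 25, 227)}
```

Here `most_distinct(unused, used)` plays the role of max(min(d14, d15, d16, d17), min(d24, d25, d26, d27), min(d34, d35, d36, d37)) in the description.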
S35: if only one group corresponds to the unused RGB values, the RGB values in the RGB value range corresponding to the group may be directly transmitted to the light emitting device.
If more than two groups share the fewest pixels, either the corresponding RGB value ranges can all be sent directly to the light-emitting device, which selects one RGB value from them for its light output, or the single group with the highest distinguishability can first be screened out of those groups and only its RGB value range sent to the light-emitting device.
That is, regardless of how many groups the unused RGB values correspond to, it is preferable to send the RGB value range with the largest difference to the light-emitting device; of course, all unused RGB values may also be sent.
It should be noted that the head-mounted device transmits the RGB value range to the light-emitting device over their Bluetooth connection.
S4: during the second duration, the head-mounted device recognizes the coordinate position, in the display, of the light output by the light-emitting device according to the RGB value range the latter received.
The method specifically comprises the following steps:
s41: and in the second time period, controlling the light emitting device to emit corresponding light according to the received RGB value range.
Preferably, if the unused RGB values correspond to more than two color values (i.e. to the RGB value ranges of two or more groups), the light-emitting device may randomly select an RGB value from them for output.
In a specific example, the maximum-difference group is the "red" group, i.e. only one group of corresponding RGB values is obtained. If the RGB value range of the "red" group is the cuboid from (200, 50, 50) to (255, 0, 0), i.e. a cuboid whose length and width are 50 and whose height is 55, the light-emitting device may randomly output any color within (200, 50, 50) to (255, 0, 0).
S42: according to the images captured by the camera (which keeps shooting in real time during the second duration), the head-mounted device searches the current frame for the pixels matching the RGB values sent to the light-emitting device and acquires their coordinate positions.
S43: from the coordinate positions determined in the previous step for each frame within the second duration, acquire the movement track of the light on the display during the second duration.
That is, in the images captured in real time, the head-mounted device only needs to identify the pixels matching the RGB values sent to the light-emitting device and locate their coordinate positions in the frame; connecting these coordinate positions in time order then yields the track of the output light on the screen of the head-mounted device during the second duration, i.e. the gesture the user made with the light-emitting device.
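The per-frame localization of S42 and the track assembly of S43 can be sketched as follows. This is an illustrative Python sketch: a frame is assumed to be a row-major grid of (R, G, B) tuples, and the light's position in a frame is taken as the centroid of the pixels falling inside the sent RGB range [lo, hi]; none of these names come from the patent.

```python
# Illustrative sketch of S42/S43: find the pixels matching the RGB range
# sent to the light-emitting device, take their centroid per frame, and
# connect the centroids in time order to form the gesture track.
def centroid_of_target(frame, lo, hi):
    """Locate the matching pixels in one frame and return their centre (S42)."""
    pts = [(x, y)
           for y, row in enumerate(frame)
           for x, px in enumerate(row)
           if all(l <= c <= h for l, c, h in zip(lo, px, hi))]
    if not pts:
        return None  # the light is not visible in this frame
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def track(frames, lo, hi):
    """Connect the per-frame positions in time order (S43)."""
    centres = (centroid_of_target(f, lo, hi) for f in frames)
    return [c for c in centres if c is not None]
```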
S5: and starting timing in turn of the first time length and the second time length in a seamless mode.
Correspondingly, S3 and S4 are executed alternately, with the first duration timed again after the second ends. In particular, during the first duration the light-emitting device stops outputting any light, and outputs the corresponding light only upon receiving the RGB value range sent by the head-mounted device.
That is, during the first duration the head-mounted device computes and sends the result to the light-emitting device; during the second duration the light-emitting device outputs the corresponding light; then the first duration starts timing again, the light-emitting device stops outputting light, the head-mounted device computes and sends again, and the cycle repeats.
The whole process is a process that a user makes a gesture by using the light-emitting device to simulate a human hand, a mouse or other control devices, and the head-mounted device acquires the user control gesture by identifying the position of light emitted by the light-emitting device corresponding to a screen.
In a specific example, building on the above, various specific gestures and their corresponding control methods can be preset on the head-mounted device; once a gesture is recognized, the control method corresponding to it is executed directly. For example, a preset "swipe left" gesture corresponds to "back to the previous page"; a preset "check mark" gesture corresponds to "close the current interface"; and so on.
In particular, more complex gesture manipulations, such as simulating mouse clicks, can also be provided.
The specific implementation is as follows:
s6: and receiving a click signal sent by the light-emitting device, wherein the click signal corresponds to the coordinate position of the light currently output by the light-emitting device in the display.
A click or touch button is preset on the light-emitting device; while performing a gesture, the user can press the button to send a click signal to the head-mounted device (via Bluetooth, infrared, or the like); upon receiving the click signal, the head-mounted device determines in real time the position on the screen corresponding to the current light, and finally triggers the function corresponding to that position. This can be understood as the click function of a mouse.
For example, suppose the camera's shooting range maps to the coordinate range (0, 0) to (1920, 1080) and the head-mounted display is correspondingly 1920 x 1080 pixels. If the center of the identified pixels of the light output by the light-emitting device is (100, 200), the display's mouse pointer hovers at coordinates (100, 200) of the head-mounted device's screen; if a click signal sent by the light-emitting device is received at that moment, a mouse click is performed at those coordinates.
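The correspondence between the camera's coordinate range and the display in this example can be sketched as a simple scaling. The helper below is illustrative (the patent only states the equal-resolution case, in which a point maps unchanged):

```python
def camera_to_display(point, cam=(1920, 1080), display=(1920, 1080)):
    """Scale a camera-frame coordinate into display coordinates.
    With equal resolutions, as in the example, the point is unchanged."""
    x, y = point
    return (x * display[0] / cam[0], y * display[1] / cam[1])
```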
By implementing this embodiment, a user wearing the light-emitting device can control the head-mounted device with gestures; the gesture recognition speed of the head-mounted device, particularly one configured with only a single camera, is markedly improved while recognition accuracy is guaranteed.
Example two
According to a first embodiment of the present invention, a specific application scenario is provided:
1. Acquire the RGB values of the pixels of each frame of the camera image. First, groups are preset manually, each with an RGB range; for example, 9 groups of red, orange, yellow, green, cyan, blue, purple, black and white (the groups may also be finer, e.g. one group per small RGB value interval). Then, for the images acquired by the camera, the number of pixels in each group is counted. If, for example, the counts of the red, orange and yellow groups are 0, or close to 0, these three groups are the unused RGB value groups.
2. More than two LED rings worn on different fingers output the most distinct color, switching on and off at a fixed frequency, e.g. every 100 milliseconds. From the red, orange and yellow groups above, the maximum-difference group is found to be the red group; for example, with the red RGB range (200, 0, 0) to (255, 0, 0), each LED ring randomly outputs a color within (200, 0, 0) to (255, 0, 0).
3. The camera works at the same frequency: during the LED-off period it computes that the maximum-difference group is red, with RGB range (200, 0, 0) to (255, 0, 0); the LEDs then output a color of this maximum-difference range, and their positions are acquired while they are on. In other words, after the LEDs turn off, the color they should display is computed; after they turn on, the camera searches for the pixels within the LED color range. For example, with a camera coordinate range of (0, 0) to (1920, 1080), many pixels of the LED color are identified; knowing that there are currently 3 LEDs, 3 coordinates are initialized randomly and the pixels are divided into 3 classes by the k-means algorithm, giving the 3 class centers (100, 200), (150, 200) and (200, 200).
4. Assuming the head-mounted display is also 1920 x 1080 pixels, the display touch points hover at the coordinates (100, 200), (150, 200) and (200, 200).
5. Pressing the button of the LED device corresponds to a mouse click.
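The k-means step of item 3 above can be sketched as follows. This is a minimal illustrative implementation: the cluster count, iteration count and seeding are assumptions, not details given by the patent.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Divide the detected LED-coloured pixels into k classes and return
    the k class centres (one per LED ring), as in item 3 above."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)  # randomly initialise k coordinates
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each pixel to its nearest centre
            nearest = min(range(k), key=lambda i: (p[0] - centres[i][0]) ** 2
                                                + (p[1] - centres[i][1]) ** 2)
            clusters[nearest].append(p)
        # move each centre to the mean of its cluster (keep it if empty)
        centres = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                   if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres
```

With the pixels of the three LEDs well separated, the three returned centres correspond to the three touch points of item 4.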
Example III
Corresponding to the first embodiment, this embodiment provides a computer-readable storage medium having a computer program stored thereon; when executed by a processor, the program implements the steps included in the gesture recognition method of a head-mounted device according to the first embodiment, which are not repeated here.
In summary, the gesture recognition method and storage medium for a head-mounted device provided by the present invention markedly improve recognition efficiency and speed while preserving recognition accuracy; the effect is especially pronounced when applied to a head-mounted device with a single camera. Further, the companion device required (the light-emitting device) is simple in structure, lightweight, and portable, so the scheme is also highly practical and easy to implement.
The foregoing description is merely illustrative of embodiments of the present invention and is not intended to limit its patent scope; any equivalent change made using the specification and drawings of the present invention, or any direct or indirect application thereof in related technical fields, likewise falls within the patent protection scope of the present invention.

Claims (8)

1. A method for gesture recognition of a headset, comprising:
acquiring unused RGB values according to the RGB values of each pixel of each frame of image within a preset first duration, and sending the unused RGB values to a light emitting device;
recognizing, in the display and within a preset second duration, the coordinate position of the light output by the light emitting device according to the received RGB values;
wherein timing of the first duration and the second duration is started alternately and seamlessly;
wherein acquiring the unused RGB values according to the RGB values of each pixel of each frame of image specifically comprises:
presetting more than two groups respectively corresponding to different RGB value ranges;
acquiring RGB values of each pixel of each frame of image;
dividing each pixel into a corresponding group according to the RGB value;
counting the number of pixels in each group, and obtaining the group with the fewest pixels;
determining the RGB values in the RGB value range corresponding to the group with the fewest pixels as the unused RGB values;
wherein recognizing, in the display, the coordinate position of the light output by the light emitting device according to the received RGB values specifically comprises:
controlling a light emitting device to emit light corresponding to the received RGB values;
searching the current frame image for pixels corresponding to the RGB values sent to the light emitting device, and acquiring the coordinate positions of those pixels;
and acquiring, according to the coordinate positions in each frame of image within the second duration, the movement track in the display corresponding to the light within the second duration.
2. The method of gesture recognition of a headset of claim 1, further comprising:
and receiving a click signal sent by the light-emitting device, wherein the click signal corresponds to the coordinate position of the light currently output by the light-emitting device in the display.
3. The method for gesture recognition of a headset according to claim 1, wherein, if the unused RGB values correspond to two or more groups, sending them to the light emitting device specifically comprises:
calculating, for each of the two or more groups corresponding to the unused RGB values, its RGB difference from the other groups;
acquiring the RGB value range corresponding to the group with the largest difference from the other groups;
and sending the RGB value range to a light emitting device.
4. A method of gesture recognition for a headset as claimed in claim 3, wherein the RGB values of the light output by the light emitting device are randomly selected from the received RGB value range.
5. The method of gesture recognition of a headset of claim 1, wherein the different RGB value ranges are RGB value ranges corresponding to respective colors.
6. The method of gesture recognition of a headset of claim 1, wherein the headset and the light emitting device are communicatively connected via a Bluetooth communication link.
7. The method of gesture recognition of a headset of claim 1, wherein the first time period is equal to the second time period.
8. A computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the gesture recognition method of a headset according to any one of claims 1-7.
CN202010443135.0A 2020-05-22 2020-05-22 Gesture recognition method of head-mounted device and storage medium Active CN111596766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010443135.0A CN111596766B (en) 2020-05-22 2020-05-22 Gesture recognition method of head-mounted device and storage medium

Publications (2)

Publication Number Publication Date
CN111596766A CN111596766A (en) 2020-08-28
CN111596766B true CN111596766B (en) 2023-04-28

Family

ID=72189183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010443135.0A Active CN111596766B (en) 2020-05-22 2020-05-22 Gesture recognition method of head-mounted device and storage medium

Country Status (1)

Country Link
CN (1) CN111596766B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007128453A (en) * 2005-11-07 2007-05-24 Canon Inc Image processing method and device for it
CN102871784A (en) * 2012-09-21 2013-01-16 中国科学院深圳先进技术研究院 Positioning controlling apparatus and method
CN104866106A (en) * 2015-06-03 2015-08-26 深圳市光晕网络科技有限公司 HUD and infrared identification-combined man-machine interactive method and system
CN105426817A (en) * 2015-10-30 2016-03-23 上海集成电路研发中心有限公司 Gesture position recognition device and recognition method based on infrared imaging
CN108769526A (en) * 2018-06-12 2018-11-06 广州视源电子科技股份有限公司 A kind of image adjusting method, device, equipment and storage medium
CN110996078A (en) * 2019-11-25 2020-04-10 深圳市创凯智能股份有限公司 Image acquisition method, terminal and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vision-based gesture recognition technology and its application research; 张圆圆 (Zhang Yuanyuan); 《计算技术与自动化》 [Computing Technology and Automation], Issue 01; full text *

Similar Documents

Publication Publication Date Title
CN110036258B (en) Information processing apparatus, information processing method, and computer program
US8320622B2 (en) Color gradient object tracking
RU2439653C2 (en) Virtual controller for display images
CA3016921A1 (en) System and method for deep learning based hand gesture recognition in first person view
US10672187B2 (en) Information processing apparatus and information processing method for displaying virtual objects in a virtual space corresponding to real objects
US9978000B2 (en) Information processing device, information processing method, light-emitting device regulating apparatus, and drive current regulating method
CN104364733A (en) Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
WO2009120299A2 (en) Computer pointing input device
KR100663515B1 (en) A portable terminal apparatus and method for inputting data for the portable terminal apparatus
WO2017203815A1 (en) Information processing device, information processing method, and recording medium
TWI694411B (en) Emotional based interaction device and method
EP3422152A1 (en) Remote operation device, remote operation method, remote operation system, and program
CN106598356B (en) Method, device and system for detecting positioning point of input signal of infrared emission source
KR20140091502A (en) Touch Pen And Selecting Mathod Of Color Thereof
Störring et al. Computer vision-based gesture recognition for an augmented reality interface
US10757337B2 (en) Information processing apparatus and information processing method to control exposure for imaging the eye
CN111596766B (en) Gesture recognition method of head-mounted device and storage medium
JP6650739B2 (en) Light emitting device adjustment apparatus and drive current adjustment method
CN111796673B (en) Multi-finger gesture recognition method of head-mounted equipment and storage medium
CN111796675B (en) Gesture recognition control method of head-mounted equipment and storage medium
CN111796674B (en) Gesture touch sensitivity adjusting method based on head-mounted equipment and storage medium
US20170168592A1 (en) System and method for optical tracking
JP2015184906A (en) Skin color detection condition determination device, skin color detection condition determination method and skin color detection condition determination computer program
CN104076990B (en) Screen localization method and device
CN111796671B (en) Gesture recognition and control method of head-mounted equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant