CN111796674A - Gesture touch sensitivity adjusting method based on head-mounted device and storage medium - Google Patents


Info

Publication number
CN111796674A
CN111796674A (application CN202010443126.1A; granted as CN111796674B)
Authority
CN
China
Prior art keywords
light
movement coefficient
rgb
rgb values
rgb value
Prior art date
Legal status
Granted
Application number
CN202010443126.1A
Other languages
Chinese (zh)
Other versions
CN111796674B (en)
Inventor
刘德建
陈丛亮
郭玉湖
陈宏�
Current Assignee
Fujian TQ Digital Co Ltd
Original Assignee
Fujian TQ Digital Co Ltd
Priority date
Filing date
Publication date
Application filed by Fujian TQ Digital Co Ltd filed Critical Fujian TQ Digital Co Ltd
Priority to CN202010443126.1A priority Critical patent/CN111796674B/en
Publication of CN111796674A publication Critical patent/CN111796674A/en
Application granted granted Critical
Publication of CN111796674B publication Critical patent/CN111796674B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention discloses a gesture touch sensitivity adjustment method based on a head-mounted device, and a storage medium. The method comprises the following steps: within a preset first duration, obtaining an unused RGB value according to the RGB values of the pixels of each frame of image, and sending the unused RGB value to a light-emitting device; within a preset second duration, identifying, in the camera picture, the coordinate position of the light output by the light-emitting device according to the received RGB value; determining the display-screen coordinate corresponding to that position according to a first movement coefficient, calculated from the resolution of the camera and the resolution of the screen, or a second movement coefficient, obtained by training on a trajectory drawn by the user; and alternating seamlessly between timing the first duration and the second duration. On the basis of guaranteeing recognition accuracy, the invention markedly improves recognition efficiency and speed, with a particularly notable effect in single-camera scenarios. It further supports user control of gesture recognition sensitivity, which is more user-friendly and better matches actual needs.

Description

Gesture touch sensitivity adjusting method based on head-mounted device and storage medium
Technical Field
The invention relates to the field of gesture recognition, and in particular to a gesture touch sensitivity adjustment method based on a head-mounted device, and a storage medium.
Background
A head-mounted device in the prior art is worn on the head, so it is difficult to operate it the way one operates a mobile phone by touching its screen. Although some head-mounted devices support gesture control, they generally suffer from complex computation, low recognition rates and low operation sensitivity, and the operation mode is not convenient enough.
Therefore, it is desirable to provide a method and a storage medium for adjusting gesture touch sensitivity based on a head-mounted device, which can overcome the above problems.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a gesture touch sensitivity adjustment method based on a head-mounted device, and a storage medium, capable of simultaneously improving the recognition accuracy, recognition efficiency and touch sensitivity of gesture control.
In order to solve the technical problems, the invention adopts the technical scheme that:
a gesture touch sensitivity adjusting method based on a head-mounted device comprises the following steps:
within a preset first duration, obtaining an unused RGB value according to the RGB values of the pixels of each frame of image, and sending the unused RGB value to a light-emitting device;
within a preset second duration, identifying, in the camera picture, the coordinate position of the light output by the light-emitting device according to the received RGB value;
determining the display-screen coordinate corresponding to the coordinate position according to a first movement coefficient or a second movement coefficient, wherein the first movement coefficient is calculated from the resolution of the camera and the resolution of the screen, and the second movement coefficient is obtained by training on a trajectory drawn by the user;
and alternating seamlessly between timing the first duration and the second duration.
The invention provides another technical scheme as follows:
a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, is capable of implementing the steps included in the above-mentioned head-mounted device-based gesture touch sensitivity adjustment method.
The invention has the following beneficial effects: by acquiring an unused RGB value, the invention controls the light-emitting devices to output light of that unused RGB value; then, by identifying the coordinate position of each light-emitting device's light in the picture shot by the camera and determining the corresponding display-screen coordinate according to the first or second movement coefficient, the movement trajectory of the gesture is obtained. This replaces the existing approach, in which the trajectory of the user's real hand can only be recognized through complex calculations over all image pixels, with one in which the user's gesture is obtained quickly by simply analyzing the pixels of a specific RGB value. The computational complexity of recognition is thereby greatly reduced and its efficiency and accuracy improved, especially for head-mounted devices with a single camera. Moreover, since the display-screen coordinate is determined from a coefficient derived from the camera parameters or from the user's initial training, both the speed and the precision of gesture recognition can be greatly improved.
Drawings
Fig. 1 is a schematic flowchart illustrating a gesture touch sensitivity adjustment method based on a head-mounted device according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a gesture touch sensitivity adjustment method based on a head-mounted device according to an embodiment of the present invention.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
The key concept of the invention is as follows: by simply analyzing the pixels of a specific RGB value, the corresponding display-screen coordinates can be determined efficiently and accurately, and the user's gesture can be recognized quickly.
Referring to fig. 1, the present invention provides a gesture touch sensitivity adjustment method based on a head-mounted device, including:
within a preset first duration, obtaining an unused RGB value according to the RGB values of the pixels of each frame of image, and sending the unused RGB value to a light-emitting device;
within a preset second duration, identifying, in the camera picture, the coordinate position of the light output by the light-emitting device according to the received RGB value;
determining the display-screen coordinate corresponding to the coordinate position according to a first movement coefficient or a second movement coefficient, wherein the first movement coefficient is calculated from the resolution of the camera and the resolution of the screen, and the second movement coefficient is obtained by training on a trajectory drawn by the user;
and alternating seamlessly between timing the first duration and the second duration.
Further, the calculating the first movement coefficient according to the resolution of the camera and the resolution of the screen includes:
the first movement coefficient is obtained by dividing the length and width of the resolution of the camera by the length and width of the resolution of the display screen, respectively.
According to the description, the first movement coefficient determined according to the resolution ratios of the camera and the display screen is used for adjusting the touch sensitivity according to the equipment parameters, and the method is suitable for improving the speed and the accuracy of gesture recognition for a large number of users.
Further, the second movement coefficient obtained by training according to the trajectory drawn by the user includes:
recognizing the coordinate set produced as the user copies a designated trajectory on the display screen with the light emitted by the light-emitting device;
and calculating the maximum offset of the coordinate set and the specified track, and taking the maximum offset as a second movement coefficient.
According to the description, the touch sensitivity is determined according to the operation habits of the user, and the most suitable operation sensitivity is customized by the user.
Further, the first movement coefficient or the second movement coefficient or other movement coefficients are selected by a sensitivity adjustment key provided on the light emitting apparatus.
As can be seen from the above description, the invention supports autonomous adjustment of recognition sensitivity and autonomous control of operation accuracy and convenience, which is more user-friendly and better meets actual needs.
Further, the obtaining of the unused RGB values according to the RGB values of each pixel of each frame image includes:
presetting more than two groups respectively corresponding to different RGB value ranges;
acquiring the RGB value of each pixel of each frame of image;
dividing each pixel into a corresponding group according to the RGB value;
calculating the number of pixel points of each group to obtain the group with the least number of pixel points;
and determining the RGB value in the RGB value range corresponding to the group with the least number of pixel points as an unused RGB value.
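As a rough sketch of steps like these (assuming Python, with invented group boundaries and a flat list of pixel tuples standing in for a frame):

```python
from collections import Counter

# Hypothetical RGB cuboids per group; the bounds below are illustrative,
# not the patent's exact ranges.
GROUPS = {
    "red":   ((200, 0, 0), (255, 50, 50)),
    "green": ((0, 200, 0), (50, 255, 50)),
    "blue":  ((0, 0, 200), (50, 50, 255)),
}

def classify(pixel):
    """Return the name of the group whose RGB cuboid contains the pixel."""
    for name, (lo, hi) in GROUPS.items():
        if all(l <= c <= h for c, l, h in zip(pixel, lo, hi)):
            return name
    return None

def least_used_groups(frame):
    """Count the pixels falling into each group and return the group(s)
    with the fewest pixels, i.e. the candidate unused RGB ranges."""
    counts = Counter({name: 0 for name in GROUPS})
    for pixel in frame:
        name = classify(pixel)
        if name is not None:
            counts[name] += 1
    fewest = min(counts.values())
    return sorted(name for name, n in counts.items() if n == fewest)

# Synthetic frame: mostly reddish pixels, one green pixel, no blue at all.
frame = [(220, 10, 10)] * 5 + [(10, 220, 10)]
print(least_used_groups(frame))  # -> ['blue']
```

In a real implementation the frame would come from the camera and the groups would cover the full color space, but the counting logic is the same.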
As can be seen from the above description, dividing the groups' RGB value ranges by color helps concentrate the analysis of unused pixel colors in the images captured during the first duration onto one or a few groups rather than dispersing it across many, thereby improving the accuracy and efficiency of the subsequent analysis and calculation.
Further, if the unused RGB values correspond to two or more groups, the sending to the light emitting device includes:
respectively calculating RGB difference values of more than two groups corresponding to the unused RGB values and other groups;
acquiring an RGB value range corresponding to the group with the largest difference value with other groups;
and sending the RGB value range to a light-emitting device.
As can be seen from the above description, if the unused RGB values are dispersed across two or more groups, the group with the largest difference from the other groups is further selected, and its RGB value range is used as the standard for the light output by the light-emitting device. This further improves the recognizability of that light in the picture captured by the head-mounted device, improving recognition accuracy once more.
Further, the RGB values of the light output by the light emitting device are randomly chosen from the range of received RGB values.
As can be seen from the above description, the light-emitting device can choose freely within the given range, which improves compatibility with the light-emitting device and ensures that it can output light whose RGB value meets the requirements.
Further, the different RGB value ranges are RGB value ranges corresponding to respective colors.
As can be seen from the above description, grouping is directly performed according to the color values corresponding to the colors, so that the available value and intuitiveness of the pixel grouping result can be improved.
Further, identifying, in the camera picture, the coordinate position of the light output by the light-emitting device according to the received RGB value includes:
controlling a light emitting device to emit light corresponding to the received RGB values;
searching pixel points corresponding to the RGB values sent to the light-emitting equipment in the current frame image, and acquiring coordinate positions of the pixel points;
and acquiring the movement track of the display screen corresponding to the light in the second time length according to the coordinate position of each frame of image in the second time length.
As can be seen from the above description, by locating a specific RGB value in an image and combining the RGB values in time sequence, a control gesture made by a user through a light-emitting device can be obtained.
Further, still include:
and receiving a click signal sent by the light-emitting equipment, wherein the click signal corresponds to the coordinate position of the light currently output by the light-emitting equipment in the display screen.
In this way, combined with the gesture, the function of a mouse click can be simulated.
Further, the head-mounted device and the light-emitting device are in communication transmission through a Bluetooth communication link.
According to the above description, the light-emitting device and the head-mounted device adopt a wireless connection mode, so that the user can control the light-emitting device more conveniently.
Further, the first duration is equal to the second duration.
As can be seen from the above description, the head-mounted device and the light-emitting device perform their analysis and processing at the same frequency, which ensures both the accuracy of the head-mounted device's calculation results and the accuracy of the light-emitting device's output.
The invention provides another technical scheme as follows:
a computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, enables the following method for adjusting touch sensitivity of a head-mounted device based gesture, comprising the steps of:
within a preset first duration, obtaining an unused RGB value according to the RGB values of the pixels of each frame of image, and sending the unused RGB value to a light-emitting device;
within a preset second duration, identifying, in the camera picture, the coordinate position of the light output by the light-emitting device according to the received RGB value;
determining the display-screen coordinate corresponding to the coordinate position according to a first movement coefficient or a second movement coefficient, wherein the first movement coefficient is calculated from the resolution of the camera and the resolution of the screen, and the second movement coefficient is obtained by training on a trajectory drawn by the user;
and alternating seamlessly between timing the first duration and the second duration.
Further, the calculating the first movement coefficient according to the resolution of the camera and the resolution of the screen includes:
the first movement coefficient is obtained by dividing the length and width of the resolution of the camera by the length and width of the resolution of the display screen, respectively.
Further, the second movement coefficient obtained by training according to the trajectory drawn by the user includes:
recognizing the coordinate set produced as the user copies a designated trajectory on the display screen with the light emitted by the light-emitting device;
and calculating the maximum offset of the coordinate set and the specified track, and taking the maximum offset as a second movement coefficient.
Further, the first movement coefficient or the second movement coefficient or other movement coefficients are selected by a sensitivity adjustment key provided on the light emitting apparatus.
Further, the obtaining of the unused RGB values according to the RGB values of each pixel of each frame image includes:
presetting more than two groups respectively corresponding to different RGB value ranges;
acquiring the RGB value of each pixel of each frame of image;
dividing each pixel into a corresponding group according to the RGB value;
calculating the number of pixel points of each group to obtain the group with the least number of pixel points;
and determining the RGB value in the RGB value range corresponding to the group with the least number of pixel points as an unused RGB value.
Further, if the unused RGB values correspond to two or more groups, the sending to the light emitting device includes:
respectively calculating RGB difference values of more than two groups corresponding to the unused RGB values and other groups;
acquiring an RGB value range corresponding to the group with the largest difference value with other groups;
and sending the RGB value range to a light-emitting device.
Further, the RGB values of the light output by the light emitting device are randomly chosen from the range of received RGB values.
Further, the different RGB value ranges are RGB value ranges corresponding to respective colors.
Further, still include:
and receiving a click signal sent by the light-emitting equipment, wherein the click signal corresponds to the coordinate position of the light currently output by the light-emitting equipment in the display screen.
As can be understood from the above, those skilled in the art will appreciate that all or part of the processes in the above technical solutions can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above methods. After being executed by a processor, the program can likewise achieve the beneficial effects corresponding to the respective methods.
The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Example one
Referring to fig. 2, the embodiment provides a gesture touch sensitivity adjustment method based on a head-mounted device, which can significantly improve gesture recognition efficiency and simultaneously ensure accuracy and convenience of gesture recognition. The light emitting device may be any device capable of emitting light corresponding to a specified RGB value, such as a ring or bracelet configured with LEDs.
The method comprises the following steps:
s1: presetting more than two groups respectively corresponding to different RGB value ranges;
that is, one group corresponds to one RGB value range. Preferably a set corresponds to a range of colour values. For example, the color image may be divided into 9 groups of red, orange, yellow, green, blue, purple, and black, wherein the "red" group corresponds to a cube having a length, width, and height of 50 and 55, which is a rectangular solid space having RGB values ranging from (200, 50, 50) to (255, 0, 0).
Of course, the grouping may also be finer, such as grouping each small RGB value interval.
Red: (200, 50, 50) to (255, 0, 0);
Orange: (200, 100, 50) to (255, 50, 0);
Yellow: (200, 150, 50) to (255, 100, 0);
Green: (0, 255, 0) to (50, 200, 50);
Cyan: (0, 255, 50) to (50, 200, 100);
Blue: (0, 0, 255) to (50, 50, 200);
Purple: (50, 0, 255) to (100, 50, 200);
preferably, the partition of red, orange, yellow, green, blue-violet may also be a range classified in an LAB-wise manner in polar coordinates and then converted into RGB.
S2: presetting a first time length and a second time length;
preferably, the first and second time periods are equal, such as 100 ms.
S3: in the first time period, the head-mounted device obtains unused RGB values according to the RGB values of the pixels of each frame of image and sends the unused RGB values to the light-emitting device.
Specifically, the step includes:
s31: and acquiring the RGB value of each pixel of each frame of image. Namely, the RGB value of each pixel in each frame of image shot by the camera in the first duration is obtained.
S32: and dividing each pixel into a corresponding group according to the RGB value. That is, each pixel acquired in step S31 is grouped according to its RGB value and classified into a group corresponding to a range of RGB values.
S33: and calculating the number of pixel points of each group to obtain the group with the minimum number of pixel points. That is, the number of pixels included in each group is calculated, and the group with the smallest number of pixels is obtained, and if the obtained number of groups is one, the group can be considered to have the largest difference from other groups.
In another specific example, the number of the groups with the smallest number of pixels finally obtained in the step S33 is two or more, and the identification degree of the light output by the light emitting device can be improved by further calculating one group with the largest difference from all other groups.
For example, if statistics shows that the pixel points included in the 3 groups of "red", "orange" and "yellow" are all 0 or close to 0, the RGB value ranges corresponding to the three groups are the unused RGB values.
In accordance with another embodiment, the most different group can be determined by:
s34: RGB difference values of two or more groups corresponding to the unused RGB values acquired at S33 and all other groups are calculated, respectively, to determine.
Taking the above 9 groups (red, orange, yellow, green, cyan, blue, purple, black and white) as an example, suppose that after step S33 the three groups "red", "orange" and "yellow" are determined to contain the fewest (and equal) numbers of pixels, so the unused RGB values correspond to these three groups. To further improve the distinguishability of the light emitted by the light-emitting device, the differences between the three candidate groups and the other groups that do contain pixels can be calculated. The calculation may proceed as follows: compare the RGB values of "red", "orange" and "yellow" with those of the other colored groups, component by component (R, G, B), and select the candidate with the largest difference. The difference between group 1 and group 2 is d12 = √((R1−R2)² + (G1−G2)² + (B1−B2)²); the difference dij is computed between each of the red, orange and yellow groups (i = 1, 2, 3) and each of the green, cyan, blue and purple groups (j = 4, 5, 6, 7), where red, orange, yellow, green, cyan, blue and purple are numbered 1 to 7 respectively. The selected group is the one achieving the maximum value of max(min(d14, d15, d16, d17), min(d24, d25, d26, d27), min(d34, d35, d36, d37)).
If the maximum turns out to be d14, the "red" group is selected.
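This max-min selection can be sketched as follows; representing each group by a single representative RGB value, and the sample colors themselves, are simplifications of ours:

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance d12 = sqrt((R1-R2)^2 + (G1-G2)^2 + (B1-B2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def most_distinct(candidates, others):
    """Return the candidate group whose nearest other group is farthest
    away: the argmax over candidates of the min over others of dij."""
    def nearest(color):
        return min(rgb_distance(color, oc) for oc in others.values())
    return max(candidates, key=lambda name: nearest(candidates[name]))

# Invented representative colors for the three unused candidate groups
# and the four other colored groups.
candidates = {"red": (230, 25, 25), "orange": (230, 90, 25), "yellow": (230, 140, 25)}
others = {"green": (25, 230, 25), "cyan": (25, 230, 75),
          "blue": (25, 25, 230), "purple": (75, 25, 230)}
print(most_distinct(candidates, others))  # -> red
```

With these sample values, "red" wins because its closest neighbor (purple) is still farther away than orange's or yellow's closest neighbor (green).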
S35: if the group corresponding to the unused RGB value is only one group, the RGB values within the range of RGB values corresponding to the group can be directly sent to the light emitting device.
If the minimum pixel count corresponds to two or more groups, the corresponding RGB value ranges can be sent directly to the light-emitting device, which then selects one RGB value to emit; alternatively, the group with the highest distinguishability can first be screened out from the two or more groups, and only its RGB value range sent to the light-emitting device.
That is, regardless of the number of groups to which unused RGB values correspond, it is preferable to transmit the RGB values having the largest difference to the light emitting devices; of course, it is also possible to choose to send all unused RGB values to the light emitting device.
It should be noted that the head-mounted device sends the RGB value range to the light-emitting device over the Bluetooth connection between them.
S4: within a preset second time length, identifying the coordinate position of light output by the light-emitting device according to the received RGB value in the lens picture;
the method specifically comprises the following steps:
s41: and controlling the light-emitting equipment to emit corresponding light according to the received RGB value range in the second time length.
Preferably, if the unused RGB values correspond to more than two color values (i.e. two grouped ranges of RGB values), the light-emitting device may randomly select the RGB values from the unused RGB values for output.
In a specific example, if the group with the largest difference is found to be the "red" group, only one group of corresponding RGB values is obtained. If the "red" group corresponds to the rectangular RGB space from (200, 50, 50) to (255, 0, 0), i.e. a cuboid whose sides span roughly 55, 50 and 50 values, the light-emitting device may randomly output a color within (200, 50, 50) to (255, 0, 0).
S42: the head-mounted device searches pixel points of the RGB values which are correspondingly sent to the light-emitting device in the current frame image according to the image shot by the camera at present (the image is still shot by the camera in real time within the second duration), and obtains the coordinate position of the pixel points.
For example, the camera's shooting range corresponds to the coordinate range (0, 0) to (1920, 1080), and the center of the light emitted by the light-emitting device, as recognized by the camera, corresponds to the pixel position (400, 400).
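The pixel search and center computation might look like this minimal sketch, assuming (for brevity) that the frame is a flat row-major list of RGB tuples and that a simple predicate decides whether a pixel is in the sent range:

```python
def light_center(frame, width, in_range):
    """Scan a frame for pixels matching the RGB range sent to the
    light-emitting device and return the integer center of the matching
    region, or None if nothing matches. `frame` is assumed to be a flat
    row-major list of (R, G, B) tuples."""
    xs, ys = [], []
    for i, pixel in enumerate(frame):
        if in_range(pixel):
            xs.append(i % width)   # column
            ys.append(i // width)  # row
    if not xs:
        return None
    return (sum(xs) // len(xs), sum(ys) // len(ys))

# Tiny 4x4 synthetic frame with a 2x2 "red" spot in the lower-right corner.
black, red = (0, 0, 0), (230, 20, 20)
frame = [black] * 16
for i in (10, 11, 14, 15):
    frame[i] = red
is_red = lambda p: p[0] >= 200 and p[1] <= 50 and p[2] <= 50
print(light_center(frame, 4, is_red))  # -> (2, 2)
```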
S5: and determining the coordinate of the coordinate position corresponding to the display screen according to the first movement coefficient or the second movement coefficient, wherein the first movement coefficient is obtained by calculation according to the resolution of the camera and the resolution of the screen, and the second movement coefficient is obtained by training according to the track drawn by the user.
The determination of the first movement coefficient and the second movement coefficient in this step is performed before the actual recognition, i.e., before step S3 of the present embodiment. The first movement coefficient is determined as follows:
the first movement coefficient is obtained by dividing the length and width of the resolution of the camera by the length and width of the resolution of the display screen, respectively. For example, if the maximum shooting resolution of the camera is 1280x720 and the screen resolution is 1280x720, the movement coefficient is (1, 1); the camera resolution is 1920x1080 and the screen resolution is 960x540, then shifted by a factor of (2, 2).
Wherein the second movement coefficient is determined according to the optimal sensitivity of the training user, and the process is as follows:
based on the principle of S1-S4, acquiring a coordinate set formed by all coordinates recognized by a user in the process of copying the designated track through light emitted by the light-emitting device; then, the maximum offset of the coordinate set from the designated track is calculated and used as a second movement coefficient.
In one embodiment, the user is asked to draw a straight line on the screen, the drawn trajectory is recorded, and the maximum offset from the straight line is calculated. That is, the straight line y = kx + b passing closest to all the points is obtained by the least squares method, and the distance from the point farthest from that line to the line is computed. For example, if the points of the user's drawn trajectory deviate by 2 pixels on average, the movement coefficient is adjusted to (2, 2). In that case, if the camera resolution is 1920x1080 and the screen resolution is 960x540, the center of the light emitted by the light-emitting device moves 1 pixel on the screen for every 2 pixels it moves in the camera picture, i.e. screen coordinate = camera coordinate / movement coefficient: 1920 / 2 = 960. Thus, with a movement coefficient of (2, 2), the LED moves only 1 screen pixel for every 2 camera pixels; if the LED's center is at (400, 400) in the camera picture, the corresponding screen position is (200, 200).
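The least squares fit and maximum-offset computation could be sketched as follows; the jittery sample trajectory is invented for illustration:

```python
import math

def second_movement_coefficient(points):
    """Fit y = k*x + b to the user's drawn trajectory by least squares and
    return the largest perpendicular distance from a trajectory point to
    that line, used here as the (per-axis) movement coefficient."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    # Perpendicular point-to-line distance for the line y = k*x + b.
    return max(abs(k * x - y + b) / math.sqrt(k * k + 1) for x, y in points)

# Invented trajectory: a nearly horizontal line drawn with ~2 px of jitter.
trajectory = [(0, 100), (10, 102), (20, 98), (30, 102), (40, 98), (50, 100)]
offset = second_movement_coefficient(trajectory)
print(round(offset, 1))  # -> 2.1
```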
In a specific example, the first movement coefficient, the second movement coefficient or another movement coefficient may be selected via a sensitivity adjustment key or knob provided on the light-emitting device, achieving user-defined adjustment of the movement coefficient. Sensitivity is adjusted with the LED knob: the higher the sensitivity, the shorter the LED movement distance required for each pixel of coordinate movement; the lower the sensitivity, the greater the LED movement distance required for each pixel of coordinate movement.
For example, some users' hands shake more, so they can reduce the sensitivity to improve operation accuracy; other users want easier and quicker operation, so they can reduce the required movement distance and complete operations faster.
S6: acquiring the movement track of the light on the display screen within the second time length according to the coordinate positions determined in the previous step for each frame image within the second time length.
That is, the head-mounted device only needs to identify, in each image captured in real time, the pixel points matching the RGB values it sent to the light-emitting device and locate their coordinate position in the image; connecting these coordinate positions in time order then yields the trajectory on the display screen of the light output by the light-emitting device within the second time length, i.e. the gesture made by the user with the light-emitting device.
S7: timing alternates seamlessly between the first time length and the second time length.
Correspondingly, S3 and S4 are executed alternately, starting with the first time length. Specifically, during the first time length the light-emitting device stops outputting any light; it outputs the corresponding light only upon receiving the RGB value range sent by the head-mounted device.
That is, during the first time length the head-mounted device performs the calculation and transmits the result to the light-emitting device; during the second time length the light-emitting device outputs the corresponding light; then the first time length starts again, the light-emitting device stops outputting light, and the head-mounted device repeats the calculation and transmission.
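The seamless alternation of the two time lengths can be sketched as a loop. This is an illustrative Python sketch, not part of the patent; the callables stand in for the real camera analysis, radio link, and light tracker, and the hardware timers that gate the two phases are omitted.

```python
def run_alternating_cycle(compute_rgb_range, send_to_device, track_light, cycles=2):
    """Alternate between the two phases, starting with the first time
    length: the headset computes an unused RGB range and sends it while
    the LED is off, then locates the emitted light during the second
    time length. Returns a log of (phase, payload) pairs."""
    log = []
    for _ in range(cycles):
        rgb_range = compute_rgb_range()    # first time length: analyse frame
        send_to_device(rgb_range)          # transmit the range to the LED device
        log.append(("first", rgb_range))
        position = track_light(rgb_range)  # second time length: LED is on
        log.append(("second", position))
    return log
```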
In the whole process, the user uses the light-emitting device to simulate a human hand, mouse, or other control device to make gestures, and the head-mounted device obtains the user's control gestures by identifying the screen positions corresponding to the light emitted by the light-emitting device.
In a specific example, on the basis of the above, various specific gestures and their corresponding manipulations can be preset in the head-mounted device; once a gesture is recognized, the manipulation corresponding to it is executed directly. For example, a preset "stroke left" gesture corresponds to "go back to the previous page"; a preset "hook" gesture corresponds to "close the current interface"; and so on.
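Such a preset gesture-to-manipulation table reduces to a simple lookup. This is an illustrative Python sketch; the gesture names and action identifiers are hypothetical, not taken from the patent.

```python
# Hypothetical gesture names and action identifiers (illustrative only).
GESTURE_ACTIONS = {
    "stroke_left": "go_back_to_previous_page",
    "hook": "close_current_interface",
}

def dispatch_gesture(gesture_name):
    """Look up the preset manipulation for a recognized gesture;
    return None when no manipulation is registered for it."""
    return GESTURE_ACTIONS.get(gesture_name)
```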
In particular, more complex gesture manipulations can be provided, such as simulating a mouse click.
A concrete implementation is as follows:
S8: receiving a click signal sent by the light-emitting device, the click signal corresponding to the coordinate position in the display screen of the light currently output by the light-emitting device.
A "click" or "touch" button is preset on the light-emitting device. While performing a gesture, the user can press this button to send a click signal to the head-mounted device (via Bluetooth, infrared, or the like); upon receiving the click signal, the head-mounted device immediately determines the current screen position corresponding to the light and triggers the function at that position. This can be understood as a mouse-click function.
For example, the shooting range of the camera covers coordinates (0, 0) to (1920, 1080), and the corresponding head-mounted display is 1920 × 1080 pixels. If the center of the identified pixel points of the light output by the light-emitting device is (100, 200), a mouse pointer is displayed hovering at coordinates (100, 200) on the display screen of the head-mounted device; if a click signal from the light-emitting device is received at that moment, a mouse click is performed at those coordinates.
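The coordinate correspondence used throughout (screen coordinate = camera coordinate / movement coefficient, per axis) can be sketched as follows. Illustrative Python only; the function name is the editor's.

```python
def camera_to_screen(cam_xy, movement_coefficient):
    """Map the light's centre position in camera coordinates to
    display-screen coordinates by dividing each axis by the
    corresponding component of the movement coefficient."""
    (cx, cy), (mx, my) = cam_xy, movement_coefficient
    return (cx / mx, cy / my)
```

With coefficient (1, 1) the camera and screen coordinates coincide, as in this example; with coefficient (2, 2) a camera centre of (400, 400) lands at screen position (200, 200), as in the earlier sensitivity example.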
This embodiment enables a user wearing the light-emitting device to perform gesture control of the head-mounted device, significantly improving the gesture recognition speed of head-mounted devices, especially those equipped with only a single camera, while ensuring recognition accuracy and ease of operation.
Example two
The invention provides a specific application scenario corresponding to the first embodiment:
1. The RGB values of the pixels of each frame of the camera image are obtained. First, the RGB values are grouped by manual presetting, each group covering an RGB range; for example, nine groups such as red, orange, yellow, green, blue, purple, and black (alternatively, each smaller RGB value interval can form its own group). Then the number of pixel points in each group is counted from the image acquired by the camera. For example, statistics may show that the red, orange, and yellow groups each contain 0 pixels (or close to 0); these three groups then correspond to the unused RGB values.
2. The most distinguishable color is output by two or more LED rings worn on different fingers, switched on and off at a fixed frequency, for example every 100 milliseconds. The group with the maximum difference from the other groups is selected from the unused red, orange, and yellow groups; here it is the red group. For example, if the red RGB range is (200, 00, 00) to (255, 00, 00), the LED ring randomly outputs a color within (200, 00, 00) to (255, 00, 00).
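The maximum-difference selection in step 2 can be sketched as below. Illustrative Python only; the patent does not define "RGB difference" precisely, so comparing the midpoints of the ranges by summed per-channel distance is the editor's assumption.

```python
def _midpoint(rgb_range):
    """Midpoint of an RGB range ((r_lo, g_lo, b_lo), (r_hi, g_hi, b_hi))."""
    (rl, gl, bl), (rh, gh, bh) = rgb_range
    return ((rl + rh) / 2, (gl + gh) / 2, (bl + bh) / 2)

def most_distinct_group(candidates, groups):
    """Among the candidate (unused) group names, return the one whose
    RGB midpoint is farthest, in total per-channel distance, from the
    midpoints of all other groups, i.e. the easiest colour to tell
    apart in the image."""
    def score(name):
        m = _midpoint(groups[name])
        return sum(
            sum(abs(a - b) for a, b in zip(m, _midpoint(rng)))
            for other, rng in groups.items() if other != name
        )
    return max(candidates, key=score)
```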
3. At the same frequency, during the LED-off period the camera side also calculates the maximum-difference RGB range, red, as (200, 00, 00) to (255, 00, 00); the LED then outputs a color in this maximum-difference range, and the LED positions are obtained while the LED is on. That is, after the LED is switched off, the color the LED should display is calculated; after the LED is switched on, the camera searches for pixel points within the LED's color range. For example, with a camera shooting range from (0, 0) to (1920, 1080) and a current LED count of 3, three coordinates are randomly initialized, the pixels are divided into 3 classes by the k-means algorithm, and the centers of the 3 classes are found to be (100, 200), (150, 200), and (200, 200).
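The k-means position estimation in step 3 can be sketched with a minimal clustering loop. Illustrative Python using only NumPy, not part of the patent; the function and parameter names are the editor's.

```python
import numpy as np

def led_centres(coords, k, iters=20, seed=0):
    """Cluster the camera pixel coordinates that match the LED colour
    into k groups with a plain k-means loop and return the k cluster
    centres (one centre per LED ring)."""
    pts = np.asarray(coords, dtype=float)
    rng = np.random.default_rng(seed)
    # Randomly initialise the centres from the detected points.
    centres = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest current centre.
        dists = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned points
        # (keep the old centre if a cluster ends up empty).
        centres = np.array([
            pts[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
    return centres
```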
4. Then, assuming the head-mounted display is also 1920 × 1080 pixels, the display touch points are determined, according to the first movement coefficient, to hover at the coordinates (100, 200), (150, 200), and (200, 200).
5. If the second movement coefficient is used instead, the user's optimal sensitivity is trained: the user draws a straight line on the screen, the drawn trajectory is recorded, and the maximum offset from the line is calculated. For example, if the points of the user's drawn trajectory deviate by 2 pixels on average, the movement coefficient is adjusted to (2, 2). Assuming the head-mounted display resolution is 960x540, the display touch points are determined, according to this movement coefficient, to hover at the coordinates (50, 100), (75, 100), and (100, 100).
6. Pressing the button on the LED device is equivalent to clicking a mouse.
Example three
Corresponding to the first and second embodiments, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the head-mounted-device-based gesture touch sensitivity adjusting method described in the first or second embodiment. The specific steps are not repeated here; refer to the description of the first or second embodiment for details.
In summary, the gesture touch sensitivity adjusting method and storage medium based on a head-mounted device provided by the invention can significantly improve recognition efficiency and speed while ensuring recognition accuracy, with a particularly notable effect on head-mounted devices equipped with a single camera. Furthermore, the sensitivity of gesture recognition can be adjusted autonomously, which is more user-friendly and better matches actual needs; and the required companion device (the light-emitting device) is simple in structure, light, and portable, so the scheme is practical and easy to implement.
The above description presents only embodiments of the present invention and is not intended to limit its scope; all equivalent changes made using the contents of the present specification and drawings, whether applied directly or indirectly in related technical fields, are included within the scope of the present invention.

Claims (10)

1. A gesture touch sensitivity adjusting method based on a head-mounted device, characterized by comprising the following steps:
within a preset first time length, acquiring unused RGB values according to the RGB values of pixels of each frame of image, and sending the unused RGB values to light-emitting equipment;
within a preset second time length, identifying the coordinate position of light output by the light-emitting device according to the received RGB value in the lens picture;
determining the coordinate of the coordinate position corresponding to the display screen according to the first movement coefficient or the second movement coefficient, wherein the first movement coefficient is obtained through calculation according to the resolution of the camera and the resolution of the screen, and the second movement coefficient is obtained through training according to the track drawn by the user;
and starting timing in turn seamlessly between the first time length and the second time length.
2. The method for adjusting gesture touch sensitivity based on a head-mounted device according to claim 1, wherein the calculating a first movement coefficient according to a camera resolution and a screen resolution includes:
the first movement coefficient is obtained by dividing the length and width of the resolution of the camera by the length and width of the resolution of the display screen, respectively.
3. The method for adjusting gesture touch sensitivity based on a head-mounted device according to claim 1, wherein the second movement coefficient trained according to the trajectory drawn by the user comprises:
recognizing a coordinate set of a designated track copied on a display screen by light emitted by a user through a light-emitting device;
and calculating the maximum offset of the coordinate set and the specified track, and taking the maximum offset as a second movement coefficient.
4. The method of claim 1, wherein the first movement coefficient or the second movement coefficient or another movement coefficient is selected by a sensitivity adjustment key provided on a light-emitting device.
5. The method for adjusting gesture touch sensitivity based on a head-mounted device according to claim 1, wherein the obtaining of the unused RGB values according to the RGB values of the pixels of each frame of image comprises:
presetting more than two groups respectively corresponding to different RGB value ranges;
acquiring the RGB value of each pixel of each frame of image;
dividing each pixel into a corresponding group according to the RGB value;
calculating the number of pixel points of each group to obtain the group with the least number of pixel points;
and determining the RGB value in the RGB value range corresponding to the group with the least number of pixel points as an unused RGB value.
6. The method of claim 1, wherein if the unused RGB values correspond to more than two groups, the sending to a light emitting device comprises:
respectively calculating RGB difference values of more than two groups corresponding to the unused RGB values and other groups;
acquiring an RGB value range corresponding to the group with the largest difference value with other groups;
and sending the RGB value range to a light-emitting device.
7. The method of claim 1, wherein the RGB values of the light output by the light emitting device are randomly selected from a range of received RGB values.
8. The method of claim 4, wherein the different RGB value ranges are RGB value ranges corresponding to respective colors.
9. The method of claim 1, further comprising:
and receiving a click signal sent by the light-emitting equipment, wherein the click signal corresponds to the coordinate position of the light currently output by the light-emitting equipment in the display screen.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, is capable of implementing the steps included in the method for adjusting gesture touch sensitivity based on a head-mounted device according to any of the claims 1 to 9.
CN202010443126.1A 2020-05-22 2020-05-22 Gesture touch sensitivity adjusting method based on head-mounted equipment and storage medium Active CN111796674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010443126.1A CN111796674B (en) 2020-05-22 2020-05-22 Gesture touch sensitivity adjusting method based on head-mounted equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111796674A true CN111796674A (en) 2020-10-20
CN111796674B CN111796674B (en) 2023-04-28

Family

ID=72805996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010443126.1A Active CN111796674B (en) 2020-05-22 2020-05-22 Gesture touch sensitivity adjusting method based on head-mounted equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111796674B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103409A (en) * 2011-01-20 2011-06-22 桂林理工大学 Man-machine interaction method and device based on motion trail identification
TW201124878A (en) * 2010-01-13 2011-07-16 Chao-Lieh Chen Device for operation and control of motion modes of electrical equipment
JP2011181360A (en) * 2010-03-02 2011-09-15 Takahiro Kido Ornament with built-in led light-emitting device
CN104134070A (en) * 2013-05-03 2014-11-05 仁宝电脑工业股份有限公司 Interactive object tracking system and interactive object tracking method thereof
CN104199550A (en) * 2014-08-29 2014-12-10 福州瑞芯微电子有限公司 Man-machine interactive type virtual touch device, system and method
US20160092726A1 (en) * 2014-09-30 2016-03-31 Xerox Corporation Using gestures to train hand detection in ego-centric video
US20160117860A1 (en) * 2014-10-24 2016-04-28 Usens, Inc. System and method for immersive and interactive multimedia generation
CN105898289A (en) * 2016-06-29 2016-08-24 北京小米移动软件有限公司 Intelligent wearing equipment and control method of intelligent wearing equipment
CN106293081A (en) * 2016-08-03 2017-01-04 北京奇虎科技有限公司 Wearable device
US20170086712A1 (en) * 2014-03-20 2017-03-30 Telecom Italia S.P.A. System and Method for Motion Capture
CN107340871A (en) * 2017-07-25 2017-11-10 深识全球创新科技(北京)有限公司 The devices and methods therefor and purposes of integrated gesture identification and ultrasonic wave touch feedback

Also Published As

Publication number Publication date
CN111796674B (en) 2023-04-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant