CN113791548A - Device control method and apparatus, electronic device and storage medium - Google Patents

Device control method and apparatus, electronic device and storage medium

Info

Publication number
CN113791548A
Authority
CN
China
Prior art keywords
target
gesture
hand image
target user
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111129883.2A
Other languages
Chinese (zh)
Inventor
孔祥晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111129883.2A
Publication of CN113791548A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 - Systems controlled by a computer
    • G05B15/02 - Systems controlled by a computer electric
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/20 - Pc systems
    • G05B2219/26 - Pc applications
    • G05B2219/2642 - Domotique, domestic, home control, automation, smart house

Abstract

The present disclosure provides a device control method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first hand image of a target user, wherein the target user is a user who controls a target device; acquiring a second hand image of the target user when it is recognized, based on the first hand image, that a first gesture category of the target user matches a set pre-trigger gesture; and controlling the target device to execute an operation corresponding to a set first target gesture when it is detected, based on the second hand image, that a second gesture category of the target user matches the first target gesture.

Description

Device control method and apparatus, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a device control method and apparatus, an electronic device, and a storage medium.
Background
Human-computer interaction technology enables effective human-computer dialogue through computer input and output devices. With the continuous development of science and technology, people place higher demands on the level and quality of human-computer interaction. Because gestures are intuitive and natural, they have become an important means of human-computer information interaction. Gesture recognition based on computer vision is therefore a research focus in the field of human-computer interaction.
Generally, when a device is controlled by gestures, different hand actions may map to similar hand shapes in an image. This reduces the accuracy of the gesture recognition result, which in turn causes erroneous control of the device and lowers the accuracy of the device control process.
Disclosure of Invention
In view of the above, the present disclosure provides at least a device control method, an apparatus, an electronic device, and a storage medium.
In a first aspect, the present disclosure provides a device control method, including:
acquiring a first hand image of a target user, wherein the target user is a user who controls a target device;
acquiring a second hand image of the target user when it is recognized, based on the first hand image, that a first gesture category of the target user matches a set pre-trigger gesture;
and controlling the target device to execute an operation corresponding to a set first target gesture when it is detected, based on the second hand image, that a second gesture category of the target user matches the first target gesture.
In the above method, a pre-trigger gesture is set. When the first gesture category of the target user matches the pre-trigger gesture, it is determined that the target user intends to make the first target gesture, so the acquired second hand image can be recognized; when the second gesture category matches the first target gesture, the target device can be controlled to execute the operation corresponding to the first target gesture. Setting the pre-trigger gesture improves the efficiency of determining the first target gesture and enables accurate control of the target device by means of the first target gesture.
In one possible embodiment, detecting, based on the second hand image, that the second gesture category of the target user matches the set first target gesture includes:
recognizing the second hand image and determining a second recognition result, wherein the second recognition result includes the confidence that the hand action of the target user in the second hand image belongs to each preset gesture;
determining that the second gesture category of the target user matches the set first target gesture when the second recognition result indicates that a first confidence that the hand action of the target user belongs to the first target gesture is greater than a set first threshold and that second confidences that the hand action belongs to gestures other than the first target gesture are less than a set second threshold; wherein the first target gesture is a three-finger pinch gesture.
In the foregoing embodiment, after the target user makes the pre-trigger gesture, a hand motion whose first confidence for the first target gesture exceeds the set first threshold and whose second confidences for the other gestures are below the set second threshold could correspond either to a three-finger pinch gesture or to another gesture similar to it; because the pre-trigger gesture has already indicated the user's intention, the second gesture category of the target user can nevertheless be determined to match the first target gesture, which improves the recognition efficiency of the three-finger pinch gesture.
In a possible embodiment, the method further comprises:
controlling the target device to execute an operation corresponding to a set second target gesture, other than the first target gesture and the pre-trigger gesture, when it is detected, based on the second hand image, that the second gesture category of the target user matches the second target gesture.
Here, when it is detected that the second gesture category of the target user matches a set second target gesture other than the first target gesture and the pre-trigger gesture, the target device is controlled to execute the operation corresponding to the second target gesture, so that flexible and fine control of the target device is achieved.
In one possible embodiment, when it is recognized that the first gesture category of the target user in the first hand image matches the preset pre-trigger gesture, acquiring a second hand image of the target user includes:
determining a cutoff time separated from the acquisition time of the first hand image by a preset duration when it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture;
acquiring a second hand image of the target user from the acquisition time of the first hand image to the cutoff time.
Here, a preset duration may be set, a cutoff time separated from the acquisition time of the first hand image by the preset duration is determined, and second hand images of the target user between the acquisition time of the first hand image and the cutoff time are acquired. This mitigates restriction of the device functions that could result from the target user accidentally triggering the pre-trigger gesture, and improves the flexibility of device control.
In a possible embodiment, the method further comprises:
and in response to the second gesture type being the same as the pre-trigger gesture, controlling a target device to execute an operation corresponding to the pre-trigger gesture, or acquiring a hand image of the target user and taking the acquired hand image as the first hand image.
In a possible implementation, the controlling the target device to perform an operation corresponding to the first target gesture includes:
determining target position information of the first target gesture in each second hand image;
determining position change information of the first target gesture between adjacent second hand images based on the target position information, wherein the position change information includes at least one of a change direction and a change distance;
and controlling the target equipment to execute the operation corresponding to the first target gesture based on the position change information.
In the above method, the position change information of the first target gesture between adjacent second hand images is determined based on the target position information of the first target gesture in each second hand image, and the target device is controlled to execute the operation corresponding to the first target gesture based on the position change information, thereby achieving accurate control of the target device by means of the first target gesture.
In one possible embodiment, controlling the target device to perform an operation corresponding to the first target gesture includes at least one of:
adjusting the volume of the target device;
adjusting an operating mode of the target device, wherein the operating mode comprises turning off or turning on at least part of functions of the target device;
displaying a mobile identifier in a display interface of the target device, or adjusting a display position of the mobile identifier in the display interface;
zooming out or zooming in at least part of display content in the display interface;
fast forwarding or fast rewinding of display content in the display interface;
sliding or jumping of the display interface;
and selecting a preset function in the display interface.
The method can control various operations of the target equipment, improves the diversity of control operations of the target equipment, and realizes flexible control of the target equipment.
In a possible implementation, after controlling the target device to perform an operation corresponding to the first target gesture, the method further includes:
acquiring a third hand image of the target user;
and releasing the operation control over the target device based on the first target gesture when no hand motion of the target user exists in the third hand image, or when it is detected, based on the third hand image, that a third gesture category of the target user does not match the first target gesture.
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides an apparatus control device, comprising:
the first acquisition module is used for acquiring a first hand image of a target user, wherein the target user is a user who controls a target device;
the second acquisition module is used for acquiring a second hand image of the target user when it is recognized that the first gesture category of the target user in the first hand image matches a set pre-trigger gesture;
and the first control module is used for controlling the target equipment to execute the operation corresponding to the first target gesture when detecting that the second gesture category of the target user is matched with the set first target gesture based on the second hand image.
In one possible implementation, the first control module, when detecting that the second gesture category of the target user matches the set first target gesture based on the second hand image, is configured to:
identifying the second hand image and determining a second identification result; the second recognition result comprises a confidence coefficient that the hand action of the target user in the second hand image belongs to each preset gesture;
determining that the second gesture category of the target user is matched with the set first target gesture under the condition that the second recognition result indicates that the first confidence coefficient of the hand action of the target user belonging to the first target gesture is greater than the set first threshold value and the second confidence coefficient of the hand action belonging to other gestures except the first target gesture is less than the set second threshold value; wherein the first target gesture is a three-finger pinch gesture.
In a possible embodiment, the apparatus further comprises: a second control module to:
and controlling the target device to execute an operation corresponding to a second target gesture when detecting that the second gesture category of the target user is matched with the set second target gesture except the first target gesture and the pre-trigger gesture based on the second hand image.
In one possible implementation manner, in a case that it is recognized that the first gesture category of the target user in the first hand image matches a preset pre-trigger gesture, the second obtaining module, when obtaining the second hand image of the target user, is configured to:
determining a cutoff time separated from the acquisition time of the first hand image by a preset duration when it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture;
acquiring a second hand image of the target user from the acquisition time of the first hand image to the cutoff time.
In a possible embodiment, the apparatus further comprises: a third obtaining module configured to:
and in response to the second gesture type being the same as the pre-trigger gesture, controlling a target device to execute an operation corresponding to the pre-trigger gesture, or acquiring a hand image of the target user and taking the acquired hand image as the first hand image.
In one possible implementation, the first control module, when controlling the target device to perform an operation corresponding to the first target gesture, is configured to:
determining target position information of the first target gesture in each second hand image;
determining position change information of the first target gesture in an adjacent second hand image based on the target position information; the position change information includes at least one of a change direction and a change distance;
and controlling the target equipment to execute the operation corresponding to the first target gesture based on the position change information.
In one possible embodiment, the first control module, when controlling the target device to perform an operation corresponding to the first target gesture, includes at least one of:
adjusting the volume of the target device;
adjusting an operating mode of the target device, wherein the operating mode comprises turning off or turning on at least part of functions of the target device;
displaying a mobile identifier in a display interface of the target device, or adjusting a display position of the mobile identifier in the display interface;
zooming out or zooming in at least part of display content in the display interface;
fast forwarding or fast rewinding of display content in the display interface;
sliding or jumping of the display interface;
and selecting a preset function in the display interface.
In one possible embodiment, after controlling the target device to perform the operation corresponding to the first target gesture, the apparatus further includes: a detection module to:
acquiring a third hand image of the target user;
and when the hand motion of the target user does not exist in the third hand image or the third gesture type of the target user is detected not to be matched with the first target gesture based on the third hand image, the operation control of the target device based on the first target gesture is released.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the device control method according to the first aspect or any one of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the apparatus control method according to the first aspect or any one of the embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating a device control method provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating another apparatus control method provided by the embodiment of the disclosure;
fig. 3 is a schematic diagram illustrating an architecture of a device control apparatus provided in an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Generally, when a device is controlled by gestures, different hand actions that have a certain similarity may map to similar hand shapes in an image. This reduces the accuracy of the gesture recognition result, which in turn causes erroneous control of the device and lowers the accuracy of the device control process.
For example, the three-finger pinch gesture and other gestures, such as a fist gesture and a heart gesture, have similar display shapes, so they interfere with one another: when gesture recognition is performed on a hand image, the three-finger pinch gesture is easily recognized as one of the other gestures, and the other gestures are easily recognized as the three-finger pinch gesture. This increases the difficulty of recognizing the three-finger pinch gesture and lowers the accuracy of device control when the device is controlled by the three-finger pinch gesture.
In order to alleviate the above problem, embodiments of the present disclosure provide a device control method, apparatus, electronic device, and storage medium.
The above-mentioned drawbacks were identified by the inventor through practice and careful study; therefore, the discovery of the above problems and the solutions proposed below should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
For the purpose of facilitating an understanding of the embodiments of the present disclosure, a detailed description will first be given of an apparatus control method disclosed in the embodiments of the present disclosure. An execution subject of the device control method provided by the embodiment of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a Personal Digital Assistant (PDA), a vehicle-mounted device, a wearable device, or a server or other processing device. In some possible implementations, the device control method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a schematic flow chart of an apparatus control method provided in the embodiment of the present disclosure is shown, where the method includes S101-S103, where:
S101, acquiring a first hand image of a target user, wherein the target user is a user who controls a target device;
S102, acquiring a second hand image of the target user when it is recognized that the first gesture category of the target user in the first hand image matches a set pre-trigger gesture;
and S103, controlling the target device to execute an operation corresponding to a set first target gesture when it is detected, based on the second hand image, that the second gesture category of the target user matches the first target gesture.
In the above method, a pre-trigger gesture is set. When the first gesture category of the target user matches the pre-trigger gesture, it is determined that the target user intends to make the first target gesture, so the acquired second hand image can be recognized; when the second gesture category matches the first target gesture, the target device can be controlled to execute the operation corresponding to the first target gesture. Setting the pre-trigger gesture improves the efficiency of determining the first target gesture and enables accurate control of the target device by means of the first target gesture.
S101 to S103 will be specifically described below.
For S101:
In implementation, an image capturing device may be configured to capture the first hand image of the target user. The image capturing device may be disposed near the target device, or may be a camera carried by the target device itself, so that it can capture the first hand image of the target user who controls the target device. For example, the target device may be a television, the target user is located directly in front of the target device, and the image capturing device may be disposed at the top center of the television; after the image capturing device captures the first hand image of the target user, the execution subject can obtain the first hand image of the target user from the image capturing device.
The first hand image may be any frame image captured by the image capture device that includes the hand motion of the target user before the first gesture class of the target user matches the pre-trigger gesture.
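As a purely illustrative sketch (not part of the disclosed embodiments), the frame acquisition described above could be implemented with a camera placed near the target device; the use of OpenCV, the camera index, and the function name below are assumptions of this sketch rather than details of the disclosure.

```python
import cv2  # OpenCV; assumed here only for illustration


def capture_hand_frames(camera_index: int = 0):
    """Yield frames from an image capturing device placed near the target device."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()  # one frame that may contain the target user's hand
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```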
For S102:
After the first hand image is acquired, gesture recognition may be performed on it by using a trained target neural network for gesture recognition, so as to determine the first gesture category of the target user in the first hand image.
It is then judged whether the first gesture category matches the set pre-trigger gesture. If so, a second hand image of the target user is acquired; if not, the target device can be controlled according to the gesture indicated by the first gesture category, and/or a first hand image of the target user continues to be acquired.
The pre-trigger gesture can prompt detection of the first target gesture. For example, when the first gesture category belongs to the pre-trigger gesture, the target neural network may determine whether the second gesture category of the target user is the first target gesture by using a preset determination rule. For example, the preset determination rule may be: when the first confidence for the first target gesture is greater than a set first threshold and the second confidences for gestures other than the first target gesture are less than a set second threshold, it is determined that the second gesture category belongs to the first target gesture; otherwise, it does not belong to the first target gesture, and the gesture with the highest confidence can be determined as the second gesture category.
When the first gesture category does not belong to the pre-trigger gesture, the target neural network may output the second gesture category of the target user by using a preset basic rule. For example, the preset basic rule may be: the gesture with the highest confidence is determined as the gesture corresponding to the second gesture category.
For example, the target neural network for gesture recognition can recognize the second hand image and obtain the confidence that the hand action of the target user included in the second hand image belongs to each preset gesture, such as the confidence that the hand action belongs to a palm gesture, the confidence that it belongs to a three-finger pinch gesture, the confidence that it belongs to a fist gesture, and so on. When the first target gesture is a three-finger pinch gesture and the first gesture category belongs to the pre-trigger gesture, the second gesture category of the target user output by the target neural network according to the preset determination rule is the three-finger pinch gesture; when the first gesture category does not belong to the pre-trigger gesture, the second gesture category output according to the preset basic rule is a fist gesture.
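The preset determination rule and the preset basic rule described above can be summarized in the following Python sketch; the function name, gesture labels, and threshold values are assumptions introduced for illustration and are not prescribed by the disclosure.

```python
def classify_second_gesture(confidences: dict, pre_trigger_active: bool,
                            first_target: str = "three_finger_pinch",
                            first_threshold: float = 0.5,
                            second_threshold: float = 0.8) -> str:
    """Return the second gesture category for one second hand image."""
    if pre_trigger_active:
        # Preset determination rule: high confidence for the first target gesture,
        # low confidence for every other gesture.
        others = [c for g, c in confidences.items() if g != first_target]
        if (confidences.get(first_target, 0.0) > first_threshold
                and all(c < second_threshold for c in others)):
            return first_target
    # Preset basic rule: the gesture with the highest confidence.
    return max(confidences, key=confidences.get)
```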
The pre-trigger gesture may be set as required; for example, the pre-trigger gesture may include, but is not limited to, one or more of a palm gesture, a pistol gesture, a "shush" gesture, and the like.
In an alternative embodiment, when it is recognized that the first gesture category of the target user in the first hand image matches the preset pre-trigger gesture, acquiring a second hand image of the target user may include:
step one, determining a cutoff time separated from the acquisition time of the first hand image by a preset duration when it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture;
and step two, acquiring a second hand image of the target user between the acquisition time of the first hand image and the cut-off time.
In implementation, the preset time duration may be set as required, for example, the preset time duration may be 3 seconds, 5 seconds, or the like. When the first gesture category of the target user in the first hand image is identified to be matched with the set pre-trigger gesture, the cutoff time may be determined according to the acquisition time of the first hand image and the preset time duration, for example, when the acquisition time of the first hand image is 10 minutes and 10 seconds, and when the preset time duration is 5 seconds, the determined cutoff time is 10 minutes and 15 seconds.
Further, a second hand image of the target user from the acquisition time of the first hand image to the cutoff time may be acquired, for example, the second hand image may be acquired between 10 minutes 10 seconds and 10 minutes 15 seconds, wherein the second hand image does not include the image at 10 minutes 10 seconds, and includes the image at 10 minutes 15 seconds.
Here, a preset duration may be set, a cutoff time separated from the acquisition time of the first hand image by the preset duration is determined, and second hand images of the target user between the acquisition time of the first hand image and the cutoff time are acquired. This mitigates restriction of the device functions that could result from the target user accidentally triggering the pre-trigger gesture, and improves the flexibility of device control.
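A minimal sketch of this cutoff-time window is shown below; the timestamped-frame representation and the 5-second value (one of the example durations mentioned above) are assumptions of the sketch.

```python
PRESET_DURATION_S = 5.0  # example duration; the text mentions 3 s or 5 s


def collect_second_hand_images(timestamped_frames, first_image_time: float):
    """Collect second hand images acquired after the first hand image and up to
    (and including) the cutoff time, matching the window described above."""
    cutoff_time = first_image_time + PRESET_DURATION_S
    second_images = []
    for timestamp, frame in timestamped_frames:  # iterable of (timestamp, image)
        if first_image_time < timestamp <= cutoff_time:
            second_images.append(frame)
        elif timestamp > cutoff_time:
            break
    return second_images
```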
For S103:
here, it may be detected whether the second gesture category of the target user matches the set first target gesture based on the second hand image. Wherein the first target gesture may be a three-finger pinch gesture.
In one possible embodiment, detecting that the second gesture category of the target user matches the set first target gesture based on the second hand image includes:
step one, recognizing the second hand image and determining a second recognition result, wherein the second recognition result includes the confidence that the hand action of the target user in the second hand image belongs to each preset gesture;
step two, determining that the second gesture category of the target user matches the set first target gesture when the second recognition result indicates that a first confidence that the hand action of the target user belongs to the first target gesture is greater than a set first threshold and that second confidences that the hand action belongs to gestures other than the first target gesture are less than a set second threshold; wherein the first target gesture is a three-finger pinch gesture.
In the foregoing embodiment, after the target user makes the pre-trigger gesture, a hand motion whose first confidence for the first target gesture exceeds the set first threshold and whose second confidences for the other gestures are below the set second threshold could correspond either to a three-finger pinch gesture or to another gesture similar to it; because the pre-trigger gesture has already indicated the user's intention, the second gesture category of the target user can nevertheless be determined to match the first target gesture, which improves the recognition efficiency of the three-finger pinch gesture.
The second hand image may be recognized by using the target neural network for gesture recognition, and a second recognition result may be output. The second recognition result includes the confidence that the hand action of the target user in the second hand image belongs to each preset gesture. For example, when the preset gestures include a three-finger pinch gesture, a three-finger gather gesture, a heart gesture, a fist gesture, and a palm gesture, the second recognition result corresponding to the second hand image may include: a confidence of 0.5 for the three-finger pinch gesture (representing that the probability that the hand action of the target user in the second hand image belongs to the three-finger pinch gesture is 0.5), a confidence of 0.5 for the three-finger gather gesture, a confidence of 0.7 for the heart gesture, a confidence of 0.8 for the fist gesture, and a confidence of 0.2 for the palm gesture.
According to the second recognition result, it is determined whether the first confidence that the hand action of the target user belongs to the first target gesture is greater than the set first threshold, and whether the second confidences that the hand action belongs to gestures other than the first target gesture are less than the set second threshold.
The first threshold value and the second threshold value are set according to the requirement of identification accuracy; if the required recognition accuracy is high, the first threshold value may be set to be large and the second threshold value may be set to be small, for example, the first threshold value may be 0.7, and the second threshold value may be 0.6; if the required recognition accuracy is low, the first threshold value may be set to be small and the second threshold value may be set to be large, for example, the first threshold value may be 0.5 and the second threshold value may be 0.8.
Illustratively, when the second recognition result corresponding to the second hand image includes a confidence of 0.65 for the three-finger pinch gesture, 0.5 for the three-finger gather gesture, 0.7 for the heart gesture, 0.5 for the fist gesture, and 0.2 for the palm gesture, it can be determined that the second gesture category corresponding to the second hand image matches the set first target gesture, and the target device can then be controlled to execute the operation corresponding to the first target gesture.
For another example, when the second recognition result corresponding to the second hand image includes a confidence of 0.65 for the three-finger pinch gesture, 0.5 for the three-finger gather gesture, 0.9 for the heart gesture, 0.5 for the fist gesture, and 0.2 for the palm gesture, it can be determined that the hand action of the target user in the second hand image belongs to the heart gesture, and the target device is then controlled to execute the operation corresponding to the heart gesture.
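The two worked examples above can be reproduced with the threshold check sketched earlier; the self-contained snippet below uses the 0.5/0.8 threshold pair given as an example in the text, and the gesture keys are hypothetical labels.

```python
FIRST_THRESHOLD, SECOND_THRESHOLD = 0.5, 0.8  # example thresholds from the text


def matches_three_finger_pinch(confidences: dict) -> bool:
    """Apply the first-threshold / second-threshold check for the first target gesture."""
    others = [c for g, c in confidences.items() if g != "three_finger_pinch"]
    return (confidences["three_finger_pinch"] > FIRST_THRESHOLD
            and all(c < SECOND_THRESHOLD for c in others))


example_match = {"three_finger_pinch": 0.65, "three_finger_gather": 0.5,
                 "heart": 0.7, "fist": 0.5, "palm": 0.2}
example_no_match = {"three_finger_pinch": 0.65, "three_finger_gather": 0.5,
                    "heart": 0.9, "fist": 0.5, "palm": 0.2}

print(matches_three_finger_pinch(example_match))     # True  -> three-finger pinch
print(matches_three_finger_pinch(example_no_match))  # False -> highest confidence wins (heart)
```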
In an alternative embodiment, the method further comprises: in response to the condition that the second gesture type is the same as the pre-trigger gesture, acquiring a hand image of the target user, and taking the acquired hand image as the first hand image; or, in response to a situation that the second gesture category is the same as the pre-trigger gesture, controlling the target device to execute an operation corresponding to the pre-trigger gesture.
After the second gesture category of the target user is determined based on the second hand image, if it is determined that the second gesture category is the same as the pre-trigger gesture, that is, the hand action of the target user has not changed, the process returns to the step of acquiring a first hand image of the target user; that is, a hand image of the target user is acquired, the acquired hand image is taken as the first hand image, and S102-S103 may be executed again.
Alternatively, when the second gesture category is the same as the pre-trigger gesture, the target device may be controlled to execute the operation corresponding to the pre-trigger gesture, and the operation corresponding to the pre-trigger gesture can be set as required.
In another alternative embodiment, the method further comprises: and controlling the target device to execute an operation corresponding to a second target gesture when detecting that the second gesture category of the target user is matched with the set second target gesture except the first target gesture and the pre-trigger gesture based on the second hand image.
After the second gesture category of the target user is determined based on the second hand image, if it is determined that the second gesture category does not belong to the first target gesture and does not belong to the pre-trigger gesture, it is determined that the second gesture category belongs to the second target gesture, and the target device may be controlled to execute an operation corresponding to the second target gesture.
Here, when it is detected that the second gesture category of the target user matches a set second target gesture other than the first target gesture and the pre-trigger gesture, the target device is controlled to execute the operation corresponding to the second target gesture, so that flexible and fine control of the target device is achieved.
In implementation, the operation corresponding to each gesture may be predetermined, that is, the operation corresponding to the first target gesture and the operation corresponding to the second target gesture may be predetermined. The target device can then be controlled to execute the operation corresponding to any recognized gesture.
In an optional implementation manner, in S103, the controlling the target device to perform an operation corresponding to the first target gesture may include:
s1031, determining target position information of the first target gesture in each second hand image;
s1032, determining position change information of the first target gesture in an adjacent second hand image based on the target position information; the position change information comprises a change direction and/or a change distance;
and S1033, controlling the target device to execute an operation corresponding to the first target gesture based on the position change information.
In implementation, when the first target gesture is a dynamic gesture, the target neural network may include a position detection branch and a category detection branch. The second hand image is input to the target neural network; the position information of the gesture in the second hand image is output by the position detection branch, and the second gesture category corresponding to the second hand image is output by the category detection branch. When it is determined that the second gesture category matches the first target gesture, the target position information of the first target gesture in the second hand image can be acquired.
Based on the target position information of the first target gesture in each second hand image, position change information of the first target gesture in the adjacent second hand image is determined, wherein the position change information may comprise a change direction and/or a change distance. For example, the changing direction may include a direction from bottom to top, a direction from top to bottom, a direction from left to right, a direction from right to left, and the like, and the changing distance may be a length of a position change of the first target gesture on the adjacent second hand image.
The target device may then be controlled to perform the operation corresponding to the first target gesture based on the position change information. For example, suppose the first target gesture is a three-finger pinch gesture and the corresponding operation is controlling fast forwarding or fast rewinding of the display content in the display interface of the target device. If the position change information indicates that the three-finger pinch gesture moves from right to left, fast rewinding of the display content is controlled: the rewind duration corresponding to the rewind operation can be determined according to the change distance, and the display content shown on the display interface after the rewind operation is determined based on that duration. If the position change information indicates that the three-finger pinch gesture moves from left to right, fast forwarding of the display content is controlled: the fast-forward duration corresponding to the fast-forward operation is determined according to the change distance, and the display content shown on the display interface after the fast-forward operation is determined based on that duration.
In implementation, when the first target gesture is a static gesture, target position information of the first target gesture in the second hand image may be determined, and the target device may be controlled to perform an operation corresponding to the first target gesture based on the target position information. For example, if the first target gesture is a three-finger pinch gesture and the operation corresponding to the three-finger pinch gesture is a pattern special effect, the operation position of the three-finger pinch gesture on the display interface may be determined based on the conversion relationship between the image coordinate system corresponding to the second hand image and the interface coordinate system corresponding to the display interface and the target position information, and the pattern special effect may be added at the operation position.
In the above manner, the position change information of the first target gesture between adjacent second hand images is determined based on the target position information of the first target gesture in each second hand image, and the target device is controlled to execute the operation corresponding to the first target gesture based on the position change information, thereby achieving accurate control of the target device by means of the first target gesture.
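The position-change computation and the fast-forward/rewind mapping described above can be sketched as follows; the centre-point representation, the direction labels, and the seconds-per-pixel scale are assumptions introduced for illustration.

```python
def position_change(prev_center, curr_center):
    """Change direction and change distance of the gesture between adjacent second hand images.
    Centers are (x, y) in image coordinates, where y grows downwards."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    if abs(dx) >= abs(dy):
        direction = "left_to_right" if dx > 0 else "right_to_left"
    else:
        direction = "top_to_bottom" if dy > 0 else "bottom_to_top"
    distance = (dx ** 2 + dy ** 2) ** 0.5
    return direction, distance


def seek_offset(direction, distance, seconds_per_pixel: float = 0.1) -> float:
    """Map a horizontal pinch movement to a fast-forward (+) or rewind (-) duration."""
    if direction == "left_to_right":
        return distance * seconds_per_pixel
    if direction == "right_to_left":
        return -distance * seconds_per_pixel
    return 0.0
```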
In an optional embodiment, controlling the target device to perform an operation corresponding to the first target gesture includes at least one of:
adjusting the volume of the target device;
adjusting an operating mode of the target device, wherein the operating mode comprises turning off or turning on at least part of functions of the target device;
displaying a mobile identifier in a display interface of the target device, or adjusting a display position of the mobile identifier in the display interface;
zooming out or zooming in at least part of display content in the display interface;
fast forwarding or fast rewinding of display content in the display interface;
sliding or jumping of the display interface;
and selecting a preset function in the display interface.
The method can control various operations of the target equipment, improves the diversity of control operations of the target equipment, and realizes flexible control of the target equipment.
When the operation corresponding to the first target gesture is adjusting the volume of the target device, whether to perform a volume-up or a volume-down operation on the target device can be determined according to the change direction indicated by the position change information, and the increased or decreased volume value can be determined according to the change distance. For example, if it is detected that the first target gesture moves from bottom to top, a volume-up operation is performed on the target device, and the increased volume value can be determined according to the bottom-to-top change distance and the current volume value; if it is detected that the first target gesture moves from top to bottom, a volume-down operation is performed on the target device, and the decreased volume value can be determined according to the top-to-bottom change distance and the current volume value.
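A hedged sketch of the volume mapping just described is shown below; the clamping range and the distance-to-volume scale are assumptions of the sketch, not values given by the disclosure.

```python
def adjust_volume(current_volume: int, direction: str, distance: float,
                  volume_per_pixel: float = 0.05) -> int:
    """Derive the new volume value from the change direction and change distance."""
    delta = int(distance * volume_per_pixel)   # assumed scale factor
    if direction == "bottom_to_top":
        new_volume = current_volume + delta    # volume-up operation
    elif direction == "top_to_bottom":
        new_volume = current_volume - delta    # volume-down operation
    else:
        new_volume = current_volume
    return max(0, min(100, new_volume))        # assumed 0-100 volume range
```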
When the operation corresponding to the first target gesture is adjusting the working mode of the target device, the working mode of the target device can be adjusted to a preset mode corresponding to the first target gesture. For example, the video playing function of the target device may be turned on and the working mode adjusted to a video playing mode; or the video playing function of the target device may be turned off and the working mode adjusted to an interface display mode; or the drawing function of the target device may be turned on and the working mode adjusted to a drawing mode. The type of the preset mode can be set according to the functions of the target device.
When the operation corresponding to the first target gesture is displaying a mobile identifier in the display interface of the target device, the operation position of the first target gesture on the display interface can be determined based on the conversion relationship between the image coordinate system corresponding to the second hand image and the interface coordinate system corresponding to the display interface, together with the target position information of the first target gesture in the second hand image, and the mobile identifier can be displayed at the operation position. The mobile identifier may be a moving cursor, and the shape of the moving cursor may be set as required; for example, it may be a palm shape, an arrow shape, or the like.
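The conversion between the image coordinate system and the interface coordinate system can be sketched as a simple proportional mapping; the resolutions in the usage line below are illustrative assumptions, and an actual conversion relationship may be calibrated differently.

```python
def image_to_interface(point, image_size, interface_size):
    """Map a gesture position from image coordinates to display-interface coordinates."""
    x, y = point
    img_w, img_h = image_size
    ui_w, ui_h = interface_size
    return int(x / img_w * ui_w), int(y / img_h * ui_h)


# e.g. a pinch detected at (320, 180) in a 640x360 image maps to (960, 540)
# on a 1920x1080 interface, where the mobile identifier could be displayed.
cursor_position = image_to_interface((320, 180), (640, 360), (1920, 1080))
```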
When the operation corresponding to the first target gesture is adjusting the display position of the mobile identifier in the display interface, the display position of the mobile identifier in the display interface can be adjusted according to the target position information of the first target gesture in each second hand image.
When the operation corresponding to the first target gesture is zooming out or zooming in at least part of the display content in the display interface, a content zoom-in operation and its zoom-in ratio, or a content zoom-out operation and its zoom-out ratio, can be determined according to the change direction and change distance indicated by the position change information. Part of the display content in the display interface is then enlarged according to the determined zoom-in ratio, or reduced according to the determined zoom-out ratio. For example, when the change direction is from bottom to top, a content zoom-in operation on the target device is determined; when the change direction is from top to bottom, a content zoom-out operation on the target device is determined.
When the operation corresponding to the first target gesture is sliding the display interface, the sliding operation on the display interface of the target device can be determined according to the change direction indicated by the position change information. For example, when the change direction is from right to left, a leftward sliding operation may be performed on the display interface, and the display interface is controlled to display the corresponding content after the leftward slide; when the change direction is from left to right, a rightward sliding operation may be performed, and the display interface is controlled to display the corresponding content after the rightward slide.
When the operation corresponding to the first target gesture is a jump of the display interface, the display interface can be controlled to display preset display content corresponding to the jump operation. Alternatively, the operation position of the first target gesture on the display interface may be determined based on the conversion relationship between the image coordinate system corresponding to the second hand image and the interface coordinate system corresponding to the display interface, together with the target position information of the first target gesture in the second hand image, and the display interface may be controlled to display the jump content corresponding to the jump operation executed at that operation position.
When the operation corresponding to the first target gesture is selecting a preset function in the display interface, the preset function can be set as required; for example, it may be a pattern special effect, a paintbrush function, or the like. When the preset function is a paintbrush function, the target device can be controlled to select the paintbrush function, and the corresponding drawing content can be displayed on the display interface through the movement of the first target gesture. When the preset function is a pattern special effect, the set pattern can be displayed at the operation position on the display interface based on the target position information of the first target gesture in the second hand image.
In a possible implementation, after controlling the target device to perform an operation corresponding to the first target gesture, the method further includes:
acquiring a third hand image of the target user;
and releasing the operation control over the target device based on the first target gesture when no hand motion of the target user exists in the third hand image, or when it is detected, based on the third hand image, that the third gesture category of the target user does not match the first target gesture.
Here, after the target device is controlled to execute the operation corresponding to the first target gesture, a third hand image of the target user may be acquired and recognized. When it is recognized that no hand motion of the target user exists in the third hand image, the operation control over the target device based on the first target gesture may be released, and the process may return to S101 to acquire a first hand image of the target user.
Alternatively, when it is detected that the third gesture category of the target user does not match the first target gesture based on the third hand image, the operation control over the target device based on the first target gesture may be released, and the target device may be controlled based on the target gesture indicated by the third gesture category.
Referring to a flow chart of another apparatus control method shown in fig. 2, the method may include:
s201: a first hand image of a target user is acquired.
The target user is any user who controls the target device.
S202: and recognizing the first hand image to obtain a first gesture category corresponding to the first hand image.
S203: and judging whether the first gesture category is matched with the set pre-trigger gesture.
S204: and if so, acquiring a second hand image of the target user.
And if not, controlling the target equipment to execute the operation corresponding to the gesture indicated by the first gesture type.
S205: and recognizing the second hand image to obtain a second hand type corresponding to the second hand image.
S206: and when the second gesture category is matched with the first target gesture, controlling the target device to execute the operation corresponding to the first target gesture.
S207: when it is detected that the second gesture type is the same as the pre-trigger gesture, the hand image of the target user is acquired, and the acquired hand image is taken as the first hand image, and the process returns to step S202.
Alternatively, the target device may be controlled to perform an operation corresponding to the pre-trigger gesture.
S208: and when the second gesture category is detected to be matched with a set second target gesture except the first target gesture and the pre-trigger gesture, controlling the target device to execute an operation corresponding to the second target gesture.
S209: acquiring a third hand image of the target user; and when the hand motion of the target user does not exist in the third hand image or the third gesture type of the target user is detected to be not matched with the first target gesture based on the third hand image, the operation control of the target device based on the first target gesture is released.
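For orientation only, the S201-S209 flow can be condensed into the following control-loop sketch; the helper callables (`get_hand_image`, `recognize`, `control`) and the gesture labels are hypothetical placeholders, not interfaces defined by the disclosure.

```python
def device_control_loop(get_hand_image, recognize, control,
                        pre_trigger: str = "palm",
                        first_target: str = "three_finger_pinch"):
    """Sketch of S201-S209: recognize(image) returns a gesture label, or None when
    no hand is present; control(gesture, images) drives the target device."""
    while True:
        first_image = get_hand_image()                     # S201
        first_category = recognize(first_image)            # S202
        if first_category != pre_trigger:                  # S203
            if first_category is not None:
                control(first_category, [first_image])     # operation for that gesture
            continue

        second_image = get_hand_image()                    # S204
        second_category = recognize(second_image)          # S205

        if second_category == first_target:                # S206
            control(first_target, [second_image])
            while True:                                    # S209
                third_image = get_hand_image()
                if recognize(third_image) != first_target:
                    break                                  # release control, back to S201
                control(first_target, [third_image])
        elif second_category == pre_trigger:               # S207: treat as a new first image
            continue
        elif second_category is not None:                  # S208: other target gesture
            control(second_category, [second_image])
```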
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides an apparatus control device, as shown in fig. 3, which is an architecture schematic diagram of the apparatus control device provided in the embodiment of the present disclosure, and includes a first obtaining module 301, a second obtaining module 302, and a first control module 303, specifically:
a first obtaining module 301, configured to obtain a first hand image of a target user, where the target user is a user who controls a target device;
a second obtaining module 302, configured to obtain a second hand image of the target user when it is recognized that the first gesture category of the target user in the first hand image matches a set pre-trigger gesture;
a first control module 303, configured to control the target device to perform an operation corresponding to a first target gesture when it is detected that a second gesture category of the target user matches a set first target gesture based on the second hand image.
In one possible implementation, the first control module 303, when detecting that the second gesture category of the target user matches the set first target gesture based on the second hand image, is configured to:
identifying the second hand image and determining a second identification result; the second recognition result comprises a confidence coefficient that the hand action of the target user in the second hand image belongs to each preset gesture;
determining that the second gesture category of the target user is matched with the set first target gesture under the condition that the second recognition result indicates that the first confidence coefficient of the hand action of the target user belonging to the first target gesture is greater than the set first threshold value and the second confidence coefficient of the hand action belonging to other gestures except the first target gesture is less than the set second threshold value; wherein the first target gesture is a three-finger pinch gesture.
In a possible embodiment, the apparatus further comprises: a second control module 304 to:
controlling the target device to execute an operation corresponding to a second target gesture when it is detected, based on the second hand image, that the second gesture category of the target user matches a set second target gesture other than the first target gesture and the pre-trigger gesture.
In one possible implementation, the second obtaining module 302, when obtaining the second hand image of the target user in the case that it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture, is configured to:
determining, in the case that it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture, a cutoff time separated from the acquisition time of the first hand image by a preset duration;
acquiring a second hand image of the target user between the acquisition time of the first hand image and the cutoff time.
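The window-based acquisition can be sketched as follows; the preset duration and the capture function are illustrative assumptions.

import time

PRESET_DURATION = 2.0    # seconds; assumed value of the preset duration

def collect_second_images(capture_hand_image, first_image_time):
    # The cutoff time is separated from the acquisition time of the first hand
    # image by the preset duration; images captured after it are ignored.
    cutoff_time = first_image_time + PRESET_DURATION
    second_images = []
    while time.time() <= cutoff_time:
        second_images.append(capture_hand_image())
    return second_images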
In a possible embodiment, the apparatus further comprises: a third obtaining module 305, configured to:
in response to the second gesture category being the same as the pre-trigger gesture, controlling the target device to execute an operation corresponding to the pre-trigger gesture, or acquiring a hand image of the target user and taking the acquired hand image as the first hand image.
In a possible implementation, the first control module 303, when controlling the target device to perform an operation corresponding to the first target gesture, is configured to:
determining target position information of the first target gesture in each second hand image;
determining position change information of the first target gesture between adjacent second hand images based on the target position information, where the position change information includes at least one of a change direction and a change distance;
controlling the target device to execute the operation corresponding to the first target gesture based on the position change information.
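The computation of position change information between adjacent second hand images can be sketched as follows; the pixel coordinate convention (x to the right, y downward) is an assumption made only for this example.

import math

def position_change(previous_xy, current_xy):
    # previous_xy / current_xy: (x, y) target positions of the first target
    # gesture detected in two adjacent second hand images.
    dx = current_xy[0] - previous_xy[0]
    dy = current_xy[1] - previous_xy[1]
    change_distance = math.hypot(dx, dy)
    if abs(dx) >= abs(dy):
        change_direction = "right" if dx > 0 else "left"
    else:
        change_direction = "down" if dy > 0 else "up"
    return change_direction, change_distance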
In one possible implementation, the operation corresponding to the first target gesture that the first control module 303 controls the target device to perform includes at least one of:
adjusting the volume of the target device;
adjusting an operating mode of the target device, wherein the operating mode comprises turning off or turning on at least part of functions of the target device;
displaying a mobile identifier in a display interface of the target device, or adjusting a display position of the mobile identifier in the display interface;
zooming out or zooming in at least part of display content in the display interface;
fast forwarding or fast rewinding of display content in the display interface;
sliding or jumping of the display interface;
and selecting a preset function in the display interface.
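Purely as an example, one possible pairing of the position change information with the operations listed above is sketched below; the specific mapping and the device interface (adjust_volume, move_identifier) are assumptions and are not fixed by this disclosure.

def apply_operation(device, change_direction, change_distance, volume_step=2):
    # Assumed mapping: vertical movement of the first target gesture adjusts the
    # volume, horizontal movement moves the identifier in the display interface.
    if change_direction == "up":
        device.adjust_volume(+volume_step)
    elif change_direction == "down":
        device.adjust_volume(-volume_step)
    elif change_direction in ("left", "right"):
        device.move_identifier(change_direction, change_distance)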
In one possible embodiment, after the target device is controlled to perform the operation corresponding to the first target gesture, the apparatus further includes a detection module 306, configured to:
acquiring a third hand image of the target user;
releasing the operation control of the target device based on the first target gesture when no hand motion of the target user exists in the third hand image, or when it is detected, based on the third hand image, that the third gesture category of the target user does not match the first target gesture.
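A minimal sketch of the release condition checked by the detection module 306 is given below; the hand detector and gesture recognizer are hypothetical placeholders.

def should_release_control(third_image, detect_hand, recognize_gesture,
                           first_target="three_finger_pinch"):
    # Release when no hand motion of the target user is present, or when the
    # third gesture category no longer matches the first target gesture.
    if not detect_hand(third_image):
        return True
    return recognize_gesture(third_image) != first_target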
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules included therein, may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
Based on the same technical concept, the embodiment of the present disclosure also provides an electronic device. Referring to fig. 4, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, the electronic device includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes a memory 4021 and an external memory 4022. The memory 4021, also referred to as an internal memory, is configured to temporarily store operation data in the processor 401 and data exchanged with the external memory 4022 such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the memory 4021. When the electronic device 400 operates, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
acquiring a first hand image of a target user, wherein the target user is a user for controlling a target device;
acquiring a second hand image of the target user when it is recognized, based on the first hand image, that the first gesture category of the target user matches a set pre-trigger gesture;
and controlling the target device to execute an operation corresponding to a first target gesture when it is detected, based on the second hand image, that a second gesture category of the target user matches a set first target gesture.
The specific processing flow of the processor 401 may refer to the description of the above method embodiment, and is not described herein again.
Furthermore, the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the device control method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries program code, and the instructions included in the program code may be used to execute the steps of the device control method in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not described herein again.

In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only one logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical, or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A device control method, characterized by comprising:
acquiring a first hand image of a target user, wherein the target user is a user for controlling a target device;
acquiring a second hand image of the target user when it is recognized, based on the first hand image, that the first gesture category of the target user matches a set pre-trigger gesture;
and controlling the target device to execute an operation corresponding to a first target gesture when it is detected, based on the second hand image, that a second gesture category of the target user matches a set first target gesture.
2. The method of claim 1, wherein detecting that the second gesture category of the target user matches the set first target gesture based on the second hand image comprises:
recognizing the second hand image and determining a second recognition result, wherein the second recognition result comprises a confidence that the hand motion of the target user in the second hand image belongs to each preset gesture;
determining that the second gesture category of the target user matches the set first target gesture when the second recognition result indicates that a first confidence, with which the hand motion of the target user belongs to the first target gesture, is greater than a set first threshold, and that a second confidence, with which the hand motion belongs to any gesture other than the first target gesture, is less than a set second threshold; wherein the first target gesture is a three-finger pinch gesture.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
controlling the target device to execute an operation corresponding to a second target gesture when it is detected, based on the second hand image, that the second gesture category of the target user matches a set second target gesture other than the first target gesture and the pre-trigger gesture.
4. The method according to any one of claims 1 to 3, wherein acquiring the second hand image of the target user when it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture comprises:
determining, in the case that it is recognized that the first gesture category of the target user in the first hand image matches the set pre-trigger gesture, a cutoff time separated from the acquisition time of the first hand image by a preset duration;
acquiring a second hand image of the target user between the acquisition time of the first hand image and the cutoff time.
5. The method according to any one of claims 1 to 4, further comprising:
in response to the second gesture category being the same as the pre-trigger gesture, controlling the target device to execute an operation corresponding to the pre-trigger gesture, or acquiring a hand image of the target user and taking the acquired hand image as the first hand image.
6. The method according to any one of claims 1 to 5, wherein the controlling the target device to perform the operation corresponding to the first target gesture comprises:
determining target position information of the first target gesture in each second hand image;
determining position change information of the first target gesture between adjacent second hand images based on the target position information, wherein the position change information comprises at least one of a change direction and a change distance;
controlling the target device to execute the operation corresponding to the first target gesture based on the position change information.
7. The method according to any one of claims 1 to 6, wherein controlling the target device to perform an operation corresponding to the first target gesture comprises at least one of:
adjusting the volume of the target device;
adjusting an operating mode of the target device, wherein the operating mode comprises turning off or turning on at least part of functions of the target device;
displaying a mobile identifier in a display interface of the target device, or adjusting a display position of the mobile identifier in the display interface;
zooming out or zooming in at least part of display content in the display interface;
fast forwarding or fast rewinding of display content in the display interface;
sliding or jumping of the display interface;
and selecting a preset function in the display interface.
8. The method according to any one of claims 1 to 7, wherein after controlling the target device to perform the operation corresponding to the first target gesture, the method further comprises:
acquiring a third hand image of the target user;
releasing the operation control of the target device based on the first target gesture when no hand motion of the target user exists in the third hand image, or when it is detected, based on the third hand image, that the third gesture category of the target user does not match the first target gesture.
9. A device control apparatus, characterized by comprising:
a first obtaining module, configured to obtain a first hand image of a target user, wherein the target user is a user who controls a target device;
a second obtaining module, configured to obtain a second hand image of the target user when it is recognized that the first gesture category of the target user in the first hand image matches a set pre-trigger gesture;
and a first control module, configured to control the target device to execute an operation corresponding to a first target gesture when it is detected, based on the second hand image, that a second gesture category of the target user matches a set first target gesture.
10. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the device control method according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the device control method according to any one of claims 1 to 8.
CN202111129883.2A 2021-09-26 2021-09-26 Device control method, device, electronic device and storage medium Pending CN113791548A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111129883.2A CN113791548A (en) 2021-09-26 2021-09-26 Device control method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111129883.2A CN113791548A (en) 2021-09-26 2021-09-26 Device control method, device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113791548A true CN113791548A (en) 2021-12-14

Family

ID=79184410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111129883.2A Pending CN113791548A (en) 2021-09-26 2021-09-26 Device control method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113791548A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103858074A (en) * 2011-08-04 2014-06-11 视力移动技术有限公司 System and method for interfacing with a device via a 3d display
CN104298454A (en) * 2013-07-15 2015-01-21 霍尼韦尔国际公司 User interface navigation system and method used for smart home system
CN107422859A (en) * 2017-07-26 2017-12-01 广东美的制冷设备有限公司 Regulation and control method, apparatus and computer-readable recording medium and air-conditioning based on gesture
CN108958490A (en) * 2018-07-24 2018-12-07 Oppo(重庆)智能科技有限公司 Electronic device and its gesture identification method, computer readable storage medium
CN109991859A (en) * 2017-12-29 2019-07-09 青岛有屋科技有限公司 A kind of gesture instruction control method and intelligent home control system
CN110426962A (en) * 2019-07-30 2019-11-08 苏宁智能终端有限公司 A kind of control method and system of smart home device
CN110472396A (en) * 2018-08-17 2019-11-19 中山叶浪智能科技有限责任公司 A kind of body-sensing gesture touch control method, system, platform and storage medium
CN112686169A (en) * 2020-12-31 2021-04-20 深圳市火乐科技发展有限公司 Gesture recognition control method and device, electronic equipment and storage medium
CN112987933A (en) * 2021-03-25 2021-06-18 北京市商汤科技开发有限公司 Device control method, device, electronic device and storage medium
CN113031464A (en) * 2021-03-22 2021-06-25 北京市商汤科技开发有限公司 Device control method, device, electronic device and storage medium
CN113219851A (en) * 2021-06-16 2021-08-06 徐秀改 Control device of intelligent household equipment, control method thereof and storage medium


Similar Documents

Publication Publication Date Title
US20180024643A1 (en) Gesture Based Interface System and Method
JP5665140B2 (en) Input device, input method, and program
CN108781252B (en) Image shooting method and device
CN112987933A (en) Device control method, device, electronic device and storage medium
WO2019174398A1 (en) Method, apparatus, and terminal for simulating mouse operation by using gesture
EP4307096A1 (en) Key function execution method, apparatus and device, and storage medium
CN112506340A (en) Device control method, device, electronic device and storage medium
CN111596757A (en) Gesture control method and device based on fingertip interaction
EP4180933A1 (en) Device control method and apparatus, and storage medium and electronic device
CN105867822B (en) Information processing method and electronic equipment
CN113031464B (en) Device control method, device, electronic device and storage medium
CN110858291A (en) Character segmentation method and device
CN115061577B (en) Hand projection interaction method, system and storage medium
CN113791548A (en) Device control method, device, electronic device and storage medium
CN111382598A (en) Identification method and device and electronic equipment
KR20190132885A (en) Apparatus, method and computer program for detecting hand from video
CN112306242A (en) Interaction method and system based on book-space gestures
CN112099634A (en) Interactive operation method and device based on head action, storage medium and terminal
CN113835527A (en) Device control method, device, electronic device and storage medium
JP5565886B2 (en) Input device, input method, and program
CN107643821B (en) Input control method and device and electronic equipment
CN106293435B (en) Information processing method and electronic equipment
US20180196524A1 (en) Hover interaction using orientation sensing
Dave et al. Project MUDRA: Personalization of Computers using Natural Interface
CN105867806B (en) Input method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination