CN114740977A - Non-contact human-computer interaction method and device - Google Patents


Info

Publication number
CN114740977A
Authority
CN
China
Prior art keywords
instruction
computer interaction
image information
display
virtual
Prior art date
Legal status
Withdrawn
Application number
CN202210374751.4A
Other languages
Chinese (zh)
Inventor
李辉
Current Assignee
Guangzhou Yuyue Technology Co ltd
Original Assignee
Guangzhou Yuyue Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Yuyue Technology Co ltd
Priority to CN202210374751.4A
Publication of CN114740977A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a non-contact human-computer interaction method and device. Unlike most existing interaction methods based on gesture recognition, the method recognizes not only the plane position of the manipulation point in the user hand image information but also its depth position, and outputs a corresponding control instruction according to the depth change of the manipulation point position, thereby improving the richness and flexibility of instruction invocation in contactless human-computer interaction.

Description

Non-contact human-computer interaction method and device
Technical Field
The present application relates to the field of human-computer interaction technologies, and in particular, to a contactless human-computer interaction method and apparatus.
Background
Non-contact human-computer interaction means sending a control instruction to a device without touching it. It is a new interaction mode following contact-based human-computer interaction through a mouse, keyboard, touch screen and the like; it can avoid the contact transmission of viruses and the operational inconvenience that traditional interaction modes may bring, and can be widely applied in fields such as virtual reality and augmented reality.
At present, the commonly used non-contact human-computer interaction modes are mostly realized by voice control, wireless signal control and the like, but their applicability is limited by the application scenario, which makes them difficult to popularize widely. Therefore, non-contact human-computer interaction based on image recognition has attracted the attention of research and development teams in the field.
Most existing image-recognition-based non-contact human-computer interaction modes use a camera to acquire a two-dimensional plane image of the user's hand and then apply image processing to recognize a finger position or gesture outline so as to determine the user's control instruction. However, because the processing is based on a two-dimensional plane image, the variety of control instructions is limited, and in some complex software it is difficult to invoke control instructions accurately and quickly.
Disclosure of Invention
The present application aims to provide a contactless human-computer interaction method and apparatus that can alleviate the above problems.
The embodiment of the application is realized as follows:
in a first aspect, the present application provides a contactless human-computer interaction method, which includes:
acquiring user hand image information in an operation space, wherein the user hand image information comprises pixel plane image information and pixel depth information;
identifying a control point in the user hand image information;
and outputting a corresponding control instruction according to the position depth change of the control point.
It can be understood that, unlike most existing interaction methods based on gesture recognition, the contactless human-computer interaction method disclosed in the first aspect recognizes not only the plane position of the manipulation point in the user hand image information but also its depth position, and outputs a corresponding control instruction according to the position depth change of the manipulation point, thereby improving the richness and flexibility of instruction invocation in contactless human-computer interaction.
Correspondingly, the present application also provides a contactless human-computer interaction device, which includes: an information acquisition module for acquiring user hand image information in an operation space, wherein the user hand image information comprises pixel plane image information and pixel depth information; a manipulation recognition module for identifying a manipulation point in the user hand image information; and an instruction processing module for outputting a corresponding control instruction according to the position depth change of the manipulation point.
In an optional embodiment of the present application, the outputting a corresponding control instruction according to the position depth change of the manipulation point includes: taking the number of position depth change cycles of the manipulation point at the same plane coordinate position within a first time threshold as an instruction number; and outputting a corresponding control instruction according to the instruction number.
The first time threshold is limited by the depth camera and is not less than the time the depth camera takes to acquire one frame. For example, if the depth camera captures depth information at 30 FPS, i.e. 30 frames of depth information per second, the minimum value of the first time threshold is 1/30 second.
It can be understood that a position depth change cycle of the manipulation point at the same plane coordinate position is a process in which the depth of the manipulation point at that plane position changes from deep to shallow and back to deep, i.e. a click by the user's controlling finger. The instruction number is therefore the number of clicks within the first time threshold: if the instruction number is 1, it is interpreted as a single-click operation and the corresponding control instruction, such as a selection instruction, is output; if the instruction number is 2, it is interpreted as a double-click operation and the corresponding control instruction, such as a program-opening instruction, is output. The correspondence between the instruction number and the control instruction can be preset or user-defined; for example, the user can define which control instruction corresponds to a single-click operation, which to a double-click operation, and so on. A minimal sketch of such a mapping is shown below.
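The following Python sketch illustrates one possible realization of this instruction-number mapping. The mapping table, the instruction names and the function name are illustrative assumptions and are not part of the disclosure.

```python
from typing import Optional

# Illustrative sketch only: the instruction set and names are assumptions.
INSTRUCTION_MAP = {
    1: "SELECT",        # one depth-change cycle (single click) -> selection instruction
    2: "OPEN_PROGRAM",  # two cycles (double click) -> program-opening instruction
}

def dispatch_instruction(instruction_number: int) -> Optional[str]:
    """Return the control instruction for the number of depth-change cycles
    counted at the same plane coordinate within the first time threshold."""
    return INSTRUCTION_MAP.get(instruction_number)  # None if no instruction is bound
```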
It should be noted that the method does not determine the user's instruction solely through single-click or double-click actions; it can also make a comprehensive judgment according to parameters such as the position, speed change and moving direction of the manipulation point, which is readily conceivable by those skilled in the art from the prior art. The correspondence between the control instruction and parameters such as the position, speed change and moving direction of the manipulation point may likewise be preset or user-defined; this is also readily apparent to those skilled in the art from the prior art and is not described in detail here.
In an optional embodiment of the present application, the outputting a corresponding control instruction according to a change in the depth of the position of the manipulation point includes: taking the plane motion track of the control point in a second time threshold and the depth track corresponding to the plane motion track as a control point track; and outputting a corresponding control instruction according to the control point track.
The second time threshold is likewise limited by the depth camera and is not less than the time the depth camera takes to acquire one frame; for a depth camera capturing depth information at 30 FPS, the minimum value of the second time threshold is 1/30 second.
It can be understood that most existing gesture-recognition-based interaction methods only process the pixel plane image information in the user hand image information, so the recognized trajectory of the manipulation point is also planar, which easily leads to errors in practical applications. The present method considers both the plane motion track of the manipulation point and its depth track, and outputs the corresponding control instruction only when both tracks meet the preset conditions, which greatly improves the accuracy of contactless human-computer interaction.
In a second aspect, the present application provides another contactless human-computer interaction method, which includes:
acquiring user hand image information in an operation space, wherein the user hand image information comprises pixel plane image information and pixel depth information;
identifying a control point in the user hand image information;
taking the position depth change value of the control point in the same plane coordinate position within the third time threshold as an instruction parameter value;
and outputting a menu opening instruction according to the instruction parameter value, so that the display equipment displays the corresponding instruction menu.
The third time threshold is likewise limited by the depth camera and is not less than the time the depth camera takes to acquire one frame; for a depth camera capturing depth information at 30 FPS, the minimum value of the third time threshold is 1/30 second.
The non-contact human-computer interaction method can invoke the instruction menu corresponding to the position depth change value of the manipulation point. That is, pressing the user's finger to different depths calls up the instruction menu corresponding to that depth. Compared with the traditional single-layer instruction menu called by a right mouse click, the method lets the user easily call a multi-layer instruction menu, increasing the number of instructions that can be quickly invoked.
Correspondingly, the application also provides another contactless human-computer interaction device, and compared with the contactless human-computer interaction device disclosed in the first aspect, the contactless human-computer interaction device further comprises a multilayer instruction menu module, wherein the multilayer instruction menu module is used for taking a position depth change value of the control point in the third time threshold value at the same plane coordinate position as an instruction parameter value; and outputting a menu opening instruction according to the instruction parameter value, so that the display equipment displays the corresponding instruction menu.
In an optional embodiment of the present application, the outputting a menu opening instruction according to the instruction parameter value to enable a display device to display a corresponding instruction menu includes: under the condition that the instruction parameter value reaches a first parameter threshold value and does not reach a second parameter threshold value, outputting a first menu opening instruction to enable display equipment to display a first instruction menu; and under the condition that the instruction parameter value reaches a second parameter threshold value, outputting a second menu opening instruction to enable the display equipment to display a second instruction menu.
Correspondingly, the multilayer instruction menu module is specifically configured to output a first menu opening instruction when the instruction parameter value reaches a first parameter threshold value and does not reach a second parameter threshold value, so that the display device displays the first instruction menu; and under the condition that the instruction parameter value reaches a second parameter threshold value, outputting a second menu opening instruction to enable the display equipment to display a second instruction menu.
In a third aspect, the present application provides another contactless human-computer interaction method, including:
displaying a corresponding interactive interface through display equipment;
acquiring user hand image information in an operation space, wherein the user hand image information comprises pixel plane image information and pixel depth information;
identifying a control point in the user hand image information;
displaying the position mark of the operation point on the interactive interface through a display device;
and outputting a corresponding control instruction according to the position depth change of the control point.
It can be understood that the non-contact human-computer interaction method disclosed by the application not only can identify the position of the control point in the hand image information of the user, but also can display the control point on the interaction interface in real time, so that the user can know the position of the control point controlled by the user on the interaction interface in time, and the interaction effect of human-computer interaction is improved.
Correspondingly, the present application further provides another contactless human-computer interaction device, which, compared with the contactless human-computer interaction device disclosed in the first aspect, further includes: the instruction feedback display module is used for displaying a corresponding interactive interface through display equipment; and displaying the position mark of the operation point on the interactive interface through a display device.
In an alternative embodiment of the present application, the method further comprises:
acquiring a virtual display scale factor and a virtual real scale factor, wherein the virtual display scale factor is the size ratio of a virtual interface to a display picture, and the virtual real scale factor is the size ratio of the virtual interface to a real picture;
the displaying of the corresponding interactive interface through the display device specifically includes: mapping the virtual interface on the display picture through the virtual display scale factor to form the interactive interface;
the displaying, by a display device, the position mark of the operation point on the interactive interface includes: and mapping the real coordinates of the operation points on the virtual interface through the virtual real scale factor, and mapping the virtual coordinates of the operation points on the virtual interface on the display picture through the virtual display scale factor to form the position mark.
The non-contact human-computer interaction method can map the virtual interface onto the display pictures of different display devices according to the virtual display scale factor, ensuring good presentation of the interaction interface, and can map the real coordinates of the operation point onto the virtual interface according to the virtual real scale factor, ensuring the control accuracy of human-computer interaction. In summary, coordinate conversion among the virtual interface, the display picture and the real picture is realized through the calculation of the virtual display scale factor and the virtual real scale factor.
Correspondingly, the non-contact human-computer interaction device also comprises a coordinate mapping module, wherein the coordinate mapping module is used for acquiring a virtual display scale factor and a virtual real scale factor, and mapping the virtual interface on the display picture through the virtual display scale factor to form the interaction interface; and mapping the real coordinates of the operation points on the virtual interface through the virtual real scale factor, and mapping the virtual coordinates of the operation points on the virtual interface on the display picture through the virtual display scale factor to form the position mark.
In a fourth aspect, the present application provides a contactless human-computer interaction method, including:
acquiring user hand image information in each operation subspace, wherein the user hand image information comprises pixel plane image information and pixel depth information;
identifying control points in user hand image information in each operation subspace;
and outputting a control instruction of the instruction category corresponding to the operation subspace according to the position depth change of the control point.
Correspondingly, the present application further provides another contactless human-computer interaction device, which includes: the information acquisition module is specifically used for acquiring user hand image information in each operation subspace; the control identification module is specifically used for identifying control points in the user hand image information in each operation subspace; the instruction processing module is specifically configured to output a control instruction of an instruction category corresponding to the operation subspace according to the position depth change of the control point.
In a fifth aspect, the present application provides a method for contactless human-computer interaction, where compared to the method disclosed in the first aspect, the method of the fifth aspect further includes:
extracting gesture outlines in the user hand image information and identifying user gestures;
and outputting a corresponding control instruction according to the user gesture.
Correspondingly, the control recognition module in the non-contact human-computer interaction device is also used for extracting a gesture outline in the user hand image information and recognizing the user gesture; and the instruction processing module is also used for outputting a corresponding control instruction according to the user gesture.
It can be understood that, different from the recognition of the position depth change of the manipulation point, the method can also set the control instruction corresponding to the specific gesture through a preset gesture or a user-defined gesture, and when the specific gesture is recognized by the user, the corresponding control instruction is output. For example, the determination instruction is output when the user has made an OK gesture, the screen capture instruction is output when the user has made a five-finger stretch gesture, and the like.
In a sixth aspect, the present application discloses a contactless human-computer interaction apparatus, comprising a processor, a depth image acquisition device, a display device, and a memory, wherein the processor, the depth image acquisition device, the display device, and the memory are connected to each other, wherein the memory is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to execute any one of the methods of the first aspect to the fourth aspect.
In an optional embodiment of the present application, the depth image acquiring device includes a planar camera and a depth camera; the planar camera is used for shooting a hand of a user in the operation space to obtain the pixel planar image information of the hand image information of the user; the depth camera is used for shooting the hand of the user in the operation space to obtain the pixel depth information of the image information of the hand of the user.
In alternative embodiments of the present application, the display device comprises a flat panel display device, a projection device, a VR display device, or an AR display device.
In a seventh aspect, the present application discloses a computer readable storage medium, having stored thereon a computer program comprising program instructions, which when executed by a processor, cause the processor to perform any of the methods of the first to fourth aspects.
Beneficial effects:
the non-contact human-computer interaction method and the non-contact human-computer interaction device are different from most of the existing interaction methods based on gesture recognition, the plane position of an operation point in user hand image information needs to be recognized, the depth position of the operation point needs to be recognized, a corresponding control instruction is output according to the position depth change of the operation point, and the richness and flexibility of instruction calling in the non-contact human-computer interaction are enriched.
The non-contact human-computer interaction method and the non-contact human-computer interaction device not only consider the plane motion track of the control point, but also consider the depth track of the control point, and only output a corresponding control instruction when the plane motion track and the depth track of the control point both meet preset conditions. The accuracy of contactless human-computer interaction is greatly improved.
The non-contact human-computer interaction method and the non-contact human-computer interaction device can call an instruction menu corresponding to the position depth change value according to the position depth change value of the control point. That is, pressing different depths by the user's finger will call the instruction menu corresponding to the depth. Compared with the traditional single-layer instruction menu calling function of the right mouse button, the method can enable a user to easily call a multi-layer instruction menu, and enriches the instruction quantity of quick calling.
The non-contact human-computer interaction method and the non-contact human-computer interaction device can not only identify the position of the control point in the hand image information of the user, but also display the control point on the interaction interface in real time, so that the user can know the position of the control point controlled by the user on the interaction interface in time, and the human-computer interaction effect is improved.
According to the non-contact human-computer interaction method and the non-contact human-computer interaction device, the virtual interface can be mapped on the display pictures of different display equipment according to the virtual display scale factor so as to ensure good presentation of the interaction interface; and the real coordinates of the operation points can be mapped on the virtual interface according to the virtual real scale factor so as to ensure the control accuracy of human-computer interaction.
The non-contact human-computer interaction method and the non-contact human-computer interaction device can acquire the hand image information of the user in different operation subspaces, recognize each control point, and output a corresponding control instruction according to the position change, particularly the depth change, of each control point. The position depth change of one control point corresponds to one control instruction, and the method can realize an application scene that a user can control a plurality of control points in an operation space to output a plurality of control instructions at the same time.
To make the aforementioned objects, features and advantages of the present application more comprehensible, alternative embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flow chart of a contactless human-computer interaction method 10 provided in a first aspect of the present application;
FIG. 2 is a schematic diagram of partial pixel plane image information of the user hand image information in the present application;
FIG. 3 is a schematic diagram of partial pixel depth information of user hand image information in the present application;
fig. 4 is a schematic structural diagram of a contactless human-computer interaction device 100 provided in the first aspect of the present application;
FIG. 5 is a diagram illustrating a user controlling a control point to perform a clicking operation;
FIG. 6 is a diagram illustrating a user controlling a control point to perform a sliding operation;
FIG. 7 is a flow chart of a contactless human-computer interaction method 20 provided by a second aspect of the present application;
FIG. 8 is a diagram of a user controlling a control point to make a command menu call;
fig. 9 is a schematic structural diagram of a contactless human-computer interaction device 200 provided in the second aspect of the present application;
FIG. 10 is a flow chart illustrating a contactless human-computer interaction method 30 provided by a third aspect of the present application;
FIG. 11 is a schematic structural diagram of a contactless human-computer interaction device 300 according to a third aspect of the present application;
FIG. 12 is a schematic view of a virtual interface;
FIG. 13 is a diagram illustrating a display screen for directly displaying a virtual interface on a display device;
FIG. 14 is a schematic illustration of mapping a virtual interface onto a display screen of a display device;
FIG. 15 is a schematic flow chart of a contactless human-computer interaction method 40 according to a fourth aspect of the present application;
fig. 16 is a schematic view of an application scenario of multi-manipulation-point recognition disclosed in the present application;
fig. 17 is a schematic structural diagram of a contactless human-computer interaction device 400 provided in a fourth aspect of the present application;
fig. 18 is a schematic structural diagram of a contactless human-computer interaction device 500 provided in the sixth aspect of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In a first aspect, as shown in fig. 1, the present application provides a contactless human-computer interaction method 10, which includes:
11. Acquiring the hand image information of the user in the operation space.
Different from traditional user hand image information, the user hand image information acquired in the application not only comprises pixel plane image information but also comprises pixel depth information.
In an optional embodiment of the present application, the planar camera shoots the hand of the user in the operation space and obtains the pixel plane image information of the user hand image information. As shown in fig. 2, which is a schematic view of part of the pixel plane image information of the user hand image information in the present application, a 4 x 8 array of pixels is shown, each having its own color gray scale values, namely a red gray scale value, a green gray scale value and a blue gray scale value; pixel A in row 2, column 4 is shown with a red gray scale value of 236, a green gray scale value of 21 and a blue gray scale value of 229.
In an optional embodiment of the present application, the depth camera shoots the hand of the user in the operation space to obtain the pixel depth information of the user hand image information; the depth of each pixel is the distance from the object point corresponding to that pixel to the plane of the depth camera. As shown in fig. 3, which is a schematic diagram of part of the pixel depth information of the user hand image information in the present application, the height corresponding to each pixel is its depth, i.e. the distance from the corresponding object point to the depth camera plane. Pixel A in row 1, column 1 has a value of 80, pixel B in row 3, column 3 has a value of 60, pixel C in row 3, column 4 has a value of 50, and the remaining pixels have a value of 100. That is to say, the object point corresponding to pixel A is 80 cm from the depth camera plane, the object point corresponding to pixel B is 60 cm away, the object point corresponding to pixel C is 50 cm away, and the object points corresponding to the other pixels are 100 cm away. This represents only a single example; the depth value ranges and resolutions measured by different cameras may differ. A sketch of such data follows.
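As a concrete illustration, the following sketch (an assumed NumPy representation, not the patent's own data structure) encodes the 4 x 8 pixel plane patch of FIG. 2 together with a depth patch holding the values quoted for FIG. 3; the 4 x 8 shape of the depth patch is itself an assumption.

```python
import numpy as np

# Assumed representation of the FIG. 2 / FIG. 3 examples (not the patent's data structure).
rgb_patch = np.zeros((4, 8, 3), dtype=np.uint8)  # per-pixel red/green/blue gray scale values
rgb_patch[1, 3] = (236, 21, 229)                 # pixel A of FIG. 2 (row 2, column 4)

depth_patch = np.full((4, 8), 100.0)             # depth in centimetres to the depth camera plane
depth_patch[0, 0] = 80.0                         # pixel A of FIG. 3 (row 1, column 1)
depth_patch[2, 2] = 60.0                         # pixel B of FIG. 3 (row 3, column 3)
depth_patch[2, 3] = 50.0                         # pixel C of FIG. 3 (row 3, column 4)
```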
In an optional embodiment of the present application, the operation space is the shooting range and shooting depth of the planar camera and the depth camera, and is also the effective range and depth within which the user can control the manipulation point.
12. Identifying a control point in the hand image information of the user.
In an optional embodiment of the present application, a gesture contour may be extracted from the user hand image information and a manipulation point identified from it. One method of calculating the manipulation point is to take the pixel with the minimum depth value within the planar area as the effective manipulation point; for example, the manipulation point may be the position pointed to by the user's index finger, which is equivalent to the cursor position of a mouse. A minimal sketch of this rule is shown below.
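The sketch below finds such a manipulation point by locating the pixel with the smallest depth value in a depth map; the function name and the use of NumPy are assumptions made for illustration.

```python
import numpy as np

def find_manipulation_point(depth_map: np.ndarray) -> tuple[int, int, float]:
    """Return (row, column, depth) of the pixel with the minimum depth value,
    i.e. the point of the hand closest to the depth camera plane, taken here
    as the effective manipulation point (analogous to a mouse cursor)."""
    row, col = np.unravel_index(np.argmin(depth_map), depth_map.shape)
    return int(row), int(col), float(depth_map[row, col])

# Applied to the depth patch sketched above, this returns (2, 3, 50.0),
# i.e. pixel C of FIG. 3 (row 3, column 4).
```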
It can be understood that recognizing the manipulation point in this way can omit much of the two-dimensional image analysis and computation, which greatly optimizes the program and is of great value for simple operations that do not require specific gesture recognition, such as ordering dishes, single-click operations, double-click operations and sliding operations.
13. Outputting a corresponding control instruction according to the position depth change of the control point.
It can be understood that, unlike most existing gesture-recognition-based interaction methods, the contactless human-computer interaction method 10 disclosed in the first aspect recognizes not only the plane position of the manipulation point in the user hand image information but also its depth position, and outputs the corresponding control instruction according to the position depth change of the manipulation point, thereby improving the richness and flexibility of instruction invocation in contactless human-computer interaction.
Correspondingly, the present application also provides a contactless human-computer interaction device 100, as shown in fig. 4, which includes: the information acquisition module 110 is configured to acquire user hand image information in an operation space, where the user hand image information includes pixel plane image information and pixel depth information; a manipulation recognition module 120, configured to recognize a manipulation point in the user hand image information; and the instruction processing module 130 is configured to output a corresponding control instruction according to the position depth change of the manipulation point.
Correspondingly, the application also provides a non-contact human-computer interaction device which further comprises a space depth pressing multilayer instruction virtual machine, and the SDPI VM technology is used for outputting a corresponding control instruction according to the position depth change of the control point.
In an alternative embodiment of the present application, step 13 specifically includes: taking the number of position depth change cycles of the manipulation point at the same plane coordinate position within the first time threshold as the instruction number; and outputting a corresponding control instruction according to the instruction number.
It can be understood that a position depth change cycle of the manipulation point at the same plane coordinate position is a process in which the depth of the manipulation point at that plane position changes from deep to shallow and back to deep, i.e. a click by the user's controlling finger. As shown in fig. 5, from the user hand image information acquired by the depth image acquiring device it can be recognized that the manipulation point controlled by the user moves from far to near and back to far, completing one click operation; similar to a mouse click, this can be understood as a selection operation, and a selection instruction is output. If the manipulation point is recognized to complete two click operations within the first time threshold, similar to a mouse double click, it can be understood as an open operation, and a program-opening instruction is output.
That is to say, the instruction number is the number of clicks of the user within the first time threshold: if the instruction number is 1, it is interpreted as a single-click operation and the corresponding control instruction, such as a selection instruction, is output; if the instruction number is 2, it is interpreted as a double-click operation and the corresponding control instruction, such as a program-opening instruction, is output. The correspondence between the instruction number and the control instruction can be preset or user-defined; for example, the user can define which control instruction corresponds to a single-click operation, which to a double-click operation, and so on. A sketch of detecting such depth-change cycles is given below.
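The following sketch counts deep-to-shallow-to-deep cycles in the manipulation point's depth series while its plane coordinates stay fixed. The press threshold, the hysteresis factor and the function name are assumptions, since the description does not specify how much depth excursion counts as a press.

```python
def count_click_cycles(depth_series, press_threshold_cm=5.0):
    """Count deep -> shallow -> deep cycles (clicks) in a depth time series.

    depth_series: depths (cm) of the manipulation point, sampled at the depth
    camera frame rate within the first time threshold while its plane
    coordinates remain (approximately) unchanged.
    """
    cycles = 0
    baseline = depth_series[0]   # assumed resting depth of the fingertip
    pressed = False
    for depth in depth_series[1:]:
        if not pressed and baseline - depth >= press_threshold_cm:
            pressed = True                       # finger has moved closer: press detected
        elif pressed and baseline - depth < press_threshold_cm / 2:
            pressed = False                      # finger has returned: one full cycle
            cycles += 1
    return cycles
```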
It should be noted that the method does not determine the user's instruction merely through single-click or double-click actions; it can also make a comprehensive judgment according to parameters such as the position, speed change and moving direction of the manipulation point, which is readily conceivable by those skilled in the art from the prior art. The correspondence between the control instruction and parameters such as the position, speed change and moving direction of the manipulation point may likewise be preset or user-defined; this is also readily apparent to those skilled in the art from the prior art and is not described in detail here.
In an alternative embodiment of the present application, step 13 specifically includes: taking the plane motion track of the control point in the second time threshold and the depth track corresponding to the plane motion track as the track of the control point; and outputting a corresponding control instruction according to the track of the control point.
It can be understood that most existing gesture-recognition-based interaction methods only process the pixel plane image information in the user hand image information, so the recognized trajectory of the manipulation point is also planar, and errors easily occur in practical applications. For example, for the sliding operation shown in fig. 6, if only the movement trajectory of the manipulation point in the X-Y plane is taken as the recognition condition for a slide, then in this application scenario the trajectory of an object thrown by a child at play is also very likely to be taken for a sliding operation, so the recognition error rate is high.
The present method considers both the plane motion track of the manipulation point and its depth track, and outputs the corresponding control instruction only when both tracks meet the preset conditions. As shown in fig. 6, a user-controlled sliding operation must be recognized not only from the movement track in the X-Y plane but also from the depth track along the Z axis; because the track of a thrown object is unlikely to satisfy the Z-axis depth condition, the accuracy of contactless human-computer interaction is greatly improved. A sketch of such a combined check is given below.
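A minimal sketch of such a combined plane-and-depth check, assuming one particular form of the preset conditions (enough X-Y displacement together with a bounded Z variation) and assumed numeric thresholds; the description itself leaves the exact preset conditions open.

```python
def is_swipe(track, min_plane_dist_cm=20.0, max_depth_range_cm=8.0):
    """Decide whether a manipulation-point track is a sliding (swipe) operation.

    track: list of (x, y, z) positions in cm collected over the second time
    threshold. Both the plane motion track and the depth track must satisfy
    their conditions before the sliding instruction is output.
    """
    if len(track) < 2:
        return False
    (x0, y0, _), (x1, y1, _) = track[0], track[-1]
    plane_dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    zs = [p[2] for p in track]
    depth_range = max(zs) - min(zs)
    # Plane condition: the point travelled far enough across the X-Y plane.
    # Depth condition: the Z depth stayed within a narrow band, unlike the
    # track of a thrown object passing through the operation space.
    return plane_dist >= min_plane_dist_cm and depth_range <= max_depth_range_cm
```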
In a second aspect, as shown in fig. 7, the present application provides another contactless human-computer interaction method 20, which includes:
21. Acquiring the hand image information of the user in the operation space.
The specific implementation of step 21 is similar to step 11, and is not described herein again.
22. Identifying a control point in the hand image information of the user.
The specific implementation of step 22 is similar to that of step 12, and will not be described herein.
23. Taking the position depth change value of the control point at the same plane coordinate position within the third time threshold as an instruction parameter value.
24. Outputting a menu opening instruction according to the instruction parameter value, so that the display device displays the corresponding instruction menu.
In some complex software, such as CAD software, a series of instruction menus is arranged along the top of the window and many sub-menus are derived under the menu bar; only some of the toolbars that gather the button instructions are commonly displayed, while many hidden toolbars are not, so a button instruction often has to be searched for among many instruction menus one by one, which is very troublesome. At present, a right mouse click can call up some shortcut instructions, but their number is limited, and popping up a whole stack of instruction menus with the right mouse button would depart from the way the right mouse button is designed to be used.
It can be understood that the contactless human-computer interaction method 20 provided by the present application can invoke the instruction menu corresponding to the position depth change value of the manipulation point. That is, pressing the user's finger to different depths calls up the instruction menu corresponding to that depth. Compared with the traditional single-layer instruction menu called by a right mouse click, the method lets the user easily call a multi-layer instruction menu, increasing the number of instructions that can be quickly invoked.
In an alternative embodiment of the present application, step 24 comprises:
241. Under the condition that the instruction parameter value reaches the first parameter threshold value but not the second parameter threshold value, outputting a first menu opening instruction so that the display device displays the first instruction menu.
As shown in the left diagram of fig. 8, when the user's manipulation point presses down to the first layer, the first instruction menu is called up correspondingly; it includes the A1, A2, A3, A4 and A12 instructions, where the A12 instruction belongs to the instruction submenu of A1.
242. Under the condition that the instruction parameter value reaches the second parameter threshold value, outputting a second menu opening instruction so that the display device displays the second instruction menu.
As shown in the right diagram of fig. 8, when the user's manipulation point presses down to the second layer, the second instruction menu is called up correspondingly; it includes the B1, B2, B3, B4 and B12 instructions, where the B12 instruction belongs to the instruction submenu of B1. A sketch of this threshold mapping is shown below.
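The following sketch mirrors steps 241 and 242 under assumed numeric values for the first and second parameter thresholds (in practice they would be preset or user-defined); the constant and function names are illustrative only.

```python
from typing import Optional

# Assumed depth-change thresholds (cm); the real values would be preset or user-defined.
FIRST_PARAM_THRESHOLD_CM = 3.0
SECOND_PARAM_THRESHOLD_CM = 6.0

def menu_opening_instruction(instruction_parameter_cm: float) -> Optional[str]:
    """Map the instruction parameter value (depth change at a fixed plane
    position within the third time threshold) to a menu opening instruction."""
    if instruction_parameter_cm >= SECOND_PARAM_THRESHOLD_CM:
        return "OPEN_SECOND_MENU"   # deeper press: second instruction menu (B1, B2, ... of FIG. 8)
    if instruction_parameter_cm >= FIRST_PARAM_THRESHOLD_CM:
        return "OPEN_FIRST_MENU"    # shallower press: first instruction menu (A1, A2, ...)
    return None                     # press too shallow: no menu is opened
```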
Correspondingly, the non-contact human-computer interaction device/system comprises a space depth pressing multilayer instruction virtual machine and a space depth shortcut instruction box. The space depth pressing multilayer instruction virtual machine and the space depth shortcut instruction BOX respectively realize the steps 23 and 24 through the SDPI VM technology and the SDSC-BOX technology.
Correspondingly, as shown in fig. 9, the present application further provides another contactless human-computer interaction device 200, which further includes a multi-layer instruction menu module 240, compared to the contactless human-computer interaction device 100 disclosed in the first aspect, for executing steps 23 and 24.
In a third aspect, as shown in fig. 10, the present application provides another contactless human-computer interaction method 30, which includes:
31. Displaying the corresponding interactive interface through the display device.
32. Acquiring the hand image information of the user in the operation space.
The specific implementation of step 32 is similar to step 11, and is not described herein again.
33. Identifying a control point in the hand image information of the user.
The specific implementation of step 33 is similar to step 12, and is not described herein again.
34. Displaying the position mark of the operation point on the interactive interface through the display device.
35. Outputting a corresponding control instruction according to the position depth change of the control point.
It can be understood that, the non-contact human-computer interaction method 30 disclosed in the present application not only can identify the position of the manipulation point in the hand image information of the user, but also can display the manipulation point on the interaction interface in real time, so that the user can know the position of the manipulation point controlled by the user on the interaction interface in time, and the interaction effect of human-computer interaction is improved.
Correspondingly, as shown in fig. 11, the present application further provides another contactless human-computer interaction device 300, which further includes, compared to the contactless human-computer interaction device 100 disclosed in the first aspect: the instruction feedback display module 340 is configured to display a corresponding interactive interface through a display device; and displaying the position mark of the operation point on the interactive interface through the display device.
In an alternative embodiment of the present application, the method 30 further comprises: and acquiring a virtual display scale factor and a virtual real scale factor, wherein the virtual display scale factor is the size ratio of the virtual interface to the display picture, and the virtual real scale factor is the size ratio of the virtual interface to the real picture.
The virtual display scale factors comprise a virtual display width scale factor VOFW, a virtual display height scale factor VOFH and a virtual display depth scale factor VOFD; the virtual real scale factors include a virtual real width scale factor VRFW, a virtual real height scale factor VRFH, and a virtual real depth scale factor VRFD.
Step 31 specifically includes: mapping the virtual interface onto the display picture through the virtual display scale factor to form the interactive interface.
In the embodiment of the present application, mapping the virtual interface onto the display screen means converting the coordinates (vw_x, vw_y, vw_z) of the virtual interface into the coordinates (outscr_x, outscr_y, outscr_z) of the display screen by the following formulas:
outscr_x=vw_x/VOFW;
outscr_y=vw_y/VOFH;
outscr_z=vw_z/VOFD.
Step 34 specifically includes: mapping the real coordinates of the operation point onto the virtual interface through the virtual real scale factor, and mapping the virtual coordinates of the operation point on the virtual interface onto the display picture through the virtual display scale factor, to form the position mark.
In the embodiment of the present application, mapping the real coordinates of the operation point onto the virtual interface means converting the real coordinates (rw_x, rw_y, rw_z) of the operation point into the coordinates (vw_x, vw_y, vw_z) of the virtual interface by the following formulas:
vw_x=rw_x×VRFW;
vw_y=rw_y×VRFH;
vw_z=rw_z×VRFD.
The coordinates on the virtual interface are then mapped onto the display screen, that is, the coordinates (vw_x, vw_y, vw_z) of the virtual interface are converted into the coordinates (outscr_x, outscr_y, outscr_z) of the display screen by the following formulas:
outscr_x=vw_x/VOFW;
outscr_y=vw_y/VOFH;
outscr_z=vw_z/VOFD.
the non-contact human-computer interaction method can map the virtual interface on the display pictures of different display devices according to the virtual display scale factor so as to ensure the good presentation of the interaction interface; and the real coordinates of the operation points can be mapped on the virtual interface according to the virtual real scale factor so as to ensure the control accuracy of human-computer interaction.
As shown in fig. 12 to fig. 14, fig. 12 is a schematic view of a virtual interface of size 160 × 120, in which the interface area of button A is determined by A1(0,120,0), A2(0,90,0), A3(30,90,0) and A4(30,120,0), and the interface area of button B is determined by B1(120,30,0), B2(120,0,0), B3(160,0,0) and B4(160,30,0).
As shown in fig. 13, the display screen of the display device has a size of 120 × 30; if the virtual interface were displayed directly, without coordinate mapping, the display areas of button A and button B would extend beyond the display screen. Therefore the coordinates of button A and button B need to be mapped onto the display screen, as illustrated in fig. 14, which is a schematic diagram of mapping the virtual interface onto the display screen of the display device. A numeric sketch of this mapping follows.
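The sketch below applies the formulas above to the FIG. 12 to FIG. 14 example. The 160 × 120 virtual interface and the 120 × 30 display screen come from the figures; the 80 × 60 real picture and all depth extents are assumptions introduced only to make the scale factors concrete.

```python
# Virtual interface and display screen sizes from FIG. 12 and FIG. 13;
# real picture size and all depth extents are assumed for illustration.
VIRTUAL_W, VIRTUAL_H, VIRTUAL_D = 160.0, 120.0, 100.0
DISPLAY_W, DISPLAY_H, DISPLAY_D = 120.0, 30.0, 100.0
REAL_W, REAL_H, REAL_D = 80.0, 60.0, 100.0

# Virtual display scale factors (virtual : display) and virtual real scale factors (virtual : real).
VOFW, VOFH, VOFD = VIRTUAL_W / DISPLAY_W, VIRTUAL_H / DISPLAY_H, VIRTUAL_D / DISPLAY_D
VRFW, VRFH, VRFD = VIRTUAL_W / REAL_W, VIRTUAL_H / REAL_H, VIRTUAL_D / REAL_D

def real_to_virtual(rw_x, rw_y, rw_z):
    """Map the operation point's real coordinates onto the virtual interface."""
    return rw_x * VRFW, rw_y * VRFH, rw_z * VRFD

def virtual_to_display(vw_x, vw_y, vw_z):
    """Map virtual-interface coordinates onto the display screen."""
    return vw_x / VOFW, vw_y / VOFH, vw_z / VOFD

# Corner B3 of button B sits at (160, 0, 0) on the virtual interface and maps to
# (120.0, 0.0, 0.0) on the display screen, i.e. inside the 120 x 30 display area.
print(virtual_to_display(160.0, 0.0, 0.0))
```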
Correspondingly, as shown in fig. 11, the contactless human-computer interaction device 300 further includes a coordinate mapping module 350, configured to obtain a virtual display scale factor and a virtual real scale factor, and map a virtual interface on a display screen through the virtual display scale factor to form an interaction interface; and mapping the real coordinates of the operation points on the virtual interface through the virtual real scale factor, and mapping the virtual coordinates of the operation points on the virtual interface on a display picture through the virtual display scale factor to form position marks.
In the embodiment of the present application, the display frame may be a two-dimensional frame displayed by a flat panel display, or may be a light field display scene projected by a stereoscopic projection system.
In a fourth aspect, as shown in fig. 15, the present application provides still another contactless human-computer interaction method 40, which includes:
41. Acquiring the hand image information of the user in each operation subspace.
In an optional embodiment of the present application, the operation space is the shooting range and shooting depth of the planar camera and the depth camera, and is also the effective range and depth within which the user can control the manipulation point. The operation space may include at least two operation subspaces, and the user hand image information in each operation subspace is acquired separately.
42. Identifying the control points in the user hand image information in each operation subspace.
It can be understood that after the user hand image information in each operation subspace is acquired, the control points in each operation subspace are identified, and since there is more than one operation subspace, there is also more than one identified control point in the operation space.
43. Outputting a control instruction of the instruction category corresponding to the operation subspace according to the position depth change of the control point.
It can be understood that the non-contact human-computer interaction method 40 disclosed in the present application can obtain the image information of the user's hand in each of the different operation subspaces, identify each of the operation points, and output a corresponding control instruction according to the position change, particularly the depth change, of each of the operation points. The position depth change of one of the control points corresponds to one control instruction, and the method 40 can realize an application scenario in which a user controls a plurality of control points in an operation space to simultaneously output a plurality of control instructions.
Fig. 16 shows a current display screen of the display device. The screen presents two areas to be controlled, namely a "palette color selection area" on the left and a "drawing work area" on the right, and the method 40 can identify a manipulation point in the operation subspace corresponding to each of these two areas. That is to say, the user can use the left hand and the right hand together, outputting control instructions for the palette color selection area and the drawing work area through two manipulation points in different operation subspaces. Alternatively, multiple users can cooperate: one user outputs control instructions for the palette color selection area through a manipulation point in one operation subspace, while another user outputs control instructions for the drawing work area through another manipulation point in another operation subspace. A routing sketch is given below.
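A minimal sketch of routing each manipulation point to the instruction category of its operation subspace, assuming the operation space of FIG. 16 is split at a fixed x coordinate; the split value, the category names and the function names are all illustrative assumptions.

```python
def subspace_of(point, split_x_cm=40.0):
    """Assign a manipulation point (x, y, z) to an operation subspace.

    Illustrative rule: points left of an assumed split coordinate belong to the
    subspace of the palette color selection area, the rest to the drawing work area.
    """
    return "PALETTE" if point[0] < split_x_cm else "DRAWING"

def route_control_instruction(point, depth_change_cm):
    """Output a control instruction of the instruction category bound to the
    point's operation subspace, driven by the point's position depth change."""
    if subspace_of(point) == "PALETTE":
        return ("SELECT_COLOR", depth_change_cm)   # category of the left subspace
    return ("DRAW_STROKE", depth_change_cm)        # category of the right subspace
```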
Correspondingly, as shown in fig. 17, the present application further provides another contactless human-computer interaction device 400, which includes: the information acquisition module 410 is specifically configured to acquire user hand image information in each operation subspace; the manipulation identification module 420 is specifically configured to identify a manipulation point in the user hand image information in each operation subspace; the instruction processing module 430 is specifically configured to output a control instruction of an instruction category corresponding to the operation subspace according to the position depth change of the manipulation point.
In a fifth aspect, the present application provides a method for contactless human-computer interaction, where compared to the method disclosed in the first aspect, the method of the fifth aspect further includes:
extracting gesture outlines in the user hand image information and identifying user gestures;
and outputting a corresponding control instruction according to the user gesture.
Correspondingly, the control recognition module in the non-contact human-computer interaction device is also used for extracting a gesture outline in the user hand image information and recognizing the user gesture; the instruction processing module is further used for outputting a corresponding control instruction according to the user gesture.
It can be understood that, different from the recognition of the position depth change of the manipulation point, the method can also set the control instruction corresponding to the specific gesture through a preset gesture or a user-defined gesture, and when the specific gesture is recognized by the user, the corresponding control instruction is output. For example, the determination instruction is output when the user has made an OK gesture, and the screen capture instruction is output when the user has made a five-finger-extending gesture.
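As an illustration, the correspondence between recognized gestures and control instructions could be held in a simple table like the one below; the gesture labels, the assumption that a separate classifier produces them, and the function name are not part of the disclosure.

```python
from typing import Optional

# Preset or user-defined correspondence between recognized gestures and control
# instructions; only the OK and five-finger examples come from the description.
GESTURE_INSTRUCTIONS = {
    "OK": "CONFIRM",                            # OK gesture -> determination instruction
    "FIVE_FINGERS_EXTENDED": "SCREEN_CAPTURE",  # five-finger stretch -> screen capture
}

def instruction_for_gesture(gesture_label: str) -> Optional[str]:
    """Return the control instruction bound to a recognized user gesture, if any."""
    return GESTURE_INSTRUCTIONS.get(gesture_label)
```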
In a sixth aspect, the present application provides a contactless human-computer interaction device 500. As shown in fig. 18, the device includes one or more processors 510, one or more depth image acquisition devices 520, one or more display devices 530, and a memory 540. The processor 510, depth image acquisition device 520, display device 530 and memory 540 are connected via a bus 550. The memory 540 is used to store a computer program comprising program instructions, and the processor 510 is used to execute the program instructions stored in the memory 540; in particular, the processor 510 is configured to invoke the program instructions to perform any of the methods of the first to fourth aspects.
It should be understood that, in the present embodiment, the processor 510 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
In an alternative embodiment of the present application, the depth image acquisition device 520 includes a planar camera and a depth camera; the planar camera is used for shooting the user's hand in the operation space to obtain the pixel plane image information of the user hand image information; the depth camera is used for shooting the user's hand in the operation space to obtain the pixel depth information of the user hand image information.
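As an illustrative sketch only, the following Python fragment shows how the pixel plane image and a registered pixel depth map could be combined into the user hand image information; the array shapes and the frame-grabbing stand-ins are assumptions, not the actual camera interfaces.

# Illustrative sketch only: shapes and frame-grabbing functions are assumptions.
import numpy as np

def grab_plane_frame():
    """Stand-in for the planar camera: returns an HxWx3 color image."""
    return np.zeros((480, 640, 3), dtype=np.uint8)

def grab_depth_frame():
    """Stand-in for the depth camera: returns an HxW map of depth values in mm,
    assumed to be registered (pixel-aligned) to the planar image."""
    return np.full((480, 640), 1200, dtype=np.uint16)

def acquire_hand_image_information():
    """Pair each plane pixel with its depth so that a control point (u, v)
    carries both plane coordinates and a depth value."""
    plane = grab_plane_frame()
    depth = grab_depth_frame()
    assert plane.shape[:2] == depth.shape, "depth map must be aligned to the plane image"
    return {"pixel_plane_image": plane, "pixel_depth": depth}

info = acquire_hand_image_information()
u, v = 320, 240                   # an example control point in plane coordinates
print(info["pixel_depth"][v, u])  # depth (mm) at that control point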
In alternative embodiments of the present application, display device 530 includes a flat panel display device, a projection device, a VR display device, or an AR display device.
The memory 540 may include both read-only memory and random access memory, and provides instructions and data to the processor 510. A portion of memory 540 may also include non-volatile random access memory. For example, memory 540 may also store device type information.
In a specific implementation, the processor 510, the depth image acquisition device 520, and the display device 530 described in this embodiment of the present application may execute the implementations described in the foregoing method embodiments and terminal device embodiments of the present application, which are not described herein again.
In a seventh aspect, the present application discloses a computer readable storage medium, in which a computer program is stored, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform any of the methods of the first to fourth aspects.
The computer-readable storage medium may be an internal storage unit of the terminal device in any of the foregoing embodiments, for example, a hard disk or a memory of the terminal device. The computer-readable storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the terminal device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the terminal device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of each example have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal device and the unit described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention essentially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. Especially, for the embodiments of the apparatus, the device, and the medium, since they are substantially similar to the embodiments of the method, the description is relatively simple, and reference may be made to part of the description of the embodiments of the method for relevant parts, which is not described in detail herein.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but these expressions do not limit the respective components. The above description is only configured for the purpose of distinguishing elements from other elements. For example, the first user equipment and the second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being (operatively or communicatively) "coupled with/to" or "connected to" another element (e.g., a second element), it is to be understood that the element may be directly connected to the other element or may be indirectly connected to the other element via yet another element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), no further element (e.g., a third element) is interposed between them.
The above description is meant as an illustration of alternative embodiments of the application and of the principles of the technology applied. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
The foregoing describes only alternative embodiments of the present application and is not intended to limit the present application; those skilled in the art may make various modifications and variations to it. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A contactless human-computer interaction method is characterized by comprising the following steps:
acquiring user hand image information in an operation space, wherein the user hand image information comprises pixel plane image information and pixel depth information;
identifying a control point in the user hand image information;
and outputting a corresponding control instruction according to the position depth change of the control point.
2. The contactless human-computer interaction method of claim 1,
the outputting of the corresponding control instruction according to the position depth change of the control point comprises:
taking the number of position depth change cycles of the control point at the same plane coordinate position within a first time threshold as an instruction number;
and outputting a corresponding control instruction according to the instruction number.
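By way of illustration only and not as claim language, the following Python sketch counts press-and-release depth cycles at a roughly fixed plane coordinate within the first time threshold; the depth threshold, the assumed direction of depth change, and the sample layout are assumptions made for this sketch.

# Illustrative only, not claim language: thresholds and data layout are assumptions.
def count_press_cycles(depth_samples, press_delta_mm=30.0):
    """Count complete press-and-release cycles in a sequence of depth values
    recorded for a control point held at (approximately) the same plane
    coordinate within the first time threshold."""
    cycles = 0
    pressed = False
    baseline = depth_samples[0]
    for d in depth_samples:
        if not pressed and baseline - d >= press_delta_mm:
            pressed = True                      # hand moved toward the screen: press
        elif pressed and baseline - d < press_delta_mm / 2:
            pressed = False                     # hand moved back: release completes a cycle
            cycles += 1
    return cycles

# Two press-release cycles -> instruction number 2 (e.g. a double-click-like instruction)
samples = [1000, 960, 950, 995, 1000, 955, 950, 998]
print(count_press_cycles(samples))  # -> 2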
3. The contactless human-computer interaction method of claim 1,
the outputting of the corresponding control instruction according to the position depth change of the control point comprises:
taking the plane motion track of the control point in a second time threshold and the depth track corresponding to the plane motion track as a control point track;
and outputting a corresponding control instruction according to the control point track.
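By way of illustration only and not as claim language, the following Python sketch treats a roughly constant-depth, left-to-right plane motion collected within the second time threshold as a swipe instruction; the template, thresholds, and instruction name are assumptions made for this sketch.

# Illustrative only, not claim language: the swipe test and thresholds are assumptions.
def instruction_from_trajectory(track, min_dx=200, max_depth_span=25):
    """track is a list of (x, y, depth) samples collected within the second time
    threshold. A roughly constant-depth, left-to-right plane motion is read here
    as a 'swipe right' instruction; real systems could match richer templates."""
    xs = [p[0] for p in track]
    depths = [p[2] for p in track]
    if (xs[-1] - xs[0] >= min_dx) and (max(depths) - min(depths) <= max_depth_span):
        return "SWIPE_RIGHT"
    return None

track = [(100, 300, 1000), (220, 305, 1005), (360, 298, 995)]
print(instruction_from_trajectory(track))  # -> SWIPE_RIGHT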
4. The contactless human-computer interaction method of claim 1,
the outputting of the corresponding control instruction according to the position depth change of the control point comprises:
pressing a multi-layer instruction menu through spatial depth, so as to take the position depth change value of the control point at the same plane coordinate position within a third time threshold as an instruction parameter value;
and outputting a menu opening instruction according to the instruction parameter value, so that the display equipment displays a corresponding instruction menu.
5. The contactless human-computer interaction method of claim 4,
the outputting of a menu opening instruction according to the instruction parameter value, so that the display equipment displays a corresponding instruction menu, comprises:
under the condition that the instruction parameter value reaches a first parameter threshold value and does not reach a second parameter threshold value, outputting a first menu opening instruction to enable display equipment to display a first instruction menu;
and under the condition that the instruction parameter value reaches a second parameter threshold value, outputting a second menu opening instruction to enable the display equipment to display a second instruction menu.
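By way of illustration only and not as claim language, the following Python sketch maps the instruction parameter value against the first and second parameter thresholds; the concrete threshold values and menu names are assumptions made for this sketch.

# Illustrative only, not claim language: threshold values and menu names are assumptions.
FIRST_PARAMETER_THRESHOLD = 30.0   # mm of accumulated depth press
SECOND_PARAMETER_THRESHOLD = 80.0  # mm of accumulated depth press

def menu_for_press_depth(instruction_parameter_value):
    """Map the depth change value accumulated at the same plane coordinate within
    the third time threshold to a menu opening instruction."""
    if instruction_parameter_value >= SECOND_PARAMETER_THRESHOLD:
        return "OPEN_SECOND_INSTRUCTION_MENU"  # deep press -> second instruction menu
    if instruction_parameter_value >= FIRST_PARAMETER_THRESHOLD:
        return "OPEN_FIRST_INSTRUCTION_MENU"   # shallow press -> first instruction menu
    return None                                # press too shallow: no menu is opened

print(menu_for_press_depth(45.0))   # -> OPEN_FIRST_INSTRUCTION_MENU
print(menu_for_press_depth(120.0))  # -> OPEN_SECOND_INSTRUCTION_MENU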
6. The contactless human-computer interaction method of claim 1,
prior to the identifying of a control point in the user hand image information, the method further comprises:
displaying a corresponding interactive interface through display equipment;
after the identifying of a control point in the user hand image information, the method further comprises:
and displaying a position mark of the control point on the interactive interface through the display device.
7. The contactless human-computer interaction method of claim 6,
the method further comprises the following steps:
acquiring a virtual display scale factor and a virtual real scale factor, wherein the virtual display scale factor is the size ratio of a virtual interface to a display picture, and the virtual real scale factor is the size ratio of the virtual interface to a real picture;
the displaying of the corresponding interactive interface through the display device specifically includes:
mapping the virtual interface on the display picture through the virtual display scale factor to form the interactive interface;
the displaying, by the display device, of the position mark of the control point on the interactive interface comprises:
and mapping the real coordinates of the control point onto the virtual interface through the virtual real scale factor, and mapping the virtual coordinates of the control point on the virtual interface onto the display picture through the virtual display scale factor, so as to form the position mark.
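By way of illustration only and not as claim language, the following Python sketch applies the two scale factors defined in claim 7 to map a control point from real coordinates to virtual coordinates and then onto the display picture; the concrete virtual interface, display picture, and real picture sizes are assumptions made for this sketch.

# Illustrative only, not claim language: concrete sizes are assumptions.
VIRTUAL_W, VIRTUAL_H = 1920.0, 1080.0  # virtual interface size
DISPLAY_W, DISPLAY_H = 3840.0, 2160.0  # display picture size
REAL_W, REAL_H = 640.0, 360.0          # real picture (captured operation area) size

# virtual display scale factor: size ratio of the virtual interface to the display picture
K_VD = (VIRTUAL_W / DISPLAY_W, VIRTUAL_H / DISPLAY_H)
# virtual real scale factor: size ratio of the virtual interface to the real picture
K_VR = (VIRTUAL_W / REAL_W, VIRTUAL_H / REAL_H)

def real_to_virtual(x_real, y_real):
    """Map the real coordinates of the control point onto the virtual interface."""
    return x_real * K_VR[0], y_real * K_VR[1]

def virtual_to_display(x_virt, y_virt):
    """Map virtual interface coordinates onto the display picture to place the position mark."""
    return x_virt / K_VD[0], y_virt / K_VD[1]

x_v, y_v = real_to_virtual(320.0, 180.0)  # centre of the real picture
print(virtual_to_display(x_v, y_v))       # -> (1920.0, 1080.0), centre of the display picture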
8. The contactless human-computer interaction method of claim 1,
the operation space comprises at least two operation subspaces, and the acquiring of the user hand image information in the operation space comprises:
acquiring user hand image information in each operation subspace;
the identifying of the control point in the user hand image information comprises:
identifying control points in user hand image information in each operation subspace;
the outputting of the corresponding control instruction according to the position depth change of the control point comprises:
and outputting a control instruction of the instruction category corresponding to the operation subspace according to the position depth change of the control point.
9. The contactless human-computer interaction method of claim 1,
the method further comprises the following steps:
extracting gesture outlines in the user hand image information and identifying user gestures;
and outputting a corresponding control instruction according to the user gesture.
10. A contactless human-computer interaction device, characterized in that it comprises means for carrying out the method according to any one of claims 1 to 9.