CN110221717A - Virtual mouse driving device, gesture identification method and equipment for virtual mouse - Google Patents

Virtual mouse driving device, gesture identification method and equipment for virtual mouse

Info

Publication number
CN110221717A
CN110221717A (application CN201910441221.5A; granted publication CN110221717B)
Authority
CN
China
Prior art keywords
infrared
processing module
module
hand
changing value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910441221.5A
Other languages
Chinese (zh)
Other versions
CN110221717B (en)
Inventor
李锦华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority claimed from CN201910441221.5A
Publication of CN110221717A
Application granted
Publication of CN110221717B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038 Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a virtual mouse driving device, together with a gesture identification method and equipment for a virtual mouse, in the field of human-computer interaction. A grating module and an image processing module recognize the various gestures and operations that a user's hand performs on a background supplementary module, so that a terminal can be controlled through hand operations alone, without a physical mouse. This minimizes the risk of illness from gripping a physical mouse for long periods, avoids injury to the user's hand, and improves user retention. The virtual mouse driving device comprises: a background supplementary module, a grating module, an image processing module, and an event processing module. The background supplementary module includes a gesture identification region; the grating module generates an infrared changing value; the image processing module generates a hand motion changing value and the gesture identification region; and the event processing module determines and triggers a target event according to the gesture identification region and kinematic parameters comprising the infrared changing value and the hand motion changing value.

Description

Virtual mouse driving device, gesture identification method and equipment for virtual mouse
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a virtual mouse driving device and a gesture identification method and equipment for a virtual mouse.
Background technique
With the rapid development of Internet technology, computers have become increasingly widespread and are now an indispensable part of people's lives and work. The mouse is the main channel through which people "communicate" with a computer, and most computers today are equipped with one. A user need only press a mouse button or move the mouse to input different instructions and thereby control the applications running on the computer.
In the related art, a mouse carries several buttons, and the computer screen displays a cursor that serves as the mouse's mapped object. When the mouse moves, the cursor follows the same track on the screen; when the mouse detects that a button has been triggered, the cursor performs operations such as object selection or object hovering according to the way the button was triggered.
In the course of implementing the present invention, the inventor found that the related art has at least the following problem:
Users must grip the mouse in the same posture for long periods, which easily leads to conditions such as carpal tunnel syndrome, injures the user's hand, and lowers user retention.
Summary of the invention
In view of this, the present invention provides a virtual mouse driving device and a gesture identification method and equipment for a virtual mouse. Its main purpose is to solve the problem that gripping a mouse in the same posture for long periods easily causes conditions such as carpal tunnel syndrome, injures the user's hand, and lowers user retention.
According to a first aspect of the present invention, a virtual mouse driving device is provided. The virtual mouse driving device comprises: a background supplementary module, a grating module, an image processing module, and an event processing module.
The background supplementary module defines the placement location of the user's hand and includes a gesture identification region.
The grating module is arranged above the left and right edges of the background supplementary module and generates a plurality of infrared rays. When it detects that any of the infrared rays is blocked or reconnected, it generates an infrared changing value and transmits the infrared changing value to the event processing module.
The image processing module acquires multiple items of image data covering the whole background supplementary module, identifies the image data to obtain a hand motion changing value and the gesture identification region, and transmits the hand motion changing value and the gesture identification region to the event processing module.
The event processing module is connected to the grating module and the image processing module. It receives the infrared changing value returned by the grating module and the hand motion changing value and gesture identification region returned by the image processing module, takes the infrared changing value and the hand motion changing value as kinematic parameters, and determines and triggers a target event according to the kinematic parameters and the gesture identification region.
In another embodiment, the grating module comprises a plurality of through-beam (emitter/receiver) units and a single-chip microcontroller.
The plurality of through-beam units generate a plurality of infrared rays.
The single-chip microcontroller is connected to the event processing module. When it detects that any of the infrared rays is blocked, it generates a blocking signal as the infrared changing value; when it detects that a previously blocked infrared ray is reconnected, it generates a connection signal as the infrared changing value, and transmits the infrared changing value to the event processing module.
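The microcontroller logic described above can be sketched as a state comparison over the beam array. The following is an illustrative pure-Python model, not the patent's firmware; the names `BeamEvent` and `detect_beam_changes` are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class BeamEvent:
    beam_index: int
    kind: str  # "blocked" (blocking signal) or "reconnected" (connection signal)

def detect_beam_changes(previous, current):
    """Return one event per beam whose blocked/unblocked state flipped.

    `previous` and `current` are sequences of booleans, True = beam blocked.
    """
    events = []
    for i, (was_blocked, is_blocked) in enumerate(zip(previous, current)):
        if was_blocked != is_blocked:
            kind = "blocked" if is_blocked else "reconnected"
            events.append(BeamEvent(i, kind))
    return events

# Example: a hand moves so that beam 0 clears while beams 2 and 3 become covered.
prev = [True, False, False, False]
curr = [False, False, True, True]
print(detect_beam_changes(prev, curr))
```

Each emitted event corresponds to one infrared changing value forwarded to the event processing module.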
In another embodiment, each of the through-beam units comprises an infrared emitting unit and an infrared receiving unit.
The infrared emitting unit is arranged above the left or right edge of the background supplementary module, with its emission port facing the infrared receiving unit, and emits infrared rays toward the infrared receiving unit.
The infrared receiving unit is arranged above the edge of the background supplementary module opposite the infrared emitting unit, with its receiving port facing the infrared emitting unit, and receives the infrared rays emitted by the infrared emitting unit.
In another embodiment, the image processing module comprises an image acquisition unit and an image processing unit.
The image acquisition unit acquires the multiple items of image data covering the whole background supplementary module.
The image processing unit receives the image data returned by the image acquisition unit, identifies the gesture identification region in the image data using a flood-fill algorithm, tracks changes in the pixel values of the image data using an optical flow algorithm, determines the pixel motion direction and pixel motion component of the pixel values, takes the pixel motion direction and pixel motion component as the hand motion changing value, and transmits the hand motion changing value and the gesture identification region to the event processing module.
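The reduction of per-pixel optical-flow output into a single "pixel motion direction" and "pixel motion component" can be illustrated with a small sketch. This is an assumption about how the patent's values might be aggregated, not its actual implementation; `hand_motion_change` is an invented name, and in practice the flow vectors would come from an optical flow routine such as OpenCV's.

```python
import math

def hand_motion_change(flow_vectors):
    """Average a list of (dx, dy) flow vectors into (direction_deg, magnitude)."""
    if not flow_vectors:
        return None
    dx = sum(v[0] for v in flow_vectors) / len(flow_vectors)
    dy = sum(v[1] for v in flow_vectors) / len(flow_vectors)
    direction = math.degrees(math.atan2(dy, dx)) % 360.0  # pixel motion direction
    magnitude = math.hypot(dx, dy)                        # pixel motion component
    return direction, magnitude

# A hand drifting right and slightly down across three tracked pixels:
direction, magnitude = hand_motion_change([(3.0, 1.0), (5.0, 1.0), (4.0, 1.0)])
print(round(direction, 1), round(magnitude, 2))
```

The (direction, magnitude) pair is what would be forwarded as the hand motion changing value.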
According to a second aspect of the present invention, a gesture identification method for a virtual mouse is provided. The method comprises:
An image processing module acquires multiple items of image data of a background supplementary module, over which a grating module generates a plurality of infrared rays.
When the user's hand is detected to start moving, the grating module and the image processing module identify the image data to obtain kinematic parameters and a gesture identification region, and transmit the kinematic parameters and the gesture identification region to an event processing module. The kinematic parameters include a hand motion changing value, an infrared changing value, or both.
The event processing module determines a target event according to the kinematic parameters and the gesture identification region, and triggers the target event.
In another embodiment, obtaining the gesture identification region comprises:
The image processing module acquires multiple items of image data and, for each item, defines at least one corner coordinate of the background supplementary module in the image data.
The image processing module processes the at least one corner coordinate using a flood-fill algorithm to obtain a flood-filled image for each corner coordinate.
The image processing module filters the flood-filled images of the at least one corner coordinate against a result threshold, and extracts from them a target flood-filled image that meets the result threshold.
The image processing module adjusts the target flood-filled image to obtain the gesture identification region.
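The corner-seeded flood-fill step can be illustrated with a minimal pure-Python stand-in: seed a fill at a corner of the uniformly colored pad, keep the connected background region, and treat the remainder as candidate hand pixels. In practice OpenCV's `floodFill` would do this on real frames; this toy grid version only shows the idea, and the tolerance and grid values are made up.

```python
from collections import deque

def flood_fill(grid, seed, tolerance=0):
    """Return the set of (row, col) cells 4-connected to `seed` whose value
    differs from the seed value by at most `tolerance`."""
    rows, cols = len(grid), len(grid[0])
    seed_val = grid[seed[0]][seed[1]]
    seen = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                if abs(grid[nr][nc] - seed_val) <= tolerance:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return seen

# 0 = dark pad background, 9 = hand pixels; fill from the top-left corner.
frame = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 0, 0],
]
background = flood_fill(frame, (0, 0))
hand_region = {(r, c) for r in range(3) for c in range(4)} - background
print(sorted(hand_region))
```

Filling from the known-background corner sidesteps the need to model hand color directly, which is the motivation the description gives for the flood-fill approach.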
In another embodiment, obtaining the kinematic parameters comprises:
When the user's hand is detected to start moving, the image processing module obtains the pixel values of the image data, tracks changes in the pixel values using an optical flow algorithm, determines the pixel motion direction and pixel motion component of the pixel values, and takes the pixel motion direction and pixel motion component as the hand motion changing value; and/or,
The grating module detects the current state of the plurality of infrared rays, determines their infrared current values, obtains their infrared original values, and generates the infrared changing value from the infrared rays whose current value differs from their original value.
The image processing module and/or the grating module take the hand motion changing value and/or the infrared changing value as the kinematic parameters.
In another embodiment, generating the infrared changing value from the infrared rays whose current value differs from their original value comprises:
For any one of the plurality of infrared rays, the grating module compares the ray's infrared current value with its infrared original value.
If the infrared current value indicates a blocked state and the infrared original value indicates a connected state, a blocking signal is generated as the infrared changing value.
If the infrared current value indicates a connected state and the infrared original value indicates a blocked state, a connection signal is generated as the infrared changing value.
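The per-ray comparison in this embodiment reduces to a small decision table. A hypothetical sketch, with invented names and string states standing in for whatever signal encoding the hardware actually uses:

```python
def ir_changing_value(original_state, current_state):
    """Compare a ray's original and current state, each "blocked" or
    "connected", and return the signal to emit as the infrared changing
    value, or None when the state did not change."""
    if original_state == current_state:
        return None
    return "blocking signal" if current_state == "blocked" else "connection signal"

print(ir_changing_value("connected", "blocked"))   # hand interrupts the beam
print(ir_changing_value("blocked", "connected"))   # hand moves away
```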
In another embodiment, the event processing module determining and triggering the target event according to the kinematic parameters and the gesture identification region comprises:
When the pixel motion component in the hand motion changing value is greater than or equal to a preset threshold, the event processing module determines that the target event is a movement event and controls the mapped body of the user's hand on the terminal to move a specified distance from its current position in the direction indicated by the finger movement direction, the mapped body executing on the terminal the operation the user's hand requests; the value of the specified distance is generated from the pixel motion component in the hand motion changing value; and/or,
The event processing module counts the number of times the infrared changing value changes, determines that the click event corresponding to that count is the target event, and controls the mapped body to execute the target operation at its current position according to the instruction of the target event.
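The event decision above (threshold on the motion component for cursor movement; a count of infrared changes for clicks) can be sketched as a single dispatch function. This is an illustrative assumption about the mapping; `decide_event`, the threshold, and the distance gain are all made-up values, since the patent leaves them unspecified.

```python
def decide_event(motion_component, ir_change_count, threshold=2.0, gain=10.0):
    """Map the kinematic parameters onto a target event.

    Returns ("move", distance) when the motion component reaches the
    threshold, ("click", n) when the infrared value changed n > 0 times,
    else ("idle", 0). The distance is derived from the motion component.
    """
    if motion_component >= threshold:
        return ("move", motion_component * gain)
    if ir_change_count > 0:
        return ("click", ir_change_count)
    return ("idle", 0)

print(decide_event(3.5, 0))   # large motion component: a movement event
print(decide_event(0.4, 2))   # two beam changes: a click event at rest
```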
According to a third aspect of the present invention, equipment is provided comprising a memory storing a program and a processor that, when executing the program, implements the steps of the method of the second aspect.
Through the above technical solution, the virtual mouse driving device and the grating-based gesture identification method and equipment provided by the present invention, compared with current human-computer interaction through a physical mouse, use a grating module and an image processing module to recognize the various gestures and operations of the user's hand on the background supplementary module, so that the terminal can be controlled through hand operations alone, without a physical mouse. This minimizes the risk of illness from gripping a physical mouse for long periods, avoids injury to the user's hand, and improves user retention.
The above is only an overview of the technical solution of the present invention. To make the technical means of the present invention easier to understand and implement according to the contents of this specification, and to make the above and other objects, features, and advantages of the present invention clearer and more comprehensible, specific embodiments of the present invention are set out below.
Detailed description of the invention
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings serve only to illustrate the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numbers refer to the same parts. In the drawings:
Fig. 1A is a schematic structural diagram of a virtual mouse driving device provided by an embodiment of the present invention;
Fig. 1B is a schematic structural diagram of a virtual mouse driving device provided by an embodiment of the present invention;
Fig. 2A is a flowchart of a gesture identification method for a virtual mouse provided by an embodiment of the present invention;
Fig. 2B is a schematic diagram of a gesture identification method for a virtual mouse provided by an embodiment of the present invention;
Fig. 2C is a schematic diagram of a gesture identification method for a virtual mouse provided by an embodiment of the present invention;
Fig. 2D is a flowchart of a gesture identification method for a virtual mouse provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of equipment provided by an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present invention, it should be understood that the present invention may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the present invention can be thoroughly understood and its scope fully conveyed to those skilled in the art.
An embodiment of the present invention provides a virtual mouse driving device that uses a grating module and an image processing module to recognize the various gestures and operations of the user's hand on a background supplementary module, so that the terminal can be controlled through hand operations alone, without a physical mouse. This minimizes the risk of illness from gripping a physical mouse for long periods, avoids injury to the user's hand, and improves user retention. As shown in Fig. 1A, the virtual mouse driving device comprises a background supplementary module 101, a grating module 102, an image processing module 103, and an event processing module 104; Fig. 1B is a top view of the device.
The background supplementary module 101 is placed on a desktop. The grating module 102 is arranged above the left and right edges of the background supplementary module 101 and is connected to the event processing module 104. The image processing module is connected to the event processing module 104. The event processing module 104 may be set inside the terminal, or may be an independent device as shown in Fig. 1A, connected to the grating module 102 and the image processing module 103 to receive the data they transmit, and also connected to the terminal to trigger target events on it. The embodiment of the present invention does not specifically limit how the event processing module is arranged. In practice, to support placing the image processing module 103 directly above the background supplementary module 101, the virtual mouse driving device further includes a stand 105.
Background supplementary module 101
The background supplementary module 101 is essentially a mouse pad. It specifies the placement location of the user's hand and includes the gesture identification region, so that the grating module 102 can be attached to it and so that it indicates the acquisition position for the image processing module 103. To make the acquired image data easy for the image processing module 103 to identify, the background supplementary module 101 is usually a single solid color different from the color of the user's hand; for example, an all-black or all-blue mouse pad can serve as the background supplementary module 101. The present invention does not specifically limit the actual color of the background supplementary module 101. The user can rest a hand naturally on the background supplementary module 101 in whatever posture is most comfortable. Note that during use the orientation of the user's hand is not fixed; the user may place the hand on the background supplementary module 101 at any angle according to his or her current position.
Grating module 102
The grating module 102 comprises a plurality of through-beam units and a single-chip microcontroller. The through-beam units are arranged horizontally along the edges of the background supplementary module 101. They may sit flush against the edges or be held at a certain distance from them, i.e. suspended over the edges, as long as the plurality of infrared rays they generate can detect the movement of the user's hand. Each through-beam unit comprises an infrared emitting unit and an infrared receiving unit arranged in one-to-one correspondence on opposite edges of the background supplementary module 101, with the emission port of the infrared emitting unit facing the receiving port of the infrared receiving unit, so as to generate a plurality of infrared rays; the dotted lines in Fig. 1A represent these rays. If the infrared emitting unit is at the left edge of the background supplementary module 101, the infrared receiving unit is at the right edge, and vice versa. The single-chip microcontroller detects changes in the plurality of infrared rays in real time. It can be mounted on the infrared receiving unit of each through-beam unit and is connected to the event processing module 104, so that it can determine whether an infrared ray is blocked or connected according to whether the infrared receiving unit receives the ray emitted by the infrared emitting unit.
Because ordinary infrared light diffuses like a flashlight beam, when detecting whether the ray between each emitter/receiver pair is blocked, the pairs must be driven one at a time in sequence. Specifically, the first infrared emitting unit is switched on first; while it emits its ray, the first infrared receiving unit is switched on to detect whether the ray is blocked. Because the other emitting and receiving units have not yet been started, no stray signal is received even if they are illuminated, so the diffusion of infrared light cannot cause misidentification. The remaining emitter/receiver pairs are then switched on and checked in turn. The single-chip microcontroller can scan from the first through-beam unit to the last at a specified frequency to detect whether any ray becomes blocked or reconnected. When it detects that any of the infrared rays is blocked, the single-chip microcontroller generates a blocking signal as the infrared changing value; when it detects that a blocked ray is reconnected, it generates a connection signal as the infrared changing value, and transmits the infrared changing value to the event processing module 104.
In practice, the through-beam units may instead emit infrared laser light and generate the infrared changing value in the same way based on it. Because infrared laser light is highly directional with little divergence, each infrared receiving unit receives only the laser emitted by its corresponding emitting unit and cannot mistake the laser from another unit. The emitting and receiving units can therefore all be switched on simultaneously rather than in turn, with the single-chip microcontroller judging whether each laser is blocked. The embodiment of the present invention illustrates generating infrared rays and infrared lasers; in practice, other beams with the characteristics of infrared rays or infrared lasers may also be used, and the embodiment of the present invention does not specifically limit this.
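The one-pair-at-a-time scan for diffuse infrared (as opposed to the simultaneous laser variant) can be sketched as follows. This is a simulation of the described scanning order, not microcontroller code; `scan_pairs` and `read_receiver` are invented names.

```python
def scan_pairs(read_receiver, num_pairs, cycles=1):
    """Drive each emitter/receiver pair in turn, one at a time, so that a
    receiver can never be lit by a neighbouring emitter's diffuse infrared.

    `read_receiver(i)` returns True when receiver i sees its own beam.
    Yields (pair_index, blocked) tuples in scan order.
    """
    for _ in range(cycles):
        for i in range(num_pairs):
            # In hardware: power on emitter i, sample receiver i, power off.
            beam_seen = read_receiver(i)
            yield (i, not beam_seen)

# Simulate a hand covering beams 1 and 2 of four:
blocked_beams = {1, 2}
states = list(scan_pairs(lambda i: i not in blocked_beams, 4))
print(states)
```

Repeating the scan at the specified frequency, and diffing consecutive scans, yields the blocked/reconnected transitions that become infrared changing values.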
Image processing module 103
The image processing module 103 acquires multiple items of image data covering the whole background supplementary module 101, identifies the image data to obtain the hand motion changing value and the gesture identification region, and transmits them to the event processing module 104. The image processing module may be placed anywhere it can effectively recognize hand motion, not only directly above, in front of, or to the left or right of the background supplementary module 101. The image processing module 103 comprises an image acquisition unit and an image processing unit. The image acquisition unit, essentially a camera, acquires the image data covering the whole background supplementary module; cameras of various resolutions, low or high, can meet the requirements of the present invention. The image processing unit receives the image data returned by the image acquisition unit, identifies the gesture identification region in the image data using a flood-fill algorithm, tracks changes in the pixel values using an optical flow algorithm, determines the pixel motion direction and pixel motion component of the pixel values, takes them as the hand motion changing value, and transmits the hand motion changing value and the gesture identification region to the event processing module 104. Note that the distance between the image processing module 103 and the background supplementary module 101 can be adjusted for different scenes, or determined according to the focal length of the image acquisition unit, to ensure the acquired image data is sharp and clear. In practice, since the environment around the virtual mouse device may be dark at night, which would affect the identification of image data, multiple LED (Light Emitting Diode) lamps can be inlaid around the image acquisition unit to keep the surroundings bright and clear.
In practice, the image processing unit in the image processing module 103 can also run as a separate module on a computing device, or run externally in the form of an external device; the embodiment of the present invention does not specifically limit the placement or arrangement of the image processing unit.
Event processing module 104
The event processing module 104 receives the infrared changing value returned by the grating module 102 and the hand motion changing value and gesture identification region returned by the image processing module 103, takes the infrared changing value and the hand motion changing value as kinematic parameters, and determines and triggers the target event according to the kinematic parameters and the gesture identification region.
With the virtual mouse driving device provided by the embodiment of the present invention, the grating module and the image processing module recognize the various gestures and operations of the user's hand on the background supplementary module, so the terminal can be controlled through hand operations alone, without a physical mouse. This minimizes the risk of illness from gripping a physical mouse for long periods, avoids injury to the user's hand, and improves user retention.
An embodiment of the present invention provides a gesture identification method for a virtual mouse that uses a grating module and an image processing module to recognize the various gestures and operations of the user's hand on a background supplementary module, so that the terminal can be controlled through hand operations alone, without a physical mouse, minimizing the risk of illness from gripping a physical mouse for long periods, avoiding injury to the user's hand, and improving user retention. As shown in Fig. 2A, the method comprises:
201. The image processing module acquires multiple items of image data of the background supplementary module. When the user's hand is detected to start moving, the image processing module identifies the image data, obtains the gesture identification region, and transmits the gesture identification region to the event processing module.
The inventor recognized that when a physical mouse is used for long periods, the hand gripping it must hold the same posture, and many peripheral nerves of the hand are prone to lesions under prolonged compression, producing "mouse hand". The present invention therefore provides a grating-based gesture identification method that omits the physical mouse: the user need only make the corresponding gestures to interact with the terminal.
Conventional gesture recognition based on model training and machine learning is difficult and error-prone. Moreover, hand images have different color values under different lighting conditions, and hand color differs across ethnic groups, so a large set of skin-color values would have to be predefined. To keep the recognition of the user's gestures accurate and fast, the present invention therefore provides a background auxiliary module that defines the position where the user makes gestures, so that the user's hand can be recognized on the background auxiliary module and the subsequent recognition of hand gestures can trigger the corresponding event. An image acquisition unit is provided in the image processing module; it continuously captures complete images of the position of the background auxiliary module, generating multiple frames of image data. When the user's hand is placed on the background auxiliary module, the hand appears in the image data captured by the image acquisition unit, and the gesture recognition region can then be determined by recognizing the multiple frames of image data. For each frame in the multiple frames of image data, the recognition process is as follows:
Since the color of the background auxiliary module differs from that of the user's hand, and the hand, unlike the background auxiliary module, is active and ambiguous, recognizing the hand directly by its color is likely to be inaccurate. Therefore, after the image processing module collects a frame of image data, the flood fill algorithm (Flood Fill) in OpenCV (Open Source Computer Vision Library) can be used to recognize it; processing the frame with the flood fill algorithm yields the gesture recognition region in the frame. Note that the flood fill algorithm processes every frame of image data: each frame is recognized with the flood fill algorithm.
Because the user is unlikely to move the hand over a corner of the background auxiliary module, first, for each frame in the multiple frames of image data, the image processing module defines at least one corner coordinate of the background auxiliary module in the frame and processes the at least one corner coordinate with the flood fill algorithm, obtaining a flood image for each corner coordinate; that is, the corner coordinates serve as the pixel seed points of the flood fill algorithm. Note that in practice, under ordinary lighting conditions, taking only one corner coordinate may not yield the most accurate hand image, so 4 corner coordinates are usually taken; this embodiment of the invention does not specifically limit the number of corner coordinates taken. Next, a result threshold is defined for screening the flood images of the corner coordinates; the flood images are filtered against the result threshold, and a target flood image, i.e. a usable, clear, and accurate flood image, is extracted from them. Finally, the target flood image is adjusted to obtain the gesture recognition region, so that the determination of the gesture recognition region is not affected by extraneous factors such as hand ornaments or nail polish color, guaranteeing recognition accuracy. In essence, the resulting gesture recognition region is a binarized image describing the current hand state of the user's hand and its current position coordinates.
The adjustment of the target flood image may include dilation, erosion, and inversion. The gesture recognition region obtained by the above process is shown in Fig. 2B. Note that the gesture recognition region shown in Fig. 2B was produced from image data with a resolution of 160*120; since 160*120 image data can be recognized efficiently without affecting response speed, 160*120 image data is usually captured. In practice, the captured image data may also be 640*480; the gesture recognition region recognized from 640*480 image data, shown in Fig. 2C, is clearer and smoother than that obtained from the 160*120 image data. This embodiment of the invention does not specifically limit the resolution of the captured image data.
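The corner-seeded flood fill described above can be sketched in a few lines of pure Python. This is a minimal illustration under stated assumptions, not the patent's implementation: a real device would call `cv2.floodFill` on each frame, and the function name, tolerance value, and 4-neighbour connectivity here are all assumptions.

```python
from collections import deque

def corner_seeded_hand_mask(img, tol=10):
    """Flood-fill a grayscale image from its four corner seed points
    (as the patent does via OpenCV's floodFill) and invert the result,
    so that pixels NOT reachable from the corners -- i.e. the hand
    lying on the uniform background pad -- form a binary mask.
    `img` is a list of lists of ints; returns a same-sized 0/1 mask."""
    h, w = len(img), len(img[0])
    filled = [[False] * w for _ in range(h)]
    # the four corner coordinates are the flood-fill pixel seed points
    for sy, sx in [(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)]:
        seed_val = img[sy][sx]
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            if not (0 <= y < h and 0 <= x < w) or filled[y][x]:
                continue
            if abs(img[y][x] - seed_val) > tol:  # outside the fill tolerance
                continue
            filled[y][x] = True
            q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    # invert: unfilled pixels are the hand -> binarized gesture region
    return [[0 if filled[y][x] else 1 for x in range(w)] for y in range(h)]
```

Seeding from all four corners mirrors the patent's observation that the hand rarely covers a corner, so the corners reliably sample the background color.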
After the complete hand recognition region has been obtained by the flood fill algorithm, the image processing module transmits the gesture recognition region to the event processing module, so that the event processing module can later judge the current finger state from the hand recognition region. Specifically, it can judge whether the index finger is extended, the middle finger is extended, or the index, middle, and ring fingers are extended together, i.e. which fingers of the user's hand are currently active.
202. The grating module and the image processing module recognize the movement of the user's hand, obtain a motion parameter, and transmit the motion parameter to the event processing module.
In this embodiment of the present invention, the image acquisition unit in the image processing module captures image data continuously. Therefore, when the position of the gesture recognition region changes between two adjacent frames of image data, it can be determined that the user's hand has started to move; at that point the gestures in the multiple frames of image data need to be recognized so that the corresponding event can be triggered according to the gesture.
The gesture recognition region transmitted to the event processing module by the image processing module can only express states such as the extension or contraction of the user's hand; it cannot express vertical-direction changes of the hand such as a finger click or double click. A grating module is therefore provided on the background auxiliary module of the present invention, generating a plurality of infrared rays: when the user's hand changes in the vertical direction, the blocking and reconnection of the infrared rays make it possible to recognize the hand's vertical gestures. In this way, when the user's hand is detected to start moving, the grating module and the image processing module can begin recognizing the multiple frames of image data simultaneously: the grating module recognizes the hand's gestures in the vertical direction, the image processing module recognizes the hand's gestures in the horizontal direction, and the recognized data are transmitted to the event processing module as motion parameters, so that the event processing module can trigger the corresponding target event according to the motion parameters.
Specifically, the data obtained by the image processing module from recognizing the horizontal gestures of the user's hand is expressed as a hand motion change value, which is later used to control the movement of the mapping body on the computer device. When the user's hand moves horizontally, the data involved can be summarized as a direction and a distance. When the user's hand is detected to start moving, the image processing module obtains the pixel values of two adjacent frames in the multiple frames of image data, denoises them, and tracks the pixel-value changes with an optical flow algorithm. The motion vector of each pixel is assigned, by direction, to the corresponding motion-component quadrant (i.e. sub-region); the quadrants are compared and the one containing the most vectors is decided to be the target quadrant, and its aggregate motion component is computed. The pixel motion direction and the pixel motion component together form the hand motion change value. The optical flow tracking of pixel-value changes can be implemented with the calcOpticalFlowFarneback function in OpenCV. Because the user may move the hand as a whole or individual fingers, and moving fingers can further differ in number and in the moving direction and distance of each finger, the image processing module can mark, when generating the hand motion change value, exactly which finger and which position each pixel motion direction and pixel motion component belongs to, so that the event processing module, on receiving the hand motion change value, can clearly determine the detailed motion of each part of the user's hand and trigger an accurate target event. The present invention does not specifically limit the way in which the pixel motion direction and pixel motion component of the pixel values are determined. To guarantee recognition accuracy, the image data recognized by the image processing module is usually a grayscale image.
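The quadrant-voting step above can be sketched as follows, assuming the per-pixel flow field (e.g. the output of OpenCV's `calcOpticalFlowFarneback`) has already been flattened into a list of (dx, dy) vectors; the four-way binning, function name, and the use of the mean magnitude as the motion component are illustrative assumptions.

```python
import math

def dominant_motion(flow_vectors, bins=4):
    """Bin per-pixel motion vectors (dx, dy) by direction into `bins`
    quadrants, vote for the quadrant holding the most vectors, and
    return (direction_index, mean_magnitude): the pixel motion
    direction and pixel motion component of the hand motion change value."""
    sector = lambda dx, dy: int((math.atan2(dy, dx) % (2 * math.pi))
                                / (2 * math.pi / bins))
    buckets = {}
    for dx, dy in flow_vectors:
        if dx == 0 and dy == 0:  # static pixels carry no vote
            continue
        buckets.setdefault(sector(dx, dy), []).append(math.hypot(dx, dy))
    if not buckets:
        return None, 0.0  # hand at rest
    best = max(buckets, key=lambda s: len(buckets[s]))
    return best, sum(buckets[best]) / len(buckets[best])
```

Majority voting over direction bins makes the result robust to the scattered noise vectors that dense optical flow produces on a textured background.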
Note that if the optical flow algorithm determines that the palm is not moving and only a finger is sliding, a small window can be fixed from the hand recognition region obtained by the flood fill algorithm; the image acquisition unit extracts the image inside the small window, and the optical flow algorithm is then applied to the extracted image to judge whether a finger moves. This shortens the finger-motion recognition process, since the entire hand no longer needs to be recognized. When it is determined that the user has moved the entire palm, the small window is invalidated, and a new small window is reacquired the next time the palm is static and only a finger moves. For example, if the index and middle fingers slide up and down while the palm does not move and the grating changes, the optical flow algorithm works on the extracted small window to calculate the moving distance and direction.
In addition, when the movement of the user's hand is recognized by the optical flow algorithm, the hand coordinates obtained by the flood fill algorithm can be mapped into the image data, a region of interest can be extracted from the image data, and the optical flow algorithm can be applied to the region of interest only, shortening the optical flow recognition process and improving recognition efficiency. The present invention does not specifically limit the recognition object of the optical flow algorithm.
Specifically, the data obtained by the grating module from recognizing the vertical gestures of the user's hand is expressed as an infrared change value. The data involved when the user's hand moves vertically can be summarized as a number of movements and a moving object. When the user's hand is detected to start moving, the grating module detects the current state of the plurality of infrared rays, determines their infrared current values, obtains their infrared original values, and generates an infrared change value from the rays whose current value differs from the original value. For each of the plurality of infrared rays, the recognition process is: the grating module compares the ray's infrared current value with its infrared original value; if the current value is the blocked state and the original value is the connected state, a blocking signal is generated as the infrared change value; if the current value is the connected state and the original value is the blocked state, a connection signal is generated as the infrared change value. Note that because different fingers occupy different positions and have different lengths, the blocking and connection signals can be marked with the finger that produced them, so that the event processing module can determine from the signals in the infrared change value exactly which finger has moved.
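The per-ray comparison of infrared current values against infrared original values can be sketched as a simple diff, assuming each beam's state is a boolean; the signal literals and the use of a beam index as a stand-in for the patent's finger tags are illustrative assumptions.

```python
def infrared_change_values(original, current):
    """Compare each beam's current state against its original state and
    emit (beam_id, signal) pairs: a beam that was connected and is now
    blocked yields a blocking signal, a blocked beam that reconnects
    yields a connection signal. States: True = connected (beam reaches
    the receiver), False = blocked."""
    changes = []
    for beam_id, (orig, cur) in enumerate(zip(original, current)):
        if orig and not cur:
            changes.append((beam_id, "block"))
        elif not orig and cur:
            changes.append((beam_id, "connect"))
        # unchanged beams produce no infrared change value
    return changes
```

Because only changed beams emit signals, a resting hand generates no traffic toward the event processing module.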
Through the above process, the image processing module and the grating module obtain the hand motion change value and the infrared change value respectively. Both can then be transmitted to the event processing module as motion parameters, and the event processing module triggers the target event according to the motion parameters. Note that in practice the user's hand may move only horizontally or only vertically, so that only one of the image processing module and the grating module collects a motion parameter. The two modules therefore need not collect motion parameters simultaneously: as long as either module collects a motion parameter, it can transmit the motion parameter to the event processing module for processing. In other words, the motion parameter can be one or both of the hand motion change value and the infrared change value.
203. The event processing module determines a target event according to the motion parameter and the gesture recognition region, and triggers the target event.
In this embodiment of the present invention, after the event processing module receives the motion parameter and gesture recognition region transmitted by the grating module and the image processing module, it can determine the corresponding target event according to the motion parameter and the gesture recognition region, and then trigger the target event. Since the motion parameter includes one or both of the hand motion change value and the infrared change value, the event processing module can determine and trigger different target events in the following two ways, according to the hand motion change value and the infrared change value respectively.
Mode 1: determine the target event according to the hand motion change value.
The hand motion change value includes a pixel motion component, which indicates the moving distance of the user's hand. Considering that the user sometimes moves small distances subconsciously, and such slight movements are not deliberate and are not enough to constitute interaction between the user and the terminal, a preset threshold is provided in the event processing module. When determining the target event according to the hand motion change value, the target event is determined and triggered only when the hand motion change value exceeds the preset threshold; otherwise the user's hand is considered to be at rest and no target event is triggered.
Specifically, since the user's hand is represented on the terminal's screen in the form of a mapping body, i.e. a cursor is provided on the screen, when the pixel motion component in the hand motion change value is greater than or equal to the preset threshold, the event processing module determines that the target event is a move event and controls the mapping body of the user's hand on the terminal to move a specified distance from its current position in the direction indicated by the finger's motion direction. The specified distance is generated from the pixel motion component in the hand motion change value as follows: obtain an amplification coefficient and multiply the pixel motion component by the amplification coefficient to obtain the specified distance. In addition, considering that the user's hand does not move at a constant speed, when the hand moves quickly the acceleration of a physical mouse needs to be simulated: the hand's movement speed is calculated from its motion component over a fixed time period, and a piecewise function is used to adjust the generated specified distance. Usually the piecewise function multiplies the movement speed by a movement magnification; the present invention does not specifically limit the content of the piecewise function.
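The threshold, amplification coefficient, and piecewise speed function of this step can be sketched as follows. All of the numeric constants (threshold, gain, speed breakpoints, multipliers) are assumptions for illustration, since the patent does not fix them.

```python
def cursor_displacement(component, speed, threshold=2.0, gain=3.0):
    """Turn a pixel motion component into the cursor's specified
    distance: ignore sub-threshold (subconscious) motion, multiply by
    an amplification coefficient, then scale by a piecewise function
    of the hand's speed to mimic physical-mouse acceleration."""
    if component < threshold:
        return 0.0  # hand considered at rest, no move event
    distance = component * gain  # amplification coefficient
    # piecewise multiplier: faster hand -> larger movement magnification
    if speed < 5.0:
        multiplier = 1.0
    elif speed < 15.0:
        multiplier = 1.5
    else:
        multiplier = 2.5
    return distance * multiplier
```

The piecewise shape lets slow, precise motions map nearly one-to-one while fast flicks cover the whole screen, which is the same trade-off desktop pointer-acceleration curves make.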
In practice, since the fingers and the hand as a whole can be distinguished, in order to diversify the operations the user can perform by gesture, moving a different finger, more fingers, or the whole hand can be set to execute more complex operations; see Table 1 below for details. Note that Table 1 only gives some illustrative target events; in actual application, the target events of the present invention are not limited to the examples in Table 1, and this embodiment of the invention does not limit the number of target events involved or their specific content.
Table 1
Wherein, a click executed with the index and middle fingers requires both fingers to lift and drop simultaneously, and a slide executed with them requires both fingers to slide simultaneously. In addition, because making a gesture with the hand takes time, the hand motion change value returned by the image processing module is likely to describe the process of making the gesture, i.e. an intermediate state of the user's hand. For example, with the palm resting naturally, the thumb position is the initial state; to change to the state of the index, middle, ring, and little fingers, the action to execute is that the thumb remains still while the other fingers change from curled to naturally extended, and this transition of the other fingertips from curled to naturally extended is an intermediate state. An intermediate state cannot trigger any target event. See the following two cases:
First, when a finger of the user's hand slides downward, the event processing module determines in real time, by detecting the hand motion change value, that the target event is a move event, and moves the mapping body on the screen. But when the hand changes to a fist, the mapping body may not yet have reached the position the user wants; at that moment, the process of the fingers lifting, extending, and dropping is an intermediate state, and the event processing module does not move the mapping body. When the finger of the user's hand slides further downward, the event processing module resumes moving the mapping body in real time.
Second, when a finger of the user's hand slides upward, the event processing module likewise determines in real time, by detecting the hand motion change value, that the target event is a move event, and moves the mapping body on the screen. But when the hand changes to the fingers-extended state, the mapping body may not yet have reached the position the user wants; at that moment, the process of the fingers lifting, contracting, and dropping is an intermediate state, and the event processing module does not move the mapping body. When the finger of the user's hand slides upward again, the event processing module resumes moving the mapping body in real time.
In practice, a slide event, a magnify event, and a shrink event can also be defined in the event processing module: when the event processing module detects that a finger of the user's hand leaves the background auxiliary module and moves up, down, left, or right at a certain rate, the corresponding slide event is triggered, realizing a sliding action in that direction; or, when the event processing module detects that the spacing between two fingers of the user's hand changes, the corresponding magnify or shrink event is triggered, realizing recognition of a zoom gesture by the fingers.
Mode 2: determine the target event according to the infrared change value.
The infrared change value records the blocking and reconnection of the plurality of infrared rays by the fingers during movement. By recognizing the number and pattern of infrared changes, it can be determined how many times the user clicked, the interval between clicks, and which finger clicked. The event processing module therefore counts the number of changes in the infrared change value, determines that the click event corresponding to that number is the target event, and controls the mapping body to execute the target operation at its current position according to the target event's instruction. The target operation may be a click, scroll, slide, magnify, or shrink operation. For example, if the infrared change value contains one blocking signal and one connection signal, it can be determined that the user clicked once, and the click event can be triggered as the target event. Taking a naturally placed right index finger as an example, with the plurality of infrared rays shining from left to right: within a limited time range, if the event processing module determines from the infrared change value that the index finger was lifted and dropped, this represents one click event, and the click event is triggered once as the target event; if, within the limited time, it determines from the infrared change value that the index finger was lifted, dropped, lifted, and dropped, this represents a double-click event, and the double-click event is triggered once as the target event. Similarly, with the other fingers placed naturally, if the event processing module determines from the infrared change value that several fingers were lifted and dropped simultaneously, this represents one click event, and different events can be triggered according to the number of fingers; see Table 2 below for details. Note that Table 2 only gives some illustrative output events; in actual application, the events of the present invention are not limited to the examples in Table 2, and this embodiment of the invention does not limit the number of events involved or their specific content.
Table 2
Fingers                 Single/double click     Output
Index finger            Single click            Right-button click event
Index finger            Double click            Right-button double-click event
Index + middle finger   Single click            Left-button event
Index + middle finger   Double click            To be determined
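Counting lift/drop pairs within a limited time window, as described above, can be sketched as follows; the signal literals and event names loosely echo Table 2 and are assumptions, and the windowing itself (discarding stale signals) is left to the caller.

```python
def classify_click(signals):
    """Map the ordered infrared change values of one finger within one
    limited time window to a click type: one connect+block pair
    (finger lifted then dropped) is a single click, two pairs a double
    click, anything else no click."""
    pairs = min(signals.count("connect"), signals.count("block"))
    if pairs == 1:
        return "single_click"
    if pairs == 2:
        return "double_click"
    return None  # incomplete gesture, e.g. finger still lifted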
Note that the event corresponding to a gesture can be set by the user; the event processing module stores the correspondence between user-set gestures and events in a mapping table. Events can be defined for the various gestures the user makes on the background auxiliary module, and the correspondence between gestures and events can be set according to the user's preferences.
Steps 201 to 203 above form the process shown in Fig. 2D, which can be briefly summarized as follows:
The virtual mouse driving device checks whether the grating flag bit is captured and captures image data to determine whether the user has placed a hand on the mouse pad. When it determines that the hand is on the mouse pad, it recognizes the hand contour of the user's hand and determines the motion parameter of the hand; when it determines that the hand is not on the mouse pad, it keeps the current captured state. It then determines from the motion parameter whether the palm of the user's hand has moved, and stores the current hand state as the previous-frame state. If the palm has moved, a move event is triggered as the target event; if the palm has not moved, the hand contour is recognized to judge whether a finger of the user's hand has moved. If a finger has moved, a move event is triggered as the target event; if no finger has moved, it is determined from the motion parameter whether a finger has clicked. If a finger has clicked, a click event is triggered as the target event; if no finger has clicked, the current state is kept. Through the grating module and the image processing module, the various gestures and operations of the user's hand on the background auxiliary module are recognized, so that the terminal can be controlled by hand operations alone, without relying on a physical mouse, minimizing the user's risk of illness from holding a physical mouse for long periods and avoiding injury to the user's hand.
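The Fig. 2D decision flow summarized above can be sketched as a single dispatch function, assuming the palm/finger/click states for the current frame have already been derived by the grating module and the image processing module; the boolean inputs and event names are simplifications chosen for illustration.

```python
def dispatch(grating_flag, hand_on_pad, palm_moved, finger_moved, finger_clicked):
    """One pass of the Fig. 2D decision flow: given the states the
    modules have extracted for the current frame, pick the target
    event to trigger, checked in the same order as the flowchart."""
    if not (grating_flag and hand_on_pad):
        return "keep_current_state"  # hand not placed on the mouse pad
    if palm_moved or finger_moved:
        return "move_event"          # whole-palm movement or finger slide
    if finger_clicked:
        return "click_event"
    return "keep_current_state"
```

Checking palm movement before finger clicks matches the flowchart's priority: a moving hand is always a cursor move, and clicks are only considered once the hand is otherwise still.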
In the method provided by the present invention, the grating module and the image processing module recognize the various gestures and operations of the user's hand on the background auxiliary module, so that the terminal can be controlled by hand operations alone, without relying on a physical mouse. This minimizes the user's risk of illness from holding a physical mouse for long periods, avoids injury to the user's hand, and improves user retention.
In an exemplary embodiment, referring to Fig. 3, a device is further provided. The device includes a communication bus, a processor, a memory, and a communication interface, and may further include an input/output interface and a display device, where the functional units can communicate with one another over the bus. The memory stores a computer program, and the processor executes the program stored on the memory to perform the gesture recognition method for a virtual mouse of the above embodiments.
From the above description of the embodiments, those skilled in the art can clearly understand that the present application can be implemented in hardware, or in software plus the necessary general hardware platform. Based on this understanding, the technical solution of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the gesture recognition method for a virtual mouse described in each implementation scenario of the present application.
Those skilled in the art will appreciate that the accompanying drawings are only schematic diagrams of a preferred implementation scenario, and that the modules or processes in the drawings are not necessarily required to implement the present application.
Those skilled in the art will appreciate that the modules of a device in an implementation scenario can be distributed among the devices of that scenario as described, or relocated, with corresponding changes, into one or more devices different from the present scenario. The modules of the above implementation scenario can be merged into one module or further split into multiple submodules.
The above serial numbers of the present application are for description only and do not represent the relative merits of the implementation scenarios.
Disclosed above are only several specific implementation scenarios of the present application; however, the present application is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present application.

Claims (10)

1. A virtual mouse driving device, characterized in that the virtual mouse driving device comprises: a background auxiliary module, a grating module, an image processing module, and an event processing module;
the background auxiliary module defines the placement position of a user's hand and includes a gesture recognition region;
the grating module is arranged above the left and right edges of the background auxiliary module and generates a plurality of infrared rays; when blocking or reconnection of the plurality of infrared rays is detected, the grating module generates an infrared change value and transmits the infrared change value to the event processing module;
the image processing module collects complete multiple frames of image data of the background auxiliary module, recognizes the multiple frames of image data to obtain a hand motion change value and the gesture recognition region, and transmits the hand motion change value and the gesture recognition region to the event processing module;
the event processing module is connected to the grating module and the image processing module, receives the infrared change value returned by the grating module and the hand motion change value and the gesture recognition region returned by the image processing module, takes the infrared change value and the hand motion change value as a motion parameter, and determines and triggers a target event according to the motion parameter and the gesture recognition region.
2. The virtual mouse driving device according to claim 1, characterized in that the grating module comprises: a plurality of basic opposed-beam units and a single-chip microcontroller;
the plurality of basic opposed-beam units generate the plurality of infrared rays;
the single-chip microcontroller is connected to the event processing module; when it detects that any infrared ray of the plurality of infrared rays is blocked, it generates a blocking signal as the infrared change value, and when it detects that a blocked infrared ray of the plurality of infrared rays is reconnected, it generates a connection signal as the infrared change value, and transmits the infrared change value to the event processing module.
3. The virtual mouse driving device according to claim 2, characterized in that each basic opposed-beam unit of the plurality of basic opposed-beam units comprises an infrared emission unit and an infrared receiver;
the infrared emission unit is arranged above the left edge or the right edge of the background auxiliary module, the emission port of the infrared emission unit faces the infrared receiver, and the infrared emission unit emits an infrared ray to the infrared receiver;
the infrared receiver is arranged above the edge of the background auxiliary module opposite the infrared emission unit, the receiving port of the infrared receiver faces the infrared emission unit, and the infrared receiver receives the infrared ray emitted by the infrared emission unit.
4. The virtual mouse driving device according to claim 1, characterized in that the image processing module comprises an image acquisition unit and an image processing unit;
the image acquisition unit collects the complete multiple frames of image data of the background auxiliary module;
the image processing unit receives the multiple frames of image data returned by the image acquisition unit, recognizes the gesture recognition region in the multiple frames of image data using a flood fill algorithm, tracks the pixel-value changes of the multiple frames of image data using an optical flow algorithm, determines the pixel motion direction and pixel motion component of the pixel values, takes the pixel motion direction and the pixel motion component as the hand motion change value, and transmits the hand motion change value and the gesture recognition region to the event processing module.
5. A gesture recognition method for a virtual mouse, characterized in that the method comprises:
An image processing module acquires a plurality of image data of a background supplementary module on which a grating module generates a plurality of infrared beams;
When the start of motion of a user's hand is detected, the grating module and the image processing module identify the plurality of image data, obtain motion parameters and a gesture recognition region, and transmit the motion parameters and the gesture recognition region to an event processing module, the motion parameters including one or both of a hand motion changing value and an infrared changing value;
The event processing module determines a target event according to the motion parameters and the gesture recognition region, and triggers the target event.
6. The method according to claim 5, characterized in that obtaining the gesture recognition region comprises:
The image processing module acquires a plurality of image data and, for each image data among the plurality of image data, defines at least one corner coordinate of the background supplementary module in the image data;
The image processing module processes the at least one corner coordinate using a flood-fill algorithm to obtain flood-fill images of the at least one corner coordinate;
The image processing module filters the flood-fill images of the at least one corner coordinate according to a result threshold, and extracts from them a target flood-fill image that meets the standard of the result threshold;
The image processing module adjusts the target flood-fill image to obtain the gesture recognition region.
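The corner-seeded flood fill and result-threshold filtering of claim 6 can be sketched in simplified form. This is an illustrative stand-in using a plain BFS fill over a small integer image; the function names, tolerance, and the bounding-box "adjustment" step are assumptions, not the patent's implementation (a practical version would more likely apply `cv2.floodFill` to camera frames):

```python
from collections import deque

def flood_fill(img, seed, tol=5):
    """BFS flood fill: pixels 4-connected to `seed` whose value is within
    `tol` of the seed value. `img` is a list of rows of integers."""
    h, w = len(img), len(img[0])
    sx, sy = seed
    base = img[sy][sx]
    seen = {(sx, sy)}
    queue = deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen
                    and abs(img[ny][nx] - base) <= tol):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen

def gesture_region(img, corners, area_threshold=4):
    """Flood-fill from each corner coordinate, keep fills whose size meets
    the result threshold, and 'adjust' the union to a bounding box."""
    kept = set()
    for corner in corners:
        fill = flood_fill(img, corner)
        if len(fill) >= area_threshold:      # result-threshold filter
            kept |= fill
    if not kept:
        return None
    xs = [x for x, _ in kept]
    ys = [y for _, y in kept]
    return (min(xs), min(ys), max(xs), max(ys))  # (x0, y0, x1, y1)
```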
7. The method according to claim 5, characterized in that obtaining the motion parameters comprises:
When the start of motion of the user's hand is detected, the image processing module obtains the pixel values of the plurality of image data, tracks changes in the pixel values using an optical-flow algorithm, determines the pixel motion direction and the pixel motion component of the pixel values, and takes the pixel motion direction and the pixel motion component as the hand motion changing value; and/or
The grating module detects the current state of the plurality of infrared beams, determines the infrared current values of the plurality of infrared beams, obtains the infrared original values of the plurality of infrared beams, and generates the infrared changing value according to the beams whose infrared current value differs from their infrared original value;
The image processing module and/or the grating module take the hand motion changing value and/or the infrared changing value as the motion parameters.
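As a rough illustration of how a hand motion changing value (a direction plus a component) might be derived between two frames, the sketch below tracks the centroid of bright pixels instead of running a true optical-flow algorithm such as Lucas-Kanade; it is a deliberately simplified stand-in, and every name in it is an assumption:

```python
import math

def hand_motion_change(prev, curr, intensity_min=128):
    """Simplified stand-in for optical-flow tracking: follow the centroid of
    bright (assumed hand) pixels between two frames and report the motion
    direction (radians) and motion component (magnitude in pixels)."""
    def centroid(img):
        pts = [(x, y) for y, row in enumerate(img)
               for x, v in enumerate(row) if v >= intensity_min]
        if not pts:
            return None
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

    c0, c1 = centroid(prev), centroid(curr)
    if c0 is None or c1 is None:
        return None
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)  # (direction, component)
```

A dense optical-flow method would estimate a direction and component per pixel; this sketch collapses that to one global motion vector for clarity.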
8. The method according to claim 7, characterized in that generating the infrared changing value according to the beams whose infrared current value differs from their infrared original value comprises:
For any infrared beam among the plurality of infrared beams, the grating module compares the infrared current value of the beam with its infrared original value;
If the infrared current value is a blocked state and the infrared original value is a connected state, a blocking signal is generated as the infrared changing value;
If the infrared current value is a connected state and the infrared original value is a blocked state, a connection signal is generated as the infrared changing value.
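The comparison in claim 8 is a per-beam diff between an original snapshot and the current state. A minimal sketch (state labels and signal names are illustrative assumptions):

```python
# State labels and signal names are illustrative assumptions.
BLOCKED, CONNECTED = "blocked", "connected"

def infrared_changes(original, current):
    """Compare each beam's current state against its original state and
    emit a blocking/connection signal for every beam that differs."""
    changes = {}
    for beam, now in current.items():
        if now != original.get(beam):
            changes[beam] = "blocking_signal" if now == BLOCKED else "connection_signal"
    return changes
```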
9. The method according to claim 5, characterized in that the event processing module determining the target event according to the motion parameters and the gesture recognition region, and triggering the target event, comprises:
When the pixel motion component in the hand motion changing value is greater than or equal to a preset threshold, the event processing module determines that the target event is a movement event, controls the mapping body of the user's hand on the terminal to move a specified distance from its current location in the direction indicated by the finger movement direction, and executes the operation requested for the mapping body of the user's hand on the terminal, the value of the specified distance being generated according to the pixel motion component in the hand motion changing value; and/or
The event processing module counts the number of changes of the infrared changing value, determines that the click event corresponding to the number of changes is the target event, and controls the mapping body to execute, at its current location, the target operation indicated by the target event.
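Claim 9's event dispatch can be sketched as a small function that turns the motion parameters into move and click events; the threshold, step size, and click mapping below are invented for illustration and are not specified by the patent:

```python
import math

def dispatch(hand_change, ir_change_count, pos, threshold=5.0, step=10):
    """Turn motion parameters into events: a motion component at or above
    the preset threshold moves the on-screen mapping body; infrared change
    counts map to click events. Returns (new_position, events)."""
    events = []
    if hand_change is not None:
        direction, component = hand_change          # (radians, magnitude)
        if component >= threshold:
            scale = step * (component / threshold)  # distance grows with component
            pos = (pos[0] + round(scale * math.cos(direction)),
                   pos[1] + round(scale * math.sin(direction)))
            events.append(("move", pos))
    if ir_change_count:
        # Illustrative mapping: 2 changes (block + restore) -> click,
        # 4 or more -> double click.
        events.append(("double_click" if ir_change_count >= 4 else "click", pos))
    return pos, events
```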
10. A device, comprising a memory and a processor, the memory storing a program, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 5 to 9.
CN201910441221.5A 2019-05-24 2019-05-24 Virtual mouse driving device, gesture recognition method and device for virtual mouse Active CN110221717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910441221.5A CN110221717B (en) 2019-05-24 2019-05-24 Virtual mouse driving device, gesture recognition method and device for virtual mouse

Publications (2)

Publication Number Publication Date
CN110221717A (en) 2019-09-10
CN110221717B CN110221717B (en) 2024-07-09

Family

ID=67818105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910441221.5A Active CN110221717B (en) 2019-05-24 2019-05-24 Virtual mouse driving device, gesture recognition method and device for virtual mouse

Country Status (1)

Country Link
CN (1) CN110221717B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102200834A (en) * 2011-05-26 2011-09-28 华南理工大学 Television-control-oriented finger-mouse interaction method
CN102324020A (en) * 2011-09-02 2012-01-18 北京新媒传信科技有限公司 Method and device for recognizing human skin-color regions
KR20120047746A (en) * 2011-06-16 2012-05-14 주식회사 매크론 Virture mouse driving method
CN102799317A (en) * 2012-07-11 2012-11-28 联动天下科技(大连)有限公司 Smart interactive projection system
US20130111414A1 (en) * 2011-10-31 2013-05-02 Institute For Information Industry Virtual mouse driving apparatus and virtual mouse simulation method
CN103699274A (en) * 2013-12-27 2014-04-02 三星电子(中国)研发中心 Information input device and method based on projection and infrared sensing
KR101404018B1 (en) * 2013-02-01 2014-06-10 전자부품연구원 Device for recognizing the hand gesture and method thereof
CN107239727A (en) * 2016-12-07 2017-10-10 北京深鉴智能科技有限公司 Gesture recognition method and system
CN107958218A (en) * 2017-11-22 2018-04-24 南京邮电大学 A real-time gesture recognition method
CN108010047A (en) * 2017-11-23 2018-05-08 南京理工大学 A moving-target detection method combining sample consensus and local binary patterns
CN108446073A (en) * 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 Method, apparatus and terminal for simulating mouse operations using gestures
CN109324701A (en) * 2017-07-26 2019-02-12 罗技欧洲公司 Dual-mode optical input device

Also Published As

Publication number Publication date
CN110221717B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
CN108334814B (en) Gesture recognition method of AR system
US11650659B2 (en) User input processing with eye tracking
CN110532984A Key point detection method, gesture recognition method, apparatus and system
US20120117514A1 (en) Three-Dimensional User Interaction
TWI471815B (en) Gesture recognition device and method
JP2021522591A (en) How to distinguish a 3D real object from a 2D spoof of a real object
CN104364733A (en) Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
CN103207709A (en) Multi-touch system and method
US9836130B2 (en) Operation input device, operation input method, and program
CN114138121B (en) User gesture recognition method, device and system, storage medium and computing equipment
CN108629272A An embedded gesture control method and system based on a monocular camera
CN104035557A Kinect action recognition method based on joint activity
US20220230079A1 (en) Action recognition
CN202159302U Augmented reality system with user interaction and input functions
CN114299604A Hand skeleton capture and gesture discrimination method based on two-dimensional images
JP6810048B2 Method for simulating and controlling a virtual ball on a mobile device
CN105046249A A human-computer interaction method
CN110619630B Robot-based mobile device visual test system and test method
KR20110097504A (en) User motion perception method and apparatus
CN107145741A Ear-diagnosis data collection method and device based on image analysis
KR20230080938A (en) Method and apparatus of gesture recognition and classification using convolutional block attention module
CN109947243A Gesture capture and recognition technology of intelligent electronic devices based on touching-hand detection
CN110221717A (en) Virtual mouse driving device, gesture identification method and equipment for virtual mouse
CN111753796A (en) Method and device for identifying key points in image, electronic equipment and storage medium
KR101868520B1 (en) Method for hand-gesture recognition and apparatus thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant