CN110209273A - Gesture identification method, interaction control method, device, medium and electronic equipment - Google Patents


Info

Publication number
CN110209273A
CN110209273A
Authority
CN
China
Prior art keywords
gesture
image
gesture image
camera
electronic equipment
Prior art date
Legal status
Granted
Application number
CN201910435353.7A
Other languages
Chinese (zh)
Other versions
CN110209273B (en)
Inventor
黄锋华
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910435353.7A
Publication of CN110209273A
Application granted
Publication of CN110209273B
Status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/10 — Image acquisition
    • G06V10/12 — Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 — Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 — Sensing or illuminating at different wavelengths
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 — Static hand or arm
    • G06V40/113 — Recognition of static hand signs
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a gesture recognition method and apparatus, a human-computer interaction method and apparatus, a computer-readable storage medium, and an electronic device, belonging to the field of human-computer interaction. The gesture recognition method is applied to an electronic device that includes a first camera and a second camera disposed on the same side of the device. The method includes: acquiring a first gesture image through the first camera and a second gesture image through the second camera, wherein the first gesture image is a depth image and the second gesture image is a planar image; when it is detected that the first gesture image does not reach a preset quality standard, processing the second gesture image to recognize the gesture in the second gesture image; and when it is detected that the first gesture image reaches the preset quality standard, processing the first gesture image to recognize the gesture in the first gesture image. The present disclosure can realize a more robust gesture recognition algorithm under existing hardware conditions and improve the accuracy of gesture recognition.

Description

Gesture recognition method, interaction control method, apparatus, medium and electronic device
Technical field
The present disclosure relates to the field of human-computer interaction, and in particular to a gesture recognition method, an interaction control method, a gesture recognition apparatus, an interaction control apparatus, a computer-readable storage medium, and an electronic device.
Background
Gesture-based human-computer interaction refers to recognizing a person's operating gestures without contacting the device, using techniques such as computer vision and graphics, and converting them into control instructions for the device. Gesture interaction is a new interaction mode that emerged after the mouse, keyboard, and touch screen; it frees traditional interaction from its dependence on input devices and has already been widely applied in fields such as virtual reality and augmented reality.
Gesture interaction has also seen some development on mobile terminals such as smartphones and tablet computers: phones equipped with a TOF (Time of Flight) camera have appeared, which recognize gestures by capturing depth images and thereby perform interaction control. However, the capability of the TOF camera on existing phones is limited: it cannot accurately detect the depth information of objects that are too close to or too far from the camera, and it handles objects of black or highly reflective materials, scenes with large illumination changes, and the like poorly. As a result, the gesture recognition algorithm has low robustness, which affects normal interaction.
Therefore, how to improve the accuracy of gesture recognition under existing hardware conditions and realize a more robust algorithm, so as to guarantee that interaction proceeds normally, is a problem the prior art urgently needs to solve.
It should be noted that the information disclosed in the Background section above is only intended to enhance the understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary of the invention
The present disclosure provides a gesture recognition method, an interaction control method, a gesture recognition apparatus, an interaction control apparatus, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem that the prior art cannot accurately recognize gestures.
Other characteristics and advantages of the present disclosure will become apparent from the following detailed description, or will be learned in part through practice of the present disclosure.
According to a first aspect of the present disclosure, a gesture recognition method is provided, applied to an electronic device that includes a first camera and a second camera disposed on the same side of the electronic device. The method includes: acquiring a first gesture image through the first camera and a second gesture image through the second camera, wherein the first gesture image is a depth image and the second gesture image is a planar image; when it is detected that the first gesture image does not reach a preset quality standard, processing the second gesture image to recognize the gesture in the second gesture image; and when it is detected that the first gesture image reaches the preset quality standard, processing the first gesture image to recognize the gesture in the first gesture image.
In an exemplary embodiment of the present disclosure, whether the first gesture image reaches the preset quality standard is detected as follows: detecting whether the depth value of each pixel in the first gesture image is invalid; counting the proportion of pixels with invalid depth values in the first gesture image; and judging whether the proportion is less than a preset proportion threshold — if so, the first gesture image reaches the preset quality standard.
In an exemplary embodiment of the present disclosure, whether the first gesture image reaches the preset quality standard is detected as follows: converting the first gesture image into a planar image, and detecting whether the similarity between the first gesture image and the second gesture image reaches a preset similarity threshold — if so, the first gesture image reaches the preset quality standard.
In an exemplary embodiment of the present disclosure, before processing the first gesture image to recognize the gesture in the first gesture image, the method further includes: registering the first gesture image with the second gesture image; and optimizing the registered first gesture image by using the registered second gesture image, wherein the optimization includes one or more of edge filtering, hole filling, and distortion correction.
In an exemplary embodiment of the present disclosure, the first camera is an infrared-based TOF camera and the first gesture image is a TOF image. The method further includes: acquiring an infrared image through the first camera; and preprocessing the TOF image by using the infrared image, wherein the preprocessing includes one or more of cropping, noise removal, and pixel filtering based on depth-value confidence.
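The confidence-based pixel filtering just mentioned can be illustrated with a minimal sketch. It assumes, as is common for TOF sensors, that the infrared amplitude image serves as a per-pixel confidence proxy; the function name and amplitude threshold below are hypothetical, not taken from the patent.

```python
import numpy as np

def filter_by_ir_confidence(tof_depth, ir_amplitude, min_amplitude=50.0):
    """Invalidate (zero out) depth pixels whose infrared amplitude is
    below a minimum, treating amplitude as a depth-confidence proxy."""
    out = tof_depth.copy()
    out[ir_amplitude < min_amplitude] = 0.0
    return out
```

The zeroed pixels would then count as invalid in the later quality check.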
In an exemplary embodiment of the present disclosure, the gesture in the first gesture image includes information on hand skeleton points and/or information on hand posture. Processing the first gesture image to recognize the gesture in it includes: recognizing the first gesture image with a pre-trained first neural network model to obtain the hand skeleton point information; and/or recognizing the first gesture image with a pre-trained second neural network model to obtain the hand posture information.
In an exemplary embodiment of the present disclosure, before the first gesture image is recognized by the first neural network model or the second neural network model, the method further includes: performing background subtraction on the first gesture image to obtain a first gesture image containing only the hand foreground.
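As a rough illustration of this background subtraction step, the sketch below keeps only depth pixels within an assumed hand-distance band and zeroes the rest, yielding an image containing only the hand foreground. The band limits and function name are hypothetical; a real pipeline might instead segment the nearest connected component.

```python
import numpy as np

def hand_foreground(depth, near_mm=200.0, far_mm=800.0):
    """Keep depth pixels inside an assumed hand-distance band and zero
    everything else, leaving only the hand foreground."""
    mask = (depth >= near_mm) & (depth <= far_mm)
    return np.where(mask, depth, 0.0)
```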
According to a second aspect of the present disclosure, an interaction control method is provided, applied to an electronic device that includes a first camera and a second camera disposed on the same side of the electronic device. The method includes: recognizing a gesture through the gesture recognition method of any of the above; and executing a control instruction according to the gesture.
In an exemplary embodiment of the present disclosure, the gesture includes information on hand skeleton points and/or information on hand posture. Executing a control instruction according to the gesture includes: executing the control instruction corresponding to the hand posture; and/or, according to the mapping point of a hand skeleton point in the graphical user interface of the electronic device, triggering execution of the control option at the mapping point.
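The skeleton-point branch of this step can be sketched, under stated assumptions, as a plain proportional mapping from camera-image coordinates to screen coordinates, after which the control option under the mapped point would be triggered. The function and its calibration-free mapping are illustrative assumptions, not the patent's concrete implementation.

```python
def skeleton_point_to_ui(point_px, cam_size, screen_size):
    """Map a hand skeleton point detected in camera pixels to a point
    in the graphical user interface by proportional scaling."""
    cx, cy = point_px
    cam_w, cam_h = cam_size
    scr_w, scr_h = screen_size
    return (cx * scr_w / cam_w, cy * scr_h / cam_h)
```

A real device would additionally account for mirroring and camera/screen calibration.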
According to a third aspect of the present disclosure, a gesture recognition apparatus is provided, applied to an electronic device that includes a first camera and a second camera disposed on the same side of the electronic device. The apparatus includes: an image acquisition module, configured to acquire a first gesture image through the first camera and a second gesture image through the second camera, wherein the first gesture image is a depth image and the second gesture image is a planar image; a first recognition module, configured to process the second gesture image to recognize the gesture in it when it is detected that the first gesture image does not reach a preset quality standard; and a second recognition module, configured to process the first gesture image to recognize the gesture in it when it is detected that the first gesture image reaches the preset quality standard.
In an exemplary embodiment of the present disclosure, the gesture recognition apparatus further includes a quality detection module, which in turn includes: a depth value detection unit, configured to detect whether the depth value of each pixel in the first gesture image is invalid; an invalid ratio statistics unit, configured to count the proportion of pixels with invalid depth values in the first gesture image; and a quality standard judgment unit, configured to judge whether the proportion is less than a preset proportion threshold, the first gesture image reaching the preset quality standard if it is.
In an exemplary embodiment of the present disclosure, the gesture recognition apparatus further includes a quality detection module, configured to convert the first gesture image into a planar image and detect whether the similarity between the first gesture image and the second gesture image reaches a preset similarity threshold, the first gesture image reaching the preset quality standard if it does.
In an exemplary embodiment of the present disclosure, before processing the first gesture image to recognize the gesture in it, the second recognition module is further configured to register the first gesture image with the second gesture image and to optimize the registered first gesture image by using the registered second gesture image, the optimization including one or more of edge filtering, hole filling, and distortion correction.
In an exemplary embodiment of the present disclosure, the first camera is an infrared-based TOF camera and the first gesture image is a TOF image; the image acquisition module is further configured to acquire an infrared image through the first camera while acquiring the TOF image and the second gesture image; and the gesture recognition apparatus further includes a preprocessing module, configured to preprocess the TOF image by using the infrared image, the preprocessing including one or more of cropping, noise removal, and pixel filtering based on depth-value confidence.
In an exemplary embodiment of the present disclosure, the gesture in the first gesture image includes information on hand skeleton points and/or information on hand posture; the second recognition module includes a skeleton point recognition unit and/or a posture recognition unit, wherein the skeleton point recognition unit is configured to recognize the first gesture image with a pre-trained first neural network model to obtain the hand skeleton point information, and the posture recognition unit is configured to recognize the first gesture image with a pre-trained second neural network model to obtain the hand posture information.
In an exemplary embodiment of the present disclosure, the second recognition module further includes a background subtraction unit, configured to perform background subtraction on the first gesture image before the skeleton point recognition unit or the posture recognition unit recognizes it, so as to obtain a first gesture image containing only the hand foreground.
According to a fourth aspect of the present disclosure, an interaction control apparatus is provided, applied to an electronic device that includes a first camera and a second camera disposed on the same side of the electronic device. The apparatus includes: an image acquisition module, configured to acquire a first gesture image through the first camera and a second gesture image through the second camera; a first recognition module, configured to process the second gesture image to recognize the gesture in it when it is detected that the first gesture image does not reach a preset quality standard; a second recognition module, configured to process the first gesture image to recognize the gesture in it when it is detected that the first gesture image reaches the preset quality standard; and an instruction execution module, configured to execute a control instruction according to the gesture.
In an exemplary embodiment of the present disclosure, the interaction control apparatus includes all the modules/units of any one of the above gesture recognition apparatuses, together with the instruction execution module.
In an exemplary embodiment of the present disclosure, the gesture includes the hand skeleton point information and/or the hand posture information; the instruction execution module includes a first execution unit and/or a second execution unit, wherein the first execution unit is configured to execute the control instruction corresponding to the hand posture, and the second execution unit is configured to trigger, according to the mapping point of a hand skeleton point in the graphical user interface of the electronic device, execution of the control option at the mapping point.
According to a fifth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the gesture recognition method of any of the above or the interaction control method of any of the above.
According to a sixth aspect of the present disclosure, an electronic device is provided, including: a first camera for acquiring a first gesture image, the first gesture image being a depth image; a second camera for acquiring a second gesture image, the second gesture image being a planar image; a processor; and a memory for storing executable instructions of the processor. The first camera and the second camera are disposed on the same side of the electronic device, and the processor is configured, by executing the executable instructions, to perform the gesture recognition method of any of the above, so as to recognize the gesture in the first or second gesture image, or the interaction control method of any of the above, so as to recognize the gesture in the first or second gesture image and execute a control instruction according to the gesture.
The exemplary embodiments of the present disclosure have the following beneficial effects:
Through the first camera and the second camera of the electronic device, a first gesture image with depth information and a planar second gesture image are acquired respectively; when the quality of the first gesture image is high, the gesture is recognized by processing the first gesture image, and when it is low, by processing the second gesture image. Thus, under existing hardware conditions, a more robust gesture recognition algorithm is realized, the influence of the depth camera's defects on the recognition result is overcome, and the accuracy of gesture recognition is improved. Moreover, the present exemplary embodiment starts from application scenarios such as mobile terminals: the processes of image acquisition and image processing are simple, and the computation involved in the gesture recognition algorithm is low, so it has high applicability.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, show embodiments consistent with the present disclosure and, together with the specification, serve to explain its principles. It is evident that the drawings described below are only some embodiments of the present disclosure; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 shows a flowchart of a gesture recognition method in the present exemplary embodiment;
Fig. 2 shows a sub-flowchart of a gesture recognition method in the present exemplary embodiment;
Fig. 3 shows a flowchart of an interaction control method in the present exemplary embodiment;
Fig. 4 shows a structural block diagram of a gesture recognition apparatus in the present exemplary embodiment;
Fig. 5 shows a structural block diagram of an interaction control apparatus in the present exemplary embodiment;
Fig. 6 shows a computer-readable storage medium for implementing the above methods in the present exemplary embodiment;
Fig. 7 shows a structural block diagram of an electronic device for implementing the above methods in the present exemplary embodiment;
Fig. 8 shows a structural block diagram of another electronic device for implementing the above methods in the present exemplary embodiment.
Detailed description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The exemplary embodiments of the present disclosure first provide a gesture recognition method, which can be applied to an electronic device including a first camera and a second camera — two cameras or camera modules with different functions. The first camera and the second camera are disposed on the same side of the electronic device, usually both on the front or both on the back, and are used to acquire images of the same side of the device. The executing subject of the present exemplary embodiment may be a mobile phone, a tablet computer, or a smart TV, PC, or similar device equipped with dual cameras. Taking a mobile phone as an example, the first and second cameras may be two front cameras or two rear cameras, or the first camera may be a front camera embedded in the front of the phone while the second camera is a pop-up front camera, and so on; the present disclosure does not limit this.
Fig. 1 shows the method flow of the present exemplary embodiment, including steps S110 to S130:
In step S110, a first gesture image is acquired through the first camera, and a second gesture image is acquired through the second camera.
Here, the first camera may be a depth camera, such as a TOF camera or a structured-light camera, whose captured images contain the depth information of objects — e.g. TOF images, structured-light three-dimensional images, etc. The second camera is an ordinary planar imaging camera, whose captured planar images may be RGB images, grayscale images, etc. In the present exemplary embodiment, the user may turn on the gesture recognition function on the phone to start the first camera and the second camera; the user then performs a gesture in the region in front of the phone, and the phone's front-facing first and second cameras respectively capture a depth image and a planar image of that gesture, namely the first gesture image and the second gesture image. It should be noted that the first gesture image and the second gesture image are acquired simultaneously — for example, the two cameras shoot synchronously and the two images belong to the same frame — so that both images record the same gesture.
In step S120, when it is detected that the first gesture image does not reach the preset quality standard, the second gesture image is processed to recognize the gesture in the second gesture image.
In step S130, when it is detected that the first gesture image reaches the preset quality standard, the first gesture image is processed to recognize the gesture in the first gesture image.
The present exemplary embodiment takes into account that the depth camera on the electronic device is limited in capability: it cannot accurately detect the depth information of objects too close to or too far from the camera, and it handles black or highly reflective materials, scenes with large illumination changes, and the like poorly. Therefore, when the quality of the first gesture image is poor, the accuracy of its content is low and it is hardly suitable as a basis for gesture recognition. The preset quality standard of the present exemplary embodiment is used to judge whether the quality of the first gesture image is up to standard, i.e. whether the image is usable: if not, the gesture is recognized from the second gesture image; if usable, from the first gesture image. Either way, the purpose is to correctly recognize the user's operating gesture. When the first gesture image is of high quality, it contains depth information and is therefore richer than the planar second gesture image, so recognizing the gesture from it is more accurate; step S130 is then executed preferentially, and otherwise step S120 is executed.
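The dispatch between steps S120 and S130 can be sketched as follows. The quality predicate is passed in (for instance the invalid-pixel ratio test of steps S201 to S203 described later), and the two recognizers are placeholders standing in for the per-image-type models the patent describes; all names here are illustrative assumptions.

```python
def recognize_gesture(depth_img, rgb_img, depth_image_ok):
    """Dispatch per steps S120/S130: recognize from the depth image
    when it meets the preset quality standard, otherwise fall back
    to the planar image."""
    if depth_image_ok(depth_img):
        return recognize_from_depth(depth_img)   # step S130
    return recognize_from_rgb(rgb_img)           # step S120

def recognize_from_depth(img):
    # placeholder for the 4-channel (RGB-D) recognition model
    return "gesture-from-depth"

def recognize_from_rgb(img):
    # placeholder for the 3-channel (RGB) recognition model
    return "gesture-from-rgb"
```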
When processing the images, a deep learning model may be used: for example, a convolutional neural network model is trained in advance, and the image to be recognized is input into the model to obtain the gesture recognition result. It should be noted that since the dimensionality or number of channels of the first and second gesture images usually differs, a model may be trained separately for each image class — for example, the model for recognizing the first gesture image (an RGB-D image) takes a 4-channel input, while the model for the second gesture image (an RGB image) takes a 3-channel input. Image processing may also use gesture comparison: multiple standard gestures are predefined, the gesture portion of the image to be recognized is extracted, the closest standard gesture is determined, and the gesture is recognized as that one. Gesture recognition is usually a continuous process: the first and second cameras acquire depth and planar images of successive frames, so the gesture in the previous frame can also be combined to judge the gesture in the current frame — for example, by detecting the degree of coincidence between the previous frame and the current frame; if it is high, the user's gesture is considered unchanged and the previous frame's recognition result is taken as the current frame's result. Steps S120 and S130 may also be started after acquiring depth and planar images of multiple consecutive frames: if all, or more than a certain proportion, of the consecutive depth images reach the preset quality standard, the gesture is recognized from the change of the hand across the consecutive depth images, and otherwise from the change of the hand across the consecutive planar images. The present disclosure does not limit the specific manner of image processing.
Based on the above description, the present exemplary embodiment acquires, through the first and second cameras of the electronic device, a first gesture image with depth information and a planar second gesture image respectively; when the first gesture image is of high quality, the gesture is recognized by processing it, and when its quality is low, by processing the second gesture image. Thus, under existing hardware conditions, a more robust gesture recognition algorithm is realized, the influence of the depth camera's defects on the recognition result is overcome, and the accuracy of gesture recognition is improved. Moreover, starting from application scenarios such as mobile terminals, the processes of image acquisition and image processing in this exemplary embodiment are simple and the computation involved in the algorithm is low, so it has high applicability.
When detecting the quality of the first gesture image, the main question is whether the depth information in the image accurately reflects the actually photographed hand. Based on this principle, for depth images of different types, with different image information, or in different application scenarios, the method and standard used may differ case by case; the present disclosure does not limit this. Several specific detection method examples are given below.
(1) In an exemplary embodiment, as shown in Fig. 2, whether the first gesture image reaches the preset quality standard can be detected through the following steps S201 to S203:
In step S201, whether the depth value of each pixel in the first gesture image is invalid is detected;
In step S202, the proportion of pixels with invalid depth values in the first gesture image is counted;
In step S203, whether the proportion is less than a preset proportion threshold is judged; if so, the first gesture image reaches the preset quality standard.
When the first camera shoots the first gesture image, if the object (or part of it) is beyond the camera's detection range, or the scene has abnormal illumination conditions — for example, illumination so strong that the hand image is overexposed — the camera cannot accurately detect the depth of the corresponding part, and usually outputs the depth values of the corresponding pixels as invalid or abnormal values. For example, when a TOF camera shoots a hand image and the hand is too far away, the sensed time of flight exceeds the upper limit, and the depth values of the hand pixels are recorded as the upper-limit value or other abnormal values; the depth values of these pixels are therefore not credible. If the proportion of such pixels in the entire first gesture image is too high, the quality of the whole image is low. In the present exemplary embodiment, the preset proportion threshold may be set, according to experience, scene requirements, the characteristics of the first camera, and so on, to 20% or 30%, for example; when the proportion of invalid-depth pixels is below this value, the first gesture image reaches the preset quality standard.
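Steps S201 to S203 can be sketched as follows, assuming (the markers are camera-specific in practice) that NaN, zero, or a reading clamped at the sensor's range limit denotes an invalid depth value; the 30% default mirrors the proportion thresholds suggested above, and the range limit is hypothetical.

```python
import numpy as np

def depth_quality_ok(depth, max_range_mm=4000.0, ratio_threshold=0.3):
    """Steps S201-S203: flag invalid depth pixels, count their share,
    and compare it against a preset proportion threshold."""
    invalid = (depth <= 0) | (depth >= max_range_mm) | np.isnan(depth)  # S201
    ratio = np.count_nonzero(invalid) / depth.size                      # S202
    return ratio < ratio_threshold                                      # S203
```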
(2) In an exemplary embodiment, whether the first gesture image reaches the preset quality standard may also be detected as follows:
Convert the first gesture image into a planar image, and detect whether the similarity between the first gesture image and the second gesture image reaches a preset similarity threshold; if so, the first gesture image reaches the preset quality standard.
When detecting similarity, the main concern is whether the content presented by the two images is similar, that is, whether the first gesture image and the second gesture image capture the same object. This takes advantage of the fact that the image quality of the second gesture image is usually higher and its content clearer and more accurate, whereas a first gesture image with abnormal depth information is likely to present abnormal content as well, and thus to differ substantially from the second gesture image. Before the similarity is detected, the two classes of image may be unified to some extent: the first gesture image is usually converted into a planar image so that it can be compared with the second gesture image. Furthermore, both images may be converted into planar images with the same color mode (for example, both converted to RGB, HSL, or grayscale images); if the positions or shooting angles of the first camera and the second camera differ greatly, the first gesture image may first be registered with the second gesture image before the similarity is detected. The similarity may be detected, for example, by measuring the overlap of the two images, or by using an image recognition model to estimate the probability that the hands in the two images are the same hand. The preset similarity threshold is the preset criterion for whether the similarity is sufficient, and its value is set according to the actual situation or experience. If the similarity reaches the threshold, the content presented by the first gesture image matches that of the second gesture image, and the first gesture image reaches the preset quality standard.
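Under stated assumptions, the two sub-steps above can be sketched as follows: the depth map is flattened into a grayscale planar image, and an intersection-over-union score over binarized images stands in for the "overlap" measure the disclosure mentions. The binarization rule and any threshold applied to the score are illustrative choices, not part of the claimed method.

```python
import numpy as np

def depth_to_gray(depth):
    """Normalize a depth map into an 8-bit grayscale (planar) image."""
    d = np.asarray(depth, dtype=np.float64)
    lo, hi = d.min(), d.max()
    if hi == lo:
        return np.zeros(d.shape, dtype=np.uint8)
    return ((d - lo) / (hi - lo) * 255).astype(np.uint8)

def overlap_similarity(gray_a, gray_b, cut=127):
    """Intersection-over-union of the two binarized images, in [0, 1]."""
    a, b = gray_a > cut, gray_b > cut
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union
```

An image-recognition model estimating the probability that both hands are the same hand, as also suggested above, would replace `overlap_similarity` without changing the surrounding flow.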
(3) In an exemplary embodiment, whether the first gesture image reaches the preset quality standard may also be detected as follows:
Determine a threshold for hand thickness;
Count the depth-value span of the first gesture image (i.e. the maximum depth value minus the minimum depth value); if it does not exceed the threshold, the first gesture image reaches the preset quality standard.
Wherein, the threshold value of hand thickness is the maximum gauge for considering hand under various gestures, i.e., hand is perpendicular to camera shooting The size of (depth direction) on head plane direction, when determining the threshold value, should connected applications scene, consider the ginseng of camera The factors such as number, user gesture habit, electronic equipment screen size.Under normal circumstances, the depth value span of first gesture image is answered When the threshold value for being no more than hand thickness, if it does, illustrating that there may be other interfering objects, Huo Zheshen in first gesture image Angle value accuracy is lower, so that it is determined that the quality of first gesture image is lower.It, can in order to improve the accuracy of above-mentioned detection method First to carry out background subtraction to first gesture image, to remove the background parts image other than hand, extract mainly comprising hand The foreground image in portion counts the depth value span in the foreground image, then can more accurately represent first gesture image Hand thickness detected.Alternatively, it is also possible to consider the minimum thickness of hand, the range about hand thickness is determined, if deep Angle value span within that range, illustrates that first gesture image reaches preset quality standard.
It should be appreciated that when detecting the quality of the first gesture image, the present exemplary embodiment may use any one of the above methods or a combination of several. For example, methods (1) and (2) may be used together, so that the first gesture image is determined to reach the preset quality standard only when the invalid-pixel proportion is below the preset proportion threshold and the similarity reaches the preset similarity threshold. Any other similar method may also be used.
Considering that the first gesture image and the second gesture image have advantages and disadvantages in different respects, the two may be combined. In an exemplary embodiment, if the first gesture image reaches the preset quality standard, then before the first gesture image is processed to identify the gesture in it, the gesture recognition method may further include the following steps:
Register the first gesture image with the second gesture image;
Optimize the registered first gesture image by using the registered second gesture image.
Registration refers to spatially unifying the first gesture image and the second gesture image so that the two can be compared directly. For example, the intrinsic parameters of the first camera and the second camera, and the extrinsic parameters between them, may be calibrated in advance, and the registration parameters (including translation, rotation, and scaling) obtained through a transformation based on those intrinsic and extrinsic parameters; alternatively, certain feature points may be extracted from the first gesture image and the second gesture image respectively, and the registration parameters obtained from the correspondence of the feature points between the two images. The present exemplary embodiment may register the first gesture image to the second gesture image, register the second gesture image to the first gesture image, or define a common coordinate system in advance and register both images to that coordinate system; after registration, therefore, either one image or both images may have been transformed.
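The calibration-based transformation described above can be sketched per pixel under a standard pinhole-camera assumption: back-project a depth pixel with the depth camera's intrinsic matrix, move it into the other camera's frame with the extrinsics, and re-project. The matrix names and the pinhole model are illustrative; the disclosure only requires that intrinsics and extrinsics be calibrated in advance.

```python
import numpy as np

def register_pixel(u, v, z, K_d, R, t, K_rgb):
    """Map depth-image pixel (u, v) with depth z to the other camera's image plane."""
    p = z * (np.linalg.inv(K_d) @ np.array([u, v, 1.0]))  # back-project to 3D
    q = R @ p + t                                         # depth frame -> RGB frame
    uv = K_rgb @ (q / q[2])                               # perspective re-projection
    return float(uv[0]), float(uv[1])
```

Applying this to every valid depth pixel yields the registered first gesture image; feature-point registration, the alternative mentioned above, would instead estimate the transform from matched points.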
Since the image quality of the second gesture image is usually higher, after registration the second gesture image may be used to optimize the first gesture image. The optimization may include any one or more of the following: edge filtering, hole filling, and distortion correction. Edge filtering refers to filtering the graph edges in the first gesture image (smoothing, deburring, local enhancement, etc.) with reference to the edge features in the second gesture image; hole filling refers to filling the holes in the first gesture image with reference to the second gesture image, eliminating hole defects and yielding a complete image; distortion correction refers to correcting radial distortion, tangential distortion, and the like in the first gesture image with reference to the second gesture image, eliminating deformation and yielding a more "planarized" image. It should be appreciated that the present exemplary embodiment may also use optimization methods other than the above. The optimization improves the quality of the first gesture image, which is conducive to more accurate gesture recognition.
In an exemplary embodiment, the first camera is an infrared-based TOF camera and the first gesture image is a TOF image. The gesture recognition method may further include the following steps:
While acquiring the TOF image (i.e. the first gesture image) and the second gesture image, acquire an infrared image through the first camera;
Pre-process the TOF image by using the infrared image.
The infrared image may be an image formed by the infrared module in the TOF camera; it may include confidence information for the depth value of each pixel in the TOF image, and may also include information such as heat and radiation. The pre-processing may include any one or more of the following: cropping, noise removal, and pixel filtering based on depth-value confidence. Cropping refers to cutting out, with reference to the thermal information in the infrared image, the local region of the TOF image that mainly contains the hand, thereby removing image content irrelevant to gesture recognition; noise removal refers to removing, with reference to the imaging effect of the infrared image, interference and noise introduced during the imaging of the TOF image; pixel filtering based on depth-value confidence refers to removing the pixels of the TOF image whose depth-value confidence is low, improving the quality of the depth information. It should be appreciated that the present exemplary embodiment may also use pre-processing methods other than the above. The pre-processing likewise improves the quality of the first gesture image, which helps reduce the computation of subsequent gesture recognition and improve recognition accuracy.
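Of the three pre-processing operations above, the confidence-based pixel filter is the simplest to sketch. Here a per-pixel confidence map derived from the infrared image suppresses unreliable TOF depth values; the 0.5 cut-off and the use of 0 as the "removed" value are illustrative assumptions.

```python
import numpy as np

CONFIDENCE_MIN = 0.5  # assumed minimum acceptable per-pixel depth confidence

def filter_low_confidence(tof_depth, confidence):
    """Zero out TOF pixels whose depth-value confidence is below the cut-off."""
    out = np.asarray(tof_depth, dtype=np.float64).copy()
    out[np.asarray(confidence) < CONFIDENCE_MIN] = 0.0
    return out
```

Cropping to the hot region and denoising would follow the same pattern: a mask computed from the infrared image applied to the TOF image.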
It should be added that optimizing the first gesture image with the second gesture image and pre-processing the first gesture image (the TOF image) with the infrared image are two steps that may be executed separately or merged; the present disclosure does not limit this. For example, after the TOF image, the second gesture image, and the infrared image are collected, the TOF image may be pre-processed with the infrared image and optimized with the second gesture image, yielding a TOF image of higher quality; the subsequent step S120 or S130 is then executed, which is conducive to further improving the accuracy of gesture recognition.
In an exemplary embodiment, the gesture may include the information of hand skeleton points and/or the information of hand posture. Correspondingly, the above step of processing the first gesture image to identify the gesture in it may be implemented through the following steps:
Identify the first gesture image by using a pre-trained first neural network model to obtain the information of the hand skeleton points; and/or
Identify the first gesture image by using a pre-trained second neural network model to obtain the information of the hand posture.
According to the scene requirements and the image quality, the specific feature points of the hand may be pre-determined as skeleton points. For example, 21 skeleton points may be used: 4 joint feature points for each finger plus the palm-center feature point. A subset of skeleton points may also be used; for example, when recognizing an index-finger gesture, only the joint feature points of the index finger may serve as hand skeleton points. In the present exemplary embodiment, skeleton points may be manually annotated in a large number of hand depth images in advance and used as sample data to train the first neural network model; in the application stage of the model, the coordinates of the hand skeleton points are obtained after the first gesture image is input.
The information of the hand posture may be a gesture classification result. The present exemplary embodiment may pre-define various gestures and number them, for example thumb-up is 1, index-finger-up is 2, and so on; gesture classification numbers are then manually annotated in a large number of hand depth images to form sample data for training the second neural network model. In the application stage of the model, the hand posture (i.e. gesture) classification result is obtained after the first gesture image is input.
It should be noted that the present exemplary embodiment may use the first neural network model and the second neural network model simultaneously to obtain both classes of gesture recognition result, or use only one of the models to obtain one class of result. After the information of the hand skeleton points is obtained with the first neural network model, the hand posture may be estimated based on the distribution of the skeleton points, or the skeleton-point information may be added to the first gesture image, for example by marking the skeleton points in the first gesture image or by concatenating the skeleton-point coordinates with the first gesture image into a feature matrix; the second neural network model then processes the first gesture image (or the feature matrix) with the added skeleton-point information, yielding a more accurate gesture recognition result.
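The concatenation variant above can be sketched as a plain array operation: the 21 (x, y) skeleton coordinates produced by the first model are appended to the flattened depth pixels to form the input of the second model. The flat layout is an illustrative assumption; the disclosure equally allows marking the points directly in the image.

```python
import numpy as np

def build_feature_vector(depth_image, skeleton_xy):
    """Concatenate flattened depth pixels with skeleton-point coordinates."""
    pixels = np.asarray(depth_image, dtype=np.float64).ravel()
    points = np.asarray(skeleton_xy, dtype=np.float64).ravel()
    return np.concatenate([pixels, points])
```

For an 8x8 depth crop and 21 two-dimensional skeleton points this yields a 106-element input vector for the second (posture) model.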
Further, in an exemplary embodiment, before the first gesture image is identified with the first neural network model or the second neural network model, background subtraction may first be performed on the first gesture image: for example, background portions with excessively large depth values may be removed, or a first gesture image of the background may be shot in advance and subtracted from the first gesture image containing the hand. The resulting first gesture image, containing only the hand foreground, is then used for the subsequent recognition, which further reduces computation and improves recognition accuracy. Correspondingly, if the quality of the first gesture image is low and step S120 needs to be executed to process the second gesture image, a planar image containing only the hand may be extracted through skin-color detection or similar means; the present disclosure does not limit this, and the effect is similar to the above background subtraction.
The exemplary embodiment of the present disclosure further provides an interaction control method, which can be applied to an electronic device including a first camera and a second camera arranged on the same side of the electronic device; in other words, this electronic device is the same as the electronic device that executes the above gesture recognition method. The interaction control method includes:
Identifying a gesture through any of the above gesture recognition methods, and executing a control instruction according to the gesture.
The identified gesture is the gesture made by the user in front of the electronic device, i.e. the user gesture captured by the first camera or the second camera. The electronic device has built-in control instructions for gesture operations; after recognizing that the user has made a specific gesture, it executes the corresponding control instruction triggered by that gesture.
In an exemplary embodiment, the gesture may include the information of hand skeleton points and/or the information of hand posture; correspondingly, the step of executing a control instruction according to the gesture may specifically include:
Executing the control instruction corresponding to the hand posture; and/or
Triggering, according to the mapping point of a hand skeleton point in the graphical user interface of the electronic device, the control option at the mapping point.
Hand postures and control instructions have preset correspondences: for example, an upward-pointing posture corresponds to a page-up instruction and a downward-pointing posture to a page-down instruction. Based on these correspondences, after the hand posture operated by the user is recognized, it can be converted into a control instruction on the electronic device. In addition, the position of a specific hand skeleton point in front of the first camera or the second camera can be mapped into the graphical user interface of the electronic device and executed as a click operation. For example, the user's index fingertip is projected onto the screen of the electronic device, so that moving the index finger is equivalent to moving the click position on the screen; if the user keeps the fingertip at a certain position for more than a certain time (for example, more than 3 seconds), the user is regarded as clicking that position, and if that position is an option button such as "OK" or "Cancel", the corresponding option instruction is triggered and executed.
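The dwell-to-click behaviour just described can be sketched as a small state machine over the fingertip's mapping point. The 10-pixel jitter tolerance and the explicit timestamp argument are illustrative assumptions; the 3-second dwell comes from the example above.

```python
DWELL_SECONDS = 3.0  # dwell time that counts as a click (from the example above)
TOLERANCE_PX = 10    # assumed jitter tolerance around the dwell position

class DwellClicker:
    def __init__(self):
        self.anchor = None  # position currently being dwelled on
        self.since = None   # timestamp when the dwell started

    def update(self, x, y, now):
        """Feed the current mapping point; return the click point, or None."""
        if (self.anchor is None
                or abs(x - self.anchor[0]) > TOLERANCE_PX
                or abs(y - self.anchor[1]) > TOLERANCE_PX):
            self.anchor, self.since = (x, y), now  # moved: restart the dwell
            return None
        if now - self.since >= DWELL_SECONDS:
            self.since = now                       # re-arm after firing
            return self.anchor                     # click event at this point
        return None
```

When `update` returns a point, the control option at that mapping point (for example an "OK" or "Cancel" button) would be triggered.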
It should be noted that hand posture and skeleton-point mapping are two interaction control modes; the present exemplary embodiment may use either of them, or both at the same time. Fig. 3 shows one process of the present exemplary embodiment, comprising:
Step S301, acquire an infrared image and a TOF image (i.e. the above first gesture image) through the TOF camera of the electronic device, and acquire a planar image (i.e. the above second gesture image) through the planar camera;
Step S302, pre-process the TOF image by using the infrared image;
Step S303, judge whether the pre-processed TOF image reaches the preset quality standard;
If so, execute step S304, optimizing the TOF image by using the planar image, and step S305, segmenting the gesture image from the TOF image through background subtraction, and then execute step S306, performing gesture recognition on the depth gesture image;
If not, execute step S307, segmenting the gesture image from the planar image through skin-color detection or similar means, and then execute step S308, performing gesture recognition on the planar gesture image.
The gesture recognition result includes two parts: the gesture type and the specific skeleton-point coordinates. The gesture type is used to execute step S309, determining the corresponding control instruction from the gesture type and executing it; the skeleton-point coordinates are used to execute step S310, converting the skeleton-point coordinates into a mapping point in the graphical user interface and triggering the control option at the mapping point. Human-computer interaction is thereby controlled according to the captured user gesture, and two classes of interaction control are performed through two classes of gesture recognition result, improving the diversity of interaction.
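The branching of steps S302 to S308 can be sketched as a small driver function, with each per-step operation passed in as a callable so that only the control flow is fixed. The callables' names and signatures are illustrative assumptions, not part of the claimed method.

```python
def gesture_pipeline(tof, plane, preprocess, passes_quality, optimize,
                     segment_depth, segment_plane, recognize):
    """Fig. 3 control flow: prefer the depth path, fall back to the planar path."""
    tof = preprocess(tof)                          # S302: IR-guided pre-processing
    if passes_quality(tof):                        # S303: quality check
        hand = segment_depth(optimize(tof, plane))  # S304, S305
        return recognize(hand)                      # S306: depth-image recognition
    hand = segment_plane(plane)                    # S307: e.g. skin-colour segmentation
    return recognize(hand)                         # S308: planar-image recognition
```

The returned result would then carry the gesture type (for S309) and the skeleton-point coordinates (for S310).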
The exemplary embodiment of the present disclosure further provides a gesture recognition apparatus, which can be applied to an electronic device including a first camera and a second camera arranged on the same side of the electronic device. As shown in Fig. 4, the gesture recognition apparatus 400 may include: an image acquisition module 410, configured to acquire a first gesture image through the first camera and a second gesture image through the second camera, the first gesture image being a depth image and the second gesture image being a planar image; a first recognition module 420, configured to process the second gesture image to identify the gesture in it when it is detected that the first gesture image does not reach the preset quality standard; and a second recognition module 430, configured to process the first gesture image to identify the gesture in it when it is detected that the first gesture image reaches the preset quality standard.
In an exemplary embodiment, the gesture recognition apparatus 400 may further include a quality detection module (not shown), which in turn may include: a depth-value detection unit (not shown), configured to detect whether the depth value of each pixel in the first gesture image is invalid; an invalid-proportion statistics unit (not shown), configured to count the proportion of the first gesture image occupied by pixels with invalid depth values; and a quality-standard judging unit (not shown), configured to judge whether the proportion is below the preset proportion threshold, the first gesture image reaching the preset quality standard if it is.
In an exemplary embodiment, the gesture recognition apparatus 400 may further include a quality detection module (not shown), configured to convert the first gesture image into a planar image and to detect whether the similarity between the first gesture image and the second gesture image reaches the preset similarity threshold, the first gesture image reaching the preset quality standard if it does.
In an exemplary embodiment, before processing the first gesture image to identify the gesture in it, the second recognition module 430 is further configured to register the first gesture image with the second gesture image and to optimize the registered first gesture image by using the registered second gesture image, the optimization including any one or more of the following: edge filtering, hole filling, and distortion correction.
In an exemplary embodiment, the first camera may be an infrared-based TOF camera and the first gesture image may be a TOF image; the image acquisition module 410 is further configured to acquire an infrared image through the first camera while acquiring the TOF image and the second gesture image; and the gesture recognition apparatus 400 may further include a pre-processing module (not shown), configured to pre-process the TOF image by using the infrared image, the pre-processing including any one or more of the following: cropping, noise removal, and pixel filtering based on depth-value confidence.
In an exemplary embodiment, the gesture in the first gesture image may include the information of hand skeleton points and/or the information of hand posture; the second recognition module 430 may include a skeleton-point recognition unit (not shown) and/or a posture recognition unit (not shown), wherein the skeleton-point recognition unit is configured to identify the first gesture image by using the pre-trained first neural network model to obtain the information of the hand skeleton points, and the posture recognition unit is configured to identify the first gesture image by using the pre-trained second neural network model to obtain the information of the hand posture.
In an exemplary embodiment, the second recognition module 430 may further include a background subtraction unit (not shown), configured to perform background subtraction on the first gesture image before the skeleton-point recognition unit or the posture recognition unit identifies it, so as to obtain a first gesture image containing only the hand foreground.
The exemplary embodiment of the present disclosure further provides an interaction control apparatus, which can be applied to an electronic device including a first camera and a second camera arranged on the same side of the electronic device. As shown in Fig. 5, the interaction control apparatus 500 may include: an image acquisition module 510, configured to acquire a first gesture image through the first camera and a second gesture image through the second camera; a first recognition module 520, configured to process the second gesture image to identify the gesture in it when it is detected that the first gesture image does not reach the preset quality standard; a second recognition module 530, configured to process the first gesture image to identify the gesture in it when it is detected that the first gesture image reaches the preset quality standard; and an instruction execution module 540, configured to execute a control instruction according to the gesture.
In an exemplary embodiment, the interaction control apparatus 500 may also include all of the modules/units included in any of the above gesture recognition apparatuses, together with the instruction execution module 540.
In an exemplary embodiment, the gesture may include the information of hand skeleton points and/or the information of hand posture; the instruction execution module 540 may include a first execution unit (not shown) and/or a second execution unit (not shown), wherein the first execution unit is configured to execute the control instruction corresponding to the hand posture, and the second execution unit is configured to trigger, according to the mapping point of a hand skeleton point in the graphical user interface of the electronic device, the control option at the mapping point.
The details of each module/unit in the above apparatuses have been described in detail in the embodiments of the method part, and are therefore not repeated here.
Those of ordinary skill in the art will understand that various aspects of the present disclosure may be implemented as a system, a method, or a program product. Accordingly, various aspects of the present disclosure may take the following forms: a complete hardware implementation, a complete software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software, which may collectively be referred to here as a "circuit", "module", or "system".
The exemplary embodiment of the present disclosure further provides a computer-readable storage medium on which a program product capable of implementing the above method of this specification is stored. In some possible implementations, various aspects of the present disclosure may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
Referring to Fig. 6, a program product 600 for implementing the above method according to the exemplary embodiment of the present disclosure is described; it may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in connection with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, which can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The program code contained on the readable medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
The program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet by means of an Internet service provider).
The exemplary embodiment of the present disclosure further provides an electronic device capable of implementing the above method. The electronic device 700 according to this exemplary embodiment of the present disclosure is described with reference to Fig. 7. The electronic device 700 shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 may include: a first camera 710, configured to acquire a first gesture image, the first gesture image being a depth image; a second camera 720, configured to acquire a second gesture image, the second gesture image being a planar image; a processor 730; and a memory 740, configured to store executable instructions of the processor. The first camera 710 and the second camera 720 are arranged on the same side of the electronic device 700. The processor 730 is configured, by executing the executable instructions, to execute any gesture recognition method in the exemplary embodiments of the present disclosure, so as to identify the gesture in the first gesture image or the second gesture image, or any interaction control method in the exemplary embodiments of the present disclosure, so as to identify the gesture in the first gesture image or the second gesture image and execute a control instruction according to the gesture.
In an exemplary embodiment, as shown in Fig. 8, the electronic device 800 may take the form of a general-purpose computing device. The components of the electronic device 800 may include but are not limited to: at least one processing unit 810, at least one storage unit 820, a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810), a display unit 840, a first camera 870, and a second camera 880.
The storage unit 820 stores program code, which can be executed by the processing unit 810 so that the processing unit 810 executes the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification. For example, the processing unit 810 may execute the method steps shown in Fig. 1, Fig. 2, or Fig. 3.
The storage unit 820 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 821 and/or a cache storage unit 822, and may further include a read-only storage unit (ROM) 823.
The storage unit 820 may also include a program/utility 824 having a set of (at least one) program modules 825; such program modules 825 include but are not limited to an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus 830 may represent one or more of several classes of bus structures, including a storage-unit bus or storage-unit controller, a peripheral bus, a graphics acceleration port, the processing unit, or a local bus using any of a variety of bus structures.
The electronic device 800 may also communicate with one or more external devices 900 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may be carried out through an input/output (I/O) interface 850. Moreover, the electronic device 800 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 through the bus 830. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the above description of the embodiments, those skilled in the art will readily appreciate that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
In addition, the above drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It is easy to understand that the processing shown in the drawings does not indicate or limit the chronological order of these processes. It is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles thereof and include common knowledge or conventional techniques in the art not disclosed by the present disclosure. The specification and examples are to be considered exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A gesture recognition method applied to an electronic device, wherein the electronic device comprises a first camera and a second camera, the first camera and the second camera being arranged on the same side of the electronic device, the method comprising:
collecting a first gesture image through the first camera, and collecting a second gesture image through the second camera, wherein the first gesture image is a depth image and the second gesture image is a planar image;
when it is detected that the first gesture image does not reach a preset quality standard, processing the second gesture image to recognize a gesture in the second gesture image; and
when it is detected that the first gesture image reaches the preset quality standard, processing the first gesture image to recognize a gesture in the first gesture image.
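As an illustrative sketch (not part of the claims), the selection flow of claim 1 could look as follows; the quality check and the two recognition models are placeholder callables, since the patent does not fix their implementations:

```python
def recognize_gesture(depth_img, planar_img, quality_ok, depth_model, planar_model):
    """Prefer the depth image from the first camera; fall back to the 2D
    image from the second camera when the depth image fails the preset
    quality standard (claim 1)."""
    if quality_ok(depth_img):
        return depth_model(depth_img)   # depth-based recognition
    return planar_model(planar_img)     # 2D fallback
```

The point of the fallback is robustness: depth sensing degrades outdoors or at range, while the 2D camera still delivers a usable gesture image.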
2. The method according to claim 1, wherein whether the first gesture image reaches the preset quality standard is detected by the following method:
detecting whether the depth value of each pixel in the first gesture image is invalid;
counting the proportion of pixels with invalid depth values in the first gesture image; and
judging whether the proportion is less than a preset proportion threshold; if it is less, the first gesture image reaches the preset quality standard.
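A minimal sketch of the invalid-pixel check in claim 2; the invalid marker (0) and the 0.2 threshold are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def depth_quality_ok(depth, invalid_value=0, max_invalid_ratio=0.2):
    """Claim 2 sketch: a pixel is invalid when its depth equals the sensor's
    invalid marker; the image passes the quality standard when the invalid
    share stays below the preset proportion threshold."""
    invalid = np.count_nonzero(depth == invalid_value)
    return invalid / depth.size < max_invalid_ratio
```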
3. The method according to claim 1, wherein whether the first gesture image reaches the preset quality standard is detected by the following method:
converting the first gesture image into a planar image, and detecting whether the similarity between the first gesture image and the second gesture image reaches a preset similarity threshold; if it does, the first gesture image reaches the preset quality standard.
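The patent does not fix a similarity measure for claim 3; as one plausible choice, a zero-mean normalized correlation between the flattened depth map and the 2D gesture image could be sketched as:

```python
import numpy as np

def depth_matches_planar(depth, planar, sim_threshold=0.5):
    """Claim 3 sketch: render the depth map as a grayscale-like image and
    score its similarity against the 2D gesture image. The 0.5 threshold
    and the correlation measure are assumptions."""
    d = (depth - depth.mean()) / (depth.std() + 1e-8)
    p = (planar - planar.mean()) / (planar.std() + 1e-8)
    return float((d * p).mean()) >= sim_threshold
```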
4. The method according to claim 1, wherein before the processing of the first gesture image to recognize the gesture in the first gesture image, the method further comprises:
registering the first gesture image with the second gesture image; and
optimizing the registered first gesture image by using the registered second gesture image, wherein the optimization includes any one or more of the following: edge filtering, hole filling, and distortion correction.
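Of the optimizations in claim 4, hole filling is the easiest to sketch. The crude variant below replaces invalid depth pixels with the average of their valid 4-neighbours; a production system would more likely use joint-bilateral or guided filtering steered by the registered 2D image, which this sketch only stands in for:

```python
import numpy as np

def fill_holes(depth, invalid_value=0):
    """Fill invalid depth pixels ("holes") with the mean of their valid
    up/down/left/right neighbours (illustrative only)."""
    out = depth.astype(float).copy()
    pad = np.pad(out, 1, mode='edge')
    nb = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                   pad[1:-1, :-2], pad[1:-1, 2:]])  # up, down, left, right
    valid = nb != invalid_value
    counts = valid.sum(axis=0)
    sums = (nb * valid).sum(axis=0)
    holes = (out == invalid_value) & (counts > 0)
    out[holes] = sums[holes] / counts[holes]
    return out
```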
5. The method according to claim 1, wherein the first camera is a time-of-flight camera based on infrared light, and the first gesture image is a time-of-flight image; the method further comprises:
collecting an infrared image through the first camera; and
preprocessing the time-of-flight image by using the infrared image, wherein the preprocessing includes any one or more of the following: cropping, noise removal, and pixel filtering based on depth-value confidence.
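The confidence-based filtering of claim 5 can be sketched by treating the infrared amplitude captured by the same ToF camera as a per-pixel confidence map; the `min_amplitude` threshold is an assumed, sensor-dependent value:

```python
import numpy as np

def filter_low_confidence(tof_depth, ir_amplitude, min_amplitude=50):
    """Claim 5 sketch: depth values measured with too little returned IR
    light are unreliable, so mark them invalid (0)."""
    out = tof_depth.copy()
    out[ir_amplitude < min_amplitude] = 0
    return out
```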
6. The method according to claim 1, wherein the gesture in the first gesture image includes information of hand skeleton points and/or information of a hand posture;
the processing of the first gesture image to recognize the gesture in the first gesture image comprises:
recognizing the first gesture image by using a pre-trained first neural network model to obtain the information of the hand skeleton points; and/or
recognizing the first gesture image by using a pre-trained second neural network model to obtain the information of the hand posture.
7. The method according to claim 6, wherein before recognizing the first gesture image by using the first neural network model or the second neural network model, the method further comprises:
performing background subtraction on the first gesture image to obtain a first gesture image containing only the hand foreground.
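With a depth image, the background subtraction of claim 7 can be a simple range gate; the 600 mm hand-distance threshold below is illustrative, not from the patent:

```python
import numpy as np

def hand_foreground(depth, max_hand_depth=600):
    """Claim 7 sketch: keep only pixels closer than an assumed hand
    distance, zeroing out invalid and background pixels so that only
    the hand foreground remains as neural-network input."""
    fg = depth.copy()
    fg[(depth == 0) | (depth > max_hand_depth)] = 0
    return fg
```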
8. An interaction control method applied to an electronic device, wherein the electronic device comprises a first camera and a second camera, the first camera and the second camera being arranged on the same side of the electronic device, the method comprising:
recognizing a gesture by the gesture recognition method according to any one of claims 1 to 7; and
executing a control instruction according to the gesture.
9. The method according to claim 8, wherein the gesture includes information of hand skeleton points and/or information of a hand posture;
the executing of a control instruction according to the gesture comprises:
executing a control instruction corresponding to the hand posture; and/or
according to a mapping point of the hand skeleton points in a graphical user interface of the electronic device, triggering execution of the control option at which the mapping point is located.
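The mapping of claim 9 could, under the assumption of a simple proportional projection (the patent does not fix one), be sketched as:

```python
def map_skeleton_point(point, image_size, screen_size):
    """Claim 9 sketch: map a skeleton point (e.g. an index fingertip) from
    camera-image coordinates to GUI coordinates by linear scaling, so the
    control option under the mapped point can be triggered."""
    (ix, iy), (iw, ih), (sw, sh) = point, image_size, screen_size
    return (ix * sw / iw, iy * sh / ih)
```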
10. A gesture recognition apparatus applied to an electronic device, wherein the electronic device comprises a first camera and a second camera, the first camera and the second camera being arranged on the same side of the electronic device, the apparatus comprising:
an image collection module, configured to collect a first gesture image through the first camera and a second gesture image through the second camera, wherein the first gesture image is a depth image and the second gesture image is a planar image;
a first recognition module, configured to, when it is detected that the first gesture image does not reach a preset quality standard, process the second gesture image to recognize a gesture in the second gesture image; and
a second recognition module, configured to, when it is detected that the first gesture image reaches the preset quality standard, process the first gesture image to recognize a gesture in the first gesture image.
11. An interaction control apparatus applied to an electronic device, wherein the electronic device comprises a first camera and a second camera, the first camera and the second camera being arranged on the same side of the electronic device, the apparatus comprising:
an image collection module, configured to collect a first gesture image through the first camera and a second gesture image through the second camera, wherein the first gesture image is a depth image and the second gesture image is a planar image;
a first recognition module, configured to, when it is detected that the first gesture image does not reach a preset quality standard, process the second gesture image to recognize a gesture in the second gesture image;
a second recognition module, configured to, when it is detected that the first gesture image reaches the preset quality standard, process the first gesture image to recognize a gesture in the first gesture image; and
an instruction execution module, configured to execute a control instruction according to the gesture.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the gesture recognition method according to any one of claims 1 to 7 or the interaction control method according to any one of claims 8 to 9.
13. An electronic device, comprising:
a first camera, configured to collect a first gesture image, the first gesture image being a depth image;
a second camera, configured to collect a second gesture image, the second gesture image being a planar image;
a processor; and
a memory, configured to store instructions executable by the processor;
wherein the first camera and the second camera are arranged on the same side of the electronic device; and
the processor is configured, by executing the executable instructions, to perform:
the gesture recognition method according to any one of claims 1 to 7, to recognize the gesture in the first gesture image or the second gesture image; or
the interaction control method according to any one of claims 8 to 9, to recognize the gesture in the first gesture image or the second gesture image and to execute a control instruction according to the gesture.
CN201910435353.7A 2019-05-23 2019-05-23 Gesture recognition method, interaction control method, device, medium and electronic equipment Active CN110209273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910435353.7A CN110209273B (en) 2019-05-23 2019-05-23 Gesture recognition method, interaction control method, device, medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN110209273A true CN110209273A (en) 2019-09-06
CN110209273B CN110209273B (en) 2022-03-01

Family

ID=67788439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910435353.7A Active CN110209273B (en) 2019-05-23 2019-05-23 Gesture recognition method, interaction control method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110209273B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027403A (en) * 2019-11-15 2020-04-17 深圳市瑞立视多媒体科技有限公司 Gesture estimation method, device, equipment and computer readable storage medium
CN111368800A (en) * 2020-03-27 2020-07-03 中国工商银行股份有限公司 Gesture recognition method and device
CN111651038A (en) * 2020-05-14 2020-09-11 香港光云科技有限公司 Gesture recognition control method based on ToF and control system thereof
CN111753715A (en) * 2020-06-23 2020-10-09 广东小天才科技有限公司 Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN111814745A (en) * 2020-07-31 2020-10-23 Oppo广东移动通信有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN112613384A (en) * 2020-12-18 2021-04-06 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition device and control method of interactive display equipment
CN112711324A (en) * 2019-10-24 2021-04-27 浙江舜宇智能光学技术有限公司 Gesture interaction method and system based on TOF camera
CN112861783A (en) * 2021-03-08 2021-05-28 北京华捷艾米科技有限公司 Hand detection method and system
CN113141502A (en) * 2021-03-18 2021-07-20 青岛小鸟看看科技有限公司 Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment
CN113486765A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN113805993A (en) * 2021-09-03 2021-12-17 四川新网银行股份有限公司 Method for quickly and continuously capturing pictures
CN114138121A (en) * 2022-02-07 2022-03-04 北京深光科技有限公司 User gesture recognition method, device and system, storage medium and computing equipment
WO2022174811A1 (en) * 2021-02-19 2022-08-25 中兴通讯股份有限公司 Photographing assistance method, electronic device and storage medium
CN116328276A (en) * 2021-12-22 2023-06-27 成都拟合未来科技有限公司 Gesture interaction method, system, device and medium based on body building device
CN117648035A (en) * 2023-12-14 2024-03-05 深圳灿和兄弟网络科技有限公司 Virtual gesture control method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040164851A1 (en) * 2003-02-24 2004-08-26 Crawshaw Richard D. Lane tracking system employing redundant image sensing devices
US20080192129A1 (en) * 2003-12-24 2008-08-14 Walker Jay S Method and Apparatus for Automatically Capturing and Managing Images
CN102467235A (en) * 2010-11-12 2012-05-23 Lg电子株式会社 Method for user gesture recognition in multimedia device and multimedia device thereof
US20120268572A1 (en) * 2011-04-22 2012-10-25 Mstar Semiconductor, Inc. 3D Video Camera and Associated Control Method
US20130050425A1 (en) * 2011-08-24 2013-02-28 Soungmin Im Gesture-based user interface method and apparatus
US20150302593A1 (en) * 2013-04-08 2015-10-22 Lsi Corporation Front-End Architecture for Image Processing
CN105096259A (en) * 2014-05-09 2015-11-25 株式会社理光 Depth value restoration method and system for depth image
US20160042242A1 (en) * 2014-08-11 2016-02-11 Sony Corporation Information processor, information processing method, and computer program
CN106200904A (en) * 2016-06-27 2016-12-07 乐视控股(北京)有限公司 A kind of gesture identifying device, electronic equipment and gesture identification method
CN109544620A (en) * 2018-10-31 2019-03-29 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and electronic equipment


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112711324A (en) * 2019-10-24 2021-04-27 浙江舜宇智能光学技术有限公司 Gesture interaction method and system based on TOF camera
CN112711324B (en) * 2019-10-24 2024-03-26 浙江舜宇智能光学技术有限公司 Gesture interaction method and system based on TOF camera
CN111027403A (en) * 2019-11-15 2020-04-17 深圳市瑞立视多媒体科技有限公司 Gesture estimation method, device, equipment and computer readable storage medium
CN111027403B (en) * 2019-11-15 2023-06-06 深圳市瑞立视多媒体科技有限公司 Gesture estimation method, device, equipment and computer readable storage medium
CN111368800A (en) * 2020-03-27 2020-07-03 中国工商银行股份有限公司 Gesture recognition method and device
CN111368800B (en) * 2020-03-27 2023-11-28 中国工商银行股份有限公司 Gesture recognition method and device
CN111651038A (en) * 2020-05-14 2020-09-11 香港光云科技有限公司 Gesture recognition control method based on ToF and control system thereof
CN111753715A (en) * 2020-06-23 2020-10-09 广东小天才科技有限公司 Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN111814745B (en) * 2020-07-31 2024-05-10 Oppo广东移动通信有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN111814745A (en) * 2020-07-31 2020-10-23 Oppo广东移动通信有限公司 Gesture recognition method and device, electronic equipment and storage medium
CN112613384A (en) * 2020-12-18 2021-04-06 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition device and control method of interactive display equipment
CN112613384B (en) * 2020-12-18 2023-09-19 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition device and control method of interactive display equipment
WO2022174811A1 (en) * 2021-02-19 2022-08-25 中兴通讯股份有限公司 Photographing assistance method, electronic device and storage medium
CN112861783A (en) * 2021-03-08 2021-05-28 北京华捷艾米科技有限公司 Hand detection method and system
CN113141502B (en) * 2021-03-18 2022-02-08 青岛小鸟看看科技有限公司 Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment
CN113141502A (en) * 2021-03-18 2021-07-20 青岛小鸟看看科技有限公司 Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment
CN113486765A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN113486765B (en) * 2021-06-30 2023-06-16 上海商汤临港智能科技有限公司 Gesture interaction method and device, electronic equipment and storage medium
CN113805993A (en) * 2021-09-03 2021-12-17 四川新网银行股份有限公司 Method for quickly and continuously capturing pictures
CN113805993B (en) * 2021-09-03 2023-06-06 四川新网银行股份有限公司 Method for rapidly and continuously capturing images
CN116328276A (en) * 2021-12-22 2023-06-27 成都拟合未来科技有限公司 Gesture interaction method, system, device and medium based on body building device
CN114138121B (en) * 2022-02-07 2022-04-22 北京深光科技有限公司 User gesture recognition method, device and system, storage medium and computing equipment
CN114138121A (en) * 2022-02-07 2022-03-04 北京深光科技有限公司 User gesture recognition method, device and system, storage medium and computing equipment
CN117648035A (en) * 2023-12-14 2024-03-05 深圳灿和兄弟网络科技有限公司 Virtual gesture control method and device

Also Published As

Publication number Publication date
CN110209273B (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN110209273A (en) Gesture identification method, interaction control method, device, medium and electronic equipment
CN110210571B (en) Image recognition method and device, computer equipment and computer readable storage medium
EP3467707A1 (en) System and method for deep learning based hand gesture recognition in first person view
JP5887775B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
CN103164022B (en) Many fingers touch method and device, portable terminal
CN106170978B (en) Depth map generation device, method and non-transitory computer-readable medium
JP5778967B2 (en) Information processing program, information processing method, information processing apparatus, and information processing system
EP2553656A2 (en) A computing device interface
CN109167893B (en) Shot image processing method and device, storage medium and mobile terminal
KR20100138602A (en) Apparatus and method for a real-time extraction of target's multiple hands information
US10168790B2 (en) Method and device for enabling virtual reality interaction with gesture control
CN106934351B (en) Gesture recognition method and device and electronic equipment
JP5756322B2 (en) Information processing program, information processing method, information processing apparatus, and information processing system
EP2996067A1 (en) Method and device for generating motion signature on the basis of motion signature information
EP3617851B1 (en) Information processing device, information processing method, and recording medium
CN103279225A (en) Projection type man-machine interactive system and touch control identification method
CN110827217B (en) Image processing method, electronic device, and computer-readable storage medium
CN109839827B (en) Gesture recognition intelligent household control system based on full-space position information
CN111259858B (en) Finger vein recognition system, method, device, electronic device and storage medium
CN116129526A (en) Method and device for controlling photographing, electronic equipment and storage medium
CN111259757A (en) Image-based living body identification method, device and equipment
CN114677737A (en) Biological information identification method, apparatus, device and medium
CN106951077B (en) Prompting method and first electronic device
CN104063041A (en) Information processing method and electronic equipment
CN116092158A (en) Face recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant