CN106934351B - Gesture recognition method and device and electronic equipment - Google Patents


Info

Publication number
CN106934351B
Authority
CN
China
Prior art keywords
gesture
image
panoramic
correction
gesture recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710105905.9A
Other languages
Chinese (zh)
Other versions
CN106934351A (en)
Inventor
陈树勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quarkdata Software Co ltd
Original Assignee
Quarkdata Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quarkdata Software Co ltd filed Critical Quarkdata Software Co ltd
Priority to CN201710105905.9A priority Critical patent/CN106934351B/en
Publication of CN106934351A publication Critical patent/CN106934351A/en
Application granted granted Critical
Publication of CN106934351B publication Critical patent/CN106934351B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/247Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Abstract

The embodiment of the invention discloses a gesture recognition method, a gesture recognition device and an electronic device, relates to the technical field of data processing, and can solve the problem of low gesture recognition efficiency in the prior art. The gesture recognition method provided by the embodiment of the invention comprises the following steps: acquiring, from a panoramic component, a panoramic image shot by a single camera and containing a gesture operation; performing a distortion correction operation on the panoramic image by using a distortion correction algorithm corresponding to the panoramic component to obtain a corrected image; performing a blocking operation on the corrected image to obtain a block image; and extracting a gesture characteristic value of the block image and determining a command of the gesture operation based on the gesture characteristic value. In addition, the embodiment of the invention also discloses a gesture recognition device and electronic equipment. The scheme of the embodiment of the invention can effectively improve gesture recognition efficiency.

Description

Gesture recognition method and device and electronic equipment
Technical Field
The invention relates to the technical field of data processing, in particular to data processing based on gesture recognition.
Background
With the development of computer and hardware technology, the computing power of computers keeps growing while hardware costs keep falling, and intelligent hardware is gradually entering people's work and daily life. Users therefore need to interact with intelligent hardware effectively and transmit instruction information to it, and vision-based gesture control is undoubtedly a good choice.
Gesture recognition is built on computer vision and image recognition techniques. It generally uses an image acquisition device such as a camera to capture images of gestures; calibration, matching and modeling are then performed algorithmically to generate the relevant two-dimensional or three-dimensional hand information. Hand motion is computed from information such as the positions and posture changes of hand feature points to obtain hand coordinates and vectors, and the gesture is then tracked.
In the field of computer vision, most cameras used for traditional gesture recognition are ordinary planar cameras. Such a camera can only shoot in one direction and suffers from a small viewing angle and blind spots. If it is used for gesture recognition and control, a large visual blind area exists behind the camera of the intelligent hardware. In that area, human gestures cannot be captured by the intelligent hardware, so gestures cannot be recognized and the hardware cannot be controlled effectively.
In view of this, gesture recognition using a 360-degree panoramic camera is becoming more and more widely applied. With a 360-degree panoramic camera, the intelligent hardware can capture a person's gestures over the full 360-degree range and can therefore receive gesture instructions without dead angles.
In the process of realizing the invention, the inventor discovered that when a 360-degree panoramic camera with multiple cameras is used to collect panoramic photos, the structure of the intelligent hardware device becomes complex and the device cost increases, while a panoramic view acquired by a 360-degree panoramic camera with a single camera usually exhibits a certain amount of radial distortion, in which the original gesture recognition algorithm fails. In addition, a panoramic image is generally large, and a smart device usually needs considerable time to process one frame of it, which to some extent affects the speed at which the device recognizes gesture operations.
Disclosure of Invention
In view of this, embodiments of the present invention provide a gesture recognition method, a gesture recognition apparatus and an electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a gesture recognition method, including:
acquiring, from a panoramic component, a panoramic image shot by a single camera and containing a gesture operation;
performing a distortion correction operation on the panoramic image by using a distortion correction algorithm corresponding to the panoramic component to obtain a corrected image;
performing a blocking operation on the corrected image to obtain a block image; and
extracting a gesture characteristic value of the block image, and determining a command of the gesture operation based on the gesture characteristic value.
As a specific implementation manner of the embodiment of the present invention, performing the distortion correction operation on the panoramic image by using the distortion correction algorithm corresponding to the panoramic component includes:
acquiring a correction model corresponding to the panoramic component;
performing a parameter simplification operation on the correction model to obtain a simplified correction model; and
forming the distortion correction algorithm based on the simplified correction model, and performing the distortion correction operation on the panoramic image based on the distortion correction algorithm.
As a specific implementation manner of the embodiment of the present invention, performing the distortion correction operation on the panoramic image based on the distortion correction algorithm includes:
finding the distortion center point of the panoramic image, and setting the distortion center point as the origin of a two-dimensional coordinate system of the panoramic image;
determining a horizontal coordinate x and a vertical coordinate y of a pixel point p of the panoramic image in the two-dimensional coordinate system;
based on the distortion correction algorithm, performing a coordinate transformation on the horizontal coordinate x and the vertical coordinate y to obtain a new horizontal coordinate X and a new vertical coordinate Y; and
converting the coordinates of the pixel point p in the panoramic image from (x, y) to (X, Y).
As a specific implementation manner of the embodiment of the present invention, performing the coordinate transformation on the horizontal coordinate x and the vertical coordinate y based on the distortion correction algorithm to obtain the new horizontal coordinate X and the new vertical coordinate Y includes:
acquiring a radial distance r between the pixel point p and the origin of the two-dimensional coordinate system;
obtaining a first correction coefficient k1 and a second correction coefficient k2 of the simplified correction model; and
performing a dot product operation on the coordinates (x, y) based on the radial distance r, the first correction coefficient k1 and the second correction coefficient k2, to obtain the new coordinates (X, Y) of the pixel point p.
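The steps above can be sketched as a small helper. This is a minimal illustration of the simplified two-coefficient model, assuming coordinates are expressed relative to the distortion center; the coefficient values below are made up for demonstration, not calibrated ones.

```python
def correct_point(x, y, k1, k2):
    """Map a distorted coordinate (x, y), taken relative to the distortion
    center, to a corrected coordinate (X, Y) with the simplified model."""
    r2 = x * x + y * y                    # squared radial distance r^2
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # 1 + k1*r^2 + k2*r^4
    return x * scale, y * scale           # element-wise (dot) product

# Illustrative, uncalibrated coefficients:
X, Y = correct_point(0.5, 0.25, k1=-0.1, k2=0.01)
```

Applying `correct_point` to every pixel coordinate of the panoramic image yields the corrected image grid.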
As a specific implementation manner of the embodiment of the present invention, extracting the gesture characteristic value of the block image and determining the command of the gesture operation based on the gesture characteristic value includes:
comparing the gesture characteristic value with parameters in a preset gesture recognition model for similarity to obtain a gesture similarity value; and
determining the command of the gesture operation based on the gesture similarity value.
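The text does not fix a particular similarity measure; as one common, hypothetical choice, the comparison could be a cosine similarity between the extracted characteristic vector and a template from the preset model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length characteristic vectors,
    in [-1, 1]; returns 0.0 if either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```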
As a specific implementation manner of the embodiment of the present invention, determining the command of the gesture operation based on the gesture similarity value includes:
when the gesture similarity value is smaller than a first preset similarity value and the block image is larger than a preset size, continuing to perform the segmentation operation on the block image.
As a specific implementation manner of the embodiment of the present invention, determining the command of the gesture operation based on the gesture similarity value includes:
when the gesture similarity value is larger than a second preset similarity value, determining the gesture command corresponding to the gesture characteristic value as the command of the gesture operation.
As a specific implementation manner of the embodiment of the present invention, determining the command of the gesture operation based on the gesture similarity value includes:
when the gesture similarity value falls within the closed interval formed by the first preset similarity value and the second preset similarity value, determining the gesture operation as a command-pending operation; and
storing the gesture characteristic value related to the command-pending operation.
As a specific implementation manner of the embodiment of the present invention, the method further includes:
collecting statistics on the gesture characteristic values related to the command-pending operation;
determining a specific instruction of the command-pending operation based on the gesture characteristic values; and
updating the gesture recognition parameters related to the specific instruction in the preset gesture recognition model.
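Taken together, the three cases above form a three-way threshold decision for each block. A minimal sketch, with hypothetical function and threshold names (the text specifies only the comparisons, not concrete values):

```python
def decide(similarity, block_w, block_h, s1, s2, min_side=64):
    """Decide what to do with one block, given its gesture similarity
    value and the two preset similarity thresholds s1 < s2."""
    if similarity > s2:
        return "execute"    # recognized: run the matching gesture command
    if similarity < s1:
        # too dissimilar: keep subdividing while the block is big enough
        return "subdivide" if block_w > min_side and block_h > min_side else "discard"
    return "pending"        # s1 <= similarity <= s2: store for later training
```

Note that the closed interval [s1, s2] maps to the "pending" case, matching the text.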
In a second aspect, an embodiment of the present invention further provides a gesture recognition apparatus, including:
the acquisition module is used for acquiring, from a panoramic component, a panoramic image shot by a single camera and containing a gesture operation;
the correction module is used for performing a distortion correction operation on the panoramic image by using a distortion correction algorithm corresponding to the panoramic component to obtain a corrected image;
the blocking module is used for performing a blocking operation on the corrected image to obtain a block image; and
the determining module is used for extracting the gesture characteristic value of the block image and determining the command of the gesture operation based on the gesture characteristic value.
As a specific implementation manner of the embodiment of the present invention, the correction module is further configured to:
acquiring a correction model corresponding to the panoramic component;
performing a parameter simplification operation on the correction model to obtain a simplified correction model; and
forming the distortion correction algorithm based on the simplified correction model, and performing the distortion correction operation on the panoramic image based on the distortion correction algorithm.
As a specific implementation manner of the embodiment of the present invention, the correction module is further configured to:
finding the distortion center point of the panoramic image, and setting the distortion center point as the origin of a two-dimensional coordinate system of the panoramic image;
determining a horizontal coordinate x and a vertical coordinate y of a pixel point p of the panoramic image in the two-dimensional coordinate system;
based on the distortion correction algorithm, performing a coordinate transformation on the horizontal coordinate x and the vertical coordinate y to obtain a new horizontal coordinate X and a new vertical coordinate Y; and
converting the coordinates of the pixel point p in the panoramic image from (x, y) to (X, Y).
As a specific implementation manner of the embodiment of the present invention, the correction module is further configured to:
acquiring a radial distance r between the pixel point p and the origin of the two-dimensional coordinate system;
obtaining a first correction coefficient k1 and a second correction coefficient k2 of the simplified correction model; and
performing a dot product operation on the coordinates (x, y) based on the radial distance r, the first correction coefficient k1 and the second correction coefficient k2, to obtain the new coordinates (X, Y) of the pixel point p.
As a specific implementation manner of the embodiment of the present invention, the determining module is further configured to:
comparing the gesture characteristic value with parameters in a preset gesture recognition model for similarity to obtain a gesture similarity value; and
and determining a command of the gesture operation based on the gesture similarity value.
As a specific implementation manner of the embodiment of the present invention, the determining module is further configured to:
when the gesture similarity value is smaller than a first preset similarity value and the block image is larger than a preset size, continuing to perform the segmentation operation on the block image.
As a specific implementation manner of the embodiment of the present invention, the determining module is further configured to:
when the gesture similarity value is larger than a second preset similarity value, determining the gesture command corresponding to the gesture characteristic value as the command of the gesture operation.
As a specific implementation manner of the embodiment of the present invention, the determining module is further configured to:
when the gesture similarity value falls within the closed interval formed by the first preset similarity value and the second preset similarity value, determining the gesture operation as a command-pending operation; and
storing the gesture characteristic value related to the command-pending operation.
As a specific implementation manner of the embodiment of the present invention, the determining module is further configured to:
collecting statistics on the gesture characteristic values related to the command-pending operation;
determining a specific instruction of the command-pending operation based on the gesture characteristic values; and
updating the gesture recognition parameters related to the specific instruction in the preset gesture recognition model.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the gesture recognition method of any one of the preceding first aspects or any implementation manner of the first aspect.
In a fourth aspect, the present invention also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the gesture recognition method in the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are executed by a computer, the computer executes the gesture recognition method in the first aspect or any implementation manner of the first aspect.
The gesture recognition method, gesture recognition apparatus, electronic device, non-transitory computer-readable storage medium and computer program provided in the embodiments of the present invention acquire, from a panoramic component, a panoramic image shot by a single camera and containing a gesture operation; perform a distortion correction operation on the panoramic image by using a distortion correction algorithm corresponding to the panoramic component to obtain a corrected image; perform a blocking operation on the corrected image to obtain a block image; and extract a gesture characteristic value of the block image and determine a command of the gesture operation based on the gesture characteristic value. Acquiring the panoramic image with a single camera thus reduces device cost; training and dynamically adjusting the characteristic values on top of the existing gesture recognition algorithm improves the recognition rate under radial distortion; and block-wise processing of the panoramic image further increases the gesture recognition speed for each frame of image shot by the camera.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic structural diagram of a panoramic imaging apparatus according to an embodiment of the present invention;
fig. 1b is a schematic structural diagram of another panoramic camera apparatus according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a gesture recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating another gesture recognition method according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another gesture recognition method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating another gesture recognition method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating another gesture recognition method according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a gesture recognition apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1a and 1b are schematic structural diagrams of a panoramic camera apparatus according to an embodiment of the present invention; the panoramic camera apparatus may be a panoramic camera or a panoramic video camera capable of taking a 360-degree panoramic picture or video. Such a device can monitor a large area independently and without dead angles, and can observe a user's gesture operations without visual blind spots, achieving seamless monitoring. One panoramic camera can replace several ordinary cameras and achieve seamless monitoring at low cost, so panoramic cameras are widely applied in fields including prisons, government offices, banks, social security, public places and cultural venues.
Referring to fig. 1a, the panoramic camera apparatus may include a camera 101, a reflective assembly 102a, a reflective assembly 102b, a top cover 104 and a base 105, where the reflective assemblies 102a and 102b together constitute the panoramic component. The reflective assemblies 102a and 102b are made of reflective materials: incident light 103a is reflected by the reflective assembly 102a to form reflected light 103b, the light 103b is reflected a second time by the reflective assembly 102a to form light 103c, and finally the light 103c enters the camera 101 to form a panoramic image.
In addition to the embodiment provided in fig. 1a, fig. 1b provides another panoramic camera apparatus 10, which includes: a camera 101, a refraction assembly 106 and a base 105. The refraction assembly 106 constitutes the panoramic component of the camera apparatus. Incident light 103a is refracted by the refraction assembly 106 to form refracted light 103d, which enters the camera 101 to form a panoramic image.
Besides the panoramic camera apparatuses shown in fig. 1a and 1b, the panoramic camera apparatus 10 may also be another type of camera apparatus that acquires a panoramic image with a single camera.
The panoramic camera apparatus in this embodiment uses a panoramic component matched with a single camera, and can acquire panoramic images while effectively reducing cost. Moreover, since a single camera collects the panoramic image, the complexity of image processing is reduced compared with collecting images with multiple cameras and then stitching them into a panorama.
As a specific application of the panoramic camera apparatus, an embodiment of the present invention provides a gesture recognition method, which includes the following steps, with reference to fig. 2:
s201, acquiring a panoramic image which is shot by a single camera from the panoramic assembly and contains gesture operation.
In the field of gesture recognition, most cameras used are ordinary planar cameras. Such a camera can only shoot in one direction and suffers from a small viewing angle and blind spots. If it is used for gesture recognition and control, a large blind area exists behind the intelligent hardware, where human gestures cannot be captured and therefore cannot be recognized or used for control. With a 360-degree panoramic camera, the intelligent hardware can capture a person's gestures over the full 360-degree range and can receive the person's instructions without dead angles.
Specifically, after the camera 101 collects a panoramic image, the panoramic camera apparatus, or another electronic device communicatively connected to it, may acquire the panoramic image collected by the camera 101. The panoramic image may be shot by the camera at timed intervals, or obtained by extracting image frames from a video shot by the camera.
S202, performing a distortion correction operation on the panoramic image by using a distortion correction algorithm corresponding to the panoramic component to obtain a corrected image.
When the panoramic component is used to collect the panoramic image, the obtained image is distorted because of the curved-surface reflection or refraction. Therefore, the reflection or refraction of the panoramic component must be modeled mathematically, and the distorted pattern produced by the reflection or refraction corrected based on the mathematical model to form a corrected image.
For example, radial distortion is typically a distance shift caused by an inward or outward displacement between an image point and its ideal image position. Depending on the direction of the shift, positive and negative radial distortion can be distinguished. Positive radial distortion moves pixel points away from the image center, causing pincushion distortion; conversely, negative radial distortion moves pixel points toward the image center, causing barrel distortion. The mathematical model describing radial distortion is:
Δx = x·(k1·r² + k2·r⁴ + k3·r⁶ + …)
Δy = y·(k1·r² + k2·r⁴ + k3·r⁶ + …)
where Δx and Δy are the offsets of the coordinates (x, y) of a pixel point p in the horizontal and vertical directions, and k1, k2, k3 are the coefficients of the distortion model.
S203, performing a blocking operation on the corrected image to obtain a block image.
Because a panoramic image is generally large, recognizing a gesture over the whole image at once is slow. For video shot in real time, for example at 30 FPS, processing whole frames directly increases the load on the processor and may even make real-time processing, and hence gesture recognition, impossible. By partitioning the image into blocks, the blocks can be processed in parallel, for example with a GPU (graphics processing unit), which speeds up gesture recognition.
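As a CPU-side stand-in for the GPU parallelism mentioned above, the blocks of one frame can be handed to a worker pool; the per-block recognizer here is a placeholder, not the patent's algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

def process_blocks(blocks, recognize, max_workers=4):
    """Apply a per-block recognition function to all blocks of a frame
    in parallel and return the results in block order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(recognize, blocks))

# Placeholder recognizer: report each block's area instead of a real gesture score.
results = process_blocks([(64, 64), (128, 64)], lambda b: b[0] * b[1])
```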
Distortion is another reason for blocking. If the gesture is recognized globally in the distorted image, the total distortion is large and other areas strongly interfere with the gesture area. After blocking, the distortion within each block is relatively small and the interference from other areas on the gesture area is small, so the probability of recognizing the gesture increases.
In actual operation, the panoramic image needs to be divided into blocks and the recognition processing performed in parallel. The block sizes can be computed by progressively halving the image with staggered positioning. For example, the length and width of the current image can be repeatedly halved, stopping only when the length × width of a segmented image is smaller than 64 × 64 pixels. During segmentation, the blocks can be positioned in a staggered manner so that a human hand is likely to fall in the central area of some block.
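The progressive halving with staggered positioning might be sketched as follows; the concrete staggering scheme (one extra center-offset block per level) is an assumption, since the text only outlines the idea:

```python
def split_blocks(w, h, min_side=64):
    """Recursively halve an image region until the next halving would drop
    a side below min_side.  Returns (x, y, w, h) rectangles.  Besides the
    four regular quadrants, a centre-offset block is emitted at each level
    so a hand near a quadrant boundary can still land near a block centre
    (the "staggered positioning" described above)."""
    blocks = []

    def rec(x, y, bw, bh):
        blocks.append((x, y, bw, bh))
        if bw // 2 < min_side or bh // 2 < min_side:
            return
        hw, hh = bw // 2, bh // 2
        # four quadrants plus one staggered, centre-offset block
        for dx, dy in [(0, 0), (hw, 0), (0, hh), (hw, hh), (hw // 2, hh // 2)]:
            rec(x + dx, y + dy, hw, hh)

    rec(0, 0, w, h)
    return blocks
```

For a 256 × 256 image with a 64-pixel minimum side, this yields blocks of sizes 256, 128 and 64.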
S204, extracting the gesture characteristic value of the block image, and determining the command of the gesture operation based on the gesture characteristic value.
Because the panoramic view carries a certain amount of distortion, existing gesture recognition algorithms fail on the distorted panoramic view, and processing one frame of image is slow.
Therefore, on top of the existing gesture recognition algorithm, the characteristic values used for gesture recognition are trained and dynamically adjusted, so that the gesture recognition rate improves under radial distortion and each frame of image is processed quickly.
Specifically, existing gesture algorithms recognize gestures at close range (less than 2 m), whereas controlling intelligent hardware requires gesture recognition at longer distances, for example within 5 meters indoors and within 10 meters outdoors. Manual optimization for longer-distance gestures is therefore required.
For gesture recognition based on characteristic values, model training is usually performed in advance, and the operation instruction corresponding to a final characteristic value is determined through manual labeling and machine learning.
As an example, training gesture recognition on radially distorted images proceeds as follows:
(1) collect a library of planar images containing gestures;
(2) mark the position of the gesture in each image using the existing gesture recognition algorithm;
(3) radially distort the original image using the radial distortion method, and calculate the new position of the gesture marked in the previous step;
(4) calculate, with the existing gesture recognition algorithm, the characteristic value of the image region at the gesture position obtained in the previous step;
(5) adjust the parameters of the existing gesture recognition algorithm so that the characteristic value calculated in the previous step is recognized as the gesture.
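Step (3), recomputing the marked gesture position after radial distortion, can be sketched by pushing the corners of the labeled box through the forward model. Coordinates are assumed to be normalized relative to the distortion center, and both the function names and the coefficients are illustrative:

```python
def distort_point(x, y, k1, k2):
    """Forward radial distortion of one point (coordinates relative to
    the distortion center), using the two-coefficient model."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def distort_box(box, k1, k2):
    """Axis-aligned bounding box of the four distorted corners of a
    labeled gesture box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    pts = [distort_point(x, y, k1, k2) for x in (x0, x1) for y in (y0, y1)]
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))
```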
By adopting the method of this embodiment, a single camera collects the panoramic image, which reduces device cost and avoids visual blind areas; the panoramic image is segmented in a block-wise manner, which reduces the processor's load in handling it; and gesture recognition is performed with the adjusted gesture recognition algorithm, which improves recognition accuracy.
According to another embodiment of the present invention, referring to fig. 3, performing the distortion correction operation on the panoramic image by using the distortion correction algorithm corresponding to the panoramic component may include the following steps:
S301, acquiring a correction model corresponding to the panoramic component.
Taking radial distortion as an example: radial distortion is distortion distributed along the radial direction of the lens. It arises because light rays deviate more severely far from the lens center than near it. The distortion is especially noticeable in ordinary lenses and mainly comprises barrel distortion and pincushion distortion.
Radial distortion is symmetric: the distortion at the center is 0 and increases gradually along the radial direction outward from the optical center (the distortion center). The distortion model is a mathematical model describing the correspondence between the distorted image and the source image:
(X, Y)^T = (x, y)^T · (1 + k1·r^2 + k2·r^4 + higher-order terms)
r^2 = x^2 + y^2
wherein X and Y represent the coordinates of the pixel point relative to the distortion center, x and y represent the coordinates of the pixel point p in the distorted image, and k1 and k2 are coefficients of the distortion model.
S302, parameter simplification operation is carried out on the correction model to obtain a simplified correction model.
To simplify the computation, typically only the first- and second-order terms are considered, and the model is simplified to:
(X, Y)^T = (x, y)^T · (1 + k1·r^2 + k2·r^4)
Simplifying the model greatly reduces computational complexity and improves operational efficiency while preserving the correctness of the model.
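As a rough illustration of why truncating the series is acceptable, the sketch below compares the two-term model against one that keeps a hypothetical third-order term. The coefficient values are illustrative assumptions; real values come from calibrating the panoramic lens.

```python
# Hypothetical coefficients; real values come from lens calibration.
k1, k2, k3 = 1e-7, 1e-14, 1e-21

def scale_full(r2):
    # radial scale factor keeping a third-order (r^6) term
    return 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3

def scale_simplified(r2):
    # the simplified two-term model used by the correction algorithm
    return 1 + k1 * r2 + k2 * r2**2

r = 500.0          # pixels from the distortion center
r2 = r * r
err = abs(scale_full(r2) - scale_simplified(r2))
print(err)         # contribution of the dropped higher-order term
```

For these sample coefficients the dropped term contributes on the order of 1e-5 to the scale factor, far below the first- and second-order contributions, which motivates the simplification.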
And S303, forming a malformation correction algorithm based on the simplified correction model, and carrying out distortion correction operation on the panoramic image based on the malformation correction algorithm.
Specifically, referring to fig. 4, step S303 may be implemented as follows:
S401, finding the distortion center point of the panoramic image, and setting the distortion center point as the two-dimensional coordinate origin of the panoramic image.
For a given panoramic shooting device, the shape of the panoramic component is fixed, so the distortion center point is usually also a fixed point; after the panoramic image is acquired, the distortion center can be located at a known fixed coordinate. A two-dimensional coordinate system is then established with the distortion center as the origin.
S402, determining a horizontal coordinate x and a vertical coordinate y of a pixel point p on the panoramic image on the two-dimensional coordinate system;
Specifically, the pixel distance and orientation of the current pixel point p relative to the distortion center may be obtained. For example, if the pixel point p lies above and to the left of the distortion center, with horizontal and vertical pixel distances of 400 and 500 from the origin, its horizontal coordinate x and vertical coordinate y may be defined as -400 and 500; that is, the coordinates of the pixel point p are (-400, 500).
S403, performing, based on the malformation correction algorithm, coordinate transformation on the horizontal coordinate x and the vertical coordinate y to obtain a new horizontal coordinate X and a new vertical coordinate Y;
the coordinates of the pixel point p may be transformed using the formula (X, Y)^T = (x, y)^T · (1 + k1·r^2 + k2·r^4), obtaining the new coordinates of the pixel point p.
S404, converting the coordinates of the pixel point p on the panoramic image from (x, y) to (X, Y).
Through the operation, the malformed image can be effectively processed, and a basis is provided for subsequent effective gesture recognition.
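Steps S401 to S404 can be sketched in a short function. The function and parameter names are illustrative, and the coefficients k1 and k2 are assumed to come from the simplified correction model:

```python
def correct_coordinates(points, center, k1, k2):
    """Apply steps S401-S404 to a list of (x, y) pixel positions.
    `center` is the fixed distortion center of the panoramic device."""
    cx, cy = center
    out = []
    for (px, py) in points:
        # S401/S402: express the pixel relative to the distortion center
        x, y = px - cx, py - cy
        # S403: scale by (1 + k1*r^2 + k2*r^4)
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        # S404: convert back to image coordinates as the corrected (X, Y)
        out.append((cx + x * s, cy + y * s))
    return out
```

In practice this transform would be applied to every pixel (or, more efficiently, used as an inverse lookup when resampling) to produce the corrected image.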
Referring to fig. 5, an embodiment of the present invention further provides a specific gesture recognition method, including the following steps:
S501, comparing the gesture characteristic value for similarity with the parameters in a preset gesture recognition model to obtain a gesture similarity value.
Because a person's gestures are variable, static statistics on multiple states of the same gesture from multiple persons need to be gathered in advance to improve the gesture recognition rate. For each gesture recognition, the calculation result is a similarity value expressed as a probability.
The gesture operation can be recognized by adopting various gesture recognition algorithms, such as an image matching algorithm, a hidden Markov model, a Meanshift algorithm and the like. By comparing the gesture feature values with the parameters in the recognition model, a gesture similarity value may be obtained.
S502, judging whether the gesture similarity value is smaller than a first preset similarity value.
After the gesture similarity value is obtained in step S501, a plurality of similarity determination thresholds may be set, for example, the first preset similarity value is set to be 40%, and for a probability below 40%, it is considered that there is no gesture. Of course, the first preset similarity value may be set to other values as needed.
S503, judging whether the block image is larger than a preset size.
Since the blocking of the panoramic image is performed stepwise, it is necessary to determine whether the size of the block image is larger than a preset size (e.g., 200 × 200).
S504, continuously executing the segmentation operation on the block image.
In order to improve the recognition rate of the block images, the segmentation operation process may be continuously performed for images of which the block images are larger than a preset size.
S505, judging whether the gesture similarity value is larger than a second preset similarity value.
In addition to determining whether the similarity is smaller than the first preset value, it may be further determined whether the gesture similarity is greater than a second preset similarity value (e.g., 60%).
S506, determining the gesture command corresponding to the gesture feature value as the command of the gesture operation.
For feature values whose gesture similarity is greater than the second preset similarity value, the gesture command is determined by directly looking up the corresponding command in a preset feature-value-to-command mapping table, and the found command is taken as the command of the gesture operation.
In addition to the above steps, a certain probability interval may be set as an indeterminate interval (e.g., 40% to 60%), and for the gesture operation in this interval, the following steps are performed:
and S507, determining the gesture operation as a command pending operation.
And S508, storing the gesture characteristic value related to the command pending operation.
And for uncertain gesture recognition, carrying out dynamic statistics by adopting a Bayesian classification algorithm, and dynamically adjusting recognition parameters, so that the gesture recognition rate for fixed users is improved.
Through the mode, corresponding operations can be executed aiming at different types of gesture operations, and the accuracy of gesture recognition is improved.
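The threshold logic of steps S502 through S508 can be summarized in a small sketch. The 40% and 60% thresholds are the example values from the text and would be configurable in practice:

```python
def classify_similarity(similarity, low=0.40, high=0.60):
    """Decision logic for a gesture similarity value (a probability).
    Thresholds follow the example values in the text."""
    if similarity < low:
        return "no_gesture"       # S502: below the first preset similarity value
    if similarity > high:
        return "execute_command"  # S505/S506: confident match, look up the command
    return "pending"              # S507/S508: uncertain interval, store for statistics
```

A "no_gesture" result on a block larger than the preset size would trigger further segmentation of that block (S503/S504) before giving up on it.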
For uncertain gesture recognition, carrying out dynamic statistics by adopting a Bayesian classification algorithm, and dynamically adjusting recognition parameters based on the following assumptions:
(1) Owing to the randomness of users' gestures, a user's gesture is not standard every time.
(2) While operating the intelligent hardware, when the user makes an irregular gesture and the hardware does not respond, the user realizes that the gesture was irregular and then makes a regular one.
(3) The intelligent hardware can associate irregular gestures with regular gestures of the same type, performing dynamic statistics with a Bayesian classification algorithm and dynamically adjusting the recognition parameters.
(4) As more irregular gestures of the same type are recorded, a personalized variant of that gesture type is formed, improving the gesture recognition rate for a fixed user.
For this reason, referring to fig. 6, the gesture recognition method provided in the embodiment of the present invention may further include the following steps:
S601, counting gesture characteristic values related to the command pending operation.
Specifically, the gesture characteristic values related to command pending operations within a preset time period may be counted; alternatively, it may be judged whether the number of such characteristic values has reached a preset threshold, with the statistics performed once the threshold is reached.
S602, determining a specific instruction of the command pending operation based on the gesture characteristic value.
And for uncertain gestures, carrying out dynamic statistics by adopting a Bayesian classification algorithm, and dynamically adjusting recognition parameters, so that the gesture recognition rate for fixed users is improved. Therefore, the user can operate naturally, and the gesture command accords with the action characteristics of the user.
S603, updating the gesture recognition parameters related to the specific instruction in the preset gesture recognition model.
Through the mode, the recognition accuracy of the gesture command can be further improved.
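A minimal sketch of steps S601 to S603 follows. The nearest-mean update below is a simple stand-in for the Bayesian classification the text mentions (whose exact form the patent does not specify), and it treats feature values as scalars purely for illustration:

```python
class PendingGestureStats:
    """Accumulate pending gesture feature values (S601) and, once enough are
    collected, assign them to the nearest known gesture class and nudge that
    class's parameter toward the user's habitual gesture (S602/S603)."""

    def __init__(self, class_means, min_samples=20):
        self.class_means = dict(class_means)  # gesture name -> mean feature value
        self.min_samples = min_samples
        self.pending = []

    def record(self, feature):
        # S601: store and count feature values from command pending operations
        self.pending.append(feature)
        if len(self.pending) >= self.min_samples:
            self._update()

    def _update(self):
        # S602/S603: resolve each pending value and update recognition parameters
        for f in self.pending:
            name = min(self.class_means, key=lambda c: abs(self.class_means[c] - f))
            # exponential update shifts the class parameter toward this user's style
            self.class_means[name] = 0.9 * self.class_means[name] + 0.1 * f
        self.pending.clear()
```

With repeated use, the stored parameters drift toward a fixed user's personal gesture style, which is the effect the text attributes to the dynamic statistics.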
Corresponding to the above gesture recognition method, referring to fig. 7, an embodiment of the present invention further discloses a gesture recognition apparatus 70, including:
an obtaining module 701, configured to obtain a panoramic image, which is shot by a single camera from the panoramic component and contains a gesture operation.
In the field of gesture recognition, most cameras used for gesture recognition are ordinary planar cameras. Such a camera captures only one direction, so it has a small viewing angle and dead angles. If it is used for gesture recognition and control, a large dead angle exists behind the intelligent hardware; gestures made in this area cannot be captured, and thus cannot be recognized or acted upon. With a 360-degree panoramic camera, the intelligent hardware can capture a person's gestures anywhere within the 360-degree range, receiving commands without dead angles.
Specifically, after the camera 101 collects a full view image, the panoramic camera or other electronic devices in communication connection with the panoramic camera may acquire the panoramic image collected by the camera 101. The panoramic image can be obtained by a camera through timing shooting, or can be obtained by extracting image frames from a video shot by the camera.
A correction module 702, configured to perform distortion correction operation on the panoramic image by using a malformation correction algorithm corresponding to the panoramic component, so as to obtain a corrected image.
The panoramic assembly is adopted to collect the panoramic image, and the obtained panoramic image has distortion due to the existence of curved surface reflection or refraction. Therefore, it is necessary to mathematically model the reflection or refraction of the panoramic component, and correct the distorted pattern generated by the reflection or refraction of the panoramic component based on the mathematical model to form a corrected image.
Taking radial distortion as an example: it is typically a distance shift of an image point inward or outward from its ideal position. Depending on the direction of the shift, positive and negative radial distortion are distinguished. Positive radial distortion moves pixel points away from the image center, causing pincushion distortion; negative radial distortion moves pixel points toward the image center, causing barrel distortion. The offsets are modeled as:
Δx = x · (k1·r^2 + k2·r^4 + k3·r^6 + …)
Δy = y · (k1·r^2 + k2·r^4 + k3·r^6 + …)
wherein Δx and Δy are the offsets of the coordinates (x, y) of the pixel point p in the horizontal and vertical directions, and k1, k2, k3 are coefficients of the distortion model.
A blocking module 703, configured to perform a blocking operation on the corrected image to obtain a blocked image;
Because a panoramic image is generally large, recognizing a gesture globally over a whole image is slow. For video shot in real time, e.g., at 30 FPS, processing whole frames directly increases the processor load and may make real-time processing, and hence gesture recognition, impossible. By blocking the image, the block images can be processed in parallel on a GPU (graphics processing unit), increasing the gesture recognition speed.
Distortion also matters: if gestures are recognized globally in the distorted image, the overall distortion is large and other areas strongly interfere with the gesture area. After blocking, the distortion within each block is relatively small and interference from other areas is reduced, so the probability of recognizing the gesture improves.
In actual operation, the panoramic image needs to be divided and the blocks recognized in parallel. The block sizes can be computed by gradually halving with staggered positioning. For example, the length and width of the current image may be repeatedly split in half, stopping only when the split image's length × width falls below 64 × 64 pixels. The splits can be positioned with staggered offsets so that a human hand is likely to fall in the central area of some block.
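The halving strategy can be sketched as follows. The 64 × 64 stopping size follows the example above; the staggered, overlapping placement mentioned in the text is noted in a comment but omitted for brevity:

```python
def split_blocks(width, height, min_side=64):
    """Recursively halve the image extent, collecting (x, y, w, h) blocks,
    and stop once a block would fall below min_side x min_side pixels.
    A fuller implementation would also emit staggered, overlapping blocks
    so a hand can land near the center of at least one block."""
    blocks = []

    def split(x, y, w, h):
        if w < min_side or h < min_side:   # stop: below 64 x 64 pixels
            return
        blocks.append((x, y, w, h))
        hw, hh = w // 2, h // 2
        for (nx, ny) in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
            split(nx, ny, hw, hh)

    split(0, 0, width, height)
    return blocks
```

Each returned block can then be handed to a separate worker (e.g., a GPU kernel or thread) for gesture recognition in parallel.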
A determining module 704, configured to extract a gesture feature value of the tile image, and determine a command of the gesture operation based on the gesture feature value.
Because the panoramic view has a certain degree of distortion and deformation, existing gesture recognition algorithms fail on the distorted panoramic view, and processing one frame of image is slow.
Therefore, on top of the existing gesture recognition algorithm, the feature values for gesture recognition are trained and dynamically adjusted, so that the recognition rate is improved under radial distortion and one frame of image can be processed quickly.
Specifically, existing gesture algorithms recognize gestures only at close range (less than 2 m), whereas controlling intelligent hardware requires gesture recognition at longer distances, for example within 5 meters indoors and within 10 meters outdoors. Manual optimization for longer-distance gestures is therefore required.
For gesture recognition based on the feature values, model training is usually performed in advance, and an operation instruction corresponding to the final feature values is determined in a manual labeling and machine learning manner.
As an example, for training gesture recognition in a radially distorted image, the working steps are as follows:
(1) collecting a planar image library containing gestures;
(2) marking the position of each gesture in the images using the existing gesture recognition algorithm;
(3) applying radial distortion to the original images using a radial distortion method, and calculating the new positions of the gestures marked in the previous step;
(4) calculating, with the existing gesture recognition algorithm, the feature values of the images at the gesture positions obtained in the previous step;
(5) adjusting the parameters of the existing gesture recognition algorithm so that the feature values calculated in the previous step are recognized as gestures.
According to the device of this embodiment, a single camera is used to collect the panoramic image, which reduces equipment cost and avoids visual blind areas; the panoramic image is segmented by block processing, which reduces the load on the processor handling the panoramic image; and gesture recognition is performed with the adjusted gesture recognition algorithm, which improves recognition accuracy.
Other functions of the gesture recognition apparatus 70 provided in the embodiment of the present invention correspond to corresponding embodiments or implementations of the gesture recognition method, and are not described herein again.
Referring to fig. 8, an embodiment of the present invention further provides an electronic device 80, where the electronic device 80 may include: at least one processor 801, a memory 802, an input output interface 803, a radio frequency circuit 804, an audio circuit 805, a camera component 806, and a panoramic component 807. Wherein, the rf circuit 804 receives signals through the antenna 8041; the audio circuit 805 is connected to a speaker 8051 and a microphone 8052; the camera module 806 is configured to obtain panoramic light provided by the panoramic module 807 to form a panoramic image or a panoramic video, and the camera module 806 may be the camera 101 shown in fig. 1a and 1b, or may be another type of device having a camera function; the panoramic image or panoramic video is stored in the memory 802. The at least one processor 801 is communicatively coupled to a memory 802, the memory 802 storing instructions executable by the at least one processor, the instructions being executable by the at least one processor 801 to enable the at least one processor to perform any of the embodiments of gesture recognition methods previously described.
The electronic device exists as a single image recognition device, and can also be used as an accessory of other devices for providing gesture recognition instructions for the other devices. For example, the electronic device may exist in a variety of forms, including but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communication capability and are primarily targeted at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) An ultra-mobile personal computer device: this equipment belongs to the category of personal computers, has computing and processing functions, and generally supports mobile internet access. Such terminals include PDA, MID, and UMPC devices, e.g., iPads.
(3) A portable entertainment device: such devices can display and play multimedia content. This type of device includes audio and video players (e.g., iPods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) A server: a device providing computing services, comprising a processor, hard disk, memory, system bus, etc. A server is similar in architecture to a general-purpose computer, but demands higher processing capacity, stability, reliability, security, scalability, and manageability, since it must provide highly reliable services.
(5) Unmanned aerial vehicle, robot or similar product with gesture recognition function.
(6) Other electronic devices with gesture recognition functionality.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments.
In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof.
In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A gesture recognition method, comprising:
acquiring a panoramic image containing gesture operations, shot by a single camera from a panoramic assembly, the panoramic assembly being used for capturing gestures within a 360-degree range, the panoramic image being formed by a panoramic camera device, the panoramic camera device comprising a camera, a refraction component and a base (105), wherein the refraction component forms a panoramic component of the panoramic camera device, incident light is refracted by the refraction component to form refracted light, and the refracted light enters the camera to form a panoramic image;
carrying out distortion correction operation on the panoramic image by using a malformation correction algorithm corresponding to the panoramic component to obtain a corrected image;
performing blocking operation on the corrected image to obtain a blocking image;
extracting a gesture characteristic value of the block image, and determining a command of the gesture operation based on the gesture characteristic value; wherein,
carrying out distortion correction operation on the panoramic image by using a malformation correction algorithm corresponding to the panoramic component to obtain a corrected image, comprising:
acquiring a correction model corresponding to the panoramic component; carrying out a parameter simplification operation on the correction model to obtain a simplified correction model; and forming a malformation correction algorithm based on the simplified correction model and performing the distortion correction operation on the panoramic image based on the malformation correction algorithm, specifically: searching a distortion center point of the panoramic image, setting the distortion center point as the origin of a two-dimensional coordinate system of the panoramic image, determining a horizontal coordinate x and a vertical coordinate y of a pixel point p on the panoramic image in the two-dimensional coordinate system, and performing coordinate transformation on the horizontal coordinate x and the vertical coordinate y based on the malformation correction algorithm to obtain a new horizontal coordinate X and a new vertical coordinate Y, so that the coordinates of the pixel point p on the panoramic image are transformed from (x, y) to (X, Y); wherein performing the coordinate transformation on the horizontal coordinate x and the vertical coordinate y based on the malformation correction algorithm to obtain the new horizontal coordinate X and the new vertical coordinate Y comprises: acquiring a radial distance r between the pixel point p and the origin of the two-dimensional coordinate system; obtaining a first correction coefficient k1 and a second correction coefficient k2 of the simplified correction model; and performing a dot product operation on the coordinates (x, y) based on the radial distance r, the first correction coefficient k1 and the second correction coefficient k2 to obtain new coordinates (X, Y) of the pixel point p;
the distortion model adopts a mathematical model for describing the corresponding relation between a distorted image and a source image as follows:
(X, Y)^T = (x, y)^T · (1 + k1·r^2 + k2·r^4 + higher-order terms)
wherein X and Y represent the coordinates of the pixel point relative to the distortion center, x and y represent the coordinates of the pixel point p in the distorted image, and k1 and k2 are coefficients of the distortion model; to simplify the calculation, only the first- and second-order terms are considered, and the calculation model is simplified to:
(X, Y)^T = (x, y)^T · (1 + k1·r^2 + k2·r^4).
2. the gesture recognition method according to claim 1, wherein the extracting of the gesture feature value of the block image and the determining of the command of the gesture operation based on the gesture feature value comprise:
comparing the gesture characteristic value with the similarity of parameters in a preset gesture recognition model to obtain a gesture similarity value; and
determining a command of the gesture operation based on the gesture similarity value.
3. The gesture recognition method according to claim 2, wherein the determining the command of the gesture operation based on the gesture similarity value comprises:
and when the gesture similarity value is smaller than a first preset similarity value and the block image is larger than a preset size, continuing to execute segmentation operation on the block image.
4. The gesture recognition method according to claim 2, wherein the determining the command of the gesture operation based on the gesture similarity value comprises:
and when the gesture similarity value is larger than a second preset similarity value, determining a gesture command corresponding to the gesture feature value as a command of the gesture operation.
5. The gesture recognition method according to claim 2, wherein the determining the command of the gesture operation based on the gesture similarity value comprises:
when the gesture similarity value is between a closed-loop interval formed by a first preset similarity value and a second preset similarity value, determining the gesture operation as a command to-be-determined operation; and
and storing the gesture characteristic value related to the command pending operation.
6. The gesture recognition method according to claim 5, further comprising:
counting gesture characteristic values related to the command pending operation;
determining a specific instruction of the command pending operation based on the gesture characteristic value; and
and updating the gesture recognition parameters related to the specific instruction in the preset gesture recognition model.
7. A gesture recognition apparatus, comprising:
an acquisition module for acquiring a panoramic image containing gesture operations, shot by a single camera from a panoramic assembly, the panoramic assembly being used for capturing gestures within a 360-degree range, the panoramic image being formed by a panoramic camera device, the panoramic camera device comprising a camera, a refraction component and a base (105), wherein the refraction component forms a panoramic component of the panoramic camera device, incident light is refracted by the refraction component to form refracted light, and the refracted light enters the camera to form a panoramic image;
the correction module is used for carrying out distortion correction operation on the panoramic image by utilizing a malformation correction algorithm corresponding to the panoramic component to obtain a corrected image;
the blocking module is used for executing blocking operation on the corrected image to obtain a blocking image;
the determining module is used for extracting the gesture characteristic value of the block image and determining the command of the gesture operation based on the gesture characteristic value; wherein the correction module is further configured to:
acquiring a correction model corresponding to the panoramic component; carrying out a parameter simplification operation on the correction model to obtain a simplified correction model; and forming a malformation correction algorithm based on the simplified correction model and performing the distortion correction operation on the panoramic image based on the malformation correction algorithm, specifically: searching a distortion center point of the panoramic image, setting the distortion center point as the origin of a two-dimensional coordinate system of the panoramic image, determining a horizontal coordinate x and a vertical coordinate y of a pixel point p on the panoramic image in the two-dimensional coordinate system, and performing coordinate transformation on the horizontal coordinate x and the vertical coordinate y based on the malformation correction algorithm to obtain a new horizontal coordinate X and a new vertical coordinate Y, so that the coordinates of the pixel point p on the panoramic image are transformed from (x, y) to (X, Y); wherein performing the coordinate transformation on the horizontal coordinate x and the vertical coordinate y based on the malformation correction algorithm to obtain the new horizontal coordinate X and the new vertical coordinate Y comprises: acquiring a radial distance r between the pixel point p and the origin of the two-dimensional coordinate system; obtaining a first correction coefficient k1 and a second correction coefficient k2 of the simplified correction model; and performing a dot product operation on the coordinates (x, y) based on the radial distance r, the first correction coefficient k1 and the second correction coefficient k2 to obtain new coordinates (X, Y) of the pixel point p;
the distortion model adopts the following mathematical model to describe the correspondence between the distorted image and the source image:
(X, Y)^T = (x, y)^T (1 + k1·r^2 + k2·r^4 + … + higher-order terms)
wherein x and y represent the coordinates of the pixel point relative to the distortion center in the source image, X and Y represent the corresponding coordinates of the pixel point p in the distorted image, and k1, k2 are the coefficients of the distortion model; to simplify the calculation, only the first- and second-order terms are considered, simplifying the calculation model to:
(X, Y)^T = (x, y)^T (1 + k1·r^2 + k2·r^4).
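The simplified two-coefficient transformation above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name `radial_transform` and the sample coefficients are assumptions.

```python
import numpy as np

def radial_transform(coords, center, k1, k2):
    """Scale pixel coordinates by (1 + k1*r^2 + k2*r^4) about the
    distortion center, per the simplified two-coefficient model."""
    rel = np.asarray(coords, dtype=float) - center  # shift origin to the distortion center
    r2 = np.sum(rel ** 2, axis=-1, keepdims=True)   # squared radial distance r^2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2            # 1 + k1*r^2 + k2*r^4
    return center + rel * scale                     # transformed coordinates (X, Y)

# A point 5 px from the center (r^2 = 25) with k1 = 0.01, k2 = 0
# is scaled by 1.25: (3, 4) -> (3.75, 5.0).
pts = radial_transform([[3.0, 4.0]], center=np.zeros(2), k1=0.01, k2=0.0)
```

Note that this is the forward model mapping source to distorted coordinates; an actual correction step would apply its inverse (or resample through it), which the claim leaves to the formed correction algorithm.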
8. the gesture recognition apparatus of claim 7, wherein the determination module is further configured to:
comparing the similarity between the gesture characteristic value and parameters in a preset gesture recognition model to obtain a gesture similarity value; and
determining a command of the gesture operation based on the gesture similarity value.
9. The gesture recognition apparatus of claim 8, wherein the determination module is further configured to:
when the gesture similarity value is smaller than a first preset similarity value and the block image is larger than a preset size, continuing to perform the segmentation operation on the block image.
10. The gesture recognition apparatus of claim 8, wherein the determination module is further configured to:
when the gesture similarity value is larger than a second preset similarity value, determining the gesture command corresponding to the gesture characteristic value as the command of the gesture operation.
11. The gesture recognition apparatus of claim 8, wherein the determination module is further configured to:
when the gesture similarity value falls within the closed interval formed by the first preset similarity value and the second preset similarity value, determining the gesture operation as a command-pending operation; and
storing the gesture characteristic value related to the command-pending operation.
12. The gesture recognition apparatus of claim 11, wherein the determination module is further configured to:
counting the gesture characteristic values related to the command-pending operation;
determining a specific instruction of the command-pending operation based on the gesture characteristic values; and
updating the gesture recognition parameters related to the specific instruction in the preset gesture recognition model.
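The thresholds in claims 9 through 11 define a three-way decision over each block's similarity value; a minimal sketch follows. The function and threshold names are illustrative, and the "reject" fallback for small, low-similarity blocks is an assumption the claims do not spell out.

```python
def classify_block(similarity, t_low, t_high, block_size, min_size):
    """Three-way decision over a block's gesture similarity value:
    above t_high, execute the matched gesture command; below t_low with a
    sufficiently large block, segment the block further; within the closed
    interval [t_low, t_high], mark the command as pending so its feature
    value can be stored and later used to update the recognition model."""
    if similarity > t_high:
        return "execute"
    if similarity < t_low:
        # Claim 9 only covers blocks still larger than the preset size;
        # "reject" is an assumed fallback for blocks too small to split.
        return "segment" if block_size > min_size else "reject"
    return "pending"  # similarity within the closed interval [t_low, t_high]
```

For example, `classify_block(0.9, 0.3, 0.8, 64, 16)` yields `"execute"`, while a value of 0.5 with the same thresholds yields `"pending"`.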
13. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the gesture recognition method of any of claims 1-6.
CN201710105905.9A 2017-02-23 2017-02-23 Gesture recognition method and device and electronic equipment Active CN106934351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710105905.9A CN106934351B (en) 2017-02-23 2017-02-23 Gesture recognition method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710105905.9A CN106934351B (en) 2017-02-23 2017-02-23 Gesture recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN106934351A CN106934351A (en) 2017-07-07
CN106934351B true CN106934351B (en) 2020-12-29

Family

ID=59423147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710105905.9A Active CN106934351B (en) 2017-02-23 2017-02-23 Gesture recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN106934351B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107260412B (en) * 2017-07-11 2019-01-04 雷磊 Folding intelligence stretcher
CN107212971B (en) * 2017-07-11 2018-09-21 薛红 A kind of folding intelligence stretcher
CN108344442B (en) * 2017-12-30 2021-01-08 广州正峰电子科技有限公司 Object state detection and identification method, storage medium and system
CN108490607A (en) * 2018-02-24 2018-09-04 江苏斯当特动漫设备制造有限公司 A kind of holographic virtual implementing helmet based on cultural tour service
CN111950328A (en) * 2019-05-15 2020-11-17 阿里巴巴集团控股有限公司 Method and device for determining object class in picture
TWI777153B (en) * 2020-04-21 2022-09-11 和碩聯合科技股份有限公司 Image recognition method and device thereof and ai model training method and device thereof
CN112329529B (en) * 2020-09-29 2023-08-18 远光软件股份有限公司 Automatic calibration checking device and method for sample storage cabinet
CN112819725B (en) * 2021-02-05 2023-10-03 广东电网有限责任公司广州供电局 Quick image correction method for radial distortion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523395A (en) * 2011-11-15 2012-06-27 中国科学院深圳先进技术研究院 Television system having multi-point touch function, touch positioning identification method and system thereof
CN103118189A (en) * 2013-01-25 2013-05-22 广东欧珀移动通信有限公司 Post-call gesture operation method and post-call gesture operation device for mobile phone
CN103247031A (en) * 2013-04-19 2013-08-14 华为技术有限公司 Method, terminal and system for correcting aberrant image
CN105046249A (en) * 2015-09-07 2015-11-11 哈尔滨市一舍科技有限公司 Human-computer interaction method
CN105957015A (en) * 2016-06-15 2016-09-21 武汉理工大学 Thread bucket interior wall image 360 DEG panorama mosaicing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Juho Kannala et al., "A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, Aug. 2006, pp. 1335-1340. *

Also Published As

Publication number Publication date
CN106934351A (en) 2017-07-07

Similar Documents

Publication Publication Date Title
CN106934351B (en) Gesture recognition method and device and electronic equipment
US10198823B1 (en) Segmentation of object image data from background image data
US9965865B1 (en) Image data segmentation using depth data
US10217195B1 (en) Generation of semantic depth of field effect
WO2019218824A1 (en) Method for acquiring motion track and device thereof, storage medium, and terminal
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
US7554575B2 (en) Fast imaging system calibration
US8855369B2 (en) Self learning face recognition using depth based tracking for database generation and update
WO2020125499A1 (en) Operation prompting method and glasses
CN111429517A (en) Relocation method, relocation device, storage medium and electronic device
CN109151442B (en) Image shooting method and terminal
CN104583902A (en) Improved identification of a gesture
CN111160291B (en) Human eye detection method based on depth information and CNN
US11354879B1 (en) Shape-based edge detection
CN104508680A (en) Object tracking in video stream
JP6157165B2 (en) Gaze detection device and imaging device
CN103105924A (en) Man-machine interaction method and device
US9129375B1 (en) Pose detection
CN112689221A (en) Recording method, recording device, electronic device and computer readable storage medium
CN111598149B (en) Loop detection method based on attention mechanism
CN111290584A (en) Embedded infrared binocular gesture control system and method
CN112700568B (en) Identity authentication method, equipment and computer readable storage medium
EP3975047B1 (en) Method for determining validness of facial feature, and electronic device
WO2024022301A1 (en) Visual angle path acquisition method and apparatus, and electronic device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant