CN109101860A - Electronic equipment and gesture recognition method thereof - Google Patents

Electronic equipment and gesture recognition method thereof

Info

Publication number
CN109101860A
CN109101860A (application number CN201710475523.5A; granted as CN109101860B)
Authority
CN
China
Prior art keywords
hand
depth information
block
image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710475523.5A
Other languages
Chinese (zh)
Other versions
CN109101860B (en)
Inventor
杨荣浩
蔡东佐
庄志远
郭锦斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yuzhan Precision Technology Co ltd
Hon Hai Precision Industry Co Ltd
Original Assignee
Shenzhen Yuzhan Precision Technology Co ltd
Hon Hai Precision Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yuzhan Precision Technology Co ltd and Hon Hai Precision Industry Co Ltd
Priority to CN201710475523.5A
Publication of CN109101860A
Application granted
Publication of CN109101860B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A gesture recognition method is applied to an electronic device. The gesture recognition method comprises the following steps: obtaining an image that contains a hand and carries depth information; filtering out static objects contained in the image; obtaining the coordinates of the hand in the image, and establishing a first block containing the hand according to the coordinates; obtaining the depth information of each pixel in the first block and counting the number of pixels for each depth value; obtaining the depth information of the hand according to the statistical result, and establishing a second block using the depth information of the hand; and obtaining the motion track of the hand in the second block and recognizing the gesture of the hand according to the motion track. The present invention also provides an electronic device. The above electronic device and gesture recognition method accurately establish a gesture detection area, improve the accuracy of operating the electronic device by gestures, and enhance the user experience.

Description

Electronic equipment and gesture recognition method thereof
Technical field
The present invention relates to the technical field of electronic communication, and more particularly to an electronic device and a gesture recognition method thereof.
Background
Current image recognition technology can recognize a gesture in an image and the current position of the hand in it. However, the detection block usually contains not only the gesture but also other non-hand objects such as walls, furniture, the head, or the torso, so the depth information obtained for the gesture may contain errors. These errors make it impossible to establish an accurate gesture detection area, which causes inaccuracy when the user operates the device by gestures and degrades the user experience.
Summary of the invention
In view of this, it is necessary to provide a gesture recognition method, and an electronic device using the gesture recognition method, that can accurately establish a gesture detection area.
An embodiment of the present invention provides a gesture recognition method applied to an electronic device. The gesture recognition method comprises the following steps: obtaining an image that contains a hand and carries depth information; filtering out static objects contained in the image; obtaining the coordinates of the hand in the image, and establishing a first block containing the hand according to the coordinates; obtaining the depth information of each pixel in the first block and counting the number of pixels for each depth value; obtaining the depth information of the hand according to the statistical result, and establishing a second block using the depth information of the hand; and obtaining the motion track of the hand in the second block and recognizing the gesture of the hand according to the motion track.
An embodiment of the present invention provides an electronic device comprising a memory, at least one processor, and one or more modules stored in the memory and executed by the at least one processor. The one or more modules comprise: an image acquiring module, for obtaining an image that contains a hand and carries depth information; a first filtering module, for filtering out static objects contained in the image; a first establishing module, for obtaining the coordinates of the hand in the image and establishing a first block containing the hand according to the coordinates; a statistical module, for obtaining the depth information of each pixel in the first block and counting the number of pixels for each depth value; a second establishing module, for obtaining the depth information of the hand according to the statistical result of the statistical module and establishing a second block using the depth information of the hand; and a recognition module, for obtaining the motion track of the hand in the second block and recognizing the gesture of the hand according to the motion track.
Compared with the prior art, the above electronic device and its gesture recognition method can filter out non-hand objects when obtaining the depth information of the hand, so that the error of the obtained hand depth information is small. This allows a gesture detection area to be established accurately, improves the accuracy of operating the electronic device by gestures, and enhances the user experience.
Brief description of the drawings
Fig. 1 is a functional block diagram of an electronic device according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of a gesture recognition system according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the first block established by the first establishing module according to an embodiment of the present invention.
Fig. 4 is a histogram for counting the number of pixels at each depth value according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the second block established by the second establishing module according to an embodiment of the present invention.
Fig. 6 is a flowchart of the gesture recognition method according to an embodiment of the present invention.
Description of main element symbols
The present invention will be further described in the following detailed description in conjunction with the above drawings.
Detailed description of the embodiments
Referring to Fig. 1, in one embodiment, an electronic device 100 includes a gesture recognition system 1, a processor 2, and a memory 3. These elements are electrically connected to one another. The electronic device 100 can be a device such as a television, a mobile phone, or a tablet computer.
The gesture recognition system 1 is used to detect and recognize, in real time, the gesture of a hand 20, so that the electronic device 100 can be controlled by gestures. The memory 3 can be used to store various data of the electronic device 100, such as the program code of the gesture recognition system 1. The gesture recognition system 1 includes one or more modules, which are stored in the memory 3 and executed by the processor 2 to implement the functions provided by the present invention.
Referring to Figs. 2-5, the gesture recognition system 1 includes an image acquiring module 11, a first filtering module 12, a first establishing module 13, a statistical module 14, a second establishing module 15, and a recognition module 16. A module in the present invention refers to a program segment that performs a specific function.
The image acquiring module 11 is used to obtain an image that contains a hand 20 and carries depth information. For example, the image acquiring module 11 can start a depth camera 4 to obtain an RGB image containing the hand 20 and the depth information of each object in the RGB image. Each pixel of each frame of the RGB image can be represented by a coordinate in an XY coordinate system, its depth information can be represented by a Z coordinate, and thus each pixel can be represented by a three-dimensional coordinate.
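The patent contains no code; as an illustrative sketch only (array names and sizes are assumptions, not from the patent), pairing an RGB frame with an aligned depth map into per-pixel three-dimensional coordinates might look like this in Python:

```python
import numpy as np

# rgb: H x W x 3 color frame; depth: H x W depth map aligned with it (values 0-255).
rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # placeholder frame
depth = np.zeros((480, 640), dtype=np.uint8)     # placeholder depth map

# Each pixel gets a three-dimensional coordinate: X/Y from the image grid, Z from the depth map.
ys, xs = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
xyz = np.dstack([xs, ys, depth])                 # H x W x 3 array of (X, Y, Z) per pixel
```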
The first filtering module 12 is used to filter out static objects contained in the image obtained by the image acquiring module 11. In one embodiment, the first filtering module 12 can feed the image obtained by the image acquiring module 11 into a Gaussian Mixture Model (GMM) and then filter out the static objects in the image through the GMM (background objects present when shooting, such as walls and seats), so as to retain the dynamic objects in the image (such as a person's head, hands, and body).
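As a hedged illustration of GMM-based background subtraction (not the patent's own implementation), OpenCV's MOG2 subtractor, which maintains a Gaussian mixture model of the background, can be used to keep only the dynamic objects; the frame source and parameter values below are assumptions:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

cap = cv2.VideoCapture(0)                 # any RGB frame source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)     # static background pixels become 0
    dynamic = cv2.bitwise_and(frame, frame, mask=fg_mask)  # keep moving objects only
    cv2.imshow("dynamic objects", dynamic)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
```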
The first establishing module 13 is used to obtain the coordinates of the hand 20 in the image and establish a first block 200 containing the hand 20 according to the obtained coordinates. In one embodiment, the first establishing module 13 can find the coordinates of the hand 20 in the GMM-filtered image through a deep learning algorithm. Specifically, the first establishing module 13 can learn and establish a characteristic value of the hand 20 through the deep learning algorithm, use the characteristic value of the hand 20 to find the coordinates of the hand 20 in the GMM-filtered image, and establish the first block 200 containing the hand 20 according to the coordinates of the hand 20 (as shown in Fig. 3). The proportion of the area of the hand 20 in the first block 200 is preferably greater than a preset ratio, so as to avoid a low recognition speed caused by a proportion that is too small. The preset ratio can be adjusted according to the actual recognition accuracy requirement. In the present embodiment, the preset ratio is 40%, i.e., the proportion of the area of the hand 20 in the first block 200 is preferably greater than 40%. In the first block 200, each pixel likewise has a corresponding XY coordinate and depth information (Z coordinate).
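A minimal sketch of the 40% area-ratio check on a candidate first block, assuming a hand mask and a bounding box are already available from the detector (the helper names are illustrative, not from the patent):

```python
import numpy as np

def hand_area_ratio(hand_mask: np.ndarray, box: tuple) -> float:
    """Fraction of the block's pixels that belong to the hand.

    hand_mask: H x W boolean mask of hand pixels (e.g. from a detector).
    box: (x, y, w, h) of the candidate first block.
    """
    x, y, w, h = box
    block = hand_mask[y:y + h, x:x + w]
    return float(block.sum()) / float(w * h)

def block_is_acceptable(hand_mask, box, preset_ratio=0.40):
    # The block is kept only if the hand occupies more than the preset ratio of its area.
    return hand_area_ratio(hand_mask, box) > preset_ratio
```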
The statistical module 14 is used to obtain the depth information of each pixel in the first block 200 and count the number of pixels for each depth value. Since each pixel has a corresponding XY coordinate and depth information, the statistical module 14 can query the depth information of each pixel directly through its XY coordinate. Further, the statistical module 14 can use a histogram to count the number of pixels at each depth value.
For example, as shown in Fig. 4, the first block 200 is a 5 (rows) by 5 (columns) block containing 25 pixels, each pixel has a depth value, and the range of depth values is 0-255. In Fig. 4, the X axis of the histogram is the depth value (0-255) and the Y axis is the number of pixels. The histogram shows that the number of pixels with depth value 50 is 10, the number with depth value 90 is 12, the number with depth value 240 is 2, and the number with depth value 8 is 1.
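As an illustrative sketch of this per-depth pixel count (the 5x5 block below mirrors the Fig. 4 example; the code is not part of the patent):

```python
import numpy as np

# Depth values of the 25 pixels in the 5x5 first block (values chosen to match the Fig. 4 example).
block_depth = np.array(
    [[ 50,  50,  90,  90,  90],
     [ 50,  50,  90,  90,  90],
     [ 50,  50,  90,  90,  90],
     [ 50,  50,  90,  90,  90],
     [ 50,  50, 240, 240,   8]], dtype=np.uint8)

# Histogram over the full 0-255 depth range: hist[d] = number of pixels with depth value d.
hist = np.bincount(block_depth.ravel(), minlength=256)
print(hist[50], hist[90], hist[240], hist[8])   # -> 10 12 2 1
```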
The second establishing module 15 is used to obtain the depth information of the hand 20 according to the statistical result of the statistical module 14 and to establish a second block 300 using the depth information of the hand 20. In one embodiment, pixels whose depth information is less than a preset depth value can be regarded as noise, and the second establishing module 15 can first filter out the pixels in the first block 200 whose depth information is less than the preset depth value. The preset depth value can be adjusted according to the actual recognition accuracy requirement; in the present embodiment, the preset depth value can be 10. That is, the second establishing module 15 filters out the pixels in the first block 200 whose depth information is less than 10, so that the pixel with depth value 8 in Fig. 4 is filtered out and the pixels with depth values 50, 90, and 240 remain.
The second establishing module 15 then extracts from the histogram the two depth values that contain the largest numbers of pixels and selects the lower of the two as the depth information of the hand 20. For example, in Fig. 4 the depth values 50 and 90 have the most pixels (10 pixels at depth value 50 and 12 pixels at depth value 90), and depth value 50 is the lower of the two (50 < 90); therefore, the second establishing module 15 selects depth value 50 as the depth information of the position of the hand 20.
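Continuing the sketch, the selection rule (discard depth values below the preset depth value as noise, take the two most frequent depth values, and keep the lower one) could be written as follows; this is illustrative only, and `hist` is the histogram built above:

```python
import numpy as np

def hand_depth_from_hist(hist: np.ndarray, preset_depth: int = 10) -> int:
    """Pick the hand's depth value from a per-depth pixel-count histogram."""
    counts = hist.copy()
    counts[:preset_depth] = 0            # discard depths below the preset value (noise)
    top_two = np.argsort(counts)[-2:]    # the two depth values with the most pixels
    return int(top_two.min())            # the lower of the two is taken as the hand's depth

# With the Fig. 4 example histogram this returns 50.
```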
In one embodiment, the second establishing module 15 is also used to filter out other objects (such as the head and body) after obtaining the depth information of the hand 20, so that only the hand 20 is retained. Specifically, the second establishing module 15 can establish a depth information interval according to the depth information of the hand 20 and filter out the pixels in the first block 200 that are not within the depth information interval, thereby establishing a block plane containing the hand 20. For example, the second establishing module 15 establishes a depth information interval (48-52) with depth value 50 as the median and filters out the pixels whose depth values are less than 48 or greater than 52, so that a block plane can be made that covers the hand 20 while keeping its area as small as possible. The second establishing module 15 then takes the block plane as a face and the depth information of the hand 20 as the depth (the depth information of the hand 20 is denoted by "H" in Fig. 5) to establish a second block 300 with a three-dimensional space region; the second block 300 established by the second establishing module 15 is shown in Fig. 5.
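A sketch of the depth-interval filtering and the resulting three-dimensional second block, with assumed names and the plus/minus 2 interval of the example (48-52 around depth value 50); how the third dimension of the block is parameterized here is an assumption on our part:

```python
import numpy as np

def second_block(block_depth: np.ndarray, hand_depth: int, half_width: int = 2):
    """Build the block plane and the 3D extent of the second block.

    block_depth: depth values of the pixels in the first block.
    hand_depth:  depth value selected for the hand (e.g. 50).
    half_width:  half of the depth interval (48-52 when hand_depth is 50).
    """
    lo, hi = hand_depth - half_width, hand_depth + half_width
    plane_mask = (block_depth >= lo) & (block_depth <= hi)   # block plane containing the hand

    ys, xs = np.nonzero(plane_mask)
    extent = {
        "x": (int(xs.min()), int(xs.max())),
        "y": (int(ys.min()), int(ys.max())),
        "depth": hand_depth,   # the plane is extruded by the hand depth "H" into a 3D region
    }
    return plane_mask, extent
```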
The recognition module 16 is used to obtain the motion track of the hand 20 in the second block 300 and recognize the gesture of the hand 20 according to the detected motion track. In one embodiment, the tracks corresponding to different gestures can be stored in the memory 3 in advance, and the recognition module 16 can learn and refine the different tracks corresponding to different gestures through a deep learning algorithm, thereby improving the accuracy of gesture recognition.
Fig. 6 is a flowchart of the gesture recognition method in an embodiment of the present invention. The method can be used in the gesture recognition system 1 shown in Fig. 2.
In step S600, the image acquiring module 11 obtains an image that contains the hand 20 and carries depth information.
In step S602, the first filtering module 12 filters out static objects contained in the obtained image.
In step S604, the first establishing module 13 obtains the coordinates of the hand 20 in the filtered image and establishes the first block 200 containing the hand 20 according to the obtained coordinates.
In step S606, the statistical module 14 obtains the depth information of each pixel in the first block 200 and counts the number of pixels for each depth value.
In step S608, the second establishing module 15 obtains the depth information of the hand 20 according to the statistical result and establishes the second block 300 using the depth information of the hand 20.
In step S610, the recognition module 16 obtains the motion track of the hand 20 in the second block 300 and recognizes the gesture of the hand 20 according to the motion track, so as to control the electronic device 100.
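Putting steps S600 through S608 together, a high-level per-frame driver might look like the following sketch; `detect_hand` is an assumed callback, and `hand_depth_from_hist` and `second_block` are the helper functions sketched above, none of which appear in the patent itself:

```python
import numpy as np

def process_frame(frame, depth_map, subtractor, detect_hand):
    """Steps S600-S608 for one frame; detect_hand is an assumed callback
    returning the (x, y, w, h) first block around the hand."""
    fg_mask = subtractor.apply(frame)                        # S602: drop static objects
    x, y, w, h = detect_hand(frame, fg_mask)                 # S604: first block
    block_depth = depth_map[y:y + h, x:x + w]
    hist = np.bincount(block_depth.ravel(), minlength=256)   # S606: per-depth pixel counts
    hand_depth = hand_depth_from_hist(hist)                  # S608: hand depth
    return second_block(block_depth, hand_depth)             # S608: second block

# Step S610 would then track the hand's position across frames inside the returned
# second block and match the accumulated track against the stored gesture tracks.
```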
In one embodiment, the image acquiring module 11 can obtain the RGB image containing the hand 20, and the depth information of each object in the RGB image, by starting the depth camera 4.
In one embodiment, the first filtering module 12 can filter out the static objects in the image through the GMM, so as to retain the dynamic objects in the image.
In one embodiment, the first establishing module 13 can learn and establish the characteristic value of the hand 20 through the deep learning algorithm, use the characteristic value of the hand 20 to find the coordinates of the hand 20 in the GMM-filtered image, and establish the first block 200 containing the hand 20 according to the coordinates of the hand 20. The proportion of the area of the hand 20 in the first block 200 is preferably greater than a preset ratio, so as to avoid a low recognition speed caused by a proportion that is too small. The preset ratio can be adjusted according to the actual recognition accuracy requirement.
In one embodiment, the statistical module 14 can query the depth information of each pixel through the XY coordinate of each pixel and count the number of pixels at each depth value using a histogram.
In one embodiment, the second establishing module 15 can first filter out the pixels in the first block 200 whose depth information is less than the preset depth value. The preset depth value can be set and adjusted according to the actual recognition accuracy requirement.
In one embodiment, the second establishing module 15 extracts from the histogram the two depth values containing the largest numbers of pixels and selects the lower of the two as the depth information of the hand 20.
In one embodiment, the second establishing module 15 can establish a depth information interval according to the depth information of the hand 20 and filter out the objects in the first block 200 that are not within the depth information interval, so as to establish the block plane containing the hand 20, and then take the block plane as a face and the depth information of the hand 20 as the depth to establish the second block 300 with a three-dimensional space region.
In one embodiment, the tracks corresponding to different gestures can be stored in the memory 3 in advance, and the recognition module 16 can learn and refine the different tracks corresponding to different gestures through the deep learning algorithm, thereby improving the accuracy of gesture recognition.
The above electronic device and gesture recognition method can filter out non-hand objects when obtaining the depth information of the hand, so that the error of the obtained hand depth information is small and a gesture detection area is accurately established, which improves the accuracy of operating the electronic device by gestures and enhances the user experience.
It will be apparent to those skilled in the art that other corresponding changes or adjustments can be made according to the technical solution and inventive concept of the present invention in light of actual production needs, and all such changes and adjustments shall fall within the scope of the present disclosure.

Claims (14)

1. A gesture recognition method applied to an electronic device, wherein the gesture recognition method comprises the following steps:
obtaining an image that contains a hand and carries depth information;
filtering out static objects contained in the image;
obtaining the coordinates of the hand in the image, and establishing a first block containing the hand according to the coordinates;
obtaining the depth information of each pixel in the first block and counting the number of pixels for each depth value;
obtaining the depth information of the hand according to the statistical result, and establishing a second block using the depth information of the hand; and
obtaining the motion track of the hand in the second block and recognizing the gesture of the hand according to the motion track.
2. The gesture recognition method according to claim 1, wherein the step of obtaining an image that contains a hand and carries depth information comprises:
obtaining an RGB image containing the hand and the depth information of each object in the RGB image.
3. The gesture recognition method according to claim 2, wherein the step of filtering out the static objects contained in the image comprises:
filtering out the static objects contained in the RGB image using a Gaussian mixture model.
4. The gesture recognition method according to claim 3, wherein the step of obtaining the coordinates of the hand in the image and establishing the first block containing the hand according to the coordinates comprises:
establishing a characteristic value of the hand using a deep learning algorithm;
obtaining the coordinates of the hand in the image according to the characteristic value of the hand; and
establishing the first block containing the hand according to the coordinates;
wherein the proportion of the area of the hand in the first block is greater than a preset ratio.
5. The gesture recognition method according to claim 1, wherein the step of obtaining the depth information of each pixel in the first block and counting the number of pixels for each depth value comprises:
extracting the depth information of each pixel according to the coordinate of each pixel in the first block; and
counting the number of pixels for each depth value using a histogram.
6. The gesture recognition method according to claim 5, wherein the step of obtaining the depth information of the hand according to the statistical result and establishing the second block using the depth information of the hand comprises:
filtering out the pixels in the histogram whose depth information is less than a preset depth value;
extracting from the histogram the two depth values containing the largest numbers of pixels and selecting the lower of the two as the depth information of the hand; and
establishing the second block using the depth information of the hand.
7. The gesture recognition method according to claim 6, wherein the step of establishing the second block using the depth information of the hand comprises:
establishing a depth information interval according to the depth information of the hand;
filtering out the objects in the first block that are not within the depth information interval, so as to establish a block plane containing the hand; and
establishing the second block according to the block plane and the depth information of the hand.
8. An electronic device, comprising:
a memory;
at least one processor; and
one or more modules stored in the memory and executed by the at least one processor, wherein the one or more modules comprise:
an image acquiring module, for obtaining an image that contains a hand and carries depth information;
a first filtering module, for filtering out static objects contained in the image;
a first establishing module, for obtaining the coordinates of the hand in the image and establishing a first block containing the hand according to the coordinates;
a statistical module, for obtaining the depth information of each pixel in the first block and counting the number of pixels for each depth value;
a second establishing module, for obtaining the depth information of the hand according to the statistical result of the statistical module and establishing a second block using the depth information of the hand; and
a recognition module, for obtaining the motion track of the hand in the second block and recognizing the gesture of the hand according to the motion track.
9. The electronic device according to claim 8, wherein the image acquiring module is used to obtain an RGB image containing the hand and the depth information of each object in the RGB image.
10. The electronic device according to claim 9, wherein the first filtering module is used to filter out the static objects contained in the RGB image through a Gaussian mixture model.
11. The electronic device according to claim 10, wherein the first establishing module is used to establish a characteristic value of the hand through a deep learning algorithm, obtain the coordinates of the hand in the image according to the characteristic value of the hand, and establish the first block containing the hand according to the coordinates;
wherein the proportion of the area of the hand in the first block is greater than a preset ratio.
12. The electronic device according to claim 8, wherein the statistical module is used to extract the depth information of each pixel according to the coordinate of each pixel in the first block and count the number of pixels for each depth value using a histogram.
13. The electronic device according to claim 12, wherein the second establishing module is used to filter out the pixels in the histogram whose depth information is less than a preset depth value, then extract from the histogram the two depth values containing the largest numbers of pixels and select the lower of the two as the depth information of the hand, and establish the second block using the depth information of the hand.
14. The electronic device according to claim 13, wherein the second establishing module is further used to establish a depth information interval according to the depth information of the hand and filter out the objects in the first block that are not within the depth information interval, so as to establish a block plane containing the hand, and is further used to establish the second block according to the block plane and the depth information of the hand.
CN201710475523.5A 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof Active CN109101860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710475523.5A CN109101860B (en) 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710475523.5A CN109101860B (en) 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof

Publications (2)

Publication Number Publication Date
CN109101860A 2018-12-28
CN109101860B CN109101860B (en) 2022-05-13

Family

ID=64796257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710475523.5A Active CN109101860B (en) 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof

Country Status (1)

Country Link
CN (1) CN109101860B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024151A (en) * 2010-12-02 2011-04-20 中国科学院计算技术研究所 Training method of gesture motion recognition model and gesture motion recognition method
CN102789568A (en) * 2012-07-13 2012-11-21 浙江捷尚视觉科技有限公司 Gesture identification method based on depth information
CN103226708A (en) * 2013-04-07 2013-07-31 华南理工大学 Multi-model fusion video hand division method based on Kinect
CN104463191A (en) * 2014-10-30 2015-03-25 华南理工大学 Robot visual processing method based on attention mechanism
CN104765440A (en) * 2014-01-02 2015-07-08 株式会社理光 Hand detecting method and device
CN104834922A (en) * 2015-05-27 2015-08-12 电子科技大学 Hybrid neural network-based gesture recognition method
CN104992171A (en) * 2015-08-04 2015-10-21 易视腾科技有限公司 Method and system for gesture recognition and man-machine interaction based on 2D video sequence
CN105389539A (en) * 2015-10-15 2016-03-09 电子科技大学 Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data

Also Published As

Publication number Publication date
CN109101860B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
TWI625678B (en) Electronic device and gesture recognition method applied therein
CN105426827B (en) Living body verification method, device and system
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
CN103530599B (en) The detection method and system of a kind of real human face and picture face
CN108519812B (en) Three-dimensional micro Doppler gesture recognition method based on convolutional neural network
CN109343700B (en) Eye movement control calibration data acquisition method and device
CN109829437A (en) Image processing method, text recognition method, device and electronic system
CN104484871B (en) edge extracting method and device
CN104268864B (en) Card edge extracting method and device
KR20170061629A (en) Method and device for identifying region
CN103679147A (en) Method and device for identifying model of mobile phone
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
CN104318263A (en) Real-time high-precision people stream counting method
CN109598234A (en) Critical point detection method and apparatus
EP2996067A1 (en) Method and device for generating motion signature on the basis of motion signature information
CN106295511A (en) Face tracking method and device
CN110456904B (en) Augmented reality glasses eye movement interaction method and system without calibration
CN105095860B (en) character segmentation method and device
CN103279188A (en) Method for operating and controlling PPT in non-contact mode based on Kinect
CN103985137A (en) Moving object tracking method and system applied to human-computer interaction
CN106326853A (en) Human face tracking method and device
CN102567716A (en) Face synthetic system and implementation method
US20190325208A1 (en) Human body tracing method, apparatus and device, and storage medium
CN108833774A (en) Camera control method, device and UAV system
CN105205482A (en) Quick facial feature recognition and posture estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant