CN108351708A - Three-dimensional gesture unlocking method, method for obtaining a gesture image, and terminal device - Google Patents

Three-dimensional gesture unlocking method, method for obtaining a gesture image, and terminal device

Info

Publication number
CN108351708A
Authority
CN
China
Prior art keywords: finger, pixel, gesture, current, current gesture
Prior art date
Legal status: Granted
Application number
CN201780004005.3A
Other languages: Chinese (zh)
Other versions: CN108351708B (en)
Inventor
邵明明
王林
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN108351708A
Application granted
Publication of CN108351708B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

This application provides a three-dimensional gesture unlocking method, a method for obtaining a gesture image, and a terminal device. The three-dimensional gesture unlocking method includes: obtaining a current gesture image of a user; obtaining, from the current gesture image, contour data of the current gesture in the current gesture image; determining, from the contour data of the current gesture, fingertip pixels of the current gesture and/or finger-root pixels of the current gesture; determining characteristic data of the current gesture from the fingertip pixels and/or finger-root pixels of the current gesture; and unlocking the terminal device when the characteristic data of the current gesture matches the characteristic data of a preset gesture. According to the three-dimensional gesture unlocking method of the embodiments of this application, the user is provided with a new unlocking approach that is engaging, accurate, and fast.

Description

Three-dimensional gesture unlocking method, method for obtaining a gesture image, and terminal device
This application claims priority to Chinese Patent Application No. 201610898260.4, filed with the Patent Office of the People's Republic of China on October 14, 2016 and entitled "Method and apparatus for contactless three-dimensional gesture unlocking of an intelligent interaction device", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of image recognition, and more specifically to a three-dimensional gesture unlocking method, a method for obtaining a gesture image, and a terminal device.
Background technique
A terminal device that is not being interacted with usually stays in a locked, non-interactive state. When a user needs to operate the terminal device, the device must first be unlocked through some identity verification method before it enters an interactive state. The terminal device is unlocked only after confirming that the user has the proper permission, which protects the information on the device.
Commonly used unlocking methods include numeric password unlocking, pattern password unlocking, fingerprint unlocking, iris unlocking, and face unlocking. A numeric password is fixed: any correct input unlocks the device regardless of who types it, so anyone who watches the owner enter it can then operate and unlock the terminal device. A pattern password is likewise fixed and, because its input is more visual, it is even easier for others to observe and remember. Fingerprint unlocking depends heavily on the fingerprint itself; fingers are among the body parts in most frequent contact with the outside world, and incidental abrasion, bumps, or moisture can change a fingerprint enough that the terminal device no longer recognizes it at unlock time. Moreover, fingerprint recognition is a contact measurement, which is unhygienic when many people touch the sensor and is badly affected by smudges. Iris unlocking requires capturing an iris image with a camera and therefore depends strongly on ambient brightness; a clear iris image is hard to obtain in weak ambient light, which affects recognition accuracy. The user's eyes must also be held at the optimal imaging position during capture, and users with small eyes may need to open them wide, which is somewhat cumbersome. Because the iris region itself is small, a high-resolution camera is required, which raises hardware cost while also increasing the computational load. Face unlocking likewise requires capturing a face image with a camera, depends strongly on ambient brightness, and is difficult to apply in weak ambient light. It is also easily affected by cosmetics, hair, glasses, and the like, which lowers the recognition rate.
Now that interactive terminal devices are increasingly intelligent, people expect operating them to be ever more engaging, accurate, and fast. Given the defects of current terminal-device unlocking methods, a new unlocking approach that is engaging, accurate, and fast is therefore needed.
Summary of the invention
This application provides a three-dimensional gesture unlocking method, a method for obtaining a gesture image, and a terminal device, which can improve user experience.
According to a first aspect, a three-dimensional gesture unlocking method is provided. The method may be applied to a terminal device and includes:
obtaining a current gesture image of a user;
obtaining, from the current gesture image, contour data of the current gesture in the current gesture image;
determining, from the contour data of the current gesture, fingertip pixels of the current gesture and/or finger-root pixels of the current gesture;
determining characteristic data of the current gesture from the fingertip pixels and/or finger-root pixels of the current gesture;
unlocking the terminal device when the characteristic data of the current gesture matches the characteristic data of a preset gesture.
According to the three-dimensional gesture unlocking method of the embodiments of this application, the three-dimensional gesture image that the user presents in the space in front of the camera is captured in real time, the user's gesture is extracted from the gesture image, and the terminal device is unlocked by matching it against the unlock gesture the user set earlier. This provides the user with a new unlocking approach that is engaging, accurate, and fast.
In a possible implementation, determining the characteristic data of the current gesture from the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture includes:
determining, from the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture, at least one of the following items of information about the current gesture:
finger count, finger length, a fingertip-position characteristic, and a finger-width characteristic, where the fingertip-position characteristic indicates the relative positions of the fingers and the finger-width characteristic indicates finger widths.
In a possible implementation, obtaining the contour data of the current gesture in the current gesture image from the current gesture image includes:
determining, from the frame number of the current gesture image, whether contour extraction needs to be performed on the current gesture;
when it is determined that contour extraction needs to be performed on the current gesture, searching the contour pixels of the current gesture until all contour pixels of the current gesture have been searched or until the number of searched pixels exceeds a preset detection threshold, so as to obtain the contour data of the current gesture.
Setting a preset detection threshold helps cut off the search for further contour pixels once all finger boundary pixels have been found, improving efficiency.
In a possible implementation, the method further includes:
when it is determined that contour extraction does not need to be performed on the current gesture, determining predicted fingertip pixels and predicted finger-root pixels of the current gesture according to an exponential moving average (Exponential Moving Average, EMA) algorithm;
obtaining the contour data of the current gesture from the predicted fingertip pixels and the predicted finger-root pixels of the current gesture.
In a possible implementation, determining the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture from the contour data of the current gesture includes:
determining, from the contour data of the current gesture, v arc segments in the current gesture, where v is an integer greater than or equal to 1;
determining the center pixel of the s-th arc segment from the starting pixel and the ending pixel of the s-th arc segment in the current gesture, where s traverses the integers in [1, v];
determining the center vector p of the s-th arc segment from the vector m formed from the center pixel of the s-th arc segment to the k-th pixel before it and the vector n formed from the center pixel of the s-th arc segment to the k-th pixel after it, where p bisects the smaller of the two angles formed by m and n, and k is an integer greater than or equal to 1;
when the distance between the depth camera and the position of the pixels in the direction pointed to by the center vector p lies within a preset distance range, determining that the center pixel corresponding to the center vector p is a fingertip pixel; or
when the distance between the depth camera and the position of the pixels in the direction pointed to by the center vector p does not lie within the preset distance range, determining that the center pixel corresponding to the center vector p is a finger-root pixel.
In a possible implementation, determining the finger length in the current gesture includes:
determining a first distance between a fingertip pixel and the finger-root pixel to the left of the fingertip pixel, and a second distance between the fingertip pixel and the finger-root pixel to the right of the fingertip pixel;
when the absolute value of the difference between the first distance and the second distance is greater than a preset length threshold, determining as the finger length the projection, onto the finger direction at the fingertip pixel, of the smaller of the first distance and the second distance; or
when the absolute value of the difference between the first distance and the second distance is less than or equal to the preset length threshold, determining as the finger length the mean of the projection of the first distance and the projection of the second distance onto the finger direction at the fingertip pixel.
In a possible implementation, determining the finger length in the current gesture includes:
when there is no finger-root pixel to the right of the fingertip pixel, or no finger-root pixel to the left of the fingertip pixel,
determining a third distance between the fingertip pixel and the finger-root pixel to the left of the fingertip pixel, or
determining a fourth distance between the fingertip pixel and the finger-root pixel to the right of the fingertip pixel; and
determining as the finger length the projection of the third distance onto the finger direction at the fingertip pixel, or
determining as the finger length the projection of the fourth distance onto the finger direction at the fingertip pixel.
In a possible implementation, the fingertip-position characteristic D_j satisfies the following formulas:

D_j = Σ_{i=1, i≠j}^{n} d(p_i, p_j)

d(p_i, p_j) = √((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²) / k

where k is the length of the longest finger in the current gesture, k > 0; n is the number of fingers in the current gesture, n being an integer greater than or equal to 1; p_i is a fingertip pixel in the current gesture, x and y are the coordinates of the fingertip pixel in the current gesture image, x and y being real numbers; and z is the depth value of the fingertip pixel in the current gesture image, z ≥ 0.
In a possible implementation, determining the finger-width characteristic in the gesture image includes:
dividing the length of each finger in the current gesture into m+1 equal parts along the finger direction, where m is a positive integer;
for each finger in the current gesture, computing the width perpendicular to the finger direction at each division point, yielding m*n absolute width values, where n is the number of fingers in the current gesture, n being an integer greater than or equal to 1;
computing the ratio of every two of the m*n absolute width values, yielding a feature vector {d_i, i = 1, 2, ..., mn(mn−1)/2} of mn(mn−1)/2 relative widths;
determining the feature vector {d_i, i = 1, 2, ..., mn(mn−1)/2} as the finger-width characteristic.
In a possible implementation, the method further includes:
when the characteristic data of the current gesture does not match the characteristic data of the preset gesture, determining whether the unlocking process has timed out;
if the unlocking process has timed out, locking the screen of the terminal device after a preset period.
In a possible implementation, before the contour data of the current gesture in the current gesture image is obtained from the current gesture image, the method further includes:
capturing a candidate gesture image and presenting it to the user, in response to a user operation that starts setting an unlock gesture;
setting the candidate gesture in the candidate gesture image as the preset gesture, in response to a confirming operation by the user.
In a possible implementation, before the candidate gesture in the candidate gesture image is set as the preset gesture in response to the user's confirming operation, the method further includes:
determining whether the finger count of the candidate gesture is greater than or equal to 3;
if the finger count of the candidate gesture is greater than or equal to 3, setting the candidate gesture as the preset gesture in response to the user's operation;
where the method further includes:
obtaining the characteristic data of the candidate gesture.
According to a second aspect, a method for obtaining a gesture image is provided, including:
obtaining a current gesture image of a user;
obtaining, from the current gesture image, contour data of the current gesture in the current gesture image;
determining, from the contour data of the current gesture, fingertip pixels of the current gesture and/or finger-root pixels of the current gesture;
determining characteristic data of the current gesture from the fingertip pixels and/or finger-root pixels of the current gesture.
According to the method for obtaining a gesture image of this application, the fingertip pixels and/or finger-root pixels of the current gesture can be obtained from the contour data of the current gesture in the current gesture image, and the characteristic data of the current gesture can then be determined.
In a possible implementation, determining the characteristic data of the current gesture from the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture includes:
determining, from the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture, at least one of the following for the current gesture:
finger count, finger length, a fingertip-position characteristic, and a finger-width characteristic, where the fingertip-position characteristic indicates the relative positions of the fingers and the finger-width characteristic indicates finger widths.
According to a third aspect, a terminal device is provided, configured to perform the method in the first aspect or any possible implementation of the first aspect. Specifically, the terminal device may include units for performing the method in the first aspect or any possible implementation of the first aspect.
According to a fourth aspect, a terminal device is provided, configured to perform the method in the second aspect or any possible implementation of the second aspect. Specifically, the terminal device may include units for performing the method in the second aspect or any possible implementation of the second aspect.
According to a fifth aspect, a terminal device is provided, including a memory, a processor, and a display. The memory is configured to store a computer program, and the processor is configured to call and run the computer program from the memory; when the program runs, the processor performs the method in the first aspect or any possible implementation of the first aspect.
According to a sixth aspect, a terminal device is provided, including a memory, a processor, and a display. The memory is configured to store a computer program, and the processor is configured to call and run the computer program from the memory; when the program runs, the processor performs the method in the second aspect or any possible implementation of the second aspect.
According to a seventh aspect, a computer-readable medium is provided, configured to store a computer program, where the computer program includes instructions for performing the method in the first aspect or any possible implementation of the first aspect.
According to an eighth aspect, a computer-readable medium is provided, configured to store a computer program, where the computer program includes instructions for performing the method in the second aspect or any possible implementation of the second aspect.
Brief description of the drawings

Fig. 1 is a schematic diagram of a minimal hardware system of a terminal device implementing the three-dimensional gesture unlocking method of this application.
Fig. 2 is a schematic flowchart of a three-dimensional gesture unlocking method according to an embodiment of this application.
Fig. 3 is a schematic diagram of the search directions used when searching contour pixels according to an embodiment of this application.
Fig. 4 is a schematic diagram of the order in which contour pixels are detected according to an embodiment of this application.
Fig. 5 is a schematic diagram of the method for determining fingertip pixels and finger-root pixels according to an embodiment of this application.
Fig. 6 is a schematic flowchart of an embodiment of setting a preset gesture in the three-dimensional gesture unlocking method according to an embodiment of this application.
Fig. 7 is a schematic flowchart of a specific embodiment of the three-dimensional gesture unlocking method according to an embodiment of this application.
Fig. 8 is a schematic flowchart of a method for obtaining a gesture image according to an embodiment of this application.
Fig. 9 is a schematic block diagram of an example of a terminal device according to an embodiment of this application.
Fig. 10 is a schematic block diagram of another example of a terminal device according to an embodiment of this application.
Fig. 11 is a schematic block diagram of yet another example of a terminal device according to an embodiment of this application.
Detailed description of embodiments

Embodiments of this application are described below with reference to the accompanying drawings.
To address the defects of current device unlocking methods, this application adds a depth camera to the terminal device, captures in real time the three-dimensional gesture image that the user presents in the space in front of the depth camera, extracts the user's gesture from the gesture image, and unlocks the terminal device by matching it against the unlock gesture the user set earlier. This provides the user with a new unlocking approach that is engaging, accurate, and fast.
The terminal device in the embodiments of this application may be an access terminal, user equipment (user equipment, UE), a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device may be a cellular phone, a cordless phone, a session initiation protocol (session initiation protocol, SIP) phone, a wireless local loop (wireless local loop, WLL) station, a personal digital assistant (personal digital assistant, PDA), a handheld device with a wireless communication function, another computing device or processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, or the like.
Fig. 1 is a schematic diagram of a minimal hardware system 100 of a terminal device implementing the three-dimensional gesture unlocking method of this application. The system 100 shown in Fig. 1 includes: a light source emitter 110, a depth camera 120, a spectrum analysis module 130, a color camera 140, a central processing unit 150, a touch screen 160, a non-volatile memory 170, and a memory 180.
The color camera 140, the light source emitter 110, and the depth camera 120 form a spectrum input module, and the spectrum analysis module 130 forms an image generation module. The light source emitter 110, the color camera 140, and the depth camera 120 may be mounted side by side at the top of the device (for example, centered directly above it). The light source emitter 110 may be an infrared emitter, the depth camera 120 an infrared camera, and the spectrum analysis module 130 an infrared spectrum analysis module. In this case, the light source emitter 110 and the depth camera 120 cooperate to render the scene as an infrared-light coded image. The light source emitter 110 outputs an ordinary laser source that becomes near-infrared light after filtering through ground glass and an infrared filter. The light source emitter 110 may continuously output omnidirectional infrared light at a wavelength of 840 nanometers (nm).
The depth camera 120 is a complementary metal oxide semiconductor (Complementary Metal Oxide Semiconductor, CMOS) image sensor that receives the excitation light reflected from the outside world, such as infrared light, digitally encodes it, and transmits the resulting digital image to the spectrum analysis module 130. The spectrum analysis module 130 analyzes the speckle pattern, computes the distance between each corresponding image pixel and the depth camera 120, and assembles a depth data matrix for the driver to read.
The central processing unit 150 handles gesture analysis, unlock decisions, and peripheral control. The non-volatile memory 170 stores program files, system files, and gesture feature information. The memory 180 serves as the cache for system and program operation. The touch screen 160 interacts with the user. Specifically, the central processing unit 150 reads the depth data and extracts the user's gesture characteristics. When an unlock gesture is being set, the analyzed gesture is displayed on the touch screen 160 in real time; when the user decides to set the gesture shown on the touch screen 160 as the unlock gesture, the user presses a confirmation key and the recognized user gesture is processed. The central processing unit 150 then extracts the gesture feature values and saves them in the non-volatile memory 170. When the user needs a gesture verified to unlock the screen, gesture data is acquired in real time, gesture features are extracted and compared with the saved set gesture features; if they match, unlocking succeeds, otherwise unlocking fails.
The three-dimensional gesture unlocking method performed by the terminal device according to the embodiments of this application is described in detail below.
Fig. 2 is a schematic flowchart of the three-dimensional gesture unlocking method according to an embodiment of this application. The method shown in Fig. 2 is performed by a terminal device, for example the terminal device shown in Fig. 1.
S210: obtain a current gesture image of the user.
Specifically, obtaining the current gesture image can be understood as obtaining a current depth image. A depth image, also called a range image, is an image whose pixel values are the distances (depths) from the image collector (for example, the depth camera 120 in this application) to the points in the scene; it directly reflects the geometry of the visible surfaces in the scene.
For example, when the method is performed by the terminal device shown in Fig. 1, the depth camera 120 receives the reflected excitation light, such as infrared light, digitally encodes it, and transmits the resulting digital image to the spectrum analysis module 130. The spectrum analysis module 130 analyzes the speckle pattern and computes, for each corresponding pixel (x, y) in the current gesture image, its distance z from the depth camera 120, thereby obtaining the current gesture image.
S220: obtain contour data of the current gesture in the current gesture image from the current gesture image.
S230: determine fingertip pixels of the current gesture and/or finger-root pixels of the current gesture from the contour data of the current gesture.
S240: determine characteristic data of the current gesture from the fingertip pixels and/or finger-root pixels of the current gesture.
Specifically, steps S220 to S240 recognize the user's current gesture from the current gesture image.
Illustratively, the contour data described here may be the coordinates (x, y, z) of each pixel making up the gesture contour in the gesture image. Optionally, mode one or mode two may be used to obtain the contour data of the current gesture, depending on the current frame number.
Specifically, the embodiments of this application may adopt an accelerated recognition strategy: for the 1st frame of every L frames (for example, every 5 frames) of gesture images, mode one is used to obtain the contour data of the gesture in the gesture image, and for the remaining L−1 frames (for example, the remaining 4 frames), mode two is used.
That is, if the current gesture image is the 1st frame of every L frames, the contour data of the current gesture is obtained by mode one; if the current gesture image is any of the remaining L−1 frames of every L frames, the contour data of the current gesture is obtained by mode two.
The methods of obtaining the contour data of the current gesture by mode one and mode two are described in detail below.
Mode one: perform full contour extraction on the current gesture.
Contour extraction mainly finds, in the depth image, the coordinates of the pixels on the edge of the hand in the current gesture and records them in order in a sequential linked list; the coordinates of each pixel may be packed into a structure for use in later steps.
Specifically, the contour pixels of the current gesture are searched until all contour pixels of the current gesture have been searched, so as to obtain the contour data of the current gesture. Alternatively, the contour pixels of the current gesture are searched and the search stops when the number of searched pixels exceeds a preset detection threshold, so as to obtain the contour data of the current gesture.
As a non-limiting example, when searching for the contour pixels of the current gesture, noise reduction may first be applied to the current gesture image; then the first hand-contour pixel of the current gesture in the denoised current gesture image is found; and the remaining hand-contour pixels in the denoised current gesture image are then found from the first hand-contour pixel, until the search terminates.
(1) Image noise reduction
The current gesture image in this application may be a depth image, and noise arises during depth image acquisition from both internal and external sources. Internal noise may be caused by the fundamental photoelectric properties of the components, by mechanical vibration, or by component defects. External noise may be caused by electromagnetic crosstalk, by impurities in the air, or by impurities adhering to the hand. These noises appear in the image as salt-and-pepper noise and random noise.
For the salt-and-pepper noise most likely to appear in a depth image, the embodiments of this application can reduce the interference using median filtering. As shown in formula (1), med denotes the median operation: the pixels in a sliding filter window of size 2I+1 are sorted by value, and the median of the sorted sequence is assigned to the current pixel value. I is an integer greater than or equal to 1, and x_k denotes a pixel.
x_k = med{x_{k−I}, x_{k−I+1}, ..., x_k, ..., x_{k+I}}       (1)
It should be understood that the embodiments of this application do not limit the method used for image noise reduction. For example, mean filtering or wavelet filtering may also be applied to the depth image. For specific noise-reduction techniques, refer to the prior art; they are not detailed here.
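As a concrete illustration of formula (1), a minimal sliding-window median filter over one row of depth values might look as follows. This is a sketch, not the patent's implementation; the use of NumPy and edge-replication padding are assumptions.

```python
import numpy as np

def median_filter_1d(pixels, I=1):
    """Sliding-window median, window size 2I+1, per formula (1).

    Each x_k is replaced by the median of {x_{k-I}, ..., x_{k+I}};
    edges are padded by replication so the output length is unchanged.
    """
    padded = np.pad(pixels, I, mode="edge")
    return np.array([np.median(padded[k:k + 2 * I + 1])
                     for k in range(len(pixels))])

# Example: an isolated salt-and-pepper spike is removed.
row = np.array([300, 302, 9999, 301, 299])   # depth values in mm
print(median_filter_1d(row, I=1))            # spike replaced by a local median
```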
(2) Finding the first contour pixel of the current gesture
First, judge whether the acquired gesture image contains a gesture, that is, whether the current gesture image contains a gesture. In other words, judge whether a current gesture exists. If the current gesture image contains pixels whose distance from the depth camera lies within a preset distance range, and at least one neighboring pixel above, below, to the left of, or to the right of such a pixel has a distance from the depth camera that falls outside the preset distance range, the current gesture image is considered to contain a gesture.
It should be understood that the preset distance range may be set by the user during the unlock-gesture setup stage, or may be defined by the system. Its upper and lower limits may be defined or set according to the typical distance between users' gestures and the depth camera. For example, the range may be 20 cm to 40 cm. The choice of the 20 cm lower limit depends on the depth camera's wide-angle field of view and the size of a typical hand, so that the depth image of the whole hand can be captured by the camera. The choice of the 40 cm upper limit reflects how finely data can be acquired at longer distances: it widens the usable distance range for the user while still guaranteeing that fine hand details are acquired accurately.
When judging whether the current gesture image contains a gesture, one may first check whether the center of the current gesture image, i.e. the center pixel of the current frame, lies within the preset distance range from the depth camera. If the distance between the center of the depth image and the depth camera is not within the preset distance range, pixels are probed outward at intervals of several (for example, 10) pixels in eight directions in turn: left, right, up, down, upper-left, upper-right, lower-left, and lower-right. Once two adjacent pixels within the preset distance range are found, the probing ends and the search for the first hand-contour pixel begins. If no two adjacent pixels within the preset distance range are found, there is no gesture in the current gesture image.
After two adjacent pixels within the preset distance range are found in the step above, the search for the first contour pixel may proceed leftward from a valid pixel, i.e. either of the two pixels.
For example, searching leftward from the valid pixel using binary search, one judges from the depth value whether the current pixel is a boundary point. The criterion for a boundary point is that the current pixel's distance from the depth camera lies within the preset distance range while the distance of a pixel adjacent to the current pixel does not. Adjacent here means the pixel above, below, to the left of, or to the right of the current pixel.
During the binary search, if the current pixel's distance from the depth camera is within the preset distance range, the hand-contour pixel lies to the left of the pixel, and the binary search continues leftward; if the current pixel's distance from the depth camera is not within the preset distance range, the hand-contour pixel lies to the right of the pixel, and the binary search moves rightward. If a boundary pixel is found, its position is recorded and saved; that pixel is the first contour pixel, from which the remaining contour pixels of the current gesture are traced. If no boundary pixel is found, the current gesture is too large and this frame of gesture image, i.e. the current gesture image, does not meet the requirements, so it is not processed further.
It should be understood that binary search is only one specific way of finding a contour pixel; it improves search efficiency, but the embodiments of this application place no particular limit on how contour pixels are found.
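The leftward binary search can be sketched as follows, assuming the depth frame is a 2-D NumPy-style array of millimeter distances and a 20-40 cm preset range; the helper names and the monotonicity assumption along the row are illustrative, not taken from the patent.

```python
def in_range(depth, x, y, lo=200, hi=400):
    """True if pixel (x, y) lies within the preset distance range (mm here)."""
    h, w = depth.shape
    return 0 <= x < w and 0 <= y < h and lo <= depth[y, x] <= hi

def is_boundary(depth, x, y):
    """A hand pixel at least one of whose 4-neighbors leaves the range."""
    return in_range(depth, x, y) and any(
        not in_range(depth, x + dx, y + dy)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def first_contour_pixel(depth, x, y):
    """Binary-search leftward from a valid pixel (x, y) for the hand edge.

    Assumes the row is background on the far left and hand at (x, y), so
    in_range acts as a monotone predicate over the search interval.
    """
    left, right = 0, x
    while left <= right:
        mid = (left + right) // 2
        if is_boundary(depth, mid, y):
            return mid, y
        if in_range(depth, mid, y):
            right = mid - 1   # still on the hand: the edge is further left
        else:
            left = mid + 1    # off the hand: the edge is to the right
    return None               # gesture too large; frame is unusable
```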
(3) Tracing the remaining contour pixels of the current gesture
After the first contour pixel is found, the remaining contour pixels can be traced from it. In this process, a search direction is determined first.
This application may define four search directions, for example as shown in Fig. 3, to accelerate tracing.
In Fig. 3, the grey box denotes the currently valid pixel (a contour pixel). Part (a) of Fig. 3 searches toward the upper-left, part (b) toward the upper-right, part (c) downward and to the left, and part (d) toward the right. The numbers in the figure give the order in which neighboring pixels are examined. Because finger boundaries run mostly in the vertical direction, the four search patterns emphasize vertical search. For each pixel, the search begins in the direction by which the previous pixel was found; if that direction fails, the neighboring directions are tried in turn, and if none of them succeeds, all eight directions are attempted.
If the pixel found has already been recorded, the direction is changed to the one used when the previous pixel was found, and the search proceeds counterclockwise. Whether a pixel has been saved is judged by maintaining a hash table: whenever a valid point is found, its position is added to the hash table as a key, so it can later be looked up by position to decide whether the pixel has already been found.
If the pixel found has still already been recorded, it is judged whether the current pixel lies on a vertical or horizontal single-pixel line of the edge. If it does not lie on such a pixel line, all contour pixels have been searched. If the pixels directly above and directly below the pixel are not valid points, the pixel lies on a horizontal single-pixel line; if the pixels directly to its left and right are not valid points, it lies on a vertical single-pixel line. The single-pixel-line case is handled next.
As shown in part (a) of Fig. 4, the 5th pixel may be detected after the 6th pixel is detected; the 5th has already been examined, and the pixels above and below it are not valid points, so the 6th pixel lies on a horizontal single-pixel line, and detection continues to the right of position 5.
As shown in part (b) of Fig. 4, the 4th pixel may be detected after the 5th pixel is detected; the 4th has already been examined, and the pixels to its left and right are not valid points, so the 5th pixel lies on a vertical single-pixel line, and detection continues above position 4.
When the search termination condition is met, the search operation ends.
The termination condition is that all contour pixels of the current gesture in the current gesture image have been searched, or that the number of searched pixels exceeds a preset detection threshold. That is, the search ends if all contour pixels of the current gesture in the current gesture image have been searched, or once the number of searched pixels exceeds the preset detection threshold, for example once more than 750 pixels have been searched.
Capping the number of valid-point detections, i.e. setting a preset detection threshold, helps cut off the search for further contour pixels once all finger boundary pixels have been found, improving efficiency.
The gesture image in the embodiments of this application may be a depth map of 640x480 resolution, for which the preset detection threshold may be set to 750.
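A simplified tracing loop might look as follows, reusing is_boundary from the earlier sketch. The patent biases its four direction templates toward vertical search (Fig. 3) and handles the single-pixel-line cases of Fig. 4; here a generic clockwise 8-neighbor scan stands in for that ordering, so this is a sketch of the idea rather than the patent's exact procedure.

```python
NEIGHBORS = [(0, -1), (1, -1), (1, 0), (1, 1),
             (0, 1), (-1, 1), (-1, 0), (-1, -1)]

def trace_contour(depth, start, max_points=750):
    """Trace boundary pixels from `start` until the contour is exhausted
    or the preset detection threshold (e.g. 750 for 640x480) is hit."""
    contour = [start]
    visited = {start}              # the hash table described in the text
    current = start
    while len(contour) < max_points:
        for dx, dy in NEIGHBORS:
            nxt = (current[0] + dx, current[1] + dy)
            if nxt not in visited and is_boundary(depth, *nxt):
                contour.append(nxt)
                visited.add(nxt)
                current = nxt
                break
        else:                      # no unvisited boundary neighbor left
            break
    return contour                 # the sequential linked list of the text
```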
Mode two: do not perform full contour extraction on the current gesture.
As a non-limiting example, predicted fingertip pixels and predicted finger-root pixels of the current gesture in the current gesture image may be determined by the exponential moving average (EMA) algorithm; the contour data of the current gesture is then obtained from the predicted fingertip pixels and predicted finger-root pixels.
For example, the EMA algorithm can identify fingertip and finger-root positions in the current gesture without full contour extraction: for each fingertip point and finger-root point in the current frame, the position is predicted using formula (2). In formula (2), s_{t+1} is the position predicted for time t+1, s_t is the position predicted at time t, o_t is the actual position at time t, and ω is a weighting factor (0 < ω < 1).
s_{t+1} = ω·o_t + (1 − ω)·s_t       (2)
This method needs a series of predicted values. The embodiments of this application may maintain a sequential linked list of the fingertip points, finger-root points, and finger counts of the most recent frames (for example, 10 frames); when the predicted position of a point in the next frame needs to be computed, the position information of the corresponding point in all of these frames (for example, 10 frames) is fed to the EMA algorithm.
For example, if the sequential linked list holds the fingertip points, finger-root points, and finger counts of the last 10 frames, then 10 is used to bound the coefficient ω, which takes the value 2/(10+1). The positions of the fingertip points and finger-root points in the 1st of these 10 frames are the actual positions.
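A minimal sketch of this prediction, under the assumptions that the history holds the last 10 observed positions of one point and that the first entry seeds the prediction, is:

```python
def ema_predict(history, omega=2 / (10 + 1)):
    """Exponential moving average over recent frames, per formula (2).

    `history` is a list of observed (x, y) positions o_1..o_N for one
    fingertip or finger-root point; the first entry seeds s_1.
    """
    sx, sy = history[0]                      # s_1: actual position in frame 1
    for ox, oy in history[1:]:
        sx = omega * ox + (1 - omega) * sx   # s_{t+1} = w*o_t + (1-w)*s_t
        sy = omega * oy + (1 - omega) * sy
    return sx, sy

# Example: ten frames of a fingertip drifting right.
frames = [(100 + 2 * t, 50) for t in range(10)]
print(ema_predict(frames))   # predicted position for the next frame
```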
After the predicted fingertip pixels and predicted finger-root pixels are computed, the contour pixels around them can be obtained, giving the contour data of the gesture image. For example, taking an n-dimensional matrix as the range, the pixels of the gesture contour within the matrix are detected, then a certain number of further gesture-contour pixels are traced forward and backward, and the traced points are added to the contour sequential linked list. In this way the contour sequential linked list around the fingertip and finger-root pixels is obtained without a full contour-extraction step.
After the contour data of the current gesture is obtained by mode one or mode two, the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture can be further determined, and finally the characteristic data of the current gesture is obtained.
Optionally, as an embodiment of this application, the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture may be determined by the following steps:
determining, from the contour data of the current gesture, v arc segments in the current gesture, where v is an integer greater than or equal to 1;
determining the center pixel of the s-th arc segment from the starting pixel and the ending pixel of the s-th arc segment in the current gesture, where s traverses the integers in [1, v];
determining the center vector p of the s-th arc segment from the vector m formed from the center pixel of the s-th arc segment to the k-th pixel before it and the vector n formed from the center pixel of the s-th arc segment to the k-th pixel after it, where p bisects the smaller of the two angles formed by m and n, and k is an integer greater than or equal to 1;
when the distance between the depth camera and the position of the pixels in the direction pointed to by the center vector p lies within the preset distance range, determining that the center pixel corresponding to the center vector p is a fingertip pixel; or
when the distance between the depth camera and the position of the pixels in the direction pointed to by the center vector p does not lie within the preset distance range, determining that the center pixel corresponding to the center vector p is a finger-root pixel.
It should be understood that v arc segments means that the number of arcs in the current gesture is v.
Specifically, the arcs in the current gesture may first be obtained by the K-curvature algorithm. The K-curvature algorithm defines a constant k describing a point spacing, and an angle a describing the angle formed at a curve point with the pixels on both sides. For example, k may be defined as 25 and a as 50 degrees; the embodiments of this application place no particular limit on these values. For each pixel in the contour sequential linked list, two intersecting vectors m and n can be constructed: both start at the current pixel, the endpoint of m is the k-th pixel before the current pixel, and the endpoint of n is the k-th pixel after the current pixel. The angle between the two vectors is then computed; if it is less than a, the current pixel is marked as an arc point. In this way all arc points in the current gesture are obtained, and adjacent arc points make up one arc segment.
In addition, in mode one, i.e. during full contour extraction, if the first hand-contour pixel lies on the forearm edge and is relatively far from the first finger, the cap on valid-point detections may cause the search to terminate at the cap before all fingers in the current gesture have been detected. To avoid this, in the embodiments of this application, after the first arc point is extracted, its offset from the first hand-contour pixel may be computed; if the offset is judged too large, i.e. greater than some preset value, the valid-point detection cap, i.e. the preset detection threshold, is revised upward, so that contour-pixel tracing can continue and all detectable contour pixels are extracted.
Then fingertip and finger-root recognition is performed to determine the fingertip pixels and finger-root pixels.
The specific method is to traverse the contour pixel linked list and find each arc's starting and ending pixels. Because fingertip arcs and finger-root arcs are centrally symmetric, the arc center point is taken as the fingertip pixel or the finger-root pixel.
For example, as shown in Fig. 5, take the two intersecting vectors m and n at the arc center point and the center vector p of m and n. If the distance between the depth camera and the pixels in the direction of vector p exceeds the preset distance range, the center point is the center of the finger base, i.e. a finger-root pixel; otherwise it is a fingertip pixel. The finger direction is described by vector p.
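The K-curvature test and the Fig. 5 tip/root rule can be sketched together as follows. This is a simplification: the patent groups adjacent arc points into arcs and classifies only arc centers, whereas this sketch classifies each sharp-curvature point directly; k=25 and 50 degrees follow the example values above, the probe step and bounds handling are assumptions, and `depth` is a 2-D array of millimeter distances as before.

```python
import math

def classify_arc_point(contour, depth, i, k=25, max_angle=50.0,
                       lo=200, hi=400, probe=5):
    """Returns "tip", "root", or None for contour index i."""
    n = len(contour)
    c = contour[i]
    a = contour[(i - k) % n]              # endpoint of vector m
    b = contour[(i + k) % n]              # endpoint of vector n
    m = (a[0] - c[0], a[1] - c[1])
    v = (b[0] - c[0], b[1] - c[1])
    norm = math.hypot(*m) * math.hypot(*v)
    if norm == 0:
        return None
    cos_angle = (m[0] * v[0] + m[1] * v[1]) / norm
    if cos_angle <= math.cos(math.radians(max_angle)):
        return None                       # angle >= a: not an arc point
    p = (m[0] + v[0], m[1] + v[1])        # center vector bisecting m and n
    norm_p = math.hypot(*p) or 1.0
    px = int(c[0] + probe * p[0] / norm_p)   # probe a few pixels along p
    py = int(c[1] + probe * p[1] / norm_p)   # (bounds checks omitted)
    if lo <= depth[py, px] <= hi:
        return "tip"                      # p points into the hand
    return "root"                         # p points off the hand
```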
Because gestures change quickly and in diverse ways, in order to detect fingertip pixels and/or finger-root pixels accurately in real time, the embodiments of this application may optionally apply Kalman filtering to predict each fingertip pixel and/or each finger-root pixel and thereby determine the precise position of the fingertip or finger root.
First, define the state vector x_t of a fingertip at time t, as in formula (3), where x and y are the coordinates of the fingertip point in the image and v_x and v_y are its velocities along the x and y axes:

x_t = (x, y, v_x, v_y)^T       (3)

The estimate x_{t+1} of the fingertip state vector at time t+1 is given by formula (4). The state transition matrix A is defined by formula (5), and the noise transformation matrix B is defined by formula (6). w_t is the noise in the estimation process; it follows a Gaussian distribution N(0, Q_t) with mean 0, where Q is defined by formula (7) and the two 8s at the end of its diagonal express the variability of the fingertip coordinates' rate of change.

x_{t+1} = A·x_t + B·w_t       (4)

A = [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]       (5)

The fingertip recognition result computed in the previous step at time t is defined by formula (8), with the measurement matrix C defined by formula (9). v_t is the error between the computed result and the true position; it also follows a Gaussian distribution with mean 0. The covariance matrices of w_t and v_t are Q_t and σ²·I_{2×2} respectively, where I_{2×2} is the 2 × 2 identity matrix.

y_t = C·x_t + v_t       (8)

C = [[1, 0, 0, 0], [0, 1, 0, 0]]       (9)

Next, define the Kalman filter used in this application. The filter gain equation K_t, defined by formula (10), expresses the weight given to the difference between the computed and estimated values. The one-step state prediction equation x̂_{t+1|t}, defined by formula (11), expresses the prediction after y_t is observed. The one-step prediction equation of the mean-square error P_{t+1|t}, defined by formula (12), expresses the estimation error of x̂_{t+1|t}, with Λ equal to B·Bᵀ.

K_t = P_{t|t−1}·Cᵀ·(C·P_{t|t−1}·Cᵀ + R_t)⁻¹       (10)

x̂_{t+1|t} = A·(x̂_{t|t−1} + K_t·(y_t − C·x̂_{t|t−1}))       (11)

P_{t+1|t} = A·(I − K_t·C)·P_{t|t−1}·Aᵀ + Λ       (12)

From the filtered estimate x̂ the precise fingertip position (x, y) is obtained.
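One predict/update cycle under this constant-velocity model can be written compactly. The matrices mirror the reconstructed definitions above; the noise magnitudes (the leading entries of Q and σ = 1 in R) are illustrative assumptions, and NumPy is assumed.

```python
import numpy as np

A = np.array([[1., 0., 1., 0.],   # constant-velocity transition, formula (5)
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
C = np.array([[1., 0., 0., 0.],   # only (x, y) is measured, formula (9)
              [0., 1., 0., 0.]])
Q = np.diag([0., 0., 8., 8.])     # process noise; the 8s follow the text,
                                  # the leading entries are assumptions
R = np.eye(2)                     # measurement noise sigma^2 * I_2x2, sigma=1

def kalman_step(x, P, y):
    """One predict/update cycle for a fingertip state x = (x, y, vx, vy)."""
    x_pred = A @ x                          # one-step state prediction
    P_pred = A @ P @ A.T + Q                # one-step error covariance
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)     # filter gain, formula (10)
    x_new = x_pred + K @ (y - C @ x_pred)   # correct with the measurement
    P_new = (np.eye(4) - K @ C) @ P_pred
    return x_new, P_new                     # x_new[:2]: refined fingertip (x, y)
```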
In this way, more accurate fingertip pixels and/or finger-root pixels of the current gesture can be obtained.
Afterwards, the characteristic data of the current gesture can be determined from the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture.
Illustratively, the characteristic data of the current gesture mainly includes at least one of the following items of information about the current gesture: finger count, finger length, a fingertip-position characteristic, and a finger-width characteristic. The fingertip-position characteristic indicates the relative positions of the fingers, and the finger-width characteristic indicates finger widths.
In other words, after the fingertip pixels and/or finger-root pixels of the current gesture are obtained, at least one of the following can be determined for the current gesture: finger count, finger length, fingertip-position characteristic, and finger-width characteristic.
In the embodiments of this application, once the fingertip pixels of the current gesture have been obtained by the method above, the finger count of the current gesture can be determined. That is, however many fingertip pixels there are, that is how many fingers the current gesture has.
In the embodiments of this application, to improve detection accuracy, the allowed finger count may be set to 3 to 5. Thus, if the finger count determined for the current gesture is less than 3, the current gesture image is discarded and recognition continues with the next frame of gesture images. If the finger count in the current gesture is determined to be greater than or equal to 3, recognition proceeds to the finger lengths, fingertip-position characteristic, and finger-width characteristic of the current gesture.
How to determine the finger length, the fingertip-position characteristic, and the finger-width characteristic is described in detail below.
Optionally, when determining a finger length in the current gesture: if finger-root pixels exist on both sides of a fingertip pixel, first determine the first distance between the fingertip pixel and the finger-root pixel to its left and the second distance between the fingertip pixel and the finger-root pixel to its right. If the absolute value of the difference between the first and second distances is greater than a preset length threshold, the projection, onto the finger direction at that fingertip pixel, of the smaller of the two distances is determined as the finger length; if the absolute value of the difference is less than or equal to the preset length threshold, the mean of the projections of the first and second distances onto the finger direction at that fingertip pixel is determined as the finger length. If there is no finger-root pixel to the right of the fingertip pixel, the third distance between the fingertip pixel and the finger-root pixel to its left is determined, and its projection onto the finger direction at the fingertip pixel is determined as the finger length. If there is no finger-root pixel to the left of the fingertip pixel, the fourth distance between the fingertip pixel and the finger-root pixel to its right is determined, and its projection onto the finger direction at the fingertip pixel is determined as the finger length. A sketch of this rule follows.
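In the sketch below, `direction` is the unit finger-direction vector (for example, the normalized center vector p), points are (x, y) tuples, a missing root is passed as None, and `thresh` stands in for the preset length threshold; all of these representations are assumptions for illustration.

```python
import math

def finger_length(tip, left_root, right_root, direction, thresh=10.0):
    """Finger length from a fingertip and its neighboring finger roots."""
    def proj(root):
        # projected length of the tip-to-root vector on the finger direction
        v = (root[0] - tip[0], root[1] - tip[1])
        return abs(v[0] * direction[0] + v[1] * direction[1])

    if left_root is None:
        return proj(right_root)          # only a right root: use it alone
    if right_root is None:
        return proj(left_root)           # only a left root: use it alone
    d1 = math.dist(tip, left_root)       # first distance
    d2 = math.dist(tip, right_root)      # second distance
    if abs(d1 - d2) > thresh:
        shorter = left_root if d1 < d2 else right_root
        return proj(shorter)             # sides differ a lot: use the smaller
    return (proj(left_root) + proj(right_root)) / 2   # otherwise the mean
```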
It should be understood that the terms first distance, second distance, and so on merely distinguish distances between different objects.
In this way, the lengths of all recognized fingers in the current gesture can be determined.
It should be understood that the preset length threshold may be preset by the terminal device system.
Optionally, the fingertip-position characteristic D_j may be determined according to the following formulas:

D_j = Σ_{i=1, i≠j}^{n} d(p_i, p_j)       (13)

d(p_i, p_j) = √((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²) / k       (14)

where k is the length of the longest finger in the current gesture, k > 0; n is the number of fingers in the current gesture, an integer greater than or equal to 1; and p_i is a fingertip pixel in the current gesture, with x and y its coordinates in the current gesture image and z its depth value in the current gesture image, z ≥ 0.
Specifically, p_i is a fingertip pixel in the current gesture; its image coordinates x, y together with the corresponding depth value z form the coordinates of the point. d(p_i, p_j), defined by formula (14), describes the positional relationship of two pixels in the image; k is the length of the longest detected finger, and the closer two points p_i and p_j are, the smaller d(p_i, p_j) becomes. D_j, defined by formula (13), describes the fingertip-position feature of the current gesture, i.e. the sum of the pairwise positional relationships of the fingertips.
Every value on the right-hand side of formula (14) can be obtained as described earlier in this application, so the fingertip-position characteristic D_j can be obtained from formula (13).
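Reading formulas (13) and (14) as reconstructed above (their exact form is inferred from the surrounding definitions, not quoted from the original), D_j can be computed as follows; the sample coordinates are illustrative.

```python
import math

def fingertip_position_feature(tips, j, k):
    """D_j: sum over the other fingertips of their distance to fingertip j,
    normalized by the longest finger length k so that the feature does not
    depend on how far the hand is from the camera.

    tips: list of (x, y, z) fingertip coordinates; k > 0.
    """
    return sum(math.dist(tips[i], tips[j]) / k
               for i in range(len(tips)) if i != j)

# Example: three fingertips, longest finger 80 px.
tips = [(100, 40, 300), (130, 35, 302), (160, 45, 298)]
print(fingertip_position_feature(tips, j=0, k=80.0))
```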
Optionally, when determining the finger-width characteristic in the gesture image, the length of each finger in the current gesture may first be divided into m+1 equal parts along the finger direction, m being a positive integer. For each finger in the current gesture, the width perpendicular to the finger direction is computed at each division point, yielding m*n absolute width values, where n is the finger count of the current gesture, an integer greater than or equal to 1. The ratio of every two of the m*n absolute width values is then computed, yielding the feature vector {d_i, i = 1, 2, ..., mn(mn−1)/2} of mn(mn−1)/2 relative widths. The feature vector {d_i, i = 1, 2, ..., mn(mn−1)/2} is the finger-width characteristic. A sketch follows.
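Given the m*n absolute widths measured perpendicular to each finger (assumed positive), the relative-width feature vector is just every pairwise ratio:

```python
def finger_width_feature(widths):
    """Relative-width feature vector from m*n absolute widths.

    The ratio of every pair of widths gives mn(mn-1)/2 values that are
    invariant to how far the hand is held from the camera.
    """
    mn = len(widths)
    return [widths[i] / widths[j]
            for i in range(mn) for j in range(i + 1, mn)]

# Example: two fingers (n=2) measured at two points each (m=2) gives
# 4 widths, hence 4*3/2 = 6 relative values.
print(finger_width_feature([18.0, 16.5, 20.0, 17.0]))
```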
S250, when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture, the terminal device is unlocked.
It should be understood that the preset gesture is set by the user in a gesture setup phase, analogous to the fingerprint setup phase in fingerprint unlocking, in which the unlock fingerprint is enrolled. The preset gesture may be a single unlock gesture set in the gesture setup phase, or one of multiple unlock gestures set in the gesture setup phase.
Optionally, when the user sets the preset gesture, the terminal device may trigger the light source emitter to output the excitation light source based on the user's operation of starting to set an unlock gesture. A candidate gesture image is then obtained by the depth camera and presented to the user. Finally, based on the user's confirmation operation, the candidate gesture in the candidate gesture image is set as the preset gesture.
For example, the user clicks a button for starting to set an unlock gesture on the user interface (User Interface, UI). This action triggers the light source emitter to output the excitation light source, depth data is collected by the depth camera, and the depth image, i.e. the candidate gesture image, is presented to the user. If the user is satisfied with the candidate gesture image, the user presses a key confirming that the gesture is to be set, and the terminal device sets the candidate gesture in the candidate gesture image as the preset gesture.
Further, before the candidate gesture in the candidate gesture image is set as the preset gesture based on the user's confirmation operation, the method may also comprise: determining whether the finger number of the candidate gesture is greater than or equal to 3; and if so, setting the candidate gesture as the preset gesture based on the user's confirmation operation and obtaining the characteristic data of the candidate gesture.
That is, the finger number in the candidate gesture must be greater than or equal to 3. Otherwise, the current candidate gesture is discarded and the next frame of candidate gesture image is acquired.
After the preset gesture is set in the gesture setup phase, the terminal device saves the characteristic data of the preset gesture.
The characteristic data of the preset gesture may be, for example, the finger number, finger length, fingertip location characteristic, and finger width characteristic of the preset gesture.
In this case, as one embodiment of S250, the terminal device may be unlocked when the finger number, finger length, fingertip location characteristic, and finger width characteristic of the current gesture match the finger number, finger length, fingertip location characteristic, and finger width characteristic of the preset gesture, respectively.
Illustratively, matching of the finger number, finger length, fingertip location characteristic, and finger width characteristic of the current gesture with those of the preset gesture may mean that: the finger number in the current gesture equals the finger number in the preset gesture; the absolute value of the difference between each finger length in the current gesture and the corresponding finger length in the preset gesture is less than or equal to a first preset threshold; the difference between the fingertip location characteristic in the current gesture and that in the preset gesture is less than or equal to a second preset threshold; and the Euclidean distance Dis between the finger width characteristic in the current gesture and that in the preset gesture is less than or equal to a third preset threshold. When the current gesture meets all of the above conditions, the terminal device is unlocked.
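The matching rule just described can be sketched as follows; the dictionary keys and threshold names are illustrative only, not part of the original method.

```python
import math

def gesture_matches(cur, preset, len_thresh, tip_thresh, width_thresh):
    """Check the four matching conditions described above.

    cur / preset: dicts with 'count' (finger number), 'lengths'
    (per-finger lengths in corresponding order), 'tip_feature'
    (fingertip location characteristic), and 'widths'
    (relative-width feature vector)."""
    if cur['count'] != preset['count']:
        return False
    if any(abs(a - b) > len_thresh
           for a, b in zip(cur['lengths'], preset['lengths'])):
        return False
    if abs(cur['tip_feature'] - preset['tip_feature']) > tip_thresh:
        return False
    # Euclidean distance Dis between the two relative-width vectors.
    dis = math.sqrt(sum((a - b) ** 2
                        for a, b in zip(cur['widths'], preset['widths'])))
    return dis <= width_thresh
```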
The finger width characteristic {d_j, j = 1, 2, ..., mn(mn-1)/2} of the preset gesture can be obtained in the same manner as the finger width characteristic {d_i, i = 1, 2, ..., mn(mn-1)/2} of the current gesture. That is, the length of each finger in the preset gesture is first equally divided into m+1 parts along the finger direction. For each finger in the preset gesture, the width perpendicular to the finger direction is calculated at each division point, yielding m*n absolute width values, where n is the finger number in the preset gesture. The ratios of every two of the m*n absolute width values are then calculated, yielding a feature vector composed of mn(mn-1)/2 relative widths.
The Euclidean distance Dis between the finger width characteristic in the current gesture and that in the preset gesture satisfies formula (15):
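Formula (15) is likewise an image in the original publication. Given that Dis is stated to be the Euclidean distance between the two relative-width feature vectors, a plausible reconstruction is:

$$Dis = \sqrt{\sum_{i=1}^{mn(mn-1)/2} \left(d_i - d_i'\right)^2} \qquad (15)$$

where d_i are the relative widths of the current gesture and d_i' those of the preset gesture.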
As described in S250, when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture, the terminal device is unlocked. As one embodiment of the application, if the characteristic data of the current gesture does not successfully match the characteristic data of the preset gesture, it may first be determined whether the unlocking process has timed out. If the unlocking process has timed out, the graphical user interface (Graphical User Interface, GUI) may, for example, prompt that the unlocking process has timed out, and the screen is locked after a preset period of time. Furthermore, the light source emitter and the depth camera may be turned off when the unlocking process times out. If the unlocking process has not timed out, the terminal device continues to recognize the next frame of image.
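A minimal sketch of this retry-until-timeout control flow, with hypothetical helper names:

```python
import time

def unlock_loop(capture_frame, matches_preset, timeout_s, lock_delay_s):
    """Match frames until success or until the unlock window expires."""
    start = time.monotonic()
    while True:
        if matches_preset(capture_frame()):
            return True  # characteristics matched: unlock the device
        if time.monotonic() - start > timeout_s:
            # e.g. prompt the timeout on the GUI, turn off the infrared
            # devices, then lock the screen after the preset delay.
            time.sleep(lock_delay_s)
            return False
```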
Therefore, according to the three-dimensional gesture unlocking method of the embodiment of the present application, the three-dimensional gesture image presented by the user in the three-dimensional space in front of the camera is obtained in real time, the user's gesture is extracted from the gesture image, and the terminal device is unlocked by matching against the unlock gesture previously set by the user. This provides the user with a completely new unlocking manner that is engaging, accurate, and fast.
To illustrate more clearly how the preset gesture is set according to the embodiment of the present application, an embodiment of setting the preset gesture in the three-dimensional gesture unlocking method of the embodiment of the present application is described in detail below with reference to Fig. 6.
It should be noted that in Fig. 6 and in Fig. 7 introduced below, the light source emitter is illustrated as an infrared transmitter and the depth camera as an infrared camera.
To guarantee the quality of the unlock gesture and reduce the time needed for gesture setup, when the gesture is being preset, the three-dimensional gesture unlocking method for a terminal device of the embodiment of the present application may prompt the user to place the hand in the middle of the image and to include 3 to 5 fingers in the unlock gesture.
S601, the terminal device detects a first operation.
Specifically, the first operation may be the user clicking a button for setting an unlock gesture on the UI. If the user clicks the button for setting an unlock gesture on the UI, the terminal device can detect the first operation.
S602, the infrared transmitter and the infrared camera are triggered to work.
Specifically, if the terminal device detects the first operation, i.e. if the user clicks the button for setting an unlock gesture on the UI, the terminal device triggers the infrared transmitter and the infrared camera to work.
S603, the current gesture image is obtained in real time.
Specifically, the infrared spectrum analysis chip obtains the depth data of the image acquired by the infrared camera in real time.
S604, the terminal device judges, according to the current frame number, whether global tracking is needed, i.e. whether mode one described above is to be executed to perform complete contour extraction on the current gesture. If so, S605 is executed; otherwise S607 is executed.
S605, the terminal device performs contour extraction.
Specifically, the terminal device can obtain the outline data of the frame image in mode one described above.
S606, the terminal device judges whether the contour extraction succeeded, i.e. judges whether the frame image contains a gesture and whether the gesture is oversized. If the frame image contains a gesture and gesture contour points exist, the contour extraction can be judged successful.
S607, the terminal device performs fingertip identification.
Specifically, if the contour extraction succeeded, the characteristic data of the gesture is identified next, starting with fingertip identification. Alternatively, if in S604 the terminal device judges from the current frame number that global tracking is not needed, mode two described above is followed: no complete contour extraction is performed on the current gesture; instead, fingertip location prediction is carried out, and from the obtained predicted fingertip pixels and predicted finger root pixels a gesture contour linked list similar to that of contour extraction is finally obtained.
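The prediction in mode two is based on an exponential moving average (EMA) over earlier frames, per the claims. A minimal sketch of one EMA update for a fingertip coordinate, with an assumed smoothing factor:

```python
def ema_predict(prev_estimate, observation, alpha=0.5):
    """EMA update for one fingertip (or finger root) coordinate.

    prev_estimate / observation: (x, y) tuples; alpha weights the
    newest observation against the running estimate."""
    return tuple(alpha * o + (1 - alpha) * p
                 for p, o in zip(prev_estimate, observation))
```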
S608, the contour detection upper limit value is corrected according to the first detected fingertip location information.
Specifically, this process may refer to the description above and is not repeated here for brevity. It should be noted that if the contour detection upper limit value needs to be corrected, S605 is executed after the correction.
S609, data format conversion.
Because the recognized fingertip locations are only coordinates of fingertips in the image, in order to show the fingertip locations to the user, the terminal device can change the color data of the fingertip pixels and the surrounding pixels so that the user can view them. In addition, the terminal device can also change the color data of the gesture contour.
S610, the terminal device integrates the fingertip locations into the image data and submits it to the display controller.
For example, the terminal device may change the color values of the pixels of the color image within a circular region of 10 pixels radius centered on each fingertip coordinate, and may also change the color values at the positions in the color image corresponding to the gesture contour pixels and their adjacent pixels.
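A sketch of this recoloring with numpy, assuming the color image is an H×W×3 array and fingertips are (x, y) pixel coordinates:

```python
import numpy as np

def highlight_fingertips(color_img, fingertips, radius=10,
                         color=(0, 255, 0)):
    """Recolor a disc of `radius` pixels around each fingertip."""
    h, w = color_img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    for fx, fy in fingertips:
        mask = (xx - fx) ** 2 + (yy - fy) ** 2 <= radius ** 2
        color_img[mask] = color
    return color_img
```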
S611, the gesture image is echoed on the GUI.
The display controller displays the image processed in S610 for the user to check.
S612, the terminal device judges whether the user confirms the current gesture as the unlock gesture.
Specifically, after fingertip identification is completed, if the user is satisfied with the current gesture, the user presses the key confirming that the gesture is to be set, confirming it as the unlock gesture, and the terminal device records the frame image and continues to execute S614. If the user is not satisfied with the current gesture, the unlock gesture is set anew and the terminal device jumps back to execute S603.
S613, the terminal device judges whether the finger number in the gesture meets the setting requirement.
As one embodiment of the application, the terminal device judges whether the finger number is greater than or equal to 3; if so, S615 is executed; if not, the current image is discarded and the next frame of image is recognized.
S614, the terminal device continues to identify other gesture feature data.
When the finger number in the gesture meets the setting requirement, the terminal device continues to identify the finger length, the fingertip location characteristic, and the finger width characteristic.
S615, the terminal device saves the gesture feature data, including the finger number, to nonvolatile memory for use in the unlocking phase.
At this point, the image displayed in S611 is set as the unlock gesture.
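Persisting the feature data for the unlocking phase (S615) could look like the following; the file path and JSON schema are illustrative assumptions, not the patent's storage format.

```python
import json

def save_preset_gesture(features, path="preset_gesture.json"):
    """Save finger number, lengths, fingertip location characteristic,
    and relative-width vector to nonvolatile storage."""
    with open(path, "w") as f:
        json.dump(features, f)

def load_preset_gesture(path="preset_gesture.json"):
    with open(path) as f:
        return json.load(f)
```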
An embodiment of the three-dimensional gesture unlocking method according to the embodiment of the present application is described in detail below with reference to Fig. 7. This embodiment mainly describes the corresponding operations of the user and the terminal device in the unlocking phase.
S701, the terminal device detects a second operation.
For example, the second operation may be the user pressing the power key. In the unlocking phase, the gesture unlocking process is started by the user pressing the power key.
S702, the infrared transmitter and the infrared camera are triggered to work.
For example, the infrared transmitter and the infrared camera are triggered to work after the user presses the power key.
S703, the current gesture image is obtained in real time.
Specifically, the infrared spectrum analysis chip obtains the depth data of the image acquired by the infrared camera in real time.
S704, the terminal device judges, according to the current frame number, whether mode one described above needs to be executed, i.e. whether complete contour extraction is to be performed on the current gesture. If so, S705 is executed; otherwise S707 is executed.
S705, the terminal device performs contour extraction.
Specifically, the terminal device can obtain the outline data of the frame image according to mode one described above.
S706, the terminal device judges whether the contour extraction succeeded, i.e. judges whether the frame image contains a gesture and whether the gesture is oversized. If the frame image contains a gesture and gesture contour points exist, the contour extraction can be judged successful.
S707, the terminal device performs fingertip identification.
Specifically, if the contour extraction succeeded, the characteristic data of the gesture is identified next, starting with fingertip identification. Alternatively, if in S704 the terminal device judges from the current frame number that global tracking is not needed, mode one described above need not be executed; mode two described above is followed instead: no complete contour extraction is performed on the current gesture, fingertip location prediction is carried out, and from the obtained predicted fingertip pixels and predicted finger root pixels a gesture contour linked list similar to that of contour extraction is finally obtained.
Optionally, after fingertip identification, the method may also include: the terminal device judging whether the finger number in the gesture meets the setting requirement.
As one embodiment of the application, the terminal device judges whether the finger number is greater than or equal to 3; if so, S708 or S709 is executed; if not, the current image is discarded and the next frame of image is recognized.
S708, the contour detection upper limit value is corrected according to the first detected fingertip location information.
Specifically, this process may refer to the description above and is not repeated here for brevity. It should be noted that if the contour detection upper limit value needs to be corrected, S705 is executed after the correction.
S709, the terminal device continues to identify other gesture feature data.
For example, when the finger number in the gesture meets the setting requirement, the terminal device continues to identify the finger length, the fingertip location characteristic, and the finger width characteristic. If S709 is executed directly, this step is performed immediately after fingertip identification.
S710, it is judged whether the characteristic data of the gesture and the characteristic data of the preset gesture meet the matching condition.
Specifically, depending on whether finger number matching has already been performed before this step, it is judged whether the finger length, fingertip location characteristic, and finger width characteristic of the gesture match the corresponding finger length, fingertip location characteristic, and finger width characteristic of the preset gesture; or it is judged whether the finger number, finger length, fingertip location characteristic, and finger width characteristic of the gesture match the corresponding finger number, finger length, fingertip location characteristic, and finger width characteristic of the preset gesture. If they match, S711 is executed; otherwise S712 is executed.
S711, if the characteristic data of the gesture and the characteristic data of the preset gesture meet the condition, the terminal device is unlocked.
S712, if the characteristic data of the gesture and the characteristic data of the preset gesture do not meet the condition, it is judged whether the unlocking process has timed out. If it has, S713 is executed; otherwise the flow jumps back to S703.
S713, the infrared equipment, i.e. the infrared transmitter and the infrared camera, is turned off.
S714, the terminal device prompts the user that the unlocking process has timed out, and locks the screen after a certain period of time.
It should be understood that the certain period of time may be preset by the system and may be, for example, 2 seconds or 4 seconds.
It should also be understood that S713 and S714 may be executed in no particular order.
It should also be understood that, when it is stated herein that step A and step B are executed in no particular order, this means that step A and step B may be performed simultaneously, or step A may be performed first and then step B, or step B may be performed first and then step A.
Fig. 8 is a schematic flowchart of the method for obtaining a gesture image according to the embodiment of the present application. The method can be applied to, but is not limited to, the system shown in Fig. 1.
S810, the current gesture image of the user is obtained.
For example, when the method is executed by the system shown in Fig. 1, the depth camera 120 receives the excitation light source, such as infrared light, reflected from the outside world, digitally encodes it, and transmits the resulting digital image to the spectral analysis module 130. The spectral analysis module 130 analyzes the speckle and calculates the distance z between each corresponding pixel (x, y) in the current gesture image and the depth camera 120, so that the current gesture image can be obtained.
S820, the outline data of the current gesture in the current gesture image is obtained according to the current gesture image.
Illustratively, the outline data described herein can refer to the coordinates (x, y, z) of each pixel constituting the gesture contour in the gesture image.
Optionally, the outline data of the current gesture in the current gesture image can be obtained in mode one or mode two described above. Reference may be made to the description above; for brevity, details are not repeated here.
S830, the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture are determined according to the outline data of the current gesture.
Specifically, reference may be made to the method described above; for brevity, details are not repeated here.
S840, the characteristic data of the current gesture is determined according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture.
Optionally, determining the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture comprises:
determining, according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture, at least one of the following in the current gesture:
the finger number, finger length, fingertip location characteristic, and finger width characteristic, where the fingertip location characteristic is used to indicate the relative position between fingers and the finger width characteristic is used to indicate finger width.
Specifically, reference may be made to the method described above; for brevity, details are not repeated here.
According to the method for obtaining a gesture image of the application, the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture can be obtained from the outline data of the current gesture in the current gesture image, so that the characteristic data of the current gesture can be determined. Optionally, the method further includes, after step S840: unlocking the terminal device when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture.
Fig. 9 shows a schematic block diagram of a terminal device 900 of the embodiment of the present application. As shown in Fig. 9, the terminal device 900 includes an acquiring unit 910 and a processing unit 920.
The acquiring unit 910 is configured to obtain the current gesture image of the user.
The processing unit 920 is configured to: obtain the outline data of the current gesture in the current gesture image according to the current gesture image; determine the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture; determine the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture; and unlock the terminal device when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture.
Therefore, the terminal device of the embodiment of the present application obtains in real time the three-dimensional gesture image presented by the user in the three-dimensional space in front of the camera, extracts the user's gesture from the gesture image, and unlocks the terminal device by matching against the unlock gesture previously set by the user, thereby providing the user with a completely new unlocking manner that is engaging, accurate, and fast.
Optionally, the terminal device of the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet, a computer, a multimedia player, and a game machine. All devices using a mobile communications network fall within the protection scope of the embodiment of the present application.
The terminal device 900 of the embodiment of the present application can correspond to the terminal device in the method embodiments of the application, and the above and other operations and/or functions of the modules in the terminal device 900 serve respectively to realize the corresponding processes of the methods shown in Fig. 2 to Fig. 7; for brevity, details are not repeated here.
Fig. 10 shows a schematic block diagram of a terminal device 1000 of the embodiment of the present application. As shown in Fig. 10, the terminal device 1000 includes an acquiring unit 1010 and a processing unit 1020.
The acquiring unit 1010 is configured to obtain the current gesture image of the user.
The processing unit 1020 is configured to: obtain the outline data of the current gesture in the current gesture image according to the current gesture image; determine the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture; and determine the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture.
According to the terminal device of the application, the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture can be obtained from the outline data of the current gesture in the current gesture image, so that the characteristic data of the current gesture can be determined.
Optionally, the terminal device of the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet, a computer, a multimedia player, and a game machine. All devices using a mobile communications network fall within the protection scope of the embodiment of the present application.
The terminal device 1000 of the embodiment of the present application can correspond to the terminal device in the method embodiments of the application, and the above and other operations and/or functions of the modules in the terminal device 1000 serve respectively to realize the corresponding processes of the method shown in Fig. 8; for brevity, details are not repeated here.
Fig. 11 is another schematic block diagram of the terminal device according to the embodiment of the present application. The terminal device 1100 shown in Fig. 11 includes components such as a radio frequency (Radio Frequency, RF) circuit 1110, a memory 1120, other input devices 1130, a display screen 1140, a sensor 1150, an audio circuit 1160, an I/O subsystem 1170, a processor 1180, and a power supply 1190. It will be understood by those skilled in the art that the terminal device structure shown in Fig. 11 does not constitute a limitation on the terminal device, which may include more or fewer components than illustrated, combine certain components, split certain components, or use a different component arrangement. Those skilled in the art will understand that the display screen 1140 belongs to the user interface (User Interface, UI) and that the terminal device 1100 may include more or fewer user interface elements than illustrated.
Each component of the terminal device 1100 is introduced below with reference to Fig. 11:
The RF circuit 1110 can be used for receiving and sending signals during information transmission and reception or during a call; in particular, after downlink information of a base station is received, it is passed to the processor 1180 for processing, and uplink data is sent to the base station. In general, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 1110 can also communicate with networks and other devices by wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1120 can be used to store software programs and modules, and the processor 1180 executes the various function applications and data processing of the terminal device 1100 by running the software programs and modules stored in the memory 1120. The memory 1120 can mainly include a program storage area and a data storage area, where the program storage area can store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area can store data created according to the use of the terminal device 1100 (such as audio data and a phone directory) and the like. In addition, the memory 1120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one disk storage device, a flash memory device, or another volatile solid-state storage component.
The other input devices 1130 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device 1100. Specifically, the other input devices 1130 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control button or a switch button), a trackball, a mouse, a joystick, and a light mouse (a light mouse is a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen). The other input devices 1130 are connected to the other-input-device controller 1171 of the I/O subsystem 1170 and exchange signals with the processor 1180 under the control of the other-input-device controller 1171.
The display screen 1140 can be used to display information input by the user or information provided to the user and the various menus of the terminal device 1100, and can also receive user input. Specifically, the display screen 1140 may include a display panel 1141 and a touch panel 1142. The display panel 1141 can be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. The touch panel 1142, also referred to as a touch screen or touch-sensitive screen, can collect contact or contactless operations of the user on or near it (for example, operations of the user on or near the touch panel 1142 using a finger, a stylus, or any other suitable object or attachment, which may also include somatosensory operations; the operations include single-point control operations, multi-point control operations, and other operation types) and drive the corresponding connected device according to a preset program. Optionally, the touch panel 1142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation and posture of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into information that the processor can handle, sends it to the processor 1180, and can receive and execute commands sent by the processor 1180. Furthermore, the touch panel 1142 can be implemented using resistive, capacitive, infrared, surface acoustic wave, and other types, or using any technology developed in the future. Further, the touch panel 1142 can cover the display panel 1141; the user can operate on or near the touch panel 1142 covering the display panel 1141 according to the content displayed by the display panel 1141 (the displayed content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, and the like). After the touch panel 1142 detects an operation on or near it, the operation is sent through the I/O subsystem 1170 to the processor 1180 to determine the user input, and the processor 1180 then provides a corresponding visual output on the display panel 1141 through the I/O subsystem 1170 according to the user input. Although in Fig. 11 the touch panel 1142 and the display panel 1141 are shown as two independent components implementing the input and output functions of the terminal device 1100, in some embodiments the touch panel 1142 and the display panel 1141 can be integrated to implement the input and output functions of the terminal device 1100.
The terminal device 1100 may also include at least one sensor 1150, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1141 and/or the backlight when the terminal device 1100 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when static, and can be used for applications that identify the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor can also be configured for the terminal device 1100; details are not described here.
The audio circuit 1160, the loudspeaker 1161, and the microphone 1162 can provide an audio interface between the user and the terminal device 1100. The audio circuit 1160 can transmit the signal converted from the received audio data to the loudspeaker 1161, which converts it into a sound signal for output; on the other hand, the microphone 1162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1160 and converted into audio data; the audio data is then output to the RF circuit 1110 to be sent to, for example, another mobile phone, or output to the memory 1120 for further processing.
The I/O subsystem 1170 is used to control external input and output devices and may include an other-input-device controller 1171, a sensor controller 1172, and a display controller 1173. Optionally, one or more other-input-device controllers 1171 receive signals from and/or send signals to the other input devices 1130; the other input devices 1130 may include physical buttons (push buttons, rocker buttons, etc.), a dial, a slide switch, a joystick, a click wheel, and a light mouse (a light mouse is a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen). It is worth noting that the other-input-device controllers 1171 can be connected to any one or more of the above devices. The display controller 1173 in the I/O subsystem 1170 receives signals from and/or sends signals to the display screen 1140. After the display screen 1140 detects a user input, the display controller 1173 converts the detected user input into interaction with the user interface objects displayed on the display screen 1140, i.e. realizes human-computer interaction. The sensor controller 1172 can receive signals from and/or send signals to one or more sensors 1150.
The processor 1180 is the control center of the terminal device 1100. It connects the various parts of the entire terminal device using various interfaces and lines, and executes the various functions of the terminal device 1100 and processes data by running or executing the software programs and/or modules stored in the memory 1120 and calling the data stored in the memory 1120, so as to monitor the terminal device as a whole. Optionally, the processor 1180 may include one or more processing units; optionally, the processor 1180 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 1180.
The processor 1180 is configured to: obtain the current gesture image of the user; obtain the outline data of the current gesture in the current gesture image according to the current gesture image; determine the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture; determine the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture; and unlock the terminal device when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture.
Alternatively, the processor 1180 is configured to: obtain the current gesture image of the user; obtain the outline data of the current gesture in the current gesture image according to the current gesture image; determine the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture; and determine the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture.
The terminal device 1100 further includes a power supply 1190 (such as a battery) that supplies power to the components. Optionally, the power supply can be logically connected to the processor 1180 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
Although not shown, the terminal device 1100 can also include a camera, a Bluetooth module, and the like; details are not described here.
It should be understood that the terminal device 1100 can correspond to the terminal device in the three-dimensional gesture unlocking method according to the embodiment of the present application and may include entity units for executing the methods performed by the terminal device or electronic device in the above methods. The entity units in the terminal device 1100 and the other operations and/or functions described above serve respectively to realize the corresponding processes of the above methods; for brevity, details are not repeated here.
It should also be understood that the terminal device 1100 may include entity units for executing the method for obtaining a gesture image described above. The entity units in the terminal device 1100 and the other operations and/or functions described above serve respectively to realize the corresponding processes of that method; for brevity, details are not repeated here.
It should also be understood that the processor in the embodiment of the present application may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method embodiments can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module can be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium mature in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It should also be understood that the memory in the embodiment of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct rambus random access memory (Direct Rambus RAM, DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memory.
It should also be understood that the bus system may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. However, for clarity of illustration, the various buses are all designated as the bus system in the figure.
It should also be understood that, in the embodiment of the present application, "B corresponding to A" indicates that B is associated with A and that B can be determined according to A. It should also be understood that determining B according to A does not mean determining B only according to A; B can also be determined according to A and/or other information. It should be understood that the term "and/or" herein describes only an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
During implementation, each step of the above methods can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The steps of the methods for transmitting an uplink signal disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module can be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium mature in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware. To avoid repetition, details are not described here.
The embodiment of the present application also proposes a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device including multiple application programs, enable the portable electronic device to execute the methods of the embodiments shown in Fig. 2 and/or Fig. 3.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. A person skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the embodiments of the present application.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above can refer to the corresponding processes in the foregoing method embodiments; details are not described here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary; the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, or in other words the part that contributes to the prior art or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
The above describes only specific embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any change or replacement that can be easily conceived by anyone skilled in the art within the technical scope disclosed in the embodiments of the present application shall be covered by the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (30)

  1. A three-dimensional gesture unlocking method, characterized in that the method is applied to a terminal device and comprises:
    obtaining the current gesture image of a user;
    obtaining the outline data of the current gesture in the current gesture image according to the current gesture image;
    determining the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture;
    determining the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture;
    unlocking the terminal device when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture.
  2. The method according to claim 1, characterized in that determining the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture comprises:
    determining, according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture, at least one of the following information in the current gesture:
    a finger number, a finger length, a fingertip location characteristic, and a finger width characteristic, wherein the fingertip location characteristic is used to indicate the relative position between fingers and the finger width characteristic is used to indicate finger width.
  3. The method according to claim 2, characterized in that obtaining the outline data of the current gesture in the gesture image according to the current gesture image comprises:
    determining, according to the current frame number of the current gesture image, whether contour extraction needs to be performed on the current gesture;
    when it is determined that contour extraction needs to be performed on the current gesture, searching the contour pixels of the current gesture until all contour pixels of the current gesture have been searched or the number of searched pixels is greater than a preset detection threshold, so as to obtain the outline data of the current gesture.
  4. The method according to claim 3, characterized in that the method further comprises:
    when it is determined that contour extraction does not need to be performed on the current gesture, determining the predicted fingertip pixels of the current gesture and the predicted finger root pixels of the current gesture according to an exponential moving average (EMA) algorithm;
    obtaining the outline data of the current gesture according to the predicted fingertip pixels of the current gesture and the predicted finger root pixels of the current gesture.
  5. The method according to any one of claims 2 to 4, characterized in that determining the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture comprises:
    determining v arc segments in the current gesture according to the outline data of the current gesture, v being an integer greater than or equal to 1;
    determining the central pixel of the s-th arc segment according to the beginning pixel and the end pixel of the s-th arc segment in the current gesture, s traversing the integers in [1, v];
    determining the center vector p of the s-th arc segment according to a vector m formed from the central pixel of the s-th arc segment to the k-th pixel before the central pixel of the s-th arc segment, and a vector n formed from the k-th pixel after the central pixel of the s-th arc segment to the central pixel of the s-th arc segment, the center vector p bisecting the smaller of the two angles formed by the vector m and the vector n, k being an integer greater than or equal to 1;
    determining that the central pixel corresponding to the center vector p is a fingertip pixel when the distance between the pixel in the direction pointed to by the center vector p and the position of the depth camera lies within a preset distance interval; or
    determining that the central pixel corresponding to the center vector p is a finger root pixel when the distance between the pixel in the direction pointed to by the center vector p and the position of the depth camera does not lie within the preset distance interval.
  6. The method according to claim 5, characterized in that determining the finger length in the current gesture comprises:
    determining a first distance between the fingertip pixel and the finger root pixel on the left of the fingertip pixel and a second distance between the fingertip pixel and the finger root pixel on the right of the fingertip pixel;
    when the absolute value of the difference between the first distance and the second distance is greater than a preset length threshold, determining the projection, onto the finger direction at the fingertip pixel, of the smaller of the first distance and the second distance as the finger length;
    when the absolute value of the difference between the first distance and the second distance is less than or equal to the preset length threshold, determining the mean of the projection of the first distance onto the finger direction at the fingertip pixel and the projection of the second distance onto the finger direction at the fingertip pixel as the finger length.
  7. The method according to claim 5 or 6, characterized in that determining the finger length in the current gesture comprises:
    when no finger root pixel exists on the right of the fingertip pixel, or when no finger root pixel exists on the left of the fingertip pixel,
    determining a third distance between the fingertip pixel and the finger root pixel on the left of the fingertip pixel, or determining a fourth distance between the fingertip pixel and the finger root pixel on the right of the fingertip pixel;
    determining the projection of the third distance onto the finger direction at the fingertip pixel as the finger length, or
    determining the projection of the fourth distance onto the finger direction at the fingertip pixel as the finger length.
  8. The method according to any one of claims 5 to 7, characterized in that the fingertip location characteristic D_j satisfies the following formula:
    wherein k is the length of the longest finger in the current gesture, k > 0; n is the number of fingers in the current gesture, n being an integer greater than or equal to 1; p denotes a fingertip pixel in the current gesture, x and y being the real-valued coordinates of the fingertip pixel in the current gesture image and z being the depth value of the fingertip pixel in the current gesture image, z ≥ 0.
  9. The method according to any one of claims 5 to 8, characterized in that determining the finger width characteristic in the gesture image comprises:
    equally dividing the length of each finger in the current gesture into m+1 parts along the finger direction, m being an integer greater than or equal to 0;
    for each finger in the current gesture, calculating the width perpendicular to the finger direction at each division point to obtain m*n absolute width values, wherein n is the finger number in the current gesture and n is an integer greater than or equal to 1;
    calculating the ratios of every two of the m*n absolute width values to obtain a feature vector value {d_i, i = 1, 2, ..., mn(mn-1)/2} composed of mn(mn-1)/2 relative widths;
    determining the feature vector value {d_i, i = 1, 2, ..., mn(mn-1)/2} as the finger width characteristic.
  10. The method according to any one of claims 1 to 9, characterized in that the method further comprises:
    determining whether the unlocking process has timed out when the characteristic data of the current gesture does not successfully match the characteristic data of the preset gesture;
    if the unlocking process has timed out, performing screen locking processing on the terminal device after a preset period of time.
  11. The method according to any one of claims 1 to 10, characterized in that before obtaining the outline data of the current gesture in the current gesture image according to the current gesture image, the method further comprises:
    obtaining a candidate gesture image and presenting it to the user based on the user's operation of starting to set an unlock gesture;
    setting the candidate gesture in the candidate gesture image as the preset gesture based on the user's confirmation operation.
  12. The method according to claim 11, characterized in that before setting the candidate gesture in the candidate gesture image as the preset gesture based on the user's confirmation operation, the method further comprises:
    determining whether the finger number of the candidate gesture is greater than or equal to 3;
    if the finger number of the candidate gesture is greater than or equal to 3, setting the candidate gesture as the preset gesture based on the user's confirmation operation;
    wherein the method further comprises:
    obtaining the characteristic data of the candidate gesture.
  13. A method for obtaining a gesture image, characterized by comprising:
    obtaining the current gesture image of a user;
    obtaining the outline data of the current gesture in the current gesture image according to the current gesture image;
    determining the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture;
    determining the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture.
  14. The method according to claim 13, characterized in that determining the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture comprises:
    determining, according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture, at least one of the following in the current gesture:
    a finger number, a finger length, a fingertip location characteristic, and a finger width characteristic, wherein the fingertip location characteristic is used to indicate the relative position between fingers and the finger width characteristic is used to indicate finger width.
  15. A terminal device, characterized by comprising:
    an acquiring unit, configured to obtain the current gesture image of a user; and
    a processing unit, configured to obtain the outline data of the current gesture in the current gesture image according to the current gesture image;
    the processing unit being further configured to determine the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture according to the outline data of the current gesture;
    the processing unit being further configured to determine the characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger root pixels of the current gesture;
    the processing unit being further configured to unlock the terminal device when the characteristic data of the current gesture successfully matches the characteristic data of the preset gesture.
  16. The terminal device according to claim 15, characterized in that the processing unit is specifically configured to:
    determine, according to the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture, at least one of the following for the current gesture:
    the number of fingers, finger length, a fingertip position characteristic, and a finger width characteristic, wherein the fingertip position characteristic indicates the relative positions between fingers, and the finger width characteristic indicates finger widths.
  17. The terminal device according to claim 16, characterized in that the processing unit is specifically configured to:
    determine, according to the current frame number of the current gesture image, whether contour extraction needs to be performed on the current gesture; and
    when it is determined that contour extraction needs to be performed on the current gesture, search the contour pixels of the current gesture until all contour pixels of the current gesture have been searched or the number of searched pixels exceeds a preset detection threshold, so as to obtain the contour data of the current gesture.
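A sketch of the bounded contour search of claim 17, assuming the current gesture has already been segmented into a binary mask. The Moore-neighbour tracing shown here is one conventional way to walk contour pixels, not necessarily the patent's own, and `max_pixels` plays the role of the preset detection threshold.

```python
import numpy as np

# Clockwise 8-neighbour offsets (dy, dx), starting from "west".
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_contour(mask, max_pixels=5000):
    """Walk the outer contour of a binary hand mask until the contour
    closes (all contour pixels searched) or more than max_pixels
    pixels have been visited (the preset detection threshold)."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return []
    start = (int(ys[0]), int(xs[0]))   # topmost-leftmost foreground pixel
    contour = [start]
    cur, back = start, 7               # neighbour search starts at "west"
    while len(contour) < max_pixels:
        for i in range(8):
            d = (back + 1 + i) % 8
            dy, dx = OFFSETS[d]
            y, x = cur[0] + dy, cur[1] + dx
            if 0 <= y < h and 0 <= x < w and mask[y, x]:
                back = (d + 4) % 8     # direction pointing back to cur
                cur = (y, x)
                break
        else:
            return contour             # isolated pixel, nothing to trace
        if cur == start:
            return contour             # contour closed: all pixels searched
        contour.append(cur)
    return contour                     # threshold exceeded: stop early
```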
  18. The terminal device according to claim 17, characterized in that the processing unit is further configured to:
    when it is determined that contour extraction does not need to be performed on the current gesture, determine predicted fingertip pixels of the current gesture and predicted finger-root pixels of the current gesture according to an exponential moving average (EMA) algorithm; and
    obtain the contour data of the current gesture according to the predicted fingertip pixels and the predicted finger-root pixels of the current gesture.
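Claim 18 skips full contour extraction on some frames (gated by the frame number, per claim 17) and instead predicts where the fingertip and finger-root pixels will be. A minimal sketch of one EMA update step follows; the smoothing factor `alpha = 0.5` is an arbitrary choice, as the patent does not fix a value.

```python
def ema_update(prev_ema, observed, alpha=0.5):
    """One EMA step for a 2-D pixel position: the prediction blends the
    newest observation with the running average."""
    return tuple(alpha * o + (1.0 - alpha) * p
                 for o, p in zip(observed, prev_ema))

# Track a fingertip across frames; the EMA is the predicted pixel on
# frames where contour extraction is skipped.
pred = (100.0, 200.0)
for obs in [(102, 199), (105, 197), (107, 196)]:
    pred = ema_update(pred, obs)
print(pred)
```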
  19. The terminal device according to any one of claims 16 to 18, characterized in that the processing unit is specifically configured to:
    determine, according to the contour data of the current gesture, v arc segments in the current gesture, where v is an integer greater than or equal to 1;
    determine the central pixel of the s-th arc segment according to the starting pixel and the ending pixel of the s-th arc segment in the current gesture, where s traverses the integers in [1, v];
    determine a center vector p of the s-th arc segment according to a vector m formed from the pixel k positions before the central pixel of the s-th arc segment to that central pixel, and a vector n formed from the pixel k positions after the central pixel of the s-th arc segment to that central pixel, where the center vector p bisects the smaller of the two angles formed by the vector m and the vector n, and k is an integer greater than or equal to 1; and
    when the distance between the depth camera and the position of the pixel in the direction pointed to by the center vector p falls within a preset distance interval, determine that the central pixel corresponding to the center vector p is a fingertip pixel; or
    when the distance between the depth camera and the position of the pixel in the direction pointed to by the center vector p does not fall within the preset distance interval, determine that the central pixel corresponding to the center vector p is a finger-root pixel.
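A sketch of the fingertip / finger-root test of claim 19, assuming an ordered contour (for example from the tracing sketch above) and a depth image. The 5-pixel probe step along the center vector p, and reading the preset distance interval as a half-open [near, far) range, are illustrative assumptions; the probe is also assumed to stay inside the image.

```python
import numpy as np

def classify_center(contour, c, k, depth, near, far, step=5.0):
    """Claim 19 (sketch): build vectors m and n from the pixels k
    positions before and after the central pixel toward the central
    pixel, bisect the smaller angle between them to get the center
    vector p, then test the depth of a pixel probed along p against
    the preset distance interval [near, far)."""
    center = np.asarray(contour[c], dtype=float)
    m = center - np.asarray(contour[c - k], dtype=float)
    n = center - np.asarray(contour[(c + k) % len(contour)], dtype=float)
    # The sum of the unit vectors bisects the (smaller) angle between
    # m and n; this degenerates when m and n are exactly opposite.
    p = m / np.linalg.norm(m) + n / np.linalg.norm(n)
    p = p / np.linalg.norm(p)
    probe = np.rint(center + step * p).astype(int)   # pixel along p
    d = float(depth[probe[0], probe[1]])             # distance to camera
    return "fingertip" if near <= d < far else "finger_root"
```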
  20. The terminal device according to claim 19, characterized in that the processing unit is specifically configured to:
    determine a first distance between a fingertip pixel and the finger-root pixel to the left of the fingertip pixel, and a second distance between the fingertip pixel and the finger-root pixel to the right of the fingertip pixel;
    when the absolute value of the difference between the first distance and the second distance is greater than a preset length threshold, determine, as the finger length, the projection of the smaller of the first distance and the second distance onto the direction of the finger on which the fingertip pixel lies; and
    when the absolute value of the difference between the first distance and the second distance is less than or equal to the preset length threshold, determine, as the finger length, the mean of the projection of the first distance and the projection of the second distance onto the direction of the finger on which the fingertip pixel lies.
  21. The terminal device according to claim 19 or 20, characterized in that the processing unit is specifically configured to:
    when no finger-root pixel exists to the right of the fingertip pixel, or when no finger-root pixel exists to the left of the fingertip pixel,
    determine a third distance between the fingertip pixel and the finger-root pixel to the left of the fingertip pixel, or determine a fourth distance between the fingertip pixel and the finger-root pixel to the right of the fingertip pixel; and
    determine, as the finger length, the projection of the third distance onto the direction of the finger on which the fingertip pixel lies, or
    determine, as the finger length, the projection of the fourth distance onto the direction of the finger on which the fingertip pixel lies.
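A combined sketch of the finger-length rules of claims 20 and 21, assuming the fingertip and neighbouring finger-root pixels have already been found and that a unit vector along the finger is available; all point arguments are illustrative (y, x) image coordinates.

```python
import numpy as np

def finger_length(tip, left_root, right_root, finger_dir, thresh):
    """Claims 20-21 (sketch). left_root / right_root may be None when
    no finger-root pixel exists on that side (claim 21); thresh is the
    preset length threshold of claim 20."""
    tip = np.asarray(tip, dtype=float)
    axis = np.asarray(finger_dir, dtype=float)
    axis = axis / np.linalg.norm(axis)

    def proj(root):
        # Projection of the tip-to-root span onto the finger direction.
        return abs(float(np.dot(tip - np.asarray(root, dtype=float), axis)))

    if left_root is None:                 # claim 21: only a root on the right
        return proj(right_root)
    if right_root is None:                # claim 21: only a root on the left
        return proj(left_root)
    d1 = float(np.linalg.norm(tip - np.asarray(left_root, dtype=float)))
    d2 = float(np.linalg.norm(tip - np.asarray(right_root, dtype=float)))
    if abs(d1 - d2) > thresh:             # claim 20: one valley is much deeper
        return proj(left_root if d1 < d2 else right_root)
    return 0.5 * (proj(left_root) + proj(right_root))
```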
  22. The terminal device according to any one of claims 19 to 21, characterized in that the fingertip position characteristic D_j satisfies the following formula [given as an image in the original publication and not reproduced in this text]:
    wherein k is the length of the longest finger in the current gesture, k > 0; n is the number of fingers in the current gesture, n being an integer greater than or equal to 1; p is a fingertip pixel in the current gesture, where x and y are the coordinates of the fingertip pixel in the current gesture image, x and y being real numbers; and z is the depth value of the fingertip pixel in the current gesture image, z >= 0.
  23. The terminal device according to any one of claims 19 to 22, characterized in that the processing unit is specifically configured to:
    divide the length of each finger in the current gesture equally into m + 1 parts along the finger direction, where m is a positive integer;
    for each finger in the current gesture, calculate the width perpendicular to the finger direction at each division point, obtaining m*n absolute width values, where n is the number of fingers in the current gesture and n is an integer greater than or equal to 1;
    calculate the ratio of every two of the m*n absolute width values, obtaining a feature vector {d_i, i = 1, 2, ..., mn(mn-1)/2} composed of mn(mn-1)/2 relative widths; and
    determine the feature vector {d_i, i = 1, 2, ..., mn(mn-1)/2} as the finger width characteristic.
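A sketch of the finger-width characteristic of claim 23. The width sampler is passed in as a function because the patent does not prescribe how widths are measured on the hand mask; everything else follows the claim: m widths per finger at the equal division points, m*n absolute widths, and mn(mn-1)/2 pairwise ratios.

```python
import numpy as np

def width_characteristic(width_at, n_fingers, m):
    """Claim 23 (sketch). width_at(f, t) returns the width of finger f,
    perpendicular to its direction, at fraction t of its length.
    Splitting each length into m+1 equal parts yields m interior
    division points per finger, hence m*n absolute widths."""
    widths = np.array([width_at(f, j / (m + 1))
                       for f in range(n_fingers)
                       for j in range(1, m + 1)], dtype=float)
    # Feature vector {d_i}: ratio of every two absolute widths,
    # i = 1 .. mn(mn-1)/2.
    return np.array([widths[a] / widths[b]
                     for a in range(widths.size)
                     for b in range(a + 1, widths.size)])

# Example with a dummy sampler: 3 fingers, m = 2 division points each.
feats = width_characteristic(lambda f, t: 10.0 + f + t, 3, 2)
print(feats.size)   # 6 absolute widths -> 6*5/2 = 15 relative widths
```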
  24. The terminal device according to any one of claims 15 to 23, characterized in that the processing unit is further configured to:
    when the characteristic data of the current gesture does not successfully match the characteristic data of the preset gesture, determine whether the unlocking process has timed out; and
    if the unlocking process has timed out, perform screen-locking processing on the terminal device after a preset time period.
  25. The terminal device according to any one of claims 15 to 24, characterized in that the processing unit is further configured to:
    obtain a candidate gesture image and present it to the user, based on an operation of the user to start setting an unlock gesture; and
    determine, based on a setting operation of the user, that the candidate gesture in the candidate gesture image is set as the preset gesture.
  26. The terminal device according to claim 25, characterized in that the processing unit is further configured to:
    determine whether the number of fingers of the candidate gesture is greater than or equal to 3; and
    if the number of fingers of the candidate gesture is greater than or equal to 3, determine, based on the setting operation of the user, that the candidate gesture is set as the preset gesture;
    wherein the processing unit is further configured to:
    obtain the characteristic data of the candidate gesture.
  27. A terminal device, characterized by comprising:
    an acquiring unit, configured to obtain a current gesture image of a user; and
    a processing unit, configured to obtain, according to the current gesture image, contour data of the current gesture in the current gesture image;
    wherein the processing unit is further configured to determine, according to the contour data of the current gesture, fingertip pixels of the current gesture and/or finger-root pixels of the current gesture; and
    the processing unit is further configured to determine characteristic data of the current gesture according to the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture.
  28. The terminal device according to claim 27, characterized in that the processing unit is specifically configured to:
    determine, according to the fingertip pixels of the current gesture and/or the finger-root pixels of the current gesture, at least one of the following for the current gesture:
    the number of fingers, finger length, a fingertip position characteristic, and a finger width characteristic, wherein the fingertip position characteristic indicates the relative positions between fingers, and the finger width characteristic indicates finger widths.
  29. A terminal device, characterized by comprising a memory, a processor, and a display;
    the memory being configured to store a program; and
    the processor being configured to execute the program stored in the memory, wherein, when the program is executed, the processor performs the method according to any one of claims 1 to 12.
  30. A terminal device, characterized by comprising a memory, a processor, and a display;
    the memory being configured to store a program; and
    the processor being configured to execute the program stored in the memory, wherein, when the program is executed, the processor performs the method according to claim 13 or 14.
CN201780004005.3A 2016-10-14 2017-04-10 Three-dimensional gesture unlocking method, gesture image obtaining method and terminal equipment Active CN108351708B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201610898260 2016-10-14
CN2016108982604 2016-10-14
PCT/CN2017/079936 WO2018068484A1 (en) 2016-10-14 2017-04-10 Three-dimensional gesture unlocking method, method for acquiring gesture image, and terminal device

Publications (2)

Publication Number Publication Date
CN108351708A true CN108351708A (en) 2018-07-31
CN108351708B CN108351708B (en) 2020-04-03

Family

ID=61905077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780004005.3A Active CN108351708B (en) 2016-10-14 2017-04-10 Three-dimensional gesture unlocking method, gesture image obtaining method and terminal equipment

Country Status (2)

Country Link
CN (1) CN108351708B (en)
WO (1) WO2018068484A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110187771A (en) * 2019-05-31 2019-08-30 努比亚技术有限公司 Gesture interaction method, device, wearable device and computer storage medium high up in the air
CN110532863A (en) * 2019-07-19 2019-12-03 平安科技(深圳)有限公司 Gesture operation method, device and computer equipment
CN112748822A (en) * 2019-10-29 2021-05-04 Oppo广东移动通信有限公司 Projection keyboard system, mobile terminal and implementation method of projection keyboard
CN110187771B (en) * 2019-05-31 2024-04-26 努比亚技术有限公司 Method and device for interaction of air gestures, wearable equipment and computer storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117391A (en) * 2009-12-30 2011-07-06 优方科技股份有限公司 Touch lock circuit architecture operated by using gesture or graph and operating method thereof
KR20110103598A (en) * 2010-03-15 2011-09-21 주식회사 엘지유플러스 Terminal unlock system and terminal unlock method
US20130174094A1 (en) * 2012-01-03 2013-07-04 Lg Electronics Inc. Gesture based unlocking of a mobile terminal
CN103246836A (en) * 2013-04-03 2013-08-14 李健 Finger slide identification unlocking method for touch screen
CN103733614A (en) * 2011-06-29 2014-04-16 亚马逊技术公司 User identification by gesture recognition
US20150177842A1 (en) * 2013-12-23 2015-06-25 Yuliya Rudenko 3D Gesture Based User Authorization and Device Control Methods
US9355236B1 (en) * 2014-04-03 2016-05-31 Fuji Xerox Co., Ltd. System and method for biometric user authentication using 3D in-air hand gestures

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6155786B2 (en) * 2013-04-15 2017-07-05 オムロン株式会社 Gesture recognition device, gesture recognition method, electronic device, control program, and recording medium


Also Published As

Publication number Publication date
CN108351708B (en) 2020-04-03
WO2018068484A1 (en) 2018-04-19

Similar Documents

Publication Publication Date Title
CN107507239B (en) A kind of image partition method and mobile terminal
EP3637290B1 (en) Unlocking control method and related product
US11074466B2 (en) Anti-counterfeiting processing method and related products
CN111985265A (en) Image processing method and device
CN109348135A (en) Photographic method, device, storage medium and terminal device
CN108985220B (en) Face image processing method and device and storage medium
CN110443769B (en) Image processing method, image processing device and terminal equipment
US10664145B2 (en) Unlocking control methods and apparatuses, and electronic devices
US11782478B2 (en) Unlocking control method and related products
EP3252665B1 (en) Method for unlocking terminal and terminal
US9959449B2 (en) Method for controlling unlocking and terminal
CN107004073A (en) The method and electronic equipment of a kind of face verification
WO2019024718A1 (en) Anti-counterfeiting processing method, anti-counterfeiting processing apparatus and electronic device
CN108875594A (en) A kind of processing method of facial image, device and storage medium
CN109594880A (en) Control method and apparatus, storage medium and the vehicle of vehicle trunk
CN110825223A (en) Control method and intelligent glasses
CN110689479A (en) Face makeup method, device, equipment and medium
CN109101119A (en) terminal control method, device and mobile terminal
CN108351708A (en) Three-dimension gesture unlocking method, the method and terminal device for obtaining images of gestures
WO2020015655A1 (en) Mobile terminal and screen unlocking method and device
CN110944112A (en) Image processing method and electronic equipment
CN109657643A (en) A kind of image processing method and device
CN109284591A (en) Face unlocking method and device
KR101622197B1 (en) Apparatus and method for recognzing gesture in mobile device
CN109618234A (en) Video playing control method, device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant