CN108537203A - Palm key point positioning method based on a convolutional neural network - Google Patents

Palm key point positioning method based on a convolutional neural network

Info

Publication number
CN108537203A
CN108537203A
Authority
CN
China
Prior art keywords
finger
palm
neural networks
convolutional neural
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810363953.2A
Other languages
Chinese (zh)
Other versions
CN108537203B (en)
Inventor
谢清禄
余孟春
邹向群
徐宏锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shizhen Information Technology Co Ltd
Original Assignee
Guangzhou Shizhen Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shizhen Information Technology Co Ltd filed Critical Guangzhou Shizhen Information Technology Co Ltd
Priority to CN201810363953.2A priority Critical patent/CN108537203B/en
Publication of CN108537203A publication Critical patent/CN108537203A/en
Application granted granted Critical
Publication of CN108537203B publication Critical patent/CN108537203B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1347 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a palm key point positioning method based on a convolutional neural network, comprising the following steps: S1, acquire palm images, label key point information and train the convolutional neural network; S2, collect finger region images as a data set; S3, locate 6 key points on each finger; S4, for each finger, locate the midpoint of the joint crease segment at the lower end of the base segment and the fingertip point farthest from that midpoint within the corresponding finger region, these serving as 2 key points of the finger; S5, the convolutional neural network connects the base-crease midpoints of two adjacent fingers, and the midpoint of each connecting line serves as a palm key point. With the palm key point positioning method described above, palm key points can be located quickly and accurately. By combining the fixed character of finger knuckle creases with the self-learning capability of a convolutional neural network to locate the relevant feature points, the variability of key point positioning that relies only on edge information and contour features is avoided, making the positioning more accurate.

Description

Palm key point positioning method based on a convolutional neural network
Technical field
The present invention relates to the technical field of palm key point positioning, and in particular to a palm key point positioning method based on a convolutional neural network.
Background art
Palmprint and palm vein recognition technologies generally use a camera to acquire palm images under visible or near-infrared light and are realized through steps such as preprocessing of the palm image, positioning of the recognition region, feature extraction and matching. Positioning of the recognition region is the basic link of palmprint and palm vein recognition; locating the recognition region quickly, accurately and with high quality is a critical step that directly affects the performance of the whole recognition system. Positioning the recognition region generally requires locating key points on the palm image, and the recognition region is then cropped according to the located key points. Under normal circumstances, the palm contour is described from the edge information between the palm image and the background, and the key points are then located.
Chinese invention patent application CN102542242A discloses a biometric region positioning method in which the biometric image is binarized, the image background is removed, noise is removed and edge point information is obtained, key points are located, and the biometric region is finally determined from the key points. Patent application CN104361339A discloses a method of extracting a palm image from the palm region according to a posterior probability map of the foreground image, palm edge information and image segmentation. Patent application CN106991380A discloses a method in which the contour of a binarized palm vein image is extracted with the Canny algorithm, finger root points are then located by a search rule, and the midpoint of the line connecting the finger root points is taken as the key point so as to obtain the ROI (Region of Interest) image.
The above methods, which locate key points according to edge information and contour features, can determine a relatively fixed recognition region, but they require the edge information and contour features to be clear and complete. Under changing lighting, viewing angle, background and distance conditions, it is often difficult to obtain high-quality key point positioning and recognition regions.
Summary of the invention
The purpose of the present invention is to overcome the disadvantages of the above technologies by proposing a palm key point positioning method based on a convolutional neural network. The method defines the finger knuckle crease segments, locates the midpoint of the joint crease segment at the lower end of the base segment of each finger, connects the midpoints of two adjacent base-segment crease segments to obtain a connecting line, and locates the midpoint of that connecting line as a palm key point; from the index finger to the little finger, 3 key points can be obtained from the four fingers.
By locating the knuckle crease segment midpoints and then locating the palm key points step by step, stable positioning of the crease segment midpoints can still be obtained even when the edge information and contour features change to a certain extent. Taking advantage of convolutional neural networks in image processing, a good key point positioning model can be obtained through a large amount of training, realizing fast and accurate palm key point positioning on large amounts of data.
To achieve the above purpose, the present invention provides the following technical solution: a palm key point positioning method based on a convolutional neural network, comprising the following steps:
S1, palm images are acquired and key point information is labeled; the labeled images are input to the convolutional neural network as a training sample set to train the network;
S2, the first layer of the convolutional neural network detects the palm image, divides the palm image into a finger region and a palm region, and collects the finger region images as a data set;
S3, the second layer performs key point positioning on the finger region image data set collected by the first-layer convolutional neural network, locates 6 key points on each finger, and crops out 4 finger images as a data set;
S4, the third layer of the convolutional neural network locates, for each finger, the midpoint of the lower-end joint crease segment of the base segment and the fingertip point farthest from that midpoint within the corresponding finger region; the crease segment midpoint and the farthest fingertip point serve as the 2 key points of the finger;
S5, the convolutional neural network connects the base-crease midpoints of two adjacent fingers; the midpoint of each connecting line serves as a palm key point, and the 3 palm key points between the four fingers are defined as GapB, GapC and GapD respectively (a sketch of this staged pipeline is given below).
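For illustration only, the following Python sketch shows how the staged pipeline of steps S1-S5 could be wired together. The stage callables, function names and coordinate conventions are assumptions made for this sketch, not the patented implementation.

```python
# Hypothetical staged pipeline for steps S1-S5 (stage1/stage2/stage3 stand in for the
# first, second and third network layers described above).
import numpy as np

def locate_palm_keypoints(palm_image, stage1, stage2, stage3):
    """Return the three palm key points GapB, GapC and GapD for one palm image."""
    finger_region = stage1(palm_image)            # S2: crop the finger region of the palm
    finger_images = stage2(finger_region)         # S3: 4 cropped finger images (index..little)
    base_midpoints = []
    for finger in finger_images:
        midpoint, fingertip = stage3(finger)      # S4: base-crease midpoint + farthest fingertip point
        base_midpoints.append(np.asarray(midpoint, dtype=float))

    # S5: connect base-crease midpoints of adjacent fingers; the midpoint of each
    # connecting line is a palm key point.
    gaps = [(a + b) / 2.0 for a, b in zip(base_midpoints[:-1], base_midpoints[1:])]
    return dict(zip(["GapB", "GapC", "GapD"], [tuple(g) for g in gaps]))
```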
Preferably, the palm images in step S1 are acquired with a capture device, preprocessed with image enhancement techniques so that they meet the required format, and labeled with key points; the labeled images are input and trained as the sample set of the convolutional neural network.
Preferably, the convolutional neural network in step S1 comprises convolutional layers and pooling layers; the convolutional layers are mainly used to compute feature maps, and the pooling layers are mainly used to reduce the size of the feature maps while keeping their rotation and translation properties, specifically as follows:
When the feature maps reach the designed size and number of layers, the two-dimensional feature maps are flattened in order into a one-dimensional feature vector, which is finally connected and output through a fully connected layer. The operation of a convolutional layer can be expressed as
X^(l,k) = sum_{p=1..n_{l-1}} X^(l-1,p) * W^(l,k,p) + b^(l,k)
where X^(l,k) denotes the k-th feature map output by layer l, n_l denotes the number of feature maps in layer l, and W^(l,k,p) denotes the filter required to map the p-th feature map of layer l-1 to the k-th feature map of layer l; generating each feature map of layer l requires n_{l-1} filters and one bias.
The pooling layers use max pooling; after max pooling, the size of a feature map is reduced to 1/step of the original according to the stride step. Max pooling can be expressed as
X^(l+1,k)(m, n) = max over 0 <= i, j < s of X^(l,k)(m*step + i, n*step + j)
where X^(l+1,k)(m, n) is the value at coordinate (m, n) of the k-th feature map output by layer l+1, s is the size of the pooling kernel, and step is the stride with which the pooling kernel moves; both s and step are set to 2 in the present invention.
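As a concrete reading of the two formulas above, the following numpy sketch computes one convolutional output map from the n_{l-1} input maps (in the cross-correlation form commonly used in CNNs) and applies max pooling; it is an illustrative implementation, not code from the patent.

```python
import numpy as np

def conv_layer_map(inputs, filters, bias):
    """One output feature map X^(l,k): sum of the 'valid' convolutions of each input map
    with its own filter, plus a single bias (cross-correlation form, no kernel flip)."""
    h, w = inputs[0].shape
    fh, fw = filters[0].shape
    out = np.full((h - fh + 1, w - fw + 1), float(bias))
    for x, w_k in zip(inputs, filters):                       # one filter per input map
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] += np.sum(x[i:i + fh, j:j + fw] * w_k)
    return out

def max_pool(feature_map, s=2, step=2):
    """X^(l+1,k)(m, n) = max of the s x s window of X^(l,k) starting at (m*step, n*step)."""
    h, w = feature_map.shape
    return np.array([[feature_map[m * step:m * step + s, n * step:n * step + s].max()
                      for n in range(w // step)]
                     for m in range(h // step)])
```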
Preferably, the finger key points in step S3 are the two endpoints of the lower-end crease segment of each finger knuckle in the image, which are labeled as key points; each finger has 3 such knuckle crease segments, so 6 finger region key points can be located on each finger.
Preferably, according to the output of the second-layer convolutional neural network, the rotation angle of each finger region is estimated, each finger is corrected according to the estimated rotation angle, and the corrected image set is used as a new training sample.
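A minimal sketch of this rotation correction, assuming OpenCV is available; the angle passed in stands for the rotation predicted by the second-layer network, and the function name is an assumption.

```python
import cv2

def correct_finger(finger_image, angle_deg):
    """Rotate a finger crop about its center by the estimated angle so the finger is upright."""
    h, w = finger_image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(finger_image, rot, (w, h), flags=cv2.INTER_LINEAR)
```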
Preferably, the key points in step S4 are the midpoint of the lower-end crease segment of the base segment of the finger and the fingertip point farthest from that midpoint within the corresponding finger region; these 2 points serve as the 2 key points of the finger. Starting from the fingertip, the finger segments are defined in turn as the tip segment, the middle segment and the base (lower) segment.
Preferably, for the output of the third-layer convolutional neural network in step S4, each finger image is rotated according to the rotation angle of that finger from the image correction step, and the rotated finger images are recombined into a finger region image and collected as a new training sample.
Preferably, the palm key points in step S5 are defined as GapB, GapC and GapD respectively: GapB is the key point between the index finger and the middle finger, GapC is the key point between the middle finger and the ring finger, and GapD is the key point between the ring finger and the little finger.
Compared with the prior art, the beneficial effect of the present invention is that palm key points can be obtained quickly and accurately with the palm key point positioning method described above. By combining the fixed character of the finger knuckle creases with the self-learning capability of a convolutional neural network to locate the relevant feature points, the variability of key point positioning that relies only on edge information and contour features is avoided, making the positioning more accurate.
Description of the drawings
Fig. 1 is a structural diagram of the convolutional neural network of the present invention;
Fig. 2 is a schematic diagram of the finger joint crease segment endpoint positions of the present invention;
Fig. 3 is a schematic diagram of the positioning of the 2 key points of a finger according to the present invention;
Fig. 4 is a schematic diagram of the palm key point positioning network of the present invention;
Fig. 5 is a schematic diagram of palm key point positioning and labeling according to the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described below in conjunction with specific embodiments. It should be understood that the specific embodiments described here are only used to illustrate the present invention and are not intended to limit it.
Embodiment 1
Referring to Figs. 1-5, the present invention provides a technical solution: a palm key point positioning method based on a convolutional neural network, comprising 4 convolutional networks, each consisting of convolutional layers, pooling layers and a fully connected layer. The convolutional neural network performs multiple convolution and pooling operations on the input image and finally outputs, through the fully connected layer, the palm image with the located key points. The implementation steps are as follows:
Step S1, palm images are acquired and key point information is labeled; the labeled images are input to the convolutional neural network as a training sample set to train the network.
Further, in step S1 the palm images are obtained with a palm image acquisition device, the acquired images are labeled with key point positions, and the palm images labeled with key point information are input as training sample images into the constructed convolutional neural network for training, yielding a convolutional neural network model for palm key point positioning.
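A minimal training sketch under common assumptions (PyTorch, key points regressed as flattened (x, y) coordinates with an MSE loss); the data loader, model and hyperparameters are placeholders, not details taken from the patent.

```python
import torch
import torch.nn as nn

def train_keypoint_model(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Train a CNN to regress labeled key point coordinates from palm images."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, keypoints in loader:          # keypoints: (batch, num_points * 2)
            images, keypoints = images.to(device), keypoints.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), keypoints)
            loss.backward()
            optimizer.step()
    return model
```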
Step S2, the first layer of the convolutional neural network detects the palm image, divides the palm image into a finger region and a palm region, and collects the finger region images as a data set.
Further, the first layer of the convolutional network in step S2 divides the input palm image into regions; the finger region is the region of the four fingers (index, middle, ring and little fingers), and the finger region image is cropped as the data set for key point positioning.
The structure of the convolutional neural network is shown in Fig. 1. In the convolutional neural network, the convolutional layers are mainly used to compute feature maps, and the pooling layers are mainly used to reduce the size of the feature maps while keeping their rotation and translation properties. When the feature maps reach the designed size and number of layers, the two-dimensional feature maps are flattened in order into a one-dimensional feature vector, which is finally connected and output through a fully connected layer. The operation of a convolutional layer can be expressed as
X^(l,k) = sum_{p=1..n_{l-1}} X^(l-1,p) * W^(l,k,p) + b^(l,k)
where X^(l,k) denotes the k-th feature map output by layer l, n_l denotes the number of feature maps in layer l, and W^(l,k,p) denotes the filter required to map the p-th feature map of layer l-1 to the k-th feature map of layer l; generating each feature map of layer l requires n_{l-1} filters and one bias.
Common pooling methods include max pooling and mean pooling; the convolutional neural network in the present invention uses max pooling. After max pooling, the size of a feature map is reduced to 1/step of the original according to the stride step. Max pooling can be expressed as
X^(l+1,k)(m, n) = max over 0 <= i, j < s of X^(l,k)(m*step + i, n*step + j)
where X^(l+1,k)(m, n) is the value at coordinate (m, n) of the k-th feature map output by layer l+1, s is the size of the pooling kernel, and step is the stride with which the pooling kernel moves; both s and step are set to 2 in the present invention.
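To make the s = step = 2 setting concrete, a small worked example with illustrative values: a 4x4 feature map shrinks to 2x2, each output value being the maximum of one non-overlapping 2x2 window.

```python
import numpy as np

x = np.array([[1, 3, 2, 0],
              [4, 6, 1, 2],
              [0, 2, 9, 5],
              [1, 1, 3, 7]], dtype=float)

# Max pooling with kernel size s = 2 and stride step = 2: 4x4 -> 2x2.
pooled = np.array([[x[0:2, 0:2].max(), x[0:2, 2:4].max()],
                   [x[2:4, 0:2].max(), x[2:4, 2:4].max()]])
print(pooled)   # [[6. 2.]
                #  [2. 9.]]
```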
Step S3, the second layer performs key point positioning on the finger region image data set collected by the first-layer convolutional neural network, locates 6 key points on each finger, and crops out 4 finger images as a data set.
Further, the second layer of the convolutional neural network performs key point positioning on the finger region image collected by the first layer, locating the two endpoints of the lower-end joint crease segment of each knuckle of each finger; from the 3 lower-end joint crease segments of one finger, a total of 6 endpoints can be located, and the images of the four fingers are cropped out as a data set according to the located endpoints and contour information.
As shown in Fig. 2, for the endpoint positioning of the knuckle crease segments of the palm image, taking the joint crease segments of the index finger as an example: the two endpoints of the lower-end joint crease segment of the tip segment (Tip Segment) of the index finger are denoted TI1 (Top segment of Index finger 1) and TI2 (Top segment of Index finger 2); similarly, the two endpoints of the lower-end joint crease segment of the middle segment (Middle Segment) are denoted MI1 (Middle segment of Index finger 1) and MI2 (Middle segment of Index finger 2); and the two endpoints of the lower-end joint crease segment of the base segment (Base Segment) are denoted BI1 (Base segment of Index finger 1) and BI2 (Base segment of Index finger 2). In this way, 6 key points can be located on each finger.
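For illustration, the 6 endpoints of one finger could be carried in a simple mapping keyed by the naming scheme above; the pixel coordinates below are placeholders, not values from the patent.

```python
# Hypothetical container for the 6 knuckle-crease endpoints of the index finger.
index_finger_keypoints = {
    "TI1": (132, 40),  "TI2": (158, 42),    # tip-segment lower crease endpoints
    "MI1": (128, 95),  "MI2": (162, 97),    # middle-segment lower crease endpoints
    "BI1": (124, 160), "BI2": (166, 163),   # base-segment lower crease endpoints
}
```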
Step S4, the third layer of the convolutional neural network locates, for each finger, the midpoint of the lower-end joint crease segment of the base segment and the fingertip point farthest from that midpoint within the corresponding finger region; the crease segment midpoint and the farthest fingertip point serve as the 2 key points of the finger.
Further, the third layer of the convolutional neural network locates, in the finger region of the palm image, the two endpoints of the lower-end joint crease segment of the base segment of each of the four fingers, takes the midpoint of that crease segment as a key point, and locates the fingertip point farthest from that midpoint within the corresponding finger region; the midpoint of the crease segment and the farthest fingertip point are the 2 key points of the finger.
Fig. 3 is a schematic diagram of the 2 key points of a finger. For the lower-end joint crease segment BI1-BI2 of the base segment of the index finger, its midpoint MIB (Middle point of Index finger Base knuckle) is taken, and the fingertip point farthest from the segment midpoint MIB within the corresponding finger region is located; this farthest point is TopI (Top point of Index finger). The segment midpoint MIB and the farthest point TopI are then the 2 located key points of the finger.
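A minimal numpy sketch of this step, assuming the finger outline is available as an array of (x, y) points; how that outline is obtained is outside the scope of the sketch.

```python
import numpy as np

def finger_keypoints(b1, b2, contour):
    """Return (MIB, TopI): the midpoint of the base-crease segment B1-B2 and the
    contour point farthest from that midpoint (the fingertip end)."""
    b1, b2 = np.asarray(b1, dtype=float), np.asarray(b2, dtype=float)
    mib = (b1 + b2) / 2.0
    contour = np.asarray(contour, dtype=float)            # shape (N, 2)
    top_i = contour[np.argmax(np.linalg.norm(contour - mib, axis=1))]
    return tuple(mib), tuple(top_i)
```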
Step S5, the convolutional neural network connects the base-crease midpoints of two adjacent fingers; the midpoint of each connecting line serves as a palm key point, and the 3 palm key points between the four fingers are defined as GapB, GapC and GapD respectively.
Further, on the palm image with the base-crease midpoints located by the third layer, the fourth layer of the convolutional neural network connects the midpoints of two adjacent fingers, obtaining a connecting line whose two endpoints are the base-crease midpoints, and locates the midpoint of that connecting line. That midpoint is the located finger-gap point between two fingers and is taken as a palm key point, labeled GapB, GapC or GapD respectively. As shown in Fig. 3, the midpoint MIB of the base-crease segment of the index finger and the midpoint MMB (Middle point of Middle finger Base knuckle) of the base-crease segment of the middle finger are taken as the two endpoints and connected, giving the connecting line MIB-MMB; the midpoint of this connecting segment is located and labeled GapB, which serves as a palm key point. Fig. 5 shows the palm key point positioning and labeling; by the above technical method, the 3 gap points between the four fingers can be located, namely the palm key points GapB, GapC and GapD.
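Following Fig. 3, the palm key point between two adjacent fingers is simply the midpoint of the segment joining their base-crease midpoints. A small sketch, with placeholder coordinates rather than measured values:

```python
import numpy as np

def gap_point(midpoint_a, midpoint_b):
    """Midpoint of the line connecting two adjacent base-crease midpoints, e.g. MIB-MMB -> GapB."""
    return tuple((np.asarray(midpoint_a, dtype=float) + np.asarray(midpoint_b, dtype=float)) / 2.0)

MIB = (145, 161)            # index finger base-crease midpoint (placeholder)
MMB = (205, 150)            # middle finger base-crease midpoint (placeholder)
GapB = gap_point(MIB, MMB)  # -> (175.0, 155.5)
```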
The palm key point positioning method based on a convolutional neural network provided by the present invention can obtain relatively stable palm key points, which is conducive to obtaining a high-quality palm recognition region quickly and accurately and improves the performance of palmprint or palm vein recognition systems.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the protection scope of the present invention.

Claims (8)

1. A palm key point positioning method based on a convolutional neural network, characterized in that the palm key point positioning method based on a convolutional neural network comprises the following steps:
S1, palm images are acquired and key point information is labeled; the labeled images are input to the convolutional neural network as a training sample set to train the network;
S2, the first layer of the convolutional neural network detects the palm image, divides the palm image into a finger region and a palm region, and collects the finger region images as a data set;
S3, the second layer performs key point positioning on the finger region image data set collected by the first-layer convolutional neural network, locates 6 key points on each finger, and crops out 4 finger images as a data set;
S4, the third layer of the convolutional neural network locates, for each finger, the midpoint of the lower-end joint crease segment of the base segment and the fingertip point farthest from that midpoint within the corresponding finger region; the crease segment midpoint and the farthest fingertip point serve as the 2 key points of the finger;
S5, the convolutional neural network connects the base-crease midpoints of two adjacent fingers; the midpoint of each connecting line serves as a palm key point, and the 3 palm key points between the four fingers are defined as GapB, GapC and GapD respectively.
2. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: the palm images in step S1 are acquired with a capture device, preprocessed with image enhancement techniques so that they meet the required format, and labeled with key points; the labeled images are input and trained as the sample set of the convolutional neural network.
3. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: the convolutional neural network in step S1 comprises convolutional layers and pooling layers, the convolutional layers being mainly used to compute feature maps and the pooling layers being mainly used to reduce the size of the feature maps while keeping their rotation and translation properties, specifically as follows:
when the feature maps reach the designed size and number of layers, the two-dimensional feature maps are flattened in order into a one-dimensional feature vector, which is finally connected and output through a fully connected layer, wherein the operation of a convolutional layer can be expressed as
X^(l,k) = sum_{p=1..n_{l-1}} X^(l-1,p) * W^(l,k,p) + b^(l,k)
where X^(l,k) denotes the k-th feature map output by layer l, n_l denotes the number of feature maps in layer l, and W^(l,k,p) denotes the filter required to map the p-th feature map of layer l-1 to the k-th feature map of layer l, generating each feature map of layer l requiring n_{l-1} filters and one bias;
the pooling layers use max pooling, the size of a feature map after max pooling being reduced to 1/step of the original according to the stride step, and max pooling can be expressed as
X^(l+1,k)(m, n) = max over 0 <= i, j < s of X^(l,k)(m*step + i, n*step + j)
where X^(l+1,k)(m, n) is the value at coordinate (m, n) of the k-th feature map output by layer l+1, s is the size of the pooling kernel, and step is the stride with which the pooling kernel moves; both s and step are set to 2 in the present invention.
4. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: the finger key points in step S3 are the two endpoints of the lower-end crease segment of each finger knuckle in the image, which are labeled as key points; each finger has 3 such knuckle crease segments, so 6 finger region key points can be located on each finger.
5. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: according to the output of the second-layer convolutional neural network, the rotation angle of each finger region is estimated, each finger is corrected according to the estimated rotation angle, and the corrected image set is used as a new training sample.
6. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: the key points in step S4 are the midpoint of the lower-end crease segment of the base segment of the finger and the fingertip point farthest from that midpoint within the corresponding finger region, these 2 points serving as the 2 key points of the finger; starting from the fingertip, the finger segments are defined in turn as the tip segment, the middle segment and the base (lower) segment.
7. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: for the output of the third-layer convolutional neural network in step S4, each finger image is rotated according to the rotation angle of that finger from the image correction step, and the rotated finger images are combined into a finger region image and collected as a new training sample.
8. The palm key point positioning method based on a convolutional neural network according to claim 1, characterized in that: the palm key points in step S5 are defined as GapB, GapC and GapD respectively; GapB is the key point between the index finger and the middle finger, GapC is the key point between the middle finger and the ring finger, and GapD is the key point between the ring finger and the little finger.
CN201810363953.2A 2018-04-22 2018-04-22 Palm key point positioning method based on convolutional neural network Active CN108537203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810363953.2A CN108537203B (en) 2018-04-22 2018-04-22 Palm key point positioning method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810363953.2A CN108537203B (en) 2018-04-22 2018-04-22 Palm key point positioning method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN108537203A true CN108537203A (en) 2018-09-14
CN108537203B CN108537203B (en) 2020-04-21

Family

ID=63478058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810363953.2A Active CN108537203B (en) 2018-04-22 2018-04-22 Palm key point positioning method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN108537203B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670471A (en) * 2018-12-28 2019-04-23 广州市久邦数码科技有限公司 A kind of Palmprint feature extraction and palmistry recognition methods
CN109840592A (en) * 2018-12-24 2019-06-04 梦多科技有限公司 A kind of method of Fast Labeling training data in machine learning
CN109905593A (en) * 2018-11-06 2019-06-18 华为技术有限公司 A kind of image processing method and device
CN109903749A (en) * 2019-02-26 2019-06-18 天津大学 The sound identification method of robust is carried out based on key point coding and convolutional neural networks
CN110349096A (en) * 2019-06-14 2019-10-18 平安科技(深圳)有限公司 Bearing calibration, device, equipment and the storage medium of palm image
CN110728232A (en) * 2019-10-10 2020-01-24 清华大学深圳国际研究生院 Hand region-of-interest acquisition method and hand pattern recognition method
CN111339932A (en) * 2020-02-25 2020-06-26 南昌航空大学 Palm print image preprocessing method and system
WO2020168759A1 (en) * 2019-02-20 2020-08-27 平安科技(深圳)有限公司 Palmprint recognition method and apparatus, computer device and storage medium
CN112156451A (en) * 2020-09-22 2021-01-01 歌尔科技有限公司 Handle and size adjusting method, size adjusting system and size adjusting device thereof
CN112232157A (en) * 2020-09-30 2021-01-15 墨奇科技(北京)有限公司 Fingerprint area detection method, device, equipment and storage medium
CN113705344A (en) * 2021-07-21 2021-11-26 西安交通大学 Palm print recognition method and device based on full palm, terminal equipment and storage medium
CN113780201A (en) * 2021-09-15 2021-12-10 墨奇科技(北京)有限公司 Hand image processing method and device, equipment and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130229270A1 (en) * 2012-03-02 2013-09-05 Seven Networks, Inc. Providing data to a mobile application accessible at a mobile device via different network connections without interruption
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
CN107103613A (en) * 2017-03-28 2017-08-29 深圳市未来媒体技术研究院 A kind of three-dimension gesture Attitude estimation method

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905593A (en) * 2018-11-06 2019-06-18 华为技术有限公司 A kind of image processing method and device
US11917288B2 (en) 2018-11-06 2024-02-27 Huawei Technologies Co., Ltd. Image processing method and apparatus
CN109905593B (en) * 2018-11-06 2021-10-15 华为技术有限公司 Image processing method and device
CN109840592A (en) * 2018-12-24 2019-06-04 梦多科技有限公司 A kind of method of Fast Labeling training data in machine learning
CN109840592B (en) * 2018-12-24 2019-10-18 梦多科技有限公司 A kind of method of Fast Labeling training data in machine learning
CN109670471A (en) * 2018-12-28 2019-04-23 广州市久邦数码科技有限公司 A kind of Palmprint feature extraction and palmistry recognition methods
WO2020168759A1 (en) * 2019-02-20 2020-08-27 平安科技(深圳)有限公司 Palmprint recognition method and apparatus, computer device and storage medium
CN109903749A (en) * 2019-02-26 2019-06-18 天津大学 The sound identification method of robust is carried out based on key point coding and convolutional neural networks
CN110349096A (en) * 2019-06-14 2019-10-18 平安科技(深圳)有限公司 Bearing calibration, device, equipment and the storage medium of palm image
CN110728232A (en) * 2019-10-10 2020-01-24 清华大学深圳国际研究生院 Hand region-of-interest acquisition method and hand pattern recognition method
CN111339932A (en) * 2020-02-25 2020-06-26 南昌航空大学 Palm print image preprocessing method and system
CN112156451A (en) * 2020-09-22 2021-01-01 歌尔科技有限公司 Handle and size adjusting method, size adjusting system and size adjusting device thereof
CN112156451B (en) * 2020-09-22 2022-07-22 歌尔科技有限公司 Handle and size adjusting method, size adjusting system and size adjusting device thereof
CN112232157A (en) * 2020-09-30 2021-01-15 墨奇科技(北京)有限公司 Fingerprint area detection method, device, equipment and storage medium
CN112232157B (en) * 2020-09-30 2022-03-18 墨奇科技(北京)有限公司 Fingerprint area detection method, device, equipment and storage medium
CN113705344A (en) * 2021-07-21 2021-11-26 西安交通大学 Palm print recognition method and device based on full palm, terminal equipment and storage medium
CN113780201A (en) * 2021-09-15 2021-12-10 墨奇科技(北京)有限公司 Hand image processing method and device, equipment and medium
CN113780201B (en) * 2021-09-15 2022-06-10 墨奇科技(北京)有限公司 Hand image processing method and device, equipment and medium

Also Published As

Publication number Publication date
CN108537203B (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN108537203A (en) A kind of palm key independent positioning method based on convolutional neural networks
AU2015317344B2 (en) Mobility empowered biometric appliance a tool for real-time verification of identity through fingerprints
CN109308459B (en) Gesture estimation method based on finger attention model and key point topology model
Matsuda et al. Finger-vein authentication based on deformation-tolerant feature-point matching
CN103310196B (en) The finger vein identification method of area-of-interest and direction element
CN112950651A (en) Automatic delineation method of mediastinal lymph drainage area based on deep learning network
CN107273844A (en) Vena metacarpea recognizes matching process and device
CN107169479A (en) Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication
CN106384126A (en) Clothes pattern identification method based on contour curvature feature points and support vector machine
CN107862249A (en) A kind of bifurcated palm grain identification method and device
CN113936307B (en) Vein image recognition method and device based on thin film sensor
CN108334875A (en) Vena characteristic extracting method based on adaptive multi-thresholding
CN112699845A (en) Online non-contact palm vein region-of-interest extraction method
CN105488512A (en) Sift feature matching and shape context based test paper inspection method
CN106971131A (en) A kind of gesture identification method based on center
CN102306415A (en) Portable valuable file identification device
Chen et al. Method on water level ruler reading recognition based on image processing
CN116645705A (en) Near-infrared Palm Vein ROI Extraction Method and System Based on Lightweight Network
CN113706514B (en) Focus positioning method, device, equipment and storage medium based on template image
CN107016414A (en) A kind of recognition methods of footprint
CN104036494B (en) A kind of rapid matching computation method for fruit image
CN109410233A (en) A kind of accurate extracting method of high-definition picture road of edge feature constraint
CN116051638A (en) Blood vessel positioning device and method based on multi-source heterogeneous information fusion technology
CN114882539B (en) Vein image ROI extraction method and device
CN116524549A (en) Method for positioning key points and ROI (region of interest) of back or palm vein image based on improved UNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder
Address after: 510665 17 / F, building 3, Yunsheng Science Park, No. 11, puyuzhong Road, Huangpu District, Guangzhou City, Guangdong Province
Patentee after: GUANGZHOU MELUX INFORMATION TECHNOLOGY Co.,Ltd.
Address before: 510665 5th floor, building 5, No.8, science Avenue, Science City, Guangzhou high tech Industrial Development Zone, Guangdong Province
Patentee before: GUANGZHOU MELUX INFORMATION TECHNOLOGY Co.,Ltd.
PP01 Preservation of patent right
Effective date of registration: 20231120
Granted publication date: 20200421
PD01 Discharge of preservation of patent
Date of cancellation: 20231219
Granted publication date: 20200421