WO2020155764A1 - Palmprint extraction method and device, storage medium, and server - Google Patents

Palmprint extraction method and device, storage medium, and server (掌纹提取方法、装置及存储介质、服务器)

Info

Publication number
WO2020155764A1
WO2020155764A1 (application PCT/CN2019/117915)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
corrected
palmprint
value
Prior art date
Application number
PCT/CN2019/117915
Other languages
English (en)
French (fr)
Inventor
惠慧
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020155764A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing

Description (translated from Chinese)

  • This application claims priority to the Chinese patent application No. 201910087378.2, filed with the Chinese Patent Office on January 29, 2019 and entitled "掌纹提取方法、装置及存储介质、服务器" (palmprint extraction method and device, storage medium, and server), the entire contents of which are incorporated herein by reference. This application relates to the technical field of biometrics and palmprint recognition, and in particular to a palmprint extraction method and device, a storage medium, and a server.
  • Palmprints are the skin texture on the surface of the palm. In recent years, industry and academia have worked continuously to improve identity verification so as to meet the strict requirements of fields such as access control, aviation security, and electronic banking; biometric methods are attracting increasing attention, and palmprint recognition is a highly representative one, offering high distinctiveness, strong robustness, and user friendliness. For various reasons, the palmprints of a hand are divided into normal patterns and abnormal patterns, and abnormal patterns may appear as "十" (cross) shaped, "井" (well) shaped, or "米" (rice-character) shaped lines. Palmprint features are constant, permanent, and unique to each human individual. Existing palmprint extraction methods, however, are limited mainly by imaging factors: they are strongly affected by illumination conditions, i.e., they have poor resistance to lighting, and they also work poorly for people whose palmprints are not clear.
  • To overcome the above problems, in particular the poor resistance to illumination and to people with unclear palmprints during palmprint extraction, an embodiment of the present application provides a palmprint extraction method, including: obtaining a palm image and preprocessing the palm image to obtain a palmprint image to be corrected; calculating a curvature value for each characteristic pixel in the palmprint image to be corrected, and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixels to obtain a corrected image; and performing binarization processing on the corrected image to obtain a palmprint image.
  • An embodiment of the present application also provides a palmprint extraction device, including: a to-be-corrected palmprint image acquisition module, configured to obtain a palm image and preprocess the palm image to obtain a palmprint image to be corrected; a corrected image obtaining module, configured to calculate a curvature value for each characteristic pixel in the palmprint image to be corrected and to correct the palmprint image to be corrected based on the curvature values of the characteristic pixels to obtain a corrected image; and a binarization processing module, configured to perform binarization processing on the corrected image to obtain a palmprint image.
  • Optionally, the corrected image obtaining module includes: a cutting unit, configured to cut the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer; and a curvature calculation unit, configured to calculate the curvature value of each pixel on each cutting line and correct the palmprint image to be corrected based on those curvature values to obtain a corrected image, where each pixel on a cutting line is a characteristic pixel.
  • the embodiment of the present application also provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the program is executed by a processor, the palmprint extraction method described in any technical solution is implemented.
  • An embodiment of the present application also provides a server, including: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to perform the steps of the palmprint extraction method according to any of the technical solutions.
  • Compared with the prior art, the palmprint extraction method of the embodiments of the present application makes the palmprint more prominent, improves the palmprint image's resistance to illumination and its robustness for people with unclear palmprints, and increases the success rate of palmprint line extraction.
  • FIG. 1 is a schematic flowchart of an implementation in a typical embodiment of the palmprint extraction method of this application;
  • FIG. 2 is a schematic diagram of a palmprint image to be corrected in a typical embodiment of the palmprint extraction method of this application;
  • FIG. 3 is a schematic diagram of a palmprint image in a typical embodiment of the palmprint extraction method of this application;
  • FIG. 4 is a schematic structural diagram of a typical embodiment of the palmprint extraction device of this application.
  • Figure 5 is a schematic structural diagram of an embodiment of the application server.
  • The palmprint extraction provided in the embodiments of this application is applied to a scenario including a server and a client, where the server and the client are connected through a network. The client is used to collect palm images and send the collected palm images to the server; the client may specifically be, but is not limited to, a video camera, a still camera, a scanner, or another palm image acquisition device with a photographing function. The server is used to extract palm lines from the palm images, and may be implemented as an independent server or as a server cluster composed of multiple servers. The image processing method provided in the embodiments of this application is applied to the server.
  • the palmprint extraction method provided by the embodiment of the present application in one of the implementation manners, as shown in FIG. 1, includes: S100, S200, and S300.
  • S100: Obtain a palm image, and preprocess the palm image to obtain a palmprint image to be corrected;
  • S200: Calculate the curvature value of each characteristic pixel in the palmprint image to be corrected, and correct the palmprint image to be corrected based on the curvature values of the characteristic pixels to obtain a corrected image;
  • S300: Binarize the corrected image to obtain a palmprint image.
  • In the embodiments provided in this application, a palm image is obtained. The palm image may be an image of the entire palm or a palm image that does not include the fingers; since this application is mainly concerned with extracting palmprints, the palmprints of the part of the palm that excludes the fingers may be extracted, or the palmprints of the entire palm may be extracted. To improve the recognition rate of palmprint lines in the palm image and to prevent noise from interfering with palmprint extraction, the palm image is preprocessed, and the preprocessed image is the palmprint image to be corrected. The preprocessing makes it easier to extract the palmprints later, i.e., it makes the palmprints of the palm more prominent, or removes noise in the palm image so that the noise does not affect palmprint extraction.
  • For example, the collected color palm image is first converted to grayscale: a palm image in RGB mode (three-element red, green, blue mode) is read, and a grayscale conversion function such as rgb2gray() is used to convert the image to a grayscale image. A gray-level histogram of the image is then computed (for example with a function such as imhist()); the horizontal axis of the histogram represents the gray-value range of the grayscale image, from 0 to 255, and the vertical axis represents how often each gray value appears in the image. Binarizing a grayscale image means selecting an appropriate threshold for the 256 gray levels (gray values 0 to 255) and resetting the gray value of every pixel to either 0 or 255, so that the whole image is rendered in only black and white. Because a palm grayscale image has a single foreground (only the palm) and a simple background, its gray-level histogram shows a pronounced bimodal shape; the valley value T between the two peaks is therefore taken as the threshold for binarization segmentation: points whose gray value is greater than this threshold are reset to 255 (white) and points below it are reset to 0 (black), converting the grayscale image into a binary image in which the background is black and the palm in the foreground is white. The boundary of the binary image obtained in this way is often not smooth, so the binarized image is smoothed to flatten the sharp "burrs" at the image edges; since the collected palm image is simple in content with few details, a simple median filter with a 3*3 square window is used, which removes noise while keeping edges from being blurred. A boundary (contour) tracking algorithm is then used to extract the contour of the binary image, i.e., the palm contour, and the image region enclosed by the palm contour after the above processing is taken as the palmprint image to be corrected (as shown in FIG. 2).
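  • The passage above references MATLAB-style helpers (rgb2gray, imhist). Purely as an illustrative aid, the following is a minimal Python/OpenCV sketch of the same preprocessing chain (grayscale, valley threshold between the two histogram peaks, 3*3 median filter, contour extraction); the function name, the histogram smoothing, and the way the valley is located are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def preprocess_palm(path):
    """Grayscale -> valley threshold -> 3x3 median filter -> palm contour."""
    bgr = cv2.imread(path)                       # RGB-mode palm image (OpenCV loads as BGR)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Gray-level histogram (256 bins); approximate the valley between the two
    # highest peaks and use it as the binarization threshold T.
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = np.convolve(hist, np.ones(5) / 5, mode="same")   # light smoothing
    lo, hi = sorted(np.argsort(hist)[-2:])                  # indices of the two peaks
    T = lo + int(np.argmin(hist[lo:hi + 1]))

    _, binary = cv2.threshold(gray, T, 255, cv2.THRESH_BINARY)

    # 3x3 square median filter to smooth the "burrs" on the binary boundary
    binary = cv2.medianBlur(binary, 3)

    # Boundary tracking: keep the largest external contour as the palm outline
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    palm = max(contours, key=cv2.contourArea)

    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [palm], -1, 255, thickness=cv2.FILLED)
    to_correct = cv2.bitwise_and(gray, gray, mask=mask)      # region enclosed by the palm contour
    return to_correct, palm
```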
  • Building on the above, the detailed preprocessing of the palm image is as follows. The pixels in the palm image are traversed according to a preset traversal order to obtain the RGB component values of each pixel, where R, G, and B represent the red, green, and blue channels respectively. The preset traversal order may, for example, start from the pixel in the upper-left corner of the palm image and proceed row by row from top to bottom and left to right, or traverse from the center line of the palm image outward to both sides simultaneously; other traversal orders are also possible and are not restricted here.
  • Based on the RGB component values of the pixels, the palm image is converted to grayscale according to formula (1) to obtain a grayed image:
  • g(x, y) = k1*R(x, y) + k2*G(x, y) + k3*B(x, y)  (1)
  • where x and y are the abscissa and ordinate of each pixel in the palm image, g(x, y) is the gray value of pixel (x, y) after grayscale processing, R(x, y), G(x, y), and B(x, y) are the color components of the R, G, and B channels of pixel (x, y), and k1, k2, and k3 are the proportion parameters corresponding to the R, G, and B channels respectively. Formula (1) is the grayscale rule referred to above.
  • In this embodiment, to extract the information content of the palm image accurately, the palm image first needs to be converted to grayscale. The parameter values of k1, k2, k3, and σ can be set according to the needs of the actual application and are not restricted here; by adjusting the value ranges of k1, k2, and k3, the proportions of the R, G, and B channels can be adjusted separately.
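  • As an illustration of formula (1), a short NumPy sketch follows. The patent does not fix the proportion parameters, so the common luma weights 0.299, 0.587, and 0.114 are used here only as assumed example values for k1, k2, and k3.

```python
import numpy as np

def to_gray(rgb, k1=0.299, k2=0.587, k3=0.114):
    """Formula (1): g(x, y) = k1*R(x, y) + k2*G(x, y) + k3*B(x, y).
    `rgb` is an H x W x 3 array in R, G, B channel order; the k values
    here are assumed example weights, not values from the patent."""
    rgb = rgb.astype(np.float64)
    g = k1 * rgb[..., 0] + k2 * rgb[..., 1] + k3 * rgb[..., 2]
    return np.clip(g, 0, 255).astype(np.uint8)
```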
  • The RGB model is a commonly used way of expressing color information: it quantitatively represents colors by the brightness of the three primary colors red, green, and blue. This model is also called the additive color-mixing model, in which light of the three RGB colors is superimposed to mix colors, so it is suitable for light-emitting displays such as monitors. Grayscale conversion means that, in the RGB model, when R = G = B the color represents a single gray shade and the value of R = G = B is called the gray value; a grayscale image therefore needs only one byte per pixel to store the gray value, whose range is 0 to 255. In this embodiment the gray value is calculated by the weighted sum of formula (1); in other embodiments the component method, the maximum-value method, or the average-value method may also be used to convert the image to grayscale, and no restriction is imposed here.
  • Next, grayscale inversion is performed on the grayed image to obtain a gray-inverted palm image. Each pixel in the grayed image is traversed and its pixel value is obtained, and the value range of the pixels is inverted from [0, 255] to [255, 0]; that is, a pixel value of 0 is changed to 255 and a pixel value of 255 is changed to 0, so that originally white pixels in the grayed image become black and originally black pixels become white. The gray-inverted palm image obtained in this way is the palmprint image to be corrected. To simplify calculations in different environments, the pixel value range may further be compressed from [0, 255] to [0, 1] by dividing each pixel value by 255; for example, a pixel with value 1 becomes 1/255, a pixel with value 254 becomes 254/255, and so on for the other pixels. In a MATLAB environment, for example, the imadjust function can be called directly to invert the grayed image, transforming the pixel value interval from [0, 255] to [255, 0] and then compressing it to [1, 0], which produces a palm image whose gray levels are the reverse of the grayed image, i.e., the palmprint image to be corrected. In this embodiment, the palm image is traversed pixel by pixel to obtain the RGB component values, the image is converted to grayscale with formula (1) so that pixel values lie between 0 and 255 (reducing the amount of raw image data and improving the efficiency of subsequent calculations), and the grayscale image is then inverted so that the image is displayed more clearly and the accuracy of subsequent palmprint extraction is improved.
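  • A minimal NumPy sketch of the gray-scale inversion and the optional compression to [0, 1] described above; the function name is illustrative, and the patent itself mentions MATLAB's imadjust for this step.

```python
import numpy as np

def invert_and_compress(gray):
    """Gray-scale inversion (255 -> 0, 0 -> 255), then optional compression
    of the value range from [0, 255] to [0, 1] by dividing by 255."""
    inverted = 255 - gray.astype(np.uint8)        # to-be-corrected palmprint image
    compressed = inverted.astype(np.float64) / 255.0
    return inverted, compressed
```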
  • After the palmprint image to be corrected is obtained, the curvature value of each pixel in it is calculated in order to improve the success rate of palmprint line extraction, and the palmprint image to be corrected is then corrected based on the curvature values of the pixels to obtain a corrected image. In the specific curvature calculation, the curvature is computed along a preset direction so that the calculation has a common reference; this makes it convenient to screen pixels against that reference to find pixels with suitable values, or to correct the values of other pixels in the palmprint image to be corrected based on the curvature values of some pixels. This makes the palmprint more prominent, improves the palmprint image's resistance to illumination and its robustness for people with unclear palmprints, and yields a corrected image from which the palm lines are easier to extract, further improving the success rate of palmprint line extraction. Furthermore, to improve the success rate of palmprint extraction, the embodiments provided in this application also binarize the corrected image; after this step the palm lines in the image can be clearly distinguished from the other parts of the palm, giving the palm lines shown in FIG. 3. Comparing FIG. 2 and FIG. 3 shows that, after the pixels of the palmprint image to be corrected have been screened and corrected using the pixel curvature values, the palmprint in FIG. 3 is more prominent than in FIG. 2, which improves the palmprint image's resistance to illumination and its robustness for people with unclear palmprints.
  • Binarization sets the pixel value of every pixel in the image to either 0 or 255, so that the whole image shows a clear black-and-white visual effect. Specifically, every pixel in the corrected image is scanned: if its pixel value is smaller than a preset pixel threshold, the pixel value is set to 0 and the pixel becomes black; if its pixel value is greater than or equal to the preset pixel threshold, the pixel value is set to 255 and the pixel becomes white, yielding a binary image.
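  • A minimal sketch of the thresholding just described; the threshold value 128 is a placeholder assumption, since the patent leaves the preset pixel threshold to the application.

```python
import numpy as np

def binarize(image, threshold=128):
    """Set pixels below `threshold` to 0 (black) and the rest to 255 (white).
    The threshold value is application-dependent; 128 is only a placeholder."""
    return np.where(image < threshold, 0, 255).astype(np.uint8)
```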
  • Optionally, the calculating of the curvature value of each characteristic pixel in the palmprint image to be corrected and the correcting of the palmprint image to be corrected based on the curvature values of the characteristic pixels to obtain a corrected image includes: cutting the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer; and calculating the curvature value of each pixel on each cutting line and correcting the palmprint image to be corrected based on those curvature values to obtain the corrected image, where each pixel on a cutting line is a characteristic pixel.
  • In this way, for each cutting line obtained by cutting the palmprint image to be corrected, the embodiment of the present application can calculate the curvature value of every pixel on that cutting line and use the curvature value to judge whether the pixel belongs to the palmprint; this makes it convenient to perform the screening and correction of pixels described above based on the curvature values, and thus to correct the other pixels in the palmprint image to be corrected and obtain the corrected image.
  • The preset cutting direction may be horizontal, vertical, or any other direction, and can be set according to the needs of the actual application without restriction. The preset pixel interval means that a preset number of pixels is used as the spacing between cutting lines; it may be 1 pixel or 5 pixels, and can likewise be set according to the needs of the actual application without restriction. As a concrete example (see the sketch after this paragraph), suppose the acquired palm image is rectangular, the preset cutting direction is vertical, and the preset pixel interval is 5 pixels; if each row of the image contains 2000 pixels, cutting the image yields 399 vertical cutting lines.
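  • A small sketch reproducing the counting in this example: sampling vertical cutting-line columns every 5 pixels across a 2000-pixel-wide row yields 399 lines. The exact start offset and the exclusion of the border columns are assumptions.

```python
def cutting_line_columns(row_width, interval):
    """Columns sampled every `interval` pixels, excluding the two image borders.
    For row_width=2000 and interval=5 this yields 399 vertical cutting lines,
    matching the example above (the exact start offset is an assumption)."""
    return list(range(interval, row_width - 1, interval))

print(len(cutting_line_columns(2000, 5)))   # -> 399
```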
  • After the palmprint image to be corrected has been cut along the preset direction, the curvature value of every pixel on each cutting line is calculated. In the embodiments provided in this application, the curvature value is calculated by formula (2):
  • K(z) = P_f''(z) / (1 + (P_f'(z))^2)^(3/2)  (2)
  • where z is a pixel on the cutting line, K(z) is the curvature value of pixel z, P_f(z) is the pixel value of pixel z, P_f''(z) is the second derivative of P_f(z), and P_f'(z) is the first derivative of P_f(z).
  • For each cutting line obtained by cutting the palmprint image to be corrected, the curvature value of every pixel on the line is calculated according to formula (2), and the curvature value is used to judge whether the pixel belongs to the palmprint. This makes it convenient to carry out the screening and correction of pixels described above based on the curvature values, and thus to correct the other pixels in the palmprint image to be corrected and obtain the corrected image; how the palmprint image to be corrected is corrected based on the pixel values to obtain the corrected image is described in detail below and is not repeated here.
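  • In the published document, formula (2) is only listed by its first- and second-derivative terms; the sketch below assumes the standard curvature of the one-dimensional intensity profile along a cutting line and computes it with NumPy finite differences. Function names and the vertical cutting direction are illustrative.

```python
import numpy as np

def curvature_along_column(image, col):
    """Curvature of the intensity profile P_f(z) along one vertical cutting line:
    K(z) = P_f''(z) / (1 + P_f'(z)**2) ** 1.5  (standard curve curvature,
    assumed here because the published formula (2) is only listed by its terms)."""
    profile = image[:, col].astype(np.float64)
    d1 = np.gradient(profile)          # first derivative  P_f'
    d2 = np.gradient(d1)               # second derivative P_f''
    return d2 / (1.0 + d1 ** 2) ** 1.5

def curvature_map(image, interval=5):
    """Curvature values for every pixel on every vertical cutting line."""
    cols = range(interval, image.shape[1] - 1, interval)
    return {c: curvature_along_column(image, c) for c in cols}
```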
  • Optionally, the correcting of the palmprint image to be corrected based on the curvature values to obtain the corrected image includes: determining the pixels whose curvature value is greater than zero as evaluation pixels, and determining the region formed by consecutive evaluation pixels as a local palmprint region; obtaining the width of the local palmprint region, and multiplying the width by the curvature value of each evaluation pixel in the local palmprint region to obtain an evaluation score for each evaluation pixel in the region; and adjusting the pixel value of each evaluation pixel according to its evaluation score to obtain a corrected pixel value for each evaluation pixel, and correcting the palmprint image to be corrected based on the corrected pixel values to obtain the corrected image.
  • In combination with the foregoing, if the curvature value of a pixel on a cutting line is greater than 0, the pixel is a pixel on the palmprint and is used as an evaluation pixel; if the curvature value of a pixel is less than or equal to 0, the pixel does not belong to the palmprint. A local palmprint region is made up of consecutive pixels whose curvature values are greater than 0, i.e., of consecutive evaluation pixels. Because the local palmprint region consists of such consecutive pixels, its width can be taken as the number of consecutive pixels with curvature values greater than 0; for example, if there are 5 consecutive pixels with curvature values greater than 0, the width of that local palmprint region is 5. For each evaluation pixel, the width of the local palmprint region containing it is multiplied by the curvature value of the evaluation pixel, and the product is used as the evaluation score of that evaluation pixel.
  • Specifically, the evaluation score of an evaluation pixel is calculated by formula (3):
  • S_r(z_i) = k(z_i) * W_r  (3)
  • where z_i is the i-th evaluation pixel, i is a positive integer greater than 0, S_r(z_i) is the evaluation score of the i-th evaluation pixel, k(z_i) is the curvature value of the i-th evaluation pixel, and W_r is the width of the local palmprint region containing z_i.
  • In this embodiment, for each evaluation pixel, its original pixel value and its corresponding evaluation score are added, and the sum is used as the corrected pixel value of that evaluation pixel. The pixel value of each evaluation pixel is thus adjusted to its corrected pixel value, and the value of the same pixel in the palmprint image to be corrected is updated with the corrected pixel value, so that the points in the palmprint region become more distinct, the recognizability of the palmprint region is improved, palmprint and non-palmprint regions can be distinguished better, and the corrected image described above is obtained.
  • Specifically, the corrected pixel value of an evaluation pixel is calculated by formula (4):
  • V_a'(x, y) = V_a(x, y) + S_r(z_a)  (4)
  • where x and y are the abscissa and ordinate of the a-th evaluation pixel in the palmprint image to be corrected, a is a positive integer greater than 0, z_a is the a-th evaluation pixel, V_a'(x, y) is the corrected pixel value of the a-th evaluation pixel, V_a(x, y) is the pixel value of the a-th evaluation pixel, and S_r(z_a) is the evaluation score of the a-th evaluation pixel.
  • It should be noted that if the calculated corrected pixel value of an evaluation pixel exceeds the maximum pixel value, the corrected pixel value is set to the maximum pixel value.
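  • A sketch combining formulas (3) and (4) along one cutting line: runs of pixels with positive curvature form a local palmprint region, the region width times the curvature gives the evaluation score, and the score is added to the original pixel value and clipped at 255. All names are illustrative.

```python
import numpy as np

def correct_column(values, curvatures):
    """Apply formulas (3) and (4) along one cutting line.
    values:     original pixel values on the cutting line (1-D array)
    curvatures: K(z) for the same pixels
    Returns the corrected pixel values, clipped at the maximum value 255."""
    corrected = values.astype(np.float64)
    n = len(values)
    i = 0
    while i < n:
        if curvatures[i] > 0:                      # start of a local palmprint region
            j = i
            while j < n and curvatures[j] > 0:
                j += 1
            width = j - i                          # W_r: number of consecutive evaluation pixels
            for a in range(i, j):
                score = curvatures[a] * width      # formula (3): S_r(z_a) = k(z_a) * W_r
                corrected[a] = values[a] + score   # formula (4): V' = V + S_r
            i = j
        else:
            i += 1
    return np.clip(corrected, 0, 255)
```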
  • Optionally, after the corrected image is obtained, the method further includes: for each pixel in the corrected image, obtaining the pixel value of a first adjacent pixel that is on one side of the pixel and adjacent to it, and the pixel value of a first interval pixel that is separated from the pixel by one pixel, where the first interval pixel and the first adjacent pixel are located on the same side of the pixel; obtaining the pixel value of a second adjacent pixel that is on the other side of the pixel and adjacent to it, and the pixel value of a second interval pixel that is separated from the pixel by one pixel, where the second interval pixel and the second adjacent pixel are located on the same side of the pixel and that side is opposite to the first side; and correcting the pixel value of the pixel according to the pixel values on the one side and the other side.
  • In this embodiment, the pixel value of each pixel is corrected according to formula (5):
  • C(x, y) = min{ max(V(x+1, y), V(x+2, y)), max(V(x-1, y), V(x-2, y)) }  (5)
  • where x and y are the abscissa and ordinate of each pixel in the palm image, V(x, y) is the pixel value of pixel (x, y) in the corrected image, and C(x, y) is the corrected pixel value of pixel (x, y). That is, for a pixel (x, y), the two adjacent pixels (x-1, y) and (x-2, y) on its left and the two adjacent pixels (x+1, y) and (x+2, y) on its right are selected; if (x, y) and the pixels on both sides have the same value, no processing is performed; otherwise the larger of the two pixel values on the left and the larger of the two pixel values on the right are taken, and the smaller of these two maxima is used to correct pixel (x, y).
  • It should be noted that if the pixel lies on the image boundary, only the pixel values on one side are compared. For example, if the pixel lies on the left boundary of the image, the larger of the pixel values of the two adjacent pixels to its right is used to correct the pixel value; if the pixel lies on the right boundary, the larger of the pixel values of the two adjacent pixels to its left is used to correct the pixel value. In this embodiment, if the pixel value of pixel (x, y) is small while the values on both sides are large, formula (5) increases the value of pixel (x, y) so that the pixel and the pixels on both sides can be connected to form a line; if the pixel value of pixel (x, y) is large while the values on both sides are small, the pixel is considered to be noise, and to prevent this noise from interfering with palmprint extraction, formula (5) reduces the value of pixel (x, y). This eliminates noise in the palmprint image, makes the palmprint region more distinct, improves the discriminability of the palmprint, and also improves the accuracy of subsequent palmprint extraction.
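  • A sketch of formula (5) applied along the row direction, with the one-sided rule for boundary pixels described above; the coordinate convention (x as the column index) is an assumption.

```python
import numpy as np

def neighbour_correct(img):
    """Formula (5): C(x, y) = min( max(V(x+1, y), V(x+2, y)),
                                   max(V(x-1, y), V(x-2, y)) ),
    applied along the row direction.  Border columns fall back to the
    one-sided maximum, as described in the text."""
    v = img.astype(np.float64)
    out = v.copy()
    h, w = v.shape
    for x in range(w):
        right = np.max(v[:, x + 1:x + 3], axis=1) if x + 1 < w else None
        left = np.max(v[:, max(x - 2, 0):x], axis=1) if x - 1 >= 0 else None
        if right is not None and left is not None:
            out[:, x] = np.minimum(right, left)
        elif right is not None:
            out[:, x] = right
        elif left is not None:
            out[:, x] = left
    return out.astype(np.uint8)
```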
  • Optionally, the preset cutting direction includes at least two directions, and the binarizing of the corrected image to obtain a palmprint image includes: using the corrected image obtained for each cutting direction as an image to be synthesized; comparing the pixel values of the same pixel position across the images to be synthesized, and determining the maximum pixel value as the synthesized pixel value of that pixel in the synthesized image; obtaining the synthesized image from the synthesized pixel values of all pixels; and performing binarization processing on the synthesized image to obtain the palmprint image.
  • In one implementation of the embodiments provided in this application, the preset cutting direction includes at least two directions, i.e., the palmprint image to be corrected can be processed based on two or more different cutting directions to obtain corrected images. The preset cutting directions may specifically include the four directions 45°, 90°, 135°, and 180°, but are not limited to these; other directions may also be included and can be set according to the needs of the actual application without restriction.
  • For each specific cutting direction, the corrected image obtained by cutting along that preset direction is used as an image to be synthesized. For example, if the preset cutting directions include the four directions 45°, 90°, 135°, and 180°, the corrected image obtained with the 45° cutting direction is one image to be synthesized, the corrected image obtained with the 90° cutting direction is another image to be synthesized, and so on for the other directions, giving four images to be synthesized in total. The pixel values of the pixels at the same position in the images to be synthesized are then compared, and the largest pixel value is selected as the pixel value of the pixel at the corresponding position of the synthesized image, yielding the synthesized image.
  • On the basis of the synthesized image, so that the pixel values in the image take only the values 0 or 255, i.e., so that the image shows only black and white, the synthesized image is further binarized to obtain the palmprint image. Specifically, every pixel of the synthesized image is scanned: if its pixel value is smaller than the preset pixel threshold, the pixel value is set to 0 and the pixel becomes black; if its pixel value is greater than or equal to the preset pixel threshold, the pixel value is set to 255 and the pixel becomes white, and the palmprint image is obtained. In this embodiment, different images to be synthesized are obtained for different cutting directions, the pixel values at each position are compared across the images to be synthesized, the maximum value at each position is taken as the pixel value of the corresponding position in the synthesized image, and the synthesized image is finally binarized to obtain the palmprint image. Because extracting the palmprint from a corrected image obtained with only one cutting direction may introduce errors, synthesizing the images from multiple cutting directions and extracting the palm lines from the binarized synthesized image can effectively reduce errors, achieve accurate extraction of the palmprint lines, and improve the accuracy of palmprint line extraction.
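  • A sketch of the synthesis and binarization steps: the corrected images from the different cutting directions are combined by a per-pixel maximum and the composite is then thresholded. The threshold value is an assumed placeholder.

```python
import numpy as np

def synthesize_and_binarize(corrected_images, threshold=128):
    """Per-pixel maximum over the corrected images obtained with different
    cutting directions, followed by binarization.  `corrected_images` is a
    list of equally sized 2-D arrays; `threshold` is an assumed value."""
    composite = np.maximum.reduce([img.astype(np.uint8) for img in corrected_images])
    palmprint = np.where(composite < threshold, 0, 255).astype(np.uint8)
    return composite, palmprint
```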
  • Optionally, the preprocessing of the palm image includes: performing a Gabor filter transform on the palm image to obtain the palmprint image to be corrected. In this embodiment, to further improve the quality of the acquired palm image, the Gabor filter transform is used to enhance the image, and the processed enhanced image is finally obtained. Specifically, a convolution operation is performed on the palm image with the Gabor filter function, and the enhanced image is obtained from the result of the convolution. The convolution operation uses a convolution kernel to perform a series of operations on each pixel of the palm image; the kernel is a preset matrix template used to operate on the palm image, for example a square grid such as a 3*3 matrix in which every element has a preset weight. When the kernel is applied, its center is placed on the target pixel, the products of each kernel weight and the pixel value it covers are computed and summed, and the result is the new value of the target pixel.
  • the Gabor filter transform belongs to the windowed Fourier transform.
  • the Gabor function can extract the relevant features of the image in different scales and directions in the frequency domain to achieve an image enhancement effect.
  • Specifically, the palm image is processed by Gabor filtering as follows. The palm image is transformed with the Gabor filter of formula (6):
  • g(x, y; λ, θ, ψ, σ, γ) = exp( -(x'^2 + γ^2 * y'^2) / (2σ^2) ) * cos( 2π * x'/λ + ψ )
  • x' = x*cosθ + y*sinθ
  • y' = -x*sinθ + y*cosθ
  • U(x, y) = I(x, y) ⊗ g(x, y; λ, θ, ψ, σ, γ)  (6)
  • where g(x, y; λ, θ, ψ, σ, γ) is the Gabor filter function, x and y are the abscissa and ordinate of a pixel in the palm image, λ is the preset wavelength, θ is the preset direction, ψ is the phase offset, σ is the standard deviation of the Gaussian factor of the Gabor function, γ is the aspect ratio, U(x, y) is the enhanced image, I(x, y) is the palm image, ⊗ denotes the tensor product operation (the convolution of the palm image with the Gabor kernel), and x' and y' are the coordinates of pixel (x, y) of the palm image after rotation by θ.
  • Using the preset wavelength and preset direction, the Gabor filter function of formula (6) is applied to the palm image, filtering out the high-frequency components of the palm image and keeping only the low-frequency part, while in the preset direction the low-frequency components are filtered out and only the high-frequency part is kept; this finally brightens the image, giving the enhanced image obtained after the Gabor filter transform. The preset wavelength λ may be taken as 1, or may be set according to actual needs without restriction. The preset direction θ may take 0 and seven further preset directions (eight directions in total), or other directions may be chosen; the choice can be made according to the needs of the actual application and is not restricted here.
  • In this embodiment, performing the Gabor filter transform on the palm image with formula (6) can quickly brighten the image and achieve an image-enhancement effect, thereby improving the image quality of the palm image and the distinguishability of its texture, so that accurate positioning can be achieved when palm lines are extracted from low-quality palm images collected by low-end palmprint acquisition devices; this improves the accuracy of palmprint extraction and the applicability to different palmprint acquisition devices.
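  • A sketch of the Gabor enhancement of formula (6) using OpenCV's built-in Gabor kernel. The wavelength defaults to 1 as mentioned above and θ sweeps eight evenly spaced directions starting at 0; the kernel size, σ, γ, ψ, and the choice to keep the strongest per-pixel response across directions are assumptions.

```python
import cv2
import numpy as np

def gabor_enhance(gray, wavelength=1.0, n_directions=8,
                  sigma=2.0, gamma=0.5, psi=0.0, ksize=9):
    """Formula (6): convolve the palm image with Gabor kernels
    g(x, y; lambda, theta, psi, sigma, gamma) and keep the strongest
    response per pixel.  sigma, gamma, psi and ksize are assumed values;
    theta sweeps eight evenly spaced preset directions starting at 0."""
    gray = gray.astype(np.float32)
    enhanced = np.zeros_like(gray)
    for i in range(n_directions):
        theta = i * np.pi / n_directions
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    wavelength, gamma, psi, ktype=cv2.CV_32F)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        enhanced = np.maximum(enhanced, response)
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```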
  • Combining the foregoing technical solutions, the embodiments provided in this application can be implemented as follows: the palmprint image to be corrected is obtained by performing the Gabor filter transform on the palm image, the palmprint image to be corrected being the enhanced image produced by the transform; the enhanced image is cut to obtain n cutting lines, the curvature value of every pixel on each cutting line is calculated, pixels with curvature values greater than 0 are taken as evaluation pixels, and the regions formed by consecutive pixels with curvature values greater than zero are taken as local palmprint regions; the evaluation score of each evaluation pixel is calculated as the product of its curvature value and the width of the local palmprint region it belongs to, the pixel values of the evaluation pixels are adjusted with the evaluation scores to obtain corrected pixel values, the pixels of the enhanced image are updated accordingly, and the updated enhanced image is finally binarized to obtain the palmprint image. On the one hand, the Gabor filter transform improves the image quality of the palm image, so that palmprints can be recognized more accurately during extraction, enabling accurate localization of palmprints in low-quality palm images collected by low-end acquisition devices and effectively improving both the extraction accuracy and the applicability to a variety of palmprint acquisition devices; on the other hand, the curvature algorithm can quickly identify the palmprint in the palm image and improve recognition efficiency, and calculating the evaluation scores further distinguishes palmprint regions from non-palmprint regions accurately, further improving the accuracy of palmprint extraction. An end-to-end sketch combining the steps above follows.
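  • An end-to-end driver that chains the sketches above (the function names are those defined in the earlier code blocks, not names from the patent), using a single vertical cutting direction for brevity.

```python
import numpy as np

def extract_palmprint(path, interval=5, threshold=128):
    """Illustrative end-to-end pipeline: preprocess, Gabor-enhance, correct
    the cutting-line pixels by curvature, then binarize.  It reuses the
    helper functions sketched above and uses one vertical cutting direction."""
    to_correct, _ = preprocess_palm(path)           # S100: palmprint image to be corrected
    enhanced = gabor_enhance(to_correct)            # optional Gabor enhancement

    corrected = enhanced.astype(np.float64).copy()  # S200: curvature-based correction
    for col in cutting_line_columns(enhanced.shape[1], interval):
        k = curvature_along_column(enhanced, col)
        corrected[:, col] = correct_column(enhanced[:, col], k)
    corrected = neighbour_correct(corrected)

    return binarize(corrected, threshold)           # S300: binarization -> palmprint image
```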
  • An embodiment of the present application also provides a palmprint extraction device. In one implementation, as shown in FIG. 4, it includes: a to-be-corrected palmprint image acquisition module 100, a corrected image obtaining module 200, and a binarization processing module 300.
  • The to-be-corrected palmprint image acquisition module 100 is configured to obtain a palm image and preprocess the palm image to obtain the palmprint image to be corrected; the corrected image obtaining module 200 is configured to calculate the curvature value of each characteristic pixel in the palmprint image to be corrected and to correct the palmprint image to be corrected based on the curvature values of the characteristic pixels to obtain the corrected image; and the binarization processing module 300 is configured to perform binarization processing on the corrected image to obtain a palmprint image.
  • Further, as shown in FIG. 4, the palmprint extraction device provided in the embodiments of the present application further includes: a cutting unit 210, configured to cut the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer; and a curvature calculation unit 220, configured to calculate the curvature value of each pixel on each cutting line and correct the palmprint image to be corrected based on those curvature values to obtain a corrected image, where each pixel on a cutting line is a characteristic pixel.
  • A local palmprint region determining unit 221 is configured to determine the pixels whose curvature value is greater than zero as evaluation pixels and to determine the region formed by consecutive evaluation pixels as a local palmprint region; an evaluation score obtaining unit 222 is configured to obtain the width of the local palmprint region and to multiply the width by the curvature value of each evaluation pixel in the local palmprint region to obtain the evaluation score of each evaluation pixel in the region; and an adjustment unit 223 is configured to adjust the pixel value of each evaluation pixel according to its evaluation score to obtain the corrected pixel value of each evaluation pixel, and to correct the palmprint image to be corrected based on the corrected pixel values to obtain the corrected image. A first pixel value obtaining unit 230 is configured, for each pixel in the corrected image, to obtain the pixel value of the first adjacent pixel that is on one side of the pixel and adjacent to it and the pixel value of the first interval pixel that is separated from the pixel by one pixel, the first interval pixel and the first adjacent pixel being located on the same side of the pixel; a second pixel value obtaining unit 240 is configured to obtain the pixel value of the second adjacent pixel that is on the other side of the pixel and adjacent to it and the pixel value of the second interval pixel that is separated from the pixel by one pixel, the second interval pixel and the second adjacent pixel being located on the same side of the pixel, that side being opposite to the first side; and a correction unit 250 is configured to correct the pixel value of the pixel according to the pixel values on the one side and the other side.
  • A to-be-synthesized image obtaining unit 310 is configured to use the corrected image obtained for each cutting direction as an image to be synthesized; a comparison unit 320 is configured to compare the pixel values of the same pixel across the images to be synthesized and determine the maximum pixel value as the synthesized pixel value of that pixel in the synthesized image; a synthesized image obtaining unit 330 is configured to obtain the synthesized image from the synthesized pixel values of the pixels; and a binarization processing unit 340 is configured to perform binarization processing on the synthesized image to obtain the palmprint image. A grayscale unit 110 is configured to traverse the palm image, obtain the RGB component values of each pixel, and perform grayscale processing on the RGB component values of each pixel according to the grayscale processing rule to obtain a grayscale image; and a transform unit 120 is configured to invert the grayscale image and process the inverted grayscale image with the filter transform to obtain the palmprint image to be corrected.
  • The palmprint extraction device provided in the embodiments of the present application can implement the foregoing embodiments of the palmprint extraction method; for the specific functions, refer to the description of the method embodiments, which is not repeated here.
  • an embodiment of the present application provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the program is executed by a processor, the palmprint extraction method described in any one of the technical solutions is implemented.
  • The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a storage device includes any medium that stores or transmits information in a form readable by a device (for example, a computer or a mobile phone), and may be a read-only memory, a magnetic disk, an optical disk, or the like.
  • The computer-readable storage medium provided in the embodiments of the present application can implement the palmprint extraction method described above: the palm image is preprocessed so that the palmprint is easier to extract subsequently, i.e., the palmprint is displayed more prominently or the noise in the palm image is removed so that it does not interfere with palmprint extraction, which makes the image clearer and improves the accuracy of subsequent palmprint extraction; screening the pixels of the palmprint image to be corrected by their curvature values further improves the success rate of palmprint line extraction. The remaining steps (grayscale conversion and inversion, curvature-based correction, and binarization) are carried out as described in the method embodiments above and are not repeated here. In addition, in another embodiment, the present application also provides a server, as shown in FIG. 5.
  • The server includes a processor 503, a memory 505, an input unit 507, a display unit 509, and other components. Those skilled in the art can understand that the structural components shown in FIG. 5 do not constitute a limitation on all servers; a server may include more or fewer components than shown, or combine certain components.
  • the memory 505 may be used to store the application program 501 and various functional modules, and the processor 503 runs the application program 501 stored in the memory 505 to execute various functional applications and data processing of the device.
  • the memory 505 may be internal memory or external memory, or include both internal memory and external memory.
  • the internal memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory.
  • External storage can include hard disks, floppy disks, ZIP disks, USB flash drives, magnetic tapes, and the like. The memory disclosed in this application includes, but is not limited to, these types of memory. The memory 505 disclosed in this application is given only as an example and not as a limitation.
  • the input unit 507 is used to receive signal input, as well as personal information and related physical condition information input by the user.
  • the input unit 507 may include a touch panel and other input devices.
  • The touch panel can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, and a joystick.
  • The display unit 509 can be used to display information entered by the user or provided to the user, as well as the various menus of the computer device. The display unit 509 may take the form of a liquid-crystal display, an organic light-emitting diode display, or the like.
  • The processor 503 is the control center of the computer device; it connects the various parts of the whole computer through various interfaces and lines, and performs the various functions and processes data by running or executing the software programs and/or modules stored in the memory 505 and calling the data stored in the memory.
  • The one or more processors 503 shown in FIG. 5 can execute and realize the functions of the to-be-corrected palmprint image acquisition module 100, the corrected image obtaining module 200, and the binarization processing module 300 shown in FIG. 4.
  • The server includes one or more processors 503, one or more memories 505, and one or more application programs 501, wherein the one or more application programs 501 are stored in the memory 505 and configured to be executed by the one or more processors 503, and the one or more application programs 501 are configured to perform the palmprint extraction method described in the above embodiments.
  • The server provided in the embodiments of the present application can implement the palmprint extraction method described above: the palm image is obtained and preprocessed to obtain the palmprint image to be corrected, the curvature value of each characteristic pixel in the palmprint image to be corrected is calculated and the image is corrected based on those curvature values to obtain the corrected image, and the corrected image is binarized to obtain the palmprint image. The preprocessing, grayscale conversion and inversion, curvature-based correction, and binarization are carried out as described in detail in the method embodiments above and are not repeated here; after these steps, the palm lines in the image can be clearly distinguished from the other parts of the palm, and the palm lines are obtained.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A palmprint extraction method, comprising: obtaining a palm image and preprocessing it to obtain a palmprint image to be corrected (S100); calculating a curvature value for each characteristic pixel in the palmprint image to be corrected and correcting the image based on these curvature values to obtain a corrected image (S200); and binarizing the corrected image to obtain a palmprint image (S300). The method preprocesses the image so that the palmprint is easier to extract subsequently, i.e., the palmprint of the palm is displayed more prominently, or noise in the palm image is removed so that it does not interfere with palmprint extraction, making the image clearer and improving the accuracy of subsequent palmprint extraction. Screening the pixels of the palmprint image to be corrected by their curvature values further improves the success rate of palmprint line extraction.

Description

掌纹提取方法、装置及存储介质、服务器
本申请要求于2019年01月29日提交中国专利局、申请号为201910087378.2、申请名称为“掌纹提取方法、装置及存储介质、服务器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及生物识别、掌纹识别技术领域,具体涉及一种掌纹提取方法、装置及存储介质、服务器。
背景技术
近年来,工业界、学术界不断致力于提高身份信息的验证效果,以满足门禁控制、航空安全、电子银行等多个不同领域中,对于识别人的身份的严苛需求。基于生物特征识别的方法正吸引着越来越多的关注,掌纹识别便是其中一种极具代表的生物特征识别方法。掌纹识别方法具有区分性高、鲁棒性强、用户友好等诸多优点。掌纹指掌心表面的皮肤纹理手部掌纹由于各种原因分为正常纹和异常纹,而异常纹可能会出现“十”状纹、“井”状纹、“米”状纹等纹路。掌纹特征对于人类个体而言是不变的、永久的、独一无二的。
目前提取掌纹的方法有很多,但是效果一般,主要受限于成像因素,受光照条件影响较大,即对光照的抗性较差,还有对掌纹不清晰的人的抗性较差。
发明内容
为克服以上技术问题,特别是掌纹提取过程中,对光照的对抗性差以及对掌纹不清晰的人的抗性较差的问题,特提出以下技术方案:
本申请实施例提供的一种掌纹提取方法,包括:
获取手掌图像,对手掌图像进行预处理,获得待修正掌纹图像;
对所述待修正掌纹图像中的各特征像素点进行曲率值的计算,基于所述特征像素点的曲率值对所述待修正掌纹图像进行修正,获得修正图像;
对所述修正图像进行二值化处理,获得掌纹图像。
本申请实施例还提供了一种掌纹提取装置,包括:
待修正掌纹图像获取模块,用于获取手掌图像,对手掌图像进行预处理,获得待修正掌纹图像;
修正图像获得模块,用于对所述待修正掌纹图像中的各特征像素点进行曲率值的计算,基于所述特征像素点的曲率值对所述待修正掌纹图像进行修正,获得修正图像;
二值化处理模块,用于对所述修正图像进行二值化处理,获得掌纹图像。
可选地,所述修正图像获得模块,包括:
切割单元,用于根据预设切割方向和预设像素间隔切割所述待修正掌纹图像,得到n条切割线,其中,n为正整数;
曲率计算单元,用于计算每条所述切割线上各像素点的曲率值,基于该曲率值对所述待修正掌纹图像进行修正,获得修正图像,所述切割线上的各像素点为所述特征像素点。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,该程序被处理器执行时实现任一技术方案所述的掌纹提取方法。
本申请实施例还提供了一种服务器,包括:
一个或多个处理器;
存储器;
一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置 为由所述一个或多个处理器执行,所述一个或多个应用程序配置用于执行根据任一技术方案所述的掌纹提取方法的步骤。
本申请与现有技术相比,具有以下有益效果:
本申请实施例的掌纹提取方法能够使得掌纹更加突出,提高了掌纹图像了对光照的抗性,以及提高了掌纹图像了对掌纹不清晰的人的抗性,提高了掌纹纹路提取的成功率。
本申请附加的方面和优点将在下面的描述中部分给出,这些将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本申请掌纹提取方法的典型实施例中一种实施方式的流程示意图;
图2为本申请掌纹提取方法的典型实施例中待修正掌纹图像的示意图;
图3为本申请掌纹提取方法的典型实施例中掌纹图像示意图;
图4为本申请掌纹提取装置的典型实施例的结构示意图;
图5为本申请服务器的一实施例结构示意图。
具体实施方式
下面详细描述本申请的实施例,所述实施例的示例在附图中示出。
本技术领域技术人员可以理解,除非特意声明,这里使用的单数形式“一”、“一个”、“所述”和“该”也可包括复数形式。应该进一步理解的是,本申请的说明书中使用的措辞“包括”是指存在所述特征、整数、步骤、操作,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作。
本领域技术人员应当理解,本申请所称的“应用”、“应用程序”、“应用软件”以及类似表述的概念,是业内技术人员所公知的相同概念,是指由一系列计算机指令及相关数据资源有机构造的适于电子运行的计算机软件。除非特别指定,这种命名本身不受编程语言种类、级别,也不受其赖以运行的操作系统或平台所限制。理所当然地,此类概念也不受任何形式的终端所限制。
本申请实施例提供的掌纹提取应用于包括服务端和客户端的场景,其中,服务端和客户端之间通过网络进行连接,客户端用于对手掌图像进行采集,并且将采集到的手掌图像发送到服务端,客户端具体可以但不限于是摄像机、相机、扫描仪或者带有其他拍照功能的手掌图像采集设备;服务端用于对手掌图像进行手掌纹路提取,服务端具体可以用独立的服务器或者多个服务器组成的服务器集群实现。本申请实施例提供的图像处理方法应用于服务端。
本申请实施例提供的一种掌纹提取方法,在其中一种实施方式中,如图1所示,包括:S100、S200、S300。
S100:获取手掌图像,对手掌图像进行预处理,获得待修正掌纹图像;
S200:对所述待修正掌纹图像中的各特征像素点进行曲率值的计算,基于所述特征像素点的曲率值对所述待修正掌纹图像进行修正,获得修正图像;
S300:对所述修正图像进行二值化处理,获得掌纹图像。
本申请提供的实施例中,获取手掌图像,该手掌图像可以是整个手掌的图像,也可以是不包括手指的手掌图像,由于本申请中主要是对掌纹进行提取,可以是对不包括手指部分的手掌中的掌纹进行提取,当然也可以对整个手掌掌纹进行提取。为了提高手掌图像中掌纹纹路的识别率,避免噪音影响掌纹的提取,需要对手掌图像进行预 处理,预处理后的图像则为待修正掌纹图像。在本申请提供的实施例中,对图像进行预处理,使得后续更容易的提取出掌纹,即使得手掌的掌纹更为突出显示,或者去掉手掌图像中的噪声,避免噪声影响掌纹的提取。例如:将采集到的彩色掌纹图像灰度化:读取一幅RGB模式(三元素(红、绿、蓝)模式)的手掌图片,使用灰度图转化函数如rgb2gray()将图片模式转化为灰度图;做出图像的灰度直方图,找到灰度直方图两个峰之间的谷底值T作为二值化分割的门限阈值,将灰度图像化为二值图像;使用3*3的正方形中值滤波窗口对图像进行中值滤波;使用边界跟踪算法提取手掌图像的轮廓。将采集到的彩色掌纹图像灰度化。在前述基础上将灰度图像二值化。灰度图像二值化就是将256个灰度等级(灰度值的取值范围是0到255)的灰度图像通过适当的阈值选取,把图像上的像素点的灰度值重新设置为0或255,也就是将整个图像呈现出只有黑白两种颜色的效果。通过imhist()函数做出灰度图像的灰度直方图。灰度直方图的横轴表示的是灰度图像的灰度值范围,是从0到255,纵轴表示的是某个灰度值在图像上出现的次数。由于手掌灰度图像前景单一(只有手掌),背景简单,因此手掌灰度图像的灰度直方图分布均呈现显著的双峰特点。所以选取灰度直方图两个峰之间的谷底中间部分对应的任一横轴值作为二值化分割的门限阈值,把灰度图像上灰度值大于这一门限阈值的的点重置为255(白),小于这一门限阈值的点重置为0(黑),这样就将灰度图像转化为二值图像,即图像背景为黑色,图像前景手掌为白色的图像;平滑图像。由于灰度图像在二值化后所得到二值图像的边界往往是很不平滑的,因此为了得到比较光滑的手掌轮廓线,需要对二值化后的图像作平滑处理,使图像边缘尖锐的“毛刺”变平缓。由于采集的手掌图像内容简单,细节少,故采用简单的中值滤波法,使用3*3的正方形中值滤波窗口,它能够在滤除噪声的同时保持边缘不被模糊。使用轮廓跟踪算法提取二值图像的轮廓,即手掌轮廓。之后将手掌轮廓包围部分的且经过前述处理之后的图像作为待修正掌纹图像(如图2所示)。
结合前述示例的图像预处理过程中,对手掌图像进行预处理的详细过程如下:
进一步地,在前述的基础上,对手掌图像中的像素点进行遍历,获取每个像素点的RGB分量值。
具体地,按照预设的遍历方式对手掌图像中的像素点进行遍历,获取每个像素点的RGB分量值,其中,R、G、B分别代表红、绿、蓝三个通道的颜色。
其中,预设的遍历方式具体可以是以手掌图像的左上角像素点为起点,从上往下从左往右的顺序进行逐行遍历,也可以是从手掌图像的中线位置同时向两边遍历,还可以是其他遍历方式,此处不做限制。
进一步地,在前述的基础上,根据像素点的RGB分量值,按照公式(1)对手掌图像作灰度化处理,得到灰化图像:
g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)  (1)
其中,x和y为手掌图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为像素点(x,y)的R通道的颜色分量,G(x,y)为像素点(x,y)的G通道的颜色分量,B(x,y)为像素点(x,y)的B通道的颜色分量k 1,k 2,k 3分别为R通道,G通道和B通道对应的占比参数。其中,公式(1)即为前述的灰度化规则。
在本申请实施例中,为了实现对手掌图像中信息内容的准确提取,首先需要对手掌图像进行灰度化处理,其中,k 1,k 2,k 3和σ的参数值可以根据实际应用的需要进行设置,此处不做限制,通过调节k 1,k 2,k 3的取值范围可以分别对R通道,G通道和B通道的占比进行调整。
RGB模型是目前常用的一种彩色信息表达方式,它使用红、绿、蓝三原色的亮度 来定量表示颜色。该模型也称为加色混色模型,是以RGB三色光互相叠加来实现混色的方法,因而适合于显示器等发光体的显示。
灰度化是指在RGB模型中,如果R=G=B时,则色彩表示只有一种灰度颜色,其中R=G=B的值叫灰度值,因此,灰度图像每个像素只需一个字节存放灰度值,灰度范围为0-255。
需要说明的是,在本申请实施例中,通过公式(1)进行加权计算灰度值,在其他实施例中还可以采用分量法、最大值法或者平均值法对图像进行灰度化处理,此处不做限制。
进一步地,在前述的基础上,对灰化图像进行灰度反转处理,得到灰度反转后的手掌图像。从而减少图像原始数据量,提高在后续处理计算中的计算效率;且能够使图像的显示效果更加清晰,提高后续对手掌掌纹提取的准确性。
具体地,在前述过程中,对获取的灰化图像中的每个像素点进行遍历,获取每个像素点的像素值,对灰化图像进行灰度反转处理,将灰化图像中像素点的像素值范围从[0,255]变换为[255,0],即将像素点的像素值从0调整为255,将像素点的像素值从255调整为0,从而使灰化图像中原始的白色像素点变为黑色像素点,原始的黑色像素顶变为白色像素点,经过灰度反转处理后得到灰度反转后的手掌图像,即待修正掌纹图像。
需要说明的是,为了方便在不同环境下的计算,还可进一步将像素点的取值范围从[0,255]压缩为[0,1],即将每个像素点的像素值除以255得到压缩后的像素值,例如,像素值为1的像素点压缩后的像素值为1/255,像素值为254的像素点压缩后的像素值为254/255,其他像素点的像素值变换以此类推。
例如:结合前述说明,在MATLAB工具中,可通过直接调用imadjust函数,对灰化图像进行灰度反转处理,将图像中像素值区间由原来的[0,255]变换为[255,0],再压缩变换为[1,0],生成与灰化图像灰度相反的手掌图像,即待修正掌纹图像。
本实施例中,通过遍历手掌图像中的像素点并获取对应像素点的RGB分量值,根据获取到的每个像素点的RGB分量值,利用公式(1)对手掌图像进行灰度化处理,将图像中像素点的像素值范围设定在0-255之间,从而减少图像原始数据量,提高在后续处理计算中的计算效率;再对灰度化处理后的图像进行灰度反转处理,使图像的显示效果更加清晰,提高后续对手掌掌纹提取的准确性。
在获得了待修正掌纹图像之后,则对待修正掌纹图像中各像素点进行曲率值进行计算,以提高掌纹纹路提取的成功率,之后则基于各像素点的曲率值对待修正掌纹图像进行修正,获得修正图像。在具体的曲率计算过程中,是基于预设方向进行曲率的计算,以便于曲率值计算过程中都具有相应的基准,进而便于基于该基准进行像素点筛选,以获得像素合适的像素点,或者基于部分像素点的曲率值对待修正掌纹图像中的其他像素点的像素进行修正,使得掌纹更加突出,提高掌纹图像了对光照的抗性,以及提高掌纹图像了对掌纹不清晰的人的抗性,获得便于进行手掌纹路的提取修正图像,更进一步提高掌纹纹路提取的成功率。更进一步地,为了提高掌纹纹路提取的成功率,在本申请提供的实施例中,还需要对修正图像进行二值化处理,该过程之后,图像中手掌纹路可以明显地和手掌的其他部分进行区分,得到手掌纹路如图3所示。相应的,为了实现手掌纹路的提取,为了让图像中的像素点的像素值只呈现0或者255,即图像只呈现黑色或者白色两种颜色,需要进一步对该增强图像进行二值化处理。经过前述处理之后,结合图2和图3对比可知,在通过像素点曲率值对待修正掌纹图像中的曲率进行了筛选修正方法进行了图3中的掌纹较图2更加突出,提高掌纹图像了对光照的抗性,以及提高掌纹图像了对掌纹不清晰的人的抗性。
二值化,就是将图像上的像素点的像素值设置为0或255,也就是将整个图像呈现出明显的只有黑和白的视觉效果。
具体地,扫描修正图像中的每个像素点,若该像素点的像素值小于预设的像素阈值,则将该像素点的像素值设为0,即像素点变为黑色;若该像素点的像素值大于等于预设值的像素阈值,则将该像素点的像素值设为255,即像素点变为白色,得到二值化图像。
可选地,所述对所述待修正掌纹图像中的各特征像素点进行曲率值的计算,基于所述特征像素点的曲率值对所述待修正掌纹图像进行修正,获得修正图像,包括:
根据预设切割方向和预设像素间隔切割所述待修正掌纹图像,得到n条切割线,其中,n为正整数;
计算每条所述切割线上各像素点的曲率值,基于该曲率值对所述待修正掌纹图像进行修正,获得修正图像,所述切割线上的各像素点为所述特征像素点。
本申请实施例能够通过对待修正掌纹图像切割获得的每条切割线,计算该切割线上每个像素点的曲率值,使用曲率值对像素点是否属于掌纹上的像素点进行判断,进而便于基于曲率值进行如前述像素点的筛选、修正等过程,进而实现对待修正掌纹图像中的其他像素点的像素进行修正,以获得修正图像。
在本申请实施例中,预设的切割方向可以是水平切割、垂直切割或者其它方向的切割,其具体可以根据实际应用的需要进行设置,此处不做限制。预设的像素间隔是指以预设个数的像素点作为间隔,其可以是以1个像素点为间隔,也可以是以5个像素点为间隔,具体也可以根据实际应用的需要进行设置,此处不做限制。为了更好的理解本步骤,下面通过一个具体的例子进行说明。例如,获取到的手掌图像为矩形,则在该修正图像中,预设的切割方向为垂直方向,预设的像素间隔为5个像素点,对修正图像进行切割,若图像中每行有2000个像素点,则将得到399条垂直的切割线。
在对待修正掌纹图像按照预设方向切割完成后,则计算每一条切割线上个像素点的曲率值,在本申请提供的实施例中,通过如下式(2)进行曲率值的计算:
Figure PCTCN2019117915-appb-000001
其中,z为切割线上的像素点,K(z)为像素点z的曲率值,P f(z)为像素点z的像素值,
Figure PCTCN2019117915-appb-000002
为P f(z)的二阶导数值,
Figure PCTCN2019117915-appb-000003
为P f(z)的一阶导数值。
具体地,对待修正掌纹图像切割获得的每条切割线,根据公式(2)计算该切割线上每个像素点的曲率值,使用曲率值对像素点是否属于掌纹上的像素点进行判断,进而便于基于曲率值进行如前述像素点的筛选、修正等过程,进而实现对待修正掌纹图像中的其他像素点的像素进行修正,以获得修正图像,具体的基于像素点的像素值对待修正掌纹图像进行修正以获得修正图像的在后文详述,在此不做赘述。
可选地,所述基于该曲率值对所述待修正掌纹图像进行修正,获得修正图像,包括:
将所述曲率值大于零的像素点确定为评估像素点,将连续的所述评估像素点所在的区域确定为局部掌纹区域;
获取所述局部掌纹区域的宽度,将所述宽度与所述局部掌纹区域内的各所述评估像素点的曲率值乘积,获得所述局部掌纹区域内的各所述评估像素点的评估分数;
依据所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,基于所述修正像素值修正所述待修正掌纹图像,获得所述修正图像。
结合前述说明,若切割线上像素点的曲率值大于0,则表示该像素点为掌纹上的像素点,并将其作为评估像素点,若像素点的曲率值小于或者等于0,则表示该像素点不属于掌纹上的像素点。并且,局部掌纹区域由曲率值大于0的连续像素点构成,也即连续的评估像素点构成。
在本申请实施例中,由于局部掌纹区域是由曲率值大于0的连续像素点构成,故其宽度可以为曲率值大于0的连续像素点的个数,例如,若曲率值大于0的连续像素点的个数为5,则该局部掌纹区域的宽度为5。
针对每个评估像素点,将包含该评估像素点的局部掌纹区域的宽度与该评估像素点的曲率值进行相乘,并将相乘得到的结果作为该评估像素点的评估分数。
具体地,通过公式(3)计算评估像素点的评估分数:
S r(z i)=k(z i)*W r  (3)
其中,z i为第i个评估像素点,i为大于0的正数,S r(z i)为第i个评估像素点的评估分数,k(z i)为第i个评估像素点的曲率值,W r为包含z i的局部掌纹区域的宽度。
在本申请实施例中,针对每个评估像素点,将每个评估像素点的原始像素值与其对应的评估分数进行相加,得到的和作为该评估像素点的修正像素值,根据每个评估像素点的修正像素值,对每个评估像素点的像素值进行调整后得到修正像素值,基于该修正像素值更新所述待修正掌纹图像中的相同像素点的像素值,从而使掌纹区域上的点变得更加明显,提高掌纹区域的识别度,并且能够更好地识别出掌纹区域和非掌纹区域,得到如前所述的修正图像。
具体地,通过公式(4)计算评估像素点的修正像素值:
V a'(x,y)=V a(x,y)+S r(z a)  (4)
其中,x和y为待修正掌纹图像中第a个评估像素点的横坐标和纵坐标,a为大于0的正数,z a为第a个评估像素点,V a'(x,y)为第a个评估像素点的修正像素值,V a(x,y)为第a评估像素点的像素值,S r(z a)为第a个评估像素点的评估分数。
需要说明的是,若评估像素点经过计算后的修正像素值超过最大像素值,则将修正像素值设置为最大像素值。
可选地,所述获得修正图像之后,还包括:
对于所述修正图像中的每个像素点,获取该像素点一侧且与该像素点相邻的第一相邻像素点的像素值,以及与该像素点间隔一个像素点的第一间隔像素点的像素值;其中,所述第一间隔像素点与所述第一相邻像素点位于该像素点的同一侧;
获取该像素点另一侧且与该像素点相邻的第二相邻像素点的像素值,以及与该像素点间隔一个像素点的第二间隔像素点的像素值;其中,所述第二间隔像素点与所述第二相邻像素点位于该像素点的同一侧,所述一侧与另一侧相对设置;
依据所述一侧和另一侧的像素值对该像素点的像素值进行修正。
在本申请实施例中,按照公式(5)对每个像素点的像素值进行修正:
C(x,y)=min{max(V(x+1,y),V(x+2,y)),max(V(x-1,y),V(x-2,y))}  (5)
其中,x和y为手掌图像中每个像素点的横坐标和纵坐标,V(x,y)为修正图像中像素点(x,y)的像素值,C(x,y)为像素点(x,y)修正后的像素值。
具体地,选取修正图像中的像素点(x,y)左侧相邻两个像素点(x-1,y)、(x-2,y)和 右侧相邻两个像素点(x+1,y)、(x+2,y),若(x,y)和两侧的像素点的像素值一样大,则不做处理;若像素点(x,y)的像素值和两侧的像素点的像素值不同,则选取左侧两个像素点的像素值中较大的像素值,再选取右侧两个像素点的像素值中较大的像素值,最后比较左侧较大的像素值和右侧较大的像素值,选取两者中较小的像素值对像素点(x,y)进行修正。
需要说明的是,若像素点为图像边界的像素点,则只对一侧的像素值进行比较,例如,若像素点位于图像左边界,则选取该像素点右侧相邻两个像素点的像素值中较大的像素值,对像素点的像素值进行修正;若像素点位于图像右边界,则选取该像素点左侧相邻两个像素点的像素值中较大的像素值,对像素点的像素值进行修正。
本实施例中,若像素点(x,y)的像素值很小而两侧的像素值很大,则通过公式(5)将像素点(x,y)的像素值调大,使得该像素点和两侧的像素点能够连接起来形成纹路;若像素点(x,y)的像素值很大而两侧的像素值很小,则认为该像素点为噪点,为避免该噪点对掌纹的提取造成干扰,通过公式(5)将像素点(x,y)的像素值调小,实现对掌纹图像中的噪点进行消除,从而使掌纹区域变得更加明显,提高对掌纹的辨别度,同时也提高在后续对掌纹提取的准确性。
可选地,所述预设的切割方向包括至少2个方向;所述对所述修正图像进行二值化处理,获得掌纹图像,包括:
将根据每个所述切割方向得到的所述修正图像作为待合成图像;
对比同一像素点在各所述待合成图像中的像素值,将该像素点的最大像素值确定为该像素点在合成图像中的合成像素值;
依据每一个像素点的所述合成像素值获得合成图像;
对所述合成图像进行二值化处理,得到所述掌纹图像。
在本申请提供的实施例中的一种实施方式中,预设的切割方向包括至少2个方向,即可以基于2个或者2个以上不同的切割方向对待修正掌纹图像进行图像处理,得到修正图像。预设的切割方向具体可以包括45°、90°、135°和180°共4个方向,但并不限于此,其也可以包括其他方向,可根据实际应用的需要进行设置,此处不做限制。
在本申请实施例中,对每个具体的切割方向,均按修正图像按照预设切割方向切割后的图像作为待合成图像,例如,若预设的切割方向包括45°、90°、135°和180°共4个方向,则以45°的切割方向得到的修正图像为一个待合成图像,以90°的切割方向得到的修正图像为另一个待合成图像,其他方向以此类推,一共可得到四个待合成图像。
具体地,根据前述过程得到的待合成图像,通过对每个待合成图像中相同位置的像素点的像素值进行比较,选取最大的像素值作为合成图像对应位置的像素点的像素值,得到合成图像。
在合成图像的基础上,为了让图像中的像素点的像素值只呈现0或者255,即图像只呈现黑色或者白色两种颜色,需要进一步对该合成图像进行二值化处理,以获得掌纹图像。
具体地,扫描合成图像中的每个像素点,若该像素点的像素值小于预设的像素阈值,则将该像素点的像素值设为0,即为像素点变为黑色;若该像素点的像素值大于等于预设值的像素阈值,则将该像素点的像素值设为255,即像素点变为白色,得到掌纹图像。
本实施例中,根据不同的切割方向获取不同的待合成图像,再对每个待合成图像中每个相同位置的像素点的像素值进行比较,选取每个像素点的最大像素值作为合成 图像中对应位置的像素点的像素值,对图像进行合成,最后再对合成图像进行二值化处理,得到掌纹图像。由于仅对一个切割方向上得到的修正图像进行掌纹提取可能存在误差,因此通过对多个切割方向的待合成图像进行合成,再对合成图像进行二值化处理得到的掌纹图像进行掌纹纹路提取,能够有效地降低误差,实现对掌纹纹路的准确提取,提高掌纹纹路提取的准确性。
Optionally, preprocessing the palm image includes:
performing a Gabor filter transform on the palm image to obtain the palmprint image to be corrected.
In the embodiment of this application, after the palm image is acquired, in order to further improve the quality of the palm image, the image is enhanced by means of a Gabor filter transform, and a processed enhanced image is finally obtained.
Specifically, a convolution operation is performed on the palm image according to the Gabor filter function, and the enhanced image is obtained from the result of the convolution operation. The convolution operation means performing a series of operations on each pixel point in the palm image with a convolution kernel. The convolution kernel is a preset matrix template used for the operation with the palm image; it may specifically be a square grid structure, for example a 3*3 matrix, in which each element has a preset weight value. When the convolution kernel is used for calculation, the center of the convolution kernel is placed on the target pixel point to be calculated, the products of the weight value of each element of the convolution kernel and the pixel value of the image pixel point it covers are calculated and summed, and the result is the new pixel value of the target pixel point.
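The convolution operation described above can be sketched as follows for a 3*3 kernel; edge padding is an illustrative choice, since the application does not specify how image borders are handled.

    import numpy as np

    def convolve3x3(image, kernel):
        """3*3 convolution as described above: centre the kernel on each target
        pixel point, multiply each weight by the pixel value it covers, and sum."""
        padded = np.pad(image.astype(np.float64), 1, mode="edge")
        out = np.zeros(image.shape, dtype=np.float64)
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
        return out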
The Gabor filter transform is a windowed Fourier transform. The Gabor function can extract relevant features of an image at different scales and in different directions in the frequency domain, thereby enhancing the image. Specifically, the palm image is processed by Gabor filtering as follows, performing the Gabor filter transform on the palm image according to formula (6):

g(x, y; λ, θ, φ, σ, γ) = exp(-(x'^2 + γ^2·y'^2) / (2σ^2)) · cos(2π·x'/λ + φ)
U(x, y) = I(x, y) ⊗ g(x, y; λ, θ, φ, σ, γ)
x' = x·cosθ + y·sinθ
y' = -x·sinθ + y·cosθ         (6)

where g(x, y; λ, θ, φ, σ, γ) is the Gabor filter function, x and y are the horizontal and vertical coordinates of a pixel point in the palm image, λ is the preset wavelength, θ is the preset direction, φ is the phase offset, σ is the standard deviation of the Gaussian factor of the Gabor function, γ is the aspect ratio, U(x, y) is the enhanced image, I(x, y) is the palm image, ⊗ denotes the tensor product operation, and x' and y' are the horizontal and vertical coordinates of pixel point (x, y) of the palm image after rotation by θ.
Specifically, with the preset wavelength and preset direction, the palm image is transformed with the Gabor filter function of formula (6), so that the high-frequency components of the palm image are filtered out and only the low-frequency part remains, while in the preset direction the low-frequency components are filtered out and only the high-frequency part remains, finally making the image brighter; the result is the enhanced image obtained by the Gabor filter transform.
The preset wavelength λ may be taken as 1, or may be set according to actual needs, which is not limited here. The preset direction θ may take, for example, the 8 directions 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4 and 7π/8, or other directions may be selected according to the needs of the actual application, which is not limited here.
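A sketch of the Gabor kernel of formula (6) and its application in the 8 directions listed above is given below; the kernel size, the default σ and γ, and the choice of keeping the strongest directional response at each pixel are illustrative assumptions (scipy.signal.convolve2d is used only to perform the convolution).

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(size, wavelength, theta, phase=0.0, sigma=2.0, gamma=1.0):
        """Real-valued Gabor kernel g(x, y; lambda, theta, phi, sigma, gamma) of formula (6)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_r = x * np.cos(theta) + y * np.sin(theta)      # x' of formula (6)
        y_r = -x * np.sin(theta) + y * np.cos(theta)     # y' of formula (6)
        envelope = np.exp(-(x_r ** 2 + (gamma * y_r) ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * x_r / wavelength + phase)
        return envelope * carrier

    def gabor_enhance(image, wavelength=1.0, size=9):
        """Enhance the palm image with Gabor responses in the 8 preset directions."""
        thetas = [k * np.pi / 8 for k in range(8)]
        responses = [convolve2d(image, gabor_kernel(size, wavelength, t), mode="same")
                     for t in thetas]
        # Keeping the strongest directional response per pixel is one simple way
        # to combine the directions; the application does not prescribe this.
        return np.maximum.reduce(responses)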
In this embodiment, performing the Gabor filter transform on the palm image according to formula (6) can quickly brighten the image and achieve image enhancement, thereby improving the image quality of the palm image and the distinguishability of the lines in the palm image, so that when palm lines are extracted from low-quality palm images captured by low-end palmprint acquisition devices, accurate positioning can be achieved, the accuracy of palmprint extraction is improved, and the applicability to different palmprint acquisition devices is also improved.
In the embodiments provided by this application, combined with the foregoing technical solutions, the following can be achieved: the palmprint image to be corrected is obtained by performing a Gabor filter transform on the palm image, the palmprint image to be corrected being the enhanced image after the Gabor filter transform; the enhanced image is cut to obtain n cutting lines; the curvature value of each pixel point on each cutting line is calculated; the pixel points whose curvature values are greater than 0 are taken as evaluation pixel points, and the regions formed by consecutive pixel points with curvature values greater than zero are taken as local palmprint regions; for each evaluation pixel point, the product of its curvature value and the width of the local palmprint region in which it lies is calculated as its evaluation score; the evaluation score is then used to adjust the pixel value of the evaluation pixel point, the corrected pixel value of each evaluation pixel point is obtained and the corresponding pixel points of the enhanced image are updated; finally, the updated enhanced image is binarized to obtain the palmprint image. On the one hand, the Gabor filter transform improves the image quality of the palm image, so that the accuracy of palmprint recognition during palm line extraction is improved, accurate positioning of palmprints in low-quality palm images captured by low-end palmprint acquisition devices is achieved, and the accuracy of palmprint extraction from palm images as well as the applicability to a variety of palmprint acquisition devices is effectively improved; on the other hand, the curvature algorithm can quickly identify the palmprint in the palm image and improve the efficiency of palmprint recognition, and calculating the evaluation scores can further accurately distinguish palmprint regions from non-palmprint regions, thereby further improving the accuracy of palmprint extraction.
An embodiment of this application further provides a palmprint extraction device. In one implementation, as shown in Fig. 4, it includes a palmprint-image-to-be-corrected acquisition module 100, a corrected image obtaining module 200 and a binarization processing module 300:
the palmprint-image-to-be-corrected acquisition module 100 is configured to acquire a palm image and preprocess the palm image to obtain a palmprint image to be corrected;
the corrected image obtaining module 200 is configured to calculate the curvature value of each characteristic pixel point in the palmprint image to be corrected, and correct the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain a corrected image;
the binarization processing module 300 is configured to perform binarization processing on the corrected image to obtain a palmprint image.
Further, as shown in Fig. 4, the palmprint extraction device provided in the embodiment of this application further includes:
a cutting unit 210, configured to cut the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
a curvature calculation unit 220, configured to calculate the curvature value of each pixel point on each cutting line and correct the palmprint image to be corrected based on the curvature values to obtain a corrected image, each pixel point on the cutting lines being a characteristic pixel point;
a local palmprint region determination unit 221, configured to determine the pixel points whose curvature value is greater than zero as evaluation pixel points, and determine the region formed by consecutive evaluation pixel points as a local palmprint region;
an evaluation score obtaining unit 222, configured to obtain the width of the local palmprint region and multiply the width by the curvature value of each evaluation pixel point in the local palmprint region to obtain the evaluation score of each evaluation pixel point in the local palmprint region;
an adjustment unit 223, configured to adjust the pixel value of each evaluation pixel point according to its evaluation score to obtain a corrected pixel value of each evaluation pixel point, and correct the palmprint image to be corrected based on the corrected pixel values to obtain the corrected image;
a first pixel value obtaining unit 230, configured to, for each pixel point in the corrected image, obtain the pixel value of a first adjacent pixel point that is on one side of and adjacent to the pixel point, and the pixel value of a first spaced pixel point that is separated from the pixel point by one pixel point, where the first spaced pixel point and the first adjacent pixel point are located on the same side of the pixel point;
a second pixel value obtaining unit 240, configured to obtain the pixel value of a second adjacent pixel point that is on the other side of and adjacent to the pixel point, and the pixel value of a second spaced pixel point that is separated from the pixel point by one pixel point, where the second spaced pixel point and the second adjacent pixel point are located on the same side of the pixel point, and the one side and the other side are opposite to each other;
a correction unit 250, configured to correct the pixel value of the pixel point according to the pixel values on the one side and the other side;
an image-to-be-synthesized obtaining unit 310, configured to take the corrected image obtained for each cutting direction as an image to be synthesized;
a comparison unit 320, configured to compare the pixel values of the same pixel point in the images to be synthesized, and determine the maximum pixel value of the pixel point as the synthesized pixel value of that pixel point in the synthesized image;
a synthesized image obtaining unit 330, configured to obtain the synthesized image according to the synthesized pixel value of each pixel point;
a binarization processing unit 340, configured to perform binarization processing on the synthesized image to obtain the palmprint image;
a grayscale unit 110, configured to traverse the palm image, obtain the RGB component values of each pixel point, and perform grayscale processing on the RGB component values of each pixel point according to a grayscale processing rule to obtain a grayscale image;
a transform unit 120, configured to invert the grayscale image and process the inverted grayscale image with a filter transform to obtain the palmprint image to be corrected.
The palmprint extraction device provided in the embodiment of this application can implement the embodiments of the palmprint extraction method described above; for the specific function implementation, please refer to the description in the method embodiments, which is not repeated here.
An embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the palmprint extraction method described in any of the technical solutions is implemented. The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, the storage device includes any medium that stores or transmits information in a form readable by a device (for example, a computer or a mobile phone), and may be a read-only memory, a magnetic disk, an optical disk, or the like.
The computer-readable storage medium provided in the embodiment of this application can implement the embodiments of the palmprint extraction method described above. In this application, the image is preprocessed so that the palmprint can be extracted more easily afterwards, that is, the palmprint of the palm is displayed more prominently, or the noise in the palm image is removed so that it does not interfere with palmprint extraction; the display of the image becomes clearer, and the accuracy of the subsequent palmprint extraction is improved. Screening the pixel points by the curvature values of the pixel points in the palmprint image to be corrected further improves the success rate of palmprint line extraction. The palmprint extraction method provided in the embodiment of this application includes: acquiring a palm image and preprocessing the palm image to obtain a palmprint image to be corrected; calculating the curvature value of each characteristic pixel point in the palmprint image to be corrected, and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain a corrected image; and performing binarization processing on the corrected image to obtain a palmprint image. In the embodiments provided by this application, a palm image is acquired; the palm image may be an image of the whole palm or a palm image that does not include the fingers. Since this application mainly extracts the palmprint, the palmprint may be extracted from the palm excluding the finger portions, and of course the palmprint of the whole palm may also be extracted. In order to improve the recognition rate of the palmprint lines in the palm image and prevent noise from affecting palmprint extraction, the palm image needs to be preprocessed, and the preprocessed image is the palmprint image to be corrected. In the embodiments provided by this application, the image is preprocessed so that the palmprint can be extracted more easily afterwards, that is, the palmprint of the palm is displayed more prominently, or the noise in the palm image is removed so that it does not interfere with palmprint extraction. Further, on the foregoing basis, the pixel points in the palm image are traversed and the RGB component values of each pixel point are obtained. Specifically, the pixel points in the palm image are traversed in a preset traversal manner, the RGB component values of each pixel point are obtained, and the palm image is converted to grayscale according to the RGB component values of the pixel points to obtain a grayscale image; grayscale inversion is then performed on the grayscale image to obtain a grayscale-inverted palm image, so that the originally white pixel points in the grayscale image become black pixel points and the originally black pixel points become white pixel points; the palm image after grayscale inversion is the palmprint image to be corrected. In this embodiment, the pixel values of the pixel points in the image are limited to the range 0-255, which reduces the amount of original image data and improves the computational efficiency of the subsequent processing; the grayscale image is then grayscale-inverted, which makes the display of the image clearer and improves the accuracy of the subsequent palmprint extraction. After the palmprint image to be corrected is obtained, the curvature value of each pixel point in the palmprint image to be corrected is calculated, and the palmprint image to be corrected is then corrected based on the curvature values of the pixel points to obtain a corrected image. In the specific curvature calculation process, the curvature is calculated based on a preset direction, so that there is a corresponding reference for the curvature calculation, which in turn facilitates screening the pixel points based on this reference to obtain pixel points with suitable pixel values, or correcting the pixel values of other pixel points in the palmprint image to be corrected based on the curvature values of some pixel points, thereby obtaining a corrected image that facilitates extraction of the palm lines. Furthermore, in order to improve the success rate of palmprint line extraction, in the embodiments provided by this application the corrected image also needs to be binarized; after this process, the palm lines in the image can be clearly distinguished from the other parts of the palm, and the palm lines are obtained. In addition, in yet another embodiment, this application further provides a server. As shown in Fig. 5, the server includes a processor 503, a memory 505, an input unit 507, a display unit 509 and other components. Those skilled in the art can understand that the structural components shown in Fig. 5 do not constitute a limitation on all servers; more or fewer components than shown may be included, or certain components may be combined. The memory 505 may be used to store the application program 501 and the functional modules, and the processor 503 runs the application program 501 stored in the memory 505 to execute the various functional applications and data processing of the device. The memory 505 may be an internal memory or an external memory, or include both. The internal memory may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), flash memory, or random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash drive, a magnetic tape, and the like. The memories disclosed in this application include, but are not limited to, these types of memory. The memory 505 disclosed in this application is only an example and not a limitation.
The input unit 507 is configured to receive signal input as well as personal information and related physical condition information input by the user. The input unit 507 may include a touch panel and other input devices. The touch panel can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program; the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as playback control keys and switch keys), a trackball, a mouse, a joystick, and the like. The display unit 509 may be used to display the information input by the user or the information provided to the user, as well as the various menus of the computer device. The display unit 509 may take the form of a liquid crystal display, an organic light-emitting diode display, or the like. The processor 503 is the control center of the computer device; it connects the various parts of the whole computer by means of various interfaces and lines, and executes various functions and processes data by running or executing the software programs and/or modules stored in the memory 505 and calling the data stored in the memory. The one or more processors 503 shown in Fig. 5 can execute and implement the functions of the palmprint-image-to-be-corrected acquisition module 100, the corrected image obtaining module 200, the binarization processing module 300, the cutting unit 210, the curvature calculation unit 220, the local palmprint region determination unit 221, the evaluation score obtaining unit 222, the adjustment unit 223, the first pixel value obtaining unit 230, the second pixel value obtaining unit 240, the correction unit 250, the image-to-be-synthesized obtaining unit 310, the comparison unit 320, the synthesized image obtaining unit 330, the binarization processing unit 340, the grayscale unit 110 and the transform unit 120 shown in Fig. 4.
In one implementation, the server includes one or more processors 503, one or more memories 505, and one or more application programs 501, where the one or more application programs 501 are stored in the memory 505 and configured to be executed by the one or more processors 503, and the one or more application programs 501 are configured to execute the palmprint extraction method described in the above embodiments.
The server provided in the embodiment of this application can implement the embodiments of the palmprint extraction method described above. In this application, the image is preprocessed so that the palmprint can be extracted more easily afterwards, that is, the palmprint of the palm is displayed more prominently, or the noise in the palm image is removed so that it does not interfere with palmprint extraction; the display of the image becomes clearer, and the accuracy of the subsequent palmprint extraction is improved. Screening the pixel points by the curvature values of the pixel points in the palmprint image to be corrected further improves the success rate of palmprint line extraction. The palmprint extraction method provided in the embodiment of this application includes: acquiring a palm image and preprocessing the palm image to obtain a palmprint image to be corrected; calculating the curvature value of each characteristic pixel point in the palmprint image to be corrected, and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain a corrected image; and performing binarization processing on the corrected image to obtain a palmprint image. In the embodiments provided by this application, a palm image is acquired; the palm image may be an image of the whole palm or a palm image that does not include the fingers. Since this application mainly extracts the palmprint, the palmprint may be extracted from the palm excluding the finger portions, and of course the palmprint of the whole palm may also be extracted. In order to improve the recognition rate of the palmprint lines in the palm image and prevent noise from affecting palmprint extraction, the palm image needs to be preprocessed, and the preprocessed image is the palmprint image to be corrected. In the embodiments provided by this application, the image is preprocessed so that the palmprint can be extracted more easily afterwards, that is, the palmprint of the palm is displayed more prominently, or the noise in the palm image is removed so that it does not interfere with palmprint extraction. Further, on the foregoing basis, the pixel points in the palm image are traversed and the RGB component values of each pixel point are obtained. Specifically, the pixel points in the palm image are traversed in a preset traversal manner, the RGB component values of each pixel point are obtained, and the palm image is converted to grayscale according to the RGB component values of the pixel points to obtain a grayscale image; grayscale inversion is then performed on the grayscale image to obtain a grayscale-inverted palm image, so that the originally white pixel points in the grayscale image become black pixel points and the originally black pixel points become white pixel points; the palm image after grayscale inversion is the palmprint image to be corrected. In this embodiment, the pixel values of the pixel points in the image are limited to the range 0-255, which reduces the amount of original image data and improves the computational efficiency of the subsequent processing; the grayscale image is then grayscale-inverted, which makes the display of the image clearer and improves the accuracy of the subsequent palmprint extraction. After the palmprint image to be corrected is obtained, the curvature value of each pixel point in the palmprint image to be corrected is calculated, and the palmprint image to be corrected is then corrected based on the curvature values of the pixel points to obtain a corrected image. In the specific curvature calculation process, the curvature is calculated based on a preset direction, so that there is a corresponding reference for the curvature calculation, which in turn facilitates screening the pixel points based on this reference to obtain pixel points with suitable pixel values, or correcting the pixel values of other pixel points in the palmprint image to be corrected based on the curvature values of some pixel points, thereby obtaining a corrected image that facilitates extraction of the palm lines. Furthermore, in order to improve the success rate of palmprint line extraction, in the embodiments provided by this application the corrected image also needs to be binarized; after this process, the palm lines in the image can be clearly distinguished from the other parts of the palm, and the palm lines are obtained.
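The grayscale conversion and grayscale inversion described above can be sketched as follows; the weighted-average grayscale rule is only one common choice of grayscale processing rule and is an assumption here.

    import numpy as np

    def grayscale_and_invert(rgb_image):
        """Grayscale the RGB palm image and invert it (0-255 range)."""
        r = rgb_image[..., 0].astype(np.float64)
        g = rgb_image[..., 1].astype(np.float64)
        b = rgb_image[..., 2].astype(np.float64)
        gray = 0.299 * r + 0.587 * g + 0.114 * b   # one common grayscale rule
        inverted = 255.0 - gray                    # white becomes black and vice versa
        return inverted.astype(np.uint8)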
The server provided in the embodiment of this application can implement the embodiments of the palmprint extraction method provided above; for the specific function implementation, please refer to the description in the method embodiments, which is not repeated here.
The above are only some of the implementations of this application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of this application, and these improvements and refinements should also be regarded as falling within the scope of protection of this application.

Claims (20)

  1. A palmprint extraction method, comprising:
    acquiring a palm image, and preprocessing the palm image to obtain a palmprint image to be corrected;
    calculating a curvature value of each characteristic pixel point in the palmprint image to be corrected, and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain a corrected image;
    performing binarization processing on the corrected image to obtain a palmprint image.
  2. The palmprint extraction method according to claim 1, wherein calculating the curvature value of each characteristic pixel point in the palmprint image to be corrected and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain the corrected image comprises:
    cutting the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
    calculating the curvature value of each pixel point on each cutting line, and correcting the palmprint image to be corrected based on the curvature values to obtain the corrected image, each pixel point on the cutting lines being a characteristic pixel point.
  3. The palmprint extraction method according to claim 2, wherein correcting the palmprint image to be corrected based on the curvature values to obtain the corrected image comprises:
    determining the pixel points whose curvature value is greater than zero as evaluation pixel points, and determining the region formed by consecutive evaluation pixel points as a local palmprint region;
    obtaining a width of the local palmprint region, and multiplying the width by the curvature value of each evaluation pixel point in the local palmprint region to obtain an evaluation score of each evaluation pixel point in the local palmprint region;
    adjusting the pixel value of each evaluation pixel point according to its evaluation score to obtain a corrected pixel value of each evaluation pixel point, and correcting the palmprint image to be corrected based on the corrected pixel values to obtain the corrected image.
  4. The palmprint extraction method according to claim 1, wherein after the corrected image is obtained, the method further comprises:
    for each pixel point in the corrected image, obtaining a pixel value of a first adjacent pixel point that is on one side of and adjacent to the pixel point, and a pixel value of a first spaced pixel point that is separated from the pixel point by one pixel point, wherein the first spaced pixel point and the first adjacent pixel point are located on the same side of the pixel point;
    obtaining a pixel value of a second adjacent pixel point that is on the other side of and adjacent to the pixel point, and a pixel value of a second spaced pixel point that is separated from the pixel point by one pixel point, wherein the second spaced pixel point and the second adjacent pixel point are located on the same side of the pixel point, and the one side and the other side are opposite to each other;
    correcting the pixel value of the pixel point according to the pixel values on the one side and the other side.
  5. The palmprint extraction method according to any one of claims 1 to 4, wherein the preset cutting directions comprise at least 2 directions, and performing binarization processing on the corrected image to obtain the palmprint image comprises:
    taking the corrected image obtained for each cutting direction as an image to be synthesized;
    comparing pixel values of the same pixel point in the images to be synthesized, and determining the maximum pixel value of the pixel point as a synthesized pixel value of the pixel point in a synthesized image;
    obtaining the synthesized image according to the synthesized pixel value of each pixel point;
    performing binarization processing on the synthesized image to obtain the palmprint image.
  6. The palmprint extraction method according to any one of claims 1 to 4, wherein preprocessing the palm image to obtain the palmprint image to be corrected comprises:
    traversing the palm image to obtain RGB component values of each pixel point, and performing grayscale processing on the RGB component values of each pixel point according to a grayscale processing rule to obtain a grayscale image;
    inverting the grayscale image, and processing the inverted grayscale image with a filter transform to obtain the palmprint image to be corrected.
  7. The palmprint extraction method according to any one of claims 1 to 4, wherein preprocessing the palm image to obtain the palmprint image to be corrected comprises:
    performing a Gabor filter transform on the palm image to obtain the palmprint image to be corrected.
  8. A palmprint extraction device, comprising:
    a palmprint-image-to-be-corrected acquisition module, configured to acquire a palm image and preprocess the palm image to obtain a palmprint image to be corrected;
    a corrected image obtaining module, configured to calculate a curvature value of each characteristic pixel point in the palmprint image to be corrected, and correct the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain a corrected image;
    a binarization processing module, configured to perform binarization processing on the corrected image to obtain a palmprint image.
  9. The palmprint extraction device according to claim 8, wherein the corrected image obtaining module comprises:
    a cutting unit, configured to cut the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
    a curvature calculation unit, configured to calculate the curvature value of each pixel point on each cutting line and correct the palmprint image to be corrected based on the curvature values to obtain the corrected image, each pixel point on the cutting lines being a characteristic pixel point.
  10. The palmprint extraction device according to claim 9, wherein the curvature calculation unit comprises:
    a local palmprint region determination unit, configured to determine the pixel points whose curvature value is greater than zero as evaluation pixel points, and determine the region formed by consecutive evaluation pixel points as a local palmprint region;
    an evaluation score obtaining unit, configured to obtain a width of the local palmprint region and multiply the width by the curvature value of each evaluation pixel point in the local palmprint region to obtain an evaluation score of each evaluation pixel point in the local palmprint region;
    an adjustment unit, configured to adjust the pixel value of each evaluation pixel point according to its evaluation score to obtain a corrected pixel value of each evaluation pixel point, and correct the palmprint image to be corrected based on the corrected pixel values to obtain the corrected image.
  11. The palmprint extraction device according to claim 8, wherein the corrected image obtaining module further comprises:
    a first pixel value obtaining unit, configured to, for each pixel point in the corrected image, obtain a pixel value of a first adjacent pixel point that is on one side of and adjacent to the pixel point, and a pixel value of a first spaced pixel point that is separated from the pixel point by one pixel point, wherein the first spaced pixel point and the first adjacent pixel point are located on the same side of the pixel point;
    a second pixel value obtaining unit, configured to obtain a pixel value of a second adjacent pixel point that is on the other side of and adjacent to the pixel point, and a pixel value of a second spaced pixel point that is separated from the pixel point by one pixel point, wherein the second spaced pixel point and the second adjacent pixel point are located on the same side of the pixel point, and the one side and the other side are opposite to each other;
    a correction unit, configured to correct the pixel value of the pixel point according to the pixel values on the one side and the other side.
  12. The palmprint extraction device according to any one of claims 8 to 11, wherein the preset cutting directions comprise at least 2 directions, and the binarization processing module comprises:
    an image-to-be-synthesized obtaining unit, configured to take the corrected image obtained for each cutting direction as an image to be synthesized;
    a comparison unit, configured to compare pixel values of the same pixel point in the images to be synthesized, and determine the maximum pixel value of the pixel point as a synthesized pixel value of the pixel point in a synthesized image;
    a synthesized image obtaining unit, configured to obtain the synthesized image according to the synthesized pixel value of each pixel point;
    a binarization processing unit, configured to perform binarization processing on the synthesized image to obtain the palmprint image.
  13. The palmprint extraction device according to any one of claims 8 to 11, wherein the palmprint-image-to-be-corrected acquisition module comprises:
    a grayscale unit, configured to traverse the palm image, obtain RGB component values of each pixel point, and perform grayscale processing on the RGB component values of each pixel point according to a grayscale processing rule to obtain a grayscale image;
    a transform unit, configured to invert the grayscale image and process the inverted grayscale image with a filter transform to obtain the palmprint image to be corrected.
  14. A computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the palmprint extraction method according to any one of claims 1 to 7 is implemented.
  15. A server, comprising:
    one or more processors;
    a memory;
    one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to execute the following steps:
    acquiring a palm image, and preprocessing the palm image to obtain a palmprint image to be corrected;
    calculating a curvature value of each characteristic pixel point in the palmprint image to be corrected, and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain a corrected image;
    performing binarization processing on the corrected image to obtain a palmprint image.
  16. The server according to claim 15, wherein when calculating the curvature value of each characteristic pixel point in the palmprint image to be corrected and correcting the palmprint image to be corrected based on the curvature values of the characteristic pixel points to obtain the corrected image, the one or more application programs are configured to execute the following steps:
    cutting the palmprint image to be corrected according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
    calculating the curvature value of each pixel point on each cutting line, and correcting the palmprint image to be corrected based on the curvature values to obtain the corrected image, each pixel point on the cutting lines being a characteristic pixel point.
  17. The server according to claim 16, wherein when correcting the palmprint image to be corrected based on the curvature values to obtain the corrected image, the one or more application programs are configured to execute the following steps:
    determining the pixel points whose curvature value is greater than zero as evaluation pixel points, and determining the region formed by consecutive evaluation pixel points as a local palmprint region;
    obtaining a width of the local palmprint region, and multiplying the width by the curvature value of each evaluation pixel point in the local palmprint region to obtain an evaluation score of each evaluation pixel point in the local palmprint region;
    adjusting the pixel value of each evaluation pixel point according to its evaluation score to obtain a corrected pixel value of each evaluation pixel point, and correcting the palmprint image to be corrected based on the corrected pixel values to obtain the corrected image.
  18. The server according to claim 15, wherein after the corrected image is obtained, the one or more application programs are further configured to execute the following steps:
    for each pixel point in the corrected image, obtaining a pixel value of a first adjacent pixel point that is on one side of and adjacent to the pixel point, and a pixel value of a first spaced pixel point that is separated from the pixel point by one pixel point, wherein the first spaced pixel point and the first adjacent pixel point are located on the same side of the pixel point;
    obtaining a pixel value of a second adjacent pixel point that is on the other side of and adjacent to the pixel point, and a pixel value of a second spaced pixel point that is separated from the pixel point by one pixel point, wherein the second spaced pixel point and the second adjacent pixel point are located on the same side of the pixel point, and the one side and the other side are opposite to each other;
    correcting the pixel value of the pixel point according to the pixel values on the one side and the other side.
  19. The server according to any one of claims 15 to 18, wherein the preset cutting directions comprise at least 2 directions, and when performing binarization processing on the corrected image to obtain the palmprint image, the one or more application programs are configured to execute the following steps:
    taking the corrected image obtained for each cutting direction as an image to be synthesized;
    comparing pixel values of the same pixel point in the images to be synthesized, and determining the maximum pixel value of the pixel point as a synthesized pixel value of the pixel point in a synthesized image;
    obtaining the synthesized image according to the synthesized pixel value of each pixel point;
    performing binarization processing on the synthesized image to obtain the palmprint image.
  20. The server according to any one of claims 15 to 18, wherein when preprocessing the palm image to obtain the palmprint image to be corrected, the one or more application programs are configured to execute the following steps:
    traversing the palm image to obtain RGB component values of each pixel point, and performing grayscale processing on the RGB component values of each pixel point according to a grayscale processing rule to obtain a grayscale image;
    inverting the grayscale image, and processing the inverted grayscale image with a filter transform to obtain the palmprint image to be corrected.
PCT/CN2019/117915 2019-01-29 2019-11-13 Palmprint extraction method, device, storage medium and server WO2020155764A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910087378.2 2019-01-29
CN201910087378.2A CN109902586A (zh) 2019-01-29 Palmprint extraction method, device, storage medium and server

Publications (1)

Publication Number Publication Date
WO2020155764A1 true WO2020155764A1 (zh) 2020-08-06

Family

ID=66944437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117915 WO2020155764A1 (zh) 2019-01-29 2019-11-13 掌纹提取方法、装置及存储介质、服务器

Country Status (2)

Country Link
CN (1) CN109902586A (zh)
WO (1) WO2020155764A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651304A (zh) * 2020-12-11 2021-04-13 西安电子科技大学 Cancelable palmprint template generation method, device, equipment and storage medium based on feature fusion
CN114004757A (zh) * 2021-10-14 2022-02-01 大族激光科技产业集团股份有限公司 Method, system, device and storage medium for removing interference from industrial images

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902586A (zh) * 2019-01-29 2019-06-18 平安科技(深圳)有限公司 Palmprint extraction method, device, storage medium and server
CN110414332A (zh) * 2019-06-20 2019-11-05 平安科技(深圳)有限公司 Palmprint recognition method and device
CN110473242B (zh) * 2019-07-09 2022-05-27 平安科技(深圳)有限公司 Texture feature extraction method, texture feature extraction device and terminal device
CN111667520B (zh) * 2020-06-09 2023-05-16 中国人民解放军63811部队 Registration method and device for infrared images and visible light images, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020164055A1 (en) * 2001-03-26 2002-11-07 Nec Corporation Fingerprint/palmprint image processor and processing method
CN105184216A (zh) * 2015-07-24 2015-12-23 山东大学 Digital extraction method for the palm print of the second heart region
CN108805023A (zh) * 2018-04-28 2018-11-13 平安科技(深圳)有限公司 Image detection method and device, computer equipment and storage medium
CN108875621A (zh) * 2018-06-08 2018-11-23 平安科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN109902586A (zh) * 2019-01-29 2019-06-18 平安科技(深圳)有限公司 Palmprint extraction method, device, storage medium and server

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3505713B2 (ja) * 2000-10-11 2004-03-15 國枝 博昭 Curve identification system
WO2018121552A1 (zh) * 2016-12-29 2018-07-05 北京奇虎科技有限公司 Service processing method, device, program and medium based on palmprint data
CN108256456B (zh) * 2018-01-08 2020-04-07 杭州电子科技大学 Finger vein recognition method based on multi-feature threshold fusion


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651304A (zh) * 2020-12-11 2021-04-13 西安电子科技大学 Cancelable palmprint template generation method, device, equipment and storage medium based on feature fusion
CN114004757A (zh) * 2021-10-14 2022-02-01 大族激光科技产业集团股份有限公司 Method, system, device and storage medium for removing interference from industrial images
CN114004757B (zh) * 2021-10-14 2024-04-05 大族激光科技产业集团股份有限公司 Method, system, device and storage medium for removing interference from industrial images

Also Published As

Publication number Publication date
CN109902586A (zh) 2019-06-18

Similar Documents

Publication Publication Date Title
WO2020155764A1 (zh) Palmprint extraction method, device, storage medium and server
US7072523B2 (en) System and method for fingerprint image enhancement using partitioned least-squared filters
US9633269B2 (en) Image-based liveness detection for ultrasonic fingerprints
WO2019205290A1 (zh) 一种图像检测方法、装置、计算机设备及存储介质
WO2020147257A1 (zh) 一种人脸识别方法和装置
US8358813B2 (en) Image preprocessing
CN108596197B (zh) 一种印章匹配方法及装置
US10558841B2 (en) Method and apparatus for recognizing fingerprint ridge point
CN106981077B (zh) 基于dce和lss的红外图像和可见光图像配准方法
WO2019232945A1 (zh) 图像处理方法、装置、计算机设备及存储介质
Frucci et al. WIRE: Watershed based iris recognition
CN110084135A (zh) 人脸识别方法、装置、计算机设备及存储介质
CN109919960B (zh) 一种基于多尺度Gabor滤波器的图像连续边缘检测方法
US10922535B2 (en) Method and device for identifying wrist, method for identifying gesture, electronic equipment and computer-readable storage medium
US11475707B2 (en) Method for extracting image of face detection and device thereof
CN108510499A (zh) 一种基于模糊集和Otsu的图像阈值分割方法及装置
CN109993161A (zh) 一种文本图像旋转矫正方法及系统
CN111832405A (zh) 一种基于hog和深度残差网络的人脸识别方法
CN111223063A (zh) 基于纹理特征和双核函数的手指静脉图像nlm去噪方法
CN110008825A (zh) 掌纹识别方法、装置、计算机设备和存储介质
Hao et al. An optimized face detection based on adaboost algorithm
Thamaraimanalan et al. Multi biometric authentication using SVM and ANN classifiers
CN111027637A (zh) 一种文字检测方法及计算机可读存储介质
Elawady et al. Wavelet-based reflection symmetry detection via textural and color histograms: Algorithm and results
Fan et al. Skew detection in document images based on rectangular active contour

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19912490

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19912490

Country of ref document: EP

Kind code of ref document: A1