WO2019232945A1 - Image processing method and apparatus, computer device and storage medium - Google Patents

Image processing method and apparatus, computer device and storage medium (图像处理方法、装置、计算机设备及存储介质)

Info

Publication number
WO2019232945A1
WO2019232945A1 (PCT/CN2018/103809)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
finger vein
evaluation
value
Prior art date
Application number
PCT/CN2018/103809
Other languages
English (en)
French (fr)
Inventor
惠慧
侯丽
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2019232945A1 publication Critical patent/WO2019232945A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image processing method, device, computer device, and storage medium.
  • Finger vein recognition is a new biometric technology and one of the most advanced emerging second-generation biometric techniques. Because of its high security level, high stability, broad applicability, and the convenience of its acquisition equipment, it has become a research hotspot for many scholars at home and abroad.
  • Traditional finger vein extraction algorithms perform poorly on low-end acquisition equipment and cannot accurately extract the vein pattern from vein images; in particular, accuracy cannot be guaranteed when the vein pattern is extracted from unclear vein images, so the accuracy of vein pattern extraction is low.
  • An image processing method includes:
  • acquiring an original finger vein image using a finger vein acquisition device;
  • performing a Gabor filter transformation on the finger vein image to obtain an enhanced image;
  • cutting the enhanced image according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
  • for each cutting line, calculating a curvature value of each pixel point on the cutting line, determining pixel points with a curvature value greater than zero as evaluation pixel points, and determining the area where consecutive evaluation pixel points are located as a local vein area;
  • for each evaluation pixel point, obtaining the width of the local vein area containing the evaluation pixel point, and using the product of this width and the curvature value of the evaluation pixel point as the evaluation score of the evaluation pixel point;
  • using the evaluation scores to adjust the pixel values of the evaluation pixel points, obtaining a corrected pixel value of each evaluation pixel point, and updating the enhanced image with the corrected pixel values;
  • binarizing the updated enhanced image to obtain a vein image.
  • An image processing device includes:
  • An acquisition module for acquiring a raw finger vein image using a finger vein acquisition device
  • a transformation module configured to perform Gabor filtering transformation on the finger vein image to obtain an enhanced image
  • a cutting module configured to cut the enhanced image according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
  • A matching module configured to calculate, for each of the cutting lines, a curvature value of each pixel point on the cutting line, determine pixel points with a curvature value greater than zero as evaluation pixel points, and determine the area where consecutive evaluation pixel points are located as the local vein area;
  • A calculation module configured to obtain, for each of the evaluation pixels, the width of the local vein region including the evaluation pixel, and use the product of the width and the curvature value of the evaluation pixel as the evaluation score of the evaluation pixel;
  • An update module configured to adjust the pixel value of the evaluation pixel using the evaluation score to obtain a corrected pixel value of each of the evaluation pixels, and update the enhanced image using the corrected pixel value;
  • a binarization module is used for binarizing the updated enhanced image to obtain a vein image.
  • A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the steps of the above image processing method are implemented.
  • One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of the image processing method described above.
  • FIG. 1 is a schematic diagram of an application environment of an image processing method according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 3 is an example diagram of cutting an enhanced image in the image processing method provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of graying and grayscale inversion processing of a finger vein image in the image processing method according to an embodiment of the present application.
  • FIG. 5 is a flowchart of step S7 in the image processing method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a computer device according to an embodiment of the present application.
  • FIG. 1 illustrates an application environment provided by an embodiment of the present application.
  • The application environment includes a server and a client, where the server and the client are connected through a network; the client is used to collect a finger vein image and send the collected finger vein image to the server.
  • The client may specifically be, but is not limited to, a video camera, a still camera, a scanner, or another finger vein image acquisition device with a photographing function; the server is used to extract the finger vein pattern from the finger vein image.
  • the server can be implemented by an independent server or a server cluster composed of multiple servers.
  • the image processing method provided in the embodiment of the present application is applied to a server.
  • an image processing method is provided.
  • the method is applied to the server in FIG. 1 as an example, and includes the following steps:
  • S1 Use the finger vein acquisition device to obtain the original finger vein image.
  • the original finger vein image refers to a finger vein image directly acquired from a finger vein acquisition device without any processing.
  • It should be noted that the quality of finger vein images acquired by different finger vein acquisition devices varies, and the finger vein images collected by commonly used acquisition devices are of relatively low quality. The method provided in the embodiments of the present application can accurately recognize the finger vein pattern in low-quality finger vein images, thereby effectively improving the accuracy of finger vein pattern extraction from finger vein images as well as the applicability to a variety of different finger vein acquisition devices.
  • S2 Perform a Gabor filter transformation on the finger vein image to obtain an enhanced image.
  • In this embodiment, to further improve the quality of the finger vein image acquired in step S1, a Gabor filter transformation is used to enhance the image, and the processed enhanced image is finally obtained.
  • Specifically, a convolution operation is performed on the finger vein image with a Gabor filter function, and the enhanced image is obtained from the result of the convolution. The convolution operation uses a convolution kernel to perform a series of operations on each pixel point in the finger vein image. The convolution kernel is a preset matrix template used to operate on the finger vein image; it can specifically be a square grid structure, such as a 3*3 matrix, in which each element has a preset weight value. When the convolution kernel is used, its center is placed on the target pixel point to be calculated, the weight value of each element of the kernel is multiplied by the pixel value of the image pixel point it covers, and the products are summed; the result is the new pixel value of the target pixel point.
  • the Gabor filter transform is a windowed Fourier transform.
  • the Gabor function can extract the relevant features of the image in different scales and directions in the frequency domain to achieve the enhancement effect on the image.
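  • As an illustrative, non-limiting sketch of this enhancement step (not the exact filter of formula (5), which is detailed later), the following Python code applies a single Gabor kernel with OpenCV; the kernel size and parameter values shown are assumptions chosen only for demonstration:

```python
import cv2
import numpy as np

def gabor_enhance(finger_vein_img: np.ndarray, ksize: int = 21, sigma: float = 4.0,
                  theta: float = 0.0, lambd: float = 8.0, gamma: float = 0.5,
                  psi: float = 0.0) -> np.ndarray:
    """Convolve a grayscale finger vein image with one Gabor kernel and rescale to 0-255."""
    kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi,
                                ktype=cv2.CV_32F)
    response = cv2.filter2D(finger_vein_img.astype(np.float32), cv2.CV_32F, kernel)
    return cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```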
  • S3 Cut the enhanced image according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer.
  • the preset cutting direction may be horizontal cutting, vertical cutting, or cutting in other directions, which may be specifically set according to actual application requirements, which is not limited herein.
  • the preset pixel interval refers to a preset number of pixels as an interval. It can be set at an interval of 1 pixel or at an interval of 5 pixels. It can also be set according to the needs of the actual application. There is no restriction here.
  • the enhanced image obtained in step S2 is cut according to a preset cutting direction and a preset pixel interval to obtain n cutting lines.
  • FIG. 3 is a schematic diagram of cutting the enhanced image obtained in step S2.
  • In this enhanced image, the finger is placed horizontally, the preset cutting direction is vertical, and the preset pixel interval is 5 pixels; the enhanced image is cut accordingly. If each row of the image has 2000 pixels, 399 vertical cutting lines are obtained.
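  • A minimal sketch of this cutting step, assuming a horizontal finger and vertical cutting lines every 5 pixels as in the example above (the function name is illustrative):

```python
import numpy as np

def vertical_cutting_lines(enhanced: np.ndarray, interval: int = 5) -> list:
    """Pixel-value profiles of vertical cutting lines taken every `interval` columns."""
    _, width = enhanced.shape
    # e.g. columns 5, 10, ..., 1995 -> 399 cutting lines for a 2000-pixel-wide image
    return [enhanced[:, c].astype(np.float64) for c in range(interval, width, interval)]
```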
  • S4 For each cutting line, calculate the curvature value of each pixel point on the cutting line, determine pixel points with a curvature value greater than zero as evaluation pixel points, and determine the area where consecutive evaluation pixel points are located as a local vein area.
  • In this embodiment, the curvature value of each pixel point is calculated according to formula (1):
  • K(z) = P_f''(z) / (1 + (P_f'(z))^2)^(3/2)    Formula (1)
  • where z is a pixel point on the cutting line, K(z) is the curvature value of the pixel point z, P_f(z) is the pixel value of the pixel point z, P_f''(z) is the second derivative of P_f(z), and P_f'(z) is the first derivative of P_f(z).
  • Specifically, for each cutting line obtained in step S3, the curvature value of each pixel point on the cutting line is calculated according to formula (1). The curvature value is used to determine whether the pixel belongs to a vein: if the curvature value of the pixel is greater than 0, the pixel is a pixel on a vein and is used as an evaluation pixel; if the curvature value of the pixel is less than or equal to 0, the pixel does not belong to a vein.
  • the local vein area is composed of continuous pixels with a curvature value greater than 0, that is, continuous evaluation pixels.
  • Since the local vein area is composed of continuous pixels with a curvature value greater than 0, its width can be taken as the number of such continuous pixels. For example, if the number of continuous pixels with a curvature value greater than 0 is 5, the width of the local vein area is 5.
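  • A sketch of this evaluation-pixel and local-vein-area step, using the profile-curvature form of formula (1) reconstructed above and simple finite differences for the derivatives (an assumption about the numerical implementation, which the disclosure leaves open):

```python
import numpy as np

def curvature(profile: np.ndarray) -> np.ndarray:
    """K(z) = Pf''(z) / (1 + Pf'(z)^2)^(3/2) along one cutting line."""
    d1 = np.gradient(profile)   # first derivative Pf'(z)
    d2 = np.gradient(d1)        # second derivative Pf''(z)
    return d2 / (1.0 + d1 ** 2) ** 1.5

def local_vein_regions(curv: np.ndarray) -> list:
    """Runs of consecutive pixels with curvature > 0, returned as (start, end) index pairs."""
    regions, start = [], None
    for i, positive in enumerate(curv > 0):
        if positive and start is None:
            start = i
        elif not positive and start is not None:
            regions.append((start, i))   # end index is exclusive
            start = None
    if start is not None:
        regions.append((start, len(curv)))
    return regions
```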
  • S5 For each evaluation pixel, the width of the local vein area containing the evaluation pixel is multiplied by the curvature value of the evaluation pixel, and the result of the multiplication is used as the evaluation score of the evaluation pixel.
  • In this embodiment, the evaluation score of an evaluation pixel is calculated by formula (2):
  • S_r(z_i) = k(z_i) * W_r    Formula (2)
  • where z_i is the i-th evaluation pixel, i is a positive integer greater than 0, S_r(z_i) is the evaluation score of the i-th evaluation pixel, k(z_i) is the curvature value of the i-th evaluation pixel, and W_r is the width of the local vein area containing z_i.
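  • Continuing the sketch, formula (2) applied to every evaluation pixel of one cutting line (helper names carried over from the sketch above):

```python
def evaluation_scores(curv, regions) -> dict:
    """S_r(z_i) = k(z_i) * W_r, keyed by the pixel's index on the cutting line."""
    scores = {}
    for start, end in regions:
        width = end - start               # W_r: length of the local vein area
        for i in range(start, end):
            scores[i] = curv[i] * width   # k(z_i) * W_r
    return scores
```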
  • S6 Use the evaluation score to adjust the pixel value of the evaluation pixel to obtain the corrected pixel value of each evaluation pixel, and use the corrected pixel value to update the enhanced image.
  • the original pixel value of each evaluation pixel point is added to its corresponding evaluation score, and the obtained sum is used as the corrected pixel value of the evaluation pixel point.
  • According to the corrected pixel value of each evaluation pixel, the pixel value of each evaluation pixel is adjusted and the updated enhanced image is obtained, so that the points in the vein area become more obvious, the recognizability of the vein area is improved, and vein areas and non-vein areas can be better distinguished.
  • Specifically, the corrected pixel value of an evaluation pixel is calculated by formula (3):
  • V_a'(x, y) = V_a(x, y) + S_r(z_a)    Formula (3)
  • where x and y are the abscissa and ordinate of the a-th evaluation pixel in the finger vein image, a is a positive integer greater than 0, z_a is the a-th evaluation pixel, V_a'(x, y) is the corrected pixel value of the a-th evaluation pixel, V_a(x, y) is the pixel value of the a-th evaluation pixel, and S_r(z_a) is the evaluation score of the a-th evaluation pixel.
  • the corrected pixel value is set to the maximum pixel value if the calculated corrected pixel value of the evaluation pixel point exceeds the maximum pixel value.
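  • A sketch of this update for one vertical cutting line, applying formula (3) in place and capping at the maximum pixel value of 255 as described (column-wise indexing is an assumption matching the vertical-cut example):

```python
def update_column(enhanced, col: int, scores: dict) -> None:
    """V_a'(x, y) = V_a(x, y) + S_r(z_a), clamped to 255, applied in place along column `col`."""
    for row, score in scores.items():
        enhanced[row, col] = min(float(enhanced[row, col]) + score, 255)
```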
  • S7 Binarize the updated enhanced image to obtain a vein image.
  • In this embodiment, the updated enhanced image is obtained according to step S6; so that the pixel values of the pixels in the image take only 0 or 255, that is, the image shows only black and white, the enhanced image needs to be further binarized.
  • Binarization is to set the pixel value of the pixels on the image to 0 or 255, that is, to render the entire image with obvious visual effects of only black and white.
  • Specifically, each pixel point in the updated enhanced image obtained in step S6 is scanned. If the pixel value of the pixel point is less than a preset pixel threshold, the pixel value of the pixel point is set to 0, that is, the pixel point becomes black; if the pixel value of the pixel point is greater than or equal to the preset pixel threshold, the pixel value of the pixel point is set to 255, that is, the pixel point becomes white, and a binarized image is obtained.
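  • The binarization itself is a single global threshold; a one-function sketch with an assumed threshold value of 128:

```python
import numpy as np

def binarize(img: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Pixels below the preset threshold become 0 (black); all others become 255 (white)."""
    return np.where(img < threshold, 0, 255).astype(np.uint8)
```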
  • In this embodiment, an enhanced image is obtained by performing a Gabor filter transformation on the finger vein image; the enhanced image is cut to obtain n cutting lines; the curvature value of each pixel point on each cutting line is calculated, pixels with a curvature value greater than 0 are taken as evaluation pixels, and the area where consecutive pixels with a curvature value greater than zero are located is taken as a local vein area; for each evaluation pixel, an evaluation score is calculated as the product of its curvature value and the width of the local vein area containing it; the evaluation scores are then used to adjust the pixel values of the evaluation pixels, the corrected pixel value of each evaluation pixel is obtained, and the pixels of the enhanced image are updated; finally, the updated enhanced image is binarized to obtain a vein image.
  • On the one hand, the Gabor filter transformation improves the image quality of the finger vein image, which improves the accuracy of recognizing the vein pattern during extraction, so that the vein pattern can be accurately located even in low-quality finger vein images collected by low-end finger vein acquisition equipment; this effectively improves the accuracy of vein pattern extraction from finger vein images and the applicability to a variety of different finger vein acquisition devices. On the other hand, the curvature algorithm can quickly identify the vein pattern in the vein image, improving the efficiency of vein pattern recognition, and calculating the evaluation scores further helps to accurately distinguish vein areas from non-vein areas, thereby further improving the accuracy of vein pattern extraction.
  • In an embodiment, after step S1 and before step S2, the image processing method may further perform graying and grayscale inversion processing on the image, as detailed below:
  • S81 Traverse the pixel points in the finger vein image to obtain the RGB component value of each pixel point. Specifically, the pixels in the finger vein image are traversed in a preset traversal manner to obtain the RGB component value of each pixel, where R, G, and B represent the red, green, and blue channels respectively.
  • the preset traversal method may specifically use the upper left pixel point of the finger vein image as a starting point, and traverse line by line from top to bottom in order from left to right, or from the midline position of the finger vein image to both sides simultaneously. Traversing can also be other traversal methods, which are not limited here.
  • S82 According to the RGB component values of the pixel points, gray the finger vein image according to formula (4) to obtain a grayed image:
  • g(x, y) = k_1*R(x, y) + k_2*G(x, y) + k_3*B(x, y)    Formula (4)
  • where x and y are the abscissa and ordinate of each pixel point in the finger vein image, g(x, y) is the gray value of the pixel point (x, y) after graying, R(x, y) is the R-channel color component of the pixel point (x, y), G(x, y) is the G-channel color component of the pixel point (x, y), B(x, y) is the B-channel color component of the pixel point (x, y), and k_1, k_2, and k_3 are the proportion parameters corresponding to the R, G, and B channels respectively.
  • In this embodiment, in order to accurately extract the information content of the finger vein image, the finger vein image first needs to be grayed. The parameter values of k_1, k_2, k_3, and σ can be set according to the needs of the actual application and are not limited here; by adjusting the value ranges of k_1, k_2, and k_3, the proportions of the R channel, G channel, and B channel can be adjusted respectively.
  • the RGB model is a commonly used expression of color information. It uses the brightness of the three primary colors of red, green, and blue to quantify the color.
  • This model is also called additive color mixing model, which is a method of mixing colors by superimposing RGB three-color light on each other. Therefore, this model is suitable for the display of light emitters such as displays.
  • It should be noted that, in this embodiment, the gray value is calculated as a weighted sum according to formula (4). In other embodiments, the component method, the maximum value method, or the average value method may also be used to gray the image; there is no restriction here.
  • S83 Perform grayscale inversion processing on the grayed image to obtain a finger vein image after grayscale inversion.
  • Specifically, each pixel in the grayed image obtained in step S82 is traversed to obtain its pixel value, and grayscale inversion is performed on the grayed image: the pixel value range of the pixels in the grayed image is mapped from [0, 255] to [255, 0], that is, a pixel value of 0 is adjusted to 255 and a pixel value of 255 is adjusted to 0, so that the originally white pixels in the grayed image become black pixels and the originally black pixels become white pixels. The grayscale-inverted finger vein image is obtained after this grayscale inversion processing.
  • It should be noted that, to facilitate computation in different environments, the pixel value range can be further compressed from [0, 255] to [0, 1], that is, the pixel value of each pixel is divided by 255 to obtain the compressed pixel value. For example, a pixel value of 1 becomes 1/255 after compression, a pixel value of 254 becomes 254/255, and the pixel values of other pixels are converted in the same way.
  • In this embodiment, the pixels of the finger vein image are traversed and the RGB component values of each pixel are obtained; the finger vein image is then grayed using formula (4), which sets the pixel values of the image within 0-255, reducing the amount of original data in the image and improving the computational efficiency of subsequent processing; grayscale inversion is then performed on the grayed image, which makes the display effect of the image clearer and improves the accuracy of the subsequent finger vein pattern extraction.
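  • A sketch of steps S81 to S83, with illustrative channel weights (the disclosure leaves k_1, k_2, and k_3 to be set per application) and the optional compression to [0, 1]:

```python
import numpy as np

def gray_and_invert(rgb: np.ndarray, k1: float = 0.299, k2: float = 0.587,
                    k3: float = 0.114, compress: bool = False) -> np.ndarray:
    """Weighted graying g = k1*R + k2*G + k3*B (formula (4)), then grayscale inversion."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = k1 * r + k2 * g + k3 * b     # formula (4)
    inverted = 255.0 - gray             # map the range [0, 255] onto [255, 0]
    return inverted / 255.0 if compress else inverted.astype(np.uint8)
```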
  • In an embodiment, step S2, that is, performing a Gabor filter transformation on the finger vein image to obtain an enhanced image, specifically includes the following steps:
  • The Gabor filter transformation is performed on the finger vein image according to formula (5):
  • U(x, y) = I(x, y) ⊗ g(x, y; λ, θ, ψ, σ, γ)    Formula (5)
  • g(x, y; λ, θ, ψ, σ, γ) = exp(-(x'^2 + γ^2*y'^2) / (2σ^2)) * cos(2πx'/λ + ψ)
  • x' = x cosθ + y sinθ
  • y' = -x sinθ + y cosθ
  • where g(x, y; λ, θ, ψ, σ, γ) is the Gabor filter function, x and y are the abscissa and ordinate of a pixel point in the finger vein image, λ is the preset wavelength, θ is the preset direction, ψ is the phase offset, σ is the standard deviation of the Gaussian factor of the Gabor function, γ is the aspect ratio, U(x, y) is the enhanced image, I(x, y) is the finger vein image, ⊗ is the tensor product (convolution) operation, and x' and y' are the abscissa and ordinate of the pixel point (x, y) in the finger vein image after rotation by θ.
  • Specifically, using the preset wavelength and the preset direction, the Gabor filter function of formula (5) is used to transform the finger vein image, filtering out the high-frequency components of the finger vein image and leaving only the low-frequency part, while along the preset direction the low-frequency components are filtered out and only the high-frequency part is kept; the image finally becomes highlighted, which is the enhanced image obtained by the Gabor filter transformation.
  • the preset wavelength ⁇ can be set to 1, or it can be set according to actual needs, which is not limited here.
  • the preset direction ⁇ can be selected as 0, These 8 directions can also choose other directions, which can be selected according to the actual application requirements, and there is no limitation here.
  • In this embodiment, Gabor filter transformation is performed on the finger vein image according to formula (5), which quickly highlights the image and achieves an image-enhancement effect, thereby improving the image quality of the finger vein image and the discrimination rate of the vein pattern in it, so that accurate positioning can be achieved when vein pattern extraction is performed on low-quality finger vein images collected by low-end finger vein acquisition devices. This improves the accuracy of vein pattern extraction and also the applicability to different finger vein acquisition devices.
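  • A sketch of formula (5) written out directly over a bank of 8 evenly spaced orientations, using scipy.signal.convolve2d for the convolution; the orientation set, the parameter values, and the choice to keep the strongest per-pixel response are illustrative assumptions (the disclosure shows the direction set only as an image and fuses directional results later, in steps S71 to S73):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(lambd=8.0, theta=0.0, psi=0.0, sigma=4.0, gamma=0.5, half=10):
    """Real Gabor kernel g(x, y; λ, θ, ψ, σ, γ) on a (2*half+1) x (2*half+1) grid."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    x_r = x * np.cos(theta) + y * np.sin(theta)    # x' = x cosθ + y sinθ
    y_r = -x * np.sin(theta) + y * np.cos(theta)   # y' = -x sinθ + y cosθ
    return np.exp(-(x_r ** 2 + (gamma * y_r) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * x_r / lambd + psi)

def gabor_bank_enhance(img: np.ndarray, n_orientations: int = 8) -> np.ndarray:
    """U(x, y) = I(x, y) convolved with g, keeping the strongest response over orientations."""
    img = img.astype(np.float64)
    responses = [convolve2d(img, gabor_kernel(theta=k * np.pi / n_orientations), mode='same')
                 for k in range(n_orientations)]
    return np.maximum.reduce(responses)
```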
  • In an embodiment, after step S6 and before step S7, the image processing method may further correct the pixel value of each pixel point, as detailed below:
  • For each pixel point in the updated enhanced image, according to a preset adjacent region, the pixel values of the neighboring pixel points in that region are used to correct the pixel value of the pixel point.
  • the pixel value of each pixel point is modified according to formula (6):
  • x and y are the abscissa and ordinate of each pixel in the finger vein image
  • V (x, y) is the pixel value of pixel (x, y) in the updated enhanced image
  • C (x, y ) Is the corrected pixel value of pixel point (x, y).
  • Specifically, for a pixel point (x, y) in the updated enhanced image, the two adjacent pixel points on its left, (x-1, y) and (x-2, y), and the two adjacent pixel points on its right, (x+1, y) and (x+2, y), are selected. If the pixel value of (x, y) is as large as the pixel values of the pixels on both sides, no processing is performed. If the pixel value of (x, y) differs from the pixel values of the pixels on both sides, the larger of the two pixel values on the left is selected, then the larger of the two pixel values on the right, and finally the smaller of these two maxima is used to correct the pixel point (x, y).
  • It should be noted that if the pixel point is on the image boundary, only the pixel values on one side are compared. For example, if the pixel point is on the left edge of the image, the larger pixel value of the two adjacent pixel points to its right is used to correct its pixel value; if the pixel point is on the right edge of the image, the larger pixel value of the two adjacent pixel points to its left is used to correct its pixel value.
  • In this embodiment, if the pixel value of the pixel point (x, y) is very small while the pixel values on both sides are large, formula (6) increases the pixel value of (x, y) so that the pixel point and the pixel points on both sides can be connected to form a line of texture; if the pixel value of (x, y) is large while the pixel values on both sides are small, the pixel point is considered noise, and to prevent this noise from interfering with the extraction of the vein pattern, formula (6) decreases the pixel value of (x, y). In this way, noise in the finger vein image is eliminated, the vein area becomes more obvious, the recognizability of the vein pattern is improved, and the accuracy of the subsequent vein pattern extraction is also improved.
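  • Formula (6) is stated explicitly, so a direct sketch over the interior pixels of each row follows (the one-sided boundary handling described above is omitted for brevity; the x axis is taken as the column index):

```python
import numpy as np

def correct_pixels(v: np.ndarray) -> np.ndarray:
    """C(x, y) = min(max(V(x+1, y), V(x+2, y)), max(V(x-1, y), V(x-2, y))) for interior pixels."""
    v = v.astype(np.float64)
    corrected = v.copy()
    right = np.maximum(v[:, 3:-1], v[:, 4:])   # larger of the two right-hand neighbours
    left = np.maximum(v[:, 1:-3], v[:, :-4])   # larger of the two left-hand neighbours
    corrected[:, 2:-2] = np.minimum(right, left)
    return corrected
```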
  • the preset cutting direction includes at least two directions, that is, the finger vein image can be image processed based on two or more different cutting directions to obtain a vein image.
  • The preset cutting directions may specifically include the 4 directions of 45°, 90°, 135°, and 180°, but are not limited to these; other directions may also be included and can be set according to the needs of the actual application, without limitation here.
  • As shown in FIG. 5, step S7, that is, binarizing the updated enhanced image to obtain a vein image, specifically includes the following steps:
  • S71 Use the updated enhanced image obtained for each cutting direction as an image to be synthesized.
  • In this embodiment, for each cutting direction, the updated enhanced image obtained through steps S3 to S6 in that cutting direction is used as an image to be synthesized. For example, if the preset cutting directions include the 4 directions of 45°, 90°, 135°, and 180°, the updated enhanced image obtained with the 45° cutting direction is one image to be synthesized, the updated enhanced image obtained with the 90° cutting direction is another image to be synthesized, and so on for the other directions, giving four images to be synthesized in total.
  • S72 For pixel points at the same position in each image to be synthesized, select the maximum pixel value of that pixel point across the images to be synthesized as its pixel value in the synthesized image, to obtain the synthesized image. Specifically, based on the images to be synthesized obtained in step S71, the pixel values of the pixel points at the same position in each image to be synthesized are compared, and the largest pixel value is selected as the pixel value of the pixel point at the corresponding position in the synthesized image.
  • S73 Binarize the synthesized image to obtain the vein image.
  • In this embodiment, on the basis of the synthesized image obtained in step S72, so that the pixel values of the pixels in the image take only 0 or 255, that is, the image shows only black and white, the synthesized image needs to be further binarized to obtain the vein image.
  • Specifically, each pixel point in the synthesized image obtained in step S72 is scanned: if the pixel value of the pixel point is smaller than a preset pixel threshold, it is set to 0, that is, the pixel point becomes black; if the pixel value is greater than or equal to the preset pixel threshold, it is set to 255, that is, the pixel point becomes white, and the vein image is obtained.
  • In this embodiment, different images to be synthesized are obtained for different cutting directions; the pixel values of the pixel points at each same position in the images to be synthesized are then compared, and the maximum pixel value of each pixel point is selected as the pixel value of the corresponding pixel point in the synthesized image, thereby synthesizing the images. Finally, binarization is performed on the synthesized image to obtain the vein image. Because vein pattern extraction from an enhanced image obtained in only one cutting direction may contain errors, synthesizing the images to be synthesized from multiple cutting directions and then binarizing the synthesized image can effectively reduce errors, achieve accurate extraction of the vein pattern, and improve the accuracy of vein pattern extraction.
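  • A sketch of steps S71 to S73, where each element of images_to_synthesize is assumed to be the updated enhanced image produced by running steps S3 to S6 with one cutting direction, and the threshold value is an assumption:

```python
import numpy as np

def fuse_and_binarize(images_to_synthesize, threshold: int = 128) -> np.ndarray:
    """Per-pixel maximum over the per-direction images, then global binarization."""
    fused = np.maximum.reduce([np.asarray(img) for img in images_to_synthesize])
    return np.where(fused < threshold, 0, 255).astype(np.uint8)
```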
  • an image processing apparatus corresponds to the image processing method in the above embodiment one-to-one.
  • the image processing apparatus includes: an acquisition module 61, a transformation module 62, a cutting module 63, a matching module 64, a calculation module 65, an update module 66, and a binarization module 67.
  • the detailed description of each function module is as follows:
  • An acquisition module 61 configured to acquire an original finger vein image using a finger vein acquisition device
  • a transformation module 62 configured to perform Gabor filtering transformation on the finger vein image to obtain an enhanced image
  • a cutting module 63 configured to cut the enhanced image according to a preset cutting direction and a preset pixel interval to obtain n cutting lines, where n is a positive integer;
  • The matching module 64 is configured to calculate, for each cutting line, the curvature value of each pixel point on the cutting line, determine pixel points with a curvature value greater than zero as evaluation pixel points, and determine the area where consecutive evaluation pixel points are located as a local vein area;
  • a calculation module 65 is configured to obtain, for each evaluation pixel, a width of a local vein region including the evaluation pixel, and use a product of the width and a curvature value of the evaluation pixel as an evaluation score of the evaluation pixel;
  • An update module 66 is configured to adjust the pixel value of the evaluation pixel using the evaluation score, obtain a corrected pixel value of each evaluation pixel, and update the enhanced image using the corrected pixel value;
  • a binarization module 67 is configured to perform binarization processing on the updated enhanced image to obtain a vein image.
  • the image processing apparatus further includes:
  • An obtaining module 68 configured to traverse the pixels in the finger vein image to obtain the RGB component value of each pixel
  • A graying module 69, configured to gray the finger vein image according to the following formula, based on the RGB component values of the pixel points, to obtain a grayed image:
  • g(x, y) = k_1*R(x, y) + k_2*G(x, y) + k_3*B(x, y)
  • where x and y are the abscissa and ordinate of each pixel point in the finger vein image, g(x, y) is the gray value of the pixel point (x, y) after graying, R(x, y) is the R-channel color component of the pixel point (x, y), G(x, y) is the G-channel color component of the pixel point (x, y), B(x, y) is the B-channel color component of the pixel point (x, y), and k_1, k_2, and k_3 are the proportion parameters corresponding to the R, G, and B channels respectively;
  • the inversion module 610 is configured to perform grayscale inversion processing on the grayed image to obtain a finger vein image after grayscale inversion.
  • transformation module 62 includes:
  • A filtering sub-module 621, configured to perform the Gabor filter transformation on the finger vein image according to the following formulas:
  • U(x, y) = I(x, y) ⊗ g(x, y; λ, θ, ψ, σ, γ)
  • g(x, y; λ, θ, ψ, σ, γ) = exp(-(x'^2 + γ^2*y'^2) / (2σ^2)) * cos(2πx'/λ + ψ)
  • x' = x cosθ + y sinθ
  • y' = -x sinθ + y cosθ
  • where g(x, y; λ, θ, ψ, σ, γ) is the Gabor filter function, x and y are the abscissa and ordinate of a pixel point in the finger vein image, λ is the preset wavelength, θ is the preset direction, ψ is the phase offset, σ is the standard deviation of the Gaussian factor of the Gabor function, γ is the aspect ratio, U(x, y) is the enhanced image, I(x, y) is the finger vein image, ⊗ is the tensor product (convolution) operation, and x' and y' are the abscissa and ordinate of the pixel point (x, y) in the finger vein image after rotation by θ.
  • the image processing apparatus further includes:
  • the correction module 611 is configured to correct, for each pixel point in the updated enhanced image, the pixel value of the pixel point according to a preset adjacent region using the pixel value of the adjacent pixel point of the adjacent region.
  • the binarization module 67 includes:
  • A to-be-synthesized sub-module 671, configured to use the updated enhanced image obtained for each cutting direction as an image to be synthesized;
  • A synthesis sub-module 672, configured to, for the pixel points at the same position in each image to be synthesized, select the maximum pixel value of the pixel point across the images to be synthesized as the pixel value of that pixel point in the synthesized image, to obtain the synthesized image;
  • An extraction sub-module 673, configured to binarize the synthesized image to obtain the vein image.
  • Each module in the image processing apparatus may be implemented in whole or in part by software, hardware, and a combination thereof.
  • The above modules may be embedded, in hardware form, in or independent of the processor of the computer device, or may be stored in software form in the memory of the computer device, so that the processor can call them and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 7.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in a non-volatile storage medium.
  • the database of the computer equipment is used to store data of finger vein images.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by a processor to implement an image processing method.
  • In an embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the steps of the image processing method of the foregoing embodiment are implemented, for example, steps S1 to S7 shown in FIG. 2. Alternatively, when the processor executes the computer-readable instructions, the functions of the modules of the image processing apparatus in the foregoing embodiment are implemented, for example, the functions of modules 61 to 67 shown in FIG. 6. To avoid repetition, details are not repeated here.
  • In an embodiment, one or more non-volatile readable storage media are provided, on which computer-readable instructions are stored; when the computer-readable instructions are executed by one or more processors, the image processing method in the foregoing method embodiments is implemented, or the functions of the modules of the image processing apparatus in the foregoing apparatus embodiment are implemented. To avoid repetition, details are not repeated here.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), dual data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous chain (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

An image processing method and apparatus, a computer device, and a storage medium, relating to the field of image processing technologies. The image processing method includes: acquiring an original finger vein image from an acquisition device (S1); performing a Gabor filter transformation on the finger vein image to obtain an enhanced image (S2); cutting the enhanced image according to a preset cutting direction and a preset pixel interval to obtain n cutting lines (S3); for each cutting line, calculating the curvature value of each pixel point on the cutting line and determining the evaluation pixel points and the local vein areas (S4); for each evaluation pixel point, calculating its evaluation score (S5); adjusting the pixel values of the evaluation pixel points using the evaluation scores and updating the enhanced image (S6); and binarizing the updated enhanced image to obtain a vein image (S7). The method achieves accurate positioning of the vein pattern, improves the accuracy of vein pattern extraction, and improves the applicability to a variety of different finger vein acquisition devices.

Description

图像处理方法、装置、计算机设备及存储介质
本申请以2018年6月8日提交的申请号为201810588087.7,名称为“图像处理方法、装置、计算机设备及存储介质”的中国发明专利申请为基础,并要求其优先权。
技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像处理方法、装置、计算机设备及存储介质。
背景技术
手指静脉识别技术是一种新的生物特征识别技术,它利用手指静脉识别技术作为最先进的新兴的第二代生物识别技术之一,因其安全等级高,稳定性高,普适性强及采集设备便捷成为国内外诸多学者的研究热点。
传统的手指静脉提取算法在低端采集设备中使用不理想,无法准确提取静脉图像中的静脉纹路,尤其是对非清晰静脉图像进行静脉纹路提取时无法保证其准确性,导致静脉纹路提取的准确性较低。
发明内容
基于此,有必要针对上述技术问题,提供一种提高对手指静脉图像中静脉纹路提取的准确性的图像处理方法、装置、计算机设备及存储介质。
一种图像处理方法,包括:
使用指静脉采集设备获取原始的手指静脉图像;
对所述手指静脉图像进行Gabor滤波变换,得到增强图像;
根据预设的切割方向和预设的像素间隔,对所述增强图像进行切割,得到n条切割线,其中,n为正整数;
针对每条所述切割线,计算在该切割线上的每个像素点的曲率值,将所述曲率值大于零的像素点确定为评估像素点,并将连续的所述评估像素点所在的区域确定为局部静脉区域;
针对每个所述评估像素点,获取包含该评估像素点的所述局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像;
对更新后的增强图像进行二值化处理,得到静脉图像。
一种图像处理装置,包括:
采集模块,用于使用指静脉采集设备获取原始的手指静脉图像;
变换模块,用于对所述手指静脉图像进行Gabor滤波变换,得到增强图像;
切割模块,用于根据预设的切割方向和预设的像素间隔,对所述增强图像进行切割,得到n条切割线,其中,n为正整数;
匹配模块,用于针对每条所述切割线,计算在该切割线上的每个像素点的曲率值,将所述曲率值大于零的像素点确定为评估像素点,并将连续的所述评估像素点所在的区域确定为局部静脉区域;
计算模块,用于针对每个所述评估像素点,获取包含该评估像素点的所述局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
更新模块,用于使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像;
二值化模块,用于对更新后的增强图像进行二值化处理,得到静脉图像。
一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现上述图像处理方法的步骤。
一个或多个存储有计算机可读指令的非易失性可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行上述图像处理方法的步骤。
本申请的一个或多个实施例的细节在下面的附图和描述中提出,本申请的其他特征和优点将从说明书、附图以及权利要求变得明显。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例的描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的图像处理方法的应用环境示意图;
图2是本申请实施例提供的图像处理方法的流程图;
图3是本申请实施例提供的图像处理方法中对增强图像进行切割的示例图;
图4是本申请实施例提供的图像处理方法中对手指静脉图像进行灰度化及灰度反转处理的流程图;
图5是本申请实施例提供的图像处理方法中步骤S7的流程图;
图6是本申请实施例提供的图像处理装置的示意图;
图7是本申请实施例提供的计算机设备的示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
图1示出了本申请实施例提供的应用环境,该应用环境包括服务端和客户端,其中,服务端和客户端之间通过网络进行连接,客户端用于对手指静脉图像进行采集,并且将采集到的手指静脉图像发送到服务端,客户端具体可以但不限于是摄像机、相机、扫描仪或者带有其他拍照功能的手指静脉图像采集设备;服务端用于对手指静脉图像进行手指静脉纹路提取,服务端具体可以用独立的服务器或者多个服务器组成的服务器集群实现。本申请实施例提供的图像处理方法应用于服务端。
在一实施例中,如图2所示,提供一种图像处理方法,以该方法应用在图1中的服务器为例进行说明,包括如下步骤:
S1:使用指静脉采集设备获取原始的手指静脉图像。
在本申请实施例中,原始的手指静脉图像是指未经过任何处理,直接从指静脉采集设备中采集到的手指静脉图像。
需要说明的是,由于不同的指静脉采集设备获取到的手指静脉图像的质量不同,通常使用的指静脉采集设备采集到的手指静脉图像的质量均比较低,通过本申请实施例提供的方法,能够对低质量的手指静脉图像中手指静脉纹路进行准确的识别,从而有效提高手指静脉图像中手指静脉纹路提取的准确性,以及对多种不同指静脉采集设备的适用性。
S2:对手指静脉图像进行Gabor滤波变换,得到增强图像。
在本申请实施例中,根据步骤S1获取的手指静脉图像,为了进一步提高该手指静脉图像的质量,采用Gabor滤波变换的方法对图像作增强处理,最终得到处理后的增强图像。
具体地,根据Gabor滤波函数对手指静脉图像进行卷积运算,通过卷积运算结果获取增强图像。其中,卷积运算指的是使用一个卷积核对手指静脉图像中的每个像素点进行一系列操作,卷积核是预设的矩阵模板,用于与手指静脉图像进行运算,其具体可以是一个四方形的网格结构,例如3*3的矩阵,该矩阵中的每个元素都有一个预设的权重值,在使用卷积核进行计算时,将卷积核的中心放置在要计算的目标像素点上,计算卷积核中每个元素的权重值和其覆盖的图像像素点的像素值之间的乘积并求和,得到的结果即为目标像素点的新像素值。
Gabor滤波变换属于加窗傅里叶变换,Gabor函数可以在频域不同尺度、不同方向上提取图像的相关特征,实现对图像的增强效果。
S3:根据预设的切割方向和预设的像素间隔,对增强图像进行切割,得到n条切割线,其中,n为正整数。
在本申请实施例中,预设的切割方向可以是水平切割、垂直切割或者其它方向的切割,其具体可以根据实际应用的需要进行设置,此处不做限制。预设的像素间隔是指以预设个数的像素点作为间隔,其可以是以1个像素点为间隔,也可以是以5个像素点为间隔,具体也可以根据实际应用的需要进行设置,此处不做限制。
具体地,对步骤S2获取到的增强图像按照预设的切割方向和预设的像素间隔进行切割,获取n条切割线。
为了更好的理解本步骤,下面通过一个具体的例子进行说明。如图3所示,图3为对步骤S2得到的增强图像进行切割的示意图,在该增强图像中,手指水平放置,预设的切割方向为垂直方向,预设的像素间隔为5个像素点,对增强图像进行切割,若图像中每行有2000个像素点,则将得到399条垂直的切割线。
S4:针对每条切割线,计算在该切割线上的每个像素点的曲率值,将曲率值大于零的像素点确定为评估像素点,并将连续的评估像素点所在的区域确定为局部静脉区域。
在本申请实施例中,按照公式(1)对每个像素点的曲率值进行计算:
Figure PCTCN2018103809-appb-000001
其中,z为切割线上的像素点,K(z)为像素点z的曲率值,P f(z)为像素点z的像素值,
Figure PCTCN2018103809-appb-000002
为P f(z)的二阶导数值,
Figure PCTCN2018103809-appb-000003
为P f(z)的一阶导数值。
具体地,针对S3中获得的每条切割线,根据公式(1)计算该切割线上每个像素点的曲率值。使用曲率值对像素点是否属于静脉上的像素点进行判断,若像素点的曲率值大于0,则表示该像素点为静脉上的像素点,并将其作为评估像素点,若像素点的曲率值小于或者等于0,则表示该像素点不属于静脉上的像素点。并且,局部静脉区域由曲率值大于0的连续像素点构成,也即连续的评估像素点构成。
S5:针对每个评估像素点,获取包含该评估像素点的局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数。
在本申请实施例中,由于局部静脉区域是由曲率值大于0的连续像素点构成,故其宽 度可以为曲率值大于0的连续像素点的个数,例如,若曲率值大于0的连续像素点的个数为5,则该局部静脉区域的宽度为5。
针对每个评估像素点,将包含该评估像素点的局部静脉区域的宽度与该评估像素点的曲率值进行相乘,并将相乘得到的结果作为该评估像素点的评估分数。
具体地,通过公式(2)计算评估像素点的评估分数:
S r(z i)=k(z i)*W r   公式(2)
其中,z i为第i个评估像素点,i为大于0的正数,S r(z i)为第i个评估像素点的评估分数,k(z i)为第i个评估像素点的曲率值,W r为包含z i的局部静脉区域的宽度。
S6:使用评估分数对评估像素点的像素值进行调整,得到每个评估像素点的修正像素值,并使用修正像素值更新增强图像。
在本申请实施例中,针对每个评估像素点,将每个评估像素点的原始像素值与其对应的评估分数进行相加,得到的和作为该评估像素点的修正像素值,根据每个评估像素点的修正像素值,对每个评估像素点的像素值进行调整后获取增强图像,从而使静脉区域上的点变得更加明显,提高静脉区域的识别度,并且能够更好地识别出静脉区域和非静脉区域。
具体地,通过公式(3)计算评估像素点的修正像素值:
V a'(x,y)=V a(x,y)+S r(z a)    公式(3)
其中,x和y为手指静脉图像中第a个评估像素点的横坐标和纵坐标,a为大于0的正数,z a为第a个评估像素点,V a'(x,y)为第a个评估像素点的修正像素值,V a(x,y)为第a评估像素点的像素值,S r(z a)为第a个评估像素点的评估分数。
需要说明的是,若评估像素点经过计算后的修正像素值超过最大像素值,则将修正像素值设置为最大像素值。
S7:对更新后的增强图像进行二值化处理,得到静脉图像。
在本申请实施例中,根据步骤S6获取更新后的增强图像,为了让图像中的像素点的像素值只呈现0或者255,即图像只呈现黑色或者白色两种颜色,需要进一步对该增强图像进行二值化处理。
二值化,就是将图像上的像素点的像素值设置为0或255,也就是将整个图像呈现出明显的只有黑和白的视觉效果。
具体地,扫描步骤S6获取的更新后的增强图像中的每个像素点,若该像素点的像素值小于预设的像素阈值,则将该像素点的像素值设为0,即像素点变为黑色;若该像素点的像素值大于等于预设值的像素阈值,则将该像素点的像素值设为255,即像素点变为白色,得到二值化图像。
本实施例中,通过对手指静脉图像进行Gabor滤波变换得到增强图像,对该增强图像进行切割并获取n条切割线,计算每条切割线上每个像素点的曲率值,获取曲率值大于0的像素点作为评估像素点,以及获取曲率值大于零的连续像素点所在的区域作为局部静脉区域,利用评估像素点的曲率值与该评估像素点所在的局部静脉区域的宽度的积,对每个评估像素点进行计算得到评估分数,再利用评估分数对评估像素点的像素值进行调整,获取每个评估像素点的修正像素值并对增强图像上的像素点进行更新,最后对更新后的增强图像进行二值化处理,得到静脉图像。一方面,通过Gabor滤波变换提高手指静脉图像的图像质量,使得在对静脉纹路提取时能够提高识别静脉纹路的准确性,从而实现对低端指 静脉采集设备采集到的低质量手指静脉图像进行静脉纹路的准确定位,有效提高手指静脉图像中静脉纹路的提取的准确性,以及对多种不同指静脉采集设备的适用性;另一方面,通过曲率算法能够快速地识别静脉图像中的静脉纹路,提高静脉纹路的识别效率;并且通过计算评估分数能够进一步准确区分静脉区域和非静脉区域,从而进一步提高对静脉纹路提取的准确性。
在一实施例中,如图4所示,步骤S1之后,步骤S2之前,该图像处理方法还可以进一步对图像进行灰度化和灰度反转处理,详述如下:
S81:对手指静脉图像中的像素点进行遍历,获取每个像素点的RGB分量值。
具体地,按照预设的遍历方式对手指静脉图像中的像素点进行遍历,获取每个像素点的RGB分量值,其中,R、G、B分别代表红、绿、蓝三个通道的颜色。
其中,预设的遍历方式具体可以是以手指静脉图像的左上角像素点为起点,从上往下从左往右的顺序进行逐行遍历,也可以是从手指静脉图像的中线位置同时向两边遍历,还可以是其他遍历方式,此处不做限制。
S82:根据像素点的RGB分量值,按照公式(4)对手指静脉图像作灰度化处理,得到灰化图像:
g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)    公式(4)
其中,x和y为手指静脉图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为像素点(x,y)的R通道的颜色分量,G(x,y)为像素点(x,y)的G通道的颜色分量,B(x,y)为像素点(x,y)的B通道的颜色分量,k 1,k 2,k 3分别为R通道,G通道和B通道对应的占比参数。
在本申请实施例中,为了实现对手指静脉图像中信息内容的准确提取,首先需要对手指静脉图像进行灰度化处理,其中,k 1,k 2,k 3和σ的参数值可以根据实际应用的需要进行设置,此处不做限制,通过调节k 1,k 2,k 3的取值范围可以分别对R通道,G通道和B通道的占比进行调整。
RGB模型是目前常用的一种彩色信息表达方式,它使用红、绿、蓝三原色的亮度来定量表示颜色。该模型也称为加色混色模型,是以RGB三色光互相叠加来实现混色的方法,因而适合于显示器等发光体的显示。
灰度化是指在RGB模型中,如果R=G=B时,则色彩表示只有一种灰度颜色,其中R=G=B的值叫灰度值,因此,灰度图像每个像素只需一个字节存放灰度值,灰度范围为0-255。
需要说明的是,在本申请实施例中,通过公式(4)进行加权计算灰度值,在其他实施例中还可以采用分量法、最大值法或者平均值法对图像进行灰度化处理,此处不做限制。
S83:对灰化图像进行灰度反转处理,得到灰度反转后的手指静脉图像。
具体地,对步骤S82获取的灰化图像中的每个像素点进行遍历,获取每个像素点的像素值,对灰化图像进行灰度反转处理,将灰化图像中像素点的像素值范围从[0,255]变换为[255,0],即将像素点的像素值从0调整为255,将像素点的像素值从255调整为0,从而使灰化图像中原始的白色像素点变为黑色像素点,原始的黑色像素顶变为白色像素点,经过灰度反转处理后得到灰度反转后的手指静脉图像。
需要说明的是,为了方便在不同环境下的计算,还可进一步将像素点的取值范围从[0, 255]压缩为[0,1],即将每个像素点的像素值除以255得到压缩后的像素值,例如,像素值为1的像素点压缩后的像素值为1/255,像素值为254的像素点压缩后的像素值为254/255,其他像素点的像素值变换以此类推。
例如:在MATLAB工具中,可通过直接调用imadjust函数,对灰化图像进行灰度反转处理,将图像中像素值区间由原来的[0,255]变换为[255,0],再压缩变换为[1,0],生成与灰化图像灰度相反的手指静脉图像。
本实施例中,通过遍历手指静脉图像中的像素点并获取对应像素点的RGB分量值,根据获取到的每个像素点的RGB分量值,利用公式(4)对手指静脉图像进行灰度化处理,将图像中像素点的像素值范围设定在0-255之间,从而减少图像原始数据量,提高在后续处理计算中的计算效率;再对灰度化处理后的图像进行灰度反转处理,使图像的显示效果更加清晰,提高后续对手指静脉纹路提取的准确性。
在一实施例中,步骤S2中,即对手指静脉图像进行Gabor滤波变换,得到增强图像具体包括如下步骤:
按照公式(5)对手指静脉图像进行Gabor滤波变换:
Figure PCTCN2018103809-appb-000004
其中,
Figure PCTCN2018103809-appb-000005
为Gabor滤波函数,x和y为手指静脉图像中像素点的横坐标和纵坐标,λ为预设的波长,θ为预设的方向,
Figure PCTCN2018103809-appb-000006
为相位偏移,σ为gabor函数的高斯因子的标准差,γ为长宽比,U(x,y)为增强图像,I(x,y)为手指静脉图像,
Figure PCTCN2018103809-appb-000007
为张量积运算,x'和y'为所述手指静脉图像中像素点(x,y)根据θ旋转后的横坐标和纵坐标。
具体地,使用预设的波长和预设的方向,利用公式(5)的Gabor滤波函数对手指静脉图像进行变换,从而将手指静脉图像的高频波滤掉,只留下低频部分,在预设的方向上将低频波滤掉,只留下高频部分,最终使图像变得高亮,即通过Gabor滤波变换后得到的增强图像。
其中,预设的波长λ可取1,也可以根据实际需求进行设定,此处不做限制。预设的方向θ可以分别选取0、
Figure PCTCN2018103809-appb-000008
这8个方向,也可以选择其他方向,具体可以根据实际应用的需要进行选择,此处不做限制。
本实施例中,通过公式(5)对手指静脉图像进行Gabor滤波变换,能够快速地将图像变得高亮,达到图像增强的效果,从而提高手指静脉图像的图像质量,以及对手指静脉图像中纹路的辨别率,以便在对低端指静脉采集设备采集到的低质量手指静脉图像进行静脉纹路提取时,能够实现准确定位,从而提高静脉纹路提取的准确性,同时也提高对不同指静脉采集设备的适用性。
在一实施例中,步骤S6之后,以及步骤S7之前,该图像处理方法还可以进一步对每个像素点的像素值进行修正,详述如下:
针对更新后的增强图像中的每个像素点,按照预设的相邻区域,使用相邻区域的相邻像素点的像素值,对该像素点的像素值进行修正。
在本申请实施例中,按照公式(6)对每个像素点的像素值进行修正:
C(x,y)=min{max(V(x+1,y),V(x+2,y)),max(V(x-1,y),V(x-2,y))}    公式(6)
其中,x和y为手指静脉图像中每个像素点的横坐标和纵坐标,V(x,y)为更新后的增强图像中像素点(x,y)的像素值,C(x,y)为像素点(x,y)修正后的像素值。
具体地,选取更新后的增强图像中的像素点(x,y)左侧相邻两个像素点(x-1,y)、(x-2,y)和右侧相邻两个像素点(x+1,y)、(x+2,y),若(x,y)和两侧的像素点的像素值一样大,则不做处理;若像素点(x,y)的像素值和两侧的像素点的像素值不同,则选取左侧两个像素点的像素值中较大的像素值,再选取右侧两个像素点的像素值中较大的像素值,最后比较左侧较大的像素值和右侧较大的像素值,选取两者中较小的像素值对像素点(x,y)进行修正。
需要说明的是,若像素点为图像边界的像素点,则只对一侧的像素值进行比较,例如,若像素点位于图像左边界,则选取该像素点右侧相邻两个像素点的像素值中较大的像素值,对像素点的像素值进行修正;若像素点位于图像右边界,则选取该像素点左侧相邻两个像素点的像素值中较大的像素值,对像素点的像素值进行修正。
本实施例中,若像素点(x,y)的像素值很小而两侧的像素值很大,则通过公式(6)将像素点(x,y)的像素值调大,使得该像素点和两侧的像素点能够连接起来形成纹路;若像素点(x,y)的像素值很大而两侧的像素值很小,则认为该像素点为噪点,为避免该噪点对静脉纹路的提取造成干扰,通过公式(6)将像素点(x,y)的像素值调小,实现对手指静脉图像中的噪点进行消除,从而使静脉区域变得更加明显,提高对静脉纹路的辨别度,同时也提高在后续对静脉纹路提取的准确性。
在一实施例中,预设的切割方向包括至少2个方向,即可以基于2个或者2个以上不同的切割方向对手指静脉图像进行图像处理,得到静脉图像。预设的切割方向具体可以包括45°、90°、135°和180°共4个方向,但并不限于此,其也可以包括其他方向,可根据实际应用的需要进行设置,此处不做限制。
如图5所示,步骤S7中,即对增强图像进行二值化处理,得到静脉图像,具体包括如下步骤:
S71:将根据每个切割方向得到的更新后的增强图像作为待合成图像。
在本申请实施例中,对每个具体的切割方向,均按照步骤S3至步骤S6得到的每个切割方向上的更新后的增强图像作为待合成图像,例如,若预设的切割方向包括45°、90°、135°和180°共4个方向,则以45°的切割方向得到的更新后的增强图像为一个待合成图像,以90°的切割方向得到的增强图像为另一个待合成图像,其他方向以此类推,一共可得到四个待合成图像。
S72:对每个待合成图像中相同位置的像素点,选取该像素点在每个待合成图像中的最大像素值,作为该像素点在合成图像中的像素值,得到合成图像。
具体地,根据步骤S71获取的待合成图像,通过对每个待合成图像中相同位置的像素点的像素值进行比较,选取最大的像素值作为合成图像对应位置的像素点的像素值,得到合成图像。
S73:对合成图像进行二值化处理,得到静脉图像。
在本申请实施例中,在步骤S72获取的合成图像的基础上,为了让图像中的像素点的 像素值只呈现0或者255,即图像只呈现黑色或者白色两种颜色,需要进一步对该合成图像进行二值化处理,获取静脉图像。
具体地,扫描步骤S72获取的合成图像中的每个像素点,若该像素点的像素值小于预设的像素阈值,则将该像素点的像素值设为0,即为像素点变为黑色;若该像素点的像素值大于等于预设值的像素阈值,则将该像素点的像素值设为255,即像素点变为白色,得到静脉图像。
本实施例中,根据不同的切割方向获取不同的待合成图像,再对每个待合成图像中每个相同位置的像素点的像素值进行比较,选取每个像素点的最大像素值作为合成图像中对应位置的像素点的像素值,对图像进行合成,最后再对合成图像进行二值化处理,得到静脉图像。由于仅对一个切割方向上得到的增强图像进行静脉纹路提取可能存在误差,因此通过对多个切割方向的待合成图像进行合成,再对合成图像进行二值化处理得到的静脉图像进行静脉纹路提取,能够有效地降低误差,实现对静脉纹路的准确提取,提高静脉纹路提取的准确性。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
在一实施例中,提供一种图像处理装置,该图像处理装置与上述实施例中图像处理方法一一对应。如图6所示,该图像处理装置包括:采集模块61,变换模块62,切割模块63,匹配模块64,计算模块65,更新模块66和二值化模块67。各功能模块详细说明如下:
采集模块61,用于使用指静脉采集设备获取原始的手指静脉图像;
变换模块62,用于对手指静脉图像进行Gabor滤波变换,得到增强图像;
切割模块63,用于根据预设的切割方向和预设的像素间隔,对增强图像进行切割,得到n条切割线,其中,n为正整数;
匹配模块64,用于针对每条切割线,计算在该切割线上的每个像素点的曲率值,将曲率值大于零的像素点确定为评估像素点,并将连续的评估像素点所在的区域确定为局部静脉区域;
计算模块65,用于针对每个评估像素点,获取包含该评估像素点的局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
更新模块66,用于使用评估分数对评估像素点的像素值进行调整,得到每个评估像素点的修正像素值,并使用修正像素值更新增强图像;
二值化模块67,用于对更新后的增强图像进行二值化处理,得到静脉图像。
进一步地,该图像处理装置还包括:
获取模块68,用于对手指静脉图像中的像素点进行遍历,获取每个像素点的RGB分量值;
灰化模块69,用于根据像素点的RGB分量值,按照如下公式对手指静脉图像作灰度化处理,得到灰化图像:
g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)
其中,x和y为手指静脉图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为像素点(x,y)的R通道的颜色分量,G(x,y)为像素点(x,y)的G通道的颜色分量,B(x,y)为像素点(x,y)的B通道的颜色分量,k 1,k 2,k 3分别为R通道,G通道和B通道对应的占比参数;
反转模块610,用于对灰化图像进行灰度反转处理,得到灰度反转后的手指静脉图像。
进一步地,变换模块62包括:
滤波子模块621:用于按照如下公式对手指静脉图像进行Gabor滤波变换:
Figure PCTCN2018103809-appb-000009
其中,
Figure PCTCN2018103809-appb-000010
为Gabor滤波函数,x和y为手指静脉图像中像素点的横坐标和纵坐标,λ为预设的波长,θ为预设的方向,
Figure PCTCN2018103809-appb-000011
为相位偏移,σ为gabor函数的高斯因子的标准差,γ为长宽比,U(x,y)为增强图像,I(x,y)为手指静脉图像,
Figure PCTCN2018103809-appb-000012
为张量积运算,x'和y'为所述手指静脉图像中像素点(x,y)根据θ旋转后的横坐标和纵坐标。
进一步地,该图像处理装置还包括:
修正模块611:用于针对更新后的增强图像中的每个像素点,按照预设的相邻区域,使用相邻区域的相邻像素点的像素值,对该像素点的像素值进行修正。
进一步地,二值化模块67包括:
待合成子模块671:用于将根据每个切割方向得到的更新后的增强图像作为待合成图像;
合成子模块672:用于对每个待合成图像中相同位置的像素点,选取该像素点在每个待合成图像中的最大像素值,作为该像素点在合成图像中的像素值,得到合成图像;
提取子模块673:用于对合成图像进行二值化处理,得到静脉图像。
关于图像处理装置的具体限定可以参见上文中对于图像处理方法的限定,在此不再赘述。上述图像处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图7所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的数据库用于存储手指静脉图像的数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种图像处理方法。
在一个实施例中,提供了一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机可读指令,处理器执行计算机可读指令时实现上述实施例图像处理法的步骤,例如图2所示的步骤S1至步骤S7。或者,处理器执行计算机可读指令时实现上述实施例中图像处理装置的各模块的功能,例如图6所示模块61至模块67的功能。为避免重复,这里不再赘述。
在一个实施例中,提供了一个或多个非易失性可读存储介质,其上存储有计算机可读指令,计算机可读指令被一个或多个处理器执行时实现上述方法实施例中图像处理方法,或者,该计算机可读指令被一个或多个处理器执行时实现上述装置实施例中图像处理装置中各模块/单元的功能。为避免重复,这里不再赘述。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一个或多个非易失性可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种图像处理方法,其特征在于,所述图像处理方法包括:
    使用指静脉采集设备获取原始的手指静脉图像;
    对所述手指静脉图像进行Gabor滤波变换,得到增强图像;
    根据预设的切割方向和预设的像素间隔,对所述增强图像进行切割,得到n条切割线,其中,n为正整数;
    针对每条所述切割线,计算在该切割线上的每个像素点的曲率值,将所述曲率值大于零的像素点确定为评估像素点,并将连续的所述评估像素点所在的区域确定为局部静脉区域;
    针对每个所述评估像素点,获取包含该评估像素点的所述局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
    使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像;
    对更新后的增强图像进行二值化处理,得到静脉图像。
  2. 如权利要求1所述的图像处理方法,其特征在于,在所述使用指静脉采集设备获取原始的手指静脉图像之后,并且在所述对所述手指静脉图像进行Gabor滤波变换,得到增强图像之前,所述图像处理方法还包括:
    对所述手指静脉图像中的像素点进行遍历,获取每个所述像素点的RGB分量值;
    根据所述像素点的RGB分量值,按照如下公式对所述手指静脉图像作灰度化处理,得到灰化图像:
    g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)
    其中,x和y为所述手指静脉图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为所述像素点(x,y)的R通道的颜色分量,G(x,y)为所述像素点(x,y)的G通道的颜色分量,B(x,y)为所述像素点(x,y)的B通道的颜色分量,k 1,k 2,k 3分别为所述R通道,所述G通道和所述B通道对应的占比参数;
    对所述灰化图像进行灰度反转处理,得到灰度反转后的所述手指静脉图像。
  3. 如权利要求1所述的图像处理方法,其特征在于,所述对所述手指静脉图像进行Gabor滤波变换,得到增强图像包括:
    按照如下公式对所述手指静脉图像进行Gabor滤波变换:
    Figure PCTCN2018103809-appb-100001
    Figure PCTCN2018103809-appb-100002
    x'=x cosθ+y sinθ
    y'=-x sinθ+y cosθ
    其中,
    Figure PCTCN2018103809-appb-100003
    为Gabor滤波函数,x和y为所述手指静脉图像中像素点的横坐标和纵坐标,λ为预设的波长,θ为预设的方向,
    Figure PCTCN2018103809-appb-100004
    为相位偏移,σ为gabor函数的高 斯因子的标准差,γ为长宽比,U(x,y)为所述增强图像,I(x,y)为所述手指静脉图像,
    Figure PCTCN2018103809-appb-100005
    为张量积运算,x'和y'为所述手指静脉图像中像素点(x,y)根据θ旋转后的横坐标和纵坐标。
  4. 如权利要求1所述的图像处理方法,其特征在于,在所述使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像之后,并且在所述对更新后的增强图像进行二值化处理,得到静脉图像之前,所述图像处理方法还包括:
    针对更新后的所述增强图像中的每个像素点,按照预设的相邻区域,使用所述相邻区域的相邻像素点的像素值,对该像素点的像素值进行修正。
  5. 如权利要求1所述的图像处理方法,其特征在于,所述预设的切割方向包括至少2个方向,所述对更新后的增强图像进行二值化处理,得到静脉图像包括:
    将根据每个所述切割方向得到的所述更新后的增强图像作为待合成图像;
    对每个所述待合成图像中相同位置的像素点,选取该像素点在每个所述待合成图像中的最大像素值,作为该像素点在合成图像中的像素值,得到合成图像;
    对所述合成图像进行二值化处理,得到所述静脉图像。
  6. 一种图像处理装置,其特征在于,所述图像处理装置包括:
    采集模块,用于使用指静脉采集设备获取原始的手指静脉图像;
    变换模块,用于对所述手指静脉图像进行Gabor滤波变换,得到增强图像;
    切割模块,用于根据预设的切割方向和预设的像素间隔,对所述增强图像进行切割,得到n条切割线,其中,n为正整数;
    匹配模块,用于针对每条所述切割线,计算在该切割线上的每个像素点的曲率值,将所述曲率值大于零的像素点确定为评估像素点,并将连续的所述评估像素点所在的区域确定为局部静脉区域;
    计算模块,用于针对每个所述评估像素点,获取包含该评估像素点的所述局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
    更新模块,用于使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像;
    二值化模块,用于对更新后的增强图像进行二值化处理,得到静脉图像。
  7. 如权利要求6所述的图像处理装置,其特征在于,所述图像处理装置还包括:
    获取模块,用于对所述手指静脉图像中的像素点进行遍历,获取每个所述像素点的RGB分量值;
    灰化模块,用于根据所述像素点的RGB分量值,按照如下公式对所述手指静脉图像作灰度化处理,得到灰化图像:
    g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)
    其中,x和y为所述手指静脉图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为所述像素点(x,y)的R通道的颜色分量,G(x,y)为所述像素点(x,y)的G通道的颜色分量,B(x,y)为所述像素点(x,y)的B通道的颜色分量,k 1,k 2,k 3分别为所述R通道,所述G通道和所述B通道对应的占比参数;
    反转模块,用于对所述灰化图像进行灰度反转处理,得到灰度反转后的所述手指静脉 图像。
  8. 如权利要求6所述的图像处理装置,其特征在于,所述变换模块包括:
    滤波子模块,用于按照如下公式对所述手指静脉图像进行Gabor滤波变换:
    Figure PCTCN2018103809-appb-100006
    Figure PCTCN2018103809-appb-100007
    x'=x cosθ+y sinθ
    y'=-x sinθ+y cosθ
    其中,
    Figure PCTCN2018103809-appb-100008
    为Gabor滤波函数,x和y为所述手指静脉图像中像素点的横坐标和纵坐标,λ为预设的波长,θ为预设的方向,
    Figure PCTCN2018103809-appb-100009
    为相位偏移,σ为gabor函数的高斯因子的标准差,γ为长宽比,U(x,y)为所述增强图像,I(x,y)为所述手指静脉图像,
    Figure PCTCN2018103809-appb-100010
    为张量积运算,x'和y'为所述手指静脉图像中像素点(x,y)根据θ旋转后的横坐标和纵坐标。
  9. 如权利要求6所述的图像处理装置,其特征在于,所述图像处理装置还包括:
    修正模块,用于针对更新后的所述增强图像中的每个像素点,按照预设的相邻区域,使用所述相邻区域的相邻像素点的像素值,对该像素点的像素值进行修正。
  10. 如权利要求6所述的图像处理装置,其特征在于,所述预设的切割方向包括至少2个方向,所述二值化模块包括:
    待合成子模块,用于将根据每个所述切割方向得到的所述更新后的增强图像作为待合成图像;
    合成子模块,用于对每个所述待合成图像中相同位置的像素点,选取该像素点在每个所述待合成图像中的最大像素值,作为该像素点在合成图像中的像素值,得到合成图像;
    提取子模块,用于对所述合成图像进行二值化处理,得到所述静脉图像
  11. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现如下步骤:
    使用指静脉采集设备获取原始的手指静脉图像;
    对所述手指静脉图像进行Gabor滤波变换,得到增强图像;
    根据预设的切割方向和预设的像素间隔,对所述增强图像进行切割,得到n条切割线,其中,n为正整数;
    针对每条所述切割线,计算在该切割线上的每个像素点的曲率值,将所述曲率值大于零的像素点确定为评估像素点,并将连续的所述评估像素点所在的区域确定为局部静脉区域;
    针对每个所述评估像素点,获取包含该评估像素点的所述局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
    使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像;
    对更新后的增强图像进行二值化处理,得到静脉图像。
  12. 如权利要求11所述的计算机设备,其特征在于,在所述使用指静脉采集设备获取原始的手指静脉图像之后,并且在所述对所述手指静脉图像进行Gabor滤波变换,得到增强图像之前,所述处理器执行所述计算机可读指令时还实现如下步骤:
    对所述手指静脉图像中的像素点进行遍历,获取每个所述像素点的RGB分量值;
    根据所述像素点的RGB分量值,按照如下公式对所述手指静脉图像作灰度化处理,得到灰化图像:
    g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)
    其中,x和y为所述手指静脉图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为所述像素点(x,y)的R通道的颜色分量,G(x,y)为所述像素点(x,y)的G通道的颜色分量,B(x,y)为所述像素点(x,y)的B通道的颜色分量,k 1,k 2,k 3分别为所述R通道,所述G通道和所述B通道对应的占比参数;
    对所述灰化图像进行灰度反转处理,得到灰度反转后的所述手指静脉图像。
  13. 如权利要求11所述的计算机设备,其特征在于,所述对所述手指静脉图像进行Gabor滤波变换,得到增强图像包括:
    按照如下公式对所述手指静脉图像进行Gabor滤波变换:
    Figure PCTCN2018103809-appb-100011
    Figure PCTCN2018103809-appb-100012
    x'=x cosθ+y sinθ
    y'=-x sinθ+y cosθ
    其中,
    Figure PCTCN2018103809-appb-100013
    为Gabor滤波函数,x和y为所述手指静脉图像中像素点的横坐标和纵坐标,λ为预设的波长,θ为预设的方向,
    Figure PCTCN2018103809-appb-100014
    为相位偏移,σ为gabor函数的高斯因子的标准差,γ为长宽比,U(x,y)为所述增强图像,I(x,y)为所述手指静脉图像,
    Figure PCTCN2018103809-appb-100015
    为张量积运算,x'和y'为所述手指静脉图像中像素点(x,y)根据θ旋转后的横坐标和纵坐标。
  14. 如权利要求11所述的计算机设备,其特征在于,在所述使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像之后,并且在所述对更新后的增强图像进行二值化处理,得到静脉图像之前,所述处理器执行所述计算机可读指令时还实现如下步骤:
    针对更新后的所述增强图像中的每个像素点,按照预设的相邻区域,使用所述相邻区域的相邻像素点的像素值,对该像素点的像素值进行修正。
  15. 如权利要求11所述的计算机设备,其特征在于,所述预设的切割方向包括至少2个方向,所述对更新后的增强图像进行二值化处理,得到静脉图像包括:
    将根据每个所述切割方向得到的所述更新后的增强图像作为待合成图像;
    对每个所述待合成图像中相同位置的像素点,选取该像素点在每个所述待合成图像中的最大像素值,作为该像素点在合成图像中的像素值,得到合成图像;
    对所述合成图像进行二值化处理,得到所述静脉图像。
  16. 一个或多个存储有计算机可读指令的非易失性可读存储介质,其特征在于,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
    使用指静脉采集设备获取原始的手指静脉图像;
    对所述手指静脉图像进行Gabor滤波变换,得到增强图像;
    根据预设的切割方向和预设的像素间隔,对所述增强图像进行切割,得到n条切割线,其中,n为正整数;
    针对每条所述切割线,计算在该切割线上的每个像素点的曲率值,将所述曲率值大于零的像素点确定为评估像素点,并将连续的所述评估像素点所在的区域确定为局部静脉区域;
    针对每个所述评估像素点,获取包含该评估像素点的所述局部静脉区域的宽度,并将该宽度与该评估像素点的曲率值的乘积,作为该评估像素点的评估分数;
    使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像;
    对更新后的增强图像进行二值化处理,得到静脉图像。
  17. 如权利要求16所述的非易失性可读存储介质,其特征在于,在所述使用指静脉采集设备获取原始的手指静脉图像之后,并且在所述对所述手指静脉图像进行Gabor滤波变换,得到增强图像之前,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    对所述手指静脉图像中的像素点进行遍历,获取每个所述像素点的RGB分量值;
    根据所述像素点的RGB分量值,按照如下公式对所述手指静脉图像作灰度化处理,得到灰化图像:
    g(x,y)=k 1*R(x,y)+k 2*G(x,y)+k 3*B(x,y)
    其中,x和y为所述手指静脉图像中每个像素点的横坐标和纵坐标,g(x,y)为像素点(x,y)灰度化处理后的灰度值,R(x,y)为所述像素点(x,y)的R通道的颜色分量,G(x,y)为所述像素点(x,y)的G通道的颜色分量,B(x,y)为所述像素点(x,y)的B通道的颜色分量,k 1,k 2,k 3分别为所述R通道,所述G通道和所述B通道对应的占比参数;
    对所述灰化图像进行灰度反转处理,得到灰度反转后的所述手指静脉图像。
  18. 如权利要求16所述的非易失性可读存储介质,其特征在于,所述对所述手指静脉图像进行Gabor滤波变换,得到增强图像包括:
    按照如下公式对所述手指静脉图像进行Gabor滤波变换:
    Figure PCTCN2018103809-appb-100016
    Figure PCTCN2018103809-appb-100017
    x'=x cosθ+y sinθ
    y'=-x sinθ+y cosθ
    其中,
    Figure PCTCN2018103809-appb-100018
    为Gabor滤波函数,x和y为所述手指静脉图像中像素点的横坐标和纵坐标,λ为预设的波长,θ为预设的方向,
    Figure PCTCN2018103809-appb-100019
    为相位偏移,σ为gabor函数的高斯因子的标准差,γ为长宽比,U(x,y)为所述增强图像,I(x,y)为所述手指静脉图像,
    Figure PCTCN2018103809-appb-100020
    为 张量积运算,x'和y'为所述手指静脉图像中像素点(x,y)根据θ旋转后的横坐标和纵坐标。
  19. 如权利要求16所述的非易失性可读存储介质,其特征在于,在所述使用所述评估分数对所述评估像素点的像素值进行调整,得到每个所述评估像素点的修正像素值,并使用所述修正像素值更新所述增强图像之后,并且在所述对更新后的增强图像进行二值化处理,得到静脉图像之前,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    针对更新后的所述增强图像中的每个像素点,按照预设的相邻区域,使用所述相邻区域的相邻像素点的像素值,对该像素点的像素值进行修正。
  20. 如权利要求16所述的非易失性可读存储介质,其特征在于,所述预设的切割方向包括至少2个方向,所述对更新后的增强图像进行二值化处理,得到静脉图像包括:
    将根据每个所述切割方向得到的所述更新后的增强图像作为待合成图像;
    对每个所述待合成图像中相同位置的像素点,选取该像素点在每个所述待合成图像中的最大像素值,作为该像素点在合成图像中的像素值,得到合成图像;
    对所述合成图像进行二值化处理,得到所述静脉图像。
PCT/CN2018/103809 2018-06-08 2018-09-03 图像处理方法、装置、计算机设备及存储介质 WO2019232945A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810588087.7A CN108875621B (zh) 2018-06-08 2018-06-08 图像处理方法、装置、计算机设备及存储介质
CN201810588087.7 2018-06-08

Publications (1)

Publication Number Publication Date
WO2019232945A1 true WO2019232945A1 (zh) 2019-12-12

Family

ID=64338548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/103809 WO2019232945A1 (zh) 2018-06-08 2018-09-03 图像处理方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN108875621B (zh)
WO (1) WO2019232945A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223787A (zh) * 2020-01-02 2020-06-02 长江存储科技有限责任公司 三维存储器的沟槽结构测量方法、装置、设备及介质
CN111382703A (zh) * 2020-03-10 2020-07-07 大连海事大学 一种基于二次筛选与分数融合的指静脉识别方法
CN112116542A (zh) * 2020-09-24 2020-12-22 西安宇视信息科技有限公司 图像对比度增强方法、装置、电子设备和存储介质
CN112801980A (zh) * 2021-01-28 2021-05-14 浙江聚视信息技术有限公司 一种图像的角点检测方法及装置
CN113421203A (zh) * 2021-06-30 2021-09-21 深圳市纵维立方科技有限公司 图像处理方法、打印方法、打印相关装置及可读存储介质
CN115082507A (zh) * 2022-07-22 2022-09-20 聊城扬帆田一机械有限公司 一种路面切割机智能调控系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766761A (zh) * 2018-12-15 2019-05-17 深圳壹账通智能科技有限公司 滑冰评级方法、装置、设备及存储介质
CN109859165B (zh) * 2018-12-24 2023-06-09 新绎健康科技有限公司 一种取脉点的定位方法及装置
CN109902586A (zh) * 2019-01-29 2019-06-18 平安科技(深圳)有限公司 掌纹提取方法、装置及存储介质、服务器
CN110084238B (zh) * 2019-04-09 2023-01-03 五邑大学 基于LadderNet网络的指静脉图像分割方法、装置和存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867704A (zh) * 2010-05-20 2010-10-20 苏州新海宜通信科技股份有限公司 一种去除视频图像块状噪声的方法
CN102184528A (zh) * 2011-05-12 2011-09-14 中国人民解放军国防科学技术大学 低质量手指静脉图像增强方法
CN102254163A (zh) * 2011-08-03 2011-11-23 山东志华信息科技股份有限公司 自适应模板大小的Gabor指纹图像增强方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7519209B2 (en) * 2004-06-23 2009-04-14 Vanderbilt University System and methods of organ segmentation and applications of same
EP2495699B1 (en) * 2009-10-30 2019-07-10 Fujitsu Frontech Limited Biometric information registration method, biometric authentication method, and biometric authentication device
CN104239769B (zh) * 2014-09-18 2017-05-31 北京智慧眼科技股份有限公司 基于手指静脉特征的身份识别方法及系统
CN104616260A (zh) * 2015-02-06 2015-05-13 武汉工程大学 静脉图像增强方法及装置
CN105404864A (zh) * 2015-11-16 2016-03-16 成都四象联创科技有限公司 基于灰度图像的识别方法
CN107862282B (zh) * 2017-11-07 2020-06-16 深圳市金城保密技术有限公司 一种手指静脉识别与安全认证方法及其终端及系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867704A (zh) * 2010-05-20 2010-10-20 苏州新海宜通信科技股份有限公司 一种去除视频图像块状噪声的方法
CN102184528A (zh) * 2011-05-12 2011-09-14 中国人民解放军国防科学技术大学 低质量手指静脉图像增强方法
CN102254163A (zh) * 2011-08-03 2011-11-23 山东志华信息科技股份有限公司 自适应模板大小的Gabor指纹图像增强方法

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223787A (zh) * 2020-01-02 2020-06-02 长江存储科技有限责任公司 三维存储器的沟槽结构测量方法、装置、设备及介质
CN111223787B (zh) * 2020-01-02 2023-04-07 长江存储科技有限责任公司 三维存储器的沟槽结构测量方法、装置、设备及介质
CN111382703A (zh) * 2020-03-10 2020-07-07 大连海事大学 一种基于二次筛选与分数融合的指静脉识别方法
CN111382703B (zh) * 2020-03-10 2023-06-23 大连海事大学 一种基于二次筛选与分数融合的指静脉识别方法
CN112116542A (zh) * 2020-09-24 2020-12-22 西安宇视信息科技有限公司 图像对比度增强方法、装置、电子设备和存储介质
CN112116542B (zh) * 2020-09-24 2024-03-08 西安宇视信息科技有限公司 图像对比度增强方法、装置、电子设备和存储介质
CN112801980A (zh) * 2021-01-28 2021-05-14 浙江聚视信息技术有限公司 一种图像的角点检测方法及装置
CN112801980B (zh) * 2021-01-28 2023-08-08 浙江聚视信息技术有限公司 一种图像的角点检测方法及装置
CN113421203A (zh) * 2021-06-30 2021-09-21 深圳市纵维立方科技有限公司 图像处理方法、打印方法、打印相关装置及可读存储介质
CN115082507A (zh) * 2022-07-22 2022-09-20 聊城扬帆田一机械有限公司 一种路面切割机智能调控系统
CN115082507B (zh) * 2022-07-22 2022-11-18 聊城扬帆田一机械有限公司 一种路面切割机智能调控系统

Also Published As

Publication number Publication date
CN108875621B (zh) 2023-04-18
CN108875621A (zh) 2018-11-23

Similar Documents

Publication Publication Date Title
WO2019232945A1 (zh) 图像处理方法、装置、计算机设备及存储介质
WO2019237520A1 (zh) 一种图像匹配方法、装置、计算机设备及存储介质
WO2019205290A1 (zh) 一种图像检测方法、装置、计算机设备及存储介质
US10325151B1 (en) Method of extracting image of port wharf through multispectral interpretation
CN110163842B (zh) 建筑裂缝检测方法、装置、计算机设备和存储介质
WO2020155764A1 (zh) 掌纹提取方法、装置及存储介质、服务器
CN106981077B (zh) 基于dce和lss的红外图像和可见光图像配准方法
WO2020143316A1 (zh) 证件图像提取方法及终端设备
CN110059700B (zh) 图像摩尔纹识别方法、装置、计算机设备及存储介质
WO2017088637A1 (zh) 自然背景中图像边缘的定位方法及装置
CN109325498B (zh) 基于窗口动态阈值改进Canny算子的叶脉提取方法
CN110378351B (zh) 印章鉴别方法及装置
CN104361335B (zh) 一种基于扫描图像自动去除黑边的处理方法
CN117152163B (zh) 一种桥梁施工质量视觉检测方法
CN111915541A (zh) 基于人工智能的图像增强处理方法、装置、设备及介质
CN114898412A (zh) 一种针对低质量指纹、残缺指纹的识别方法
CN109035285B (zh) 图像边界确定方法及装置、终端及存储介质
CN111915645B (zh) 影像匹配方法、装置、计算机设备及计算机可读存储介质
CN110930358B (zh) 一种基于自适应算法的太阳能面板图像处理方法
CN104408430B (zh) 一种车牌定位方法及装置
CN108470351B (zh) 利用图像斑块追踪测量机体偏移的方法、装置及存储介质
CN111445402A (zh) 一种图像去噪方法及装置
CN104408452A (zh) 一种基于旋转投影宽度的拉丁字符倾斜纠正方法及系统
CN112308044B (zh) 针对掌静脉图像的图像增强处理方法和掌静脉识别方法
CN115100068A (zh) 一种红外图像校正方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 19.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18921514

Country of ref document: EP

Kind code of ref document: A1