WO2021042562A1 - User identity recognition method, apparatus and terminal device based on handwritten signature - Google Patents


Info

Publication number
WO2021042562A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
character segmentation
character
preset
user signature
Prior art date
Application number
PCT/CN2019/118690
Other languages
English (en)
French (fr)
Inventor
Zhu Chaoqun (朱超群)
Original Assignee
Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2021042562A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/30 Writer recognition; Reading and verifying signatures
    • G06V40/33 Writer recognition; Reading and verifying signatures based only on signature image, e.g. static signature recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition

Definitions

  • This application belongs to the field of computer technology, and in particular relates to a method, device and terminal equipment for user identification based on handwritten signatures.
  • Biometric verification methods do not require external objects: the user does not need to carry a token or remember a password.
  • Numerous biometric recognition methods exist, such as fingerprint recognition, face recognition and pupil recognition.
  • Among them, the user identification method based on handwritten signatures does not need to collect biological signals; only the person's handwritten signature is collected, so it is non-invasive to the human body and is widely used in daily life.
  • The embodiments of the present application provide a user identification method, device and terminal device based on handwritten signatures, to solve the technical problem in the related art that the data processing volume is very large, occupies many system resources, and affects the overall performance of the computing device.
  • the first aspect of the embodiments of the present application provides a user identification method based on handwritten signatures, including:
  • the feature vector of each of the character segmented images is matched with the corresponding preset feature vector in the preset ranking to generate a user identification result.
  • a second aspect of the embodiments of the present application provides a user identity recognition device based on handwritten signatures, including:
  • a grayscale module is used to perform grayscale processing on the signature image to obtain a grayscale image
  • a segmentation module configured to segment each of the user signature characters in the grayscale image, and retain the order of the user signature characters, to obtain a plurality of sorted character segmentation images
  • a decomposition module, configured to perform wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
  • a calculation module configured to calculate the mean value and variance of the high-frequency coefficient matrix of each of the high-frequency components as the characteristic element corresponding to each of the high-frequency components;
  • a combination module configured to combine a preset number of M high-frequency component feature elements corresponding to each of the character segmented images to form a feature vector of the character segmented image
  • the matching module is configured to match the feature vector of each of the character segmented images with the corresponding preset feature vector in the preset ranking according to the ranking to generate a user identification result.
  • a third aspect of the embodiments of the present application provides a terminal device, including a memory and a processor.
  • the memory stores computer-readable instructions that can run on the processor, and when the processor executes the computer-readable instructions, the following steps are implemented:
  • the feature vector of each of the character segmented images is matched with the corresponding preset feature vector in the preset ranking to generate a user identification result.
  • the fourth aspect of the embodiments of the present application provides a computer-readable storage medium, the computer-readable storage medium stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the following steps are implemented:
  • the feature vector of each of the character segmented images is matched with the corresponding preset feature vector in the preset ranking to generate a user identification result.
  • In the embodiments, the image including the user's signature characters is first subjected to wavelet decomposition with a preset number of layers to obtain a preset number of high-frequency components; then the mean and variance of each high-frequency component are calculated as the feature elements corresponding to that high-frequency component; the feature elements of the preset number of high-frequency components corresponding to each character segmentation image are combined to form the feature vector of that image; and finally the feature vector is matched with the preset feature vector to obtain the user identification result.
  • This application extracts finite-dimensional feature vectors from an image that includes user signature characters, so that only feature vector matching is needed to realize user identity recognition. On the premise of preserving the user signature character image information, the amount of data processing is greatly reduced, as is the system resource occupation.
  • FIG. 1 is an implementation flowchart of a user identification method based on handwritten signatures provided by an embodiment of the present application;
  • FIG. 2 is an implementation flowchart of another user identification method based on handwritten signatures provided by an embodiment of the present application;
  • FIG. 3 is an implementation flowchart of another user identification method based on handwritten signatures provided by an embodiment of the present application;
  • FIG. 4 is an implementation flowchart of another user identification method based on handwritten signatures provided by an embodiment of the present application;
  • FIG. 5 is an implementation flowchart of another user identification method based on handwritten signatures provided by an embodiment of the present application;
  • FIG. 6 is a structural block diagram of a user identity recognition device based on handwritten signatures provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • Fig. 1 shows an implementation process of a user identification method based on a handwritten signature provided by an embodiment of the present application.
  • the process of the method includes steps S101 to S107. This method is suitable for situations that require user identification based on the user's handwritten signature.
  • the method is executed by a user identification device based on a handwritten signature.
  • the user identification device based on a handwritten signature is configured in a terminal device and can be implemented by software and/or hardware.
  • the specific implementation principle of each step is as follows.
  • S101 Acquire a signature image including characters of the user's signature.
  • the signature image including the user's signature characters is the object used to realize the user's identity recognition.
  • The camera of the terminal device can be called to capture the signature image including the user's signature characters; a scanner can be called to scan a document containing the user's signature characters to obtain the signature image; or the signature image can be obtained through the touch screen of the terminal device by capturing the user's sliding operation trajectory and saving that trajectory as the signature image.
  • the touch screen may be a capacitive screen or a resistive screen; the user's sliding operation track may be generated by a sliding operation directly contacting the touch screen, or may be generated by a sliding operation at a certain distance from the touch screen.
  • S102 Perform grayscale processing on the signature image to obtain a grayscale image.
  • the gray-scale processing of the signature image may adopt image gray-scale processing methods of related technologies, including but not limited to the component method, the maximum value method, the average method, and the weighted average method.
  • In one embodiment, the signature image is gray-scaled and converted into a grayscale image by the following formula:
  • V_gray(i, j) = (0.3·R(i,j)² + 0.59·G(i,j)² + 0.11·B(i,j)²)^(1/2);
  • where V_gray(i, j) is the gray value of pixel (i, j), and R(i,j), G(i,j) and B(i,j) are the R, G and B values of pixel (i, j), respectively.
  • In another embodiment, the signature image is grayed out by the following formula and converted into a grayscale image:
  • V_gray(i, j) = (R(i,j) + G(i,j) + B(i,j)) / 3;
  • where V_gray(i, j) is the gray value of pixel (i, j), and R(i,j), G(i,j) and B(i,j) are the R, G and B values of pixel (i, j), respectively.
  • The grayscale image produced by the former formula retains more complete detail than the grayscale image produced by the latter formula, which can further improve the accuracy of subsequent identity recognition.
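As an illustrative sketch (not part of the application), the two grayscale formulas above can be written in Python; the function names and the 2-D-list image representation are our own assumptions:

```python
def gray_weighted(r, g, b):
    # Weighted root-mean-square formula; retains more detail.
    return (0.3 * r**2 + 0.59 * g**2 + 0.11 * b**2) ** 0.5

def gray_average(r, g, b):
    # Simple average formula.
    return (r + g + b) / 3

def to_grayscale(rgb_image, formula=gray_weighted):
    """Convert a 2-D list of (R, G, B) tuples into a 2-D list of gray values."""
    return [[formula(r, g, b) for (r, g, b) in row] for row in rgb_image]
```

For a uniform gray pixel both formulas agree, since the RGB weights sum to 1.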
  • S103 Segment each of the user signature characters in the grayscale image, and retain the order of the user signature characters, to obtain a plurality of sorted character segmentation images.
  • The result of the segmentation is that the user signature characters in the signature image, whether text or symbols, are cut out one by one to form single character blocks, i.e., the character segmentation images.
  • the generated character block can be input to the next step for decomposition, thereby generating the feature vector corresponding to the character block.
  • the method for segmenting each of the user signature characters in the gray-scale image may adopt an existing contour segmentation method.
  • step 103 includes the following steps 201 to 202.
  • S201 Determine an arrangement direction of user signature characters in the gray-scale image, where the arrangement direction includes a horizontal arrangement and a vertical arrangement.
  • In most cases the user signature characters are arranged horizontally, but a vertical arrangement cannot be ruled out. Therefore, in the embodiment of the present application, the arrangement direction of the user signature characters in the grayscale image is determined first.
  • In one embodiment, step S201 includes: projecting the grayscale image horizontally, counting the number of first black pixels in each row, and obtaining the number of rows in which the number of first black pixels is greater than a first preset threshold; projecting the grayscale image vertically, counting the number of second black pixels in each column, and obtaining the number of columns in which the number of second black pixels is greater than a second preset threshold; if the number of rows is greater than the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is vertical; and if the number of rows is less than or equal to the number of columns, determining that the arrangement direction is horizontal.
  • Here, the horizontal projection counts, for each row of the binarized grayscale image, the number of black pixels, i.e., the number of first black pixels; the vertical projection counts, for each column of the binarized grayscale image, the number of black pixels, i.e., the number of second black pixels. In this way, the number of first black pixels in each row and the number of second black pixels in each column are obtained. The first preset threshold and the second preset threshold are set to distinguish rows and columns that contain characters from those that do not, yielding the row count and the column count.
  • When the signature characters are arranged horizontally, the number of occupied pixel rows is smaller than the number of occupied pixel columns; when the signature characters are arranged vertically, the number of occupied pixel rows is larger than the number of occupied pixel columns. Therefore, in the embodiment of the present application, when the number of rows is greater than the number of columns, the arrangement direction of the user signature characters is determined to be vertical, and when the number of rows is less than or equal to the number of columns, the arrangement direction is determined to be horizontal.
  • the first preset threshold and the second preset threshold may be set to the same value or different values, both of which are empirical values, which can be selected and set according to actual conditions, which are not specifically limited in the embodiments of the present application.
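The row/column projection logic described above can be sketched as follows (a minimal illustration; the `binary_image` representation, the threshold defaults and the return labels are our assumptions, not from the application):

```python
def arrangement_direction(binary_image, row_threshold=1, col_threshold=1):
    """binary_image: 2-D list, 1 = black (ink) pixel, 0 = white.
    Returns 'vertical' if more text rows than text columns, else 'horizontal'."""
    row_counts = [sum(row) for row in binary_image]           # horizontal projection
    col_counts = [sum(col) for col in zip(*binary_image)]     # vertical projection
    n_rows = sum(1 for c in row_counts if c > row_threshold)  # rows containing characters
    n_cols = sum(1 for c in col_counts if c > col_threshold)  # columns containing characters
    return 'vertical' if n_rows > n_cols else 'horizontal'
```

A wide band of ink occupies few rows but many columns, so it is classified as horizontal, and vice versa.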
  • According to the determined arrangement direction, the corresponding character segmentation strategy is adopted. That is, when the arrangement direction is determined to be horizontal, the character segmentation strategy corresponding to the horizontal arrangement is used to segment each user signature character; when the arrangement direction is determined to be vertical, the strategy corresponding to the vertical arrangement is used. After segmentation, the order of the user signature characters is retained, and a plurality of sorted character segmentation images is obtained.
  • In one embodiment, step S202 includes: when the arrangement direction is determined to be vertical, retaining the pixel columns in which the number of second black pixels is greater than the second preset threshold; within the retained pixel columns, connecting the consecutive pixel rows in which the number of first black pixels is greater than the first preset threshold; taking each connected area as one character segmentation image; and sorting the character segmentation images according to the order of the pixel rows. When the arrangement direction is determined to be horizontal, the pixel rows in which the number of first black pixels is greater than the first preset threshold are retained, and the consecutive pixel columns in which the number of second black pixels is greater than the second preset threshold within the retained pixel rows are connected; each connected area is taken as one character segmentation image, and the character segmentation images are sorted according to the order of the pixel columns.
  • In this way, the user signature character image is segmented into several pixel blocks; the pixels within each block are connected, and each pixel block corresponds to one character segmentation image.
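For the horizontal case, the projection-based segmentation described above might look like this (an illustrative sketch; the application only describes the strategy, so the names and threshold defaults here are assumed):

```python
def segment_horizontal(binary_image, row_threshold=0, col_threshold=0):
    """Split horizontally arranged characters into ordered sub-images.
    Keeps only rows that contain ink, then groups consecutive ink columns;
    each group is one character segmentation image, sorted left to right."""
    rows = [r for r in binary_image if sum(r) > row_threshold]  # retain text rows
    col_counts = [sum(col) for col in zip(*rows)]
    segments, start = [], None
    for j, c in enumerate(col_counts):
        if c > col_threshold and start is None:
            start = j                              # a character begins
        elif c <= col_threshold and start is not None:
            segments.append((start, j))            # a character ends
            start = None
    if start is not None:
        segments.append((start, len(col_counts)))
    return [[row[a:b] for row in rows] for a, b in segments]
```

The vertical case is symmetric: retain ink columns first, then group consecutive ink rows.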
  • S104 Perform wavelet decomposition with a preset number of layers N on each of the character segmentation images to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N.
  • As camera resolution rises, photos contain more and more pixels, but high-pixel photos consume considerable time when image processing is required. The reason for the time consumption is that the data of every pixel in the image must be processed, and because the image is a whole, it is difficult to process the data in parallel. Moreover, the data of all pixels are processed in the same way, lacking pertinence; some of the processed pixel data may not improve the result at all, instead wasting system resources and prolonging the processing time.
  • each character segmentation image is subjected to wavelet decomposition. Since the pixels of the image after wavelet decomposition are significantly reduced, the overall data calculation amount is reduced, the resource occupation is reduced, and the calculation speed is also improved.
  • Specifically, the first-layer wavelet decomposition decomposes the character segmentation image into multiple first-layer high-frequency components and one first-layer low-frequency component; the second-layer wavelet decomposition is then performed on the first-layer low-frequency component, producing multiple second-layer high-frequency components and one second-layer low-frequency component; the third-layer wavelet decomposition is performed on the second-layer low-frequency component, producing multiple third-layer high-frequency components and one third-layer low-frequency component; and so on, until the wavelet decomposition of the preset number of layers N is completed, obtaining the M high-frequency components corresponding to the character segmentation image. It is understandable that the M high-frequency components corresponding to the character segmentation image include multiple first-layer high-frequency components, multiple second-layer high-frequency components, ..., and multiple N-th-layer high-frequency components.
  • The high-frequency components contain the high-frequency edge information and noise information of the objects in the character segmentation image, while the low-frequency component contains the information of the smoothly changing parts of the character segmentation image. Therefore, the important feature information of the character segmentation image is obtained through wavelet decomposition, which reduces system resource occupation and also reduces computing time.
  • step 104 includes steps 301 to 304.
  • S301 Decompose the character segmentation image into three first-layer high-frequency components and one first-layer low-frequency component through the first-layer wavelet transform;
  • S302 Decompose the first-layer low-frequency component into three second-layer high-frequency components and one second-layer low-frequency component through the second-layer wavelet transform;
  • S303 Decompose the second-layer low-frequency component into three third-layer high-frequency components and one third-layer low-frequency component through the third-layer wavelet transform;
  • Specifically, the character segmentation image is decomposed at the first layer through the Haar wavelet transform, generating the first-layer high-frequency component in the horizontal direction, the first-layer high-frequency component in the vertical direction, the first-layer high-frequency component in the diagonal direction, and one first-layer low-frequency component. The first-layer low-frequency component can in turn be decomposed by the Haar wavelet transform into the second-layer high-frequency components in the horizontal, vertical and diagonal directions and one second-layer low-frequency component.
  • Each layer of decomposition generates three high-frequency components and one low-frequency component; the embodiment of the present application uses the 9 high-frequency components of the three layers in total as the 9 high-frequency components corresponding to the character segmentation image.
  • In this way, the character segmentation image is decomposed multiple times to generate multiple high-frequency components. It is understandable that the pixels of each lower-layer high-frequency component are significantly fewer than those of the upper-layer image; therefore the time for subsequent processing of each high-frequency component (whether parallel or serial) is lower than that of processing the character segmentation image as a whole.
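A minimal pure-Python sketch of the three-layer Haar decomposition described above (the 1/4 normalization of the Haar filter is one common convention; the application does not specify one, and even image dimensions are assumed):

```python
def haar2d(img):
    """One level of the 2-D Haar wavelet transform on a 2-D list with even
    dimensions. Returns (LL, LH, HL, HH): one low-frequency component and
    three high-frequency components at half the resolution."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4   # approximation
            LH[i // 2][j // 2] = (a - b + c - d) / 4   # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

def wavelet_highfreq(img, levels=3):
    """N-layer decomposition: at each layer, decompose the previous layer's
    low-frequency component and collect the 3 high-frequency components per
    layer (so N = 3 yields M = 9 components)."""
    components, low = [], img
    for _ in range(levels):
        low, lh, hl, hh = haar2d(low)
        components.extend([lh, hl, hh])
    return components
```

On a constant image all detail components vanish, which matches the intuition that the high-frequency components carry only edges and noise.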
  • S105 Calculate the mean value and the variance of the high-frequency coefficient matrix of each high-frequency component as a characteristic element corresponding to each high-frequency component.
  • S106 Combine a preset number of M high-frequency component feature elements corresponding to each of the character segmented images to form a feature vector of the character segmented image.
  • In step S105, the mean value and variance of the high-frequency coefficient matrix of each of the M high-frequency components are calculated as the feature elements corresponding to that high-frequency component. In step S106, the means and variances of the M high-frequency components are combined to obtain the feature vector of the character segmentation image.
  • the M high-frequency components have a total of 2M feature elements, and the 2M feature elements are combined to form the feature vector of the character segmentation image, that is, the feature vector is a 2M-dimensional feature vector.
  • In practical applications, the value of M can be limited as needed, that is, the value of M can be preset according to needs; by limiting M, the value of the number of layers N is likewise limited. The embodiments of the present application do not specifically limit the values of M and N. The applicant found that when the number of high-frequency components corresponding to the character segmentation image is 9 or 6, the accuracy of the result can be ensured while the amount of data processing is reduced.
  • For example, when there are 9 high-frequency components corresponding to the character segmentation image, the mean and variance of the high-frequency coefficient matrix of each of the 9 components are calculated as feature elements; the 9 components thus yield 18 feature elements in total, which are combined to obtain an 18-dimensional feature vector for the character segmentation image. When there are 6 high-frequency components, the mean and variance of each component's high-frequency coefficient matrix are calculated as feature elements; the 6 components yield 12 feature elements in total, which are combined to form a 12-dimensional feature vector.
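The feature extraction of steps S105 and S106 reduces to elementary statistics; a sketch follows (population variance is assumed, since the application does not state which variance definition is used):

```python
def component_features(component):
    """Mean and population variance of one high-frequency coefficient matrix."""
    values = [v for row in component for v in row]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

def feature_vector(components):
    """Concatenate (mean, variance) of each of the M components into a
    2M-dimensional feature vector, preserving component order."""
    vec = []
    for comp in components:
        vec.extend(component_features(comp))
    return vec
```

With M = 9 components this produces the 18-dimensional vector described above.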
  • The terminal device stores the user signature characters in advance, calculates the preset feature vector corresponding to each character segmentation image, and stores these preset feature vectors in character order, i.e., in the preset ranking. It is understandable that the preset feature vectors are calculated in the same way as the feature vectors described above, which is not repeated here.
  • In step S107, the feature vector of each character segmentation image is matched with the corresponding preset feature vector according to the ranking. If every feature vector is successfully matched with its preset feature vector, a recognition result of successful user identity recognition is generated; otherwise, a recognition result of failed user identity recognition is generated.
  • Specifically, the two are matched by calculating the similarity between the feature vector and the preset feature vector. If the similarity is greater than a preset threshold, the character segmentation image is matched successfully and matching proceeds to the next character segmentation image, until a calculated similarity is less than or equal to the threshold or all character segmentation images have been matched. When a match fails, the matching of the remaining character segmentation images is stopped and a recognition result of failed user identity recognition is generated. It is understandable that the matching of each feature vector with its preset feature vector can be processed in parallel to further increase the calculation speed and reduce system resource occupation.
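The application does not name a specific similarity measure; as one hedged example, cosine similarity with early termination could implement the matching described above (threshold and function names are our assumptions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_signature(feature_vectors, preset_vectors, threshold=0.9):
    """Match each character's feature vector against the stored preset
    vector in the same ranking position; stop at the first failure."""
    if len(feature_vectors) != len(preset_vectors):
        return False
    for fv, pv in zip(feature_vectors, preset_vectors):
        if cosine_similarity(fv, pv) <= threshold:
            return False        # stop matching the remaining characters
    return True
```

Because only low-dimensional vectors are compared, each comparison is cheap, which is the resource saving the application claims.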
  • In the embodiments of the present application, the signature image is segmented to obtain character segmentation images, each character segmentation image is then subjected to the preset number of layers of wavelet decomposition to generate a low-dimensional feature vector corresponding to that image, and finally user identity recognition is realized on the basis of the feature vectors. This guarantees the accuracy of the recognition result while retaining the signature image information, greatly reduces the amount of data processing, and reduces the occupation of system resources.
  • On the basis of the foregoing embodiment, after step S102, step S102' is further included: binarizing the grayscale image to obtain a binarized image.
  • the process of binarization is the process of segmenting the character area in the gray image from the background area, which further reduces the amount of data and further reduces the system resource occupation.
  • Accordingly, in step S103 it is no longer each user signature character in the grayscale image that is segmented, but each user signature character in the binarized image.
  • In one embodiment, the grayscale image is binarized pixel by pixel against a local threshold to obtain the binarized image f(i, j);
  • where Criticalvalue(i, j) is the binarization threshold at pixel (i, j), calculated from the gray values of the pixels neighboring (i, j), i.e., its 4-neighborhood, D-neighborhood, or 8-neighborhood.
  • This binarization method calculates the binarization threshold for each pixel individually. While the amount of calculation is not large, it overcomes the weak environmental adaptability of global image binarization: a good binarization effect can be obtained for signature images in any scene, improving universality.
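The local-threshold binarization described above can be sketched as follows (a square mean-of-neighborhood window is our simplification of the 4-/D-/8-neighborhood variants mentioned in the application; dark ink is mapped to 1):

```python
def binarize_local(gray, radius=1, bias=0):
    """Adaptive binarization: each pixel's threshold is the mean gray value
    of its local window of side 2*radius + 1, clipped at the image border.
    Returns 1 for foreground (dark ink), 0 for background."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [gray[y][x]
                     for y in range(max(0, i - radius), min(h, i + radius + 1))
                     for x in range(max(0, j - radius), min(w, j + radius + 1))]
            threshold = sum(neigh) / len(neigh) - bias
            out[i][j] = 1 if gray[i][j] < threshold else 0
    return out
```

Because each threshold is local, a shadow or uneven lighting across the signature does not flip whole regions, which is the adaptability benefit claimed above.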
  • On the basis of the foregoing embodiment, after step S102' of binarizing the grayscale image to obtain the binarized image, the method further includes step S102'': performing noise reduction processing on the binarized image to obtain a noise-reduced image.
  • the noise reduction can use the smoothing filtering method of the prior art, including but not limited to: mean filtering, median filtering, Wiener filtering, and so on.
  • Since the binarized image contains noise information, removing the noise in the binarized image can greatly improve the quality of the processed image, thereby further improving the accuracy of subsequent results.
  • Accordingly, it is no longer each user signature character in the binarized image that is segmented, but each user signature character in the noise-reduced image.
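As an example of the median filtering mentioned above (one of the listed smoothing filters; the window size and border handling are our choices):

```python
from statistics import median

def median_filter(img, radius=1):
    """3x3 (for radius=1) median filter: replaces each pixel with the median
    of its window, removing isolated salt-and-pepper specks from the
    binarized image while preserving stroke edges."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[y][x]
                      for y in range(max(0, i - radius), min(h, i + radius + 1))
                      for x in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = median(window)
    return out
```

A single stray foreground pixel surrounded by background is removed, since the median of its window is background.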
  • On the basis of the foregoing embodiments, before step S104, the method further includes the step of adjusting each character segmentation image into a size-normalized character image of a preset size;
  • in this way, each character segmentation image is mapped to a dot matrix of the same size, while characteristic information such as the position, inclination and direction of the character strokes is maintained.
  • Methods that can be used for image size normalization include but are not limited to the four-side delimitation method, the center-of-gravity alignment method, the unilateral delimitation method and the bilateral delimitation method.
  • Accordingly, in step S104 the wavelet decomposition of the preset number of layers N is no longer performed on each character segmentation image, but on each size-normalized character image.
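Size normalization to a preset dot matrix can be sketched with nearest-neighbor scaling (one simple option; the application lists delimitation and alignment methods without formulas, so this is only illustrative):

```python
def normalize_size(img, out_h, out_w):
    """Nearest-neighbor scaling of a character image to a preset size,
    preserving the relative position of the strokes."""
    h, w = len(img), len(img[0])
    return [[img[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]
```

After this step every character feeds the wavelet decomposition at the same resolution, so all feature vectors are directly comparable.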
  • Corresponding to the method described above, FIG. 6 shows a structural block diagram of the handwritten-signature-based user identification device provided in an embodiment of the present application. For ease of description, only the parts relevant to the embodiments of this application are shown.
  • the user identification device based on handwritten signature includes:
  • the obtaining module 61 is configured to obtain a signature image including user signature characters
  • the gray-scale module 62 is used to perform gray-scale processing on the signature image to obtain a gray-scale image
  • the segmentation module 63 is configured to segment each of the user signature characters in the gray-scale image, and retain the order of the user signature characters, to obtain a plurality of sorted character segmentation images;
  • the decomposition module 64 is configured to perform wavelet decomposition with a preset number of layers N for each of the character segmented images to obtain a preset number of M high-frequency components corresponding to each of the character segmented images, where N is a positive integer, M is a positive integer greater than N;
  • the calculation module 65 is configured to calculate the mean value and the variance of the high-frequency coefficient matrix of each of the high-frequency components as the characteristic element corresponding to each of the high-frequency components;
  • the combination module 66 is configured to combine a preset number of M high-frequency component feature elements corresponding to each of the character segmented images to form a feature vector of the character segmented image;
  • the matching module 67 is configured to match the feature vector of each of the character segmented images with the corresponding preset feature vector in the preset ranking according to the ranking, and generate a user identification result.
  • the decomposition module 64 is specifically used for:
  • the segmentation module 63 includes:
  • the determining sub-module is used to determine the arrangement direction of user signature characters in the gray-scale image, and the arrangement direction includes a horizontal arrangement and a vertical arrangement;
  • the segmentation sub-module is configured to adopt a corresponding character segmentation strategy according to the determined arrangement direction, retain the order of the user signature characters, and obtain a plurality of sorted character segmentation images.
  • the determining submodule is specifically configured to:
  • the segmentation submodule is specifically configured to:
  • when it is determined that the arrangement direction is vertical, the pixel columns in which the number of second black pixels is greater than the second preset threshold are retained; within the retained pixel columns, the consecutive pixel rows in which the number of first black pixels is greater than the first preset threshold are connected; each connected area is taken as one character segmentation image, and the character segmentation images are sorted according to the order of the pixel rows.
  • when it is determined that the arrangement direction is horizontal, the pixel rows in which the number of first black pixels is greater than the first preset threshold are retained, and the consecutive pixel columns in which the number of second black pixels is greater than the second preset threshold within the retained pixel rows are connected; each connected area is taken as one character segmentation image, and the character segmentation images are sorted according to the order of the pixel columns.
  • it further includes a binarization module, configured to binarize the grayscale image to obtain a binarized image.
  • it further includes a noise reduction module, configured to perform noise reduction on the binarized image to obtain a noise-reduced image.
  • it further includes a normalization module, configured to adjust each character segmentation image into a size-normalized character image of a preset size.
  • Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 7 of this embodiment includes: a processor 70, a memory 71, and computer-readable instructions 72 stored in the memory 71 and executable on the processor 70, such as a program for user identity recognition based on handwritten signatures.
  • when the processor 70 executes the computer-readable instructions 72, the steps in the embodiments of the user identity recognition method based on handwritten signatures are implemented, for example, steps S101 to S107 shown in FIG. 1.
  • alternatively, when the processor 70 executes the computer-readable instructions 72, the functions of the modules/units in the foregoing apparatus embodiments are implemented, for example, the functions of modules 61 to 67 shown in FIG. 6.
  • the computer-readable instructions 72 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 71 and executed by the processor 70 to complete this application.
  • the one or more modules/units may be a series of instruction segments of computer-readable instructions capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 72 in the terminal device 7.
  • the terminal device 7 may be a smartphone, a computer, a tablet, a server, or the like.
  • the terminal device 7 may include, but is not limited to, a processor 70 and a memory 71.
  • FIG. 7 is only an example of the terminal device 7 and does not constitute a limitation on the terminal device 7. It may include more or fewer components than shown in the figure, combine certain components, or use different components.
  • the terminal device may also include input and output devices, network access devices, buses, and the like.
  • the so-called processor 70 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7.
  • the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 7.
  • the memory 71 may also include both an internal storage unit of the terminal device 7 and an external storage device.
  • the memory 71 is used to store the computer-readable instructions and other programs and data required by the terminal device.
  • the memory 71 can also be used to temporarily store data that has been output or will be output.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of this application can also be completed by instructing relevant hardware through computer-readable instructions, and the computer-readable instructions can be stored in a computer-readable storage medium.
  • when the computer-readable instructions are executed by the processor, the steps of the foregoing method embodiments can be implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)

Abstract

This application applies to the field of computer technology and provides a user identity recognition method, apparatus, and terminal device based on handwritten signatures. The method includes: acquiring a signature image and converting it to grayscale; segmenting each user signature character in the grayscale image to obtain multiple sorted character segmentation images; performing wavelet decomposition with a preset number of layers on each character segmentation image to obtain a preset number of high-frequency components for each image; calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that component; combining the feature elements of the preset number of high-frequency components corresponding to each character segmentation image to form the feature vector of that image; and matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering to generate a user identity recognition result. This application greatly reduces the amount of data processing and the system resources occupied.

Description

User identity recognition method, apparatus, and terminal device based on handwritten signatures
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on September 6, 2019, with application number 201910840237.3 and invention title "User identity recognition method, apparatus, and terminal device based on handwritten signatures", the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of computer technology, and in particular relates to a user identity recognition method, apparatus, and terminal device based on handwritten signatures.
Background
Biology provides more diverse methods for user identity recognition. Unlike common identification methods such as keys, magnetic cards, seals, and passwords, biometric verification does not rely on external objects or on memorizing extra passwords. Among the many biometric methods, such as fingerprint recognition, face recognition, and iris recognition, user identity recognition based on handwritten signatures only requires collecting a person's handwritten signature rather than a biological signal; it is therefore non-invasive to the human body and is widely used in daily life.
Existing user identity recognition methods based on handwritten signatures require a very large amount of data processing to obtain accurate recognition results, occupy considerable system resources, and degrade the overall performance of the computing device. A new user identity recognition method based on handwritten signatures is therefore urgently needed to solve the problems of the prior art.
Technical Problem
In view of this, the embodiments of this application provide a user identity recognition method, apparatus, and terminal device based on handwritten signatures, to solve the technical problem that the related art requires a very large amount of data processing, occupies considerable system resources, and degrades the overall performance of the computing device.
Technical Solution
A first aspect of the embodiments of this application provides a user identity recognition method based on handwritten signatures, including:
acquiring a signature image including user signature characters;
converting the signature image to grayscale to obtain a grayscale image;
segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
combining the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
A second aspect of the embodiments of this application provides a user identity recognition apparatus based on handwritten signatures, including:
an acquisition module, configured to acquire a signature image including user signature characters;
a grayscale module, configured to convert the signature image to grayscale to obtain a grayscale image;
a segmentation module, configured to segment each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
a decomposition module, configured to perform wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
a calculation module, configured to calculate the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
a combination module, configured to combine the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
a matching module, configured to match, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
A third aspect of the embodiments of this application provides a terminal device, including a memory and a processor, where the memory stores computer-readable instructions executable on the processor, and when the processor executes the computer-readable instructions, the following steps are implemented:
acquiring a signature image including user signature characters;
converting the signature image to grayscale to obtain a grayscale image;
segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
combining the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium storing computer-readable instructions, where the following steps are implemented when the computer-readable instructions are executed by a processor:
acquiring a signature image including user signature characters;
converting the signature image to grayscale to obtain a grayscale image;
segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
combining the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
Beneficial Effects
In the embodiments of this application, an image including user signature characters first undergoes wavelet decomposition with a preset number of layers to obtain a preset number of high-frequency components; the mean and variance of each high-frequency component are then computed as the feature elements corresponding to that component; the feature elements of the preset number of high-frequency components corresponding to each character segmentation image are combined to form the feature vector of that image; and finally the feature vector is matched against a preset feature vector to obtain a user identity recognition result. This application extracts a feature vector of limited dimensionality from the image including user signature characters, so that user identity recognition only requires feature vector matching; while preserving the image information of the user signature characters, the amount of data processing and the system resources occupied are greatly reduced.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an implementation of a user identity recognition method based on handwritten signatures provided by an embodiment of this application;
Fig. 2 is a flowchart of an implementation of another user identity recognition method based on handwritten signatures provided by an embodiment of this application;
Fig. 3 is a flowchart of an implementation of another user identity recognition method based on handwritten signatures provided by an embodiment of this application;
Fig. 4 is a flowchart of an implementation of another user identity recognition method based on handwritten signatures provided by an embodiment of this application;
Fig. 5 is a flowchart of an implementation of another user identity recognition method based on handwritten signatures provided by an embodiment of this application;
Fig. 6 is a structural block diagram of a user identity recognition apparatus based on handwritten signatures provided by an embodiment of this application;
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of this application.
Embodiments of the Invention
To explain the technical solutions described in this application, specific embodiments are described below.
It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with one another. This application is described in detail below with reference to the drawings and in conjunction with the embodiments.
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the scope of protection of this application.
In the following description, specific details such as particular system structures and technologies are set forth for purposes of illustration rather than limitation, to provide a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of this application.
It should be noted that terms such as "first" and "second" in the specification, claims, and drawings of this application are only used to distinguish similar objects and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to; that is, these descriptions are not necessarily used to describe a particular order or sequence. In addition, it should be understood that these descriptions are interchangeable where appropriate, in order to describe the embodiments of this application.
Fig. 1 shows the implementation flow of the user identity recognition method based on handwritten signatures provided by an embodiment of this application; the flow includes steps S101 to S107. The method is applicable to scenarios where user identity recognition needs to be performed based on a user's handwritten signature. The method is executed by a user identity recognition apparatus based on handwritten signatures; the apparatus is configured in a terminal device and may be implemented by software and/or hardware. The specific implementation principles of each step are as follows.
S101: acquire a signature image including user signature characters.
The signature image including user signature characters is the object used for user identity recognition.
In this embodiment, the signature image including user signature characters may be captured by invoking the camera of the terminal device; it may also be obtained by invoking a scanner to scan a document that includes the user signature characters; or the user's sliding operation trajectory may be acquired through the touch screen of the terminal device and saved as the signature image. It should be noted that the touch screen may be a capacitive screen or a resistive screen, and the user's sliding operation trajectory may be produced by a sliding operation in direct contact with the touch screen, or by a sliding operation performed at a certain distance from the touch screen.
S102: convert the signature image to grayscale to obtain a grayscale image.
In this embodiment, the signature image may be converted to grayscale using image grayscale conversion methods of the related art, including but not limited to the component method, the maximum method, the average method, and the weighted average method.
Optionally, as an embodiment of this application, the signature image is converted to a grayscale image using the following formula:
V_gray(i, j) = (0.3·R(i,j)² + 0.59·G(i,j)² + 0.11·B(i,j)²)^(1/2)
where V_gray(i, j) denotes the gray value of pixel (i, j), and R(i,j), G(i,j), and B(i,j) denote the R, G, and B values of pixel (i, j), respectively.
As another embodiment of this application, the signature image is converted to a grayscale image using the following formula:
V_gray(i, j) = (R(i,j) + G(i,j) + B(i,j)) / 3
where V_gray(i, j) denotes the gray value of pixel (i, j), and R(i,j), G(i,j), and B(i,j) denote the R, G, and B values of pixel (i, j), respectively.
It should be noted that the grayscale image obtained with the formula of the former embodiment preserves more detail than the one obtained with the latter formula, which can further improve the accuracy of subsequent identity recognition.
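The two grayscale formulas above can be sketched as follows (a minimal illustration; the pixel values and image shape are hypothetical, not from the application):

```python
import numpy as np

def to_gray_weighted(rgb):
    """Weighted root-mean-square grayscale: (0.3*R^2 + 0.59*G^2 + 0.11*B^2)^(1/2)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return np.sqrt(0.3 * r**2 + 0.59 * g**2 + 0.11 * b**2)

def to_gray_average(rgb):
    """Simple average grayscale: (R + G + B) / 3."""
    return rgb.astype(float).mean(axis=-1)

# A hypothetical 1x2 RGB image: one white pixel, one pure-red pixel.
img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=np.uint8)
gray_w = to_gray_weighted(img)   # white stays 255; red is damped by its 0.3 weight
gray_a = to_gray_average(img)    # red collapses to 85 under the plain average
```

Because 0.3 + 0.59 + 0.11 = 1, a white pixel keeps its full gray value under the weighted formula, while chromatic pixels are scaled by their channel weights.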
S103: segment each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images.
The result of segmentation is that the user signature characters in the signature image, including Chinese characters or other characters, are separated one by one into individual character blocks, i.e., character segmentation images. The generated character blocks can then be input to the next step for decomposition, producing the feature vector corresponding to each block.
Optionally, as an embodiment of this application, each user signature character in the grayscale image may be segmented using an existing contour segmentation method.
As another embodiment of this application, as shown in Fig. 2, step 103 includes the following steps 201 to 202.
S201: determine the arrangement direction of the user signature characters in the grayscale image, where the arrangement direction includes horizontal arrangement and vertical arrangement.
Although user signature characters are usually arranged horizontally, vertical arrangement cannot be ruled out; therefore, in this embodiment, the arrangement direction of the user signature characters in the grayscale image is determined first.
Optionally, step 201 includes: performing a horizontal projection on the grayscale image, counting the first black-pixel count of each row, and obtaining the number of rows whose first black-pixel count is greater than a first preset threshold; performing a vertical projection on the grayscale image, counting the second black-pixel count of each column, and obtaining the number of columns whose second black-pixel count is greater than a second preset threshold; if the number of rows is greater than the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is vertical; if the number of rows is less than or equal to the number of columns, determining that the arrangement direction is horizontal.
Here, the horizontal projection counts the number of black pixels in each row of the grayscale image (specifically, of the binarized grayscale image), i.e., the first black-pixel count; the vertical projection counts the number of black pixels in each column, i.e., the second black-pixel count. This yields the first black-pixel count of each row and the second black-pixel count of each column. By setting the first and second preset thresholds, the rows and columns that contain characters are distinguished from those that do not, which yields the row count and the column count. Based on common writing habits, when the signature characters are arranged horizontally, the number of occupied pixel rows is smaller than the number of occupied pixel columns; when they are arranged vertically, the number of occupied pixel rows is larger. Therefore, in this embodiment, when the row count is greater than the column count, the arrangement direction of the user signature characters is determined to be vertical; when the row count is less than or equal to the column count, it is determined to be horizontal.
The first preset threshold and the second preset threshold may be set to the same value or to different values; both are empirical values that can be selected according to the actual situation, and this embodiment does not specifically limit them.
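The projection-based direction test described above can be sketched as follows (the thresholds and the toy bitmap are illustrative assumptions; 1 denotes a black pixel):

```python
import numpy as np

def arrangement_direction(binary, row_thresh=1, col_thresh=1):
    """Return 'vertical' or 'horizontal' by comparing occupied rows vs. columns.

    binary: 2D array where 1 = black (ink) pixel, 0 = background.
    """
    row_counts = binary.sum(axis=1)   # horizontal projection: black pixels per row
    col_counts = binary.sum(axis=0)   # vertical projection: black pixels per column
    n_rows = int((row_counts > row_thresh).sum())
    n_cols = int((col_counts > col_thresh).sum())
    # More occupied rows than columns -> characters are stacked vertically.
    return "vertical" if n_rows > n_cols else "horizontal"

# Toy signature: two ink blobs side by side (a horizontal arrangement).
img = np.zeros((4, 10), dtype=int)
img[1:3, 1:4] = 1   # first "character"
img[1:3, 6:9] = 1   # second "character"
```

Transposing the same bitmap stacks the blobs vertically, so the test flips to "vertical", which matches the row-versus-column comparison described above.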
S202: adopt the character segmentation strategy corresponding to the determined arrangement direction, preserving the order of the user signature characters, to obtain multiple sorted character segmentation images.
After the arrangement direction of the user signature characters is determined, character segmentation is performed using the corresponding segmentation strategy. That is, when the arrangement direction is determined to be horizontal, each user signature character is segmented using the strategy corresponding to horizontal arrangement; when it is determined to be vertical, using the strategy corresponding to vertical arrangement. After segmentation, the order of the user signature characters is preserved, yielding multiple sorted character segmentation images.
Optionally, step 202 includes: when the arrangement direction is determined to be vertical, retaining the pixel columns whose second black-pixel count is greater than the second preset threshold; connecting, within the retained pixel columns, the consecutive pixel rows whose first black-pixel count is greater than the first preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel rows.
When the arrangement direction is determined to be horizontal, the pixel rows whose first black-pixel count is greater than the first preset threshold are retained; the consecutive pixel columns whose second black-pixel count is greater than the second preset threshold within the retained rows are connected; each connected region is taken as one character segmentation image; and the character segmentation images are sorted according to the order of the pixel columns.
Since the number of black pixels in each row and each column was counted during the horizontal and vertical projections, and the arrangement direction of the user signature characters was determined, and since signature characters are usually fairly regular, there is a certain gap between adjacent characters. Because of this gap, the black-pixel count of the corresponding row or column does not reach the preset number. Based on this characteristic, the user signature character image is segmented into several pixel blocks; the pixels inside each block are connected, and each block corresponds to one character segmentation image.
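A minimal sketch of the gap-based strategy for the horizontal case (the threshold and toy bitmap are assumptions; the vertical case is symmetric on rows):

```python
import numpy as np

def segment_horizontal(binary, col_thresh=0):
    """Split a horizontally arranged signature into per-character images.

    Consecutive columns whose black-pixel count exceeds col_thresh form one
    character block; the low-count gap columns between characters break the runs,
    so the resulting list is already sorted left to right.
    """
    col_counts = binary.sum(axis=0)
    chars, start = [], None
    for x, c in enumerate(col_counts):
        if c > col_thresh and start is None:
            start = x                          # a run of ink columns begins
        elif c <= col_thresh and start is not None:
            chars.append(binary[:, start:x])   # run ended: emit one character image
            start = None
    if start is not None:                      # signature touches the right edge
        chars.append(binary[:, start:])
    return chars

img = np.zeros((4, 10), dtype=int)
img[1:3, 1:4] = 1
img[1:3, 6:9] = 1
pieces = segment_horizontal(img)   # two character blocks, left to right
```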
S104: perform wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N.
To capture clearer photos, existing imaging devices produce images with ever higher pixel counts, but high-resolution images consume a great deal of time when image processing is required. The reason is that the data of every pixel in the image must be processed; since the image is a whole, it is difficult to parallelize the processing. Moreover, applying the same processing to all pixels lacks selectivity: processing the data of some pixels does not improve the result at all, which wastes system resources and prolongs the processing time.
For the above reasons, in this embodiment each character segmentation image undergoes wavelet decomposition. Since the resolution of the decomposed images is significantly lower, the overall amount of computation and the resource usage are reduced, which also helps increase the computation speed.
Wavelet decomposition with a preset number of layers N is performed on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N.
The first layer of wavelet decomposition decomposes the character segmentation image into several first-layer high-frequency components and one first-layer low-frequency component; the first-layer low-frequency component then undergoes a second layer of wavelet decomposition into several second-layer high-frequency components and one second-layer low-frequency component; the second-layer low-frequency component undergoes a third layer of wavelet decomposition into several third-layer high-frequency components and one third-layer low-frequency component; and so on, until the preset N layers of wavelet decomposition are completed and the M high-frequency components corresponding to the character segmentation image are obtained. It can be understood that the M high-frequency components corresponding to the character segmentation image include the first-layer high-frequency components, the second-layer high-frequency components, ..., and the N-th-layer high-frequency components.
The high-frequency components contain the high-frequency information of object edges in the character segmentation image as well as noise information, while the low-frequency component contains the information of the more slowly varying parts of the image. Wavelet decomposition therefore captures the important feature information of the character segmentation image while reducing system resource usage and computation time.
Optionally, as an embodiment of this application, as shown in Fig. 3, step 104 includes steps 301 to 304.
S301: decompose the character segmentation image through the first layer of wavelet transform into 3 first-layer high-frequency components and 1 first-layer low-frequency component;
S302: decompose the first-layer low-frequency component through the second layer of wavelet transform into 3 second-layer high-frequency components and 1 second-layer low-frequency component;
S303: decompose the second-layer low-frequency component through the third layer of wavelet transform into 3 third-layer high-frequency components and 1 third-layer low-frequency component;
S304: continue until the preset N layers of wavelet decomposition have been performed, obtaining the preset number M of high-frequency components corresponding to each character segmentation image, where M = 3N.
Specifically, the character segmentation image is decomposed at the first layer by the Haar wavelet transform, generating the first-layer horizontal high-frequency component, vertical high-frequency component, and diagonal high-frequency component, as well as one first-layer low-frequency component.
Likewise, the low-frequency component obtained from the first-layer decomposition can be decomposed by the Haar wavelet transform into the second-layer horizontal, vertical, and diagonal high-frequency components and one second-layer low-frequency component.
And so on, until the preset N layers of wavelet decomposition are completed, obtaining the M high-frequency components corresponding to the character segmentation image, where M = 3N.
For example, if the character segmentation image undergoes three layers of wavelet decomposition, with each layer generating three high-frequency components and one low-frequency component, then this embodiment takes the 9 high-frequency components of these three layers in total as the 9 high-frequency components corresponding to the character segmentation image.
In this embodiment, the character segmentation image is decomposed through multiple iterations to generate multiple high-frequency components. Understandably, the resolution of a lower-layer high-frequency component is significantly lower than that of the image at the layer above; therefore, the time required for subsequent processing of the individual high-frequency components (whether in parallel or serially) is less than the time required to process the character segmentation image as a whole. While preserving the image information, this greatly reduces the amount of data processing and the system resource usage.
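Steps 301 to 304 can be sketched with a hand-rolled, unnormalized 2D Haar step iterated N times (a minimal illustration under the assumption of even image sides, not the application's exact transform):

```python
import numpy as np

def haar_step(a):
    """One level of a 2D Haar transform on an array with even sides.

    Returns (LL, (LH, HL, HH)): one low-frequency band and three
    high-frequency bands, each half the size of the input.
    """
    a = a.astype(float)
    # Pairwise averages/differences along columns, then along rows.
    lo_c = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_c = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo_c[0::2] + lo_c[1::2]) / 2
    lh = (lo_c[0::2] - lo_c[1::2]) / 2
    hl = (hi_c[0::2] + hi_c[1::2]) / 2
    hh = (hi_c[0::2] - hi_c[1::2]) / 2
    return ll, (lh, hl, hh)

def wavelet_decompose(img, levels):
    """Re-decompose the low band at each layer; collect 3*levels high bands."""
    highs, low = [], img
    for _ in range(levels):
        low, (lh, hl, hh) = haar_step(low)
        highs.extend([lh, hl, hh])
    return highs

img = np.arange(64, dtype=float).reshape(8, 8)   # hypothetical character block
bands = wavelet_decompose(img, levels=3)         # M = 3N = 9 high-frequency bands
```

Each layer halves both sides, so for an 8×8 input the layers produce 4×4, 2×2, and 1×1 bands — the shrinking resolution that reduces the downstream processing cost.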
S105: calculate the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component.
S106: combine the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image.
In step 105, the mean and variance of the high-frequency coefficient matrix of each of the M high-frequency components are calculated as the feature elements corresponding to that component. In step 106, the means and variances of the M high-frequency components are combined to obtain the feature vector of the character segmentation image.
Understandably, the M high-frequency components yield 2M feature elements in total, and combining these 2M feature elements forms the feature vector of the character segmentation image; that is, the feature vector is 2M-dimensional.
The larger the value of M, the higher the dimensionality of the feature vector and the higher the accuracy of the computation, but the larger the amount of data. The value of M, and therefore the value of the preset number of layers N, can be limited as needed; this embodiment does not specifically limit the values of M and N. However, the applicant found that when the number of high-frequency components corresponding to a character segmentation image is 9 or 6, result accuracy can be ensured while the amount of data processing is reduced.
For example, when a character segmentation image corresponds to 9 high-frequency components, the mean and variance of the high-frequency coefficient matrix of each of the 9 components are computed as feature elements; the 9 components then yield eighteen feature elements in total, and combining them gives an eighteen-dimensional feature vector for the character segmentation image.
Similarly, when a character segmentation image corresponds to 6 high-frequency components, the mean and variance of each component's high-frequency coefficient matrix are computed as feature elements; the 6 components yield twelve feature elements, and combining them gives a twelve-dimensional feature vector.
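The 2M-dimensional feature vector of steps S105–S106 can be sketched as follows (the stand-in bands are hypothetical constant matrices, not real wavelet output):

```python
import numpy as np

def feature_vector(high_bands):
    """Concatenate (mean, variance) of each high-frequency coefficient matrix.

    M bands -> a 2M-dimensional feature vector, e.g. 9 bands -> 18 dimensions.
    """
    feats = []
    for band in high_bands:
        feats.append(float(np.mean(band)))
        feats.append(float(np.var(band)))
    return np.array(feats)

# Hypothetical bands standing in for a 3-level decomposition (M = 9).
bands = [np.ones((4, 4)) * k for k in range(9)]
vec = feature_vector(bands)   # 18-dimensional: mean, variance per band
```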
S107: match, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
The terminal device stores the user signature characters in advance and has already computed the preset feature vector corresponding to each character segmentation image; these preset feature vectors are stored according to the character order, i.e., the preset ordering. Understandably, the preset feature vectors are computed in the same way as the feature vectors; refer to the preceding description, which is not repeated here.
In step 107, the feature vector of each character segmentation image is matched against the corresponding preset feature vector according to the ordering. If the feature vector of every character segmentation image matches its preset feature vector successfully, a recognition result of successful user identity recognition is generated; otherwise, a recognition result of failed user identity recognition is generated.
In one embodiment of this application, whether a feature vector matches a preset feature vector can be judged by computing their similarity: when the similarity is greater than a preset threshold, the character segmentation image is matched successfully, and matching proceeds to the next character segmentation image, until a computed similarity is less than or equal to the threshold or all character segmentation images have been matched. When any one match fails, matching of the next character segmentation image is stopped and a recognition result of failed user identity recognition is generated. Understandably, the matching of each feature vector against its preset feature vector can be processed in parallel, further increasing computation speed and reducing system resource usage.
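One possible instantiation of the similarity match with the early stop described above (cosine similarity and the 0.95 threshold are illustrative assumptions; the application does not fix a similarity measure):

```python
import numpy as np

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def verify_signature(vectors, preset_vectors, threshold=0.95):
    """Match character feature vectors against stored ones, in order.

    Stops at the first character whose similarity falls to or below the
    threshold, mirroring the early-exit behaviour described above.
    """
    if len(vectors) != len(preset_vectors):
        return False
    for v, p in zip(vectors, preset_vectors):
        if cosine_similarity(v, p) <= threshold:
            return False
    return True

# Hypothetical stored templates and probes for a two-character signature.
stored = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
probe_ok = [np.array([1.0, 2.0, 3.1]), np.array([4.0, 5.0, 6.0])]
probe_bad = [np.array([1.0, 2.0, 3.0]), np.array([-4.0, 5.0, -6.0])]
```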
In this embodiment, the signature image is segmented to obtain character segmentation images; each character segmentation image then undergoes wavelet decomposition with a preset number of layers to generate a low-dimensional feature vector for each image; finally, user identity recognition is performed based on the feature vectors. While preserving the signature image information, this ensures the accuracy of the recognition result and greatly reduces the amount of data processing and the system resource usage.
Optionally, on the basis of the embodiment shown in Fig. 1, as shown in Fig. 4, after the signature image is converted to grayscale in step 102, the method further includes step 102': binarizing the grayscale image to obtain a binarized image.
Binarization is the process of separating the character regions of the grayscale image from the background regions, which further reduces the amount of data and further lowers the system resource usage.
It should be noted that, in this case, step 103 no longer segments each user signature character in the grayscale image, but instead segments each user signature character in the binarized image.
Exemplarily, as an embodiment of this application, the grayscale image may be binarized using the following steps:
For the grayscale image V_gray(i, j), binarization is performed by the following formula to obtain the binarized image f(i, j):
if V_gray(i, j) ≥ Criticalvalue(i, j), then f(i, j) = 1;
if V_gray(i, j) < Criticalvalue(i, j), then f(i, j) = 0.
Here, Criticalvalue(i, j) is the binarization threshold of pixel (i, j), which may be the mean gray value of the 4-neighborhood, D-neighborhood, or 8-neighborhood of pixel (i, j).
This binarization method computes a binarization threshold for each pixel. With a modest amount of computation, it overcomes the weak environmental adaptability of image binarization: a good binarization result can be obtained for signature images captured in any scenario, improving general applicability.
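A minimal sketch of the per-pixel thresholding above, using the 8-neighborhood mean as Criticalvalue(i, j) (edge pixels simply use the neighbors that exist; the toy image is hypothetical):

```python
import numpy as np

def binarize_local(gray):
    """Per-pixel binarization: f = 1 where the pixel is at least the mean
    gray value of its 8-neighborhood (center excluded)."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            block = gray[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            # Neighborhood mean excluding the center pixel itself.
            thresh = (block.sum() - gray[i, j]) / (block.size - 1)
            out[i, j] = 1 if gray[i, j] >= thresh else 0
    return out

# A dark background with one bright ink-like pixel in the middle.
gray = np.array([[10.0, 10.0, 10.0],
                 [10.0, 200.0, 10.0],
                 [10.0, 10.0, 10.0]])
binary = binarize_local(gray)   # only the bright center survives
```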
Optionally, on the basis of the embodiment shown in Fig. 4, as shown in Fig. 5, after step 102' of binarizing the grayscale image to obtain a binarized image, the method further includes step 102'': performing noise reduction on the binarized image to obtain a noise-reduced image.
Noise reduction may use smoothing filtering methods of the prior art, including but not limited to mean filtering, median filtering, and Wiener filtering.
Understandably, since the binarized image contains noise information, removing the noise from the binarized image can substantially improve the quality of the processed image, thereby further improving the accuracy of subsequent results.
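As one of the smoothing options named above, a 3x3 median filter removes isolated salt-and-pepper pixels (a minimal sketch that leaves the border unchanged; the noisy image is hypothetical):

```python
import numpy as np

def median_filter_3x3(binary):
    """3x3 median filter over the interior of a binarized image.

    A lone noise pixel is outvoted by its eight background neighbors,
    so the median replaces it with background.
    """
    out = binary.copy()
    h, w = binary.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = int(np.median(binary[i - 1:i + 2, j - 1:j + 2]))
    return out

noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 1                    # a lone noise pixel
clean = median_filter_3x3(noisy)   # the isolated pixel is removed
```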
It should be noted that, in this case, step 103 no longer segments each user signature character in the binarized image, but instead segments each user signature character in the noise-reduced image.
Optionally, on the basis of any of the embodiments shown in Figs. 1, 4, and 5, after the sorted character segmentation images are obtained in step 103, the method further includes the step of adjusting each character segmentation image into a size-normalized character image of a preset size.
Mapping each character segmentation image onto a dot matrix of uniform size preserves feature information such as the position, slant, and direction of the character strokes.
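A simple way to map a character block onto a fixed-size dot matrix is nearest-neighbor sampling (an illustrative sketch; the application does not specify the mapping, and the 8x8 target size and 4x4 character are assumptions):

```python
import numpy as np

def normalize_size(char_img, size=(8, 8)):
    """Map a character image onto a fixed-size dot matrix by nearest-neighbor
    sampling, so every character enters the wavelet step at the same size."""
    h, w = char_img.shape
    out_h, out_w = size
    rows = np.arange(out_h) * h // out_h   # source row for each target row
    cols = np.arange(out_w) * w // out_w   # source column for each target column
    return char_img[np.ix_(rows, cols)]

char = np.eye(4, dtype=int)          # a hypothetical 4x4 character block
norm = normalize_size(char, (8, 8))  # same diagonal stroke, fixed 8x8 grid
```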
In this embodiment, image size normalization may use methods including but not limited to: four-side bounding, centroid alignment, one-side bounding, and two-side bounding.
It should be noted that, in this case, step 104 no longer performs wavelet decomposition with the preset number of layers N on each character segmentation image, but instead performs it on each size-normalized character image.
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of this application.
Corresponding to the user identity recognition method based on handwritten signatures described in the above embodiments, Fig. 6 shows a structural block diagram of the user identity recognition apparatus based on handwritten signatures provided by an embodiment of this application. For ease of description, only the parts related to the embodiments of this application are shown.
Referring to Fig. 6, the user identity recognition apparatus based on handwritten signatures includes:
an acquisition module 61, configured to acquire a signature image including user signature characters;
a grayscale module 62, configured to convert the signature image to grayscale to obtain a grayscale image;
a segmentation module 63, configured to segment each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
a decomposition module 64, configured to perform wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
a calculation module 65, configured to calculate the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
a combination module 66, configured to combine the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
a matching module 67, configured to match, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
Optionally, the decomposition module 64 is specifically configured to:
decompose the character segmentation image through the first layer of wavelet transform into 3 first high-frequency components and 1 first low-frequency component;
decompose the first-layer low-frequency component through the second layer of wavelet transform into 3 second high-frequency components and 1 second low-frequency component;
decompose the second-layer low-frequency component through the third layer of wavelet transform into 3 third high-frequency components and 1 third low-frequency component;
and so on, until the preset N layers of wavelet decomposition have been performed, obtaining the preset number M of high-frequency components corresponding to each character segmentation image, where M = 3N.
Optionally, the segmentation module 63 includes:
a determining sub-module, configured to determine the arrangement direction of the user signature characters in the grayscale image, where the arrangement direction includes horizontal arrangement and vertical arrangement;
a segmentation sub-module, configured to adopt the character segmentation strategy corresponding to the determined arrangement direction, preserving the order of the user signature characters, to obtain multiple sorted character segmentation images.
Optionally, the determining sub-module is specifically configured to:
perform a horizontal projection on the grayscale image, count the first black-pixel count of each row, and obtain the number of rows whose first black-pixel count is greater than a first preset threshold; perform a vertical projection on the grayscale image, count the second black-pixel count of each column, and obtain the number of columns whose second black-pixel count is greater than a second preset threshold; if the number of rows is greater than the number of columns, determine that the arrangement direction of the user signature characters in the grayscale image is vertical; if the number of rows is less than or equal to the number of columns, determine that the arrangement direction is horizontal.
Optionally, the segmentation sub-module is specifically configured to:
when the arrangement direction is determined to be vertical, retain the pixel columns whose second black-pixel count is greater than the second preset threshold; connect, within the retained pixel columns, the consecutive pixel rows whose first black-pixel count is greater than the first preset threshold; take each connected region as one character segmentation image; and sort the character segmentation images according to the order of the pixel rows;
when the arrangement direction is determined to be horizontal, retain the pixel rows whose first black-pixel count is greater than the first preset threshold; connect, within the retained pixel rows, the consecutive pixel columns whose second black-pixel count is greater than the second preset threshold; take each connected region as one character segmentation image; and sort the character segmentation images according to the order of the pixel columns.
Optionally, the apparatus further includes a binarization module, configured to binarize the grayscale image to obtain a binarized image.
Optionally, the apparatus further includes a noise reduction module, configured to perform noise reduction on the binarized image to obtain a noise-reduced image.
Optionally, the apparatus further includes a normalization module, configured to adjust each character segmentation image into a size-normalized character image of a preset size.
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of this application. As shown in Fig. 7, the terminal device 7 of this embodiment includes: a processor 70, a memory 71, and computer-readable instructions 72 stored in the memory 71 and executable on the processor 70, such as a program for user identity recognition based on handwritten signatures. When the processor 70 executes the computer-readable instructions 72, the steps in the above embodiments of the user identity recognition method based on handwritten signatures are implemented, for example, steps S101 to S107 shown in Fig. 1. Alternatively, when the processor 70 executes the computer-readable instructions 72, the functions of the modules/units in the above apparatus embodiments are implemented, for example, the functions of modules 61 to 67 shown in Fig. 6.
Exemplarily, the computer-readable instructions 72 may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to complete this application. The one or more modules/units may be a series of instruction segments of computer-readable instructions capable of completing specific functions; the instruction segments are used to describe the execution process of the computer-readable instructions 72 in the terminal device 7.
The terminal device 7 may be a smartphone, a computer, a tablet, a server, or the like. The terminal device 7 may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art can understand that Fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on the terminal device 7; it may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The so-called processor 70 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store the computer-readable instructions and other programs and data required by the terminal device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through computer-readable instructions, which may be stored in a non-volatile computer-readable storage medium; when executed, the computer-readable instructions may include the processes of the embodiments of the above methods. Any reference to memory, storage, databases, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the scope of protection of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of other embodiments.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may also be completed by instructing relevant hardware through computer-readable instructions, which may be stored in a computer-readable storage medium; when executed by a processor, the computer-readable instructions may implement the steps of the above method embodiments.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the scope of protection of this application.

Claims (20)

  1. A user identity recognition method based on handwritten signatures, characterized by comprising:
    acquiring a signature image including user signature characters;
    converting the signature image to grayscale to obtain a grayscale image;
    segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
    performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
    calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
    combining the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
    matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
  2. The method according to claim 1, wherein performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image comprises:
    decomposing the character segmentation image through a first layer of wavelet transform into 3 first-layer high-frequency components and 1 first-layer low-frequency component;
    decomposing the first-layer low-frequency component through a second layer of wavelet transform into 3 second-layer high-frequency components and 1 second-layer low-frequency component;
    decomposing the second-layer low-frequency component through a third layer of wavelet transform into 3 third-layer high-frequency components and 1 third-layer low-frequency component;
    and so on, until the preset N layers of wavelet decomposition have been performed, obtaining the preset number M of high-frequency components corresponding to each character segmentation image, where M = 3N.
  3. The method according to claim 1 or 2, wherein segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images, comprises:
    determining the arrangement direction of the user signature characters in the grayscale image, where the arrangement direction includes horizontal arrangement and vertical arrangement;
    adopting the character segmentation strategy corresponding to the determined arrangement direction, preserving the order of the user signature characters, to obtain multiple sorted character segmentation images.
  4. The method according to claim 3, wherein determining the arrangement direction of the user signature characters in the grayscale image comprises:
    performing a horizontal projection on the grayscale image, counting the first black-pixel count of each row, and obtaining the number of rows whose first black-pixel count is greater than a first preset threshold; performing a vertical projection on the grayscale image, counting the second black-pixel count of each column, and obtaining the number of columns whose second black-pixel count is greater than a second preset threshold; if the number of rows is greater than the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is vertical; if the number of rows is less than or equal to the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is horizontal.
  5. The method according to claim 4, wherein adopting the character segmentation strategy corresponding to the determined arrangement direction to obtain multiple character segmentation images comprises:
    when the arrangement direction is determined to be vertical, retaining the pixel columns whose second black-pixel count is greater than the second preset threshold; connecting, within the retained pixel columns, the consecutive pixel rows whose first black-pixel count is greater than the first preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel rows;
    when the arrangement direction is determined to be horizontal, retaining the pixel rows whose first black-pixel count is greater than the first preset threshold; connecting, within the retained pixel rows, the consecutive pixel columns whose second black-pixel count is greater than the second preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel columns.
  6. The method according to claim 1 or 2, further comprising, after converting the signature image to grayscale to obtain a grayscale image:
    binarizing the grayscale image to obtain a binarized image;
    correspondingly, segmenting each user signature character in the grayscale image to obtain multiple character segmentation images comprises:
    segmenting each user signature character in the binarized image to obtain multiple character segmentation images.
  7. The method according to claim 6, further comprising, after binarizing the grayscale image to obtain a binarized image:
    performing noise reduction on the binarized image to obtain a noise-reduced image;
    correspondingly, segmenting each user signature character in the binarized image to obtain multiple character segmentation images comprises:
    segmenting each user signature character in the noise-reduced image to obtain multiple character segmentation images.
  8. A user identity recognition apparatus based on handwritten signatures, characterized by comprising:
    an acquisition module, configured to acquire a signature image including user signature characters;
    a grayscale module, configured to convert the signature image to grayscale to obtain a grayscale image;
    a segmentation module, configured to segment each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
    a decomposition module, configured to perform wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
    a calculation module, configured to calculate the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
    a combination module, configured to combine the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
    a matching module, configured to match, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
  9. A terminal device, comprising a memory and a processor, where the memory stores computer-readable instructions executable on the processor, characterized in that when the processor executes the computer-readable instructions, the following steps are implemented:
    acquiring a signature image including user signature characters;
    converting the signature image to grayscale to obtain a grayscale image;
    segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
    performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
    calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
    combining the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
    matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
  10. The terminal device according to claim 9, wherein performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image comprises:
    decomposing the character segmentation image through a first layer of wavelet transform into 3 first-layer high-frequency components and 1 first-layer low-frequency component;
    decomposing the first-layer low-frequency component through a second layer of wavelet transform into 3 second-layer high-frequency components and 1 second-layer low-frequency component;
    decomposing the second-layer low-frequency component through a third layer of wavelet transform into 3 third-layer high-frequency components and 1 third-layer low-frequency component;
    and so on, until the preset N layers of wavelet decomposition have been performed, obtaining the preset number M of high-frequency components corresponding to each character segmentation image, where M = 3N.
  11. The terminal device according to claim 9 or 10, wherein segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images, comprises:
    determining the arrangement direction of the user signature characters in the grayscale image, where the arrangement direction includes horizontal arrangement and vertical arrangement;
    adopting the character segmentation strategy corresponding to the determined arrangement direction, preserving the order of the user signature characters, to obtain multiple sorted character segmentation images.
  12. The terminal device according to claim 11, wherein determining the arrangement direction of the user signature characters in the grayscale image comprises:
    performing a horizontal projection on the grayscale image, counting the first black-pixel count of each row, and obtaining the number of rows whose first black-pixel count is greater than a first preset threshold; performing a vertical projection on the grayscale image, counting the second black-pixel count of each column, and obtaining the number of columns whose second black-pixel count is greater than a second preset threshold; if the number of rows is greater than the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is vertical; if the number of rows is less than or equal to the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is horizontal.
  13. The terminal device according to claim 12, wherein adopting the character segmentation strategy corresponding to the determined arrangement direction to obtain multiple character segmentation images comprises:
    when the arrangement direction is determined to be vertical, retaining the pixel columns whose second black-pixel count is greater than the second preset threshold; connecting, within the retained pixel columns, the consecutive pixel rows whose first black-pixel count is greater than the first preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel rows;
    when the arrangement direction is determined to be horizontal, retaining the pixel rows whose first black-pixel count is greater than the first preset threshold; connecting, within the retained pixel rows, the consecutive pixel columns whose second black-pixel count is greater than the second preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel columns.
  14. The terminal device according to claim 9 or 10, further comprising, after converting the signature image to grayscale to obtain a grayscale image:
    binarizing the grayscale image to obtain a binarized image;
    correspondingly, segmenting each user signature character in the grayscale image to obtain multiple character segmentation images comprises:
    segmenting each user signature character in the binarized image to obtain multiple character segmentation images.
  15. The terminal device according to claim 14, further comprising, after binarizing the grayscale image to obtain a binarized image:
    performing noise reduction on the binarized image to obtain a noise-reduced image;
    correspondingly, segmenting each user signature character in the binarized image to obtain multiple character segmentation images comprises:
    segmenting each user signature character in the noise-reduced image to obtain multiple character segmentation images.
  16. A computer-readable storage medium storing computer-readable instructions, characterized in that the following steps are implemented when the computer-readable instructions are executed by a processor:
    acquiring a signature image including user signature characters;
    converting the signature image to grayscale to obtain a grayscale image;
    segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images;
    performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image, where N is a positive integer and M is a positive integer greater than N;
    calculating the mean and variance of the high-frequency coefficient matrix of each high-frequency component as the feature elements corresponding to that high-frequency component;
    combining the feature elements of the preset number M of high-frequency components corresponding to each character segmentation image to form the feature vector of that character segmentation image;
    matching, in order, the feature vector of each character segmentation image against the corresponding preset feature vector in a preset ordering, to generate a user identity recognition result.
  17. The computer-readable storage medium according to claim 16, wherein performing wavelet decomposition with a preset number of layers N on each character segmentation image to obtain a preset number M of high-frequency components corresponding to each character segmentation image comprises:
    decomposing the character segmentation image through a first layer of wavelet transform into 3 first-layer high-frequency components and 1 first-layer low-frequency component;
    decomposing the first-layer low-frequency component through a second layer of wavelet transform into 3 second-layer high-frequency components and 1 second-layer low-frequency component;
    decomposing the second-layer low-frequency component through a third layer of wavelet transform into 3 third-layer high-frequency components and 1 third-layer low-frequency component;
    and so on, until the preset N layers of wavelet decomposition have been performed, obtaining the preset number M of high-frequency components corresponding to each character segmentation image, where M = 3N.
  18. The computer-readable storage medium according to claim 16 or 17, wherein segmenting each user signature character in the grayscale image while preserving the order of the user signature characters, to obtain multiple sorted character segmentation images, comprises:
    determining the arrangement direction of the user signature characters in the grayscale image, where the arrangement direction includes horizontal arrangement and vertical arrangement;
    adopting the character segmentation strategy corresponding to the determined arrangement direction, preserving the order of the user signature characters, to obtain multiple sorted character segmentation images.
  19. The computer-readable storage medium according to claim 18, wherein determining the arrangement direction of the user signature characters in the grayscale image comprises:
    performing a horizontal projection on the grayscale image, counting the first black-pixel count of each row, and obtaining the number of rows whose first black-pixel count is greater than a first preset threshold; performing a vertical projection on the grayscale image, counting the second black-pixel count of each column, and obtaining the number of columns whose second black-pixel count is greater than a second preset threshold; if the number of rows is greater than the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is vertical; if the number of rows is less than or equal to the number of columns, determining that the arrangement direction of the user signature characters in the grayscale image is horizontal.
  20. The computer-readable storage medium according to claim 19, wherein adopting the character segmentation strategy corresponding to the determined arrangement direction to obtain multiple character segmentation images comprises:
    when the arrangement direction is determined to be vertical, retaining the pixel columns whose second black-pixel count is greater than the second preset threshold; connecting, within the retained pixel columns, the consecutive pixel rows whose first black-pixel count is greater than the first preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel rows;
    when the arrangement direction is determined to be horizontal, retaining the pixel rows whose first black-pixel count is greater than the first preset threshold; connecting, within the retained pixel rows, the consecutive pixel columns whose second black-pixel count is greater than the second preset threshold; taking each connected region as one character segmentation image; and sorting the character segmentation images according to the order of the pixel columns.
PCT/CN2019/118690 2019-09-06 2019-11-15 User identity recognition method, apparatus, and terminal device based on handwritten signatures WO2021042562A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910840237.3 2019-09-06
CN201910840237.3A CN110751024A (zh) 2019-09-06 2019-09-06 User identity recognition method, apparatus, and terminal device based on handwritten signatures

Publications (1)

Publication Number Publication Date
WO2021042562A1 true WO2021042562A1 (zh) 2021-03-11

Family

ID=69276032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118690 WO2021042562A1 (zh) 2019-09-06 2019-11-15 User identity recognition method, apparatus, and terminal device based on handwritten signatures

Country Status (2)

Country Link
CN (1) CN110751024A (zh)
WO (1) WO2021042562A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113423024A (zh) * 2021-06-21 2021-09-21 Vehicle-mounted wireless remote control method and system
CN113421256A (zh) * 2021-07-22 2021-09-21 Dot-matrix text line character projection segmentation method and device
CN113591855A (zh) * 2021-08-18 2021-11-02 Method for segmenting glued VIN codes
CN117728960A (zh) * 2024-02-07 2024-03-19 Standard data digital conversion verification method and system based on electronic signatures

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084470A (zh) * 2020-07-14 2020-12-15 User identity authentication method, device, user terminal, and server
CN112926099B (zh) * 2021-04-02 2021-10-26 Management system based on remote-control identity authentication
CN114724133B (zh) * 2022-04-18 2024-02-02 Character detection and model training method, device, equipment, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620244B1 (en) * 2004-01-06 2009-11-17 Methods and systems for slant compensation in handwriting and signature recognition
CN103218624A (zh) * 2013-04-25 2013-07-24 Biometric feature-based recognition method and device
CN108734168A (zh) * 2018-05-18 2018-11-02 Handwritten digit recognition method
CN109002756A (zh) * 2018-06-04 2018-12-14 Handwritten Chinese character image recognition method, apparatus, computer device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1219271C (zh) * 2003-06-12 2005-09-14 Offline Chinese signature verification method
WO2006010855A1 (fr) * 2004-06-30 2006-02-02 Method and device for face signature and recognition based on wavelet transforms
CN102496013B (zh) * 2011-11-11 2013-08-21 Chinese character segmentation method for offline handwritten Chinese character recognition
CN107464251A (zh) * 2016-06-03 2017-12-12 Image black-border detection method and device
WO2017219962A1 (en) * 2016-06-21 2017-12-28 Systems and methods for image processing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620244B1 (en) * 2004-01-06 2009-11-17 Methods and systems for slant compensation in handwriting and signature recognition
CN103218624A (zh) * 2013-04-25 2013-07-24 Biometric feature-based recognition method and device
CN108734168A (zh) * 2018-05-18 2018-11-02 Handwritten digit recognition method
CN109002756A (zh) * 2018-06-04 2018-12-14 Handwritten Chinese character image recognition method, apparatus, computer device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HOU WEIPING: "Off-line Handwritten Signature Texture Feature Extraction and Verification Based on Spectrum Analysis", CHINA DOCTORAL/MASTER DISSERTATION DATABASE (MASTER'S) INFORMATION TECHNOLOGY SERIES, NO. 08, 2005, 15 December 2005 (2005-12-15), XP055788427 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113423024A (zh) * 2021-06-21 2021-09-21 Vehicle-mounted wireless remote control method and system
CN113421256A (zh) * 2021-07-22 2021-09-21 Dot-matrix text line character projection segmentation method and device
CN113421256B (zh) * 2021-07-22 2024-05-24 Dot-matrix text line character projection segmentation method and device
CN113591855A (zh) * 2021-08-18 2021-11-02 Method for segmenting glued VIN codes
CN113591855B (zh) * 2021-08-18 2023-07-04 Method for segmenting glued VIN codes
CN117728960A (zh) * 2024-02-07 2024-03-19 Standard data digital conversion verification method and system based on electronic signatures
CN117728960B (zh) * 2024-02-07 2024-05-07 Standard data digital conversion verification method and system based on electronic signatures

Also Published As

Publication number Publication date
CN110751024A (zh) 2020-02-04

Similar Documents

Publication Publication Date Title
WO2021042562A1 (zh) User identity recognition method, apparatus, and terminal device based on handwritten signatures
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
WO2022042365A1 (zh) Method and system for recognizing certificates based on a graph neural network
EP3333768A1 (en) Method and apparatus for detecting target
Liang et al. Multi-spectral fusion based approach for arbitrarily oriented scene text detection in video images
CN112001302B (zh) Face recognition method based on face region-of-interest segmentation
CN110852311A (zh) Three-dimensional hand key point localization method and device
CN105681324B (zh) Internet financial transaction system and method
CN112507988B (zh) Image processing method, apparatus, storage medium, and electronic device
WO2021051939A1 (zh) Method and device for locating a certificate region
CN113627428A (zh) Document image correction method, apparatus, storage medium, and intelligent terminal device
CN110443184A (zh) ID card information extraction method, apparatus, and computer storage medium
CN116403094A (zh) Embedded image recognition method and system
CN115631112A (zh) Building contour correction method and device based on deep learning
CN108764121B (zh) Method, computing device, and readable storage medium for detecting living objects
WO2016192213A1 (zh) Image feature extraction method and device, and storage medium
CN111199228B (zh) License plate locating method and device
CN112348008A (zh) Certificate information recognition method, apparatus, terminal device, and storage medium
WO2020247494A1 (en) Cross-matching contactless fingerprints against legacy contact-based fingerprints
WO2020244076A1 (zh) Face recognition method and device, electronic device, and storage medium
WO2020237481A1 (zh) Method for determining an inverse-color region, fingerprint chip, and electronic device
CN113610090B (zh) Seal image recognition and classification method, device, computer equipment, and storage medium
CN116246298A (zh) Method for counting the number of people occupying a space, terminal device, and storage medium
CN113239738B (zh) Image blur detection method and blur detection device
WO2022156088A1 (zh) Fingerprint signature generation method, device, electronic device, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19944244

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19944244

Country of ref document: EP

Kind code of ref document: A1