WO2020134674A1 - Palmprint recognition method and apparatus, computer device and storage medium - Google Patents

Palmprint recognition method and apparatus, computer device and storage medium

Info

Publication number
WO2020134674A1
WO2020134674A1 (PCT/CN2019/118262)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
amplitude
recognized
palmprint image
main direction
Prior art date
Application number
PCT/CN2019/118262
Other languages
English (en)
French (fr)
Inventor
巢中迪
庄伯金
魏鑫
肖京
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020134674A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Definitions

  • the present application relates to the field of image recognition technology, and in particular, to a palmprint recognition method, device, computer equipment, and storage medium.
  • Palmprint recognition has become increasingly common; for example, verifying depositors' identities at a bank, controlling access to a home, and recording employee attendance at a company can all use palmprint recognition.
  • When palmprint recognition is performed, an image of the user's palmprint is usually acquired first, and the palmprint in the image is then matched against pre-stored palmprints. If the match succeeds, palmprint recognition is deemed successful and the user's identity is legitimate; otherwise, the user's operation request is rejected.
  • Embodiments of the present application provide a palmprint recognition method, device, computer equipment, and storage medium, so as to realize palmprint image recognition, and improve the efficiency and accuracy of palmprint recognition.
  • An embodiment of the present application provides a palmprint recognition method, including: acquiring a palmprint image to be recognized; calculating the main direction of each pixel in the palmprint image to be recognized, and calculating the amplitude of each pixel in its main direction; obtaining the feature descriptor of each pixel according to the amplitudes of the pixels in the palmprint image to be recognized; and searching for matching user identity information according to the feature descriptors of the pixels in the palmprint image to be recognized.
  • An embodiment of the present application provides a palmprint recognition device, including: an acquisition module for acquiring a palmprint image to be recognized; a calculation module for calculating the main direction of each pixel in the palmprint image acquired by the acquisition module, and the amplitude of each pixel in its main direction; the acquisition module is further used to obtain the feature descriptor of each pixel according to the amplitudes calculated by the calculation module; and a search module for searching for matching user identity information according to the feature descriptors of the pixels in the palmprint image to be recognized.
  • Embodiments of the present application provide a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method described above is implemented.
  • An embodiment of the present application provides a non-transitory computer-readable storage medium storing a computer program that, when executed by a processor, implements the method described above.
  • In the embodiments of the present application, the main direction of each pixel in the palmprint image to be recognized is calculated, along with the amplitude of each pixel in its main direction; the feature descriptors of the pixels are obtained from these amplitudes, and matching user identity information is searched for according to the feature descriptors. The palmprint image can thus be recognized and the user's identity determined from the recognition result. Because the search uses the feature descriptor of every pixel, it takes full advantage of the descriptors' invariance to illumination, rotation, and scale, improving the accuracy of palmprint recognition.
  • FIG. 5 is a schematic structural diagram of an embodiment of a palmprint recognition device disclosed in this application.
  • FIG. 6 is a schematic structural diagram of another embodiment of a palmprint recognition device disclosed in this application.
  • FIG. 7 is a schematic structural diagram of an embodiment of a computer device disclosed in this application.
  • FIG. 1 is a flowchart of an embodiment of a palmprint recognition method disclosed in this application. As shown in FIG. 1, the above palmprint recognition method may include:
  • Step 101 Acquire a palmprint image to be recognized.
  • The palmprint image to be recognized may be captured by the user with a camera or other photographing device; alternatively, the user may place a palm against the palmprint input module of the palmprint recognition device, which then inputs the palmprint image to be recognized.
  • the palm print image to be recognized can also be obtained in other ways, which is not limited in this embodiment.
  • Step 102 Calculate the main direction of the pixels in the palmprint image to be identified, and calculate the amplitude of the pixels in the corresponding main directions.
  • Step 103 Obtain the feature descriptor of the pixel according to the amplitude of the pixel in the palmprint image to be identified.
  • Step 104 Find matching user identity information according to the feature descriptor of each pixel in the palmprint image to be identified.
  • The user identity information may include the user's facial image, name, and/or gender, so that after matching user identity information is found, the legitimacy of the identity of the user who input the palmprint image to be recognized can be confirmed.
  • In this embodiment, the main direction of each pixel in the palmprint image to be recognized is calculated, along with the amplitude of each pixel in its main direction; the feature descriptors of the pixels are obtained from these amplitudes, and matching user identity information is searched for according to the feature descriptors. The palmprint image can thus be recognized and the user's identity determined from the recognition result. Because the search uses the feature descriptor of every pixel, it takes full advantage of the descriptors' invariance to illumination, rotation, and scale, improving the accuracy of palmprint recognition.
  • FIG. 2 is a flowchart of another embodiment of the palmprint recognition method disclosed in this application. As shown in FIG. 2, in the embodiment shown in FIG. 1 of this application, step 102 may include:
  • Step 201: Filter the pixels in the palmprint image to be recognized with at least two directional filters of different directions, obtain the filter value of each directional filter for each pixel, and take the direction indicated by the directional filter with the largest filter value as the main direction of the pixel.
  • The directional filter may be a Gabor filter; other types of directional filters may also be used, which is not limited in this embodiment.
  • Step 202: In the main direction of each pixel, filter the pixel with at least two scale filters of different scales, obtain the filter value of each scale filter for the pixel, and take the largest filter value as the amplitude of the pixel in its main direction.
  • The scale filter may likewise be a Gabor filter; other types of scale filters may also be used, which is not limited in this embodiment.
  • Specifically, n Gabor filters of different directions may be used to filter the pixels in the palmprint image to be recognized; the filter value of each Gabor filter for a pixel is obtained, and the direction indicated by the Gabor filter with the largest filter value is taken as the main direction of the pixel. Then, in the main direction of each pixel, the pixel is filtered with k Gabor filters of different scales, the filter value of each scale filter is obtained, and the largest filter value is taken as the amplitude of the pixel in its main direction, where n, k ≥ 2 and n, k are positive integers.
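The directional and scale filtering of steps 201-202 can be sketched in NumPy as follows. This is an illustrative implementation, not the patent's exact filter bank: the Gabor parameters (kernel size, sigma, wavelength) and the naming are assumptions made for the example.

```python
import numpy as np

def gabor_kernel(theta, sigma, lam, size=9):
    """Real-valued Gabor kernel oriented along angle theta (parameters illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean so flat regions give zero response

def filter_response(img, kernel):
    """Correlate img with kernel (same-size output, edge padding)."""
    half = kernel.shape[0] // 2
    padded = np.pad(img, half, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

def main_direction_and_amplitude(img, n=6, k=3):
    """Step 201: pick each pixel's main direction as the strongest of n
    oriented Gabor responses. Step 202: take the amplitude as the largest
    response over k scales along that direction."""
    thetas = [np.pi * d / n for d in range(n)]
    # n directional responses at a fixed reference scale
    dir_resp = np.stack([filter_response(img, gabor_kernel(t, sigma=2.0, lam=4.0))
                         for t in thetas])
    main_idx = np.argmax(dir_resp, axis=0)  # index of main direction per pixel
    # k (sigma, lambda) scale pairs -- illustrative values, not from the patent
    scales = [(1.5 + s, 3.0 + 2 * s) for s in range(k)]
    amp = np.full(img.shape, -np.inf)
    for d, t in enumerate(thetas):
        mask = main_idx == d
        if not mask.any():
            continue
        resp = np.stack([filter_response(img, gabor_kernel(t, sig, lam))
                         for sig, lam in scales]).max(axis=0)
        amp[mask] = resp[mask]
    return main_idx, amp
```

The per-pixel loop in `filter_response` keeps the sketch dependency-free; a real implementation would use an FFT-based convolution or an image-processing library.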
  • FIG. 3 is a flowchart of still another embodiment of the palmprint recognition method disclosed in this application. As shown in FIG. 3, in the embodiment shown in FIG. 1 of this application, step 103 may include:
  • Step 301: Divide the amplitude range into a first predetermined number of intervals according to the maximum and minimum amplitudes of the pixels in the palmprint image to be recognized, and number the resulting amplitude intervals; the first predetermined number is the difference between the maximum value and the minimum value.
  • Step 302 Record the number of the amplitude interval to which the amplitude of the pixel in the palmprint image to be identified belongs.
  • Step 303: Cast a second predetermined number of rays centered on the pixel, obtain the maximum-value pixel closest to the pixel on each ray, record the main direction of each maximum-value pixel and the number of the amplitude interval to which it belongs, and use the recorded main directions and amplitude-interval numbers as the feature vector of the pixel.
  • The second predetermined number is a positive integer whose size can be set according to system performance and/or implementation requirements; it is not limited in this embodiment, and it may be the same as or different from the first predetermined number.
  • Obtaining the maximum-value pixel closest to the pixel on each ray may be done as follows: among the pixels nearest to each ray, according to their distance to the ray, select the pixel with the largest amplitude.
  • Step 304: Rotate the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the maximum-value pixels on the second predetermined number of rays, obtaining the feature descriptor of the pixel.
  • Since each ray corresponds to one maximum-value pixel, when the second predetermined number is l the feature vector of a pixel has length 2×l (one main direction and one amplitude-interval number per ray). The pixel with the largest amplitude among these maximum-value pixels is found, and the feature vector is rotated from its starting position to the main direction of that pixel, yielding the feature descriptor of each pixel in the palmprint image to be recognized.
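A minimal sketch of steps 301-304 follows, under simplifying assumptions: the "closest maximum-value pixel" on a ray is approximated by the largest-amplitude pixel within a fixed number of steps along the ray, the interval count is a parameter so small test images still get several bins, and the rotation of step 304 is modeled as a circular shift of the per-ray entries. The function names, `max_r`, and the ray-sampling scheme are illustrative, not from the patent.

```python
import numpy as np

def quantize_amplitudes(amp, num_bins=None):
    """Steps 301-302: split [min, max] of the amplitudes into numbered
    intervals. The patent sets the count to (max - min); here it is
    overridable for small examples."""
    lo, hi = float(amp.min()), float(amp.max())
    n = max(int(round(hi - lo)), 1) if num_bins is None else num_bins
    edges = np.linspace(lo, hi, n + 1)
    return np.clip(np.digitize(amp, edges) - 1, 0, n - 1)

def descriptor_at(main_dir, amp_bin, amp, center, num_rays=8, max_r=5):
    """Steps 303-304: cast num_rays rays from `center`, take the nearest
    maximum-value pixel on each ray (approximated as the largest-amplitude
    pixel within max_r steps), record its (main direction, interval number),
    then rotate the vector to start at the largest-amplitude entry."""
    h, w = amp.shape
    entries = []
    for r in range(num_rays):
        ang = 2 * np.pi * r / num_rays
        best, best_amp = None, -np.inf
        for step in range(1, max_r + 1):
            i = int(round(center[0] + step * np.sin(ang)))
            j = int(round(center[1] + step * np.cos(ang)))
            if 0 <= i < h and 0 <= j < w and amp[i, j] > best_amp:
                best, best_amp = (i, j), amp[i, j]
        if best is None:          # ray left the image immediately
            best = center
        entries.append((main_dir[best], amp_bin[best], amp[best]))
    # step 304 analogue: start the vector at the largest-amplitude entry
    start = max(range(num_rays), key=lambda r: entries[r][2])
    rotated = entries[start:] + entries[:start]
    # 2*l-long descriptor: (main direction, interval number) per ray
    return [v for d, b, _ in rotated for v in (d, b)]
```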
  • This embodiment uses the main direction and amplitude of each pixel in the palmprint image to be recognized as the feature descriptor; that is, the descriptor is built from the local features of every pixel. This overcomes the inefficiency and computational complexity that arise when feature points are sparse, simplifies the computation of palmprint recognition, and improves its efficiency.
  • Because the main direction and amplitude of each pixel serve as the feature descriptor, the amplitudes are obtained by filtering with at least two Gabor filters of different scales, and the feature vector is rotated according to the main direction in step 304, each pixel's descriptor has scale invariance and rotation invariance. In addition, because the amplitudes are quantized into the first predetermined number of intervals, the descriptors also have a degree of invariance to illumination.
  • FIG. 4 is a flowchart of still another embodiment of the palmprint recognition method disclosed in this application. As shown in FIG. 4, in the embodiment shown in FIG. 1 of this application, step 104 may include:
  • Step 401: Map the feature descriptor of each pixel to a different sub-interval of a histogram according to its value, obtaining the feature-point histogram of the palmprint image to be recognized.
  • Step 402 Find a feature point histogram matching the feature point histogram of the palmprint image to be identified above in the pre-stored feature point histogram.
  • Step 403 Acquire user identity information corresponding to the matched feature point histogram.
  • In this embodiment, the feature descriptors of the pixels are mapped into sub-intervals of a feature-point histogram to obtain the feature-point histogram of the palmprint image to be recognized, so a pre-stored histogram matching it can be found simply by comparing histograms. This makes full use of the descriptors' invariance to illumination, rotation, and scale, and mapping descriptors into a histogram also simplifies the search operation.
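Steps 401-403 might be sketched as follows. The hashing of descriptors into histogram sub-intervals and the histogram-intersection similarity score are illustrative stand-ins: the patent does not specify the mapping or the matching measure, so both are assumptions made for the example.

```python
import numpy as np

def feature_histogram(descriptors, num_bins=32):
    """Step 401: map each pixel's descriptor to one of num_bins sub-intervals
    (here via a simple hash) and count occurrences; the normalized counts
    form the feature-point histogram of the image."""
    hist = np.zeros(num_bins)
    for desc in descriptors:
        idx = hash(tuple(int(v) for v in desc)) % num_bins  # stand-in mapping
        hist[idx] += 1
    total = hist.sum()
    return hist / total if total else hist

def find_match(query_hist, stored, threshold=0.9):
    """Steps 402-403: compare against pre-stored histograms (here with
    histogram intersection) and return the identity of the best match
    clearing the threshold, or None if the user is not recognized."""
    best_id, best_score = None, threshold
    for user_id, hist in stored.items():
        score = np.minimum(query_hist, hist).sum()  # histogram intersection
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id
```

Returning `None` corresponds to the case where no matching user identity information is found and the user's identity is deemed not legitimate.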
  • FIG. 5 is a schematic structural diagram of an embodiment of a palmprint recognition apparatus disclosed in the present application.
  • the palmprint recognition apparatus in this embodiment may implement the palmprint recognition method provided in the embodiment of the present application.
  • the above palmprint recognition device may include: an acquisition module 51, a calculation module 52, and a search module 53;
  • The obtaining module 51 is used to obtain a palmprint image to be recognized. The palmprint image may be captured by the user with a camera or other photographing device; alternatively, the user may place a palm against the palmprint input module of the palmprint recognition device, which then inputs the palmprint image to be recognized.
  • the palm print image to be recognized can also be obtained in other ways, which is not limited in this embodiment.
  • the calculation module 52 is used to calculate the main direction of the pixel points in the palmprint image to be recognized acquired by the acquisition module 51, and calculate the amplitude of the pixel points in the corresponding main direction.
  • the obtaining module 51 is further configured to obtain the feature descriptor of the pixel point according to the amplitude of the pixel point in the palmprint image to be recognized calculated by the calculation module 52.
  • the searching module 53 is configured to search for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized obtained by the obtaining module 51.
  • The user identity information may include the user's facial image, name, and/or gender. In this way, after the search module 53 finds matching user identity information, the legitimacy of the identity of the user who input the palmprint image to be recognized can be confirmed.
  • If the search module 53 finds no matching user identity information according to the feature descriptors of the pixels in the palmprint image to be recognized, it can be determined that the identity of the user who input the image is not legitimate.
  • In this embodiment, the calculation module 52 calculates the main direction of each pixel in the palmprint image to be recognized and the amplitude of each pixel in its main direction; the obtaining module 51 obtains the feature descriptors of the pixels from these amplitudes; and the search module 53 searches for matching user identity information according to the feature descriptors. The palmprint image can thus be recognized and the user's identity determined from the recognition result. Because the search uses the feature descriptor of every pixel, it takes full advantage of the descriptors' invariance to illumination, rotation, and scale, improving the accuracy of palmprint recognition.
  • The calculation module 52 is specifically configured to filter the pixels in the palmprint image to be recognized with at least two directional filters of different directions, obtain the filter value of each directional filter for each pixel, and take the direction indicated by the filter with the largest filter value as the main direction of the pixel; and, in the main direction of each pixel, to filter the pixel with at least two scale filters of different scales, obtain the filter value of each scale filter, and take the largest filter value as the amplitude of the pixel in its main direction.
  • The directional filter may be a Gabor filter, and the scale filter may likewise be a Gabor filter; other types of directional and scale filters may also be used, which is not limited in this embodiment.
  • Specifically, the calculation module 52 may use n Gabor filters of different directions to filter the pixels in the palmprint image to be recognized, obtain the filter value of each Gabor filter for a pixel, and take the direction indicated by the Gabor filter with the largest filter value as the main direction of the pixel. Then, in the main direction of each pixel, the pixel is filtered with k Gabor filters of different scales, the filter value of each scale filter is obtained, and the largest filter value is taken as the amplitude of the pixel in its main direction, where n, k ≥ 2 and n, k are positive integers.
  • the acquisition module 51 may include: a division submodule 511, a numbering submodule 512, a recording submodule 513, and a rotation submodule 514;
  • a dividing submodule 511 configured to divide the amplitude of the pixel points into a first predetermined number of intervals according to the maximum and minimum values of the amplitude of the pixel points in the palmprint image to be identified;
  • The numbering submodule 512 is used to number the amplitude intervals obtained by the dividing submodule 511; the first predetermined number is the difference between the maximum value and the minimum value.
  • The recording submodule 513 is used to record the number of the amplitude interval to which each pixel's amplitude belongs; to cast a second predetermined number of rays centered on the pixel and obtain the maximum-value pixel closest to the pixel on each ray; and to record the main direction of each maximum-value pixel and the number of its amplitude interval, using the recorded main directions and amplitude-interval numbers as the feature vector of the pixel. The second predetermined number is a positive integer whose size can be set according to system performance and/or implementation requirements; it is not limited in this embodiment, and it may be the same as or different from the first predetermined number.
  • Obtaining the maximum-value pixel closest to the pixel on each ray may be done as follows: among the pixels nearest to each ray, according to their distance to the ray, select the pixel with the largest amplitude.
  • The rotation submodule 514 is configured to rotate the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the maximum-value pixels on the second predetermined number of rays, obtaining the feature descriptor of the pixel.
  • Since each ray corresponds to one maximum-value pixel, when the second predetermined number is l the feature vector of a pixel has length 2×l. The pixel with the largest amplitude among these maximum-value pixels is found, and the rotation submodule 514 rotates the feature vector from its starting position to the main direction of that pixel, yielding the feature descriptor of each pixel in the palmprint image to be recognized.
  • This embodiment uses the main direction and amplitude of each pixel in the palmprint image to be recognized as the feature descriptor; that is, the descriptor is built from the local features of every pixel. This overcomes the inefficiency and computational complexity that arise when feature points are sparse, simplifies the computation of palmprint recognition, and improves its efficiency.
  • Because the main direction and amplitude of each pixel serve as the feature descriptor, the amplitudes are obtained by filtering with at least two Gabor filters of different scales, and the rotation submodule 514 rotates the feature vector according to the main direction, each pixel's descriptor has scale invariance and rotation invariance.
  • the search module 53 may include: a mapping submodule 531, a search submodule 532, and an acquisition submodule 533
  • The mapping submodule 531 is configured to map the feature descriptor of each pixel to a different sub-interval of a histogram according to its value, obtaining the feature-point histogram of the palmprint image to be recognized.
  • the searching sub-module 532 is used to search for a feature point histogram matching the feature point histogram of the palmprint image to be identified in the pre-stored feature point histogram;
  • the obtaining submodule 533 is used to obtain user identity information corresponding to the matched feature point histogram.
  • In this embodiment, the mapping submodule 531 maps the feature descriptors of the pixels into sub-intervals of a feature-point histogram to obtain the feature-point histogram of the palmprint image to be recognized, so the searching submodule 532 can find a matching pre-stored histogram simply by comparing histograms. This makes full use of the descriptors' invariance to illumination, rotation, and scale, and mapping descriptors into a histogram also simplifies the search operation.
  • The computer device may include a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the palmprint recognition method provided in the embodiments of the present application may be implemented.
  • the computer device may be a server, such as a cloud server, or an electronic device, such as a smart phone, smart watch, or tablet computer.
  • the specific form of the computer device is not limited in this embodiment.
  • FIG. 7 shows a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present application.
  • the computer device 12 shown in FIG. 7 is just an example, and should not bring any limitation to the functions and use scope of the embodiments of the present application.
  • the computer device 12 is represented in the form of a general-purpose computing device.
  • the components of the computer device 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
  • the bus 18 represents one or more of several types of bus structures, including a memory bus or a memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • the computer device 12 typically includes a variety of computer system readable media. These media may be any available media that can be accessed by the computer device 12, including volatile and nonvolatile media, removable and non-removable media.
  • the system memory 28 may include a computer system readable medium in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter referred to as RAM) 30 and/or cache memory 32.
  • the computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • The storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 7; commonly referred to as a "hard disk drive").
  • Although not shown in FIG. 7, a disk drive for reading from and writing to a removable non-volatile magnetic disk (for example, a "floppy disk") may be provided, as well as an optical disc drive for reading from and writing to a removable non-volatile optical disc, for example a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media.
  • each drive may be connected to the bus 18 through one or more data medium interfaces.
  • the memory 28 may include at least one program product having a set (eg, at least one) of program modules, which are configured to perform the functions of the embodiments of the present application.
  • a program/utility tool 40 having a set of (at least one) program modules 42 may be stored in, for example, the memory 28.
  • Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program module 42 generally performs the functions and/or methods in the embodiments described in this application.
  • The computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, or a display 24), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card or modem) that enables the computer device 12 to communicate with one or more other computing devices. Such communication can be performed through an input/output (I/O) interface 22.
  • The computer device 12 can also communicate, through the network adapter 20, with one or more networks, such as a local area network (Local Area Network; hereinafter referred to as LAN), a wide area network (Wide Area Network; hereinafter referred to as WAN), and/or a public network such as the Internet.
  • The network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. It should be understood that, although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running the program stored in the system memory 28, for example, to implement the palmprint recognition method provided by the embodiment of the present application.
  • An embodiment of the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor, the palmprint recognition method provided by the embodiment of the present application may be implemented.
  • the foregoing non-transitory computer-readable storage medium may employ any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination of the above.
  • More specific examples of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (Read-Only Memory; hereinafter referred to as ROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory; hereinafter referred to as EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. .
  • the program code contained on the computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • The computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof.
  • These programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The terms "first" and "second" are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to.
  • A feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature.
  • The meaning of "plurality" is at least two, such as two, three, etc., unless otherwise specifically limited.
  • Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing custom logic functions or steps of a process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
  • Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting".
  • Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
  • The terminals involved in the embodiments of the present application may include, but are not limited to, personal computers (PCs), personal digital assistants (PDAs), wireless handheld devices, tablet computers, mobile phones, MP3 players, MP4 players, etc.
  • In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways.
  • The apparatus embodiments described above are only illustrative.
  • The division of the above-mentioned units is only a division of logical functions; in actual implementation, there may be other ways of division.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium.
  • The above software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor perform some of the steps of the methods in the various embodiments of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A palmprint recognition method and apparatus, a computer device, and a storage medium, the palmprint recognition method comprising: acquiring a palmprint image to be recognized (101); computing the main direction of each pixel in the palmprint image to be recognized, and computing the amplitude of each pixel in its main direction (102); obtaining a feature descriptor of each pixel according to the amplitudes of the pixels in the palmprint image to be recognized (103); and searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized (104). The method enables palmprint images to be recognized and improves the efficiency and accuracy of palmprint recognition.

Description

Palmprint recognition method, apparatus, computer device and storage medium
This application claims priority to Chinese patent application No. 201811646405.7, entitled "Palmprint recognition method, apparatus and computer device", filed with the China National Intellectual Property Administration on December 29, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image recognition technology, and in particular to a palmprint recognition method, apparatus, computer device and storage medium.
Background
Palmprint recognition is now increasingly widely used; for example, identity verification of depositors at a bank, household access-control systems, and employee attendance systems at a company can all employ palmprint recognition.
When performing palmprint recognition, an image of the user's palmprint is usually acquired first, and the palmprint in the image is then matched against pre-stored palmprints. If the matching succeeds, it can be determined that the palmprint recognition has succeeded and the user's identity is legitimate; otherwise, the user's operation request is rejected.
However, the palmprint recognition schemes provided in the related art suffer from low recognition efficiency and low recognition accuracy, and cannot obtain palmprint recognition results quickly and accurately.
Summary
Embodiments of the present application provide a palmprint recognition method, apparatus, computer device and storage medium, so as to recognize palmprint images and improve the efficiency and accuracy of palmprint recognition.
In a first aspect, an embodiment of the present application provides a palmprint recognition method, comprising: acquiring a palmprint image to be recognized; computing the main direction of each pixel in the palmprint image to be recognized, and computing the amplitude of the pixel in its main direction; obtaining a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized; and searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized.
In a second aspect, an embodiment of the present application provides a palmprint recognition apparatus, comprising: an acquisition module configured to acquire a palmprint image to be recognized; a computation module configured to compute the main direction of each pixel in the palmprint image acquired by the acquisition module, and to compute the amplitude of the pixel in its corresponding main direction; the acquisition module being further configured to obtain a feature descriptor of the pixel according to the pixel amplitudes computed by the computation module; and a search module configured to search for matching user identity information according to the feature descriptor of each pixel in the palmprint image obtained by the acquisition module.
In a third aspect, an embodiment of the present application provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method described above is implemented.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described above is implemented.
In the above technical solutions, after the palmprint image to be recognized is acquired, the main direction of each pixel in the image is computed, along with the amplitude of the pixel in its corresponding main direction; a feature descriptor of the pixel is obtained according to the pixel amplitudes; and matching user identity information is searched for according to the feature descriptor of each pixel. The palmprint image can thus be recognized, and the user's identity can in turn be verified from the palmprint recognition result. Since matching user identity information is searched for according to the feature descriptor of each pixel, the illumination, rotation, and scale invariance of the feature descriptors is fully exploited, which improves the accuracy of palmprint recognition.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of an embodiment of the palmprint recognition method disclosed in the present application;
Fig. 2 is a flowchart of another embodiment of the palmprint recognition method disclosed in the present application;
Fig. 3 is a flowchart of a further embodiment of the palmprint recognition method disclosed in the present application;
Fig. 4 is a flowchart of a further embodiment of the palmprint recognition method disclosed in the present application;
Fig. 5 is a schematic structural diagram of an embodiment of the palmprint recognition apparatus disclosed in the present application;
Fig. 6 is a schematic structural diagram of another embodiment of the palmprint recognition apparatus disclosed in the present application;
Fig. 7 is a schematic structural diagram of an embodiment of the computer device disclosed in the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, the embodiments of the present application are described in detail below with reference to the drawings.
It should be clear that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms used in the embodiments of the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said" and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
Fig. 1 is a flowchart of an embodiment of the palmprint recognition method disclosed in the present application. As shown in Fig. 1, the palmprint recognition method may include:
Step 101: acquire a palmprint image to be recognized.
The palmprint image to be recognized may be a palmprint image captured by the user with a photographic device such as a camera; alternatively, the user may bring a palm into contact with the palmprint input module of a palmprint recognition apparatus, and the palmprint image to be recognized is input through the palmprint input module. Of course, the palmprint image to be recognized may also be acquired in other ways, which is not limited in this embodiment.
Step 102: compute the main direction of each pixel in the palmprint image to be recognized, and compute the amplitude of the pixel in its corresponding main direction.
Step 103: obtain a feature descriptor of each pixel according to the amplitudes of the pixels in the palmprint image to be recognized.
Step 104: search for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized.
The user identity information may include the user's facial image, name, and/or gender, etc. In this way, once matching user identity information is found, the legitimacy of the identity of the user who input the palmprint image to be recognized can be confirmed.
In addition, if no matching user identity information is found according to the feature descriptors of the pixels in the palmprint image to be recognized, it can be determined that the identity of the user who input the palmprint image is not legitimate.
In the above palmprint recognition method, after the palmprint image to be recognized is acquired, the main direction of each pixel in the image is computed, along with the amplitude of the pixel in its corresponding main direction; a feature descriptor of the pixel is obtained according to the pixel amplitudes; and matching user identity information is searched for according to the feature descriptor of each pixel. The palmprint image can thus be recognized, and the user's identity can in turn be verified from the palmprint recognition result. Since matching user identity information is searched for according to the feature descriptor of each pixel, the illumination, rotation, and scale invariance of the feature descriptors is fully exploited, which improves the accuracy of palmprint recognition.
Fig. 2 is a flowchart of another embodiment of the palmprint recognition method disclosed in the present application. As shown in Fig. 2, in the embodiment shown in Fig. 1 of the present application, step 102 may include:
Step 201: filter each pixel in the palmprint image to be recognized with directional filters of at least two different directions, obtain the filter response of each directional filter at the pixel, and select the direction indicated by the directional filter with the largest response as the main direction of the pixel.
The directional filter may be a Gabor filter; of course, other kinds of directional filters may also be used, which is not limited in this embodiment.
Step 202: filter the pixel in its main direction with scale filters of at least two different scales, obtain the filter response of each scale filter at the pixel, and select the largest response as the amplitude of the pixel in its corresponding main direction.
The scale filter may be a Gabor filter; of course, other kinds of scale filters may also be used, which is not limited in this embodiment.
In a specific implementation, n Gabor filters of different directions may be used to filter each pixel in the palmprint image to be recognized, the response of each Gabor filter at the pixel is obtained, and the direction indicated by the Gabor filter with the largest response is selected as the main direction of the pixel. Then, the pixel is filtered in its main direction with k Gabor filters of different scales, the response of each scale filter at the pixel is obtained, and the largest response is selected as the amplitude of the pixel in its corresponding main direction, where n, k ≥ 2 and n, k are positive integers.
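Purely as an illustration, the per-pixel main direction and amplitude described above (steps 201-202) might be computed with a small Gabor filter bank as in the following sketch. The bank parameters (number of directions, kernel sizes, sigma, wavelength) and the helper names are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel in its standard formulation."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lambd + psi))

def filter2d(img, kernel):
    """Same-size 2-D correlation with edge padding (sketch quality only)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def main_direction_and_amplitude(image, n_dirs=4, ksizes=(5, 9)):
    """Step 201: main direction = direction of the strongest Gabor response.
    Step 202: amplitude = strongest response over several scales, taken in
    each pixel's own main direction."""
    img = image.astype(np.float64)
    thetas = [np.pi * i / n_dirs for i in range(n_dirs)]
    # Responses for every (scale, direction) pair: shape (k, n, H, W).
    resp = np.stack([
        np.stack([filter2d(img, gabor_kernel(k, k / 3.0, t, lambd=4.0))
                  for t in thetas])
        for k in ksizes])
    main_dir = np.argmax(resp[0], axis=0)        # direction index per pixel
    rows, cols = np.indices(main_dir.shape)
    amplitude = resp[:, main_dir, rows, cols].max(axis=0)
    return main_dir, amplitude
```

A real implementation would more likely use an optimized convolution (e.g. an FFT- or library-based one) rather than the explicit loop above; the loop is kept here only to stay dependency-free.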
Fig. 3 is a flowchart of a further embodiment of the palmprint recognition method disclosed in the present application. As shown in Fig. 3, in the embodiment shown in Fig. 1 of the present application, step 103 may include:
Step 301: according to the maximum and minimum of the pixel amplitudes in the palmprint image to be recognized, divide the pixel amplitudes into a first predetermined number of intervals, and number the resulting amplitude intervals, the first predetermined number being the difference between the maximum and the minimum.
Step 302: record the number of the amplitude interval to which the amplitude of each pixel in the palmprint image to be recognized belongs.
Step 303: draw a second predetermined number of rays centered on the pixel; on each ray, obtain the local-maximum pixel closest to the current pixel; record the main direction of the local-maximum pixel and the number of the amplitude interval to which the local-maximum pixel belongs; and take the recorded main directions and amplitude-interval numbers as the feature vector of the pixel.
The second predetermined number is a positive integer whose value can be set according to system performance and/or implementation requirements; this embodiment does not limit it. The second predetermined number may be the same as or different from the first predetermined number.
Specifically, for a pixel in the palmprint image to be recognized, l rays are drawn centered on the pixel; on each ray, the local-maximum pixel closest to the current pixel is obtained, and the main direction and the amplitude-interval number of that local-maximum pixel are recorded. Taking the recorded main directions and amplitude-interval numbers as the feature vector of the pixel yields a feature vector of length 2×l, i.e. the feature vector of the pixel, where l is the second predetermined number.
In this step, obtaining the local-maximum pixel closest to the current pixel on each ray may be: according to the distance between pixels and each ray, selecting, among the pixels closest to the ray, the pixel with the largest amplitude.
Step 304: rotate the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the second predetermined number of local-maximum pixels, to obtain the feature descriptor of the pixel.
Specifically, the feature vector of the pixel has length 2×l, and each ray corresponds to one local-maximum pixel, so l rays yield l local-maximum pixels. The pixel with the largest amplitude is selected from the l local-maximum pixels, and the feature vector of the pixel is then rotated from its starting position to the main direction of that pixel with the largest amplitude, obtaining the feature descriptor of each pixel in the palmprint image to be recognized.
This embodiment uses the main direction and amplitude of each pixel in the palmprint image to be recognized as the feature descriptor; that is, the feature descriptor in this embodiment uses local features of each pixel in the palmprint image to be recognized. This overcomes the problems of algorithm failure and computational complexity when feature points are sparse, simplifies the computation of palmprint recognition, and improves its efficiency.
In the embodiments shown in Fig. 2 and Fig. 3 of the present application, the main direction and amplitude of each pixel in the palmprint image to be recognized are used as the feature descriptor. When obtaining each pixel's feature descriptor, Gabor filters of at least two different scales are used for filtering, and in step 304 the feature vector is rotated according to the main direction, so the feature descriptor obtained for each pixel is scale-invariant and rotation-invariant. In addition, owing to the characteristics of the Gabor filter itself and the division of the amplitudes into the first predetermined number of intervals, the feature descriptor obtained for each pixel also has a degree of illumination invariance.
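As an illustration only, the descriptor construction of steps 301-304 might be sketched for a single pixel as follows. The unit-width amplitude bins, the ray-walking local-maximum test, the fallback when a ray finds no maximum, and all names are assumptions of this sketch:

```python
import numpy as np

def pixel_descriptor(main_dir, amplitude, y, x, n_rays=4, max_steps=10):
    """Sketch of steps 301-304 for the pixel at (y, x).

    Steps 301/302: amplitudes are quantised into unit-width integer bins,
    since the interval count equals max - min. Step 303: along each of
    n_rays rays we walk outward and take the first 1-D local maximum of
    the amplitude (or the last pixel visited if none is found - an
    assumption of this sketch). Step 304: the (direction, bin) pairs are
    cyclically shifted so the entry of the strongest maximum comes first.
    """
    bins = np.floor(amplitude - amplitude.min()).astype(int)

    entries = []  # one (main direction, amplitude bin, raw amplitude) per ray
    for r in range(n_rays):
        ang = 2.0 * np.pi * r / n_rays
        dy, dx = np.sin(ang), np.cos(ang)
        prev, rising = amplitude[y, x], False
        py, px = y, x
        for step in range(1, max_steps):
            yy = int(round(y + dy * step))
            xx = int(round(x + dx * step))
            if not (0 <= yy < amplitude.shape[0] and 0 <= xx < amplitude.shape[1]):
                break
            cur = amplitude[yy, xx]
            if rising and cur < prev:   # just passed a local maximum at (py, px)
                break
            rising = cur > prev
            prev, py, px = cur, yy, xx
        entries.append((main_dir[py, px], bins[py, px], amplitude[py, px]))

    # Step 304: rotate so the strongest local maximum leads the vector.
    lead = int(np.argmax([a for _, _, a in entries]))
    entries = entries[lead:] + entries[:lead]
    return np.array([v for d, b, _ in entries for v in (d, b)])
```

The returned vector interleaves (main direction, amplitude-interval number) per ray, giving the 2×l layout described above; applying the same function to every pixel yields the per-pixel descriptors used in step 104.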
Fig. 4 is a flowchart of a further embodiment of the palmprint recognition method disclosed in the present application. As shown in Fig. 4, in the embodiment shown in Fig. 1 of the present application, step 104 may include:
Step 401: according to the value of the feature descriptor of each pixel, map the feature descriptor of each pixel into different sub-intervals of a histogram, so as to obtain a feature-point histogram of the palmprint image to be recognized.
Step 402: search pre-stored feature-point histograms for a feature-point histogram matching the feature-point histogram of the palmprint image to be recognized.
Step 403: obtain the user identity information corresponding to the matching feature-point histogram.
In this embodiment, the feature descriptor of each pixel is mapped into different sub-intervals of a feature-point histogram to obtain the feature-point histogram of the palmprint image to be recognized, so that a matching feature-point histogram can be found using the histograms alone. This fully exploits the illumination, rotation, and scale invariance of the feature descriptors, and mapping the descriptors into a feature-point histogram also simplifies the search operation.
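A minimal sketch of steps 401-403, assuming a simple descriptor-to-bin hash and histogram-intersection matching; both choices are assumptions of this sketch, since the patent fixes neither the mapping nor the similarity measure:

```python
import numpy as np

def descriptor_histogram(descriptors, n_bins=64):
    """Step 401: map per-pixel descriptors into a fixed-length histogram.

    The hash from a descriptor vector to a bin index (sum modulo n_bins)
    is an assumption; any stable mapping into [0, n_bins) would do.
    """
    hist = np.zeros(n_bins, dtype=np.float64)
    for d in descriptors:
        idx = int(np.sum(d) % n_bins)   # assumed descriptor -> bin mapping
        hist[idx] += 1.0
    total = hist.sum()
    return hist / total if total else hist

def best_match(query_hist, stored):
    """Steps 402-403: find the stored (user, histogram) pair whose
    histogram is closest to the query, here by histogram intersection."""
    best_user, best_score = None, -1.0
    for user, hist in stored.items():
        score = np.minimum(query_hist, hist).sum()
        if score > best_score:
            best_user, best_score = user, score
    return best_user, best_score
```

With normalized histograms the intersection score of a query against its own enrolled histogram is 1.0, so a threshold below that can decide whether the best match is accepted or the request rejected.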
Fig. 5 is a schematic structural diagram of an embodiment of the palmprint recognition apparatus disclosed in the present application; the palmprint recognition apparatus in this embodiment can implement the palmprint recognition method provided by the embodiments of the present application. As shown in Fig. 5, the palmprint recognition apparatus may include: an acquisition module 51, a computation module 52, and a search module 53.
The acquisition module 51 is configured to acquire a palmprint image to be recognized. The palmprint image to be recognized may be a palmprint image captured by the user with a photographic device such as a camera; alternatively, the user may bring a palm into contact with the palmprint input module of the palmprint recognition apparatus, and the palmprint image to be recognized is input through the palmprint input module. Of course, the palmprint image to be recognized may also be acquired in other ways, which is not limited in this embodiment.
The computation module 52 is configured to compute the main direction of each pixel in the palmprint image acquired by the acquisition module 51, and to compute the amplitude of the pixel in its corresponding main direction.
The acquisition module 51 is further configured to obtain a feature descriptor of the pixel according to the pixel amplitudes computed by the computation module 52.
The search module 53 is configured to search for matching user identity information according to the feature descriptor of each pixel in the palmprint image obtained by the acquisition module 51. The user identity information may include the user's facial image, name, and/or gender, etc. In this way, after the search module 53 finds matching user identity information, the legitimacy of the identity of the user who input the palmprint image to be recognized can be confirmed.
In addition, if the search module 53 finds no matching user identity information according to the feature descriptors of the pixels in the palmprint image to be recognized, it can be determined that the identity of the user who input the palmprint image is not legitimate.
In the above palmprint recognition apparatus, after the acquisition module 51 acquires the palmprint image to be recognized, the computation module 52 computes the main direction of each pixel in the image and the amplitude of the pixel in its corresponding main direction; the acquisition module 51 obtains a feature descriptor of the pixel according to the pixel amplitudes; and the search module 53 searches for matching user identity information according to the feature descriptor of each pixel. The palmprint image can thus be recognized, and the user's identity can in turn be verified from the palmprint recognition result. Since matching user identity information is searched for according to the feature descriptor of each pixel, the illumination, rotation, and scale invariance of the feature descriptors is fully exploited, which improves the accuracy of palmprint recognition.
Fig. 6 is a schematic structural diagram of another embodiment of the palmprint recognition apparatus disclosed in the present application. Compared with the palmprint recognition apparatus shown in Fig. 5, the difference is that, in the apparatus shown in Fig. 6, the computation module 52 is specifically configured to filter each pixel in the palmprint image to be recognized with directional filters of at least two different directions, obtain the filter response of each directional filter at the pixel, and select the direction indicated by the directional filter with the largest response as the main direction of the pixel; and to filter the pixel in its main direction with scale filters of at least two different scales, obtain the filter response of each scale filter at the pixel, and select the largest response as the amplitude of the pixel in its corresponding main direction.
The directional filter may be a Gabor filter; of course, other kinds of directional filters may also be used, which is not limited in this embodiment. The scale filter may likewise be a Gabor filter; of course, other kinds of scale filters may also be used, which is not limited in this embodiment.
In a specific implementation, the computation module 52 may use n Gabor filters of different directions to filter each pixel in the palmprint image to be recognized, obtain the response of each Gabor filter at the pixel, and select the direction indicated by the Gabor filter with the largest response as the main direction of the pixel. Then, the pixel is filtered in its main direction with k Gabor filters of different scales, the response of each scale filter at the pixel is obtained, and the largest response is selected as the amplitude of the pixel in its corresponding main direction, where n, k ≥ 2 and n, k are positive integers.
In this embodiment, the acquisition module 51 may include: a division sub-module 511, a numbering sub-module 512, a recording sub-module 513, and a rotation sub-module 514.
The division sub-module 511 is configured to divide the pixel amplitudes into a first predetermined number of intervals according to the maximum and minimum of the pixel amplitudes in the palmprint image to be recognized.
The numbering sub-module 512 is configured to number the amplitude intervals obtained by the division sub-module 511, the first predetermined number being the difference between the maximum and the minimum.
The recording sub-module 513 is configured to record the number of the amplitude interval to which the amplitude of each pixel in the palmprint image to be recognized belongs; to draw a second predetermined number of rays centered on the pixel, obtain, on each ray, the local-maximum pixel closest to the current pixel, record the main direction of the local-maximum pixel and the number of the amplitude interval to which the local-maximum pixel belongs, and take the recorded main directions and amplitude-interval numbers as the feature vector of the pixel. The second predetermined number is a positive integer whose value can be set according to system performance and/or implementation requirements; this embodiment does not limit it, and the second predetermined number may be the same as or different from the first predetermined number.
Specifically, for each pixel in the palmprint image to be recognized, l rays are drawn centered on the pixel, and the local-maximum pixel closest to the current pixel is obtained on each ray; the recording sub-module 513 records the main direction and the amplitude-interval number of that local-maximum pixel, and takes the recorded main directions and amplitude-interval numbers as the feature vector of the pixel, yielding a feature vector of length 2×l, i.e. the feature vector of the pixel, where l is the second predetermined number. Obtaining the local-maximum pixel closest to the current pixel on each ray may be: according to the distance between pixels and each ray, selecting, among the pixels closest to the ray, the pixel with the largest amplitude.
The rotation sub-module 514 is configured to rotate the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the second predetermined number of local-maximum pixels, to obtain the feature descriptor of the pixel.
Specifically, the feature vector of the pixel has length 2×l, and each ray corresponds to one local-maximum pixel, so l rays yield l local-maximum pixels. The pixel with the largest amplitude is selected from the l local-maximum pixels, and the rotation sub-module 514 then rotates the feature vector of the pixel from its starting position to the main direction of that pixel with the largest amplitude, obtaining the feature descriptor of each pixel in the palmprint image to be recognized.
This embodiment uses the main direction and amplitude of each pixel in the palmprint image to be recognized as the feature descriptor; that is, the feature descriptor in this embodiment uses local features of each pixel in the palmprint image to be recognized. This overcomes the problems of algorithm failure and computational complexity when feature points are sparse, simplifies the computation of palmprint recognition, and improves its efficiency.
In this embodiment, the main direction and amplitude of each pixel in the palmprint image to be recognized are used as the feature descriptor. When obtaining each pixel's feature descriptor, Gabor filters of at least two different scales are used for filtering, and the rotation sub-module 514 rotates the feature vector according to the main direction, so the feature descriptor obtained for each pixel is scale-invariant and rotation-invariant. In addition, owing to the characteristics of the Gabor filter itself and the division of the amplitudes into the first predetermined number of intervals, the feature descriptor obtained for each pixel also has a degree of illumination invariance.
In this embodiment, the search module 53 may include: a mapping sub-module 531, a search sub-module 532, and an acquisition sub-module 533.
The mapping sub-module 531 is configured to map the feature descriptor of each pixel into different sub-intervals of a histogram according to the value of the feature descriptor of each pixel, so as to obtain a feature-point histogram of the palmprint image to be recognized.
The search sub-module 532 is configured to search pre-stored feature-point histograms for a feature-point histogram matching the feature-point histogram of the palmprint image to be recognized.
The acquisition sub-module 533 is configured to obtain the user identity information corresponding to the matching feature-point histogram.
In this embodiment, the mapping sub-module 531 maps the feature descriptor of each pixel into different sub-intervals of a feature-point histogram to obtain the feature-point histogram of the palmprint image to be recognized, so that the search sub-module 532 can find a matching feature-point histogram using the histograms alone. This fully exploits the illumination, rotation, and scale invariance of the feature descriptors, and mapping the descriptors into a feature-point histogram also simplifies the search operation.
Fig. 7 is a schematic structural diagram of an embodiment of the computer device disclosed in the present application. The computer device may include a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the palmprint recognition method provided by the embodiments of the present application can be implemented.
The computer device may be a server, for example a cloud server, or an electronic device, for example an intelligent electronic device such as a smartphone, a smart watch, or a tablet computer; the specific form of the computer device is not limited in this embodiment.
Fig. 7 shows a block diagram of an exemplary computer device 12 suitable for implementing the embodiments of the present application. The computer device 12 shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 7, the computer device 12 takes the form of a general-purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 typically includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer device 12, including volatile and non-volatile media, and removable and non-removable media.
The system memory 28 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 34 may be used to read and write a non-removable, non-volatile magnetic medium (not shown in Fig. 7, commonly referred to as a "hard disk drive"). Although not shown in Fig. 7, a disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g. a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (e.g. a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data-media interfaces. The memory 28 may include at least one program product having a set (e.g. at least one) of program modules configured to perform the functions of the embodiments of the present application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present application.
The computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown in Fig. 7, the network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. It should be understood that, although not shown in Fig. 7, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, etc.
The processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example implementing the palmprint recognition method provided by the embodiments of the present application.
An embodiment of the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the palmprint recognition method provided by the embodiments of the present application can be implemented.
The above non-transitory computer-readable storage medium may employ any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above. In this document, the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof; these programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
In addition, the terms "first" and "second" are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless otherwise specifically limited.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing custom logic functions or steps of a process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
It should be noted that the terminals involved in the embodiments of the present application may include, but are not limited to, personal computers (PCs), personal digital assistants (PDAs), wireless handheld devices, tablet computers, mobile phones, MP3 players, MP4 players, etc.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; the division of the above units is only a division of logical functions, and in actual implementation there may be other ways of division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
Furthermore, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor perform some of the steps of the methods in the various embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present application and are not intended to limit the present application; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (20)

  1. A palmprint recognition method, characterized by comprising:
    acquiring a palmprint image to be recognized;
    computing the main direction of each pixel in the palmprint image to be recognized, and computing the amplitude of the pixel in its main direction;
    obtaining a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized;
    searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized.
  2. The method according to claim 1, characterized in that said computing the main direction of a pixel in the palmprint image to be recognized comprises:
    filtering the pixel in the palmprint image to be recognized with directional filters of at least two different directions, obtaining the filter response of each directional filter at the pixel, and selecting the direction indicated by the directional filter with the largest response as the main direction of the pixel;
    said computing the amplitude of the pixel in its main direction comprises:
    filtering the pixel in its main direction with scale filters of at least two different scales, obtaining the filter response of each scale filter at the pixel, and selecting the largest response as the amplitude of the pixel in its corresponding main direction.
  3. The method according to claim 1, characterized in that said obtaining a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized comprises:
    dividing the pixel amplitudes into a first predetermined number of intervals according to the maximum and minimum of the pixel amplitudes in the palmprint image to be recognized, and numbering the resulting amplitude intervals, the first predetermined number being the difference between the maximum and the minimum;
    recording the number of the amplitude interval to which the amplitude of each pixel in the palmprint image to be recognized belongs;
    drawing a second predetermined number of rays centered on the pixel, obtaining, on each ray, the local-maximum pixel closest to the current pixel, recording the main direction of the local-maximum pixel and the number of the amplitude interval to which the local-maximum pixel belongs, and taking the recorded main directions and amplitude-interval numbers as the feature vector of the pixel;
    rotating the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the second predetermined number of local-maximum pixels, to obtain the feature descriptor of the pixel.
  4. The method according to claim 3, characterized in that said obtaining, on each ray, the local-maximum pixel closest to the current pixel comprises:
    according to the distance between pixels and each ray, selecting, among the pixels closest to the ray, the pixel with the largest amplitude.
  5. The method according to any one of claims 1-4, characterized in that said searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized comprises:
    mapping the feature descriptor of each pixel into different sub-intervals of a histogram according to the value of the feature descriptor of each pixel, so as to obtain a feature-point histogram of the palmprint image to be recognized;
    searching pre-stored feature-point histograms for a feature-point histogram matching the feature-point histogram of the palmprint image to be recognized;
    obtaining the user identity information corresponding to the matching feature-point histogram.
  6. A palmprint recognition apparatus, characterized by comprising:
    an acquisition module, configured to acquire a palmprint image to be recognized;
    a computation module, configured to compute the main direction of each pixel in the palmprint image to be recognized acquired by the acquisition module, and to compute the amplitude of the pixel in its corresponding main direction;
    the acquisition module being further configured to obtain a feature descriptor of the pixel according to the pixel amplitudes computed by the computation module;
    a search module, configured to search for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized obtained by the acquisition module.
  7. The apparatus according to claim 6, characterized in that
    the computation module is specifically configured to filter each pixel in the palmprint image to be recognized with directional filters of at least two different directions, obtain the filter response of each directional filter at the pixel, and select the direction indicated by the directional filter with the largest response as the main direction of the pixel; and to filter the pixel in its main direction with scale filters of at least two different scales, obtain the filter response of each scale filter at the pixel, and select the largest response as the amplitude of the pixel in its corresponding main direction.
  8. The apparatus according to claim 6, characterized in that the acquisition module comprises:
    a division sub-module, configured to divide the pixel amplitudes into a first predetermined number of intervals according to the maximum and minimum of the pixel amplitudes in the palmprint image to be recognized;
    a numbering sub-module, configured to number the amplitude intervals obtained by the division sub-module, the first predetermined number being the difference between the maximum and the minimum;
    a recording sub-module, configured to record the number of the amplitude interval to which the amplitude of each pixel in the palmprint image to be recognized belongs; to draw a second predetermined number of rays centered on the pixel, obtain, on each ray, the local-maximum pixel closest to the current pixel, record the main direction of the local-maximum pixel and the number of the amplitude interval to which the local-maximum pixel belongs, and take the recorded main directions and amplitude-interval numbers as the feature vector of the pixel;
    a rotation sub-module, configured to rotate the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the second predetermined number of local-maximum pixels, to obtain the feature descriptor of the pixel.
  9. The apparatus according to claim 8, characterized in that the recording sub-module being configured to obtain, on each ray, the local-maximum pixel closest to the current pixel comprises:
    the recording sub-module being specifically configured to, according to the distance between pixels and each ray, select, among the pixels closest to the ray, the pixel with the largest amplitude.
  10. The apparatus according to any one of claims 6-9, characterized in that the search module comprises:
    a mapping sub-module, configured to map the feature descriptor of each pixel into different sub-intervals of a histogram according to the value of the feature descriptor of each pixel, so as to obtain a feature-point histogram of the palmprint image to be recognized;
    a search sub-module, configured to search pre-stored feature-point histograms for a feature-point histogram matching the feature-point histogram of the palmprint image to be recognized;
    an acquisition sub-module, configured to obtain the user identity information corresponding to the matching feature-point histogram.
  11. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor;
    the processor, when executing the computer program, is configured to acquire a palmprint image to be recognized;
    compute the main direction of each pixel in the palmprint image to be recognized, and compute the amplitude of the pixel in its main direction;
    obtain a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized;
    search for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized.
  12. The computer device according to claim 11, characterized in that the processor executing the computer program to implement said computing the main direction of a pixel in the palmprint image to be recognized comprises:
    filtering the pixel in the palmprint image to be recognized with directional filters of at least two different directions, obtaining the filter response of each directional filter at the pixel, and selecting the direction indicated by the directional filter with the largest response as the main direction of the pixel;
    filtering the pixel in its main direction with scale filters of at least two different scales, obtaining the filter response of each scale filter at the pixel, and selecting the largest response as the amplitude of the pixel in its corresponding main direction.
  13. The computer device according to claim 11, characterized in that the processor executing the computer program to implement said obtaining a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized comprises:
    dividing the pixel amplitudes into a first predetermined number of intervals according to the maximum and minimum of the pixel amplitudes in the palmprint image to be recognized, and numbering the resulting amplitude intervals, the first predetermined number being the difference between the maximum and the minimum;
    recording the number of the amplitude interval to which the amplitude of each pixel in the palmprint image to be recognized belongs;
    drawing a second predetermined number of rays centered on the pixel, obtaining, on each ray, the local-maximum pixel closest to the current pixel, recording the main direction of the local-maximum pixel and the number of the amplitude interval to which the local-maximum pixel belongs, and taking the recorded main directions and amplitude-interval numbers as the feature vector of the pixel;
    rotating the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the second predetermined number of local-maximum pixels, to obtain the feature descriptor of the pixel.
  14. The computer device according to claim 13, characterized in that the processor executing the computer program to implement obtaining, on each ray, the local-maximum pixel closest to the current pixel comprises:
    according to the distance between pixels and each ray, selecting, among the pixels closest to the ray, the pixel with the largest amplitude.
  15. The computer device according to any one of claims 11-14, characterized in that the processor executing the computer program to implement searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized comprises:
    mapping the feature descriptor of each pixel into different sub-intervals of a histogram according to the value of the feature descriptor of each pixel, so as to obtain a feature-point histogram of the palmprint image to be recognized;
    searching pre-stored feature-point histograms for a feature-point histogram matching the feature-point histogram of the palmprint image to be recognized; and obtaining the user identity information corresponding to the matching feature-point histogram.
  16. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the following steps:
    acquiring a palmprint image to be recognized;
    computing the main direction of each pixel in the palmprint image to be recognized, and computing the amplitude of the pixel in its main direction;
    obtaining a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized;
    searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized.
  17. The non-transitory computer-readable storage medium according to claim 16, characterized in that the computer program, when executed by a processor, implements the step of computing the main direction of a pixel in the palmprint image to be recognized by:
    filtering the pixel in the palmprint image to be recognized with directional filters of at least two different directions, obtaining the filter response of each directional filter at the pixel, and selecting the direction indicated by the directional filter with the largest response as the main direction of the pixel;
    said computing the amplitude of the pixel in its main direction comprises:
    filtering the pixel in its main direction with scale filters of at least two different scales, obtaining the filter response of each scale filter at the pixel, and selecting the largest response as the amplitude of the pixel in its corresponding main direction.
  18. The non-transitory computer-readable storage medium according to claim 16, characterized in that the computer program, when executed by a processor, implements the step of obtaining a feature descriptor of the pixel according to the amplitudes of the pixels in the palmprint image to be recognized by:
    dividing the pixel amplitudes into a first predetermined number of intervals according to the maximum and minimum of the pixel amplitudes in the palmprint image to be recognized, and numbering the resulting amplitude intervals, the first predetermined number being the difference between the maximum and the minimum;
    recording the number of the amplitude interval to which the amplitude of each pixel in the palmprint image to be recognized belongs;
    drawing a second predetermined number of rays centered on the pixel, obtaining, on each ray, the local-maximum pixel closest to the current pixel, recording the main direction of the local-maximum pixel and the number of the amplitude interval to which the local-maximum pixel belongs, and taking the recorded main directions and amplitude-interval numbers as the feature vector of the pixel;
    rotating the feature vector of the pixel from its starting position to the main direction of the pixel with the largest amplitude among the second predetermined number of local-maximum pixels, to obtain the feature descriptor of the pixel.
  19. The non-transitory computer-readable storage medium according to claim 18, characterized in that the computer program, when executed by a processor, implements the step of obtaining, on each ray, the local-maximum pixel closest to the current pixel by:
    according to the distance between pixels and each ray, selecting, among the pixels closest to the ray, the pixel with the largest amplitude.
  20. The non-transitory computer-readable storage medium according to any one of claims 16-19, characterized in that the computer program, when executed by a processor, implements the step of searching for matching user identity information according to the feature descriptor of each pixel in the palmprint image to be recognized by:
    mapping the feature descriptor of each pixel into different sub-intervals of a histogram according to the value of the feature descriptor of each pixel, so as to obtain a feature-point histogram of the palmprint image to be recognized;
    searching pre-stored feature-point histograms for a feature-point histogram matching the feature-point histogram of the palmprint image to be recognized;
    obtaining the user identity information corresponding to the matching feature-point histogram.
PCT/CN2019/118262 2018-12-29 2019-11-14 Palmprint recognition method, apparatus, computer device and storage medium WO2020134674A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811646405.7 2018-12-29
CN201811646405.7A CN109829383B (zh) 2018-12-29 2018-12-29 Palmprint recognition method, apparatus and computer device

Publications (1)

Publication Number Publication Date
WO2020134674A1 true WO2020134674A1 (zh) 2020-07-02

Family

ID=66860649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118262 WO2020134674A1 (zh) 2018-12-29 2019-11-14 Palmprint recognition method, apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN109829383B (zh)
WO (1) WO2020134674A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329660A (zh) * 2020-11-10 2021-02-05 浙江商汤科技开发有限公司 Scene recognition method and apparatus, intelligent device, and storage medium
CN112802138A (zh) * 2021-02-04 2021-05-14 联仁健康医疗大数据科技股份有限公司 Image processing method and apparatus, storage medium, and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829383B (zh) * 2018-12-29 2024-03-15 平安科技(深圳)有限公司 Palmprint recognition method, apparatus and computer device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853383A (zh) * 2010-05-17 2010-10-06 清华大学 Method for extracting direction field of high-resolution palmprint
CN102163282A (zh) * 2011-05-05 2011-08-24 汉王科技股份有限公司 Method and apparatus for acquiring region of interest in palmprint image
CN104156707A (zh) * 2014-08-14 2014-11-19 深圳市汇顶科技股份有限公司 Fingerprint recognition method and fingerprint recognition apparatus
US20160034779A1 (en) * 2014-07-31 2016-02-04 International Business Machines Corporation High Speed Searching For Large-Scale Image Databases
CN109829383A (zh) * 2018-12-29 2019-05-31 平安科技(深圳)有限公司 Palmprint recognition method, apparatus and computer device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001243465A (ja) * 2000-03-01 2001-09-07 Nippon Telegr & Teleph Corp <Ntt> Fingerprint image collation method and fingerprint image collation apparatus
WO2004111919A1 (en) * 2003-06-12 2004-12-23 The Hong Kong Polytechnic University Method of palm print identification
US20050281438A1 (en) * 2004-06-21 2005-12-22 Zhang David D Palm print identification using palm line orientation
CN100458832C (zh) * 2007-06-21 2009-02-04 中国科学院合肥物质科学研究院 Palmprint recognition method based on direction features
CN102332084B (zh) * 2010-07-23 2015-01-14 中国农业大学 Identity recognition method based on palmprint and facial feature extraction
CN102254188B (zh) * 2011-08-04 2013-03-13 汉王科技股份有限公司 Palmprint recognition method and apparatus
CN104866804B (zh) * 2014-02-20 2019-10-11 阿里巴巴集团控股有限公司 Method and device for palmprint information recognition
US10192098B2 (en) * 2016-09-09 2019-01-29 MorphoTrak, LLC Palm print image matching techniques
WO2018121552A1 (zh) * 2016-12-29 2018-07-05 北京奇虎科技有限公司 Service processing method and apparatus based on palmprint data, program, and medium
CN107122700A (zh) * 2017-03-02 2017-09-01 华南理工大学 Video-based joint palmprint and palm-vein registration and recognition method
CN107909004A (zh) * 2017-10-23 2018-04-13 黑龙江省科学院自动化研究所 A 3D palmprint recognition technique
CN108427923B (zh) * 2018-03-08 2022-03-25 广东工业大学 Palmprint recognition method and apparatus
CN108596250B (zh) * 2018-04-24 2019-05-14 深圳大学 Image feature encoding method, terminal device, and computer-readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853383A (zh) * 2010-05-17 2010-10-06 清华大学 Method for extracting direction field of high-resolution palmprint
CN102163282A (zh) * 2011-05-05 2011-08-24 汉王科技股份有限公司 Method and apparatus for acquiring region of interest in palmprint image
US20160034779A1 (en) * 2014-07-31 2016-02-04 International Business Machines Corporation High Speed Searching For Large-Scale Image Databases
CN104156707A (zh) * 2014-08-14 2014-11-19 深圳市汇顶科技股份有限公司 Fingerprint recognition method and fingerprint recognition apparatus
CN109829383A (zh) * 2018-12-29 2019-05-31 平安科技(深圳)有限公司 Palmprint recognition method, apparatus and computer device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329660A (zh) * 2020-11-10 2021-02-05 浙江商汤科技开发有限公司 Scene recognition method and apparatus, intelligent device, and storage medium
CN112329660B (zh) * 2020-11-10 2024-05-24 浙江商汤科技开发有限公司 Scene recognition method and apparatus, intelligent device, and storage medium
CN112802138A (zh) * 2021-02-04 2021-05-14 联仁健康医疗大数据科技股份有限公司 Image processing method and apparatus, storage medium, and electronic device
CN112802138B (zh) * 2021-02-04 2024-03-12 联仁健康医疗大数据科技股份有限公司 Image processing method and apparatus, storage medium, and electronic device

Also Published As

Publication number Publication date
CN109829383B (zh) 2024-03-15
CN109829383A (zh) 2019-05-31

Similar Documents

Publication Publication Date Title
US11113523B2 (en) Method for recognizing a specific object inside an image and electronic device thereof
CN107545241B (zh) Neural network model training and living-body detection method, apparatus, and storage medium
JP7265034B2 (ja) 人体検出用の方法及び装置
WO2020134674A1 (zh) Palmprint recognition method, apparatus, computer device and storage medium
CN109189879B (zh) Electronic book display method and apparatus
WO2020143330A1 (zh) Facial image capture method, computer-readable storage medium, and terminal device
US20150161236A1 (en) Recording context for conducting searches
WO2023179095A1 (zh) Image segmentation method and apparatus, terminal device, and storage medium
CN110796108B (zh) Face quality detection method, apparatus, device, and storage medium
WO2022042120A1 (zh) Target image extraction method, neural network training method, and apparatus
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
WO2021135603A1 (zh) Intention recognition method, server, and storage medium
WO2024046012A1 (zh) Sentiment analysis method, apparatus, device, and storage medium for multimodal data
WO2023197648A1 (zh) Screenshot processing method and apparatus, electronic device, and computer-readable medium
WO2023005169A1 (zh) Depth image generation method and apparatus
KR102303206B1 (ko) 전자장치에서 이미지 내의 특정 객체를 인식하기 위한 방법 및 장치
WO2024012371A1 (zh) Target tracking method, apparatus, device, and storage medium
WO2022027191A1 (zh) Plane correction method and apparatus, computer-readable medium, and electronic device
CN116704614B (zh) Action recognition method and apparatus, electronic device, and storage medium
KR102185131B1 (ko) 썸네일 생성 방법 및 그 전자 장치
WO2023273227A1 (zh) Nail recognition method, apparatus, device, and storage medium
JP6281207B2 (ja) 情報処理装置、情報処理方法、及びプログラム
WO2022156088A1 (zh) Fingerprint signature generation method and apparatus, electronic device, and computer storage medium
CN109886089A (zh) Palmprint recognition method and apparatus, and computer device
US11482024B2 (en) Electronic device and method for processing writing input

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19902726

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 19.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19902726

Country of ref document: EP

Kind code of ref document: A1