WO2019024350A1 - Biometric identification method and apparatus (生物特征识别方法及装置) - Google Patents

Biometric identification method and apparatus (生物特征识别方法及装置)

Info

Publication number
WO2019024350A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
network
palm
palm vein
node
Prior art date
Application number
PCT/CN2017/113585
Other languages
English (en)
French (fr)
Inventor
张晨 (Zhang Chen)
Original Assignee
歌尔科技有限公司 (Goertek Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔科技有限公司 (Goertek Technology Co., Ltd.)
Publication of WO2019024350A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 Fingerprints or palmprints
    • G06V 40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 Fingerprints or palmprints
    • G06V 40/1365 Matching; Classification

Definitions

  • The present invention relates to the field of biometric identification, and in particular to a biometric identification method and apparatus.
  • Each individual has unique physiological characteristics or behaviors that can be measured or automatically recognized and verified, i.e., biometric features.
  • Biometric identification technology can recognize and authenticate an individual's identity through these biometric features that are unique to each individual. It usually refers to technology in which a computer uses inherent biological characteristics, such as fingerprints, the face, or the voice, to authenticate a user's identity.
  • In the prior art, fingerprints of each individual can be collected, and fingerprint recognition can be performed based on fingerprint features converted from the collected fingerprints.
  • Face recognition can be performed by acquiring facial images and converting them into facial features. It is also possible to collect the sounds made by each individual and perform voice recognition based on voice features converted from the sounds.
  • However, fingerprints can be forged, faces can be occluded, and voices can be altered with a voice changer; therefore, effective recognition cannot be guaranteed.
  • In view of this, the present invention provides a biometric identification method and apparatus that combine palmprint recognition with palm vein recognition, solving the problem in the prior art that effective recognition cannot be achieved and improving the effectiveness and accuracy of recognition.
  • A first aspect of the present invention provides a biometric identification method, comprising: collecting a palmprint image and a palm vein image of a user to be identified; fusing the palmprint image and the palm vein image to obtain a fused image; taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes; constructing, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks; and forming the feature to be identified from the network features of the plurality of complex networks.
  • Preferably, the constraint condition includes that the node distance between any two network nodes is less than a constraint distance, and different constraint conditions have different constraint distances;
  • the method further includes: calculating the node distance between any two network nodes;
  • the constructing, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition to obtain a plurality of complex networks includes:
  • constructing, based on the different constraint distances, complex networks from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain a plurality of complex networks.
  • Preferably, the calculating the node distance between any two network nodes comprises: calculating the coordinate distance between the two network nodes according to their corresponding pixel coordinates, and normalizing the coordinate distance to obtain the node distance.
  • Preferably, the fusing the palmprint image and the palm vein image to obtain a fused image comprises: binarizing the palmprint image and the palm vein image to convert the pixels corresponding to the palmprint and the palm vein into a first value and the pixels corresponding to non-palmprint and non-palm-vein regions into a second value; and
  • fusing the binarized palmprint image with the binarized palm vein image to obtain the fused image.
  • Preferably, the taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes comprises:
  • taking the pixels in the fused image whose pixel value is the first value as network nodes.
  • Preferably, the forming the feature to be identified from the network features of the plurality of complex networks includes:
  • determining the node degree of each network node in each complex network; calculating the network features of that complex network according to the node degrees; combining the network features of the complex networks in the plurality of complex networks; and
  • taking the combined network features as the feature to be identified.
  • Preferably, the fusing the palmprint image and the palm vein image to obtain a fused image comprises:
  • denoising the palmprint image and the palm vein image of the user to be identified, and fusing the denoised palmprint image with the denoised palm vein image to obtain the fused image.
  • A second aspect of the present invention provides a biometric identification device, the device comprising:
  • an image acquisition module, configured to collect a palmprint image and a palm vein image of a user to be identified;
  • an image fusion module, configured to fuse the palmprint image and the palm vein image to obtain a fused image;
  • a node determining module, configured to take the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes;
  • a network construction module, configured to construct, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks;
  • a feature construction module, configured to form the feature to be identified from the network features of the plurality of complex networks.
  • Preferably, the constraint condition includes that the node distance between any two network nodes is less than a constraint distance, and different constraint conditions have different constraint distances;
  • the device also includes:
  • a distance calculation module, configured to calculate the node distance between any two network nodes;
  • the network construction module includes:
  • a network construction unit, configured to construct, based on the different constraint distances, complex networks from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain a plurality of complex networks.
  • Preferably, the distance calculation module comprises:
  • a distance calculation unit, configured to calculate the coordinate distance between any two network nodes according to their corresponding pixel coordinates;
  • a distance normalization unit, configured to normalize the coordinate distance to obtain the node distance.
  • Preferably, the image fusion module comprises:
  • an image conversion unit, configured to binarize the palmprint image and the palm vein image so as to convert the pixels corresponding to the palmprint and the palm vein into a first value, and the pixel values corresponding to non-palmprint and non-palm-vein regions into a second value;
  • a first fusion unit, configured to fuse the binarized palmprint image with the binarized palm vein image to obtain the fused image.
  • Compared with the prior art, the present invention can achieve the following technical effects:
  • a palmprint image and a palm vein image of the user to be identified are collected and fused to obtain a fused image; the fused image combines the features of the palmprint image and the palm vein image, so feature distinguishability is enhanced.
  • The pixels corresponding to the palmprint and the palm vein in the fused image are taken as network nodes, and complex networks are constructed, based on different constraint conditions, from the network nodes satisfying each constraint condition to obtain a plurality of complex networks; the network features of the plurality of complex networks constitute the feature to be identified.
  • The plurality of complex networks are formed from the highly distinguishable palmprint and palm vein pixels, combining the features of both, and a feature to be identified formed from the network features of a plurality of complex networks better characterizes the identification features of the user to be identified, so the effectiveness and accuracy of recognition can be improved.
  • FIG. 1 is a flow chart of an embodiment of a biometric identification method according to an embodiment of the present invention
  • FIG. 2 is a flow chart of still another embodiment of a biometric identification method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an embodiment of a biometric identification device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of still another embodiment of a biometric identification device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an embodiment of a biometric electronic device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an embodiment of a head mounted display device according to an embodiment of the present invention.
  • The embodiments of the present invention are mainly applied to the field of biometric identification; palmprint and palm vein features are collected to characterize the biometric features of the user to be identified, which can improve the effectiveness and accuracy of recognition.
  • In the prior art, biometric identification mostly uses facial recognition, iris recognition, or voice recognition, but these identification features are easily falsified, so effective recognition cannot be guaranteed.
  • Palmprints and palm veins are both biometric features of the human body, and each person's palmprints and palm veins differ, so they can be used for biometric identification.
  • However, the inventors found that the palmprint is an exposed biometric feature that is easily forged, while the palm vein yields relatively few features and lower recognition accuracy. Therefore, to overcome these shortcomings, the inventors conceived of fusing palmprint recognition with palm vein recognition to obtain fused palmprint and palm vein features, which can both guarantee the recognition accuracy of the identification features of the user to be identified and ensure that those identification features cannot be forged; the technical solution of the present invention is proposed on this basis.
  • In the embodiments of the present invention, a palmprint image and a palm vein image of the user to be identified are collected and fused to obtain a fused image; based on the network nodes determined in the fused image, different constraint conditions are used to determine multiple different complex networks, and the network features of the multiple complex networks constitute the feature to be identified.
  • The constructed complex networks are the basis for determining the feature to be identified, and the complex networks are in turn obtained from the pixels of the fused image, which ensures that the constructed information to be identified contains both kinds of biometric content, thereby improving the effectiveness and accuracy of recognition.
  • FIG. 1 is a flowchart of an embodiment of a biometric identification method according to an embodiment of the present invention. The method may include the following steps:
  • The palmprint image and the palm vein image have the same size; for example, both may be images with a pixel height of h and a pixel width of w, so that the size of the pixel matrix is w*h.
  • Optionally, the palmprint image and the palm vein image may be captured using a multispectral palm image sensor. A multispectral palm image sensor can capture different types of palm images under different light sources; for example, a palmprint image can be captured under a visible light source, and a palm vein image under an infrared light source.
  • Optionally, the palmprint image and the palm vein image of the user to be identified may be captured consecutively within a short time interval, to ensure that the position of the palm does not change between the two images.
  • The palmprint image and the palm vein image may also be captured at different times, as long as the position of the palm does not change between the two captures.
  • The palmprint image and the palm vein image contain palmprint features and palm vein features respectively, so fusing the two images also fuses the palmprint features and the palm vein features.
  • Fusing the palmprint image and the palm vein image may mean overlapping the two images pixel by pixel according to the positions of their pixels, so that each position corresponds to two pixel values.
  • Taking images of pixel height h and pixel width w as an example, the palmprint image and the palm vein image can each be represented as a w*h two-dimensional matrix; after the two are overlapped, the fused image can be represented as a 2*w*h three-dimensional matrix.
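  • As an illustration only (not part of the patent text), a minimal sketch of this pixel-overlap fusion, assuming both images are already equally sized NumPy arrays:

```python
import numpy as np

def fuse_images(palmprint: np.ndarray, palm_vein: np.ndarray) -> np.ndarray:
    """Overlap two equally sized palm images pixel by pixel.

    Each input is a two-dimensional w*h matrix; the result is the
    2*w*h three-dimensional matrix described above, so every pixel
    position carries two values, one from each modality.
    """
    assert palmprint.shape == palm_vein.shape, "images must have the same size"
    return np.stack([palmprint, palm_vein], axis=0)
```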
  • In the fused image, the pixel values corresponding to the palmprint and the palm vein differ considerably from the pixel values of the surrounding non-palmprint and non-palm-vein regions; this difference can therefore be used to determine the pixels corresponding to the palmprint and the palm vein in the fused image.
  • Based on different constraint conditions, complex networks are constructed from the network nodes satisfying each constraint condition, to obtain multiple complex networks.
  • The constraint condition is mainly used to constrain the network nodes; when network nodes satisfy a constraint condition, those nodes can be used to construct a complex network.
  • Optionally, different constraint conditions are satisfied by different network nodes, so the complex networks constructed from different network nodes also differ.
  • Since the network nodes are the pixels corresponding to the palmprint and the palm vein, the nodes of a complex network are in fact formed from those pixels, and the complex network can therefore be used to determine features related to the palmprint and the palm vein.
  • Optionally, the network features of the multiple complex networks may be acquired; the network features may refer to feature values such as the average degree, the degree variance, and the maximum degree.
  • The feature to be identified is formed from the network features of the plurality of complex networks.
  • Optionally, the network features of the plurality of complex networks may be concatenated to form the feature to be identified.
  • Taking the network feature of each complex network as a 1*3 matrix as an example, if there are 10 complex networks, the concatenated feature to be identified is a 10*3 matrix.
  • In the embodiment of the present invention, the palmprint image and the palm vein image are collected to obtain their fused image, so that the palmprint and palm vein biometric features can be fused; taking the pixels corresponding to the palmprint and the palm vein as network nodes, different complex networks are determined, that is, multiple network associations between the palmprint and the palm vein are acquired, so that different network features can be determined. The feature to be identified formed from the network features of the plurality of complex networks is more comprehensive in content and can improve the effectiveness and accuracy of recognition.
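  • Putting the five steps together, a compact end-to-end sketch (our own illustration under stated assumptions: binarized inputs where line/vein pixels equal 1, and illustrative constraint distances; none of the names or threshold values come from the patent):

```python
import numpy as np

def extract_feature(palmprint: np.ndarray, palm_vein: np.ndarray,
                    constraint_distances=(0.05, 0.10, 0.15)) -> np.ndarray:
    """Steps 102-105: fuse, pick nodes, build networks, form the feature."""
    fused = np.stack([palmprint, palm_vein], axis=0)   # step 102: 2*w*h fusion
    nodes = np.argwhere(fused == 1).astype(float)      # step 103: first-value pixels
    w, h = palmprint.shape
    # pairwise normalized node distances (see the normalization discussed below)
    diffs = nodes[:, None, :] - nodes[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1) / np.sqrt(w**2 + h**2 + 2**2)
    features = []
    for t in constraint_distances:                     # step 104: one network per constraint
        adj = dist < t
        np.fill_diagonal(adj, False)                   # no self-connections
        deg = adj.sum(axis=1)                          # node degree
        features.append([deg.mean(), deg.var(), deg.max()])
    return np.asarray(features)                        # step 105: an n*3 feature matrix
```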
  • FIG. 2 is a flowchart of still another embodiment of a biometric identification method according to an embodiment of the present invention.
  • In this embodiment, the constraint condition may include: the node distance between any two network nodes is less than a constraint distance; different constraint conditions have different constraint distances.
  • the method may further comprise the following steps:
  • Based on the different constraint distances, complex networks are constructed from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain multiple complex networks.
  • When the fused image is regarded as a three-dimensional matrix, each pixel can be understood as being arranged in that three-dimensional matrix, so each pixel has a corresponding coordinate point, and the coordinate distance between any two network nodes can therefore be determined. For example, for two network nodes at coordinates (X1, Y1, Z1) and (X2, Y2, Z2), the coordinate distance D is the Euclidean distance D = √((X1 - X2)² + (Y1 - Y2)² + (Z1 - Z2)²).
  • The constraint distance refers to a distance constant used to constrain the node distance between two network nodes, and it is estimated from the node distances.
  • For different palmprint and palm vein images the node distances may differ, and there may be large differences between different node distances, so correspondingly different constraint distances would need to be determined for node distances that differ greatly.
  • However, determining the constraint distances one by one is complicated, which is disadvantageous for wide application of the present invention.
  • Therefore, the calculating the node distance between any two network nodes may include: calculating the coordinate distance between the two network nodes according to their corresponding pixel coordinates, and normalizing the coordinate distance to obtain the node distance.
  • Normalization computes the ratio of the coordinate distance between the network nodes to the size of the fused image, yielding the corresponding normalized distance.
  • The normalization can be calculated according to the following formula: d = D / √(w² + h² + 2²),
  • where D is the coordinate distance, w is the width of the palmprint image and the palm vein image, and h is their height; since the fused image is obtained by overlapping the palmprint image and the palm vein image, the size of the fused image in the stacking dimension is 2. This maps every node distance into the interval (0, 1].
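  • A sketch of this node-distance computation (the division by √(w² + h² + 2²) reflects our reading of "the size of the fused image"; the function name is illustrative):

```python
import numpy as np

def node_distance(p, q, w: int, h: int) -> float:
    """Normalized node distance between two pixels of the fused image.

    p and q are (x, y, z) coordinate triples in the 2*w*h fused matrix,
    where z distinguishes the palmprint layer from the palm vein layer.
    The Euclidean coordinate distance D is divided by the size of the
    fused image so that every node distance falls in (0, 1].
    """
    D = np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    return D / np.sqrt(w**2 + h**2 + 2**2)
```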
  • The network nodes are composed of the pixels of the palmprint and the palm vein.
  • The plurality of complex networks are composed of connections between any two network nodes satisfying a constraint distance, and they record the distance relationships between network nodes rather than the absolute spatial positions of the nodes.
  • Since the feature to be identified is composed of the network features of the plurality of complex networks, it represents the relative positional relationships between any two pixels of the palmprint and the palm vein and is highly robust to errors such as relative rotation and displacement; that is, each time the palmprint image and the palm vein image are acquired to obtain the feature to be identified, the feature is unaffected by rotation, displacement, and the like, and a relatively stable feature can be obtained.
  • For example, suppose that at the first capture of the palmprint image and the palm vein image the fingers point in the 12 o'clock direction, while at the second capture the fingers point in the 1 o'clock direction. Although there is a 30° difference between the two acquisitions,
  • the relative positional relationship between any two pixels of the palmprint and the palm vein is unchanged, so the complex networks formed are unchanged
  • and the obtained feature to be identified is relatively stable.
  • In the embodiment of the present invention, different complex networks are determined by the node distance between two network nodes, so multiple different complex networks can be determined; since the complex networks are constructed from the pixels of the palmprint and the palm vein, multiple network features of the palmprint and the palm vein can be determined, making the network features more accurate and yielding better recognition effectiveness and accuracy.
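  • A sketch of how the plurality of complex networks might then be built, one adjacency matrix per constraint distance (the threshold values are illustrative, not taken from the patent):

```python
import numpy as np

def build_networks(nodes: np.ndarray, w: int, h: int,
                   constraint_distances=(0.05, 0.10, 0.15)):
    """Build one complex network per constraint distance.

    nodes is an (n, 3) array of (x, y, z) coordinates of palmprint and
    palm vein pixels. For each constraint distance t, two nodes are
    connected when their normalized node distance is less than t,
    yielding one boolean adjacency matrix per complex network.
    """
    diffs = nodes[:, None, :].astype(float) - nodes[None, :, :].astype(float)
    dist = np.linalg.norm(diffs, axis=-1) / np.sqrt(w**2 + h**2 + 2**2)
    networks = []
    for t in constraint_distances:
        adj = dist < t
        np.fill_diagonal(adj, False)  # a node is not connected to itself
        networks.append(adj)
    return networks
```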
  • As yet another embodiment, the fusing the palmprint image and the palm vein image to obtain a fused image includes: binarizing the palmprint image and the palm vein image to convert the pixels corresponding to the palmprint and the palm vein into a first value and the pixels corresponding to non-palmprint and non-palm-vein regions into a second value; and
  • fusing the binarized palmprint image with the binarized palm vein image to obtain the fused image.
  • Optionally, the taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes includes:
  • taking the pixels in the fused image whose pixel value is the first value as network nodes.
  • Binarizing the palmprint image and the vein image means extracting the palmprint in the palmprint image and the palm vein in the palm vein image, marking the corresponding pixels with a defined first value,
  • and marking the other non-palmprint and non-palm-vein pixels with a defined second value, so that the palmprint in the palmprint image and the palm vein in the palm vein image can be determined unambiguously.
  • Optionally, to make the palmprint and the palm vein clearer, the first value may be 1 and the second value may be 0.
  • Optionally, the palmprint image and the palm vein image may be binarized using a binarization algorithm.
  • The binarization algorithm may refer to algorithms such as LBP (Local Binary Patterns) or mean-window filtering.
  • In the embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are first binarized, so that features such as the palmprint in the palmprint image and the palm vein in the palm vein image are retained while other useless features are discarded; the pixels belonging to the palmprint and the palm vein can thus be determined accurately, and these accurately determined palmprint and palm vein pixels serve as network nodes.
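  • As a sketch, a simple mean-threshold binarization in the spirit of this step (a minimal stand-in for the LBP or mean-window filtering algorithms named above; the thresholding rule is our assumption):

```python
import numpy as np

def binarize(image: np.ndarray, first_value: int = 1, second_value: int = 0) -> np.ndarray:
    """Mark likely palmprint or palm vein pixels with the first value.

    Palm lines and veins are typically darker than the surrounding
    skin in a capture, so pixels below a global mean threshold are
    marked with the first value and all others with the second value.
    """
    threshold = image.mean()
    return np.where(image < threshold, first_value, second_value).astype(np.uint8)
```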
  • As yet another embodiment, the forming the feature to be identified from the network features of the plurality of complex networks includes: determining the node degree of each network node in each complex network; calculating the network features of that complex network according to the node degrees; combining the network features of the plurality of complex networks; and
  • taking the combined network features as the feature to be identified.
  • The node degree of a network node may refer to the number of connections between that node and other network nodes; for example, if a network node is connected to three other network nodes, its degree is 3.
  • Determining the node degree of each network node in a complex network in fact determines the connection relationships between each network node and the other network nodes, and the corresponding network features are then determined from these connection relationships.
  • The average degree refers to the average of the degrees of all network nodes; the degree variance refers to the variance of the degrees, computed from the average degree and each node degree; the maximum degree refers to the maximum of the degrees of all network nodes.
  • In the embodiment of the present invention, multiple network features of the fused image are calculated from the node degrees of the network nodes; computing features from node degrees can determine the network features of each complex network more accurately, yielding more accurate network features and thereby improving the effectiveness and accuracy of recognition.
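  • A sketch of these degree-based network features and their combination across networks (the array layout is our choice; each network contributes a 1*3 feature, matching the 10*3 example above):

```python
import numpy as np

def network_features(adj: np.ndarray) -> np.ndarray:
    """Average degree, degree variance, and maximum degree of one network."""
    degrees = adj.sum(axis=1)  # node degree = number of connections of each node
    return np.array([degrees.mean(), degrees.var(), degrees.max()], dtype=float)

def feature_to_identify(networks) -> np.ndarray:
    """Concatenate the 1*3 feature of each complex network into an n*3 matrix."""
    return np.stack([network_features(adj) for adj in networks], axis=0)
```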
  • As yet another embodiment, the fusing the palmprint image and the palm vein image to obtain a fused image includes: denoising the palmprint image and the palm vein image of the user to be identified; and
  • fusing the denoised palmprint image with the denoised palm vein image to obtain the fused image.
  • The denoising of the palmprint image and the palm vein image of the user to be identified may refer to filtering out the high-frequency components in the two images.
  • Optionally, a noise reduction algorithm may be used to filter out the high-frequency components in the palmprint image and the palm vein image of the user to be identified.
  • The noise reduction algorithm may refer to a wavelet transform, a Kalman filtering algorithm, a median filtering algorithm, and the like.
  • In the embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are subjected to noise reduction, so that the palmprint in the palmprint image and the palm vein in the palm vein image become clearer; reducing various kinds of noise makes the pixels corresponding to the palmprint and the palm vein more accurate, so that a more accurate complex network and a more accurate feature to be identified can be determined, further improving the effectiveness and accuracy of recognition.
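  • A sketch of this denoising step using a median filter, one of the algorithms named above (the kernel size is illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Suppress high-frequency noise before fusion.

    Median filtering is one of the noise reduction algorithms mentioned
    (wavelet transforms and Kalman filtering are alternatives); it
    removes impulse noise while keeping palm lines and veins sharp.
    """
    return median_filter(image, size=size)
```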
  • FIG. 3 is a schematic structural diagram of an embodiment of a biometric identification device according to an embodiment of the present invention.
  • the device may include the following modules:
  • the image acquisition module 301 is configured to collect a palm print image and a palm vein image of the user to be identified.
  • The palmprint image and the palm vein image have the same size;
  • for example, both may be images with a pixel height of h and a pixel width of w, so that the size of the pixel matrix is w*h.
  • Optionally, the palmprint image and the palm vein image may be captured using a multispectral palm image sensor.
  • The multispectral palm image sensor can capture different types of palm images under different light sources; for example, a palmprint image can be captured under a visible light source, and a palm vein image under an infrared light source.
  • Optionally, the palmprint image and the palm vein image of the user to be identified may be captured consecutively within a short time interval, to ensure that the position of the palm does not change between the two images.
  • The palmprint image and the palm vein image may also be captured at different times, as long as the position of the palm does not change between the two captures.
  • the image fusion module 302 is configured to fuse the palm print image and the palm vein image to obtain a fused image.
  • The palmprint image and the palm vein image contain palmprint features and palm vein features respectively, so fusing the two images also fuses the palmprint features and the palm vein features.
  • Fusing the palmprint image and the palm vein image may mean overlapping the two images pixel by pixel according to the positions of their pixels, so that each position contains two pixel values.
  • Taking images of pixel height h and pixel width w as an example, the palmprint image and the palm vein image can each be represented as a w*h two-dimensional matrix; after the two are overlapped, the fused image can be represented as a 2*w*h three-dimensional matrix.
  • The node determining module 303 is configured to take the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes.
  • The pixel values of the palmprint and the palm vein differ considerably from the pixel values of the surrounding normal skin; this difference can therefore be used to determine the pixels corresponding to the palmprint and the palm vein in the fused image.
  • The network construction module 304 is configured to construct, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks.
  • The constraint condition is mainly used to constrain the network nodes; when network nodes satisfy a constraint condition, those nodes can be used to construct a complex network.
  • Optionally, different constraint conditions are satisfied by different network nodes, so the complex networks constructed from different network nodes also differ.
  • Since the network nodes are the pixels corresponding to the palmprint and the palm vein,
  • the nodes of a complex network are in fact formed from those pixels, and the complex network can therefore be used to determine features related to the palmprint and the palm vein.
  • Optionally, the network features of the multiple complex networks may be acquired; the network features may refer to feature values such as the average degree, the degree variance, and the maximum degree.
  • The feature construction module 305 is configured to form the feature to be identified from the network features of the plurality of complex networks.
  • Optionally, the network features of the plurality of complex networks may be concatenated to form the feature to be identified.
  • Taking the network feature of each complex network as a 1*3 matrix and 10 complex networks as an example, the concatenated feature to be identified is a 10*3 matrix.
  • In the embodiment of the present invention, the palmprint image and the palm vein image are collected to obtain their fused image, so that the palmprint and palm vein biometric features can be fused; taking the pixels corresponding to the palmprint and the palm vein as network nodes, different complex networks are determined, that is, multiple network associations between the palmprint and the palm vein are acquired, so that different network features can be determined. The feature to be identified formed from the network features of the plurality of complex networks is more comprehensive in content and can improve the effectiveness and accuracy of recognition.
  • FIG. 4 is a schematic structural diagram of yet another embodiment of a biometric identification device according to an embodiment of the present invention.
  • the device may include the following modules:
  • the image acquisition module 401 is configured to collect a palm print image and a palm vein image of the user to be identified.
  • the image fusion module 402 is configured to fuse the palm print image and the palm vein image to obtain a fused image.
  • The node determining module 403 is configured to take the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes.
  • The distance calculation module 404 is configured to calculate the node distance between any two network nodes.
  • When the fused image is regarded as a three-dimensional matrix, each pixel can be understood as being arranged in that three-dimensional matrix, so each pixel has a corresponding coordinate point, and the coordinate distance between any two network nodes can therefore be determined.
  • The constraint distance refers to a distance constant used to constrain the node distance between two network nodes, and it is estimated from the node distances.
  • For different palmprint and palm vein images the node distances may differ, and there may be large differences between different node distances, so correspondingly different constraint distances would need to be determined for node distances that differ greatly.
  • However, determining the constraint distances one by one is complicated, which is disadvantageous for wide application of the present invention.
  • the distance calculation module may include:
  • a distance calculation unit, configured to calculate the coordinate distance between any two network nodes according to their corresponding pixel coordinates;
  • a distance normalization unit, configured to normalize the coordinate distance to obtain the node distance.
  • Normalization computes the ratio of the coordinate distance between the network nodes to the size of the fused image, yielding the corresponding normalized distance.
  • The network construction module 405 is configured to construct, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain multiple complex networks.
  • The constraint condition may include that the node distance between any two network nodes is less than a constraint distance; different constraint conditions have different constraint distances.
  • the network building module may include:
  • The network construction unit 4051 is configured to construct, based on the different constraint distances, complex networks from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain a plurality of complex networks.
  • The feature construction module 406 is configured to form the feature to be identified from the network features of the plurality of complex networks.
  • The network nodes are composed of the pixels of the palmprint and the palm vein.
  • The plurality of complex networks are composed of connections between any two network nodes satisfying a constraint distance, and they record the distance relationships between network nodes rather than the absolute spatial positions of the nodes.
  • Since the feature to be identified is composed of the network features of the plurality of complex networks, it represents the relative positional relationships between any two pixels of the palmprint and the palm vein and is highly robust to errors such as relative rotation and displacement; that is, each time the palmprint image and the palm vein image are acquired to obtain the feature to be identified, the feature is unaffected by rotation, displacement, and the like, and a relatively stable feature can be obtained.
  • In the embodiment of the present invention, different complex networks are determined by the node distance between two network nodes, so multiple different complex networks can be determined; since the complex networks are constructed from the pixels of the palmprint and the palm vein, multiple network features of the palmprint and the palm vein can be determined, making the network features more accurate and yielding better recognition effectiveness and accuracy.
  • the image fusion module may include:
  • an image conversion unit, configured to binarize the palmprint image and the palm vein image so as to convert the pixels corresponding to the palmprint and the palm vein into a first value, and the pixel values corresponding to non-palmprint and non-palm-vein regions into a second value;
  • a first fusion unit, configured to fuse the binarized palmprint image with the binarized palm vein image to obtain the fused image.
  • the node determining module may include:
  • a node determining unit, configured to take the pixels in the fused image whose pixel value is the first value as network nodes.
  • Binarizing the palmprint image and the vein image means extracting the palmprint in the palmprint image and the palm vein in the palm vein image, marking the corresponding pixels with a defined first value and the other non-palmprint and non-palm-vein pixels with a defined second value; the palmprint in the palmprint image and the palm vein in the palm vein image can thus be determined unambiguously.
  • Optionally, to make the palmprint and the palm vein clearer, the first value may be 1 and the second value may be 0.
  • the palmprint image and the palm vein image may be binarized using a binarization algorithm.
  • The binarization algorithm may refer to LBP (Local Binary Patterns), mean-window filtering, and the like.
  • In the embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are first binarized, so that features such as the palmprint in the palmprint image and the palm vein in the palm vein image are retained while other useless features are discarded; the pixels belonging to the palmprint and the palm vein can thus be determined accurately, and these accurately determined palmprint and palm vein pixels serve as network nodes.
  • the feature building module may include:
  • a first determining unit, configured to determine the node degree of each network node in each complex network;
  • a feature calculation unit, configured to calculate the network features of that complex network according to the node degrees;
  • a feature combination unit, configured to combine the network features of the complex networks in the plurality of complex networks;
  • a second determining unit, configured to take the combined network features as the feature to be identified.
  • The node degree of a network node may refer to the number of connections between that node and other network nodes; for example, if a network node is connected to three other network nodes, its degree is 3.
  • Determining the node degree of each network node in a complex network in fact determines the connection relationships between each network node and the other network nodes, and the corresponding network features are then determined from these connection relationships.
  • The average degree refers to the average of the degrees of all network nodes; the degree variance refers to the variance of the degrees, computed from the average degree and each node degree; the maximum degree refers to the maximum of the degrees of all network nodes.
  • multiple network features of the fused image are calculated based on the node degrees of the respective network nodes.
  • the node degree calculation method can more accurately determine the network characteristics of any complex network, and can obtain more accurate network features, thereby improving the effectiveness and accuracy of the identification.
  • the image fusion module may include:
  • an image noise reduction unit, configured to denoise the palmprint image and the palm vein image of the user to be identified;
  • the second fusion unit is configured to fuse the palmprint image after the noise reduction and the palm vein image to obtain a fused image.
  • The denoising of the palmprint image and the palm vein image of the user to be identified may refer to filtering out the high-frequency components in the two images.
  • Optionally, a noise reduction algorithm may be used to filter out the high-frequency components in the palmprint image and the palm vein image of the user to be identified.
  • the noise reduction algorithm may refer to a wavelet transform, a Kalman filter algorithm, a median filter algorithm, and the like.
  • In the embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are subjected to noise reduction, so that the palmprint in the palmprint image and the palm vein in the palm vein image become clearer; reducing various kinds of noise makes the pixels corresponding to the palmprint and the palm vein more accurate, so that a more accurate complex network and a more accurate feature to be identified can be determined, further improving the effectiveness and accuracy of recognition.
  • The above biometric identification device may be implemented as a biometric identification electronic device.
  • As shown in FIG. 5, the electronic device may include: a processing component 501, and a storage component 502 and an image acquisition component 503 each connected to the processing component;
  • the image acquisition component 503 is configured to collect a palmprint image and a palm vein image of the user to be identified;
  • the storage component 502 stores one or more computer instructions;
  • The processing component 501 invokes and executes the one or more computer instructions to: fuse the palmprint image and the palm vein image to obtain a fused image; take the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes; construct, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks; and form the feature to be identified from the network features of the plurality of complex networks.
  • In the embodiment of the present invention, the palmprint image and the palm vein image are collected to obtain their fused image, so that the palmprint and palm vein biometric features can be fused; taking the pixels corresponding to the palmprint and the palm vein as network nodes, different complex networks are determined, that is, multiple network associations between the palmprint and the palm vein are acquired, so that different network features can be determined. The feature to be identified formed from the network features of the plurality of complex networks is more comprehensive in content and can improve the effectiveness and accuracy of recognition.
  • Preferably, the constraint condition includes that the node distance between any two network nodes is less than a constraint distance; different constraint conditions have different constraint distances;
  • the processing component can also be used to calculate the node distance between any two network nodes;
  • the constructing, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition to obtain a plurality of complex networks includes:
  • constructing, based on the different constraint distances, complex networks from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain a plurality of complex networks.
  • The processing component calculates the node distance between any two network nodes specifically by: calculating the coordinate distance between the two network nodes according to their corresponding pixel coordinates; and
  • normalizing the coordinate distance to obtain the node distance.
  • In this way, different complex networks are determined by the node distance between two network nodes, so multiple different complex networks can be determined; since the complex networks are constructed from the pixels of the palmprint and the palm vein, multiple network features of the palmprint and the palm vein can be determined, making the network features more accurate and yielding better recognition effectiveness and accuracy.
  • The processing component fuses the palmprint image and the palm vein image to obtain the fused image specifically by: binarizing the palmprint image and the palm vein image to convert the pixels corresponding to the palmprint and the palm vein into a first value and the pixels corresponding to non-palmprint and non-palm-vein regions into a second value; and
  • fusing the binarized palmprint image with the binarized palm vein image to obtain the fused image.
  • The taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes includes:
  • taking the pixels in the fused image whose pixel value is the first value as network nodes.
  • Before the palmprint image and the palm vein image are fused, they are first binarized, so that features such as the palmprint in the palmprint image and the palm vein in the palm vein image are retained while other useless features are discarded; the pixels belonging to the palmprint and the palm vein can thus be determined accurately, and these accurately determined palmprint and palm vein pixels serve as network nodes.
  • The processing component forms the feature to be identified from the network features of the plurality of complex networks by: determining the node degree of each network node in each complex network; calculating the network features of that complex network according to the node degrees; combining the network features of the plurality of complex networks; and
  • taking the combined network features as the feature to be identified.
  • multiple network features of the fused image are calculated based on the node degrees of the respective network nodes.
  • the node degree calculation method can more accurately determine the network characteristics of any complex network, and can obtain more accurate network features, thereby improving the effectiveness and accuracy of the identification.
  • The processing component fuses the palmprint image and the palm vein image to obtain the fused image specifically by: denoising the palmprint image and the palm vein image of the user to be identified; and
  • fusing the denoised palmprint image with the denoised palm vein image to obtain the fused image.
  • Before the palmprint image and the palm vein image are fused, they are subjected to noise reduction, so that the palmprint in the palmprint image and the palm vein in the palm vein image become clearer; reducing various kinds of noise makes the pixels corresponding to the palmprint and the palm vein more accurate, so that a more accurate complex network and a more accurate feature to be identified can be determined, further improving the effectiveness and accuracy of recognition.
  • The electronic device shown in FIG. 5 may be a head mounted display device 600; the head mounted display device 600 may be an external head mounted display device or an integrated head mounted display device, wherein an external head mounted display device needs to be used with an external processing system, such as a computer processing system.
  • When the electronic device is a head mounted display device, as shown in FIG. 6, the electronic device may further include:
  • the display component 601, which can include a display panel disposed on the side surface of the head mounted display device facing the user's face; it can be a single whole panel, or a left panel and a right panel corresponding to the user's left and right eyes respectively.
  • The display panel may be an electroluminescence (EL) element, a liquid crystal display or a microdisplay with a similar structure, or a laser-scanning display that projects directly onto the retina, or the like.
  • the electronic device may further include:
  • a virtual image optics assembly 602, which projects the image displayed by the display component 601 in an enlarged manner and allows the user to observe the displayed image as an enlarged virtual image.
  • The display image output to the display component 601 may be an image of a virtual scene supplied from a content reproduction device (a Blu-ray Disc or DVD player) or a streaming media server, or an image of a real scene captured with an external camera.
  • virtual image optics assembly 602 can include a lens unit, such as a spherical lens, an aspheric lens, a Fresnel lens, and the like.
  • the electronic device may further include:
  • an input operation component 603, which may include at least one operational component for performing input operations, such as a key, a button, a switch, or another component with a similar function; user instructions are received through the operational component and output to the processing component 501.
  • the electronic device may further include:
  • the status information acquisition component 604 is configured to obtain status information of a user wearing the display device.
  • The status information acquisition component 604 can include various types of sensors for detecting status information by itself, and can obtain status information from external devices (e.g., smartphones, wristwatches, and other multi-function terminals worn by the user) via the communication component 605.
  • the status information acquisition component 604 can obtain location information and/or gesture information of the user's head.
  • The status information acquisition component 604 can include one or more of a gyro sensor, an acceleration sensor, a global positioning system (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio-frequency field strength sensor.
  • The status information acquisition component 604 acquires status information of the user wearing the head mounted display device 600, for example, the operation status of the user (whether the user is wearing the head mounted display device 600), the action state of the user (a state of movement such as standing still, walking, or running, the posture of a hand or fingertip, the open or closed state of the eyes, the line-of-sight direction, and the pupil size), the mental state (whether the user is immersed in observing the displayed image, and the like), and even the physiological state.
  • the electronic device may further include:
  • the communication component 605, configured to perform communication processing with external devices, modulation and demodulation processing, and encoding and decoding processing of communication signals. In addition, the processing component 501 can transmit data to external devices via the communication component 605.
  • The communication method may be wired or wireless, such as Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), wireless fidelity (Wi-Fi), Bluetooth communication or Bluetooth Low Energy communication, or a mesh network of the IEEE 802.11s standard.
  • The communication component 605 can also be a cellular wireless transceiver operating in accordance with Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and the like.
  • the electronic device may further include:
  • the image processing component 606, configured to perform signal processing, such as image quality correction of the image signals of the virtual scene output from the processing component 501, and to convert their resolution to the resolution of the screen of the display component 601;
  • the display driver component 607, which sequentially selects and scans each row of pixels of the display component 601 row by row, thereby providing pixel signals based on the signal-processed image signals.
  • the image acquisition component 503 in the electronic device can include:
  • an external camera 608, which may be disposed on the front surface of the main body of the electronic device; there may be one or more external cameras 608.
  • the external camera 608 can acquire image information.
  • a position sensitive detector (PSD) or other type of distance sensor that detects reflected signals from the object can be used with the external camera 608.
  • An external camera 608 and a distance sensor can be used to detect the body position, posture and shape of the user wearing the head mounted display device. In addition, under certain conditions, the user can directly view or preview the real scene through the external camera 608.
  • the electronic device may further include:
  • the sound processing component 609, the sound processing component 609 can perform sound quality correction or sound amplification of the sound signal output from the processing component 501, signal processing of the input sound signal, and the like.
  • the sound input/output unit 610 outputs sound to the outside after sound processing and inputs sound from the microphone.
  • In addition, the structures or components shown by dotted lines in FIG. 6 may be independent of the head mounted display device and may, for example, be disposed in an external processing system (for example, a computer system) used with the head mounted display device; alternatively, the structures or components shown by dotted lines may be disposed inside or on the surface of the head mounted display device.
  • Processing component 501 can include one or more processors to execute computer instructions to perform all or part of the steps described above.
  • The processing component can also be one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), or field programmable gate arrays (FPGAs).
  • Storage component 502 is configured to store various types of data to support operation at the electronic device.
  • The storage component can be any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • The embodiment of the present application further provides a computer readable storage medium storing a computer program; when the computer program is executed by a computer, the biometric identification method of any of the above embodiments may be implemented.
  • The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which a person of ordinary skill in the art can understand and implement without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention discloses a biometric identification method and apparatus. The method includes: collecting a palmprint image and a palm vein image of a user to be identified; fusing the palmprint image and the palm vein image to obtain a fused image; taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes; constructing, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks; and forming the feature to be identified from the network features of the plurality of complex networks. The present invention improves the effectiveness and accuracy of recognition.

Description

Biometric identification method and apparatus
Technical Field
The present invention relates to the field of biometric identification, and in particular to a biometric identification method and apparatus.
Background Art
Each individual has unique physiological characteristics or behaviors that can be measured or automatically recognized and verified, that is, biometric features. Biometric identification technology can recognize and authenticate identity through these biometric features that are unique to each individual; it usually refers to technology in which a computer uses inherent human biological characteristics, such as fingerprints, the face, or the voice, to authenticate a user's identity.
In the prior art, the fingerprint of each individual can be collected, and fingerprint recognition can be performed based on fingerprint features converted from the collected fingerprint. Face recognition can be performed by collecting facial images and converting them into facial features. It is also possible to collect the sounds made by each individual and perform voice recognition based on voice features converted from the sounds.
However, biometric features such as fingerprints, faces, or voices are easily tampered with: fingerprints can be forged, faces can be occluded, and voices can be altered with a voice changer. Therefore, effective recognition cannot be guaranteed.
Summary of the Invention
In view of this, the present invention provides a biometric identification method and apparatus, which combine palmprint recognition with palm vein recognition, solve the problem in the prior art that effective recognition cannot be achieved, and improve the effectiveness and accuracy of recognition.
To solve the above technical problem, a first aspect of the present invention provides a biometric identification method, the method including:
collecting a palmprint image and a palm vein image of a user to be identified; fusing the palmprint image and the palm vein image to obtain a fused image; taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes; constructing, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks; and forming the feature to be identified from the network features of the plurality of complex networks.
Preferably, the constraint condition includes that the node distance between any two network nodes is less than a constraint distance; different constraint conditions have different constraint distances;
the method further includes:
calculating the node distance between any two network nodes;
the constructing, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition to obtain a plurality of complex networks includes:
constructing, based on the different constraint distances, complex networks from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain a plurality of complex networks.
Preferably, the calculating the node distance between any two network nodes includes:
calculating the coordinate distance between the two network nodes according to the pixel coordinates corresponding to the two network nodes; and normalizing the coordinate distance to obtain the node distance.
Preferably, the fusing the palmprint image and the palm vein image to obtain a fused image includes:
binarizing the palmprint image and the palm vein image so as to convert the pixels corresponding to the palmprint and the palm vein into a first value and the pixel values corresponding to non-palmprint and non-palm-vein regions into a second value; and fusing the binarized palmprint image with the binarized palm vein image to obtain the fused image.
Preferably, the taking the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes includes:
taking the pixels in the fused image whose pixel value is the first value as network nodes.
Preferably, the forming the feature to be identified from the network features of the plurality of complex networks includes:
determining the node degree of each network node in each complex network; calculating the network features of that complex network according to the node degrees; combining the network features of the complex networks in the plurality of complex networks; and taking the combined network features as the feature to be identified.
Preferably, the fusing the palmprint image and the palm vein image to obtain a fused image includes:
denoising the palmprint image and the palm vein image of the user to be identified; and fusing the denoised palmprint image with the denoised palm vein image to obtain the fused image.
A second aspect of the present invention provides a biometric identification apparatus, the apparatus including:
an image acquisition module, configured to collect a palmprint image and a palm vein image of a user to be identified;
an image fusion module, configured to fuse the palmprint image and the palm vein image to obtain a fused image;
a node determining module, configured to take the pixels corresponding to the palmprint and the palm vein in the fused image as network nodes;
a network construction module, configured to construct, based on different constraint conditions, complex networks from the network nodes satisfying each constraint condition, to obtain a plurality of complex networks;
a feature construction module, configured to form the feature to be identified from the network features of the plurality of complex networks.
Preferably, the constraint condition includes that the node distance between any two network nodes is less than a constraint distance; different constraint conditions have different constraint distances;
the apparatus further includes:
a distance calculation module, configured to calculate the node distance between any two network nodes;
the network construction module includes:
a network construction unit, configured to construct, based on the different constraint distances, complex networks from the network nodes whose node distance from any network node is less than a given constraint distance, to obtain a plurality of complex networks.
Preferably, the distance calculation module includes:
a distance calculation unit, configured to calculate the coordinate distance between any two network nodes according to the pixel coordinates corresponding to the two network nodes;
a distance normalization unit, configured to normalize the coordinate distance to obtain the node distance.
Preferably, the image fusion module includes:
an image conversion unit, configured to binarize the palmprint image and the palm vein image so as to convert the pixels corresponding to the palmprint and the palm vein into a first value, and the pixel values corresponding to non-palmprint and non-palm-vein regions into a second value;
a first fusion unit, configured to fuse the binarized palmprint image with the binarized palm vein image to obtain the fused image.
Compared with the prior art, the present invention can achieve the following technical effects:
In the present invention, a palmprint image and a palm vein image of a user to be identified are collected and fused to obtain a fused image; the fused image combines the features of the palmprint image and the palm vein image, so feature distinguishability is enhanced. The pixels corresponding to the palmprint and the palm vein in the fused image are taken as network nodes, and, based on different constraint conditions, complex networks are constructed from the network nodes satisfying each constraint condition to obtain a plurality of complex networks, whose network features constitute the feature to be identified. The plurality of complex networks are formed from the highly distinguishable palmprint and palm vein pixels, combining the features of both; moreover, a feature to be identified formed from the network features of a plurality of complex networks better characterizes the identification features of the user to be identified, so the effectiveness and accuracy of recognition can be improved.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of a biometric identification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of yet another embodiment of a biometric identification method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of a biometric identification device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of yet another embodiment of a biometric identification device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of a biometric identification electronic device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of a head mounted display device according to an embodiment of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples, so that the process by which the present invention applies technical means to solve the technical problem and achieve the technical effects can be fully understood and implemented.
The embodiments of the present invention are mainly applied to the field of biometric identification; palmprint and palm vein features are collected to characterize the biometric features of the user to be identified, which can improve the effectiveness and accuracy of recognition.
In the prior art, biometric identification mostly uses facial recognition, iris recognition, or voice recognition, but these identification features are easily tampered with, so effective recognition cannot be guaranteed.
The inventors found through research that both the palmprint and the palm vein are biometric features of the human body, and that each person's palmprint and palm veins are different, so they can be used for biometric identification. However, the inventors also found that when palmprint recognition is used, the palmprint is an exposed biometric feature that is easily forged, while the palm vein yields relatively few features and lower recognition accuracy. Therefore, to overcome the above shortcomings, the inventors conceived of fusing palmprint recognition with palm vein recognition to obtain fused palmprint and palm vein features, which can both guarantee the recognition accuracy of the identification features of the user to be identified and ensure that those identification features cannot be forged; the technical solution of the present invention is proposed on this basis.
In the embodiments of the present invention, a palmprint image and a palm vein image of the user to be identified are collected and fused to obtain a fused image; based on the network nodes determined in the fused image, different constraint conditions are used to determine multiple different complex networks, and the network features of the multiple complex networks constitute the feature to be identified. The constructed complex networks are the basis for determining the feature to be identified, and the complex networks are in turn obtained from the pixels of the fused image, which ensures that the constructed information to be identified contains both kinds of biometric content at the same time, thereby improving the effectiveness and accuracy of recognition.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in FIG. 1, which is a flowchart of an embodiment of a biometric identification method provided by an embodiment of the present invention, the method may include the following steps:
101: Capture a palmprint image and a palm vein image of a user to be identified.
The palmprint image and the palm vein image have the same size; for example, both may be images with a pixel height of h and a pixel width of w, and the size of their pixel matrix is w*h.
Optionally, a multispectral palm image sensor may be used to capture the palmprint image and the palm vein image. A multispectral palm image sensor can capture different types of palm images under different light sources; for example, a palmprint image can be captured under a visible light source, and a palm vein image can be captured under an infrared light source.
Optionally, the palmprint image and the palm vein image of the user to be identified may be captured consecutively within a short time interval to ensure that the position of the palm remains unchanged between the two images. The palmprint image and the palm vein image may also be captured at different times, as long as the position of the palm does not change between the two captures.
102: Fuse the palmprint image and the palm vein image to obtain a fused image.
The palmprint image and the palm vein image contain palmprint features and palm vein features, so fusing the palmprint image and the palm vein image also fuses the palmprint features and the palm vein features.
Fusing the palmprint image and the palm vein image may refer to overlaying the pixels of the palmprint image and the palm vein image according to the position of each pixel point, so that each position corresponds to two pixel values. Taking a pixel height of h and a pixel width of w for both images as an example, the palmprint image and the palm vein image can each be represented as a w*h two-dimensional matrix, and after the two are overlaid, the fused image can be represented as a 2*w*h three-dimensional matrix.
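For illustration only, and not part of the original disclosure, the overlaying described above can be sketched in Python with NumPy (the function name and the (h, w) array layout are assumptions):

```python
import numpy as np

def fuse_images(palmprint: np.ndarray, palm_vein: np.ndarray) -> np.ndarray:
    """Overlay two same-sized images so that every (row, col) position
    carries two pixel values: palmprint at index 0, palm vein at index 1."""
    if palmprint.shape != palm_vein.shape:
        raise ValueError("the two images must have the same size")
    # with w*h inputs this yields the 2*w*h three-dimensional matrix
    return np.stack([palmprint, palm_vein], axis=0)
```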
103: Take the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes.
In the fused image, the pixel values corresponding to the palmprint and the palm veins differ considerably from the surrounding pixel values of non-palmprint and non-palm-vein regions; therefore, this difference in pixel values can be used to determine the pixel points corresponding to the palmprint and the palm veins in the fused image.
104: Based on different constraint conditions, construct a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks.
The constraint conditions are mainly used to constrain the network nodes; when network nodes satisfy a constraint condition, the network nodes satisfying that constraint condition can be used to construct a complex network.
Optionally, different constraint conditions are satisfied by different network nodes, so the complex networks constructed from different network nodes are also different.
Since the network nodes are the pixel points corresponding to the palmprint and the palm veins, the network nodes of the complex networks are in fact formed from those pixel points, so the complex networks can be used to determine the relevant features of the palmprint and the palm veins.
Optionally, the network features of the plurality of complex networks can be obtained; for example, the network features may refer to feature values such as the average degree, the degree variance, and the maximum degree.
105: Form a feature to be identified from the network features of the plurality of complex networks.
Optionally, the network features of the plurality of complex networks may be concatenated to form the feature to be identified. Taking the network features of each complex network as a 1*3 two-dimensional matrix as an example, and assuming there are 10 complex networks, the concatenated feature to be identified is a 10*3 matrix.
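A minimal sketch of this concatenation, assuming each complex network has already been reduced to a 1*3 row of network features:

```python
import numpy as np

def concatenate_features(per_network_rows: list) -> np.ndarray:
    """Stack one 1x3 network-feature row per complex network;
    10 networks therefore yield a 10x3 feature to be identified."""
    return np.vstack(per_network_rows)
```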
In this embodiment of the present invention, a palmprint image and a palm vein image are captured to obtain a fused image of the two, so that the two biological features of the palmprint and the palm veins can be fused. With the pixel points corresponding to the palmprint and the palm veins as network nodes, different complex networks are determined, that is, multiple kinds of network associations between the palmprint and the palm veins are obtained, so that different network features can be determined. The feature to be identified formed from the network features of the plurality of complex networks is more comprehensive in content, which can improve the effectiveness and accuracy of identification.
As shown in FIG. 2, which is a flowchart of another embodiment of a biometric identification method provided by an embodiment of the present invention, in this embodiment the constraint conditions may include that the node distance of any network node is less than a constraint distance, and different constraint conditions have different constraint distances. The method may further include the following steps:
201: Capture a palmprint image and a palm vein image of a user to be identified.
202: Fuse the palmprint image and the palm vein image to obtain a fused image.
203: Take the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes.
204: Calculate the node distance between any two network nodes.
205: Based on different constraint distances, construct a complex network from the network nodes whose node distance to any network node is less than any one of the constraint distances, so as to obtain a plurality of complex networks.
When the fused image is regarded as a three-dimensional matrix, each pixel point can be understood as being arranged in the manner of a three-dimensional matrix, so each pixel point has a corresponding coordinate point, and the coordinate distance between any two network nodes can therefore be determined.
Assuming the coordinate position of a pixel point on the palmprint image is (X1, Y1, Z1) and the coordinate position of a pixel point on the palm vein image is (X2, Y2, Z2), the coordinate distance between the two network nodes can be denoted as D, which can be calculated according to the following formula:
$D = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2}$
Of course, this includes not only the coordinate distance between a palmprint pixel point and a palm vein pixel point, but may also include the coordinate distance between two palmprint pixel points and the coordinate distance between two palm vein pixel points.
The constraint distance refers to a distance constant used to constrain the node distance between two network nodes, and is estimated from the node distances. For different palmprint images and palm vein images the node distances may differ, and different node distances may differ considerably from one another, so correspondingly different constraint distances would have to be determined for node distances that differ considerably. However, when a large number of palmprint images and palm vein images are involved, determining the constraint distances one by one is very complicated, which is unfavorable to large-scale application of the present invention.
Therefore, optionally, the calculating the node distance between any two network nodes may include:
calculating the coordinate distance between any two network nodes according to the pixel coordinates corresponding to the two network nodes; and
normalizing the coordinate distance to obtain the node distance.
Normalization is the calculation of the ratio of the coordinate distance between the network nodes to the size of the fused image, which yields the corresponding normalized distance. The normalization can be carried out according to the following formula:
$d = \dfrac{D}{\sqrt{w^2 + h^2 + 2^2}}$
where w is the width of the palmprint image and the palm vein image, and h is their height; since the fused image is obtained by overlaying the palmprint image and the palm vein image, the height (depth) of the fused image is 2.
Normalizing the coordinate distances unifies them and confines the constraint distances to the interval (0, 1]. Thus, when a large number of palmprint images and palm vein images are involved, the same set of constraint distances can be used without having to define the constraint distances repeatedly, which simplifies the calculation process and facilitates wide application of the present invention.
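Steps 204 and 205 can be sketched as follows (illustrative only: the fused binary volume follows the earlier sketch, and the three constraint distances are arbitrary example values, not values given in the original text):

```python
import numpy as np

def normalized_distances(nodes: np.ndarray, w: int, h: int) -> np.ndarray:
    """Pairwise coordinate distances between nodes (rows of (z, y, x)
    coordinates), divided by the fused-image diagonal so that every
    node distance falls into the interval (0, 1]."""
    diff = nodes[:, None, :].astype(float) - nodes[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist / np.sqrt(w ** 2 + h ** 2 + 2 ** 2)

def build_complex_networks(fused: np.ndarray, thresholds=(0.05, 0.10, 0.15)):
    """One adjacency matrix per constraint distance: two nodes are
    connected when their normalized node distance is below it."""
    nodes = np.argwhere(fused == 1)       # palmprint / palm vein pixels
    _, h, w = fused.shape                 # fused binary volume is (2, h, w)
    d = normalized_distances(nodes, w, h)
    eye = np.eye(len(nodes), dtype=bool)  # exclude self-loops
    return [(d < t) & ~eye for t in thresholds]
```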
206: Form a feature to be identified from the network features of the plurality of complex networks.
The network nodes are formed from the pixel points of the palmprint and the palm veins. The plurality of complex networks are formed from the connections between any two network nodes satisfying a constraint distance; they record the distance relationship between any two network nodes rather than the absolute spatial positions of the nodes. Since the feature to be identified is formed from the network features of the plurality of complex networks, it represents the relative positional relationship between any two pixel points of the palmprint and the palm veins, and it is quite robust to errors such as relative rotation and displacement. That is, whenever the palmprint image and the palm vein image are captured to obtain the feature to be identified, the result is unaffected by rotation, displacement, and the like, and a relatively stable feature to be identified can be obtained.
For example, suppose the fingers point in the 12 o'clock direction when the palmprint image and the palm vein image are captured the first time and in the 1 o'clock direction when they are captured the second time. Although there is a 30° difference between the two captures, the relative positional relationship between any two pixel points of the palmprint and the palm veins remains unchanged, so the plurality of complex networks formed remain unchanged, and the obtained feature to be identified is relatively stable.
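The claimed robustness can be checked numerically. The following toy verification (illustrative only) rotates a set of node coordinates by 30° and confirms that all pairwise distances, and hence the thresholded networks built from them, are unchanged:

```python
import numpy as np

def pairwise(points: np.ndarray) -> np.ndarray:
    # full matrix of Euclidean distances between all pairs of points
    return np.linalg.norm(points[:, None] - points[None, :], axis=-1)

def rotate(points: np.ndarray, degrees: float) -> np.ndarray:
    t = np.radians(degrees)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return points @ rot.T

pts = np.random.default_rng(0).random((50, 2))
assert np.allclose(pairwise(pts), pairwise(rotate(pts, 30.0)))
```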
In this embodiment of the present invention, corresponding different complex networks are determined by the node distance between two network nodes, so that a plurality of different complex networks can be determined. Since the complex networks are formed on the basis of the pixel points of the palmprint and the palm veins, multiple network features of the palmprint and the palm veins can be determined, making the network features more accurate and achieving higher identification performance and accuracy.
As yet another embodiment, the fusing the palmprint image and the palm vein image to obtain a fused image includes:
binarizing the palmprint image and the palm vein image, so that the pixels corresponding to the palmprint and the palm veins are converted into a first value and the pixel values corresponding to non-palmprint and non-palm-vein regions are converted into a second value; and
fusing the binarized palmprint image and palm vein image to obtain the fused image.
Optionally, the taking the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes includes:
taking, in the fused image, the pixel points whose pixel value is the first value as the network nodes.
Binarizing the palmprint image and the palm vein image means extracting the palmprint in the palmprint image and the palm veins in the palm vein image, marking their corresponding pixel points with a defined first value, and marking the other non-palmprint and non-palm-vein pixel points with a defined second value, so that the palmprint in the palmprint image and the palm veins in the palm vein image can be determined unambiguously.
Optionally, to make the palmprint and the palm veins clearer, the first value may be 1 and the second value may be 0.
Optionally, a binarization algorithm may be used to binarize the palmprint image and the palm vein image.
The binarization algorithm may refer to LBP (Local Binary Patterns), mean-window filtering, or similar algorithms.
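As an illustrative sketch of the mean-window idea only (the window size, the darker-than-neighborhood criterion, and the use of SciPy are assumptions; the original text merely names the algorithms):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def binarize(image: np.ndarray, window: int = 15) -> np.ndarray:
    """Mean-window binarization: pixels darker than their local
    neighborhood mean are treated as palmprint or palm vein pixels
    and set to the first value 1; all others get the second value 0."""
    local_mean = uniform_filter(image.astype(float), size=window)
    return (image < local_mean).astype(np.uint8)
```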
In this embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are first subjected to a binarization transformation, so that features such as the palmprint in the palmprint image and the palm veins in the palm vein image are determined while other useless features are discarded. The pixel points belonging to the palmprint and the palm veins can thus be determined accurately, and the accurately determined palmprint pixel points and palm vein pixel points are used as network nodes to form accurate complex networks, so that a more accurate feature to be identified can be determined, further increasing the accuracy and effectiveness of the identification features.
As yet another embodiment, the forming a feature to be identified from the network features of the plurality of complex networks includes:
determining the node degree of each network node in any one of the complex networks;
calculating the network features of that complex network according to the node degrees;
combining the network features of each of the plurality of complex networks; and
taking the combined network features as the feature to be identified.
The node degree of a network node may refer to the number of connections between that network node and other network nodes. For example, if a network node is connected to 3 other network nodes, the degree of that node is 3.
Determining the node degree of each network node in a complex network is in fact determining the connection relationships between each network node and the other network nodes in that complex network, so that the corresponding network features can be determined according to the connection relationships between the network nodes.
The calculating the network features of that complex network may include:
calculating the average degree, the degree variance, and/or the maximum degree of that complex network.
The average degree refers to the mean of the degrees of all network nodes; the degree variance refers to the variance of the degrees calculated from the average degree and the individual node degrees; and the maximum degree refers to the maximum of the degrees of all network nodes.
In this embodiment of the present invention, multiple network features of the fused image are calculated on the basis of the node degree of each network node. Calculating with node degrees allows the network features of each complex network to be determined more accurately, so more accurate network features can be obtained, thereby improving the effectiveness and accuracy of identification.
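A minimal sketch of these degree statistics, assuming the boolean adjacency matrices produced by the earlier network-construction sketch:

```python
import numpy as np

def degree_features(adjacency: np.ndarray) -> np.ndarray:
    """Return [average degree, degree variance, maximum degree] for one
    complex network; stacking one such row per network yields the
    N x 3 feature to be identified described above."""
    degrees = adjacency.sum(axis=1)
    return np.array([degrees.mean(), degrees.var(), degrees.max()],
                    dtype=float)
```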
As yet another embodiment, the fusing the palmprint image and the palm vein image to obtain a fused image includes:
denoising the palmprint image and the palm vein image of the user to be identified; and
fusing the denoised palmprint image and palm vein image to obtain the fused image.
Here, denoising the palmprint image and the palm vein image of the user to be identified may refer to filtering out the high-frequency components in those images.
Optionally, a denoising algorithm may be used to filter out the high-frequency components in the palmprint image and the palm vein image of the user to be identified. The denoising algorithm may refer to the wavelet transform, the Kalman filtering algorithm, the median filtering algorithm, or the like.
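Of the algorithms named above, the median filter is the simplest to sketch (the filter size is an assumption for illustration):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Median filtering suppresses high-frequency noise while keeping
    the edges of palm lines and palm veins relatively sharp."""
    return median_filter(image, size=size)
```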
In this embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are subjected to denoising, which makes the palmprint in the palmprint image and the palm veins in the palm vein image clearer. When the palmprint and the palm veins are determined, the reduction of various kinds of noise makes the pixel points corresponding to the palmprint and the palm veins more accurate, so more accurate complex networks can be determined and a more accurate feature to be identified can be obtained, further improving the effectiveness and accuracy of identification.
As shown in FIG. 3, which is a schematic structural diagram of an embodiment of a biometric identification apparatus according to an embodiment of the present invention, the apparatus may include the following modules:
an image capture module 301, configured to capture a palmprint image and a palm vein image of a user to be identified.
The palmprint image and the palm vein image have the same size; for example, both may be images with a pixel height of h and a pixel width of w, and the size of their pixel matrix is w*h.
Optionally, a multispectral palm image sensor may be used to capture the palmprint image and the palm vein image. A multispectral palm image sensor can capture different types of palm images under different light sources; for example, a palmprint image can be captured under a visible light source, and a palm vein image can be captured under an infrared light source.
Optionally, the palmprint image and the palm vein image of the user to be identified may be captured consecutively within a short time interval to ensure that the position of the palm remains unchanged between the two images. The palmprint image and the palm vein image may also be captured at different times, as long as the position of the palm does not change between the two captures.
an image fusion module 302, configured to fuse the palmprint image and the palm vein image to obtain a fused image.
The palmprint image and the palm vein image contain palmprint features and palm vein features, so fusing the palmprint image and the palm vein image also fuses the palmprint features and the palm vein features.
Fusing the palmprint image and the palm vein image may refer to overlaying the pixels of the palmprint image and the palm vein image according to the position of each pixel point, so that each position contains two pixel values. Taking a pixel height of h and a pixel width of w for both images as an example, the palmprint image and the palm vein image can each be represented as a w*h two-dimensional matrix, and after the two are overlaid, the fused image can be represented as a 2*w*h three-dimensional matrix.
a node determination module 303, configured to take the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes.
In the fused image, the pixel values of the palmprint and the palm veins differ considerably from the pixel values of the surrounding normal skin; therefore, this difference in pixel values can be used to determine the pixel points corresponding to the palmprint and the palm veins in the fused image.
a network construction module 304, configured to construct, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks.
The constraint conditions are mainly used to constrain the network nodes; when network nodes satisfy a constraint condition, the network nodes satisfying that constraint condition can be used to construct a complex network.
Optionally, different constraint conditions are satisfied by different network nodes, so the complex networks constructed from different network nodes are also different.
Since the network nodes are the pixel points corresponding to the palmprint and the palm veins, the network nodes of the complex networks are in fact formed from those pixel points, so the complex networks can be used to determine the relevant features of the palmprint and the palm veins.
Optionally, the network features of the plurality of complex networks can be obtained; for example, the network features may refer to feature values such as the average degree, the degree variance, and the maximum degree.
a feature construction module 305, configured to form a feature to be identified from the network features of the plurality of complex networks.
Optionally, the network features of the plurality of complex networks may be concatenated to form the feature to be identified. Taking the network features of each complex network as a 1*3 two-dimensional matrix as an example, and assuming there are 10 complex networks, the concatenated feature to be identified is a 10*3 matrix.
In this embodiment of the present invention, a palmprint image and a palm vein image are captured to obtain a fused image of the two, so that the two biological features of the palmprint and the palm veins can be fused. With the pixel points corresponding to the palmprint and the palm veins as network nodes, different complex networks are determined, that is, multiple kinds of network associations between the palmprint and the palm veins are obtained, so that different network features can be determined. The feature to be identified formed from the network features of the plurality of complex networks is more comprehensive in content, which can improve the effectiveness and accuracy of identification.
As shown in FIG. 4, which is a schematic structural diagram of another embodiment of a biometric identification apparatus according to an embodiment of the present invention, the apparatus may include the following modules:
an image capture module 401, configured to capture a palmprint image and a palm vein image of a user to be identified.
an image fusion module 402, configured to fuse the palmprint image and the palm vein image to obtain a fused image.
a node determination module 403, configured to take the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes.
a distance calculation module 404, configured to calculate the node distance between any two network nodes.
When the fused image is regarded as a three-dimensional matrix, each pixel point can be understood as being arranged in the manner of a three-dimensional matrix, so each pixel point has a corresponding coordinate point, and the coordinate distance between any two network nodes can therefore be determined.
Of course, this includes not only the coordinate distance between a palmprint pixel point and a palm vein pixel point, but may also include the coordinate distance between two palmprint pixel points and the coordinate distance between two palm vein pixel points.
The constraint distance refers to a distance constant used to constrain the node distance between two network nodes, and is estimated from the node distances. For different palmprint images and palm vein images the node distances may differ, and different node distances may differ considerably from one another, so correspondingly different constraint distances would have to be determined for them. However, when a large number of palmprint images and palm vein images are involved, determining the constraint distances one by one is very complicated, which is unfavorable to large-scale application of the present invention.
Optionally, the distance calculation module may include:
a distance calculation unit, configured to calculate the coordinate distance between any two network nodes according to the pixel coordinates corresponding to the two network nodes; and
a distance normalization unit, configured to normalize the coordinate distance to obtain the node distance.
Normalization is the ratio of the coordinate distance between the network nodes to the size of the fused image, which yields the corresponding normalized distance.
Normalizing the coordinate distances unifies them and confines the constraint distances to the interval (0, 1]. Thus, when a large number of palmprint images and palm vein images are involved, the same set of constraint distances can be used without having to define the constraint distances repeatedly, which simplifies the calculation process and facilitates wide application of the present invention.
a network construction module 405, configured to construct, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks.
The constraint conditions may include that the node distance of any network node is less than a constraint distance; different constraint conditions have different constraint distances.
The network construction module may include:
a network construction unit 4051, configured to construct, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than any one of the constraint distances, so as to obtain a plurality of complex networks.
a feature construction module 406, configured to form a feature to be identified from the network features of the plurality of complex networks.
The network nodes are formed from the pixel points of the palmprint and the palm veins. The plurality of complex networks are formed from the connections between any two network nodes satisfying a constraint distance; they record the distance relationship between any two network nodes rather than the absolute spatial positions of the nodes. Since the feature to be identified is formed from the network features of the plurality of complex networks, it represents the relative positional relationship between any two pixel points of the palmprint and the palm veins, and it is quite robust to errors such as relative rotation and displacement. That is, whenever the palmprint image and the palm vein image are captured to obtain the feature to be identified, the result is unaffected by rotation, displacement, and the like, and a relatively stable feature to be identified can be obtained.
In this embodiment of the present invention, corresponding different complex networks are determined by the node distance between two network nodes, so that a plurality of different complex networks can be determined. Since the complex networks are formed on the basis of the pixel points of the palmprint and the palm veins, multiple network features of the palmprint and the palm veins can be determined, making the network features more accurate and achieving higher identification performance and accuracy.
As yet another embodiment, the image fusion module may include:
an image conversion unit, configured to binarize the palmprint image and the palm vein image, so that the pixels corresponding to the palmprint and the palm veins are converted into a first value and the pixel values corresponding to non-palmprint and non-palm-vein regions are converted into a second value; and
a first fusion unit, configured to fuse the binarized palmprint image and palm vein image to obtain the fused image.
Optionally, the node determination module may include:
a node determination unit, configured to take, in the fused image, the pixel points whose pixel value is the first value as the network nodes.
Binarizing the palmprint image and the palm vein image means extracting the palmprint in the palmprint image and the palm veins in the palm vein image, marking their corresponding pixel points with a defined first value, and marking the other non-palmprint and non-palm-vein pixel points with a defined second value, so that the palmprint in the palmprint image and the palm veins in the palm vein image can be determined unambiguously.
Optionally, to make the palmprint and the palm veins clearer, the first value may be 1 and the second value may be 0.
Optionally, a binarization algorithm may be used to binarize the palmprint image and the palm vein image.
The binarization algorithm may refer to LBP (Local Binary Patterns), mean-window filtering, or the like.
In this embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are first subjected to a binarization transformation, so that features such as the palmprint in the palmprint image and the palm veins in the palm vein image are determined while other useless features are discarded. The pixel points belonging to the palmprint and the palm veins can thus be determined accurately, and the accurately determined palmprint pixel points and palm vein pixel points are used as network nodes to form accurate complex networks, so that a more accurate feature to be identified can be determined, further increasing the accuracy and effectiveness of the identification features.
As yet another embodiment, the feature construction module may include:
a first determination unit, configured to determine the node degree of each network node in any one of the complex networks;
a feature calculation unit, configured to calculate the network features of that complex network according to the node degrees;
a feature combination unit, configured to combine the network features of each of the plurality of complex networks; and
a second determination unit, configured to take the combined network features as the feature to be identified.
The node degree of a network node may refer to the connections between that network node and other network nodes. For example, if a network node is connected to 3 other network nodes, the degree of that node is 3.
Determining the node degree of each network node in a complex network is in fact determining the connection relationships between each network node and the other network nodes in that complex network, so that the corresponding network features can be determined according to the connection relationships between the network nodes.
The calculating the network features of that complex network may include:
calculating the average degree, the degree variance, and/or the maximum degree of that complex network.
The average degree refers to the mean of the degrees of all network nodes; the degree variance refers to the variance of the degrees calculated from the average degree and the individual node degrees; and the maximum degree refers to the maximum of the degrees of all network nodes.
In this embodiment of the present invention, multiple network features of the fused image are calculated on the basis of the node degree of each network node. Calculating with node degrees allows the network features of each complex network to be determined more accurately, so more accurate network features can be obtained, thereby improving the effectiveness and accuracy of identification.
As yet another embodiment, the image fusion module may include:
an image denoising unit, configured to denoise the palmprint image and the palm vein image of the user to be identified; and
a second fusion unit, configured to fuse the denoised palmprint image and palm vein image to obtain the fused image.
Here, denoising the palmprint image and the palm vein image of the user to be identified may refer to filtering out the high-frequency components in those images.
Optionally, a denoising algorithm may be used to filter out the high-frequency components in the palmprint image and the palm vein image of the user to be identified. The denoising algorithm may refer to the wavelet transform, the Kalman filtering algorithm, the median filtering algorithm, or the like.
In this embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are subjected to denoising, which makes the palmprint in the palmprint image and the palm veins in the palm vein image clearer. When the palmprint and the palm veins are determined, the reduction of various kinds of noise makes the pixel points corresponding to the palmprint and the palm veins more accurate, so more accurate complex networks can be determined and a more accurate feature to be identified can be obtained, further improving the effectiveness and accuracy of identification.
In some possible designs, the above biometric identification apparatus may be implemented as a biometric identification electronic device. As shown in FIG. 5, the electronic device may include: a processing component 501, and a storage component 502 and an image capture component 503 respectively connected to the processing component; the image capture component 503 is configured to capture a palmprint image and a palm vein image of a user to be identified, and the storage component 502 stores one or more computer instructions;
the processing component 501 invokes and executes the one or more computer program instructions to implement the following operations: fusing the palmprint image and the palm vein image to obtain a fused image; taking the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes; constructing, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks; and forming a feature to be identified from the network features of the plurality of complex networks.
In this embodiment of the present invention, a palmprint image and a palm vein image are captured to obtain a fused image of the two, so that the two biological features of the palmprint and the palm veins can be fused. With the pixel points corresponding to the palmprint and the palm veins as network nodes, different complex networks are determined, that is, multiple kinds of network associations between the palmprint and the palm veins are obtained, so that different network features can be determined. The feature to be identified formed from the network features of the plurality of complex networks is more comprehensive in content, which can improve the effectiveness and accuracy of identification.
In some embodiments, the constraint conditions include that the node distance of any network node is less than a constraint distance, and different constraint conditions have different constraint distances;
the processing component may be further configured to:
calculate the node distance between any two network nodes;
and the constructing, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks, includes:
constructing, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than any one of the constraint distances, so as to obtain a plurality of complex networks.
As a possible implementation, the processing component calculates the node distance between any two network nodes specifically by:
calculating the coordinate distance between any two network nodes according to the pixel coordinates corresponding to the two network nodes; and
normalizing the coordinate distance to obtain the node distance.
In this embodiment of the present invention, corresponding different complex networks are determined by the node distance between two network nodes, so that a plurality of different complex networks can be determined. Since the complex networks are formed on the basis of the pixel points of the palmprint and the palm veins, multiple network features of the palmprint and the palm veins can be determined, making the network features more accurate and achieving higher identification performance and accuracy.
As yet another embodiment, the processing component fuses the palmprint image and the palm vein image to obtain a fused image specifically by:
binarizing the palmprint image and the palm vein image, so that the pixels corresponding to the palmprint and the palm veins are converted into a first value and the pixel values corresponding to non-palmprint and non-palm-vein regions are converted into a second value; and
fusing the binarized palmprint image and palm vein image to obtain the fused image.
As a possible implementation, the taking the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes includes:
taking, in the fused image, the pixel points whose pixel value is the first value as the network nodes.
In this embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are first subjected to a binarization transformation, so that features such as the palmprint in the palmprint image and the palm veins in the palm vein image are determined while other useless features are discarded. The pixel points belonging to the palmprint and the palm veins can thus be determined accurately, and the accurately determined palmprint pixel points and palm vein pixel points are used as network nodes to form accurate complex networks, so that a more accurate feature to be identified can be determined, further increasing the accuracy and effectiveness of the identification features.
As yet another embodiment, the processing component forms the feature to be identified from the network features of the plurality of complex networks by:
determining the node degree of each network node in any one of the complex networks;
calculating the network features of that complex network according to the node degrees;
combining the network features of each of the plurality of complex networks; and
taking the combined network features as the feature to be identified.
In this embodiment of the present invention, multiple network features of the fused image are calculated on the basis of the node degree of each network node. Calculating with node degrees allows the network features of each complex network to be determined more accurately, so more accurate network features can be obtained, thereby improving the effectiveness and accuracy of identification.
As yet another embodiment, the processing component fuses the palmprint image and the palm vein image to obtain a fused image specifically by:
denoising the palmprint image and the palm vein image of the user to be identified; and
fusing the denoised palmprint image and palm vein image to obtain the fused image.
In this embodiment of the present invention, before the palmprint image and the palm vein image are fused, they are subjected to denoising, which makes the palmprint in the palmprint image and the palm veins in the palm vein image clearer. When the palmprint and the palm veins are determined, the reduction of various kinds of noise makes the pixel points corresponding to the palmprint and the palm veins more accurate, so more accurate complex networks can be determined and a more accurate feature to be identified can be obtained, further improving the effectiveness and accuracy of identification.
In one practical application, the electronic device shown in FIG. 5 may be a head-mounted display device 600. The head-mounted display device 600 may be an external head-mounted display device or an integrated head-mounted display device, where an external head-mounted display device needs to be used in cooperation with an external processing system (for example, a computer processing system).
Therefore, when the electronic device is a head-mounted display device, as shown in FIG. 6:
In some embodiments, the electronic device may further include:
a display component 601. The display component 601 may include a display panel arranged on the side surface of the head-mounted display device facing the user's face; it may be one single panel or left and right panels corresponding respectively to the user's left eye and right eye. The display panel may be an electroluminescent (EL) element, a liquid crystal display or a microdisplay of similar structure, or a retinal direct-display or similar laser-scanning display.
In some embodiments, the electronic device may further include:
a virtual image optical component 602, which images the content displayed by the display component 601 in a magnified manner and allows the user to observe the displayed image as a magnified virtual image. The display image output to the display component 601 may be an image of a virtual scene provided by a content reproduction device (a Blu-ray disc or DVD player) or a streaming server, or an image of a real scene captured with an external camera. In some embodiments, the virtual image optical component 602 may include a lens unit, for example a spherical lens, an aspheric lens, or a Fresnel lens.
In some embodiments, the electronic device may further include:
an input operation component 603, which may include at least one operating part used to perform input operations, such as a key, a button, a switch, or another part with a similar function; user instructions are received through the operating part and output to the processing component 501.
In some embodiments, the electronic device may further include:
a status information acquisition component 604, configured to acquire status information of the user of the head-mounted display device. The status information acquisition component 604 may include various types of sensors for detecting status information by itself, and may acquire status information from external devices (for example, smartphones, wristwatches, and other multifunction terminals worn by the user) through the communication component 605. The status information acquisition component 604 may acquire position information and/or posture information of the user's head, and may include one or more of a gyroscope sensor, an acceleration sensor, a global positioning system (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio-frequency field strength sensor. In addition, the status information acquisition component 604 acquires status information of the user of the head-mounted display device 600, for example the user's operating state (whether the user is wearing the head-mounted display device 600), the user's action state (a movement state such as standing still, walking, or running; the posture of the hands or fingertips; the open or closed state of the eyes; the gaze direction; the pupil size), the mental state (whether the user is immersed in observing the displayed image, and the like), and even the physiological state.
In some embodiments, the electronic device may further include:
a communication component 605, configured to perform communication processing with external devices, modulation and demodulation processing, and encoding and decoding of communication signals. In addition, the processing component 501 may send transmission data to external devices through the communication component 605. The communication may be wired or wireless, for example Mobile High-Definition Link (MHL) or Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (Wi-Fi), Bluetooth communication or Bluetooth Low Energy communication, mesh networking under the IEEE 802.11s standard, and the like. In addition, the communication component 605 may be a cellular radio transceiver operating according to Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and similar standards.
In some embodiments, the electronic device may further include:
an image processing component 606, configured to perform signal processing, such as image quality correction related to the image signal of the virtual scene output from the processing component 501 and conversion of its resolution to a resolution matching the screen of the display component 601; and
a display driving component 607, which sequentially selects each row of pixels of the display component 601 and scans the rows of pixels of the display component 601 one by one, thereby providing pixel signals based on the signal-processed image signal.
In some embodiments, the image capture component 503 in the electronic device may include:
an external camera 608, which may be arranged on the front surface of the main body of the electronic device; there may be one or more external cameras 608. The external camera 608 can acquire image information. In addition, a position-sensitive detector (PSD) that detects reflected signals from objects, or another type of distance sensor, may be used together with the external camera 608. The external camera 608 and the distance sensor can be used to detect the body position, posture, and shape of the user wearing the head-mounted display device. In addition, under certain conditions the user can directly view or preview the real scene through the external camera 608.
In some embodiments, the electronic device may further include:
a sound processing component 609, which can perform sound quality correction or sound amplification of the sound signal output from the processing component 501, signal processing of an input sound signal, and the like; and
a sound input/output component 610, which outputs sound to the outside after sound processing and inputs sound from a microphone.
It should be noted that the structures or components shown in dashed boxes in FIG. 6 may be independent of the head-mounted display device and may, for example, be arranged in an external processing system (for example, a computer system) used in cooperation with the head-mounted display device; alternatively, the structures or components shown in dashed boxes may be arranged inside the head-mounted display device or on its surface.
The processing component 501 may include one or more processors to execute computer instructions to complete all or some of the steps of the above method. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the above method.
The storage component 502 is configured to store various types of data to support operation on the electronic device. The storage component may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a computer, the biometric identification method of any of the above embodiments can be implemented.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware. Based on this understanding, the above technical solution, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A biometric identification method, characterized by comprising:
    capturing a palmprint image and a palm vein image of a user to be identified;
    fusing the palmprint image and the palm vein image to obtain a fused image;
    taking the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes;
    constructing, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks; and
    forming a feature to be identified from network features of the plurality of complex networks.
  2. The method according to claim 1, characterized in that the constraint conditions include that the node distance of any network node is less than a constraint distance, and different constraint conditions have different constraint distances;
    the method further comprises:
    calculating the node distance between any two network nodes;
    and the constructing, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks, comprises:
    constructing, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than any one of the constraint distances, so as to obtain a plurality of complex networks.
  3. The method according to claim 2, characterized in that the calculating the node distance between any two network nodes comprises:
    calculating the coordinate distance between any two network nodes according to the pixel coordinates corresponding to the two network nodes; and
    normalizing the coordinate distance to obtain the node distance.
  4. The method according to any one of claims 1 to 3, characterized in that the fusing the palmprint image and the palm vein image to obtain a fused image comprises:
    binarizing the palmprint image and the palm vein image, so that the pixels corresponding to the palmprint and the palm veins are converted into a first value and the pixel values corresponding to non-palmprint and non-palm-vein regions are converted into a second value; and
    fusing the binarized palmprint image and palm vein image to obtain the fused image.
  5. The method according to claim 4, characterized in that the taking the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes comprises:
    taking, in the fused image, the pixel points whose pixel value is the first value as the network nodes.
  6. The method according to any one of claims 1 to 5, characterized in that the forming a feature to be identified from network features of the plurality of complex networks comprises:
    determining the node degree of each network node in any one of the complex networks;
    calculating the network features of that complex network according to the node degrees;
    combining the network features of each of the plurality of complex networks; and
    taking the combined network features as the feature to be identified.
  7. The method according to any one of claims 1 to 6, characterized in that the fusing the palmprint image and the palm vein image to obtain a fused image comprises:
    denoising the palmprint image and the palm vein image of the user to be identified; and
    fusing the denoised palmprint image and palm vein image to obtain the fused image.
  8. A biometric identification apparatus, characterized by comprising:
    an image capture module, configured to capture a palmprint image and a palm vein image of a user to be identified;
    an image fusion module, configured to fuse the palmprint image and the palm vein image to obtain a fused image;
    a node determination module, configured to take the pixel points corresponding to the palmprint and the palm veins in the fused image as network nodes;
    a network construction module, configured to construct, based on different constraint conditions, a complex network from the network nodes satisfying any one of the constraint conditions, so as to obtain a plurality of complex networks; and
    a feature construction module, configured to form a feature to be identified from network features of the plurality of complex networks.
  9. The apparatus according to claim 8, characterized in that the constraint conditions include that the node distance of any network node is less than a constraint distance, and different constraint conditions have different constraint distances;
    the apparatus further comprises:
    a distance calculation module, configured to calculate the node distance between any two network nodes;
    and the network construction module comprises:
    a network construction unit, configured to construct, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than any one of the constraint distances, so as to obtain a plurality of complex networks.
  10. The apparatus according to claim 8, characterized in that the image fusion module comprises:
    an image conversion unit, configured to binarize the palmprint image and the palm vein image, so that the pixels corresponding to the palmprint and the palm veins are converted into a first value and the pixel values corresponding to non-palmprint and non-palm-vein regions are converted into a second value; and
    a first fusion unit, configured to fuse the binarized palmprint image and palm vein image to obtain the fused image.
PCT/CN2017/113585 2017-07-31 2017-11-29 Biometric identification method and apparatus WO2019024350A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710637494.8A CN107403161B (zh) 2017-07-31 2017-07-31 Biometric identification method and apparatus
CN201710637494.8 2017-07-31

Publications (1)

Publication Number Publication Date
WO2019024350A1 true WO2019024350A1 (zh) 2019-02-07

Family

ID=60401655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/113585 WO2019024350A1 (zh) 2017-07-31 2017-11-29 Biometric identification method and apparatus

Country Status (2)

Country Link
CN (1) CN107403161B (zh)
WO (1) WO2019024350A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403161B (zh) * 2017-07-31 2019-07-05 歌尔科技有限公司 Biometric identification method and apparatus
CN109344849B (zh) * 2018-07-27 2022-03-11 广东工业大学 Complex network image recognition method based on structural balance theory
TWI678661B (zh) * 2018-08-31 2019-12-01 中華電信股份有限公司 Palmprint recognition device and method with data expandability
CN109614988B (zh) * 2018-11-12 2020-05-12 国家电网有限公司 Biometric identification method and apparatus
CN117542090B (zh) * 2023-11-25 2024-06-18 一脉通(深圳)智能科技有限公司 Palmprint and palm vein recognition method based on a fusion network and SIF features

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045788A1 (en) * 2008-08-19 2010-02-25 The Hong Kong Polytechnic University Method and Apparatus for Personal Identification Using Palmprint and Palm Vein
CN103116741A (zh) * 2013-01-28 2013-05-22 天津理工大学 Acquisition and recognition system for fused palm vein and palmprint images
CN103200096A (zh) * 2013-03-13 2013-07-10 南京理工大学 Heuristic routing method avoiding key nodes in a complex network
CN106022218A (zh) * 2016-05-06 2016-10-12 浙江工业大学 Palmprint and palm vein image layer fusion method based on wavelet transform and Gabor filter
CN106548134A (zh) * 2016-10-17 2017-03-29 沈阳化工大学 Palmprint and palm vein fusion recognition method combining GA-optimized SVM and normalization
CN107403161A (zh) * 2017-07-31 2017-11-28 歌尔科技有限公司 Biometric identification method and apparatus

Also Published As

Publication number Publication date
CN107403161B (zh) 2019-07-05
CN107403161A (zh) 2017-11-28

Similar Documents

Publication Publication Date Title
US11281288B2 (en) Eye and head tracking
WO2019024350A1 (zh) Biometric identification method and apparatus
US20210183516A1 (en) Systems and methods to identify persons and/or identify and quantify pain, fatigue, mood, and intent with protection of privacy
US11715231B2 (en) Head pose estimation from local eye region
US9750420B1 (en) Facial feature selection for heart rate detection
JP5567853B2 (ja) Image recognition apparatus and method
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
CN111212594B (zh) Electronic device and method for determining degree of conjunctival congestion using electronic device
JP6564271B2 (ja) Imaging apparatus, image processing method, program, and storage medium
US9697415B2 (en) Recording medium, image processing method, and information terminal
US9858680B2 (en) Image processing device and imaging apparatus
TWI694809B (zh) Method for detecting eye movement, program therefor, storage medium of the program, and device for detecting eye movement
US20160045109A1 (en) Method, apparatus and computer program product for positioning pupil
US20210319585A1 (en) Method and system for gaze estimation
JP6822482B2 (ja) Gaze estimation device, gaze estimation method, and program recording medium
JP6191943B2 (ja) Gaze direction estimation device, gaze direction estimation method, and gaze direction estimation program
TW201704934A (zh) Calibration module for eye tracking, method thereof, and computer readable recording medium
JP2017169803A (ja) Information processing apparatus, information processing method, and program
JPWO2019123554A1 (ja) Image processing apparatus, image processing method, and program
JP2012155405A (ja) Biometric image capturing system, biometric image acquisition method, and program
JP7228509B2 (ja) Identification device and electronic apparatus
US20210199957A1 (en) Eye-tracking system and method for pupil detection, associated systems and computer programs
JP7103443B2 (ja) Information processing apparatus, information processing method, and program
US20220225936A1 (en) Contactless vitals using smart glasses
Bajpai et al. Moving towards 3D-biometric

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17919952

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17919952

Country of ref document: EP

Kind code of ref document: A1