WO2022000337A1 - 生物特征融合方法及装置、电子设备及存储介质 - Google Patents

生物特征融合方法及装置、电子设备及存储介质 Download PDF

Info

Publication number
WO2022000337A1
Authority
WO
WIPO (PCT)
Prior art keywords
biometric
feature
level
biometrics
biological
Prior art date
Application number
PCT/CN2020/099567
Other languages
English (en)
French (fr)
Inventor
于磊
朱亚军
Original Assignee
北京小米移动软件有限公司
Priority date
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to PCT/CN2020/099567 priority Critical patent/WO2022000337A1/zh
Priority to CN202080001407.XA priority patent/CN111919224A/zh
Publication of WO2022000337A1 publication Critical patent/WO2022000337A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Definitions

  • the embodiments of the present disclosure relate to the field of wireless communication, but are not limited to the field of wireless communication, and in particular, relate to a method and apparatus for fusion of biometrics, an electronic device, and a storage medium.
  • biometrics represented by fingerprints, faces, irises, veins, voiceprints, behaviors, etc. are unique (that is, the characteristics of any two people are different), robust (that is, the characteristics do not change with time), collectable (that is, the characteristics can be quantitatively collected), highly reliable, and highly accurate; they therefore play an increasingly important role in identity authentication and receive more and more attention.
  • biometric identification may be limited by the scenario. For example, some people's fingerprints are not suitable for fingerprint recognition, and face recognition places certain requirements on the surrounding environment: its performance is affected under strong or dim light. Biometric recognition therefore still suffers from low recognition accuracy and poor adaptability.
  • Embodiments of the present disclosure provide a biometric fusion method and apparatus, an electronic device, and a storage medium.
  • a first aspect of the embodiments of the present disclosure provides a biometric fusion method, the method comprising:
  • acquiring first biometrics of a target from at least multiple sources, wherein the first biometrics belong to at least two different levels; and fusing the first biometrics to form a second biometric.
  • a second aspect of the embodiments of the present disclosure provides a biometric fusion apparatus, wherein the apparatus includes: an acquisition module configured to acquire first biometrics of a target from at least multiple sources, wherein the first biometrics belong to at least two different levels; and a fusion module configured to fuse the first biometrics to form a second biometric.
  • a third aspect of the embodiments of the present disclosure provides an electronic device, wherein the electronic device at least includes: a processor and a memory for storing executable instructions that can be executed on the processor, wherein:
  • when the executable instructions are run, the processor performs the biometric fusion method provided in the first and/or second aspect.
  • a fourth aspect of the embodiments of the present disclosure provides a non-transitory computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the biometric fusion method provided by the first and/or second aspect is performed.
  • the first biometrics of the same target from different sources belong to at least two different levels, so the second biometric simultaneously contains information from original biometrics of different levels.
  • because the second biometric fuses cross-level (or cross-modal) original biometrics (i.e., the first biometrics), it retains the authentication and identification advantages of biometrics at different levels, so that the original biometrics at different levels reinforce one another, improving the authentication and identification performance of the second biometric in the authentication and identification process.
  • because the second biometric is formed by fusing first biometrics at different levels, the levels of the original first biometrics corresponding to the second biometric differ, so the second biometric can be applied to different application scenarios, meeting the authentication and identification requirements of biometrics in various scenarios and thus having a wide range of applications.
  • FIG. 1 is a schematic structural diagram of a wireless communication system according to an exemplary embodiment
  • FIG. 2 is a schematic diagram illustrating various types of biometric features according to an exemplary embodiment
  • FIG. 3 is a schematic flowchart of a biometric fusion method according to an exemplary embodiment
  • FIG. 4 is a schematic flowchart of a biometric fusion method according to an exemplary embodiment
  • FIG. 5 is a schematic structural diagram of a biometric fusion device according to an exemplary embodiment
  • FIG. 6 is a schematic structural diagram of a UE shown according to an exemplary embodiment
  • FIG. 7 is a schematic structural diagram of a base station according to an exemplary embodiment.
  • although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various pieces of information, such information should not be limited to these terms. These terms are only used to distinguish pieces of information of the same type from each other.
  • the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • the words "if” and “if” as used herein can be interpreted as "at the time of” or "when” or "in response to determining.”
  • an embodiment of the present disclosure takes an application scenario of an intelligent electric meter control system as an example for illustrative description.
  • FIG. 1 shows a schematic structural diagram of a wireless communication system provided by an embodiment of the present disclosure.
  • the wireless communication system is a communication system based on cellular mobile communication technology, and the wireless communication system may include: several terminals 11 and several base stations 12 .
  • the terminal 11 may be a device that provides voice and/or data connectivity to the user.
  • the terminal 11 may communicate with one or more core networks via a radio access network (RAN); the terminal 11 may be an IoT terminal, such as a sensor device or a mobile phone (or "cellular" phone), or a computer with an IoT terminal, which may, for example, be a fixed, portable, pocket-sized, hand-held, built-in, or vehicle-mounted device.
  • for example: a station (STA), a subscriber unit, a subscriber station, a mobile station (mobile station or mobile), a remote station, an access point, a remote terminal, an access terminal, a user terminal, a user agent, a user device, or user equipment (UE).
  • the terminal 11 may also be a device of an unmanned aerial vehicle.
  • the terminal 11 may also be a vehicle-mounted device, for example, a trip computer with a wireless communication function, or a wireless terminal connected to an external trip computer.
  • the terminal 11 may also be a roadside device, for example, a street light, a signal light, or other roadside devices with a wireless communication function.
  • the base station 12 may be a network-side device in a wireless communication system.
  • the wireless communication system may be a 4th generation mobile communication (4G) system, also known as a Long Term Evolution (LTE) system; or, the wireless communication system may be a 5G system, also known as a new radio (NR) or 5G NR system. Alternatively, the wireless communication system may be a next-generation system of the 5G system.
  • the access network in the 5G system can be called NG-RAN (New Generation-Radio Access Network).
  • the base station 12 may be an evolved base station (eNB) used in the 4G system.
  • the base station 12 may also be a base station (gNB) that adopts a centralized distributed architecture in the 5G system.
  • when the base station 12 adopts a centralized distributed architecture, it usually includes a central unit (CU) and at least two distributed units (DU).
  • the central unit is provided with the protocol stacks of the Packet Data Convergence Protocol (PDCP) layer, the Radio Link Control (RLC) layer, and the Media Access Control (MAC) layer; a physical (PHY) layer protocol stack is set in the distributed unit. The specific implementation of the base station 12 is not limited in this embodiment of the present disclosure.
  • a wireless connection can be established between the base station 12 and the terminal 11 through a wireless air interface.
  • the wireless air interface is a wireless air interface based on the fourth generation mobile communication network technology (4G) standard; or, the wireless air interface is a wireless air interface based on the fifth generation mobile communication network technology (5G) standard, for example a new air interface; alternatively, the wireless air interface may also be a wireless air interface based on a next-generation mobile communication network technology standard beyond 5G.
  • an end-to-end (E2E) connection may also be established between the terminals 11, for example in vehicle-to-everything (V2X) communication scenarios such as V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), and V2P (vehicle-to-pedestrian) communication.
  • the above wireless communication system may further include a network management device 13 .
  • the network management device 13 may be a core network device in the wireless communication system; for example, the network management device 13 may be a mobility management entity (MME) in an evolved packet core (EPC) network.
  • the network management device may also be another core network device, such as a serving gateway (SGW), a public data network gateway (PGW), a policy and charging rules function (PCRF), or a home subscriber server (HSS).
  • the implementation form of the network management device 13 is not limited in this embodiment of the present disclosure.
  • Biometric identification technology refers to the process of using automatic techniques to extract individual physiological or behavioral characteristics, and comparing these characteristics with template data already in a database to complete identity authentication and identification.
  • all physiological characteristics and personal behavioral characteristics with universality, uniqueness, robustness, and collectability are collectively referred to as biological characteristics.
  • biometric identification uses the individual characteristics of human beings for identity authentication.
  • a general biometric identification system should include subsystems such as data collection, data storage, comparison and decision-making.
  • Biometric identification technology involves a wide range of content, including fingerprints, faces, irises, veins, voiceprints, gestures and other identification methods.
  • the identification process involves multiple technologies, including data collection, data processing, graphics and image recognition, comparison algorithms, and software design.
  • various software and hardware products and industry application solutions based on biometric identification technology have been widely used in finance, human resources and social security, public security, education, and other fields.
  • during the two processes of biometric registration and identity authentication, the biometric identification system interacts with the outside world, and at this time the system is very vulnerable to external attacks. During authentication, the security of the biometric identification system is vulnerable to the following threats:
  • Forged features: the attacker provides forged biometric information during the identity authentication process;
  • Replay attack: the attacker attacks the information transfer between the biometric acquisition subsystem and the matching subsystem, replaying the biometric information of a legally registered user to deceive the matching subsystem and thereby pass identity authentication;
  • Transmission attack: the attacker attacks when the biometric matching subsystem transmits data to the biometric template database; on the one hand, the attacker can block the transmission of a legitimate registered user's biometric information, and on the other hand, he can tamper with and forge biometric information and send it to the matching subsystem, thereby passing identity authentication;
  • an embodiment of the present disclosure provides a biometric fusion method, wherein the method includes:
  • S110: acquire first biological features of a target from at least multiple sources, wherein the first biological features belong to at least two different levels;
  • S120: fuse the first biological features to form a second biological feature.
  • This biometric fusion method can be applied in the biometric generation stage and in the biometric verification stage; for example, the method can be used to generate the sample features to be used in the verification stage, or the features to be verified.
  • the biometric fusion method can be applied to a terminal or a server.
  • the terminal includes, but is not limited to, a mobile terminal directly carried by a user, such as a mobile phone, a tablet computer, or a wearable device, and may also be a vehicle-mounted terminal, or a public service device in a public place.
  • the server may be various application servers or communication servers.
  • the target here can be any organism, such as a human or an animal.
  • the first biological feature here may include various types of features directly presented on the body surface of the living body, such as fingerprint features, iris features, vein distribution features, and/or facial features, or characteristics of biological tissues inside the body such as muscles, bones, or skin.
  • the first biometric feature may also be a feature determined by a body part of the target but not a feature of the body part itself, for example the trajectory feature of a hand wave, or the feature of bowing or tilting the head.
  • the first biometric feature may further include movement-habit features: after the height and arm length of the target are determined, and given that the target has a fixed movement habit, the first biometric feature here may further include an arm-swing trajectory, a stride-length trajectory, and the like.
  • the regularity or loudness of the target's heart rhythm can also be regarded as a biological feature.
  • the first biometrics from multiple sources of the same target may include first biometrics from multiple body parts of the same target; for example, facial features and fingerprint features from the same human body are first biometrics of the same target from different sources.
  • first biometrics of the same target from multiple sources may also include first biometrics of different modalities from the same body part of the same target, for example shape and/or texture features of the same target's hand versus trajectory features of that hand's movements. As another example, images of the same body part of the same target captured at different wavelengths, such as a face image collected under visible light and an infrared face image collected under infrared light, can be considered first biometrics of the same target from different sources.
  • in short, first biological features from different sources provide different fusion data for feature fusion, each reflecting different biological characteristics.
  • the first biometrics from multiple sources of the same target may include first biometrics of the same target from two sources, or from more than two sources.
  • the first biometrics of different levels have different characteristics.
  • the first biometrics of some levels carry detailed information and high accuracy in the verification process, but may involve large amounts of data and computation.
  • the first biometrics of other levels have the advantages of a small amount of information and a small amount of computation, but their verification accuracy may not be particularly high.
  • Mode 1: directly splice the first biological features of different levels to obtain the second biological feature;
  • Mode 2: according to a fusion algorithm, use the first biological features at different levels as dependent variables, and compute the function value of the fusion algorithm to obtain the second biological feature.
  • the fusion algorithm includes, but is not limited to, a dot-product operation or a cross-product operation.
  • for example, the data corresponding to the first biometrics of two different levels are written into two arrays, the dot product of the two arrays is calculated, and the second biometric is thereby obtained.
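As an illustration only (the function names, arrays, and values below are hypothetical, not the patent's concrete algorithm), the two fusion modes can be sketched with NumPy:

```python
import numpy as np

def fuse_concat(feat_a, feat_b):
    """Mode 1: directly splice (concatenate) two first biometrics of
    different levels into one second biometric."""
    return np.concatenate([np.ravel(feat_a), np.ravel(feat_b)])

def fuse_dot(feat_a, feat_b):
    """Mode 2: use the two first biometrics as dependent variables of a
    fusion function; here an element-wise (dot-product style) operation
    yields the second biometric."""
    a, b = np.ravel(feat_a), np.ravel(feat_b)
    n = min(a.size, b.size)  # align lengths before fusing
    return a[:n] * b[:n]

# e.g. a feature-level face vector and a score-level fingerprint score
face_features = np.array([0.12, 0.80, 0.45, 0.33])
fingerprint_score = np.array([0.92])

second_biometric = fuse_concat(face_features, fingerprint_score)
```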
  • because the first biometrics of the same target from different sources belong to at least two different levels, the second biometric simultaneously contains information from original biometrics of different levels; the second biometric integrates features of different levels of the first biometrics from at least two sources, which, compared with the authentication and identification of a single biometric, can improve accuracy;
  • the second biometric fuses the original biometrics (i.e., the first biometrics) across levels (or cross-modalities), retaining the authentication and identification advantages of biometrics at different levels, so that the original biometrics at different levels reinforce one another, improving the authentication and identification performance of the second biometric in the authentication and identification process.
  • because the second biometric is formed by fusing first biometrics at different levels, the levels of the original first biometrics corresponding to the second biometric differ, so the second biometric can be applied to different application scenarios, meeting the authentication and identification requirements of biometrics in various scenarios and thus having a wide range of applications.
  • different levels may also be referred to as different modalities.
  • the at least two different levels include at least any two of the following:
  • Sample level: corresponding to the sample data of a single biometric;
  • Feature level: corresponding to a feature of a single biometric;
  • Score level: corresponding to the matching score of a single biometric;
  • Decision level: corresponding to the Boolean value of a single biometric.
  • a single biometric here can be understood as: one of the first biometrics; or, one or more of the first biometrics from one source.
  • the sample level may be: one or more sets of sample data for a single first biometric.
  • the current first biometric feature corresponds to a biological sample, for example, an iris image collected from an iris, a fingerprint image collected from a fingerprint, a face image collected from a face, or audio data collected from a voiceprint; the sample data may include acquisition data or raw data formed by biometric acquisition.
  • the feature level may be one or more sets of features of a single biological feature, where the one or more sets of features may be regarded as feature values; for example, the first biological feature of the feature level may include feature sets and/or feature vectors of a single first biological feature. Both feature sets and feature vectors are composed of feature values, and the feature values may be extracted from the first biometric feature at the sample level.
  • the first biometric feature of the score level includes: a matching score of a single first biometric feature, where the matching score is a matching score obtained by matching the corresponding first biometric feature with the third biometric feature in the preset database.
  • the score-level first biometric includes: one or more matching scores.
  • the matching score may be obtained by matching the feature value corresponding to the first biological feature with the feature values in the preset database to obtain a degree of match, and scoring according to the degree of match; for example, the degree of match is positively correlated with the matching score.
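A minimal sketch of how a matching score positively correlated with the degree of match could be computed; the cosine-similarity measure, the [0, 100] scale, and all vectors are illustrative assumptions, not the patent's specified method:

```python
import numpy as np

def matching_score(feature, template, scale=100.0):
    """Score positively correlated with the degree of match:
    cosine similarity mapped from [-1, 1] onto [0, scale]."""
    cos = float(np.dot(feature, template) /
                (np.linalg.norm(feature) * np.linalg.norm(template)))
    return scale * (cos + 1.0) / 2.0

probe = np.array([0.2, 0.9, 0.1])        # feature values of the probe
enrolled = np.array([0.25, 0.85, 0.15])  # feature values in the database
score = matching_score(probe, enrolled)
```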
  • the decision level corresponds to a Boolean value of a single first biometric; typically, one first biometric corresponds to one Boolean value.
  • the Boolean value may be determined based on the matching score of the first biometric; for example, the matching score is compared with a score threshold (a larger matching score indicates a higher degree of match), and if the matching score is greater than or equal to the score threshold, the Boolean value is "1"; otherwise it is "0".
  • when the Boolean value is "1", the matching can be considered successful, that is, the authentication of the single first biometric has passed; when the Boolean value is "0", the matching can be considered failed, that is, the authentication of the single first biometric has failed.
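The score-to-Boolean conversion just described can be sketched as follows; the threshold value is an arbitrary assumption:

```python
def decision_bool(match_score, threshold):
    """Decision level: 1 when the matching score reaches the score
    threshold (match succeeds), otherwise 0."""
    return 1 if match_score >= threshold else 0

passed = decision_bool(87.5, threshold=80.0)  # score above threshold
failed = decision_bool(60.0, threshold=80.0)  # score below threshold
```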
  • from the sample level through the feature level and the score level to the decision level, the levels get higher and higher, and biometrics at higher levels involve smaller amounts of data and computation.
  • the first biometric features at different levels may be selected according to the fusion strategy; and different requirements in different application scenarios may be considered during fusion in S120 .
  • the first biometrics at different levels may include a greater number of score-level first biometrics and/or decision-level first biometrics.
  • the data corresponding to the first biometric features at different levels can be referred to as: feature data; the feature data may include: sample-level sample data, feature-level feature data, score-level matching scores, and decision-level Boolean values.
  • the S120 may include:
  • the first biometric feature at the sample level and the first biometric feature at the feature level are fused to form the second biometric feature.
  • the second biometric feature obtained by fusing the first biometric feature at the sample level and the first biometric feature at the feature level may be the sample feature and/or the feature to be verified in the verification stage.
  • for example, the data volume of face sample data is particularly large, but the recognition accuracy of face features is high.
  • the fusion of feature-level face features and sample-level fingerprint features can therefore be considered.
  • the data volume of a set of sample data corresponding to sample-level fingerprint features is smaller than that of a set of sample data corresponding to sample-level face features; in this way, fusing feature-level face features with sample-level fingerprint features both makes full use of the high precision of face features and reduces the amount of computation.
  • that is, the sample data of a sample-level first biometric with a larger data volume can be converted into a feature-level first biometric, which is then fused with other sample-level first biometrics with smaller data volumes to obtain the second biological feature.
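One way to picture this conversion-then-fusion (a sketch under assumed shapes, with a stand-in feature extractor rather than the patent's algorithm): the large sample-level face data is first reduced to the feature level, then spliced with the smaller sample-level fingerprint data:

```python
import numpy as np

def extract_features(sample, n_features=8):
    """Stand-in feature extractor: reduce a large sample-level array to
    a small feature-level vector (here, block means)."""
    blocks = np.array_split(np.ravel(sample).astype(float), n_features)
    return np.array([b.mean() for b in blocks])

face_sample = np.random.rand(64, 64)     # large sample-level data
fingerprint_sample = np.random.rand(16)  # smaller sample-level data

# convert the large sample to the feature level, then fuse by splicing
face_features = extract_features(face_sample)
second_biometric = np.concatenate([face_features, fingerprint_sample])
```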
  • the S120 may include:
  • the first biometric at the sample level and the first biometric at the fractional level are fused to form the second biometric.
  • the fusion of the first biological feature at the sample level and the first biological feature at the score level may include: fusing the sample data of a first biological feature from one source with the matching score of a first biological feature from another source into a sample feature or a feature to be verified.
  • in this way, the fusion of the sample-level first biological feature and the score-level first biological feature is realized, and a second biological feature containing original biometrics of both the sample level and the score level is obtained; the second biometric is used in biometric authentication and identification.
  • the second biometric integrates sample-level and score-level feature data of first biometrics from at least two sources, which, compared with the authentication and identification of a single biometric, can improve accuracy;
  • the second biometric fuses the original biometrics (i.e., the first biometrics) across levels (or cross-modalities), retaining the authentication and identification advantages of biometrics at the sample level and the score level, so that the sample-level and score-level original biometrics complement one another, improving the authentication and identification performance of the second biometric in the authentication and identification process.
  • the S120 may include:
  • the first biometric feature at the sample level and the first biometric feature at the decision level are fused to form the second biometric feature.
  • obtaining the second biometric may include: fusing the sample data of the first biometric at the sample level with the Boolean value at the decision level to obtain the second biometric.
  • in this way, the fusion of the sample-level first biological feature and the decision-level first biological feature is realized, and a second biological feature containing original biometrics of both the sample level and the decision level is obtained; the second biometric is used in biometric authentication and identification.
  • the second biometric integrates sample-level and decision-level data of first biometrics from at least two sources, which, compared with the authentication and identification of a single biometric, can improve accuracy;
  • the second biometric fuses the original biometrics (i.e., the first biometrics) across levels (or cross-modalities), retaining the authentication and identification advantages of biometrics at the sample level and the decision level, so that the sample-level and decision-level original biometrics reinforce one another, improving the authentication and identification performance of the second biometric in the authentication and identification process.
  • for example, when the preset database includes S sample-level sample data entries, the sample data corresponding to the decision-level first biological feature are matched with the S sample-level sample data entries to obtain matching scores, and the matching scores are converted into Boolean values, yielding S Boolean values; these S Boolean values are fused with the sample-level first biometric to form the second biometric.
  • for example, the S Boolean values are concatenated with the sample-level first biometric to form the second biometric.
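The S-Boolean example above can be sketched as follows; the toy matching score, the threshold, and all data are hypothetical, chosen only to make the splicing concrete:

```python
import numpy as np

def score_of(probe, template):
    """Toy matching score: inverse of mean absolute difference."""
    return 1.0 / (1.0 + float(np.mean(np.abs(probe - template))))

def fuse_sample_and_decisions(sample, probe, database, threshold):
    """Match the probe against each of the S database samples, convert
    each matching score to a Boolean, and splice the S Booleans onto
    the sample-level first biometric."""
    booleans = np.array([1.0 if score_of(probe, t) >= threshold else 0.0
                         for t in database])
    return np.concatenate([np.ravel(sample), booleans])

fingerprint_sample = np.zeros(4)   # sample-level first biometric
iris_probe = np.array([0.5, 0.5])  # data matched against the database
database = [np.array([0.5, 0.5]),  # S = 3 sample-level entries
            np.array([0.9, 0.1]),
            np.array([0.0, 1.0])]
second_biometric = fuse_sample_and_decisions(fingerprint_sample, iris_probe,
                                             database, threshold=0.9)
```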
  • the S120 may include:
  • fusing the first biological feature of the feature level and the first biological feature of the score level to obtain the second biological feature.
  • the second biological feature obtained by fusing the feature-level first biological feature and the score-level first biological feature will include the feature value and the matching score, or a function value calculated from the feature value and the matching score.
  • in this way, the fusion of the feature-level first biometric and the score-level first biometric is achieved, and a second biometric containing original biometrics of both the feature level and the score level is obtained.
  • the second biometric integrates feature-level and score-level data of first biometrics from at least two sources, which, compared with the authentication and identification of a single biometric, can improve accuracy;
  • the second biometric fuses the original biometrics (i.e., the first biometrics) across levels (or cross-modalities), retaining the authentication and identification advantages of biometrics at the feature level and the score level, so that the feature-level and score-level original biometrics reinforce one another, improving the authentication and identification performance of the second biometric in the authentication and identification process.
  • for example, the first biological feature to be fused is matched with first biological features of the same type in the preset database to obtain a matching score;
  • a feature value is extracted from the sample data of another first biological feature to be fused as the feature-level first biological feature;
  • the feature value (i.e., feature data) of the feature-level first biometric is fused with the matching score of the score-level first biometric to obtain the second biometric.
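A sketch of the feature-level plus score-level splicing just described; the names and values are illustrative, and a function of the two quantities (e.g. score-weighted features) would be an equally valid fusion under Mode 2:

```python
import numpy as np

def fuse_feature_and_score(feature_vec, match_score):
    """Splice a feature-level first biometric (feature values) with a
    score-level first biometric (a matching score) into one second
    biometric."""
    return np.concatenate([np.ravel(feature_vec), [float(match_score)]])

iris_features = np.array([0.31, 0.77, 0.52])  # feature level
fingerprint_score = 88.0                      # score level
second_biometric = fuse_feature_and_score(iris_features, fingerprint_score)
```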
  • the S120 may include:
  • fusing the first biometric feature at the feature level with the first biometric feature at the decision level to form the second biometric feature.
  • in this way, fusion of the feature-level and decision-level first biometric features is achieved, yielding a second biometric feature that contains both feature-level and decision-level original biometric features.
  • on the one hand, because feature-level and decision-level data of first biometric features from at least two sources are integrated, accuracy is improved compared with authentication and identification based on a single biometric feature;
  • on the other hand, because the second biometric feature is fused from cross-level (also called cross-modal) original biometric features (i.e., the first biometric features), it retains the authentication and identification advantages of feature-level and decision-level biometric features, so that the feature-level and decision-level original biometric features reinforce each other, improving the authentication and identification performance of the second biometric feature in the authentication and identification process.
  • the S120 may further include: fusing the first biometric feature at the score level and the first biometric feature at the decision level to form the second biometric feature.
  • in this way, fusion of the score-level and decision-level first biometric features is achieved, yielding a second biometric feature that contains both score-level and decision-level original biometric features, which is then used in biometric authentication and identification.
  • on the one hand, because score-level and decision-level data of first biometric features from at least two sources are integrated, accuracy is improved compared with authentication and identification based on a single biometric feature;
  • on the other hand, because the second biometric feature is fused from cross-level (also called cross-modal) original biometric features (i.e., the first biometric features), it retains the authentication and identification advantages of score-level and decision-level biometric features, so that the score-level and decision-level original biometric features reinforce each other, improving the authentication and identification performance of the second biometric feature in the authentication and identification process.
  • the above embodiments are examples using first biometric features of two levels.
  • first biometric features of three or four levels may also be fused to form the second biometric feature.
  • the method further includes:
  • S100: according to the application scenario, determining the levels of the first biometric features to be fused.
  • application scenarios such as payment scenarios and tagging scenarios place different authentication requirements on biometrics.
  • a payment scenario may involve property transfer and requires high security; payment scenarios include, but are not limited to, online payment.
  • the second biometric feature is obtained by fusing the decision-level first biometric feature, which has a higher degree of ambiguity, with the score-level first biometric feature.
  • the first biometric features to be fused may include at least one sample-level first biometric feature, or the first biometric features to be fused may include at least one feature-level first biometric feature.
  • alternatively, the first biometric features to be fused may exclude the sample-level first biometric features and/or the feature-level first biometric features and include only the score-level and decision-level first biometric features.
  • a common database is stored in both the terminal and the server; the common database may hold a plurality of sample data or feature data used to process the first biometric feature and perform conversion between different levels.
  • the common database includes several sample data from different sources, or several features from different sources.
  • the terminal can convert the face image it acquires and the face image in the sample data of the common database into a score-level matching score and/or a decision-level Boolean value.
  • alternatively, the facial features in the face image are extracted directly.
  • before the terminal generates the second biometric feature, it performs feature extraction on the sample data of the first biometric feature to obtain feature values such as a feature-level feature set or feature vector, and matches those feature values against the feature values in the common database to obtain a score-level matching score. Further, a decision-level Boolean value can be derived from the matching score. The score-level matching score and the Boolean value are then fused into the second biometric feature.
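The terminal-side pipeline just described (sample data to feature values, feature values to a matching score, score to a decision-level Boolean, then fusion) can be sketched as follows; the extractor and matcher below are toy stand-ins assumed for illustration only, not the disclosure's algorithms:

```python
def extract_features(sample):
    # Toy feature extractor: normalize raw sample data by its peak
    # (a real terminal would run a trained model here).
    peak = max(sample) or 1.0
    return [v / peak for v in sample]

def match_score(features, template):
    # Score level: similarity in [0, 1], computed here as 1 minus
    # the mean absolute difference against a database template.
    diffs = [abs(a - b) for a, b in zip(features, template)]
    return 1.0 - sum(diffs) / len(diffs)

def to_boolean(score, threshold=0.8):
    # Decision level: Boolean value derived from the match score.
    return score >= threshold

def second_biometric(sample, template):
    # Fuse the score-level and decision-level data into one
    # second biometric.
    feats = extract_features(sample)
    score = match_score(feats, template)
    return {"score": score, "decision": to_boolean(score)}
```

Any of the level-conversion steps can be swapped out without changing the overall shape of the pipeline.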
  • in the sample generation stage, the second biometric feature fused from first biometric features of at least two levels is stored in the server as a verification sample, so that the server holds the second biometric feature for use in the subsequent verification stage.
  • in the verification stage, the second biometric feature fused from first biometric features of at least two levels is sent to the server for verification.
  • the server can then match it directly against the verification sample stored in the sample generation stage.
  • in this way, a single verification covers first biometric features from different sources across different levels, which improves verification security relative to a single biometric feature and satisfies the verification, identification, and differentiation requirements of biometric applications in different business scenarios.
  • for example, voice data can be collected, and the voiceprint features extracted from the voice data are matched against the voiceprint features of the M people extracted in a voice group chat scene to obtain M score-level matching scores; or, the M matching scores are converted into Boolean values to obtain M Boolean values.
  • the features extracted from the voice data may also include pronunciation data; the pronunciation data can likewise serve as a biometric feature and be matched against the pronunciation features of the M people to obtain M matching scores and/or M Boolean values.
  • the M matching scores or M Boolean values of the voiceprint features for each person are combined with the M Boolean values or M matching scores of the pronunciation features to form a second biometric feature that can distinguish different people, thereby distinguishing among them.
  • multimodal fusion of biometric features can generally be divided into four levels: sample-level fusion, feature-level fusion, score-level fusion, and decision-level fusion:
  • Sample-level fusion means that each single biometric identification process outputs a set of sample data, and multiple sets of biometric sample data are fused into one set of sample data;
  • Feature-level fusion means that each single biometric identification process outputs a set of features, and multiple sets of biometric features are fused into one feature set or feature vector;
  • Score-level fusion means that each single biometric identification process typically outputs a single matching score (or multiple scores); the multiple biometric scores are fused into a single score or decision, which is then compared against the system's acceptance threshold;
  • Decision-level fusion means that each single biometric identification process outputs a Boolean value.
  • the results are fused using combination rules such as AND and OR, or using additional parameters such as input sample quality scores.
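The four levels, and the AND/OR rules used for decision-level fusion, can be summarized in a small sketch; the container and function names are assumptions of this example, not an API defined by the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SingleBiometricOutput:
    """What one single biometric identification process can output,
    one field per fusion level."""
    sample: list     # sample level: raw acquisition data
    features: list   # feature level: feature set / feature vector
    score: float     # score level: matching score
    decision: bool   # decision level: Boolean value

def decision_fusion_and(outputs):
    # Decision-level fusion with the AND rule: all must accept.
    return all(o.decision for o in outputs)

def decision_fusion_or(outputs):
    # Decision-level fusion with the OR rule: any acceptance passes.
    return any(o.decision for o in outputs)
```

The cross-level fusions proposed below mix fields from different levels of such outputs rather than staying within one field.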
  • Score-level fusion and decision-level fusion weaken the correlation between biometric modalities, reducing accuracy; in addition, different modalities or levels are generated at different locations.
  • in view of this, the embodiments of the present disclosure propose fusion between different levels of different modalities, specifically the following cross-level fusions of biometric features:
  • One or more sets of samples output from one or more single biometric identification processes are fused with one or more sets of features output from one or more single biometric identification processes to form one set of samples.
  • the sample data of some biometrics is very large, for example high-precision facial recognition, which includes two-dimensional facial features and three-dimensional stereo data; fusing such samples directly with other samples would require very large computing power.
  • instead, feature information can be extracted from those samples, and the feature information can be fused with the other samples to form a new set of samples.
  • One or more sets of samples output from one or more single biometric identification processes are fused with one or more sets of matching scores output from one or more single biometric identification processes to form one set of samples.
  • the biometric comparison process can be time-consuming because it requires screening all users.
  • biometric features that can be compared quickly can produce scores at a faster rate; those scores are then fused with other biometric samples that compare slowly but with high accuracy, and a new set of samples is output, so that accurate biometric recognition can be performed more efficiently.
  • One or more sets of samples output from one or more single biometric identification processes are fused with one or more sets of decision Boolean values output from one or more single biometric identification processes to form one set of samples.
  • this method can improve the comparison efficiency of multiple users.
  • One or more sets of features output from one or more single biometric identification processes are fused with one or more sets of matching scores output from one or more single biometric identification processes to form a feature set or feature vector.
  • like the fusion of the sample level and the score level, this can improve the comparison efficiency across multiple users; when sample-level fusion involves too much data and computation, the feature level can be used in place of the sample level.
  • One or more sets of features output from one or more single biometric identification processes are combined with one or more sets of decision Boolean values output from one or more single biometric identification processes to form a feature set or feature vector.
  • this method can improve the comparison efficiency of multiple users.
  • One or more sets of matching scores output from one or more single biometric identification processes are combined with one or more sets of decision Boolean values output from one or more single biometric identification processes to form a matching score or decision.
  • This fusion method is needed in the following scenario: some biometrics can produce only matching scores, not Boolean values, yet need to be fused with the Boolean values of other biometrics.
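A minimal sketch of that scenario, with the gating rule assumed for illustration (the disclosure does not fix a specific combination rule): a modality that yields only a matching score is fused with another modality's decision-level Boolean value to form a single decision:

```python
def fuse_score_with_decision(score, decision, threshold=0.7):
    """Fuse a score-level first biometric with a decision-level
    Boolean value: accept only when the Boolean holds and the
    score clears the acceptance threshold (an assumed rule)."""
    return bool(decision) and score >= threshold
```

An OR-style rule, or a weighted combination of the score and the Boolean, would fit the same slot.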
  • an embodiment of the present disclosure provides a biometric fusion apparatus, wherein the apparatus includes:
  • the obtaining module 510, configured to obtain first biometric features of one target from multiple sources, wherein the first biometric features belong to at least two different levels;
  • the fusion module 520 is configured to fuse the first biological feature to form a second biological feature.
  • the acquisition module 510 and the fusion module 520 may be program modules; when the program modules are executed by a processor, the first biometric features are fused to form the second biometric feature.
  • the acquisition module 510 and the fusion module 520 may also be combined hardware-software modules; the combined hardware-software modules may include various programmable arrays, including but not limited to complex programmable arrays or field-programmable arrays.
  • the acquisition module 510 and the fusion module 520 may be pure hardware modules; the pure hardware modules include but are not limited to: application specific integrated circuits.
  • the at least two different levels include any at least two of the following:
  • Sample level, which corresponds to the sample data of a single biometric;
  • Feature level, which corresponds to a feature of a single biometric;
  • Score level, which corresponds to the matching score of a single biometric;
  • Decision level, which corresponds to the Boolean value of a single biometric.
  • the fusion module 520 is configured to fuse the first biometric feature at the sample level with the first biometric feature at the feature level to form the second biometric feature.
  • the fusion module 520 is configured to fuse the first biometric feature at the sample level and the first biometric feature at the score level to form the second biometric feature.
  • the fusion module 520 is configured to fuse the first biometric feature at the sample level and the first biometric feature at the decision level to form the second biometric feature.
  • the fusion module 520 is configured to fuse the first biometric feature at the feature level and the first biometric feature at the score level to obtain the second biometric feature.
  • the fusion module 520 is configured to fuse the first biometric feature at the feature level and the first biometric feature at the decision level to form the second biometric feature.
  • the fusion module 520 is configured to fuse the first biometric feature at the score level and the first biometric feature at the decision level to form the second biometric feature.
  • the apparatus further includes:
  • the determining module is configured to determine the levels of the first biometric features to be fused according to the application scenario.
  • Embodiments of the present disclosure provide an electronic device including a processor, a transceiver, a memory, and an executable program stored on the memory and runnable by the processor, wherein the processor, when running the executable program, performs the biometric fusion method of any of the foregoing technical solutions.
  • the electronic device may be a base station, a UE or a server.
  • the memory may include various types of storage media, which are non-transitory computer storage media that continue to retain the stored information after the electronic device is powered off.
  • the electronic equipment includes a base station or user equipment.
  • the processor may be connected to the memory through a bus or the like and reads the executable program stored in the memory to perform, for example, the biometric fusion method shown in FIG. 3 or FIG. 4.
  • An embodiment of the present disclosure provides a computer storage medium in which an executable program is stored; after the executable program is executed by a processor, the method of any technical solution of the first aspect or the second aspect can be implemented, for example, the biometric fusion method shown in FIG. 3 or FIG. 4.
  • FIG. 6 is a block diagram of a UE 800 according to an exemplary embodiment.
  • UE 800 may be a mobile phone, computer, digital broadcast user equipment, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
  • the UE 800 may include at least one of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operations of the UE 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include at least one processor 820 to execute instructions to perform all or part of the steps of the above-described methods.
  • processing component 802 may include at least one module that facilitates interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at UE 800 . Examples of such data include instructions for any application or method operating on the UE 800, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply component 806 provides power to various components of UE 800 .
  • Power components 806 may include a power management system, at least one power source, and other components associated with generating, managing, and distributing power to UE 800 .
  • Multimedia component 808 includes screens that provide an output interface between the UE 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes at least one touch sensor to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the UE 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when the UE 800 is in operating modes, such as call mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes at least one sensor for providing various aspects of status assessment for UE 800 .
  • the sensor component 814 can detect the open/closed state of the UE 800 and the relative positioning of components, such as the display and keypad of the UE 800; the sensor component 814 can also detect a change in position of the UE 800 or one of its components, the presence or absence of user contact with the UE 800, the orientation or acceleration/deceleration of the UE 800, and temperature changes of the UE 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communications between UE 800 and other devices.
  • the UE 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the UE 800 may be implemented by at least one application-specific integrated circuit (ASIC), digital signal processor (DSP), digital signal processing device (DSPD), programmable logic device (PLD), field-programmable gate array (FPGA), controller, microcontroller, microprocessor, or other electronic component, for performing the above method.
  • non-transitory computer-readable storage medium including instructions, such as a memory 804 including instructions, which are executable by the processor 820 of the UE 800 to perform the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • an embodiment of the present disclosure shows a structure of a base station.
  • the base station 900 may be provided as a network device.
  • base station 900 includes processing component 922, which further includes at least one processor, and a memory resource represented by memory 932 for storing instructions executable by processing component 922, such as application programs.
  • An application program stored in memory 932 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 922 is configured to execute instructions to execute any of the aforementioned methods applied to the base station, eg, the methods shown in FIGS. 2-6 .
  • the base station 900 may also include a power supply component 926 configured to perform power management of the base station 900, a wired or wireless network interface 950 configured to connect the base station 900 to a network, and an input/output (I/O) interface 958.
  • Base station 900 may operate based on an operating system stored in memory 932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.


Abstract

Embodiments of the present disclosure provide a biometric fusion method and apparatus, an electronic device, and a storage medium. The biometric fusion method includes: obtaining first biometric features of one target from multiple sources, wherein the first biometric features belong to at least two different levels; and fusing the first biometric features to form a second biometric feature.

Description

Biometric fusion method and apparatus, electronic device, and storage medium

Technical Field

Embodiments of the present disclosure relate to, but are not limited to, the field of wireless communication, and in particular to a biometric fusion method and apparatus, an electronic device, and a storage medium.

Background

As requirements for the accuracy and reliability of identity authentication keep rising, traditional identity authentication methods such as passwords and magnetic cards fall far short of users' needs because they are easily stolen and forged. Biometric features, represented by fingerprints, faces, irises, veins, voiceprints, and behavior, play an increasingly important role in identity authentication and receive growing attention because of their uniqueness (the features of any two people differ), robustness (the features do not change over time), collectability (the features can be quantitatively acquired), high credibility, and high accuracy. In the related art, biometric recognition may be limited by the scene. For example, for fingerprint recognition, some people's fingerprints are unsuitable for fingerprint recognition; for face recognition, performance places certain requirements on the surrounding environment; for instance, face recognition performance is affected under strong or dim light. Biometric recognition still suffers from low recognition accuracy and poor adaptability.
Summary

Embodiments of the present disclosure provide a biometric fusion method and apparatus, an electronic device, and a storage medium.

A first aspect of the embodiments of the present disclosure provides a biometric fusion method, the method including:

obtaining first biometric features of one target from multiple sources, wherein the first biometric features belong to at least two different levels;

fusing the first biometric features to form a second biometric feature.

A second aspect of the embodiments of the present disclosure provides a biometric fusion apparatus, wherein the apparatus includes: an obtaining module, configured to obtain first biometric features of one target from multiple sources, wherein the first biometric features belong to at least two different levels; and a fusion module, configured to fuse the first biometric features to form a second biometric feature.

A third aspect of the embodiments of the present disclosure provides an electronic device, wherein the electronic device at least includes a processor and a memory for storing executable instructions capable of running on the processor, wherein:

when the processor runs the executable instructions, the executable instructions perform the biometric fusion method provided by the first aspect and/or the second aspect.

A fourth aspect of the embodiments of the present disclosure provides a non-transitory computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions that, when executed by a processor, perform the biometric fusion method provided by the first aspect and/or the second aspect.

In the embodiments of the present disclosure, the first biometric features of one target from different sources belong to at least two different levels, so the second biometric feature simultaneously contains original biometric features of different levels. When the second biometric feature is used in biometric authentication and identification, on the one hand, because features of different levels from first biometric features of at least two sources are fused, accuracy is improved compared with authentication and identification based on a single biometric feature. On the other hand, because the second biometric feature is fused from cross-level (also called cross-modal) original biometric features (i.e., the first biometric features), it retains the authentication and identification advantages of biometric features of different levels, so that the original biometric features of different levels reinforce one another, improving the authentication and identification performance of the second biometric feature. Furthermore, because the second biometric feature is formed by fusing first biometric features of different levels, the levels of the original first biometric features corresponding to the second biometric feature differ, making the method applicable to different application scenarios; it can thus satisfy the biometric authentication and identification needs of a variety of application scenarios and has a wide range of application.
Brief Description of the Drawings

The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the embodiments of the present disclosure and, together with the specification, serve to explain the principles of the embodiments of the present disclosure.

FIG. 1 is a schematic structural diagram of a wireless communication system according to an exemplary embodiment;

FIG. 2 is a schematic diagram of various types of biometric features according to an exemplary embodiment;

FIG. 3 is a schematic flowchart of a biometric fusion method according to an exemplary embodiment;

FIG. 4 is a schematic flowchart of a biometric fusion method according to an exemplary embodiment;

FIG. 5 is a schematic structural diagram of a biometric fusion apparatus according to an exemplary embodiment;

FIG. 6 is a schematic structural diagram of a UE according to an exemplary embodiment;

FIG. 7 is a schematic structural diagram of a base station according to an exemplary embodiment.
Detailed Description

Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the embodiments of the present disclosure as detailed in the appended claims.

The terms used in the embodiments of the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the embodiments of the present disclosure. The singular forms "a" and "the" used in the embodiments of the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe various pieces of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the embodiments of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the words "if" and "in case" as used herein may be interpreted as "when", "while", or "in response to determining".
To better describe any embodiment of the present disclosure, one embodiment of the present disclosure takes the application scenario of a smart electricity-meter control system as an example for illustration.

Please refer to FIG. 1, which shows a schematic structural diagram of a wireless communication system provided by an embodiment of the present disclosure. As shown in FIG. 1, the wireless communication system is a communication system based on cellular mobile communication technology and may include several terminals 11 and several base stations 12.

The terminal 11 may be a device that provides voice and/or data connectivity to a user. The terminal 11 may communicate with one or more core networks via a Radio Access Network (RAN). The terminal 11 may be an Internet of Things terminal, such as a sensor device, a mobile phone (or "cellular" phone), or a computer with an IoT terminal, and may, for example, be a fixed, portable, pocket, handheld, computer-built-in, or vehicle-mounted apparatus, for example, a station (STA), subscriber unit, subscriber station, mobile station, mobile, remote station, access point, remote terminal, access terminal, user terminal, user agent, user device, or user equipment (UE). Alternatively, the terminal 11 may be a device of an unmanned aerial vehicle. Alternatively, the terminal 11 may be a vehicle-mounted device, for example, a trip computer with wireless communication capability or a wireless terminal externally connected to a trip computer. Alternatively, the terminal 11 may be a roadside device, for example, a street lamp, signal lamp, or other roadside device with wireless communication capability.

The base station 12 may be a network-side device in the wireless communication system. The wireless communication system may be a 4th-generation mobile communication (4G) system, also called a Long Term Evolution (LTE) system; or the wireless communication system may be a 5G system, also called a new radio (NR) system or 5G NR system; or the wireless communication system may be a next-generation system of the 5G system. The access network in the 5G system may be called NG-RAN (New Generation Radio Access Network).

The base station 12 may be an evolved NodeB (eNB) used in a 4G system, or a base station (gNB) with a centralized-distributed architecture used in a 5G system. When the base station 12 adopts a centralized-distributed architecture, it usually includes a central unit (CU) and at least two distributed units (DU). The central unit is provided with protocol stacks of the Packet Data Convergence Protocol (PDCP) layer, the Radio Link Control (RLC) layer, and the Media Access Control (MAC) layer; the distributed unit is provided with a physical (PHY) layer protocol stack. The embodiments of the present disclosure do not limit the specific implementation of the base station 12.

A wireless connection may be established between the base station 12 and the terminal 11 over a radio air interface. In different implementations, the radio air interface is a radio air interface based on the 4th-generation mobile communication network technology (4G) standard; or the radio air interface is a radio air interface based on the 5th-generation mobile communication network technology (5G) standard, for example the new radio; or the radio air interface may be a radio air interface based on a next-generation mobile communication network technology standard beyond 5G.

In some embodiments, an E2E (end-to-end) connection may also be established between terminals 11, for example, in scenarios such as V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), and V2P (vehicle-to-pedestrian) communication in vehicle-to-everything (V2X).

In some embodiments, the above wireless communication system may further include a network management device 13.

Several base stations 12 are respectively connected to the network management device 13. The network management device 13 may be a core network device in the wireless communication system; for example, the network management device 13 may be a Mobility Management Entity (MME) in an Evolved Packet Core (EPC). Alternatively, the network management device may be another core network device, such as a Serving Gateway (SGW), a Public Data Network Gateway (PGW), a Policy and Charging Rules Function (PCRF), or a Home Subscriber Server (HSS). The embodiments of the present disclosure do not limit the implementation form of the network management device 13.
Biometric recognition technology refers to the process of extracting individual physiological characteristics or personal behavioral traits with automatic techniques for identity recognition, and comparing these characteristics or traits against template data already stored in a database to complete identity authentication. In theory, all physiological characteristics and personal behavioral traits that are universal, unique, robust, and collectable are collectively called biometric features. Unlike traditional identification methods, biometric recognition uses the individual characteristics of humans themselves for identity authentication. A general biometric recognition system should include subsystems such as data acquisition, data storage, comparison, and decision-making.

Biometric recognition technology covers a wide range of content, including fingerprint, face, iris, vein, voiceprint, posture, and other recognition methods; the recognition process involves technologies such as data acquisition, data processing, graphic and image recognition, comparison algorithms, and software design. At present, various software and hardware products and industry application solutions based on biometric recognition technology are widely used in finance, human resources and social security, public security, education, and other fields.

Certain risks exist in the use of biometric recognition. During biometric enrollment and identity authentication, the biometric recognition system interacts with the outside world and is highly vulnerable to external attacks. During identity authentication, the security of the system is subject to the following threats:

a) Forged features: the attacker provides forged biometric information during identity authentication;

b) Replay attack: the attacker attacks the information transfer between the biometric acquisition subsystem and the matching subsystem, replaying the biometric information of a legitimately enrolled user to deceive the matching subsystem and thereby pass identity authentication;

c) Database intrusion attack: the attacker breaks into the system's biometric template database by hacking and tampers with or forges the enrolled biometric information, thereby passing biometric matching and identity authentication;

d) Transmission attack: the attacker attacks the data transmission from the biometric matching subsystem to the biometric template database; the attacker can block the transmission of the legitimate enrolled user's biometric information on the one hand, and send tampered or forged biometric information to the matching subsystem on the other, thereby passing identity authentication;

e) Matcher tampering: the attacker attacks the matcher and tampers with the matching result, thereby passing identity authentication.
As shown in FIG. 3, an embodiment of the present disclosure provides a biometric fusion method, wherein the method includes:

S110: obtaining first biometric features of one target from multiple sources, wherein the first biometric features belong to at least two different levels;

S120: fusing the first biometric features to form a second biometric feature.

This biometric fusion method can be applied in the biometric generation stage and in the biometric verification stage; for example, the method can generate the sample features to be used in the verification stage, or generate the to-be-verified features needed in the verification stage.

The biometric fusion method can be applied in a terminal or a server. The terminal includes, but is not limited to, mobile terminals carried directly by users, such as mobile phones, tablets, or wearable devices, and may also be a vehicle-mounted terminal or a public service device in a public place. The server may be any application server or communication server.

The target here may be any living organism, for example, a human body or an animal.

The first biometric features here may include various types of characteristics presented directly by the body surface of an organism or by biological tissue such as muscles, bones, or skin inside the body, such as fingerprint features, iris features, vein distribution features, and/or facial features.

In other embodiments, the first biometric feature may be determined by a part of the target's body without being a characteristic of the body part itself, such as the trajectory of a hand wave or the characteristics of lowering or raising the head.

As another example, once the target's height and arm length are determined and the target has a fixed movement habit, the first biometric feature here may also include an arm-swing trajectory or a stride trajectory.

As yet another example, once a target's weight and physical condition are determined, the regularity or loudness of the target's heartbeat also counts as a biometric feature.

The first biometric features of one target from multiple sources may include first biometric features from multiple body parts of the same target; for example, facial features and fingerprint features from the same human body are first biometric features of the same target from multiple sources, namely different body parts.

The first biometric features of one target from multiple sources may also include first biometric features of different styles from the same body part of the same target. For example, one style of biometric feature is formed by the shape and/or texture of the target's hand, while another is the trajectory of the user's hand movement. As another example, images of the same body part of the same target acquired with different wavelengths, such as a face image acquired with visible light and an infrared face image acquired with infrared light, may be regarded as first biometric features of the same target from different sources.

In short, first biometric features from different sources provide different fusion data for feature fusion and reflect the biological characteristics of different first biometric features.

In the embodiments of the present disclosure, features of one target from multiple sources may include first biometric features of the same target from two sources, or first biometric features of the same target from more than two sources.

First biometric features of different levels have different characteristics. For example, first biometric features of some levels have detailed information and high accuracy in the verification process, but may suffer from large data volume and heavy computation. For other levels, the first biometric features have the advantages of a small amount of information and little computation, but the verification accuracy may not be particularly high.
There are multiple ways to fuse the first biometric features of different levels in S120; several optional modes are provided below:

Mode 1: directly splice the first biometric features of different levels to obtain the second biometric feature;

Mode 2: according to a fusion algorithm, use the first biometric features of different levels as arguments in computing the function value of the fusion algorithm to obtain the second biometric feature. For example, the fusion algorithm includes, but is not limited to, a dot product operation or a cross product operation.

Taking the dot product operation and the fusion of two first biometric features of different levels as an example, the data corresponding to the two first biometric features of different levels are written as two arrays, and the dot product of the two arrays is computed to obtain the second biometric feature.
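As an illustrative sketch of the two fusion modes just described (the "dot product" over two equal-length arrays is interpreted here as an element-wise product; both interpretations are assumptions of this example, not fixed by the disclosure):

```python
def fuse_by_concatenation(a, b):
    # Mode 1: directly splice the two first biometrics together.
    return list(a) + list(b)

def fuse_by_dot_product(a, b):
    # Mode 2: write the two first biometrics as two arrays and
    # combine them element-wise to obtain the second biometric.
    return [x * y for x, y in zip(a, b)]
```

Concatenation preserves both inputs in full, whereas the element-wise product yields a more compact second biometric at the cost of invertibility.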
In a specific implementation, there are many ways to fuse the first biometric features into the second biometric feature, not limited to any of the above.

In the embodiments of the present disclosure, the first biometric features of one target from different sources belong to at least two different levels, so the second biometric feature simultaneously contains original biometric features of different levels. When the second biometric feature is used in biometric authentication and identification, on the one hand, because features of different levels from first biometric features of at least two sources are fused, accuracy is improved compared with authentication and identification based on a single biometric feature; on the other hand, because the second biometric feature is fused from cross-level (also called cross-modal) original biometric features (i.e., the first biometric features), it retains the authentication and identification advantages of biometric features of different levels, so that the original biometric features of different levels reinforce one another, improving the authentication and identification performance of the second biometric feature. Furthermore, because the second biometric feature is formed by fusing first biometric features of different levels, the levels of the original first biometric features corresponding to the second biometric feature differ, making the method applicable to different application scenarios; it can thus satisfy the biometric authentication and identification needs of a variety of application scenarios and has a wide range of application.
In some embodiments, different levels may also be called different modalities.

For example, the at least two different levels include any at least two of the following:

Sample level, corresponding to the sample data of a single biometric feature;

Feature level, corresponding to the features of a single biometric feature;

Score level, corresponding to the matching score of a single biometric feature;

Decision level, corresponding to the Boolean value of a single biometric feature.

A single biometric feature here can be understood as one first biometric feature, or one or more first biometric features from one source.

The sample level may be one or more sets of sample data of a single first biometric feature. For example, if the current first biometric feature is at the sample level, it corresponds to a biological sample, for example, an iris image acquired from an iris, a fingerprint image acquired from a fingerprint, a face image acquired from a face, or audio data acquired from a voiceprint. In some embodiments, the samples corresponding to the sample level here may include acquisition data or raw data formed by biometric acquisition.

The feature level may be one or more sets of features of a single biometric feature, where the one or more sets of features can be regarded as feature values; for example, a feature-level first biometric feature may include a feature set and/or feature vector of a single first biometric feature. Feature sets and feature vectors are both composed of feature values, and the feature values may be extracted from the sample-level first biometric feature.

A score-level first biometric feature contains the matching score of a single first biometric feature, obtained by matching the corresponding first biometric feature against a third biometric feature in a preset database.

A score-level first biometric feature includes one or more matching scores.

A matching score may be obtained by matching the feature values of the corresponding first biometric feature against the feature values in the preset database to obtain a degree of matching, and scoring according to the degree of matching. For example, the degree of matching is positively correlated with the matching score.

The decision level corresponds to the Boolean value of a single first biometric feature; usually one first biometric feature corresponds to one Boolean value. The Boolean value may be determined from the matching score of the first biometric feature. For example, the matching score is compared with a score threshold; if a larger matching score represents a higher degree of matching, then when the matching score is greater than or equal to the score threshold, the Boolean value is "1", and otherwise "0".

A Boolean value of "1" can be regarded as a successful match, i.e., authentication of the single first biometric feature passes; a Boolean value of "0" can be regarded as a failed match, i.e., authentication of the single first biometric feature fails.

In some embodiments, if the levels are assumed to rise from the sample level through the feature level and score level to the decision level, then the higher the level of a biometric feature, the smaller its data volume and the smaller the computation.

In the embodiments of the present disclosure, in S110 the first biometric features of different levels may be selected according to a fusion strategy, so that the fusion in S120 takes into account the different needs of different application scenarios. For example, to ensure high security, more sample-level biometric features may be selected among the multiple first biometric features of different levels; to satisfy a certain level of security while further reducing computation, the multiple first biometric features of different levels may include more score-level first biometric features and/or decision-level first biometric features.

The data corresponding to first biometric features of different levels can all be called feature data; the feature data may include sample-level sample data, feature-level feature data, score-level matching scores, and decision-level Boolean values.
在一些式实施例中,所述S120可包括:
将所述样本级的所述第一生物特征和所述特征级的第一生物特征相融合形成所述第二生物特征征。
此处由样本级的第一生物特征和特征级的第一生物特征融合得到的第二生物特征可以是验证阶段的样本特征和/或待验证特征。
例如,以两个来源的第一生物特征分别是人脸特征和指纹特征为例 进行说明,人脸特征的数据量特别大,但是识别精度高。在平衡生物特征的识别和验证过程中的精确度和计算量时,可以考虑融合特征级的人脸特征和样本级的指纹特征。样本级的指纹特征对应的一组样本数据的数据量,比样本级的人脸特征对应的一组样本数据的数据量小,如此,融合特征级的人脸特征和样本级的指纹特征,既充分利用了人脸特征的高精度,且降低了计算量。具体如,将样本级的人脸特征对应的人脸图像中提取若干个人脸特征,例如,M个人脸特征的特征值,将M个人脸特征的特征值和样本级的指纹特征的指纹图像,共同组成一个用于身份验证的样本特征,或者,等待验证的待验证特征。
又例如,提升安全性,且通过多来源生物特征的验证,也可以将样本级的人脸特征和特征级的指纹特征进行融合。
总之,在本实施例中,可以将数量较大的样本级的第一生物特征的样本数据转换为特征级的第一生物特征之后,然后与数据量较小的特征级的其他第一生物特征进行融合,得到所述第二生物特征。
在一些实施例中,所述S120可包括:
将所述样本级的所述第一生物特征及所述分数级的第一生物特征相融合形成所述第二生物特征。
此处的样本级的第一生物特征和分数级的第一生物特征融合,可包括:样本数据和另一个来源的第一生物特征的匹配分数融合成一个样本特征或者一个待验证特征。
如此,实现了样本级的第一生物特征和分数级的第一生物特征的融合,得到了一个同时包含样本级和分数级的原始生物特征的第二生物特征,第二生物特征在用于生物特征的认证识别过程中,一方面由于融合了至少两种来源的第一生物特征的样本级和分数级的特征数据,相当于单一生物特征的认证识别,能够提升精确度;另一方面,由于第二生物 特征是用跨层级(或称跨模态)的原始生物特征(即第一生物特征)融合而成的,保留了样本级和分数级的生物特征的认证识别的优点,使得样本级和分数级的原始生物特征的相互补强,提升了第二生物特征在认证识别过程中的认证识别性能。
在一些实施例中,所述S120可包括:
将所述样本级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
此处的样本级的第一生物特征和决策级的第一生物特征融合后,得到第二生物特征,可包括:将样本级的第一生物特征的样本数据和决策级的布尔值融合,得到所述第二生物特征。
如此,实现了样本级的第一生物特征和决策级的第一生物特征的融合,得到了一个同时包含样本级和决策级的原始生物特征的第二生物特征,第二生物特征在用于生物特征的认证识别过程中,一方面由于融合了至少两种来源的第一生物特征的样本级和决策级的数据,相当于单一生物特征的认证识别,能够提升精确度;另一方面,由于第二生物特征是用跨层级(或称跨模态)的原始生物特征(即第一生物特征)融合而成的,保留了样本级和决策级的生物特征的认证识别的优点,使得样本级和决策级的原始生物特征的相互补强,提升了第二生物特征在认证识别过程中的认证识别性能。
例如,预设数据库中包括:S个样本级的样本数据,将决策级的第一生物特征对应的样本数据与S个样本级的样本数据进行匹配得到匹配分数,将匹配分数转换为布尔值,将得到S个布尔值;将这S个布尔值与样本级的第一生物特征融合形成所述第二生物特征。例如,将所述S个布尔值与所述样本级的第一生物特征拼接形成所述第二生物特征。
在一些实施例中,所述S110可包括包括:
将所述特征级的第一生物特征及所述分数级的第一生物特征相融合得到所述第二生物特征。
将特征级的第一生物特征和分数级的第一生物特征融合后得到第二生物特征,将包含特征值和匹配分数;或者包含特征值和匹配分数计算得到的函数值。
如此,实现了特征级的第一生物特征和分数级的第一生物特征的融合,得到了一个同时包含样本级和决策级的原始生物特征的第二生物特征,第二生物特征在用于生物特征的认证识别过程中,一方面由于融合了至少两种来源的第一生物特征的特征级和分数级的数据,相当于单一生物特征的认证识别,能够提升精确度;另一方面,由于第二生物特征是用跨层级(或称跨模态)的原始生物特征(即第一生物特征)融合而成的,保留了特征级和分数级的生物特征的认证识别的优点,使得特征级和分数级的原始生物特征的相互补强,提升了第二生物特征在认证识别过程中的认证识别性能。
例如,将待融合的第一生物特征与预设数据库中的同一类第一生物特征分别进行匹配,得到匹配分数。同时待融合的第一生物特征的样本数据提取出特征值作为所述特征级的第一生物特征。
将特征级的第一生物特征的特征值(即特征数据),与分数级的第一生物特征的匹配分数融合,得到第二生物特征。
在一些实施例中,所述S120可包括:
将所述特征级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
如此,实现了特征级的第一生物特征和决策级的第一生物特征的融合,得到了一个同时包含特征级和决策级的原始生物特征的第二生物特征,第二生物特征在用于生物特征的认证识别过程中,一方面由于融合 了至少两种来源的第一生物特征的特征级和决策级的数据,相当于单一生物特征的认证识别,能够提升精确度;另一方面,由于第二生物特征是用跨层级(或称跨模态)的原始生物特征(即第一生物特征)融合而成的,保留了特征级和决策级的生物特征的认证识别的优点,使得特征级和决策级的原始生物特征的相互补强,提升了第二生物特征在认证识别过程中的认证识别性能。
在一些实施例中,所述S120还可包括:将所述分数级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
如此,实现了分数级的第一生物特征和决策级的第一生物特征的融合,得到了一个同时包含分数级和决策级的原始生物特征的第二生物特征,第二生物特征在用于生物特征的认证识别过程中,一方面由于融合了至少两种来源的第一生物特征的分数级和决策级的数据,相当于单一生物特征的认证识别,能够提升精确度;另一方面,由于第二生物特征是用跨层级(或称跨模态)的原始生物特征(即第一生物特征)融合而成的,保留了分数级和决策级的生物特征的认证识别的优点,使得分数级和决策级的原始生物特征的相互补强,提升了第二生物特征在认证识别过程中的认证识别性能。
上述实施例是采用两个层级的第一生物特征的举例,实际处理过程中还可以采用三个层级或四个层级的第一生物特征的融合,形成所述第二生物特征。
在一些实施例中,如图4所示,所述方法还包括:
S100:根据应用场景,确定相融合的不同所述层级的所述第一生物特征。
例如,支付场景和标记场景等应用场景,对于生物特征的认证需求是不同的。
对于支付场景,对安全性要求高,对计算量的大小可以放宽限制;对于标记场景,对于安全性要求可能不高,仅需满足能够区分不同的目标即可。支付场景可涉及财产转移,对安全性要求高,该支付场景包括但不限于:网络支付。
标记场景,例如,在会议场景上通过声纹特征和人脸特征,区分不同的人。为了减少计算量,且为了减少样本数据或特征数据的泄露,采用模糊程度较高的决策级的第一生物特征和分数级的第一生物特征融合得到所述第二生物特征。
因此,在对于安全性要求高的第一类场景,确定相融合的第一生物特征可至少包括一个样本级的第一生物特征,或者,相融合的第一生物特征可至少包括一个特征级的第一生物特征。而对于安全性要求低的第二类场景,确定相融合的第一生物特征可不包含样本级的第一生物特征和/或特征级的第一生物特征,而仅包括:分数级的第一生物特征和决策级的第一生物特征。
在一些实施例中,在终端和服务器中都存储公共数据库,这个公共数据库中可能有多个处理所述第一生物特征,进行不同层级转换的样本数据或特征数据。
例如,该公共数据库包括:不同来源源的若干个样本数据;或者不同来源的若干个特征。在终端生成第二生物特时,终端可以将自身获取的人脸图像和公共数据库内样本数据内的人脸图像,从而转换为分数级的匹配分数和/或特征级的布尔值。或者,直接提取出所述人脸图像中的人脸特征。再例如,终端生成第二生物特征之前,将获取第一生物特征的样本数据进行特征提取,得到特征级的特征集合或特征向量等特征值,将特征值与公共数据库中的特征值进行匹配,得到分数级的匹配分数。进一步可以根据匹配分数得到决策级的布尔值。将分数级和布尔值融合成第二生物特征。
融合至少两个层级的第二生物特征,存储到服务器中作为验证阶段的验证样本,如此服务器会存储该第二生物特征,用于供后续验证阶段使用。
或者,融合至少两个层级的第二生物特征,发送到服务器进行验证。服务器接收到待验证的第二生物特征之后,直接与样本生成阶段存储的验证样本匹配即可。
如此,在后续验证过程中,进行一次验证就实现了跨层级的不同来源的第一生物特征的验证,相当于单一生物特征而言提升了验证安全性,且满足了不同业务场景下的生物特征验证或识别或者区分的应用需求。
例如,在一个语音群聊场景,不预备分配标识,为了区分不同的人,可以提取出声音数据,将声音数据中提取出声纹特征,与该语音群聊场景内提取的M个人的声纹特征进行匹配,得到分数级的M个匹配分数;或者,将M个匹配分数转换为布尔值,得到M个布尔值。该声音数据提取出的特征除了声纹特征以外,还可包括:发音数据;该发音数据也可以作为一种生物特征,通过与M个人的发音特征进行匹配,得到M个匹配分数和/或M个布尔值。
如此,每个人对应的声纹特征的M个匹配分数或M个布尔值,与发音特征的M个布尔值或M个匹配分数,组合后形成能够区分不同人的第二生物特征,从而实现对不同人的区分。
在生物特征识别的技术发展中,提出了通过多模态融合的方法来进一步的提高生物特征识别的安全性以及可用性。目前,生物特征的多模态融合一般可以分为样本级融合、特征级融合、分数级融合和决策级融合四个层级:
样本级融合是指每个单一生物特征识别过程输出一组样本数据,将多组生物特征样本数据融合为一个样本数据;
特征级融合是指每个单一生物特征识别过程输出一组特征,将多组 生物特征融合为一个特征集或者特征向量;
分数级融合指每个单一生物特征识别过程通常输出单一匹配分数,也可能是多个分数。将多个生物特征识别分数融合成一个分数或决策,然后与系统接受阈值进行比较;
决策级融合指每个单一生物特征识别过程输出一个布尔值。利用混合算法如和与或,或者采用更多参数,如输入样本质量分数将结果进行融合。
但是在同一个层级的生物特征融合,可能这存在以下问题:
某些生物特征识别的若干层级已经模块化,无法拆分出更细的层级。样本级融合和特征级融合会产生较大的计算量和数据量,会增加延时和功耗。
分数级融合和决策级融合会减弱生物特征模态之间的关联性,造成准确性下降,不同模态或层级生成的位置各异。
本公开实施例提出了一种不同模态的不同层级之间的融合,提出了以下跨层级的生物特征的融合,具体可包括:
样本级与特征级融合:
将某个或多个单一生物特征识别过程中输出的一组或多组样本,与某个或多个单一生物特征识别过程输出的一组过多组特征相融合,为一个样本。某些生物特征的样本数据量很大,如高精度人脸特征识别,其包含了二维的人脸特征与三维的立体数据。此时如果将其样本与其他样本进行融合,需要非常大的计算能力。可以从样本中提取特征信息,并将特征信息与其他的样本进行融合而形成一组新的样本。
样本级与分数级融合:
将某个或多个单一生物特征识别过程中输出的一组或多组样本,与某个或多个单一生物特征识别过程输出的一组过多组匹配分数相融合, 为一个样本
在多用户系统中,生物特征识别的比对过程可能耗时很长,因为其需要对所有的用户进行筛查。
某些能够快速比对的生物特征,能够以更快的速度形成分数。再将分数与其他比对效率慢,但是准确性高的生物样本进行融合,并输出一组新的样本,则可以更高效的进行准确生物特征识别。
样本级与决策级融合:
将某个或多个单一生物特征识别过程中输出的一组或多组样本,与某个或多个单一生物特征识别过程输出的一组过多组决策布尔值相融合,为一个样本。
与样本级与分数级融合的生物特征一样,本方法可以提升多用户的比对效率。
特征级与分数级融合:
将某个或多个单一生物特征识别过程中输出的一组或多组特征,与某个或多个单一生物特征识别过程输出的一组过多组匹配分数相融合,为一个特征集或特征向量
样本级与分数级融合,可以提升多用户的比对效率;当样本级融合的数据量和计算量较大时,可以用特征级取代样本级。即样本级与分数级融合。
特征级与决策级融合:
将某个或多个单一生物特征识别过程中输出的一组或多组特征,与某个或多个单一生物特征识别过程输出的一组过多组决策布尔值相融合,为一个特征集或特征向量。与“特征级与分数级融合”一样,本方法可以提升多用户的比对效率。
分数级与决策级融合:
将某个或多个单一生物特征识别过程中输出的一组或多组匹配分数,与某个或多个单一生物特征识别过程输出的一组过多组决策布尔值相融合,为一个匹配分数或决策。该融合方式在以下场景中非常必要:,某些生物特征无法产生布尔值,只产生匹配分数,当它们寻求与其他生物特征的布尔值进行融合的时候。
如图5所示,一种生物特征融合装置,其中,所述装置包括:
获取模块510,被配置为获取一个目标至少多种来源的第一生物特征;其中,所述第一生物特征,分属至少两种不同层级;
融合模块520,被配置为将所述第一生物特征相融合形成第二生物特征。
在一些实施例中,所述获取模块510及所述融合模块520可为程序模块;所述程序模块被处理器执行后,能够实现上述第一生物特征的融合,形成第二生物特征。
在另一些实施例中,所述获取模块510及所述融合模块520可为软硬结合模块;所述软硬结合模块可包括各种可编程阵列;所述可编程阵列包括但不限于:复杂可编程阵列或者现场可编程阵列。
在还有一些实施例中,所述获取模块510及所述融合模块520可为纯硬件模块;所述纯硬件模块包括但不限于:专用集成电路。
在一些实施例中,所述至少两种不同层级包括以下任意至少两个:
样本级,对应于单一生物特征的样本;
特征级,对应于单一生物特征的特征;
分数级,对应于单一生物特征的匹配分数;
决策级,对应于单一生物特征的布尔值。
在一些实施例中,所述融合模块520,被配置为将所述样本级的所述第一生物特征和所述特征级的第一生物特征相融合形成所述第二生物特 征征。
在一些实施例中,所述融合模块520,被配置为将所述样本级的所述第一生物特征及所述分数级的第一生物特征相融合形成所述第二生物特征。
在一些实施例中,所述融合模块520,被配置为将所述样本级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
在一些实施例中,所述融合模块520,被配置为将所述特征级的第一生物特征及所述分数级的第一生物特征相融合得到所述第二生物特征。
在一些实施例中,所述融合模块520,被配置为将所述特征级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
在一些实施例中,所述融合模块520,被配置为将所述分数级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
在一些实施例中,所述装置还包括:
确定模块,被配置为根据应用场景,确定相融合的不同所述层级的所述第一生物特征。
本公开实施例提供一种电子设备,包括处理器、收发器、存储器及存储在存储器上并能够有处理器运行的可执行程序,其中,处理器运行可执行程序时执行前述任意技术方案提供的生物特征融合方法。
该电子设备可为基站、UE或服务器。
其中,处理器可包括各种类型的存储介质,该存储介质为非临时性计算机存储介质,在电子设备掉电之后能够继续记忆存储其上的信息。这里,所述电子设备包括基站或用户设备。
所述处理器可以通过总线等与存储器连接,用于读取存储器上存储的可执行程序,例如,如图3或图4所示的生物特征融合方法。
本公开实施例提供一种计算机存储介质,所述计算机存储介质存储有可执行程序;所述可执行程序被处理器执行后,能够实现第一方面或第二方面任意技术方案所示的方法,例如,图3或图4所示的生物特征融合方法。
图6是根据一示例性实施例示出的一种UE800的框图。例如,UE800可以是移动电话,计算机,数字广播用户设备,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图6,UE800可以包括以下至少一个组件:处理组件802,存储器804,电源组件806,多媒体组件808,音频组件810,输入/输出(I/O)的接口812,传感器组件814,以及通信组件816。
处理组件802通常控制UE800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括至少一个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括至少一个模块,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在UE800的操作。这些数据的示例包括用于在UE800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件806为UE800的各种组件提供电力。电源组件806可以包括电源管理系统,至少一个电源,及其他与为UE800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述UE800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括至少一个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的唤醒时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当UE800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当UE800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括至少一个传感器,用于为UE800提供各个方面的状态评估。例如,传感器组件814可以检测到设备800的打开/关闭状态,组件的相对定位,例如所述组件为UE800的显示器和小键盘,传感器组件814还可以检测UE800或UE800一个组件的位置改变,用户与UE800接触 的存在或不存在,UE800方位或加速/减速和UE800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于UE800和其他设备之间有线或无线方式的通信。UE800可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,UE800可以被至少一个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器804,上述指令可由UE800的处理器820执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
如图7所示,本公开一实施例示出一种基站的结构。例如,基站900可以被提供为一网络设备。参照图7,基站900包括处理组件922,其进一步包括至少一个处理器,以及由存储器932所代表的存储器资源,用于存 储可由处理组件922的执行的指令,例如应用程序。存储器932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件922被配置为执行指令,以执行上述方法前述应用在所述基站的任意方法,例如,如图2至图6所示方法。
基站900还可以包括一个电源组件926被配置为执行基站900的电源管理,一个有线或无线网络接口950被配置为将基站900连接到网络,和一个输入输出(I/O)接口958。基站900可以操作基于存储在存储器932的操作系统,例如Windows Server TM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开实施例的其它实施方案。本公开旨在涵盖本公开实施例的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开实施例的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开实施例的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开实施例并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开实施例的范围仅由所附的权利要求来限制。

Claims (20)

  1. 一种生物特征融合方法,其中,所述方法包括:
    获取一个目标至少多种来源的第一生物特征;其中,所述第一生物特征,分属至少两种不同层级;
    将所述第一生物特征相融合形成第二生物特征。
  2. 根据权利要求1所述的方法,其中,所述至少两种不同层级包括以下任意至少两个:
    样本级,对应于单一生物特征的样本;
    特征级,对应于单一生物特征的特征;
    分数级,对应于单一生物特征的匹配分数;
    决策级,对应于单一生物特征的布尔值。
  3. 根据权利要求2所述的方法,其中,所述将所述第一生物特征相融合形成第二生物特征,包括:
    将所述样本级的所述第一生物特征和所述特征级的第一生物特征相融合形成所述第二生物特征征。
  4. 根据权利要求2所述的方法,其中,所述将所述第一生物特征相融合形成第二生物特征,包括:
    将所述样本级的所述第一生物特征及所述分数级的第一生物特征相融合形成所述第二生物特征。
  5. 根据权利要求1或2所述的方法,其中,所述将所述第一生物特征相融合形成第二生物特征,包括:
    将所述样本级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
  6. 根据权利要求1或2所述的方法,其中,所述将所述第一生物特征相融合形成第二生物特征,包括:
    将所述特征级的第一生物特征及所述分数级的第一生物特征相融合得到所述第二生物特征。
  7. 根据权利要求1或2所述的方法,其中,所述将所述第一生物特征相融合形成第二生物特征,包括:
    将所述特征级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
  8. 根据权利要求1或2所述的方法,其中,所述将所述第一生物特征相融合形成第二生物特征,包括:
    将所述分数级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
  9. 根据权利要求1至8任一项所述的方法,其中,所述方法还包括:
    根据应用场景,确定相融合的不同所述层级的所述第一生物特征。
  10. 一种生物特征融合装置,其中,所述装置包括:
    获取模块,被配置为获取一个目标至少多种来源的第一生物特征;其中,所述第一生物特征,分属至少两种不同层级;
    融合模块,被配置为将所述第一生物特征相融合形成第二生物特征。
  11. 根据权利要求10所述的装置,其中,所述至少两种不同层级包括以下任意至少两个:
    样本级,对应于单一生物特征的样本;
    特征级,对应于单一生物特征的特征;
    分数级,对应于单一生物特征的匹配分数;
    决策级,对应于单一生物特征的布尔值。
  12. 根据权利要求11所述的装置,其中,所述融合模块,被配置为将所述样本级的所述第一生物特征和所述特征级的第一生物特征相融合形成所述第二生物特征征。
  13. 根据权利要求11所述的装置,其中,所述融合模块,被配置为将所述样本级的所述第一生物特征及所述分数级的第一生物特征相融合形成所述第二生物特征。
  14. 根据权利要求10或11所述的装置,其中,所述融合模块,被配置为将所述样本级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
  15. 根据权利要求10或11所述的装置,其中,所述融合模块,被配置为将所述特征级的第一生物特征及所述分数级的第一生物特征相融合得到所述第二生物特征。
  16. 根据权利要求10或11所述的装置,其中,所述融合模块,被配置为将所述特征级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
  17. 根据权利要求10或11所述的装置,其中,所述融合模块,被配置为将所述分数级的所述第一生物特征及所述决策级的第一生物特征相融合形成所述第二生物特征。
  18. 根据权利要求10至17任一项所述的装置,其中,所述装置还包括:
    确定模块,被配置为根据应用场景,确定相融合的不同所述层级的所述第一生物特征。
  19. 一种电子设备,其中,所述电子设备至少包括:处理器和用于存储能够在所述处理器上运行的可执行指令的存储器,其中:
    处理器用于运行所述可执行指令时,所述可执行指令执行上述权利要求1至9任一项提供的生物特征融合方法。
  20. 一种非临时性计算机可读存储介质,其中,所述计算机可读存储介质中存储有计算机可执行指令,该计算机可执行指令被处理器执行 时实现上述权利要求1至9任一项提供的生物特征融合方法。
PCT/CN2020/099567 2020-06-30 2020-06-30 生物特征融合方法及装置、电子设备及存储介质 WO2022000337A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/099567 WO2022000337A1 (zh) 2020-06-30 2020-06-30 生物特征融合方法及装置、电子设备及存储介质
CN202080001407.XA CN111919224A (zh) 2020-06-30 2020-06-30 生物特征融合方法及装置、电子设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/099567 WO2022000337A1 (zh) 2020-06-30 2020-06-30 生物特征融合方法及装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022000337A1 true WO2022000337A1 (zh) 2022-01-06

Family

ID=73265207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099567 WO2022000337A1 (zh) 2020-06-30 2020-06-30 生物特征融合方法及装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN111919224A (zh)
WO (1) WO2022000337A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1710593A (zh) * 2005-06-27 2005-12-21 北京交通大学 一种基于特征关系度量的手部特征融合认证方法
US20060204049A1 (en) * 2005-01-14 2006-09-14 Schneider John K Multimodal fusion decision logic system
CN102542258A (zh) * 2011-12-16 2012-07-04 天津理工大学 基于手指生物特征信息的成像设备及多模态身份识别方法
US20120308089A1 (en) * 2011-06-03 2012-12-06 Korea Basic Science Institute Method of biometric authentication by using pupil border and apparatus using the method
CN107294730A (zh) * 2017-08-24 2017-10-24 北京无线电计量测试研究所 一种多模态生物特征身份认证方法、装置及系统
CN110909582A (zh) * 2018-09-18 2020-03-24 华为技术有限公司 一种人脸识别的方法及设备

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100228692A1 (en) * 2009-03-03 2010-09-09 Honeywell International Inc. System and method for multi-modal biometrics
EP2523149B1 (en) * 2011-05-11 2023-01-11 Tata Consultancy Services Ltd. A method and system for association and decision fusion of multimodal inputs
US20160366317A1 (en) * 2015-06-12 2016-12-15 Delta ID Inc. Apparatuses and methods for image based biometric recognition
CN108256452A (zh) * 2018-01-06 2018-07-06 天津大学 一种基于特征融合的ecg信号分类的方法
CN109614880A (zh) * 2018-11-19 2019-04-12 国家电网有限公司 一种多模态生物特征融合方法及装置
CN110955661B (zh) * 2019-11-29 2023-03-21 北京明略软件系统有限公司 数据融合方法、装置、可读存储介质及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060204049A1 (en) * 2005-01-14 2006-09-14 Schneider John K Multimodal fusion decision logic system
CN1710593A (zh) * 2005-06-27 2005-12-21 北京交通大学 一种基于特征关系度量的手部特征融合认证方法
US20120308089A1 (en) * 2011-06-03 2012-12-06 Korea Basic Science Institute Method of biometric authentication by using pupil border and apparatus using the method
CN102542258A (zh) * 2011-12-16 2012-07-04 天津理工大学 基于手指生物特征信息的成像设备及多模态身份识别方法
CN107294730A (zh) * 2017-08-24 2017-10-24 北京无线电计量测试研究所 一种多模态生物特征身份认证方法、装置及系统
CN110909582A (zh) * 2018-09-18 2020-03-24 华为技术有限公司 一种人脸识别的方法及设备

Also Published As

Publication number Publication date
CN111919224A (zh) 2020-11-10

Similar Documents

Publication Publication Date Title
US11074466B2 (en) Anti-counterfeiting processing method and related products
US10936709B2 (en) Electronic device and method for controlling the same
CN108960209B (zh) 身份识别方法、装置及计算机可读存储介质
CN107066983B (zh) 一种身份验证方法及装置
CN111985265B (zh) 图像处理方法和装置
WO2017181769A1 (zh) 一种人脸识别方法、装置和系统、设备、存储介质
CN107451449B (zh) 生物识别解锁方法及相关产品
CN108830062B (zh) 人脸识别方法、移动终端及计算机可读存储介质
CN108280418A (zh) 脸部图像的欺骗识别方法及装置
CN110287671B (zh) 验证方法及装置、电子设备和存储介质
US10733279B2 (en) Multiple-tiered facial recognition
Witte et al. Context-aware mobile biometric authentication based on support vector machines
WO2018133282A1 (zh) 一种动态识别的方法及终端设备
CN107220614B (zh) 图像识别方法、装置及计算机可读存储介质
CN111919217B (zh) 生物特征注册的方法、装置、用户设备及存储介质
CN108206892B (zh) 联系人隐私的保护方法、装置、移动终端及存储介质
WO2019011106A1 (zh) 状态控制方法及相关产品
WO2019062347A1 (zh) 人脸识别方法及相关产品
US10803159B2 (en) Electronic device and method for controlling the same
CN108154466A (zh) 图像处理方法及装置
WO2019024718A1 (zh) 防伪处理方法、防伪处理装置及电子设备
CN107977636B (zh) 人脸检测方法及装置、终端、存储介质
CN107545163B (zh) 解锁控制方法及相关产品
WO2021248382A1 (zh) 生物特征的验证方法及装置、电子设备及存储介质
WO2022000337A1 (zh) 生物特征融合方法及装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20942756

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20942756

Country of ref document: EP

Kind code of ref document: A1