WO2021072870A1 - Adversarial network-based fingerprint model generation method and related apparatus - Google Patents

Adversarial network-based fingerprint model generation method and related apparatus

Info

Publication number
WO2021072870A1
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint
model
machine learning
image
learning sub-model
Application number
PCT/CN2019/118092
Other languages
French (fr)
Chinese (zh)
Inventor
王义文
王健宗
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Publication of WO2021072870A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/2163 - Partitioning the feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/20 - Ensemble learning


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

An adversarial network-based fingerprint model generation method and apparatus, a computer device, and a storage medium. The adversarial network-based fingerprint model generation method comprises: inputting a fingerprint sample image into a first machine learning sub-model, which outputs a simulated fingerprint image (S121); inputting the simulated fingerprint image into a second machine learning sub-model, which outputs a judgment result of whether its input is a simulated fingerprint image, so that the first machine learning sub-model adjusts its parameters according to the judgment result and the simulated fingerprint image it outputs becomes more similar to the fingerprint sample image (S122); and, if the recognition rate of the second machine learning sub-model reaches a predetermined recognition threshold, outputting the simulated fingerprint image as a fingerprint model (S124). A fingerprint model with high similarity to real fingerprints can thus be synthesized even in the absence of a fingerprint database.

Description

Adversarial network-based fingerprint model generation method and related apparatus
This application claims priority to Chinese patent application No. 201910979602.9, filed on October 25, 2019 and entitled "Adversarial network-based fingerprint model generation method and related apparatus", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of machine learning, and in particular to an adversarial network-based fingerprint model generation method, apparatus, computer device, and storage medium.
Background
With the development of technology, fingerprints, as a biometric feature, are used ever more widely in scenarios such as forensics, airports, and smartphones. Research on fingerprint-related technology is therefore increasingly active, yet large collections of fingerprint samples dedicated to research are scarce, and collecting such samples at scale is particularly difficult. Generating fingerprints synthetically is also hard: every person's fingerprint has unique features and patterns, so producing a realistic synthetic fingerprint is difficult. Without the support of a fingerprint database, current synthetic fingerprints bear little resemblance to real fingerprints and are not suitable for research or similar uses.
Summary
On this basis, in order to solve the technical problem in the related art that synthetic fingerprints have low similarity to real fingerprints, this application provides an adversarial network-based fingerprint model generation method, apparatus, computer device, and storage medium.
In one aspect, an adversarial network-based fingerprint model generation method includes: acquiring a fingerprint sample image; and inputting the fingerprint sample image into a machine learning model, which outputs a generated fingerprint model. The machine learning model includes a first machine learning sub-model and a second machine learning sub-model, and inputting the fingerprint sample image into the machine learning model and outputting the generated fingerprint model includes: inputting the fingerprint sample image into the first machine learning sub-model, which outputs a simulated fingerprint image; inputting the simulated fingerprint image into the second machine learning sub-model, which outputs a judgment of whether its input is a simulated fingerprint image, so that the first machine learning sub-model adjusts its parameters according to the judgment and the simulated fingerprint images it outputs become more similar to the fingerprint sample image; calculating the recognition rate of the second machine learning sub-model, where the recognition rate is the proportion, among all judgments output by the second machine learning sub-model, of positive samples judged to be simulated fingerprint images and negative samples judged not to be simulated fingerprint images; and, if the recognition rate of the second machine learning sub-model reaches a predetermined recognition threshold, outputting the simulated fingerprint image as the generated fingerprint model.
In another aspect, an adversarial network-based fingerprint model generation apparatus includes: a sample image acquisition unit, configured to acquire a fingerprint sample image; and a machine learning output unit, configured to input the fingerprint sample image into a machine learning model, which outputs a generated fingerprint model. The machine learning model includes a first machine learning sub-model and a second machine learning sub-model, and the machine learning output unit includes: a simulated image output unit, configured to input the fingerprint sample image into the first machine learning sub-model, which outputs a simulated fingerprint image; a judgment result output unit, configured to input the simulated fingerprint image into the second machine learning sub-model, which outputs a judgment of whether its input is a simulated fingerprint image; a recognition rate calculation unit, configured to calculate the recognition rate of the second machine learning sub-model, where the recognition rate is the proportion, among all judgments output by the second machine learning sub-model, of positive samples judged to be simulated fingerprint images and negative samples judged not to be simulated fingerprint images; and a fingerprint model output unit, configured to output the simulated fingerprint image as the generated fingerprint model if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold.
In another aspect, an adversarial network-based fingerprint model generation apparatus includes a processor and a memory storing computer-readable instructions which, when executed by the processor, implement the adversarial network-based fingerprint model generation method described above.
In another aspect, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the adversarial network-based fingerprint model generation method described above.
In the adversarial network-based fingerprint model generation method, apparatus, computer device, and storage medium described above, the fingerprint sample image is input into a machine learning model, which simulates and outputs a generated fingerprint model based on the fingerprint sample image. The machine learning model includes a first machine learning sub-model and a second machine learning sub-model. When the machine learning model generates the fingerprint model from the fingerprint sample image, the fingerprint sample image is first input into the first machine learning sub-model, which outputs a simulated fingerprint image; the simulated fingerprint image is then input into the second machine learning sub-model, which outputs a judgment of whether its input is a simulated fingerprint image, so that the first machine learning sub-model adjusts its parameters according to the judgment and the simulated fingerprint images it outputs become more similar to the fingerprint sample image. In this way, the outputs of the first and second machine learning sub-models influence the accuracy of each other's outputs, forming adversarial learning, so that the fingerprint model output by the machine learning model comes ever closer to a real fingerprint image. When the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, which shows that the machine learning model has been trained well enough, the simulated fingerprint image is output as the fingerprint model. In this way, a fingerprint model with high similarity to real fingerprints can be synthesized even when a fingerprint database is lacking.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit this application.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the specification, serve to explain the principles of this application.
Fig. 1 is a flowchart of an adversarial network-based fingerprint model generation method according to an exemplary embodiment.
Fig. 2 is a flowchart of a specific implementation of step S120 in the adversarial network-based fingerprint model generation method according to the embodiment corresponding to Fig. 1.
Fig. 3 is a flowchart of a specific implementation of training the machine learning model in the adversarial network-based fingerprint model generation method according to the embodiment corresponding to Fig. 1.
Fig. 4 is a block diagram of an adversarial network-based fingerprint model generation apparatus according to an exemplary embodiment.
Fig. 5 schematically shows an example block diagram of a computer device for implementing the above adversarial network-based fingerprint model generation method.
Fig. 6 schematically shows a computer-readable storage medium for implementing the above adversarial network-based fingerprint model generation method.
Fig. 7 is a diagram of an implementation environment of the adversarial network-based fingerprint model generation method provided in an embodiment.
The above drawings show specific embodiments of this application, which are described in more detail below. These drawings and written descriptions are not intended to limit the scope of the concepts of this application in any way, but rather to explain the concepts of this application to those skilled in the art by reference to specific embodiments.
Detailed Description
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application as detailed in the appended claims.
Fig. 7 is a diagram of an implementation environment of the adversarial network-based fingerprint model generation method provided in an embodiment. As shown in Fig. 7, the implementation environment includes a computer device 100, which includes a fingerprint generator 101 and a fingerprint discriminator 102.
The computer device 100 is a fingerprint generation system device, for example a computer or server used by maintenance personnel of the fingerprint generation system; the fingerprint generator 101 and the fingerprint discriminator 102 are sub-modules inside it. After the computer device 100 obtains a fingerprint sample image, it inputs the image into the fingerprint generator 101, which generates a simulated fingerprint image. The simulated fingerprint image is then input into the fingerprint discriminator 102, which outputs a judgment of whether its input is a simulated fingerprint image, and the fingerprint generator 101 adjusts its parameters according to this judgment so that the simulated fingerprint images it outputs become more similar to the fingerprint sample image. Finally, the recognition rate of the fingerprint discriminator 102 is calculated, where the recognition rate is the proportion, among all judgments output by the discriminator, of positive samples judged to be simulated fingerprint images and negative samples judged not to be simulated fingerprint images. If the recognition rate reaches the predetermined recognition threshold, the simulated fingerprint image is output as the generated fingerprint model.
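The generator and discriminator described above can be pictured with the minimal sketch below. The PyTorch framework, the layer shapes, and the class names are assumptions made for illustration only; the disclosure does not prescribe a particular network architecture.

```python
# Minimal sketch of the two sub-models (assumption: PyTorch, 1x128x128 grayscale inputs).
import torch
import torch.nn as nn

class FingerprintGenerator(nn.Module):
    """First machine learning sub-model: maps a fingerprint sample image
    (a sketch or partial print) to a simulated fingerprint image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class FingerprintDiscriminator(nn.Module):
    """Second machine learning sub-model: outputs, for each input image, the
    probability that it is a simulated (generated) fingerprint image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```

In this picture, the fingerprint generator 101 corresponds to FingerprintGenerator and the fingerprint discriminator 102 to FingerprintDiscriminator.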
It should be noted that the computer device 100 may be a smartphone, a tablet computer, a notebook computer, a desktop computer, or the like, but is not limited thereto. The computer device 100 may be connected via Bluetooth, USB (Universal Serial Bus), or other communication connection methods, which is not limited here.
As shown in Fig. 1, in one embodiment, an adversarial network-based fingerprint model generation method is proposed. The method can be applied to a fingerprint generation device and may specifically include the following steps. Step S110: acquire a fingerprint sample image. In one embodiment, the fingerprint sample image includes real fingerprint images and fingerprint sketches, and step S110 may include: acquiring a complete or an incomplete real fingerprint image; and acquiring a fingerprint sketch consisting entirely of closed curves or a fingerprint sketch containing both closed and non-closed curves. The real fingerprint image is an actual fingerprint sample, and the fingerprint sketch is a preliminary simulated fingerprint pattern. Feeding a preliminary simulated fingerprint pattern into the machine learning model lets the model build on it, so that a fingerprint model is generated better and faster than when generating directly from a blank canvas. When acquiring fingerprint sample images, the complete or incomplete real fingerprint image may be acquired first and the fingerprint sketch (all closed curves, or closed and non-closed curves) second; the sketch may be acquired first and the real fingerprint image second; or the real fingerprint image and the fingerprint sketch may be acquired at the same time, which is not limited in this application. In one embodiment, the fingerprint sample image includes real fingerprint images and fingerprint sketches, and step S110 may include: obtaining the real fingerprint image from a fingerprint input device or retrieving it from a fingerprint library; and obtaining the fingerprint sketch from a fingerprint input device, producing it with drawing software, or generating it automatically. During fingerprint collection, some of the collected fingerprints may be incomplete, and a complete fingerprint image then has to be simulated and restored from the incomplete partial fingerprint image; inputting incomplete real fingerprint images as samples into the fingerprint generator therefore drives the generator to learn the ability to restore fingerprints. Step S120: input the fingerprint sample image into a machine learning model, and the machine learning model outputs the generated fingerprint model.
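As an illustration of step S110, a sample-loading sketch is given below. The directory names, file format, and helper function are hypothetical, since the disclosure does not prescribe how the real fingerprint images and the fingerprint sketches are stored.

```python
# Minimal sketch of step S110 (assumption: samples are stored as grayscale PNG files
# under hypothetical directories; paths and helper name are illustrative).
from pathlib import Path
import numpy as np
from PIL import Image
import torch

def load_fingerprint_samples(real_dir="samples/real", sketch_dir="samples/sketch"):
    """Return (real prints, fingerprint sketches) as lists of 1xHxW float tensors in [0, 1].
    Real prints may be complete or partial; sketches may contain closed and non-closed curves."""
    def load_dir(d):
        tensors = []
        for p in sorted(Path(d).glob("*.png")):
            arr = np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
            tensors.append(torch.from_numpy(arr).unsqueeze(0))  # add a channel dimension
        return tensors
    return load_dir(real_dir), load_dir(sketch_dir)
```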
Fig. 2 shows the details of step S120 in the adversarial network-based fingerprint model generation method according to the embodiment corresponding to Fig. 1. In this method, the machine learning model includes a first machine learning sub-model and a second machine learning sub-model, and step S120 may include the following steps. Step S121: input the fingerprint sample image into the first machine learning sub-model, which outputs a simulated fingerprint image; the simulated fingerprint image is a fingerprint image generated by simulation on the basis of the fingerprint sample image and according to the feature distribution of the fingerprint sample image. Step S122: input the simulated fingerprint image into the second machine learning sub-model, which outputs a judgment of whether its input is a simulated fingerprint image, so that the first machine learning sub-model adjusts its parameters according to the judgment and the simulated fingerprint images it outputs become more similar to the fingerprint sample image. Step S123: calculate the recognition rate of the second machine learning sub-model, where the recognition rate is the proportion, among all judgments output by the second machine learning sub-model, of positive samples judged to be simulated fingerprint images and negative samples judged not to be simulated fingerprint images. Step S124: if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, output the simulated fingerprint image as the generated fingerprint model.

In one embodiment, the machine learning model may be trained as follows. The fingerprint sample image is input into the first machine learning sub-model, which outputs a simulated fingerprint image; the simulated fingerprint image is then input into the second machine learning sub-model, which outputs a judgment of whether its input is a simulated fingerprint image, and the second machine learning sub-model is trained on these judgments. When the recognition rate of the second machine learning sub-model for simulated fingerprint images reaches a first preset threshold, training of the second machine learning sub-model stops; this phase improves the second sub-model's ability to judge simulated fingerprint images, so that once the first preset threshold is reached its judgments are highly accurate. Simulated fingerprint images then continue to be fed into the second machine learning sub-model, which feeds its judgments back to the first machine learning sub-model; based on this feedback, the first sub-model adjusts its own parameters to increase the similarity between the simulated fingerprint images it outputs and the fingerprint sample image. As the second sub-model keeps feeding back its judgments and the first sub-model keeps adjusting its parameters, the recognition rate of the second sub-model for simulated fingerprint images falls as the similarity between the simulated fingerprint images and the fingerprint sample image rises. When the recognition rate of the second machine learning sub-model for the simulated fingerprint images output by the first machine learning sub-model falls to a second preset threshold, training of the first machine learning sub-model stops; at this point, the similarity between the simulated fingerprint image and the fingerprint sample image has reached a high level. The simulated fingerprint images then continue to be fed into the second machine learning sub-model, and the training of the second sub-model and the training of the first sub-model are repeated in turn until the recognition rate of the second sub-model for the simulated fingerprint images output by the first sub-model has fallen to the predetermined recognition threshold and cannot be improved further; the whole training process then ends and the fingerprint model is obtained. Through the adversarial learning between the first and second machine learning sub-models, which together form an adversarial network, the similarity of the first sub-model's simulated fingerprint images to the fingerprint sample image and the judgment accuracy of the second sub-model each keep rising as the other improves, until both reach a high level.
In one embodiment, as shown in Fig. 3, the machine learning model is trained as follows. Step S101: input the fingerprint sample image into the first machine learning sub-model, which outputs a simulated fingerprint image. Step S102: take this simulated fingerprint image as a positive sample and the fingerprint sample image as a negative sample to form a first fingerprint image sample set. Step S103: input each fingerprint image sample in the first fingerprint image sample set one by one into the second machine learning sub-model for learning; the second machine learning sub-model outputs a judgment of whether its input is a simulated fingerprint image, and if it judges a positive sample not to be a simulated fingerprint image, or judges a negative sample to be a simulated fingerprint image, adjust the first machine learning sub-model so that the second machine learning sub-model outputs the opposite judgment. Step S104: input the judgment output by the second machine learning sub-model (whether the input is a simulated fingerprint image) into the first machine learning sub-model, so that the first machine learning sub-model adjusts itself according to this judgment and the similarity between the simulated fingerprint images it outputs and the fingerprint sample image increases. Step S105: calculate the recognition rate of the second machine learning sub-model. Step S106: if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, output the simulated fingerprint image as the generated fingerprint model.
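The recognition rate used in steps S105 and S123 can be computed as in the sketch below, assuming that the second sub-model returns, for each image, the probability that it is a simulated image and that 0.5 is used as the decision boundary; both are assumptions of this sketch.

```python
# Minimal sketch of the recognition-rate computation (assumption: PyTorch, 0.5 decision boundary).
import torch

@torch.no_grad()
def recognition_rate(D, simulated_images, real_images):
    """Fraction of correct judgments: positive samples (simulated images) judged
    'simulated' plus negative samples (real images) judged 'not simulated',
    divided by the total number of judgments output by D."""
    pos_correct = (D(simulated_images) >= 0.5).sum().item()
    neg_correct = (D(real_images) < 0.5).sum().item()
    total = len(simulated_images) + len(real_images)
    return (pos_correct + neg_correct) / total
```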
In one embodiment, the machine learning model is trained on the basis of maximizing the value of a loss function, where the loss function is:
$$L_{GAN} = \mathbb{E}_{x \sim P_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$
where L_GAN is the value of the loss function, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~P_data(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input into the first machine learning sub-model, G(z) is the simulated fingerprint image output by the first machine learning sub-model, D(G(z)) is the result of feeding the simulated fingerprint image into the second machine learning sub-model, and E_{z~P_z(z)} is the expectation of the loss with respect to the first machine learning sub-model.
A loss function is a function that measures how much the data produced by a model differs from the actual data, that is, a function used to evaluate how good the model is. Here it measures the degree of difference between the simulated fingerprint generated by the fingerprint generator and the real fingerprint image. When the fingerprint generator evaluates this difference, the smaller the difference (the loss function value), the higher the similarity between the generated simulated fingerprint and the real fingerprint image; the fingerprint generator is therefore trained on the basis of minimizing the value of the loss function.
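A direct reading of the loss formula above might be implemented as in the following sketch, which assumes PyTorch, replaces the expectations with batch means, and clamps the discriminator output for numerical stability; these are choices of the sketch, not of the disclosure.

```python
# Minimal sketch of L_GAN (assumption: D returns the score used in the formula).
import torch

def gan_loss(D, G, real_x, z, eps=1e-8):
    """L_GAN = E_x[log D(x)] + E_z[log(1 - D(G(z)))]; the discriminator is trained
    to maximise this value and the generator to minimise it."""
    d_real = D(real_x).clamp(eps, 1 - eps)
    d_fake = D(G(z)).clamp(eps, 1 - eps)
    return torch.log(d_real).mean() + torch.log(1 - d_fake).mean()
```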
In one embodiment, the specific steps by which the first machine learning sub-model outputs the simulated fingerprint image may include: obtaining the pixel change values of adjacent regions in the fingerprint sample image and determining whether a pixel change value exceeds a preset change threshold; and, if it does, adjusting the pixels of that adjacent region in the fingerprint sample image. If the pixel change in an adjacent region of the fingerprint sample image is large, the pixels of that region are not smooth. By calculating the amount of change between pixels of adjacent regions in the fingerprint sample image and fine-tuning the pixels of those regions, the image is kept smooth. Specifically, the change value is compared with the preset change threshold; if the change value is greater than the preset change threshold, the pixels of the region are discontinuous and therefore need adjusting.
In one embodiment, the change values of the individual pixels in an adjacent region of the fingerprint sample image can be obtained and summed to give the total change value of that adjacent region, and this total change value is fed back into the loss function so that the pixels of the adjacent region are adjusted and the fingerprint generator generates a smoother fingerprint image.
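The adjacent-pixel check described here might look like the sketch below, which assumes the image is a 2-D tensor of pixel values and takes the change of an adjacent region to be the absolute difference between neighbouring pixels; the threshold value is illustrative.

```python
# Minimal sketch of the smoothness check (assumption: PyTorch, 2-D image tensor).
import torch

def needs_smoothing(img, change_threshold=0.2):
    """Return (total_change, flag): the summed absolute change between horizontally
    and vertically adjacent pixels, and whether it exceeds the preset threshold,
    in which case the region should be adjusted (fed back through the loss)."""
    dh = (img[:, 1:] - img[:, :-1]).abs().sum()
    dv = (img[1:, :] - img[:-1, :]).abs().sum()
    total_change = (dh + dv).item()
    return total_change, total_change > change_threshold
```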
When the pixel change values and their sum are computed from a one-dimensional view, the sum of the pixel change values over the adjacent region is:
$$TV(G(z)) = \sum_{n}\left|y_{n+1} - y_{n}\right|$$
Adding the sum of the pixel change values of the adjacent region to the loss function gives:
$$L_{GAN\text{-}TV} = \mathbb{E}_{x \sim P_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z(z)}\left[\log\left(1 - D(G(z))\right)\right] + \lambda\, TV(G(z))$$
where y_{n+1} - y_{n} is the change value between two adjacent pixels, the sum over n is the total of the change values of all pairs of adjacent pixels in the adjacent region, and λ is a constant coefficient.
When the pixel change values and the sum of all pixel change values in the adjacent region are computed from a two-dimensional view, the sum of the pixel change values over the adjacent region is:
$$TV(G(z)) = \sum_{i,j}\left(\left|y_{i+1,j} - y_{i,j}\right| + \left|y_{i,j+1} - y_{i,j}\right|\right)$$
Adding the sum of the pixel change values of the adjacent region to the loss function gives:
$$L_{GAN\text{-}TV} = \mathbb{E}_{x \sim P_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z(z)}\left[\log\left(1 - D(G(z))\right)\right] + \lambda\, TV(G(z))$$
where y_{i+1,j} - y_{i,j} is the change value between two adjacent pixels in the i direction of the two-dimensional plane, y_{i,j+1} - y_{i,j} is the change value between two adjacent pixels in the j direction, |y_{i+1,j} - y_{i,j}| + |y_{i,j+1} - y_{i,j}| is the sum of the absolute pixel change values of two adjacent pixels in the i and j directions, the sum over i, j is the total of these sums over all pairs of adjacent pixels in the adjacent region, and λ is a constant coefficient.
Evaluating the change values between pixels in both the horizontal and vertical directions in this way makes the optimized simulated fingerprint image smoother at every angle.
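Combining the two-dimensional total-variation term with the GAN loss, a sketch of L_GAN-TV might look as follows; the weight λ, the batch handling, and the clamping constant are assumptions of the sketch.

```python
# Minimal sketch of L_GAN-TV (assumption: PyTorch; fake images are a batch of 1xHxW tensors).
import torch

def total_variation(img):
    """TV for a 2-D image: sum over i,j of |y[i+1,j]-y[i,j]| + |y[i,j+1]-y[i,j]|."""
    return (img[1:, :] - img[:-1, :]).abs().sum() + (img[:, 1:] - img[:, :-1]).abs().sum()

def gan_tv_loss(D, G, real_x, z, lam=1e-4, eps=1e-8):
    """L_GAN-TV = E_x[log D(x)] + E_z[log(1 - D(G(z)))] + lambda * TV(G(z))."""
    fake = G(z)
    d_real = D(real_x).clamp(eps, 1 - eps)
    d_fake = D(fake).clamp(eps, 1 - eps)
    gan = torch.log(d_real).mean() + torch.log(1 - d_fake).mean()
    tv = torch.stack([total_variation(img.squeeze(0)) for img in fake]).mean()
    return gan + lam * tv
```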
In one embodiment, the formula by which the machine learning model outputs the fingerprint model is:
$$G^{*} = \arg\min_{G}\max_{D} L_{GAN}$$
where G* denotes the first machine learning sub-model, min over G denotes minimizing with respect to the first machine learning sub-model, max over D denotes maximizing with respect to the second machine learning sub-model, and L_GAN denotes the loss function.
As shown in Fig. 4, in one embodiment, an adversarial network-based fingerprint model generation apparatus is provided. The apparatus can be integrated into the fingerprint generation device described above and may specifically include a sample image acquisition unit 110 and a machine learning output unit 120. The sample image acquisition unit 110 is configured to acquire a fingerprint sample image. The machine learning output unit 120 is configured to input the fingerprint sample image into a machine learning model, which outputs a fingerprint model. The machine learning model includes a first machine learning sub-model and a second machine learning sub-model, and the machine learning output unit 120 includes: a simulated image output unit 121, configured to input the fingerprint sample image into the first machine learning sub-model, which outputs a simulated fingerprint image; a judgment result output unit 122, configured to input the simulated fingerprint image into the second machine learning sub-model, which outputs a judgment of whether its input is a simulated fingerprint image; a recognition rate calculation unit 123, configured to calculate the recognition rate of the second machine learning sub-model, where the recognition rate is the proportion, among all judgments output by the second machine learning sub-model, of positive samples judged to be simulated fingerprint images and negative samples judged not to be simulated fingerprint images; and a fingerprint model output unit 124, configured to output the simulated fingerprint image as a fingerprint model if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold. For the implementation of the functions of the modules in the above apparatus, see the implementation of the corresponding steps in the adversarial network-based fingerprint model generation method described above, which is not repeated here.

It should be noted that, although several modules or units of the device for performing actions are mentioned in the detailed description above, this division is not mandatory. In fact, according to the embodiments of this application, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit, and conversely the features and functions of one module or unit described above may be further divided among multiple modules or units. In addition, although the steps of the method of this application are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the illustrated steps must be performed, to achieve the desired result; additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps. From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented in software, or in software combined with the necessary hardware. The technical solution according to the embodiments of this application may therefore be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network and which includes several instructions that cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, or the like) to execute the method according to the embodiments of this application.
In an exemplary embodiment of this application, a computer device capable of implementing the above method is also provided. Those skilled in the art will understand that the various aspects of this application may be implemented as a system, a method, or a program product, and may therefore take the form of an entirely hardware implementation, an entirely software implementation (including firmware, microcode, and the like), or an implementation combining hardware and software, which may collectively be referred to here as a "circuit", "module", or "system". A computer device 500 according to this embodiment of the application is described below with reference to Fig. 5. The computer device 500 shown in Fig. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of this application. As shown in Fig. 5, the computer device 500 takes the form of a general-purpose computing device. Its components may include, but are not limited to, at least one processing unit 510, at least one storage unit 520, and a bus 530 connecting the different system components (including the storage unit 520 and the processing unit 510). The storage unit stores program code that can be executed by the processing unit 510, causing the processing unit 510 to perform the steps according to the various exemplary embodiments of this application described in the "Exemplary Method" section of this specification. For example, the processing unit 510 may perform step S110 shown in Fig. 1, acquiring a fingerprint sample image, and step S120, inputting the fingerprint sample image into a machine learning model, which outputs a fingerprint model.

The storage unit 520 may include readable media in the form of volatile storage units, such as a random access memory (RAM) 5201 and/or a cache 5202, and may further include a read-only memory (ROM) 5203. The storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which, or some combination of which, may include an implementation of a network environment. The bus 530 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures. The computer device 500 may also communicate with one or more external devices 700 (such as a keyboard, a pointing device, or a Bluetooth device), with one or more devices that enable a user to interact with the computer device 500, and/or with any device (such as a router or a modem) that enables the computer device 500 to communicate with one or more other computing devices; such communication may take place through an input/output (I/O) interface 550. The computer device 500 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 560; as shown in the figure, the network adapter 560 communicates with the other modules of the computer device 500 through the bus 530. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computer device 500, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems. From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented in software, or in software combined with the necessary hardware. The technical solution according to the embodiments of this application may therefore be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network and which includes several instructions that cause a computing device (which may be a personal computer, a server, a terminal apparatus, or a network device) to execute the method according to the embodiments of this application. In an exemplary embodiment of this application, a computer-readable storage medium is also provided, on which a program product capable of implementing the above method of this specification is stored. In some possible implementations, the various aspects of this application may also be implemented in the form of a program product including program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to the various exemplary embodiments of this application described in the "Exemplary Method" section of this specification.
Referring to Fig. 6, a program product 600 for implementing the above method according to an embodiment of this application is described. It may take the form of a portable compact disc read-only memory (CD-ROM) containing program code and may run on a terminal device, for example a personal computer. The program product of this application is not limited to this, however; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in combination with, an instruction execution system, apparatus, or device. The above computer-readable storage medium may be a non-volatile readable storage medium, for example one stored on a CD-ROM, a USB flash drive, or a removable hard disk, and may include several instructions that cause a computing device (which may be a personal computer, a server, a terminal apparatus, or a network device) to execute the method according to the embodiments of this application. The program product may use any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave and carrying readable program code; such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.

A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on a readable medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the above. Program code for carrying out the operations of this application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computing device (for example, through the Internet using an Internet service provider). In addition, the above drawings are merely schematic illustrations of the processing included in the methods according to the exemplary embodiments of this application and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the chronological order of that processing, and that the processing may, for example, be performed synchronously or asynchronously in multiple modules.
The above content is merely a preferred exemplary embodiment of the present application and is not intended to limit its implementation. Those of ordinary skill in the art can readily make corresponding variations or modifications based on the main concept and spirit of the present application, so the protection scope of the present application shall be determined by the scope claimed in the claims.

Claims (22)

  1. A fingerprint model generation method based on an adversarial network, comprising:
    acquiring a fingerprint sample image;
    inputting the fingerprint sample image into a machine learning model, the machine learning model outputting a generated fingerprint model;
    wherein the machine learning model comprises a first machine learning sub-model and a second machine learning sub-model, and inputting the fingerprint sample image into the machine learning model and outputting the generated fingerprint model comprises:
    inputting the fingerprint sample image into the first machine learning sub-model, the first machine learning sub-model outputting a fingerprint simulation image;
    inputting the fingerprint simulation image into the second machine learning sub-model, the second machine learning sub-model outputting a judgment result of whether its input is a fingerprint simulation image, so that the first machine learning sub-model adjusts its parameters according to the judgment result and the fingerprint simulation image it outputs becomes more similar to the fingerprint sample image;
    calculating a recognition rate of the second machine learning sub-model, wherein the recognition rate is the proportion, among all judgment results output by the second machine learning sub-model, of judgments that a positive sample is a fingerprint simulation image and judgments that a negative sample is not a fingerprint simulation image;
    if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, outputting the fingerprint simulation image as the generated fingerprint model.
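A minimal NumPy sketch of the recognition-rate computation described above; the arrays `judgments` and `labels`, the function name `recognition_rate`, and the threshold value 0.6 are assumptions introduced for this illustration, not details taken from the application.

```python
import numpy as np

def recognition_rate(judgments: np.ndarray, labels: np.ndarray) -> float:
    """judgments[i] is True when the second sub-model says sample i is a fingerprint
    simulation image; labels[i] is True for a positive sample (a simulation image)
    and False for a negative sample (a real fingerprint image)."""
    correct_pos = np.logical_and(judgments, labels)     # positives judged "is a simulation image"
    correct_neg = np.logical_and(~judgments, ~labels)   # negatives judged "is not a simulation image"
    return (correct_pos.sum() + correct_neg.sum()) / len(judgments)

# Example: stop once the rate reaches a predetermined recognition threshold.
judgments = np.array([True, True, False, False, True])
labels    = np.array([True, False, False, True, True])
if recognition_rate(judgments, labels) >= 0.6:   # 0.6 stands in for the predetermined threshold
    print("output the fingerprint simulation image as the generated fingerprint model")
```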
  2. The method of claim 1, wherein the machine learning model is trained as follows:
    inputting the fingerprint sample image into the first machine learning sub-model, the first machine learning sub-model outputting a fingerprint simulation image;
    taking the first-pass fingerprint simulation image as a positive sample and the fingerprint sample image as a negative sample to form a first fingerprint image sample set;
    inputting each fingerprint image sample in the first fingerprint image sample set one by one into the second machine learning sub-model for learning, the second machine learning sub-model outputting a judgment result of whether its input is a fingerprint simulation image; if the judgment output for a positive sample is that it is not a fingerprint simulation image, or the judgment output for a negative sample is that it is a fingerprint simulation image, adjusting the first machine learning sub-model so that the second machine learning sub-model outputs the opposite judgment result;
    inputting the judgment results output by the second machine learning sub-model into the first machine learning sub-model, so that the first machine learning sub-model is adjusted according to those judgment results and the similarity between the fingerprint simulation image it outputs and the fingerprint sample image increases;
    calculating the recognition rate of the second machine learning sub-model;
    if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, outputting the fingerprint simulation image as the generated fingerprint model.
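A schematic PyTorch sketch of the alternating training flow just described (positive samples are simulation images, negative samples are real fingerprint images). It is a sketch only: the tiny fully connected generator and discriminator, the Adam settings, the binary cross-entropy objective used in place of the application's maximized loss, and the stopping threshold are all assumptions, not the concrete architecture of the application.

```python
import torch
import torch.nn as nn

dim = 64 * 64  # assumed flattened fingerprint image size
G = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim), nn.Tanh())
D = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_round(real_batch: torch.Tensor, threshold: float = 0.95) -> bool:
    """One pass over a batch of real fingerprint sample images.
    Returns True when the second sub-model's recognition rate reaches the threshold."""
    fake_batch = G(real_batch)  # first sub-model outputs fingerprint simulation images

    # Second sub-model learns to judge "is a simulation image" (1) vs "is not" (0).
    opt_d.zero_grad()
    d_loss = bce(D(fake_batch.detach()), torch.ones(len(real_batch), 1)) + \
             bce(D(real_batch), torch.zeros(len(real_batch), 1))
    d_loss.backward()
    opt_d.step()

    # First sub-model is adjusted using the judgment results so that its output
    # is judged "not a simulation image", i.e. grows more similar to the samples.
    opt_g.zero_grad()
    g_loss = bce(D(fake_batch), torch.zeros(len(real_batch), 1))
    g_loss.backward()
    opt_g.step()

    # Recognition rate of the second sub-model over this round's judgments.
    with torch.no_grad():
        correct = (D(G(real_batch)) > 0.5).float().mean() + (D(real_batch) <= 0.5).float().mean()
        rate = correct.item() / 2
    return rate >= threshold

print(training_round(torch.rand(16, dim)))
```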
  3. The method of claim 1, wherein the fingerprint sample image comprises a real fingerprint image and a fingerprint sketch, and acquiring the fingerprint sample image comprises:
    acquiring the real fingerprint image from a fingerprint input device or retrieving it from a fingerprint database;
    acquiring the fingerprint sketch from a fingerprint input device, producing it with drawing software, or generating it automatically.
  4. The method of claim 1, wherein the fingerprint sample image comprises a real fingerprint image and a fingerprint sketch, and acquiring the fingerprint sample image comprises:
    acquiring a complete real fingerprint image or an incomplete real fingerprint image;
    acquiring a fingerprint sketch consisting entirely of closed curves, or a fingerprint sketch containing both closed curves and non-closed curves.
  5. The method of claim 1, wherein the machine learning model is trained by maximizing the value of a loss function, the loss function to be maximized being:
    L_GAN = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))]
    where L_GAN is the value of the loss function to be maximized, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~Pdata(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input to the first machine learning sub-model, G(z) is the fingerprint simulation image output by the first machine learning sub-model, D(G(z)) is the processing function that inputs the fingerprint simulation image into the second machine learning sub-model, and E_{z~Pz(z)} is the expectation of the loss with respect to the first machine learning sub-model.
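A small NumPy sketch of evaluating L_GAN from the formula above, given discriminator outputs. The arrays `d_real` (values of D(x) over real fingerprint images) and `d_fake` (values of D(G(z)) over simulation images), the function name, and the numeric example are assumptions introduced for this illustration.

```python
import numpy as np

def gan_loss(d_real: np.ndarray, d_fake: np.ndarray, eps: float = 1e-8) -> float:
    # E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))]
    return float(np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps)))

print(gan_loss(np.array([0.9, 0.8, 0.95]), np.array([0.1, 0.2, 0.05])))
```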
  6. The method of claim 1, wherein the step of the first machine learning sub-model outputting a fingerprint simulation image comprises:
    acquiring pixel change values of adjacent regions in the fingerprint sample image, and judging whether the pixel change value is less than a preset change threshold;
    if the pixel change value is less than the preset change threshold, adjusting the pixels of the adjacent regions in the fingerprint sample image.
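A toy NumPy sketch of the decision just described: compare the pixel change value between adjacent regions against a preset change threshold before adjusting. The region size, the mean-absolute-difference reading of "pixel change value", and the threshold are assumptions for this illustration.

```python
import numpy as np

def needs_adjustment(region_a: np.ndarray, region_b: np.ndarray, threshold: float = 0.05) -> bool:
    change = float(np.abs(region_a - region_b).mean())  # pixel change value between adjacent regions
    return change < threshold                           # adjust only when the change is below the preset threshold

print(needs_adjustment(np.full((8, 8), 0.50), np.full((8, 8), 0.52)))
```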
  7. The method of claim 6, wherein adjusting the pixels of the adjacent regions in the fingerprint sample image comprises:
    acquiring the change value of each pixel of the adjacent regions in the fingerprint sample image;
    calculating the sum of the change values of the pixels of the adjacent regions to obtain the total change value of the pixels of the adjacent regions;
    feeding the total change value of the pixels of the adjacent regions back into a pixel loss function, and adjusting the pixels of the adjacent regions;
    wherein the pixel loss function is:
    L_GAN-TV = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))] + λTV(G(z))
    where L_GAN-TV is the value of the loss function, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~Pdata(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input to the first machine learning sub-model, G(z) is the fingerprint simulation image output by the first machine learning sub-model, D(G(z)) is the processing function that inputs the fingerprint simulation image into the second machine learning sub-model, E_{z~Pz(z)} is the expectation of the loss with respect to the first machine learning sub-model, TV(G(z)) is the total change value of the pixels of the adjacent regions, and λ is a constant coefficient.
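An illustrative NumPy sketch of the pixel loss above: the plain GAN term plus λTV(G(z)), where TV sums the change values between adjacent pixels of the generated image. The λ value, the example arrays, and the specific absolute-difference form of TV are assumptions for this illustration.

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    # Sum of absolute changes between vertically and horizontally adjacent pixels.
    dv = np.abs(np.diff(img, axis=0)).sum()
    dh = np.abs(np.diff(img, axis=1)).sum()
    return float(dv + dh)

def pixel_loss(d_real, d_fake, generated_img, lam=1e-4, eps=1e-8):
    gan_term = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    return float(gan_term + lam * total_variation(generated_img))

img = np.random.rand(64, 64)  # stand-in for the generated fingerprint simulation image G(z)
print(pixel_loss(np.array([0.9, 0.8]), np.array([0.1, 0.2]), img))
```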
  8. A fingerprint model generation apparatus based on an adversarial network, comprising:
    a sample image acquisition unit, configured to acquire a fingerprint sample image;
    a machine learning output unit, configured to input the fingerprint sample image into a machine learning model, the machine learning model outputting a generated fingerprint model;
    wherein the machine learning model comprises a first machine learning sub-model and a second machine learning sub-model, and the machine learning output unit comprises:
    a simulation image output unit, configured to input the fingerprint sample image into the first machine learning sub-model, the first machine learning sub-model outputting a fingerprint simulation image;
    a judgment result output unit, configured to input the fingerprint simulation image into the second machine learning sub-model, the second machine learning sub-model outputting a judgment result of whether its input is a fingerprint simulation image;
    a recognition rate calculation unit, configured to calculate a recognition rate of the second machine learning sub-model, wherein the recognition rate is the proportion, among all judgment results output by the second machine learning sub-model, of judgments that a positive sample is a fingerprint simulation image and judgments that a negative sample is not a fingerprint simulation image;
    a fingerprint model output unit, configured to output the fingerprint simulation image as the generated fingerprint model if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold.
  9. The apparatus of claim 8, wherein the machine learning model is trained as follows:
    inputting the fingerprint sample image into the first machine learning sub-model, the first machine learning sub-model outputting a fingerprint simulation image;
    taking the first-pass fingerprint simulation image as a positive sample and the fingerprint sample image as a negative sample to form a first fingerprint image sample set;
    inputting each fingerprint image sample in the first fingerprint image sample set one by one into the second machine learning sub-model for learning, the second machine learning sub-model outputting a judgment result of whether its input is a fingerprint simulation image; if the judgment output for a positive sample is that it is not a fingerprint simulation image, or the judgment output for a negative sample is that it is a fingerprint simulation image, adjusting the first machine learning sub-model so that the second machine learning sub-model outputs the opposite judgment result;
    inputting the judgment results output by the second machine learning sub-model into the first machine learning sub-model, so that the first machine learning sub-model is adjusted according to those judgment results and the similarity between the fingerprint simulation image it outputs and the fingerprint sample image increases;
    calculating the recognition rate of the second machine learning sub-model;
    if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, outputting the fingerprint simulation image as the generated fingerprint model.
  10. The apparatus of claim 8, wherein the fingerprint sample image comprises a real fingerprint image and a fingerprint sketch, and the sample image acquisition unit is configured to:
    acquire the real fingerprint image from a fingerprint input device or retrieve it from a fingerprint database;
    acquire the fingerprint sketch from a fingerprint input device, produce it with drawing software, or generate it automatically.
  11. The apparatus of claim 8, wherein the fingerprint sample image comprises a real fingerprint image and a fingerprint sketch, and the sample image acquisition unit is configured to:
    acquire a complete real fingerprint image or an incomplete real fingerprint image;
    acquire a fingerprint sketch consisting entirely of closed curves, or a fingerprint sketch containing both closed curves and non-closed curves.
  12. The apparatus of claim 8, wherein the machine learning output unit is configured to perform training by maximizing the value of a loss function, the loss function to be maximized being:
    L_GAN = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))]
    where L_GAN is the value of the loss function to be maximized, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~Pdata(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input to the first machine learning sub-model, G(z) is the fingerprint simulation image output by the first machine learning sub-model, D(G(z)) is the processing function that inputs the fingerprint simulation image into the second machine learning sub-model, and E_{z~Pz(z)} is the expectation of the loss with respect to the first machine learning sub-model.
  13. The apparatus of claim 8, wherein the simulation image output unit is configured to:
    acquire pixel change values of adjacent regions in the fingerprint sample image, and judge whether the pixel change value is less than a preset change threshold;
    if the pixel change value is less than the preset change threshold, adjust the pixels of the adjacent regions in the fingerprint sample image.
  14. The apparatus of claim 13, wherein adjusting the pixels of the adjacent regions in the fingerprint sample image comprises:
    acquiring the change value of each pixel of the adjacent regions in the fingerprint sample image;
    calculating the sum of the change values of the pixels of the adjacent regions to obtain the total change value of the pixels of the adjacent regions;
    feeding the total change value of the pixels of the adjacent regions back into a pixel loss function, and adjusting the pixels of the adjacent regions;
    wherein the pixel loss function is:
    L_GAN-TV = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))] + λTV(G(z))
    where L_GAN-TV is the value of the loss function, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~Pdata(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input to the first machine learning sub-model, G(z) is the fingerprint simulation image output by the first machine learning sub-model, D(G(z)) is the processing function that inputs the fingerprint simulation image into the second machine learning sub-model, E_{z~Pz(z)} is the expectation of the loss with respect to the first machine learning sub-model, TV(G(z)) is the total change value of the pixels of the adjacent regions, and λ is a constant coefficient.
  15. A computer device, comprising a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following processing:
    acquiring a fingerprint sample image;
    inputting the fingerprint sample image into a machine learning model, the machine learning model outputting a generated fingerprint model;
    wherein the machine learning model comprises a first machine learning sub-model and a second machine learning sub-model, and inputting the fingerprint sample image into the machine learning model and outputting the generated fingerprint model comprises:
    inputting the fingerprint sample image into the first machine learning sub-model, the first machine learning sub-model outputting a fingerprint simulation image;
    inputting the fingerprint simulation image into the second machine learning sub-model, the second machine learning sub-model outputting a judgment result of whether its input is a fingerprint simulation image, so that the first machine learning sub-model adjusts its parameters according to the judgment result and the fingerprint simulation image it outputs becomes more similar to the fingerprint sample image;
    calculating a recognition rate of the second machine learning sub-model, wherein the recognition rate is the proportion, among all judgment results output by the second machine learning sub-model, of judgments that a positive sample is a fingerprint simulation image and judgments that a negative sample is not a fingerprint simulation image;
    if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, outputting the fingerprint simulation image as the generated fingerprint model.
  16. The computer device of claim 15, wherein, when the computer-readable instructions are executed by the processor, the processor is further configured to perform the following processing:
    inputting the fingerprint sample image into the first machine learning sub-model, the first machine learning sub-model outputting a fingerprint simulation image;
    taking the first-pass fingerprint simulation image as a positive sample and the fingerprint sample image as a negative sample to form a first fingerprint image sample set;
    inputting each fingerprint image sample in the first fingerprint image sample set one by one into the second machine learning sub-model for learning, the second machine learning sub-model outputting a judgment result of whether its input is a fingerprint simulation image; if the judgment output for a positive sample is that it is not a fingerprint simulation image, or the judgment output for a negative sample is that it is a fingerprint simulation image, adjusting the first machine learning sub-model so that the second machine learning sub-model outputs the opposite judgment result;
    inputting the judgment results output by the second machine learning sub-model into the first machine learning sub-model, so that the first machine learning sub-model is adjusted according to those judgment results and the similarity between the fingerprint simulation image it outputs and the fingerprint sample image increases;
    calculating the recognition rate of the second machine learning sub-model;
    if the recognition rate of the second machine learning sub-model reaches the predetermined recognition threshold, outputting the fingerprint simulation image as the generated fingerprint model.
  17. The computer device of claim 15, wherein the fingerprint sample image comprises a real fingerprint image and a fingerprint sketch, and acquiring the fingerprint sample image comprises:
    acquiring the real fingerprint image from a fingerprint input device or retrieving it from a fingerprint database;
    acquiring the fingerprint sketch from a fingerprint input device, producing it with drawing software, or generating it automatically.
  18. The computer device of claim 15, wherein the fingerprint sample image comprises a real fingerprint image and a fingerprint sketch, and acquiring the fingerprint sample image comprises:
    acquiring a complete real fingerprint image or an incomplete real fingerprint image;
    acquiring a fingerprint sketch consisting entirely of closed curves, or a fingerprint sketch containing both closed curves and non-closed curves.
  19. The computer device of claim 15, wherein the machine learning model is trained by maximizing the value of a loss function, the loss function to be maximized being:
    L_GAN = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))]
    where L_GAN is the value of the loss function to be maximized, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~Pdata(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input to the first machine learning sub-model, G(z) is the fingerprint simulation image output by the first machine learning sub-model, D(G(z)) is the processing function that inputs the fingerprint simulation image into the second machine learning sub-model, and E_{z~Pz(z)} is the expectation of the loss with respect to the first machine learning sub-model.
  20. The computer device of claim 15, wherein the step of the first machine learning sub-model outputting a fingerprint simulation image comprises:
    acquiring pixel change values of adjacent regions in the fingerprint sample image, and judging whether the pixel change value is less than a preset change threshold;
    if the pixel change value is less than the preset change threshold, adjusting the pixels of the adjacent regions in the fingerprint sample image.
  21. The computer device of claim 20, wherein adjusting the pixels of the adjacent regions in the fingerprint sample image comprises:
    acquiring the change value of each pixel of the adjacent regions in the fingerprint sample image;
    calculating the sum of the change values of the pixels of the adjacent regions to obtain the total change value of the pixels of the adjacent regions;
    feeding the total change value of the pixels of the adjacent regions back into a pixel loss function, and adjusting the pixels of the adjacent regions;
    wherein the pixel loss function is:
    L_GAN-TV = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 - D(G(z)))] + λTV(G(z))
    where L_GAN-TV is the value of the loss function, x is a real fingerprint image, D is the second machine learning sub-model, E_{x~Pdata(x)} is the expectation of the loss with respect to the second machine learning sub-model, z is the fingerprint sample image input to the first machine learning sub-model, G(z) is the fingerprint simulation image output by the first machine learning sub-model, D(G(z)) is the processing function that inputs the fingerprint simulation image into the second machine learning sub-model, E_{z~Pz(z)} is the expectation of the loss with respect to the first machine learning sub-model, TV(G(z)) is the total change value of the pixels of the adjacent regions, and λ is a constant coefficient.
  22. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the processor is configured to execute the method according to any one of claims 1 to 7.
PCT/CN2019/118092 2019-10-15 2019-11-13 Adversarial network-based fingerprint model generation method and related apparatus WO2021072870A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910979602.9 2019-10-15
CN201910979602.9A CN110929564B (en) 2019-10-15 2019-10-15 Fingerprint model generation method and related device based on countermeasure network

Publications (1)

Publication Number Publication Date
WO2021072870A1 true WO2021072870A1 (en) 2021-04-22

Family

ID=69848923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118092 WO2021072870A1 (en) 2019-10-15 2019-11-13 Adversarial network-based fingerprint model generation method and related apparatus

Country Status (2)

Country Link
CN (1) CN110929564B (en)
WO (1) WO2021072870A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546848A (en) * 2022-10-26 2022-12-30 南京航空航天大学 Confrontation generation network training method, cross-device palmprint recognition method and system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639521B (en) * 2020-04-14 2023-12-01 天津极豪科技有限公司 Fingerprint synthesis method, fingerprint synthesis device, electronic equipment and computer readable storage medium
CN111563561A (en) 2020-07-13 2020-08-21 支付宝(杭州)信息技术有限公司 Fingerprint image processing method and device
CN114282566A (en) * 2020-12-18 2022-04-05 深圳阜时科技有限公司 Fingerprint stain removal model construction method and fingerprint identification sensor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932534A (en) * 2018-07-15 2018-12-04 瞿文政 A kind of Picture Generation Method generating confrontation network based on depth convolution
US20190286950A1 (en) * 2018-03-16 2019-09-19 Ebay Inc. Generating a digital image using a generative adversarial network
CN110309708A (en) * 2019-05-09 2019-10-08 北京尚文金泰教育科技有限公司 A kind of intelligent dermatoglyph acquisition classifying identification method neural network based
CN110321785A (en) * 2019-05-09 2019-10-11 北京尚文金泰教育科技有限公司 A method of introducing ResNet deep learning network struction dermatoglyph classification prediction model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469106B (en) * 2015-11-13 2018-06-05 广东欧珀移动通信有限公司 fingerprint identification method, device and terminal device
CN109886212A (en) * 2019-02-25 2019-06-14 清华大学 From the method and apparatus of rolling fingerprint synthesis fingerprint on site

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190286950A1 (en) * 2018-03-16 2019-09-19 Ebay Inc. Generating a digital image using a generative adversarial network
CN108932534A (en) * 2018-07-15 2018-12-04 瞿文政 A kind of Picture Generation Method generating confrontation network based on depth convolution
CN110309708A (en) * 2019-05-09 2019-10-08 北京尚文金泰教育科技有限公司 A kind of intelligent dermatoglyph acquisition classifying identification method neural network based
CN110321785A (en) * 2019-05-09 2019-10-11 北京尚文金泰教育科技有限公司 A method of introducing ResNet deep learning network struction dermatoglyph classification prediction model

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546848A (en) * 2022-10-26 2022-12-30 南京航空航天大学 Confrontation generation network training method, cross-device palmprint recognition method and system
CN115546848B (en) * 2022-10-26 2024-02-02 南京航空航天大学 Challenge generation network training method, cross-equipment palmprint recognition method and system

Also Published As

Publication number Publication date
CN110929564B (en) 2023-08-29
CN110929564A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
WO2021072870A1 (en) Adversarial network-based fingerprint model generation method and related apparatus
CN108898186B (en) Method and device for extracting image
CN107633218B (en) Method and apparatus for generating image
WO2020155907A1 (en) Method and apparatus for generating cartoon style conversion model
US20220414959A1 (en) Method for Training Virtual Image Generating Model and Method for Generating Virtual Image
JP2022058915A (en) Method and device for training image recognition model, method and device for recognizing image, electronic device, storage medium, and computer program
US20220147695A1 (en) Model training method and apparatus, font library establishment method and apparatus, and storage medium
US11355097B2 (en) Sample-efficient adaptive text-to-speech
WO2021237923A1 (en) Smart dubbing method and apparatus, computer device, and storage medium
CN110298319B (en) Image synthesis method and device
CN109189544B (en) Method and device for generating dial plate
WO2023050707A1 (en) Network model quantization method and apparatus, and computer device and storage medium
WO2024036847A1 (en) Image processing method and apparatus, and electronic device and storage medium
US20220148239A1 (en) Model training method and apparatus, font library establishment method and apparatus, device and storage medium
WO2020211573A1 (en) Method and device for processing image
WO2020207174A1 (en) Method and apparatus for generating quantized neural network
CN107240396B (en) Speaker self-adaptation method, device, equipment and storage medium
WO2021159669A1 (en) Secure system login method and apparatus, computer device, and storage medium
WO2022126904A1 (en) Voice conversion method and apparatus, computer device, and storage medium
WO2020006962A1 (en) Method and device for processing picture
CN111815748B (en) Animation processing method and device, storage medium and electronic equipment
JP2023169230A (en) Computer program, server device, terminal device, learned model, program generation method, and method
US20230096150A1 (en) Method and apparatus for determining echo, and storage medium
US10015618B1 (en) Incoherent idempotent ambisonics rendering
JP2022068146A (en) Method for annotating data, apparatus, storage medium, and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19949034

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19949034

Country of ref document: EP

Kind code of ref document: A1