CN112183208A - Identity recognition method and equipment


Info

Publication number
CN112183208A
CN112183208A (application CN202010881150.3A)
Authority
CN
China
Prior art keywords
model
template
sub
image
template sample
Prior art date
Legal status
Pending
Application number
CN202010881150.3A
Other languages
Chinese (zh)
Inventor
卢嘉勋
杨春春
邵云峰
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010881150.3A
Publication of CN112183208A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1365 Matching; Classification
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an identity recognition method and device, relating to the field of identity recognition. A terminal device trains a sub-model corresponding to a template sample group, so that the sub-model can be used to determine whether an image sample to be processed matches a template image in that template sample group, thereby improving identity recognition accuracy on the terminal device side. The specific scheme is as follows: the terminal device trains a first sub-model according to a plurality of collected reference image samples of identity recognition data, a first model, and a first template sample group; the terminal device collects an image sample to be processed; according to the first sub-model, it compares the image sample to be processed with the template images in the first template sample group one by one to obtain a first matching result; according to the first model, it compares the image sample to be processed with the template images outside the target template sample group one by one to obtain a second matching result; and it determines a target matching result according to the first matching result and the second matching result. The embodiment of the application is used in the process of matching identity recognition data.

Description

Identity recognition method and equipment
Technical Field
The embodiment of the application relates to the technical field of identity recognition, in particular to an identity recognition method and equipment.
Background
Fingerprints are a common biometric feature for identity recognition, and fingerprint recognition has been widely deployed on terminal devices in recent years. Fingerprint recognition technology first collects a fingerprint image, then processes the image and extracts its features, and finally compares those features with the features of templates stored on the terminal device to output a result.
With the development of deep learning, a terminal can complete feature extraction and feature comparison by using a neural network. In the prior art, however, the terminal device cannot self-learn from the personalized data of the user and cannot adaptively adjust the parameters of the current neural network. As a result, the recognition accuracy on the device side is limited and cannot be continuously improved.
Disclosure of Invention
The embodiment of the application provides an identity recognition method and device. A terminal device can train a sub-model corresponding to one of its template sample groups, so that the sub-model can be used to determine whether an image sample to be processed matches a template image in the template sample group corresponding to that sub-model, thereby improving identity recognition accuracy on the terminal device side.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, an embodiment of the present application provides an identity recognition method. A terminal device is configured with a first model and has at least one template sample group, the at least one template sample group comprises a first template sample group, and each template sample group comprises at least one template image. The method comprises the following steps: the terminal device collects a plurality of reference image samples of identity recognition data; the terminal device trains according to the plurality of reference image samples, the first model, and the first template sample group to obtain a first sub-model corresponding to the first template sample group; the terminal device collects an image sample to be processed; the terminal device compares the image sample to be processed with the template images in the first template sample group one by one according to the first sub-model to obtain a first matching result; the terminal device compares the image sample to be processed with the template images outside the target template sample group one by one according to the first model to obtain a second matching result, where the target template sample group is a template sample group, among the at least one template sample group, that has been used to train a sub-model, and the target template sample group comprises the first template sample group; and the terminal device determines a target matching result of the image sample to be processed and the template images in the at least one template sample group according to the first matching result and the second matching result.
In this scheme, the terminal device first trains a first sub-model for the first template sample group. Then, when the terminal device collects an image sample to be processed, it polls all the template images. For the template images in the trained first template sample group, the terminal device uses the first sub-model for comparison to obtain a first matching result. Because the terminal device trains a dedicated first sub-model for the first template sample group, using the first sub-model to determine whether the image sample to be processed matches a template image in the first template sample group can improve identity recognition accuracy on the terminal device side. For the template images in other, untrained template sample groups, the terminal device uses the first model for comparison to obtain a second matching result. Because the terminal device retains the first model, the forgetting problem is avoided and the security of identity recognition is guaranteed.
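For illustration only, this polling flow can be sketched in a few lines of Python. The names `template_groups`, `sub_models`, and the `match` helper are assumptions introduced here, not terms from the application, and selecting the best-scoring template is just one way of determining the target matching result.

```python
def identify(sample, template_groups, first_model, sub_models, match):
    """Poll all template images: use a group's trained sub-model when one
    exists, and fall back to the first (public) model otherwise."""
    results = []
    for group_id, templates in template_groups.items():
        # Groups with a trained sub-model form the target template sample group(s).
        model = sub_models.get(group_id, first_model)
        for template in templates:
            score = match(model, sample, template)  # one-by-one comparison
            results.append((group_id, template, score))
    # Determine the target matching result from the partial results,
    # here simply the best-scoring template overall.
    return max(results, key=lambda r: r[2])
```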
In one possible design, the terminal device training according to the plurality of reference image samples, the first model, and the first template sample group to obtain the first sub-model includes: the terminal device compares each reference image sample with the template images in the first template sample group one by one according to the first model to determine a reference matching result; and the terminal device trains to obtain the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group, and the reference matching result.
In the scheme, the terminal device specifically trains to obtain a first sub-model based on a plurality of reference image samples, the first model, the first template sample group and the reference matching result.
In one possible design, the training, by the terminal device, based on the plurality of reference image samples, the first model, the first template sample group, and the reference matching result to obtain the first sub-model corresponding to the first template sample group includes: the terminal device trains to obtain model parameters of the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the model parameters of the first model, the first template sample group, and the reference matching result; or the terminal device trains to obtain model parameters of the first sub-model corresponding to the first template sample group according to the plurality of reference image samples, the samples used in training the first model, the first template sample group, and the reference matching result.
That is, the terminal device may use the parameters of the first model when training the first submodel, or may use the samples used when training the first model.
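A minimal sketch of the first option (reusing the first model's parameters as the starting point), assuming a PyTorch-style model whose forward pass takes a (template, reference) pair and returns a match probability; the deep copy of the first model, the optimizer settings, and the loss are illustrative choices, not details from the application.

```python
import copy
import torch

def train_first_submodel(first_model, pairs, labels, epochs=5):
    """Fine-tune a copy of the first model into the first sub-model on
    (template image, reference image sample) pairs whose targets are the
    reference matching results (tensors of shape [1, 1] with value 0.0 or 1.0)."""
    sub_model = copy.deepcopy(first_model)  # start from the first model's parameters
    optimizer = torch.optim.SGD(sub_model.parameters(), lr=1e-3)
    loss_fn = torch.nn.BCELoss()
    for _ in range(epochs):
        for (template, reference), label in zip(pairs, labels):
            optimizer.zero_grad()
            prob = sub_model(template, reference)  # predicted match probability
            loss = loss_fn(prob, label)
            loss.backward()
            optimizer.step()
    return sub_model
```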
In one possible design, the first submodel and the first model each have a plurality of model parameters, the plurality of model parameters including a first model parameter set and a second model parameter set; the model parameters in the first model parameter group are used for feature extraction of the image sample to be processed and the template image, and the model parameters in the second model parameter group are used for similarity comparison of the features of the image sample to be processed and the features of the template image. The model parameters in the first model parameter group of the first submodel and the first model are the same, and at least one model parameter in the second model parameter group of the first submodel and the first model is different; or at least one model parameter in the first model parameter group of the first submodel and the first model is different, and the model parameters in the second model parameter group of the first submodel and the first model are the same; alternatively, the first submodel and the first model have at least one different model parameter from the first set of model parameters, and the first submodel and the first model have at least one different model parameter from the second set of model parameters.
In this approach, a first set of model parameters is applied to the feature extraction network and a second set of model parameters is applied to the decision output network. The model parameters of the first submodel and the first model are at least partially different. It can also be considered that the process of training the first sub-model is a process of adjusting model parameters of the first model.
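The first of the three parameter combinations above (identical feature-extraction parameters, adjusted decision parameters) could look like the following hedged sketch; the attribute names `feature_net` and `decision_net` are illustrative assumptions, not names from the application.

```python
def freeze_first_parameter_group(sub_model):
    """Keep the first model parameter group (feature extraction) identical
    to the first model and train only the second group (decision output)."""
    for p in sub_model.feature_net.parameters():   # first model parameter group
        p.requires_grad = False
    for p in sub_model.decision_net.parameters():  # second model parameter group
        p.requires_grad = True
```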
In one possible design, if the target matching result indicates that the image sample to be processed matches a first template image in the at least one template sample group, and the confidence value of the match between the image sample to be processed and the first template image is greater than a preset value, the terminal device updates the template sample group to which the first template image belongs according to the image sample to be processed.
In this scheme, a collected image sample to be processed of higher quality can be used to update the template sample group, so that the updated template sample group improves subsequent comparison accuracy.
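A hedged sketch of this template update, assuming matching results are dicts carrying a confidence value and the label of the matched group; the 0.95 preset value is an arbitrary placeholder.

```python
def maybe_update_template_group(sample, result, template_groups, preset=0.95):
    """Add a high-confidence image sample to the matched template image's
    group so later comparisons benefit from the richer template set."""
    if result is not None and result["confidence"] > preset:
        template_groups[result["group_id"]].append(sample)
```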
In one possible design, after the terminal device collects the plurality of reference image samples of the identity recognition data, the method further includes: the terminal device trains according to the plurality of reference image samples, the first model, and the second template sample group to obtain a second sub-model. The at least one template sample group further comprises the second template sample group, and the target template sample group further comprises the second template sample group. After the terminal device collects the image sample to be processed, the method further includes: the terminal device compares the image sample to be processed with the template images in the second template sample group one by one according to the second sub-model to obtain a third matching result. Determining the target matching result of the image sample to be processed and the template images in the at least one template sample group according to the first matching result and the second matching result includes: determining the target matching result of the image sample to be processed and the template images in the at least one template sample group according to the first matching result, the second matching result, and the third matching result.
In this scheme, the terminal device may also train a second sub-model corresponding to the second template sample group; that is, the terminal device may train a plurality of sub-models. Because the terminal device uses a different sub-model for each template sample group, the accuracy of identity recognition can be improved.
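When several sub-models contribute partial results, the target matching result can be merged as in this sketch; the dict keys and the 0.5 threshold are assumptions for illustration.

```python
def target_matching_result(partial_results, threshold=0.5):
    """Merge the first, second, third, ... matching results into one
    target matching result by keeping the most confident match."""
    all_matches = [m for partial in partial_results for m in partial]
    best = max(all_matches, key=lambda m: m["confidence"])
    return best if best["confidence"] >= threshold else None
```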
In one possible design, the method further includes: the terminal equipment corrects the first sub-model based on the image sample to be processed, the first template sample group and the first matching result; and/or the terminal equipment corrects the second sub-model based on the image sample to be processed, the second template sample group and the third matching result.
In the scheme, the terminal equipment corrects each submodel, so that the terminal equipment can obtain a more accurate comparison result through the corrected submodel.
In one possible design, the method further includes: the terminal device trains to obtain a third sub-model corresponding to the first template sample group based on the first model, the image sample to be processed, the first template sample group, and the first matching result, and replaces the first sub-model with the third sub-model; and/or the terminal device trains to obtain a fourth sub-model corresponding to the second template sample group based on the first model, the image sample to be processed, the second template sample group, and the third matching result, and replaces the second sub-model with the fourth sub-model.
In the scheme, the terminal device can train a new sub-model for each sub-model to replace the old sub-model, so that the terminal device can obtain a more accurate comparison result through the new sub-model.
In one possible design, the first model is from a cloud server or other device. That is, the cloud server or other device may train to obtain the first model and send it to the terminal device.
In a second aspect, an embodiment of the present application provides a terminal device. The terminal device is configured with a first model and has at least one template sample group, the at least one template sample group comprises a first template sample group, each template sample group comprises at least one template image, and the terminal device comprises an acquisition module and a processing module. The acquisition module is configured to collect a plurality of reference image samples of identity recognition data. The processing module is configured to train according to the plurality of reference image samples, the first model, and the first template sample group to obtain a first sub-model corresponding to the first template sample group. The acquisition module is further configured to collect an image sample to be processed. The processing module is further configured to compare the image sample to be processed with the template images in the first template sample group one by one according to the first sub-model to obtain a first matching result. The processing module is further configured to compare the image sample to be processed with the template images outside the target template sample group one by one according to the first model to obtain a second matching result, where the target template sample group is a template sample group, among the at least one template sample group, that has been used to train a sub-model, and the target template sample group comprises the first template sample group. The processing module is further configured to determine a target matching result of the image sample to be processed and the template images in the at least one template sample group according to the first matching result and the second matching result.
In one possible design, the processing module is further to: according to the first model, comparing each reference image sample with the template images in the first template sample group one by one to determine a reference matching result; and training to obtain a first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group and the reference matching result.
In one possible design, the processing module is further to: training to obtain model parameters of a first sub-model corresponding to the first template sample group based on the multiple reference image samples, the model parameters of the first model, the first template sample group and the reference matching result; or training to obtain model parameters of the first sub-model corresponding to the first template sample group according to the multiple reference image samples, the samples used in training the first model, the first template sample group and the reference matching result.
In one possible design, the first submodel and the first model each have a plurality of model parameters, the plurality of model parameters including a first model parameter set and a second model parameter set; the model parameters in the first model parameter group are used for feature extraction of the image sample to be processed and the template image, and the model parameters in the second model parameter group are used for similarity comparison of the features of the image sample to be processed and the features of the template image. The model parameters in the first model parameter group of the first submodel and the first model are the same, and at least one model parameter in the second model parameter group of the first submodel and the first model is different; or at least one model parameter in the first model parameter group of the first submodel and the first model is different, and the model parameters in the second model parameter group of the first submodel and the first model are the same; alternatively, the first submodel and the first model have at least one different model parameter from the first set of model parameters, and the first submodel and the first model have at least one different model parameter from the second set of model parameters.
In one possible design, the processing module is further configured to: train according to the plurality of reference image samples, the first model, and the second template sample group to obtain a second sub-model, where the at least one template sample group further comprises the second template sample group, and the target template sample group further comprises the second template sample group. The processing module is further configured to: compare the image sample to be processed with the template images in the second template sample group one by one according to the second sub-model to obtain a third matching result. The processing module is further configured to: determine a target matching result of the image sample to be processed and the template images in the at least one template sample group according to the first matching result, the second matching result, and the third matching result.
In one possible design, the processing module is further to: based on the image sample to be processed, the first template sample group and the first matching result, correcting the first sub-model; and/or correcting the second sub-model based on the image sample to be processed, the second template sample group and the third matching result.
In one possible design, the processing module is further to: training to obtain a third sub-model corresponding to the first template sample group based on the first model, the image sample to be processed, the first template sample group and the first matching result, and replacing the first sub-model with the third sub-model; and/or training to obtain a fourth sub-model corresponding to the second template sample group based on the first model, the image sample to be processed, the second template sample group and the third matching result, and replacing the second sub-model with the fourth sub-model.
In one possible design, the first model is from a cloud server.
In a third aspect, an embodiment of the present application provides an identity recognition system. The identity recognition system comprises a cloud server and a terminal device. The terminal device has at least one template sample group, the at least one template sample group comprises a first template sample group, and each template sample group comprises at least one template image. The terminal device is configured with a first model and a first sub-model, where the first model is from the cloud server, and the first sub-model is obtained by training according to a plurality of reference image samples, the first model, and the first template sample group. The first sub-model is used to compare the image sample to be processed with the template images in the first template sample group one by one; the first model is used to compare the image sample to be processed with the template images outside the target template sample group one by one. The target template sample group is a template sample group, among the at least one template sample group, that has been used to train a sub-model, and the target template sample group comprises the first template sample group.
In one possible design, the terminal device is further configured with a second sub-model, and the at least one template sample group further includes a second template sample group. The second sub-model is obtained by training according to the plurality of reference image samples, the first model, and the second template sample group. The second sub-model is used to compare the image sample to be processed with the template images in the second template sample group one by one. The target template sample group further comprises the second template sample group.
In one possible design, the terminal device is configured to perform the identification method in any one of the possible designs of the first aspect.
In a fourth aspect, an embodiment of the present application provides a terminal device. The terminal device includes: a processor and a memory. The memory is used for storing computer instructions, and when the terminal device runs, the processor executes the computer instructions stored in the memory to implement the identity recognition method in any one of the possible designs of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium. The computer readable storage medium comprises computer instructions which, when run on a computer or processor, cause the computer or processor to perform the method of identification in any of the possible designs of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product. The computer program product, when run on a computer or processor, causes the computer or processor to perform the method of identification in any of the possible designs of the first aspect.
For the advantageous effects of the other aspects, reference may be made to the description of the advantageous effects of the method aspects, which is not repeated herein.
Drawings
Fig. 1 is a schematic diagram of a neural network system according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 3 is a flowchart of an identity recognition method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a basic structure of a first model according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of training a sub-model according to an embodiment of the present application;
Fig. 6 is a flowchart of a method for training a sub-model according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a neural network according to an embodiment of the present application;
Fig. 8 is a flowchart of another identity recognition method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more than two.
The embodiment of the application provides an identity recognition method, which can be applied to the neural network system shown in fig. 1. Referring to fig. 1, the neural network system includes a terminal device 101, a cloud server 102, a public model 103, a private model 104, and the like. The terminal device 101 may be a mobile phone, a wearable device (e.g., a watch or a bracelet), a tablet computer, a vehicle-mounted device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or another electronic device that can perform matching on user identity recognition data. The cloud server 102 may train the public model and send it to the terminal device. The public model 103 is a neural network model trained by the cloud server, as in the prior art. The private model 104 is a neural network model specific to one template sample group, obtained through device-side self-learning on user-side data.
Exemplarily, fig. 2 shows a schematic structural diagram of the terminal device 101 in the embodiment of the present application. The terminal device 101 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the terminal apparatus 101. In other embodiments of the present application, terminal device 101 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the terminal device 101. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the terminal device 101. In other embodiments of the present application, the terminal device 101 may also adopt different interface connection manners or a combination of multiple interface connection manners in the foregoing embodiments.
The wireless communication function of the terminal apparatus 101 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal equipment 101 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied on the terminal device 101. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the terminal device 101, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of terminal device 101 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160, so that terminal device 101 can communicate with networks and other devices via wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The terminal device 101 implements a display function by the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, terminal device 101 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The terminal apparatus 101 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal device 101 may include 1 or N cameras 193, N being a positive integer greater than 1.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the terminal device 101, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal device 101. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the terminal device 101 and performs data processing by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the terminal device 101, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
For example, the neural network processor trains a private model of the template sample set from a plurality of reference image samples, the public model, and the template sample set stored in the internal memory by executing instructions in the internal memory 121. The processor 110 may further compare the image sample to be processed with the template image according to the public model and the private model stored in the internal memory by operating the instruction in the internal memory 121, so as to obtain a matching result.
The terminal device 101 may implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The fingerprint sensor 180H is used to collect a fingerprint. The terminal device 101 may use the fingerprint sensor to collect fingerprint data; if the collected fingerprint data matches a stored template image successfully, fingerprint unlocking is implemented, and functions such as accessing an application lock, fingerprint photographing, and answering an incoming call with the fingerprint can be realized.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal device 101 at a different position than the display screen 194.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The terminal apparatus 101 may receive a key input, and generate a key signal input related to user setting and function control of the terminal apparatus 101.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into contact with or separated from the terminal device 101 by being inserted into or pulled out of the SIM card interface 195. The terminal device 101 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The terminal device 101 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal device 101 employs an eSIM, namely an embedded SIM card. The eSIM card may be embedded in the terminal device 101 and cannot be separated from the terminal device 101.
In an embodiment of the application, the fingerprint sensor is used for collecting a fingerprint, thereby acquiring a reference image sample or a to-be-processed image sample. The neural network processor NPU trains a private model of the template sample set from the plurality of reference image samples, the public model and the template sample set stored in the internal memory by executing instructions in the internal memory. The processor can also compare the image sample to be processed with the template image according to the public model and the private model stored in the internal memory by operating the instruction in the internal memory to obtain a matching result.
The identity recognition method provided by the embodiment of the present application will be described below based on the terminal device. The identity recognition method provided by the embodiment of the application comprises a sub-model training process and a sub-model application process.
In an embodiment of the application, the terminal device has at least one template sample set. Wherein one template sample group comprises at least one template image. The template image represents a template image of the identification data.
Wherein the identification data represents data capable of identifying the identity of the user. For example, the identification data includes biometric data, for example, the identification data may be fingerprint data, face data, or other biometric data capable of representing the identity of the user. The embodiment of the present application does not limit the specific type of the identification data.
In some embodiments, at least one template sample in a template sample set may be from one object or from multiple objects. Taking the example that the template images in the template sample group are fingerprint images, the template image in one template sample group may be a plurality of fingerprint data recorded by one finger, may also be a plurality of fingerprint data recorded by five fingers on one hand respectively, and may also be a plurality of fingerprint data recorded by any plurality of fingers respectively. This is not a limitation of the present application.
The terminal device may obtain at least one template sample group in multiple ways. For example, when a user initially enters a fingerprint on a terminal device, the terminal device may obtain a set of fingerprint data with higher quality, higher definition, or higher integrity as a template sample set. For another example, the terminal device may pre-configure at least one set of fingerprint data of the user corresponding to the account from the cloud as at least one template sample set according to the user information corresponding to the account.
The terminal device being a mobile phone is used as an example for description. When a user initially enters fingerprint data on the mobile phone, a plurality of fingerprint data of the left index finger entered by the user may be recorded as a first template sample group, and a plurality of fingerprint data of the left middle finger entered by the user may be recorded as a second template sample group. Alternatively, the plurality of fingerprint data of the left thumb, left index finger, left middle finger, left ring finger, and left little finger entered by the user are jointly recorded as a first template sample group, and the plurality of fingerprint data of the right thumb, right index finger, right middle finger, right ring finger, and right little finger entered by the user are jointly recorded as a second template sample group.
In some embodiments, each template image in the same template sample group has the same template label, and the template labels of the template images in different template sample groups are different, so that different template sample groups can be distinguished by the template labels.
For example, a plurality of fingerprint data of a left index finger entered by a user are recorded as a first template sample group, and template labels of template images in the first template sample group are all the left index finger or the first finger or a number a; the plurality of fingerprint data of the middle finger of the left hand input by the user are recorded as a second template sample group, and the template labels of the template images in the second template sample group are all the middle finger of the left hand or the second finger or the number b. The name of the template label is not limited in the embodiment of the application, as long as different template sample groups can be distinguished. Template images with different template labels belong to different template sample groups.
Through the template label, the terminal device can determine the template sample group to which a template image belongs. In the subsequent comparison process, the image sample to be processed is compared with the template image according to the neural network model corresponding to the template sample group to which that template image belongs. In this way, the models are isolated from one another, a strict anti-forgetting effect is achieved, and recognition accuracy is improved.
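One plausible in-memory layout for template labels, purely illustrative: keying each group by its label makes the owning group of any template image, and hence the model to use, immediately recoverable.

```python
# Hypothetical bookkeeping: template label -> template sample group.
template_groups = {
    "left_index_finger": ["tmpl_a1.png", "tmpl_a2.png"],   # label "a"
    "left_middle_finger": ["tmpl_b1.png", "tmpl_b2.png"],  # label "b"
}

# Reverse lookup: which group (and hence which model) a template belongs to.
group_of_template = {
    tmpl: label for label, group in template_groups.items() for tmpl in group
}
```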
The identity recognition method provided by the embodiment of the present application will be described in detail below with reference to fig. 3.
For example, referring to fig. 3, the process of performing sub-model training on the terminal device includes:
301. the terminal device collects a plurality of reference image samples of the identification data.
After the terminal device obtains at least one template sample group, the terminal device collects a plurality of reference image samples of the identification data in the process that a user normally uses the terminal device. Wherein the identification data corresponding to the plurality of reference image samples is consistent with the identification data represented by the template image.
That the terminal device collects a plurality of reference image samples of the identity recognition data during normal use means that the terminal device collects reference image samples of the identity recognition data in response to various operations of the user. For example, when the terminal device needs to match the user's identity recognition data with the stored identity recognition data in response to an unlocking operation, a payment operation, an identity verification operation, or the like, the terminal device collects a plurality of reference image samples of the user's identity recognition data.
The description takes fingerprint data as the identity recognition data and a mobile phone as the terminal device. During use of the mobile phone, the user typically unlocks it with fingerprint data. Specifically, when the user wants to unlock the mobile phone, the user places a finger whose fingerprint data has previously been entered on the fingerprint recognition area of the mobile phone. The mobile phone then collects the user's fingerprint data through its fingerprint sensor as a reference image sample, compares this reference image sample with the previously entered template images of fingerprint data, and unlocks after the match succeeds. The user needs to unlock the mobile phone many times while using it, and each unlocking yields at least one reference image sample of the user's fingerprint data. Alternatively, an unlock attempt may fail because the fingerprint data collected on the first attempt is incomplete or because there are impurities on the screen, so the fingerprint data has to be collected again; in this way, the mobile phone also obtains multiple reference image samples of fingerprint data. In addition, if the user needs to pay, the mobile phone likewise collects the user's fingerprint data as a reference image sample, compares it with the previously entered template images, and completes the payment in response to the user's payment operation after the match succeeds. In summary, in the various scenarios in which the user uses the mobile phone, the mobile phone collects the user's fingerprint data as reference image samples. That is, during use of the terminal device by a user, the terminal device collects a plurality of reference image samples of identity recognition data.
In some embodiments, the terminal device may update the template sample group with the reference image sample during the process of acquiring the reference image sample of the identification data. Specifically, if a new template image meeting the requirements is acquired by the terminal device in the process of acquiring the reference image sample, the template image may be added to the corresponding template sample group, and the template sample group may be updated.
The description again takes fingerprint data as the identity recognition data and a mobile phone as the terminal device. When a reference image sample of fingerprint data collected by the mobile phone is compared with the stored template images, if the matching degree between the reference image sample and one of the stored template images is sufficiently high, the finger from which the reference image sample comes can be determined with certainty from the template label of the matched template image. In this way, the reference image sample may be added to the corresponding template sample group, and the template sample group is updated.
302. And the terminal equipment trains according to the multiple reference image samples, the first model and the template sample group to obtain the sub-model corresponding to the template sample group.
In the embodiment of the present application, the first model may also be referred to as a public model or an initial model. The first model is a neural network model initially deployed to the terminal device. For example, the first model is trained in the cloud, i.e., the first model is from a cloud server. The training method of the first model is the same as that of the prior art, and is not described herein again.
Generally, referring to fig. 4, the first model includes an image input module, a feature extraction network, a decision output network, and a result output module. The image input module inputs a template image and a reference image sample. Then, the feature extraction network extracts features from the input template image and the reference image sample, respectively. Next, the decision output network compares the features of the template image and the reference image sample to obtain a matching result. Finally, the result output module outputs the matching result. It should be noted that, after the image input module or after the feature extraction network, the template image and the reference image sample need to be spliced together so that the subsequent decision output network can compare the features of the two. For example, the feature extraction network may be a convolutional neural network (CNN), which includes a convolutional layer, a pooling layer, a nonlinear layer (e.g., a ReLU layer), and the like; the decision output network may be a fully connected neural network (FCN), which includes a fully connected (FC) layer, a nonlinear layer (e.g., a ReLU layer), and the like.
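The fig. 4 structure might be sketched as follows in PyTorch. The layer sizes, the 64x64 single-channel input, and splicing the two feature vectors by concatenation are all assumptions; the application fixes only the overall CNN-plus-FCN shape.

```python
import torch
import torch.nn as nn

class FirstModel(nn.Module):
    """Sketch of fig. 4: a CNN feature extraction network shared by both
    inputs, followed by a fully connected decision output network that
    compares the spliced features (assumes 1x64x64 inputs)."""
    def __init__(self):
        super().__init__()
        self.feature_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.decision_net = nn.Sequential(
            nn.Linear(2 * 32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, template, sample):
        # Extract features of both images, splice them together,
        # then output the match probability.
        features = torch.cat(
            [self.feature_net(template), self.feature_net(sample)], dim=1)
        return self.decision_net(features)
```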
Before the terminal equipment has trained a sub-model, a reference image sample of the identity recognition data collected by the terminal equipment is compared with a template image through the first model. Based on the above description of the first model, after any reference image sample and any template image are input into the image input module, they sequentially pass through the feature extraction network, the decision output network and the result output module, so that a matching result is obtained. The matching result can also be called a true label: if the reference image sample is successfully matched with the template image, the true label is 1; if the matching between the reference image sample and the template image is unsuccessful, the true label is 0.
Referring to fig. 5, taking the identification data as fingerprint data as an example, before training the sub-model the terminal device continuously accumulates {template image, reference image sample} sample pairs together with the true label obtained after comparing each template image and reference image sample. When the number of sample pairs accumulated by the terminal device reaches a preset value, the terminal device trains the sub-model.
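A minimal sketch of this accumulation step might look as follows; the preset value and the helper names are hypothetical, since the embodiment does not fix them.

```python
PRESET_PAIR_COUNT = 1000  # hypothetical preset value; the embodiment leaves it open

sample_pairs = []  # accumulated ((template image, reference image sample), true label)

def record_comparison(template_image, reference_sample, true_label, train_fn):
    """Accumulate one compared pair; once the preset count is reached,
    hand the accumulated pairs to sub-model training (step 302)."""
    sample_pairs.append(((template_image, reference_sample), true_label))
    if len(sample_pairs) >= PRESET_PAIR_COUNT:
        train_fn(list(sample_pairs))
        sample_pairs.clear()
```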
It should be noted that the true label can also be obtained with the aid of face-ID or a Personal Identification Number (PIN) code. For example, sometimes the captured fingerprint substantially matches the template image, but the matching fails because the fingerprint captured by the mobile phone is offset, or because the environment is humid and the mobile phone screen is wet. In this case, if it can be determined by the face-ID or PIN code that the acquired fingerprint really belongs to the owner, the true label of the matching may be marked as 1. With the aid of the face-ID or PIN code, training data for the subsequent sub-model training is not missed.
When the number of sample pairs accumulated by the terminal equipment reaches a preset value, the terminal equipment trains the sub-model. The process of training a sub-model will be described below by taking the training of the first sub-model as an example. The first sub-model is the sub-model corresponding to the first template sample group, and the first template sample group is any one of the template sample groups of the terminal equipment.
The sub-model may be referred to as a private model, a branch model, or the like. The first sub-model is a neural network branch which is obtained by the terminal device through training and corresponds to the first template sample group.
For example, referring to fig. 6, in the step 302, the training, by the terminal device, of the multiple reference image samples, the first model and the first template sample group to obtain the first sub-model corresponding to the first template sample group includes:
501. and the terminal equipment compares each reference image sample with the template images in the first template sample group one by one according to the first model to determine a reference matching result.
This step is the same as the process, before the terminal device trains the sub-model, of comparing the reference image sample with the template image through the first model. If the reference image sample is successfully matched with the template image, the true label is 1; if the matching between the reference image sample and the template image is unsuccessful, the true label is 0.

Wherein the reference matching result includes matching results with a true label of 1 and matching results with a true label of 0.

It will be appreciated that each true label corresponds to a {reference image sample, template image} pair. That is, if the true label is 1, the reference image sample is successfully matched with the corresponding template image.
Illustratively, the fingerprint data of the first finger, entered when the user registers the fingerprint, is recorded as the first template sample group, and the fingerprint data of the first finger collected later, while the user uses the mobile phone, is recorded as the corresponding reference image samples. Assume that there are 20 template images in the first template sample group and 700 reference image samples; there are then 20 × 700 combinations of template image and reference image sample, i.e., 14000 {template image, reference image sample} pairs. If the first template image matches the first reference image sample, or the match can be confirmed with the aid of the face-ID or PIN code, the true label is 1; if the first template image does not match the first reference image sample, the true label is 0. The reference image samples successfully matched with template images can then serve as positive samples, and the reference image samples unsuccessfully matched can serve as negative samples, in the training of the first sub-model corresponding to the first template sample group.
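The pair construction in this example could be sketched as follows; `matcher` is a hypothetical stand-in for the first-model comparison, optionally confirmed by face-ID or PIN code.

```python
def build_training_pairs(template_group, reference_samples, matcher):
    """Cross every template image with every reference image sample; with
    20 templates and 700 samples this yields 14000 labeled pairs. Pairs
    with true label 1 act as positive samples, label 0 as negative samples."""
    pairs = []
    for template in template_group:        # e.g. 20 template images
        for sample in reference_samples:   # e.g. 700 reference image samples
            label = 1 if matcher(template, sample) else 0
            pairs.append((template, sample, label))
    return pairs
```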
502. The terminal equipment trains and obtains a first sub-model corresponding to the first template sample group based on the multiple reference image samples, the first model, the first template sample group and the reference matching result.
Since the reference matching result includes matching results with a true label of 1 and matching results with a true label of 0, the samples used when the terminal device trains the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group and the reference matching result include both positive samples and negative samples, that is, reference image samples matched with template images and reference image samples unmatched with template images.
It should be noted that the negative examples that do not match the template image may also be obtained by other means. For example, the negative sample may be a sample that is preset in the terminal device and does not belong to the owner, or the negative sample may be a sample that is obtained from the server and does not belong to the owner. The embodiment of the application does not limit the way of acquiring the negative sample by the terminal device.
The terminal device trains the first sub-model by a continuous learning method. Continuous learning is based on the first model; that is, new knowledge must be learned without forgetting the old. Therefore, the training, by the terminal device, of the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group, and the reference matching result includes: the terminal device trains to obtain model parameters of the first sub-model based on the plurality of reference image samples, the model parameters of the first model, the first template sample group and the reference matching result; or the terminal device trains to obtain model parameters of the first sub-model according to the plurality of reference image samples, the samples used in training the first model, the first template sample group and the reference matching result. That is, the first model used when the terminal device trains the first sub-model may be the parameters of the first model, or may be the samples that were used when the first model was trained.
Illustratively, the terminal device may train the first sub-model by a regularized continuous learning method. The embodiment of the present application does not limit the specific continuous learning method used by the terminal device.
For example, the terminal device may train the first sub-model by the regularized continuous learning method shown below. In the embodiment of the application, training the sub-model in a continuous learning manner resists the overfitting caused by the limited amount of end-side data. In this method, each time the terminal device learns a task, it estimates the Fisher information matrix of the parameters from the loss function on that task's data set. In the embodiment of the present application, a task refers to comparing a collected reference image sample with a template image.
l_T = l_cross + λ||w - w*||_F    (1)

In formula (1), l_T is the loss function for learning the T-th task. The first term l_cross is the error loss function of the T-th task itself; the second term is a regularization term, which reduces forgetting of old tasks. Here w is the parameter value being updated, w* is the parameter value learned on the preceding tasks, F is the mean of the Fisher information matrices corresponding to the 0-th to the (T−1)-th tasks and represents the importance of the neural network parameters relative to those preceding tasks, and λ is a hyper-parameter for adjusting the regularization strength.
In the process of training the first sub-model, each comparison of a reference image sample with a template image is recorded as one task. Owing to the regularization term, modification of weights with high importance is reduced while training the first sub-model, so that the performance of the neural network on the preceding tasks is preserved as far as possible.
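One common reading of formula (1), in the spirit of elastic weight consolidation, weights each squared parameter deviation by its Fisher information; the sketch below assumes PyTorch and `fisher` and `old_params` dictionaries precomputed from tasks 0 to T−1.

```python
import torch

def continual_loss(cross_loss, model, old_params, fisher, lam):
    """l_T = l_cross + λ||w - w*||_F from formula (1).

    cross_loss: error loss of the T-th task itself (e.g. cross-entropy)
    old_params: w*, parameter values learned on the preceding tasks
    fisher:     per-parameter mean Fisher information (importance weights)
    lam:        λ, hyper-parameter adjusting the regularization strength
    """
    reg = torch.zeros(())
    for name, w in model.named_parameters():
        # Penalize deviation from w*, weighted by parameter importance.
        reg = reg + (fisher[name] * (w - old_params[name]) ** 2).sum()
    return cross_loss + lam * reg
```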
Wherein the sign that the first submodel is trained successfully is that the first submodel converges. The convergence of the first submodel may be that the training of the first submodel reaches a preset number of times, or that a loss function in the training process is smaller than a preset value.
It should be noted that the first submodel and the first model each have a plurality of model parameters. Wherein the plurality of model parameters includes a first model parameter set and a second model parameter set. Specifically, the model parameters in the first model parameter group are used for feature extraction of the image sample to be processed and the template image; and the model parameters in the second model parameter group are used for comparing the similarity of the features of the image sample to be processed and the features of the template image. In connection with the basic structure of the first model shown in fig. 4, the first model parameter set is applied to the feature extraction network, and the second model parameter set is applied to the decision output network. In some embodiments of the present application, the basic structure of the first sub-model is the same as the basic structure of the first model. The first submodel also includes a feature extraction network and a decision output network.
Wherein the model parameters of the first submodel and the first model are at least partially different. For example, the model parameters in the first model parameter set of the first submodel and the first model are the same, and at least one model parameter in the second model parameter set of the first submodel and the first model is different; or at least one model parameter in the first model parameter group of the first submodel and the first model is different, and the model parameters in the second model parameter group of the first submodel and the first model are the same; alternatively, the first submodel and the first model have at least one different model parameter from the first set of model parameters, and the first submodel and the first model have at least one different model parameter from the second set of model parameters. Therefore, it can also be considered that the process of training the first submodel is a process of adjusting the model parameters of the first model.
For example, referring to fig. 7, the first model and the first sub-model differ only in model parameters in the feature extraction network, i.e., the first model parameters differ; the model parameters of the decision output network of the first model and the first submodel are the same, i.e. the second model parameters are the same.
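Under the split shown in fig. 7, a sub-model could be derived by copying the first model and freezing the decision output network, so that training only adjusts the feature extraction parameters; a minimal sketch assuming the FirstModel layout sketched above.

```python
import copy

def derive_first_sub_model(first_model):
    """Start the first sub-model from a copy of the first model; freeze the
    decision output network (second model parameter group) so that training
    only adjusts the feature extraction network (first model parameter
    group), matching the split shown in fig. 7."""
    sub_model = copy.deepcopy(first_model)
    for p in sub_model.decision.parameters():
        p.requires_grad = False  # second model parameter group stays shared
    return sub_model
```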
Through the above steps, the terminal device has trained a first sub-model for the first template sample group. Then, when the terminal device collects an image sample to be processed, it polls all the template images. If the comparison is with a template image in the first template sample group, the terminal device outputs the matching result using the first sub-model; if the comparison is with a template image in an untrained template sample group, the terminal device outputs the matching result using the first model.
The identity recognition method provided by the embodiment of the application further comprises a sub-model application process. The sub-model application process will be specifically described below by taking as an example that the terminal device only successfully trains the first sub-model, and the terminal device applies the trained first sub-model.
With continued reference to fig. 3, after step 302, the first sub-model application process performed on the terminal device specifically includes:
303. and the terminal equipment acquires an image sample to be processed.
The process of acquiring the image sample to be processed by the terminal device is similar to the process of acquiring the reference image sample of the identification data by the terminal device in step 301. When the terminal device has trained the first sub-model, in the process of using the terminal device by the user, the image sample of the identification data acquired by the terminal device in response to the operation of the user is the image sample to be processed.
For example, after the first sub-model is trained successfully, when the user needs to unlock the mobile phone, the user still places a finger in the unlocking area of the mobile phone. The mobile phone then collects the fingerprint data of the user and matches it against the stored template images, so as to judge whether the fingerprint data can unlock the mobile phone. Here, the fingerprint data collected by the mobile phone is an image sample to be processed.
That is to say, the image sample to be processed is the image sample of the identification data collected by the terminal device after the first sub-model is trained successfully.
304. And the terminal equipment compares the image sample to be processed with the template images in the first template sample group one by one according to the first sub-model to obtain a first matching result.
The terminal device has trained a first sub-model for the first template sample group, so that the terminal device compares the image sample to be processed with the template images in the first template sample group one by one according to the first sub-model to obtain a first matching result.
It can be understood that, because the first sub-model is trained specifically for the first template sample group, when the image sample to be processed is compared with a template image in the first template sample group, the matching result obtained according to the first sub-model is more precise than the matching result that would be obtained according to the first model.
305. And the terminal equipment compares the image sample to be processed with the template images except the target template sample group one by one according to the first model to obtain a second matching result.
Wherein the target template sample set is a template sample set which is used for training the sub-model in at least one template sample set, and the target template sample set comprises a first template sample set.
In the comparison process of step 304 and step 305, the terminal device compares the image sample to be processed with the template images in all the template sample groups one by one, and determines a target matching result.
The terminal equipment compares the image samples to be processed with the template images in all the template sample groups one by one in a polling mode. That is, the terminal device compares the image sample to be processed with all the template images respectively.
For example, if the terminal device stores 1000 template images in total, the terminal device compares the image sample to be processed with the 1000 template images one by one, performing 1000 comparisons in total.
And the terminal equipment adopts different models to compare the image sample to be processed with the template image according to whether the template image to be compared belongs to the trained first template sample group.
And for the template images in the first template sample group, the terminal equipment compares the image samples to be processed with the template images in the first template sample group one by one according to the first sub-model corresponding to the first template sample group to obtain a first matching result.
The template image belongs to the first template sample group, and the terminal device finishes self-learning of the first template sample group, and trains to obtain a first sub-model corresponding to the first template sample group. Therefore, for the template images in the first template sample group which is already learned, the terminal device adopts the corresponding first sub-model to compare the image sample to be processed with the template images.
And the terminal equipment compares the image sample to be processed with the template images in the first template sample group one by one to obtain a first matching result. The first matching result indicates that the matching between the image sample to be processed and a template image in the first template sample group is successful or unsuccessful, and may also indicate the matching degree between the image sample to be processed and the template image. The embodiments of the present application do not limit this.
And for the template images outside the target template sample group, the terminal equipment compares the image sample to be processed with those template images one by one according to the first model to obtain a second matching result. Wherein the target template sample group is a template sample group, among the at least one template sample group, that has been used to train a sub-model, and the target template sample group comprises the first template sample group.

Because such a template image belongs neither to the first template sample group nor to any other template sample group in the target template sample group, the template image has not been learned, and the terminal device has not trained a sub-model corresponding to it. For these not-yet-learned template images outside the target template sample group, the terminal device uses the first model to compare the image sample to be processed with them.
And the terminal equipment compares the image sample to be processed with the template images except the target template sample group one by one to obtain a second matching result. The second matching result indicates that the matching between the image sample to be processed and the template images other than the target template sample group is successful or unsuccessful, and may also indicate the matching degree between the image sample to be processed and the template images other than the target template sample group. The embodiments of the present application do not limit this.
306. And the terminal equipment determines a target matching result of the image sample to be processed and the template image in at least one template sample group according to the first matching result and the second matching result.
Based on the above, the terminal device compares the image sample to be processed with all the template images one by one in a polling manner. For the template images in the first template sample group, the terminal equipment adopts a first sub-model to compare the image samples to be processed with the template images in the first template sample group one by one; and for the template images outside the target template sample group, the terminal equipment adopts the first model to compare the image samples to be processed with the template images outside the target template sample group one by one. In this way, the terminal device respectively adopts the first sub-model or the first model to complete the one-to-one comparison between the image sample to be processed and all the template images, so as to obtain all the matching results, namely the first matching result and the second matching result.
That the terminal equipment determines the target matching result of the image sample to be processed and the template image in at least one template sample group according to the first matching result and the second matching result means that, among the first matching result and the second matching result, the matching result with the highest matching degree is determined as the target matching result.
If the target matching result is greater than or equal to the preset value, it indicates that the matching degree of the image sample to be processed and the corresponding template image is high, and it can be considered that the image sample to be processed is matched with the corresponding template image. If the target matching result is smaller than the preset value, the matching degree of the image sample to be processed and the corresponding template image is low, and the image sample to be processed and the corresponding template image can be considered to be not matched.
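Putting steps 304 to 306 together, the polling dispatch might look like the following sketch; the dictionary layout, the score convention, and the helper names are assumptions for illustration.

```python
def identify(sample, template_groups, sub_models, first_model, threshold):
    """Poll all template images: use the trained sub-model for template
    images in learned groups and the first (public) model otherwise,
    then keep the result with the highest matching degree.

    sub_models maps a template-sample-group id to its trained sub-model;
    groups absent from the dict have not been learned yet."""
    best_score, best_template = 0.0, None
    for group_id, group in template_groups.items():
        model = sub_models.get(group_id, first_model)
        for template in group:
            score = float(model(template, sample))  # matching degree in [0, 1]
            if score > best_score:
                best_score, best_template = score, template
    # Target matching result: a match only if the highest degree reaches the preset value.
    if best_score >= threshold:
        return best_template, best_score
    return None, best_score
```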
In addition, if the target matching result indicates that the image sample to be processed is matched with the first template image in the at least one template sample group, and the target matching result indicates that the confidence value of the image sample to be processed matched with the first template image is greater than a preset value, the terminal device updates the template sample group to which the first template image belongs according to the image sample to be processed.
The embodiment of the application provides an identity recognition method, in which the terminal equipment first trains a first sub-model for the first template sample group. Then, when the terminal equipment acquires an image sample to be processed, the template images are polled. For the template images in the trained first template sample group, the terminal equipment performs the comparison with the first sub-model and outputs a matching result; for the template images in other, untrained template sample groups, the terminal equipment performs the comparison with the first model and outputs a matching result. Because the first sub-model is trained specifically for the first template sample group, that is, it is dedicated to comparing the image to be processed with the template images in the first template sample group, using the first sub-model to determine whether the image sample to be processed matches a template image in that group improves the identity recognition precision on the terminal device side.
In the above description, only one successfully trained sub-model, namely the first sub-model, is taken as an example. The identity recognition method provided in the embodiment of the present application may further train a plurality of sub-models after the terminal device collects a plurality of reference samples of the identity recognition data. Therefore, referring to fig. 8, after step 302, the identity recognition method provided in the embodiment of the present application further includes:
302a, the terminal device trains according to the multiple reference image samples, the first model and the second template sample group to obtain a second sub-model.
Wherein the second template sample group also belongs to at least one template sample group that the terminal device has. The second sub-model is a sub-model corresponding to the second template sample group.
The specific process of training the terminal device according to the multiple reference image samples, the first model and the second template sample group to obtain the second sub-model is similar to the specific process of training the first sub-model by the terminal device, and only the first template sample group in the process of training the sub-model is replaced by the second template sample group. Therefore, the detailed process of this step is not described again.
It will be appreciated that the terminal device may also train to obtain a plurality of sub-models, each sub-model corresponding to the template sample group used in its training. The number of sub-models is not limited in the embodiment of the application.
After the second submodel is trained successfully, the terminal device can also apply the second submodel. Thus, after step 304, steps 305-306 may be replaced with the following steps:
305a, the terminal device compares the image sample to be processed with the template images in the second template sample group one by one according to the second sub-model to obtain a third matching result.
The template image belongs to the second template sample group, and the terminal device finishes self-learning of the second template sample group, and trains to obtain a second sub-model corresponding to the second template sample group. Therefore, for the template images in the second template sample group which is already learned, the terminal device adopts the corresponding second sub-model to compare the image sample to be processed with the template images.
And the terminal equipment compares the image sample to be processed with the template images in the second template group one by one to obtain a third matching result. The third matching result indicates that the matching between the image sample to be processed and the template image in the second template sample group is successful or unsuccessful, and may also indicate the matching degree between the image sample to be processed and the template image in the second template sample group. The embodiments of the present application do not limit this.
306a, the terminal device compares the image sample to be processed with the template images except the target template sample group one by one according to the first model to obtain a second matching result.
Wherein the target template sample group is a template sample group which is used for training the sub-model in at least one template sample group. Here, the target template sample group specifically includes a first template sample group and a second template sample group.
307a, the terminal device determines a target matching result of the image sample to be processed and the template image in at least one template sample group according to the first matching result, the second matching result and the third matching result.
It can be understood that, if the terminal device has successfully trained more submodels for more template sample groups, all template sample groups corresponding to the successfully trained submodels belong to the target template sample group.
The terminal device compares the image sample to be processed with all the template images in a polling manner. Referring to fig. 7, if a template image belongs to the target template sample group, the template image has been learned. For learned template images, the terminal device uses the corresponding sub-model to compare the image sample to be processed with them, which improves the comparison accuracy; for example, the terminal device may perform the comparison with the sub-model, among the N sub-models in fig. 7, that corresponds to the template image. Meanwhile, if a template image does not belong to the target template sample group, the template image has not been learned. For template images that have not been learned, the terminal device uses the initial first model to compare the image sample to be processed with them, for example the first model in fig. 7. By using different sub-models for different template sample groups, mutual interference among template sample groups is avoided. Meanwhile, the embodiment of the application retains the first model, namely the public model, which ensures security and realizes a continuous learning mode that strictly resists forgetting. In addition, training the sub-models in a continuous learning manner resists the overfitting caused by the limited amount of end-side data. Therefore, the identity recognition method provided by the embodiment of the application can improve the recognition precision in the identity recognition process while ensuring security.
In addition, as the number of to-be-processed image samples acquired by the terminal device increases, the existing sub-models may not be accurate enough. In some embodiments, after the terminal device trains the successful submodels, the terminal device may further modify each submodel based on the image sample to be processed and the matching result output by the terminal device using each submodel. For example, the model parameters of each submodel are modified, so that a more accurate comparison result can be obtained through the modified submodel.
Taking the example that the terminal device successfully trains the first submodel and the second submodel, the process of the terminal device for correcting the submodel comprises the following steps:
307. the terminal equipment corrects the first sub-model based on the image sample to be processed, the first template sample group and the first matching result; and/or the terminal equipment corrects the second sub-model based on the image sample to be processed, the second template sample group and the third matching result.
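A minimal sketch of such a correction step, assuming PyTorch and the triple format used above; the optimizer choice, learning rate, and epoch count are illustrative assumptions.

```python
import torch

def correct_sub_model(sub_model, labeled_pairs, epochs=1, lr=1e-4):
    """Step 307 style correction: fine-tune the trainable parameters of an
    existing sub-model on newly collected (template, sample, matching result)
    triples, where the matching result is 1 for matched and 0 for unmatched."""
    params = [p for p in sub_model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr)
    loss_fn = torch.nn.BCELoss()
    for _ in range(epochs):
        for template, sample, label in labeled_pairs:
            optimizer.zero_grad()
            score = sub_model(template, sample)            # shape [1, 1]
            target = torch.full_like(score, float(label))  # 1.0 or 0.0
            loss_fn(score, target).backward()
            optimizer.step()
    return sub_model
```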
Further, after the terminal device successfully trains a sub-model, as the terminal device collects more and more image samples to be processed, the existing sub-model may become inaccurate, with many model parameters to correct or a large correction amplitude required. In this case, in some embodiments, the terminal device may instead train a new sub-model for each template sample group to replace the old sub-model, so that a more accurate comparison result can be obtained through the new sub-model.
Taking the first submodel and the second submodel successfully trained by the terminal device as an example, the process of replacing the submodel by the terminal device includes:
308. the terminal equipment is trained to obtain a third sub-model corresponding to the first template sample group based on the first model, the image sample to be processed, the first template sample group and the first matching result, and the third sub-model is used for replacing the first sub-model; and/or the terminal device trains to obtain a fourth sub-model corresponding to the second template sample group based on the first model, the image sample to be processed, the second template sample group and the third matching result, and replaces the second sub-model with the fourth sub-model.
It is understood that the first matching result and the third matching result are similar to the above-mentioned real label, and can represent the matching results of the image sample to be processed in the first sub-model and the second sub-model. For example, if the terminal device determines that the image sample to be processed is successfully matched with the template sample in the first template sample group by using the first sub-model, the first matching result may be recorded as 1; if the terminal device determines that the matching between the image sample to be processed and the template sample in the first template sample group is unsuccessful by using the first sub-model, the first matching result may be recorded as 0.
As the image samples to be processed accumulate, the terminal device can obtain a more suitable sub-model by modifying the existing sub-model or by generating a new sub-model to replace it. This yields a more accurate identification result and avoids the drop in identification precision that occurs when the original sub-model no longer fits the newly collected image samples to be processed.
It is understood that, in order to implement the above functions, the terminal device includes corresponding hardware and/or software modules for performing the respective functions. In conjunction with the exemplary algorithm steps described in connection with the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the terminal device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 9 shows a possible composition diagram of the terminal device 900 involved in the above embodiment, as shown in fig. 9, the terminal device 900 may include: an acquisition module 901 and a processing module 902.
The acquisition module 901 may be configured to support the terminal device 900 to perform step 301 and step 303 shown in fig. 3 in the foregoing embodiment, and/or perform other steps or functions performed by the terminal device in the foregoing method embodiments. The processing module 902 may be configured to support the terminal device 900 to perform the steps 302, 304 to 308 shown in fig. 3 in the above embodiment, the steps 501 and 502 shown in fig. 5, the steps 305a to 307a in fig. 8, and/or other steps or functions performed by the terminal device in the above method embodiment.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the embodiment of the present application, the terminal apparatus 900 is presented in a form of dividing each functional module in an integrated manner. A "module" herein may refer to a particular ASIC, a circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other device that provides the described functionality.
Optionally, an embodiment of the present application further provides an identity recognition system, where the identity recognition system includes a cloud server and a terminal device. The terminal device includes at least one template sample set, such as a first template sample set and/or a second template sample set. The terminal device is configured with a first model and submodels, e.g. a submodel may comprise a first submodel and a second submodel, etc.
The first model is from a cloud server; the sub-model is trained according to a plurality of reference image samples, the first model and a corresponding template sample set.
It can also be considered that, in the identification system, the terminal device is capable of executing the identification method executed by the terminal device in the above-mentioned embodiments of the methods.
Optionally, an embodiment of the present application further provides a computer-readable storage medium, where a computer instruction is stored in the computer-readable storage medium, and when the computer instruction is executed on a terminal device, the terminal device is caused to execute the above related method steps to implement the identity recognition method in the above embodiment.
Optionally, an embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the identity recognition method executed by the terminal device in the above embodiment.
Optionally, an embodiment of the present application further provides a terminal device, where the terminal device may specifically be a chip, a component, a module, or a system on a chip. The terminal device may include a processor and a memory connected; the memory is used for storing computer instructions, and when the device runs, the processor can execute the computer instructions stored in the memory, so that the chip can execute the identity identification method executed by the terminal equipment in the above-mentioned method embodiments.
The terminal device, the computer-readable storage medium, the computer program product, the chip or the system on chip provided in the embodiments of the present application are all configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the terminal device, the computer-readable storage medium, the computer program product, the chip or the system on chip may refer to the beneficial effects in the corresponding method provided above, which are not described herein again.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented using a software program, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (22)

1. An identity recognition method is characterized in that a terminal device is provided with a first model, the terminal device comprises at least one template sample group, the at least one template sample group comprises a first template sample group, and each template sample group comprises at least one template image; the method comprises the following steps:
the terminal equipment acquires a plurality of reference image samples of the identification data;
the terminal equipment trains according to the multiple reference image samples, the first model and the first template sample group to obtain a first sub model corresponding to the first template sample group;
the terminal equipment collects an image sample to be processed;
the terminal equipment compares the image sample to be processed with the template images in the first template sample group one by one according to the first sub-model to obtain a first matching result;
the terminal equipment compares the image sample to be processed with template images except the target template sample group one by one according to the first model to obtain a second matching result; wherein the target template sample group is a template sample group, of the at least one template sample group, that has been used to train a sub-model, and the target template sample group comprises the first template sample group;
and the terminal equipment determines a target matching result of the image sample to be processed and the template image in the at least one template sample group according to the first matching result and the second matching result.
2. The method of claim 1, wherein the training, by the terminal device, according to the plurality of reference image samples, the first model and the first template sample group to obtain the first sub-model comprises:
the terminal equipment compares each reference image sample with the template images in the first template sample group one by one according to the first model to determine a reference matching result;
and the terminal equipment trains to obtain the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group and the reference matching result.
3. The method of claim 2, wherein the training, by the terminal device, of the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group, and the reference matching result comprises:
the terminal equipment trains to obtain model parameters of the first sub-model corresponding to the first template sample group based on the multiple reference image samples, the model parameters of the first model, the first template sample group and the reference matching result; or,
and the terminal equipment trains to obtain model parameters of the first sub-model corresponding to the first template sample group according to the multiple reference image samples, the samples used in the training of the first model, the first template sample group and the reference matching result.
4. A method according to any of claims 1-3, wherein the first sub-model and the first model each have a plurality of model parameters, the plurality of model parameters comprising a first model parameter group and a second model parameter group; the model parameters in the first model parameter group are used for feature extraction of the image sample to be processed and a template image, and the model parameters in the second model parameter group are used for similarity comparison of the features of the image sample to be processed and the features of the template image;

the first sub-model and the first model have the same model parameters in the first model parameter group, and the first sub-model and the first model have different model parameters in the second model parameter group; or,

at least one model parameter in the first model parameter group of the first sub-model and the first model is different, and the model parameters in the second model parameter group of the first sub-model and the first model are the same; or,

at least one model parameter in the first model parameter group of the first sub-model and the first model is different, and at least one model parameter in the second model parameter group of the first sub-model and the first model is different.
5. The method according to any of claims 1-4, wherein after the terminal device has acquired a plurality of reference image samples of identification data, the method further comprises:
the terminal equipment trains according to the multiple reference image samples, the first model and a second template sample group to obtain a second sub-model; wherein the at least one template sample group further comprises the second template sample group, and the target template sample group further comprises the second template sample group;
after the terminal device acquires the image sample to be processed, the method further comprises: the terminal equipment compares the image sample to be processed with the template images in the second template sample group one by one according to the second sub-model to obtain a third matching result;
wherein the determining a target matching result of the image sample to be processed and the template image in the at least one template sample group according to the first matching result and the second matching result comprises: determining a target matching result of the image sample to be processed and the template image in the at least one template sample group according to the first matching result, the second matching result and the third matching result.
6. The method of claim 5, further comprising:
the terminal equipment corrects the first sub-model based on the image sample to be processed, the first template sample group and the first matching result; and/or,
and the terminal equipment corrects the second sub-model based on the image sample to be processed, the second template sample group and the third matching result.
7. The method of claim 5, further comprising:
the terminal equipment trains to obtain a third sub-model corresponding to the first template sample group based on the first model, the image sample to be processed, the first template sample group and the first matching result, and replaces the first sub-model with the third sub-model; and/or,
and the terminal equipment trains to obtain a fourth sub-model corresponding to the second template sample group based on the first model, the image sample to be processed, the second template sample group and the third matching result, and replaces the second sub-model with the fourth sub-model.
8. The method of any of claims 1-7, wherein the first model is from a cloud server.
9. A terminal device, wherein the terminal device is configured with a first model, the terminal device comprises at least one template sample set, the at least one template sample set comprises a first template sample set, each template sample set comprises at least one template image, and the terminal device comprises an acquisition module and a processing module;
wherein the acquisition module is configured to: acquiring a plurality of reference image samples of identification data;
the processing module is used for: training according to the multiple reference image samples, the first model and the first template sample group to obtain a first sub-model corresponding to the first template sample group;
the acquisition module is further configured to: collecting an image sample to be processed;
the processing module is further configured to: according to the first sub-model, comparing the image sample to be processed with the template images in the first template sample group one by one to obtain a first matching result;
the processing module is further configured to: according to the first model, comparing the image sample to be processed with template images except the target template sample group one by one to obtain a second matching result; wherein the target template sample set is a template sample set of the at least one template sample set that has been used to train a sub-model, and the target template sample set comprises the first template sample set;
the processing module is further configured to: and determining a target matching result of the image sample to be processed and the template image in the at least one template sample group according to the first matching result and the second matching result.
10. The device of claim 9, wherein the processing module is further configured to:
according to the first model, comparing each reference image sample with the template images in the first template sample group one by one to determine a reference matching result;
training to obtain the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the first model, the first template sample group and the reference matching result.
11. The device of claim 10, wherein the processing module is further configured to:
training to obtain model parameters of the first sub-model corresponding to the first template sample group based on the plurality of reference image samples, the model parameters of the first model, the first template sample group and the reference matching result; or,
and training to obtain model parameters of the first sub-model corresponding to the first template sample group according to the plurality of reference image samples, the samples used in training the first model, the first template sample group and the reference matching result.
12. The apparatus of any of claims 9-11, wherein the first sub-model and the first model each have a plurality of model parameters, the plurality of model parameters comprising a first model parameter group and a second model parameter group; the model parameters in the first model parameter group are used for feature extraction of the image sample to be processed and a template image, and the model parameters in the second model parameter group are used for similarity comparison of the features of the image sample to be processed and the features of the template image;

the first sub-model and the first model have the same model parameters in the first model parameter group, and the first sub-model and the first model have different model parameters in the second model parameter group; or,

at least one model parameter in the first model parameter group of the first sub-model and the first model is different, and the model parameters in the second model parameter group of the first sub-model and the first model are the same; or,

at least one model parameter in the first model parameter group of the first sub-model and the first model is different, and at least one model parameter in the second model parameter group of the first sub-model and the first model is different.
13. The apparatus of any of claims 9-12, wherein the processing module is further configured to:
training according to the multiple reference image samples, the first model and a second template sample set to obtain a second sub-model; wherein the at least one template sample set further comprises the second template sample set, and the target template sample set further comprises the second template sample set;
the processing module is further configured to: according to the second sub-model, comparing the image sample to be processed with the template images in the second template sample group one by one to obtain a third matching result;
the processing module is further configured to: and determining a target matching result of the image sample to be processed and the template image in the at least one template sample group according to the first matching result, the second matching result and the third matching result.
14. The device of claim 13, wherein the processing module is further configured to:
correcting the first sub-model based on the image sample to be processed, the first template sample group and the first matching result; and/or,
and correcting the second sub-model based on the image sample to be processed, the second template sample group and the third matching result.
15. The device of claim 13, wherein the processing module is further configured to:
training to obtain a third sub-model corresponding to the first template sample group based on the first model, the image sample to be processed, the first template sample group and the first matching result, and replacing the first sub-model with the third sub-model; and/or,
training to obtain a fourth sub-model corresponding to the second template sample group based on the first model, the image sample to be processed, the second template sample group and the third matching result, and replacing the second sub-model with the fourth sub-model.
16. The apparatus of any of claims 9-15, wherein the first model is from a cloud server.
17. An identity recognition system is characterized by comprising a cloud server and a terminal device,
wherein the terminal device comprises at least one template sample group, the at least one template sample group comprises a first template sample group, and each template sample group comprises at least one template image; the terminal equipment is configured with a first model and a first sub-model, wherein the first model is from the cloud server, and the first sub-model is obtained by training according to a plurality of reference image samples, the first model and the first template sample group;
the first sub-model is used for comparing the image samples to be processed with the template images in the first template sample group one by one;
the first model is used for comparing the image sample to be processed with the template images outside the target template sample group one by one; wherein the target template sample group is a template sample group, of the at least one template sample group, that has been used to train a sub-model, and the target template sample group includes the first template sample group.
18. The system of claim 17, wherein the terminal device is further configured with a second sub-model, and wherein the at least one template sample group further comprises a second template sample group;

wherein the second sub-model is trained according to the plurality of reference image samples, the first model and the second template sample group;

the second sub-model is used for comparing the image samples to be processed with the template images in the second template sample group one by one; the target template sample group further comprises the second template sample group.
19. The system according to claim 17, wherein the terminal device is configured to perform the identity recognition method according to any one of claims 1-8.
20. A terminal device, comprising: a processor and a memory; wherein the memory is configured to store computer instructions, and when the terminal device runs, the processor executes the computer instructions stored in the memory to implement the identity recognition method according to any one of claims 1-8.
21. A computer-readable storage medium comprising computer instructions which, when run on a computer or a processor, cause the computer or the processor to perform the identity recognition method according to any one of claims 1-8.
22. A computer program product, characterized in that, when the computer program product is run on a computer or a processor, the computer or the processor is caused to perform the identity recognition method according to any one of claims 1-8.
CN202010881150.3A 2020-08-27 2020-08-27 Identity recognition method and equipment Pending CN112183208A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010881150.3A CN112183208A (en) 2020-08-27 2020-08-27 Identity recognition method and equipment

Publications (1)

Publication Number Publication Date
CN112183208A true CN112183208A (en) 2021-01-05

Family

ID=73924387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010881150.3A Pending CN112183208A (en) 2020-08-27 2020-08-27 Identity recognition method and equipment

Country Status (1)

Country Link
CN (1) CN112183208A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115689626A (en) * 2022-10-31 2023-02-03 荣耀终端有限公司 User attribute determination method of terminal equipment and electronic equipment
CN115689626B (en) * 2022-10-31 2024-03-01 荣耀终端有限公司 User attribute determining method of terminal equipment and electronic equipment

Similar Documents

Publication Publication Date Title
CN109831622B (en) Shooting method and electronic equipment
WO2021135707A1 (en) Search method for machine learning model and related apparatus and device
WO2021017988A1 (en) Multi-mode identity identification method and device
CN114946169A (en) Image acquisition method and device
CN110851067A (en) Screen display mode switching method and device and electronic equipment
CN111563466B (en) Face detection method and related product
CN111741283A (en) Image processing apparatus and method
CN112651510A (en) Model updating method, working node and model updating system
CN114727220B (en) Equipment searching method and electronic equipment
CN113297843B (en) Reference resolution method and device and electronic equipment
CN112256868A (en) Zero-reference resolution method, method for training zero-reference resolution model and electronic equipment
CN113971271A (en) Fingerprint unlocking method and device, terminal and storage medium
WO2022022319A1 (en) Image processing method, electronic device, image processing system and chip system
CN112183208A (en) Identity recognition method and equipment
CN114095602B (en) Index display method, electronic device and computer readable storage medium
CN114444705A (en) Model updating method and device
CN113468929A (en) Motion state identification method and device, electronic equipment and storage medium
CN112308202A (en) Method for determining decision factors of convolutional neural network and electronic equipment
WO2022007757A1 (en) Cross-device voiceprint registration method, electronic device and storage medium
WO2022022466A1 (en) Method and apparatus for determining file storage position, and terminal
CN111460942B (en) Proximity detection method and device, computer readable medium and terminal equipment
CN115546248A (en) Event data processing method, device and system
CN114238554A (en) Text label extraction method
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN114116610A (en) Method, device, electronic equipment and medium for acquiring storage information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination