CN112989096A - Facial feature migration method, electronic device and storage medium - Google Patents

Facial feature migration method, electronic device and storage medium

Info

Publication number
CN112989096A
CN112989096A
Authority
CN
China
Prior art keywords
feature
library
image
feature extraction
recognized
Prior art date
Legal status
Granted
Application number
CN202110247134.3A
Other languages
Chinese (zh)
Other versions
CN112989096B (en)
Inventor
潘军威
牟宇
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110247134.3A priority Critical patent/CN112989096B/en
Publication of CN112989096A publication Critical patent/CN112989096A/en
Application granted granted Critical
Publication of CN112989096B publication Critical patent/CN112989096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a facial feature migration method, an electronic device, and a storage medium. The method includes the following steps: acquiring an updated first feature extraction model; if no image to be recognized currently exists, performing feature extraction on the images in an image library using the first feature extraction model to obtain first facial features, and storing the first facial features in a first feature library; during feature extraction by the first feature extraction model, if a new image to be recognized exists, performing face recognition on the new image to be recognized using features in the first feature library and/or a second feature library, where the second feature library is obtained by performing feature extraction on the images in the image library with the second feature extraction model in use before the update. In this way, facial feature migration can be achieved without affecting the normal operation of face recognition.

Description

Facial feature migration method, electronic device and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a facial feature migration method, an electronic device, and a storage medium.
Background
When performing face recognition on the face image of a target to be recognized, a feature extraction model must be used to extract the facial features of that image.
To adapt quickly to complex and varied face recognition scenarios, the feature extraction model needs to be updated continuously. After the model is updated, the facial features extracted by the pre-update model are no longer compatible, so the features must be re-extracted with the updated model; this re-extraction process may be called facial feature migration. However, facial feature migration conflicts with the feature extraction performed during face recognition; in other words, a feature extraction model that is busy migrating features cannot simultaneously serve the face recognition process, so it can only be applied to face recognition after the migration is complete.
Disclosure of Invention
The present application provides a facial feature migration method, an electronic device, and a storage medium, which can solve the problem that face recognition cannot be performed while the existing facial feature migration process is running.
To solve the above technical problem, one technical solution adopted by the present application is to provide a facial feature migration method. The method includes: acquiring an updated first feature extraction model; if no image to be recognized currently exists, performing feature extraction on the images in an image library using the first feature extraction model to obtain first facial features, and storing the first facial features in a first feature library; and, during feature extraction by the first feature extraction model, if a new image to be recognized exists, performing face recognition on the new image to be recognized using features in the first feature library and/or a second feature library, where the second feature library is obtained by performing feature extraction on the images in the image library with the second feature extraction model in use before the update.
To solve the above technical problem, another technical solution adopted by the present application is to provide an electronic device comprising a processor and a memory coupled to the processor, where the memory stores program instructions and the processor is configured to execute the program instructions stored in the memory to implement the above method.
To solve the above technical problem, yet another technical solution adopted by the present application is to provide a storage medium storing program instructions that, when executed, implement the above method.
With the above solutions, when no image to be recognized exists, the facial features of the images in the image library are migrated from the second feature library to the first feature library using the first feature extraction model; moreover, during the migration, if a new image to be recognized appears, face recognition can still be performed on it using the features in the first feature library and/or the second feature library. Facial feature migration is thus achieved without affecting the normal operation of face recognition.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of a facial feature migration method according to the present application;
FIG. 2 is a flowchart illustrating a second embodiment of the facial feature migration method of the present application;
FIG. 3 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 4 is a schematic structural diagram of an embodiment of a storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
In the facial feature migration method provided by this application, the term "face" may cover human faces, faces of other animals, and the like. For convenience of description, human faces are used in the following illustration.
Before describing the facial feature migration method provided by this application, the process of verifying the identity of a target by means of face recognition is first described as an example:
An initial image containing the target to be recognized is captured with a camera; face detection is performed on the initial image, and the face region is cropped out according to the detection result to serve as the face image of the target to be recognized; the facial features in this face image are then extracted with a feature extraction model; the similarity between these facial features and each feature in a feature library is calculated, and whether the feature library contains a feature matching the facial features is determined from the similarities; if such a feature exists, the identity information corresponding to the matching feature in the feature library is taken as the identity information of the target to be recognized.
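For illustration only (not part of the original disclosure), the matching step of the above process can be sketched in Python roughly as follows; the function names, the choice of cosine similarity, and the threshold value are assumptions introduced for the example rather than details specified by this application.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # similarity between two feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(face_image, extract_features, feature_library: dict, threshold: float = 0.6):
    """feature_library maps identity information -> stored feature vector."""
    query = extract_features(face_image)          # facial features of the target to be recognized
    best_id, best_sim = None, -1.0
    for identity, stored in feature_library.items():
        sim = cosine_similarity(query, stored)    # similarity to each feature in the library
        if sim > best_sim:
            best_id, best_sim = identity, sim
    # a matching feature exists only if the best similarity clears the threshold
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```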
So that feature extraction for images to be recognized is not affected while facial features are being migrated, the facial feature migration method provided by this application may proceed as follows:
FIG. 1 is a flowchart illustrating a first embodiment of the facial feature migration method of the present application. It should be noted that the order of the steps shown in FIG. 1 is not limited in this embodiment, provided substantially the same result is obtained. As shown in FIG. 1, this embodiment may include:
S11: Acquire the updated first feature extraction model.
The feature extraction models can be divided into the updated first feature extraction model and the pre-update second feature extraction model, where the first feature extraction model may be obtained by updating the second feature extraction model. The first feature library mentioned later in this application is the feature library corresponding to the first feature extraction model, and the second feature library is the feature library corresponding to the second feature extraction model.
The first feature extraction model and the second feature extraction model may be stored in two different preset storage partitions, respectively. And/or the first feature library and the second feature library can be respectively saved to two different preset storage partitions. The preset storage partition for storing the first feature extraction model and the second feature extraction model and the preset storage partition for storing the first feature library and the second feature library can be the same or different. For example, the first feature extraction model and the first feature library are stored in a first preset storage partition, and the second feature extraction model and the second feature library are stored in a second preset storage partition. For another example, the first feature extraction model is stored in a first preset storage partition, the second feature extraction model is stored in a second preset storage partition, and the first feature library and the second feature library are stored in a third preset storage partition. Therefore, the storage partition of the feature extraction model and the feature library is not particularly limited.
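As an illustrative sketch only, one possible way to record such a partition layout is shown below; the partition paths and field names are assumptions made for the example (they follow the second arrangement described above, with the two models in separate partitions and both feature libraries in a third partition) and are not details given in this application.

```python
from dataclasses import dataclass

@dataclass
class MigrationStorage:
    # updated (first) model and pre-update (second) model in two different preset partitions
    first_model_path: str = "/partition1/feature_model_v2.bin"
    second_model_path: str = "/partition2/feature_model_v1.bin"
    # both feature libraries kept in a third preset partition
    first_library_path: str = "/partition3/features_v2.db"
    second_library_path: str = "/partition3/features_v1.db"
```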
S12: and judging whether the image to be identified exists currently.
The image to be recognized may be the face image of the aforementioned target to be recognized. Specifically, when it is detected that a target requires face recognition, an initial image containing the target to be recognized can be captured with a camera; face detection is performed on the initial image to obtain a face detection result; and the initial image is cropped based on the face detection result to obtain the image to be recognized.
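A hedged sketch of this acquisition step is given below using OpenCV's Haar-cascade face detector; the application does not prescribe a particular detector, so the detector choice and the decision to keep only the first detected face are assumptions made for the example.

```python
import cv2

def crop_face(initial_image):
    """Detect a face in the initial image and crop it out as the image to be recognized."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(initial_image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                              # no face detected in the initial image
    x, y, w, h = faces[0]                        # keep the first detected face region
    return initial_image[y:y + h, x:x + w]       # cropped face image
```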
If an image to be recognized currently exists, a face recognition task exists, meaning that the first feature extraction model needs to be used for feature extraction in the face recognition process; in other words, feature extraction needs to be performed on the image to be recognized with the first feature extraction model. Conversely, if no image to be recognized currently exists, the first feature extraction model does not currently need to be used for feature extraction in the face recognition process.
If not, S13 is executed.
S13: and performing feature extraction on the images in the image library by using the first feature extraction model to obtain a first facial feature, and storing the first facial feature into the first feature library.
The image library may include images of a plurality of qualified targets (targets that can pass identity verification), where each qualified target may correspond to one or more images.
When no image to be recognized exists, the first feature extraction model is used to re-extract the features of the images in the image library, i.e., to perform facial feature migration on the images in the image library.
It can be understood that, for each image in the image library, once its first facial feature has been extracted with the first feature extraction model and stored in the first feature library, facial feature migration for that image is considered complete.
S14: in the process of feature extraction of the first feature extraction model, if a new image to be recognized exists, face recognition is carried out on the new image to be recognized by using features in the first feature library and/or the second feature library.
And the second feature library is obtained by performing feature extraction on the images in the image library by using a second feature extraction model before updating. In other words, the features in the second feature library are composed of the second facial features of each image in the image library extracted by the second feature extraction model.
If the first feature extraction model is applied to the first facial feature migration process, a new image to be recognized exists. That is, when the facial feature migration is not completed, the feature extraction of the new image to be recognized needs to be performed by using the first feature extraction model. Wherein, the facial feature migration completion means that the facial feature migration of all the images in the image library is completed.
In this case, the face recognition of the new image to be recognized can be achieved in several ways as follows.
The first method: continue to use the pre-update second feature extraction model to extract the facial features of the new image to be recognized.
The second method: suspend execution of S13, and resume it after face recognition of the new image to be recognized is completed. That is, facial feature migration is paused first and the first feature extraction model is used to extract the facial features of the new image to be recognized; once face recognition of the new image to be recognized is completed (or its features have been extracted), facial feature migration resumes.
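The second method can be sketched as follows; this is an illustrative rendering under assumptions, not the patented implementation. `pending_recognition` is a hypothetical queue of newly arrived images to be recognized, and `recognize` stands for the face recognition routine that searches the first and/or second feature library.

```python
import queue

def migrate_features(image_library, first_model, first_library, second_library,
                     pending_recognition: "queue.Queue", recognize):
    """Migrate features image by image, pausing whenever a new image to be recognized arrives."""
    for image_id, image in image_library.items():
        # S14 / second method: suspend migration and serve any pending recognition requests first
        while not pending_recognition.empty():
            recognize(pending_recognition.get())
        # S13: re-extract the feature with the updated model and save it to the first library
        first_library[image_id] = first_model.extract(image)
        # optional (see below): drop the now-redundant second facial feature of this image
        second_library.pop(image_id, None)
```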
During face recognition, a feature matching the facial features extracted from the image to be recognized must be found in a feature library. The search may be performed only in the first feature library, only in the second feature library, or in both. When both libraries are searched, the search order may be preset, for example the first feature library is searched first and then the second; see the embodiment shown in FIG. 2 for details.
In addition, in other embodiments, after a first facial feature is saved to the first feature library, the second facial feature in the second feature library corresponding to the saved first facial feature may be deleted, where the first facial feature and the corresponding second facial feature are extracted from the same image in the image library. It can be understood that, for an image whose facial feature migration is complete, deleting its corresponding second facial feature from the second feature library reduces the memory occupied by redundant features.
Alternatively, in still other embodiments, after it is detected that the first feature extraction model has performed feature extraction on all images in the image library (i.e., facial feature migration is complete), the second feature library and/or the second feature extraction model may be deleted to reduce the memory occupied by redundant features and models. In addition, the preset storage partition originally holding the second feature extraction model can be marked as a free partition so that it can be reused for the next model update.
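A minimal sketch of this cleanup is given below, under the assumption that the second feature library and model are stored as files and that a `mark_partition_free` helper exists for bookkeeping of the preset storage partitions; both the paths and the helper are hypothetical.

```python
import os

def finish_migration(second_library_path: str, second_model_path: str, mark_partition_free):
    """After all images have been migrated, remove the redundant library/model and free the partition."""
    for path in (second_library_path, second_model_path):
        if os.path.exists(path):
            os.remove(path)                      # reduce memory occupied by redundant data
    # mark the partition that held the second model as free for the next model update
    mark_partition_free(os.path.dirname(second_model_path))
```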
By implementing this embodiment, when no image to be recognized exists, the facial features of the images in the image library are migrated from the second feature library to the first feature library using the first feature extraction model; and during the migration, if a new image to be recognized appears, face recognition can still be performed on it using the features in the first feature library and/or the second feature library. Facial feature migration is thus achieved without affecting the normal operation of face recognition.
FIG. 2 is a flowchart illustrating a second embodiment of the facial feature migration method of the present application. It should be noted that the order of the steps shown in FIG. 2 is not limited in this embodiment, provided substantially the same result is obtained. This embodiment is a further extension of S14; as shown in FIG. 2, it may include:
s141: and carrying out first face recognition on the new image to be recognized by using the features in the first feature library.
The facial features of the new image to be recognized (the facial features to be recognized) are extracted and compared with each feature in the first feature library, i.e., the similarity between the facial features to be recognized and each feature in the first feature library is calculated; the maximum similarity may be taken as the first face recognition result.
S142: Determine whether the first face recognition has failed.
If the maximum similarity is smaller than a preset similarity threshold, the first face recognition is considered to have failed, i.e., the target to be recognized has not passed identity verification. Conversely, if the maximum similarity is greater than or equal to the preset similarity threshold, the first face recognition is considered successful, i.e., the target to be recognized has passed identity verification (is a qualified target), and the identity information of the target corresponding to the feature with the maximum similarity in the first feature library may be taken as the identity information of the target to be recognized.
If the first face recognition fails, S143 is executed.
S143: and carrying out second face recognition on the new image to be recognized by utilizing the features in the second feature library.
Because the current facial feature migration has not yet been completed, the features contained in the first feature library are incomplete. Therefore, if the first face recognition fails, the facial features to be recognized are compared with each feature in the second feature library, i.e., the similarity between the facial features to be recognized and each feature in the second feature library is calculated; the maximum similarity may be taken as the second face recognition result.
In this embodiment, the facial feature to be recognized may be obtained by performing feature extraction on a new image to be recognized by using the first feature extraction model or the second feature extraction model.
In one embodiment, the first feature extraction model is used to obtain the facial features to be recognized, the facial features to be recognized are applied to the first face recognition, and the facial features to be recognized are applied to the second face recognition in the case that the first face recognition fails. In this way, the obtained first face recognition result is more accurate.
In another embodiment, the second feature extraction model is used to obtain the facial features to be recognized, and the facial features to be recognized are applied to the first face recognition, and the facial features to be recognized are applied to the second face recognition if the first face recognition fails. In this way, the obtained second face recognition result is more accurate.
In yet another embodiment, the first feature extraction model is used to obtain the facial features to be recognized for the first face recognition; if the first face recognition fails, the second feature extraction model is used to obtain the facial features to be recognized for the second face recognition. In this way, both the first and the second face recognition results are more accurate, but correspondingly more time is consumed.
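The two-stage lookup of this embodiment can be sketched as follows, here following the first alternative (the facial features to be recognized are extracted once with the first feature extraction model). The `match` helper is assumed to return the best-matching identity above a similarity threshold, or None on failure; all names are illustrative, not part of the disclosure.

```python
def recognize_two_stage(new_image, first_model, first_library, second_library,
                        match, threshold=0.6):
    """S141-S143: try the first feature library, fall back to the second on failure."""
    query = first_model.extract(new_image)               # facial features to be recognized
    identity = match(query, first_library, threshold)    # S141: first face recognition
    if identity is not None:                              # S142: did the first recognition succeed?
        return identity
    return match(query, second_library, threshold)        # S143: second face recognition
```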
The facial feature migration of the images in the image library in the above embodiments may be performed in the numbering order of the images. The migration process is described below by taking image A and image B in the image library as an example:
Determine whether an image to be recognized currently exists;
If such an image exists, first use the first feature extraction model to extract its features, and after that extraction is completed, extract the first facial features of image A with the first feature extraction model; if no such image exists, directly extract the first facial features of image A with the first feature extraction model;
Store the first facial features of image A in the first feature library, and delete the second facial features of image A from the second feature library;
Determine whether a new image to be recognized currently exists. If such an image exists, first use the first feature extraction model to extract its features, and after that extraction is completed, extract the first facial features of image B with the first feature extraction model; if no such image exists, directly extract the first facial features of image B with the first feature extraction model;
Store the first facial features of image B in the first feature library, and delete the second facial features of image B from the second feature library.
Fig. 3 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in fig. 3, the electronic device includes a processor 21, and a memory 22 coupled to the processor 21.
Wherein the memory 22 stores program instructions for implementing the method of any of the above embodiments; processor 21 is operative to execute program instructions stored by memory 22 to implement the steps of the above-described method embodiments. The processor 21 may also be referred to as a CPU (Central Processing Unit). The processor 21 may be an integrated circuit chip having signal processing capabilities. The processor 21 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
FIG. 4 is a schematic structural diagram of an embodiment of a storage medium according to the present application. As shown in FIG. 4, the computer-readable storage medium 30 of the embodiment of the present application stores program instructions 31, and the program instructions 31, when executed, implement the method provided by the above embodiments of the present application. The program instructions 31 may form a program file stored in the computer-readable storage medium 30 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned computer-readable storage medium 30 includes media capable of storing program code, such as a USB flash drive, a portable hard drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as computers, servers, mobile phones, and tablets.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. A facial feature migration method, comprising:
acquiring an updated first feature extraction model;
if the image to be recognized does not exist at present, performing feature extraction on the image in the image library by using the first feature extraction model to obtain a first facial feature, and storing the first facial feature into a first feature library;
in the process of extracting the features by the first feature extraction model, if a new image to be recognized exists, performing face recognition on the new image to be recognized by using the features in the first feature library and/or the second feature library, wherein the second feature library is obtained by performing feature extraction on the image in the image library by using the second feature extraction model before updating.
2. The method according to claim 1, wherein the face recognition of the new image to be recognized by using the features in the first feature library and/or the second feature library comprises:
carrying out first face recognition on the new image to be recognized by using the features in the first feature library;
and if the first face recognition fails, performing second face recognition on the new image to be recognized by using the features in the second feature library.
3. The method according to claim 2, wherein the performing first face recognition on the new image to be recognized by using the features in the first feature library or performing second face recognition on the new image to be recognized by using the features in the second feature library comprises:
and respectively comparing the facial features to be recognized with the features in the first feature library/the second feature library, wherein the facial features to be recognized are obtained by performing feature extraction on the new image to be recognized by using the first feature extraction model or the second feature extraction model.
4. The method of claim 1, wherein after said saving the first facial feature to a first feature library, the method further comprises:
and deleting a second facial feature corresponding to the stored first facial feature in the second feature library, wherein the first facial feature and the corresponding second facial feature are obtained by performing feature extraction on the same image in the image library.
5. The method according to claim 1, wherein during the feature extraction by the first feature extraction model, if the new image to be recognized is detected, the method further comprises:
and suspending the step of performing the feature extraction on the image in the image library by using the first feature extraction model to obtain a first facial feature, and storing the first facial feature in the first feature library until the new image to be recognized is subjected to face recognition, and then continuing to perform the step of performing the feature extraction on the image in the image library by using the first feature extraction model to obtain the first facial feature, and storing the first facial feature in the first feature library.
6. The method of claim 1, wherein after said feature extracting the images in the image library by using the first feature extraction model to obtain a first facial feature, the method further comprises:
and deleting the second feature library and/or the second feature extraction model after detecting that the first feature extraction model completes feature extraction on all the images in the image library.
7. The method according to claim 1, wherein the first feature extraction model and the second feature extraction model are respectively saved to two different preset storage partitions;
and/or the first feature library and the second feature library are respectively saved to two different preset storage partitions.
8. The method according to claim 1, characterized in that the method further comprises the following steps of acquiring the image to be recognized:
acquiring an initial image containing a target to be recognized;
carrying out face detection on the initial image to obtain a face detection result;
and cutting the initial image based on the face detection result to obtain the image to be recognized.
9. An electronic device, comprising a processor and a memory coupled to the processor, wherein,
the memory stores program instructions;
the processor is configured to execute the program instructions stored by the memory to implement the method of any of claims 1-8.
10. A storage medium, characterized in that the storage medium stores program instructions that, when executed, implement the method of any one of claims 1-8.
CN202110247134.3A 2021-03-05 2021-03-05 Face feature migration method, electronic device and storage medium Active CN112989096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110247134.3A CN112989096B (en) 2021-03-05 2021-03-05 Face feature migration method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110247134.3A CN112989096B (en) 2021-03-05 2021-03-05 Face feature migration method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112989096A (en) 2021-06-18
CN112989096B CN112989096B (en) 2023-03-14

Family

ID=76353088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110247134.3A Active CN112989096B (en) 2021-03-05 2021-03-05 Face feature migration method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112989096B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080273766A1 (en) * 2007-05-03 2008-11-06 Samsung Electronics Co., Ltd. Face recognition system and method based on adaptive learning
CN110414376A (en) * 2019-07-08 2019-11-05 浙江大华技术股份有限公司 Update method, face recognition cameras and the server of human face recognition model
CN110659582A (en) * 2019-08-29 2020-01-07 深圳云天励飞技术有限公司 Image conversion model training method, heterogeneous face recognition method, device and equipment
CN111061706A (en) * 2019-11-07 2020-04-24 浙江大华技术股份有限公司 Face recognition algorithm model cleaning method and device and storage medium
CN111241868A (en) * 2018-11-28 2020-06-05 杭州海康威视数字技术股份有限公司 Face recognition system, method and device
CN111506592A (en) * 2020-04-21 2020-08-07 腾讯科技(深圳)有限公司 Method and device for upgrading database
CN112329797A (en) * 2020-11-13 2021-02-05 杭州海康威视数字技术股份有限公司 Target object retrieval method, device, server and storage medium

Also Published As

Publication number Publication date
CN112989096B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
US10417478B2 (en) Method for improving a fingerprint template, device and terminal thereof
CN109559735B (en) Voice recognition method, terminal equipment and medium based on neural network
CN110489951A (en) Method, apparatus, computer equipment and the storage medium of risk identification
US20210216617A1 (en) Biometric authentication device, biometric authentication method, and computer-readable recording medium recording biometric authentication program
JP2018508892A (en) Method and apparatus for assigning device fingerprints to Internet devices
WO2018161312A1 (en) Fingerprint identification method and apparatus
CN113689291B (en) Anti-fraud identification method and system based on abnormal movement
CN112052251B (en) Target data updating method and related device, equipment and storage medium
CN111611821B (en) Two-dimensional code identification method and device, computer equipment and readable storage medium
CN112989096B (en) Face feature migration method, electronic device and storage medium
CN115527244B (en) Fingerprint image matching method and device, computer equipment and storage medium
WO2019201029A1 (en) Candidate box update method and apparatus
US20210326615A1 (en) System and method for automatically detecting and repairing biometric crosslinks
CN111522570B (en) Target library updating method and device, electronic equipment and machine-readable storage medium
US10902106B2 (en) Authentication and authentication mode determination method, apparatus, and electronic device
CN111966545A (en) PCIe deconcentrator hot plug test method, device, equipment and storage medium
CN113409051B (en) Risk identification method and device for target service
EP4131147A1 (en) Determination device, method, and non-temporary computer-readable medium having program stored therein
US11087478B2 (en) Recover keypoint-based target tracking from occlusion using deep neural network segmentation
CN112529008A (en) Image recognition method, image feature processing method, electronic device and storage medium
CN110321758B (en) Risk management and control method and device for biological feature recognition
CN117351226A (en) Image recognition method and device, electronic equipment and storage medium
CN116844199A (en) Face recognition method, device and equipment
CN113961948A (en) Authority identification method and device, electronic equipment and storage medium
CN116361764A (en) Face recognition method, system, equipment and medium based on java

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant