WO2023029702A1 - Method and apparatus for verifying an image - Google Patents

Method and apparatus for verifying an image

Info

Publication number
WO2023029702A1
WO2023029702A1 (PCT/CN2022/101888; CN2022101888W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
local
verified
feature
feature recognition
Prior art date
Application number
PCT/CN2022/101888
Other languages
English (en)
French (fr)
Inventor
苏庆勇
狄帅
裴积全
单新媛
阙成涛
易津锋
Original Assignee
京东科技信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 京东科技信息技术有限公司
Publication of WO2023029702A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to a method and apparatus for verifying an image.
  • Adversarial defense methods are commonly used to protect deep learning networks from attacks by adversarial samples.
  • Existing adversarial defense methods include performing adversarial sample detection on the samples input to the network, conducting adversarial training on the deep learning network, or performing data preprocessing on the samples.
  • The present disclosure provides a method, an apparatus, an electronic device, and a computer-readable storage medium for verifying an image.
  • In a first aspect, a method for verifying an image includes: acquiring an image to be verified and using multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified; acquiring a reference image and using the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image; for each target local feature recognition model among the multiple models, obtaining the feature similarity between the first local feature and the second local feature identified by that model; and determining, according to the obtained multiple feature similarities, whether the image to be verified passes verification.
  • Acquiring the image to be verified and identifying the first local features of its multiple local regions includes: acquiring a face image to be verified and dividing the face image to be verified into different face regions; and, for each face region in the face image to be verified, identifying the first local feature of the face region using the target local feature recognition model for identifying the features of that region.
  • Determining whether the image to be verified passes verification according to the obtained multiple feature similarities includes: in response to determining that, among the multiple feature similarities, at least one feature similarity satisfies the first similarity threshold, determining that the image to be verified passes verification.
  • Alternatively, determining whether the image to be verified passes verification according to the obtained multiple feature similarities includes: in response to determining that each of the multiple feature similarities satisfies the second similarity threshold, determining that the image to be verified passes verification.
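The first-aspect method, together with the two threshold variants above, can be sketched as follows. The region names, the stub feature extractors, and cosine similarity as the similarity measure are illustrative assumptions, not details given in the disclosure:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_by_local_features(regions_to_verify, regions_reference,
                             local_models, threshold, require_all=False):
    """Compare corresponding local regions of the image to be verified and the
    reference image, using one target local feature recognition model per
    region.  `local_models` maps a region name to a feature-extraction
    callable (in practice, a trained model for that region)."""
    similarities = []
    for name, model in local_models.items():
        first = model(regions_to_verify[name])    # first local feature
        second = model(regions_reference[name])   # second local feature
        similarities.append(cosine_similarity(first, second))
    if require_all:   # every similarity must satisfy the (second) threshold
        passed = all(s >= threshold for s in similarities)
    else:             # at least one similarity satisfies the (first) threshold
        passed = any(s >= threshold for s in similarities)
    return passed, similarities
```

In practice each callable would be a trained CNN operating on its own face region; identity-style stubs are enough to exercise the control flow.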
  • In a second aspect, a method for verifying an image includes: acquiring an image to be verified and using a trained feature recognition model to identify a first global feature of the image to be verified; acquiring a reference image and using the trained feature recognition model to identify a second global feature of the reference image; in response to determining that the similarity between the first global feature and the second global feature satisfies a third similarity threshold, verifying the image to be verified using the method of the first aspect; or, in response to determining that the similarity between the first global feature and the second global feature does not satisfy the third similarity threshold, determining that the image to be verified fails verification.
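A minimal sketch of this second-aspect cascade, assuming cosine similarity over feature vectors and treating the trained global model and the first-aspect local check as injected callables (both hypothetical stand-ins for the models described in the text):

```python
import numpy as np

def verify_cascade(image, reference, global_model, local_check, global_threshold):
    """Two-stage verification: a global-feature pre-check followed, only on
    success, by the local-feature verification of the first aspect."""
    g1 = global_model(image)       # first global feature
    g2 = global_model(reference)   # second global feature
    sim = float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2)))
    if sim < global_threshold:     # third similarity threshold not met
        return False               # fails verification outright
    return local_check(image, reference)   # re-verify on local regions
```

The design point is that the comparatively expensive per-region verification runs only for images that already look globally similar to the reference.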
  • In a third aspect, a method for training a model includes: acquiring at least one piece of sample data, where the sample data includes a sample image and labels of the local images of each local region in the sample image; acquiring initial local feature recognition models for identifying the features of each local region; for each of the local images, inputting the local image into the initial local feature recognition model for the local region to which it belongs and obtaining the local feature output by that model; obtaining multiple losses between the labels of the local images of each local region and the labels represented by the local features of those images; and training the multiple initial local feature recognition models according to the mean of the obtained losses, so as to obtain multiple target local feature recognition models, where the target local feature recognition models are applied in the method of the first or second aspect.
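As an illustration of the third aspect, the sketch below trains one linear classifier per local region and lets the mean of the per-region cross-entropy losses drive a single update of every model. Linear models, plain SGD, and cross-entropy are simplifying assumptions standing in for the initial local feature recognition models:

```python
import numpy as np

def train_local_models(samples, n_regions, dim, n_classes, lr=0.1, epochs=50):
    """`samples` is a list of (region_features, label) pairs, where
    region_features holds one feature vector per local region of the sample
    image.  Averaging the per-region losses means each region's gradient is
    scaled by 1/n_regions in the joint update."""
    rng = np.random.default_rng(0)
    weights = [rng.normal(scale=0.1, size=(dim, n_classes))
               for _ in range(n_regions)]
    for _ in range(epochs):
        for region_features, label in samples:
            grads = []
            for w, x in zip(weights, region_features):
                logits = x @ w
                p = np.exp(logits - logits.max())
                p /= p.sum()                      # softmax probabilities
                g = np.outer(x, p)                # d(cross-entropy)/dw ...
                g[:, label] -= x                  # ... = outer(x, p - onehot)
                grads.append(g)
            # one joint step: mean loss over regions updates every model
            for w, g in zip(weights, grads):
                w -= lr * g / n_regions
    return weights
```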
  • An apparatus for verifying an image includes: a first recognition unit configured to acquire an image to be verified and use multiple target local feature recognition models to respectively identify the first local features of multiple local regions of the image to be verified;
  • a second recognition unit configured to acquire a reference image and use the multiple target local feature recognition models to respectively identify the second local features of multiple local regions of the reference image;
  • a matching unit configured to obtain, for each target local feature recognition model among the multiple models, the feature similarity between the first local feature recognized by that model and the second local feature recognized by that model;
  • and a verification unit configured to determine, according to the obtained multiple feature similarities, whether the image to be verified passes verification.
  • The first recognition unit includes: a first division module configured to acquire a face image to be verified and divide it into different face regions; and a first recognition module configured to, for each face region in the face image to be verified, identify the first local feature of the region using the target local feature recognition model for identifying the features of that region. The second recognition unit includes: a second division module configured to acquire a reference face image and divide it into different face regions; and a second recognition module configured to, for each face region in the reference face image, identify the second local feature of the region using the target local feature recognition model for identifying the features of that region.
  • the verification unit includes: a first verification module configured to determine that the image to be verified passes the verification in response to determining that among the plurality of feature similarities, at least one feature similarity satisfies a first similarity threshold.
  • the verification unit includes: a second verification module configured to determine that the image to be verified passes the verification in response to determining that each of the plurality of feature similarities satisfies a second similarity threshold.
  • An apparatus for verifying an image includes: a third recognition unit configured to acquire an image to be verified and use a trained feature recognition model to identify the first global feature of the image to be verified; a fourth recognition unit configured to acquire a reference image and use the trained feature recognition model to identify the second global feature of the reference image; a first verification unit configured to, in response to determining that the similarity between the first global feature and the second global feature satisfies the third similarity threshold, verify the image to be verified using the method of the first aspect; and a second verification unit configured to, in response to determining that the similarity between the first global feature and the second global feature does not satisfy the third similarity threshold, determine that the image to be verified fails verification.
  • An apparatus for training a model includes: a first acquisition unit configured to acquire at least one piece of sample data, where the sample data includes a sample image and labels of the local images of each local region in the sample image; a second acquisition unit configured to acquire the initial local feature recognition models used to identify the features of each local region; a prediction unit configured to, for each of the local images, input the local image into the initial local feature recognition model for the local region to which it belongs and obtain the local feature output by that model; a calculation unit configured to obtain multiple losses between the labels of the local images of each local region and the labels represented by the local features of those images; and a training unit configured to train the multiple initial local feature recognition models according to the mean of the obtained losses, obtaining multiple target local feature recognition models, where the target local feature recognition models are applied in the method of the first or second aspect.
  • An embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for verifying an image as provided in the first aspect, the method for verifying an image as provided in the second aspect, or the method for training a model as provided in the third aspect.
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method for verifying an image as provided in the first aspect, the method for verifying an image as provided in the second aspect, or the method for training a model as provided in the third aspect.
  • FIG. 1 is an exemplary system architecture diagram to which embodiments of the present disclosure can be applied;
  • FIG. 2 is a flowchart of one embodiment of a method for verifying an image according to the present disclosure;
  • FIG. 3 is a flowchart of another embodiment of a method for verifying an image according to the present disclosure;
  • FIG. 4 is a flowchart of a further embodiment of a method for verifying an image according to the present disclosure;
  • FIG. 5 is a flowchart of global feature recognition in an application scenario of the method for verifying an image according to the present disclosure;
  • FIG. 6 is a flowchart of local feature recognition in an application scenario of the method for verifying an image according to the present disclosure;
  • FIG. 7 is a flowchart of one embodiment of a method for training a model according to the present disclosure;
  • FIG. 8 is a schematic structural diagram of an embodiment of an apparatus for verifying an image according to the present disclosure;
  • FIG. 9 is a schematic structural diagram of another embodiment of an apparatus for verifying an image according to the present disclosure;
  • FIG. 10 is a schematic structural diagram of an embodiment of an apparatus for training a model according to the present disclosure;
  • FIG. 11 is a block diagram of an electronic device for implementing the method for verifying an image of an embodiment of the present disclosure.
  • The adversarial defense method based on adversarial sample detection provides only a limited degree of defense,
  • and the adversarial defense methods based on adversarial training and sample data preprocessing cause the deep learning network to have poor detection performance on noise-free samples.
  • The method and apparatus for verifying an image include: acquiring an image to be verified and using multiple target local feature recognition models to respectively identify the first local features of multiple local regions of the image to be verified; acquiring a reference image and using the multiple target local feature recognition models to respectively identify the second local features of multiple local regions of the reference image; for each target local feature recognition model, obtaining the feature similarity between the first local feature and the second local feature identified by that model; and determining, according to the obtained multiple feature similarities, whether the image to be verified passes verification. This can improve the system's defense performance against adversarial samples.
  • Because this method verifies images based on the feature similarity of local regions, it does not need to preprocess the sample data, which avoids the problem of preprocessing clean samples (noise-free samples) together with adversarial samples (noisy samples) and thereby degrading detection performance on noise-free samples.
  • Moreover, the method does not verify images with a network obtained by adversarial training, which avoids the poor performance such networks exhibit when detecting noise-free samples.
  • FIG. 1 shows an exemplary system architecture 100 to which an embodiment of the method for verifying an image or the apparatus for verifying an image of the present disclosure can be applied.
  • a system architecture 100 may include terminal devices 101 , 102 , 103 , a network 104 and a server 105 .
  • the network 104 is used as a medium for providing communication links between the terminal devices 101 , 102 , 103 and the server 105 .
  • Network 104 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • Users can use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like.
  • the terminal devices 101, 102, and 103 may be user terminal devices on which various client applications, such as image recognition applications, video recognition applications, playback applications, search applications, and financial applications, can be installed.
  • Terminal devices 101, 102, 103 may be various electronic devices with display screens and support for receiving server messages, including but not limited to smartphones, tablet computers, e-book readers, electronic players, laptop computers and desktop computers etc.
  • The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices; when they are software, they may be installed in the electronic devices listed above and implemented either as multiple pieces of software or software modules (for example, multiple software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
  • The server 105 can obtain the image to be verified through the terminal devices 101, 102, and 103, use multiple target local feature recognition models to respectively identify the first local features of multiple local regions of the image to be verified, obtain a reference image, and use the multiple target local feature recognition models to respectively identify the second local features of multiple local regions of the reference image. For each model among the multiple target local feature recognition models, the server obtains the feature similarity between the first local feature and the second local feature determined by using that model, and then determines, according to the obtained multiple feature similarities, whether the image to be verified passes verification.
  • The service processing methods provided by the embodiments of the present disclosure can be executed by the terminal devices 101, 102, 103 or by the server 105; correspondingly, the service processing apparatus can be provided in the terminal devices 101, 102, 103 or in the server 105.
  • The terminal devices, networks, and servers in FIG. 1 are merely illustrative; there may be any number of terminal devices, networks, and servers according to implementation needs.
  • a flow 200 of an embodiment of a method for verifying an image according to the present disclosure is shown, including the following steps:
  • Step 201: acquire an image to be verified, and use multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified.
  • In this embodiment, the executing subject of the method for verifying an image can acquire the image to be verified and use multiple target local feature recognition models to respectively identify the first local features of multiple local regions of the image to be verified. That is, multiple target local feature recognition models are obtained, and each of them is used to identify a different local region of the image to be verified, so as to obtain the first local feature of that region.
  • The image to be verified may be a face image, or an image containing any target object (such as animals, plants, landscapes, drawings, or various items). The target local feature recognition model may be a trained deep learning model, a linear regression model, or the like, obtained from the Internet, local storage, or cloud storage.
  • Step 202: acquire a reference image, and use multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image.
  • A reference image may be acquired, and multiple target local feature recognition models may be used to respectively recognize the second local features of multiple local regions of the reference image. That is, each target local feature recognition model among the multiple models is used to identify a different local region of the reference image, so as to obtain the second local feature of that region.
  • The reference image is the benchmark against which the image to be verified is compared. For example, if the image to be verified is a face image used to verify whether the current user has the authority to log in to or operate a certain account, the reference image can be the registered face image used during registration.
  • Step 203: for each target local feature recognition model among the multiple target local feature recognition models, obtain the feature similarity between the first local feature determined by using that model and the second local feature determined by using that model.
  • In this embodiment, the first local feature and the second local feature determined by using the target local feature recognition model can be obtained, and the feature similarity between them can be calculated. It can be understood that multiple pairs of first and second local features can be obtained based on the multiple target local feature recognition models, and multiple feature similarities can be calculated.
  • Step 204: determine whether the image to be verified passes verification according to the acquired multiple feature similarities.
  • In the method provided by this embodiment, the image to be verified is acquired and multiple target local feature recognition models respectively identify the first local features of multiple local regions of the image to be verified; a reference image is acquired and the models respectively identify the second local features of multiple local regions of the reference image; for each model, the feature similarity between the first local feature and the second local feature it identifies is obtained; and whether the image to be verified passes verification is determined according to the obtained multiple feature similarities. This can improve the system's defense performance against adversarial samples.
  • Because this method verifies images based on the feature similarity of local regions, it does not need to preprocess the sample data, which avoids the problem of preprocessing clean samples (noise-free samples) together with adversarial samples (noisy samples) and thereby degrading the network's detection performance on noise-free samples.
  • In addition, the method does not verify images with a network obtained by adversarial training, which avoids the poor performance such networks exhibit when detecting noise-free samples.
  • a flow 300 of another embodiment of the method for verifying an image according to the present disclosure is shown, including the following steps:
  • Step 301: acquire a face image to be verified, and divide the face image to be verified into different face regions.
  • In this embodiment, the executing subject of the method for verifying an image can acquire a face image to be verified and divide it into different face regions, such as the forehead region, eye region, cheek region, and mouth region.
  • The face regions can be divided using a pre-trained facial region division model, or based on preset ratio data or position information for each region of the face.
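One simple way to realize the preset-ratio division mentioned above is to split the face image into horizontal bands by row ratios; the band names and ratio values here are illustrative, not taken from the disclosure:

```python
import numpy as np

def divide_face_regions(image, ratios=None):
    """Split a face image (an H x W array) into named horizontal bands.
    A pre-trained region-division model could be substituted here instead."""
    if ratios is None:
        ratios = {"forehead": (0.00, 0.30), "eyes": (0.30, 0.50),
                  "cheeks": (0.50, 0.75), "mouth": (0.75, 1.00)}
    h = image.shape[0]
    return {name: image[int(h * top): int(h * bottom)]
            for name, (top, bottom) in ratios.items()}
```

Each returned crop can then be fed to the target local feature recognition model trained for that region.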
  • Step 302: for each face region in the face image to be verified, identify the first local feature of the face region using the target local feature recognition model for identifying the features of that region.
  • the first local feature of the face area is identified by using the target local feature recognition model for identifying the features of the face area.
  • For example, the target local feature recognition model for identifying the features of the eye region is used to recognize the eye-region image of the face image to be verified, obtaining the feature of the eye region of the face image to be verified, which can be called the first local feature.
  • Step 303: acquire a reference face image, and divide the reference face image into different face regions.
  • a reference face image can be acquired and divided into different face regions, such as forehead region, eye region, cheek region, mouth region and so on.
  • the face area can be divided by using a pre-trained facial area division model, or can be divided based on the preset ratio data or area position information of each area of the face.
  • Step 304: for each face region in the reference face image, identify the second local feature of the face region using the target local feature recognition model for identifying the features of that region.
  • the second local feature of the face region is identified using the target local feature recognition model used to identify the features of the face region.
  • For example, the target local feature recognition model for identifying the features of the eye region is used to recognize the eye-region image of the reference face image, obtaining the feature of the eye region of the reference face image, which can be called the second local feature.
  • Step 305: for each target local feature recognition model among the multiple target local feature recognition models, obtain the feature similarity between the first local feature and the second local feature recognized by that model.
  • Step 306: determine whether the image to be verified passes verification according to the acquired multiple feature similarities.
  • steps 305 and 306 in this embodiment are consistent with the descriptions of steps 203 and 204, and will not be repeated here.
  • In this embodiment, the image to be verified is a face image. The face image can be divided into different local regions based on face areas, and the face image to be verified is verified based on the feature similarity between each face region of the face image to be verified and the corresponding face region of the reference face image.
  • The pixel values of an adversarial sample have been altered, and the perturbation added to an adversarial sample usually does not act on a single pixel; rather, there is a certain continuity between pixels at different positions, and the perturbation values at different positions depend on one another.
  • Using the target local feature recognition models can destroy the continuity and dependency of the adversarial perturbation, rendering the adversarial attack ineffective.
  • Meanwhile, this method has little impact on the true pass rate of the face recognition system and does not affect the network's recognition/authentication performance on clean samples.
  • Determining whether the image to be verified passes verification includes: in response to determining that, among the multiple feature similarities, at least one feature similarity satisfies the first similarity threshold, determining that the image to be verified passes verification.
  • Alternatively, determining whether the image to be verified passes verification according to the acquired multiple feature similarities includes: in response to determining that each of the multiple feature similarities satisfies the second similarity threshold, determining that the image to be verified passes verification.
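The two decision rules can be written as one small helper; the mode names are illustrative, with "any" corresponding to the first-similarity-threshold variant and "all" to the stricter second-threshold variant:

```python
def passes_verification(similarities, threshold, mode="any"):
    """Decide whether the image to be verified passes, given the feature
    similarities of its local regions against the reference image."""
    if mode == "any":   # at least one similarity meets the first threshold
        return any(s >= threshold for s in similarities)
    return all(s >= threshold for s in similarities)   # strict: every region
```

The strict "all" rule is the natural choice when the goal is rejecting adversarial samples, since a perturbation that fools one regional model is unlikely to satisfy every regional comparison.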
  • a flow 400 of an embodiment of a method for verifying an image according to the present disclosure is shown, including the following steps:
  • Step 401: acquire the image to be verified, and use the trained feature recognition model to identify the first global feature of the image to be verified.
  • the executing subject of the method for verifying an image can obtain the image to be verified, and use a trained feature recognition model to identify the first global feature of the image to be verified.
  • the trained feature recognition model performs feature recognition based on the global/all image regions of the image, that is, the image without region division, so as to obtain the global features of the image.
  • Global features are features relative to local features.
  • the global feature identified based on the image to be verified may be referred to as the first global feature.
  • Step 402: acquire the reference image, and use the trained feature recognition model to recognize the second global feature of the reference image.
  • a reference image may be acquired, and a trained feature recognition model may be used to identify the second global feature of the reference image.
  • the global feature identified based on the reference image may be called the second global feature.
  • Step 4031: in response to determining that the similarity between the first global feature and the second global feature satisfies the third similarity threshold, verify the image to be verified using the method of the embodiment described in FIG. 2 or FIG. 3.
  • In this case, the method of the embodiment described in FIG. 2 or FIG. 3 can be further adopted: the image to be verified and the reference image are divided into regions, and verification is performed again based on the similarity between the regional features of each divided region, so as to improve the accuracy of verifying the image to be verified.
  • Step 4032: in response to determining that the similarity between the first global feature and the second global feature does not satisfy the third similarity threshold, determine that the image to be verified fails verification.
  • In this case, the image to be verified is not similar to the reference image, and it may be determined that the image to be verified does not pass verification.
  • The method for verifying an image provided in this embodiment first compares the similarity between the image to be verified and the reference image based on global features; only after the image to be verified is determined to be globally similar to the reference image is the method of the embodiment described in FIG. 2 or FIG. 3 applied. The local-feature comparison between the image to be verified and the reference image is thus performed only for images that have passed global feature verification, rather than for all images to be verified, which improves both the accuracy and the efficiency of verification and avoids a large amount of unnecessary local feature recognition.
  • The method for verifying an image can be applied to a face recognition system, which obtains a face image to be verified (the image to be verified) input by a user and uses the trained feature recognition model to extract the global features of the face image to be verified.
  • The face recognition system then obtains the registered face image (the reference image) registered by the user from local or cloud storage and uses the global feature recognition model to extract the global features of the registered face image.
  • the face image to be verified is divided into different face regions, and for each face region, the target local feature recognition model used to identify the features of that face region recognizes the first local feature of the face region.
  • likewise, the registered face image is divided into different face regions, and for each face region, the second local feature of the face region is recognized using the target local feature recognition model for identifying the features of that face region.
  • the similarity S_i (1 ≤ i ≤ n) represents the result of comparing the local features extracted by the i-th target local feature recognition model from a certain face region in the face image to be verified with the local features extracted by the same model from the corresponding face region in the registered face image.
  • if all the similarities S_i in the similarity sequence [S_1, S_2, ..., S_n] satisfy the similarity threshold, the face recognition system can determine that the face image to be verified has passed verification, that is, that the face image to be verified is not an adversarial example used to attack the face recognition system.
  • a flow 700 of an embodiment of a method for training a model according to the present disclosure is shown, including the following steps:
  • Step 701: at least one piece of sample data is acquired, where the sample data includes a sample image and the labels of the local images of each local area in the sample image.
  • the execution subject of the method for training the model can obtain at least one piece of sample data through a terminal device, cloud storage, or local storage; a piece of sample data can include a sample image and the labels of the local images of each local area in the sample image.
  • for example, a piece of sample data may include a face image together with parameters such as the size and pixel characteristics of the forehead image of the forehead area, the size and interpupillary distance of the eye image of the eye area, and the size of the mouth image of the mouth area of the face image.
  • Step 702: each initial local feature recognition model used to identify the features of each local area is acquired.
  • each initial local feature recognition model for identifying features of each local region in the sample image may be acquired, and different initial local feature recognition models are used to recognize features of different local regions in the sample image.
  • Each initial local feature recognition model can be any type of deep learning model.
  • Step 703: for each partial image among the partial images of each region, input the partial image into the initial local feature recognition model used to identify the features of the local region to which the partial image belongs, and obtain the local feature.
  • the partial image is input into the initial local feature recognition model used to identify the features of the local region to which the partial image belongs, so as to obtain the local features output by that initial local feature recognition model.
  • for example, the eye image is input into the initial local feature recognition model A used to identify the features of the eye region, so as to obtain the eye features output by the initial local feature recognition model A.
  • the mouth image is input into the initial local feature recognition model B used to identify the features of the mouth area, so as to obtain the mouth features output by the initial local feature recognition model B.
  • the cheek image is input into the initial local feature recognition model C used to identify the features of the cheek area, and the cheek features output by the initial local feature recognition model C are obtained.
  • Step 704: acquire the loss between the label of the local image and the label represented by the local feature.
  • the loss between the label of the local image and the label represented by the local feature can be obtained.
  • for example, the label of the eye image in the sample data is obtained, along with the loss between that label and the label represented by the eye feature identified by the initial local feature recognition model A in step 703, where the label represented by the eye feature can be a size parameter describing a characteristic of the eye (such as interpupillary distance) or information describing the shape of the eye (such as almond eyes).
  • Step 705: train the multiple initial local feature recognition models according to the mean of the multiple obtained losses, and obtain multiple target local feature recognition models, where the target local feature recognition models are applied in the methods for verifying images of the embodiments described in FIG. 2, FIG. 3, or FIG. 4.
  • the multiple initial local feature recognition models can be trained according to the mean of the multiple obtained losses.
  • the following loss function can be used as the loss function for training multiple initial local feature recognition models:
  • i represents the identifier of an initial local feature recognition model, and likewise the identifier of the corresponding target local feature recognition model and of the local image of each region in the sample image, where 1 ≤ i ≤ N and N is the total number of initial local feature recognition models.
  • L represents the loss between the local feature F_i extracted from the local image x_i by the initial local feature recognition model i and the label y_i of the local image x_i in the sample data.
  • the model parameter W_i represents the parameters of each network layer in the i-th local feature recognition model; models with different W have different feature extraction capabilities for the input image.
  • the angle θ represents the angle formed between the feature F_i extracted from the input image by the model and W_i; the smaller the angle, the closer the two are, and the greater the probability that the image should be recognized as the i-th label.
  • m is the preset angular margin; setting m forces a larger angle to be overcome during training, which constrains the model parameters.
  • e is the base of the natural logarithm.
  • s is the scaling factor
  • j is a counting index whose value range is the same as that of i.
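The loss formula itself appears only as an image in the original publication and is absent from this text. Given the symbol definitions above (angle θ_i between F_i and W_i, angular margin m, scale s, and index j ranging as i does), the loss appears consistent with an additive-angular-margin softmax loss, which under that assumption can be written as:

```latex
L = -\frac{1}{N} \sum_{i=1}^{N}
    \log \frac{e^{s\cos(\theta_i + m)}}
              {e^{s\cos(\theta_i + m)} + \sum_{j=1,\ j \neq i}^{N} e^{s\cos\theta_j}}
```

Minimizing L pulls each local feature F_i toward its own class vector W_i (small θ_i) while the margin m enforces separation between classes, consistent with the stated training goal of minimizing the loss function.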
  • the training goal is to minimize the loss function, and the model parameters W are gradually optimized through iterative training operations.
  • the method for training a model acquires at least one piece of sample data, where the sample data includes a sample image and the labels of the local images of each local area in the sample image; acquires each initial local feature recognition model used to identify the features of each local area; for each local image among the local images of each region, inputs the local image into the initial local feature recognition model used to identify the features of the local region to which the local image belongs, and obtains the local features output by that initial local feature recognition model; acquires the losses between the labels of the local images and the labels represented by the local features; and, according to the mean of the obtained losses, trains the multiple initial local feature recognition models to obtain multiple target local feature recognition models, which can determine whether an input image is an adversarial example by comparing the similarity of different local regions.
  • compared with the original image, the pixel values of an adversarial sample have been changed, and the perturbation added to an adversarial sample usually does not act on a single pixel; instead, there is a certain continuity between pixels at different positions, and the perturbation values at different positions depend on one another.
  • Using the trained target local feature recognition model can destroy the continuity and dependence of the adversarial disturbance, identify the adversarial sample image, and make the adversarial attack invalid.
  • in addition, this method avoids the overfitting that can follow adversarial training, which may harm the model's recognition performance on clean samples.
  • this method also avoids indiscriminately preprocessing clean samples, which would otherwise degrade the model's recognition performance on clean samples.
  • as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a device for verifying an image, which corresponds to the method embodiments shown in FIG. 2 and FIG. 3; the device can be specifically applied to various electronic devices.
  • the device for verifying an image in this embodiment includes: a first identification unit 801 , a second identification unit 802 , a matching unit 803 , and a verification unit 804 .
  • the first recognition unit is configured to obtain the image to be verified, and to use multiple target local feature recognition models to respectively identify the first local features of multiple local regions of the image to be verified;
  • the second recognition unit is configured to obtain the reference image, and to use the multiple target local feature recognition models to respectively identify the second local features of multiple local regions of the reference image;
  • the matching unit is configured to, for each target local feature recognition model among the multiple target local feature recognition models, acquire the feature similarity between the first local feature identified by that target local feature recognition model and the second local feature identified by the same model;
  • the verification unit is configured to determine, according to the multiple acquired feature similarities, whether the image to be verified passes verification.
  • the first recognition unit includes: a first division module, configured to obtain a face image to be verified and divide the face image to be verified into different face regions; and a first recognition module, configured to, for each face region in the face image to be verified, use the target local feature recognition model for identifying the features of that face region to identify the first local feature of the face region. The second recognition unit includes: a second division module, configured to obtain a reference face image and divide the reference face image into different face regions; and a second recognition module, configured to, for each face region in the reference face image, use the target local feature recognition model for identifying the features of that face region to identify the second local feature of the face region.
  • the verification unit includes: a first verification module configured to determine that the image to be verified passes the verification in response to determining that among the plurality of feature similarities, at least one feature similarity satisfies a first similarity threshold.
  • the verification unit includes: a second verification module configured to determine that the image to be verified passes the verification in response to determining that each of the plurality of feature similarities satisfies a second similarity threshold.
  • Each unit in the above apparatus 800 corresponds to the steps in the method described with reference to FIG. 2 and FIG. 3 . Therefore, the operations, features and achievable technical effects described above for the method for verifying an image are also applicable to the device 800 and the units contained therein, and will not be repeated here.
  • as an implementation of the method shown in FIG. 4, the present disclosure provides an embodiment of a device for verifying an image; this device embodiment corresponds to the method embodiment shown in FIG. 4, and the device can be specifically applied to various electronic devices.
  • the device for verifying an image in this embodiment includes: a third identification unit 901 , a fourth identification unit 902 , a first verification unit 9031 or a second verification unit 9032 .
  • the third recognition unit is configured to obtain the image to be verified, and use the trained feature recognition model to identify the first global feature of the image to be verified;
  • the fourth recognition unit is configured to obtain the reference image, and use the trained feature recognition model to identify the second global feature of the reference image;
  • the first verification unit is configured to, in response to determining that the similarity between the first global feature and the second global feature satisfies a third similarity threshold, verify the image to be verified using the method of the embodiment described in FIG. 2 or FIG. 3;
  • alternatively, the second verification unit is configured to, in response to determining that the similarity between the first global feature and the second global feature does not satisfy the third similarity threshold, determine that the image to be verified has failed verification.
  • Each unit in the above apparatus 900 corresponds to the steps in the method described with reference to FIG. 4 . Therefore, the operations, features and achievable technical effects described above for the method for verifying an image are also applicable to the device 900 and the units contained therein, and will not be repeated here.
  • as an implementation of the method shown in FIG. 7, the present disclosure provides an embodiment of a device for training a model, which corresponds to the method embodiment shown in FIG. 7; the device can be specifically applied to various electronic devices.
  • the apparatus for training a model in this embodiment includes: a first acquisition unit 1001 , a second acquisition unit 1002 , a prediction unit 1003 , a calculation unit 1004 , and a training unit 1005 .
  • the first acquiring unit is configured to acquire at least one piece of sample data, where the sample data includes a sample image and the labels of the local images of each local area in the sample image;
  • the second acquiring unit is configured to acquire each initial local feature recognition model used to identify the features of each local area;
  • the prediction unit is configured to, for each partial image among the partial images of each region, input the partial image into the initial local feature recognition model for identifying the features of the local region to which the partial image belongs, and obtain the local features output by that initial local feature recognition model;
  • the calculation unit is configured to acquire the loss between the label of the local image and the label represented by the local feature;
  • the training unit is configured to train the multiple initial local feature recognition models according to the mean of the multiple obtained losses, and to obtain multiple target local feature recognition models, where the target local feature recognition models are applied in the methods for verifying images described above.
  • Each unit in the above device 1000 corresponds to the steps in the method described with reference to FIG. 7 . Therefore, the operations, features and achievable technical effects described above for the method for training the model are also applicable to the device 1000 and the units contained therein, and will not be repeated here.
  • the present disclosure also provides an electronic device and a readable storage medium.
  • FIG. 11 it is a block diagram of an electronic device 1100 according to a method for verifying an image according to an embodiment of the present disclosure.
  • the electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • electronic devices may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the electronic device includes: one or more processors 1101 , a memory 1102 , and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces.
  • the various components are interconnected using different buses and can be mounted on a common motherboard or otherwise as desired.
  • the processor may process instructions executed within the electronic device, including instructions stored in or on the memory, to display graphical information of a GUI on an external input/output device such as a display device coupled to an interface.
  • multiple processors and/or multiple buses may be used together with multiple memories, if desired.
  • multiple electronic devices may be connected, with each device providing some of the necessary operations (eg, as a server array, a set of blade servers, or a multi-processor system).
  • a processor 1101 is taken as an example in FIG. 11 .
  • the memory 1102 is a non-transitory computer-readable storage medium provided in the present disclosure.
  • the memory stores instructions executable by at least one processor, so that the at least one processor executes the method for verifying an image provided in the present disclosure.
  • the non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to execute the method for verifying an image provided by the present disclosure.
  • the memory 1102 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as program instructions/modules corresponding to the method for verifying images in the embodiments of the present disclosure (for example, the first identification unit 801, the second identification unit 802, the matching unit 803, and the verification unit 804 shown in FIG. 8).
  • the processor 1101 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 1102, that is, implements the methods for verifying images in the above method embodiments.
  • the memory 1102 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like.
  • the memory 1102 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 1102 may optionally include memories that are remotely located relative to the processor 1101, and these remote memories may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the electronic device used in the method for verifying an image may further include: an input device 1103 , an output device 1104 and a bus 1105 .
  • the processor 1101, the memory 1102, the input device 1103, and the output device 1104 may be connected through a bus 1105 or in other ways, and the connection through the bus 1105 is taken as an example in FIG. 11 .
  • the input device 1103 can receive input numbers or character information and generate key signal inputs related to user settings and function control of the electronic device; it may be, for example, an input device such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, or a joystick.
  • the output device 1104 may include a display device, an auxiliary lighting device (eg, LED), a tactile feedback device (eg, a vibration motor), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
  • Various implementations of the systems and techniques described herein can be realized in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (for example, magnetic disks, optical disks, memories, or programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals.
  • the term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • to provide interaction with a user, the systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user can be received in any form (including acoustic, speech, or tactile input).
  • the systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with embodiments of the systems and techniques described herein), or a computing system including any combination of such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, eg, a communication network. Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN) and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • steps may be reordered, added or deleted using the various forms of flow shown above.
  • each step described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved, no limitation is imposed herein.

Abstract

A method and device for verifying an image, relating to the field of computer technology. The method includes: acquiring an image to be verified, and using multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified (201); acquiring a reference image, and using the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image (202); for each target local feature recognition model among the multiple target local feature recognition models, acquiring the feature similarity between the first local feature identified by that target local feature recognition model and the second local feature identified by the same model (203); and determining, according to the multiple acquired feature similarities, whether the image to be verified passes verification (204).

Description

Method and device for verifying an image
This application claims priority to Chinese Patent Application No. 202111047246.0, filed on September 6, 2021 and entitled "Method and Device for Verifying an Image", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a method and device for verifying an image.
Background
With the rapid development of artificial intelligence technology, deep learning networks are widely used in fields such as image processing (e.g., image recognition and image conversion). When performing image processing tasks based on a deep learning network, adversarial defense methods are usually adopted to prevent attacks on the deep learning network by adversarial examples. Existing adversarial defense methods include: performing adversarial example detection on the samples input to the network, adversarially training the deep learning network, or preprocessing the sample data.
Summary
The present disclosure provides a method, a device, an electronic device, and a computer-readable storage medium for verifying an image.
According to a first aspect of the present disclosure, a method for verifying an image is provided, including: acquiring an image to be verified, and using multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified; acquiring a reference image, and using the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image; for each target local feature recognition model among the multiple target local feature recognition models, acquiring the feature similarity between the first local feature identified by that target local feature recognition model and the second local feature identified by the same model; and determining, according to the multiple acquired feature similarities, whether the image to be verified passes verification.
In some embodiments, acquiring an image to be verified and using multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified includes: acquiring a face image to be verified, and dividing the face image to be verified into different face regions; and, for each face region in the face image to be verified, using the target local feature recognition model for identifying the features of that face region to identify the first local feature of the face region. Acquiring a reference image and using the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image includes: acquiring a reference face image, and dividing the reference face image into different face regions; and, for each face region in the reference face image, using the target local feature recognition model for identifying the features of that face region to identify the second local feature of the face region.
In some embodiments, determining whether the image to be verified passes verification according to the multiple acquired feature similarities includes: in response to determining that at least one of the multiple feature similarities satisfies a first similarity threshold, determining that the image to be verified passes verification.
In some embodiments, determining whether the image to be verified passes verification according to the multiple acquired feature similarities includes: in response to determining that each of the multiple feature similarities satisfies a second similarity threshold, determining that the image to be verified passes verification.
According to a second aspect of the present disclosure, a method for verifying an image is provided, including: acquiring an image to be verified, and using a trained feature recognition model to identify a first global feature of the image to be verified; acquiring a reference image, and using the trained feature recognition model to identify a second global feature of the reference image; in response to determining that the similarity between the first global feature and the second global feature satisfies a third similarity threshold, verifying the image to be verified using the method of the first aspect; or, in response to determining that the similarity between the first global feature and the second global feature does not satisfy the third similarity threshold, determining that the image to be verified has failed verification.
According to a third aspect of the present disclosure, a method for training a model is provided, including: acquiring at least one piece of sample data, the sample data including a sample image and labels of the local images of each local region in the sample image; acquiring each initial local feature recognition model used to identify the features of each local region; for each local image among the local images of the local regions, inputting the local image into the initial local feature recognition model for identifying the features of the local region to which the local image belongs, and obtaining the local features output by that initial local feature recognition model; acquiring the multiple losses between the labels of the local images of the local regions and the labels represented by the local features of those local images; and training the multiple initial local feature recognition models according to the mean of the multiple acquired losses to obtain multiple target local feature recognition models, where the target local feature recognition models are applied in the method of the first aspect or the second aspect.
According to a fourth aspect of the present disclosure, a device for verifying an image is provided, including: a first recognition unit, configured to acquire an image to be verified and use multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified; a second recognition unit, configured to acquire a reference image and use the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image; a matching unit, configured to, for each target local feature recognition model among the multiple target local feature recognition models, acquire the feature similarity between the first local feature identified by that target local feature recognition model and the second local feature identified by the same model; and a verification unit, configured to determine, according to the multiple acquired feature similarities, whether the image to be verified passes verification.
In some embodiments, the first recognition unit includes: a first division module, configured to acquire a face image to be verified and divide the face image to be verified into different face regions; and a first recognition module, configured to, for each face region in the face image to be verified, use the target local feature recognition model for identifying the features of that face region to identify the first local feature of the face region. The second recognition unit includes: a second division module, configured to acquire a reference face image and divide the reference face image into different face regions; and a second recognition module, configured to, for each face region in the reference face image, use the target local feature recognition model for identifying the features of that face region to identify the second local feature of the face region.
In some embodiments, the verification unit includes: a first verification module, configured to determine, in response to determining that at least one of the multiple feature similarities satisfies a first similarity threshold, that the image to be verified passes verification.
In some embodiments, the verification unit includes: a second verification module, configured to determine, in response to determining that each of the multiple feature similarities satisfies a second similarity threshold, that the image to be verified passes verification.
According to a fifth aspect of the present disclosure, a device for verifying an image is provided, including: a third recognition unit, configured to acquire an image to be verified and use a trained feature recognition model to identify a first global feature of the image to be verified; a fourth recognition unit, configured to acquire a reference image and use the trained feature recognition model to identify a second global feature of the reference image; a first verification unit, configured to verify the image to be verified using the method of the first aspect in response to determining that the similarity between the first global feature and the second global feature satisfies a third similarity threshold; or, a second verification unit, configured to determine that the image to be verified has failed verification in response to determining that the similarity between the first global feature and the second global feature does not satisfy the third similarity threshold.
According to a sixth aspect of the present disclosure, a device for training a model is provided, including: a first acquiring unit, configured to acquire at least one piece of sample data, the sample data including a sample image and labels of the local images of each local region in the sample image; a second acquiring unit, configured to acquire each initial local feature recognition model used to identify the features of each local region; a prediction unit, configured to, for each local image among the local images of the regions, input the local image into the initial local feature recognition model for identifying the features of the local region to which the local image belongs, and obtain the local features output by that initial local feature recognition model; a calculation unit, configured to acquire the multiple losses between the labels of the local images of the local regions and the labels represented by the local features of those regions; and a training unit, configured to train the multiple initial local feature recognition models according to the mean of the multiple acquired losses and obtain multiple target local feature recognition models, where the target local feature recognition models are applied in the method of the first aspect or the second aspect.
According to a seventh aspect of the present disclosure, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for verifying an image provided in the first aspect, the method for verifying an image provided in the second aspect, or the method for training a model provided in the third aspect.
According to an eighth aspect of the present disclosure, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method for verifying an image provided in the first aspect, the method for verifying an image provided in the second aspect, or the method for training a model provided in the third aspect.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand from the following description.
Brief Description of the Drawings
The accompanying drawings are used for a better understanding of the solution and do not constitute a limitation of the present application, in which:
FIG. 1 is an exemplary system architecture diagram to which embodiments of the present disclosure may be applied;
FIG. 2 is a flowchart of an embodiment of a method for verifying an image according to the present disclosure;
FIG. 3 is a flowchart of another embodiment of a method for verifying an image according to the present disclosure;
FIG. 4 is a flowchart of an embodiment of a method for verifying an image according to the present disclosure;
FIG. 5 is a flowchart of global feature recognition in an application scenario of a method for verifying an image according to the present disclosure;
FIG. 6 is a flowchart of local feature recognition in an application scenario of a method for verifying an image according to the present disclosure;
FIG. 7 is a flowchart of an embodiment of a method for training a model according to the present disclosure;
FIG. 8 is a schematic structural diagram of an embodiment of a device for verifying an image according to the present disclosure;
FIG. 9 is a schematic structural diagram of an embodiment of a device for verifying an image according to the present disclosure;
FIG. 10 is a schematic structural diagram of an embodiment of a device for training a model according to the present disclosure;
FIG. 11 is a block diagram of an electronic device used to implement the method for verifying an image of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to aid understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
It should be noted that the personal information data involved in the embodiments of the present disclosure has been voluntarily authorized by users, and the acquisition, storage, processing, and transmission of personal information all comply with the requirements of relevant laws and regulations.
In the related art, adversarial defense methods based on adversarial example detection suffer from poor defense capability, while adversarial defense methods based on adversarial training or sample data preprocessing cause the deep learning network's detection performance on noise-free samples to deteriorate.
The method and device for verifying an image provided by the present disclosure include: acquiring an image to be verified, and using multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified; acquiring a reference image, and using the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image; for each target local feature recognition model among the multiple target local feature recognition models, acquiring the feature similarity between the first local feature identified by that target local feature recognition model and the second local feature identified by the same model; and determining, according to the multiple acquired feature similarities, whether the image to be verified passes verification. This can improve the system's defense performance against adversarial samples. In addition, since the method verifies images based on the feature similarity of local regions, it does not require preprocessing of the sample data, which avoids the problem of poor detection performance on noise-free samples caused by preprocessing clean samples (noise-free samples) together with adversarial samples (noise-added samples). Moreover, since the method does not verify images with a network obtained through adversarial training, it avoids the poor performance of such networks when detecting noise-free samples.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the method for verifying an image or the device for verifying an image of the present disclosure may be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber optic cables.
Users may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. The terminal devices 101, 102, 103 may be user terminal devices on which various client applications may be installed, such as image recognition applications, video recognition applications, playback applications, search applications, and financial applications.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support receiving server messages, including but not limited to smartphones, tablet computers, e-book readers, electronic players, laptop portable computers, and desktop computers.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices; when they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, multiple software modules for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 105 may acquire an image to be verified through the terminal devices 101, 102, 103 and use multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified; acquire a reference image and use the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image; for each local feature recognition model among the multiple target local feature recognition models, acquire the feature similarity between the first local feature determined by that model and the second local feature determined by the same model; and then determine, according to the multiple acquired feature similarities, whether the image to be verified passes verification.
It should be noted that the business processing method provided by the embodiments of the present disclosure may be executed by the terminal devices 101, 102, 103 or by the server 105; correspondingly, the business processing device may be provided in the terminal devices 101, 102, 103 or in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative; there may be any number of terminal devices, networks, and servers according to implementation needs.
Continuing to refer to FIG. 2, a flow 200 of an embodiment of a method for verifying an image according to the present disclosure is shown, including the following steps:
Step 201: acquire an image to be verified, and use multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified.
In this embodiment, the execution subject of the method for verifying an image (for example, the server 105 shown in FIG. 1) can acquire an image to be verified and use multiple target local feature recognition models to respectively identify the first local features of multiple local regions of the image to be verified. That is, multiple target local feature recognition models are acquired, and each of them is used to recognize a different local region of the image to be verified, so as to obtain the first local feature of each local region. The image to be verified may be a face image, or an image containing any target object (such as an animal, a plant, scenery, a drawing, or various articles); a target local feature recognition model may be a trained deep learning model, linear regression model, or the like obtained from the Internet, local storage, or cloud storage.
Step 202: acquire a reference image, and use the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image.
In this embodiment, a reference image can be acquired, and the multiple target local feature recognition models are used to respectively identify the second local features of multiple local regions of the reference image. That is, each of the multiple target local feature recognition models is used to recognize a different local region of the reference image, so as to obtain the second local feature of each local region. The reference image is the benchmark against which the image to be verified is compared; for example, if the image to be verified is a face image used to verify whether the current user has permission to log in to or operate a certain account, the reference image may be the registered face image used when that account was registered.
Step 203: for each target local feature recognition model among the multiple target local feature recognition models, acquire the feature similarity between the first local feature determined by that target local feature recognition model and the second local feature determined by the same model.
In this embodiment, for each target local feature recognition model among the multiple target local feature recognition models, the first local feature and the second local feature determined by that model can be acquired, and the feature similarity between them can be calculated. It can be understood that multiple pairs of first and second local features can be obtained based on the multiple target local feature recognition models, and multiple feature similarities can be calculated accordingly.
Step 204: determine, according to the multiple acquired feature similarities, whether the image to be verified passes verification.
In this embodiment, whether the image to be verified passes verification can be determined according to the multiple acquired feature similarities. Specifically, this can be judged based on whether the average of all the feature similarities exceeds a preset threshold, whether the median of all the feature similarities exceeds a preset threshold, or based on other statistics of all the feature similarities.
The method for verifying an image provided in this embodiment acquires an image to be verified and uses multiple target local feature recognition models to respectively identify first local features of multiple local regions of the image to be verified; acquires a reference image and uses the multiple target local feature recognition models to respectively identify second local features of multiple local regions of the reference image; for each target local feature recognition model, acquires the feature similarity between the first local feature and the second local feature identified by that model; and determines, according to the multiple acquired feature similarities, whether the image to be verified passes verification. This can improve the system's defense performance against adversarial samples. In addition, since the method verifies images based on the feature similarity of local regions, it does not require preprocessing of the sample data, avoiding the problem of poor detection performance on noise-free samples caused by preprocessing clean samples (noise-free samples) together with adversarial samples (noise-added samples). Moreover, since the method does not verify images with a network obtained through adversarial training, it avoids the poor performance of adversarially trained networks when detecting noise-free samples.
继续参考图3,示出了根据本公开的用于验证图像的方法的另一个实施例的流程300,包括以下步骤:
步骤301,获取待验证人脸图像,并将待验证人脸图像划分为不同的人脸区域。
在本实施例中,用于验证图像的方法的执行主体(例如图1所示的服务器105)可以获取待验证人脸图像,并将该待验证人脸图像划分为不同的人脸区域,如,额头区域、眼部区域、脸部两颊区域、嘴部区域等等。可以采用预先训练好的面部区域划分模型划分人脸区域,也可以基于预置的人脸各个区域的比例数据或者区域位置信息划分人脸区域。
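基于预置比例数据划分人脸区域的方式,可以示意为按高度比例给出各区域的裁剪框。以下为假设性的Python示意,其中的区域名称与比例数值均为示例,并非本公开限定的具体划分方式:

```python
def split_face_regions(height, width):
    """按预置的高度比例将人脸图像划分为若干水平区域,
    返回 {区域名: (y0, y1, x0, x1)} 形式的裁剪框(比例仅为示意)。"""
    ratios = {
        "forehead": (0.00, 0.25),   # 额头区域
        "eyes":     (0.25, 0.50),   # 眼部区域
        "cheeks":   (0.50, 0.75),   # 脸部两颊区域
        "mouth":    (0.75, 1.00),   # 嘴部区域
    }
    return {name: (int(height * top), int(height * bottom), 0, width)
            for name, (top, bottom) in ratios.items()}

boxes = split_face_regions(200, 160)   # 对一张200×160的人脸图像划分区域
```

得到裁剪框后,即可从原图中截取各局部区域图像,分别送入对应的目标局部特征识别模型。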
步骤302,针对待验证人脸图像中的每一个人脸区域,采用用于识别该人脸区域的特征的目标局部特征识别模型识别该人脸区域的第一局部特征。
在本实施例中,针对待验证人脸图像中的每一个人脸区域,采用用于识别该人脸区域的特征的目标局部特征识别模型识别该人脸区域的第一局部特征。例如,针对待验证人脸图像中的眼部区域,采用用于识别眼部区域的特征的目标局部特征识别模型识别该待验证人脸图像的眼部区域图像,并获得该待验证人脸图像的眼部区域的特征,基于待验证人脸图像获得的特征可以称之为第一局部特征。
步骤303,获取基准人脸图像,并将基准人脸图像划分为不同的人脸区域。
在本实施例中,可以获取基准人脸图像,并将该基准人脸图像划分为不同的人脸区域,如,额头区域、眼部区域、脸部两颊区域、嘴部区域等等。可以采用预先训练好的面部区域划分模型划分人脸区域,也可以基于预置的人脸各个区域的比例数据或者区域位置信息划分人脸区域。
步骤304,针对基准人脸图像中的每一个人脸区域,采用用于识别该人脸区域的特征的目标局部特征识别模型识别该人脸区域的第二局部特征。
在本实施例中,针对基准人脸图像中的每一个人脸区域,采用用于识别该人脸区域的特征的目标局部特征识别模型识别该人脸区域的第二局部特征。例如,针对基准人脸图像中的眼部区域,采用用于识别眼部区域的特征的目标局部特征识别模型识别该基准人脸图像的眼部区域图像,并获得该基准人脸图像的眼部区域的特征,基于基准人脸图像获得的特征可以称之为第二局部特征。
步骤305,针对多个目标局部特征识别模型中的每一个目标局部特征识别模型,获取采用目标局部特征识别模型识别出的第一局部特征、与采用目标局部特征识别模型识别出的第二局部特征之间的特征相似度。
步骤306,根据获取的多个特征相似度,确定待验证图像是否通过验证。
本实施例中对步骤305、步骤306的描述与步骤203、步骤204的描述一致,此处不再赘述。
本实施例提供的用于验证图像的方法,相比于图2描述的实施例,所验证的图像为人脸图像,在验证人脸图像时,可以将人脸图像基于人脸区域划分为不同的局部区域,基于待验证人脸图像的各个人脸区域与基准人脸图像的各个人脸区域之间特征的特征相似度,验证待验证人脸图像。
由于对抗样本与原始图片相比,像素值已被改变,且对抗样本中所添加的扰动往往不是作用于单个像素,而是在不同位置的像素之间存在一定的连续性,且不同位置的像素之间的扰动值存在依赖关系。利用目标局部特征识别模型可以破坏对抗扰动的这种连续性和依赖性,使对抗攻击失效,同时这种方法对人脸识别系统的真人通过率的影响较小,不影响网络在干净样本上的识别/验证性能。
在上述结合图2和图3描述的实施例的一些可选的实现方式中,根据获取的多个特征相似度,确定待验证图像是否通过验证,包括:响应于确定多个特征相似度中,存在至少一个特征相似度满足第一相似度阈值,确定待验证图像通过验证。
在本实施例中,在根据获取的多个特征相似度确定待验证图像是否通过验证时,若确定多个特征相似度中存在任意一个特征相似度满足预设的第一相似度阈值,则确定待验证图像通过验证,以提高验证图像的效率。
在上述结合图2和图3描述的实施例的一些可选的实现方式中,根据获取的多个特征相似度,确定待验证图像是否通过验证,包括:响应于确定多个特征相似度中的每一个特征相似度均满足第二相似度阈值,确定待验证图像通过验证。
在本实施例中,在根据获取的多个特征相似度确定待验证图像是否通过验证时,若确定多个特征相似度中的每一个特征相似度均满足预设的第二相似度阈值,则确定待验证图像通过验证,以提高验证图像的准确性。
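上述两种可选实现(存在任意一个相似度达标即通过,或全部相似度达标才通过)可以示意如下,其中阈值与相似度数值均为假设:

```python
def verify_any(similarities, first_threshold):
    # 存在至少一个特征相似度满足第一相似度阈值即通过——偏重验证效率
    return any(s >= first_threshold for s in similarities)

def verify_all(similarities, second_threshold):
    # 每一个特征相似度均满足第二相似度阈值才通过——偏重验证准确性
    return all(s >= second_threshold for s in similarities)

sims = [0.91, 0.87, 0.95]   # 假设的3个局部区域的特征相似度
```

两种策略可以按场景取舍:对误拒更敏感的场景适合前者,对误通过更敏感的场景适合后者。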
继续参考图4,示出了根据本公开的用于验证图像的方法的一个实施例的流程400,包括以下步骤:
步骤401,获取待验证图像,并采用训练好的特征识别模型识别待验证图像的第一全局特征。
在本实施例中,用于验证图像的方法的执行主体(例如图1所示的服务器105)可以获取待验证图像,并采用训练好的特征识别模型识别待验证图像的第一全局特征。其中,该训练好的特征识别模型基于图像的全局/全部的图像区域,即未经区域划分的图像进行特征识别,以获得图像的全局特征。全局特征是相对于局部特征而言的特征。为便于区分,可以将基于待验证图像识别出的全局特征称为第一全局特征。
步骤402,获取基准图像,并采用训练好的特征识别模型识别基准图像的第二全局特征。
在本实施例中,可以获取基准图像,并采用训练好的特征识别模型识别基准图像的第二全局特征。为便于区分,可以将基于基准图像识别出的全局特征称为第二全局特征。
步骤4031,响应于确定第一全局特征与第二全局特征之间的相似度满足第三相似度阈值,采用图2或图3描述的实施例中的方法验证待验证图像。
在本实施例中,若确定第一全局特征与第二全局特征之间的相似度满足预设的第三相似度阈值,则可以进一步采用图2或图3描述的实施例中的方法,将待验证图像与基准图像进行区域划分,以基于区域划分后的各个区域的区域特征之间的相似度再次进行验证,提高验证待验证图像的准确性。
步骤4032,响应于确定第一全局特征与第二全局特征之间的相似度不满足第三相似度阈值,确定待验证图像未通过验证。
在本实施例中,若确定第一全局特征与第二全局特征之间的相似度不满足预设的第三相似度阈值,则可以确定待验证图像与基准图像不相似,确定待验证图像未通过验证。
本实施例提供的用于验证图像的方法,相比于图2或图3描述的实施例中的方法,先基于图像的全局特征比较待验证图像与基准图像的相似度,在确定待验证图像与基准图像的全局特征相似之后,才进一步基于图2或图3描述的实施例中的方法,基于图像的局部特征比较待验证图像与基准图像的相似度。这样,基于局部特征的相似度比较仅作用于已经通过全局特征验证的待验证图像,而非全部的待验证图像,可以在提高验证图像的准确性的同时,提高验证图像的效率,并避免大量局部特征的计算和存储操作浪费服务器资源的问题。
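图4所示的"先全局、后局部"的级联验证流程,可以示意如下(阈值、相似度数值以及局部验证采用"全部达标"策略均为假设):

```python
def cascade_verify(global_similarity, local_similarities,
                   global_threshold, local_threshold):
    """先比较全局特征相似度,不满足第三相似度阈值则直接判定未通过;
    满足时再基于各局部区域的特征相似度做进一步验证(此处示意为全部达标)。"""
    if global_similarity < global_threshold:
        return False    # 全局特征不相似,无需再做局部特征的计算与比较
    return all(s >= local_threshold for s in local_similarities)

result_reject = cascade_verify(0.40, [0.95, 0.93], 0.80, 0.90)  # 全局即被拒绝
result_pass = cascade_verify(0.92, [0.95, 0.93], 0.80, 0.90)    # 两级均通过
```

该写法体现了上文所述的资源节省:全局验证未通过的图像不会触发任何局部特征计算。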
在一些应用场景中,如图5所示,用于验证图像的方法可以应用于人脸识别系统,该人脸识别系统可以获取用户输入的待验证人脸图像(待验证图像),并采用全局特征识别模型提取该待验证人脸图像的全局特征。
人脸识别系统基于本地/云端存储获取用户已注册的注册人脸图像(基准图像),获取并采用全局特征识别模型提取该注册人脸图像的全局特征。
将待验证人脸图像的全局特征与注册人脸图像的全局特征进行比较,若确定二者不相似,则确定待验证人脸图像未通过验证;若确定二者相似,则采用图6所示的方法进一步对待验证人脸图像进行验证。
在图6所示的用于验证人脸图像的方法中,将待验证人脸图像划分为不同的人脸区域,针对每一个人脸区域,采用用于识别该人脸区域的特征的目标局部特征识别模型识别该人脸区域的第一局部特征。
将注册人脸图像划分为不同的人脸区域,针对每一个人脸区域,采用用于识别该人脸区域的特征的目标局部特征识别模型识别该人脸区域的第二局部特征。(图6中的局部模型1至局部模型n即为用于识别各个局部区域的特征的n个目标局部特征识别模型)
将基于同一个目标局部特征识别模型所识别出的第一局部特征与第二局部特征进行相似度比较,并获得相似度序列[S1,S2,…,Sn],序列中的每一个相似度Si(1≤i≤n)代表:目标局部特征识别模型从待验证人脸图像中某个人脸区域提取的局部特征、与该目标局部特征识别模型从注册人脸图像中同一人脸区域提取的局部特征之间的相似度比较结果。
最后,根据全部的比较结果,确定待验证图像是否通过验证。具体地,若相似度序列[S1,S2,…,Sn]中存在任意一个相似度Si的取值满足相似度阈值,则人脸识别系统可以确定待验证人脸图像通过验证,或者确定待验证人脸图像并非用于攻击人脸识别系统的对抗样本;或者,若相似度序列[S1,S2,…,Sn]中全部相似度Si的取值均满足相似度阈值,则人脸识别系统确定待验证人脸图像通过验证,或者确定待验证人脸图像并非用于攻击人脸识别系统的对抗样本。
继续参考图7,示出了根据本公开的用于训练模型的方法的一个实施例的流程700,包括以下步骤:
步骤701,获取至少一条样本数据,样本数据包括样本图像、以及该样本图像中各个局部区域的局部图像的标签。
在本实施例中,用于训练模型的方法的执行主体(例如图1所示的服务器105)可以通过终端设备、云存储或者本地存储等方式获取至少一条样本数据,一条样本数据中可以包括一张样本图像、以及该样本图像中各个局部区域的局部图像的标签。例如,一条样本数据可以包括一张人脸图像,该人脸图像中额头区域的额头图像的尺寸、像素特征等参数,该人脸图像中眼部区域的眼部图像的尺寸,瞳孔间距等参数,该人脸图像中嘴部区域的嘴部图像的尺寸等参数。
步骤702,获取用于识别各个局部区域的特征的各个初始局部特征识别模型。
在本实施例中,可以获取用于识别样本图像中各个局部区域的特征的各个初始局部特征识别模型,不同的初始局部特征识别模型用于识别样本图像中不同局部区域的特征。各个初始局部特征识别模型可以是任意类型的深度学习模型。
步骤703,针对各个局部区域的局部图像中的每一个局部图像,将该局部图像输入用于识别该局部图像所属局部区域的特征的初始局部特征识别模型,并获得该初始局部特征识别模型输出的局部特征。
在本实施例中,针对各个局部区域的局部图像中的每一个局部图像,将该局部图像输入用于识别该局部图像所属局部区域的特征的初始局部特征识别模型中,以获得该初始局部特征识别模型输出的局部特征。例如,针对样本人脸图像中的眼部图像、嘴部图像、脸颊图像,将眼部图像输入用于识别眼部区域的特征的初始局部特征识别模型A中,以获得初始局部特征识别模型A输出的眼部特征,将嘴部图像输入用于识别嘴部区域的特征的初始局部特征识别模型B中,以获得初始局部特征识别模型B输出的嘴部特征,将脸颊图像输入用于识别脸颊区域的特征的初始局部特征识别模型C中,以获得初始局部特征识别模型C输出的脸颊特征。
步骤704,获取局部图像的标签与局部特征所表征的标签之间的损失。
在本实施例中,可以获取局部图像的标签与局部特征所表征的标签之间的损失。例如,针对眼部图像,获取样本数据中眼部图像的标签、与步骤703中初始局部特征识别模型A所识别出的眼部特征所表征的标签之间的损失,其中,眼部特征所表征的标签可以是描述该眼部特征的尺寸参数(如瞳孔间距)、或者描述眼部形状的信息(如杏仁眼)。
步骤705,根据获取到的多个损失的均值,训练多个初始局部特征识别模型,并获得多个目标局部特征识别模型,其中,目标局部特征识别模型应用于图2、图3或图4描述的实施例中的用于验证图像的方法。
在本实施例中,由于样本图像中包含属于各个局部区域的局部图像,在采用对应的初始局部特征识别模型分别识别出每一个局部图像的局部特征、并根据每一个局部图像的标签以及每一个局部特征所表征的标签计算损失后,可以得到多个损失。
可以根据获取到的多个损失的均值,训练多个初始局部特征识别模型。例如,可以采用如下损失函数作为训练多个初始局部特征识别模型的损失函数:
L = -log( e^(s·cos(θ_i+m)) / ( e^(s·cos(θ_i+m)) + Σ_{j≠i} e^(s·cos θ_j) ) )
其中,i代表初始局部特征识别模型的标识,也代表目标局部特征识别模型的标识,以及样本图像中各个区域的局部图像的标识(1≤i≤N),N代表初始局部特征识别模型的总数量。
L代表局部图像x_i在经过初始局部特征识别模型i进行特征提取后所提取出的局部特征F_i与样本数据中局部图像x_i的标签y_i之间的损失。
θ_i是模型参数W_i与局部特征F_i所形成的角度,模型参数W代表局部特征识别模型中各层网络的参数,拥有不同W的模型对输入图像的特征提取能力不同;W与F之间的角度代表输入图像经过模型提取得到的特征F_i与W_i所形成的角度,角度越小则表示二者越接近,也表示该图像应该被识别为第i个标签的概率越大。
m是预设的角度余量,设置m可以使θ_i+m相比θ_i拥有更大的角度,可以对模型参数起到约束作用。
e是自然对数的底数;s是缩放因子;j代表计数标识,与i的取值范围相同。
在采用上述损失函数训练各个初始局部特征识别模型时,以最小化损失函数为训练目标,通过迭代的训练操作逐步优化模型参数W。
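上述带角度余量的损失函数,形式上与ArcFace一类的加性角度余量损失一致。下面是依据文中符号说明给出的假设性Python实现(以夹角为输入的简化形式,并非本公开限定的具体实现):

```python
import math

def margin_softmax_loss(theta_true, theta_others, s=30.0, m=0.5):
    """对单个局部特征识别模型计算带角度余量的损失。
    theta_true:  特征F_i与正确标签对应的模型参数W_i之间的夹角(弧度);
    theta_others: 特征与其余各标签对应参数之间的夹角列表;
    s为缩放因子,m为预设的角度余量(此处数值均为假设)。"""
    numerator = math.exp(s * math.cos(theta_true + m))
    denominator = numerator + sum(math.exp(s * math.cos(t))
                                  for t in theta_others)
    return -math.log(numerator / denominator)

def mean_loss(losses):
    # 多个初始局部特征识别模型的损失取均值作为总的训练目标
    return sum(losses) / len(losses)

loss_close = margin_softmax_loss(0.1, [1.0, 1.2])   # 与正确参数夹角小,损失小
loss_far = margin_softmax_loss(0.8, [1.0, 1.2])     # 与正确参数夹角大,损失大
```

可以看到,夹角越小损失越小,训练以最小化该均值损失为目标逐步优化各模型参数W。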
本实施例提供的用于训练模型的方法,获取至少一条样本数据,样本数据包括样本图像、以及样本图像中各个局部区域的局部图像的标签;获取用于识别各个局部区域的特征的各个初始局部特征识别模型;针对各个区域的局部图像中的每一个局部图像,将局部图像输入用于识别局部图像所属局部区域的特征的初始局部特征识别模型,并获得该初始局部特征识别模型输出的局部特征;获取局部图像的标签与局部特征所表征的标签之间的损失;根据获取到的多个损失的均值,训练多个初始局部特征识别模型,并获得多个目标局部特征识别模型,可以通过对比样本图像中不同局部区域的相似性判断输入图片是否为对抗样本。
由于对抗样本与原始图片相比,像素值已被改变,且对抗样本中所添加的扰动往往不是作用于单个像素,而是在不同位置的像素之间存在一定的连续性,且不同位置之间的扰动值存在依赖关系。利用训练完成的目标局部特征识别模型可以破坏对抗扰动的这种连续性和依赖性,识别出对抗样本图像,使对抗攻击失效。同时,相比于对模型进行对抗训练以对对抗样本进行防御的方法,该方法可以避免对抗训练后的模型过拟合、影响对干净样本的识别性能的问题。相比于对输入数据进行预处理(如图像压缩)后再输入模型进行样本图像检测以对对抗样本进行防御的方法,该方法可以避免无差别地对干净样本也进行预处理、影响模型对干净样本的识别性能的问题。
进一步参考图8,作为对上述各图所示方法的实现,本公开提供了一种用于验证图像的装置的一个实施例,该装置实施例与图2和图3所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图8所示,本实施例的用于验证图像的装置,包括:第一识别单元801、第二识别单元802、匹配单元803、验证单元804。其中,第一识别单元,被配置为获取待验证图像,并采用多个目标局部特征识别模型分别识别待验证图像的多个局部区域的第一局部特征;第二识别单元,被配置为获取基准图像,并采用多个目标局部特征识别模型分别识别基准图像的多个局部区域的第二局部特征;匹配单元,被配置为针对多个目标局部特征识别模型中的每一个目标局部特征识别模型,获取采用目标局部特征识别模型识别出的第一局部特征、与采用目标局部特征识别模型识别出的第二局部特征之间的特征相似度;验证单元,被配置为根据获取的多个特征相似度,确定待验证图像是否通过验证。
在一些实施例中,第一识别单元,包括:第一划分模块,被配置为获取待验证人脸图像,并将待验证人脸图像划分为不同的人脸区域;第一识别模块,被配置为针对待验证人脸图像中的每一个人脸区域,采用用于识别人脸区域的特征的目标局部特征识别模型识别该人脸区域的第一局部特征;第二识别单元,包括:第二划分模块,被配置为获取基准人脸图像,并将基准人脸图像划分为不同的人脸区域;第二识别模块,被配置为针对基准人脸图像中的每一个人脸区域,采用用于识别人脸区域的特征的目标局部特征识别模型识别该人脸区域的第二局部特征。
在一些实施例中,验证单元,包括:第一验证模块,被配置为响应于确定多个特征相似度中,存在至少一个特征相似度满足第一相似度阈值,确定待验证图像通过验证。
在一些实施例中,验证单元,包括:第二验证模块,被配置为响应于确定多个特征相似度中的每一个特征相似度均满足第二相似度阈值,确定待验证图像通过验证。
上述装置800中的各单元与参考图2和图3描述的方法中的步骤相对应。由此上文针对用于验证图像的方法描述的操作、特征及所能达到的技术效果同样适用于装置800及其中包含的单元,在此不再赘述。
进一步参考图9,作为对上述各图所示方法的实现,本公开提供了一种用于验证图像的装置的一个实施例,该装置实施例与图4所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图9所示,本实施例的用于验证图像的装置,包括:第三识别单元901、第四识别单元902、第一校验单元9031或者第二校验单元9032。其中,第三识别单元,被配置为获取待验证图像,并采用训练好的特征识别模型识别待验证图像的第一全局特征;第四识别单元,被配置为获取基准图像,并采用训练好的特征识别模型识别基准图像的第二全局特征;第一校验单元,被配置为响应于确定第一全局特征与第二全局特征之间的相似度满足第三相似度阈值,采用图2或图3描述的实施例中的方法验证待验证图像;或者,第二校验单元,被配置为响应于确定第一全局特征与第二全局特征之间的相似度不满足第三相似度阈值,确定待验证图像未通过验证。
上述装置900中的各单元与参考图4描述的方法中的步骤相对应。由此上文针对用于验证图像的方法描述的操作、特征及所能达到的技术效果同样适用于装置900及其中包含的单元,在此不再赘述。
进一步参考图10,作为对上述各图所示方法的实现,本公开提供了一种用于训练模型的装置的一个实施例,该装置实施例与图7所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图10所示,本实施例的用于训练模型的装置,包括:第一获取单元1001、第二获取单元1002、预测单元1003、计算单元1004、训练单元1005。其中,第一获取单元,被配置为获取至少一条样本数据,样本数据包括样本图像、以及样本图像中各个局部区域的局部图像的标签;第二获取单元,被配置为获取用于识别各个局部区域的特征的各个初始局部特征识别模型;预测单元,被配置为针对各个区域的局部图像中的每一个局部图像,将局部图像输入用于识别局部图像所属局部区域的特征的初始局部特征识别模型,并获得该初始局部特征识别模型输出的局部特征;计算单元,被配置为获取局部图像的标签与局部特征所表征的标签之间的损失;训练单元,被配置为根据获取到的多个损失的均值,训练多个初始局部特征识别模型,并获得多个目标局部特征识别模型,其中,目标局部特征识别模型应用于图2、图3或图4描述的实施例中的用于验证图像的方法。
上述装置1000中的各单元与参考图7描述的方法中的步骤相对应。由此上文针对用于训练模型的方法描述的操作、特征及所能达到的技术效果同样适用于装置1000及其中包含的单元,在此不再赘述。
根据本公开的实施例,本公开还提供了一种电子设备和一种可读存储介质。
如图11所示,是根据本公开实施例的用于验证图像的方法的电子设备1100的框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本公开的实现。
如图11所示,该电子设备包括:一个或多个处理器1101、存储器1102,以及用于连接各部件的接口,包括高速接口和低速接口。各个部件利用不同的总线互相连接,并且可以被安装在公共主板上或者根据需要以其它方式安装。处理器可以对在电子设备内执行的指令进行处理,包括存储在存储器中或者存储器上以在外部输入/输出装置(诸如,耦合至接口的显示设备)上显示GUI的图形信息的指令。在其它实施方式中,若需要,可以将多个处理器和/或多条总线与多个存储器一起使用。同样,可以连接多个电子设备,各个设备提供部分必要的操作(例如,作为服务器阵列、一组刀片式服务器、或者多处理器系统)。图11中以一个处理器1101为例。
存储器1102即为本公开所提供的非瞬时计算机可读存储介质。其中,该存储器存储有可由至少一个处理器执行的指令,以使该至少一个处理器执行本公开所提供的用于验证图像的方法。本公开的非瞬时计算机可读存储介质存储计算机指令,该计算机指令用于使计算机执行本公开所提供的用于验证图像的方法。
存储器1102作为一种非瞬时计算机可读存储介质,可用于存储非瞬时软件程序、非瞬时计算机可执行程序以及模块,如本公开实施例中的用于验证图像的方法对应的程序指令/模块(例如,附图8所示的第一识别单元801、第二识别单元802、匹配单元803、验证单元804)。处理器1101通过运行存储在存储器1102中的非瞬时软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现上述方法实施例中的用于验证图像的方法。
存储器1102可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据用于提取视频片段的电子设备的使用所创建的数据等。此外,存储器1102可以包括高速随机存取存储器,还可以包括非瞬时存储器,例如至少一个磁盘存储器件、闪存器件、或其他非瞬时固态存储器件。在一些实施例中,存储器1102可选包括相对于处理器1101远程设置的存储器,这些远程存储器可以通过网络连接至用于提取视频片段的电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
用于验证图像的方法的电子设备还可以包括:输入装置1103、输出装置1104以及总线1105。处理器1101、存储器1102、输入装置1103和输出装置1104可以通过总线1105或者其他方式连接,图11中以通过总线1105连接为例。
输入装置1103可接收输入的数字或字符信息,以及产生与用于提取视频片段的电子设备的用户设置以及功能控制有关的键信号输入,例如触摸屏、小键盘、鼠标、轨迹板、触摸板、指示杆、一个或者多个鼠标按钮、轨迹球、操纵杆等输入装置。输出装置1104可以包括显示设备、辅助照明装置(例如,LED)和触觉反馈装置(例如,振动电机)等。该显示设备可以包括但不限于,液晶显示器(LCD)、发光二极管(LED)显示器和等离子体显示器。在一些实施方式中,显示设备可以是触摸屏。
此处描述的系统和技术的各种实施方式可以在数字电子电路系统、集成电路系统、专用ASIC(专用集成电路)、计算机硬件、固件、软件和/或它们的组合中实现。这些各种实施方式可以包括:实施在一个或者多个计算机程序中,该一个或者多个计算机程序可在包括至少一个可编程处理器的可编程系统上执行和/或解释,该可编程处理器可以是专用或者通用可编程处理器,可以从存储系统、至少一个输入装置、和至少一个输出装置接收数据和指令,并且将数据和指令传输至该存储系统、该至少一个输入装置、和该至少一个输出装置。
这些计算程序(也称作程序、软件、软件应用、或者代码)包括可编程处理器的机器指令,并且可以利用高级过程和/或面向对象的编程语言、和/或汇编/机器语言来实施这些计算程序。如本文使用的,术语“机器可读介质”和“计算机可读介质”指的是用于将机器指令和/或数据提供给可编程处理器的任何计算机程序产品、设备、和/或装置(例如,磁盘、光盘、存储器、可编程逻辑装置(PLD)),包括,接收作为机器可读信号的机器指令的机器可读介质。术语“机器可读信号”指的是用于将机器指令和/或数据提供给可编程处理器的任何信号。
为了提供与用户的交互,可以在计算机上实施此处描述的系统和技术,该计算机具有:用于向用户显示信息的显示装置(例如,CRT(阴极射线管)或者LCD(液晶显示器)监视器);以及键盘和指向装置(例如,鼠标或者轨迹球),用户可以通过该键盘和该指向装置来将输入提供给计算机。其它种类的装置还可以用于提供与用户的交互;例如,提供给用户的反馈可以是任何形式的传感反馈(例如,视觉反馈、听觉反馈、或者触觉反馈);并且可以用任何形式(包括声输入、语音输入或者触觉输入)来接收来自用户的输入。
可以将此处描述的系统和技术实施在包括后台部件的计算系统(例如,作为数据服务器)、或者包括中间件部件的计算系统(例如,应用服务器)、或者包括前端部件的计算系统(例如,具有图形用户界面或者网络浏览器的用户计算机,用户可以通过该图形用户界面或者该网络浏览器来与此处描述的系统和技术的实施方式交互)、或者包括这种后台部件、中间件部件、或者前端部件的任何组合的计算系统中。可以通过任何形式或者介质的数字数据通信(例如,通信网络)来将系统的部件相互连接。通信网络的示例包括:局域网(LAN)、广域网(WAN)和互联网。
计算机系统可以包括客户端和服务器。客户端和服务器一般远离彼此并且通常通过通信网络进行交互。通过在相应的计算机上运行并且彼此具有客户端-服务器关系的计算机程序来产生客户端和服务器的关系。
应该理解,可以使用上面所示的各种形式的流程,重新排序、增加或删除步骤。例如,本公开中记载的各步骤可以并行地执行,也可以顺序地执行,也可以按不同的次序执行,只要能够实现本公开的技术方案所期望的结果,本文在此不进行限制。
上述具体实施方式,并不构成对本申请保护范围的限制。本领域技术人员应该明白的是,根据设计要求和其他因素,可以进行各种修改、组合、子组合和替代。任何在本申请的精神和原则之内所作的修改、等同替换和改进等,均应包含在本申请保护范围之内。

Claims (14)

  1. 一种用于验证图像的方法,包括:
    获取待验证图像,并采用多个目标局部特征识别模型分别识别所述待验证图像的多个局部区域的第一局部特征;
    获取基准图像,并采用所述多个目标局部特征识别模型分别识别所述基准图像的多个局部区域的第二局部特征;
    针对所述多个目标局部特征识别模型中的每一个目标局部特征识别模型,获取采用所述目标局部特征识别模型识别出的第一局部特征、与采用所述目标局部特征识别模型识别出的第二局部特征之间的特征相似度;
    根据获取的多个特征相似度,确定所述待验证图像是否通过验证。
  2. 根据权利要求1所述的方法,其中,所述获取待验证图像,并采用多个目标局部特征识别模型分别识别所述待验证图像的多个局部区域的第一局部特征,包括:
    获取待验证人脸图像,并将所述待验证人脸图像划分为不同的人脸区域;
    针对所述待验证人脸图像中的每一个人脸区域,采用用于识别所述人脸区域的特征的目标局部特征识别模型识别该人脸区域的第一局部特征;
    所述获取基准图像,并采用所述多个目标局部特征识别模型分别识别所述基准图像的多个局部区域的第二局部特征,包括:
    获取基准人脸图像,并将所述基准人脸图像划分为不同的人脸区域;
    针对所述基准人脸图像中的每一个人脸区域,采用用于识别所述人脸区域的特征的目标局部特征识别模型识别该人脸区域的第二局部特征。
  3. 根据权利要求1所述的方法,其中,所述根据获取的多个特征相似度,确定所述待验证图像是否通过验证,包括:
    响应于确定所述多个特征相似度中,存在至少一个特征相似度满足第一相似度阈值,确定所述待验证图像通过验证。
  4. 根据权利要求1所述的方法,其中,所述根据获取的多个特征相似度,确定所述待验证图像是否通过验证,包括:
    响应于确定所述多个特征相似度中的每一个特征相似度均满足第二相似度阈值,确定所述待验证图像通过验证。
  5. 一种用于验证图像的方法,包括:
    获取待验证图像,并采用训练好的特征识别模型识别所述待验证图像的第一全局特征;
    获取基准图像,并采用所述训练好的特征识别模型识别所述基准图像的第二全局特征;
    响应于确定所述第一全局特征与所述第二全局特征之间的相似度满足第三相似度阈值,采用权利要求1-4中任一项所述的方法验证所述待验证图像;或者,
    响应于确定所述第一全局特征与所述第二全局特征之间的相似度不满足所述第三相似度阈值,确定所述待验证图像未通过验证。
  6. 一种用于训练模型的方法,包括:
    获取至少一条样本数据,所述样本数据包括样本图像、以及所述样本图像中各个局部区域的局部图像的标签;
    获取用于识别所述各个局部区域的特征的各个初始局部特征识别模型;
    针对所述各个局部区域的局部图像中的每一个局部图像,将所述局部图像输入用于识别所述局部图像所属局部区域的特征的初始局部特征识别模型,并获得该初始局部特征识别模型输出的局部特征;
    获取所述各个局部区域的局部图像的标签与所述各个局部区域的局部图像的局部特征所表征的标签之间的多个损失;
    根据获取到的多个损失的均值,训练所述多个初始局部特征识别模型,并获得多个目标局部特征识别模型,其中,所述目标局部特征识别模型应用于权利要求1-5任一项所述的方法。
  7. 一种用于验证图像的装置,包括:
    第一识别单元,被配置为获取待验证图像,并采用多个目标局部特征识别模型分别识别所述待验证图像的多个局部区域的第一局部特征;
    第二识别单元,被配置为获取基准图像,并采用所述多个目标局部特征识别模型分别识别所述基准图像的多个局部区域的第二局部特征;
    匹配单元,被配置为针对所述多个目标局部特征识别模型中的每一个目标局部特征识别模型,获取采用所述目标局部特征识别模型识别出的第一局部特征、与采用所述目标局部特征识别模型识别出的第二局部特征之间的特征相似度;
    验证单元,被配置为根据获取的多个特征相似度,确定所述待验证图像是否通过验证。
  8. 根据权利要求7所述的装置,其中,所述第一识别单元,包括:
    第一划分模块,被配置为获取待验证人脸图像,并将所述待验证人脸图像划分为不同的人脸区域;
    第一识别模块,被配置为针对所述待验证人脸图像中的每一个人脸区域,采用用于识别所述人脸区域的特征的目标局部特征识别模型识别该人脸区域的第一局部特征;
    所述第二识别单元,包括:
    第二划分模块,被配置为获取基准人脸图像,并将所述基准人脸图像划分为不同的人脸区域;
    第二识别模块,被配置为针对所述基准人脸图像中的每一个人脸区域,采用用于识别所述人脸区域的特征的目标局部特征识别模型识别该人脸区域的第二局部特征。
  9. 根据权利要求7所述的装置,其中,所述验证单元,包括:
    第一验证模块,被配置为响应于确定所述多个特征相似度中,存在至少一个特征相似度满足第一相似度阈值,确定所述待验证图像通过验证。
  10. 根据权利要求7所述的装置,其中,所述验证单元,包括:
    第二验证模块,被配置为响应于确定所述多个特征相似度中的每一个特征相似度均满足第二相似度阈值,确定所述待验证图像通过验证。
  11. 一种用于验证图像的装置,包括:
    第三识别单元,被配置为获取待验证图像,并采用训练好的特征识别模型识别所述待验证图像的第一全局特征;
    第四识别单元,被配置为获取基准图像,并采用所述训练好的特征识别模型识别所述基准图像的第二全局特征;
    第一校验单元,被配置为响应于确定所述第一全局特征与所述第二全局特征之间的相似度满足第三相似度阈值,采用权利要求1-4中任一项所述的方法验证所述待验证图像;或者,
    第二校验单元,被配置为响应于确定所述第一全局特征与所述第二全局特征之间的相似度不满足所述第三相似度阈值,确定所述待验证图像未通过验证。
  12. 一种用于训练模型的装置,包括:
    第一获取单元,被配置为获取至少一条样本数据,所述样本数据包括样本图像、以及所述样本图像中各个局部区域的局部图像的标签;
    第二获取单元,被配置为获取用于识别所述各个局部区域的特征的各个初始局部特征识别模型;
    预测单元,被配置为针对所述各个区域的局部图像中的每一个局部图像,将所述局部图像输入用于识别所述局部图像所属局部区域的特征的初始局部特征识别模型,并获得该初始局部特征识别模型输出的局部特征;
    计算单元,被配置为获取所述各个局部区域的局部图像的标签与所述各个局部区域的局部特征所表征的标签之间的多个损失;
    训练单元,被配置为根据获取到的多个损失的均值,训练所述多个初始局部特征识别模型,并获得多个目标局部特征识别模型,其中,所述目标局部特征识别模型应用于权利要求1-5任一项所述的方法。
  13. 一种电子设备,包括:
    至少一个处理器;以及
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-6中任一项所述的方法。
  14. 一种存储有计算机指令的非瞬时计算机可读存储介质,其中,所述计算机指令用于使所述计算机执行权利要求1-6中任一项所述的方法。
PCT/CN2022/101888 2021-09-06 2022-06-28 用于验证图像的方法和装置 WO2023029702A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111047246.0 2021-09-06
CN202111047246.0A CN115775401A (zh) 2021-09-06 2021-09-06 用于验证图像的方法和装置

Publications (1)

Publication Number Publication Date
WO2023029702A1 true WO2023029702A1 (zh) 2023-03-09

Family

ID=85387907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/101888 WO2023029702A1 (zh) 2021-09-06 2022-06-28 用于验证图像的方法和装置

Country Status (2)

Country Link
CN (1) CN115775401A (zh)
WO (1) WO2023029702A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120288148A1 (en) * 2011-05-10 2012-11-15 Canon Kabushiki Kaisha Image recognition apparatus, method of controlling image recognition apparatus, and storage medium
US20160078284A1 (en) * 2014-09-17 2016-03-17 Canon Kabushiki Kaisha Object identification apparatus and object identification method
CN106709480A (zh) * 2017-03-02 2017-05-24 太原理工大学 基于加权强度pcnn模型的分块人脸识别方法
CN106874877A (zh) * 2017-02-20 2017-06-20 南通大学 一种结合局部和全局特征的无约束人脸验证方法
CN108229493A (zh) * 2017-04-10 2018-06-29 商汤集团有限公司 对象验证方法、装置和电子设备
CN112036266A (zh) * 2020-08-13 2020-12-04 北京迈格威科技有限公司 人脸识别方法、装置、设备及介质
CN113033244A (zh) * 2019-12-09 2021-06-25 漳州立达信光电子科技有限公司 一种人脸识别方法、装置及设备

Also Published As

Publication number Publication date
CN115775401A (zh) 2023-03-10

Similar Documents

Publication Publication Date Title
WO2021258588A1 (zh) 一种人脸图像识别方法、装置、设备及存储介质
CN110659600B (zh) 物体检测方法、装置及设备
JP7512523B2 (ja) ビデオ検出方法、装置、電子機器及び記憶媒体
CN110807410B (zh) 关键点定位方法、装置、电子设备和存储介质
CN110705460A (zh) 图像类别识别方法及装置
CN113657289B (zh) 阈值估计模型的训练方法、装置和电子设备
CN111598164A (zh) 识别目标对象的属性的方法、装置、电子设备和存储介质
WO2022213717A1 (zh) 模型训练方法、行人再识别方法、装置和电子设备
US11823494B2 (en) Human behavior recognition method, device, and storage medium
JP2021034003A (ja) 人物識別方法、装置、電子デバイス、記憶媒体、及びプログラム
US11403799B2 (en) Method and apparatus for recognizing face-swap, device and computer readable storage medium
JP7267379B2 (ja) 画像処理方法、事前トレーニングモデルのトレーニング方法、装置及び電子機器
CN111898561B (zh) 一种人脸认证方法、装置、设备及介质
WO2021227333A1 (zh) 人脸关键点检测方法、装置以及电子设备
WO2022213857A1 (zh) 动作识别方法和装置
WO2022247343A1 (zh) 识别模型训练方法、识别方法、装置、设备及存储介质
CN112507090A (zh) 用于输出信息的方法、装置、设备和存储介质
KR20220100810A (ko) 안면 생체 검출 방법, 장치, 전자 기기 및 저장 매체
CN114565513A (zh) 对抗图像的生成方法、装置、电子设备和存储介质
CN112561879A (zh) 模糊度评价模型训练方法、图像模糊度评价方法及装置
US20230096921A1 (en) Image recognition method and apparatus, electronic device and readable storage medium
CN111783619A (zh) 人体属性的识别方法、装置、设备及存储介质
CN114898266A (zh) 训练方法、图像处理方法、装置、电子设备以及存储介质
CN111523467A (zh) 人脸跟踪方法和装置
KR20210154774A (ko) 이미지 식별 방법, 장치, 전자 기기 및 컴퓨터 프로그램

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE