CN110321790B - Method for detecting adversarial samples and electronic device - Google Patents


Info

Publication number
CN110321790B (application CN201910425689.5A)
Authority
CN
China
Prior art keywords
value, electronic device, pixel, sample, picture
Prior art date
Legal status
Active
Application number
CN201910425689.5A
Other languages
Chinese (zh)
Other versions
CN110321790A (en)
Inventor
李实
赵晓娜
王思善
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910425689.5A priority Critical patent/CN110321790B/en
Publication of CN110321790A publication Critical patent/CN110321790A/en
Priority to PCT/CN2020/091027 priority patent/WO2020233564A1/en
Application granted granted Critical
Publication of CN110321790B publication Critical patent/CN110321790B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Abstract

The embodiments of this application disclose a method for detecting adversarial samples, applied to face recognition scenarios (such as face payment and face unlocking on a mobile phone). The method comprises the following steps: the electronic device collects a face image at the current moment through a camera device (for example, a photographed face picture, or a face picture cut from a captured video), judges whether an occlusion (such as glasses or a sticker) exists within the face region of the image, and if so, further judges whether the occlusion is an adversarial-sample interferent; if it is, the device determines that the face image is an adversarial sample (i.e., that the device is under adversarial-sample attack). This detection method requires no deep-model training on large numbers of adversarial-sample pictures, no knowledge of which generation algorithm the adversarial-sample generator uses, and no prior knowledge of the attacker's face image, yet it can still detect whether an adversarial sample is present; detection complexity is low and the method is easy to implement.

Description

Method for detecting adversarial samples and electronic device
Technical Field
The present disclosure relates to the field of image recognition, and in particular to a method for detecting adversarial samples and an electronic device.
Background
Deep learning is a core technology in machine learning and artificial intelligence today. In machine vision it has become the dominant approach in face recognition, autonomous driving, surveillance, and security applications. However, deep learning networks are vulnerable to slight perturbations of their input, which can cause them to output false recognition results. For example, in image recognition, when the pixel values of some pixels in the input picture change (i.e., a slight perturbation occurs), the network may output a wrong recognition result. Such a perturbation is imperceptible to the human eye yet can completely fool the network. This attack method, which adds a suitable perturbation to an input picture so that the deep learning network outputs a false recognition result, is called an adversarial-sample attack, and the perturbed input picture is called an adversarial sample. Fig. 1 shows an example of an adversarial-sample attack: a perturbation not easily perceived by the human eye (i.e., a change to the pixel values of some pixels of the input picture) is added to a panda picture (the input picture), and the network misrecognizes the result as a gibbon, even though to the human eye the output picture is indistinguishable from the input. This form of adversarial-sample attack can only be carried out on pictures already present in the device (i.e., by altering the pixel values of some pixels of a stored picture). In a face recognition scenario, there is no way to apply such pixel-level perturbation to a face picture shot at the current moment (i.e., a real-time face picture captured by the camera).
Based on this, a second form of adversarial sample has arisen: the attacker wears specially crafted adversarial-sample items on the face (e.g., adversarial-sample glasses or glasses frames, adversarial-sample stickers), so that the face recognition system identifies the attacker as a pre-designated person (i.e., the victim).
Currently, the solutions against these two forms of adversarial-sample attack (i.e., changing the pixel values of some pixels in an existing input picture, and wearing adversarial-sample items in a face picture taken at the current moment) are as follows: 1) Use adversarial samples together with the original input pictures as a training data set, and feed the set into a deep learning network for model training, producing an adversarial-sample detector that can detect whether an input picture is an adversarial sample. 2) Train a denoiser based on a function (i.e., a loss function) of the degree of difference between the recognition result of the original input picture and that of the adversarial sample, and use the denoiser to denoise the input adversarial sample (i.e., to remove the perturbation added to it).
However, both solutions have drawbacks: 1) The adversarial-sample detector can only detect samples produced by known adversarial-sample generators; to detect all adversarial samples it would have to be trained on the output of every kind of generator, which is not only costly but also hard to implement. 2) The denoiser can only process samples produced by known generators, and it must know in advance whether an adversarial-sample attack is occurring. It also needs to know in advance the recognition result of the attacker's original picture in the recognition system, i.e., who the attacker is, which is difficult in practical application scenarios.
Disclosure of Invention
A first aspect of the embodiments of this application provides a method for detecting adversarial samples, applied to face recognition scenarios on an electronic device (such as face payment and face unlocking on a mobile phone). Specifically:
First, the electronic device can collect a face image at the current moment through a camera device (e.g., a face picture taken at the current moment, or a face picture cut from a video recorded at the current moment). Note that the camera device may be the electronic device's own camera, or a camera physically separate from the device but wirelessly connected to it (for example, the phone's camera is not enabled, is damaged, or is absent, but a handheld camera is connected to the phone via Bluetooth); this is not limited here. Note also that the electronic device may collect the face image in various ways: collection may be triggered in response to some operation instruction, i.e., executing a certain instruction triggers the device to collect the face image through the camera; or the camera device may be always on, so that the device collects a face image whenever the camera captures one. The specific way the face image at the current moment is collected is not limited. If the electronic device detects that an occlusion (such as glasses or a sticker) exists within the face region of the face image shot at the current moment, it further judges whether the occlusion is an adversarial-sample interferent; if it determines that the occlusion is an adversarial-sample interferent, it determines that the face image shot at the current moment is an adversarial sample (i.e., that it is under adversarial-sample attack).
In the embodiments of this application, whether the face image is an adversarial sample is determined by judging whether an occlusion within the face region of the face image shot at the current moment is an adversarial-sample interferent. The detection method adopted by these embodiments, applied to face recognition scenarios, requires no deep-model training on large numbers of adversarial-sample pictures, no knowledge of which generation algorithm (whether known or newly devised) the adversarial-sample generator uses, and no prior knowledge of the attacker's face image, yet it can still detect whether an adversarial sample is present, so the attacker cannot carry out the adversarial-sample attack. Moreover, the detection method has low complexity and is easy to implement.
With reference to the first aspect of the embodiments of this application, in a first implementation manner of the first aspect, the electronic device's judging whether the occlusion is an adversarial-sample interferent may include: first, calculating the pixel values of all pixels in the occlusion to obtain the picture entropy value of the occlusion; then comparing the calculated picture entropy value with a preset threshold to judge whether the occlusion is an adversarial-sample interferent. The preset threshold may be determined according to a first preset manner; for example, it may be set by a user from an empirical value, or generated by a deep learning network, which is not limited here. Finally, if the picture entropy value is greater than the preset threshold, the occlusion is determined to be an adversarial-sample interferent.
In the embodiments of this application, whether the occlusion is an adversarial-sample interferent is judged by calculating the picture entropy value of the occlusion and comparing it with the preset threshold, which is practically operable.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, calculating the pixel values of all pixels in the occlusion to obtain its picture entropy value may include: decomposing the pixel values of all pixels in the occlusion into a first vector pixel value (also called the X-axis pixel value), a second vector pixel value (the Y-axis pixel value), and a third vector pixel value (the Z-axis pixel value) in a color space, to obtain a first set of first vector pixel values, a second set of second vector pixel values, and a third set of third vector pixel values; then calculating, according to an entropy calculation formula, a first picture entropy value of the first set, a second picture entropy value of the second set, and a third picture entropy value of the third set; and finally taking the arithmetic mean of the three picture entropy values as the picture entropy value of the occlusion. This describes how the picture entropy of the occlusion is calculated: all pixels of the occlusion are first split in the color space and then computed with the entropy formula, a calculation that is simple, convenient, and easy to carry out.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the entropy calculation formula may be:

H = - Σ_i p_i log2(p_i)

where i is the value of each element in the first, second, or third set, p_i is the probability of i occurring, and H is the first, second, or third picture entropy value, respectively. If the occlusion is a grayscale picture, the pixel values of all its pixels are identical on every coordinate axis of the color space (e.g., the RGB color model), so the picture entropy of the occlusion can be obtained directly from the same formula, where i is the pixel value of each pixel in the occlusion, p_i is the probability of pixel value i occurring, and H is the picture entropy of the occlusion. If the occlusion is a color picture, the picture entropy on each coordinate axis of the color space is still calculated with the same formula used for grayscale pictures: when calculating the first picture entropy on the X axis, i is the value of each pixel value in the first set, p_i is the probability of i occurring, and H is the first picture entropy; when calculating the second picture entropy on the Y axis, i is the value of each pixel value in the second set, p_i is the probability of i occurring, and H is the second picture entropy; when calculating the third picture entropy on the Z axis, i is the value of each pixel value in the third set, p_i is the probability of i occurring, and H is the third picture entropy.
The embodiments of this application provide a concrete entropy calculation formula that applies to all pictures (both grayscale and color), giving it a wide range of application and practical operability.
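As a minimal sketch of the entropy calculation above (assuming the occlusion region is available as an H x W x 3 uint8 array and that base-2 logarithms are used; the function names are illustrative, not from the patent):

```python
import numpy as np

def channel_entropy(values):
    # H = -sum(p_i * log2(p_i)) over the distinct values in one channel
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def picture_entropy(region):
    # Split the occlusion's pixels along the three color-space axes
    # (the first, second, and third sets), compute each set's entropy,
    # then take the arithmetic mean as the occlusion's picture entropy.
    return float(np.mean([channel_entropy(region[..., c]) for c in range(3)]))
```

A flat single-color occlusion has entropy 0, while a heavily perturbed adversarial patch, whose pixel values are far more varied, scores much higher; this gap is what the threshold comparison exploits.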
With reference to the first aspect and its first to third implementation manners, in a fourth implementation manner of the first aspect, if the electronic device detects that an occlusion exists within the face region of the face image and the occlusion is an adversarial-sample interferent, the detection method may further include:
processing the adversarial sample according to a second preset manner, and recognizing the processed sample to obtain a recognition result. The purpose of the processing is to eliminate the influence of the adversarial-sample interferent; the interferent may be removed directly, or converted into an ordinary occlusion, which is not limited here.
In this embodiment, after the above processing of the adversarial-sample interferent, either no occlusion remains within the face region of the image or the remaining occlusion is only an ordinary one, so the electronic device can recognize the face image directly and obtain a recognition result. Re-recognizing after the occlusion has been processed prevents misrecognition and improves the user experience.
With reference to the fourth implementation manner of the first aspect of the embodiment of the present application, in a fifth implementation manner of the first aspect of the embodiment of the present application, the second preset manner may include:
first, determining a target pixel value and modifying the pixel values of all pixels in the adversarial-sample interferent to that target pixel value; or applying an algebraic linear transformation to the pixel value x of every pixel in the adversarial-sample interferent, for example modifying the pixel values to (255 - x) or 0.5 * (255 - x); the form of the algebraic linear transformation is not limited here.
The embodiments of this application provide several implementations of the second preset manner, which improves flexibility.
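The second preset manner can be sketched as follows. This is a hedged illustration, assuming the interferent is given as a boolean mask over the image; `neutralize_interferent` and its mode names are invented for the example:

```python
import numpy as np

def neutralize_interferent(image, mask, mode="invert"):
    # Destroy the adversarial perturbation in the masked region, either
    # by inverting pixel values (x -> 255 - x) or by a scaled inversion
    # (x -> 0.5 * (255 - x)); any algebraic linear transform would do.
    out = image.astype(np.float32)
    if mode == "invert":
        out[mask] = 255.0 - out[mask]
    elif mode == "half_invert":
        out[mask] = 0.5 * (255.0 - out[mask])
    else:
        raise ValueError(mode)
    return out.astype(np.uint8)
```

After the transform, the masked region no longer carries the crafted perturbation, so the face image can be fed back into the recognizer.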
With reference to the fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, several ways of determining the target pixel value may also be provided, including:
arbitrarily selecting one pixel value within the pixel-value range (i.e., any integer from 0 to 255) as the target pixel value;
or,
taking the pixel value of any pixel A in the adversarial-sample interferent (as the target pixel value), and modifying the pixel values of all other pixels in the interferent to the same value as pixel A;
or,
taking the pixel value of any pixel B within the face region of the face image (as the target pixel value), and modifying the pixel values of all pixels in the interferent to the same value as pixel B;
or,
taking the arithmetic mean C of the pixel values of all pixels within the face region of the face image (as the target pixel value), and modifying the pixel values of all pixels in the interferent to that mean C.
The embodiments of this application provide multiple implementations for determining the target pixel value, which improves flexibility.
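The four options above can be sketched in one helper. This is an illustrative assumption-laden sketch: the face region is taken as a grayscale array, the interferent as a boolean mask, and `target_pixel_value` with its strategy names is invented for the example (the patent does not fix whether the face-region mean includes the interferent pixels; here it excludes them):

```python
import numpy as np

def target_pixel_value(face, mask, strategy, rng=None):
    # face: grayscale face region; mask: True where the interferent is.
    # Returns the value every interferent pixel will be modified to.
    rng = rng or np.random.default_rng()
    if strategy == "random":             # any integer in 0..255
        return int(rng.integers(0, 256))
    if strategy == "interferent_pixel":  # value of an arbitrary pixel A
        return int(face[mask][0])
    if strategy == "face_pixel":         # value of an arbitrary pixel B
        return int(face[~mask][0])
    if strategy == "face_mean":          # arithmetic mean C of the face region
        return int(round(face[~mask].mean()))
    raise ValueError(strategy)
```

Whichever strategy is used, the chosen value is then written over every interferent pixel, turning the crafted patch into a flat, ordinary occlusion.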
With reference to the first aspect and its first to sixth implementation manners, in a seventh implementation manner of the first aspect, after determining that the face image is an adversarial sample (i.e., the recognition result is not the owner of the electronic device), the electronic device may further generate a reminder notification that prompts the relevant user that the device is under adversarial-sample attack. The relevant user may be the owner of the device (i.e., the victim), whom the notification may prompt to act in time (e.g., change the payment password, call the police), or a service merchant corresponding to the device (e.g., if the attacker uses the victim's phone to pay at a supermarket, the corresponding merchant is the supermarket's cashier). The reminder can be delivered in various ways, including but not limited to the following:
the reminder notification is raised on the electronic device itself, e.g., as a voice broadcast or an alarm ring;
and/or,
the reminder notification is sent to a server corresponding to the electronic device;
and/or,
the reminder notification is sent to a target electronic device associated with the electronic device.
In the embodiments of this application, when the electronic device is determined to be under adversarial-sample attack, a corresponding reminder notification is generated to alert the relevant users, which has concrete practicality.
With reference to the first to seventh implementation manners of the first aspect, in an eighth implementation manner of the first aspect, the first preset manner of determining the preset threshold may include:
acquiring a large number (e.g., M, with M >= 1) of normal face images (i.e., reference face images) in whose face regions there is either no occlusion (i.e., original face images without any perturbation) or only an ordinary occlusion (e.g., ordinary glasses, tape, a mask); then calculating, from the pixel values of each normal face image, its picture entropy value (yielding M target picture entropy values), where the entropy can be obtained with the entropy calculation formula above; and finally taking the arithmetic mean of the picture entropy values of all the normal face images, which can serve as the preset threshold.
The embodiments of this application provide a concrete, operable way of setting the preset threshold.
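Under the assumption that each reference face image is available as an H x W x 3 uint8 array, the threshold construction and the resulting decision rule could be sketched as follows (function names are illustrative, not from the patent):

```python
import numpy as np

def picture_entropy(region):
    # Per-axis Shannon entropy (base 2), arithmetically averaged.
    hs = []
    for c in range(region.shape[-1]):
        _, counts = np.unique(region[..., c], return_counts=True)
        p = counts / counts.sum()
        hs.append(-np.sum(p * np.log2(p)))
    return float(np.mean(hs))

def preset_threshold(reference_faces):
    # First preset manner: arithmetic mean of the picture entropy
    # values of the M normal (reference) face images.
    return float(np.mean([picture_entropy(f) for f in reference_faces]))

def is_adversarial_interferent(occlusion, threshold):
    # The occlusion is flagged when its entropy exceeds the threshold.
    return picture_entropy(occlusion) > threshold
```

The threshold thus encodes how much pixel-value variation ordinary faces and ordinary occlusions exhibit; an occlusion whose entropy exceeds it is treated as an adversarial-sample interferent.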
With reference to the first implementation manner of the first aspect, in a ninth implementation manner of the first aspect, if the picture entropy value is less than or equal to the preset threshold, the occlusion is determined to be an ordinary occlusion; the electronic device then recognizes the face image containing the ordinary occlusion to obtain a recognition result.
In the embodiments of this application, if the occlusion is an ordinary one, recognition proceeds normally, so the user's normal use is unaffected and the user experience is improved.
A second aspect of the embodiments of this application provides an electronic device, which may include: one or more camera devices; one or more touch screens; one or more processors; one or more memories;
wherein the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the steps of:
collecting a face image at the current moment, the face image being captured by the camera device;
judging whether an occlusion is an adversarial-sample interferent, the occlusion being located within the face region of the face image;
if the occlusion is an adversarial-sample interferent, determining that the face image is an adversarial sample.
With reference to the second aspect of the embodiments of the present application, in a first implementation manner of the second aspect of the embodiments of the present application, the instructions, when executed by the electronic device, cause the electronic device to further perform the following steps:
calculating the pixel values of all pixels in the occlusion to obtain the picture entropy value of the occlusion;
judging whether the picture entropy value is greater than a preset threshold, the preset threshold being determined according to a first preset manner;
if the picture entropy value is greater than the preset threshold, determining that the occlusion is an adversarial-sample interferent.
With reference to the first implementation manner of the second aspect of the embodiments of the present application, in a second implementation manner of the second aspect of the embodiments of the present application, the instruction, when executed by the electronic device, may cause the electronic device to further perform the following steps:
decomposing the pixel values of all pixels in the occlusion into first, second, and third vector pixel values in a color space, to obtain a first set of first vector pixel values, a second set of second vector pixel values, and a third set of third vector pixel values;
calculating, according to an entropy calculation formula, a first picture entropy value of the first set, a second picture entropy value of the second set, and a third picture entropy value of the third set;
determining the arithmetic mean of the first, second, and third picture entropy values as the picture entropy value of the occlusion.
With reference to the second implementation manner of the second aspect of the embodiments of this application, in a third implementation manner of the second aspect, the entropy calculation formula may be:

H = - Σ_i p_i log2(p_i)

where i is the value of each element in the first, second, or third set, p_i is the probability of i occurring, and H is the first, second, or third picture entropy value, respectively.
With reference to the second aspect and its first to third implementation manners, in a fourth implementation manner of the second aspect, if the electronic device determines that the occlusion is an adversarial-sample interferent, the instructions, when executed, may further cause the electronic device to perform the following steps:
processing the adversarial sample according to a second preset manner;
recognizing the processed adversarial sample to obtain a recognition result.
With reference to the fourth implementation manner of the second aspect of the embodiment of the present application, in a fifth implementation manner of the second aspect of the embodiment of the present application, the second preset manner includes:
determining a target pixel value, and modifying the pixel values of all pixels in the adversarial-sample interferent to the target pixel value;
or,
applying an algebraic linear transformation to the pixel values of all pixels in the adversarial-sample interferent.
With reference to the fifth implementation manner of the second aspect of the embodiment of the present application, in a sixth implementation manner of the second aspect of the embodiment of the present application, the determining the target pixel value includes:
arbitrarily selecting one pixel value within the pixel-value range as the target pixel value;
or,
determining the pixel value of any pixel in the adversarial-sample interferent as the target pixel value;
or,
determining the pixel value of any pixel within the face region as the target pixel value;
or,
determining the arithmetic mean of the pixel values of all pixels within the face region as the target pixel value.
With reference to the second aspect and its first to sixth implementation manners, in a seventh implementation manner of the second aspect, after the electronic device determines that the face image is an adversarial sample, the instructions, when executed, further cause the electronic device to perform the following steps:
generating a reminder notification;
broadcasting the reminder notification by voice;
and/or,
sending the reminder notification to a corresponding server;
and/or,
sending the reminder notification to an associated target electronic device.
With reference to the first to seventh implementation manners of the second aspect, in an eighth implementation manner of the second aspect, the first preset manner includes:
obtaining M reference face images, in whose face regions there is either no occlusion or only an ordinary occlusion, where M >= 1;
calculating the pixel values of all pixels in a target reference face image among the M reference face images to obtain a target picture entropy value of that image;
determining the arithmetic mean of the M target picture entropy values corresponding to the M reference face images as the preset threshold.
With reference to the first implementation manner of the second aspect of the embodiment of the present application, in a ninth implementation manner of the second aspect of the embodiment of the present application, if the entropy value of the picture is less than or equal to the preset threshold value, when the instruction is executed by the electronic device, the electronic device may further perform the following steps:
Determining that the shielding object is a common shielding object;
and identifying the common shielding object to obtain an identification result.
The third aspect of the embodiments of the present application further provides an electronic device, where the electronic device may specifically include:
the acquisition unit is used for acquiring face images at the current moment through the camera equipment;
the judging unit is used for judging whether the occlusion object is an adversarial-sample interfering object, where the occlusion object is located within the face area range of the face image;
and the determining unit is used for determining that the face image is an adversarial sample if the occlusion object is the adversarial-sample interfering object.
With reference to the third aspect of the embodiment of the present application, in a first implementation manner of the third aspect of the embodiment of the present application, the determining unit includes:
the calculating subunit is used for calculating the pixel values of all the pixel points in the shielding object to obtain the picture entropy value of the shielding object;
the judging subunit is used for judging whether the entropy value of the picture is larger than a preset threshold value or not, and the preset threshold value is determined according to a first preset mode;
and the first determining subunit is configured to determine that the occlusion object is the adversarial-sample interfering object if the picture entropy value is greater than the preset threshold value.
With reference to the first implementation manner of the third aspect of the embodiment of the present application, in a second implementation manner of the third aspect of the embodiment of the present application, the computing subunit is specifically configured to:
Decomposing the pixel values of all the pixel points in the shielding object into a first vector pixel value, a second vector pixel value and a third vector pixel value in a color space to obtain a first set of the first vector pixel value, a second set of the second vector pixel value and a third set of the third vector pixel value;
respectively calculating a first picture entropy value of the first set, a second picture entropy value of the second set and a third picture entropy value of the third set according to an entropy value calculation formula;
and determining the arithmetic average value of the first picture entropy value, the second picture entropy value and the third picture entropy value as the picture entropy value of the shielding object.
With reference to the second implementation manner of the third aspect of the embodiment of the present application, in a third implementation manner of the third aspect of the embodiment of the present application, the entropy value calculation formula includes:
H = -∑_i p_i · log(p_i)
wherein i is the value of each element in the first set, the second set, or the third set, p_i is the probability of occurrence of i, and H is correspondingly the first picture entropy value, the second picture entropy value, or the third picture entropy value.
With reference to the third aspect of the present application and the first to third implementation manners of the third aspect of the present application, in a fourth implementation manner of the third aspect of the present application, if the occlusion object is the adversarial-sample interfering object, the electronic device further includes:
a processing unit, configured to process the adversarial sample according to a second preset manner;
and an identification unit, configured to identify the processed adversarial sample to obtain an identification result.
With reference to the fourth implementation manner of the third aspect of the embodiment of the present application, in a fifth implementation manner of the third aspect of the embodiment of the present application, the second preset manner includes:
determining a target pixel value, and modifying the pixel values of all pixel points in the adversarial-sample interfering object to the target pixel value;
or,
and performing an algebraic linear transformation on the pixel values of all pixel points in the adversarial-sample interfering object.
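The second preset manner above (overwriting the interfering region with a single target pixel value, or applying an algebraic linear transformation to it) can be sketched as follows. This is an illustrative sketch only: the function names, the Boolean-mask representation of the interfering region, and the coefficients a = 0.5, b = 64 are assumptions, not part of the application.

```python
def overwrite_region(pixels, mask, target):
    """Set every masked pixel (the suspected interfering object)
    to one target RGB value."""
    return [target if m else p for p, m in zip(pixels, mask)]

def linear_transform_region(pixels, mask, a=0.5, b=64):
    """Apply an algebraic linear transform a*x + b to each channel
    of the masked pixels, clamped to the 0-255 color-level range."""
    def tf(p):
        return tuple(min(255, max(0, int(a * c + b))) for c in p)
    return [tf(p) if m else p for p, m in zip(pixels, mask)]

pixels = [(120, 50, 80), (30, 90, 40)]
mask = [True, False]  # only the first pixel belongs to the interfering object
print(overwrite_region(pixels, mask, (128, 128, 128)))
# → [(128, 128, 128), (30, 90, 40)]
print(linear_transform_region(pixels, mask))
# → [(124, 89, 104), (30, 90, 40)]
```

Either variant destroys the carefully tuned pixel pattern of the interfering object while leaving the rest of the face image untouched, which is why recognition can then proceed on the processed sample.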
With reference to the fifth implementation manner of the third aspect of the embodiment of the present application, in a sixth implementation manner of the third aspect of the embodiment of the present application, the determining the target pixel value includes:
arbitrarily selecting one pixel value within the value range of pixel values as the target pixel value;
or,
determining the pixel value of any pixel point in the adversarial-sample interfering object as the target pixel value;
or,
determining the pixel value of any pixel point in the face area as the target pixel value;
or,
and determining the arithmetic average value of the pixel values of all the pixel points in the face area as the target pixel value.
With reference to the third aspect of the present application and the first to sixth implementation manners of the third aspect of the present application, in a seventh implementation manner of the third aspect of the present application, after determining that the face image is an adversarial sample, the electronic device further includes:
the generating unit is used for generating a reminding notification;
the broadcasting unit is used for broadcasting the reminding notification in a voice mode;
and/or,
a sending unit, configured to send the alert notification to a corresponding server; and/or sending the alert notification to the associated target electronic device.
With reference to the first to seventh embodiments of the third aspect of the embodiment of the present application, in an eighth embodiment of the third aspect of the embodiment of the present application, the first preset manner includes:
obtaining M reference face images, where M ≥ 1, and where either no shielding object exists within the face area range of the reference face images or only common shielding objects exist within that range;
calculating pixel values of all pixel points in a target reference face image in the M reference face images to obtain a target picture entropy value of the target reference face image;
and determining the arithmetic average value of the entropy values of the M target pictures respectively corresponding to the M reference face images as the preset threshold value.
With reference to the first implementation manner of the third aspect of the embodiment of the present application, in a ninth implementation manner of the third aspect of the embodiment of the present application, the determining unit further includes:
the second determining subunit is configured to determine that the occlusion object is a common occlusion object if the entropy value of the picture is less than or equal to the preset threshold value;
the identification unit is specifically used for identifying the common shielding object to obtain an identification result.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the detection method of the first aspect and any one of the possible implementations of the first aspect.
A fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when run on a computer, enable the computer to perform the detection method of the first aspect and any one of the possible implementations of the first aspect.
From the above technical solutions, the embodiments of the present application have the following advantages:
the electronic device collects a face image at the current moment (such as a face picture taken at the current moment, or a face picture extracted from a camera recording made at the current moment) through a camera device (such as a camera on the electronic device, or a camera that is physically separated from the electronic device but wirelessly connected to it). If the electronic device detects that a shielding object (such as glasses, a sticker, or the like) exists within the face area range of the face image shot at the current moment, the electronic device further judges whether the shielding object is an adversarial-sample interfering object; if the electronic device determines that the shielding object is an adversarial-sample interfering object, the electronic device determines that the face image shot at the current moment is an adversarial sample (i.e. is subject to an adversarial-sample attack). In the embodiment of the present application, whether the face image is an adversarial sample is thus determined by judging whether a shielding object within the face area range of the face image shot at the current moment is an adversarial-sample interfering object. The detection method for adversarial samples adopted in the embodiment of the present application, applied to a face recognition scenario, does not need to train a deep model on a large number of adversarial-sample pictures, does not need to know which adversarial-sample generation algorithm (whether known or newly devised) is adopted by the generator of the adversarial sample, and can detect whether an adversarial sample exists without knowing the attacker's face image in advance, so that the attacker cannot carry out the adversarial-sample attack. Moreover, the detection method has low complexity and is easy to implement.
Drawings
FIG. 1 is a schematic diagram of an example of an adversarial sample in the prior art;
FIG. 2 is a schematic diagram of an embodiment of an adversarial-sample attack in a face recognition application scenario;
FIG. 3 is a schematic illustration of the correspondence between an adversarial-sample article and a victim;
FIG. 4 is another schematic illustration of the correspondence between an adversarial-sample article and a victim;
FIG. 5 is a schematic diagram of a method for detecting an adversarial sample according to an embodiment of the present application;
FIG. 6 is a schematic diagram showing the results of calculating picture entropy values for several different pixel distributions;
FIG. 7 is a diagram of one implementation of processing a generated reminder notification in an embodiment of the present application;
FIG. 8 is a diagram of another implementation of processing a generated reminder notification in an embodiment of the present application;
FIG. 9 is a schematic diagram of a reminder notification generated in an embodiment of the present application being sent to a server corresponding to the electronic device;
FIG. 10 is a schematic diagram of a reminder notification generated in an embodiment of the present application being sent to an associated target electronic device;
FIG. 11 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 12 is another schematic diagram of an electronic device in an embodiment of the present application;
FIG. 13 is a hardware architecture diagram of an electronic device in an embodiment of the present application;
fig. 14 is a software architecture diagram of an electronic device in an embodiment of the present application.
Detailed Description
In the application scenario of face recognition (such as face payment and face unlocking on a mobile phone), an attacker cannot directly scramble the face image photographed by an electronic device (such as the mobile phone) at the current moment (i.e. cannot modify the pixel values of pixel points in the face image photographed at the current moment), so the attacker cannot implement an adversarial-sample attack in the manner corresponding to fig. 1. In this case, the attacker implements the adversarial-sample attack by wearing an adversarial-sample article such as scrambling glasses or a scrambling sticker. This manner of adversarial-sample attack is illustrated by way of example in fig. 2: in the application scenario of face recognition (for example, attacker A is paying with the face-payment function of a mobile phone), the electronic device (such as the mobile phone) acquires, through the camera, the face image of attacker A wearing adversarial glasses (denoted as glasses frame a) at the current moment; the electronic device then identifies attacker A as victim V1, so that the face-payment function of the mobile phone is successfully completed (assuming that the target face image set for face payment in the mobile phone is that of victim V1). Similarly, attacker B and attacker C can adopt a similar attack manner (for example, by wearing glasses frame b and glasses frame c respectively, they are identified by the electronic device as victim V2 and victim V3 respectively). Such recognition application scenarios include: an attacker wearing an adversarial-sample article may correspond to a plurality of victims that the deep learning network determined when generating the corresponding adversarial-sample article. As shown in fig. 3, taking the adversarial-sample article as an adversarial glasses frame for example, before generating the adversarial glasses frame (denoted as glasses frame a1), the attacker can specify requirements such as the number of victims (such as victim V11, victim V12, and victim V13); then, according to the attacker's requirements, the deep learning network generates the corresponding glasses frame a1 by adopting a specific algorithm, and after the attacker wears glasses frame a1, the attacker can be identified by the electronic device as victim V11, victim V12, or victim V13. Similarly, such recognition application scenarios may further include: multiple attackers can also be identified as the same victim by wearing the same adversarial-sample article. As shown in fig. 4, taking the adversarial-sample article as an adversarial glasses frame for example, assuming that the number of attackers is 3, that is, attacker A11, attacker A12, and attacker A13 all wear the same adversarial glasses frame (denoted as glasses frame a2), the deep learning network can, according to the attackers' needs, cause each of attacker A11, attacker A12, and attacker A13 wearing glasses frame a2 to be identified as victim V21. Then whichever of attackers A11, A12, and A13 wears glasses frame a2 can be recognized by the electronic device as victim V21.
The above manner of adversarial-sample attack poses a great threat to image recognition systems and has a great impact on recognition results (e.g. false recognition), which may have serious consequences (e.g. an attacker being recognized as a victim and completing an online payment, causing financial loss to the victim; or being recognized as a victim and unlocking the victim's mobile phone, causing the victim's privacy to be compromised). In order to avoid the adverse effects or serious consequences of adversarial-sample attacks in the face recognition scenario, the embodiments of the present application provide a method for detecting adversarial samples, which can effectively detect whether the face image acquired at the current moment is an adversarial sample, so that successful implementation of an adversarial-sample attack can be effectively prevented.
It should be noted that the detection method for adversarial samples provided in the embodiment of the present application is applied to a face recognition scenario, and the execution body of the detection method includes an electronic device, where the electronic device is provided with a camera device (such as a camera) and a display device (such as a liquid crystal display screen) and may be an intelligent terminal such as a mobile phone, a tablet computer, or a smart watch; the specific electronic device is not limited herein. It should also be noted that the terms "first," "second," "third," "fourth," and the like (if any) in the description and claims of this application and in the above drawings are used for distinguishing between similar objects and are not necessarily for describing a particular sequential or chronological order. It is to be understood that data so used may be interchanged where appropriate, such that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 5 is a schematic diagram of a method for detecting an adversarial sample according to an embodiment of the present application; the specific implementation manner is as follows:
501. Acquiring a face image at the current moment through the camera equipment.
The electronic device can acquire the face image at the current moment through the camera equipment. It should be noted that the camera equipment may be a camera of the electronic device, or may be a camera physically separated from the electronic device but wirelessly connected to it (for example, the mobile phone has not opened its own camera, the camera of the mobile phone is damaged, or the mobile phone has no camera, but a handheld camera is connected to the mobile phone by Bluetooth); this is not limited herein. It should further be noted that the electronic device may acquire the face image at the current moment through the camera equipment in various forms, for example, in response to the trigger of a certain operation instruction, that is, execution of the operation instruction triggers the electronic device to capture the face image at the current moment through the camera equipment. Taking the electronic device as a mobile phone for illustration: the operation instruction may be a payment operation in which a user uses the mobile phone to present a payment two-dimensional code (such as a WeChat or Alipay payment two-dimensional code) for a merchant to scan, where, to ensure the security of the payment environment, the payment operation requires verification of the user's identity; the operation instruction may also be an opening operation performed by the user on an application on the mobile phone (such as online banking or phone-bill inquiry), where, to ensure information security, the opening operation likewise requires verification of the user's identity. In the embodiment of the present application, the form of the operation instruction is not limited.
The user's execution of the operation instruction triggers the electronic device to verify the user's identity, and one of the verification manners is to identify the face image acquired at the current moment; that is, the user's payment operation, opening operation, or the like triggers the electronic device to open the camera and shoot the user's face, so as to acquire the face image at the current moment. The electronic device may take a picture of the user's face, or may record a video of the face and then extract a picture from the video. Besides responding to the trigger of an operation instruction, the camera equipment may also always be in an on state, and the electronic device collects the face image as long as the camera equipment captures a face image at the current moment. In the embodiment of the present application, the form in which the electronic device acquires the face image at the current moment is not limited.
502. Judging whether a shielding object exists within the face area range; if yes, executing step 503; if not, executing step 504.
After the electronic device obtains the face image at the current moment through the camera equipment, the detection of the shielding object in the face area range in the face image is carried out, namely whether the shielding object such as glasses, glasses frames, stickers and the like which are worn by people exists in the face area range is detected. If there is an occlusion in the face region, step 503 is executed, and if there is no occlusion in the face region, step 504 is executed.
503. Judging whether the shielding object is an adversarial-sample interfering object; if so, executing step 505; if not, executing step 504.
If the electronic device detects that a shielding object exists within the face area range of the face image, the electronic device further judges whether the shielding object is an adversarial-sample interfering object. If the shielding object is an adversarial-sample interfering object, step 505 is executed; if it is not, step 504 is executed.
Note that, in the embodiment of the present application, an adversarial-sample interfering object refers to an interfering object with adversarial-sample properties, obtained by training through a deep learning network for the purpose of implementing an adversarial-sample attack, such as an adversarial glasses frame (e.g., glasses frame a, glasses frame b, glasses frame c, glasses frame a1, and glasses frame a2 described in fig. 2, 3, and 4) or a corresponding adversarial sticker; the form of the adversarial-sample interfering object is not limited herein.
In this form of adversarial-sample attack in the face recognition scenario, adversarial-sample interfering objects have the following attribute characteristics: the pixel values of pixel points within a small range vary greatly and irregularly, which is visually represented as bright, gaudy colors. Based on this, in some embodiments of the present application, the electronic device may analyze the pixel values of the pixel points of the shielding object by, but not limited to, a picture-entropy calculation method to determine whether the shielding object is an adversarial-sample interfering object. Before the detailed steps of the method are described, some concepts that may appear in the embodiments of the present application are introduced.
First, the concept of a color space is introduced. Color is the human eye's differing perception of light of different frequencies; color both exists objectively (as light of different frequencies) and is perceived subjectively, and perception differs between observers. Based on this, in order to describe color more objectively and accurately, the concept of a color space (which may also be referred to as a color gamut) appeared. By creating a color model, a given color is represented by one-, two-, three-, or even four-dimensional space coordinates, and the color range representable by such a coordinate system is the color space. The types of color space commonly used at present are mainly the three-primary-color light mode (RGB), the printing four-color separation mode (CMYK), and the Lab color model, among others. For convenience of description, the three-primary-color light mode is taken as an example in the embodiment of the present application. The three-primary-color light mode, which may also be called the RGB color model or the red-green-blue color model, is an additive color model: the three primary colors red (Red), green (Green), and blue (Blue) are added in different proportions to produce light of various colors, and these define a color space. If the amount of red is defined as the X coordinate axis, the amount of green as the Y coordinate axis, and the amount of blue as the Z coordinate axis (the amounts of red, green, and blue may each correspond to any one of the X, Y, and Z coordinate axes; only one such correspondence is illustrated here, without limitation), a three-dimensional space is obtained in which every possible color has a unique position. The RGB color model is implemented in various ways according to the capabilities of the actual device system. Among these, the most common implementation gives each of the red, green, and blue color channels 256 color levels (the value range of a color level being an integer from 0 to 255). The color space based on such an RGB color model can express 256×256×256 ≈ 16.7 million colors, and some implementations may use more color levels per primary color (such as 512 color levels), so that a higher and more accurate color density can be achieved within the same range. Each color level is the pixel value of the pixel point at the corresponding position in the picture. For ease of understanding, the case where each primary color has 256 color levels is taken for illustration:
Assuming that the amount of red is defined as the X axis, the amount of green as the Y axis, and the amount of blue as the Z axis, then in an image, a pixel value of RGB(255, 0, 0) appears red, a pixel value of RGB(0, 255, 0) appears green, and a pixel value of RGB(0, 0, 255) appears blue. When the pixel values in an image take at least two different values on some coordinate axis, the image is displayed as a color image; when the three coordinates of every pixel value are equal, the image is displayed as a gray image. For example, when the color level of each of the three primary colors is 255, the pixel value RGB(255, 255, 255) is white; when the color level of each of the three primary colors is 0, the pixel value RGB(0, 0, 0) is black; and when the color levels of the three primary colors are equal, the pixel value RGB(m, m, m) is gray, where m is an integer and 0 < m < 255. For example, the pixel value RGB(100, 100, 100) represents a gray level of 100 and RGB(50, 50, 50) a gray level of 50, where gray level refers to the color depth of each pixel point in a black-and-white image.
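As an illustrative aside (not part of the application), the gray-image condition described above — every pixel having equal R, G, and B color levels — can be expressed as a short check; the function name is an assumption:

```python
def is_gray(pixels):
    """A picture is gray when every pixel has equal R, G, B color levels."""
    return all(r == g == b for r, g, b in pixels)

print(is_gray([(100, 100, 100), (50, 50, 50)]))  # True: gray levels 100 and 50
print(is_gray([(255, 0, 0), (0, 255, 0)]))       # False: pure red and green
```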
Secondly, the concept of picture entropy is introduced. In short, entropy describes a degree of disorder; it originally described the disorder of matter in physics, and the concept was later gradually extended to fields such as informatics and graphics as a measure of uncertainty. The larger the amount of information, the greater the uncertainty and the larger the entropy; conversely, the smaller the amount of information, the smaller the uncertainty and the smaller the entropy. The randomness and disorder of an event can therefore be judged by calculating its entropy value. Based on this, the embodiment of the present application introduces the concept of a picture entropy value, which reflects the distribution characteristics of the pixel values in an image: the larger the picture entropy value, the more colorful the corresponding image and the larger its information content.
Next, how the electronic device in the embodiment of the present application judges whether the shielding object is an adversarial-sample interfering object by the picture-entropy calculation method is described in detail. The judging step may include:
a. calculating pixel values of all pixel points in the shielding object to obtain a picture entropy value of the shielding object;
The electronic device can calculate the pixel values of all pixel points in the shielding object through a corresponding entropy calculation formula to obtain the picture entropy value of the shielding object. The calculation includes, but is not limited to, the following cases:
1) If the shielding object is a gray picture, the pixel values of all pixel points in the shielding object are the same on each coordinate axis of the RGB color model. The picture entropy value of the shielding object can then be obtained by, but not limited to, the entropy calculation formula
H = -∑_i p_i · log(p_i)
wherein i is the pixel value of each pixel point in the shielding object, p_i is the probability of occurrence of the pixel value i, and H is the picture entropy value of the shielding object. Fig. 6 illustrates the calculation results of the picture entropy value H for several different pixel distributions (when the shielding object is a gray picture): when the pixel values of all pixel points in the shielding object are RGB(255, 255, 255) or RGB(0, 0, 0) in the RGB color model, the picture entropy value of the shielding object obtained according to the above entropy calculation formula is H1 = H2 = 0; when the pixel value distributions of the pixel points in the shielding object are as shown in the two distribution forms on the right of fig. 6, the picture entropy values obtained according to the entropy calculation formula are H3 = 1.0413 and H4 = 1.3476, respectively.
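A minimal sketch of the gray-picture entropy calculation above, assuming base-2 logarithms (the application does not fix the logarithm base) and an illustrative function name:

```python
from collections import Counter
from math import log2

def picture_entropy(values):
    """Picture entropy H = -sum(p_i * log2(p_i)) over the empirical
    distribution of the given pixel values (gray-picture case)."""
    n = len(values)
    counts = Counter(values)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

# A constant region (all color levels equal) has entropy 0, matching
# the H1 = H2 = 0 cases in fig. 6; four equally likely distinct
# values give log2(4) = 2 bits.
print(picture_entropy([255, 255, 255, 255]))  # 0.0
print(picture_entropy([120, 30, 70, 100]))    # 2.0
```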
2) If the shielding object is a color picture, the pixel values of all pixel points in the shielding object are decomposed in the color space into a first vector pixel value (also called an X-axis pixel value), a second vector pixel value (also called a Y-axis pixel value), and a third vector pixel value (also called a Z-axis pixel value), to obtain a first set of the first vector pixel values, a second set of the second vector pixel values, and a third set of the third vector pixel values; then, a first picture entropy value of the first set, a second picture entropy value of the second set, and a third picture entropy value of the third set are respectively calculated according to the entropy calculation formula; finally, the arithmetic average of the first picture entropy value, the second picture entropy value, and the third picture entropy value is taken as the picture entropy value of the shielding object.
It should be noted that, in some embodiments of the present application, the picture entropy value on each coordinate axis may likewise be obtained according to the entropy calculation formula used for the gray picture:
H = -∑_i p_i · log(p_i)
When calculating the first picture entropy value on the X coordinate axis, i is the value of each pixel value in the first set, p_i is the probability of occurrence of i, and H is the first picture entropy value; when calculating the second picture entropy value on the Y coordinate axis, i is the value of each pixel value in the second set, p_i is the probability of occurrence of i, and H is the second picture entropy value; when calculating the third picture entropy value on the Z coordinate axis, i is the value of each pixel value in the third set, p_i is the probability of occurrence of i, and H is the third picture entropy value.
For ease of understanding, consider the case where the occlusion object contains 4 pixel points, and assume that their pixel values are RGB1 (120, 50, 80), RGB2 (30, 90, 40), RGB3 (70, 140, 200) and RGB4 (100, 160, 20). The electronic device decomposes the pixel values of the 4 pixel points into (120, 0, 0), (0, 50, 0), (0, 0, 80); (30, 0, 0), (0, 90, 0), (0, 0, 40); (70, 0, 0), (0, 140, 0), (0, 0, 200); and (100, 0, 0), (0, 160, 0), (0, 0, 20), respectively. The resulting first set of first vector pixel values is {(120, 0, 0), (30, 0, 0), (70, 0, 0), (100, 0, 0)}, the second set of second vector pixel values is {(0, 50, 0), (0, 90, 0), (0, 140, 0), (0, 160, 0)}, and the third set of third vector pixel values is {(0, 0, 80), (0, 0, 40), (0, 0, 200), (0, 0, 20)}. Then, the electronic device can use the entropy value calculation formula

H = −Σᵢ pᵢ·log₂(pᵢ)

to calculate the picture entropy value of each set, thereby obtaining a first picture entropy value Hx of the first set, a second picture entropy value Hy of the second set and a third picture entropy value Hz of the third set. Finally, H = (Hx + Hy + Hz)/3 may be taken as the picture entropy value of the occlusion object.
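The per-channel decomposition and averaging described above can be sketched as follows (a minimal illustration using the four example pixels; the function and variable names are illustrative, not from the patent, and base-2 logarithms are assumed):

```python
import math
from collections import Counter

def picture_entropy(values):
    """Shannon entropy H = -sum(p_i * log2(p_i)) over the distinct pixel values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# Pixel values of the 4 example pixel points in the occlusion object
pixels = [(120, 50, 80), (30, 90, 40), (70, 140, 200), (100, 160, 20)]

# Decompose into the per-axis sets (X, Y, Z) and compute each picture entropy value
hx = picture_entropy([p[0] for p in pixels])
hy = picture_entropy([p[1] for p in pixels])
hz = picture_entropy([p[2] for p in pixels])

# The picture entropy value of the occlusion object is the arithmetic mean
h = (hx + hy + hz) / 3
print(hx, hy, hz, h)  # each channel has 4 distinct values, so each entropy is 2.0
```

Because every channel here contains four distinct, equally likely values, each per-axis entropy is log₂(4) = 2.0, and so is their mean.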
b. Judging whether the entropy value of the picture is larger than a preset threshold value or not;
The calculated picture entropy value of the occlusion object is then compared with a preset threshold value to judge whether the occlusion object is an anti-sample interfering object. It should be noted that the preset threshold in the embodiment of the present application may be obtained in various manners: it may be set by a user according to an empirical value, or may be generated based on calculation by a deep learning network, which is not limited herein. Preferably, in some embodiments of the present application, the preset threshold may be determined as follows: first, a large number (for example M, where M is greater than or equal to 1) of normal face images (namely reference face images) are acquired offline, where no occlusion object exists in the face area range of these face images (namely original face images without any disturbance); then, the pixel values in each acquired normal face image are calculated to obtain a picture entropy value of each normal face image, where the picture entropy value may be calculated by the entropy value calculation formula described above; finally, the arithmetic average value of the picture entropy values corresponding to all the normal face images is taken, and the obtained arithmetic average value may be used as the preset threshold.
c. If the picture entropy value is larger than a preset threshold value, determining that the shielding object is an anti-sample interfering object.
If the electronic device determines that the picture entropy value of the occlusion object is greater than the preset threshold, it determines that the occlusion object is an anti-sample interfering object.
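The offline threshold determination described in step b above can be sketched as follows (a toy illustration in which small flattened pixel lists stand in for the M reference face images; all names are assumptions):

```python
import math
from collections import Counter

def picture_entropy(values):
    """Shannon entropy H = -sum(p_i * log2(p_i)) over the distinct pixel values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def preset_threshold(reference_images):
    """Arithmetic mean of the picture entropy values of the M reference face images."""
    return sum(picture_entropy(img) for img in reference_images) / len(reference_images)

# Toy stand-ins for M = 3 unoccluded reference face images
# (flattened grayscale pixel lists instead of real photos)
refs = [
    [10, 10, 20, 20],   # entropy 1.0
    [5, 5, 5, 5],       # entropy 0.0
    [1, 2, 3, 4],       # entropy 2.0
]
threshold = preset_threshold(refs)
print(threshold)  # (1.0 + 0.0 + 2.0) / 3 = 1.0
```

An occlusion object whose picture entropy value exceeds this mean would then be classified as an anti-sample interfering object.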
504. And recognizing the face image to obtain a recognition result.
If the electronic device detects that no occlusion object exists in the face area range of the face image, or detects that an occlusion object exists but it is not an anti-sample interfering object (i.e., there is only a common occlusion object such as ordinary glasses, an adhesive bandage or a mask, and no countermeasure-sample attack is present; for example, in step 503 the electronic device determines that the picture entropy value of the occlusion object is less than or equal to the preset threshold and therefore determines that the occlusion object is a common occlusion object), this indicates that the electronic device is not under a countermeasure-sample attack, and the electronic device directly identifies the face image to obtain an identification result. For the face recognition system of an existing electronic device, normal small-area face occlusion (such as wearing glasses or sticking an adhesive bandage) does not affect the recognition result. Take as an example the payment operation in which a user uses the mobile phone to pay a merchant by scanning a two-dimensional code: the mobile phone acquires the face image and compares it with the authenticated target face image that is allowed to initiate the payment operation; if the face image matches the target face image (the same person), the payment environment is safe, and the mobile phone can complete the payment operation; if the comparison fails (i.e., the face image is inconsistent with the target face image and is not the same person), the payment environment is unsafe, and the mobile phone can stop the payment operation.
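The compare-and-gate logic of the payment example above can be sketched as follows (a minimal illustration only; the embedding comparison, the `payment_allowed` name and the 0.8 similarity threshold are assumptions, not part of the patent, which does not specify how the face match is computed):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def payment_allowed(face_embedding, target_embedding, threshold=0.8):
    """Complete the payment only if the captured face matches the
    authenticated target face; otherwise stop the payment operation."""
    return cosine_similarity(face_embedding, target_embedding) >= threshold

print(payment_allowed([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # True  -> complete payment
print(payment_allowed([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # False -> stop payment
```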
505. And determining the face image as a countermeasure sample.
If the electronic device detects that a shielding object exists in the face area range in the face image and the shielding object is an anti-sample interfering object, the electronic device determines that the face image is an anti-sample, and indicates that the electronic device is under an attack of the anti-sample.
506. The challenge sample is processed.
Preferably, in some embodiments of the present application, after the electronic device determines that the face image is a countermeasure sample, the countermeasure sample may be further processed. The purpose of the processing is to eliminate the influence of the anti-sample interfering object, which may be removed directly or converted into a common occlusion object; this is not limited herein. In some embodiments of the present application, the processing may be performed as follows:
a. first, a target pixel value is determined, and the pixel values of all pixel points in the anti-sample interfering object are modified into the target pixel value.
It should be noted that, in the embodiment of the present application, there may be various ways to determine the target pixel value, including but not limited to:
arbitrarily selecting one pixel value within the value range of pixel values (namely any integer from 0 to 255) as the target pixel value;
or,
taking the pixel value (namely the target pixel value) of any pixel point A in the anti-sample interfering object, and then modifying the pixel values of all other pixel points in the anti-sample interfering object into the same pixel value as the pixel point A;
or,
taking the pixel value (namely a target pixel value) of any pixel point B in the face area range in the face image, and then modifying the pixel values of all the pixel points in the anti-sample interference object into the same pixel value as the pixel point B;
or,
and taking an arithmetic average value C (namely a target pixel value) of the pixel values of all the pixel points in the face area range in the face image, and modifying the pixel values of all the pixel points in the anti-sample interference object to be the same as the arithmetic average value C.
b. Algebraic linear transformation is performed on the pixel values of all the pixel points in the anti-sample interfering object.
An algebraic linear transformation is applied to the pixel value x of each pixel point in the anti-sample interfering object. For example, the pixel values of all pixel points in the anti-sample interfering object may be modified to (255 - x) or 0.5 × (255 - x); the form of the algebraic linear transformation is not limited herein.
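The two processing options above (overwriting the region with a target pixel value, or applying a linear transform such as x → 255 − x) can be sketched for a grayscale image as follows (illustrative names; the patent does not prescribe this implementation):

```python
def neutralize_region(image, mask, mode="mean", target=0):
    """Process the anti-sample interfering object marked by `mask` (1 = inside
    the region). mode="mean": overwrite with the arithmetic mean of the face
    area outside the region; mode="target": overwrite with `target`;
    mode="invert": apply the linear transform x -> 255 - x."""
    h, w = len(image), len(image[0])
    if mode == "mean":
        # target pixel value = mean of the pixels outside the interfering object
        outside = [image[r][c] for r in range(h) for c in range(w) if not mask[r][c]]
        target = round(sum(outside) / len(outside))
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                image[r][c] = (255 - image[r][c]) if mode == "invert" else target
    return image

img = [[10, 20], [200, 210]]
mask = [[0, 0], [1, 1]]          # bottom row is the interfering object
fixed_mean = neutralize_region([row[:] for row in img], mask, mode="mean")
fixed_inv = neutralize_region([row[:] for row in img], mask, mode="invert")
print(fixed_mean)  # [[10, 20], [15, 15]]
print(fixed_inv)   # [[10, 20], [55, 45]]
```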
507. And identifying the processed countermeasure sample to obtain an identification result.
After the anti-sample interfering object has been processed, either no occlusion object exists in the face area range of the face image or the remaining occlusion object is only a common occlusion object, so the electronic device can directly identify the face image to obtain an identification result. The specific identification process is similar to step 504 described above and is not repeated here. If the processed countermeasure sample is identified and the obtained identification result is the owner of the electronic device, this indicates a false trigger. If the obtained identification result is not the owner of the electronic device, that is, the electronic device determines that the face image is a real countermeasure sample, then preferably, in some embodiments of the present application, the electronic device may further generate a reminder notification for prompting a relevant user that the electronic device is suffering a countermeasure-sample attack. For example, the relevant user may be the owner of the electronic device (namely the victim), and the reminder notification may prompt the victim to act in time (e.g., modify the payment password or call the police); the relevant user may also be a service merchant corresponding to the electronic device (e.g., if the attacker uses the victim's mobile phone to pay online at a supermarket, the corresponding service merchant is the supermarket's cashier platform). The reminder notification may implement the reminding function in various manners, including but not limited to the following (for convenience of understanding, the electronic device is taken as a mobile phone for illustration):
a. The reminding notification reminds in the form of voice broadcasting, alarm bell and the like on the mobile phone.
This reminding manner is mainly intended to attract the attention of users around the mobile phone (such as cashier staff or other customers near the attacker), so that the attacker abandons the countermeasure-sample attack for fear of exposure, or the surrounding users intervene to stop it. As shown in fig. 7, when the mobile phone determines that the face image acquired by its camera at the current moment is a countermeasure sample, the mobile phone can play the voice "Under countermeasure-sample attack, please stop paying!" or similar reminding content (such as "This mobile phone is suspected to be stolen, please stop paying!" or "This mobile phone is being used illegally!"); the specific wording of the voice broadcast is not limited. In addition to the voice broadcast, the mobile phone can also play an alarm ring to the same effect, and the alarm ring can take various forms, as shown in fig. 8: the mobile phone may emit a "beep! beep! beep!" alarm or a "woo! woo! woo!" siren; the specific form of the alarm ring is not limited herein.
b. The mobile phone sends the reminding notification to the corresponding server.
The reminder notification generated by the mobile phone can further be sent to a server corresponding to the mobile phone (such as the merchant platform processing an ongoing online payment). As shown in fig. 9, when, during an online payment to the merchant platform, the mobile phone detects that the face image of the attacker captured at the current moment is a countermeasure sample, the mobile phone sends the reminder notification to the corresponding receiving merchant platform to remind the merchant that the payment process is unsafe; after receiving the reminder notification, the merchant platform can actively terminate the payment process to safeguard the victim's finances.
c. The mobile phone sends the alert notification to other target electronic devices associated with the mobile phone.
The reminder notification generated by the mobile phone may further be sent to other target electronic devices associated with the mobile phone. As shown in fig. 10, if the owner of the mobile phone a that is being subjected to the countermeasure-sample attack is the victim B, and the victim B owns, in addition to the mobile phone a, a mobile phone b, a tablet computer c and a smart watch d that the victim B has associated with the mobile phone a in advance (for example, by registering a unified ID account so that content can be shared), then the mobile phone b, the tablet computer c and the smart watch d are the other target electronic devices associated with the mobile phone a in the embodiment of the present application. If an attacker is carrying out a countermeasure-sample attack on the mobile phone a of the victim B, the mobile phone a generates a reminder notification (for example, "Mobile phone a is suffering a countermeasure-sample attack, please intervene!") and sends it to at least one of the mobile phone b, the tablet computer c or the smart watch d. In this way, if the victim B is wearing the smart watch d, or is using the mobile phone b or the tablet computer c, the victim B can learn in time that the mobile phone a is being used illegally by the attacker and can intervene promptly, for example by changing the payment password on another target electronic device or alerting the relevant authorities.
The embodiment of the application may divide the functional modules of the electronic device according to the above detection method, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
For example, fig. 11 shows a schematic diagram of an electronic device, and an electronic device provided in an embodiment of the present application may include:
an acquisition unit 1101, configured to acquire a face image at a current moment through a camera device;
a judging unit 1102, configured to judge whether a blocking object is an anti-sample interfering object, where the blocking object is located in a face area range in the face image;
the determining unit 1103 is configured to determine that the face image is an opposing sample if the obstruction is the opposing sample interfering object.
Preferably, in some embodiments of the present application, the judging unit 1102 may further include more sub-units to implement more functions. Fig. 12 is another schematic diagram of an electronic device according to an embodiment of the present application, where the electronic device specifically includes: an acquisition unit 1201, a judgment unit 1202, and a determination unit 1203. The functions of the acquisition unit 1201, the judgment unit 1202, and the determination unit 1203 are similar to those of the acquisition unit 1101, the judgment unit 1102, and the determination unit 1103 in fig. 11, and are not described here. In the embodiment of the present application, the judging unit 1202 may further include:
A calculating subunit 12021, configured to calculate pixel values of all pixel points in the occlusion object, so as to obtain a picture entropy value of the occlusion object;
a judging subunit 12022, configured to judge whether the entropy value of the picture is greater than a preset threshold, where the preset threshold is determined according to a first preset manner;
the first determining subunit 12023 is configured to determine that the occlusion object is the anti-sample interfering object if the entropy value of the picture is greater than the preset threshold value.
Preferably, in the embodiment of the present application, the judging unit 1202 may further include a second determining subunit 12024, specifically configured to: if the picture entropy value is smaller than or equal to the preset threshold value, determining that the occlusion object is a common occlusion object.
Preferably, the first preset manner may include: firstly, obtaining M reference face images, wherein the reference face images are face images without shielding objects in a face area range, and M is more than or equal to 1; then, calculating pixel values of all pixel points in a target reference face image in the M reference face images to obtain a target picture entropy value of the target reference face image; and finally, determining the arithmetic average value of the entropy values of the M target pictures corresponding to the M reference face images as the preset threshold value.
Preferably, in some embodiments of the present application, the computing subunit 12021 may be further specifically configured to:
decomposing the pixel values of all the pixel points in the shielding object into a first vector pixel value, a second vector pixel value and a third vector pixel value in a color space to obtain a first set of the first vector pixel value, a second set of the second vector pixel value and a third set of the third vector pixel value; respectively calculating a first picture entropy value of the first set, a second picture entropy value of the second set and a third picture entropy value of the third set according to an entropy value calculation formula; and then, determining the arithmetic average value of the first picture entropy value, the second picture entropy value and the third picture entropy value as the picture entropy value of the shielding object.
Preferably, in some embodiments of the present application, the entropy value calculation formula may include:
H = −Σᵢ pᵢ·log₂(pᵢ)

wherein i is the value of each element in the first set, the second set or the third set, pᵢ is the probability of occurrence of i, and H is correspondingly the first picture entropy value, the second picture entropy value or the third picture entropy value.
Preferably, in some embodiments of the present application, the electronic device may further include more units to implement more functions, for example, when the determining unit 1202 determines that the obstruction is a countersample interfering object, the electronic device may further include:
A processing unit 1204, configured to process the challenge sample according to a second preset manner;
the identifying unit 1205 is configured to identify the processed challenge sample, so as to obtain an identification result.
Preferably, in the embodiment of the present application, if the judging unit 1202 further includes the second determining subunit 12024, the identifying unit 1205 may be further specifically configured to identify the common obstruction to obtain an identifying result.
Preferably, the second preset manner may include: determining a target pixel value, and modifying the pixel values of all pixel points in the anti-sample interfering object into the target pixel value; or, the pixel values of all the pixel points in the anti-sample interfering object are algebraically and linearly transformed.
Preferably, determining the target pixel value may also include the following:
1) Arbitrarily selecting one pixel value within the value range of pixel values as the target pixel value;
2) Determining the pixel value of any pixel point in the anti-sample interfering object as the target pixel value;
3) Determining the pixel value of any pixel point in the face area as the target pixel value;
4) And determining the arithmetic average value of the pixel values of all the pixel points in the face area as the target pixel value.
Preferably, in some embodiments of the present application, after the determining unit 1203 determines that the face image is a countermeasure sample, the electronic device may further include:
a generating unit 1206 for generating a reminder notification;
a broadcasting unit 1207, configured to broadcast the alert notification in voice;
and/or the number of the groups of groups,
a sending unit 1208 is configured to send the alert notification to a corresponding server and/or to an associated target electronic device.
The specific functions and structures of the electronic device in the embodiments corresponding to fig. 11 and fig. 12 are used to implement the steps of processing by the electronic device in the foregoing fig. 4 to fig. 10, and are not described herein in detail.
Fig. 13 is another schematic diagram of an electronic device according to an embodiment of the present application. For convenience of description, only those portions related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method portions of the embodiments of the present application. The electronic device may include a cell phone, tablet computer, smart watch, personal computer, etc. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera device 193 (which may also be referred to as a camera 193), a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It will be appreciated by those skilled in the art that the structure of the electronic device 100 shown in fig. 13 does not constitute a specific limitation of the electronic device 100, and in other embodiments of the present application, more or fewer components than illustrated may be included, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The following describes each constituent element of the electronic apparatus 100 in detail with reference to fig. 13:
the processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through an I2C bus interface to implement a touch function of the electronic device 100. Similarly, in the embodiment of the present application, the processor 110 may be coupled to the camera 193 through the I2C interface; if the camera collects the face image at the current moment, the camera may transmit the collected face image to the processor 110 through the I2C bus interface for processing.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the I2S interface, so as to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication. In some implementations of the embodiments of the present application, if the electronic device 100 is suffering a countermeasure-sample attack, the processor 110 may generate a reminder notification, and if the processor 110 is coupled to the audio module 170 through the I2S bus interface or the PCM bus interface, the reminder notification may be sent to the audio module 170 for playback.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100. Therefore, in the embodiment of the present application, the processor 110 may be coupled to the camera 193 through the I2C interface, and may also communicate with the camera 193 through the CSI interface, that is, if the camera collects a face image at the current moment, the camera may not only transmit the collected face image to the processor 110 through the I2C bus interface for processing, but also transmit the collected face image to the processor 110 through the CSI interface for processing.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.); in this embodiment, the sound signal is a reminder notification (for example, a voice broadcast of "An adversarial sample attack is in progress, please stop the payment!" or a ring tone). The application processor may also display an image or video (for example, the face image or face video at the current time in this embodiment of the application) through the display 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. In the embodiment of the present application, the processor may send the generated alert notification to the corresponding server through the mobile communication module 150 and the antenna 1, or to other target electronic devices associated therewith. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, for example, in the embodiment of the present application, if the electronic device 100 captures a face video of the current time through the camera 193, the face video may be processed by the GPU, and a face image may be extracted from the face video of the current time. The display 194 and the application processor are connected. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used for displaying images, videos, etc., and in the embodiment of the present application may be used for displaying the face video or face image taken by the camera at the current moment. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like, and in this embodiment of the present application, the face image at the current moment is obtained through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video, such as face images or face video at the current time in the embodiments of the present application. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. For example, music playing, recording, etc., in this embodiment, the voice broadcasting of the reminding notification or the playing of the alert ring is realized.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece," is used to convert the audio electrical signal into a sound signal. When the electronic device 100 is answering a telephone call or a voice message, voice can be received by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C, inputting a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but at different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
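The two-level pressure dispatch described above for the short message icon can be sketched as follows. This is only an illustration: the threshold value and the instruction names are invented here, not taken from the embodiment.

```python
def dispatch_sms_touch(intensity, first_pressure_threshold=0.6):
    """Map the touch intensity on the short message icon to an instruction.

    The threshold and instruction names are hypothetical; the text only
    states that a first pressure threshold separates the two instructions.
    """
    if intensity < first_pressure_threshold:
        return "view_sms"   # lighter press: execute the view-message instruction
    return "new_sms"        # press at or above the threshold: create a new message

print(dispatch_sms_touch(0.2))  # view_sms
print(dispatch_sms_touch(0.8))  # new_sms
```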
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip can then be set according to the detected open or closed state of the holster or of the flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using the photodiode. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may use the collected fingerprint features to implement fingerprint unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is for detecting temperature. In some embodiments, the electronic device 100 performs a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by temperature sensor 180J exceeds a threshold, electronic device 100 performs a reduction in the performance of a processor located in the vicinity of temperature sensor 180J in order to reduce power consumption to implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid the low temperature causing the electronic device 100 to be abnormally shut down. In other embodiments, when the temperature is below a further threshold, the electronic device 100 performs boosting of the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperatures.
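The three-threshold temperature processing strategy described above can be sketched as follows. All threshold values here are invented for the illustration; the text only states that such thresholds exist and what action each one triggers.

```python
def thermal_policy(reported_temp_c,
                   high_threshold=45.0,
                   low_threshold=0.0,
                   very_low_threshold=-10.0):
    """Map the temperature reported by the sensor to one protective action.

    Threshold values are hypothetical; the embodiment only names the
    actions (throttle, heat battery, boost battery output voltage).
    """
    if reported_temp_c > high_threshold:
        return "throttle_nearby_processor"     # reduce performance to cut power draw
    if reported_temp_c < very_low_threshold:
        return "boost_battery_output_voltage"  # avoid abnormal low-temperature shutdown
    if reported_temp_c < low_threshold:
        return "heat_battery"
    return "no_action"

print(thermal_policy(50))    # throttle_nearby_processor
print(thermal_policy(-5))    # heat_battery
print(thermal_policy(-20))   # boost_battery_output_voltage
```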
The touch sensor 180K is also referred to as a "touch panel." The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen." The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset, combined into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate charging status and battery level changes, and to indicate messages, missed calls, notifications, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195, to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The specific functions and structures of the electronic device 100 in the embodiment corresponding to fig. 13 are used to implement the steps of processing performed by the electronic device in the foregoing fig. 4 to 10, and are not described herein in detail.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 14 is a software configuration block diagram of the electronic device 100 of the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 14, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 14, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc. In the embodiment of the application, the data may include a face image (including a face image directly shot or a face image intercepted from a face video) of the current moment acquired by the camera, a reminder notification, and the like.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar, and can be used to convey notification-type messages that automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, etc. The notification manager may also present notifications in the form of a chart or scroll bar text in the system top status bar, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, an indicator light blinks, etc.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functionality that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine converts the Java files of the application layer and the application framework layer into binary files and executes them. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes the workflow of the software and hardware of the electronic device 100 in an exemplary manner in connection with the scene that the electronic device captures the face image of the current time through the camera in the embodiment of the present application.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a tap and the corresponding control being the camera application icon as an example: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer, and captures the face image (or the face video) at the current moment through the camera 193.
The software structure of the electronic device in the embodiment corresponding to the foregoing fig. 4 to 10 may be based on the software structure shown in fig. 14, and the software structure shown in fig. 14 may correspondingly perform the steps in the embodiment of the method in fig. 4 to 10, which are not described herein in detail.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be stored by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.

Claims (20)

1. A method for detecting an adversarial sample, applied to a face recognition scenario, characterized by comprising the following steps:
acquiring a face image at the current moment through camera equipment;
determining whether an occlusion object is an adversarial-sample interfering object, wherein the occlusion object is located within the face region in the face image, and the determining comprises: calculating the pixel values of all pixel points in the occlusion object to obtain a picture entropy value of the occlusion object; determining whether the picture entropy value is greater than a preset threshold, wherein the preset threshold is determined according to a first preset mode; and if the picture entropy value is greater than the preset threshold, determining that the occlusion object is the adversarial-sample interfering object;
and if the shielding object is the countermeasures sample interference object, determining the face image as the countermeasures sample.
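The decision in claim 1 reduces to an entropy threshold test on the occluded region. A minimal sketch of that test, assuming an 8-bit grey-level occluder patch; the function names, the grey-level (single-channel) entropy, and the example threshold are illustrative assumptions, not taken from the patent text:

```python
import numpy as np

def occluder_entropy(occluder: np.ndarray) -> float:
    """Shannon entropy (in bits) of the 8-bit pixel values of the occluder."""
    counts = np.bincount(occluder.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                         # drop zero-probability values
    return float(-(p * np.log2(p)).sum())

def detect_adversarial_example(occluder: np.ndarray, preset_threshold: float) -> bool:
    """True -> the face image containing this occluder is flagged as an adversarial example."""
    return occluder_entropy(occluder) > preset_threshold
```

The intuition matches the claim: a common occluder (glasses, a plain mask) is locally smooth and low-entropy, while an adversarial patch is noise-like and pushes the entropy toward its 8-bit maximum.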
2. The detection method according to claim 1, wherein calculating the pixel values of all the pixel points in the occlusion object to obtain the picture entropy value of the occlusion object comprises:
decomposing the pixel values of all the pixel points in the occlusion object into a first vector pixel value, a second vector pixel value and a third vector pixel value in a color space, to obtain a first set of first vector pixel values, a second set of second vector pixel values and a third set of third vector pixel values;
calculating, according to an entropy calculation formula, a first picture entropy value of the first set, a second picture entropy value of the second set and a third picture entropy value of the third set respectively; and
determining the arithmetic mean of the first picture entropy value, the second picture entropy value and the third picture entropy value as the picture entropy value of the occlusion object.
3. The detection method according to claim 2, wherein the entropy calculation formula is:
H = -∑_i p_i log₂ p_i
wherein i is the value of each element in the first set, the second set or the third set, p_i is the probability of i, and H is correspondingly the first picture entropy value, the second picture entropy value or the third picture entropy value.
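Claims 2 and 3 together describe a per-channel entropy averaged over a color space. A hedged sketch, assuming RGB as the color space and 8-bit channels; the function names are illustrative, not from the patent:

```python
import numpy as np

def channel_entropy(channel: np.ndarray) -> float:
    """H = -sum_i p_i * log2(p_i) over the 256 possible 8-bit values i (claim 3)."""
    counts = np.bincount(channel.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                         # terms with p_i = 0 contribute nothing
    return float(-(p * np.log2(p)).sum())

def picture_entropy(occluder_rgb: np.ndarray) -> float:
    """Arithmetic mean of the three per-channel entropies (claim 2, last step)."""
    return float(np.mean([channel_entropy(occluder_rgb[..., c]) for c in range(3)]))
```

A uniformly colored patch yields entropy 0 in every channel; a patch whose channel values are spread over all 256 levels approaches the 8-bit maximum of 8 bits per channel.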
4. The detection method according to any one of claims 1 to 3, wherein if the occlusion object is the adversarial-example interfering object, the method further comprises:
processing the adversarial example according to a second preset mode; and
recognizing the processed adversarial example to obtain a recognition result.
5. The detection method according to claim 4, wherein the second preset mode comprises:
determining a target pixel value, and modifying the pixel values of all the pixel points in the adversarial-example interfering object to the target pixel value;
or,
performing an algebraic linear transformation on the pixel values of all the pixel points in the adversarial-example interfering object.
6. The detection method according to claim 5, wherein determining the target pixel value comprises:
arbitrarily selecting one pixel value within the value range of pixel values as the target pixel value;
or,
determining the pixel value of any pixel point in the adversarial-example interfering object as the target pixel value;
or,
determining the pixel value of any pixel point in the face region as the target pixel value;
or,
determining the arithmetic mean of the pixel values of all the pixel points in the face region as the target pixel value.
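The two branches of the "second preset mode" in claims 5 and 6 can be sketched as follows. This is a minimal illustration, assuming a rectangular (y0, y1, x0, x1) bounding box for the interfering region and an example a·x + b transform; all function names, the box convention, and the coefficients a and b are assumptions, not from the patent:

```python
import numpy as np

def overwrite_region(img: np.ndarray, box, target_value) -> np.ndarray:
    """Claim 5, first branch: set every pixel in the region to the target pixel value."""
    y0, y1, x0, x1 = box
    out = img.copy()
    out[y0:y1, x0:x1] = target_value
    return out

def mean_face_value(img: np.ndarray, face_box) -> np.ndarray:
    """Claim 6, last branch: arithmetic mean of the face-region pixel values."""
    y0, y1, x0, x1 = face_box
    return img[y0:y1, x0:x1].reshape(-1, img.shape[-1]).mean(axis=0)

def linear_transform_region(img: np.ndarray, box, a=0.5, b=64) -> np.ndarray:
    """Claim 5, second branch: x -> a*x + b on the region, clipped to 8-bit range."""
    y0, y1, x0, x1 = box
    out = img.astype(np.float64)
    out[y0:y1, x0:x1] = a * out[y0:y1, x0:x1] + b
    return np.clip(out, 0, 255).astype(np.uint8)
```

Either branch destroys the carefully tuned perturbation pattern inside the patch while leaving the rest of the face untouched, which is why the processed image can still be passed to the recognizer in claim 4.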
7. The detection method according to any one of claims 1 to 3, wherein after the face image is determined to be an adversarial example, the method further comprises:
generating a reminder notification; and
broadcasting the reminder notification by voice;
and/or,
sending the reminder notification to a corresponding server;
and/or,
sending the reminder notification to an associated target electronic device.
8. The detection method according to any one of claims 1 to 3, wherein the first preset mode comprises:
obtaining M reference face images, wherein each reference face image is a face image whose face region contains no occlusion object or only a common occlusion object, and M ≥ 1;
calculating the pixel values of all the pixel points in a target reference face image among the M reference face images to obtain a target picture entropy value of the target reference face image; and
determining the arithmetic mean of the M target picture entropy values respectively corresponding to the M reference face images as the preset threshold.
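Claim 8's "first preset mode" derives the threshold empirically from reference images. A hedged sketch, assuming single-channel 8-bit reference images and a grey-level entropy; the function names are illustrative assumptions:

```python
import numpy as np

def picture_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) over the 8-bit pixel values of an image."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def preset_threshold(reference_images) -> float:
    """Arithmetic mean of the M reference entropies (claim 8, M >= 1)."""
    return float(np.mean([picture_entropy(r) for r in reference_images]))

def is_adversarial(occluder: np.ndarray, threshold: float) -> bool:
    """Claim 1's test: the occluder is flagged when its entropy exceeds the threshold."""
    return picture_entropy(occluder) > threshold
```

Calibrating against faces with no occluder, or only common occluders, anchors the threshold to the entropy statistics of benign inputs, so only the unusually noise-like adversarial patches exceed it.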
9. The detection method according to claim 1, wherein
if the picture entropy value is less than or equal to the preset threshold, the occlusion object is determined to be a common occlusion object; and
the method further comprises:
recognizing the common occlusion object to obtain a recognition result.
10. An electronic device, comprising:
one or more camera devices;
one or more touch screens;
one or more processors; and
one or more memories;
wherein the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the following steps:
acquiring a face image at a current moment, wherein the face image is captured by the camera device;
determining whether an occlusion object is an adversarial-example interfering object, wherein the occlusion object is located within a face region of the face image, and the determining comprises: calculating pixel values of all pixel points in the occlusion object to obtain a picture entropy value of the occlusion object, and determining whether the picture entropy value is greater than a preset threshold, the preset threshold being determined according to a first preset mode, wherein if the picture entropy value is greater than the preset threshold, the occlusion object is determined to be the adversarial-example interfering object; and
if the occlusion object is the adversarial-example interfering object, determining that the face image is an adversarial example.
11. The electronic device according to claim 10, wherein the instructions, when executed by the electronic device, cause the electronic device to further perform the following steps:
decomposing the pixel values of all the pixel points in the occlusion object into a first vector pixel value, a second vector pixel value and a third vector pixel value in a color space, to obtain a first set of first vector pixel values, a second set of second vector pixel values and a third set of third vector pixel values;
calculating, according to an entropy calculation formula, a first picture entropy value of the first set, a second picture entropy value of the second set and a third picture entropy value of the third set respectively; and
determining the arithmetic mean of the first picture entropy value, the second picture entropy value and the third picture entropy value as the picture entropy value of the occlusion object.
12. The electronic device according to claim 11, wherein the entropy calculation formula is:
H = -∑_i p_i log₂ p_i
wherein i is the value of each element in the first set, the second set or the third set, p_i is the probability of i, and H is correspondingly the first picture entropy value, the second picture entropy value or the third picture entropy value.
13. The electronic device according to any one of claims 10 to 12, wherein if the electronic device determines that the occlusion object is the adversarial-example interfering object, the instructions, when executed by the electronic device, cause the electronic device to further perform the following steps:
processing the adversarial example according to a second preset mode; and
recognizing the processed adversarial example to obtain a recognition result.
14. The electronic device according to claim 13, wherein the second preset mode comprises:
determining a target pixel value, and modifying the pixel values of all the pixel points in the adversarial-example interfering object to the target pixel value;
or,
performing an algebraic linear transformation on the pixel values of all the pixel points in the adversarial-example interfering object.
15. The electronic device according to claim 14, wherein determining the target pixel value comprises:
arbitrarily selecting one pixel value within the value range of pixel values as the target pixel value;
or,
determining the pixel value of any pixel point in the adversarial-example interfering object as the target pixel value;
or,
determining the pixel value of any pixel point in the face region as the target pixel value;
or,
determining the arithmetic mean of the pixel values of all the pixel points in the face region as the target pixel value.
16. The electronic device according to any one of claims 10 to 12, wherein after the electronic device determines that the face image is an adversarial example, the instructions, when executed by the electronic device, cause the electronic device to further perform the following steps:
generating a reminder notification; and
broadcasting the reminder notification by voice;
and/or,
sending the reminder notification to a corresponding server;
and/or,
sending the reminder notification to an associated target electronic device.
17. The electronic device according to any one of claims 10 to 12, wherein the first preset mode comprises:
obtaining M reference face images, wherein each reference face image is a face image whose face region contains no occlusion object or only a common occlusion object, and M ≥ 1;
calculating the pixel values of all the pixel points in a target reference face image among the M reference face images to obtain a target picture entropy value of the target reference face image; and
determining the arithmetic mean of the M target picture entropy values respectively corresponding to the M reference face images as the preset threshold.
18. The electronic device according to claim 10, wherein if the picture entropy value is less than or equal to the preset threshold, the instructions, when executed by the electronic device, cause the electronic device to further perform the following steps:
determining that the occlusion object is a common occlusion object; and
recognizing the common occlusion object to obtain a recognition result.
19. An electronic device, wherein the electronic device implements the detection method according to any one of claims 1 to 9 by hardware, or by hardware executing corresponding software, the hardware or the software comprising one or more modules corresponding to the detection method according to any one of claims 1 to 9.
20. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the detection method according to any one of claims 1 to 9.
CN201910425689.5A 2019-05-21 2019-05-21 Method for detecting countermeasure sample and electronic equipment Active CN110321790B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910425689.5A CN110321790B (en) 2019-05-21 2019-05-21 Method for detecting countermeasure sample and electronic equipment
PCT/CN2020/091027 WO2020233564A1 (en) 2019-05-21 2020-05-19 Method and electronic device for detecting adversarial example

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910425689.5A CN110321790B (en) 2019-05-21 2019-05-21 Method for detecting countermeasure sample and electronic equipment

Publications (2)

Publication Number Publication Date
CN110321790A CN110321790A (en) 2019-10-11
CN110321790B true CN110321790B (en) 2023-05-12

Family

ID=68113242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910425689.5A Active CN110321790B (en) 2019-05-21 2019-05-21 Method for detecting countermeasure sample and electronic equipment

Country Status (2)

Country Link
CN (1) CN110321790B (en)
WO (1) WO2020233564A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321790B (en) * 2019-05-21 2023-05-12 华为技术有限公司 Method for detecting countermeasure sample and electronic equipment
CN110768971B (en) * 2019-10-16 2021-12-03 伍军 Confrontation sample rapid early warning method and system suitable for artificial intelligence system
CN110705652B (en) * 2019-10-17 2020-10-23 北京瑞莱智慧科技有限公司 Countermeasure sample, generation method, medium, device and computing equipment thereof
CN111046379B (en) * 2019-12-06 2021-06-18 支付宝(杭州)信息技术有限公司 Anti-attack monitoring method and device
CN111325127A (en) * 2020-02-12 2020-06-23 上海云从汇临人工智能科技有限公司 Abnormal object judgment method, system, machine readable medium and equipment
CN111353403A (en) * 2020-02-24 2020-06-30 中国人民大学 Method and system for detecting confrontation sample of deep neural network image
CN111811657B (en) * 2020-07-07 2022-05-27 杭州海康威视数字技术股份有限公司 Method and device for correcting human face temperature measurement and storage medium
CN112001249B (en) * 2020-07-21 2022-08-26 山东师范大学 Method for canceling biological characteristics by generating sticker structure in physical world
CN112673381B (en) * 2020-11-17 2022-04-12 华为技术有限公司 Method and related device for identifying confrontation sample
CN112488130B (en) * 2020-12-17 2023-08-15 苏州聚悦信息科技有限公司 AI micro hole wall detection method
CN113409407B (en) * 2021-05-17 2022-05-17 海南师范大学 Countermeasure sample defense method for acquiring low-frequency information based on average compression
CN113240028A (en) * 2021-05-24 2021-08-10 浙江大学 Anti-sample block attack detection method based on class activation graph
CN113222056B (en) * 2021-05-28 2022-11-08 北京理工大学 Countercheck sample detection method for image classification system attack
CN114240732B (en) * 2021-06-24 2023-04-07 中国人民解放军陆军工程大学 Anti-patch generation method for attacking face verification model
CN113705620B (en) * 2021-08-04 2023-08-15 百度在线网络技术(北京)有限公司 Training method and device for image display model, electronic equipment and storage medium
CN115712893A (en) * 2021-08-20 2023-02-24 华为技术有限公司 Attack detection method and device
CN114333031A (en) * 2021-12-31 2022-04-12 北京瑞莱智慧科技有限公司 Vulnerability detection method and device of living body detection model and storage medium
CN114661940B (en) * 2022-01-28 2023-08-08 宁波大学 Method suitable for quickly acquiring voice countermeasure sample under black box attack
CN115909020B (en) * 2022-09-30 2024-01-09 北京瑞莱智慧科技有限公司 Model robustness detection method, related device and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108171215A (en) * 2018-01-25 2018-06-15 河南大学 Face Pseudo-median filter and camouflage category detection method based on low-rank variation dictionary and rarefaction representation classification
CN109740615A (en) * 2018-12-29 2019-05-10 武汉大学 A kind of minimizing technology of pair of attack resistance sample disturbance

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN104143078B (en) * 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 Living body faces recognition methods, device and equipment
CN108537271B (en) * 2018-04-04 2021-02-05 重庆大学 Method for defending against sample attack based on convolution denoising self-encoder
CN109214327B (en) * 2018-08-29 2021-08-03 浙江工业大学 Anti-face recognition method based on PSO
CN110321790B (en) * 2019-05-21 2023-05-12 华为技术有限公司 Method for detecting countermeasure sample and electronic equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN108171215A (en) * 2018-01-25 2018-06-15 河南大学 Face Pseudo-median filter and camouflage category detection method based on low-rank variation dictionary and rarefaction representation classification
CN109740615A (en) * 2018-12-29 2019-05-10 武汉大学 A kind of minimizing technology of pair of attack resistance sample disturbance

Also Published As

Publication number Publication date
CN110321790A (en) 2019-10-11
WO2020233564A1 (en) 2020-11-26

Similar Documents

Publication Publication Date Title
CN110321790B (en) Method for detecting countermeasure sample and electronic equipment
EP3859576B1 (en) Application permission management method and electronic device
CN114553814B (en) Method and device for processing push message
CN112085647B (en) Face correction method and electronic equipment
EP3882793A1 (en) Electronic device control method and electronic device
EP4190418A1 (en) Image prediction method, electronic device and storage medium
CN115914461B (en) Position relation identification method and electronic equipment
CN114528581A (en) Safety display method and electronic equipment
CN113496477A (en) Screen detection method and electronic equipment
CN113676440B (en) Authority negotiation method and device in communication process and electronic equipment
CN116305093A (en) Method for operating applet and electronic device
CN115792431B (en) Abnormal position detection method and electronic equipment
CN114157412B (en) Information verification method, electronic device and computer readable storage medium
CN116048831B (en) Target signal processing method and electronic equipment
CN116051351B (en) Special effect processing method and electronic equipment
CN116233599B (en) Video mode recommendation method and electronic equipment
CN116669020B (en) Password management method, password management system and electronic equipment
CN115016666B (en) Touch processing method, terminal equipment and storage medium
CN115495716B (en) Local authentication method and electronic equipment
EP4311277A1 (en) Contact tracing method and related device
CN114329595B (en) Application program detection method, device, storage medium and program product
CN114205318B (en) Head portrait display method and electronic equipment
CN116321265B (en) Network quality evaluation method, electronic device and storage medium
CN117496964A (en) Login method and electronic equipment
CN116028966A (en) Application display method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant