CN111401197B - Picture risk identification method, device and equipment - Google Patents


Info

Publication number
CN111401197B
CN111401197B
Authority
CN
China
Prior art keywords
picture
points
determining
feature
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010163296.4A
Other languages
Chinese (zh)
Other versions
CN111401197A (en)
Inventor
徐文浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010163296.4A
Publication of CN111401197A
Application granted
Publication of CN111401197B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. a local or distributed file system or database
    • G06F21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

A picture risk identification method, device and equipment are disclosed. When performing risk identification on a picture, the feature point sets of the picture to be identified and of the verified picture are first obtained; the number of feature points that can be matched between the two sets is then determined; and the matching degree of the two pictures is further determined from the number of matchable feature points and the numbers of feature points in the two sets. If the matching degree exceeds a certain threshold, the picture to be identified is judged to be risky, thereby protecting the user's personal privacy data.

Description

Picture risk identification method, device and equipment
Technical Field
Embodiments of the present disclosure relate to the field of information technologies, and in particular, to a method, an apparatus, and a device for identifying risk of a picture.
Background
Face recognition technology has been widely used in fields such as application login, device unlocking, and even payment verification. In practical applications, lawbreakers may collect photos of victims by various means, pass recognition with them, and log in to the victims' accounts to obtain private information. For example, a lawbreaker collects a picture of a victim, uses the picture to perform face authentication, and then, after fine-tuning, uses a partial screenshot of the picture to perform login and other operations.
Based on this, the embodiments of the present specification provide a more accurate picture risk recognition scheme.
Disclosure of Invention
The embodiment of the application aims to provide a more accurate picture risk identification scheme.
In order to solve the technical problems, the embodiment of the application is realized as follows:
a picture risk identification method, comprising:
acquiring a picture to be identified, and determining a verified picture corresponding to the picture to be identified;
respectively acquiring a feature point set F (A) of the picture to be identified and a feature point set F (B) of the verified picture, wherein the dimensions of the feature points in F (A) and F (B) are the same;
traversing the set F (A), and determining the distance between each feature point in the set F (B) and the selected feature point aiming at any selected feature point in the set F (A);
judging whether the selected feature points belong to the matchable feature points or not according to the distance between each feature point in the F (B) and the selected feature points;
counting the number N of the matchable characteristic points in the F (A), determining the number N (A) of the characteristic points in the set F (A), and determining the number N (B) of the characteristic points in the F (B);
and determining the matching degree P of the picture to be identified and the verified picture according to the number N, N (A) and N (B) of the matchable feature points, and determining that the picture to be identified has risk if the matching degree P exceeds a preset threshold value.
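The claimed steps can be sketched end to end as follows. This is a plain-Python illustration under the assumption that feature points are equal-length binary descriptor strings; the feature extraction step itself (e.g., ORB) is outside the sketch, and both threshold values are illustrative placeholders, not values fixed by the patent.

```python
def identify_risk(FA, FB, distance_ratio=0.5, match_threshold=0.4):
    """Hedged sketch of the claimed method. FA / FB are the feature point
    sets F(A) / F(B) of the picture to be identified and of the verified
    picture, each point an equal-length binary string. Returns the
    matching degree P and the risk decision."""
    def hamming(x, y):
        # distance between two feature points: count of differing bits
        return sum(c1 != c2 for c1, c2 in zip(x, y))

    n_matchable = 0
    for fa in FA:  # traverse F(A)
        dists = sorted(hamming(fa, fb) for fb in FB)
        if len(dists) >= 2:
            d1, d2 = dists[0], dists[1]
            # matchable when the nearest point stands out distinctly
            if d2 == 0 or d1 / d2 < distance_ratio:
                n_matchable += 1

    # matching degree: N over the upper bound of matchable point pairs
    p = n_matchable / min(len(FA), len(FB))
    return p, p > match_threshold
```

Running `identify_risk(["1111", "0000"], ["1110", "0001", "1000"])` finds one distinctive match out of min(2, 3) = 2 possible pairs, so P = 0.5 and the picture is flagged as risky.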
Correspondingly, the embodiment of the present specification further provides a picture risk identification device, including:
the picture acquisition module acquires a picture to be identified and determines a verified picture corresponding to the picture to be identified;
the characteristic point set acquisition module is used for respectively acquiring a characteristic point set F (A) of the picture to be identified and a characteristic point set F (B) of the verified picture, wherein the dimensions of the characteristic points in the F (A) and the F (B) are the same;
the distance determining module traverses the set F (A) and determines the distance between each feature point in the set F (B) and the selected feature point aiming at any selected feature point in the set F (A);
the judging module judges whether the selected feature point is a matchable feature point according to the distance between each feature point in the F (B) and the selected feature point;
the quantity determining module is used for counting the quantity N of the characteristic points which can be matched in the F (A), determining the quantity N (A) of the characteristic points in the set F (A) and determining the quantity N (B) of the characteristic points in the F (B);
and the risk identification module is used for determining the matching degree P of the picture to be identified and the verified picture according to the number N, N (A) and the number N (B) of the characteristic points which can be matched, and determining that the picture to be identified has risk if the matching degree P exceeds a preset threshold value.
According to the scheme provided by the embodiments of this specification, when performing risk identification on a picture, the feature point sets of the verified picture and of the picture to be identified are first obtained; the number of feature points that can be matched between the two sets is then determined; and the matching degree of the two pictures is further determined from the number of matchable feature points and the numbers of feature points in the two sets. If the matching degree exceeds a certain threshold, the picture to be identified is judged to be risky. This achieves accurate identification of screenshot attacks and protects the user's personal privacy data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the embodiments of the disclosure.
Further, not all of the effects described above need be achieved in any of the embodiments of the present specification.
Drawings
In order to more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in this specification; a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a schematic diagram of a screenshot attack provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of a picture risk identification method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a distance calculation method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a picture risk recognition device according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an apparatus for configuring the method of the embodiments of the present specification.
Detailed Description
In order for those skilled in the art to better understand the technical solutions in the embodiments of the present specification, the technical solutions in the embodiments of the present specification will be described in detail below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification shall fall within the scope of protection.
Now that face recognition has become widely used, a considerable number of attacks against face recognition have emerged accordingly. For example, one such attack is called a screenshot attack: a lawbreaker who possesses only one image of a victim attempts to bypass liveness detection, uses that image for face authentication, and then, after fine-tuning, uses a partial screenshot of the image for login and other operations.
As shown in fig. 1, fig. 1 is a schematic diagram of a screenshot attack provided in an embodiment of the present disclosure. A lawbreaker first uses a picture (e.g., a personal photo of the user obtained through an illegal channel) for authentication, so that the picture becomes a verified picture in the system. The lawbreaker then takes a partial screenshot of the picture, or fine-tunes features of a local area of the picture (e.g., locally retouches or rotates it), and uses the resulting screenshot to log in, possibly breaking through the system and obtaining operation authority.
Based on this, the embodiments of this specification provide a picture risk identification method that applies to scenarios in which a lawbreaker attacks with a partial screenshot of a verified picture or a fine-tuned version of it, and that can perform more accurate risk identification for such screenshot attacks.
As shown in fig. 2, fig. 2 is a flow chart of a picture risk identification method provided in an embodiment of the present disclosure, and specifically includes the following steps:
s201, obtaining a picture to be identified, and determining a verified picture corresponding to the picture to be identified.
The picture to be identified is provided by the user. For example, the user obtains the picture to be identified by performing a face scan through the client, or directly uploads the picture to be identified to the client.
As described above, the verified picture has already been verified and stored on the server. The server may obtain the verified picture based on, for example, the user's login ID.
S203, respectively acquiring a feature point set F (A) of the picture to be identified and a feature point set F (B) of the verified picture, wherein the dimensions of the feature points in F (A) and F (B) are the same.
There are many ways to obtain the feature points of a picture. For example, the Oriented FAST and Rotated BRIEF (ORB) algorithm is a classical, fast and efficient feature point extraction algorithm in image processing. Alternatively, in scenarios where latency requirements are not strict, other algorithms such as the Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF) may be employed.
Through feature point extraction, a number of points with distinctive features in the picture can be obtained. For example, for a face picture, features such as the facial features, contours, eyeballs, and eyebrows can be obtained, and each feature point is then expressed as a multi-dimensional vector.
In the present application, since feature point extraction must be performed on two pictures, the same algorithm should be used for both extractions, so that the dimensions of the extracted feature points are the same.
In addition, in practical applications, even the same face, if photographed at different angles, will yield different pictures; in such a case, the vector expressions of the feature points extracted from two pictures of the same person at different angles will still differ significantly.
S205, traversing the set F (A), and determining the distance between each feature point in the set F (B) and the selected feature point aiming at any selected feature point in the set F (A).
Based on the foregoing, two feature point sets F(A) and F(B) are obtained. The numbers of feature points in the two sets are not necessarily the same, but the dimensions of the points are, so distances between them can be calculated.
Specifically, the distance may be calculated in the conventional way of computing the Euclidean distance, that is, by directly calculating the spatial distance between two points in a multidimensional space. In practical applications, the distance may instead be calculated as follows: obtain the string corresponding to the feature vector of the selected feature point and the string of one feature point in F(B), perform an exclusive-or operation on each pair of corresponding characters in the two strings, count the number of 1s in the result, and take that count as the distance between the two feature points.
For example, assuming that there is one point a in F (a), i.e., a vector is expressed as a= (11101, 11000, 11111), and one point B in the set F (B), a vector is expressed as b= (11100, 11001, 11101), an exclusive or operation may be performed on each corresponding position in the character string (111011100011111) corresponding to a and the character string (111001100111101) corresponding to B, thereby obtaining an operation result of (000010000100010), and the number of "1" s included in the obtained operation result is counted as 3, thereby determining the distance between the points a and B as "3".
In short, this way of calculating the distance amounts to counting the positions at which the strings corresponding to the vectors of feature points a and b differ, i.e., their Hamming distance. Fig. 3 is a schematic diagram of this distance calculation method according to an embodiment of the present disclosure. When the feature points are high-dimensional (for example, with ORB feature point extraction, each feature point typically has several hundred dimensions) and each dimension uses only a few digits, calculating the distance in this manner is both accurate and efficient.
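The XOR-and-count computation described above is simply the Hamming distance between two binary strings. A minimal plain-Python sketch, using the vectors from the worked example in the text:

```python
def descriptor_distance(a_bits: str, b_bits: str) -> int:
    """Distance between two feature points expressed as equal-length
    binary strings: XOR corresponding characters and count the 1s
    in the result (the Hamming distance)."""
    assert len(a_bits) == len(b_bits), "feature dimensions must match"
    return sum(c1 != c2 for c1, c2 in zip(a_bits, b_bits))

# Worked example from the text: a = (11101, 11000, 11111), b = (11100, 11001, 11101)
a = "11101" + "11000" + "11111"  # -> "111011100011111"
b = "11100" + "11001" + "11101"  # -> "111001100111101"
print(descriptor_distance(a, b))  # 3
```

The XOR result is `000010000100010`, which contains three 1s, so the distance between points a and b is 3, matching the text.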
S207, judging whether the selected feature points belong to the matchable feature points according to the distance between each feature point in the F (B) and the selected feature points.
Assuming that N(A) points exist in the set F(A) and N(B) points exist in the set F(B), it follows from the foregoing that the distance from each point in F(B) must be calculated for each feature point in F(A). In other words, for any one feature point in F(A) there are N(B) distances, and it is necessary to determine from these N(B) distances whether this point in F(A) has a corresponding feature point in F(B), that is, whether it is a matchable point.
Specifically, as described above, if the user provides not a screenshot but another picture taken at a different angle, the vector expressions of the extracted feature point sets will vary greatly. In other words, in this case every feature point in F(B) differs greatly from any given feature point in F(A): all of them are far from it, and none is distinctly close.
If the user mounts a screenshot attack (i.e., the picture to be identified is a partial screenshot or a fine-tuned picture), the numbers of extracted feature points will not be identical, but the vector expressions of many feature points will be quite close.
In other words, for some point in F(A) the following situation may occur: in F(B), the vector representation of exactly one point is close to it, while the vector representations of the other points differ.
Based on this, whether a point in F(A) is a matchable feature point can be judged as follows: determine the minimum distance D(min) between the selected feature point and the feature points in F(B); obtain the distances between the remaining feature points in F(B) and the selected feature point; and if the ratios or differences of those distances to D(min) are higher than a preset distance threshold (i.e., the nearest point stands out distinctly), judge that the selected feature point is matchable.
For example, using a KNN (K-Nearest Neighbor) matching algorithm, for each feature point f(ai) in F(A), extract the two points f(b1) and f(b2) in the feature point set F(B) that are closest to it, and calculate the ratio of the distance to f(b1) to the distance to f(b2); if the ratio is lower than a preset distance threshold, f(ai) is determined to be a matchable feature point. The distance threshold may be set according to the actual situation, for example to 0.5.
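A minimal sketch of this nearest-neighbour ratio test in plain Python. The descriptors and the 0.5 threshold are illustrative; a production system would use a proper KNN index rather than the full scan shown here:

```python
def is_matchable(fa, FB, ratio_threshold=0.5):
    """Decide whether feature point fa from F(A) is matchable against F(B):
    take the two smallest distances d1 <= d2 to points in F(B) and accept
    fa only if d1 / d2 is below the threshold, i.e. the nearest match is
    distinctly closer than the second-nearest."""
    def hamming(x, y):
        return sum(c1 != c2 for c1, c2 in zip(x, y))

    dists = sorted(hamming(fa, fb) for fb in FB)
    if len(dists) < 2:
        return False
    d1, d2 = dists[0], dists[1]
    if d2 == 0:        # two exact matches: unambiguously matchable
        return True
    return d1 / d2 < ratio_threshold
```

For `fa = "1111"` and `FB = ["1110", "0000", "0001"]`, the two smallest distances are 1 and 3, and 1/3 < 0.5, so the point is matchable; for `fa = "1100"` against `["0011", "0010", "1010"]`, the ratio 2/3 fails the test.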
S209, counting the number N of the matchable characteristic points in the F (A), determining the number N (A) of the characteristic points in the set F (A), and determining the number N (B) of the characteristic points in the F (B).
S211, determining the matching degree P of the picture to be identified and the verified picture according to the number N, N (A) and N (B) of the matchable feature points, and determining that the picture to be identified has risk if the matching degree P exceeds a preset threshold.
The matching degree may be determined in a variety of ways according to the actual situation. The matching degree here characterizes the likelihood that the picture to be identified is a partial screenshot of the verified picture or a fine-tuned picture. It is easy to see that, other conditions being equal, the more feature points in F(A) that are matchable, the higher the likelihood that the picture to be identified is a screenshot; that is, the matching degree P is positively correlated with the number N of matchable feature points.
In one embodiment, the sum S of N(A) and N(B) may be calculated and the ratio of N to S determined as the matching degree P; that is, the matching degree P is determined by the proportion of matchable feature points across the two sets.
In another embodiment, the smaller of N(A) and N(B) may be determined, and the ratio of N to that smaller value taken as the matching degree P. Because in practical applications the relative sizes of the feature point sets of the picture to be identified and the verified picture are not fixed (not every picture to be identified is a screenshot attack, so N(A) ≤ N(B) cannot be assumed), the smaller of N(A) and N(B) represents the upper limit on the number of point pairs that can be matched between the two sets, and using it as the denominator is more accurate.
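Both variants of the matching degree, and the final threshold decision, can be sketched as follows in plain Python. The concrete numbers and the risk threshold are illustrative; the patent leaves the threshold as a preset, deployment-specific value:

```python
def matching_degree_sum(n_matchable: int, n_a: int, n_b: int) -> float:
    """Variant 1: P = N / S, where S = N(A) + N(B)."""
    return n_matchable / (n_a + n_b)

def matching_degree_min(n_matchable: int, n_a: int, n_b: int) -> float:
    """Variant 2: P = N / min(N(A), N(B)). The smaller set size is the
    upper bound on the number of matchable point pairs, so it is the
    more accurate denominator."""
    return n_matchable / min(n_a, n_b)

def is_risky(p: float, threshold: float) -> bool:
    """The picture to be identified is judged risky when P exceeds the
    preset threshold."""
    return p > threshold
```

For example, with N = 50 matchable points, N(A) = 100 and N(B) = 150, variant 1 gives P = 50/250 = 0.2 while variant 2 gives P = 50/100 = 0.5, showing why the choice of denominator matters.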
According to the scheme provided by the embodiments of this specification, when performing risk identification on a picture, the feature point sets of the verified picture and of the picture to be identified are first obtained; the number of feature points that can be matched between the two sets is then determined; and the matching degree of the two pictures is further determined from the number of matchable feature points and the numbers of feature points in the two sets. If the matching degree exceeds a certain threshold, the picture to be identified is judged to be risky. This achieves accurate identification of screenshot attacks: even if a user's personal pictures leak, lawbreakers cannot use them to perform illegal actions, so the user's personal privacy data is effectively protected.
Correspondingly, the embodiment of the present disclosure further provides a picture risk recognition device, as shown in fig. 4, fig. 4 is a schematic structural diagram of the picture risk recognition device provided in the embodiment of the present disclosure, including:
the picture acquisition module 401 acquires a picture to be identified and determines a verified picture corresponding to the picture to be identified;
a feature point set acquisition module 403, configured to acquire a feature point set F (a) of the picture to be identified and a feature point set F (B) of the verified picture, where dimensions of feature points in the F (a) and the F (B) are the same;
the distance determining module 405 traverses the set F (a) and determines, for any feature point selected in F (a), a distance between each feature point in the set F (B) and the selected feature point;
a judging module 407, configured to judge whether the selected feature point belongs to a matchable feature point according to a distance between each feature point in the F (B) and the selected feature point;
a number determining module 409 for counting the number N of the feature points that can be matched in F (a), determining the number N (a) of the feature points in the set F (a), and determining the number N (B) of the feature points in the set F (B);
the risk identification module 411 determines a matching degree P of the picture to be identified and the verified picture according to the number N, N (a) and N (B) of the matchable feature points, and determines that the picture to be identified has a risk if the matching degree P exceeds a preset threshold.
Further, the judging module 407 determines the minimum distance D(min) between the selected feature point and the feature points in F(B), obtains the distances between the remaining feature points in F(B) and the selected feature point, and judges that the selected feature point is a matchable feature point if the ratios or differences of those distances to D(min) are higher than a preset distance threshold.
Further, the risk identification module 411 calculates a sum S of the N (a) and the N (B), and determines a ratio of the N to the S as the matching degree P.
Further, the risk identification module 411 determines a smaller value of the determinations N (a) and N (B), and determines a ratio of the N to the smaller value as the matching degree P.
Further, the distance determining module 405 obtains the string corresponding to the feature vector of the selected feature point, obtains the string of one feature point in the F (B), performs an exclusive-or operation on each pair of corresponding characters in the two strings, counts the number of 1s in the result, and determines that count as the distance between the two feature points.
The embodiments of the present disclosure also provide a computer device, which at least includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor, wherein the processor implements the picture risk identification method shown in fig. 2 when executing the program.
FIG. 5 illustrates a more specific hardware architecture diagram of a computing device provided by embodiments of the present description, which may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc., for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the embodiments of this specification are implemented in software or firmware, the associated program code is stored in the memory 1020 and executed by the processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The embodiments of the present specification also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the picture risk identification method shown in fig. 2.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media) such as modulated data signals and carrier waves.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the embodiments of this specification may be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of this specification.
The system, method, module, or unit set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more pieces of software and/or hardware when implementing the embodiments of this specification. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment. Those of ordinary skill in the art can understand and implement them without undue effort.
The foregoing is merely a specific implementation of the embodiments of this specification. It should be noted that a person skilled in the art may make several improvements and modifications without departing from the principles of the embodiments of this specification, and such improvements and modifications shall also fall within the protection scope of the embodiments of this specification.

Claims (11)

1. A picture risk identification method, comprising:
acquiring a picture to be identified, and determining a verified picture corresponding to the picture to be identified;
respectively acquiring a feature point set F(A) of the picture to be identified and a feature point set F(B) of the verified picture, wherein the feature points in F(A) and F(B) have the same dimension; the feature points represent points with salient features in a picture; the difference between the vector expressions of feature points extracted from two pictures of the same person taken at different angles is larger than a preset difference threshold;
traversing the set F(A), and, for any selected feature point in F(A), determining the distance between each feature point in the set F(B) and the selected feature point;
determining whether the selected feature point is a matchable feature point according to the distances between the feature points in F(B) and the selected feature point;
counting the number N of matchable feature points in F(A), and determining the number N(A) of feature points in the set F(A) and the number N(B) of feature points in F(B);
determining the matching degree P between the picture to be identified and the verified picture according to the number N of matchable feature points, N(A), and N(B), and, if the matching degree P exceeds a preset threshold, determining that the picture to be identified is subject to a screenshot attack; the matching degree P characterizes the likelihood that the picture to be identified is a partial screenshot or a fine-tuned copy of the verified picture; the larger the number N of feature points in F(A) that are matchable, the higher the likelihood that the picture to be identified is a partial screenshot or a fine-tuned copy of the verified picture, and the matching degree P is positively correlated with N.
2. The method of claim 1, wherein determining whether the selected feature point is a matchable feature point according to the distances between the feature points in F(B) and the selected feature point comprises:
determining the minimum distance D(min) between the feature points in F(B) and the selected feature point;
acquiring the distances between the remaining feature points in F(B) and the selected feature point, and, if the ratio or difference between each such distance and D(min) is higher than a preset distance threshold, determining that the selected feature point is a matchable feature point.
3. The method of claim 1, wherein determining the matching degree P between the picture to be identified and the verified picture according to the number N of matchable feature points, N(A), and N(B) comprises:
calculating the sum S of N(A) and N(B), and determining the ratio of N to S as the matching degree P.
4. The method of claim 1, wherein determining the matching degree P between the picture to be identified and the verified picture according to the number N of matchable feature points, N(A), and N(B) comprises:
determining the smaller of N(A) and N(B), and determining the ratio of N to that smaller value as the matching degree P.
5. The method of claim 1, wherein determining the distance between each feature point in the set F(B) and the selected feature point comprises:
acquiring the character string corresponding to the feature vector of the selected feature point and the character string of a feature point in F(B), performing an exclusive-OR operation on each pair of corresponding characters in the two strings, counting the number of 1s in the result, and determining that count as the distance between the two feature points.
6. A picture risk identification apparatus comprising:
a picture acquisition module, which acquires a picture to be identified and determines a verified picture corresponding to the picture to be identified;
a feature point set acquisition module, which respectively acquires a feature point set F(A) of the picture to be identified and a feature point set F(B) of the verified picture, wherein the feature points in F(A) and F(B) have the same dimension; the feature points represent points with salient features in a picture; the difference between the vector expressions of feature points extracted from two pictures of the same person taken at different angles is larger than a preset difference threshold;
a distance determination module, which traverses the set F(A) and, for any selected feature point in F(A), determines the distance between each feature point in the set F(B) and the selected feature point;
a judgment module, which determines whether the selected feature point is a matchable feature point according to the distances between the feature points in F(B) and the selected feature point;
a quantity determination module, which counts the number N of matchable feature points in F(A), and determines the number N(A) of feature points in the set F(A) and the number N(B) of feature points in F(B);
a risk identification module, which determines the matching degree P between the picture to be identified and the verified picture according to the number N of matchable feature points, N(A), and N(B), and, if the matching degree P exceeds a preset threshold, determines that the picture to be identified is subject to a screenshot attack; the matching degree P characterizes the likelihood that the picture to be identified is a partial screenshot or a fine-tuned copy of the verified picture; the larger the number N of feature points in F(A) that are matchable, the higher the likelihood that the picture to be identified is a partial screenshot or a fine-tuned copy of the verified picture, and the matching degree P is positively correlated with N.
7. The device of claim 6, wherein the judgment module determines the minimum distance D(min) between the feature points in F(B) and the selected feature point, acquires the distances between the remaining feature points in F(B) and the selected feature point, and, if the ratio or difference between each such distance and D(min) is higher than a preset distance threshold, determines that the selected feature point is a matchable feature point.
8. The apparatus of claim 6, wherein the risk identification module calculates the sum S of N(A) and N(B), and determines the ratio of N to S as the matching degree P.
9. The apparatus of claim 6, wherein the risk identification module determines the smaller of N(A) and N(B), and determines the ratio of N to that smaller value as the matching degree P.
10. The apparatus of claim 6, wherein the distance determination module acquires the character string corresponding to the feature vector of the selected feature point and the character string of a feature point in F(B), performs an exclusive-OR operation on each pair of corresponding characters in the two strings, counts the number of 1s in the result, and determines that count as the distance between the two feature points.
11. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 5 when executing the program.
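The matching pipeline of claim 1 can be sketched as follows. This is an illustrative sketch only: feature extraction itself (e.g. an ORB/BRIEF-style binary descriptor, per claim 5's bit-string distance) is assumed, descriptors are represented as plain '0'/'1' strings, and a simple nearest-neighbour distance threshold stands in for the ratio/difference test of claim 2. The normalization shown is claim 3's P = N / (N(A) + N(B)).

```python
def hamming(a: str, b: str) -> int:
    """Distance between two equal-length binary descriptor strings."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def matching_degree(f_a, f_b, dist_threshold=2):
    """Traverse F(A); count points whose nearest neighbour in F(B) is
    within dist_threshold as matchable, then return P = N / (N(A) + N(B))."""
    n = 0
    for p in f_a:  # each selected feature point in F(A)
        # distance from the selected point to every feature point in F(B)
        if min(hamming(p, q) for q in f_b) <= dist_threshold:
            n += 1  # the selected point counts as matchable
    return n / (len(f_a) + len(f_b))

f_a = ["1010", "1011", "0001"]  # toy 4-bit descriptors, F(A)
f_b = ["1010", "0111"]          # F(B)
p = matching_degree(f_a, f_b, dist_threshold=1)
# "1010" matches exactly, "1011" is 1 bit from "1010",
# "0001" is >= 2 bits from both -> N = 2, P = 2 / (3 + 2) = 0.4
```

A larger P would then be compared against the preset threshold to flag a suspected screenshot attack.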
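Claims 3 and 4 give two alternative normalizations of the matchable-point count N. A minimal sketch, with the counts N, N(A), and N(B) taken as already computed per claim 1 (the numeric values below are illustrative):

```python
def matching_degree_sum(n: int, n_a: int, n_b: int) -> float:
    # Claim 3: P = N / S, where S = N(A) + N(B)
    return n / (n_a + n_b)

def matching_degree_min(n: int, n_a: int, n_b: int) -> float:
    # Claim 4: P = N / min(N(A), N(B))
    return n / min(n_a, n_b)

# e.g. N = 40 matchable points, N(A) = 50, N(B) = 80:
# claim 3 gives 40 / 130 ~ 0.308, while claim 4 gives 40 / 50 = 0.8
```

The claim 4 variant is less diluted when one picture has far more feature points than the other, e.g. when the picture to be identified is only a partial screenshot of the verified picture.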
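Claim 5's distance is the Hamming distance between two binary descriptor strings: XOR each pair of corresponding characters and count the 1s in the result. A minimal sketch (the 8-bit strings are illustrative):

```python
def xor_popcount_distance(s1: str, s2: str) -> int:
    """Claim 5's distance: bitwise XOR of two '0'/'1' strings, then count the 1s."""
    assert len(s1) == len(s2), "descriptors must have the same dimension"
    return sum(int(c1) ^ int(c2) for c1, c2 in zip(s1, s2))

d = xor_popcount_distance("10110100", "10011101")
# the strings differ at positions 2, 4, and 7 -> distance 3
```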
CN202010163296.4A 2020-03-10 2020-03-10 Picture risk identification method, device and equipment Active CN111401197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010163296.4A CN111401197B (en) 2020-03-10 2020-03-10 Picture risk identification method, device and equipment


Publications (2)

Publication Number Publication Date
CN111401197A CN111401197A (en) 2020-07-10
CN111401197B true CN111401197B (en) 2023-08-15

Family

ID=71434146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010163296.4A Active CN111401197B (en) 2020-03-10 2020-03-10 Picture risk identification method, device and equipment

Country Status (1)

Country Link
CN (1) CN111401197B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967769B (en) * 2020-08-18 2023-06-30 支付宝(杭州)信息技术有限公司 Risk identification method, apparatus, device and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105518709A (en) * 2015-03-26 2016-04-20 北京旷视科技有限公司 Method, system and computer program product for identifying human face
CN106326773A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Method and device for photo encryption management as well as terminal
CN106934376A (en) * 2017-03-15 2017-07-07 成都创想空间文化传播有限公司 A kind of image-recognizing method, device and mobile terminal
CN107437012A (en) * 2016-05-27 2017-12-05 阿里巴巴集团控股有限公司 The guard method of data and device
CN109711297A (en) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk Identification Method, device, computer equipment and storage medium based on facial picture
CN110032887A (en) * 2019-02-27 2019-07-19 努比亚技术有限公司 A kind of picture method for secret protection, terminal and computer readable storage medium
CN110223158A (en) * 2019-05-21 2019-09-10 平安银行股份有限公司 A kind of recognition methods of risk subscribers, device, storage medium and server
CN110472491A (en) * 2019-07-05 2019-11-19 深圳壹账通智能科技有限公司 Abnormal face detecting method, abnormality recognition method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228292A1 (en) * 2016-02-10 2017-08-10 International Business Machines Corporation Privacy Protection of Media Files For Automatic Cloud Backup Systems



Similar Documents

Publication Publication Date Title
US8879803B2 (en) Method, apparatus, and computer program product for image clustering
CN110011954B (en) Homomorphic encryption-based biological identification method, device, terminal and business server
TW201928741A (en) Biometric authentication, identification and detection method and device for mobile terminal and equipment
CN109063776B (en) Image re-recognition network training method and device and image re-recognition method and device
CN108564550B (en) Image processing method and device and terminal equipment
CN112395612A (en) Malicious file detection method and device, electronic equipment and storage medium
CN106919816A (en) A kind of user authen method and device, a kind of device for user authentication
TW202018541A (en) Method, apparatus and electronic device for database updating and computer storage medium thereof
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
CN110895570B (en) Data processing method and device for data processing
CN111401197B (en) Picture risk identification method, device and equipment
CN107070845B (en) System and method for detecting phishing scripts
CN109905366B (en) Terminal equipment safety verification method and device, readable storage medium and terminal equipment
TWI714321B (en) Method, apparatus and electronic device for database updating and computer storage medium thereof
CN114973293B (en) Similarity judging method, key frame extracting method and device, medium and equipment
Rahman et al. Movee: Video liveness verification for mobile devices using built-in motion sensors
CN108235228B (en) Safety verification method and device
CN115034783A (en) Digital currency transaction tracing method based on transaction address and message characteristics
KR102473724B1 (en) Image registration method and apparatus using siamese random forest
CN111160357B (en) Model training and picture output method and device based on counterstudy
JP2014067352A (en) Apparatus, method, and program for biometric authentication
Cui et al. A novel DIBR 3D image hashing scheme based on pixel grouping and NMF
TWI752519B (en) Electronic apparatus for recognizing multimedia signal and operating method of the same
KR20150106621A (en) Terminal and service providing device, control method thereof, computer readable medium having computer program recorded therefor and image searching system
CN114972807B (en) Method and device for determining image recognition accuracy, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40033190

Country of ref document: HK

GR01 Patent grant