CN111860369A - Fraud identification method and device and storage medium - Google Patents

Fraud identification method and device and storage medium

Info

Publication number
CN111860369A
CN111860369A CN202010724335.3A
Authority
CN
China
Prior art keywords
image
customer
similarity
preset threshold
fraud identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010724335.3A
Other languages
Chinese (zh)
Inventor
张雪飞
李彦龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Zhongyuan Consumption Finance Co ltd
Original Assignee
Henan Zhongyuan Consumption Finance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Zhongyuan Consumption Finance Co ltd filed Critical Henan Zhongyuan Consumption Finance Co ltd
Priority to CN202010724335.3A priority Critical patent/CN111860369A/en
Publication of CN111860369A publication Critical patent/CN111860369A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02 Banking, e.g. interest calculation or account maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

The application discloses a fraud identification method, apparatus, and storage medium. The method comprises the following steps: obtain a first preset threshold and at least two customer images, extract the image features of the background portion of each customer image, and perform similarity analysis, so that whether a customer image is abnormal can be judged from the relation between the similarity and the first preset threshold. With this technical scheme, on the one hand a machine replaces manual work to complete fraud identification, reducing labor cost and improving the recognition rate; on the other hand, by analyzing similar backgrounds, illegitimate users who take photos in the same place are effectively identified, reducing fraud risk.

Description

Fraud identification method and device and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a fraud recognition method, apparatus, and storage medium.
Background
In the loan approval stage, image data verification is required for high-risk customers (customers who trigger certain anti-fraud rules) to identify whether the individual or group has fraud problems.
In existing fraud identification methods, on the one hand, the customer's image data is inspected manually; however, human inspection is tiring, time-consuming, and inefficient. On the other hand, only the face is compared during recognition, i.e., whether the customer is a fraudulent user is judged through face recognition. Yet some customers' portrait photos have extremely similar backgrounds, and in reality these customers may belong to group fraud or intermediary fraud; in that case, recognizing only the face cannot determine whether group fraud is involved.
In view of the above prior art, finding a fraud identification method based on the portrait background is a problem to be solved by those skilled in the art.
Disclosure of Invention
The aim of the present invention is to provide a fraud identification method, apparatus, and storage medium, in which illegitimate users photographed in the same place are effectively identified through analysis of similar backgrounds, reducing fraud risk.
In order to solve the above technical problem, the present application provides a fraud identification method, including:
acquiring a first preset threshold and at least two customer images;
extracting image features of a background part in the customer image;
determining a similarity between the image features;
and when the similarity is smaller than the first preset threshold value, judging that the customer image is an abnormal image.
Preferably, before the extracting the image feature of the background portion in the customer image, the method further includes:
and extracting and removing the portrait part in the customer image.
Preferably, after the extracting and removing the portrait part in the customer image, the method further includes:
acquiring a second preset threshold;
determining a percentage of the portrait portion to the customer image;
and when the percentage is smaller than the second preset threshold value, entering the step of extracting the image characteristics of the background part in the customer image.
Preferably, the determining the percentage of the portrait part in the customer image specifically includes:
acquiring the number of first pixel points of the client image;
acquiring the number of second pixel points of the portrait part;
and determining the percentage of the portrait part in the customer image according to the percentage of the number of the second pixel points in the number of the first pixel points.
Preferably, when the percentage is less than the second preset threshold, the method further includes:
filling blank parts in the customer image with colors;
restoring the outline of the object which is partially shielded by the portrait in the background part in the area filled with the color;
the blank part is a blank area left by the elimination of the portrait part in the client image, and the color is set according to the mean value of the gray values of the pixels of the background part.
Preferably, after the determining the similarity between the image features, the method further includes:
acquiring object types contained in the background image;
acquiring the frequency of each object appearing in a plurality of customer images;
calculating final similarity according to the frequency and the similarity;
and when the final similarity is smaller than the first preset threshold, entering the step of judging the client image to be an abnormal image.
Preferably, the extracting the image features of the background portion in the customer image specifically includes:
and extracting the image characteristics of the background part in the client image by adopting an image hash algorithm.
In order to solve the above technical problem, the present application further provides a fraud identification apparatus, including:
a first acquisition module, configured to acquire a first preset threshold and at least two customer images;
the first extraction module is used for extracting image characteristics of a background part in the client image;
a first determination module for determining similarity between the image features;
and the judging module is used for judging the client image as an abnormal image when the similarity is smaller than the first preset threshold value.
In order to solve the above technical problem, the present application further provides a fraud identification apparatus, including a memory for storing a computer program;
a processor, configured to implement the steps of the fraud identification method described above when executing the computer program.
To solve the above technical problem, the present application further provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the fraud identification method as described above.
According to the fraud identification method above, after the first preset threshold and at least two customer images are obtained, the image features of the background portion of the customer images are extracted for similarity analysis, so that whether a customer image is abnormal can be judged from the relation between the similarity and the first preset threshold. With this technical scheme, on the one hand a machine replaces manual work to complete fraud identification, reducing labor cost and improving the recognition rate; on the other hand, by analyzing similar backgrounds, illegitimate users who take photos in the same place are effectively identified, reducing fraud risk.
Drawings
In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a fraud identification method according to an embodiment of the present application;
FIG. 2 is a flow chart of another fraud identification method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a background fill path according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a fraud identification apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a fraud identification apparatus according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the present application.
The core of the application is to provide a fraud identification method, a fraud identification device and a storage medium, wherein the fraud identification method effectively identifies illegal users photographed in the same place through analysis of similar backgrounds, and fraud risks are reduced.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings.
It should be noted that the fraud identification method mentioned in the present application is mainly implemented through the Open Source Computer Vision Library (OpenCV). It should be understood that the method may also be implemented by a microcontroller unit (MCU) or another type of control device in a computer, without affecting the implementation of the technical solution.
Fig. 1 is a flowchart of a fraud identification method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s10: a first preset threshold and at least two customer images are acquired.
In a specific implementation, the image information of customers who hit part of the anti-fraud rules needs to be acquired for image data verification. Since customer images generally include a bust photo and an ID photo, and it must be determined whether multiple customers' photos were taken in the same place, i.e., against the same background, image information of at least two customers is required. It should be understood that photos of the same type are compared with each other: for example, for two customers A and B, A's bust photo is compared with B's bust photo, and A's ID photo with B's ID photo.
It should be noted that the embodiment of the present application does not limit the content of the anti-fraud rules; any relevant content may be set according to the actual situation. In the present application, it is mainly determined whether customers use the same Wi-Fi or the same GPRS (the same Wi-Fi indicates that the customers may be in the same place).
S11: and extracting image characteristics of the background part in the client image.
Specifically, an image hash algorithm is adopted to extract image features of a background part in the client image.
In a specific implementation, the image features of the background portion of the customer image need to be extracted; these features include information such as the texture and color of the image. It should be noted that the embodiment of the present application does not limit the feature-extraction method: an image hash algorithm, a histogram, a convolutional neural network, or the like may be selected according to the actual situation, and choosing the image hash algorithm in this embodiment is merely a preferred implementation, improved on the basis of the original image algorithm. Image hash algorithms are further subdivided into average hash (ahash), perceptual hash (phash), wavelet hash (whash), and the like. Research shows that these perform differently well for images of different scenes. Hash calculation is carried out using the functions provided by the imagehash package, producing three hash values via ahash(), whash(), and phash(): ahash() distinguishes brightness changes of the image well, while whash() and phash() handle rotation and displacement of the image content well. After the three hash values of each image are obtained, they are stored for the later steps.
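The hashing step above relies on the third-party imagehash package; as a dependency-free illustration of the average-hash (ahash) principle it builds on, the following is a minimal pure-Python sketch operating on an 8 × 8 grid of gray values (the grid size and helper names are assumptions for illustration, not the package's API):

```python
# Pure-Python sketch of the average-hash idea: each pixel contributes one
# bit, 1 if it is at least as bright as the image mean, 0 otherwise.
def average_hash(pixels):
    """pixels: 8x8 list of gray values (0-255) -> 64-bit hash as an int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)  # 1 = brighter than mean
    return bits

def hamming(h1, h2):
    """Number of differing bits -- larger means less similar."""
    return bin(h1 ^ h2).count("1")
```

Two images with similar brightness structure thus yield a small Hamming distance, which matches the text's claim that ahash responds to brightness changes.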
S12: a similarity between the image features is determined.
In a specific implementation, after two images are converted into hash values, the dis.hamming(hash1, hash2) function in OpenCV is called to calculate the degree of similarity of the two images: the larger the result, the lower the similarity, and the smaller the result, the higher the similarity. The hash values obtained in S11 are substituted into dis.hamming() according to their types, and a new function F() is defined: F() = [dis.hamming(Ha1, Ha2)/9 + dis.hamming(Hp1, Hp2)/4 + dis.hamming(Hw1, Hw2)]/3, where Ha1, Hp1, Hw1 and Ha2, Hp2, Hw2 are the three hash values (ahash, phash, whash) of the two customers' bust photos respectively. This formula can better handle various image transformations: when customers take photos at different positions in the same office, the corresponding feature similarity can still be obtained accurately as long as the backgrounds are similar.
It should be noted that the present application does not limit the content of the function F(); in a specific implementation, different calculation methods may be defined as needed, as long as the corresponding similarity can be calculated from the hash values of different images and the result is sufficiently accurate.
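Assuming the combined-distance formula reads F() = [dis.hamming(Ha1, Ha2)/9 + dis.hamming(Hp1, Hp2)/4 + dis.hamming(Hw1, Hw2)]/3 (the weights 9, 4 and the final division by 3 follow the text; the dict layout and the pluggable Hamming function are illustrative assumptions), a minimal sketch might look like:

```python
# Combine the three per-hash Hamming distances into one distance score;
# smaller output means more similar backgrounds.
def combined_distance(hashes1, hashes2, hamming):
    """hashes*: dicts with keys 'ahash', 'phash', 'whash' -> float."""
    return (hamming(hashes1["ahash"], hashes2["ahash"]) / 9
            + hamming(hashes1["phash"], hashes2["phash"]) / 4
            + hamming(hashes1["whash"], hashes2["whash"])) / 3
```

The unequal weights reflect the text's intent that the three hashes respond to different transformations (brightness vs. rotation/displacement), so no single hash dominates the score.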
In a specific implementation, more than two clients may hit rules in the anti-fraud rules, and each client may have more than one photo. The bust photos obtained for clients A, B, C, …, N are therefore defined as X1 to Xn, and the ID photos as Y1 to Yn. For example, client A has two bust photos AX1 and AX2, and client B has three bust photos BX1, BX2, and BX3. The similarity S is calculated pairwise without repetition, i.e., F(AX1, BX1) = S1, F(AX1, BX2) = S2, F(AX1, BX3) = S3, F(AX2, BX1) = S4, F(AX2, BX2) = S5, F(AX2, BX3) = S6. S1 to S6 are compared, and the minimum of the results, MIN(S[]), determines the similarity between the background portions of the bust photos of clients A and B. When there are N customers A, B, C, …, N, the bust-photo background similarity S of the N customers is calculated as S = {AB:MIN(S[]) + AC:MIN(S[]) + BC:MIN(S[]) + … + MN:MIN(S[])}/N, where the larger the S value, the fewer people with similar backgrounds appear among the N clients, and the smaller the S value, the more people with highly similar backgrounds there are. This can therefore serve as a basis for judging the background similarity of the customer group.
It can be understood that the ID photos AY, BY, …, NY can also be processed through the above procedure to obtain the in-group background similarity of the ID photos, which is not repeated here.
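The pairwise-minimum scheme above can be sketched as follows; the data layout (a dict from customer to a list of hash values) is an illustrative assumption, and the sketch averages over customer pairs, which is one reading of the text's division by N:

```python
from itertools import combinations

# For each pair of customers, take the minimum F() over all cross-customer
# photo pairings (their most similar photo pair), then average the minima.
def group_background_score(photos_by_customer, distance):
    """photos_by_customer: {'A': [h1, h2], 'B': [...]}; distance: F()."""
    pair_minima = []
    for (ca, pa), (cb, pb) in combinations(photos_by_customer.items(), 2):
        pair_minima.append(min(distance(x, y) for x in pa for y in pb))
    return sum(pair_minima) / len(pair_minima)  # smaller -> more similar backgrounds
```

Taking the minimum per customer pair means one matching photo pair is enough to flag two customers as sharing a background, even if their other photos differ.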
S13: and judging whether the similarity is smaller than a first preset threshold value, if so, entering S14.
S14: and judging the client image as an abnormal image.
It should be noted that, if the similarity is greater than the first preset threshold, it is determined that the client image is normal, and the subsequent comparison is terminated.
The purpose of setting the first preset threshold is to judge the customer images abnormal when the similarity S is smaller than this value, i.e., to judge that they are photos taken against the same background. From the above process, the larger S is, the more likely the customers were photographed in different places, and the smaller S is, the more likely they were photographed in the same place. The value range of S is 0 to 200; when the calculated result exceeds 200, it is taken as 200.
It should be noted that the first preset threshold changes with the actual situation. If it were defined as a fixed value, there would be no way to determine a reasonable value, since the data is continuously updated and iterated: too high a value easily kills a customer group by mistake, and too low a value misses the target group. A dynamically changing threshold is therefore needed to meet changing business requirements. In general, the first preset threshold is defined as a value that keeps the accuracy (accuracy = number judged correctly / total number) at 70%. In a specific implementation, the first preset threshold is determined with the help of a logistic regression model: customers whose loans become overdue, or who are found by customer-service staff to have committed fraud, are given a fraud label, while customers with no overdue loans and no fraud are not. Using these fraud labels Y (0 or 1) and the background similarity S obtained above, a logistic regression model is trained, and the S value corresponding to 70% accuracy is taken as the index by which the current system automatically judges whether customer backgrounds are the same.
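The text selects the threshold with a logistic regression model trained on fraud labels; as a simplified dependency-free stand-in for that idea, the following sketch scans candidate thresholds on historical (S, label) pairs and keeps the most accurate one, predicting fraud when S falls below the threshold (the 70% operating point from the text would be one policy for choosing among qualifying thresholds):

```python
# Pick the S threshold whose rule "S < threshold => fraud" best matches
# the historical labels. A stand-in for the logistic-regression step.
def pick_threshold(samples):
    """samples: list of (S, fraud_label) -> (best threshold, its accuracy)."""
    best_t, best_acc = None, -1.0
    for threshold in sorted({s for s, _ in samples}):
        correct = sum((s < threshold) == bool(y) for s, y in samples)
        acc = correct / len(samples)
        if acc > best_acc:  # keep the first threshold with the best accuracy
            best_t, best_acc = threshold, acc
    return best_t, best_acc
```

Because the threshold is re-derived from labeled data, it adapts as new overdue/fraud cases arrive, which is the dynamic behavior the text asks for.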
According to the fraud identification method above, after the first preset threshold and at least two customer images are obtained, the image features of the background portion of the customer images are extracted for similarity analysis, so that whether a customer image is abnormal can be judged from the relation between the similarity and the first preset threshold. With this technical scheme, on the one hand a machine replaces manual work to complete fraud identification, reducing labor cost and improving the recognition rate; on the other hand, by analyzing similar backgrounds, illegitimate users who take photos in the same place are effectively identified, reducing fraud risk.
Fig. 2 is a flowchart of another fraud identification method according to an embodiment of the present application. As shown in fig. 2, on the basis of the foregoing embodiment, in this embodiment, in order to make fraud identification more accurate, before S11, the method further includes:
s20: and extracting and removing the portrait part in the customer image.
In a specific implementation, the bust photos X1 to Xn have a fixed size and format, generally PNG at 480 × 320 pixels. The resize() function in OpenCV is called to ensure that the pixel count of each image is consistent, and the grabcut() function is called to segment the portrait portion of the image. grabcut() is a function integrated into OpenCV; given the customer image, the number of iterations, the region rect to be segmented, the segmentation mode (keep foreground / keep background, etc.) and other parameters, it returns the segmentation result. Because the rect parameter defaults to rectangular coordinates, only the area covered by rect is segmented: a rectangle too small to cover 100% of the portrait excludes the lower half of the figure, especially the arms, making the segmentation inaccurate, while enlarging the rect region makes the area around the head too large and introduces excessive background interference, reducing the accuracy of segmentation around the head. In the present method, the source code of the grabcut() function is modified so that the rect parameter is input as a trapezoid, narrow at the top and wide at the bottom, with the bottom edge defaulting to the photo width of 320. On existing sample data, extracting the portrait portion this way improves the accuracy of grabcut() bust segmentation by nearly 20%.
It should be noted that when processing an ID photo, since the ID photo is a perfect rectangle, the grabcut() function in OpenCV can be called directly to segment the ID photo in the customer image, without modifying its source code.
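The trapezoidal region described above can be sketched as a mask that is narrow at the top and as wide as the photo at the bottom. Note this only illustrates the geometry: the patent achieves the effect by modifying the grabcut() source, whereas passing such a mask to OpenCV's mask-initialized GrabCut mode is an assumption.

```python
# Build a row-major 0/1 trapezoid mask: 1 marks the probable portrait
# (foreground) region, widening linearly from top_width to the full width.
def trapezoid_mask(width, height, top_width):
    mask = []
    for y in range(height):
        # row width grows linearly from top_width at y=0 to width at the bottom
        row_w = top_width + (width - top_width) * y // max(height - 1, 1)
        left = (width - row_w) // 2
        mask.append([1 if left <= x < left + row_w else 0 for x in range(width)])
    return mask
```

The shape matches a seated bust: a narrow head region at the top, shoulders and arms spanning the full 320-pixel width at the bottom, which is why the text reports better segmentation than a plain rectangle.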
S21: and acquiring a second preset threshold.
S22: a percentage of portrait portions to customer images is determined.
S22 specifically includes:
acquiring the number of first pixel points of a client image;
acquiring the number of second pixel points of the portrait part;
and determining the percentage of the portrait part in the customer image according to the percentage of the number of the second pixel points in the number of the first pixel points.
S23: and judging whether the percentage is smaller than a second preset threshold value, if so, entering S11.
In a specific implementation, since it must be identified whether the backgrounds of multiple photos from different customers are similar, the area presented by the background is a key index. If the background area is too small, calculating the background similarity loses its comparative significance, and using the result would reduce the accuracy of the final judgment; a second preset threshold therefore needs to be set.
The result extracted in S20 is a set of pixel coordinates. The number of pixels can be calculated with the array-calculation functions in OpenCV and then divided by the number of pixels in the whole photo to obtain the percentage (because the image size was normalized in S20, if the segmentation is accurate this equals the percentage of the portrait portion in the customer image). The second preset threshold for bust photos is set to 60%, and that for ID photos to 75%. When the percentage exceeds the threshold, subsequent comparison is terminated and the customer is excluded from the rule-hit customer group; if it is below the threshold, the next steps continue.
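A minimal sketch of the S22/S23 check, assuming the segmentation result is available as a set of pixel coordinates and using the 60%/75% thresholds from the text (the function names and the "bust"/"id" labels are illustrative):

```python
# Share of the image occupied by the segmented portrait.
def portrait_percentage(portrait_pixels, width=480, height=320):
    return len(portrait_pixels) / (width * height)

# True if enough background remains for a meaningful similarity comparison:
# the portrait must cover less than 60% of a bust photo, 75% of an ID photo.
def passes_background_check(portrait_pixels, photo_type, width=480, height=320):
    threshold = {"bust": 0.60, "id": 0.75}[photo_type]
    return portrait_percentage(portrait_pixels, width, height) < threshold
```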
According to the fraud identification method, the portrait part in the client image is extracted and removed before the image feature of the background part in the client image is extracted, so that the background part is not interfered when being extracted, and the identification accuracy is improved. In addition, by calculating the percentage of the portrait part in the customer image, customers exceeding a second preset threshold value are excluded from the customer group of the hit rule, and the identification accuracy is further increased.
As shown in fig. 2, on the basis of the above embodiment, when the percentage is less than the second preset threshold, the method further includes:
s24: the blank portion in the customer image is color filled.
S25: and restoring the outline of the object partially shielded by the portrait in the background part in the area filled with the color.
The blank part is a blank area left by the part of the client image where the portrait is removed, and the color is set according to the mean value of the gray values of the pixels of the background part.
In a specific implementation, after the portrait portion is removed by the grabcut() function in S20, the cut-out part leaves a blank area that needs to be filled with a color; otherwise interference is introduced artificially and disturbs subsequent feature extraction. For example, a group of customer backgrounds may be white walls or black walls, filled via the cvtColor() function after foreground extraction. The default approach assigns a fixed gray value in the range 0-255 (0 close to black, 255 close to white); if a fixed value is chosen at random, the filled color may contrast strongly with the background, disturbing the result whenever the true overall color gamut differs from the filled color. For example, if white is filled into a black background, that part is later extracted as an image feature of the background, even though the occluded background of the original image has been artificially changed. In the present application, the cvLoadImage() function provided by OpenCV is used to calculate the gray values of the pixels of the background portion, the mean of all gray values is obtained, and this mean is used with the cvtColor() function to fill the blank portion with color. The gray value of the filling color thus changes dynamically with the overall background color: if the background is dark, the corresponding mean gray value is also dark, and the filled area does not form a strong contrast.
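The mean-gray fill can be illustrated without OpenCV as follows; the patent performs the equivalent arithmetic with cvLoadImage()/cvtColor(), while this sketch works on a plain 2D list of gray values (the helper name is an assumption):

```python
# Compute the mean gray value of the remaining background pixels and paint
# it into the blank region left by portrait removal, so the filled area
# never contrasts sharply with the true background.
def fill_blank_with_mean(gray, blank):
    """gray: 2D list of gray values; blank: set of (row, col) removed pixels."""
    background = [gray[r][c]
                  for r in range(len(gray))
                  for c in range(len(gray[0]))
                  if (r, c) not in blank]
    mean = round(sum(background) / len(background))
    for r, c in blank:
        gray[r][c] = mean
    return gray
```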
After S24 is completed, the color-filled area carries only a color and cannot well simulate or restore the original appearance of the occluded part, so the outlines of objects partially occluded by the portrait in the background portion need to be filled in. Observation of customer images shows that the background occluded by the foreground in a bust or ID photo has a certain continuity, such as the texture of a table or the outline of a distinctive object in the portrait background; for example, if the portrait blocks one corner of a triangular background pattern, the occluded outline can essentially be restored within the color-filled area.
Fig. 3 is a schematic diagram of a background filling path provided in an embodiment of the present application. As shown in Fig. 3, in a specific implementation, the findContours() function in OpenCV is called to extract the contours in the background portion, yielding a matrix containing all contour coordinates. A numpy() function is used to connect contour coordinates within 5 × 5 pixel regions of the matrix into contours, giving multiple contour arrays [Cm]1 to [Cm]n. The coordinates [Cn] of the portrait contour obtained in step S20 are then compared against [Cm]1 to [Cm]n to find the contours bordering the removed portrait portion; contours that do not border it are regarded as complete background and left unprocessed. For the bordering contours, the center points P1 to Pm are calculated (only P1 to P5 are shown in the figure), together with the center point Q of the portrait portion. A 9 × 9 region at each bordering contour's center point is taken as a convolution kernel, the set of coordinates on the line from that center to the removed portrait's center Q is taken as a path, and the cv2.blur() function applies the previously obtained kernel along this path, smoothing by convolution from each contour center P1 to Pm toward point Q. This achieves an edge-expansion effect: the resulting contour is blurred but can essentially simulate the color and texture characteristics of the occluded region for the subsequent steps.
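Only the path geometry of the edge-expansion step lends itself to a small dependency-free sketch: the center of each bordering contour and of the portrait region, and the line of pixels between them along which the blur kernel would be applied (the helper names are assumptions; the actual blurring is done by cv2.blur() in the text):

```python
# Centroid of a list of (x, y) points, e.g. a contour or the portrait region.
def centre(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Evenly spaced integer coordinates from p toward q: the path along which
# the 9x9 kernel would be swept to extend the contour into the filled area.
def path_points(p, q, steps=10):
    return [(round(p[0] + (q[0] - p[0]) * t / steps),
             round(p[1] + (q[1] - p[1]) * t / steps)) for t in range(steps + 1)]
```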
On the basis of the above embodiment, after S12, the method further includes:
s26: the object classes contained in the background image are acquired.
In a specific implementation, a YOLOv3 training model is selected to identify the categories of standard objects in the background image, such as furniture, animals, and plants. Since the model comes with a public data set, a large amount of training data does not need to be collected manually, and new category data that frequently appears in financial business scenes, such as inscribed boards, clocks, potted plants, and bookshelves, can be added on top of it. After training, the YOLOv3 model is published to a model management and deployment platform and wrapped as a service using Python's Flask package, which provides an API for calling the model: the input of the interface is a background image, and the output is the categories and numbers of the recognized standard objects.
S27: the frequency of occurrence of each object in the plurality of customer images is acquired.
S28: Calculate the final similarity according to the frequency and the similarity.
S29: Judge whether the final similarity is smaller than the first preset threshold; if so, proceed to S14.
Numpy is called to count the object categories and numbers obtained in S26, and the frequency of occurrence of each category, R1 to Rn (where 1 to n indexes the object categories), is calculated as the number of occurrences divided by the number of people in the group. The frequencies are then substituted into the formula LS = S / {[1 + (R1 − 0.33)] × … × [1 + (Rn − 0.33)]} to obtain the final similarity. With the similarity S unchanged, the higher the frequency with which each object appears across the customers' images, the more alike those images are, and identical objects in the background are an important sign of whether the photos were taken in one place. Conversely, when the objects in the customer photos are varied but each appears at a low frequency, suggesting that the photos may not have been taken in the same place, the final result is increased, raising the confidence that the customers are not in one place.
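The formula above translates directly into code (a sketch; 0.33 is the baseline constant used in the passage, and function and parameter names are illustrative):

```python
def final_similarity(s, freqs, baseline=0.33):
    # LS = S / {[1 + (R1 - 0.33)] x ... x [1 + (Rn - 0.33)]}
    denom = 1.0
    for r in freqs:
        denom *= 1.0 + (r - baseline)
    return s / denom

# Frequencies above the 0.33 baseline shrink LS; frequencies below it
# enlarge LS, matching the behaviour described in the passage above.
ls = final_similarity(0.8, [0.5, 0.2])
```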
According to the fraud identification method described above, the final similarity is calculated from the frequency and the similarity and then compared with the first preset threshold to judge whether the customer image is abnormal. Because two parameters are applied in the comparison, the identification accuracy is improved.
In the above embodiments, the fraud identification method is described in detail, and the present application also provides embodiments corresponding to the fraud identification apparatus. It should be noted that the present application describes the embodiments of the apparatus portion from two perspectives, one from the perspective of the function module and the other from the perspective of the hardware.
Fig. 4 is a schematic structural diagram of a fraud identification apparatus according to an embodiment of the present application. As shown in Fig. 4, from the perspective of functional modules the apparatus includes:
the first acquiring module 10 is configured to acquire a first preset threshold and at least two customer images.
The first extraction module 11 is configured to extract image features of a background portion in the client image.
A first determining module 12 for determining similarity between image features.
And the judging module 13 is configured to judge that the client image is an abnormal image when the similarity is smaller than a first preset threshold.
As a preferred embodiment, the apparatus further comprises:
The second extraction module is configured to extract and remove the portrait portion in the customer image.
The second acquiring module is configured to acquire a second preset threshold.
The second determining module is configured to determine the percentage of the portrait portion in the customer image.
The filling module is configured to fill the blank portion in the customer image with color, and to restore, in the color-filled area, the outline of any object in the background portion partially occluded by the portrait.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
According to the fraud identification apparatus provided by the embodiment of the application, after the first preset threshold and at least two customer images are obtained, the image features of the background portion of the customer images are extracted for similarity analysis, so whether a customer image is abnormal can be judged from the relation between the similarity and the first preset threshold. With this technical solution, on one hand a machine can replace manual work to complete fraud identification, reducing labor cost and improving the recognition rate; on the other hand, analyzing similar backgrounds effectively identifies illegitimate users who take their photos in the same place, reducing fraud risk.
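As a concrete illustration of what the first extraction module and first determining module might compute, here is a tiny average-hash feature with a Hamming-distance similarity. Claim 7 only specifies "an image hash algorithm", so this particular aHash variant, the synthetic backgrounds, and all names are assumptions:

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Block-average the grayscale image down to hash_size x hash_size,
    # then threshold at the mean: a 64-bit perceptual fingerprint
    h, w = img.shape
    img = img[:h - h % hash_size, :w - w % hash_size]
    blocks = img.reshape(hash_size, img.shape[0] // hash_size,
                         hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hash_similarity(a, b):
    # Similarity = 1 - normalized Hamming distance between the hashes
    return 1.0 - np.count_nonzero(a != b) / a.size

bg1 = np.tile(np.arange(64, dtype=np.float64), (64, 1))           # gradient background
bg2 = bg1 + np.random.default_rng(0).normal(0.0, 1.0, bg1.shape)  # near-duplicate
sim = hash_similarity(average_hash(bg1), average_hash(bg2))
```

Two customer photos taken against the same background yield nearly identical hashes, so `sim` approaches 1; the judging module would then compare such a score against the first preset threshold.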
Fig. 5 is a block diagram of a fraud identification apparatus according to another embodiment of the present application. As shown in Fig. 5, in terms of hardware structure the apparatus includes: a memory 20 for storing a computer program;
a processor 21 for implementing the steps of the fraud identification method as in the above embodiments when executing the computer program.
The fraud recognition apparatus provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, or a desktop computer.
The processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for processing a calculation operation related to machine learning.
The memory 20 may include one or more computer-readable storage media, which may be non-transitory. Memory 20 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In this embodiment, the memory 20 is at least used for storing a computer program 201, wherein after being loaded and executed by the processor 21, the computer program can implement the relevant steps of the fraud identification method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may also include an operating system 202, data 203, and the like, and the storage manner may be a transient storage manner or a permanent storage manner. Operating system 202 may include, among others, Windows, Unix, Linux, and the like. The data 203 may include, but is not limited to, object categories, and the like.
In some embodiments, the fraud identification apparatus may further include a display screen 22, an input/output interface 23, a communication interface 24, a power supply 25, and a communication bus 26.
It will be appreciated by those skilled in the art that the structure shown in Fig. 5 does not constitute a limitation of the fraud identification apparatus, which may comprise more or fewer components than those shown.
The fraud identification device provided by the embodiment of the application comprises a memory and a processor. When the processor executes the program stored in the memory, the following method can be implemented: after the first preset threshold and at least two customer images are obtained, the image features of the background portion of the customer images are extracted for similarity analysis, so whether a customer image is abnormal can be judged from the relation between the similarity and the first preset threshold. With this technical solution, on one hand a machine can replace manual work to complete fraud identification, reducing labor cost and improving the recognition rate; on the other hand, analyzing similar backgrounds effectively identifies illegitimate users who take their photos in the same place, reducing fraud risk.
Finally, the application also provides a corresponding embodiment of the computer readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps as set forth in the above-mentioned method embodiments.
It is to be understood that if the method in the above embodiments is implemented in the form of software functional units and sold or used as a stand-alone product, it can be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in whole or in part, may be embodied in the form of a software product, which is stored in a storage medium and executes all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The fraud identification method, device and storage medium provided by the present application are described in detail above. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A fraud identification method, comprising:
acquiring a first preset threshold and at least two customer images;
extracting image features of a background part in the customer image;
determining a similarity between the image features;
and when the similarity is smaller than the first preset threshold value, judging that the customer image is an abnormal image.
2. The fraud identification method of claim 1, further comprising, prior to said extracting image features of background portions in said customer image:
and extracting and removing the portrait part in the customer image.
3. The fraud identification method of claim 2, wherein after said extracting and rejecting portrait portions in said customer image, further comprising:
acquiring a second preset threshold;
determining a percentage of the portrait portion to the customer image;
and when the percentage is smaller than the second preset threshold value, entering the step of extracting the image characteristics of the background part in the customer image.
4. The fraud identification method of claim 3, wherein said determining the percentage of said portrait portion to said customer image specifically comprises:
acquiring the number of first pixel points of the client image;
acquiring the number of second pixel points of the portrait part;
and determining the percentage of the portrait part in the customer image according to the percentage of the number of the second pixel points in the number of the first pixel points.
5. The fraud identification method of claim 3, wherein when the percentage is less than the second preset threshold, further comprising:
filling blank parts in the customer image with colors;
restoring the outline of the object which is partially shielded by the portrait in the background part in the area filled with the color;
the blank part is a blank area left by the elimination of the portrait part in the client image, and the color is set according to the mean value of the gray values of the pixels of the background part.
6. The fraud identification method of claim 1, further comprising, after said determining the similarity between the image features:
acquiring object types contained in the background image;
acquiring the frequency of each object appearing in a plurality of customer images;
calculating final similarity according to the frequency and the similarity;
and when the final similarity is smaller than the first preset threshold, entering the step of judging the client image to be an abnormal image.
7. The fraud identification method according to claim 1, wherein said extracting image features of the background portion in the customer image is specifically:
and extracting the image characteristics of the background part in the client image by adopting an image hash algorithm.
8. An apparatus for fraud identification, comprising:
a first acquiring module, configured to acquire a first preset threshold and at least two customer images;
the first extraction module is used for extracting image characteristics of a background part in the client image;
a first determination module for determining similarity between the image features;
and the judging module is used for judging the client image as an abnormal image when the similarity is smaller than the first preset threshold value.
9. An apparatus for fraud identification, comprising a memory for storing a computer program;
a processor for implementing the steps of the fraud identification method of any of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the fraud identification method according to any one of claims 1 to 7.
CN202010724335.3A 2020-07-24 2020-07-24 Fraud identification method and device and storage medium Pending CN111860369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010724335.3A CN111860369A (en) 2020-07-24 2020-07-24 Fraud identification method and device and storage medium


Publications (1)

Publication Number Publication Date
CN111860369A true CN111860369A (en) 2020-10-30

Family

ID=72950137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010724335.3A Pending CN111860369A (en) 2020-07-24 2020-07-24 Fraud identification method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111860369A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418167A (en) * 2020-12-10 2021-02-26 深圳前海微众银行股份有限公司 Image clustering method, device, equipment and storage medium
CN112560970A (en) * 2020-12-21 2021-03-26 上海明略人工智能(集团)有限公司 Abnormal picture detection method, system, equipment and storage medium based on self-coding
CN113298118A (en) * 2021-04-28 2021-08-24 上海淇玥信息技术有限公司 Intelligent anti-fraud method and device based on neural network and electronic equipment
CN113689292A (en) * 2021-09-18 2021-11-23 杭银消费金融股份有限公司 User aggregation identification method and system based on image background identification
CN114926725A (en) * 2022-07-18 2022-08-19 中邮消费金融有限公司 Online financial group partner fraud identification method based on image analysis
CN116071089A (en) * 2023-02-10 2023-05-05 成都新希望金融信息有限公司 Fraud identification method and device, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
US20190236738A1 (en) * 2018-02-01 2019-08-01 Fst21 Ltd System and method for detection of identity fraud
CN110348519A (en) * 2019-07-12 2019-10-18 深圳众赢维融科技有限公司 Financial product cheats recognition methods and the device of clique
CN110751490A (en) * 2019-10-22 2020-02-04 中信银行股份有限公司 Fraud identification method and device, electronic equipment and computer-readable storage medium



Similar Documents

Publication Publication Date Title
CN111860369A (en) Fraud identification method and device and storage medium
JP7413400B2 (en) Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic equipment and storage medium
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US8638993B2 (en) Segmenting human hairs and faces
CN110569756B (en) Face recognition model construction method, recognition method, device and storage medium
CN108090511B (en) Image classification method and device, electronic equipment and readable storage medium
US20160163028A1 (en) Method and device for image processing
CN111091109B (en) Method, system and equipment for predicting age and gender based on face image
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN106951869B (en) A kind of living body verification method and equipment
CN111696080B (en) Face fraud detection method, system and storage medium based on static texture
WO2009078957A1 (en) Systems and methods for rule-based segmentation for objects with full or partial frontal view in color images
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN109711268B (en) Face image screening method and device
CN111680690B (en) Character recognition method and device
CN111445459A (en) Image defect detection method and system based on depth twin network
CN112651953B (en) Picture similarity calculation method and device, computer equipment and storage medium
CN107256543A (en) Image processing method, device, electronic equipment and storage medium
CN110969046A (en) Face recognition method, face recognition device and computer-readable storage medium
CN111709305B (en) Face age identification method based on local image block
CN111222433A (en) Automatic face auditing method, system, equipment and readable storage medium
CN110956184A (en) Abstract diagram direction determination method based on HSI-LBP characteristics
CN113870196A (en) Image processing method, device, equipment and medium based on anchor point cutting graph
CN110633666A (en) Gesture track recognition method based on finger color patches
CN112069885A (en) Face attribute identification method and device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination