CN111985504A - Copying detection method, device, equipment and medium based on artificial intelligence - Google Patents


Info

Publication number
CN111985504A
CN111985504A (application CN202010827984.6A)
Authority
CN
China
Prior art keywords: network, data, loss function, training, network structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010827984.6A
Other languages
Chinese (zh)
Other versions
CN111985504B (en)
Inventor
喻晨曦
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202010827984.6A priority Critical patent/CN111985504B/en
Publication of CN111985504A publication Critical patent/CN111985504A/en
Application granted granted Critical
Publication of CN111985504B publication Critical patent/CN111985504B/en

Classifications

    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06F 18/23 — Pattern recognition; analysing; clustering techniques
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06Q 30/0185 — Certifying business or products; product, service or business identity fraud

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Development Economics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of artificial intelligence and provides an artificial-intelligence-based reproduction detection method, device, equipment and medium. The method performs color space conversion on sample data to obtain conversion data, and constructs a target loss function that effectively mitigates the sample-imbalance problem while remaining tolerant of sample noise. Based on the conversion data, a first network structure and a second network structure are trained with the target loss function; after flatten processing, the two structures are spliced to obtain a fusion network, which is trained with the target loss function to obtain the target network. Data to be detected is input to the target network, and the reproduction detection result is output. Because the fusion network combines both primary and fine feature extraction, the sample-imbalance problem is effectively eliminated and sample noise is tolerated, realizing accurate reproduction detection by artificial-intelligence means. The invention also relates to blockchain technology: the target network and the reproduction detection result can be stored on a blockchain.

Description

Copying detection method, device, equipment and medium based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a reproduction detection method, device, equipment and medium based on artificial intelligence.
Background
In anti-fraud risk-control scenarios, reproduction detection usually faces the problem of unbalanced samples, which affects the accuracy of the final detection.
In the prior art, the sample-imbalance problem is mainly addressed by resampling the data with SMOTE (Synthetic Minority Oversampling Technique) or by training with an affinity loss.
However, SMOTE sampling suffers from the weak learning representativeness of its synthetic data, which easily causes overfitting and reduces generalization ability; the affinity loss, lacking an adaptive learning mechanism during training, requires clustering and therefore consumes a great deal of research and training time.
In addition, because annotators determine whether a picture is a reproduction mainly according to imperfect rules, label errors at the data end are difficult to avoid, and existing algorithms also struggle with this problem. How to tolerate a small number of mislabeled images without letting them interfere with model training is therefore also an urgent problem to be solved.
Disclosure of Invention
In view of the above, it is necessary to provide an artificial-intelligence-based reproduction detection method, device, equipment and medium that, by artificial-intelligence means, effectively eliminate the sample-imbalance problem while tolerating sample noise, realizing accurate reproduction detection.
A reproduction detection method based on artificial intelligence comprises the following steps:
responding to a copying detection instruction, acquiring an initial picture, and performing feature interception on the initial picture to obtain sample data;
performing color space conversion on the sample data to obtain conversion data;
constructing a target loss function;
training a preset first neural network by using the target loss function based on the conversion data to obtain a first network structure, and training a preset second neural network by using the target loss function based on the conversion data to obtain a second network structure;
performing flatten processing on the first network structure and the second network structure, and splicing the flattened first network structure and second network structure to obtain a fusion network;
training the fusion network by using the target loss function to obtain a target network;
and acquiring data to be detected, inputting the data to be detected into the target network, and outputting a reproduction detection result.
According to a preferred embodiment of the present invention, the performing feature interception on the initial picture to obtain sample data includes:
inputting each picture in the initial pictures into a YOLOv3 network for identification to obtain an avatar area of each picture;
intercepting each corresponding picture according to the head portrait area of each picture to obtain each subsample;
and integrating the obtained sub-samples to obtain the sample data.
According to a preferred embodiment of the invention, the target loss function is constructed using the following formula:
Loss = α · (1 − y_pred)^γ · log(y_pred) · y_true + β · y_pred · log(y_true)
wherein Loss is the target loss function, α is a value in [0, 1], β is a value in [1, +∞), γ is a value greater than 1, y_pred is the prediction probability, and y_true is the actual probability (label).
According to a preferred embodiment of the present invention, the training a preset first neural network with the target loss function based on the conversion data to obtain a first network structure includes:
acquiring a gray scale map, a YCrCb map and an HSV map corresponding to the sample data from the conversion data;
performing dimension conversion on the YCrCb image based on a CoaLBP algorithm to obtain first dimension data, performing dimension conversion on the HSV image based on the CoaLBP algorithm to obtain second dimension data, and performing local phase quantization processing on the gray image to obtain third dimension data;
splicing the first dimension data, the second dimension data and the third dimension data to obtain fourth dimension data;
zero padding processing is carried out on the fourth dimensional data to obtain texture feature data;
and training the first neural network by using the target loss function based on the texture feature data until the value of the target loss function is converged, and stopping training to obtain the first network structure.
According to a preferred embodiment of the present invention, the training a preset second neural network with the target loss function based on the conversion data to obtain a second network structure includes:
acquiring the HSV map from the conversion data;
and training the second neural network by using the target loss function based on the HSV graph until the value of the target loss function is converged, and stopping training to obtain the second network structure, wherein the second neural network is an inverted residual network.
According to a preferred embodiment of the present invention, the training the fusion network with the target loss function to obtain a target network includes:
acquiring a network to be added;
superposing the network to be added on the fusion network to obtain an intermediate network;
training the intermediate network by using the target loss function based on the sample data until the value of the target loss function is converged, and stopping training to obtain the target network;
wherein the parameters of the fusion network remain unchanged while the intermediate network is trained.
According to a preferred embodiment of the present invention, after outputting the reproduction detection result, the artificial intelligence based reproduction detection method further includes:
when the copying detection result shows that the data to be detected has copying risks, generating risk prompt information according to the copying detection result;
and sending the risk prompt information to a designated terminal device, and storing the reproduction detection result to a block chain.
A reproduction detection device based on artificial intelligence, the reproduction detection device comprising:
the intercepting unit is used for responding to the copying detection instruction, acquiring an initial picture, and carrying out feature interception on the initial picture to obtain sample data;
the conversion unit is used for performing color space conversion on the sample data to obtain conversion data;
a construction unit for constructing a target loss function;
a training unit, configured to train a preset first neural network with the target loss function based on the conversion data to obtain a first network structure, and train a preset second neural network with the target loss function based on the conversion data to obtain a second network structure;
the fusion unit is used for performing flatten processing on the first network structure and the second network structure and splicing the flattened structures to obtain a fusion network;
the training unit is further configured to train the fusion network with the target loss function to obtain a target network;
and the input unit is used for acquiring data to be detected, inputting the data to be detected to the target network and outputting a reproduction detection result.
An electronic device, the electronic device comprising:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the artificial intelligence based copy detection method.
A computer-readable storage medium having stored therein at least one instruction for execution by a processor in an electronic device to implement the artificial intelligence based copy detection method.
According to the technical scheme, the method responds to a reproduction detection instruction by acquiring an initial picture and performing feature interception on it to obtain sample data; performs color space conversion on the sample data to obtain conversion data; constructs a target loss function that effectively mitigates sample imbalance while tolerating sample noise; trains a preset first neural network with the target loss function based on the conversion data to obtain a first network structure, and likewise trains a preset second neural network to obtain a second network structure; performs flatten processing on the two structures and splices them to obtain a fusion network; trains the fusion network with the target loss function to obtain the target network; and finally acquires data to be detected, inputs it into the target network, and outputs the reproduction detection result. Because the fusion network combines primary and fine feature extraction, the sample-imbalance problem is effectively eliminated, sample noise is tolerated, and accurate reproduction detection is realized by artificial-intelligence means.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the artificial intelligence based copy detection method of the present invention.
FIG. 2 is a functional block diagram of a preferred embodiment of the artificial intelligence based duplication detection apparatus of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device implementing a copy detection method based on artificial intelligence according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a preferred embodiment of the artificial intelligence-based copy detection method according to the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The artificial intelligence-based copy detection method is applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network where the electronic device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
And S10, responding to the copying detection instruction, acquiring an initial picture, and performing feature interception on the initial picture to obtain sample data.
In anti-fraud risk-control scenarios, reproduction detection is mainly used for tasks such as anti-counterfeiting of certificates and identifying high-risk clients from historical client data. These tasks are ubiquitous in security risk control, and they exhibit extremely unbalanced sample characteristics, i.e., few usable positive samples. In many projects, the positive-sample rate of the risk-control target is even lower than 0.1% in some scenarios, and this extreme imbalance makes it very difficult to improve the model's effectiveness.
For example: the quality of sample image data with 100 ten thousand of artificial cameras directly used under an artificial line is very different from the quality of service client label data on the line, and the data balance degree of the sample image data can reach 50: 50, the data balance of the latter is 0.3: 99.7. therefore, when a rendering model trained to near 100% accuracy and hit rate using the former is applied to the line, the accuracy and hit rate of the model will be less than 60%, due to the inconsistent quality of the data online and offline. That is, when data under the line is used on the line, there is a problem of extremely unbalanced samples, it is difficult to maintain the original 100% accuracy and hit rate of the model on the line, and even after the original cross entropy loss function is provided, the hit rate is 0, which seriously affects the accuracy of detection.
In the anti-fraud risk-control field, data-sample imbalance is pervasive. When the sample ratio is severely unbalanced, features present in part of the samples can no longer clearly separate positives from negatives after even slight noise, and even models with high fitting capacity (such as random forests, XGBoost, LightGBM and neural network models) cannot meaningfully improve accuracy and hit rate.
The commonly adopted remedies at the training-data end are an undersampling strategy that discards majority-class samples or an oversampling strategy that expands minority-class samples; both easily lead to model overfitting. Undersampling discards data and thus reduces the total amount of usable information, while oversampling exaggerates the influence of the existing examples, so the model's generalization ability decreases and it is difficult to apply to new data.
Alternatively, an ADASYN strategy may be adopted, but for a sparsely distributed minority class each neighborhood may contain only a single minority sample, which constrains the generation of synthetic data between a point and its neighbors and invalidates the strategy in that situation. More fundamentally, the adaptability of the synthesized data affects the model's hit rate: when the newly synthesized data produces many false minority-class samples, the model will perform poorly.
Therefore, the present embodiment will solve the problem of sample imbalance.
In at least one embodiment of the present invention, the initial picture may be a headshot, a certificate photo, or another picture containing a face image.
In this embodiment, the copying detection instruction may be triggered by a designated person, such as a risk manager, and may also be automatically triggered when it is detected that a user uploads a photo, which is not limited in the present invention.
In at least one embodiment of the present invention, the performing feature extraction on the initial picture to obtain sample data includes:
inputting each picture in the initial pictures into a YOLOv3 network for identification to obtain an avatar area of each picture;
intercepting each corresponding picture according to the head portrait area of each picture to obtain each subsample;
and integrating the obtained sub-samples to obtain the sample data.
Through the above embodiment, because the YOLOv3 network offers high and stable precision, accurate sample data can be obtained by cropping the avatar features with it for use in training subsequent models. Cropping the avatar features first also improves the speed and accuracy of model training.
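The avatar-cropping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the YOLOv3 detector is assumed to have already produced an (x, y, width, height) box for each picture, and plain nested lists stand in for image arrays (with NumPy or OpenCV the crop would be a single array slice).

```python
# Hypothetical sketch: cropping the detected avatar region from each picture.
# Each "picture" is a rows-by-cols grid of pixels; each detection is a plain
# (x, y, width, height) box assumed to come from the YOLOv3 network.

def crop_avatar(picture, box):
    """Return the sub-image covered by the (x, y, w, h) bounding box."""
    x, y, w, h = box
    return [row[x:x + w] for row in picture[y:y + h]]

def build_sample_data(pictures, boxes):
    """Crop every picture by its detected avatar box and collect the crops."""
    return [crop_avatar(p, b) for p, b in zip(pictures, boxes)]

# Example: a 4x4 "picture" of (row, col) pixels with a 2x2 avatar box at (1, 1).
picture = [[(r, c) for c in range(4)] for r in range(4)]
sample = build_sample_data([picture], [(1, 1, 2, 2)])
```

The integration step in the claims then amounts to collecting the per-picture crops into one sample set, which `build_sample_data` already does.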
And S11, performing color space conversion on the sample data to obtain conversion data.
It should be noted that, since color space conversion is a mature, well-documented technique, it is not described in detail here.
In this embodiment, the obtained conversion data is used for subsequent model training, and the converted data can satisfy the requirements of different models for the format of the input data.
In this embodiment, the conversion data may include a grayscale map, a YCrCb (luma and chroma components) map, and an HSV (hue, saturation, value) map.
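Per-pixel versions of these three conversions can be sketched with standard formulas; the constants below are the common full-range 8-bit BT.601 convention (as used by, e.g., OpenCV's converters) and may differ in detail from the converter the patent assumes.

```python
import colorsys  # stdlib RGB <-> HSV conversion

def to_gray(r, g, b):
    # ITU-R BT.601 luma; 8-bit channel values in [0, 255]
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_ycrcb(r, g, b):
    # Full-range 8-bit YCrCb built on the BT.601 luma above
    y = to_gray(r, g, b)
    cr = 0.713 * (r - y) + 128.0
    cb = 0.564 * (b - y) + 128.0
    return y, cr, cb

def to_hsv(r, g, b):
    # colorsys operates on [0, 1] floats; h, s, v are each returned in [0, 1]
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
```

Applying these functions to every pixel of the sample data yields the grayscale, YCrCb and HSV maps used by the subsequent training steps.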
S12, constructing an objective loss function.
In the field of unstructured data, non-adaptive loss functions, data augmentation or optimization algorithms are generally adopted to improve model performance. However, such loss functions usually have no adaptive learning mechanism during training and require clustering, which consumes a great deal of research and training time.
In this embodiment, the target loss function is constructed using the following formula:
Loss = α · (1 − y_pred)^γ · log(y_pred) · y_true + β · y_pred · log(y_true)
wherein Loss is the target loss function, α is a value in [0, 1], β is a value in [1, +∞), γ is a value greater than 1, y_pred is the prediction probability, and y_true is the actual probability (label).
Training the model with this target loss function effectively mitigates the sample-imbalance problem while remaining tolerant of sample noise.
S13, based on the conversion data, training a preset first neural network by the target loss function to obtain a first network structure, and based on the conversion data, training a preset second neural network by the target loss function to obtain a second network structure.
In this embodiment, the first neural network and the second neural network may be pre-constructed as required.
For example: the first neural network may be a 3 × 3 convolutional neural network, and the second neural network may use a depthwise-separable structure.
In at least one embodiment of the present invention, the training a preset first neural network with the target loss function based on the conversion data to obtain a first network structure includes:
acquiring a gray scale map, a YCrCb map and an HSV map corresponding to the sample data from the conversion data;
performing dimension conversion on the YCrCb image based on a CoaLBP algorithm to obtain first dimension data, performing dimension conversion on the HSV image based on the CoaLBP algorithm to obtain second dimension data, and performing local phase quantization processing on the gray image to obtain third dimension data;
splicing the first dimension data, the second dimension data and the third dimension data to obtain fourth dimension data;
zero padding processing is carried out on the fourth dimensional data to obtain texture feature data;
and training the first neural network by using the target loss function based on the texture feature data until the value of the target loss function is converged, and stopping training to obtain the first network structure.
For example: the CoALBP algorithm performs dimension conversion on the YCrCb map and the HSV map respectively, yielding 3072-dimensional data from each (6144 dimensions in total); local phase quantization of the grayscale map generates 255-dimensional data; each sample is then padded with zeros so that the data forms a matrix, giving 6400-dimensional matrix data (e.g., 80 × 80) as the texture feature data, on which training is performed to obtain the first network structure.
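The dimension bookkeeping in this example can be sketched as follows. The CoALBP and LPQ extractors themselves are assumed (plain lists stand in for their outputs), and the 80 × 80 shape is simply one way to arrange 6400 values as a square matrix; it is not dictated by the patent.

```python
# Hedged sketch: fuse 3072 + 3072 + 255 = 6399 feature values, zero-pad to
# 6400, and view the result as an 80 x 80 matrix of texture features.

def assemble_texture_features(coalbp_ycrcb, coalbp_hsv, lpq_gray, target=6400):
    fused = list(coalbp_ycrcb) + list(coalbp_hsv) + list(lpq_gray)
    fused += [0.0] * (target - len(fused))   # zero padding up to `target`
    side = int(target ** 0.5)                # 80 when target == 6400
    return [fused[i * side:(i + 1) * side] for i in range(side)]

# Stand-in feature vectors with the dimensions quoted in the example.
matrix = assemble_texture_features([1.0] * 3072, [2.0] * 3072, [3.0] * 255)
```

The resulting matrix is what the first neural network is trained on with the target loss function.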
Through the above embodiment, a network for primary (coarse-stage) feature extraction, i.e., the first network structure, can be obtained through training.
Further, the training a preset second neural network with the target loss function based on the conversion data to obtain a second network structure includes:
acquiring the HSV map from the conversion data;
and training the second neural network by using the target loss function based on the HSV graph until the value of the target loss function is converged, and stopping training to obtain the second network structure, wherein the second neural network is an inverted residual network.
Wherein the inverted residual network is the basic building block of the MobileNetV2 network structure.
Through the above embodiments, the network for fine feature extraction, i.e., the second network structure, can be trained.
In at least one embodiment of the present invention, after obtaining the first network structure and the second network structure, a first-order feature extraction may be performed based on the first network structure, and a fine feature extraction may be performed based on the second network structure, so as to eliminate the problem of unbalanced samples. Meanwhile, due to the simultaneous action of the two networks, the trained model can be effectively compatible with sample noise.
And S14, performing flatten processing on the first network structure and the second network structure, and splicing the flattened structures to obtain a fusion network.
For example: after the above processing, two 60 × 3 network outputs can be combined into a single 60 × 3 × 2 structure.
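The flatten-and-splice step can be illustrated with toy data; the 60 × 3 shapes mirror the example above, and the helper names are hypothetical.

```python
# Illustrative sketch: flatten each network's output to a vector, then stack
# the two vectors, fusing two 60 x 3 outputs into one two-branch structure.

def flatten(feature_map):
    """Row-major flatten of a 2-D feature map into a single vector."""
    return [v for row in feature_map for v in row]

def splice(first_out, second_out):
    """Stack the two flattened outputs as the fusion network's joint input."""
    return [flatten(first_out), flatten(second_out)]

first = [[1.0] * 3 for _ in range(60)]    # stand-in for the first network's output
second = [[2.0] * 3 for _ in range(60)]   # stand-in for the second network's output
fused_input = splice(first, second)
```

In a deep-learning framework this corresponds to a flatten layer on each branch followed by a concatenate/stack layer joining the two branches.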
In this embodiment, the fusion network includes both a first network structure for initial-stage feature extraction and a second network structure for fine-stage feature extraction, so as to effectively eliminate the problem of unbalanced samples and be compatible with sample noise.
And S15, training the fusion network by the target loss function to obtain a target network.
In this embodiment, in order to further ensure the accuracy of the model and improve the usability of the model, further training needs to be performed based on the fusion network.
Specifically, the training the fusion network with the target loss function to obtain the target network includes:
acquiring a network to be added;
superposing the network to be added on the fusion network to obtain an intermediate network;
training the intermediate network by using the target loss function based on the sample data until the value of the target loss function is converged, and stopping training to obtain the target network;
wherein the parameters of the fusion network remain unchanged while the intermediate network is trained.
The network to be added may be configured according to different tasks, which is not limited in the present invention.
When the intermediate network is trained, the parameters of the fusion network are kept unchanged; this preserves the characteristics of the fusion network while also speeding up model training.
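A toy sketch of this frozen-parameter scheme (all names and the bare SGD step are hypothetical, for illustration only): parameters belonging to the fusion network are listed as frozen, so an update step changes only the appended network.

```python
# Minimal sketch: one gradient step that skips frozen parameters, so the
# fusion network's weights stay fixed while the added head is trained.

def sgd_step(params, grads, frozen, lr=0.1):
    """Apply one SGD update, leaving any parameter named in `frozen` intact."""
    return {
        name: (w if name in frozen else w - lr * grads[name])
        for name, w in params.items()
    }

params = {"fusion.w": 0.5, "head.w": 0.2}   # hypothetical parameter names
grads = {"fusion.w": 1.0, "head.w": 1.0}
updated = sgd_step(params, grads, frozen={"fusion.w"})
```

In a real framework the same effect is obtained by marking the fusion network's layers as non-trainable (e.g., disabling gradient tracking for them) before training the intermediate network.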
In this embodiment, to further improve data security, the target network is saved to the blockchain.
It should be noted that, in order to further improve the accuracy of the model, a feature-sharing mechanism of the network may also be adopted, so that the model can take on other prediction tasks while adding only a few parameters; the present invention is not limited in this respect.
And S16, acquiring the data to be detected, inputting the data to be detected to the target network, and outputting the reproduction detection result.
In this embodiment, the data to be detected refers to data that needs to be subjected to copying detection, for example: a customer's certificate photo, etc.
Wherein the copying detection result may include: the data to be detected has a copying risk, and the data to be detected does not have the copying risk.
In at least one embodiment of the present invention, after outputting the reproduction detection result, the artificial intelligence based reproduction detection method further includes:
when the copying detection result shows that the data to be detected has copying risks, generating risk prompt information according to the copying detection result;
and sending the risk prompt information to a designated terminal device, and storing the reproduction detection result to a block chain.
And the risk prompt information carries the copying detection result and is used for prompting that the data to be detected has copying risk.
The appointed terminal equipment may be a device of relevant personnel, such as risk-control staff, so that an alarm is raised in time when a risk is detected and the relevant personnel are prompted to handle it promptly.
In this embodiment, to further prevent the data from being tampered, the duplication detection result is saved to the blockchain.
According to the technical scheme, the method responds to a reproduction detection instruction by acquiring an initial picture and performing feature interception on it to obtain sample data; performs color space conversion on the sample data to obtain conversion data; constructs a target loss function that effectively mitigates sample imbalance while tolerating sample noise; trains a preset first neural network with the target loss function based on the conversion data to obtain a first network structure, and likewise trains a preset second neural network to obtain a second network structure; performs flatten processing on the two structures and splices them to obtain a fusion network; trains the fusion network with the target loss function to obtain the target network; and finally acquires data to be detected, inputs it into the target network, and outputs the reproduction detection result, realizing accurate reproduction detection by artificial-intelligence means.
Fig. 2 is a functional block diagram of a preferred embodiment of the artificial-intelligence-based copying detection apparatus according to the present invention. The artificial-intelligence-based copying detection apparatus 11 comprises an intercepting unit 110, a converting unit 111, a constructing unit 112, a training unit 113, a fusing unit 114 and an input unit 115. A module/unit referred to in the present invention is a series of computer program segments that are stored in the memory 12, can be executed by the processor 13, and perform a fixed function. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
In response to the copying detection instruction, the intercepting unit 110 obtains an initial picture and performs feature interception on the initial picture to obtain sample data.
In anti-fraud risk-control scenarios, copying detection is mainly used for tasks such as certificate-forgery fraud prevention and identifying high-risk clients from client history data. These tasks are ubiquitous in security risk control, and they feature extremely unbalanced samples, that is, few usable samples. Moreover, in many projects the risk-control target sample rate in some scenarios is even lower than 0.1%, and the difficulty caused by this extreme sample imbalance greatly hinders improvement of the model's effectiveness.
For example: the quality of 1,000,000 sample images collected manually offline differs greatly from the quality of online business client label data; the class balance of the former can reach 50:50, whereas that of the latter is 0.3:99.7. Consequently, when a copying model trained to nearly 100% accuracy and hit rate on the former is applied online, its accuracy and hit rate fall below 60%, owing to the inconsistent quality of online and offline data. That is, when offline data is used online there is a problem of extreme sample imbalance: it is difficult to maintain the model's original 100% accuracy and hit rate online, and with the original cross-entropy loss function the hit rate can even drop to 0, which seriously affects detection accuracy.
In the anti-fraud risk-control field, imbalance of data samples is pervasive. When the sample ratio is severely unbalanced, features present in some samples can no longer be clearly distinguished as positive or negative after even slight noise, and even models with strong fitting capability (such as random forest, XGBoost, LightGBM and neural network models) cannot achieve an obvious, meaningful improvement in accuracy and hit rate.
A commonly adopted solution is to apply, at the training-data end, an undersampling strategy that discards majority-class samples or an oversampling strategy that expands minority-class samples; both readily cause model overfitting. The undersampling strategy discards data and thereby reduces the total amount of usable information, while the oversampling strategy exaggerates the influence of the existing examples, reducing the model's generalization ability and making it difficult to apply to new data.
Alternatively, an ADASYN strategy may be adopted; however, for a sparsely distributed minority class, each neighborhood may contain only one minority sample, which constrains the generation of synthetic data between a point and its neighbors, so the strategy fails in that situation. More fundamentally, after the ADASYN strategy is used, the model's hit rate depends on how well the synthesized data fits the real distribution: when the newly synthesized data produces many spurious minority-class samples, the model performs poorly.
Therefore, the present embodiment will solve the problem of sample imbalance.
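To make the overfitting concern concrete, the naive random-oversampling strategy criticized above can be sketched in a few lines (toy data; this is the approach the embodiment avoids, not the patent's method). Duplicating minority samples balances the classes without adding any new information:

```python
import random

def random_oversample(samples, labels, minority=1, seed=0):
    """Duplicate minority-class samples until both classes are equal in size.

    This is the naive oversampling strategy the text warns about: no new
    information is added; existing minority examples are merely repeated.
    """
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    out_s, out_y = list(samples), list(labels)
    while len(minority_idx) + (len(out_y) - len(labels)) < len(majority_idx):
        j = rng.choice(minority_idx)  # re-draw an already-seen minority sample
        out_s.append(samples[j])
        out_y.append(labels[j])
    return out_s, out_y

# a 1 : 99 imbalance in miniature (cf. the 0.3 : 99.7 ratio in the text)
X = [[float(i)] for i in range(100)]
y = [1] + [0] * 99
Xb, yb = random_oversample(X, y)
# the single positive example is now repeated 99 times
```

Any model trained on `Xb, yb` sees the same positive point 99 times, which is exactly the exaggeration of existing examples that hurts generalization.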
In at least one embodiment of the present invention, the initial picture may be a head-shot photo, a certificate photo, or the like containing a face image.
In this embodiment, the copying detection instruction may be triggered by a designated person, such as a risk manager, and may also be automatically triggered when it is detected that a user uploads a photo, which is not limited in the present invention.
In at least one embodiment of the present invention, the intercepting unit 110 performing feature interception on the initial picture to obtain sample data comprises:
inputting each picture among the initial pictures into a YOLOv3 network for recognition to obtain the head-portrait region of each picture;
cropping each corresponding picture according to its head-portrait region to obtain each sub-sample;
and integrating the obtained sub-samples to obtain the sample data.
With this embodiment, because the YOLOv3 network offers high and stable precision, accurate sample data can be obtained by cropping the head-portrait features through the YOLOv3 network for use in training subsequent models. Meanwhile, cropping the head-portrait features first improves the speed and accuracy of model training.
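The cropping step above can be sketched as follows. The (x, y, w, h) box format, the margin value, and the image size are assumptions for illustration; a real YOLOv3 detector would supply the head-portrait box:

```python
import numpy as np

def crop_head_region(image, box, margin=0.1):
    """Crop the detected head-portrait region out of a picture.

    `box` is assumed to be (x, y, w, h) in pixels, as a YOLOv3-style
    detector might return; `margin` widens the crop slightly so that it
    keeps some surrounding context. Both conventions are illustrative.
    """
    h_img, w_img = image.shape[:2]
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)          # clamp to image bounds
    x1, y1 = min(w_img, x + w + dx), min(h_img, y + h + dy)
    return image[y0:y1, x0:x1]

img = np.zeros((480, 640, 3), dtype=np.uint8)        # placeholder picture
sub_sample = crop_head_region(img, (100, 50, 200, 200))
# -> a 240 x 240 crop (200 px box plus a 10% margin on each side)
```

The sub-samples produced this way would then be integrated into the sample data for the conversion step below.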
The conversion unit 111 performs color space conversion on the sample data to obtain conversion data.
It should be noted that, since the color space conversion belongs to a relatively mature technology, it is not described herein in detail.
In this embodiment, the obtained conversion data is used for subsequent model training, and the converted data can satisfy the requirements of different models for the format of the input data.
In this embodiment, the conversion data may include: a grayscale map, a YCrCb (luminance/chrominance color space) map, and an HSV (Hue, Saturation, Value) map.
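The conversion itself is indeed a mature technique; as a sketch, the grayscale and YCrCb maps can be derived from an RGB sample with the standard ITU-R BT.601 formulas (the same coefficients libraries such as OpenCV use in `cv2.cvtColor`). The HSV conversion would follow analogously:

```python
import numpy as np

def rgb_to_gray(rgb):
    """BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def rgb_to_ycrcb(rgb):
    """Full-range YCrCb for 8-bit images, BT.601 coefficients."""
    y = rgb_to_gray(rgb)
    cr = (rgb[..., 0] - y) * 0.713 + 128.0   # red-difference chroma
    cb = (rgb[..., 2] - y) * 0.564 + 128.0   # blue-difference chroma
    return np.stack([y, cr, cb], axis=-1)

sample = np.full((2, 2, 3), 255.0)   # pure-white patch as a tiny sample
gray = rgb_to_gray(sample)           # luma of white is 255
ycrcb = rgb_to_ycrcb(sample)         # chroma channels sit at the 128 midpoint
```

In practice one would call a library routine rather than hand-code the formulas; the sketch only makes the "conversion data" concrete.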
The construction unit 112 constructs an objective loss function.
In the field of unstructured data, non-adaptive loss functions, data augmentation, or optimization algorithms are generally adopted to improve model performance. However, such loss functions usually lack an adaptive learning mechanism during training and additionally require clustering, which consumes a great deal of research and training time.
In this embodiment, the constructing unit 112 constructs the objective loss function by using the following formula:
Loss = α(1 - y_pred)^γ * log(y_pred) * y_true + β * y_pred * log(y_true)
where Loss is the target loss function, α is a value in [0, 1], β is a value in [1, +∞), γ is a value greater than 1, y_pred is the predicted probability, and y_true is the actual probability.
Training the model with this target loss function can effectively solve the sample-imbalance problem while remaining tolerant of sample noise.
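A minimal per-sample implementation of the target loss might look as follows. The ε-clipping and the concrete α, β, γ values are added assumptions (the patent only constrains their ranges); clipping keeps log() finite when a probability is exactly 0:

```python
import math

def target_loss(y_true, y_pred, alpha=0.25, beta=1.0, gamma=2.0, eps=1e-7):
    """Per-sample value of the target loss from the text:

        Loss = alpha * (1 - y_pred)**gamma * log(y_pred) * y_true
             + beta  * y_pred * log(y_true)

    with alpha in [0, 1], beta in [1, +inf), gamma > 1 as specified.
    The eps clipping and the default hyperparameter values are
    illustrative assumptions, not values given by the patent.
    """
    p = min(max(y_pred, eps), 1.0 - eps)
    t = min(max(y_true, eps), 1.0)
    return (alpha * (1.0 - p) ** gamma * math.log(p) * y_true
            + beta * p * math.log(t))

# a confident, correct positive prediction incurs almost no loss,
# while an uncertain one on the same sample is penalized more heavily
small = target_loss(y_true=1.0, y_pred=0.99)
large = target_loss(y_true=1.0, y_pred=0.5)
```

The (1 - y_pred)^γ factor is what down-weights easy, already-correct samples, which is how the loss copes with heavy class imbalance.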
The training unit 113 trains a preset first neural network with the target loss function based on the conversion data to obtain a first network structure, and trains a preset second neural network with the target loss function based on the conversion data to obtain a second network structure.
In this embodiment, the first neural network and the second neural network may be pre-constructed as required.
For example: the first neural network may be a convolutional neural network with 3 x 3 kernels, and the second neural network may be a depthwise-separable structure.
In at least one embodiment of the present invention, the training unit 113 trains a preset first neural network with the target loss function based on the conversion data, and obtaining a first network structure includes:
acquiring a gray scale map, a YCrCb map and an HSV map corresponding to the sample data from the conversion data;
performing dimension conversion on the YCrCb map based on the CoALBP (co-occurrence of adjacent local binary patterns) algorithm to obtain first dimension data, performing dimension conversion on the HSV map based on the CoALBP algorithm to obtain second dimension data, and performing local phase quantization (LPQ) on the grayscale map to obtain third dimension data;
splicing the first dimension data, the second dimension data and the third dimension data to obtain fourth dimension data;
zero padding processing is carried out on the fourth dimensional data to obtain texture feature data;
and training the first neural network by using the target loss function based on the texture feature data until the value of the target loss function is converged, and stopping training to obtain the first network structure.
For example: the CoALBP algorithm is applied to the YCrCb map and the HSV map separately, yielding 3072-dimensional data from each (6144 dimensions in total); local phase quantization of the grayscale map generates a further 255 dimensions; each sample is then padded with zeros so that the data forms a matrix, giving 6400-dimensional matrix data as the texture feature data, and training is then performed on the texture feature data to obtain the first network structure.
Through the above embodiment, a first-order network for feature extraction, i.e., the first network structure, can be obtained through training.
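The splice-and-pad step that produces the texture feature matrix can be sketched directly, using constant stand-ins for the CoALBP and LPQ descriptor vectors (the real extraction algorithms are not reproduced here); the dimensions follow the example above, 3072 + 3072 + 255 = 6399 values zero-padded to 6400 = 80 x 80:

```python
import numpy as np

def build_texture_features(coalbp_ycrcb, coalbp_hsv, lpq_gray, side=80):
    """Splice three descriptor vectors and zero-pad them into a square matrix.

    The descriptor vectors are stand-ins for real CoALBP/LPQ output;
    the 80 x 80 target shape matches the 6400-dimension example in the text.
    """
    fused = np.concatenate([coalbp_ycrcb, coalbp_hsv, lpq_gray])
    padded = np.zeros(side * side, dtype=fused.dtype)
    padded[: fused.size] = fused          # trailing entries stay zero
    return padded.reshape(side, side)

feat = build_texture_features(np.ones(3072), np.ones(3072), np.ones(255))
# -> an 80 x 80 matrix holding 6399 descriptor values plus one zero of padding
```

This matrix is what the first neural network is trained on with the target loss function.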
Further, the training unit 113 trains a preset second neural network with the target loss function based on the conversion data, and obtaining a second network structure includes:
acquiring the HSV map from the conversion data;
and training the second neural network by using the target loss function based on the HSV graph until the value of the target loss function is converged, and stopping training to obtain the second network structure, wherein the second neural network is an inverted residual network.
The inverted residual network here is the building block of the MobileNetV2 network structure.
Through the above embodiments, the network for fine feature extraction, i.e., the second network structure, can be trained.
In at least one embodiment of the present invention, after obtaining the first network structure and the second network structure, a first-order feature extraction may be performed based on the first network structure, and a fine feature extraction may be performed based on the second network structure, so as to eliminate the problem of unbalanced samples. Meanwhile, due to the simultaneous action of the two networks, the trained model can be effectively compatible with sample noise.
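The second network's building block can be illustrated with a NumPy sketch of a MobileNetV2-style inverted residual block (pointwise expansion, depthwise 3 x 3 convolution, linear projection, skip connection). The random weights, the omitted batch normalization, and plain ReLU in place of ReLU6 are simplifications for readability, not the patent's exact layer:

```python
import numpy as np

def pointwise(x, w):
    """1x1 convolution as a per-pixel matrix multiply: (H,W,Cin) -> (H,W,Cout)."""
    return x @ w

def depthwise3x3(x, k):
    """Stride-1, same-padded 3x3 depthwise convolution (one kernel per channel)."""
    h, w, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + h, j:j + w] * k[i, j]   # k has shape (3, 3, C)
    return out

def inverted_residual(x, expand=6, seed=0):
    """Expand -> depthwise -> linear projection, with a residual skip.

    Weights are random placeholders; a real block also carries batch
    norm and ReLU6, omitted here to keep the shape logic visible.
    """
    rng = np.random.default_rng(seed)
    c = x.shape[-1]
    w_expand = rng.standard_normal((c, c * expand)) * 0.1
    k_depth = rng.standard_normal((3, 3, c * expand)) * 0.1
    w_project = rng.standard_normal((c * expand, c)) * 0.1
    h = np.maximum(pointwise(x, w_expand), 0)   # widen channels + ReLU
    h = np.maximum(depthwise3x3(h, k_depth), 0) # cheap spatial filtering + ReLU
    return x + pointwise(h, w_project)          # narrow back, add skip

y = inverted_residual(np.ones((8, 8, 3)))       # input shape is preserved
```

The inverted shape (narrow-wide-narrow) is the reverse of a classic residual bottleneck, which is what makes the fine-feature branch cheap enough to train alongside the texture branch.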
The fusion unit 114 performs flatten processing on the first network structure and the second network structure, and splices the first network structure and the second network structure after the flatten processing to obtain a fusion network.
For example: after the above processing, two networks of 60×3 can be combined into one network of 60×3×2.
In this embodiment, the fusion network includes both a first network structure for initial-stage feature extraction and a second network structure for fine-stage feature extraction, so as to effectively eliminate the problem of unbalanced samples and be compatible with sample noise.
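The flatten-and-splice fusion can be sketched in a few lines; the 60 x 60 x 3 branch outputs are stand-in shapes chosen for illustration, not dimensions given by the patent:

```python
import numpy as np

def fuse(first_out, second_out):
    """Flatten each branch's output and splice them into one fusion vector.

    Mirrors the flatten-then-concatenate step in the text; downstream
    layers of the fusion network would consume this joint vector.
    """
    return np.concatenate([first_out.ravel(), second_out.ravel()])

coarse = np.zeros((60, 60, 3))  # stand-in for the first (texture) branch output
fine = np.zeros((60, 60, 3))    # stand-in for the second (HSV) branch output
fused = fuse(coarse, fine)      # 2 * 60 * 60 * 3 = 21600 joint features
```

Because the joint vector carries both the coarse texture features and the fine HSV features, a classifier on top of it sees both views of every sample at once.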
The training unit 113 trains the fusion network with the target loss function to obtain a target network.
In this embodiment, in order to further ensure the accuracy of the model and improve the usability of the model, further training needs to be performed based on the fusion network.
Specifically, the training unit 113 trains the fusion network according to the target loss function to obtain a target network, including:
acquiring a network to be added;
superposing the network to be added onto the fusion network to obtain an intermediate network;
training the intermediate network by using the target loss function based on the sample data until the value of the target loss function is converged, and stopping training to obtain the target network;
wherein the parameters of the fusion network remain unchanged while training the intermediate network.
The network to be added may be configured according to different tasks, which is not limited in the present invention.
When the intermediate network is trained, the parameters of the fusion network are kept unchanged, the characteristics of the fusion network can be kept, and meanwhile, the model training speed is improved.
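The freeze-then-train scheme above can be sketched with a toy logistic head on top of frozen parameters. The head architecture, the data, and the gradient-descent loop are illustrative stand-ins for the network to be added; the point is that the frozen parameters take part in the forward pass but never receive an update:

```python
import numpy as np

def train_head(features, labels, frozen_w, lr=0.1, steps=200, seed=0):
    """Train only the added head on top of a frozen fusion network.

    `frozen_w` plays the role of the fusion network's parameters: it is
    read in the forward pass but never updated, matching the text's
    requirement that the fusion network stays unchanged during training.
    """
    rng = np.random.default_rng(seed)
    w_head = rng.standard_normal(frozen_w.shape[1]) * 0.01
    for _ in range(steps):
        hidden = features @ frozen_w                    # frozen forward pass
        p = 1.0 / (1.0 + np.exp(-(hidden @ w_head)))    # trainable head
        grad = hidden.T @ (p - labels) / len(labels)    # gradient w.r.t. head only
        w_head -= lr * grad
    return w_head

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])        # label follows the first feature
W_frozen = np.eye(2)                      # stand-in fusion-network weights
before = W_frozen.copy()
w = train_head(X, y, W_frozen)
preds = 1.0 / (1.0 + np.exp(-(X @ W_frozen @ w)))
# W_frozen is bitwise identical to `before` after training
```

Freezing the shared parameters both preserves the fusion network's learned features and shrinks the number of trainable weights, which is why the text notes the training speed improves.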
In this embodiment, to further improve data security, the target network is saved to the blockchain.
It should be noted that, in order to further improve the accuracy of the model, a feature sharing mechanism of the network may also be adopted, so that the model adds other prediction tasks on the basis of adding few parameters, which is not limited in the present invention.
The input unit 115 acquires data to be detected, inputs the data to be detected into the target network, and outputs a copying detection result.
In this embodiment, the data to be detected refers to data that needs to undergo copying detection, for example a customer's certificate photo.
The copying detection result may indicate either that the data to be detected carries a copying risk or that it does not.
In at least one embodiment of the invention, after the copying detection result is output, when the copying detection result indicates that the data to be detected carries a copying risk, risk prompt information is generated according to the copying detection result;
and the risk prompt information is sent to a designated terminal device, and the copying detection result is stored to a blockchain.
The risk prompt information carries the copying detection result and is used for prompting that the data to be detected carries a copying risk.
The designated terminal device may be a device of relevant personnel, such as risk-control staff, so that an alarm is raised promptly when a risk is detected and the relevant personnel are prompted to handle it in time.
In this embodiment, to further prevent the data from being tampered with, the copying detection result is saved to the blockchain.
According to the above technical scheme, the apparatus can, in response to a copying detection instruction, obtain an initial picture and perform feature interception on the initial picture to obtain sample data; perform color space conversion on the sample data to obtain conversion data; construct a target loss function that effectively solves the sample-imbalance problem while remaining tolerant of sample noise; train a preset first neural network with the target loss function based on the conversion data to obtain a first network structure, and train a preset second neural network with the target loss function based on the conversion data to obtain a second network structure; perform flatten processing on the first network structure and the second network structure, and splice them after the flatten processing to obtain a fusion network; train the fusion network with the target loss function to obtain a target network; and acquire data to be detected, input the data to be detected into the target network, and output a copying detection result, thereby performing copying detection by artificial-intelligence means.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing a copy detection method based on artificial intelligence.
The electronic device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, such as an artificial intelligence based copy detection program, stored in the memory 12 and executable on the processor 13.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the electronic device 1, and does not constitute a limitation to the electronic device 1, the electronic device 1 may have a bus-type structure or a star-type structure, the electronic device 1 may further include more or less hardware or software than those shown in the figures, or different component arrangements, for example, the electronic device 1 may further include an input and output device, a network access device, and the like.
It should be noted that the electronic device 1 is only an example, and other existing or future electronic products, such as those that can be adapted to the present invention, should also be included in the scope of the present invention, and are included herein by reference.
The memory 12 includes at least one type of readable storage medium, which includes flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 1, for example a removable hard disk of the electronic device 1. The memory 12 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 12 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of artificial intelligence based copy detection programs, etc., but also to temporarily store data that has been output or is to be output.
The processor 13 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 13 is a Control Unit (Control Unit) of the electronic device 1, connects various components of the electronic device 1 by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (for example, executing a copy detection program based on artificial intelligence, etc.) stored in the memory 12 and calling data stored in the memory 12.
The processor 13 executes an operating system of the electronic device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in each of the above-described embodiments of artificial intelligence based copy detection methods, such as the steps shown in fig. 1.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the electronic device 1. For example, the computer program may be divided into an intercepting unit 110, a conversion unit 111, a construction unit 112, a training unit 113, a fusion unit 114, and an input unit 115.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device) or a processor (processor) to execute parts of the artificial intelligence based copy detection method according to the embodiments of the present invention.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one arrow is shown in FIG. 3, but this does not indicate only one bus or one type of bus. The bus is arranged to enable connection communication between the memory 12 and at least one processor 13 or the like.
Although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Fig. 3 only shows the electronic device 1 with components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
Referring to fig. 1, the memory 12 in the electronic device 1 stores a plurality of instructions to implement an artificial intelligence based copy detection method, and the processor 13 can execute the plurality of instructions to implement:
responding to a copying detection instruction, acquiring an initial picture, and performing feature interception on the initial picture to obtain sample data;
performing color space conversion on the sample data to obtain conversion data;
constructing a target loss function;
training a preset first neural network by using the target loss function based on the conversion data to obtain a first network structure, and training a preset second neural network by using the target loss function based on the conversion data to obtain a second network structure;
performing flatten processing on the first network structure and the second network structure, and splicing the first network structure and the second network structure after the flatten processing to obtain a fusion network;
training the fusion network by using the target loss function to obtain a target network;
and acquiring data to be detected, inputting the data to be detected into the target network, and outputting a reproduction detection result.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A reproduction detection method based on artificial intelligence is characterized by comprising the following steps:
responding to a copying detection instruction, acquiring an initial picture, and performing feature interception on the initial picture to obtain sample data;
performing color space conversion on the sample data to obtain conversion data;
constructing a target loss function;
training a preset first neural network by using the target loss function based on the conversion data to obtain a first network structure, and training a preset second neural network by using the target loss function based on the conversion data to obtain a second network structure;
performing flatten processing on the first network structure and the second network structure, and splicing the first network structure and the second network structure after the flatten processing to obtain a fusion network;
training the fusion network by using the target loss function to obtain a target network;
and acquiring data to be detected, inputting the data to be detected into the target network, and outputting a reproduction detection result.
2. The artificial intelligence based reproduction detection method of claim 1, wherein the feature capturing the initial picture to obtain sample data comprises:
inputting each picture among the initial pictures into a YOLOv3 network for recognition to obtain the head portrait area of each picture;
intercepting each corresponding picture according to the head portrait area of each picture to obtain each sub-sample;
and integrating the obtained sub-samples to obtain the sample data.
3. The artificial intelligence based reproduction detection method of claim 1, wherein the objective loss function is constructed using the following formula:
Loss = α(1 - y_pred)^γ * log(y_pred) * y_true + β * y_pred * log(y_true)
wherein Loss is the objective loss function, α is a value in [0, 1], β is a value in [1, +∞), γ is a value greater than 1, y_pred is the predicted probability, and y_true is the actual probability.
4. The artificial intelligence based reproduction detection method of claim 1, wherein the training a preset first neural network with the target loss function based on the transformation data to obtain a first network structure comprises:
acquiring a gray scale map, a YCrCb map and an HSV map corresponding to the sample data from the conversion data;
performing dimension conversion on the YCrCb map based on the CoALBP algorithm to obtain first dimension data, performing dimension conversion on the HSV map based on the CoALBP algorithm to obtain second dimension data, and performing local phase quantization on the grayscale map to obtain third dimension data;
splicing the first dimension data, the second dimension data and the third dimension data to obtain fourth dimension data;
zero padding processing is carried out on the fourth dimensional data to obtain texture feature data;
and training the first neural network by using the target loss function based on the texture feature data until the value of the target loss function is converged, and stopping training to obtain the first network structure.
5. The artificial intelligence based reproduction detection method of claim 1, wherein the training a preset second neural network with the objective loss function based on the transformation data to obtain a second network structure comprises:
acquiring the HSV map from the conversion data;
and training the second neural network with the target loss function based on the HSV map until the value of the target loss function converges, then stopping training to obtain the second network structure, wherein the second neural network is an inverted residual network.
6. The artificial intelligence based reproduction detection method of claim 1, wherein the training the fusion network with the target loss function to obtain a target network comprises:
acquiring a network to be added;
stacking the network to be added onto the fusion network to obtain an intermediate network;
training the intermediate network with the target loss function based on the sample data until the value of the target loss function converges, then stopping training to obtain the target network;
wherein the parameters of the fusion network remain unchanged while the intermediate network is trained.
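The key constraint in claim 6 is that the already-trained network's parameters stay fixed while only the appended network is updated (in a framework such as PyTorch this would be done by setting requires_grad to False on the frozen part). A framework-free sketch of that freezing idea on two scalar weights, with illustrative values only:

```python
def train_intermediate(frozen_w: float, new_w: float, data, lr: float = 0.1):
    """One toy training loop for the model y = new_w * (frozen_w * x).
    Only `new_w` (the network to be added) receives gradient updates;
    `frozen_w` (the already-converged part) never changes.
    """
    for x, y in data:
        hidden = frozen_w * x               # frozen sub-network forward pass
        pred = new_w * hidden               # appended network forward pass
        grad_new = 2 * (pred - y) * hidden  # d/d(new_w) of squared error
        new_w -= lr * grad_new              # update only the appended weight
    return frozen_w, new_w

# Fit new_w so that pred == 2*x while the frozen weight stays at 1.0.
data = [(1.0, 2.0)] * 50
frozen_w, new_w = train_intermediate(1.0, 0.0, data)
```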
7. The artificial intelligence based reproduction detection method of claim 1, wherein after outputting the reproduction detection result, the artificial intelligence based reproduction detection method further comprises:
when the reproduction detection result shows that the data to be detected has a reproduction risk, generating risk prompt information according to the reproduction detection result;
and sending the risk prompt information to a designated terminal device, and storing the reproduction detection result in a blockchain.
8. An artificial intelligence based reproduction detection device, characterized in that the artificial intelligence based reproduction detection device comprises:
the intercepting unit is used for responding to the copying detection instruction, acquiring an initial picture, and carrying out feature interception on the initial picture to obtain sample data;
the conversion unit is used for performing color space conversion on the sample data to obtain conversion data;
a construction unit for constructing a target loss function;
a training unit, configured to train a preset first neural network with the target loss function based on the conversion data to obtain a first network structure, and train a preset second neural network with the target loss function based on the conversion data to obtain a second network structure;
the fusion unit is used for performing flip processing on the first network structure and the second network structure, and splicing the flipped first network structure and second network structure to obtain a fusion network;
the training unit is further configured to train the fusion network with the target loss function to obtain a target network;
and the input unit is used for acquiring data to be detected, inputting the data to be detected to the target network and outputting a reproduction detection result.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the artificial intelligence based copy detection method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is executed by a processor in an electronic device to implement the artificial intelligence based copy detection method of any one of claims 1 to 7.
CN202010827984.6A 2020-08-17 2020-08-17 Copying detection method, device, equipment and medium based on artificial intelligence Active CN111985504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010827984.6A CN111985504B (en) 2020-08-17 2020-08-17 Copying detection method, device, equipment and medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN111985504A true CN111985504A (en) 2020-11-24
CN111985504B CN111985504B (en) 2021-05-11

Family

ID=73434587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010827984.6A Active CN111985504B (en) 2020-08-17 2020-08-17 Copying detection method, device, equipment and medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN111985504B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190108551A1 (en) * 2017-10-09 2019-04-11 Hampen Technology Corporation Limited Method and apparatus for customer identification and tracking system
WO2020061273A1 (en) * 2018-09-21 2020-03-26 Ancestry.Com Operations Inc. Ventral-dorsal neural networks: object detection via selective attention
CN109543632A * 2018-11-28 2019-03-29 Taiyuan University of Technology Deep network pedestrian detection method guided by shallow-layer feature fusion
CN110046599A * 2019-04-23 2019-07-23 Northeastern University Intelligent control method based on deep fusion neural network pedestrian re-identification technology
CN110348322A * 2019-06-19 2019-10-18 China West Normal University Human face in-vivo detection method and equipment based on multi-feature fusion
CN110443203A * 2019-08-07 2019-11-12 Sino-Singapore International Joint Research Institute Adversarial sample generating method for face fraud detection system based on generative adversarial network
CN110895811A * 2019-11-05 2020-03-20 Taikang Insurance Group Co., Ltd. Image tampering detection method and device
CN111259915A * 2020-01-20 2020-06-09 Ping An Life Insurance Company of China, Ltd. Method, device, equipment and medium for recognizing copied image
CN111275685A * 2020-01-20 2020-06-12 Ping An Life Insurance Company of China, Ltd. Method, device, equipment and medium for identifying copied image of identity document
CN111476268A * 2020-03-04 2020-07-31 Ping An Life Insurance Company of China, Ltd. Method, device, equipment and medium for training reproduction recognition model and image recognition
CN111476269A * 2020-03-04 2020-07-31 Ping An Life Insurance Company of China, Ltd. Method, device, equipment and medium for constructing balanced sample set and identifying copied image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JINGYING SUN ET AL.: "Face Anti-spoofing Algorithm Based on Depth Feature Fusion", Springer *
DENG Xiong et al.: "Face liveness detection algorithm based on deep learning and feature fusion", Journal of Computer Applications *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507923A * 2020-12-16 2021-03-16 Ping An Bank Co., Ltd. Certificate copying detection method and device, electronic equipment and medium
CN112507923B * 2020-12-16 2023-10-31 Ping An Bank Co., Ltd. Certificate copying detection method and device, electronic equipment and medium
CN112561891A * 2020-12-18 2021-03-26 Shenzhen Saiante Technology Service Co., Ltd. Image quality detection method, device, equipment and storage medium
CN112561891B * 2020-12-18 2024-04-16 Shenzhen Saiante Technology Service Co., Ltd. Image quality detection method, device, equipment and storage medium
CN113011355A * 2021-03-25 2021-06-22 Northeast Forestry University Pine wood nematode disease image recognition detection method and device
CN113011355B * 2021-03-25 2022-10-11 Northeast Forestry University Pine wood nematode disease image recognition detection method and device
CN113034406A * 2021-04-27 2021-06-25 Ping An Life Insurance Company of China, Ltd. Distorted document recovery method, device, equipment and medium
CN113034406B * 2021-04-27 2024-05-14 Ping An Life Insurance Company of China, Ltd. Distorted document recovery method, device, equipment and medium

Also Published As

Publication number Publication date
CN111985504B (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN111985504B (en) Copying detection method, device, equipment and medium based on artificial intelligence
CN111723727A (en) Cloud monitoring method and device based on edge computing, electronic equipment and storage medium
CN112446544A (en) Traffic flow prediction model training method and device, electronic equipment and storage medium
CN115169587B (en) Federal learning system and method and equipment for realizing multi-party combined processing task
CN112801062B (en) Live video identification method, device, equipment and medium
CN112507934A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111950621A (en) Target data detection method, device, equipment and medium based on artificial intelligence
CN112507923A (en) Certificate copying detection method and device, electronic equipment and medium
CN112101191A (en) Expression recognition method, device, equipment and medium based on frame attention network
CN112528265A (en) Identity recognition method, device, equipment and medium based on online conference
CN111950707A (en) Behavior prediction method, apparatus, device and medium based on behavior co-occurrence network
CN115409041B (en) Unstructured data extraction method, device, equipment and storage medium
WO2022227191A1 (en) Inactive living body detection method and apparatus, electronic device, and storage medium
CN112101192B (en) Artificial intelligence-based camouflage detection method, device, equipment and medium
CN112560721B (en) Non-perception model switching method and device, electronic equipment and storage medium
CN113989548A (en) Certificate classification model training method and device, electronic equipment and storage medium
CN115016754A (en) Method and device for synchronously displaying pages among devices, electronic device and medium
CN112183347A (en) Depth space gradient-based in-vivo detection method, device, equipment and medium
CN113343882A (en) Crowd counting method and device, electronic equipment and storage medium
CN112233194A (en) Medical picture optimization method, device and equipment and computer-readable storage medium
CN113962862A (en) Super-resolution-based low-quality image recognition method, device, equipment and medium
CN116701233B (en) Transaction system testing method, equipment and medium based on high concurrency report simulation
CN112633325B (en) Personnel identification method and device based on tactical model
CN112561891B (en) Image quality detection method, device, equipment and storage medium
CN112633170A (en) Communication optimization method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant