CN113344090A - Image processing method for intermediate-layer targeted adversarial attacks
- Publication number
- CN113344090A (application CN202110676108.2A)
- Authority
- CN
- China
- Prior art keywords
- target
- classifier
- iteration
- feature map
- map data
- Prior art date
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Abstract
The invention discloses an image processing method for intermediate-layer targeted adversarial attacks, comprising the following steps: S1, obtaining a target image from the target gallery; S2, inputting the target image into a classifier to obtain the feature map output by an intermediate layer of the classifier; S3, inputting the adversarial example into the classifier to obtain the feature map output by the intermediate layer; S4, constructing a loss function; S5, updating the adversarial example; S6, normalizing the updated adversarial example; S7, repeating S3 to S6 until the number of iterations reaches a set value, then inputting the final normalized adversarial example into the classifier, which outputs an incorrect classification result, completing the adversarial-attack image processing. The method addresses a weakness of conventional intermediate-layer targeted attacks: measuring feature-map differences with the Euclidean distance imposes an unreasonable spatial-consistency constraint between the original and target feature maps, which weakens the attack.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an image processing method for intermediate-layer targeted adversarial attacks.
Background
An adversarial example can fool a deep neural network simply by adding a perturbation to the input image that is barely perceptible to the human eye. By the attacker's visibility into the victim model, adversarial attacks are classified as white-box, gray-box, or black-box attacks, with progressively less visibility. By attack goal, they divide into untargeted and targeted attacks: an untargeted attack only requires the attacker to mislead the model into giving any wrong output, while a targeted attack aims to induce the model to produce a specific wrong output. For the most challenging setting, the targeted black-box attack, conventional attacks based on the model's logits output are not effective enough. To address this, researchers have proposed intermediate-layer attacks, which further improve targeted black-box performance. As one of the main approaches, the intermediate-layer targeted attack, given a target image, generates the perturbation by reducing the difference between the intermediate-layer feature maps of the original and target images. To measure this difference, current techniques generally choose the pixel-level Euclidean distance, but that choice is questionable because the Euclidean distance unreasonably imposes a spatial-consistency constraint between the original and target feature maps. Intuitively, given two pictures of a cat, one with the cat on the left and one with the cat on the right, a neural network classifies both as "cat", yet the Euclidean distance between the two pictures is large.
Disclosure of Invention
Aiming at the above defects in the prior art, the image processing method for intermediate-layer targeted adversarial attacks provided by the invention solves the problem of weak attack effectiveness caused by the unreasonable spatial-consistency constraint that the Euclidean distance imposes on the original and target feature maps in conventional intermediate-layer targeted attack techniques.
To achieve this purpose, the invention adopts the following technical scheme: an image processing method for intermediate-layer targeted adversarial attacks, comprising the steps of:
S1, acquire a target class, input the original image into the classifier, and obtain a target image from the target gallery based on the target class;
S2, input the target image into the classifier to obtain the feature map it outputs at an intermediate layer of the classifier;
S3, input the previous iteration's normalized adversarial example into the classifier to obtain the feature map it outputs at the intermediate layer;
S4, construct a loss function and compute the loss between the intermediate-layer feature map of the target image and that of the previous iteration's normalized adversarial example;
S5, compute the gradient of the loss with respect to the previous iteration's normalized adversarial example, compute the noise of the current iteration from the gradient, and add the noise to the previous normalized adversarial example to obtain an updated adversarial example;
S6, normalize the updated adversarial example to obtain the current iteration's normalized adversarial example;
S7, take the current iteration's normalized adversarial example as the previous iteration's, repeat S3 to S6 until the number of iterations reaches a set value, then input the final normalized adversarial example into the classifier; the classifier outputs an incorrect classification result, completing the adversarial-attack image processing.
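The S1 to S7 loop can be sketched end to end. The snippet below is a framework-free illustration, not the patent's implementation: `feature_fn` stands in for the classifier's forward pass up to the intermediate layer, the feature-gap loss is plain Euclidean (the conventional baseline the patent improves on), and the gradient is taken by finite differences so the sketch runs on any array.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-4):
    """Central-difference gradient so the sketch needs no autograd framework."""
    g = np.zeros_like(x)
    flat, gf = x.reshape(-1), g.reshape(-1)
    for i in range(flat.size):
        old = flat[i]
        flat[i] = old + h; fp = f(x)
        flat[i] = old - h; fm = f(x)
        flat[i] = old
        gf[i] = (fp - fm) / (2 * h)
    return g

def targeted_intermediate_attack(feature_fn, x, x_target, eps=16/255, T=20, mu=1.0):
    t_feat = feature_fn(x_target)              # S2: target feature map
    x_adv = x.copy()                           # current normalized adversarial example
    beta = np.zeros_like(x)                    # accumulated gradient (S52)
    alpha = eps / T                            # step size
    for _ in range(T):                         # S7: iterate until the set count
        # S3/S4: feature-gap loss (Euclidean placeholder for the patent's losses)
        loss = lambda z: float(np.sum((feature_fn(z) - t_feat) ** 2))
        g = numerical_gradient(loss, x_adv)    # S51
        beta = mu * beta + g / (np.linalg.norm(g) + 1e-12)   # S52: momentum
        # S53: signed descent step, kept inside the eps-ball around x
        x_adv = np.clip(x_adv - alpha * np.sign(beta), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)       # S6: keep a valid image
    return x_adv
```

With a real network, `feature_fn` would be the forward pass up to layer l and the gradient would come from backpropagation; the loop structure is unchanged.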
Further, step S1 includes the following substeps:
S11, obtain the target class, input the original image into the classifier, and obtain the feature map $O_l \in \mathbb{R}^{N_l \times M_l}$ output by the original image at intermediate layer $l$ of the classifier, where $N_l$ is the number of channels of intermediate layer $l$, $M_l$ is the length times the width of the feature map, and $\mathbb{R}$ denotes the real number space;
S12, obtain all feature maps of the target class's sub-gallery in the target gallery, and select from them the feature map farthest from $O_l$ as the target feature map;
S13, take the image in the target gallery corresponding to the target feature map as the target image.
Further, there are 2 methods for acquiring the target class in step S11. Method 1: randomly select one of all classes as the target class. Method 2: from the classifier's predictions for the original example, select the class whose prediction confidence ranks r-th as the target class.
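The two target-class selection methods can be sketched as follows, assuming the classifier returns a confidence vector over all classes; the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def pick_target_class(confidences, true_label, mode="random", r=2, rng=None):
    """Method 1 ("random"): any class other than the true one, chosen at random.
    Method 2 ("rth"): the class whose predicted confidence ranks r-th."""
    if mode == "random":
        rng = rng or np.random.default_rng(0)
        candidates = [c for c in range(confidences.shape[0]) if c != true_label]
        return int(rng.choice(candidates))
    order = np.argsort(confidences)[::-1]   # classes sorted by descending confidence
    return int(order[r - 1])
```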
Further, the loss function in step S4 is:

$$\mathcal{L} = \frac{1}{M_l^{2}}\sum_{i=1}^{M_l}\sum_{j=1}^{M_l}\bigl[k(s_{\cdot i}, s_{\cdot j}) - 2\,k(s_{\cdot i}, t_{\cdot j}) + k(t_{\cdot i}, t_{\cdot j})\bigr]$$

where $\mathcal{L}$ is the loss function, $S_l^{\nu}$ is the feature map output at intermediate layer $l$ of the classifier by the previous iteration's normalized adversarial example during the $\nu$-th iteration, $T_l$ is the feature map output by the target image at intermediate layer $l$ of the classifier, $s_{\cdot i}$ and $s_{\cdot j}$ are the data of $S_l^{\nu}$ at spatial positions $i$ and $j$, $t_{\cdot i}$ and $t_{\cdot j}$ are the data of $T_l$ at spatial positions $i$ and $j$, $M_l$ is the length times the width of the feature map, and $k(\cdot,\cdot)$ is an auxiliary mapping operation that maps the feature map data from the original space into a complete inner product space.
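Assuming the pair-wise alignment loss takes the MMD-style form implied by the terms listed above, a direct (unoptimized) sketch over two feature maps of shape (N_l, M_l) could be:

```python
import numpy as np

def pairwise_alignment_loss(S, T, kernel):
    """MMD-style discrepancy between the spatial columns of S and T; `kernel`
    plays the role of the auxiliary mapping k(.,.) into an inner-product space."""
    M = S.shape[1]
    total = 0.0
    for i in range(M):
        for j in range(M):
            total += (kernel(S[:, i], S[:, j])
                      - 2.0 * kernel(S[:, i], T[:, j])
                      + kernel(T[:, i], T[:, j]))
    return total / (M * M)
```

With a linear kernel this reduces to the squared distance between the spatial means of the two maps, which is already insensitive to where features sit spatially.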
The benefit of this further scheme: through the auxiliary mapping operation, the loss function maps the feature map data from the original space into a complete inner product space, replacing the Euclidean-distance alignment in the original space used by conventional techniques; this relaxes the spatial constraint introduced by the Euclidean metric and thus better achieves semantic alignment of the feature maps.
Further, the loss function in step S4 is:

$$\mathcal{L} = \frac{1}{N_l}\sum_{n=1}^{N_l}\left[\left\|\frac{1}{M_l}\sum_{m=1}^{M_l}(S_l^{\nu})_{nm}-\frac{1}{M_l}\sum_{m=1}^{M_l}(T_l)_{nm}\right\|^{2}+\left\|\operatorname{Var}\bigl((S_l^{\nu})_{n\cdot}\bigr)-\operatorname{Var}\bigl((T_l)_{n\cdot}\bigr)\right\|^{2}\right]$$

where $\mathcal{L}$ is the loss function, $S_l^{\nu}$ is the feature map output at intermediate layer $l$ of the classifier by the previous iteration's normalized adversarial example during the $\nu$-th iteration, $T_l$ is the feature map output by the target image at intermediate layer $l$ of the classifier, $\|\cdot\|$ is the two-norm operation, $N_l$ is the number of channels of the feature map, $M_l$ is the length times the width of the feature map, $(S_l^{\nu})_{nm}$ is the data of $S_l^{\nu}$ at position $(n, m)$, $(T_l)_{nm}$ is the data of $T_l$ at position $(n, m)$, $\operatorname{Var}((S_l^{\nu})_{n\cdot})$ is the variance of all values on channel $n$ of $S_l^{\nu}$, and $\operatorname{Var}((T_l)_{n\cdot})$ is the variance of all values on channel $n$ of $T_l$.
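The global-statistics idea can be sketched by comparing the per-channel mean and variance of the two (N_l, M_l) feature maps; both statistics are unchanged when spatial positions are permuted, which is the translation invariance claimed here. Shapes and names are illustrative.

```python
import numpy as np

def global_alignment_loss(S, T):
    """Match per-channel mean and variance instead of pixel-wise distances."""
    mean_gap = S.mean(axis=1) - T.mean(axis=1)   # per-channel mean difference
    var_gap = S.var(axis=1) - T.var(axis=1)      # per-channel variance difference
    return float(np.mean(mean_gap ** 2 + var_gap ** 2))
```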
The benefit of this further scheme: the loss function replaces the pixel-level Euclidean metric with global statistics, which are translation-invariant, so reducing the statistic gap reduces the spatial constraint to a considerable extent.
Further, step S5 includes the following substeps:
S51, compute the gradient of the loss between the feature maps with respect to the previous iteration's normalized adversarial example;
S52, compute the accumulated gradient of the current iteration from the gradient obtained in step S51;
S53, compute the noise of the current iteration from the accumulated gradient, and add the noise to the previous iteration's normalized adversarial example to obtain the updated adversarial example.
Further, the formula for calculating the accumulated gradient at the current iteration in step S52 is:

$$\beta_{\nu} = \mu\,\beta_{\nu-1} + \frac{g_{\nu-1}}{\|g_{\nu-1}\|}$$

where $\beta_{\nu}$ is the accumulated gradient of the $\nu$-th iteration, $\beta_{0}$ is the initial accumulated gradient, $\mu$ is the decay factor, $\|\cdot\|$ is the two-norm operation, and $g_{\nu-1} = \nabla_{\hat{x}^{\nu-1}}\mathcal{L}$ is the gradient of the loss function $\mathcal{L}$ with respect to the previous iteration's normalized adversarial example $\hat{x}^{\nu-1}$;
the formula of the updated adversarial example obtained in step S53 is:

$$\tilde{x}^{\nu} = \min\bigl(\max\bigl(\hat{x}^{\nu-1} - \alpha\,\mathrm{sign}(\beta_{\nu}),\; x - \epsilon\bigr),\; x + \epsilon\bigr), \qquad \alpha = \epsilon / T$$

where $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, $\epsilon$ is the infinity-norm perturbation threshold, $\alpha$ is the step size, $T$ is the maximum number of iterations, $\mathrm{sign}(\cdot)$ is the sign function, and $x$ is the original image.
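Sub-steps S52 and S53 can be sketched as a single update step. The descent sign convention (stepping against the gradient of the feature-gap loss) and the variable names are assumptions:

```python
import numpy as np

def momentum_step(x, x_adv, grad, beta, mu=1.0, eps=16/255, T=20):
    """Accumulate the normalized gradient with decay mu (S52), then take a
    signed step of size alpha = eps/T, projected back into the eps-ball
    around the original image x (S53)."""
    alpha = eps / T
    beta = mu * beta + grad / (np.linalg.norm(grad) + 1e-12)
    x_new = x_adv - alpha * np.sign(beta)
    x_new = np.minimum(np.maximum(x_new, x - eps), x + eps)
    return x_new, beta
```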
Further, the formula of the normalization process in step S6 is:

$$\hat{x}^{\nu} = \min\bigl(\max(\tilde{x}^{\nu},\, 0),\, 1\bigr)$$

where $\hat{x}^{\nu}$ is the normalized adversarial example of the $\nu$-th iteration and $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration.
The benefits of this further scheme are threefold. First, accumulating the gradient stabilizes the update direction of the adversarial example and prevents it from falling into a local optimum. Second, limiting the perturbation threshold of the adversarial example effectively reduces its perceptibility to the human eye. Third, normalizing the adversarial example stabilizes the gradient computation. Overall, the scheme generates adversarial examples with higher transferability and completes more effective attacks.
In conclusion, the beneficial effects of the invention are as follows:
(1) The prior art measures the difference between the original and target feature maps via the Euclidean distance, then reduces that difference to generate an adversarial example. Because the Euclidean distance inevitably introduces a spatial constraint, the invention instead introduces translation-invariant higher-order statistics to measure the similarity between feature maps, further improving the transferability of adversarial examples and achieving a more effective targeted black-box attack.
(2) The invention accounts for the extra spatial constraint of the conventional metric when measuring feature-map differences, and introduces translation-invariant higher-order statistics to reduce that constraint. The adversarial images have stronger transferability, achieving better targeted black-box attack performance. The method is fairly general: it can be combined with most intermediate-layer targeted attack methods and introduces no additional computational complexity.
Drawings
FIG. 1 is a flow chart of the image processing method for intermediate-layer targeted adversarial attacks.
Detailed Description
The following description of embodiments is provided to help those skilled in the art understand the invention, but the invention is not limited to the scope of these embodiments. Various changes will be apparent to those skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
As shown in FIG. 1, an image processing method for intermediate-layer targeted adversarial attacks includes the following steps:
S1, acquire a target class, input the original image into the classifier, and obtain a target image from the target gallery based on the target class;
The target gallery is built from a common public dataset to assist in generating the target image. In this embodiment, the target gallery holds 20 pictures for each class, and the target image is drawn from it.
Step S1 includes the following substeps:
S11, obtain the target class, input the original image into the classifier, and obtain the feature map $O_l \in \mathbb{R}^{N_l \times M_l}$ output by the original image at intermediate layer $l$ of the classifier, where $N_l$ is the number of channels of intermediate layer $l$, $M_l$ is the length times the width of the feature map, and $\mathbb{R}$ denotes the real number space;
There are 2 methods for acquiring the target class in step S11. Method 1: randomly select one of all classes as the target class. Method 2: from the classifier's predictions for the original example, select the class whose prediction confidence ranks r-th as the target class.
S12, acquiring all feature maps of the sub-gallery of the target category in the target gallery, and selecting distance feature map data O from all feature mapslThe farthest feature map is used as a target feature map;
specifically, in step S12, the target gallery includes images of all categories, and the image under each category is referred to as a sub-gallery. After the target category is obtained in S11, calling the target gallery to obtain a target category sub-gallery and calculating all feature map data of the target category sub-gallery, wherein the feature map data are distance feature map data OlThe farthest feature map is the target feature map.
S13, take the image in the target gallery corresponding to the target feature map as the target image.
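The sub-gallery search of S12 and S13 can be sketched as below, using the Euclidean distance between feature maps as an illustrative notion of "farthest" (the description does not fix the metric here):

```python
import numpy as np

def pick_target_image(orig_feat, gallery_feats, gallery_images):
    """Return the sub-gallery image whose feature map lies farthest from the
    original image's feature map O_l."""
    dists = [np.linalg.norm(f - orig_feat) for f in gallery_feats]
    return gallery_images[int(np.argmax(dists))]
```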
S2, input the target image into the classifier to obtain the feature map it outputs at an intermediate layer of the classifier;
S3, input the previous iteration's normalized adversarial example into the classifier to obtain the feature map it outputs at the intermediate layer;
S4, construct a loss function and compute the loss between the intermediate-layer feature map of the target image and that of the previous iteration's normalized adversarial example;
In this embodiment, with the help of different auxiliary space-mapping operations, 3 loss functions based on pair-wise alignment (the 1st to 3rd) and 1 loss function based on global alignment (the 4th) are proposed.
Types 1 to 3 (pair-wise alignment) share the form:

$$\mathcal{L} = \frac{1}{M_l^{2}}\sum_{i=1}^{M_l}\sum_{j=1}^{M_l}\bigl[k(s_{\cdot i}, s_{\cdot j}) - 2\,k(s_{\cdot i}, t_{\cdot j}) + k(t_{\cdot i}, t_{\cdot j})\bigr]$$

with the auxiliary mapping $k(\cdot,\cdot)$ instantiated as a linear kernel $k(a,b) = a^{\mathrm{T}} b$, a polynomial kernel $k(a,b) = (a^{\mathrm{T}} b / d + c)^{d}$, or a Gaussian kernel $k(a,b) = \exp\bigl(-\|a-b\|^{2}/(2\sigma^{2})\bigr)$, where $\mathcal{L}$ is the loss function, $S_l^{\nu}$ is the feature map output at intermediate layer $l$ of the classifier by the previous iteration's normalized adversarial example during the $\nu$-th iteration, $T_l$ is the feature map output by the target image at intermediate layer $l$ of the classifier, $s_{\cdot i}$ and $s_{\cdot j}$ are the data of $S_l^{\nu}$ at spatial positions $i$ and $j$, $t_{\cdot i}$ and $t_{\cdot j}$ are the data of $T_l$ at spatial positions $i$ and $j$, $\sigma$, $c$, $d$ are hyper-parameters, $M_l$ is the length times the width of the feature map, $\exp(\cdot)$ is the exponential function, $(\cdot)^{\mathrm{T}}$ is the transposition operation, and $\|\cdot\|^{2}$ is the squared two-norm.
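The three auxiliary mappings can be written as explicit kernel functions; the exact placement of the hyper-parameters σ, c, d is an assumption consistent with the operations listed above (transpose, exponential, squared two-norm):

```python
import numpy as np

def linear_kernel(a, b):
    return float(a @ b)

def polynomial_kernel(a, b, c=1.0, d=2.0):
    # c and d are the hyper-parameters named in the description; defaults are illustrative
    return float((a @ b / d + c) ** d)

def gaussian_kernel(a, b, sigma=1.0):
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2)))
```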
Type 4 (global alignment):

$$\mathcal{L} = \frac{1}{N_l}\sum_{n=1}^{N_l}\left[\left\|\frac{1}{M_l}\sum_{m=1}^{M_l}(S_l^{\nu})_{nm}-\frac{1}{M_l}\sum_{m=1}^{M_l}(T_l)_{nm}\right\|^{2}+\left\|\operatorname{Var}\bigl((S_l^{\nu})_{n\cdot}\bigr)-\operatorname{Var}\bigl((T_l)_{n\cdot}\bigr)\right\|^{2}\right]$$

where $\mathcal{L}$ is the loss function, $S_l^{\nu}$ is the feature map output at intermediate layer $l$ of the classifier by the previous iteration's normalized adversarial example during the $\nu$-th iteration, $T_l$ is the feature map output by the target image at intermediate layer $l$ of the classifier, $\|\cdot\|$ is the two-norm operation, $N_l$ is the number of channels of the feature map, $M_l$ is the length times the width of the feature map, $(S_l^{\nu})_{nm}$ is the data of $S_l^{\nu}$ at position $(n, m)$, $(T_l)_{nm}$ is the data of $T_l$ at position $(n, m)$, $\operatorname{Var}((S_l^{\nu})_{n\cdot})$ is the variance of all values on channel $n$ of $S_l^{\nu}$, and $\operatorname{Var}((T_l)_{n\cdot})$ is the variance of all values on channel $n$ of $T_l$.
S5, compute the gradient of the loss with respect to the previous iteration's normalized adversarial example, compute the noise of the current iteration from the gradient, and add the noise to the previous normalized adversarial example to obtain an updated adversarial example;
Step S5 includes the following substeps:
S51, compute the gradient of the loss between the feature maps with respect to the previous iteration's normalized adversarial example;
S52, compute the accumulated gradient of the current iteration from the gradient obtained in step S51;
S53, compute the noise of the current iteration from the accumulated gradient, and add the noise to the previous iteration's normalized adversarial example to obtain the updated adversarial example.
The formula for calculating the accumulated gradient at the current iteration in step S52 is:

$$\beta_{\nu} = \mu\,\beta_{\nu-1} + \frac{g_{\nu-1}}{\|g_{\nu-1}\|}$$

where $\beta_{\nu}$ is the accumulated gradient of the $\nu$-th iteration, $\beta_{0}$ is the initial accumulated gradient, $\mu$ is the decay factor, $\|\cdot\|$ is the two-norm operation, and $g_{\nu-1} = \nabla_{\hat{x}^{\nu-1}}\mathcal{L}$ is the gradient of the loss function $\mathcal{L}$ with respect to the previous iteration's normalized adversarial example $\hat{x}^{\nu-1}$;
the formula of the updated adversarial example obtained in step S53 is:

$$\tilde{x}^{\nu} = \min\bigl(\max\bigl(\hat{x}^{\nu-1} - \alpha\,\mathrm{sign}(\beta_{\nu}),\; x - \epsilon\bigr),\; x + \epsilon\bigr), \qquad \alpha = \epsilon / T$$

where $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, $\epsilon$ is the infinity-norm perturbation threshold, $\alpha$ is the step size, $T$ is the maximum number of iterations, $\mathrm{sign}(\cdot)$ is the sign function, $x$ is the original image, and the element-wise min-max operation $\min(\max(\cdot,\,x-\epsilon),\,x+\epsilon)$ ensures the adversarial example $\tilde{x}^{\nu}$ always stays within $\epsilon$ of $x$.
S6, normalize the updated adversarial example to obtain the current iteration's normalized adversarial example;
The formula of the normalization process in step S6 is:

$$\hat{x}^{\nu} = \min\bigl(\max(\tilde{x}^{\nu},\, 0),\, 1\bigr)$$

where $\hat{x}^{\nu}$ is the normalized adversarial example of the $\nu$-th iteration and $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration.
S7, take the current iteration's normalized adversarial example as the previous iteration's, repeat S3 to S6 until the number of iterations reaches a set value, then input the final normalized adversarial example into the classifier; the classifier outputs an incorrect classification result, completing the adversarial-attack image processing.
Claims (8)
1. An image processing method for intermediate-layer targeted adversarial attacks, comprising the following steps:
S1, acquiring a target class, inputting an original image into a classifier, and acquiring a target image from a target gallery based on the target class;
S2, inputting the target image into the classifier to obtain the feature map output by the target image at an intermediate layer of the classifier;
S3, inputting the previous iteration's normalized adversarial example into the classifier to obtain the feature map it outputs at the intermediate layer;
S4, constructing a loss function and computing the loss between the intermediate-layer feature map of the target image and that of the previous iteration's normalized adversarial example;
S5, computing the gradient of the loss with respect to the previous iteration's normalized adversarial example, computing the noise of the current iteration from the gradient, and adding the noise to the previous normalized adversarial example to obtain an updated adversarial example;
S6, normalizing the updated adversarial example to obtain the current iteration's normalized adversarial example;
S7, taking the current iteration's normalized adversarial example as the previous iteration's, repeating S3 to S6 until the number of iterations reaches a set value, then inputting the final normalized adversarial example into the classifier; the classifier outputs an incorrect classification result, completing the adversarial-attack image processing.
2. The image processing method for intermediate-layer targeted adversarial attacks according to claim 1, characterized in that step S1 comprises the following substeps:
S11, obtaining the target class, inputting the original image into the classifier, and obtaining the feature map $O_l \in \mathbb{R}^{N_l \times M_l}$ output by the original image at intermediate layer $l$ of the classifier, where $N_l$ is the number of channels of intermediate layer $l$, $M_l$ is the length times the width of the feature map, and $\mathbb{R}$ denotes the real number space;
S12, obtaining all feature maps of the target class's sub-gallery in the target gallery, and selecting from them the feature map farthest from $O_l$ as the target feature map;
S13, taking the image in the target gallery corresponding to the target feature map as the target image.
3. The image processing method for intermediate-layer targeted adversarial attacks according to claim 2, characterized in that there are 2 methods for acquiring the target class in step S11. Method 1: randomly select one of all classes as the target class. Method 2: from the classifier's predictions for the original example, select the class whose prediction confidence ranks r-th as the target class.
4. The image processing method for intermediate-layer targeted adversarial attacks according to claim 1, characterized in that the loss function in step S4 is:

$$\mathcal{L} = \frac{1}{M_l^{2}}\sum_{i=1}^{M_l}\sum_{j=1}^{M_l}\bigl[k(s_{\cdot i}, s_{\cdot j}) - 2\,k(s_{\cdot i}, t_{\cdot j}) + k(t_{\cdot i}, t_{\cdot j})\bigr]$$

where $\mathcal{L}$ is the loss function, $S_l^{\nu}$ is the feature map output at intermediate layer $l$ of the classifier by the previous iteration's normalized adversarial example during the $\nu$-th iteration, $T_l$ is the feature map output by the target image at intermediate layer $l$ of the classifier, $s_{\cdot i}$ and $s_{\cdot j}$ are the data of $S_l^{\nu}$ at spatial positions $i$ and $j$, $t_{\cdot i}$ and $t_{\cdot j}$ are the data of $T_l$ at spatial positions $i$ and $j$, $M_l$ is the length times the width of the feature map, and $k(\cdot,\cdot)$ is an auxiliary mapping operation that maps the feature map data from the original space into a complete inner product space.
5. The image processing method for intermediate-layer targeted adversarial attacks according to claim 1, characterized in that the loss function in step S4 is:

$$\mathcal{L} = \frac{1}{N_l}\sum_{n=1}^{N_l}\left[\left\|\frac{1}{M_l}\sum_{m=1}^{M_l}(S_l^{\nu})_{nm}-\frac{1}{M_l}\sum_{m=1}^{M_l}(T_l)_{nm}\right\|^{2}+\left\|\operatorname{Var}\bigl((S_l^{\nu})_{n\cdot}\bigr)-\operatorname{Var}\bigl((T_l)_{n\cdot}\bigr)\right\|^{2}\right]$$

where $\mathcal{L}$ is the loss function, $S_l^{\nu}$ is the feature map output at intermediate layer $l$ of the classifier by the previous iteration's normalized adversarial example during the $\nu$-th iteration, $T_l$ is the feature map output by the target image at intermediate layer $l$ of the classifier, $\|\cdot\|$ is the two-norm operation, $N_l$ is the number of channels of the feature map, $M_l$ is the length times the width of the feature map, $(S_l^{\nu})_{nm}$ is the data of $S_l^{\nu}$ at position $(n, m)$, $(T_l)_{nm}$ is the data of $T_l$ at position $(n, m)$, $\operatorname{Var}((S_l^{\nu})_{n\cdot})$ is the variance of all values on channel $n$ of $S_l^{\nu}$, and $\operatorname{Var}((T_l)_{n\cdot})$ is the variance of all values on channel $n$ of $T_l$.
6. The image processing method for intermediate-layer targeted adversarial attacks according to claim 1, characterized in that step S5 comprises the following substeps:
S51, computing the gradient of the loss between the feature maps with respect to the previous iteration's normalized adversarial example;
S52, computing the accumulated gradient of the current iteration from the gradient obtained in step S51;
S53, computing the noise of the current iteration from the accumulated gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain the updated adversarial example.
7. The image processing method for intermediate-layer targeted adversarial attacks according to claim 6, characterized in that the formula for calculating the accumulated gradient at the current iteration in step S52 is:

$$\beta_{\nu} = \mu\,\beta_{\nu-1} + \frac{g_{\nu-1}}{\|g_{\nu-1}\|}$$

where $\beta_{\nu}$ is the accumulated gradient of the $\nu$-th iteration, $\beta_{0}$ is the initial accumulated gradient, $\mu$ is the decay factor, $\|\cdot\|$ is the two-norm operation, and $g_{\nu-1} = \nabla_{\hat{x}^{\nu-1}}\mathcal{L}$ is the gradient of the loss function $\mathcal{L}$ with respect to the previous iteration's normalized adversarial example $\hat{x}^{\nu-1}$;

the formula of the updated adversarial example obtained in step S53 is:

$$\tilde{x}^{\nu} = \min\bigl(\max\bigl(\hat{x}^{\nu-1} - \alpha\,\mathrm{sign}(\beta_{\nu}),\; x - \epsilon\bigr),\; x + \epsilon\bigr), \qquad \alpha = \epsilon / T$$

where $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, $\epsilon$ is the infinity-norm perturbation threshold, $\alpha$ is the step size, $T$ is the maximum number of iterations, $\mathrm{sign}(\cdot)$ is the sign function, and $x$ is the original image.
8. The image processing method for intermediate-layer targeted adversarial attacks according to claim 1, characterized in that the formula of the normalization process in step S6 is:

$$\hat{x}^{\nu} = \min\bigl(\max(\tilde{x}^{\nu},\, 0),\, 1\bigr)$$

where $\hat{x}^{\nu}$ is the normalized adversarial example of the $\nu$-th iteration and $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110676108.2A CN113344090B (en) | 2021-06-18 | 2021-06-18 | Image processing method for resisting attack by target in middle layer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113344090A true CN113344090A (en) | 2021-09-03 |
CN113344090B CN113344090B (en) | 2022-11-22 |
Family
ID=77476514
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110676108.2A Active CN113344090B (en) | 2021-06-18 | 2021-06-18 | Image processing method for resisting attack by target in middle layer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344090B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190046068A1 (en) * | 2017-08-10 | 2019-02-14 | Siemens Healthcare Gmbh | Protocol independent image processing with adversarial networks |
CN109948658A (en) * | 2019-02-25 | 2019-06-28 | 浙江工业大学 | The confrontation attack defense method of Feature Oriented figure attention mechanism and application |
US20190238568A1 (en) * | 2018-02-01 | 2019-08-01 | International Business Machines Corporation | Identifying Artificial Artifacts in Input Data to Detect Adversarial Attacks |
CN111709435A (en) * | 2020-05-18 | 2020-09-25 | 杭州电子科技大学 | Countermeasure sample generation method based on discrete wavelet transform |
CN111932646A (en) * | 2020-07-16 | 2020-11-13 | 电子科技大学 | Image processing method for resisting attack |
CN112368719A (en) * | 2018-05-17 | 2021-02-12 | 奇跃公司 | Gradient antagonism training of neural networks |
CN112396129A (en) * | 2020-12-08 | 2021-02-23 | 中山大学 | Countermeasure sample detection method and general countermeasure attack defense system |
Non-Patent Citations (2)
Title |
---|
LIANLI GAO et al.: "Patch-wise Attack for Fooling Deep Neural Network", arXiv:2007.06765 *
SHI Hongyuan et al.: "Reinforced adversarial generative hashing method for image retrieval", Journal of Chinese Computer Systems *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |