CN113283578A - Data denoising method based on marking risk control - Google Patents


Info

Publication number
CN113283578A
CN113283578A (application CN202110399544.XA)
Authority
CN
China
Prior art keywords
data
risk
neural networks
training
networks
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202110399544.XA
Other languages
Chinese (zh)
Inventor
王魏
胡圣佑
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Application filed by Nanjing University
Priority to CN202110399544.XA
Publication of CN113283578A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a data denoising method based on marking risk control. The success of deep learning usually depends on a large amount of accurately labeled data, but such data is often difficult to collect in practice. To reduce the influence of label noise on network performance, the method maintains two neural networks that mutually select small-loss data as low-risk data to update their peer; each network then filters out the high-risk data among them and retrains on the rest. To avoid the degradation in learning performance caused by the two networks becoming increasingly similar during training, the mutual selection of data is stopped once the disagreement between the two networks stabilizes, and each network is updated on the low-risk data obtained so far until convergence. Compared with the prior art, the resulting deep neural networks are more robust.

Description

Data denoising method based on marking risk control
Technical Field
The invention relates to a data denoising method based on marking risk control, which screens out high-risk labeled data to improve robustness, and belongs to the technical field of artificial-intelligence data analysis.
Background
In recent years, deep learning has achieved great success in fields such as face recognition, autonomous driving, and machine translation. However, its performance depends on a large amount of accurately labeled data, which is often difficult to collect in practice, because obtaining accurate labels requires substantial manpower and resources. To address this problem, crowdsourcing is often used to distribute large amounts of unlabeled data to volunteer users for labeling; because annotator expertise varies, the resulting labels tend to be noisy. How to learn from data with label noise has therefore attracted the attention of many researchers.
Disclosure of Invention
Purpose of the invention: in practical applications, the training data of a deep neural network are often noisy, and training on data with label noise impairs the network's classification performance. To solve this problem, the invention provides a data denoising method based on marking risk control. Two neural networks are maintained that mutually select small-loss data for each other to learn from; each network then discovers and removes the high-risk data among them and retrains on the rest. The difference between the two networks is monitored during training to prevent the learning performance from degrading, thereby improving robustness.
The technical scheme is as follows: a data denoising method based on marking risk control comprises the following steps:
First, prepare a data set $\mathcal{D}=\{(x_i,\tilde{y}_i)\}_{i=1}^{n}$ whose labels contain noise. Randomly initialize two peer deep neural networks and train each on $\mathcal{D}$ for T rounds, obtaining two neural networks f and g.
For each example in $\mathcal{D}$, compute the cross-entropy loss under f and under g, sort the data by loss, and let each network select a small-loss subset for its peer: the subset used to update f is denoted $\bar{\mathcal{D}}_f$, and the subset used to update g is denoted $\bar{\mathcal{D}}_g$.
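As an illustrative sketch only (not part of the claimed method), the small-loss selection can be expressed as sorting per-example losses and keeping the smallest fraction; the function name, the toy loss values, and the 50% ratio below are hypothetical:

```python
import numpy as np

def select_small_loss(losses: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return the indices of the `keep_ratio` fraction of examples with the
    smallest per-example loss (the small-loss criterion)."""
    n_keep = int(np.ceil(keep_ratio * len(losses)))
    return np.argsort(losses)[:n_keep]

# Toy example: cross-entropy losses computed by one network on six examples.
losses_f = np.array([0.1, 2.3, 0.4, 1.8, 0.2, 0.9])
idx = select_small_loss(losses_f, 0.5)  # keeps the 3 smallest-loss examples
```

In the method, the indices selected under one network's losses would index the subset handed to its peer for updating.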
Train the neural network f on its small-loss subset $\bar{\mathcal{D}}_f$ for K rounds to obtain f'. Use f' to predict the data in $\bar{\mathcal{D}}_f$; the examples whose prediction disagrees with the original label are regarded as high-risk data $\mathcal{H}_f$. Remove the high-risk data and retrain f on the remaining data $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$ (here $\setminus$ denotes the set difference: $\bar{\mathcal{D}}_f$ is the previously selected small-loss data, $\mathcal{H}_f$ is the high-risk data within it, so $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$ is what remains of $\bar{\mathcal{D}}_f$ after removing $\mathcal{H}_f$). Likewise, train g on $\bar{\mathcal{D}}_g$ for K rounds to obtain g', use g' to predict the data in $\bar{\mathcal{D}}_g$, regard the examples whose prediction disagrees with the original label as high-risk samples $\mathcal{H}_g$, remove them, and retrain g on the remaining data $\bar{\mathcal{D}}_g\setminus\mathcal{H}_g$.
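As an illustrative sketch only, the high-risk filtering step, in which a retrained network's predictions are compared against the collected labels and only the agreeing examples are kept, might look like the following; the function name and toy arrays are hypothetical:

```python
import numpy as np

def remove_high_risk(preds: np.ndarray, labels: np.ndarray):
    """Split a selected subset into high-risk examples (prediction disagrees
    with the given label) and the remaining low-risk examples."""
    high_risk = np.flatnonzero(preds != labels)
    remaining = np.flatnonzero(preds == labels)
    return high_risk, remaining

# Predictions of a retrained network versus the (possibly noisy) collected labels.
preds_fp = np.array([1, 0, 2, 2, 1])
labels   = np.array([1, 1, 2, 0, 1])
high, low = remove_high_risk(preds_fp, labels)  # high = [1, 3], low = [0, 2, 4]
```

The network would then be retrained only on the `low` indices, the set difference described above.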
In each training round, compute the disagreement of f and g on the data they have not seen in the current round. If the disagreement stabilizes, or the number of training rounds reaches a preset maximum N, stop the learning process described above, and train f and g on the finally obtained low-risk data $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$ and $\bar{\mathcal{D}}_g\setminus\mathcal{H}_g$ (the selected small-loss sets minus their high-risk subsets), respectively, until the neural networks converge, finally obtaining two trained neural networks.
In the prediction stage, the user inputs the feature vector of the data to be tested into each of the two trained neural networks; each network returns its prediction for the data, and the prediction with the higher confidence is selected from the two results and output as the final label of the data.
The data set $\mathcal{D}$ may be an image data set whose image labels contain noise. In that case, in the prediction stage, the user inputs the feature vector of the image to be tested into each of the two trained neural networks; each network returns its prediction for the image, and the prediction with the higher confidence is selected from the two results and output as the final label of the image.
Advantageous effects: compared with the prior art, the data denoising method based on marking risk control maintains two neural networks; each network selects small-loss data, measured by cross-entropy loss, as low-risk data to update its peer, then discovers and deletes the data that are still high-risk and trains again, while the difference between the two networks is monitored during training to prevent the learning performance from deteriorating. In an image classification task, noisy labels harm the subsequent neural-network classification model, and the more severe the noise, the greater the damage to performance. Noise in real data is the accumulated result of many noise components, which makes the denoising task difficult. Compared with the prior art, the method removes noise more effectively and yields purer training data, so the neural networks are more robust.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention;
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which are intended to be purely exemplary and not to limit the scope of the invention; various equivalent modifications that occur to those skilled in the art upon reading the present disclosure likewise fall within the scope of the appended claims.
As shown in FIG. 1, the data denoising method based on marking risk control maintains two neural networks that, based on the small-loss criterion, mutually select small-loss data as low-risk data to update their peer. Each network finds and removes the high-risk data therein and retrains on the remaining data. The disagreement of the two networks is monitored during training; if it stabilizes, or the number of learning rounds reaches a preset maximum, the mutual selection of data is stopped, and the two networks are each trained on the low-risk data obtained in the last round until convergence, thereby controlling risk and improving robustness.
The image data denoising method based on the marking risk control comprises the following steps:
step 100 of preparing an image dataset
Figure BDA0003019578440000024
Wherein the images should have the same dimensions with the presence of marking noise.
Step 101: determine a network architecture, such as VGG, ResNet, or EfficientNet, according to the requirements of the image classification task; randomly initialize two peer deep neural networks and train each on all image data in $\mathcal{D}$ for T rounds using gradient descent, obtaining two neural networks f and g.
Step 102: for each image in $\mathcal{D}$, compute the cross-entropy loss under f and under g. Sort the image data by loss and select the small-loss examples in the top R(t) proportion to construct the image data sets $\bar{\mathcal{D}}_f$ (selected by g, used to update f) and $\bar{\mathcal{D}}_g$ (selected by f, used to update g). R(t) is a parameter controlling the proportion of selected image data, and $\ell$ denotes the loss function used in training the neural networks, typically the cross-entropy loss:

$$\bar{\mathcal{D}}_f=\mathop{\arg\min}_{\mathcal{D}':\,|\mathcal{D}'|\geq R(t)|\mathcal{D}|}\;\sum_{(x,\tilde{y})\in\mathcal{D}'}\ell(g(x),\tilde{y})$$

$$\bar{\mathcal{D}}_g=\mathop{\arg\min}_{\mathcal{D}':\,|\mathcal{D}'|\geq R(t)|\mathcal{D}|}\;\sum_{(x,\tilde{y})\in\mathcal{D}'}\ell(f(x),\tilde{y})$$
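The patent only states that R(t) controls the selected proportion, without fixing a schedule. As a purely illustrative assumption, one common choice in small-loss methods is to keep all data at first and then ramp the discarded fraction up to a ceiling over the early rounds; every constant below is hypothetical:

```python
def keep_ratio(t: int, t_k: int = 10, tau: float = 0.2) -> float:
    """One possible schedule for R(t): keep everything at round 0, then linearly
    ramp the discarded fraction up to `tau` over the first `t_k` rounds.
    This specific schedule is an assumption, not taken from the patent."""
    return 1.0 - tau * min(t / t_k, 1.0)

ratios = [keep_ratio(t) for t in (0, 5, 10, 20)]  # 1.0, 0.9, 0.8, 0.8
```

Starting near 1.0 lets both networks see most of the data before the selection tightens, so that the loss ranking is meaningful when discarding begins.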
Step 103: the two neural networks each screen out the high-risk image data in the small-loss data selected by the peer network and retrain on the remaining image data, as follows.

Step 1031: train f on all image data in $\bar{\mathcal{D}}_f$ (the small-loss set used to update f) for K rounds using gradient descent to obtain f'; train g on all image data in $\bar{\mathcal{D}}_g$ for K rounds using gradient descent to obtain g'.

Step 1032: input each image in $\bar{\mathcal{D}}_f$ to f' to obtain a predicted label; the images whose predicted label disagrees with the original label are regarded as high-risk image data $\mathcal{H}_f$. Input each image in $\bar{\mathcal{D}}_g$ to g' to obtain a predicted label; the images whose predicted label disagrees with the original label are regarded as high-risk image data $\mathcal{H}_g$. Here $(x,\tilde{y})$ denotes a training example, where x is the image feature and $\tilde{y}$ is the collected label; because of label noise, $\tilde{y}$ is not necessarily the true label y. $y_{f'}$ and $y_{g'}$ denote the labels predicted by f' and g', respectively, for image x:

$$\mathcal{H}_f=\{(x,\tilde{y})\in\bar{\mathcal{D}}_f : y_{f'}\neq\tilde{y}\}$$

$$\mathcal{H}_g=\{(x,\tilde{y})\in\bar{\mathcal{D}}_g : y_{g'}\neq\tilde{y}\}$$

Step 1033: retrain f on the remaining image data set $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$; retrain g on the remaining image data set $\bar{\mathcal{D}}_g\setminus\mathcal{H}_g$.
Step 104: compute the disagreement of the two neural networks on the image data that has not been used to update them. If the disagreement stabilizes, or the number of training rounds reaches the preset maximum N, stop the mutual learning; otherwise return to step 102 and continue training.
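As an illustrative sketch of the stopping test in step 104: the disagreement is the fraction of held-out examples on which the two networks predict different labels, and "stable" can be operationalized as the recent values varying less than a threshold. The window size and threshold below are illustrative choices, not values from the patent:

```python
import numpy as np

def disagreement(preds_f: np.ndarray, preds_g: np.ndarray) -> float:
    """Fraction of held-out examples on which the two networks disagree."""
    return float(np.mean(preds_f != preds_g))

def has_stabilized(history: list, window: int = 3, eps: float = 0.01) -> bool:
    """Treat the disagreement as stable when its last `window` values span
    less than `eps` (hypothetical stability criterion)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < eps

hist = [0.30, 0.22, 0.150, 0.148, 0.149]
stop = has_stabilized(hist)  # True: the last three values vary by < 0.01
```

When `has_stabilized` fires (or the round counter reaches N), the mutual selection stops and each network finishes training on its last low-risk set.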
Step 105: train the two neural networks on the low-risk image data obtained in the last round, respectively, until convergence, obtaining the two neural networks f and g.
Step 106: input the image to be tested into f and g respectively to obtain predicted labels, and output the predicted label with the higher confidence as the final label.
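As an illustrative sketch of step 106, taking "confidence" to mean the maximum class probability each network assigns (an assumption; the patent does not define the confidence measure), the final label can be chosen as follows; the function name and toy probabilities are hypothetical:

```python
import numpy as np

def predict_final(probs_f: np.ndarray, probs_g: np.ndarray) -> int:
    """Given each network's class-probability vector for one input, output the
    label predicted with the higher confidence (maximum probability)."""
    if probs_f.max() >= probs_g.max():
        return int(probs_f.argmax())
    return int(probs_g.argmax())

# f is 0.70 confident of class 2; g is 0.85 confident of class 1.
p_f = np.array([0.20, 0.10, 0.70])
p_g = np.array([0.05, 0.85, 0.10])
label = predict_final(p_f, p_g)  # g is more confident, so its label wins
```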

Claims (7)

1. A data denoising method based on marking risk control, characterized in that two neural networks are maintained which, based on the small-loss criterion, mutually select small-loss data as low-risk data to update their peer; each network finds and removes the high-risk data therein and retrains on the remaining data; the disagreement of the two networks is monitored during training; if the disagreement stabilizes, or the number of learning rounds reaches a preset maximum, the mutual selection of samples is stopped, and the two networks are each trained on the low-risk samples obtained in the last round until convergence, obtaining two neural networks, which are used to denoise the data to be processed.
2. The method of claim 1, characterized in that a data set $\mathcal{D}=\{(x_i,\tilde{y}_i)\}_{i=1}^{n}$ is first prepared, whose labels contain noise; two peer deep neural networks are randomly initialized and each trained on $\mathcal{D}$ for T rounds, obtaining two neural networks f and g;

for each example in $\mathcal{D}$, the cross-entropy loss is computed under f and under g, the data are sorted by loss, and each network selects a small-loss subset for its peer: the subset used to update f is denoted $\bar{\mathcal{D}}_f$ and the subset used to update g is denoted $\bar{\mathcal{D}}_g$;

the neural network f is trained on $\bar{\mathcal{D}}_f$ for K rounds to obtain f'; f' predicts the data in $\bar{\mathcal{D}}_f$, the examples whose prediction disagrees with the original label are regarded as high-risk data $\mathcal{H}_f$, the high-risk data are removed, and f is retrained on the remaining data $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$; likewise, g is trained on $\bar{\mathcal{D}}_g$ for K rounds to obtain g', g' predicts the data in $\bar{\mathcal{D}}_g$, the examples whose prediction disagrees with the original label are regarded as high-risk samples $\mathcal{H}_g$, the high-risk data are removed, and g is retrained on the remaining data $\bar{\mathcal{D}}_g\setminus\mathcal{H}_g$;

in each training round, the disagreement of f and g is computed on the data they have not seen in the current round; if the disagreement stabilizes, or the number of training rounds reaches a preset maximum N, the learning process above is stopped, and f and g are trained on the finally obtained low-risk data $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$ and $\bar{\mathcal{D}}_g\setminus\mathcal{H}_g$, respectively, until the neural networks converge, finally obtaining two trained neural networks.
3. The data denoising method based on marking risk control of claim 1, characterized in that, in the prediction stage, the user inputs the feature vector of the data to be tested into each of the two trained neural networks; each network returns its prediction for the data, and the prediction with the higher confidence is selected from the two results and output as the final label of the data.
4. The data denoising method based on marking risk control of claim 1, characterized in that the data set $\mathcal{D}$ is an image data set whose image labels contain noise.
5. The method of claim 4, characterized in that, in the prediction stage, the user inputs the feature vector of the image to be tested into each of the two trained neural networks; each network returns its prediction for the image, and the prediction with the higher confidence is selected from the two results and output as the final label of the image.
6. The data denoising method based on marking risk control of claim 1, characterized in that, for the data set $\mathcal{D}$, the cross-entropy loss of each example is computed under the neural networks f and g respectively, the data are sorted by loss, and the small-loss samples in the top R(t) proportion are selected to construct the data sets

$$\bar{\mathcal{D}}_f=\mathop{\arg\min}_{\mathcal{D}':\,|\mathcal{D}'|\geq R(t)|\mathcal{D}|}\;\sum_{(x,\tilde{y})\in\mathcal{D}'}\ell(g(x),\tilde{y})$$

$$\bar{\mathcal{D}}_g=\mathop{\arg\min}_{\mathcal{D}':\,|\mathcal{D}'|\geq R(t)|\mathcal{D}|}\;\sum_{(x,\tilde{y})\in\mathcal{D}'}\ell(f(x),\tilde{y})$$

where R(t) is a parameter controlling the proportion of selected data and $\ell$ is the training loss function.
7. The data denoising method based on marking risk control of claim 2, characterized in that each network finds and removes the high-risk data therein and retrains on the remaining data, with the following specific steps:

step 1031: f is trained on $\bar{\mathcal{D}}_f$ for K rounds using gradient descent to obtain f'; g is trained on $\bar{\mathcal{D}}_g$ for K rounds using gradient descent to obtain g';

step 1032: each example in $\bar{\mathcal{D}}_f$ is input to f' to obtain a predicted label, and the examples whose predicted label disagrees with the original label are regarded as high-risk data $\mathcal{H}_f=\{(x,\tilde{y})\in\bar{\mathcal{D}}_f : y_{f'}\neq\tilde{y}\}$; each example in $\bar{\mathcal{D}}_g$ is input to g' to obtain a predicted label, and the examples whose predicted label disagrees with the original label are regarded as high-risk data $\mathcal{H}_g=\{(x,\tilde{y})\in\bar{\mathcal{D}}_g : y_{g'}\neq\tilde{y}\}$;

step 1033: f is retrained on the remaining data set $\bar{\mathcal{D}}_f\setminus\mathcal{H}_f$; g is retrained on the remaining data set $\bar{\mathcal{D}}_g\setminus\mathcal{H}_g$.
CN202110399544.XA 2021-04-14 2021-04-14 Data denoising method based on marking risk control Pending CN113283578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110399544.XA CN113283578A (en) 2021-04-14 2021-04-14 Data denoising method based on marking risk control

Publications (1)

Publication Number Publication Date
CN113283578A (en) 2021-08-20

Family

ID=77276660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110399544.XA Pending CN113283578A (en) 2021-04-14 2021-04-14 Data denoising method based on marking risk control

Country Status (1)

Country Link
CN (1) CN113283578A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114330439A (en) * 2021-12-28 2022-04-12 盐城工学院 Bearing diagnosis method based on convolutional neural network

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740057A (en) * 2018-12-28 2019-05-10 武汉大学 A kind of strength neural network and information recommendation method of knowledge based extraction
CN110110780A (en) * 2019-04-30 2019-08-09 南开大学 A kind of picture classification method based on confrontation neural network and magnanimity noise data
CN110310199A (en) * 2019-06-27 2019-10-08 上海上湖信息技术有限公司 Borrow or lend money construction method, system and the debt-credit Risk Forecast Method of risk forecast model
EP3582142A1 (en) * 2018-06-15 2019-12-18 Université de Liège Image classification using neural networks
US20200034693A1 (en) * 2018-07-27 2020-01-30 Samsung Electronics Co., Ltd. Method for detecting defects in semiconductor device
CN111160474A (en) * 2019-12-30 2020-05-15 合肥工业大学 Image identification method based on deep course learning
CN111339934A (en) * 2020-02-25 2020-06-26 河海大学常州校区 Human head detection method integrating image preprocessing and deep learning target detection
CN111861909A (en) * 2020-06-29 2020-10-30 南京理工大学 Network fine-grained image denoising and classifying method
CN111931637A (en) * 2020-08-07 2020-11-13 华南理工大学 Cross-modal pedestrian re-identification method and system based on double-current convolutional neural network
CN111985520A (en) * 2020-05-15 2020-11-24 南京智谷人工智能研究院有限公司 Multi-mode classification method based on graph convolution neural network
CN112101328A (en) * 2020-11-19 2020-12-18 四川新网银行股份有限公司 Method for identifying and processing label noise in deep learning
CN112529024A * 2019-09-17 2021-03-19 Ricoh Co., Ltd. Sample data generation method and device and computer readable storage medium
WO2021064856A1 * 2019-10-01 2021-04-08 NEC Corporation Robust learning device, robust learning method, program, and storage device
WO2021064787A1 * 2019-09-30 2021-04-08 NEC Corporation Learning system, learning device, and learning method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
J. ZHANG et al.: "Improving Crowdsourced Label Quality Using Noise Correction", IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1675-1688, XP011681456, DOI: 10.1109/TNNLS.2017.2677468 *
WANG WEI et al.: "Learnability of Multi-Instance Multi-Label Learning", Chinese Science Bulletin, vol. 57, no. 19, p. 2488, XP035075248, DOI: 10.1007/s11434-012-5133-z *
WANG, WEI et al.: "Adaptive Switching Anisotropic Diffusion Model for Universal Noise Removal", Proceedings of the 10th World Congress on Intelligent Control and Automation (WCICA 2012), pp. 4803-4808 *
ZHENGWEN ZHANG et al.: "Making Deep Neural Networks Robust to Label Noise: A Reweighting Loss and Data Filtration", 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), pp. 289-293 *
ZHOU Hangchi et al.: "Research on Image Classification and Annotation Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology, no. 2021 *
GUO Xiangyu et al.: "An Improved Co-training Algorithm: Compatible Co-training", Journal of Nanjing University (Natural Science), vol. 52, no. 04, pp. 662-671 *

Similar Documents

Publication Publication Date Title
CN108829826B (en) Image retrieval method based on deep learning and semantic segmentation
CN109299274B (en) Natural scene text detection method based on full convolution neural network
CN108288051B (en) Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN108335303B (en) Multi-scale palm skeleton segmentation method applied to palm X-ray film
CN111079847B (en) Remote sensing image automatic labeling method based on deep learning
JP2018097807A (en) Learning device
CN107480723B (en) Texture Recognition based on partial binary threshold learning network
CN110728694B (en) Long-time visual target tracking method based on continuous learning
CN111126470B (en) Image data iterative cluster analysis method based on depth measurement learning
CN108734200B (en) Human target visual detection method and device based on BING (building information network) features
CN111239137B (en) Grain quality detection method based on transfer learning and adaptive deep convolution neural network
CN112101364B (en) Semantic segmentation method based on parameter importance increment learning
CN114048822A (en) Attention mechanism feature fusion segmentation method for image
CN109872326B (en) Contour detection method based on deep reinforced network jump connection
KR20220116270A (en) Learning processing apparatus and method
CN112581483B (en) Self-learning-based plant leaf vein segmentation method and device
CN113283578A (en) Data denoising method based on marking risk control
CN117115614B (en) Object identification method, device, equipment and storage medium for outdoor image
US11776292B2 (en) Object identification device and object identification method
CN108428234B (en) Interactive segmentation performance optimization method based on image segmentation result evaluation
CN109255794B (en) Standard part depth full convolution characteristic edge detection method
CN117011515A (en) Interactive image segmentation model based on attention mechanism and segmentation method thereof
CN114708307B (en) Target tracking method, system, storage medium and device based on correlation filter
CN116310466A (en) Small sample image classification method based on local irrelevant area screening graph neural network
CN114758135A (en) Unsupervised image semantic segmentation method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination