CN117152587A - Anti-learning-based semi-supervised ship detection method and system - Google Patents

Anti-learning-based semi-supervised ship detection method and system Download PDF

Info

Publication number
CN117152587A
CN117152587A CN202311407565.7A CN202311407565A CN117152587A CN 117152587 A CN117152587 A CN 117152587A CN 202311407565 A CN202311407565 A CN 202311407565A CN 117152587 A CN117152587 A CN 117152587A
Authority
CN
China
Prior art keywords
training set
picture
current
uncalibrated
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311407565.7A
Other languages
Chinese (zh)
Other versions
CN117152587B (en
Inventor
陈江海
范洪浩
宋春
雷明根
占冰倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Whyis Technology Co ltd
Original Assignee
Zhejiang Whyis Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Whyis Technology Co ltd filed Critical Zhejiang Whyis Technology Co ltd
Priority to CN202311407565.7A priority Critical patent/CN117152587B/en
Publication of CN117152587A publication Critical patent/CN117152587A/en
Application granted granted Critical
Publication of CN117152587B publication Critical patent/CN117152587B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0895Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/766Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semi-supervised ship detection method and system based on countermeasure learning. The method adopts semi-supervised training to reduce the workload of target detection calibration; adding countermeasure learning in an original ship detection model, namely adding a similarity calculation module to classify uncalibrated training sets, so that the model learns uncalibrated training sets of different classes to different degrees; classifying the calibration training set by using the loss value of the calibration training set, reducing the sampling learning of the model on the easy-to-check training set, and compressing the training time; the training stage of the target semi-supervised ship detection model modifies the image sampling rule in real time, realizes that the calibrated training set is dominant, the uncalibrated training set assists in learning, increases the learning capacity of the model on the uncalibrated training set, and reduces the influence of the uncalibrated training set on the model due to error calibration.

Description

Anti-learning-based semi-supervised ship detection method and system
Technical Field
The invention relates to the technical field of ship detection, in particular to a semi-supervised ship detection method and system based on countermeasure learning.
Background
With the continuous development of water traffic, the water traffic order is also continuously updated, and the workload of workers maintaining the water traffic is continuously increased, wherein the ship detection is one of the water traffic workload. The data volume of the ship is huge, supervised deep learning is adopted, a great deal of manpower is required for calibration, a semi-supervised algorithm is adopted in the prior art, and a calibrated training set and an uncalibrated training set are adopted to improve the detection capability of the ship detection model, but because the uncalibrated training set has huge data volume, screening is not adopted, a great deal of time is consumed in the training process, huge vibration of the ship detection model occurs, and even model fitting failure occurs.
Aiming at the problems that in the prior art, a semi-supervision algorithm is adopted, because the uncalibrated training set data volume is huge, screening is not adopted, a great amount of time is consumed in the training process, huge vibration of a ship detection model occurs, and even the model cannot be fitted, no effective solution is proposed at present.
Disclosure of Invention
The embodiment of the invention provides a semi-supervised ship detection method and a semi-supervised ship detection system based on countermeasure learning, which are used for solving the problems that the prior art adopts a semi-supervised algorithm, the uncalibrated training set has huge data quantity, screening is not adopted, a great amount of time is consumed in the training process, huge vibration of a ship detection model occurs, and even the model cannot be fitted.
To achieve the above object, in one aspect, the present invention provides a semi-supervised ship detection method based on countermeasure learning, the method comprising: s1, adding a similarity calculation module behind a main network output layer of an original ship detection model and in front of a detection head to obtain an updated ship detection model; s2, carrying out data preprocessing on the original training set to obtain an updated training set; inputting the updated training set into the updated ship detection model for multi-round model training to obtain a target ship detection model; s3, inputting the calibration training set into the target ship detection model for prediction to obtain a loss value of each calibration picture and a total feature sequence of each calibration picture; sorting and dividing the calibration training sets according to the loss values of all the calibration pictures to obtain a calibration easy-to-check training set, a calibration easier-to-check training set and a calibration difficult-to-check training set; s4, inputting the uncalibrated training set into the target ship detection model for prediction to obtain the total feature sequence of each uncalibrated picture and the confidence coefficient of each target frame in each uncalibrated picture; calculating to obtain the nearest distance between each uncalibrated picture and the calibrated training set according to the total feature sequences of all uncalibrated pictures and the total feature sequences of all calibrated pictures; dividing the uncalibrated training set into an unmatched training set and a matched training set according to all the nearest distances; dividing the matched training set into an uncalibrated easy-to-check training set, an uncalibrated easier-to-check training set and an uncalibrated difficult-to-check training set; marking a target frame with the confidence coefficient larger than a preset threshold value in each uncalibrated picture as a pseudo tag; s5, deleting the calibrated easy-to-check training set and the uncalibrated easy-to-check training set; extracting two pictures from the calibration easy-to-check training set and the calibration difficult-to-check training set according to preset selection conditions, and extracting two pictures from the unmatched training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set as current iteration training sets; inputting the current iteration training set into an initial semi-supervised ship detection model for training to obtain a calibrated current iteration loss value and an uncalibrated current iteration loss value; carrying out back propagation on the initial semi-supervised ship detection model according to the calibrated current iteration loss value and the uncalibrated current iteration loss value to obtain a current iteration semi-supervised ship detection model; s6, repeating the step S5 until all the deleted training sets are trained and multi-round model training is carried out, and a target semi-supervised ship detection model is obtained; and S7, inputting the picture to be detected into the target semi-supervised ship detection model for detection, and obtaining the ship position.
Optionally, the performing data preprocessing on the original training set to obtain an updated training set includes: s21, carrying out K different data processing on the original training set to obtain an updated training set; the number of pictures of the original training set is at least two; the number of pictures of the updated training set is K times that of the original training set.
Optionally, inputting the updated training set into the updated ship detection model to perform multi-round model training, and obtaining the target ship detection model includes: s22, inputting an updated training set into a main network output layer of the updated ship detection model to obtain a main network feature map of each picture; s23, coding the backbone network feature map of each picture through M codebooks to obtain a total feature sequence of each picture; s24, calculating a similarity loss value of the current picture according to the total feature sequence of the current picture, the total feature sequence which is the same as the current picture in category and the total feature sequence which is different from the current picture in category; s25, inputting the total characteristic sequence of each picture into a detection head of the updated ship detection model to obtain a classification loss value and a regression loss value of each picture; s26, calculating a total loss value of the updated training set according to the similarity loss value, the classification loss value and the regression loss value of each picture; s27, repeating the steps S22-S26 until the total loss value of the updated training set fluctuates within a first preset range, and stopping training to obtain the target ship detection model.
Optionally, the S23 includes: s231, dividing the backbone network feature map of the current picture into k groups, and coding the backbone network feature map of the current group through the current codebook to obtain the correlation between the backbone network feature map of the current group and all codewords of the current codebook; wherein each codebook comprises k codewords; s232, calculating a first characteristic value of the current group backbone network characteristic diagram and the current codebook according to the correlation and the current codebook; s233, generating a first feature sequence according to the feature graphs of all groups of backbone networks and the first feature value of the current codebook; s234, generating a total feature sequence of the current picture according to the backbone network feature map of the current picture and the first feature sequences of all codebooks.
Optionally, the preset selection conditions include: selecting a first condition: extracting a picture from the calibration easy-to-check training set, extracting a picture from the calibration difficult-to-check training set, extracting a picture from the unmatched training set, and extracting a picture from the unmarked easy-to-check training set as the current iteration training set; or selecting a second condition: extracting a picture from the calibration easy-to-check training set, extracting a picture from the calibration difficult-to-check training set, extracting a picture from the un-calibration difficult-to-check training set, and extracting a picture from the un-calibration easy-to-check training set as a current iteration training set; or selecting a condition III: two pictures are extracted from the calibration easy-to-check training set, one picture is extracted from the unmatched training set, and one picture is extracted from the unmarked difficult-to-check training set to serve as a current iteration training set.
Optionally, the correlation between the current set of backbone network feature maps and all codewords of the current codebook is calculated according to the following formula:
wherein,for the current set of backbone network feature map, +.>Is the +.>Code word->For the number of codewords of the current codebook, +.>Is the +.>The number of codewords is one,,/>for the current group backbone network feature map and the current codebook +.>The correlation of the individual code words is such that,is a normalized exponential function;
the first eigenvalues of the current set of backbone network eigenvectors and the current codebook are calculated according to the following formula:
wherein,is the +.>Code word->For the number of codewords of the current codebook, +.>For the current group backbone network feature map and the current codebook +.>Correlation of individual codewords,/->The method comprises the steps of obtaining a first characteristic value of a current set of backbone network characteristic diagrams and a current codebook;
the total feature sequence of the current picture is as follows:
wherein,for the first feature sequence of the backbone network feature map and the first codebook of the current picture,/->A backbone network feature map of the current picture and a first feature sequence of an Mth codebook;is the total feature sequence of the current picture.
Optionally, when the first or second condition is selected for model training, the calibrated current iteration loss value is calculated according to the following formula:
Wherein,for the loss value of the calibrated easier-to-check picture obtained through training of the initial semi-supervised ship detection model, < >>In order to obtain the loss value of the calibrated difficult-to-detect picture through the training of the initial semi-supervised ship detection model,for the loss value of the calibrated easily-detected picture predicted by the target ship detection model, the +.>For the loss value of the calibrated difficult-to-detect picture predicted by the target ship detection model, the +.>And calibrating the current iteration loss value.
Optionally, when the model training is performed under the selection condition III, the current iteration loss value is calibrated according to the following formula:
wherein,to calibrate the current iteration loss value +.>The loss value sum of all the calibrated easy-to-check pictures is obtained through training of the initial semi-supervised ship detection model.
Optionally, when the first or second or third selection condition is selected for model training, the updated loss value of the current pseudo tag of the current uncalibrated picture is calculated according to the following formula:
wherein,as a logistic function,/->For the nearest distance between the current uncalibrated picture and the calibrated training set, < >>Loss value of the current pseudo tag for the current uncalibrated picture,/for the current pseudo tag>Confidence level of current pseudo tag of current uncalibrated picture obtained through prediction of target ship detection model,/for the target ship detection model >For the loss value of the current pseudo tag of the current uncalibrated picture obtained through training of the initial semi-supervised ship detection model,/I>Updating the loss value of the current pseudo tag for the current uncalibrated picture;
and summing the updated loss values of all pseudo tags of all uncalibrated pictures in the current iteration training set to obtain an uncalibrated current iteration loss value.
In another aspect, the present invention provides a semi-supervised marine vessel inspection system based on countermeasure learning, the system comprising: the model updating module is used for adding a similarity calculation module behind a main network output layer of the original ship detection model and in front of a detection head to obtain an updated ship detection model; the target ship detection model training module is used for preprocessing data of the original training set to obtain an updated training set; inputting the updated training set into the updated ship detection model for multi-round model training to obtain a target ship detection model; the calibration training set dividing module is used for inputting a calibration training set into the target ship detection model for prediction to obtain a loss value of each calibration picture and a total characteristic sequence of each calibration picture; sorting and dividing the calibration training sets according to the loss values of all the calibration pictures to obtain a calibration easy-to-check training set, a calibration easier-to-check training set and a calibration difficult-to-check training set; the uncalibrated training set dividing module is used for inputting an uncalibrated training set into the target ship detection model for prediction to obtain the total feature sequence of each uncalibrated picture and the confidence coefficient of each target frame in each uncalibrated picture; calculating to obtain the nearest distance between each uncalibrated picture and the calibrated training set according to the total feature sequences of all uncalibrated pictures and the total feature sequences of all calibrated pictures; dividing the uncalibrated training set into an unmatched training set and a matched training set according to all the nearest distances; dividing the matched training set into an uncalibrated easy-to-check training set, an uncalibrated easier-to-check training set and an uncalibrated difficult-to-check training set; marking a target frame with the confidence coefficient larger than a first preset threshold value in each uncalibrated picture as a pseudo tag; the current wheel semi-supervised ship detection model training module is used for deleting the calibrated easy-to-detect training set and the uncalibrated easy-to-detect training set; extracting two pictures from the calibration easy-to-check training set and the calibration difficult-to-check training set according to preset selection conditions, and extracting two pictures from the unmatched training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set as current iteration training sets; model training is carried out on the current iteration training set until all the deleted training sets are trained, and a current wheel semi-supervised ship detection model, a calibrated wheel loss value and an uncalibrated wheel loss value are obtained; calculating to obtain a wheel total loss value according to the calibrated wheel loss value and the uncalibrated wheel loss value; the repeated training module is used for repeating the training module of the current wheel semi-supervised ship detection model until the total wheel loss value fluctuates within a first preset range, and stopping training to obtain a target semi-supervised ship detection model; the detection module is used for inputting the picture to be detected into the target semi-supervised ship detection model for detection, and obtaining the ship position.
The invention has the beneficial effects that:
the invention provides a semi-supervised ship detection method and a system based on countermeasure learning, wherein the method adopts semi-supervised training to reduce the workload of target detection and calibration; adding countermeasure learning in an original ship detection model, namely adding a similarity calculation module to classify uncalibrated training sets, so that the model learns uncalibrated training sets of different classes to different degrees; classifying the calibration training set by using the loss value of the calibration training set, reducing the sampling learning of the model on the easy-to-check training set, and compressing the training time; the training stage of the target semi-supervised ship detection model modifies the image sampling rule in real time, realizes that the calibrated training set is dominant, the uncalibrated training set assists in learning, increases the learning capacity of the model on the uncalibrated training set, and reduces the influence of the uncalibrated training set on the model due to error calibration.
Drawings
FIG. 1 is a flow chart of a semi-supervised marine vessel inspection method based on countermeasure learning provided by an embodiment of the present invention;
FIG. 2 is a flowchart of obtaining a target ship detection model according to an embodiment of the present invention;
FIG. 3 is a flowchart of a general feature sequence for generating each picture according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a semi-supervised marine inspection system based on countermeasure learning according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a training module for a target ship detection model according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a coding submodule according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a flowchart of a semi-supervised ship detection method based on countermeasure learning according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s1, adding a similarity calculation module behind a main network output layer of an original ship detection model and in front of a detection head to obtain an updated ship detection model;
the anti-learning branch is added, that is, the ship detection model is updated, so that the ship detection model has a detection function and a similarity matching function.
S2, carrying out data preprocessing on the original training set to obtain an updated training set; inputting the updated training set into the updated ship detection model for multi-round model training to obtain a target ship detection model;
fig. 2 is a flowchart of obtaining a target ship detection model according to an embodiment of the present application, as shown in fig. 2,
the step of preprocessing the data of the original training set to obtain an updated training set comprises the following steps:
s21, carrying out K different data processing on the original training set to obtain an updated training set; the number of pictures of the original training set is at least two; the number of pictures of the updated training set is K times that of the original training set.
Assuming that the number of pictures of an original training set is two (A and B), performing two different data processes on the original training set (the first data process is picture scaling, affine transformation and brightness adjustment of pictures; the second data process is picture color dithering and picture rotation) to obtain updated training sets (A1, A2, B1 and B2);
it should be noted that the manner of processing the data is not limited by the present application.
Inputting the updated training set into the updated ship detection model for multi-round model training, and obtaining the target ship detection model comprises the following steps:
S22, inputting an updated training set into a main network output layer of the updated ship detection model to obtain a main network feature map of each picture;
specifically, in the above description, A1, A2, B1 and B2 are input into the backbone network output layer of the updated ship detection model, so as to obtain a backbone network feature map of the A1 picture, a backbone network feature map of the A2 picture, a backbone network feature map of the B1 picture and a backbone network feature map of the B2 picture.
S23, coding the backbone network feature map of each picture through M codebooks to obtain a total feature sequence of each picture;
fig. 3 is a flowchart of generating a total feature sequence of each picture according to an embodiment of the present invention, as shown in fig. 3, where the step S23 includes:
s231, dividing the backbone network feature map of the current picture into k groups, and coding the backbone network feature map of the current group through the current codebook to obtain the correlation between the backbone network feature map of the current group and all codewords of the current codebook; wherein each codebook comprises k codewords;
specifically, taking a backbone network feature map of an A1 picture as an example: dividing a backbone network feature map of the A1 picture into K groups; it should be noted that the number of codewords per codebook must be equal to the number of groups of backbone network feature map partitions per picture.
The correlation between the current set of backbone network feature maps and all codewords of the current codebook is calculated according to the following formula:
wherein,for the current set of backbone network feature map, +.>Is the +.>Code word->For the number of codewords of the current codebook, +.>Is the +.>The number of codewords is one,,/>for the current group backbone network feature map and the current codebook +.>Of individual code wordsThe correlation is used to determine the correlation,is a normalized exponential function.
S232, calculating a first characteristic value of the current group backbone network characteristic diagram and the current codebook according to the correlation and the current codebook;
the first eigenvalues of the current set of backbone network eigenvectors and the current codebook are calculated according to the following formula:
wherein,is the +.>Code word->For the number of codewords of the current codebook, +.>For the current group backbone network feature map and the current codebook +.>Correlation of individual codewords,/->The method comprises the steps of obtaining a first characteristic value of a current set of backbone network characteristic diagrams and a current codebook;
233. generating a first feature sequence according to all the sets of backbone network feature graphs and the first feature value of the current codebook;
according to the formula, all groups of backbone network feature graphs (namely, the backbone network feature graph of the previous picture: the backbone network feature graph of the A1 picture) and the first feature values (namely, k first feature values) of the current codebook can be obtained; generating a first feature sequence from the k first feature values.
S234, generating a total feature sequence of the current picture according to the backbone network feature map of the current picture and the first feature sequences of all codebooks.
The total feature sequence of the current picture (the total feature sequence of the A1 picture) is as follows:
wherein,for the first feature sequence of the backbone network feature map and the first codebook of the current picture,/->A backbone network feature map of the current picture and a first feature sequence of an Mth codebook;is the total feature sequence of the current picture.
By the method, the total feature sequence of the A1 picture, the total feature sequence of the A2 picture, the total feature sequence of the B1 picture and the total feature sequence of the B1 picture are obtained; wherein the A2 picture is the same as the A1 picture in category, and the B1 picture is the same as the B2 picture in category; the B1 picture and the B2 picture are different from the A1 picture in category; the B1 picture and the B2 picture are different from the A2 picture in category.
S24, calculating a similarity loss value of the current picture according to the total feature sequence of the current picture, the total feature sequence which is the same as the current picture in category and the total feature sequence which is different from the current picture in category;
specifically, taking the similarity loss value of the A1 picture as an example:
wherein,is the total feature sequence of the current picture (the total feature sequence of the A1 picture), Is the same total feature sequence as the current picture category (total feature sequence of A2 picture),is the total feature sequence (the total feature sequence of the B1 picture and the total feature sequence of the B2 picture) which is different from the category of the current picture>For a different number of pictures than the current picture category (in the above embodiment +.>2).
By the method, the similarity loss value of the A1 picture, the similarity loss value of the A2 picture, the similarity loss value of the B1 picture and the similarity loss value of the B2 picture are obtained.
S25, inputting the total characteristic sequence of each picture into a detection head of the updated ship detection model to obtain a classification loss value and a regression loss value of each picture;
s26, calculating a total loss value of the updated training set according to the similarity loss value, the classification loss value and the regression loss value of each picture;
and summing the similarity loss value, the classification loss value and the regression loss value of all the pictures (namely 4 pictures of the updated training set) to obtain the total loss value of the updated training set. And back-propagating the updated ship detection model according to the total loss value of the updated training set.
S27, repeating the steps S22-S26 until the total loss value of the updated training set fluctuates within a first preset range, and stopping training to obtain the target ship detection model.
The updated training set is subjected to multiple rounds of model training until the total loss value of the updated training set fluctuates within a first preset range (in the application) Stopping training to obtain the targetAnd (5) marking a ship detection model.
S3, inputting the calibration training set into the target ship detection model for prediction to obtain a loss value of each calibration picture and a total feature sequence of each calibration picture; sorting and dividing the calibration training sets according to the loss values of all the calibration pictures to obtain a calibration easy-to-check training set, a calibration easier-to-check training set and a calibration difficult-to-check training set;
assuming that the calibration training set has 10 pictures, inputting the calibration training set into a target ship detection model for prediction to obtain a loss value of each calibration picture and a total feature sequence of each calibration picture; sorting the loss values of 10 calibration pictures from small to large, so as to mark the calibration training set corresponding to the 20% of the loss values as an easy-to-check training set; and marking the calibration training set corresponding to the loss value of 20% -80% as a relatively easy-to-detect training set, and marking the calibration training set corresponding to the loss value of 80% -100% as a difficult-to-detect training set.
S4, inputting the uncalibrated training set into the target ship detection model for prediction to obtain the total feature sequence of each uncalibrated picture and the confidence coefficient of each target frame in each uncalibrated picture; calculating to obtain the nearest distance between each uncalibrated picture and the calibrated training set according to the total feature sequences of all uncalibrated pictures and the total feature sequences of all calibrated pictures; dividing the uncalibrated training set into an unmatched training set and a matched training set according to all the nearest distances; dividing the matched training set into an uncalibrated easy-to-check training set, an uncalibrated easier-to-check training set and an uncalibrated difficult-to-check training set; marking a target frame with the confidence coefficient larger than a preset threshold value in each uncalibrated picture as a pseudo tag;
Specifically, assuming that 10 pictures exist in the uncalibrated training set, respectively calculating difference values of the total feature sequence of the first uncalibrated picture and the total feature sequence of the 10 calibrated pictures to obtain 10 distances, finding out the minimum distance from the 10 distances, and taking the minimum distance as the nearest distance between the first uncalibrated picture and the calibrated training set. Similarly, the nearest distances between the second to tenth uncalibrated pictures and the calibrated training set are respectively obtained; the method comprises the steps of obtaining 10 nearest distances, sorting the 10 nearest distances from small to large, marking uncollimated pictures corresponding to the last 30% of sorted pictures as uncollimated training sets, marking uncollimated pictures corresponding to the first 70% of sorted pictures as matched training sets, and dividing the matched training sets into uncollimated easy-to-check training sets, uncollimated easy-to-check training sets and uncollimated difficult-to-check training sets according to the types of calibrated training sets corresponding to the nearest distances in the matched training sets;
for example: the difference value between the first uncalibrated picture and the second calibrated picture in the calibrated training set is the smallest, the difference value between the second uncalibrated picture and the fourth calibrated picture in the calibrated training set is the smallest, and similarly, the smallest difference value between the third to tenth uncalibrated pictures and one of the calibrated pictures in the calibrated training set can be obtained respectively; the ten nearest distances are sequenced from small to large, and the 1 st, 2 nd, 3 rd, 5 th, 7 th, 8 th and 9 th uncalibrated pictures corresponding to the first 70% are obtained and are matched training sets; the difference between the first uncalibrated picture and the second calibrated picture in the calibrated training set is the smallest, the second calibrated picture is a calibrated easy-to-check picture, and the first uncalibrated picture is an uncalibrated easy-to-check picture; and the difference value between the second uncalibrated picture and the fourth calibrated picture in the calibrated training set is the smallest, and the fourth calibrated picture is the calibrated difficult-to-detect picture, so that the second uncalibrated picture is the uncalibrated difficult-to-detect picture.
All uncalibrated easy-to-check pictures form an uncalibrated easy-to-check training set, all uncalibrated easier-to-check pictures form an uncalibrated easier-to-check training set, and all uncalibrated difficult-to-check pictures form an uncalibrated difficult-to-check training set.
And marking the target frame with the confidence coefficient larger than a preset threshold value (set to 0.3 in the application) in each uncalibrated picture as a pseudo tag. It should be noted that the specific values of the preset threshold are not limited in the present application.
S5, deleting the calibrated easy-to-check training set and the uncalibrated easy-to-check training set; extracting two pictures from the calibration easy-to-check training set and the calibration difficult-to-check training set according to preset selection conditions, and extracting two pictures from the unmatched training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set as current iteration training sets; inputting the current iteration training set into an initial semi-supervised ship detection model for training to obtain a calibrated current iteration loss value and an uncalibrated current iteration loss value; carrying out back propagation on the initial semi-supervised ship detection model according to the calibrated current iteration loss value and the uncalibrated current iteration loss value to obtain a current iteration semi-supervised ship detection model;
specifically, in order to strengthen the learning of the model on the easy-to-check training set and the difficult-to-check training set, the calibrated easy-to-check training set and the uncalibrated easy-to-check training set are deleted;
The preset selection conditions comprise:
selecting a first condition: extracting a picture from the calibration easy-to-check training set, extracting a picture from the calibration difficult-to-check training set, extracting a picture from the unmatched training set, and extracting a picture from the unmarked easy-to-check training set as the current iteration training set;
or selecting a second condition: extracting a picture from the calibration easy-to-check training set, extracting a picture from the calibration difficult-to-check training set, extracting a picture from the un-calibration difficult-to-check training set, and extracting a picture from the un-calibration easy-to-check training set as a current iteration training set;
or selecting a condition III: two pictures are extracted from the calibration easy-to-check training set, one picture is extracted from the unmatched training set, and one picture is extracted from the unmarked difficult-to-check training set to serve as a current iteration training set.
Selecting one selection condition from the three selection conditions to form a current iteration training set, and inputting the current iteration training set into an initial semi-supervised ship detection model for training to obtain a loss value of each calibrated picture and a loss value of each pseudo label of each uncalibrated picture; adding a weight to the loss value of each pseudo tag of each uncalibrated picture to obtain an updated loss value of each pseudo tag of each uncalibrated picture; and summing the updated loss values of all pseudo tags of all uncalibrated pictures in the current iteration training set to obtain an uncalibrated current iteration loss value.
When the first or second condition is selected for model training, the current iteration loss value is calibrated according to the following formula:
wherein,for the loss value of the calibrated easier-to-check picture obtained through training of the initial semi-supervised ship detection model, < >>In order to obtain the loss value of the calibrated difficult-to-detect picture through the training of the initial semi-supervised ship detection model,for the loss value of the calibrated easily-detected picture predicted by the target ship detection model, the +.>For the loss value of the calibrated difficult-to-detect picture predicted by the target ship detection model, the +.>And calibrating the current iteration loss value.
When the third condition is selected for model training, the current iteration loss value is calibrated and calculated according to the following formula:
wherein,to calibrate the current iteration loss value +.>The loss value sum of all the calibrated easy-to-check pictures is obtained through training of the initial semi-supervised ship detection model.
When the first selection condition, the second selection condition or the third selection condition is used for model training, the updated loss value of the current pseudo tag of the current uncalibrated picture is calculated according to the following formula:
wherein,as a logistic function,/->For the nearest distance between the current uncalibrated picture and the calibrated training set, < >>Loss value of the current pseudo tag for the current uncalibrated picture,/for the current pseudo tag >Confidence level of current pseudo tag of current uncalibrated picture obtained through prediction of target ship detection model,/for the target ship detection model>For the loss value of the current pseudo tag of the current uncalibrated picture obtained through training of the initial semi-supervised ship detection model,/I>Updating the loss value of the current pseudo tag for the current uncalibrated picture;
and summing the updated loss values of all pseudo tags of all uncalibrated pictures in the current iteration training set to obtain an uncalibrated current iteration loss value.
And summing the calibrated current iteration loss value and the uncalibrated current iteration loss value to obtain a current iteration loss value, and carrying out back propagation on the initial semi-supervised ship detection model according to the current iteration loss value to obtain a current iteration semi-supervised ship detection model.
S6, repeating the step S5 until all the deleted training sets are trained and multi-round model training is carried out, and a target semi-supervised ship detection model is obtained;
if the step S5 is repeated, calibrating the easy-to-check training set, the difficult-to-check training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set, and remaining one piece of the difficult-to-check picture in the unmarked difficult-to-check training set and one piece of the easy-to-check picture; and directly inputting the picture which is not calibrated and easy to be detected into the current iterative semi-supervised ship detection model for training to obtain the sum of the loss value of the picture which is calibrated and difficult to be detected and the updated loss value of all pseudo labels of the picture which is not calibrated and easy to be detected.
Training all the deleted training sets to obtain a current wheel loss value and a current wheel semi-supervised ship detection model; and repeatedly performing multi-round model training to obtain the target semi-supervised ship detection model.
And S7, inputting the picture to be detected into the target semi-supervised ship detection model for detection, and obtaining the ship position.
Fig. 4 is a schematic structural diagram of a semi-supervised ship detection system based on countermeasure learning according to an embodiment of the present invention, as shown in fig. 4, the system includes:
the model updating module 201 is configured to add a similarity calculation module after the output layer of the backbone network of the original ship detection model and before the detection head, so as to obtain an updated ship detection model;
the target ship detection model training module 202 is configured to perform data preprocessing on an original training set to obtain an updated training set; inputting the updated training set into the updated ship detection model for multi-round model training to obtain a target ship detection model;
fig. 5 is a schematic structural diagram of a training module for a target ship detection model according to an embodiment of the present invention, and as shown in fig. 5, the training module 202 for a target ship detection model includes:
a data processing sub-module 2021, configured to perform K different data processing on the original training set, to obtain an updated training set; the number of pictures of the original training set is at least two; the number of the pictures of the updated training set is K times of the number of the pictures of the original training set;
The feature map obtaining submodule 2022 inputs the updated training set into a backbone network output layer of the updated ship detection model to obtain a backbone network feature map of each picture;
the coding submodule 2023 is used for coding the backbone network feature map of each picture through M codebooks to obtain a total feature sequence of each picture;
fig. 6 is a schematic structural diagram of a coding submodule according to an embodiment of the present invention, and as shown in fig. 6, the coding submodule 2023 includes:
the coding unit 20231 is configured to divide the backbone network feature map of the current picture into k groups, and code the current group backbone network feature map through the current codebook to obtain the correlation between the current group backbone network feature map and all codewords of the current codebook; wherein each codebook comprises k codewords;
a first eigenvalue calculation unit 20232, configured to calculate, according to the correlation and the current codebook, a first eigenvalue of the current set of backbone network eigenvectors and the current codebook;
a first feature sequence generating unit 20233, configured to generate a first feature sequence according to all the sets of backbone network feature graphs and the first feature values of the current codebook;
the total feature sequence generating unit 20234 is configured to generate a total feature sequence of the current picture according to the backbone network feature map of the current picture and the first feature sequences of all codebooks.
The similarity loss value calculation submodule 2024 is configured to calculate a similarity loss value of the current picture according to a total feature sequence of the current picture, a total feature sequence identical to the current picture category, and a total feature sequence different from the current picture category;
a detection submodule 2025, configured to input the total feature sequence of each picture into the detection head of the updated ship detection model, so as to obtain a classification loss value and a regression loss value of each picture;
a total loss value calculation submodule 2026, configured to calculate a total loss value of the updated training set according to the similarity loss value, the classification loss value and the regression loss value of each picture;
and a repeated training submodule 2027, configured to repeat all the submodules until the total loss value of the updated training set fluctuates within a first preset range, and stop training, so as to obtain the target ship detection model.
The calibration training set dividing module 203 is configured to input a calibration training set into the target ship detection model for prediction, so as to obtain a loss value of each calibration picture and a total feature sequence of each calibration picture; sorting and dividing the calibration training sets according to the loss values of all the calibration pictures to obtain a calibration easy-to-check training set, a calibration easier-to-check training set and a calibration difficult-to-check training set;
The uncalibrated training set dividing module 204 is configured to input an uncalibrated training set into the target ship detection model for prediction, so as to obtain a total feature sequence of each uncalibrated picture and a confidence coefficient of each target frame in each uncalibrated picture; calculating to obtain the nearest distance between each uncalibrated picture and the calibrated training set according to the total feature sequences of all uncalibrated pictures and the total feature sequences of all calibrated pictures; dividing the uncalibrated training set into an unmatched training set and a matched training set according to all the nearest distances; dividing the matched training set into an uncalibrated easy-to-check training set, an uncalibrated easier-to-check training set and an uncalibrated difficult-to-check training set; marking a target frame with the confidence coefficient larger than a first preset threshold value in each uncalibrated picture as a pseudo tag;
the current-wheel semi-supervised ship detection model training module 205 is used for deleting the calibrated easy-to-detect training set and the uncalibrated easy-to-detect training set; extracting two pictures from the calibration easy-to-check training set and the calibration difficult-to-check training set according to preset selection conditions, and extracting two pictures from the unmatched training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set as current iteration training sets; model training is carried out on the current iteration training set until all the deleted training sets are trained, and a current wheel semi-supervised ship detection model, a calibrated wheel loss value and an uncalibrated wheel loss value are obtained; calculating to obtain a wheel total loss value according to the calibrated wheel loss value and the uncalibrated wheel loss value;
The repeated training module 206 is configured to repeat the current wheel semi-supervised ship detection model training module until the wheel total loss value fluctuates within a first preset range, and stop training to obtain a target semi-supervised ship detection model;
the detection module 207 is configured to input a picture to be detected into the target semi-supervised ship detection model for detection, so as to obtain a ship position.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A semi-supervised marine vessel inspection method based on countermeasure learning, comprising:
s1, adding a similarity calculation module behind a main network output layer of an original ship detection model and in front of a detection head to obtain an updated ship detection model;
s2, carrying out data preprocessing on the original training set to obtain an updated training set; inputting the updated training set into the updated ship detection model for multi-round model training to obtain a target ship detection model;
S3, inputting the calibration training set into the target ship detection model for prediction to obtain a loss value of each calibration picture and a total feature sequence of each calibration picture; sorting and dividing the calibration training sets according to the loss values of all the calibration pictures to obtain a calibration easy-to-check training set, a calibration easier-to-check training set and a calibration difficult-to-check training set;
s4, inputting the uncalibrated training set into the target ship detection model for prediction to obtain the total feature sequence of each uncalibrated picture and the confidence coefficient of each target frame in each uncalibrated picture; calculating to obtain the nearest distance between each uncalibrated picture and the calibrated training set according to the total feature sequences of all uncalibrated pictures and the total feature sequences of all calibrated pictures; dividing the uncalibrated training set into an unmatched training set and a matched training set according to all the nearest distances; dividing the matched training set into an uncalibrated easy-to-check training set, an uncalibrated easier-to-check training set and an uncalibrated difficult-to-check training set; marking a target frame with the confidence coefficient larger than a preset threshold value in each uncalibrated picture as a pseudo tag;
s5, deleting the calibrated easy-to-check training set and the uncalibrated easy-to-check training set; extracting two pictures from the calibration easy-to-check training set and the calibration difficult-to-check training set according to preset selection conditions, and extracting two pictures from the unmatched training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set as current iteration training sets; inputting the current iteration training set into an initial semi-supervised ship detection model for training to obtain a calibrated current iteration loss value and an uncalibrated current iteration loss value; carrying out back propagation on the initial semi-supervised ship detection model according to the calibrated current iteration loss value and the uncalibrated current iteration loss value to obtain a current iteration semi-supervised ship detection model;
S6, repeating the step S5 until all the deleted training sets are trained and multi-round model training is carried out, and a target semi-supervised ship detection model is obtained;
and S7, inputting the picture to be detected into the target semi-supervised ship detection model for detection, and obtaining the ship position.
2. The method of claim 1, wherein the preprocessing the original training set to obtain the updated training set comprises:
s21, carrying out K different data processing on the original training set to obtain an updated training set; the number of pictures of the original training set is at least two; the number of pictures of the updated training set is K times that of the original training set.
3. The method of claim 2, wherein inputting the updated training set into the updated ship detection model for multiple rounds of model training, the obtaining the target ship detection model comprises:
s22, inputting an updated training set into a main network output layer of the updated ship detection model to obtain a main network feature map of each picture;
s23, coding the backbone network feature map of each picture through M codebooks to obtain a total feature sequence of each picture;
S24, calculating a similarity loss value of the current picture according to the total feature sequence of the current picture, the total feature sequence which is the same as the current picture in category and the total feature sequence which is different from the current picture in category;
s25, inputting the total characteristic sequence of each picture into a detection head of the updated ship detection model to obtain a classification loss value and a regression loss value of each picture;
s26, calculating a total loss value of the updated training set according to the similarity loss value, the classification loss value and the regression loss value of each picture;
s27, repeating the steps S22-S26 until the total loss value of the updated training set fluctuates within a first preset range, and stopping training to obtain the target ship detection model.
4. A method according to claim 3, wherein S23 comprises:
s231, dividing the backbone network feature map of the current picture into k groups, and coding the backbone network feature map of the current group through the current codebook to obtain the correlation between the backbone network feature map of the current group and all codewords of the current codebook; wherein each codebook comprises k codewords;
s232, calculating a first characteristic value of the current group backbone network characteristic diagram and the current codebook according to the correlation and the current codebook;
S233, generating a first feature sequence according to the feature graphs of all groups of backbone networks and the first feature value of the current codebook;
s234, generating a total feature sequence of the current picture according to the backbone network feature map of the current picture and the first feature sequences of all codebooks.
5. The method according to claim 1, wherein the preset selection conditions include:
selecting a first condition: extracting a picture from the calibration easy-to-check training set, extracting a picture from the calibration difficult-to-check training set, extracting a picture from the unmatched training set, and extracting a picture from the unmarked easy-to-check training set as the current iteration training set;
or selecting a second condition: extracting a picture from the calibration easy-to-check training set, extracting a picture from the calibration difficult-to-check training set, extracting a picture from the un-calibration difficult-to-check training set, and extracting a picture from the un-calibration easy-to-check training set as a current iteration training set;
or selecting a condition III: two pictures are extracted from the calibration easy-to-check training set, one picture is extracted from the unmatched training set, and one picture is extracted from the unmarked difficult-to-check training set to serve as a current iteration training set.
6. The method of claim 4, wherein the step of determining the position of the first electrode is performed,
The correlation between the current set of backbone network feature maps and all codewords of the current codebook is calculated according to the following formula:
wherein,for the current set of backbone network feature map, +.>Is the +.>Code word->For the number of codewords of the current codebook, +.>Is the +.>Code word->,/>For the current group backbone network feature map and the current codebook +.>Correlation of individual codewords,/->Is a normalized exponential function;
the first eigenvalues of the current set of backbone network eigenvectors and the current codebook are calculated according to the following formula:
wherein,is the +.>Code word->For the number of codewords of the current codebook, +.>For the current group backbone network feature map and the current codebook +.>Correlation of individual codewords,/->The method comprises the steps of obtaining a first characteristic value of a current set of backbone network characteristic diagrams and a current codebook;
the total feature sequence of the current picture is as follows:
wherein,for the first feature sequence of the backbone network feature map and the first codebook of the current picture,/->A backbone network feature map of the current picture and a first feature sequence of an Mth codebook;is the total feature sequence of the current picture.
7. The method according to claim 5, wherein:
When the first or second condition is selected for model training, the current iteration loss value is calibrated according to the following formula:
wherein,in order to calibrate the loss value of the picture which is easy to detect and is obtained through training of the initial semi-supervised ship detection model,for the loss value of the calibrated difficult-to-detect picture obtained by training the initial semi-supervised ship detection model,/I>For the loss value of the calibrated easily-detected picture predicted by the target ship detection model, the +.>For the loss value of the calibrated difficult-to-detect picture predicted by the target ship detection model, the +.>To calibrate the current iteration loss value.
8. The method according to claim 7, wherein:
when the third condition is selected for model training, the current iteration loss value is calibrated and calculated according to the following formula:
wherein,to calibrate the current iteration loss value +.>The loss value sum of all the calibrated easy-to-check pictures is obtained through training of the initial semi-supervised ship detection model.
9. The method according to claim 8, wherein:
when the first selection condition, the second selection condition or the third selection condition is used for model training, the updated loss value of the current pseudo tag of the current uncalibrated picture is calculated according to the following formula:
Wherein,as a logistic function,/->For the nearest distance between the current uncalibrated picture and the calibrated training set, < >>Confidence level of current pseudo tag of current uncalibrated picture obtained through prediction of target ship detection model,/for the target ship detection model>For the loss value of the current pseudo tag of the current uncalibrated picture obtained through training of the initial semi-supervised ship detection model,/I>Updating the loss value of the current pseudo tag for the current uncalibrated picture;
and summing the updated loss values of all pseudo tags of all uncalibrated pictures in the current iteration training set to obtain an uncalibrated current iteration loss value.
10. A semi-supervised marine vessel inspection system based on countermeasure learning, comprising:
the model updating module is used for adding a similarity calculation module behind a main network output layer of the original ship detection model and in front of a detection head to obtain an updated ship detection model;
the target ship detection model training module is used for preprocessing data of the original training set to obtain an updated training set; inputting the updated training set into the updated ship detection model for multi-round model training to obtain a target ship detection model;
the calibration training set dividing module is used for inputting a calibration training set into the target ship detection model for prediction to obtain a loss value of each calibration picture and a total characteristic sequence of each calibration picture; sorting and dividing the calibration training sets according to the loss values of all the calibration pictures to obtain a calibration easy-to-check training set, a calibration easier-to-check training set and a calibration difficult-to-check training set;
The uncalibrated training set dividing module is used for inputting an uncalibrated training set into the target ship detection model for prediction to obtain the total feature sequence of each uncalibrated picture and the confidence coefficient of each target frame in each uncalibrated picture; calculating to obtain the nearest distance between each uncalibrated picture and the calibrated training set according to the total feature sequences of all uncalibrated pictures and the total feature sequences of all calibrated pictures; dividing the uncalibrated training set into an unmatched training set and a matched training set according to all the nearest distances; dividing the matched training set into an uncalibrated easy-to-check training set, an uncalibrated easier-to-check training set and an uncalibrated difficult-to-check training set; marking a target frame with the confidence coefficient larger than a first preset threshold value in each uncalibrated picture as a pseudo tag;
the current wheel semi-supervised ship detection model training module is used for deleting the calibrated easy-to-detect training set and the uncalibrated easy-to-detect training set; extracting two pictures from the calibration easy-to-check training set and the calibration difficult-to-check training set according to preset selection conditions, and extracting two pictures from the unmatched training set, the unmarked easy-to-check training set and the unmarked difficult-to-check training set as current iteration training sets; model training is carried out on the current iteration training set until all the deleted training sets are trained, and a current wheel semi-supervised ship detection model, a calibrated wheel loss value and an uncalibrated wheel loss value are obtained; calculating to obtain a wheel total loss value according to the calibrated wheel loss value and the uncalibrated wheel loss value;
The repeated training module is used for repeating the training module of the current wheel semi-supervised ship detection model until the total wheel loss value fluctuates within a first preset range, and stopping training to obtain a target semi-supervised ship detection model;
the detection module is used for inputting the picture to be detected into the target semi-supervised ship detection model for detection, and obtaining the ship position.
CN202311407565.7A 2023-10-27 2023-10-27 Anti-learning-based semi-supervised ship detection method and system Active CN117152587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311407565.7A CN117152587B (en) 2023-10-27 2023-10-27 Anti-learning-based semi-supervised ship detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311407565.7A CN117152587B (en) 2023-10-27 2023-10-27 Anti-learning-based semi-supervised ship detection method and system

Publications (2)

Publication Number Publication Date
CN117152587A true CN117152587A (en) 2023-12-01
CN117152587B CN117152587B (en) 2024-01-26

Family

ID=88910419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311407565.7A Active CN117152587B (en) 2023-10-27 2023-10-27 Anti-learning-based semi-supervised ship detection method and system

Country Status (1)

Country Link
CN (1) CN117152587B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351440A (en) * 2023-12-06 2024-01-05 浙江华是科技股份有限公司 Semi-supervised ship detection method and system based on open text detection
CN117789041A (en) * 2024-02-28 2024-03-29 浙江华是科技股份有限公司 Ship defogging method and system based on atmospheric scattering priori diffusion model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949750A (en) * 2021-03-25 2021-06-11 清华大学深圳国际研究生院 Image classification method and computer readable storage medium
CN114186615A (en) * 2021-11-22 2022-03-15 浙江华是科技股份有限公司 Semi-supervised online training method and device for ship detection and computer storage medium
CN114743074A (en) * 2022-06-13 2022-07-12 浙江华是科技股份有限公司 Ship detection model training method and system based on strong and weak countermeasure training
CN114998691A (en) * 2022-06-24 2022-09-02 浙江华是科技股份有限公司 Semi-supervised ship classification model training method and device
CN115439715A (en) * 2022-09-12 2022-12-06 南京理工大学 Semi-supervised few-sample image classification learning method and system based on anti-label learning
CN116168256A (en) * 2023-04-19 2023-05-26 浙江华是科技股份有限公司 Ship detection method, system and computer storage medium
US20230196117A1 (en) * 2020-08-31 2023-06-22 Huawei Technologies Co., Ltd. Training method for semi-supervised learning model, image processing method, and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230196117A1 (en) * 2020-08-31 2023-06-22 Huawei Technologies Co., Ltd. Training method for semi-supervised learning model, image processing method, and device
CN112949750A (en) * 2021-03-25 2021-06-11 清华大学深圳国际研究生院 Image classification method and computer readable storage medium
CN114186615A (en) * 2021-11-22 2022-03-15 浙江华是科技股份有限公司 Semi-supervised online training method and device for ship detection and computer storage medium
CN114743074A (en) * 2022-06-13 2022-07-12 浙江华是科技股份有限公司 Ship detection model training method and system based on strong and weak countermeasure training
CN114998691A (en) * 2022-06-24 2022-09-02 浙江华是科技股份有限公司 Semi-supervised ship classification model training method and device
CN115439715A (en) * 2022-09-12 2022-12-06 南京理工大学 Semi-supervised few-sample image classification learning method and system based on anti-label learning
CN116168256A (en) * 2023-04-19 2023-05-26 浙江华是科技股份有限公司 Ship detection method, system and computer storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YAMAMOTO, T.ET.AL: "Ship detection leveraging deep neural networks in WorldView-2 images", 《 IMAGE AND SIGNAL PROCESSING FOR REMOTE SENSING XXIII》, pages 1 - 9 *
潘崇煜;黄健;郝建国;龚建兴;张中杰;: "融合零样本学习和小样本学习的弱监督学习方法综述", 系统工程与电子技术, no. 10, pages 104 - 114 *
王炳德;杨柳涛;: "基于YOLOv3的船舶目标检测算法", 中国航海, no. 01, pages 70 - 75 *
陈国炜;刘磊;郭嘉逸;潘宗序;胡文龙;: "基于生成对抗网络的半监督遥感图像飞机检测", 中国科学院大学学报, no. 04, pages 110 - 117 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351440A (en) * 2023-12-06 2024-01-05 浙江华是科技股份有限公司 Semi-supervised ship detection method and system based on open text detection
CN117351440B (en) * 2023-12-06 2024-02-20 浙江华是科技股份有限公司 Semi-supervised ship detection method and system based on open text detection
CN117789041A (en) * 2024-02-28 2024-03-29 浙江华是科技股份有限公司 Ship defogging method and system based on atmospheric scattering priori diffusion model
CN117789041B (en) * 2024-02-28 2024-05-10 浙江华是科技股份有限公司 Ship defogging method and system based on atmospheric scattering priori diffusion model

Also Published As

Publication number Publication date
CN117152587B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN117152587B (en) Anti-learning-based semi-supervised ship detection method and system
CN106874868B (en) Face detection method and system based on three-level convolutional neural network
CN110569696A (en) Neural network system, method and apparatus for vehicle component identification
CN112232241B (en) Pedestrian re-identification method and device, electronic equipment and readable storage medium
CN114492574A (en) Pseudo label loss unsupervised countermeasure domain adaptive picture classification method based on Gaussian uniform mixing model
CN112765358A (en) Taxpayer industry classification method based on noise label learning
CN111126134B (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
US11816841B2 (en) Method and system for graph-based panoptic segmentation
CN109766469A (en) A kind of image search method based on the study optimization of depth Hash
CN114863091A (en) Target detection training method based on pseudo label
CN115292532A (en) Remote sensing image domain adaptive retrieval method based on pseudo label consistency learning
CN111242134A (en) Remote sensing image ground object segmentation method based on feature adaptive learning
CN114549909A (en) Pseudo label remote sensing image scene classification method based on self-adaptive threshold
WO2020088338A1 (en) Method and apparatus for building recognition model
CN116541704A (en) Bias mark learning method for multi-type noise separation
CN116363469A (en) Method, device and system for detecting infrared target with few samples
US11669565B2 (en) Method and apparatus for tracking object
CN113592045B (en) Model adaptive text recognition method and system from printed form to handwritten form
CN115730656A (en) Out-of-distribution sample detection method using mixed unmarked data
CN115424275A (en) Fishing boat brand identification method and system based on deep learning technology
CN114693997A (en) Image description generation method, device, equipment and medium based on transfer learning
CN114663751A (en) Power transmission line defect identification method and system based on incremental learning technology
CN113379037A (en) Multi-label learning method based on supplementary label collaborative training
CN112215272A (en) Bezier curve-based image classification neural network attack method
CN112085040A (en) Object tag determination method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant