CN114186615A - Semi-supervised online training method and device for ship detection and computer storage medium - Google Patents


Info

Publication number
CN114186615A
CN114186615A (application CN202111383472.6A)
Authority
CN
China
Prior art keywords
data
target
loss
model
pseudo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111383472.6A
Other languages
Chinese (zh)
Other versions
CN114186615B (en)
Inventor
李林超
俞永方
叶建标
朱佳豪
张鹏
陈江海
温志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Whyis Technology Co ltd
Original Assignee
Zhejiang Whyis Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Whyis Technology Co ltd
Priority to CN202111383472.6A
Publication of CN114186615A
Application granted
Publication of CN114186615B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/047: Probabilistic or stochastic networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiment of the invention discloses a semi-supervised online training method for ship detection, which comprises the following steps: acquiring an initial ship detection model, and acquiring a calibration data set and a data set to be detected; inputting a data set to be detected into an initial ship detection model for detection to obtain a pseudo label data set, and performing data preprocessing on the pseudo label data set to obtain a training sample set; the ship detection network loads an initial ship detection model, initializes the weight of the model, performs model training by adopting a training sample set, forwards propagates to extract target characteristics, calculates loss of the model, backwards propagates to update network parameters until the model converges, and obtains the latest ship detection model. The invention carries out data preprocessing on the pseudo label data, increases the sample data quantity, modifies the loss function calculation method, enables the ship detection network to learn on line, continuously optimizes the weight of the ship detection network, obtains an optimal model, realizes the on-line semi-supervised training of the model, and improves the robustness and the detection rate of the model.

Description

Semi-supervised online training method and device for ship detection and computer storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a ship detection semi-supervised online training method and device and a computer storage medium.
Background
With the development of water traffic, the driving safety of ships has received increasing attention, and the identification and detection of ships usually rely on a ship detection model. In the prior art, the ship detection model is typically trained by a fully supervised learning method: material data are collected manually, the data are labeled, and the network is trained on the labeled data. However, ship types are diverse and navigation environments vary widely, so the collected material cannot cover all the information; moreover, some data cannot be captured for reasons such as the confidentiality of monitoring points, and the data of all monitoring points cannot be traversed. As a result, the ship detection model is limited and unstable.
In addition, false detections and missed detections exist in the pseudo label data set, and how to handle the negative influence of the pseudo label data remains a major problem in the prior art. If the method for processing the pseudo label data set is unsuitable, the ship detection algorithm may learn noise instead of effective feature information from the pseudo label data and guide the network to learn in the wrong direction, so that the ship detection model produces more false detections or missed detections at inference time.
In summary, the prior art has the following disadvantages: first, it relies on a fully supervised training method with manual calibration, and the training set is limited; second, the ship detection model cannot optimize its weights as the environment and targets of a monitoring point change, and therefore lacks adaptability; third, without manual processing, newly added data from a monitoring point cannot feed back into the ship detection model, which cannot learn in real time and thus lacks learning ability. This causes the following problems: first, because of the limitations of the data set, the accuracy and false detection rate of the ship detection model differ greatly between monitoring points; monitoring points similar to the training set have a high detection rate and a low false detection rate, while monitoring points that differ greatly have low accuracy and a high false detection rate; second, because ships travel irregularly, each monitoring point is likely to see ship types that have never appeared there before, but the detection rate of the ship detection model for new ship types is low; third, the actual data set of a monitoring point cannot be learned, so the ship detection model cannot fit the data set information well.
In view of the above-mentioned problems encountered in ship detection, no effective technical solution exists in the prior art.
Disclosure of Invention
In order to solve the above problems, the present invention provides a semi-supervised online training method for ship detection, comprising: acquiring an initial ship detection model, and acquiring a calibration data set and a data set to be detected; inputting the data set to be detected into the initial ship detection model for detection to obtain a pseudo label data set, and performing data preprocessing on the pseudo label data set to obtain a training sample set; wherein, the data preprocessing of the pseudo label data set to obtain a training sample set comprises: when target pseudo tag data in a pseudo tag data set are non-target pseudo tag data, carrying out picture integration on the target pseudo tag data and target object data extracted from the calibration data set, or carrying out picture mixing on the target pseudo tag data and data in the calibration data set; when target pseudo tag data in a pseudo tag data set is pseudo tag data with a target, carrying out picture integration on target object data in the target pseudo tag data and data in a calibration data set, or carrying out data connection on the target object data in the target pseudo tag data and the data in the calibration data set; and loading an initial ship detection model by the ship detection network, initializing model weights, performing model training by adopting the training sample set, carrying out forward propagation to extract target characteristics, calculating loss of the model, and carrying out backward propagation to update network parameters until the model converges to obtain the latest ship detection model.
Further optionally, the calculating the loss of the model comprises: when target sample data is obtained through data connection, calculating the loss of the target sample data according to the confidence of the target sample data; when target sample data is obtained in other modes, calculating the loss of the target sample data according to a loss function; and calculating the sum of the losses of the sample data in the training sample set that participate in the model training.
Further optionally, the calculating loss of the target sample according to the confidence of the target sample data includes: when the confidence coefficient of the target sample data is greater than or equal to a first preset confidence coefficient threshold value, calculating a first loss weight, the classification loss and the regression loss of the target sample data, and multiplying the sum of the classification loss and the regression loss of the target sample data by the first loss weight to obtain the loss of the target sample data; when the confidence coefficient of the target sample data is smaller than a first preset confidence coefficient threshold and is larger than or equal to a second confidence coefficient threshold, calculating a second loss weight and the regression loss of the target sample data, and multiplying the second loss weight and the regression loss of the target sample data to obtain the loss of the target sample data; when the confidence of the target sample data is smaller than the second confidence threshold, loss is not calculated.
Further optionally, the image integrating the target pseudo tag data and the target object data extracted from the calibration data set includes: taking target object data in the data of the calibration data set as a center, and matting the data within a preset range to obtain matting data; performing target shaking on target object data in the cutout data to obtain first data to be pasted; and performing Poisson fusion on the first to-be-pasted data and the target pseudo label data.
Further optionally, the image integrating the target object data in the target pseudo tag data and the data in the calibration data set includes: data in a preset range is subjected to matting by taking the target object data of the target pseudo label data as a center to obtain second data to be pasted; performing Poisson fusion on the second data to be pasted and the data in the calibration data set; the target pseudo tag data is pseudo tag data with a target, and the confidence coefficient of the pseudo tag data is greater than a third preset confidence coefficient threshold value.
On the other hand, the embodiment of the invention also provides a ship detection semi-supervised online training device, which comprises: the data acquisition module is used for acquiring an initial ship detection model and acquiring a calibration data set and a data set to be detected; a training sample set generation module, configured to input the to-be-detected data set to the initial ship detection model for detection, so as to obtain a pseudo tag data set, and perform data preprocessing on the pseudo tag data set, so as to obtain a training sample set; wherein the training sample set generating module comprises: the first preprocessing submodule is used for carrying out picture integration on target pseudo label data and target object data extracted from the calibration data set or carrying out picture mixing on the target pseudo label data and the data in the calibration data set when the target pseudo label data in the pseudo label data set is non-target pseudo label data; the second preprocessing submodule is used for carrying out picture integration on target object data in the target pseudo tag data and data in the calibration data set or carrying out data connection on the target object data in the target pseudo tag data and the data in the calibration data set when the target pseudo tag data in the pseudo tag data set is the pseudo tag data with a target; and the latest ship detection model generation module is used for loading an initial ship detection model by the ship detection network, initializing the model weight, performing model training by adopting the training sample set, extracting target characteristics by forward propagation, calculating the loss of the model, and updating network parameters by backward propagation until the model converges to obtain the latest ship detection model.
Further optionally, the latest ship detection model generation module includes: the first data loss calculation submodule is used for calculating loss of the target sample data according to the confidence coefficient of the target sample data when the target sample data is obtained by data connection; the second data loss calculation submodule is used for calculating loss of the target sample data according to a loss function when the target sample data is obtained in other modes; and the model loss calculation submodule is used for calculating the loss sum of the sample data in the training sample set and participating in the model training.
Further optionally, the first data loss calculation sub-module includes: the first loss calculation unit is used for calculating a first loss weight, the classification loss and the regression loss of the target sample data when the confidence coefficient of the target sample data is greater than or equal to a first preset confidence coefficient threshold value, and multiplying the sum of the classification loss and the regression loss of the target sample data by the first loss weight to obtain the loss of the target sample data; the second loss calculation unit is used for calculating a second loss weight and the regression loss of the target sample data when the confidence coefficient of the target sample data is smaller than the first preset confidence coefficient threshold and is larger than or equal to a second confidence coefficient threshold, and multiplying the second loss weight and the regression loss of the target sample data to obtain the loss of the target sample data; and the third loss calculating unit is used for not calculating loss when the confidence coefficient of the target sample data is smaller than the second confidence coefficient threshold value.
Further optionally, the first preprocessing sub-module includes: the matting data generating unit is used for taking the target object data in the data of the calibration data set as the center and matting the data within a preset range to obtain matting data; the first to-be-pasted data generating unit is used for carrying out target shaking on the target object data in the matting data to obtain first data to be pasted; and the first picture integration unit is used for performing Poisson fusion on the first to-be-pasted data and the target pseudo label data.
Further optionally, the second preprocessing sub-module includes: the second data generation unit to be pasted is used for matting data in a preset range by taking the target object data of the target pseudo label data as a center to obtain second data to be pasted; the second image integration unit is used for performing Poisson fusion on the second data to be pasted and the data in the calibration data set; the target pseudo tag data is pseudo tag data with a target, and the confidence coefficient of the pseudo tag data is greater than a third preset confidence coefficient threshold value.
In another aspect, an embodiment of the present invention further provides a computer storage medium, on which a computer program is stored, where the program is executed by a processor to perform the above-mentioned ship detection semi-supervised online training method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a semi-supervised online training method for ship detection provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a model loss calculation method according to an embodiment of the present invention;
fig. 3 is a flowchart of a sample data loss calculation method generated after data connection according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for integrating non-target pseudo tag data with data pictures in a calibration data set according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for integrating targeted pseudo tag data with data pictures in a calibration data set according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a semi-supervised online training device for ship detection provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a latest ship detection model generation module provided in an embodiment of the present invention;
FIG. 8 is a block diagram of a first data loss calculation submodule provided in an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a first preprocessing submodule provided in an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a second preprocessing submodule provided in the embodiment of the present invention.
Reference numerals: 100 - data acquisition module; 200 - training sample set generation module; 2001 - first preprocessing submodule; 20011 - matting data generation unit; 20012 - first to-be-pasted data generation unit; 20013 - first picture integration unit; 2002 - second preprocessing submodule; 20021 - second to-be-pasted data generation unit; 20022 - second picture integration unit; 300 - latest ship detection model generation module; 3001 - first data loss calculation submodule; 30011 - first loss calculation unit; 30012 - second loss calculation unit; 30013 - third loss calculation unit; 3002 - second data loss calculation submodule; 3003 - model loss calculation submodule.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem that the accuracy of model detection is not high due to the fact that a data set cannot be comprehensively acquired, an embodiment of the invention provides a ship detection semi-supervised online training method, and fig. 1 is a flow chart of the ship detection semi-supervised online training method provided by the embodiment of the invention, and as shown in fig. 1, the method includes:
S101, acquiring an initial ship detection model, and acquiring a calibration data set and a data set to be detected;
in the data acquisition stage, the initial ship detection model is used as the basis of model adjustment, and a calibration data set and a data set to be detected are acquired simultaneously. The data in the calibration data set is a manually marked picture with target information, and the calibration data set is a training sample set of the initial ship detection model, namely the initial ship identification model is obtained by performing model training on the calibration data set. The data set to be detected is a set of pictures which are captured when the ship passes through the monitoring point and are not subjected to model prediction.
S102, inputting a to-be-detected data set into an initial ship detection model for detection to obtain a pseudo label data set, and performing data preprocessing on the pseudo label data set to obtain a training sample set;
and predicting the label of the data set to be detected by using the initial ship identification model to obtain a pseudo label data set.
Some data in the pseudo tag data set have target object information, and some data do not. The pseudo tag data with a target are saved to Object_File, and the pseudo tag data without a target are saved to No_Obj_File.
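As an illustration of this splitting step, the following sketch runs an assumed detector interface over the unlabeled images and writes each result into Object_File or No_Obj_File; the detector API, score format, and JSON layout are assumptions, not taken from the patent.

```python
import json
from pathlib import Path

def build_pseudo_label_set(detector, image_paths, obj_dir="Object_File", no_obj_dir="No_Obj_File"):
    """Run the initial ship detector on unlabeled images and split the pseudo-label
    results into data with targets and data without targets (hypothetical API)."""
    Path(obj_dir).mkdir(exist_ok=True)
    Path(no_obj_dir).mkdir(exist_ok=True)
    for img_path in image_paths:
        detections = detector.predict(img_path)   # assumed: list of {"bbox", "label", "conf"}
        record = {"image": str(img_path), "targets": detections}
        out_dir = obj_dir if detections else no_obj_dir
        with open(Path(out_dir) / (Path(img_path).stem + ".json"), "w") as f:
            json.dump(record, f)
```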
The method for preprocessing the data of the pseudo label data set to obtain the training sample set comprises the following steps:
S1021, when the target pseudo tag data in the pseudo tag data set are non-target pseudo tag data, carrying out picture integration on the target pseudo tag data and target object data extracted from the calibration data set, or carrying out picture mixing on the target pseudo tag data and the data in the calibration data set;
and respectively carrying out data preprocessing on each pseudo label data in the pseudo label data set. When the target pseudo tag data which is currently preprocessed is judged to be non-target pseudo tag data, preprocessing the target pseudo tag data in any one of the following two modes:
firstly, a picture integration mode: because the calibration data set is a picture data set subjected to manual marking, the target object information is accurate. The target pseudo label data is non-target pseudo label data, target information does not exist in the picture, and only background information exists, so that target object information in the calibration data set and the background information in the target pseudo label data can be integrated, and the data in the calibration data set and the target pseudo label data are combined into a target picture. The target pseudo tag data may randomly select one calibration data as an integration object, and may also select a plurality of calibration data to be respectively integrated, which is not limited in this embodiment.
Secondly, the picture mixing mode: the target pseudo label data and the calibration data are fused by the mix_up method. In this way, the network effectively learns the background information of the pseudo label data, which improves the robustness of the model and further improves its detection accuracy.
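A minimal sketch of the mix_up picture mixing described above, assuming both pictures have been loaded as float arrays of the same size; the Beta-distribution parameter is an illustrative assumption.

```python
import numpy as np

def mix_up(pseudo_img, calib_img, alpha=1.5):
    """Blend a non-target pseudo-label picture with a calibration picture (mix_up).
    Both inputs are HxWx3 float arrays of the same size."""
    lam = np.random.beta(alpha, alpha)           # mixing coefficient in (0, 1)
    mixed = lam * pseudo_img + (1.0 - lam) * calib_img
    return mixed, lam                            # lam is also used to weight the labels
```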
S1022, when the target pseudo tag data in the pseudo tag data set is the pseudo tag data with the target, performing picture integration on the target object data in the target pseudo tag data and the data in the calibration data set, or performing data connection on the target object data in the target pseudo tag data and the data in the calibration data set;
when the target pseudo tag data which is currently preprocessed is judged to be pseudo tag data with a target, preprocessing the target pseudo tag data in any one of the following two modes:
firstly, a picture integration mode: the target pseudo tag data is pseudo tag data having a target, and the picture thereof has target information, but it cannot be determined whether the correspondence between the tag thereof and the target information is accurate. Therefore, the target object data in the target pseudo tag data can be extracted and integrated with the background information in the calibration data set, and the target pseudo tag data and the correspondingly integrated calibration data are combined into a new picture with a target, wherein the picture contains the pseudo tag target. The target pseudo tag data may randomly select one calibration data as an integration object, and may also select a plurality of calibration data to be respectively integrated, which is not limited in this embodiment.
Secondly, the data connection mode: the target pseudo label data and the data in the calibration data set are connected, and the targets in the target pseudo label data are marked. The connected data is a picture formed by joining the target pseudo label picture and a calibration picture; the edges of the two pictures may partially overlap, provided that no target object is covered. In this way, the model can learn the target object information in the pseudo tag data, which increases the amount of model training data and improves the ship detection rate.
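The data connection step could look roughly as follows; horizontal joining and the overlap width are illustrative assumptions, since the text only requires that the two pictures be connected with a possible partial edge overlap that does not cover any target.

```python
import numpy as np

def connect_pictures(pseudo_img, calib_img, overlap=16):
    """Connect a pseudo-label picture and a calibration picture into one sample.
    The two pictures overlap slightly at the joining edge; callers must ensure
    no target box falls inside the covered strip."""
    h = max(pseudo_img.shape[0], calib_img.shape[0])
    def pad(img):
        out = np.zeros((h, img.shape[1], 3), dtype=img.dtype)
        out[:img.shape[0], :img.shape[1]] = img
        return out
    left, right = pad(pseudo_img), pad(calib_img)
    joined = np.concatenate([left[:, :-overlap], right], axis=1)
    x_offset = left.shape[1] - overlap           # add this to the calibration boxes' x coordinates
    return joined, x_offset
```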
S103, loading an initial ship detection model by the ship detection network, initializing model weights, performing model training by adopting a training sample set, carrying out forward propagation to extract target characteristics, calculating loss of the model, and carrying out backward propagation to update network parameters until the model converges to obtain the latest ship detection model.
The ship detection network loads an initial ship detection model, initializes the weight of the model, namely assigns parameters in the initial ship detection model to the ship detection network, wherein the ship detection network is a model training network.
Then, the data in the preprocessed training sample set are input into the ship detection network for model training, and the features of the target object, namely the inference result, are obtained after training. The loss of the sample training set is calculated by the loss calculation function to obtain the current loss, which represents the difference between the inference result and the target object information in the pseudo tag data set. Back propagation information is obtained by taking the derivative of the current loss with respect to the parameters in the network, the parameters of the ship detection network are adjusted according to the back propagation information, and training is repeated until the loss value becomes stable and the model reaches its optimal state.
The overall process of the ship detection semi-supervised online training method is as follows: assigning parameters in a ship detection network to be trained by using parameters in an initial ship detection model which is trained previously;
Secondly, the manually calibrated calibration data set and the pseudo label data set inferred by the existing model are preprocessed by the above method, and the preprocessed training sample set is input into the ship detection network. The ship detection network obtains the features of the ship target information through network calculation from the picture information in the input training sample set, then compares the obtained ship target information features with the ship target information that was manually calibrated or inferred by the previous model, obtains the difference between the inference result and the ship detection targets of the pseudo label data set through the loss calculation formula, performs network back propagation (taking derivatives with respect to the parameters in the network) using the obtained difference to obtain back propagation information, and adjusts the network parameters according to the back propagation information;
and thirdly, the step II is circulated until the difference between the inference target information and the current information of the training set does not change any more, and a latest ship detection model is obtained.
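A compact PyTorch-style sketch of this online training loop under stated assumptions (placeholder network, data loader and loss routine; convergence is approximated by a small change in the epoch loss):

```python
import torch

def online_train(ship_net, initial_state_dict, train_loader, compute_loss,
                 lr=1e-3, max_epochs=50, tol=1e-4):
    """Load the initial ship detection model weights, then forward, compute loss,
    and back-propagate until the loss no longer changes noticeably."""
    ship_net.load_state_dict(initial_state_dict)           # initialize weights from the initial model
    optimizer = torch.optim.SGD(ship_net.parameters(), lr=lr, momentum=0.9)
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for images, targets in train_loader:
            preds = ship_net(images)                        # forward propagation: extract target features
            loss = compute_loss(preds, targets)             # loss of the current batch
            optimizer.zero_grad()
            loss.backward()                                 # back propagation
            optimizer.step()                                # update network parameters
            epoch_loss += loss.item()
        if abs(prev_loss - epoch_loss) < tol:               # loss has stabilized: treat as converged
            break
        prev_loss = epoch_loss
    return ship_net
```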
As an alternative implementation, fig. 2 is a flowchart of a model loss calculation method according to an embodiment of the present invention, and as shown in fig. 2, calculating a model loss includes:
S1031, when the target sample data is obtained by data connection, calculating loss of the target sample data according to the confidence coefficient of the target sample data;
Because the data obtained by the data connection method contain pseudo tag data, the confidence of some data is low; if the same loss calculation method as for the calibration data set were used, the model could learn incorrectly, which would affect its detection accuracy. Therefore, this embodiment adjusts the loss calculation method for each pseudo tag datum according to its confidence, which increases the model's attention to correctly detected targets in the pseudo label data set and reduces its attention to falsely detected targets.
S1032, when the target sample data is obtained in other modes, calculating loss of the target sample data according to the loss function;
When the target sample data is obtained in the other three modes, that is, in a non-data-connection mode, the loss of the target sample data is calculated according to the loss function. In this embodiment, the classification loss is calculated by softmax or sigmoid, and the regression loss is calculated by smooth L1 (l1_smooth) or another regression loss function.
And S1033, calculating the loss sum of the sample data in the training sample set and participating in the model training.
And calculating the loss of all sample data in the training sample set as the loss of the model. The calculation principle is as follows:
loss_sum = loss(pseudo label targets) + loss(non-pseudo label targets)
Further optionally, calculating the loss value of the model includes: when the target sample training set is obtained by data connection, calculating the loss value of the targets from the manually calibrated data set with the network's own loss function, calculating the loss value of the targets from the pseudo label data set according to the confidence-based loss function of the target sample data, and then adding the loss value of the manually calibrated data set and the loss value of the pseudo label data set to obtain the loss value of the connected data set; and when the target sample data is obtained in other modes, calculating the loss of the target sample data according to the loss function.
As an optional implementation manner, fig. 3 is a flowchart of a sample data loss calculation method generated after data connection according to an embodiment of the present invention, and as shown in fig. 3, a loss of a target sample is calculated according to a confidence of the target sample data. The method comprises the following steps:
S10311, when the confidence coefficient of the target sample data is greater than or equal to a first preset confidence coefficient threshold value, calculating a first loss weight, the classification loss and the regression loss of the target sample data, and multiplying the sum of the classification loss and the regression loss of the target sample data by the first loss weight to obtain the loss of the target sample data;
The prediction box is matched with the pseudo label target according to the target position, so as to determine whether the predicted target corresponds to a pseudo label target.
In this embodiment, the first preset confidence threshold is 0.7, and when the confidence (conf) of the target sample data is greater than or equal to 0.7, the classification loss and the regression loss of the target sample data are calculated.
That is, when conf ≥ 0.7:
First loss weight: loss_weight(conf≥0.7) (the formula is provided as an image in the original and is not reproduced here).
Calculate the loss value loss(conf≥0.7) of the target sample data whose confidence is greater than or equal to 0.7:
loss(conf≥0.7) = loss_weight(conf≥0.7) × (loss_cls + loss_reg)
where loss_weight(conf≥0.7) is the first loss weight, loss_cls is the classification loss, and loss_reg is the regression loss.
The classification loss can be calculated by softmax or sigmoid, and the regression loss by smooth L1 (l1_smooth) or another regression loss function.
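As a minimal sketch of these loss functions (PyTorch names, not the patent's own code): softmax cross-entropy for the classification loss and smooth L1 for the regression loss.

```python
import torch
import torch.nn.functional as F

def detection_losses(cls_logits, cls_targets, box_preds, box_targets):
    """Classification loss via softmax cross-entropy and regression loss via smooth L1.
    Shapes: cls_logits [N, num_classes], cls_targets [N] (class indices),
    box_preds and box_targets [N, 4]."""
    loss_cls = F.cross_entropy(cls_logits, cls_targets)     # softmax-based classification loss
    loss_reg = F.smooth_l1_loss(box_preds, box_targets)     # l1_smooth regression loss
    return loss_cls, loss_reg
```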
S10312, when the confidence of the target sample data is smaller than the first preset confidence threshold and is larger than or equal to the second confidence threshold, calculating the regression loss of the second loss weight and the target sample data, and multiplying the regression loss of the target sample data by the second loss weight to obtain the loss of the target sample data;
In this embodiment, the second preset confidence threshold is 0.3. When the confidence (conf) of the target sample data is greater than or equal to 0.3 and less than 0.7, only the regression loss is calculated, not the classification loss.
That is, when 0.3 ≤ conf < 0.7:
Second loss weight: loss_weight(0.7>conf≥0.3) (the formula is provided as an image in the original and is not reproduced here).
Calculate the loss value loss(0.7>conf≥0.3) of the target sample data whose confidence is less than 0.7 and greater than or equal to 0.3:
loss(0.7>conf≥0.3) = loss_weight(0.7>conf≥0.3) × loss_reg
where loss_weight(0.7>conf≥0.3) is the second loss weight and loss_reg is the regression loss.
The regression loss is calculated by smooth L1 (l1_smooth) or another regression loss function.
And S10313, when the confidence of the target sample data is smaller than a second confidence threshold, not calculating loss.
That is, when conf < 0.3, the loss of the target sample data is not calculated.
The total loss of all sample data obtained by data connection is then calculated as follows (the formula is provided as an image in the original and is not reproduced here).
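The confidence-gated loss for samples produced by data connection can be sketched as below. The exact loss-weight formulas are given in the original only as images, so the weights are injected as caller-supplied functions (an assumption):

```python
def connected_sample_loss(conf, cls_loss, reg_loss, weight_high_fn, weight_mid_fn,
                          t_high=0.7, t_low=0.3):
    """Loss for one target in a sample produced by data connection.
    conf: prediction confidence; cls_loss / reg_loss: pre-computed classification and
    regression losses; weight_high_fn / weight_mid_fn: the first and second loss-weight
    formulas (given only as images in the original, so they are injected by the caller)."""
    if conf >= t_high:                   # high confidence: weighted classification + regression loss
        return weight_high_fn(conf) * (cls_loss + reg_loss)
    if conf >= t_low:                    # medium confidence: weighted regression loss only
        return weight_mid_fn(conf) * reg_loss
    return 0.0                           # low confidence: the loss is not calculated
```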
as an optional implementation manner, fig. 4 is a flowchart of a method for picture-integrating target-free pseudo tag data and data in a calibration data set according to an embodiment of the present invention, and as shown in fig. 4, picture-integrating target pseudo tag data and target object data extracted from the calibration data set includes:
S10211, taking target object data in the data of the calibration data set as a center, and matting the data within a preset range to obtain matting data;
in this embodiment, the range of each prediction frame is the range of the initial object data, and in order to avoid the influence caused by the boundary line of the object, the object is used as the center in this embodiment, so that the prediction frame extends outwards by 1.5-2.5 times and is used as the matting range, and matting is performed according to the matting range to obtain the matting data.
S10212, performing target shaking on target object data in the cutout data to obtain first data to be pasted;
In order to increase the number of features covered by the training set, this embodiment adds a target shaking step: the shaking value range is set to (0.5, 0.9), and the target object data is shrunk according to a first shaking value sampled from this range to obtain the first data to be pasted.
S10213, Poisson fusion is carried out on the first data to be pasted and the target pseudo label data.
As a preferred embodiment, before poisson fusion, data enhancement is performed on the first data to be pasted, for example, the first data to be pasted is randomly flipped, rotated, and the like.
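A sketch of the matting, target shaking and Poisson fusion steps using OpenCV's seamlessClone; the helper name, the random paste position and the flip probability are illustrative assumptions, while the 1.5-2.5x expansion and the (0.5, 0.9) shaking range follow the text.

```python
import random

import cv2
import numpy as np

def paste_calib_target(calib_img, box, pseudo_bg):
    """Cut a calibration target with surrounding context, shrink it by a random shaking
    value, optionally flip it, and Poisson-fuse it onto a non-target pseudo-label
    background (assumes pseudo_bg is larger than the shrunk patch; images are uint8 BGR)."""
    x1, y1, x2, y2 = box
    cx, cy, w, h = (x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1
    s = random.uniform(1.5, 2.5)                                  # expand the box 1.5-2.5x around the target
    X1, Y1 = max(0, int(cx - s * w / 2)), max(0, int(cy - s * h / 2))
    X2, Y2 = min(calib_img.shape[1], int(cx + s * w / 2)), min(calib_img.shape[0], int(cy + s * h / 2))
    patch = calib_img[Y1:Y2, X1:X2]                               # matting data
    j = random.uniform(0.5, 0.9)                                  # shaking value: shrink the target patch
    patch = cv2.resize(patch, (max(2, int(patch.shape[1] * j)), max(2, int(patch.shape[0] * j))))
    if random.random() < 0.5:                                     # simple data enhancement: horizontal flip
        patch = cv2.flip(patch, 1)
    mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)         # paste the whole patch
    px = random.randint(patch.shape[1] // 2 + 1, pseudo_bg.shape[1] - patch.shape[1] // 2 - 2)
    py = random.randint(patch.shape[0] // 2 + 1, pseudo_bg.shape[0] - patch.shape[0] // 2 - 2)
    return cv2.seamlessClone(patch, pseudo_bg, mask, (px, py), cv2.NORMAL_CLONE)
```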
As an optional implementation manner, fig. 5 is a flowchart of a method for picture-integrating target pseudo tag data and data in a calibration data set according to an embodiment of the present invention, and as shown in fig. 5, picture-integrating target object data in the target pseudo tag data and data in the calibration data set includes:
S10221, taking the target object data of the target pseudo label data as a center, matting the data within a preset range to obtain second data to be pasted;
in this embodiment, the range of each prediction frame is the range of the initial target object data, and in order to avoid the influence caused by the boundary line of the target object, the target object is used as the center in this embodiment, so that the prediction frame extends outwards by 1.5-2.5 times and is used as the matting range, and matting is performed according to the matting range to obtain the second data to be pasted.
S10222, performing Poisson fusion on the second data to be pasted and the data in the calibration data set; the target pseudo tag data is pseudo tag data with a target, wherein the confidence coefficient of the pseudo tag data is greater than a third preset confidence coefficient threshold value.
As a preferred embodiment, before poisson fusion, data enhancement is performed on the second data to be pasted, for example, operations such as flipping and rotating are randomly performed on the second data to be pasted.
In order to make the reliability of the new picture obtained after the picture integration higher, the embodiment selects the pseudo tag data with higher confidence for the picture integration. Preferably, the third preset confidence threshold is set to 0.85.
The invention also provides a ship detection semi-supervised online training device, and fig. 6 is a schematic structural diagram of the ship detection semi-supervised online training device provided by the embodiment of the invention, as shown in fig. 6, including:
the data acquisition module 100 is configured to acquire an initial ship detection model, and acquire a calibration dataset and a to-be-detected dataset;
in the data acquisition stage, the initial ship detection model is used as the basis of model adjustment, and a calibration data set and a data set to be detected are acquired simultaneously. The data in the calibration data set is a manually marked picture with target information, and the calibration data set is a training sample set of the initial ship detection model, namely the initial ship identification model is obtained by performing model training on the calibration data set. The data set to be detected is a set of pictures which are captured when the ship passes through the monitoring point and are not subjected to model prediction.
The training sample set generating module 200 is used for inputting the data set to be detected into the initial ship detection model for detection to obtain a pseudo tag data set, and performing data preprocessing on the pseudo tag data set to obtain a training sample set;
and predicting the label of the data set to be detected by using the initial ship identification model to obtain a pseudo label data set.
Some data in the pseudo tag data set have target object information, and some data do not. The pseudo tag data with a target are saved to Object_File, and the pseudo tag data without a target are saved to No_Obj_File.
The training sample set generating module 200 includes:
the first preprocessing submodule 2001 is configured to, when target pseudo tag data in the pseudo tag data set is non-target pseudo tag data, perform picture integration on the target pseudo tag data and target object data extracted from the calibration data set, or perform picture mixing on the target pseudo tag data and data in the calibration data set;
and respectively carrying out data preprocessing on each pseudo label data in the pseudo label data set. When the target pseudo tag data which is currently preprocessed is judged to be non-target pseudo tag data, preprocessing the target pseudo tag data in any one of the following two modes:
firstly, a picture integration mode: because the calibration data set is a picture data set subjected to manual marking, the target object information is accurate. The target pseudo label data is non-target pseudo label data, target information does not exist in the picture, and only background information exists, so that target object information in the calibration data set and the background information in the target pseudo label data can be integrated, and the picture in the calibration data set and the target pseudo label data are combined into a target picture. The target pseudo tag data may randomly select one calibration data as an integration object, and may also select a plurality of calibration data to be respectively integrated, which is not limited in this embodiment.
Secondly, the picture mixing mode: the target pseudo label data and the calibration data are fused by the mix_up method. In this way, the network effectively learns the background information of the pseudo label data, which improves the robustness of the model and further improves its detection accuracy.
A second preprocessing submodule 2002, configured to, when target pseudo tag data in the pseudo tag data set is pseudo tag data with a target, perform picture integration on target object data in the target pseudo tag data and data in the calibration data set, or perform data connection on the target object data in the target pseudo tag data and the data in the calibration data set;
when the target pseudo tag data which is currently preprocessed is judged to be pseudo tag data with a target, preprocessing the target pseudo tag data in any one of the following two modes:
firstly, a picture integration mode: the target pseudo tag data is pseudo tag data having a target, and the picture thereof has target information, but it cannot be determined whether the correspondence between the tag thereof and the target information is accurate. Therefore, the target object data in the target pseudo tag data can be extracted and integrated with the background information in the calibration data set, and the target pseudo tag data and the correspondingly integrated calibration data are combined into a new picture with a target, wherein the picture contains the pseudo tag target. The target pseudo tag data may randomly select one calibration data as an integration object, and may also select a plurality of calibration data to be respectively integrated, which is not limited in this embodiment.
Secondly, the data connection mode: the target pseudo label data and the data in the calibration data set are connected, and the targets in the target pseudo label data are marked. The connected data is a picture formed by joining the target pseudo label picture and a calibration picture; the edges of the two pictures may partially overlap, provided that no target object is covered. In this way, the model can learn the target object information in the pseudo tag data, which increases the amount of model training data and improves the ship detection rate.
And the latest ship detection model generation module 300 is used for loading an initial ship detection model in the ship detection network, initializing model weights, performing model training by adopting a training sample set, extracting target characteristics by forward propagation, calculating loss of the model, and updating network parameters by backward propagation until the model converges to obtain the latest ship detection model.
The ship detection network loads an initial ship detection model, initializes the weight of the model, namely assigns parameters in the initial ship detection model to the ship detection network, wherein the ship detection network is a model training network.
Then, the data in the preprocessed training sample set are input into the ship detection network for model training, and the features of the target object, namely the inference result, are obtained after training. The loss of the sample training set is calculated by the loss calculation function to obtain the current loss, which represents the difference between the inference result and the target object information in the pseudo tag data set. Back propagation information is obtained by taking the derivative of the current loss with respect to the parameters in the network, the parameters of the ship detection network are adjusted according to the back propagation information, and training is repeated until the loss value becomes stable and the model reaches its optimal state.
The overall process of the ship detection semi-supervised online training method is as follows: assigning parameters in a ship detection network to be trained by using parameters in an initial ship detection model which is trained previously;
Secondly, the manually calibrated calibration data set and the pseudo label data set inferred by the existing model are preprocessed by the above method, and the preprocessed training sample set is input into the ship detection network. The ship detection network obtains the features of the ship target information through network calculation from the picture information in the input training sample set, then compares the obtained ship target information features with the ship target information that was manually calibrated or inferred by the previous model, obtains the difference between the inference result and the ship detection targets of the pseudo label data set through the loss calculation formula, performs network back propagation (taking derivatives with respect to the parameters in the network) using the obtained difference to obtain back propagation information, and adjusts the network parameters according to the back propagation information;
and thirdly, the step II is circulated until the difference between the inference target information and the current information of the training set does not change any more, and a latest ship detection model is obtained.
As an alternative implementation manner, fig. 7 is a schematic structural diagram of a latest ship detection model generation module according to an embodiment of the present invention, and as shown in fig. 7, the latest ship detection model generation module 300 includes:
the first data loss calculation submodule 3001 is configured to calculate a loss of the target sample data according to a confidence of the target sample data when the target sample data is obtained by data connection;
Because the data obtained by the data connection method contain pseudo tag data, the confidence of some data is low; if the same loss calculation method as for the calibration data set were used, the model could learn incorrectly, which would affect its detection accuracy. Therefore, this embodiment adjusts the loss calculation method for each pseudo tag datum according to its confidence, which increases the model's attention to correctly detected targets in the pseudo label data set and reduces its attention to falsely detected targets.
The second data loss calculation submodule 3002 is configured to calculate a loss of the target sample data according to the loss function when the target sample data is obtained by another method;
When the target sample data is obtained in the other three modes, that is, in a non-data-connection mode, the loss of the target sample data is calculated according to the loss function. In this embodiment, the classification loss is calculated by softmax or sigmoid, and the regression loss is calculated by smooth L1 (l1_smooth) or another regression loss function.
And the model loss calculation submodule 3003 is configured to calculate a sum of loss losses of sample data in the training sample set and participating in the model training at this time.
And calculating the loss of all sample data in the training sample set as the loss of the model. The calculation principle is as follows:
loss_sum = loss(pseudo label targets) + loss(non-pseudo label targets)
As an alternative implementation, the first data loss calculating sub-module 3001, fig. 8 is a schematic structural diagram of the first data loss calculating sub-module provided in the embodiment of the present invention, as shown in fig. 8, including:
a first loss calculation unit 30011, configured to calculate a first loss weight, a classification loss of the target sample data, and a regression loss when the confidence of the target sample data is greater than or equal to a first preset confidence threshold, and multiply the sum of the classification loss and the regression loss of the target sample data by the first loss weight to obtain a loss of the target sample data;
The prediction box is matched with the pseudo label target according to the target position, so as to determine whether the predicted target corresponds to a pseudo label target.
In this embodiment, the first preset confidence threshold is 0.7, and when the confidence (conf) of the target sample data is greater than or equal to 0.7, the classification loss and the regression loss of the target sample data are calculated.
That is, when conf ≥ 0.7:
First loss weight: loss_weight(conf≥0.7) (the formula is provided as an image in the original and is not reproduced here).
Calculate the loss value loss(conf≥0.7) of the target sample data whose confidence is greater than or equal to 0.7:
loss(conf≥0.7) = loss_weight(conf≥0.7) × (loss_cls + loss_reg)
where loss_weight(conf≥0.7) is the first loss weight, loss_cls is the classification loss, and loss_reg is the regression loss.
The classification loss can be calculated by softmax or sigmoid, and the regression loss by smooth L1 (l1_smooth) or another regression loss function.
A second loss calculating unit 30012, configured to calculate a second loss weight and a regression loss of the target sample data when the confidence of the target sample data is less than the first preset confidence threshold and is greater than or equal to the second confidence threshold, and multiply the second loss weight and the regression loss of the target sample data to obtain a loss of the target sample data;
In this embodiment, the second preset confidence threshold is 0.3. When the confidence (conf) of the target sample data is greater than or equal to 0.3 and less than 0.7, only the regression loss is calculated, not the classification loss.
That is, when 0.3 ≤ conf < 0.7:
Second loss weight: loss_weight(0.7>conf≥0.3) (the formula is provided as an image in the original and is not reproduced here).
Calculate the loss value loss(0.7>conf≥0.3) of the target sample data whose confidence is less than 0.7 and greater than or equal to 0.3:
loss(0.7>conf≥0.3) = loss_weight(0.7>conf≥0.3) × loss_reg
where loss_weight(0.7>conf≥0.3) is the second loss weight and loss_reg is the regression loss.
The regression loss is calculated by smooth L1 (l1_smooth) or another regression loss function.
A third loss calculating unit 30013, configured to not calculate loss when the confidence of the target sample data is less than the second confidence threshold.
That is, when conf < 0.3, the loss of the target sample data is not calculated.
The total loss of all sample data obtained by data connection is then calculated as follows (the formula is provided as an image in the original and is not reproduced here).
as an alternative implementation manner, fig. 9 is a schematic structural diagram of a first preprocessing submodule provided in an embodiment of the present invention, and as shown in fig. 9, the first preprocessing submodule 2001 includes:
the matting data generating unit 20011 is configured to perform matting on data within a preset range with target object data in the data of the calibration data set as a center to obtain matting data;
in this embodiment, the range of each prediction frame is the range of the initial object data, and in order to avoid the influence caused by the boundary line of the object, the object is used as the center in this embodiment, so that the prediction frame extends outwards by 1.5-2.5 times and is used as the matting range, and matting is performed according to the matting range to obtain the matting data.
The first to-be-pasted data generation unit 20012 is configured to perform target shaking on the target object data in the matting data to obtain the first data to be pasted;
In order to increase the number of features covered by the training set, this embodiment adds a target shaking step: the shaking value range is set to (0.5, 0.9), and the target object data is shrunk according to a first shaking value sampled from this range to obtain the first data to be pasted.
The first picture integration unit 20013 is configured to perform poisson fusion on the first to-be-pasted data and the target pseudo tag data.
As a preferred embodiment, before poisson fusion, data enhancement is performed on the first data to be pasted, for example, the first data to be pasted is randomly flipped, rotated, and the like.
As an alternative implementation, fig. 10 is a schematic structural diagram of a second preprocessing sub-module provided in an embodiment of the present invention, and as shown in fig. 10, the second preprocessing sub-module 2002 includes:
a second data to be pasted generation unit 20021, configured to cutout data within a preset range with target object data of the target pseudo tag data as a center to obtain second data to be pasted;
in this embodiment, the range of each prediction frame is the range of the initial target object data, and in order to avoid the influence caused by the boundary line of the target object, the target object is used as the center in this embodiment, so that the prediction frame extends outwards by 1.5-2.5 times and is used as the matting range, and matting is performed according to the matting range to obtain the second data to be pasted.
The second picture integration unit 20022 is configured to perform poisson fusion on the second data to be pasted and the data in the calibration data set; the target pseudo tag data is pseudo tag data with a target, wherein the confidence coefficient of the pseudo tag data is greater than a third preset confidence coefficient threshold value.
As a preferred embodiment, before poisson fusion, data enhancement is performed on the second data to be pasted, for example, operations such as flipping and rotating are randomly performed on the second data to be pasted.
In order to make the reliability of the new picture obtained after the picture integration higher, the embodiment selects the pseudo tag data with higher confidence for the picture integration. Preferably, the third preset confidence threshold is set to 0.85.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the ship detection semi-supervised online training method described above is implemented.
The storage medium stores the software, and the storage medium includes but is not limited to: optical disks, floppy disks, hard disks, erasable memory, etc.
The present invention performs data preprocessing on the pseudo label data to increase the amount of sample data and modifies the loss calculation method, so that the ship detection network can learn online and continuously optimize its weights to obtain an optimal model, thereby realizing online semi-supervised training of the model and improving its robustness and detection rate.
The technical scheme has the following beneficial effects: the data to be detected is inference-tested with the existing model to obtain a pseudo label data set, and the pseudo label data set is preprocessed to increase the amount of training data and to enrich the target and background information of the monitoring points, so that the training set carries richer information and the robustness and detection accuracy of the model are improved. Mixing the pseudo label data set with the calibration data set reduces the influence of erroneous picture information in the pseudo label data set. The loss calculation function used in model training is modified to increase the influence of correctly detected targets on the model and to reduce the model's attention to erroneous targets, so that the ship detection model can learn online and continuously optimize its weights to obtain an optimal model; in this way, tested data is converted into training data without manual annotation, online semi-supervised training of the model is realized, the robustness and detection rate of the model are improved, and the false detection rate is reduced.
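As an illustrative, non-limiting sketch of the modified loss calculation described above for samples obtained by data connection, the confidence-gated rule can be written as follows; the thresholds t1 and t2 and the loss weights w1 and w2 are placeholders (in the embodiment the first and second loss weights are calculated rather than fixed), and the function name pseudo_label_loss is hypothetical:

```python
def pseudo_label_loss(conf, cls_loss, reg_loss,
                      t1=0.9, t2=0.6, w1=1.0, w2=0.5):
    """Confidence-gated loss for a target sample obtained by data connection:
    high-confidence samples contribute classification + regression loss,
    medium-confidence samples contribute regression loss only, and
    low-confidence samples are ignored (thresholds/weights assumed)."""
    if conf >= t1:                       # first preset confidence threshold
        return w1 * (cls_loss + reg_loss)
    if conf >= t2:                       # second confidence threshold
        return w2 * reg_loss
    return 0.0                           # no loss is calculated
```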
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (11)

1. A ship detection semi-supervised online training method is characterized by comprising the following steps:
acquiring an initial ship detection model, and acquiring a calibration data set and a data set to be detected;
inputting the data set to be detected into the initial ship detection model for detection to obtain a pseudo label data set, and performing data preprocessing on the pseudo label data set to obtain a training sample set;
wherein, the data preprocessing of the pseudo label data set to obtain a training sample set comprises: when target pseudo tag data in a pseudo tag data set are non-target pseudo tag data, carrying out picture integration on the target pseudo tag data and target object data extracted from the calibration data set, or carrying out picture mixing on the target pseudo tag data and data in the calibration data set; when target pseudo tag data in a pseudo tag data set is pseudo tag data with a target, carrying out picture integration on target object data in the target pseudo tag data and data in a calibration data set, or carrying out data connection on the target object data in the target pseudo tag data and the data in the calibration data set;
and loading the initial ship detection model into a ship detection network, initializing the model weights, performing model training with the training sample set, extracting target features by forward propagation, calculating the loss of the model, and updating the network parameters by backward propagation until the model converges, to obtain the latest ship detection model.
2. The semi-supervised online training method for ship detection according to claim 1, wherein the calculating the loss of the model comprises:
when target sample data is obtained through data connection, calculating loss of the target sample data according to the confidence coefficient of the target sample data;
when target sample data is obtained in other modes, calculating loss of the target sample data according to a loss function;
and calculating the sum of loss of the sample data participating in the model training in the training sample set.
3. The semi-supervised online training method for ship detection according to claim 2, wherein the calculating the loss of the target sample data according to the confidence of the target sample data comprises:
when the confidence coefficient of the target sample data is greater than or equal to a first preset confidence coefficient threshold value, calculating a first loss weight, the classification loss and the regression loss of the target sample data, and multiplying the sum of the classification loss and the regression loss of the target sample data by the first loss weight to obtain the loss of the target sample data;
when the confidence coefficient of the target sample data is smaller than a first preset confidence coefficient threshold and is larger than or equal to a second confidence coefficient threshold, calculating a second loss weight and the regression loss of the target sample data, and multiplying the second loss weight and the regression loss of the target sample data to obtain the loss of the target sample data;
when the confidence of the target sample data is smaller than the second confidence threshold, loss is not calculated.
4. The semi-supervised online training method for ship detection as recited in claim 1, wherein the performing picture integration on the target pseudo tag data and the target object data extracted from the calibration data set comprises:
taking target object data in the data of the calibration data set as a center, and matting the data within a preset range to obtain matting data;
performing target jitter on the target object data in the matting data to obtain the first to-be-pasted data;
and performing Poisson fusion on the first to-be-pasted data and the target pseudo label data.
5. The semi-supervised online training method for ship detection as recited in claim 1, wherein the performing picture integration on the target object data in the target pseudo tag data and the data in the calibration data set comprises:
taking the target object data of the target pseudo label data as a center, and matting the data within a preset range to obtain the second data to be pasted;
performing Poisson fusion on the second data to be pasted and the data in the calibration data set;
the target pseudo tag data is pseudo tag data with a target, and the confidence coefficient of the pseudo tag data is greater than a third preset confidence coefficient threshold value.
6. A ship detection semi-supervised online training device, characterized by comprising:
the data acquisition module is used for acquiring an initial ship detection model and acquiring a calibration data set and a data set to be detected;
a training sample set generation module, configured to input the to-be-detected data set to the initial ship detection model for detection, so as to obtain a pseudo tag data set, and perform data preprocessing on the pseudo tag data set, so as to obtain a training sample set;
wherein the training sample set generating module comprises: the first preprocessing submodule is used for carrying out picture integration on target pseudo label data and target object data extracted from the calibration data set or carrying out picture mixing on the target pseudo label data and the data in the calibration data set when the target pseudo label data in the pseudo label data set is non-target pseudo label data; the second preprocessing submodule is used for carrying out picture integration on target object data in the target pseudo tag data and data in the calibration data set or carrying out data connection on the target object data in the target pseudo tag data and the data in the calibration data set when the target pseudo tag data in the pseudo tag data set is the pseudo tag data with a target;
and the latest ship detection model generation module is used for loading the initial ship detection model into the ship detection network, initializing the model weights, performing model training by adopting the training sample set, extracting target features by forward propagation, calculating the loss of the model, and updating the network parameters by backward propagation until the model converges to obtain the latest ship detection model.
7. The semi-supervised online training device for ship detection according to claim 6, wherein the latest ship detection model generation module comprises:
the first data loss calculation submodule is used for calculating loss of the target sample data according to the confidence coefficient of the target sample data when the target sample data is obtained by data connection;
the second data loss calculation submodule is used for calculating loss of the target sample data according to a loss function when the target sample data is obtained in other modes;
and the model loss calculation submodule is used for calculating the sum of the losses of the sample data in the training sample set that participate in the model training.
8. The ship detection semi-supervised online training device of claim 7, wherein the first data loss calculation submodule comprises:
the first loss calculation unit is used for calculating a first loss weight, the classification loss and the regression loss of the target sample data when the confidence coefficient of the target sample data is greater than or equal to a first preset confidence coefficient threshold value, and multiplying the sum of the classification loss and the regression loss of the target sample data by the first loss weight to obtain the loss of the target sample data;
the second loss calculation unit is used for calculating a second loss weight and the regression loss of the target sample data when the confidence coefficient of the target sample data is smaller than the first preset confidence coefficient threshold and is larger than or equal to a second confidence coefficient threshold, and multiplying the second loss weight and the regression loss of the target sample data to obtain the loss of the target sample data;
and the third loss calculating unit is used for not calculating loss when the confidence coefficient of the target sample data is smaller than the second confidence coefficient threshold value.
9. The ship detection semi-supervised online training device of claim 6, wherein the first preprocessing sub-module comprises:
the matting data generation unit is used for taking the target object data in the data of the calibration data set as the center and performing matting on the data within a preset range to obtain the matting data;
the first to-be-pasted data generation unit is used for performing target jitter on the target object data in the matting data to obtain the first to-be-pasted data;
and the first picture integration unit is used for performing Poisson fusion on the first to-be-pasted data and the target pseudo label data.
10. The ship detection semi-supervised online training device of claim 6, wherein the second preprocessing submodule comprises:
the second data to be pasted generation unit is used for matting data within a preset range by taking the target object data of the target pseudo label data as a center to obtain the second data to be pasted;
the second image integration unit is used for performing Poisson fusion on the second data to be pasted and the data in the calibration data set;
the target pseudo tag data is pseudo tag data with a target, and the confidence coefficient of the pseudo tag data is greater than a third preset confidence coefficient threshold value.
11. A computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the ship detection semi-supervised online training method as claimed in any one of claims 1 to 6.
CN202111383472.6A 2021-11-22 2021-11-22 Semi-supervised online training method and device for ship detection and computer storage medium Active CN114186615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111383472.6A CN114186615B (en) 2021-11-22 2021-11-22 Semi-supervised online training method and device for ship detection and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111383472.6A CN114186615B (en) 2021-11-22 2021-11-22 Semi-supervised online training method and device for ship detection and computer storage medium

Publications (2)

Publication Number Publication Date
CN114186615A true CN114186615A (en) 2022-03-15
CN114186615B CN114186615B (en) 2022-07-08

Family

ID=80541131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111383472.6A Active CN114186615B (en) 2021-11-22 2021-11-22 Semi-supervised online training method and device for ship detection and computer storage medium

Country Status (1)

Country Link
CN (1) CN114186615B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190354857A1 (en) * 2018-05-17 2019-11-21 Raytheon Company Machine learning using informed pseudolabels
US20210056718A1 (en) * 2019-08-20 2021-02-25 GM Global Technology Operations LLC Domain adaptation for analysis of images
CN111222648A (en) * 2020-01-15 2020-06-02 深圳前海微众银行股份有限公司 Semi-supervised machine learning optimization method, device, equipment and storage medium
CN111882055A (en) * 2020-06-15 2020-11-03 电子科技大学 Method for constructing target detection self-adaptive model based on cycleGAN and pseudo label
CN112084866A (en) * 2020-08-07 2020-12-15 浙江工业大学 Target detection method based on improved YOLO v4 algorithm
CN112115780A (en) * 2020-08-11 2020-12-22 西安交通大学 Semi-supervised pedestrian re-identification method based on deep multi-model cooperation
CN112381098A (en) * 2020-11-19 2021-02-19 上海交通大学 Semi-supervised learning method and system based on self-learning in target segmentation field
CN112509563A (en) * 2020-12-17 2021-03-16 中国科学技术大学 Model training method and device and electronic equipment
CN113191385A (en) * 2021-03-25 2021-07-30 之江实验室 Unknown image classification automatic labeling method based on pre-training labeling data
CN113177576A (en) * 2021-03-31 2021-07-27 中国科学院大学 Multi-example active learning method for target detection
CN113269267A (en) * 2021-06-15 2021-08-17 苏州挚途科技有限公司 Training method of target detection model, target detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MENGDE XU ET AL.: "End-to-End Semi-Supervised Object Detection with Soft Teacher", 《ARXIV.ORG》 *
ZHANG TANGLEI: "Pedestrian Detection Pseudo-Label Semi-Supervised Learning Algorithm", 《FUJIAN COMPUTER》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693612A (en) * 2022-03-16 2022-07-01 深圳大学 Knee joint bone tumor detection method based on deep learning and related device
CN114881129A (en) * 2022-04-25 2022-08-09 北京百度网讯科技有限公司 Model training method and device, electronic equipment and storage medium
CN114627338A (en) * 2022-05-16 2022-06-14 浙江华是科技股份有限公司 Ship category classification model training method and system and computer storage medium
CN114743074A (en) * 2022-06-13 2022-07-12 浙江华是科技股份有限公司 Ship detection model training method and system based on strong and weak countermeasure training
CN114743074B (en) * 2022-06-13 2022-09-09 浙江华是科技股份有限公司 Ship detection model training method and system based on strong and weak confrontation training
CN114998691A (en) * 2022-06-24 2022-09-02 浙江华是科技股份有限公司 Semi-supervised ship classification model training method and device
CN114998691B (en) * 2022-06-24 2023-04-18 浙江华是科技股份有限公司 Semi-supervised ship classification model training method and device
CN116343050A (en) * 2023-05-26 2023-06-27 成都理工大学 Target detection method for remote sensing image noise annotation based on self-adaptive weight
CN117152587A (en) * 2023-10-27 2023-12-01 浙江华是科技股份有限公司 Anti-learning-based semi-supervised ship detection method and system
CN117152587B (en) * 2023-10-27 2024-01-26 浙江华是科技股份有限公司 Anti-learning-based semi-supervised ship detection method and system

Also Published As

Publication number Publication date
CN114186615B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN114186615B (en) Semi-supervised online training method and device for ship detection and computer storage medium
CN111428726B (en) Panorama segmentation method, system, equipment and storage medium based on graph neural network
CN110781924B (en) Side-scan sonar image feature extraction method based on full convolution neural network
US20190164312A1 (en) Neural network-based camera calibration
CN102799900B (en) Target tracking method based on supporting online clustering in detection
CN111222526B (en) Method, device, equipment and storage medium for identifying real-time fishing behavior of fishing vessel
CN114693942A (en) Multimode fault understanding and auxiliary labeling method for intelligent operation and maintenance of instruments and meters
CN110782448A (en) Rendered image evaluation method and device
CN114926726A (en) Unmanned ship sensing method based on multitask network and related equipment
CN116977633A (en) Feature element segmentation model training method, feature element segmentation method and device
CN114821229B (en) Underwater acoustic data set augmentation method and system based on condition generation countermeasure network
Durán-Rosal et al. Detection and prediction of segments containing extreme significant wave heights
CN115797735A (en) Target detection method, device, equipment and storage medium
CN112206541A (en) Game plug-in identification method and device, storage medium and computer equipment
CN116452810A (en) Multi-level semantic segmentation method and device, electronic equipment and storage medium
CN113343123B (en) Training method and detection method for generating confrontation multiple relation graph network
CN111783716A (en) Pedestrian detection method, system and device based on attitude information
CN109242882B (en) Visual tracking method, device, medium and equipment
CN112418149A (en) Abnormal behavior detection method based on deep convolutional neural network
CN111858999B (en) Retrieval method and device based on segmentation difficult sample generation
CN114510961A (en) Ship behavior intelligent monitoring algorithm based on recurrent neural network and Beidou positioning
CN113569081A (en) Image recognition method, device, equipment and storage medium
CN116739073B (en) Online back door sample detection method and system based on evolution deviation
CN115965823B (en) Online difficult sample mining method and system based on Focal loss function
CN115293297B (en) Method for predicting track of ship driven by intention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant