CN111611836A - Ship detection model training and ship tracking method based on background elimination method - Google Patents


Info

Publication number
CN111611836A
Authority
CN
China
Prior art keywords
ship
image
background elimination
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911381857.1A
Other languages
Chinese (zh)
Inventor
邓练兵
邹纪升
逯明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd filed Critical Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN201911381857.1A priority Critical patent/CN111611836A/en
Publication of CN111611836A publication Critical patent/CN111611836A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Abstract

The invention provides a ship detection model training and ship tracking method based on a background elimination method. The ship detection model training method based on the background elimination method comprises the following steps: acquiring a ship image training sample, wherein the ship image training sample is a ship image extracted according to a background elimination method; training a neural network model according to the ship image training sample to obtain an output vector of the neural network model; calculating the loss of the neural network model according to the output vector and the actual result corresponding to the ship image training sample; performing gradient inversion on the loss; and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model. By implementing the method and device, ship images can be detected quickly and accurately, and the accuracy of both ship detection and ship tracking is improved.

Description

Ship detection model training and ship tracking method based on background elimination method
Technical Field
The invention relates to the field of computer vision, in particular to a ship detection model training and ship tracking method based on a background elimination method.
Background
With the vigorous development of water transport and fishery management, the detection and tracking of ships are receiving increasing attention. Ships are an important means of transportation used in many fields, and tracking them is essential to ensure their safety and to monitor their navigation state. How to track ships accurately and quickly has therefore become a problem that urgently needs to be solved.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is how to accurately and quickly track the ship, thereby providing a ship detection model training and ship tracking method based on the background elimination method.
According to a first aspect, an embodiment of the present invention provides a ship detection model training method based on a background elimination method, including: acquiring a ship image training sample, wherein the ship image training sample is a ship image extracted according to a background elimination method; training a neural network model according to the ship image training sample to obtain an output vector of the neural network model; calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector; performing a gradient inversion on the loss; and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
With reference to the first aspect, in a first implementation manner of the first aspect, the ship image training sample is a ship image extracted according to a background subtraction method, and includes: intercepting a picture sequence with the number of ships lower than a first threshold value; calculating a picture mean value of the picture sequence, and taking the calculated picture mean value as a background model; and obtaining a ship image training sample according to the background model.
With reference to the first aspect, in a second implementation manner of the first aspect, the method for training a ship detection model based on background subtraction further includes: acquiring a ship image test sample, wherein the ship image test sample is a ship image extracted according to the background elimination method; obtaining a test result according to the ship image test sample and the ship detection model based on the background elimination method; judging whether the accuracy of the ship detection model based on the background elimination method is higher than a preset threshold value or not according to the test result; and when the accuracy of the ship detection model based on the background elimination method is higher than the preset threshold value, determining the ship detection model based on the background elimination method as an available ship detection model based on the background elimination method.
According to a second aspect, an embodiment of the present invention provides a ship tracking method, including the steps of: acquiring a ship video image, and extracting a ship image from the ship video image by using a background elimination method; inputting the ship image to a preset ship detection model based on a background elimination method to obtain a ship detection result, the preset ship detection model being generated by training through the ship detection model training method based on a background elimination method according to the first aspect or any embodiment of the first aspect; and correlating the ship detection results by using a target algorithm to obtain the running track of the ship.
With reference to the second aspect, in a first embodiment of the second aspect, the step of correlating the ship detection results with a target algorithm to obtain the running track of the ship includes: acquiring the matching weight of the ship detection result of the current video image and each ship detection result of the next video image; selecting the ship detection result where the maximum value of the matching weight in the next video image is located, and performing data association; and when the ship detection result where the maximum value of the selected matching weight is located is associated, reducing the matching weight, and reselecting the ship detection result where the maximum value of the selected matching weight is located in the next video image to perform data association.
With reference to the first embodiment of the second aspect, in the second embodiment of the second aspect, the obtaining the matching weight of the ship detection result of the current video image and each ship detection result of the next video image includes: acquiring motion parameters of a ship, predicting the motion track of the ship according to the motion parameters, and obtaining a predicted position of the ship; judging the motion matching degree according to the predicted position of the ship and the detection result of the ship; judging the appearance matching degree of the detection results of the adjacent ships according to the minimum cosine distance; and determining the matching weight according to the motion matching degree and the appearance matching degree.
According to a third aspect, an embodiment of the present invention provides a ship detection model training apparatus based on a background elimination method, including: a sample acquisition module, used for acquiring a ship image training sample, wherein the ship image training sample is a ship image extracted according to the background elimination method; a vector acquisition module, used for training a neural network model according to the ship image training sample to obtain an output vector of the neural network model; a loss calculation module, used for calculating the loss of the neural network model according to the output vector and the actual result corresponding to the ship image training sample; a gradient inversion module, used for performing gradient inversion on the loss; and a model building module, used for adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
According to a fourth aspect, embodiments of the present invention provide a vessel tracking apparatus comprising: the video image acquisition module is used for acquiring a ship video image and extracting the ship image from the ship video image by using a background elimination method; the detection module is used for inputting the ship image to a preset ship detection model based on a background elimination method to obtain a ship detection result; the preset ship detection model is generated by training through a ship tracking model training method based on a background elimination method according to the first aspect or any embodiment of the first aspect; and the association module is used for associating the ship detection result by using a target algorithm to obtain the running track of the ship.
According to a fifth aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the ship detection model training method based on the background elimination method according to the first aspect or any embodiment of the first aspect, or of the ship tracking method according to the second aspect or any embodiment of the second aspect.
According to a sixth aspect, an embodiment of the present invention provides a storage medium on which computer instructions are stored, the instructions, when executed by a processor, implementing the steps of the ship detection model training method based on the background elimination method according to the first aspect or any embodiment of the first aspect, or of the ship tracking method according to the second aspect or any embodiment of the second aspect.
The technical scheme of the invention has the following advantages:
1. The ship detection model training method and apparatus based on the background elimination method extract the ship image from the ship image training sample according to the background elimination method and train the neural network model on the extracted ship image to construct the ship detection model. Because the background is removed before training, the model concentrates on the ship itself, which improves the accuracy of ship detection.
2. The ship detection model training method and apparatus based on the background elimination method use the mean value of a picture sequence as the background model, so the established background model is more effective and restores the actual background better, improving the accuracy of the ship detection model based on the background elimination method.
3. The test set provided by the invention is used to test the accuracy of the trained neural network model and to select a neural network model that meets the accuracy condition, giving an index for model selection and facilitating the verification and selection of the neural network model.
4. According to the ship tracking method and device provided by the invention, the ship image subjected to background elimination is input into the ship detection model based on the background elimination method, and then the ship detection result is correlated according to the target algorithm, so that the ship can be quickly and accurately tracked.
5. The method and the device provided by the invention determine the matching weight by utilizing the motion matching degree and the appearance matching degree, not only consider the motion condition of the ship, but also consider the appearance matching degree, so that the matching degree of the obtained matching weight is higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating a method for training a ship detection model based on background subtraction according to an embodiment of the present invention;
FIG. 2 is a flow chart of a specific example of a vessel tracking method in an embodiment of the invention;
FIG. 3 is a diagram of a specific example of a vessel tracking method according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a specific example of a ship detection model training apparatus based on background elimination method according to an embodiment of the present invention;
FIG. 5 is a functional block diagram of one specific example of a vessel tracking device in an embodiment of the present invention;
fig. 6 is a schematic block diagram of a specific example of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment provides a ship detection model training method based on background subtraction, as shown in fig. 1, including the following steps:
s110: and acquiring a ship image training sample, wherein the ship image training sample is a ship image extracted according to the background elimination method.
For example, the ship image samples may be acquired by extracting frames from video captured by a drone, or from a network database; 70% of the ship image samples may be taken as training data. Extracting the ship image according to the background subtraction method may proceed as follows: first obtain a background image or establish a background model, then compare the current ship image sample with it, for example by subtracting the background image or background model from the current sample, to obtain the ship image, which is used as the ship image training sample. The acquisition mode of the ship image training sample is not limited in this embodiment and can be determined as needed.
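The subtraction step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the threshold value are assumptions.

```python
import numpy as np

def extract_ship_image(frame, background, thresh=30):
    """Keep only pixels whose absolute difference from the background
    model exceeds `thresh`; everything else is zeroed out."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = (diff.max(axis=-1) > thresh).astype(np.uint8)
    return frame * mask[..., None]

# Toy example: a 2x2 "frame" over a flat grey background.
bg = np.full((2, 2, 3), 100, dtype=np.uint8)
frame = bg.copy()
frame[0, 0] = (200, 200, 200)  # a bright "ship" pixel
fg = extract_ship_image(frame, bg)  # only the [0, 0] pixel survives
```

In practice the mask would typically be cleaned up with morphological operations before cropping out the ship region.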
S120: and training the neural network model according to the ship image training sample to obtain an output vector of the neural network model.
For example, the neural network model may be trained on the ship image training samples as follows: the ship positions in the training samples are marked, the marked samples are input into the neural network, and with minimizing the loss function as the constraint, a gradient descent method is used to adjust the parameters and weights of each convolutional layer, pooling layer, fully-connected layer, classification layer and so on, so that the neural network model can detect ships. The neural network training method is not limited in this embodiment and may be determined as needed.
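The gradient-descent idea in this step can be illustrated with a deliberately tiny example: a single-weight linear model rather than the patent's network, with all names illustrative.

```python
# Illustrative gradient descent on a one-weight linear model f(x) = w * x,
# minimizing a mean squared loss: a toy stand-in for adjusting layer weights.
def train(xs, ys, lr=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step against the gradient to reduce the loss
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # true relation: y = 2x
```

A real detector adjusts millions of such weights per layer, but each update follows this same rule.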
Optionally, in some embodiments of the present invention, the output vector of the neural network model may be obtained by using the convolutional layers to compute the texture of each pixel in the ship image training sample, merging nearby pixels into candidate image regions, and feeding the image information of the candidate regions into the fully-connected layer of the neural network to obtain the output vector; or by setting up a region proposal network that takes the output feature map of the first convolutional network as input, slides a 3 × 3 convolution kernel over the feature map to construct class-agnostic candidate regions, and feeds the candidate regions into a separate fully-connected layer to obtain the output vector. The output vector represents the predicted bounding boxes and the detection result within each predicted bounding box, including whether the detection target is present. The specific neural network model is not limited in this embodiment and may be determined as needed.
S130: and calculating the loss of the neural network model according to the actual result and the output vector corresponding to the ship image training sample.
For example, the calculation method for calculating the loss of the neural network model according to the actual result and the output vector corresponding to the ship image training sample may be:
Loss = (1/T) Σ_{t=1}^{T} (x^{(t)} − f(x^{(t)}))²
wherein x^{(t)} represents the actual result corresponding to the t-th input ship image training sample, f(x^{(t)}) represents the corresponding output vector of the neural network model, f represents the trained neural network model, and T represents the number of training samples. The specific calculation method of the loss is not limited in this embodiment and may be determined as needed.
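A minimal sketch of such a loss computation, assuming a mean squared error form over the T samples (the patent does not fix the exact loss; the symbols mirror the description in S130):

```python
import numpy as np

def model_loss(actuals, outputs):
    """Mean squared loss over T samples: (1/T) * sum of ||x_t - f(x_t)||^2."""
    T = len(actuals)
    return sum(float(np.sum((a - o) ** 2)) for a, o in zip(actuals, outputs)) / T

actuals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # ground-truth vectors
outputs = [np.array([0.5, 0.0]), np.array([0.0, 1.0])]   # model output vectors
loss = model_loss(actuals, outputs)  # (0.25 + 0.0) / 2 = 0.125
```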
S140: the losses are gradient reversed.
Illustratively, the gradient inversion may be performed by calling a gradient reversal function, whose effect is to invert the back-propagated loss so that the training targets of the network portions before and after the gradient reversal function are opposite to each other, achieving an adversarial effect.
S150: and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
Illustratively, the weight parameters of the neural network model are adjusted according to the loss after gradient inversion as follows: during forward propagation, the loss of each layer of the model is recorded; during backward propagation, the loss passing through the gradient reversal layer is multiplied by −λ; each layer of the network then computes its gradient from the returned loss and updates its weight parameters, thereby constructing the ship detection model.
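The gradient reversal behaviour described in S140 and S150 can be sketched outside any deep-learning framework as an identity forward pass whose backward pass multiplies the incoming gradient by −λ. The class and parameter names are assumptions for illustration.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies the incoming gradient
    by -lam in the backward pass (the multiply-by-minus-lambda step)."""
    def __init__(self, lam=1.0):
        self.lam = lam
    def forward(self, x):
        return x
    def backward(self, grad_output):
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                 # unchanged on the way forward
g = grl.backward(np.ones_like(x))  # reversed and scaled on the way back
```

In a framework such as PyTorch this is conventionally implemented as a custom autograd function inserted between two sub-networks to make their training objectives adversarial.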
The ship detection model training method based on the background elimination method provided by this embodiment extracts the ship image from the ship image training sample according to the background elimination method and trains the neural network model on the extracted ship image, thereby constructing the ship detection model.
As an alternative embodiment of the present application, the ship image training sample is a ship image extracted according to a background subtraction method, and includes:
first, a sequence of pictures in which the number of vessels present is below a first threshold is truncated.
For example, the picture sequence in which the number of ships appearing is lower than the first threshold may be intercepted by manually selecting a section of video in which the number of ships appearing is lower than the first threshold and de-framing it to obtain a picture sequence for establishing the background model. The first threshold may be 2; the manner of capturing the picture sequence and the size of the first threshold are not limited in this embodiment and may be determined as needed.
Secondly, the picture mean value of the picture sequence is obtained, and the obtained picture mean value is used as a background model.
For example, the picture mean value of the picture sequence may be obtained by extracting the RGB channel values from all pictures in the sequence, averaging the RGB channel values per pixel, and reconstructing the resulting mean values into a mean image, which serves as the background model. The method for obtaining the picture mean value is not limited in this embodiment; those skilled in the art can determine it as needed.
And thirdly, obtaining a ship image training sample according to the background model.
Illustratively, according to the background model, the ship image training sample can be obtained by subtracting the current ship image from the established background model.
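The three steps above can be sketched as a per-pixel mean followed by a subtraction. This assumes the simple arithmetic mean described in the text; the function and variable names are illustrative.

```python
import numpy as np

def build_background_model(frames):
    """Per-pixel, per-channel mean over an (almost) ship-free picture sequence."""
    return np.stack(frames, axis=0).mean(axis=0)

# Three flat frames with values 90, 100, 110 average to a flat 100.
frames = [np.full((2, 2, 3), v) for v in (90.0, 100.0, 110.0)]
background = build_background_model(frames)
current = np.full((2, 2, 3), 150.0)
training_sample = current - background  # foreground residual after subtraction
```

Averaging over many frames suppresses transient objects and sensor noise, which is why the mean image approximates the true static background.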
According to the ship detection model training method based on the background elimination method, the average value of one picture sequence is used as the background model, the established background model is high in effectiveness, the actual background model can be restored better, and the accuracy of the ship detection model based on the background elimination method is improved.
As an optional implementation manner of the present application, the method for training a ship detection model based on background subtraction further includes:
firstly, a ship image test sample is obtained, wherein the ship image test sample is a ship image extracted according to a background elimination method.
Illustratively, it should be noted that the ship image test sample is different from the ship image training sample; the ship image test sample may be the remaining 30% of the ship image samples, with the ship image extracted by the background elimination of step S110, which is not described here again. The acquisition mode of the ship image test sample is not limited in this embodiment and can be determined as needed.
And secondly, obtaining a test result according to the ship image test sample and a ship detection model based on the background elimination method.
For example, the test result may be obtained from the ship image test samples and the ship detection model based on the background elimination method as follows. In one approach, all ship image test samples are input into the model; the intersection-over-union is computed between each predicted bounding box (with its detection result) output by the model and the corresponding marker, and the average of all intersection-over-union results is taken as the accuracy of the network's output. In another approach, the intersection-over-union is computed in the same way and each result is compared with a preset threshold, which may be 80%; when the threshold is met, the detection result for that test sample is considered accurate, and the ratio of accurately detected test samples to all test samples is taken as the accuracy of the network's output. The specific method for obtaining the test result is not limited in this embodiment and may be determined as needed.
Finally, judging whether the accuracy of the ship detection model based on the background elimination method is higher than a preset threshold value or not according to the test result; and when the accuracy of the ship detection model based on the background elimination method is higher than a preset threshold value, determining the ship detection model based on the background elimination method as an available ship detection model based on the background elimination method. The preset threshold may be 98%, and the size of the preset threshold is not limited in this embodiment and may be set as needed.
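The intersection-over-union computation underlying the accuracy test can be sketched as follows; the (x1, y1, x2, y2) box format is an assumption.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping in a 5x10 strip: IoU = 50 / 150 = 1/3.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A predicted box would then count as accurate when `score` meets the 80% threshold mentioned above, and the model is accepted when the overall accuracy exceeds the preset threshold.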
The test set provided by the embodiment is used for testing the accuracy of the trained neural network model, the neural network model meeting the accuracy rate condition is selected, the index for selecting the neural network model is given, and the verification and the selection of the neural network model are facilitated.
The embodiment provides a ship tracking method, as shown in fig. 2, including the following steps:
and S210, acquiring a ship video image, and extracting the ship image from the ship video image by using a background elimination method.
For example, the ship video images may be acquired by de-framing the video captured by the drone to obtain every frame, or by frame skipping, in which only video images a certain number of frames apart are kept, for example one video image every 3 frames. Extracting the ship image from the ship video image by the background subtraction method is as described in step S110 and is not repeated here. The method for acquiring the ship video image is not limited in this embodiment and can be determined as needed.
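The frame-skipping variant reduces to a simple index selection over the video's frames; the step of 3 matches the example above, and the function name is illustrative.

```python
def select_frame_indices(total_frames, step=3):
    """Frame skipping: keep one frame index every `step` frames, from 0."""
    return list(range(0, total_frames, step))

indices = select_frame_indices(10, step=3)  # frames 0, 3, 6, 9
```

With a video-reading library, only the frames at these indices would be decoded and passed to background elimination, trading temporal resolution for processing speed.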
S220, inputting the ship image into a preset ship detection model based on the background elimination method to obtain a ship detection result. The preset ship detection model based on the background elimination method is generated by training through the ship detection model training method based on the background elimination method of the above embodiment, and the ship detection result represents the position coordinate frame of each ship detected in the current video image; the remaining details are not repeated here.
And S230, correlating the ship detection results by using a target algorithm to obtain the running track of the ship.
For example, the ship detection results may be correlated by a target algorithm to obtain the running track of the ship as follows: divide the ship detection results of any two adjacent frames into two sets, search for the maximum matching between the two sets, and perform data association on the matching results to obtain the running track of the ship; or adopt an IoU algorithm, compute the intersection-over-union for the ship detection results of any two adjacent frames in turn, and perform data association on the two ship detection results with the largest intersection-over-union result, thereby obtaining the running track of the ship. The target algorithm is not limited in this embodiment and can be determined as needed.
According to the ship tracking method provided by the embodiment, the ship image subjected to background elimination is input into the ship detection model based on the background elimination method, and then the ship detection result is correlated according to the target algorithm, so that the ship can be quickly and accurately tracked.
As an optional implementation manner of the present application, the step S230 specifically includes:
first, the matching weight of the ship detection result of the current video image and each ship detection result of the next video image is obtained.
For example, the matching weight between the ship detection result of the current video image and each ship detection result of the next video image may be obtained by computing, in turn, the intersection-over-union between a given ship detection result of the current video image and each ship detection result of the next video image, and taking each result as the matching weight of the corresponding ship detection result in the next video image. For example: the position coordinate frame of a certain ship detected in the current video image is compared, by intersection-over-union, with each ship position coordinate frame of the next video image, and each of the resulting values is taken as the matching weight of the corresponding ship detection result in the next video image with respect to the ship detection result of the current video image. The matching weight acquisition mode is not limited in this embodiment and can be determined as needed.
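The IoU-based matching-weight computation between consecutive frames can be sketched as a small weight matrix; the box format and function names are assumptions.

```python
def iou(a, b):
    # Intersection-over-union for boxes in (x1, y1, x2, y2) form.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def matching_weights(current_boxes, next_boxes):
    # One row per current-frame detection, one column per next-frame detection.
    return [[iou(c, n) for n in next_boxes] for c in current_boxes]

w = matching_weights([(0, 0, 4, 4)], [(0, 0, 4, 4), (10, 10, 14, 14)])
```

Here the single current-frame box matches the identical next-frame box with weight 1.0 and the disjoint box with weight 0.0.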
And finally, selecting a ship detection result where the maximum value of the matching weight in the next video image is located, performing data association, reducing the matching weight when the ship detection result where the maximum value of the selected matching weight is located is associated, and reselecting the ship detection result where the maximum value of the reduced matching weight is located in the next video image to perform data association.
Exemplarily, the ship detection results in any two adjacent frames of the drone video are divided into two sets. As shown in Fig. 3, the previous set is denoted by 11, 12 and 13 and the next set by 21, 22 and 23; the previous set contains all ship detection results in the current video image and the next set all ship detection results in the next video image. Assume the matching weights of 11 with 21, 22 and 23 are known to be 0.8, 0.6 and 0, respectively; those of 12 with 21, 22 and 23 are 0, 0.3 and 0.9; and those of 13 with 21, 22 and 23 are 0.9, 0.8 and 0.
First, values are assigned to all ship detection results in both sets: each detection result in the previous set is assigned the maximum of its matching weights with the detection results of the next video image, i.e. 0.8, 0.9 and 0.9 respectively, and every detection result in the next set is assigned 0. Next, for each detection result in the previous set, a detection result in the next set for which the two assigned values sum to the matching weight is selected for data association; in the figure, 11 is associated with 21 and 12 with 23. Before associating, it is checked whether the detection result in the next set has already been associated; for example, when 13 is matched with 21, 21 is already associated with 11. The conflicting 11 and 13 in the first set then each have 0.1 subtracted from their assigned values, while 21 in the second set has 0.1 added. Data association is then performed again in the above manner, and 13 is associated with 22. The data association method provided by this embodiment of the invention matches by weight, which increases the accuracy of data association.
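The label-adjustment matching walked through above (assign each previous-set result its maximum weight, associate along pairs whose assigned values sum to the matching weight, and on conflict subtract 0.1 on the previous side while adding 0.1 on the next side) behaves like a Kuhn–Munkres assignment with a fixed adjustment step. A sketch under that reading, reproducing the 11/12/13 versus 21/22/23 example; a production implementation would use the exact slack rather than a fixed 0.1:

```python
EPS = 1e-9

def km_match(w, delta=0.1):
    """Kuhn–Munkres-style matching with a fixed label-adjustment step delta.
    w[i][j] is the matching weight between previous-set result i and
    next-set result j. Returns {previous index: next index}."""
    n, m = len(w), len(w[0])
    lx = [max(row) for row in w]   # previous-set values start at the max weight
    ly = [0.0] * m                 # next-set values start at 0
    match_r = [-1] * m             # next result j -> previous result i

    def dfs(u, vis_l, vis_r):
        vis_l.add(u)
        for v in range(m):
            if v in vis_r:
                continue
            if abs(lx[u] + ly[v] - w[u][v]) < EPS:  # values sum to the weight
                vis_r.add(v)
                if match_r[v] == -1 or dfs(match_r[v], vis_l, vis_r):
                    match_r[v] = u
                    return True
        return False

    for u in range(n):
        while True:
            vis_l, vis_r = set(), set()
            if dfs(u, vis_l, vis_r):
                break
            for i in vis_l:        # conflict: lower the previous-side values...
                lx[i] -= delta
            for j in vis_r:        # ...and raise the next-side values
                ly[j] += delta
    return {match_r[v]: v for v in range(m) if match_r[v] != -1}
```

With the example weights this yields the associations described in the text: 11-21, 12-23, 13-22.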
As an optional embodiment of the present application, obtaining the matching weight between the ship detection result of the current video image and each ship detection result of the next video image includes:
First, the motion parameters of the ship are obtained, and the ship's motion track is predicted from them to obtain a predicted position of the ship.
The motion parameters may be, for example, the direction of motion, velocity, acceleration, drag, and so on. They may be calculated from the ship's observed motion, or preset motion parameters of the ship may be used. One way to predict the motion track from the motion parameters is to construct a Kalman filter from them and use it for prediction. The specific way of obtaining the predicted position of the ship is not limited in this embodiment and may be chosen as required.
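A one-step sketch of the Kalman-filter prediction option mentioned above, under an assumed constant-velocity state [px, py, vx, vy]; the process-noise scale q is illustrative:

```python
import numpy as np

def predict_position(x, P, dt=1.0, q=1e-2):
    """One Kalman predict step for a constant-velocity motion model.
    x = [px, py, vx, vy] is the state, P its covariance; returns the
    predicted state and covariance (only the predict half of the filter)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)              # assumed isotropic process noise
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```

The first two components of `x_pred` give the predicted position used for the motion matching degree.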
Second, the motion matching degree is judged according to the predicted position of the ship and the ship detection result, and the appearance matching degree of adjacent ship detection results is judged according to the minimum cosine distance.
For example, the motion matching degree may be computed as the Mahalanobis distance between the predicted position and the detection result, or the intersection over union of the predicted position and the detection result may be used directly as its value. The appearance matching degree of adjacent detection results may be obtained through an appearance model (a ReID model): a deep network extracts unit-norm feature vectors, and the minimum cosine distance between the feature vectors is used as the value of the appearance matching degree. The specific ways of obtaining the motion matching degree and the appearance matching degree are not limited in this embodiment and may be chosen as required.
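The two matching degrees can be sketched as follows; the function names, the unit-norm assumption on the stored features, and the blend factor lam are illustrative assumptions:

```python
import numpy as np

def motion_match(pred_mean, pred_cov, det_xy):
    """Squared Mahalanobis distance between the predicted position and a
    detection; smaller means a better motion match (one of the two options
    the text mentions)."""
    d = np.asarray(det_xy, dtype=float) - np.asarray(pred_mean, dtype=float)
    return float(d @ np.linalg.inv(pred_cov) @ d)

def appearance_match(track_feats, det_feat):
    """Minimum cosine distance between a detection's appearance feature and
    the features stored for a track, as in a ReID-style appearance model.
    Features are assumed already unit-normalised by the deep network."""
    feats = np.asarray(track_feats, dtype=float)
    f = np.asarray(det_feat, dtype=float)
    cos_sim = feats @ f
    return float(1.0 - cos_sim.max())

def matching_weight(motion_d, appearance_d, lam=0.5):
    """Weighted combination of the two degrees (assumed blend factor lam)."""
    return lam * motion_d + (1.0 - lam) * appearance_d
```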
Finally, the matching weight is determined according to the motion matching degree and the appearance matching degree.
For example, the matching weight may be determined as a weighted sum of the motion matching degree and the appearance matching degree. The specific way of determining the matching weight is not limited in this embodiment and may be chosen as required.
Determining the matching weight from both the motion matching degree and the appearance matching degree takes the ship's motion as well as its appearance into account, so the resulting matches are more reliable.
The present embodiment provides a training apparatus for a ship detection model, as shown in Fig. 4, including:
a sample obtaining module 410, configured to obtain a ship image training sample, where the ship image training sample is a ship image extracted according to a background elimination method; the specific implementation is described for S110 of this embodiment and is not repeated here.
The vector obtaining module 420 is configured to train the neural network model according to the ship image training sample and obtain an output vector of the neural network model; the specific implementation is described for S120 of this embodiment and is not repeated here.
The loss calculation module 430 is configured to calculate the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector; the specific implementation is described for S130 of this embodiment and is not repeated here.
The gradient inversion module 440 is configured to perform gradient inversion on the loss; the specific implementation is described for S140 of this embodiment and is not repeated here.
The model building module 450 is configured to adjust the weight parameters of the neural network model according to the loss after gradient inversion and construct a ship detection model. The specific implementation is described for S150 of this embodiment and is not repeated here.
The training apparatus for a ship detection model based on the background elimination method provided by this embodiment extracts ship images from the training samples according to the background elimination method and trains the neural network model on those images, thereby constructing a ship detection model.
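As an illustration of the gradient inversion step the apparatus performs, a single parameter update in which the loss gradient is negated before being applied might look like the following. Whether the embodiment inverts all gradients or only those flowing through a sub-branch (as in a gradient-reversal layer for domain adaptation) is not specified, so this is an assumed reading:

```python
import numpy as np

def sgd_step_with_inversion(w, grad, lr=0.01, invert=True):
    """One SGD-style weight update where the loss gradient is first
    inverted (multiplied by -1), mirroring step S140 before the weight
    adjustment of step S150. The learning rate is an assumption."""
    g = -grad if invert else grad
    return w - lr * g
```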
As an optional embodiment of the present application, the sample acquiring module 410 includes:
a picture sequence intercepting module, configured to intercept a picture sequence in which the number of ships is lower than a first threshold; the specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
A background model establishing module, configured to compute the picture mean of the picture sequence and take the computed picture mean as a background model; the specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
A sample acquisition submodule, configured to obtain a ship image training sample according to the background model. The specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
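The mean-image background model and the foreground extraction it enables can be sketched as follows; the difference threshold of 30 grey levels is an assumption:

```python
import numpy as np

def mean_background(frames):
    """Background model = per-pixel mean of a frame sequence in which few
    or no ships appear, as the background model establishing module does."""
    return np.mean(np.stack(frames).astype(float), axis=0)

def extract_foreground(frame, background, thresh=30.0):
    """Binary foreground mask: pixels differing from the background model
    by more than `thresh` are treated as candidate ship pixels."""
    return np.abs(frame.astype(float) - background) > thresh
```

Cropping the frame around connected foreground regions then yields the ship image training samples.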
As an optional embodiment of the present application, the training apparatus for a ship detection model further includes:
a test sample acquisition module, configured to acquire a ship image test sample, where the ship image test sample is a ship image extracted according to the background elimination method; the specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
A test result acquisition module, configured to obtain a test result according to the ship image test sample and the ship detection model based on the background elimination method; the specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
A judging module, configured to judge, according to the test result, whether the accuracy of the ship detection model based on the background elimination method is higher than a preset threshold; the specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
A determining module, configured to determine the ship detection model based on the background elimination method as an available model when its accuracy is higher than the preset threshold. The specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
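The judging and determining modules amount to a simple accuracy check against the preset threshold; a sketch, with a 0.9 default threshold assumed:

```python
def model_is_usable(predictions, ground_truth, threshold=0.9):
    """Return True when the model's accuracy on the test samples exceeds
    the preset threshold (the 0.9 default is an assumed value)."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    return accuracy > threshold
```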
The present embodiment provides a ship tracking device based on unmanned aerial vehicle video, as shown in fig. 5, including:
a video image obtaining module 510, configured to obtain a ship video image and extract a ship image from the ship video image by using a background elimination method; the specific implementation is described for S210 of this embodiment and is not repeated here.
The detection module 520 is configured to input the ship image into a preset ship detection model based on the background elimination method to obtain a ship detection result, the preset ship detection model being generated by training with the ship detection model training method based on the background elimination method of the above embodiment; the specific implementation is described for S220 of this embodiment and is not repeated here.
The associating module 530 is configured to associate the ship detection results by using a target algorithm to obtain the running track of the ship. The specific implementation is described for S230 of this embodiment and is not repeated here.
The ship tracking device provided by this embodiment inputs the background-eliminated ship image into the ship detection model based on the background elimination method and then associates the ship detection results according to the target algorithm, so that the ship can be tracked quickly and accurately.
As an optional implementation manner of the present application, the association module 530 specifically includes:
the weight acquisition module is used for acquiring the matching weight of the ship detection result of the current video image and each ship detection result of the next video image; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The first data association module is used for selecting a ship detection result where the maximum value of the matching weight in the next video image is located, and performing data association; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
And the second data association module is configured to reduce the matching weight when the ship detection result with the maximum matching weight has already been associated, and to reselect the ship detection result with the maximum of the reduced matching weights in the next video image for data association. The specific implementation is described in the corresponding part of the method embodiment and is not repeated here.
As an optional embodiment of the present application, the weight obtaining module specifically includes:
the ship prediction acquisition module is used for acquiring the motion parameters of the ship, predicting the motion track of the ship according to the motion parameters and obtaining a predicted position of the ship; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The motion matching degree judging module is used for judging the motion matching degree according to the ship predicted position and the ship detection result; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The appearance matching degree judging module is used for judging the appearance matching degree of the detection results of the adjacent ships according to the minimum cosine distance; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
And the matching weight determining module is used for determining the matching weight according to the motion matching degree and the appearance matching degree. The specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The embodiment of the present application also provides an electronic device, as shown in fig. 6, including a processor 610 and a memory 620, where the processor 610 and the memory 620 may be connected by a bus or in other manners.
The processor 610 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 620, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the ship detection model training method or the ship tracking method based on the background elimination method in the embodiments of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory 620, the processor 610 executes the various functional applications and data processing, i.e., implements the above methods.
The memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the processor, and the like. Further, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 620 optionally includes memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 620 and, when executed by the processor 610, perform the ship detection model training method or the ship tracking method based on the background elimination method as in the embodiments shown in Figs. 1 and 2.
The details of the electronic device may be understood with reference to the corresponding related descriptions and effects in the embodiments shown in fig. 1 or fig. 2, and are not described herein again.
This embodiment also provides a computer storage medium storing computer-executable instructions that can execute the ship detection model training method or the ship tracking method based on the background elimination method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memory.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A ship detection model training method based on a background elimination method is characterized by comprising the following steps:
acquiring a ship image training sample, wherein the ship image training sample is a ship image extracted according to a background elimination method;
training a neural network model according to the ship image training sample to obtain an output vector of the neural network model;
calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector;
performing a gradient inversion on the loss;
and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
2. The method of claim 1, wherein the ship image training sample is a ship image extracted according to the background elimination method, comprising:
intercepting a picture sequence with the number of ships lower than a first threshold value;
calculating a picture mean value of the picture sequence, and taking the calculated picture mean value as a background model;
and obtaining a ship image training sample according to the background model.
3. The method of claim 1, further comprising:
acquiring a ship image test sample, wherein the ship image test sample is a ship image extracted according to the background elimination method;
obtaining a test result according to the ship image test sample and the ship detection model based on the background elimination method;
judging whether the accuracy of the ship detection model based on the background elimination method is higher than a preset threshold value or not according to the test result;
and when the accuracy of the ship detection model based on the background elimination method is higher than the preset threshold value, determining the ship detection model based on the background elimination method as an available ship detection model based on the background elimination method.
4. A ship tracking method, comprising the following steps:
acquiring a ship video image, and extracting a ship image from the ship video image by using a background elimination method;
inputting the ship image into a preset ship detection model based on a background elimination method to obtain a ship detection result; the preset ship detection model being generated by training with the ship detection model training method based on the background elimination method according to any one of claims 1 to 3;
and correlating the ship detection result by using a target algorithm to obtain the running track of the ship.
5. The ship tracking method of claim 4, wherein correlating the ship detection results by using a target algorithm to obtain the running track of the ship comprises:
acquiring the matching weight of the ship detection result of the current video image and each ship detection result of the next video image;
selecting the ship detection result where the maximum value of the matching weight in the next video image is located, and performing data association;
and when the ship detection result with the maximum matching weight has already been associated, reducing the matching weight, and reselecting the ship detection result with the maximum of the reduced matching weights in the next video image for data association.
6. The ship tracking method according to claim 5, wherein the obtaining of the matching weight of the ship detection result of the current video image and each ship detection result of the next video image comprises:
acquiring motion parameters of a ship, predicting the motion track of the ship according to the motion parameters, and obtaining a predicted position of the ship;
judging the motion matching degree according to the predicted position of the ship and the detection result of the ship;
judging the appearance matching degree of the detection results of the adjacent ships according to the minimum cosine distance;
and determining the matching weight according to the motion matching degree and the appearance matching degree.
7. A ship detection model training device based on a background elimination method, characterized by comprising:
the system comprises a sample acquisition module, a background elimination module and a background elimination module, wherein the sample acquisition module is used for acquiring a ship image training sample, and the ship image training sample is a ship image extracted according to the background elimination method;
the vector acquisition module is used for training a neural network model according to the ship image training sample to acquire an output vector of the neural network model;
the loss calculation module is used for calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector;
a gradient inversion module for performing gradient inversion on the loss;
and the model building module is used for adjusting the weight parameters of the neural network model according to the loss after gradient inversion and building a ship detection model.
8. A vessel tracking device, comprising:
the video image acquisition module is used for acquiring a ship video image and extracting the ship image from the ship video image by using a background elimination method;
the detection module is used for inputting the ship image into a preset ship detection model based on a background elimination method to obtain a ship detection result; the preset ship detection model is generated by training with the ship detection model training method based on the background elimination method according to any one of claims 1 to 3;
and the association module is used for associating the ship detection result by using a target algorithm to obtain the running track of the ship.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the ship detection model training method based on the background elimination method according to any one of claims 1 to 3 or the ship tracking method according to any one of claims 4 to 6.
10. A storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the ship detection model training method based on the background elimination method according to any one of claims 1 to 3 or the ship tracking method according to any one of claims 4 to 6.
CN201911381857.1A 2019-12-27 2019-12-27 Ship detection model training and ship tracking method based on background elimination method Pending CN111611836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911381857.1A CN111611836A (en) 2019-12-27 2019-12-27 Ship detection model training and ship tracking method based on background elimination method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911381857.1A CN111611836A (en) 2019-12-27 2019-12-27 Ship detection model training and ship tracking method based on background elimination method

Publications (1)

Publication Number Publication Date
CN111611836A true CN111611836A (en) 2020-09-01

Family

ID=72205383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911381857.1A Pending CN111611836A (en) 2019-12-27 2019-12-27 Ship detection model training and ship tracking method based on background elimination method

Country Status (1)

Country Link
CN (1) CN111611836A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258549A (en) * 2020-11-12 2021-01-22 珠海大横琴科技发展有限公司 Ship target tracking method and device based on background elimination
CN114972740A (en) * 2022-07-29 2022-08-30 上海鹰觉科技有限公司 Automatic ship sample collection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081801A (en) * 2011-01-26 2011-06-01 上海交通大学 Multi-feature adaptive fused ship tracking and track detecting method
CN106199583A (en) * 2016-06-30 2016-12-07 大连楼兰科技股份有限公司 Multi-target Data coupling and the method and system followed the tracks of
CN107818571A (en) * 2017-12-11 2018-03-20 珠海大横琴科技发展有限公司 Ship automatic tracking method and system based on deep learning network and average drifting
CN109460740A (en) * 2018-11-15 2019-03-12 上海埃威航空电子有限公司 The watercraft identification recognition methods merged based on AIS with video data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
业精于勤荒于嬉-行成于思而毁于随: "Region-proposal-based object detectors (two-stage) and single-shot object detectors (one-stage)", 《HTTPS://BLOG.CSDN.NET/M0_37644085/ARTICLE/DETAILS/86498787》 *

Similar Documents

Publication Publication Date Title
JP6893564B2 (en) Target identification methods, devices, storage media and electronics
CN109035304B (en) Target tracking method, medium, computing device and apparatus
CN107624189B (en) Method and apparatus for generating a predictive model
KR102661954B1 (en) A method of processing an image, and apparatuses performing the same
CN111602138B (en) Object detection system and method based on artificial neural network
CN110956646B (en) Target tracking method, device, equipment and storage medium
CN110176024B (en) Method, device, equipment and storage medium for detecting target in video
WO2016179808A1 (en) An apparatus and a method for face parts and face detection
CN111553182A (en) Ship retrieval method and device and electronic equipment
CN111931764A (en) Target detection method, target detection framework and related equipment
CN111062263A (en) Method, device, computer device and storage medium for hand pose estimation
JP6462528B2 (en) MOBILE BODY TRACKING DEVICE, MOBILE BODY TRACKING METHOD, AND MOBILE BODY TRACKING PROGRAM
CN110991385A (en) Method and device for identifying ship driving track and electronic equipment
WO2021090771A1 (en) Method, apparatus and system for training a neural network, and storage medium storing instructions
CN108491818B (en) Detection method, device and the electronic equipment of target object
CN111611836A (en) Ship detection model training and ship tracking method based on background elimination method
CN113348465A (en) Method, device, equipment and storage medium for predicting relevance of object in image
CN114556445A (en) Object recognition method, device, movable platform and storage medium
CN111695572A (en) Ship retrieval method and device based on convolutional layer feature extraction
CN111753590B (en) Behavior recognition method and device and electronic equipment
CN115797735A (en) Target detection method, device, equipment and storage medium
CN111553474A (en) Ship detection model training method and ship tracking method based on unmanned aerial vehicle video
CN111611835A (en) Ship detection method and device
KR20220130567A (en) Methods, apparatuses, devices, and storage medium for detecting correlated objects included in an image
CN111753775A (en) Fish growth assessment method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200901