CN111553474A - Ship detection model training method and ship tracking method based on unmanned aerial vehicle video - Google Patents


Info

Publication number
CN111553474A
Authority
CN
China
Prior art keywords
ship
image
detection model
training
ship detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911319571.0A
Other languages
Chinese (zh)
Inventor
邓练兵
薛剑
陈金鹿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Dahengqin Technology Development Co Ltd
Original Assignee
Zhuhai Dahengqin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Dahengqin Technology Development Co Ltd filed Critical Zhuhai Dahengqin Technology Development Co Ltd
Priority to CN201911319571.0A priority Critical patent/CN111553474A/en
Publication of CN111553474A publication Critical patent/CN111553474A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches

Abstract

The invention provides a ship detection model training method and a ship tracking method based on unmanned aerial vehicle video. The ship tracking method based on the unmanned aerial vehicle video comprises the following steps: acquiring a video image captured by an unmanned aerial vehicle; inputting the video image into a preset ship detection model to obtain a ship detection result, the preset ship detection model being generated by training with the ship detection model training method; and associating the ship detection results by using a target algorithm to obtain the running track of the ship. By implementing the method, ship operation video is captured by the unmanned aerial vehicle and the captured ship video images are input into a neural network model for detection, giving high detection speed and wide coverage and overcoming the problems of high cost and small range in existing ship tracking. Moreover, detecting ships with a model trained by the ship detection model training method provided by the invention improves detection accuracy in complex environments.

Description

Ship detection model training method and ship tracking method based on unmanned aerial vehicle video
Technical Field
The invention relates to the field of computer vision, in particular to a ship detection model training method and a ship tracking method based on unmanned aerial vehicle video.
Background
Ships are an important means of transportation and are used in many fields, so monitoring and tracking them is necessary to ensure their safety. In the related art, ships are tracked and monitored by satellite tracking and positioning and by the AIS system. Satellite tracking and positioning is expensive, and the transmission range of AIS signals is limited, which restricts the range over which ships can be tracked, so a new ship tracking method needs to be found.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defects of high cost and small tracking range of ship tracking in the prior art, for which purpose a ship detection model training method and a ship tracking method based on unmanned aerial vehicle video are provided.
According to a first aspect, an embodiment of the present invention provides a method for training a ship detection model, including: obtaining a ship image training sample, the ship image training sample comprising a positive sample image with a ship and a negative sample image without a ship; training a neural network model according to the ship image training sample to obtain an output vector of the neural network model; calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector; performing a gradient inversion on the loss; and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
With reference to the first aspect, in a first implementation manner of the first aspect, the method for training the ship detection model further includes: obtaining a ship image test sample comprising a positive sample image with a ship and a negative sample image without a ship; obtaining a test result according to the ship image test sample and the ship detection model; judging whether the accuracy of the ship detection model is higher than a preset threshold value or not according to the test result; and if the accuracy of the ship detection model is higher than the preset threshold value, determining the ship detection model as an available ship detection model.
According to a second aspect, an embodiment of the present invention provides a method for tracking a ship based on a video of an unmanned aerial vehicle, including the following steps: acquiring a video image acquired by an unmanned aerial vehicle; inputting the video image into a preset ship detection model to obtain a ship detection result; the preset ship detection model is generated by training through the first aspect or the training method of the ship detection model according to any one of the embodiments of the first aspect; and correlating the ship detection result by using a target algorithm to obtain the running track of the ship.
With reference to the second aspect, in a first implementation manner of the second aspect, after the obtaining of the video image captured by the drone and before inputting the video image into the preset ship detection model, the ship tracking method based on the unmanned aerial vehicle video further includes: performing image enhancement and denoising processing on the video image.
With reference to the second aspect, in a second implementation manner of the second aspect, the step of associating the ship detection results by using a target algorithm to obtain the running track of the ship includes: acquiring the matching weight of the ship detection result of the current video image with each ship detection result of the next video image; selecting the ship detection result in the next video image at which the matching weight is largest and performing data association; and when the selected ship detection result at which the matching weight is largest has already been associated, reducing the matching weight and reselecting the ship detection result in the next video image at which the reduced matching weight is largest for data association.
With reference to the second implementation aspect of the second aspect, in a third implementation manner of the second aspect, the obtaining a matching weight of the ship detection result of the current video image and each ship detection result of a next video image includes: acquiring motion parameters of a ship, predicting the motion track of the ship according to the motion parameters, and obtaining a predicted position of the ship; judging the motion matching degree according to the predicted position of the ship and the detection result of the ship; judging the appearance matching degree of the detection results of the adjacent ships according to the minimum cosine distance; and determining the matching weight according to the motion matching degree and the appearance matching degree.
According to a third aspect, an embodiment of the present invention provides a training device for a ship detection model, including: the sample acquisition module is used for acquiring a ship image training sample; the vector acquisition module is used for training a neural network model according to the ship image training sample to acquire an output vector of the neural network model; the loss calculation module is used for calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector; a gradient inversion module for performing gradient inversion on the loss; and the model building module is used for adjusting the weight parameters of the neural network model according to the loss after gradient inversion and building a ship detection model.
According to a fourth aspect, an embodiment of the present invention provides a ship tracking apparatus based on an unmanned aerial vehicle video, including: the video image acquisition module is used for acquiring a video image acquired by the unmanned aerial vehicle; the training module is used for inputting the video image to a preset ship detection model to obtain a ship detection result; the preset ship detection model is generated by training through the first aspect or the training method of the ship detection model according to any one of the embodiments of the first aspect; and the association module is used for associating the ship detection result by using a target algorithm to obtain the running track of the ship.
According to a fifth aspect, embodiments of the present invention provide an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the ship detection model training method of the first aspect or any of the embodiments of the first aspect or the ship tracking method based on unmanned aerial vehicle video of the second aspect or any of the embodiments of the second aspect.
According to a sixth aspect, an embodiment of the present invention provides a storage medium, on which computer instructions are stored, which when executed by a processor, implement the steps of the ship detection model training method according to the first aspect or any of the embodiments of the first aspect, or the ship tracking method based on the drone video according to any of the embodiments of the second aspect or the second aspect.
The technical scheme of the invention has the following advantages:
1. According to the ship detection model training method provided by this embodiment, gradient inversion of the loss creates an adversarial training effect, and the gradient inversion operation is applied at both the image level and the region-of-interest level. The trained neural network model can therefore learn not only feature data of different ship types but also feature data of the light and brightness conditions of drone video images under different weather conditions, giving it stronger adaptability to external factors.
2. The test set provided by this embodiment is used to test the accuracy of the trained neural network model, and a model meeting the accuracy condition is selected; this provides an index for selecting the neural network model and facilitates its verification and selection.
3. The ship tracking method based on the unmanned aerial vehicle video provided by this embodiment inputs the drone video into a neural network model for detection, giving high detection speed and a wide detection range. Since only a drone is used for tracking, the cost is low.
4. The matching weight is determined from both the motion matching degree and the appearance matching degree, so that both the motion of the ship and its appearance are taken into account and the resulting matching weight yields a better match.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a specific example of a ship detection model training method according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific example of a method for vessel tracking based on drone video according to an embodiment of the present invention;
fig. 3 is a diagram of a specific example of a method for tracking a ship based on a video of an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a specific example of a ship detection model training apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a specific example of a drone video based vessel tracking device in an embodiment of the present invention;
fig. 6 is a schematic block diagram of a specific example of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment provides a training method of a ship detection model, as shown in fig. 1, including the following steps:
s110: a ship image training sample is obtained, the ship image training sample including a positive sample image with a ship and a negative sample image without a ship.
For example, the ship image samples may be acquired by extracting frames from video captured by the drone, or from a network database. The ship image training sample may be a randomly divided part of the acquired ship image samples; for example, 70% of the ship image samples are randomly selected as the ship image training sample. The acquired training samples should ensure data diversity; for example, the ship types, appearances and image backgrounds in the training samples should be diverse. This embodiment does not limit how the ship image training sample is acquired, which may be determined as needed.
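As an illustrative sketch only (not part of the claimed method), the random 70%/30% division described above can be expressed as follows; the function name, the fixed seed and the use of sample identifiers are assumptions made for illustration:

```python
import random

def split_samples(samples, train_fraction=0.7, seed=0):
    """Randomly divide ship image samples into training and test sets."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for the sketch
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# e.g. 100 sample identifiers (image paths or indices)
train_set, test_set = split_samples(range(100))
```

The remaining 30% can later serve as the test sample described in the testing steps below.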
S120: and training the neural network model according to the ship image training sample to obtain an output vector of the neural network model.
For example, the neural network model may be trained on the ship image training samples as follows: the ship positions in the training samples are annotated, the annotated samples are input into the neural network, and with minimisation of the loss function as the constraint condition, a gradient descent method is used to adjust the parameters and weights of each convolutional layer, pooling layer, fully connected layer, classification layer and so on, so that the neural network model can detect ships. This embodiment does not limit the neural network training method, which may be determined as needed.
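The step above seeks the minimum of the loss function by gradient descent. A minimal single-parameter sketch (a toy quadratic loss, not the patent's actual network or loss function) illustrates the update rule:

```python
def gradient_descent_step(w, grad, lr=0.1):
    """One gradient descent update: move the weight against the gradient."""
    return w - lr * grad

# toy loss(w) = (w - 3)**2 with gradient 2*(w - 3); the minimiser is w = 3
w = 0.0
for _ in range(100):
    w = gradient_descent_step(w, 2 * (w - 3))
```

In the embodiment the same rule would be applied to every weight of the convolutional, pooling, fully connected and classification layers.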
Optionally, in some embodiments of the invention, the output vector of the neural network model may be obtained by using the convolutional layers to compute the texture of each pixel in the ship image training sample and merging nearby pixels to obtain candidate image regions, then inputting the image information of the candidate regions into the fully connected layer of the neural network to obtain the output vector; or by setting up a region proposal network that takes the output feature map of the first convolutional network as input, sliding a 3 × 3 convolution kernel over the feature map to construct class-agnostic candidate regions, and inputting the candidate regions into a separate fully connected layer to obtain the output vector. The output vector represents the predicted bounding boxes and the detection result within each predicted bounding box, i.e. the presence or absence of the detection target. This embodiment does not limit which neural network model is used, which may be determined as needed.
S130: and calculating the loss of the neural network model according to the actual result and the output vector corresponding to the ship image training sample.
For example, the calculation method for calculating the loss of the neural network model according to the actual result and the output vector corresponding to the ship image training sample may be:
Loss = (1/T) · Σ_{t=1}^{T} ℓ(x^{(t)}, f(x^{(t)}))

wherein x^{(t)} represents the actual result corresponding to the t-th input ship image training sample, f represents the trained neural network model, f(x^{(t)}) represents the output of the model for the t-th sample, ℓ denotes a per-sample loss term, and T represents the number of training samples. The specific calculation method of the loss is not limited in this embodiment and may be determined as needed.
S140: the losses are gradient reversed.
Illustratively, the gradient inversion may be performed by calling a gradient inversion function, whose effect is to invert the back-propagated loss so that the training objectives of the network before and after the gradient inversion function are opposite, producing an adversarial effect. In this embodiment, the gradient inversion operation is performed at the image level and at the region-of-interest level. The image level represents the varied backgrounds in the ship image training samples, for example the weather conditions and the specific light and brightness seen by the drone; the region-of-interest level represents the varied ship types, for example civil ships and military ships.
S150: and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
Illustratively, the weight parameters of the neural network model may be adjusted according to the loss after gradient inversion as follows: during forward propagation through the gradient inversion layer, the loss of the model at each layer is recorded; during backward propagation, the loss passing through the gradient inversion layer is multiplied by -λ; each layer of the network then computes its gradient from the returned loss and updates its weight parameters, thereby constructing the ship detection model.
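A minimal sketch of the gradient inversion layer described above (commonly called a gradient reversal layer: identity in the forward pass, gradients multiplied by -λ in the backward pass). The class, method names and the value of λ are illustrative assumptions, not the patent's implementation:

```python
class GradientInversion:
    """Pass features through unchanged; scale gradients by -lambda."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # forward propagation: identity, losses recorded as usual

    def backward(self, grad):
        return -self.lam * grad  # backward propagation: loss multiplied by -lambda

layer = GradientInversion(lam=0.5)
```

Placed between a shared feature extractor and a classifier head, such a layer makes the two sides train toward opposite objectives, which is the adversarial effect the embodiment describes.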
According to the ship detection model training method provided by this embodiment, gradient inversion of the loss creates an adversarial training effect, and the gradient inversion operation is applied at both the image level and the region-of-interest level. The trained neural network model can therefore learn not only feature data of different ship types but also feature data of the light and brightness conditions of drone video images under different weather conditions, giving it stronger adaptability to external factors.
In order to test the performance of the trained neural network model, as an optional implementation manner of the present application, the ship detection model training method according to the embodiment of the present invention further includes:
first, a ship image test sample is obtained, which includes a positive sample image with a ship and a negative sample image without a ship.
For example, the ship image test sample may be the randomly divided remainder of the ship image samples acquired in S110, e.g. the remaining 30% of the samples, which do not overlap with the ship image training sample. This embodiment does not limit how the ship image test sample is acquired, which may be determined as needed.
And secondly, obtaining a test result according to the ship image test sample and the ship detection model.
For example, the test result may be obtained from the ship image test samples and the ship detection model as follows: all test samples are input into the ship detection model, the intersection-over-union is computed between each predicted bounding box (with its detection result) output by the model and the corresponding annotation, and the mean of all intersection-over-union results is taken as the accuracy of the network's output. Alternatively, the intersection-over-union may be computed between each predicted bounding box and its annotation, each result compared against a preset threshold, a test sample counted as accurately detected when the threshold is met, and the proportion of accurately detected test samples among all test samples taken as the accuracy of the network's output. This embodiment does not limit which method is used to obtain the test result, which may be determined as needed.
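The intersection-over-union comparison and the second, threshold-based accuracy measure described above can be sketched as follows; the (x1, y1, x2, y2) box format and the function names are assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def accuracy(predicted_boxes, labelled_boxes, threshold=0.5):
    """Proportion of predictions whose IoU with the label meets the threshold."""
    hits = sum(iou(p, l) >= threshold
               for p, l in zip(predicted_boxes, labelled_boxes))
    return hits / len(predicted_boxes)
```

A model would then be accepted only if `accuracy` exceeds the preset threshold of the following step.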
And finally, judging whether the accuracy of the ship detection model is higher than a preset threshold value or not according to the test result, and if the accuracy of the ship detection model is higher than the preset threshold value, determining the ship detection model as an available ship detection model. The preset threshold may be 98%, and the size of the preset threshold is not limited in this embodiment and may be set as needed.
The test set provided by this embodiment is used to test the accuracy of the trained neural network model, and a model meeting the accuracy condition is selected; this provides an index for selecting the neural network model and facilitates its verification and selection.
The embodiment provides a ship tracking method based on unmanned aerial vehicle video, as shown in fig. 2, including the following steps:
s210, acquiring a video image acquired by the unmanned aerial vehicle.
For example, the video captured by the drone may be decomposed into frames so that every frame is used as a video image, or frame skipping may be applied so that video images a certain number of frames apart are used, for example one video image every 3 frames. This embodiment does not limit how the video images captured by the drone are obtained, which may be determined as required.
S220, inputting the video image into a preset ship detection model to obtain a ship detection result; the preset ship detection model is generated by training with the ship detection model training method of the above embodiment, and the ship detection result represents the coordinate frame of the position at which a ship is detected in the current video image; details already described above are not repeated here.
And S230, correlating the ship detection results by using a target algorithm to obtain the running track of the ship.
Exemplarily, the ship detection results may be associated using a target algorithm to obtain the running track of the ship as follows: the ship detection results in any two adjacent video images captured by the drone are divided into two sets, the maximum matching between the two sets is found, and the matched results are data-associated to obtain the running track of the ship. Alternatively, an IoU algorithm may be used: the ship detection results in any two adjacent video images captured by the drone are compared in turn by intersection-over-union, and the two ship detection results with the largest intersection-over-union are data-associated to obtain the running track of the ship. This embodiment does not limit the target algorithm, which may be determined as required.
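The IoU-based alternative described above, which links each detection in the current frame to the next-frame detection with the largest intersection-over-union, might be sketched as follows (the box format and function names are assumptions; the maximum-matching alternative would replace the greedy loop with an optimal assignment):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(current_frame, next_frame):
    """Greedily link each current detection to the unused next-frame
    detection with the highest IoU, extending the track by one step."""
    links, used = {}, set()
    for i, det in enumerate(current_frame):
        score, j = max(((iou(det, nxt), j)
                        for j, nxt in enumerate(next_frame) if j not in used),
                       default=(0.0, None))
        if j is not None and score > 0:
            links[i] = j
            used.add(j)
    return links
```

Chaining `associate` over consecutive frames yields the running track of each ship.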
The ship tracking method based on the unmanned aerial vehicle video provided by this embodiment inputs the drone video into a neural network model for detection, giving high detection speed and a wide detection range. Moreover, since only a drone is used for tracking, the cost is low.
As an optional implementation manner of the present application, after step S210 and before step S220, the ship tracking method based on the unmanned aerial vehicle video according to the embodiment of the invention further includes: performing image enhancement and denoising processing on the video image. This helps to further reduce the influence of noise and weather conditions in the drone data on the detection result.
As an optional implementation manner of the present application, the step S230 specifically includes:
first, the matching weight of the ship detection result of the current video image and each ship detection result of the next video image is obtained.
For example, the matching weight of the ship detection result of the current video image with each ship detection result of the next video image may be obtained by computing, in turn, the intersection-over-union between a given ship detection result of the current video image and each ship detection result of the next video image, and taking each result as the matching weight of the corresponding ship detection result in the next video image with respect to the detection result of the current video image. For example: the coordinate frame of a ship detected in the current video image is compared by intersection-over-union with each ship coordinate frame of the next video image, and each of the resulting values serves as the matching weight of the corresponding ship detection result in the next video image. This embodiment does not limit how the matching weight is obtained, which may be determined as required.
Finally, the ship detection result in the next video image at which the matching weight is largest is selected and data-associated; when that ship detection result has already been associated, the matching weight is reduced and the ship detection result in the next video image at which the reduced matching weight is largest is reselected for data association.
Exemplarily, the ship detection results in any two adjacent video images captured by the drone are divided into two sets, as shown in fig. 3: detections 11, 12 and 13 form the former set and detections 21, 22 and 23 the latter set. The former set contains all ship detection results in the current video image and the latter set all ship detection results in the next video image. Suppose the matching weights of 11 with 21, 22 and 23 are known to be 0.8, 0.6 and 0 respectively; the matching weights of 12 with 21, 22 and 23 are 0, 0.3 and 0.9 respectively; and the matching weights of 13 with 21, 22 and 23 are 0.9, 0.8 and 0 respectively.
First, values are assigned: each ship detection result in the former set is assigned the maximum of its matching weights with the next video image, namely 0.8, 0.9 and 0.9 respectively, and every ship detection result in the latter set is assigned 0. Next, each detection in the former set is associated with the detection in the latter set whose matching weight equals that assigned value: in the corresponding figure, 11 is associated with 21 and 12 with 23. Before each association, it is checked whether the detection in the latter set has already been associated; for example when 13 seeks 21, 21 is already associated with 11. The conflicting 11 and 13 then each have 0.1 subtracted in the former set, and correspondingly 21 has 0.1 added in the latter set. Data association is then performed again in the same manner, yielding the association of 13 with 22. The data association method provided by the embodiment of the invention matches via weights, increasing the accuracy of data association.
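For this small example, the outcome of the weight-based association above can be checked against an exhaustive maximum-weight assignment. The code below is an illustrative check rather than the iterative scheme of the embodiment, but on these weights both arrive at the same pairing (11 with 21, 12 with 23, 13 with 22):

```python
from itertools import permutations

# matching weights from the example: rows are detections 11, 12, 13 of the
# current frame; columns are detections 21, 22, 23 of the next frame
W = [[0.8, 0.6, 0.0],
     [0.0, 0.3, 0.9],
     [0.9, 0.8, 0.0]]

def best_assignment(weights):
    """Brute-force the one-to-one assignment with maximum total weight
    (feasible for this 3 x 3 example; larger problems would use the
    Hungarian algorithm instead)."""
    n = len(weights)
    return max(permutations(range(n)),
               key=lambda p: sum(weights[i][p[i]] for i in range(n)))

assignment = best_assignment(W)  # assignment[i] = matched column for row i
```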
As an optional embodiment of the present application, obtaining a matching weight of a ship detection result of a current video image and each ship detection result of a next video image includes:
First, the motion parameters of the ship are obtained, and the ship motion track is predicted according to these parameters to obtain a predicted position of the ship.
The motion parameters may be, for example, the direction of motion, velocity, acceleration, and resistance. They can be calculated from the motion of the ship across the unmanned aerial vehicle video images, or they can be preset ship motion parameters. One way of predicting the ship motion track from the motion parameters is to construct a Kalman filter from them and let the filter produce the predicted position. This embodiment does not limit the specific manner of obtaining the predicted position of the ship, which can be chosen as needed.
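As a hedged illustration of the prediction step, a constant-velocity predict step can be written in pure Python. The `dt` and `q` parameters are hypothetical; a full Kalman filter would propagate the covariance as P = F P Fᵀ + Q and run a correction step against the actual detections, which this sketch omits.

```python
def kf_predict(state, cov, dt=1.0, q=0.01):
    """One constant-velocity Kalman predict step (simplified sketch).

    state: [x, y, vx, vy] -- position and velocity of the ship.
    cov:   diagonal of the state covariance (uncertainty per component).
    dt:    time step between frames; q: process-noise increment.
    """
    x, y, vx, vy = state
    # state transition: position advances by velocity * dt
    pred = [x + vx * dt, y + vy * dt, vx, vy]
    # inflate the covariance diagonal to reflect motion uncertainty
    # (a full implementation computes P = F P F^T + Q instead)
    pred_cov = [c + q for c in cov]
    return pred, pred_cov

# a ship at the origin moving 2 px/frame right and 1 px/frame down
pos, unc = kf_predict([0.0, 0.0, 2.0, 1.0], [1.0, 1.0, 1.0, 1.0])
print(pos)  # predicted position and velocity after one frame
```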
Second, the motion matching degree is judged according to the predicted position of the ship and the ship detection result, and the appearance matching degree of adjacent ship detection results is judged according to the minimum cosine distance.
For example, the motion matching degree between the predicted ship position and the ship detection result may be computed with the Mahalanobis distance, or the intersection-over-union of the predicted position and the detection result may be used directly as the motion matching degree. The appearance matching degree of adjacent ship detection results can be obtained with an appearance model (a ReID model): a deep network extracts unit-norm feature vectors, and the minimum cosine distance between the feature vectors serves as the appearance matching degree. This embodiment does not limit the specific manner of obtaining the motion matching degree and the appearance matching degree, which can be chosen as needed.
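The two matching degrees can be illustrated with a minimal intersection-over-union and cosine-distance computation. This is plain Python under an assumed box format `(x1, y1, x2, y2)`; a Mahalanobis-distance variant would additionally need the predicted state covariance.

```python
import math

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def cosine_distance(u, v):
    """1 - cosine similarity between two appearance feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# IoU of a predicted box against a detected box; distance between features
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))      # overlap 1, union 7
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # orthogonal features
```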
Finally, the matching weight is determined according to the motion matching degree and the appearance matching degree.
For example, the matching weight may be determined as a weighted sum of the motion matching degree and the appearance matching degree. This embodiment does not limit the specific method of determining the matching weight, which can be chosen as needed.
Determining the matching weight from both the motion matching degree and the appearance matching degree takes the motion of the ship as well as its appearance into account, so the resulting matching weight reflects the true correspondence more reliably.
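Under the weighted-sum reading of the embodiment, the combination step reduces to a one-liner. The trade-off coefficient `lam` is a hypothetical parameter, not a value given in the patent.

```python
def matching_weight(motion_deg, appearance_deg, lam=0.5):
    """Weighted sum of the motion and appearance matching degrees.

    lam in [0, 1] trades motion evidence against appearance evidence;
    its value would be tuned on data, not fixed by the method itself.
    """
    return lam * motion_deg + (1.0 - lam) * appearance_deg

# equal trust in motion and appearance cues
print(matching_weight(0.8, 0.6))
```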
The present embodiment provides a training apparatus for a ship detection model, as shown in fig. 4, including:
a sample acquisition module 410, configured to acquire ship image training samples; the specific implementation is described in S110 of this embodiment and is not repeated here.
A vector acquisition module 420, configured to train the neural network model on the ship image training samples and obtain the output vector of the neural network model; the specific implementation is described in S120 of this embodiment and is not repeated here.
A loss calculation module 430, configured to calculate the loss of the neural network model according to the output vector and the actual result corresponding to the ship image training samples; the specific implementation is described in S130 of this embodiment and is not repeated here.
A gradient inversion module 440, configured to perform gradient inversion on the loss; the specific implementation is described in S140 of this embodiment and is not repeated here.
A model building module 450, configured to adjust the weight parameters of the neural network model according to the loss after gradient inversion and build the ship detection model; the specific implementation is described in S150 of this embodiment and is not repeated here.
In the ship detection model training apparatus provided by this embodiment, the loss undergoes gradient inversion to create an adversarial-training effect. The gradient inversion operation is applied at both the image level and the region-of-interest level, so the trained neural network model learns the feature data of different ship types as well as the feature data of the unmanned aerial vehicle video images under different weather, lighting and brightness conditions, making the trained model more adaptable to external factors.
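Conceptually, the gradient reversal described above is an identity map in the forward pass whose backward pass negates (and optionally scales) the incoming gradient. The following framework-free sketch shows only that sign flip; `lam` is a hypothetical scaling factor, and a real implementation would register this as a custom autograd operation in the training framework.

```python
class GradReverse:
    """Gradient-reversal layer sketch: identity forward, negated backward.

    Placed between a feature extractor and a domain classifier, the
    reversed gradient pushes the extractor toward features the domain
    classifier cannot separate -- the adversarial-training effect.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # scaling factor for the reversed gradient

    def forward(self, activations):
        # identity: activations pass through unchanged
        return activations

    def backward(self, grad):
        # negate (and scale) every gradient component flowing back
        return [-self.lam * g for g in grad]

layer = GradReverse(lam=1.0)
print(layer.forward([1.0, 2.0]))    # unchanged in the forward pass
print(layer.backward([0.5, -2.0]))  # sign-flipped in the backward pass
```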
As an optional embodiment of the present application, the training apparatus for a ship detection model further includes:
the device comprises a test sample acquisition module, a ship image test sample acquisition module and a ship image test sample acquisition module, wherein the ship image test sample acquisition module is used for acquiring a ship image test sample which comprises a positive sample image with a ship and a negative sample image without the ship. The specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
A test result acquisition module, configured to obtain a test result according to the ship image test sample and the ship detection model; a judging module, configured to judge, according to the test result, whether the accuracy of the ship detection model is higher than a preset threshold; and a determining module, configured to determine the ship detection model as an available ship detection model if its accuracy is higher than the preset threshold. The specific implementation is described in the corresponding part of the method of this embodiment and is not repeated here.
The present embodiment provides a ship tracking device based on unmanned aerial vehicle video, as shown in fig. 5, including:
a video image acquisition module 510, configured to acquire the video image captured by the unmanned aerial vehicle; the specific implementation is described in S210 of this embodiment and is not repeated here.
A training module 520, configured to input the video image into a preset ship detection model to obtain a ship detection result, the preset ship detection model being generated by training with the ship detection model training method of this embodiment; the specific implementation is described in S220 of this embodiment and is not repeated here.
An association module 530, configured to associate the ship detection results by using a target algorithm to obtain the running track of the ship; the specific implementation is described in S230 of this embodiment and is not repeated here.
The unmanned-aerial-vehicle-video-based ship tracking device provided by this embodiment feeds the drone video into the neural network model for detection, giving a fast detection speed and a wide detection range. Moreover, since only an unmanned aerial vehicle is used for tracking, the cost is low.
As an optional implementation manner of the present application, the vessel tracking apparatus based on the video of the unmanned aerial vehicle further includes: and the image processing module is used for carrying out image enhancement and denoising processing on the video image. The specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
As an optional embodiment of the present application, the association module specifically includes:
the weight acquisition module is used for acquiring the matching weight of the ship detection result of the current video image and each ship detection result of the next video image; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The first data association module is used for selecting a ship detection result where the maximum value of the matching weight in the next video image is located, and performing data association; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
And the second data association module is used for reducing the matching weight when the ship detection result where the maximum value of the selected matching weight is located is associated, reselecting the ship detection result where the maximum value of the selected matching weight is located in the next video image, and performing data association. The specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
As an optional embodiment of the present application, the weight obtaining module specifically includes:
the ship prediction acquisition module is used for acquiring the motion parameters of the ship, predicting the motion track of the ship according to the motion parameters and obtaining a predicted position of the ship; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The motion matching degree judging module is used for judging the motion matching degree according to the ship predicted position and the ship detection result; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The appearance matching degree judging module is used for judging the appearance matching degree of the detection results of the adjacent ships according to the minimum cosine distance; the specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
And the matching weight determining module is used for determining the matching weight according to the motion matching degree and the appearance matching degree. The specific implementation manner is shown in the corresponding part of the method of the embodiment, and is not described herein again.
The embodiment of the present application also provides an electronic device, as shown in fig. 6, including a processor 610 and a memory 620, where the processor 610 and the memory 620 may be connected by a bus or in other manners.
The processor 610 may be a Central Processing Unit (CPU). The processor 610 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 620, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the ship detection model training method or the drone-video-based ship tracking method in the embodiments of the present invention. The processor 610 executes various functional applications and performs data processing by running the non-transitory software programs, instructions, and modules stored in the memory 620.
The memory 620 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 620 optionally includes memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 620 and, when executed by the processor 610, perform a ship detection model training method or a drone video based ship tracking method as in the embodiment shown in fig. 1.
The details of the electronic device may be understood with reference to the corresponding related descriptions and effects in the embodiments shown in fig. 1 or fig. 2, and are not described herein again.
The embodiment also provides a computer storage medium, wherein the computer storage medium stores computer-executable instructions that can execute the ship detection model training method or the drone-video-based ship tracking method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the above kinds.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A ship detection model training method is characterized by comprising the following steps:
obtaining a ship image training sample, the ship image training sample comprising a positive sample image with a ship and a negative sample image without a ship;
training a neural network model according to the ship image training sample to obtain an output vector of the neural network model;
calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector;
performing a gradient inversion on the loss;
and adjusting the weight parameters of the neural network model according to the loss after gradient inversion to construct a ship detection model.
2. The ship detection model training method of claim 1, further comprising:
obtaining a ship image test sample comprising a positive sample image with a ship and a negative sample image without a ship;
obtaining a test result according to the ship image test sample and the ship detection model;
judging whether the accuracy of the ship detection model is higher than a preset threshold value or not according to the test result;
and if the accuracy of the ship detection model is higher than the preset threshold value, determining the ship detection model as an available ship detection model.
3. A ship tracking method based on unmanned aerial vehicle video is characterized by comprising the following steps:
acquiring a video image acquired by an unmanned aerial vehicle;
inputting the video image into a preset ship detection model to obtain a ship detection result; the preset ship detection model is generated by training through the training method of the ship detection model according to claim 1 or 2;
and correlating the ship detection result by using a target algorithm to obtain the running track of the ship.
4. The unmanned aerial vehicle video-based ship tracking method according to claim 3, wherein after the acquiring the video image captured by the unmanned aerial vehicle and before inputting the video image into a preset ship detection model, the method further comprises:
and carrying out image enhancement and denoising processing on the video image.
5. The unmanned aerial vehicle video-based ship tracking method of claim 3, wherein the step of correlating the ship detection results with a target algorithm to obtain the running track of the ship comprises:
acquiring the matching weight of the ship detection result of the current video image and each ship detection result of the next video image;
selecting the ship detection result where the maximum value of the matching weight in the next video image is located, and performing data association;
and when the ship detection result where the maximum value of the selected matching weight is located is associated, reducing the matching weight, and reselecting the ship detection result where the maximum value of the selected matching weight is located in the next video image to perform data association.
6. The drone video-based vessel tracking method of claim 5, wherein the obtaining of the matching weight of the vessel detection result of a current video image and each of the vessel detection results of a next video image comprises:
acquiring motion parameters of a ship, predicting the motion track of the ship according to the motion parameters, and obtaining a predicted position of the ship;
judging the motion matching degree according to the predicted position of the ship and the detection result of the ship;
judging the appearance matching degree of the detection results of the adjacent ships according to the minimum cosine distance;
and determining the matching weight according to the motion matching degree and the appearance matching degree.
7. A training device for a ship detection model is characterized by comprising:
the sample acquisition module is used for acquiring a ship image training sample;
the vector acquisition module is used for training a neural network model according to the ship image training sample to acquire an output vector of the neural network model;
the loss calculation module is used for calculating the loss of the neural network model according to the actual result corresponding to the ship image training sample and the output vector;
a gradient inversion module for performing gradient inversion on the loss;
and the model building module is used for adjusting the weight parameters of the neural network model according to the loss after gradient inversion and building a ship detection model.
8. A ship tracking device based on unmanned aerial vehicle video, characterized by includes:
the video image acquisition module is used for acquiring a video image acquired by the unmanned aerial vehicle;
the training module is used for inputting the video image to a preset ship detection model to obtain a ship detection result; the preset ship detection model is generated by training through the training method of the ship detection model according to claim 1 or 2;
and the association module is used for associating the ship detection result by using a target algorithm to obtain the running track of the ship.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the ship detection model training method of claim 1 or 2 or the unmanned aerial vehicle video-based ship tracking method of any of claims 3-6.
10. A storage medium having stored thereon computer instructions, which when executed by a processor, carry out the steps of the vessel detection model training method of claim 1 or 2 or the drone video based vessel tracking method of any of claims 3-6.
CN201911319571.0A 2019-12-19 2019-12-19 Ship detection model training method and ship tracking method based on unmanned aerial vehicle video Pending CN111553474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911319571.0A CN111553474A (en) 2019-12-19 2019-12-19 Ship detection model training method and ship tracking method based on unmanned aerial vehicle video


Publications (1)

Publication Number Publication Date
CN111553474A true CN111553474A (en) 2020-08-18

Family

ID=71999820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911319571.0A Pending CN111553474A (en) 2019-12-19 2019-12-19 Ship detection model training method and ship tracking method based on unmanned aerial vehicle video

Country Status (1)

Country Link
CN (1) CN111553474A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070803A (en) * 2020-09-02 2020-12-11 安徽工程大学 Unmanned ship path tracking method based on SSD neural network model
CN112183463A (en) * 2020-10-23 2021-01-05 珠海大横琴科技发展有限公司 Ship identification model verification method and device based on radar image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145900A (en) * 2017-04-24 2017-09-08 清华大学 Pedestrian based on consistency constraint feature learning recognition methods again
CN107423686A (en) * 2017-06-15 2017-12-01 深圳大学 Video multi-target Fuzzy data association method and device
CN109145836A (en) * 2018-08-28 2019-01-04 武汉大学 Ship target video detection method based on deep learning network and Kalman filtering
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN109872342A (en) * 2019-02-01 2019-06-11 北京清帆科技有限公司 A kind of method for tracking target under special scenes
CN110135314A (en) * 2019-05-07 2019-08-16 电子科技大学 A kind of multi-object tracking method based on depth Trajectory prediction


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AYAN SINHA等: "Gradient Adversarial Training of Neural Networks", 《ARXIV:1806.08028V1》 *
LIU Yan et al.: "Optimization Methods and Their MATLAB Programming", Harbin Institute of Technology Press, 31 January 2017 *


Similar Documents

Publication Publication Date Title
CN109426782B (en) Object detection method and neural network system for object detection
CN108229267B (en) Object attribute detection, neural network training and region detection method and device
CN109035304B (en) Target tracking method, medium, computing device and apparatus
CN105335955B (en) Method for checking object and object test equipment
JP2018508078A (en) System and method for object tracking
CN113762252A (en) Unmanned aerial vehicle intelligent following target determination method, unmanned aerial vehicle and remote controller
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
CN111602138B (en) Object detection system and method based on artificial neural network
CN113377888B (en) Method for training object detection model and detection object
CN110766724A (en) Target tracking network training and tracking method and device, electronic equipment and medium
CN110889318A (en) Lane detection method and apparatus using CNN
WO2016179808A1 (en) An apparatus and a method for face parts and face detection
CN111553182A (en) Ship retrieval method and device and electronic equipment
WO2021090771A1 (en) Method, apparatus and system for training a neural network, and storage medium storing instructions
CN112101114A (en) Video target detection method, device, equipment and storage medium
CN111553474A (en) Ship detection model training method and ship tracking method based on unmanned aerial vehicle video
CN111611836A (en) Ship detection model training and ship tracking method based on background elimination method
CN111611835A (en) Ship detection method and device
CN110766725A (en) Template image updating method and device, target tracking method and device, electronic equipment and medium
CN112633066A (en) Aerial small target detection method, device, equipment and storage medium
CN113743163A (en) Traffic target recognition model training method, traffic target positioning method and device
CN111652907B (en) Multi-target tracking method and device based on data association and electronic equipment
CN115620079A (en) Sample label obtaining method and lens failure detection model training method
CN114373081A (en) Image processing method and device, electronic device and storage medium
CN111275039B (en) Water gauge character positioning method, device, computing equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination