CN108537826A - A ship target tracking method based on manual intervention - Google Patents

A ship target tracking method based on manual intervention

Info

Publication number
CN108537826A
CN108537826A (application CN201810273119.4A)
Authority
CN
China
Prior art keywords
frame
region
ship
area
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810273119.4A
Other languages
Chinese (zh)
Inventor
庄祐存
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinhan Sensing Technology Co Ltd
Original Assignee
Shenzhen Xinhan Sensing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinhan Sensing Technology Co Ltd filed Critical Shenzhen Xinhan Sensing Technology Co Ltd
Priority to CN201810273119.4A
Publication of CN108537826A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a ship target tracking method based on manual intervention, comprising the steps: S1. manually outlining the ship region in the first frame of a video; S2. inputting the size-normalized video into a Fast-RCNN network, tracking the ship region in the frames after the first frame through the Fast-RCNN network, and obtaining the position of the ship in each frame; S3. visually inspecting the position of the ship in the current frame and judging whether the annotation result is qualified; S4. if unqualified, manually drawing a box around the ship region with the mouse to complete the manual intervention, then continuing with step S2; S5. applying the above steps to every frame of the video until the tracking operation ends. The present invention uses a deep learning framework based on the Fast-RCNN network to track ship targets, and whenever a tracking error occurs during tracking it uses manual intervention to correct the tracking region, which greatly improves the accuracy of the tracking results.

Description

A ship target tracking method based on manual intervention
Technical field
The present invention relates to ship target tracking methods, and more particularly to a ship target tracking method based on manual intervention.
Background art
As the main vehicles of maritime transport and important military targets, ships make the recognition of their identity and the detection of their position and motion highly significant. Most current target detection techniques use non-deep-learning methods to detect moving targets, such as particle filters, Meanshift, and feature-point-based optical flow algorithms; all of these methods suffer, to varying degrees, from limited precision and accuracy.
Summary of the invention
The main purpose of the present invention is to provide a ship tracking method based on manual intervention, intended to solve the technical problem that existing ship tracking methods lack precision and accuracy.
To achieve the above goal, the technical solution adopted by the present invention is to provide a ship tracking method based on manual intervention, comprising the following steps:
S1. Manually outline the ship region in the first frame of a video;
S2. Input the size-normalized video into a Fast-RCNN network; the Fast-RCNN network tracks the ship region in the frames after the first frame and outputs the position of the ship in each frame;
S3. Visually inspect the position of the ship in the current frame and judge whether the annotation result is qualified;
S4. If unqualified, manually draw a box around the ship region with the mouse; after this manual intervention is complete, continue with step S2;
S5. Apply the above steps to every frame of the video until the tracking operation ends.
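As a minimal sketch of how steps S1-S5 fit together (the callables passed in stand for the operator's manual outlining, the Fast-RCNN inference and the operator's visual check; they are illustrative assumptions, not part of the patent):

    def track_video(frames, outline_region, predict_region, is_qualified):
        """Human-in-the-loop ship tracking over a list of frames (steps S1-S5)."""
        tracked = outline_region(frames[0])        # S1: operator outlines the ship
        boxes = [tracked]
        for frame in frames[1:]:
            tracked = predict_region(frame)        # S2: Fast-RCNN proposes the region
            if not is_qualified(frame, tracked):   # S3: operator inspects the result
                tracked = outline_region(frame)    # S4: operator redraws the box
            boxes.append(tracked)                  # S5: continue until the video ends
        return boxes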
Optionally, step S2 further comprises the following steps:
S21. Train the Fast-RCNN network on the target;
S22. Use the trained Fast-RCNN network for target tracking.
Optionally, step S21 further comprises the following steps:
A. For each picture in the training set, obtain several candidate regions with the selective search method, and record the position coordinates of each candidate region;
B. Label each candidate region from step A with 0 or 1: 1 means the candidate region contains a ship part, 0 means it does not;
C. Each region from step A has corrected position coordinates, namely:
if the label of the current candidate region is 1, the current candidate region R_i contains a ship region, and its corrected coordinate position is recorded; the corrected coordinates consist of the top-left vertex coordinate of a box together with the box's width and height, the box exactly enclosing the ship part;
D. The candidate regions then form the input of the network; the label of each candidate region together with its corresponding corrected position information forms the output of the network;
E. Train the Fast-RCNN network on the inputs and outputs obtained in step D, updating the weights and biases of the neurons with the BP algorithm until the Fast-RCNN network reaches a converged state.
Optionally, step A comprises the following steps:
S211. Set the size and aspect-ratio ranges of the candidate regions;
S212. Randomly pick a point in the video frame, denoted M, then compute the color difference between M and any other point N in the frame; if the color difference is less than K, slide N into the convergence range of M. The color difference X is computed from the pixel values of M and N on the R, G and B components,
where M_R, M_G and M_B denote the pixel values of point M on the R, G and B components respectively, N_R, N_G and N_B denote the pixel values of point N on the R, G and B components, point N does not coincide with point M, and K = 150.
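The formula image for X is not reproduced in the source text. Given that X compares M and N component-wise against the threshold K = 150, a plausible reconstruction (an assumption, not confirmed by the patent) is the Euclidean distance in RGB space:

    X = sqrt((M_R − N_R)^2 + (M_G − N_G)^2 + (M_B − N_B)^2)

with N absorbed into the convergence range of M whenever X < K.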
S213. Repeatedly select points N and perform the difference computation, expanding the convergence range of M; the convergence range of M eventually forms an image region, and that image region becomes one of the candidate regions;
S214. If the above candidate region satisfies the candidate-region size and aspect-ratio ranges set in step S211, end step S213;
S215. Randomly select a different point M and repeat steps S213-S214, so that every candidate region obtained satisfies the size condition of step S211 and no candidate region is repeated.
Optionally, step S22 comprises the following steps:
S221. The Fast-RCNN network outputs multiple image regions for each frame of the input video; each image region is represented by a coordinate pair and two parameters w and h. With the coordinate pair as the top-left vertex of a rectangle and w and h as the rectangle's width and height respectively, the area enclosed by the rectangle is the ship region proposed by the network;
S222. The final position of the ship in the current frame is determined from the tracking region confirmed in the previous frame.
Optionally, step S222 comprises the following steps:
S2221. For the frame following the initialized tracking region of the video, assume that in this frame the neural network outputs image regions Area_i, where 1 ≤ i ≤ n and n is the number of image regions in the current frame;
S2222. Compute the coincidence ratio L_i, 1 ≤ i ≤ n, between each image region Area_i in the current frame and the tracking region confirmed in the previous frame, where S_{p-1} denotes the area of the confirmed tracking region in frame p-1, S_p^{Area_i} denotes the area corresponding to image region Area_i in frame p, and the symbol ∩ denotes intersection; if frame p-1 is the frame in which the tracking region was initialized, S_{p-1} is the area of the initialized tracking region.
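The formula image is likewise missing from the text; from the symbols defined here (only S_{p-1}, S_p^{Area_i} and the intersection ∩ are mentioned, not a union), a plausible reconstruction (an assumption) normalizes the overlap by the previous tracking region's area:

    L_i = (S_{p-1} ∩ S_p^{Area_i}) / S_{p-1},  1 ≤ i ≤ n   (6)

where the numerator is understood as the area of the overlap between the two regions.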
S2223. Select the image region for which L_i attains its maximum value and use it to update the tracking region of the current frame.
Optionally, in step S3 the qualification of the tracked ship-region position is judged as follows: in the Fast-RCNN network, the proposed ship-region position is enclosed with a box; if the box does not enclose 80% or more of the actual ship region, the tracking result of the current frame is considered inaccurate and unqualified, and step S4 is executed.
Optionally, step S4 comprises: manually re-initializing the tracking region with the mouse, i.e. drawing a box with the mouse and enclosing the ship region in it.
Optionally, in step A, 2000 candidate regions are obtained with the selective search method.
Optionally, the size range of the candidate regions is [5000, 10000] pixels and the aspect-ratio range is [5, 10].
The beneficial effects of the invention are as follows: the present invention proposes a ship target tracking technique based on manual intervention. It first uses a deep learning framework based on Fast-RCNN to track the ship target; in addition, to handle tracking errors that occur during tracking (for example, target loss caused by occlusion), it uses manual intervention to correct the tracking region, improving the accuracy of the tracking results.
Description of the drawings
Fig. 1 is the flow chart of the steps provided by the invention;
Fig. 2 is the flow chart of the general steps provided by the invention;
Fig. 3 is the flow chart of the manual intervention steps provided by the invention;
Fig. 4 is the detailed flow chart of step S21 provided by the invention;
Fig. 5 is the structure diagram of a single neuron in the BP algorithm provided by the invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
The present invention combines the deep learning method with visual tracking. To handle tracking errors (such as loss of the tracked target) that occur while tracking with deep learning, the present invention adopts a manual intervention operation: during tracking with the deep learning method, if the human eye finds that the target tracking is in error, the tracking region is manually corrected again, which ensures that tracking proceeds normally and to a large extent guarantees the tracking accuracy.
Using the deep learning method, i.e. the tracking framework based on Fast-RCNN, to track the ship target not only raises the tracking accuracy but also satisfies the real-time requirement; by using manual intervention, the tracking region can be corrected promptly whenever the tracking result is in error, which further improves the accuracy of the tracking.
Embodiment 1. The present invention proposes a ship target tracking technique based on manual intervention. First, a deep learning framework based on Fast-RCNN is used to track the ship target; in addition, to handle tracking errors during tracking (such as target loss caused by occlusion), manual intervention is used to correct the tracking region and so improve the accuracy of the tracking results. The operation flow, shown in Fig. 3, comprises three main steps:
Step 1: initialize the tracking region.
For target tracking, the object being processed is usually a video. First, a target to be tracked must be selected, which completes the initialization of the tracking region. In the present invention, initializing the tracking region means manually drawing a box in the current frame of the video to outline the ship region; the region inside the box is the required tracking region.
Step 2: update the tracking region.
Updating the tracking region means finding the ship region in every frame of the video after the tracking region has been initialized. For this part, the present invention uses the deep-learning-based Fast-RCNN framework.
Step 3: manual intervention.
The implementation of manual intervention is shown in Fig. 3. While the ship target is moving, if the background (the non-ship region of a video frame) is very similar in color to the ship region, or an occluder appears in the current frame, the tracking result may be wrong (i.e. some other region is identified as the ship region). Moreover, because target motion is continuous, the tracked position in the next frame lies near the position confirmed in the previous frame; the target does not move over too large a range. Therefore, if the tracked target position is wrong in the current frame, the tracking result of the next frame is affected as well.
For this reason, in this step the present invention corrects the tracking region in real time through manual intervention, ensuring the accuracy of the tracking results to the greatest extent.
Embodiment 2. Based on the above embodiment, the present invention provides a ship tracking method based on manual intervention. As shown in Fig. 1, it differs from the above embodiment in comprising the following steps:
S1. Manually outline the ship region in the first frame of a video. The ship region information is the region occupied by the ship in the first frame; the ship region information mentioned below is analogous. The ship region predicted by the Fast-RCNN network and the manually outlined ship region are the same concept; the difference is that the manually outlined region and the Fast-RCNN prediction differ in accuracy.
S2. Input the size-normalized video into the Fast-RCNN network; the Fast-RCNN network tracks the ship region in the video frames after the first frame and outputs the position of the ship in each frame.
Steps S3-S4, as shown in Fig. 3. S3. Visually inspect the position of the ship in the current frame and judge whether the annotation result is qualified. When inspecting the accuracy of the tracked ship-region position in the current frame, the criterion is as follows: the Fast-RCNN network encloses the proposed ship-region position with a box; if the box does not enclose roughly 80% or more of the actual ship region, the tracking result of the current frame is considered inaccurate, and the manual intervention of step S4 must be performed; otherwise the ship-region update of step S2 continues.
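Although this 80% criterion is judged by eye in the patent, it can be formalized as below, assuming the actual ship region were known as a box (a hypothetical stand-in for what the operator sees; boxes are (x, y, w, h) with (x, y) the top-left vertex):

    def tracking_qualified(pred, ship, threshold=0.8):
        """Step S3 criterion: qualified when the predicted box covers
        at least 80% of the actual ship region."""
        ix = min(pred[0] + pred[2], ship[0] + ship[2]) - max(pred[0], ship[0])
        iy = min(pred[1] + pred[3], ship[1] + ship[3]) - max(pred[1], ship[1])
        covered = max(0, ix) * max(0, iy)          # area of the overlap
        return covered / (ship[2] * ship[3]) >= threshold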
S4. If unqualified, manually draw a box around the ship region with the mouse; after this manual intervention is complete, continue with step S2. While the ship target is moving, if the background (the non-ship region of a video frame) is very similar in color to the ship region, or an occluder appears in the current frame, the tracking result may be wrong (i.e. some other region is identified as the ship region). Moreover, because target motion is continuous, the tracked position in the next frame lies near the position confirmed in the previous frame; the target does not move over too large a range. Therefore, if the tracked target position is wrong in the current frame, the tracking result of the next frame is affected as well, so steps S3 and S4 are essential. Step S4 is implemented as follows: manually re-initialize the tracking region with the mouse, i.e. draw a box with the mouse and enclose the ship region in it, completing the manual intervention.
S5. Apply the above steps to every frame of the video until the tracking operation ends.
Manual intervention can greatly improve the precision and accuracy of ship tracking.
Embodiment 3. Based on the above embodiments, the present invention provides a ship tracking method based on manual intervention. Its distinguishing feature is the tracking-region update operation: finding the ship region in every frame of the video after the tracking region has been initialized. For this part, this patent uses the deep-learning-based Fast-RCNN framework, operated as follows:
S21. Train the Fast-RCNN network on the target; the better trained the Fast-RCNN model is, the more accurate the results obtained. Finally, the trained Fast-RCNN network is used to track the ship target in the video.
As shown in Fig. 4, step S21 is implemented as follows:
A. For each picture in the training set, obtain several candidate regions with the selective search method and record the position coordinates of each candidate region. These candidate regions may overlap; each region is enclosed by a box, so its position coordinates are the box's top-left vertex coordinate together with the box's width and height. The selective search algorithm yields relatively coarse ship regions; the neural network is then trained so that more accurate ship-region positions are obtained from these rough regions.
Specifically, color similarity is used as the criterion for obtaining candidate regions, implemented as follows:
S211. Set the size and aspect-ratio ranges of the candidate regions. Since the area and size of a ship region in a video frame lie within certain ranges, the relevant dimensions of the candidate regions must be set first. In this step the area range of the ship region is set to [5000, 10000] pixels and the aspect-ratio range to [5, 10]; this embodiment takes 2000 candidate regions as an example.
S212. Randomly pick a point in the video frame, denoted M, then compute the color difference between M and any other point N in the frame; if the color difference is less than K, slide N into the convergence range of M. The color difference X is computed from the pixel values of M and N on the R, G and B components,
where M_R, M_G and M_B denote the pixel values of point M on the R, G and B components respectively, N_R, N_G and N_B denote the pixel values of point N on the R, G and B components, point N does not coincide with point M, and K = 150.
S213. Repeatedly select points N and perform the difference computation, expanding the convergence range of M; the convergence range of M eventually forms an image region, and that image region becomes one of the 2000 candidate regions.
S214. If the above candidate region satisfies the candidate-region size and aspect-ratio ranges set in step S211, end step S213.
S215. Randomly select a different point M and repeat steps S213-S214, so that every candidate region obtained satisfies the size condition of step S211 and no candidate region is repeated. For example, the first image region obtained may have area 5001 and aspect ratio 5, while a later one has area 5100 and aspect ratio 5.
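A minimal sketch of this region-growing procedure (steps S212-S215), assuming the Euclidean RGB color difference reconstructed earlier and 4-connected flood filling, neither of which the patent specifies; all function names here are illustrative:

    import random
    from collections import deque

    def grow_candidate(img, K=150):
        """Grow one candidate region from a random seed point M: every point N
        whose color difference to M is below K slides into M's convergence
        range. img is a 2D list of (R, G, B) tuples; returns (x, y, w, h)."""
        h, w = len(img), len(img[0])
        my, mx = random.randrange(h), random.randrange(w)
        m = img[my][mx]                                        # seed point M
        diff = lambda n: sum((a - b) ** 2 for a, b in zip(m, n)) ** 0.5
        seen, queue = {(my, mx)}, deque([(my, mx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                        and diff(img[ny][nx]) < K:             # N joins M's range
                    seen.add((ny, nx))
                    queue.append((ny, nx))
        xs = [x for _, x in seen]; ys = [y for y, _ in seen]
        return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

    def region_ok(box, area=(5000, 10000), ratio=(5, 10)):
        """Size and aspect-ratio filter of steps S211/S214."""
        _, _, bw, bh = box
        r = max(bw, bh) / min(bw, bh)
        return area[0] <= bw * bh <= area[1] and ratio[0] <= r <= ratio[1]

Repeating grow_candidate with fresh seeds and keeping the non-duplicate regions that pass region_ok mirrors step S215.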
B. Label each candidate region from step A with 0 or 1: 1 means the candidate region contains a ship part, 0 means it does not.
C. Each region from step A has corrected position coordinates, namely:
if the label of the current candidate region is 1, the current candidate region R_i contains a ship region and its corrected coordinate position is recorded; the corrected coordinates consist of the top-left vertex coordinate of a box together with the box's width and height, the box exactly enclosing the ship part. If the label of the current region R_i is 0, no operation is needed, and its corrected position is simply its current position coordinates.
D. The candidate regions then form the input of the network; the label of each candidate region together with its corresponding corrected position information forms the output of the network.
E. Train the Fast-RCNN network on the inputs and outputs obtained in step D, updating the weights and biases of the neurons with the BP algorithm until the Fast-RCNN network reaches a converged state.
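A minimal sketch of assembling the network's training input and output (steps B-D); contains_ship and corrected_box are hypothetical stand-ins for the annotation decisions of steps B and C:

    def make_training_pairs(candidates, contains_ship, corrected_box):
        """candidates are (x, y, w, h) boxes from selective search (step A)."""
        inputs, outputs = [], []
        for box in candidates:
            label = 1 if contains_ship(box) else 0    # step B: 0/1 label
            # step C: only a region labelled 1 gets corrected coordinates;
            # a region labelled 0 keeps its own position coordinates.
            target = corrected_box(box) if label == 1 else box
            inputs.append(box)                        # step D: network input
            outputs.append((label, target))           # step D: network output
        return inputs, outputs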
Specifically, the structure of a simple small neural network is shown in Fig. 5, where each circle represents a neuron, w_1 and w_2 are the weights between the neurons, b is the bias, g(z) is the activation function that makes the output nonlinear, a is the output, and x_1 and x_2 are the inputs. For this structure, the output can be expressed as:
a = g(x_1·w_1 + x_2·w_2 + 1·b) (2)
From formula (2), with the input data and the activation function fixed, the output value a of the neural network depends on the weights and the bias; adjusting the weights and the bias gives the neural network different outputs.
The value output by the neural network (the predicted value) is a; suppose its corresponding actual value is a'.
For Fig. 5, the BP algorithm executes as follows:
a. The BP algorithm first randomly initializes the weight of every connection (w_1 and w_2) and the bias b;
b. For the input data x_1, x_2, the BP algorithm first performs a forward pass to obtain the predicted value a;
c. Then, according to the error E = ½(a' − a)² between the actual value a' and the predicted value a, back-propagation updates the weight of every connection and the bias of every layer in the network.
The update of the weights and the bias is shown in formulas (3)-(5), i.e. the partial derivatives of E with respect to w_1, w_2 and b are taken, where η denotes the learning rate, a parameter set in these formulas.
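The formula images for (3)-(5) are not reproduced in the text; from the surrounding description (partial derivatives of E with respect to w_1, w_2 and b, scaled by the learning rate η), they are presumably the standard gradient-descent updates:

    w_1 ← w_1 − η·∂E/∂w_1   (3)
    w_2 ← w_2 − η·∂E/∂w_2   (4)
    b ← b − η·∂E/∂b   (5)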
d. Steps a-c are repeated continually until the network converges, i.e. the value of E is minimal or essentially constant. At this point the network is trained.
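A minimal sketch of this single-neuron training loop, assuming a sigmoid activation for g(z) (the patent only requires g to be nonlinear) and the squared error and gradient-descent updates reconstructed above:

    import math
    import random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train_neuron(samples, eta=0.1, epochs=1000):
        """Train the single neuron of Fig. 5; samples is a list of
        ((x1, x2), a_true) pairs, eta is the learning rate."""
        w1, w2, b = (random.uniform(-1, 1) for _ in range(3))  # step a: random init
        for _ in range(epochs):
            for (x1, x2), a_true in samples:
                a = sigmoid(x1 * w1 + x2 * w2 + b)   # step b: forward pass, formula (2)
                # step c: with E = 0.5 * (a_true - a) ** 2, dE/dz = (a - a_true) * g'(z),
                # and for the sigmoid g'(z) = a * (1 - a)
                delta = (a - a_true) * a * (1.0 - a)
                w1 -= eta * delta * x1               # formula (3)
                w2 -= eta * delta * x2               # formula (4)
                b -= eta * delta                     # formula (5)
        return w1, w2, b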
After the Fast-RCNN network has been trained by step S21, it can be used for target tracking, i.e. step S22, which comprises:
S221. The Fast-RCNN network outputs multiple image regions for each frame of the input video (a frame may contain multiple ship targets); each candidate region is likewise represented by a coordinate pair and two parameters w and h. With the coordinate pair as the top-left vertex of a rectangle and w and h as the rectangle's width and height respectively, the area enclosed by the rectangle is the ship region proposed by the Fast-RCNN network.
S222. Since the motion of a ship is continuous, the final ship target in the current frame can be determined from the tracking region confirmed in the previous frame, implemented as follows:
S2221. For the frame following the initialization of the tracking region, assume that in this frame the neural network outputs image regions Area_i, 1 ≤ i ≤ n, where n is the number of image regions in the current frame.
S2222. Compute the coincidence ratio between each image region Area_i in the current frame and the tracking region confirmed in the previous frame (if the previous frame is the one in which the tracking region was initialized, that tracking region is the initialized one), denoted L_i, 1 ≤ i ≤ n, as in formula (6), where S_{p-1} denotes the area of the confirmed tracking region in frame p-1, S_p^{Area_i} denotes the area corresponding to image region Area_i in frame p, and the symbol ∩ denotes intersection; if frame p-1 is the frame in which the tracking region was initialized, S_{p-1} is the area of the initialized tracking region.
S2223. Select the image region for which L_i attains its maximum value and use it to update the tracking region of the current frame.
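A minimal sketch of steps S2221-S2223 under the assumed form of formula (6) reconstructed earlier (coincidence ratio = overlap area divided by S_{p-1}); boxes are (x, y, w, h) with (x, y) the top-left vertex:

    def update_tracking_region(prev_box, proposals):
        """Pick the network proposal that maximizes the coincidence ratio L_i
        with the previous frame's tracking region (step S2223)."""
        def overlap(a, b):
            iw = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
            ih = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
            return max(0, iw) * max(0, ih)           # area of the intersection

        s_prev = prev_box[2] * prev_box[3]           # S_{p-1}
        ratios = [overlap(prev_box, box) / s_prev for box in proposals]  # L_i
        return proposals[max(range(len(ratios)), key=ratios.__getitem__)]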
In the present invention, a deep learning framework based on the Fast-RCNN network is first used to track the ship target; in addition, to handle tracking errors during tracking (such as target loss caused by occlusion), manual intervention is used to manually correct the tracking region, improving the accuracy of the tracking results.
The above further describes the present invention in detail with reference to specific preferred embodiments, but the specific implementation of the present invention is not limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A ship target tracking method based on manual intervention, characterized by comprising the following steps:
S1. manually outlining the ship region in the first frame of a video;
S2. inputting the size-normalized video into a Fast-RCNN network, tracking the ship region in the frames after the first frame through the Fast-RCNN network, and obtaining the position of the ship in each frame;
S3. visually inspecting the position of the ship in the current frame and judging whether the annotation result is qualified;
S4. if unqualified, manually drawing a box around the ship region with the mouse to complete the manual intervention, then continuing with step S2;
S5. applying the above steps to every frame of the video until the tracking operation ends.
2. The method according to claim 1, characterized in that step S2 further comprises the following steps:
S21. training the Fast-RCNN network on the target;
S22. using the trained Fast-RCNN network for target tracking.
3. The method according to claim 2, characterized in that step S21 further comprises the following steps:
A. for each picture in the training set, obtaining several candidate regions with the selective search method and recording the position coordinates of each candidate region;
B. labelling each candidate region from step A with 0 or 1, where 1 means the candidate region contains a ship part and 0 means it does not;
C. each region from step A having corrected position coordinates, namely:
if the label of the current candidate region is 1, the current candidate region R_i contains a ship region and its corrected coordinate position is recorded, the corrected coordinates consisting of the top-left vertex coordinate of a box together with the box's width and height, the box exactly enclosing the ship part;
D. the candidate regions forming the input of the network, and the label of each candidate region together with its corresponding corrected position information forming the output of the network;
E. training the Fast-RCNN network on the inputs and outputs obtained in step D, updating the weights and biases of the neurons with the BP algorithm until the Fast-RCNN network reaches a converged state.
4. The method according to claim 3, characterized in that step A comprises the following steps:
S211. setting the size and aspect-ratio ranges of the candidate regions;
S212. randomly picking a point in the video frame, denoted M, then computing the color difference between M and any other point N in the frame, and sliding N into the convergence range of M if the color difference is less than K, the color difference X being computed from the pixel values of M and N on the R, G and B components,
where M_R, M_G and M_B denote the pixel values of point M on the R, G and B components respectively, N_R, N_G and N_B denote the pixel values of point N on the R, G and B components, point N does not coincide with point M, and K = 150;
S213. repeatedly selecting points N and performing the difference computation, expanding the convergence range of M, the convergence range of M eventually forming an image region, and that image region becoming one of the candidate regions;
S214. if the above candidate region satisfies the candidate-region size and aspect-ratio ranges set in step S211, ending step S213;
S215. randomly selecting a different point M and repeating steps S213-S214, so that every candidate region obtained satisfies the size condition of step S211 and no candidate region is repeated.
5. The method according to claim 2, characterized in that step S22 comprises the following steps:
S221. the Fast-RCNN network outputting multiple image regions for each frame of the input video, each image region being represented by a coordinate pair and two parameters w and h, with the coordinate pair as the top-left vertex of a rectangle and w and h as the rectangle's width and height respectively, the area enclosed by the rectangle being the ship region proposed by the network;
S222. determining the final position of the ship in the current frame from the tracking region confirmed in the previous frame.
6. The method according to claim 5, characterized in that step S222 comprises the following steps:
S2221. for the frame following the initialized tracking region of the video, assuming that in this frame the neural network outputs image regions Area_i, where 1 ≤ i ≤ n and n is the number of image regions in the current frame;
S2222. computing the coincidence ratio L_i, 1 ≤ i ≤ n, between each image region Area_i in the current frame and the tracking region confirmed in the previous frame, where S_{p-1} denotes the area of the confirmed tracking region in frame p-1, S_p^{Area_i} denotes the area corresponding to image region Area_i in frame p, and the symbol ∩ denotes intersection, S_{p-1} being the area of the initialized tracking region if frame p-1 is the frame in which the tracking region was initialized;
S2223. selecting the image region for which L_i attains its maximum value and using it to update the tracking region of the current frame.
7. The method according to claim 1, characterized in that in step S3 the qualification of the tracked ship-region position is judged as follows: in the Fast-RCNN network, the proposed ship-region position is enclosed with a box; if the box does not enclose 80% or more of the actual ship region, the tracking result of the current frame is considered inaccurate and unqualified, and step S4 is executed.
8. The method according to claim 1, characterized in that step S4 comprises: manually re-initializing the tracking region with the mouse, i.e. drawing a box with the mouse and enclosing the ship region in it.
9. The method according to claim 3, characterized in that in step A, 2000 candidate regions are obtained with the selective search method.
10. The method according to claim 4, characterized in that the size range of the candidate regions is [5000, 10000] pixels and the aspect-ratio range is [5, 10].
CN201810273119.4A 2018-05-28 2018-05-28 A ship target tracking method based on manual intervention Pending CN108537826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810273119.4A CN108537826A (en) 2018-05-28 2018-05-28 A ship target tracking method based on manual intervention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810273119.4A CN108537826A (en) 2018-05-28 2018-05-28 A ship target tracking method based on manual intervention

Publications (1)

Publication Number Publication Date
CN108537826A 2018-09-14

Family

ID=63481615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810273119.4A Pending CN108537826A (en) A ship target tracking method based on manual intervention

Country Status (1)

Country Link
CN (1) CN108537826A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106934332A (en) * 2015-12-31 2017-07-07 中国科学院深圳先进技术研究院 A kind of method of multiple target tracking
US20170206431A1 (en) * 2016-01-20 2017-07-20 Microsoft Technology Licensing, Llc Object detection and classification in images
CN106683117A (en) * 2016-12-30 2017-05-17 佳都新太科技股份有限公司 Target grasping algorithm based on kinematics behavior analysis
CN106960446A (en) * 2017-04-01 2017-07-18 广东华中科技大学工业技术研究院 A kind of waterborne target detecting and tracking integral method applied towards unmanned boat
CN107292297A (en) * 2017-08-09 2017-10-24 电子科技大学 A kind of video car flow quantity measuring method tracked based on deep learning and Duplication
CN107564004A (en) * 2017-09-21 2018-01-09 杭州电子科技大学 It is a kind of that video labeling method is distorted based on computer auxiliary tracking
CN107818571A (en) * 2017-12-11 2018-03-20 珠海大横琴科技发展有限公司 Ship automatic tracking method and system based on deep learning network and average drifting

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
J.R.R. Uijlings et al.: "Selective Search for Object Recognition", Springer Science *
Zhou Yuncheng et al.: "Classification and recognition method for the main organs of tomato based on deep convolutional neural networks", Transactions of the Chinese Society of Agricultural Engineering *
Xu Chao et al.: "Improved convolutional neural network method for pedestrian detection", Journal of Computer Applications *
Cao Shiyu et al.: "Vehicle target detection based on Fast R-CNN", Journal of Image and Graphics *
Zeng Xiangyang: "Intelligent Underwater Target Recognition", 31 March 2016, Beijing: National Defense Industry Press *
Wang Yiding et al.: "Digital Image Processing", 31 August 2015, Xi'an: Xidian University Press *
Qiu Lirong et al.: "Algorithm Design and Optimization", 31 December 2016, Beijing: Minzu University of China Press *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222632A (en) * 2019-06-04 2019-09-10 哈尔滨工程大学 A kind of waterborne target detection method of gray prediction auxiliary area suggestion
CN110413166A (en) * 2019-07-02 2019-11-05 上海熙菱信息技术有限公司 A kind of method of history video real time linear tracking
CN110413166B (en) * 2019-07-02 2022-11-25 上海熙菱信息技术有限公司 Real-time linear tracking method for historical video
CN110348356A (en) * 2019-07-03 2019-10-18 北京遥感设备研究所 A kind of successive frame RD images steganalysis method based on depth light stream network
CN111062298A (en) * 2019-12-11 2020-04-24 深圳供电局有限公司 Power distribution network power equipment target identification method and system
CN116343049A (en) * 2023-05-24 2023-06-27 四川创意科技有限公司 Method, device, equipment and storage medium for monitoring abnormal behavior of offshore target
CN116343049B (en) * 2023-05-24 2023-08-15 四川创意科技有限公司 Method, device, equipment and storage medium for monitoring abnormal behavior of offshore target

Similar Documents

Publication Publication Date Title
CN108537826A (en) A ship target tracking method based on manual intervention
US20210023720A1 (en) Method for detecting grasping position of robot in grasping object
CN110472554B (en) Table tennis action recognition method and system based on attitude segmentation and key point features
CN106650630B (en) A kind of method for tracking target and electronic equipment
CN108549876A (en) The sitting posture detecting method estimated based on target detection and human body attitude
CN109271888A (en) Personal identification method, device, electronic equipment based on gait
CN105205453B (en) Human eye detection and localization method based on depth self-encoding encoder
CN108229268A (en) Expression Recognition and convolutional neural networks model training method, device and electronic equipment
CN109285179A (en) A kind of motion target tracking method based on multi-feature fusion
CN108803617A (en) Trajectory predictions method and device
CN109176512A (en) A kind of method, robot and the control device of motion sensing control robot
CN104751466B (en) A kind of changing object tracking and its system based on conspicuousness
CN107886089A (en) A kind of method of the 3 D human body Attitude estimation returned based on skeleton drawing
CN108171141A (en) The video target tracking method of cascade multi-pattern Fusion based on attention model
CN111192294B (en) Target tracking method and system based on target detection
CN108197584A (en) A kind of recognition methods again of the pedestrian based on triple deep neural network
CN106373160A (en) Active camera target positioning method based on depth reinforcement learning
CN104299245A (en) Augmented reality tracking method based on neural network
CN108520218A (en) A kind of naval vessel sample collection method based on target tracking algorism
CN108765468A (en) A kind of method for tracking target and device of feature based fusion
CN106803084A (en) A kind of facial characteristics independent positioning method based on end-to-end recirculating network
CN108334878A (en) Video images detection method and apparatus
CN109087337A (en) Long-time method for tracking target and system based on layering convolution feature
CN110472628A (en) A kind of improvement Faster R-CNN network detection floating material method based on video features
CN107274437A (en) A kind of visual tracking method based on convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180914