CN115942120A - Method for tracking ship by smooth linkage of cameras - Google Patents

Method for tracking ship by smooth linkage of cameras

Publication number
CN115942120A
Authority
CN
China
Prior art keywords
ship
camera
image
video
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211671984.7A
Other languages
Chinese (zh)
Inventor
朱德理
白正
隋远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Laiwangxin Technology Research Institute Co ltd
Original Assignee
Nanjing Laiwangxin Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Laiwangxin Technology Research Institute Co ltd filed Critical Nanjing Laiwangxin Technology Research Institute Co ltd
Priority to CN202211671984.7A priority Critical patent/CN115942120A/en
Publication of CN115942120A publication Critical patent/CN115942120A/en
Pending legal-status Critical Current

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a method for smooth camera-linkage tracking of a ship. A twin (Siamese) neural network detects and tracks the ship target across consecutive video frames and yields the target's position relative to the image; the pan, tilt and lens zoom of the camera are then driven through the PELCO-D pan-tilt control protocol. While the ship moves, the pan-tilt and lens are controlled so that the motion stays smooth, the picture shows no obvious jitter, and the ship is always kept at the center of the video frame. The invention can be used in many water-related fields, such as vessel traffic management in the maritime field, fishing-ground supervision in the fishery field, and ship monitoring in the coast-guard field. Compared with linkage tracking based on ship positions from a vessel traffic service (VTS) system, the method uses image target detection and tracking, offers strongly real-time tracking with better results, and suits scenarios in a vessel traffic service system where ship position signals update slowly but the linkage-tracking requirement is high.

Description

Method for tracking ship by smooth linkage of cameras
Technical Field
The invention relates to the technical field of image target tracking, in particular to a method for tracking a ship by smoothly linking a camera.
Background
In recent years, maritime information-based construction has been pushed forward vigorously. CCTV is cheap to build and convenient to use, and plays a vital role in improving navigation safety, dynamically monitoring ships, reducing water pollution and preventing traffic accidents. Ship video linkage tracking is a commonly used function on watch: it helps duty personnel quickly locate the ship to be observed and provides them with intuitive, rich ship information.
At present, the traditional method of ship linkage tracking uses the ship longitude and latitude sensed by the ship's AIS and radar sensors. The tracking effect depends on the update frequency and accuracy of the AIS and radar ship positions, the accuracy of the camera's measured geographic position, the AIS-radar fusion state, and so on. Such tracking can only passively receive ship position data and then adjust the pan-tilt azimuth; it cannot actively detect the ship from the video, and the tracking process suffers from unsmooth video, obvious stuttering, and untimely camera angle adjustment. A tracking method that actively finds the ship in the video, smooths the tracking process and improves timeliness is therefore urgently needed.
Disclosure of Invention
The invention aims to: address the problems of uneven and untimely pan-tilt rotation that arise when video linkage tracking of a ship target depends on ship positions sensed by AIS and radar sensors, by providing a ship smooth-linkage tracking method based on a twin neural network. The method specifically comprises: training on ship model data, generating a dedicated ship data set and collecting camera parameters; manually framing the ship target to be tracked in the video image, detecting and tracking the ship target in the video in real time, acquiring the pixel position of the ship in the image, acquiring the horizontal and pitch field angles of the camera, and computing the rotation speeds of the pan-tilt and lens; and controlling the camera pan-tilt and lens to track the ship continuously and stably. Using the twin neural network, the invention can continuously and stably detect and track ships in the video, drives the camera pan-tilt and lens from the detection and tracking result of each frame, and effectively improves the smoothness and timeliness of the tracking process.
The invention supports visible-light, infrared and thermal-imaging video sources. It requires parameters such as the camera's field-angle parameters and the pan-tilt rotation speed; the pan-tilt rotation speed must be precise to 0.01 degrees, otherwise the smooth tracking effect is affected.
The technical scheme is as follows:
the invention provides a method for tracking a ship by smoothly linking cameras, which comprises the following steps:
step 1, acquiring ship videos from the camera under different focal lengths, directions, weather and illumination states, automatically capturing pictures from the videos, labeling the ships in the pictures, and generating a data result set of ship features through twin neural network training;
step 2, collecting the fixed parameters of the camera lens and pan-tilt;
step 3, acquiring the camera's video, manually framing the tracked ship in the image, and matching it against ships in the next video frame with the twin neural network to obtain the tracked ship's pixel coordinate position;
step 4, after the detection and tracking of each frame of image, calculating the pan-tilt's rotation direction and the duration of rotation at its speed from the tracked ship's pixel coordinate position, the camera pan-tilt azimuth and the lens parameters, and controlling the pan-tilt to rotate continuously and smoothly after the tracked ship;
and step 5, controlling the lens to zoom in and out proportionally according to the tracked ship's pixel coordinate position in the image, so that the ship occupies a display proportion of 1/3 to 1/2 of the image.
The step 1 comprises the following steps:
step 1-1, acquiring the camera's video, saving it as a video file, reading the key frames (I frames) from the file, saving each I frame as a picture, and scaling the picture to 512 × 512;
step 1-2, cleaning, classifying and normalizing the picture data: manually removing pictures that show no ship, and pictures where ship density is high and occlusion is severe; classifying the ship pictures by illumination intensity and ship type; manually labeling the ships in the pictures, and for ships at the bank and ships that severely occlude one another, labeling only the unoccluded ships;
step 1-3, building a twin neural network with AlexNet as the backbone and training it on the labeled pictures to generate a data result set of ship features. The twin neural network comprises 5 convolutional layers and 3 fully connected layers; the 1st, 3rd and 5th convolutional layers are followed by pooling layers, dropout layers are introduced in the 1st and 2nd fully connected layers to avoid model overfitting, and the 3rd fully connected layer is the output layer.
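For concreteness, a minimal PyTorch sketch of a twin network with this AlexNet-style backbone follows. The patent specifies only the layer structure (5 convolutional layers with pooling after the 1st, 3rd and 5th, and 3 fully connected layers with dropout on the first two); the channel counts, embedding size and cosine-similarity head below are illustrative assumptions.

```python
# Sketch of the twin (Siamese) network described in step 1-3; layer widths
# and the similarity head are assumptions, not values from the patent.
import torch
import torch.nn as nn

class AlexNetBackbone(nn.Module):
    def __init__(self, embedding_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),                       # pooling after conv 1
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),                       # pooling after conv 3
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),                       # pooling after conv 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1024), nn.ReLU(inplace=True), nn.Dropout(0.5),   # FC 1 + dropout
            nn.Linear(1024, 512), nn.ReLU(inplace=True), nn.Dropout(0.5),  # FC 2 + dropout
            nn.Linear(512, embedding_dim),                                 # FC 3: output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

class SiameseTracker(nn.Module):
    """Twin network: template (framed ship) and search frame share weights."""
    def __init__(self):
        super().__init__()
        self.backbone = AlexNetBackbone()

    def forward(self, template: torch.Tensor, search: torch.Tensor) -> torch.Tensor:
        z = self.backbone(template)
        x = self.backbone(search)
        # Cosine similarity as the matching score; the patent's loss function
        # and candidate-region (region proposal) head are omitted here.
        return nn.functional.cosine_similarity(z, x)

# e.g. score = SiameseTracker()(template_batch, search_batch)
```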
The step 2 comprises: collecting the camera's focal length range, the pan-tilt rotation precision, the pan-tilt rotation speed range, and the length and width of the lens charge-coupled device (CCD) target surface;
the step 3 comprises the following steps:
step 3-1, acquiring the camera's video through the Real Time Streaming Protocol (RTSP) and decoding it with ffmpeg into individual image frames for display on the interface;
step 3-2, loading the ship feature model data from step 1-3 and manually framing the tracked ship in the image; sending the framed image and each subsequent frame into the twin neural network to extract features, and performing similarity matching in the target feature space through the loss function; if matching succeeds, sending the framed image features and the feature data of the search frame into the candidate region network (region proposal network) to predict the target's identification rectangle, obtaining the tracked ship's pixel coordinate proportion and, from the image length and width, its pixel rectangle position; if matching fails, executing step 4-3;
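A hedged sketch of this capture-and-track loop is given below. OpenCV's VideoCapture decodes RTSP streams via ffmpeg; the `tracker` object, its `match`/`prepush` methods and the box convention are assumed interfaces standing in for the twin-network matching of step 3-2 and the pre-push fallback of step 4-3.

```python
# Sketch: pull RTSP frames and run per-frame matching; helper names assumed.
import cv2

def track_stream(url: str, tracker) -> None:
    cap = cv2.VideoCapture(url)            # e.g. "rtsp://<camera-ip>/stream"
    if not cap.isOpened():
        raise RuntimeError("failed to open RTSP stream")
    while True:
        ok, frame = cap.read()             # one decoded BGR image per frame
        if not ok:
            break
        box = tracker.match(frame)         # (x, y, w, h) pixel rectangle, or None
        if box is None:
            tracker.prepush()              # fall back to pan-tilt pre-push (step 4-3)
        else:
            x, y, w, h = box
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:           # Esc to quit
            break
    cap.release()
```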
the step 4 comprises the following steps:
step 4-1, if the camera pan-tilt is rotating, stopping the rotation, and obtaining the camera focal length range and CCD target surface dimensions collected in step 2 together with the pixel rectangle position of the tracked ship from step 3;
step 4-2, obtaining the current camera zoom multiple, and computing the horizontal and pitch field angles of the camera from the camera focal length, the zoom multiple, and the CCD target surface length and width;
The horizontal field angle θ1 is:
θ1 = 2×arctan(w/(2f*z)),
where w is the CCD width of the camera, f is the minimum focal length of the camera, and z is the current zoom multiple of the camera;
the pitch field angle θ2 is:
θ2 = 2×arctan(h/(2f*z)),
where h is the CCD height of the camera.
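As a worked example, both field-angle formulas reduce to one small helper (sensor dimension in millimetres, minimum focal length in millimetres, dimensionless zoom multiple; result in degrees); the example sensor values are assumptions for illustration.

```python
# The field-of-view formulas of step 4-2: theta = 2 * arctan(size / (2*f*z)).
import math

def field_of_view_deg(sensor_size_mm: float, f_min_mm: float, zoom: float) -> float:
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * f_min_mm * zoom)))

# e.g. a ~5.1 mm wide CCD at f = 4.8 mm and 10x zoom:
# field_of_view_deg(5.1, 4.8, 10) ~= 6.1 degrees horizontal
```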
Obtain the horizontal and pitch offset angles of the ship center point relative to the video center point, compute from the pan-tilt rotation speed the time needed to move the ship center to the video image center, command the pan-tilt to rotate up, down, left and right through the pan-tilt control protocol, and stop the rotation after the corresponding delay; the delay must be less than 83 milliseconds. The horizontal rotation duration t1 of the camera is:
t1 = abs(x1 − w1/2)×θ1/(w1×s1),
where abs is the absolute value function, w1 is the horizontal resolution (width in pixels) of the video image, x1 is the horizontal pixel coordinate of the detected ship target relative to the top-left corner of the video, and s1 is the horizontal rotation speed of the pan-tilt at the camera's current focal length;
the vertical rotation duration t2 of the pan-tilt is:
t2 = abs(y1 − h1/2)×θ2/(h1×s2),
where h1 is the vertical resolution (height in pixels) of the video image, y1 is the vertical pixel coordinate of the detected ship target relative to the top-left corner of the video, and s2 is the vertical rotation speed of the pan-tilt at the camera's current focal length;
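Under the reconstruction above (pixel offset from the image center scaled by the field angle, divided by the rotation speed), the run times can be computed as below; the degrees-per-second unit for s1 and s2 is an assumption.

```python
# Rotation durations of step 4; theta1/theta2 in degrees, s1/s2 in deg/s.
def rotation_durations(x1, y1, w1, h1, theta1, theta2, s1, s2):
    t1 = abs(x1 - w1 / 2.0) * theta1 / (w1 * s1)  # horizontal run time (s)
    t2 = abs(y1 - h1 / 2.0) * theta2 / (h1 * s2)  # vertical run time (s)
    return t1, t2

# e.g. ship centre at (1400, 400) in a 1920x1080 frame, 60 x 34 degree field
# of view, 10 deg/s pan-tilt speeds:
# rotation_durations(1400, 400, 1920, 1080, 60, 34, 10, 10) -> (~1.38 s, ~0.44 s)
```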
step 4-3, storing the camera's horizontal azimuth v1, pitch azimuth p1 and time t1 (in milliseconds) at the first successful detection, and the horizontal azimuth vn, pitch azimuth pn and time tn (in milliseconds) at the last successful detection; if the twin network fails to detect and track the ship, pre-pushing along the pan-tilt's historical motion track, computed as follows:
the average horizontal pre-push speed Vk of the pan-tilt at time tk:
Vk = (vn − v1)/(tn − t1),
the average pitch pre-push speed Pk of the pan-tilt at time tk:
Pk = (pn − p1)/(tn − t1),
with a run time of tk − tk−1, where tk−1 is the previous pre-push time point; after each pre-push, the value of tk−1 is updated to tk;
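A small sketch of this pre-push (dead-reckoning) computation; azimuths in degrees and times in milliseconds are assumed units.

```python
# Pre-push fallback of step 4-3: extrapolate pan-tilt motion from the first
# and last successful detections when the twin network loses the ship.
def prepush_speeds(v1, p1, t1_ms, vn, pn, tn_ms):
    """Average horizontal (Vk) and pitch (Pk) speeds, degrees per millisecond."""
    dt = tn_ms - t1_ms
    if dt <= 0:
        return 0.0, 0.0                    # not enough history to extrapolate
    return (vn - v1) / dt, (pn - p1) / dt

# At each pre-push tick tk the pan-tilt runs for (tk - tk_prev) at (Vk, Pk),
# then tk_prev is updated to tk, as in step 4-3.
```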
step 4-4, repeating steps 4-1, 4-2 and 4-3, updating the horizontal and pitch azimuth information in real time from the tracked ship's detection result, so that the pixel deviation between the center of the ship's detection rectangle and the image center stays within a threshold range: 0-150 pixels for a 1920 × 1080 image and 0-50 pixels for a 704 × 576 image;
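A sketch of this centring check follows; treating the deviation per axis (rather than as a Euclidean distance) is an assumption, as is the fallback threshold for resolutions other than the two named.

```python
# Centring criterion of step 4-4; thresholds taken from the text.
THRESHOLDS = {(1920, 1080): 150, (704, 576): 50}   # max centre deviation, pixels

def is_centred(box, image_size) -> bool:
    x, y, w, h = box                                # tracked-ship pixel rectangle
    img_w, img_h = image_size
    dx = abs((x + w / 2.0) - img_w / 2.0)
    dy = abs((y + h / 2.0) - img_h / 2.0)
    limit = THRESHOLDS.get(image_size, 150)         # assumed fallback
    return dx <= limit and dy <= limit
```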
the step 5 comprises the following steps: judging whether the proportion of the ship tracking detection rectangular frame in the video image is 1/3 to 1/2, if the image proportion does not meet the condition, controlling the lens to stretch and retract through a pan-tilt control protocol, carrying out fine adjustment, and carrying out fine adjustment every time so as to avoid excessively adjusting the focal length, wherein the image detection success rate is reduced when the focal length is excessively changed; if the image proportion meets the condition, the focal length of the camera does not need to be adjusted.
Beneficial effects: the invention solves the problem of smooth ship tracking well. During maritime watchkeeping, tracking a ship with a camera traditionally requires first calibrating the camera and then tracking the position detected by the ship's AIS and radar; the tracking accuracy strongly depends on the update frequency of the AIS and radar positions, and the camera's tracking picture stutters during the process. The twin-neural-network ship tracking method of the invention is simple in actual use, needs no camera calibration, does not depend on the AIS and radar update frequency during tracking, and displays the image without stuttering, so the picture stays smoother throughout tracking.
Drawings
FIG. 1 is a twin neural network training process of the present invention.
FIG. 2 is a diagram illustrating a software structure and a data processing flow according to the present invention.
Fig. 3 is a schematic view of a control flow structure of a camera pan-tilt and a lens according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the following figures and detailed description:
as shown in fig. 1, 2 and 3, the present embodiment provides a method for tracking a ship by smooth linkage of cameras, including the following steps:
step 1, collecting ship videos from the camera under different focal lengths, directions, weather and illumination states, automatically capturing pictures from the videos, and cleaning, classifying and normalizing the picture data; labeling the ships in the pictures and training a twin neural network to generate a data result set of ship features;
step 2, collecting the fixed parameters of the camera lens and pan-tilt;
step 3, acquiring the camera's video, manually framing the tracked ship in the image, and matching it against ships in the next video frame with the twin neural network to obtain the tracked ship's pixel coordinate position;
step 4, after the detection and tracking of each frame of image, calculating the pan-tilt's rotation direction and the duration of rotation at its speed from the tracked ship's pixel coordinate position, the camera pan-tilt azimuth and the lens parameters, and controlling the pan-tilt to rotate continuously and smoothly after the tracked ship;
and step 5, controlling the lens to zoom in and out proportionally according to the tracked ship's pixel coordinate position in the image, so that the ship occupies a display proportion of 1/3 to 1/2 of the image.
As shown in FIG. 1, the invention requires training a data result set of ship features; the method comprises:
step S11, acquiring the camera's video, saving it as a video file, reading the key frames (I frames) from the file, saving each I frame as a picture, and scaling the picture to 512 × 512;
step S12, manually removing pictures that show no ship, and pictures where ship density is high and occlusion is severe; classifying the ship pictures by illumination intensity and ship type; manually labeling the ships in the pictures, and for ships at the shore and ships that severely occlude one another, labeling only the unoccluded ships;
and step S13, building a twin neural network with AlexNet as the backbone; the network comprises 5 convolutional layers and 3 fully connected layers, with pooling layers after the 1st, 3rd and 5th convolutional layers, dropout layers introduced in the 1st and 2nd fully connected layers to avoid model overfitting, and the 3rd fully connected layer as the output layer. The labeled pictures are trained to generate the data result set of ship features.
As shown in fig. 2 and 3, the method of the invention makes the camera track the ship smoothly through ship tracking detection, camera pan-tilt parameter acquisition, camera control and similar steps, comprising:
step S21, acquiring the camera's video through the Real Time Streaming Protocol (RTSP) and decoding it with ffmpeg into individual image frames for display on the interface;
step S22, loading the ship feature model data from step S13 and manually framing the tracked ship in the image; sending the framed image and each subsequent frame into the twin neural network to extract features, and performing similarity matching in the target feature space through the loss function; if matching succeeds, sending the framed image features and the feature data of the search frame into the candidate region network (region proposal network) to predict the target's identification rectangle and obtaining the tracked ship's pixel coordinate proportion; if matching fails, executing step S25;
step S23, if the camera pan-tilt is rotating, stopping the rotation, and obtaining the tracked ship's pixel rectangle coordinates in the image, the focal length range, and the CCD target surface length and width parameters;
step S24, obtaining the camera zoom multiple, and computing the camera's horizontal and pitch field angles from the camera focal length, the zoom multiple, and the CCD target surface length and width;
The horizontal field angle θ1 is:
θ1 = 2×arctan(w/(2f*z)),
where w is the CCD width of the camera, f is the minimum focal length of the camera, and z is the current zoom multiple of the camera;
the pitch field angle θ2 is:
θ2 = 2×arctan(h/(2f*z)),
where h is the CCD height of the camera.
Obtain the horizontal and pitch offset angles of the ship center point relative to the video center point, compute from the pan-tilt rotation speed the time needed to move the ship center to the video image center, command the pan-tilt to rotate up, down, left and right through the pan-tilt control protocol, and stop the rotation after the corresponding delay; the delay must be less than 83 milliseconds. The horizontal rotation duration t1 of the camera is:
t1 = abs(x1 − w1/2)×θ1/(w1×s1),
where abs is the absolute value function, w1 is the horizontal resolution (width in pixels) of the video image, x1 is the horizontal pixel coordinate of the detected ship target relative to the top-left corner of the video, and s1 is the horizontal rotation speed of the pan-tilt at the camera's current focal length;
the vertical rotation duration t2 of the pan-tilt is:
t2 = abs(y1 − h1/2)×θ2/(h1×s2),
where h1 is the vertical resolution (height in pixels) of the video image, y1 is the vertical pixel coordinate of the detected ship target relative to the top-left corner of the video, and s2 is the vertical rotation speed of the pan-tilt at the camera's current focal length;
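Since the pan-tilt control protocol here is PELCO-D (named in the abstract), a sketch of its 7-byte command frames follows. The frame layout (sync 0xFF, address, two command bytes, pan and tilt speed, modulo-256 checksum) and the bit assignments below follow the commonly published PELCO-D convention; actual cameras may differ, so verify against the device manual.

```python
# Hedged sketch of PELCO-D command frames for driving the pan-tilt and zoom.
def pelco_d(address: int, cmd2: int, pan_speed: int = 0, tilt_speed: int = 0) -> bytes:
    body = [address & 0xFF, 0x00, cmd2 & 0xFF, pan_speed & 0xFF, tilt_speed & 0xFF]
    checksum = sum(body) % 256                     # modulo-256 sum of bytes 2-6
    return bytes([0xFF] + body + [checksum])

PAN_RIGHT, PAN_LEFT = 0x02, 0x04                   # command-2 bits
TILT_UP, TILT_DOWN = 0x08, 0x10
ZOOM_TELE, ZOOM_WIDE = 0x20, 0x40
STOP = 0x00

# e.g. pan right at moderate speed, then stop after the computed duration t1:
# serial.write(pelco_d(1, PAN_RIGHT, pan_speed=0x20))
# time.sleep(t1); serial.write(pelco_d(1, STOP))
```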
step S25, storing the camera's horizontal azimuth v1, pitch azimuth p1 and time t1 (in milliseconds) at the first successful detection, and the horizontal azimuth vn, pitch azimuth pn and time tn (in milliseconds) at the last successful detection; if the twin network fails to detect and track the ship, pre-pushing along the pan-tilt's historical motion track, computed as follows:
the average horizontal pre-push speed Vk of the pan-tilt at time tk:
Vk = (vn − v1)/(tn − t1),
the average pitch pre-push speed Pk of the pan-tilt at time tk:
Pk = (pn − p1)/(tn − t1),
with a run time of tk − tk−1, where tk−1 is the previous pre-push time point; after each pre-push, the value of tk−1 is updated to tk;
step S26, repeating steps S22, S23 and S24, updating the horizontal and pitch azimuth information in real time from the tracked ship's detection result, so that the pixel deviation between the center of the ship's detection rectangle and the image center stays within a threshold range: 0-150 pixels for a 1920 × 1080 image and 0-50 pixels for a 704 × 576 image;
and step S27, judging whether the tracked ship's detection rectangle occupies 1/3 to 1/2 of the video image; if not, controlling the lens zoom through the pan-tilt control protocol, one fine adjustment at a time, to avoid over-adjusting the focal length, since an excessive focal-length change lowers the image detection success rate; if the proportion already meets the condition, the camera focal length need not be adjusted.
In a specific implementation, the present application provides a computer storage medium and a corresponding data processing unit. The computer storage medium can store a computer program which, when executed by the data processing unit, can carry out the method for tracking a ship by smooth linkage of cameras provided by the invention, and some or all of the steps of each embodiment. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It is obvious to those skilled in the art that the technical solutions in the embodiments of the invention can be implemented by means of a computer program and its corresponding general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the invention, or the portions contributing to the prior art, may be embodied as a computer program, i.e. a software product, which may be stored in a storage medium and includes several instructions for causing a device containing a data processing unit (a personal computer, a server, a single-chip microcomputer, an MCU, a network device, or the like) to execute the methods of the embodiments of the invention, or parts of them.
The invention provides a method for tracking a ship by smooth linkage of cameras; there are many ways to implement this technical solution, and the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. All components not specified in the present embodiment can be realized by the prior art.

Claims (6)

1. A method for tracking a ship by smoothly linking cameras is characterized by comprising the following steps:
step 1, acquiring ship videos from the camera under different focal lengths, directions, weather and illumination states, automatically capturing pictures from the videos, labeling the ships in the pictures, and generating a data result set of ship features through twin neural network training;
step 2, collecting the fixed parameters of the camera lens and pan-tilt;
step 3, acquiring the camera's video, manually framing the tracked ship in the image, and matching it against ships in the next video frame with the twin neural network to obtain the tracked ship's pixel coordinate position;
step 4, after the detection and tracking of each frame of image, calculating the pan-tilt's rotation direction and the duration of rotation at its speed from the tracked ship's pixel coordinate position, the camera pan-tilt azimuth and the lens parameters, and controlling the pan-tilt to rotate continuously and smoothly after the tracked ship;
and step 5, controlling the lens to zoom in and out proportionally according to the tracked ship's pixel coordinate position in the image, so that the ship occupies a display proportion of 1/3 to 1/2 of the image.
2. The method of claim 1, wherein step 1 comprises:
step 1-1, acquiring the camera's video, saving it as a video file, reading the key frames (I frames) from the file, saving each I frame as a picture, and scaling the picture to 512 × 512;
step 1-2, cleaning, classifying and normalizing the picture data: eliminating pictures that show no ship, and pictures where ship density is high and occlusion is severe; classifying the ship pictures by illumination intensity and ship type; labeling the ships in the pictures, and for ships at the bank and ships that severely occlude one another, labeling only the unoccluded ships;
step 1-3, building a twin neural network with AlexNet as the backbone and training it on the labeled pictures to generate a data result set of ship features. The twin neural network comprises 5 convolutional layers and 3 fully connected layers; the 1st, 3rd and 5th convolutional layers are followed by pooling layers, dropout layers are introduced in the 1st and 2nd fully connected layers to avoid model overfitting, and the 3rd fully connected layer is the output layer.
3. The method of claim 2, wherein step 2 comprises: collecting the camera's focal length range, the pan-tilt rotation precision, the pan-tilt rotation speed range, and the length and width of the lens charge-coupled device (CCD) target surface.
4. The method of claim 3, wherein step 3 comprises:
step 3-1, acquiring the camera's video through the Real Time Streaming Protocol (RTSP) and decoding it with ffmpeg into individual image frames for display on the interface;
and step 3-2, loading the ship feature model data from step 1-3 and framing the tracked ship in the image; sending the framed image and each subsequent frame into the twin neural network to extract features, and performing similarity matching in the target feature space through the loss function; if matching succeeds, sending the framed image features and the feature data of the search frame into the candidate region network (region proposal network) to predict the target's identification rectangle, obtaining the tracked ship's pixel coordinate proportion and, from the image length and width, its pixel rectangle position; if matching fails, executing step 4-3.
5. The method of claim 4, wherein step 4 comprises:
step 4-1, if the camera pan-tilt is rotating, stopping the rotation, and obtaining the camera focal length range and CCD target surface dimensions collected in step 2 together with the pixel rectangle position of the tracked ship from step 3;
step 4-2, obtaining the current camera zoom multiple, and computing the horizontal and pitch field angles of the camera from the camera focal length, the zoom multiple, and the CCD target surface length and width;
The horizontal field angle θ1 is:
θ1 = 2×arctan(w/(2f*z)),
where w is the CCD width of the camera, f is the minimum focal length of the camera, and z is the current zoom multiple of the camera;
the pitch field angle θ2 is:
θ2 = 2×arctan(h/(2f*z)),
where h is the CCD height of the camera;
obtain the horizontal and pitch offset angles of the ship center point relative to the video center point, compute from the pan-tilt rotation speed the time needed to move the ship center to the video image center, command the pan-tilt to rotate up, down, left and right through the pan-tilt control protocol, and stop the rotation after the corresponding delay; the horizontal rotation duration t1 of the camera is:
t1 = abs(x1 − w1/2)×θ1/(w1×s1),
where abs is the absolute value function, w1 is the horizontal resolution (width in pixels) of the video image, x1 is the horizontal pixel coordinate of the detected ship target relative to the top-left corner of the video, and s1 is the horizontal rotation speed of the pan-tilt at the camera's current focal length;
the vertical rotation duration t2 of the pan-tilt is:
t2 = abs(y1 − h1/2)×θ2/(h1×s2),
where h1 is the vertical resolution (height in pixels) of the video image, y1 is the vertical pixel coordinate of the detected ship target relative to the top-left corner of the video, and s2 is the vertical rotation speed of the pan-tilt at the camera's current focal length;
step 4-3, storing the camera's horizontal azimuth v1, pitch azimuth p1 and time t1 at the first successful detection, and the horizontal azimuth vn, pitch azimuth pn and time tn at the last successful detection; if the twin neural network fails to detect and track the ship, pre-pushing along the pan-tilt's historical motion track, computed as follows:
the average horizontal pre-push speed Vk of the pan-tilt at time tk:
Vk = (vn − v1)/(tn − t1),
the average pitch pre-push speed Pk of the pan-tilt at time tk:
Pk = (pn − p1)/(tn − t1),
with a run time of tk − tk−1, where tk−1 is the previous pre-push time point; after each pre-push, the value of tk−1 is updated to tk;
and step 4-4, repeating steps 4-1, 4-2 and 4-3, updating the horizontal and pitch azimuth information in real time from the tracked ship's detection result, so that the pixel deviation between the center of the ship's detection rectangle and the image center stays within the threshold range.
6. The method of claim 5, wherein step 5 comprises: judging whether the tracked ship's detection rectangle occupies 1/3 to 1/2 of the video image; if not, controlling the lens zoom through the pan-tilt control protocol and making a fine adjustment; if the proportion meets the condition, the camera focal length need not be adjusted.
CN202211671984.7A 2022-12-26 2022-12-26 Method for tracking ship by smooth linkage of cameras Pending CN115942120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211671984.7A CN115942120A (en) 2022-12-26 2022-12-26 Method for tracking ship by smooth linkage of cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211671984.7A CN115942120A (en) 2022-12-26 2022-12-26 Method for tracking ship by smooth linkage of cameras

Publications (1)

Publication Number Publication Date
CN115942120A true CN115942120A (en) 2023-04-07

Family

ID=86654218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211671984.7A Pending CN115942120A (en) 2022-12-26 2022-12-26 Method for tracking ship by smooth linkage of cameras

Country Status (1)

Country Link
CN (1) CN115942120A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117061712A (en) * 2023-10-12 2023-11-14 常州市佐安电器有限公司 Automatic following monitoring method for coal mining machine
CN117061712B (en) * 2023-10-12 2023-12-29 常州市佐安电器有限公司 Automatic following monitoring method for coal mining machine

Similar Documents

Publication Publication Date Title
US20210073573A1 (en) Ship identity recognition method based on fusion of ais data and video data
CN111523465B (en) Ship identity recognition system based on camera calibration and deep learning algorithm
US9886770B2 (en) Image processing device and method, image processing system, and image processing program
US9936169B1 (en) System and method for autonomous PTZ tracking of aerial targets
US11900668B2 (en) System and method for identifying an object in water
CN108965809A (en) The video linkage monitoring system and control method of radar vectoring
CN105611244B (en) A kind of airport alien material detection method based on ball machine monitor video
CN108898122A (en) A kind of Intelligent human-face recognition methods
CN115942120A (en) Method for tracking ship by smooth linkage of cameras
WO2003098922A1 (en) An imaging system and method for tracking the motion of an object
CN111161305A (en) Intelligent unmanned aerial vehicle identification tracking method and system
CN103617631B (en) A kind of tracking based on Spot detection
CN109883433A (en) Vehicle positioning method in structured environment based on 360 degree of panoramic views
CN115346155A (en) Ship image track extraction method for visual feature discontinuous interference
CN100451544C (en) Method for measuring attitude parameters of aircraft based on video images
CN113743286A (en) Target monitoring system and method for multi-source signal fusion
CN109509368A (en) A kind of parking behavior algorithm based on roof model
Qiu et al. Intelligent highway lane center identification from surveillance camera video
CN102930554A (en) Method and system for accurately capturing target in monitored scene
Cafaro et al. Towards Enhanced Support for Ship Sailing
JP4686773B2 (en) Moving object recognition method and moving object recognition apparatus
CN113850905B (en) Panoramic image real-time stitching method for circumferential scanning type photoelectric early warning system
Shan et al. Maritime target detection based on electronic image stabilization technology of shipborne camera
Zhao et al. Adaptive background modeling for land and water composition scenes
Li et al. Arbitrary-Oriented Ship Detection Based on Feature Filter and KL Loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination