CN117557788A - Marine target detection method and system based on motion prediction - Google Patents


Info

Publication number
CN117557788A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410044811.5A
Other languages
Chinese (zh)
Other versions
CN117557788B (en)
Inventor
邱千钧
滕哲
李烨
陈健
宋健
付哲
陈琴琴
王青瑜
吕梅柏
张芳
潘兰波
马政伟
林洪文
毛建舟
李东
王发龙
郭安邦
郭涵子
罗飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
91526 Troops Of Chinese Pla
Chinese People's Liberation Army 91959 Unit
Science And Technology Innovation Research Center Of Naval Research Institute Of People's Liberation Army Of China
Srif Software Co ltd
Xian institute of Applied Optics
PLA Dalian Naval Academy
Original Assignee
91526 Troops Of Chinese Pla
Chinese People's Liberation Army 91959 Unit
Science And Technology Innovation Research Center Of Naval Research Institute Of People's Liberation Army Of China
Srif Software Co ltd
Xian institute of Applied Optics
PLA Dalian Naval Academy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 91526 Troops Of Chinese Pla, Chinese People's Liberation Army 91959 Unit, Science And Technology Innovation Research Center Of Naval Research Institute Of People's Liberation Army Of China, Srif Software Co ltd, Xian institute of Applied Optics, PLA Dalian Naval Academy
Priority to CN202410044811.5A
Publication of CN117557788A
Application granted
Publication of CN117557788B
Legal status: Active
Anticipated expiration: (not listed)

Classifications

    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20076 Probabilistic image processing
    • G06T 2207/20081 Training; Learning
    • G06V 2201/07 Target detection
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a marine target detection method and system based on motion prediction, relating to the technical field of image processing. Area images acquired in M time windows are processed with a weighted image entropy method to obtain M predicted targets; frame selection and image re-acquisition are performed on these targets, and the re-acquired images are analysed to obtain M target feature sets. Based on the M target feature sets, M-1 items of target similarity probability information are calculated, N determined targets meeting a preset similarity probability threshold are screened out, and the motion prediction information of the determined targets is calculated from their positions in the corresponding N area images and the first area image and then displayed. The method solves the prior-art problem of insufficient accuracy and real-time performance in marine target detection and recognition, improving both so as to ensure the effectiveness of ship safety management and navigation safety maintenance.

Description

Marine target detection method and system based on motion prediction
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for detecting an offshore target based on motion prediction.
Background
As target tracking technology has developed and matured, it has been extended to the detection and tracking of marine ship targets in order to reduce the risk of accidents at sea. Ship target detection essentially monitors the moving state and sailing track of a marine ship target by continuously identifying the ship target across a sequence of video frames.
With the rapid development of the ocean economy, the number of ships in the marine fishing and transportation industries continues to increase, making sea-surface navigation conditions more complex; the safety of marine ship navigation has therefore attracted growing attention.
The prior art suffers from shortcomings in the accuracy and real-time performance of marine target detection and recognition, which in turn limits the effectiveness of marine ship safety management and ship navigation safety maintenance that rely on such detection and recognition.
Disclosure of Invention
The application provides a marine target detection method and system based on motion prediction, to solve the technical problem that, because of the shortcomings in accuracy and real-time performance of marine target detection and recognition in the prior art, marine ship safety management and ship navigation safety maintenance based on such detection and recognition are insufficiently effective.
In view of the above, the present application provides a method and a system for detecting an offshore target based on motion prediction.
In a first aspect of the present application, there is provided a method of marine target detection based on motion prediction, the method comprising: acquiring area images of a target area in M time windows to obtain an area image set, wherein the area image set comprises a first area image corresponding to a first time window; performing weighted image entropy processing on M area images in the area image set to obtain M predicted targets in the M area images, wherein M is a positive integer; performing frame selection and image re-acquisition on the M predicted targets to obtain M re-acquired images, wherein the M re-acquired images comprise a first re-acquired image; inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set; calculating M-1 items of target similarity probability information according to the first target feature set and the other M-1 target feature sets; obtaining, as determined targets, N predicted targets corresponding to N items of target similarity probability information meeting a preset similarity probability threshold, wherein N is a positive integer and N is less than or equal to M-1; and calculating the motion prediction information of the determined targets according to their positions in the corresponding N area images and the first area image, and displaying the motion prediction information.
In a second aspect of the present application, there is provided a marine target detection system based on motion prediction, the system comprising: a regional image acquisition module, used for acquiring area images of a target area in M time windows to obtain an area image set, wherein the area image set comprises a first area image corresponding to a first time window; a prediction target obtaining module, used for performing weighted image entropy processing on M area images in the area image set to obtain M predicted targets in the M area images, wherein M is a positive integer; an image re-acquisition execution module, used for performing frame selection and image re-acquisition on the M predicted targets to obtain M re-acquired images, wherein the M re-acquired images comprise a first re-acquired image; a target feature obtaining module, used for inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set; a similarity probability calculation module, used for calculating M-1 items of target similarity probability information according to the first target feature set and the other M-1 target feature sets; a determining target obtaining module, used for obtaining, as determined targets, N predicted targets corresponding to N items of target similarity probability information meeting a preset similarity probability threshold, wherein N is a positive integer and N is less than or equal to M-1; and a target motion prediction module, used for calculating the motion prediction information of the determined targets according to their positions in the corresponding N area images and the first area image, and displaying the motion prediction information.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
Area images of the target area are acquired in M time windows to obtain an area image set including a first area image corresponding to a first time window; this set provides the basic image information for subsequent analysis of the moving state of the monitored target in the target area. Weighted image entropy processing is performed on the M area images to obtain M predicted targets, where M is a positive integer, achieving high-accuracy, high-precision detection and recognition of predicted targets against a complex background and providing a recognition reference for subsequent target tracking. Frame selection and image re-acquisition are performed on the M predicted targets to obtain M re-acquired images, including a first re-acquired image; this raises the imaging resolution of the predicted targets and their area ratio within the acquired images, providing a basis for subsequently judging whether the predicted targets in the several re-acquired images are the same marine ship. The M re-acquired images are input into a target feature analysis model to obtain M target feature sets, including a first target feature set, supplying static information on the running process for later generation of the predicted target's navigation track. From the first target feature set and the other M-1 target feature sets, M-1 items of target similarity probability information are calculated, providing a screening basis for determining which acquired images contain the same marine ship as the predicted target. N predicted targets corresponding to N items of target similarity probability information meeting a preset similarity probability threshold are taken as determined targets, where N is a positive integer and N is less than or equal to M-1; and the motion prediction information of the determined targets is calculated from their positions in the corresponding N area images and the first area image, and displayed. The method thereby improves the accuracy and real-time performance of marine target detection and recognition, enables accurate motion prediction of marine targets based on accurate and timely detection results, and guarantees the effectiveness of ship safety management and navigation safety maintenance.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting an offshore target based on motion prediction provided by the application;
FIG. 2 is a schematic flow chart of obtaining a predicted target in the method for detecting an offshore target based on motion prediction provided by the present application;
FIG. 3 is a schematic flow chart of obtaining a target feature set in an offshore target detection method based on motion prediction provided by the present application;
FIG. 4 is a schematic structural diagram of an offshore object detection system based on motion prediction provided in the present application.
Reference numerals illustrate: the device comprises a regional image acquisition module 1, a predicted target acquisition module 2, an image re-acquisition execution module 3, a target feature acquisition module 4, a similarity probability calculation module 5, a determined target acquisition module 6 and a target motion prediction module 7.
Detailed Description
The application provides a marine target detection method and system based on motion prediction, to solve the insufficient effectiveness of marine ship safety management and ship navigation safety maintenance caused by the shortcomings in accuracy and real-time performance of marine target detection and recognition in the prior art. The aim is to improve the accuracy and real-time performance of marine target detection and recognition, so that accurate motion prediction of marine targets can be performed on the basis of accurate and timely detection results, thereby guaranteeing the effectiveness of ship safety management and navigation safety maintenance.
The acquisition, storage, use, and processing of data in the technical scheme of the invention all comply with the relevant provisions of national laws and regulations.
In the following, the technical solutions of the present invention will be clearly and completely described with reference to the accompanying drawings, and it should be understood that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention, and that the present invention is not limited by the exemplary embodiments described herein. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention. It should be further noted that, for convenience of description, only some, but not all of the drawings related to the present invention are shown.
Example 1
As shown in fig. 1, the present application provides a method for detecting an offshore target based on motion prediction, the method comprising:
s100, acquiring area images in an acquisition target area in M time windows to obtain an area image set, wherein the area image set comprises a first area image corresponding to a first time window;
specifically, in this embodiment, the target area is preferably a sea area with a certain range where navigation of the offshore ship frequently exists, and the M time windows are used for calling the image acquisition device to acquire the infrared image of the target area at intervals of time periods, and are preferably a plurality of time windows with consistent time intervals.
The M time windows comprise a first time window through an Mth time window. The image acquisition device is called in the first time window to acquire an image of the target area, yielding the first area image. The same method is applied to the M time windows in turn to acquire target area images, obtaining the area image set comprising the first through Mth area images; this set provides the basic image information for subsequent analysis of the moving state of the monitored target in the target area.
S200, carrying out weighted image entropy processing on M area images in the area image set to obtain M prediction targets in the M area images, wherein M is a positive integer;
In one embodiment, as shown in fig. 2, weighted image entropy processing is performed on the M area images in the area image set to obtain M predicted targets in the M area images; the method provided in step S200 further includes:
s210, calculating multi-scale local contrast of each pixel point in the regional image for M regional images in the regional image set;
s220, calculating the local information entropy of each pixel point for M area images in the area image set;
s230, calculating the product according to the multiscale local contrast and the local information entropy of each pixel point, and binarizing the product calculation result of each pixel point according to a preset threshold value to obtain the M prediction targets.
Specifically, the purpose of acquiring images of the target area over multiple time windows in this embodiment is to monitor and determine the navigation track of a specific target in the sea area, for example a marine ship. Because ship types and sizes vary, the size of the predicted target in the target area is uncertain.
It should be understood that the area image is an infrared image, the area image is composed of a background, a predicted target and noise, the predicted target appears in a random position in the target area, the predicted target is smaller relative to the target area, correspondingly, the predicted target appears in the area image and has a random imaging shape, and the area occupied by the predicted target in the area image is smaller.
The gray value of the predicted target imaged in the area image differs from that of the background/noise, but because of the complexity of the background, the difference between the predicted target and the background/noise of the area image is small.
Therefore, in this embodiment, the background is suppressed and the predicted target enhanced to improve the recognizability of the predicted target in the corresponding area image. Taking any single area image as an example, the method of obtaining the M predicted targets in the M area images is described below.
The multi-scale local contrast of any pixel point Q(x, y) in the area image is calculated using an existing multi-scale local contrast method. Specifically, several neighbourhood windows of different scales are preset around the pixel point, the local contrast within each scale is calculated one by one, and the per-scale results are combined numerically to obtain the multi-scale local contrast of Q(x, y). The same method is used to calculate the multi-scale local contrast of every pixel point in the area image.
The local information entropy of each pixel point Q(x, y) in the area image is calculated using an existing local information entropy method; the local information entropy serves to weight the calculated multi-scale local contrast of the pixel points.
The product of the multi-scale local contrast and the local information entropy of each pixel point is then calculated, weighting the contrast of every pixel in the area image. A preset threshold is set against these product results, each pixel's product is compared with the threshold, and binarization is performed; based on this result, predicted targets are distinguished in the several area images to obtain the M predicted targets. The preset threshold decides whether a pixel point in the area image belongs to the background/noise component or the predicted target component, and its specific value can be set according to actual requirements, for example as the mean of the products of multi-scale local contrast and local information entropy over all pixel points.
Pixel points satisfying the preset threshold (i.e. predicted target components) are merged to obtain the predicted target in the corresponding area image, and the same method yields the M predicted targets belonging to the M area images; it should be understood that the M predicted targets are in fact images of the same marine ship.
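Steps S210 to S230 can be sketched as follows. This is a minimal illustration, not the patent's exact formulas: the particular contrast measure (centre-patch mean over surrounding-ring mean), the window scales, and the toy infrared frame are assumptions, since the description delegates these details to existing prior-art methods; only the product-then-binarise structure and the mean-of-products threshold come from the text above.

```python
import numpy as np

def local_contrast(img, y, x, scale):
    """Centre-patch mean divided by the surrounding-ring mean at one
    scale; a simple stand-in for a multi-scale local contrast measure."""
    h, w = img.shape
    half = scale // 2
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    centre = img[y0:y1, x0:x1]
    Y0, Y1 = max(0, y - 3 * half), min(h, y + 3 * half + 1)
    X0, X1 = max(0, x - 3 * half), min(w, x + 3 * half + 1)
    ring = img[Y0:Y1, X0:X1]
    surround = (ring.sum() - centre.sum()) / max(ring.size - centre.size, 1)
    return centre.mean() / (surround + 1e-6)

def local_entropy(img, y, x, half=2, bins=8):
    """Shannon entropy of the grey-level histogram in a small window."""
    h, w = img.shape
    win = img[max(0, y - half):min(h, y + half + 1),
              max(0, x - half):min(w, x + half + 1)]
    p, _ = np.histogram(win, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def detect_targets(img, scales=(3, 5)):
    """Weighted image entropy map: contrast x entropy per pixel,
    binarised at the mean of the product map (the example threshold
    mentioned in the description)."""
    h, w = img.shape
    score = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            c = max(local_contrast(img, y, x, s) for s in scales)
            score[y, x] = c * local_entropy(img, y, x)
    return score > score.mean()

# toy infrared frame: dim noisy sea background plus one bright target
rng = np.random.default_rng(0)
frame = 0.1 * rng.random((24, 24))
frame[10:13, 10:13] = 0.9          # the "ship"
mask = detect_targets(frame)       # True where a target pixel is predicted
```

Merging the True pixels of `mask` into connected components would then give the predicted target for this area image.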
The embodiment realizes the technical effects of detecting and identifying the predicted target with high accuracy and high precision in a complex background and providing a tracking target identification reference for the follow-up tracking of the predicted target.
S300, performing frame selection and image re-acquisition on the M predicted targets to obtain M re-acquired images, wherein the M re-acquired images comprise a first re-acquired image;
In one embodiment, frame selection and image re-acquisition are performed on the M predicted targets to obtain M re-acquired images; the method step S300 provided in the present application further includes:
s310, in the M area images, carrying out frame selection on the M prediction targets to obtain M frame selection areas;
s320, up-sampling the M frame selection areas to obtain the M re-acquisition images.
Specifically, it should be understood that the predicted target occurrence position has randomness in the target area, and the predicted target is smaller relative to the target area, and accordingly, the predicted target occurrence position and the imaging shape in the area image have randomness, and the predicted target image occupies a smaller area in the area image.
Therefore, in this embodiment, on the basis of obtaining M predicted targets in M area images through step S200, the M predicted targets are framed on the M area images to increase the area ratio of the predicted target image in the acquired image, so as to obtain the M framed areas, where the image ratio of the predicted target in each framed area is greater than the image background ratio.
And up-sampling the M frame selection areas, and improving the image resolution of the M frame selection areas based on up-sampling to obtain M re-acquisition images, wherein the M re-acquisition images are clear infrared images with higher image resolution and larger imaging area occupation of a predicted target in the images. The M re-acquired images correspond to the M region images, and a first re-acquired image corresponds to the first region image.
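Steps S310 and S320 can be sketched as a crop followed by an up-sample. The margin, the up-sampling factor, and the use of nearest-neighbour repetition (in place of the proper interpolation a real imaging front end would apply) are assumptions for illustration.

```python
import numpy as np

def frame_select(image, mask, margin=2):
    """Frame the detected target: bounding box of the target pixels,
    padded by a small margin, cropped from the area image."""
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

def reacquire(crop, factor=4):
    """Up-sample the framed region; nearest-neighbour repetition stands
    in for a bilinear or bicubic interpolation kernel."""
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

frame = np.zeros((24, 24))
frame[10:13, 10:13] = 1.0            # 3x3 predicted target in a 24x24 area image
mask = frame > 0.5                   # detection result from the previous step
crop = frame_select(frame, mask)     # frame-selected region around the target
hires = reacquire(crop)              # re-acquired image at higher resolution
```

In the crop the target occupies a far larger share of the pixels than it did in the full area image, which is exactly the area-ratio improvement the step aims at.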
The embodiment achieves the technical effects of improving the imaging resolution of the predicted target and the area ratio of the imaged acquired image, and providing a judgment basis for the follow-up identification and judgment of whether the predicted target in the multiple re-acquired images is the same marine ship.
S400, inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set;
In one embodiment, as shown in fig. 3, the M re-acquired images are input into a target feature analysis model to obtain M target feature sets comprising a first target feature set; the method provided in step S400 further includes:
s410, acquiring a plurality of sample re-acquisition images, wherein the sample re-acquisition images comprise sample prediction targets;
s420, performing feature analysis on sample prediction targets in the plurality of sample re-acquisition images to obtain a plurality of sample target feature sets, wherein the feature analysis comprises size feature analysis and gray level feature analysis;
s430, adopting the plurality of sample re-acquisition images and the plurality of sample target feature sets to construct the target feature analysis model;
s440, inputting the M re-acquired images into the target feature analysis model to obtain the M target feature sets comprising the first target feature set.
In one embodiment, the target feature analysis model is constructed by using the plurality of sample re-acquired images and the plurality of sample target feature sets, and the method provided in step S430 further includes:
s431, constructing the target feature analysis model based on a convolutional neural network, wherein input data of the target feature analysis model is a re-acquired image, and output data is a target feature set;
s432, carrying out data labeling and division on the multiple sample re-acquisition images and the multiple sample target feature sets to obtain a training set, a verification set and a test set;
s433, performing supervision training, verification and test on the target feature analysis model by adopting the training set, the verification set and the test set to obtain the target feature analysis model with the accuracy meeting the preset requirement.
Specifically, in this embodiment, the predicted target recognised in the first of the M re-acquired images serves as the reference for motion prediction: based on it, it is judged whether the predicted targets in the subsequent M-1 re-acquired images are images acquired along the motion track of the same marine ship.
To make this judgment accurately, the M target feature sets of the M predicted targets in the M re-acquired images are obtained, and the similarity between the first target feature set and each of the remaining M-1 target feature sets is calculated by comparison, so that it can be accurately determined whether the predicted targets in the subsequent M-1 re-acquired images capture the motion track of the same marine ship.
The M target feature sets of the M predicted targets are obtained from the M re-acquired images, preferably through analysis by a purpose-built target feature analysis model. The training data of the target feature analysis model are obtained as follows:
obtaining a plurality of sample re-acquisition images, wherein the plurality of sample re-acquisition images comprise sample prediction targets, and performing size feature analysis and gray feature analysis on the sample prediction targets in the plurality of sample re-acquisition images based on manual work to obtain a plurality of sample target feature sets. It should be understood that, based on the feature analysis performed on the sample prediction target of each sample re-acquisition image, the size feature and the gray level feature of the sample prediction target are obtained, and the size feature and the readable feature form a sample target feature set of the sample. The size characteristic is the image line size proportion characteristic of the sample prediction target imaging in the horizontal direction, the longitudinal direction and the like, and the gray scale characteristic is the pixel point RGB value interval threshold characteristic of the sample prediction target imaging.
Based on a convolutional neural network, the target feature analysis model is constructed, input data of the target feature analysis model is a re-acquired image, output data is a target feature set, and the target feature set comprises size features and gray features of a predicted target.
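The size and grey features just described can be sketched as a small routine. Everything here is illustrative: the binarisation threshold, the normalisation by image size, and the toy image are assumptions, and in the embodiment these features are ultimately produced by the trained analysis model rather than computed directly.

```python
import numpy as np

def target_features(reacq, thresh=0.5):
    """Size features (normalised width, height, and aspect ratio of the
    target's extent) and grey features (intensity interval of the target
    pixels): the two feature families named in the description."""
    mask = reacq > thresh
    ys, xs = np.nonzero(mask)
    h, w = reacq.shape
    height = (ys.max() - ys.min() + 1) / h       # vertical proportion
    width = (xs.max() - xs.min() + 1) / w        # horizontal proportion
    return {"size": (width, height, width / height),
            "gray": (float(reacq[mask].min()), float(reacq[mask].max()))}

img = np.full((8, 8), 0.1)
img[2:6, 1:7] = 0.8                  # 4-row by 6-column target
feats = target_features(img)
```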
The plurality of sample re-acquired images and the plurality of sample target feature sets are divided and labeled at a data-volume ratio of 17:2:1 to obtain a training set, a verification set and a test set. The training set and the verification set are used for supervised training and verification of the target feature analysis model, and the test set is used to test its output accuracy. Model training stops when the output accuracy meets a preset requirement (for example, an output accuracy of at least 85%), yielding the trained target feature analysis model.
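The 17:2:1 division and the accuracy-based stopping condition can be sketched as follows; the function names and the shuffling strategy are illustrative assumptions.

```python
import random

def split_dataset(samples, ratios=(17, 2, 1), seed=0):
    """Split (image, feature_set) samples into training, verification and
    test sets at the stated 17:2:1 data-volume ratio. Illustrative only."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_val = len(shuffled) * ratios[1] // total
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

def training_should_stop(output_accuracy, required=0.85):
    """Stop model training once the test accuracy meets the preset requirement."""
    return output_accuracy >= required
```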
Inputting the M re-acquired images into the target feature analysis model to obtain the M target feature sets comprising the first target feature set.
This embodiment achieves the technical effect of obtaining target feature sets that support accurately judging whether the predicted targets in the subsequent M-1 re-acquired images were acquired from the motion track of the same marine vessel, providing a data basis for subsequently generating the navigation track of the predicted target.
S500, calculating to obtain M-1 target similarity probability information according to the first target feature set and other M-1 target feature sets;
in one embodiment, according to the first target feature set and the other M-1 target feature sets, calculating to obtain M-1 target similarity probability information, the method provided in step S500 further includes:
s510, acquiring a first target size feature, a first target gray feature, M-1 target size features and M-1 target gray features in the first target feature set and the M-1 target feature sets;
s520, calculating and obtaining M-1 size similarity probability information and M-1 gray similarity probability information according to the first target size feature, the first target gray feature, the M-1 target size features and the M-1 target gray features;
s530, weighting and calculating the M-1 size similarity probability information and the M-1 gray similarity probability information to obtain the M-1 target similarity probability information.
Specifically, in this embodiment, the predicted target contained in the first re-acquired image is taken as the target of a single detection of the marine vessel track. Based on the first target size feature and the first target gray feature in the first target feature set of this predicted target, it is determined, for each predicted target in the second to M-th re-acquired images, whether that predicted target and the predicted target in the first re-acquired image were obtained by image acquisition of the same marine vessel entity.
The similarity probability between the predicted target in the second re-acquired image and the predicted target in the first re-acquired image is calculated as follows. For the size features, the deviation degree of the image line-size proportions of the second target size feature from the first target size feature in the horizontal, vertical and other directions is computed, where the deviation degree is the ratio of the difference between the second target size and the first target size to the first target size; one minus the deviation degree is taken as the size similarity probability information of the second re-acquired image. For the gray features, the average deviation degree of the pixel gray values of the second target gray feature from the first target gray feature is computed, where the deviation degree is the ratio of the difference between the average pixel gray value of the second target and that of the first target to the average pixel gray value of the first target; one minus the deviation degree is taken as the gray similarity probability information of the second re-acquired image.
In this way, the first target size feature, the first target gray feature, the M-1 target size features and the M-1 target gray features in the first target feature set and the M-1 target feature sets are obtained. The size similarity probability information and the gray similarity probability information are then calculated in the above manner, yielding M-1 pieces of size similarity probability information and M-1 pieces of gray similarity probability information corresponding to the second to M-th re-acquired images.
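The deviation-degree similarities described above reduce to 1 - |value - reference| / reference. A minimal sketch, assuming each feature set is a flat dict with width, height and mean-gray entries; taking the minimum over the two directions for the size similarity is an additional assumption, since the patent does not fix how the per-direction deviations are combined:

```python
def deviation_similarity(value, reference):
    """Similarity probability = 1 - |value - reference| / reference."""
    return 1.0 - abs(value - reference) / reference

def target_similarities(first, other):
    """Compute the size and gray similarity probabilities between the first
    target's features and another target's features. The dict layout
    ('width', 'height', 'mean_gray') is an illustrative assumption."""
    size_sim = min(
        deviation_similarity(other["width"], first["width"]),
        deviation_similarity(other["height"], first["height"]),
    )
    gray_sim = deviation_similarity(other["mean_gray"], first["mean_gray"])
    return size_sim, gray_sim
```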
In one embodiment, multiple groups of weight distribution results for image size similarity and image gray similarity, given by experts in the image field, are collected through public channels. The image size similarity weight is obtained by averaging the image size similarity weights over the groups, and the image gray similarity weight is obtained by averaging the image gray similarity weights over the groups. Illustratively, the image size similarity weight is 0.6 and the image gray similarity weight is 0.4.
And carrying out weighted calculation on the M-1 size similarity probability information and the M-1 gray similarity probability information to obtain M-1 target similarity probability information, wherein the M-1 target similarity probability information corresponds to M-1 prediction targets in the second to Mth re-acquired images.
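With the illustrative expert-averaged weights (0.6 for size, 0.4 for gray), the weighted calculation is a single weighted sum per re-acquired image:

```python
def fuse_similarity(size_sim, gray_sim, w_size=0.6, w_gray=0.4):
    """Weighted target similarity probability; the 0.6/0.4 weights are the
    illustrative values stated in the embodiment, not fixed constants."""
    return w_size * size_sim + w_gray * gray_sim
```

Applying this to each of the M-1 (size, gray) pairs yields the M-1 pieces of target similarity probability information.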
In this way, by calculating the similarity between the predicted-target imaging in the second to M-th re-acquired images and the predicted-target imaging in the first re-acquired image, this embodiment provides a basis for subsequently screening out the re-acquired images in which the same marine vessel appears as the predicted target.
S600, N prediction targets corresponding to N target similarity probability information meeting a preset similarity probability threshold are obtained and used as determination targets, wherein N is a positive integer and N is less than or equal to M-1;
specifically, in this embodiment, the preset similarity probability threshold is set according to marine target detection experience data accumulated over historical time, and comprises a size similarity probability threshold and a gray similarity probability threshold; its specific value may be set according to the actual accuracy and real-time requirements of marine target detection. When the size similarity probability information and the gray similarity probability information of any re-acquired image both meet the preset similarity probability threshold, the predicted-target imaging contained in that re-acquired image and the predicted-target imaging in the first re-acquired image are considered to be images of the same marine vessel. Illustratively, the preset similarity probability threshold is 0.7.
The N predicted targets corresponding to the N pieces of target similarity probability information that meet the preset similarity probability threshold are obtained as determination targets, where N is a positive integer and N is less than or equal to M-1; each determination target is the infrared imaging of the marine vessel that is the current marine target detection object.
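The screening of step S600 — keep a predicted target only when both its size and gray similarity probabilities meet the preset threshold — can be sketched as:

```python
def select_determination_targets(similarities, threshold=0.7):
    """Return indices (into the 2nd..M-th re-acquired images) whose size AND
    gray similarity probabilities both meet the preset threshold, mirroring
    the screening of step S600. `similarities` is a list of
    (size_sim, gray_sim) pairs; the 0.7 default is the illustrative value."""
    return [
        i for i, (size_sim, gray_sim) in enumerate(similarities)
        if size_sim >= threshold and gray_sim >= threshold
    ]
```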
S700, calculating and obtaining motion prediction information of the determined target according to the positions of the determined target in the corresponding N area images and the first area image, and displaying the motion prediction information.
In one embodiment, motion prediction information of the determined target is calculated according to the positions of the determined target in the corresponding N area images and the first area image, and step S700 of the method provided in the present application further includes:
s710, acquiring time windows corresponding to the N area images and the first area image to be used as N+1 determined time windows;
s720, acquiring positions of the determination targets in the corresponding N+1 area images, and acquiring N+1 determination position information;
and S730, calculating and obtaining a motion method and a motion speed of the determined target as the motion prediction information according to the N+1 determined position information and the N+1 determined time windows.
Specifically, in this embodiment, based on the correspondence between the M time windows and the M area images and the M reacquired images in step S100, the time windows corresponding to the N area images and the first area image are obtained as n+1 determined time windows.
The positions of the determined target in the corresponding N+1 area images are acquired, giving N+1 pieces of determined position information. From the N+1 pieces of determined position information and the N+1 determined time windows, combined with the longitude and latitude data of the target area, the motion method and the motion speed of the determined target are calculated. The motion method includes travel driven by a power system and travel driven by sea wind, and the motion speed corresponds to the motion method, comprising the travel speed under power-system drive and the travel speed under sea-wind drive. The motion method and the motion speed of the determined target are taken as the motion prediction information, which can be used to predict the future position of the determined target so as to avoid sailing collisions in complex scenes involving multiple marine vessels. The method and the device thereby achieve the technical effects of improving the accuracy and real-time performance of marine target detection and identification, performing accurate prediction of marine target motion based on accurate and timely detection results, and ensuring the effectiveness of ship safety management and navigation safety maintenance.
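One hedged way to turn the N+1 positions and time windows into a motion speed is to map positions to longitude/latitude and average the great-circle speed over consecutive windows. The patent does not fix an exact formula, so the following is an assumption-laden sketch:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimate_motion_speed(positions, times):
    """Average speed (m/s) of the determined target from N+1 (lat, lon)
    positions and N+1 representative timestamps of the determined time
    windows. A simple sketch; the mapping from image position to lat/lon
    is assumed to have been done upstream."""
    speeds = []
    for (la1, lo1), (la2, lo2), t1, t2 in zip(
        positions, positions[1:], times, times[1:]
    ):
        speeds.append(haversine_m(la1, lo1, la2, lo2) / (t2 - t1))
    return sum(speeds) / len(speeds)
```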
Example two
Based on the same inventive concept as the motion prediction-based marine object detection method in the previous embodiment, as shown in fig. 4, the present application provides a motion prediction-based marine object detection system, wherein the system includes:
the regional image acquisition module 1 is used for acquiring regional images in an acquisition target region in M time windows to obtain a regional image set, wherein the regional image set comprises first regional images corresponding to a first time window;
the prediction target obtaining module 2 is used for carrying out weighted image entropy processing on M area images in the area image set to obtain M prediction targets in the M area images, wherein M is a positive integer;
the image re-acquisition execution module 3 is used for carrying out frame selection and image re-acquisition on the M predicted targets to obtain M re-acquired images, wherein the M re-acquired images comprise a first re-acquired image;
the target feature obtaining module 4 is used for inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set;
the similarity probability calculation module 5 is configured to calculate and obtain M-1 pieces of target similarity probability information according to the first target feature set and the other M-1 pieces of target feature sets;
the determination target obtaining module 6 is configured to obtain N prediction targets corresponding to N target similarity probability information that satisfy a preset similarity probability threshold, where N is a positive integer and N is less than or equal to M-1 as a determination target;
and the target motion prediction module 7 is used for calculating and obtaining motion prediction information of the determined target according to the positions of the determined target in the corresponding N area images and the first area image, and displaying the motion prediction information.
In one embodiment, the system further comprises:
the pixel contrast calculating unit is used for calculating the multiscale local contrast of each pixel point in the region image for M region images in the region image set;
the local information entropy calculation unit is used for calculating the local information entropy of each pixel point for M area images in the area image set;
the binarization processing unit is used for carrying out product calculation according to the multiscale local contrast and the local information entropy of each pixel point, carrying out binarization processing on the product calculation result of each pixel point according to a preset threshold value, and obtaining the M prediction targets.
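The contrast-times-entropy detection performed by these three units can be sketched as below. This uses a single-scale local contrast as a simplified stand-in for the multiscale measure, and quantises gray levels to 16 bins for the local entropy; both simplifications are assumptions for illustration.

```python
import numpy as np

def local_entropy(img, k=3):
    """Local information entropy of each pixel over a k*k neighbourhood
    (8-bit image quantised to 16 gray bins). Illustrative sketch."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img // 16, pad, mode="edge")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k].ravel()
            counts = np.bincount(patch, minlength=16)
            p = counts[counts > 0] / patch.size
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def local_contrast(img, k=3):
    """Single-scale local contrast: centre value minus neighbourhood mean,
    clipped at zero (stands in for the multiscale local contrast)."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = max(img[i, j] - patch.mean(), 0.0)
    return out

def detect_targets(img, threshold):
    """Binarise the contrast * entropy product to flag candidate target pixels."""
    score = local_contrast(img) * local_entropy(img)
    return score > threshold
```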
In one embodiment, the system further comprises:
the regional frame selection execution unit is used for carrying out frame selection on the M prediction targets in the M regional images to obtain M frame selection regions;
and the frame selection area sampling unit is used for up-sampling the M frame selection areas to obtain the M re-acquisition images.
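The frame selection followed by up-sampling that these units perform can be sketched with a crop plus nearest-neighbour repetition; this is a pure-NumPy assumption-level sketch, whereas a real system might re-image the region at higher resolution or use a higher-quality interpolator.

```python
import numpy as np

def frame_select_and_upsample(image, bbox, factor=2):
    """Crop the frame-selected region (x0, y0, x1, y1) from the area image
    and up-sample it by nearest-neighbour repetition to produce a
    re-acquired image. Bounding-box convention is an assumption."""
    x0, y0, x1, y1 = bbox
    crop = image[y0:y1, x0:x1]
    # Repeat rows then columns to enlarge by the integer factor.
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)
```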
In one embodiment, the system further comprises:
the sample image acquisition unit is used for acquiring a plurality of sample re-acquisition images, wherein each sample re-acquisition image comprises a sample prediction target;
the image feature analysis unit is used for carrying out feature analysis on sample prediction targets in the plurality of sample re-acquisition images to obtain a plurality of sample target feature sets, wherein the feature analysis comprises size feature analysis and gray feature analysis;
an analysis model construction unit, configured to construct the target feature analysis model by using the plurality of sample re-acquisition images and the plurality of sample target feature sets;
and the image analysis execution unit is used for inputting the M re-acquired images into the target feature analysis model to obtain the M target feature sets comprising the first target feature set.
In one embodiment, the system further comprises:
the model construction execution unit is used for constructing the target feature analysis model based on a convolutional neural network, wherein the input data of the target feature analysis model is a collected image, and the output data is a target feature set;
the sample data dividing unit is used for carrying out data labeling and dividing on the plurality of sample re-acquisition images and the plurality of sample target feature sets to obtain a training set, a verification set and a test set;
and the model training execution unit is used for performing supervision training, verification and test on the target feature analysis model by adopting the training set, the verification set and the test set to obtain the target feature analysis model with the accuracy meeting the preset requirement.
In one embodiment, the system further comprises:
the target feature obtaining unit is used for obtaining a first target size feature, a first target gray feature, M-1 target size features and M-1 target gray features in the first target feature set and the M-1 target feature sets;
the gray level similarity calculation unit is used for calculating and obtaining M-1 size similarity probability information and M-1 gray level similarity probability information according to the first target size feature, the first target gray level feature, the M-1 target size features and the M-1 target gray level features;
and the data weighted calculation unit is used for weighted calculation of the M-1 size similarity probability information and the M-1 gray similarity probability information to obtain the M-1 target similarity probability information.
In one embodiment, the system further comprises:
the time window obtaining unit is used for obtaining and collecting time windows corresponding to the N area images and the first area image and taking the time windows as N+1 determined time windows;
a determining position obtaining unit, configured to obtain positions of the determining target in the corresponding n+1 area images, and obtain n+1 determining position information;
and the prediction information obtaining unit is used for calculating and obtaining the motion method and the motion speed of the determined target as the motion prediction information according to the N+1 determined position information and the N+1 determined time windows.
Any of the methods or steps described above may be stored as computer instructions or programs in various non-limiting types of computer memories, and identified by various non-limiting types of computer processors, thereby implementing any of the methods or steps described above.
Based on the above-mentioned embodiments of the present invention, any improvements and modifications to the present invention without departing from the principles of the present invention should fall within the scope of the present invention.

Claims (8)

1. An offshore target detection method based on motion prediction, the method comprising:
acquiring and acquiring area images in a target area in M time windows to obtain an area image set, wherein the area image set comprises a first area image corresponding to a first time window;
performing weighted image entropy processing on M area images in the area image set to obtain M prediction targets in the M area images, wherein M is a positive integer;
performing frame selection and image re-acquisition on the M predicted targets to obtain M re-acquired images, wherein the M re-acquired images comprise a first re-acquired image;
inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set;
according to the first target feature set and the other M-1 target feature sets, calculating to obtain M-1 target similarity probability information;
n predicted targets corresponding to N target similarity probability information meeting a preset similarity probability threshold are obtained and used as determination targets, wherein N is a positive integer and is less than or equal to M-1;
and calculating and obtaining the motion prediction information of the determined target according to the positions of the determined target in the corresponding N area images and the first area image, and displaying the motion prediction information.
2. The method of claim 1, wherein performing weighted image entropy processing on M region images in the set of region images to obtain M prediction targets in the M region images, comprises:
for M area images in the area image set, calculating the multiscale local contrast of each pixel point in the area image;
calculating the local information entropy of each pixel point for M area images in the area image set;
and carrying out product calculation according to the multiscale local contrast and the local information entropy of each pixel point, and carrying out binarization processing on the product calculation result of each pixel point according to a preset threshold value to obtain the M prediction targets.
3. The method of claim 1, wherein frame selection and image re-acquisition of the M predicted targets to obtain M re-acquired images comprises:
in the M area images, carrying out frame selection on the M prediction targets to obtain M frame selection areas;
and up-sampling the M frame selection areas to obtain the M re-acquisition images.
4. The method of claim 1, wherein inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set, comprises:
acquiring a plurality of sample re-acquisition images, wherein the sample re-acquisition images comprise sample prediction targets;
performing feature analysis on sample prediction targets in the plurality of sample re-acquisition images to obtain a plurality of sample target feature sets, wherein the feature analysis comprises size feature analysis and gray level feature analysis;
the target feature analysis model is constructed by adopting the multiple sample re-acquisition images and the multiple sample target feature sets;
inputting the M re-acquired images into the target feature analysis model to obtain the M target feature sets comprising the first target feature set.
5. The method of claim 4, wherein constructing the target feature analysis model using the plurality of sample re-acquisition images and the plurality of sample target feature sets comprises:
based on a convolutional neural network, constructing the target feature analysis model, wherein input data of the target feature analysis model is a re-acquired image, and output data is a target feature set;
performing data annotation and division on the multiple sample re-acquisition images and the multiple sample target feature sets to obtain a training set, a verification set and a test set;
and performing supervision training, verification and test on the target feature analysis model by adopting the training set, the verification set and the test set to obtain the target feature analysis model with the accuracy meeting the preset requirement.
6. The method of claim 1, calculating M-1 target similarity probability information from the first set of target features and other M-1 sets of target features, comprising:
acquiring a first target size feature, a first target gray feature, M-1 target size features and M-1 target gray features in the first target feature set and the M-1 target feature sets;
calculating to obtain M-1 size similarity probability information and M-1 gray similarity probability information according to the first target size feature, the first target gray feature, the M-1 target size features and the M-1 target gray features;
and carrying out weighted calculation on the M-1 size similarity probability information and the M-1 gray similarity probability information to obtain the M-1 target similarity probability information.
7. The method according to claim 1, wherein calculating motion prediction information of the determination target according to the positions of the determination target in the corresponding N area images and the first area image includes:
acquiring time windows corresponding to the N area images and the first area image as N+1 determined time windows;
acquiring positions of the determination targets in the corresponding N+1 area images, and acquiring N+1 determination position information;
and calculating and obtaining a motion method and a motion speed of the determined target as the motion prediction information according to the N+1 determined position information and the N+1 determined time windows.
8. An offshore object detection system based on motion prediction, the system comprising:
the regional image acquisition module is used for acquiring regional images in the target region in M time windows to obtain a regional image set, wherein the regional image set comprises a first regional image corresponding to a first time window;
the prediction target obtaining module is used for carrying out weighted image entropy processing on M area images in the area image set to obtain M prediction targets in the M area images, wherein M is a positive integer;
the image re-acquisition execution module is used for carrying out frame selection and image re-acquisition on the M predicted targets to obtain M re-acquired images, wherein the M re-acquired images comprise a first re-acquired image;
the target feature obtaining module is used for inputting the M re-acquired images into a target feature analysis model to obtain M target feature sets comprising a first target feature set;
the similarity probability calculation module is used for calculating and obtaining M-1 target similarity probability information according to the first target feature set and other M-1 target feature sets;
the determining target obtaining module is used for obtaining N predicting targets corresponding to N target similarity probability information meeting a preset similarity probability threshold, wherein N is a positive integer and N is less than or equal to M-1 as a determining target;
and the target motion prediction module is used for calculating and obtaining the motion prediction information of the determined target according to the positions of the determined target in the corresponding N area images and the first area image, and displaying the motion prediction information.
CN202410044811.5A 2024-01-12 2024-01-12 Marine target detection method and system based on motion prediction Active CN117557788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410044811.5A CN117557788B (en) 2024-01-12 2024-01-12 Marine target detection method and system based on motion prediction

Publications (2)

Publication Number Publication Date
CN117557788A true CN117557788A (en) 2024-02-13
CN117557788B CN117557788B (en) 2024-03-26

Family

ID=89815186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410044811.5A Active CN117557788B (en) 2024-01-12 2024-01-12 Marine target detection method and system based on motion prediction

Country Status (1)

Country Link
CN (1) CN117557788B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060002631A1 (en) * 2004-06-30 2006-01-05 Accuray, Inc. ROI selection in image registration
CN104794733A (en) * 2014-01-20 2015-07-22 株式会社理光 Object tracking method and device
CN111160065A (en) * 2018-11-07 2020-05-15 中电科海洋信息技术研究院有限公司 Remote sensing image ship detection method, device, equipment and storage medium thereof
CN111508019A (en) * 2020-03-11 2020-08-07 上海商汤智能科技有限公司 Target detection method, training method of model thereof, and related device and equipment
CN114283323A (en) * 2021-12-28 2022-04-05 航天科工智能运筹与信息安全研究院(武汉)有限公司 Marine target recognition system based on image deep learning
CN115731174A (en) * 2022-11-15 2023-03-03 浙江大学 Infrared small target detection method and device based on image information entropy and multi-scale local contrast measurement
CN116091292A (en) * 2022-08-17 2023-05-09 荣耀终端有限公司 Data processing method and related device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
T. THENMOZHI; A.M. KALPANA: "Adaptive motion estimation and sequential outline separation based moving object detection in video surveillance system", Microprocessors and Microsystems, 11 July 2020 (2020-07-11), article 103084 *
YANG Fan: "Marine ship target detection method based on dynamic video information analysis", Ship Science and Technology, 23 October 2022 (2022-10-23), pages 169-172 *

Also Published As

Publication number Publication date
CN117557788B (en) 2024-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant