CN115035397A - Underwater moving target identification method and device

Info

Publication number: CN115035397A (application CN202210578614.2A)
Authority: CN (China)
Prior art keywords: image, channel, background, saliency map, underwater
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202210578614.2A
Other languages: Chinese (zh)
Inventor: 许佳玮
Current assignee: Individual (the listed assignee may be inaccurate)
Original assignee: Individual
Application filed 2022-05-25 by Individual
Priority to CN202210578614.2A (priority date 2022-05-25, an assumption and not a legal conclusion)
Publication of CN115035397A on 2022-09-09

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/05: Underwater scenes
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an underwater moving target identification method and device, comprising the following steps: acquiring an underwater target image to obtain an initial image; converting the initial image into the YUV color space to obtain Y, U and V channel images respectively; performing first image processing on the Y-channel image to obtain a first image; performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image; and generating a background image based on the Y-channel enhanced image M1 for underwater target identification. By preprocessing and enhancing the underwater image, more effective Y-channel image information is obtained; saliency maps at different layers are computed, a background model is built from each saliency map, and the background models of the different layers are then fused, so that the fused model preserves the integrity of the background as well as possible, which facilitates accurate target identification.

Description

Underwater moving target identification method and device
Technical Field
The invention relates to the technical field of target identification, and in particular to an underwater moving target identification method and device.
Background
Underwater target recognition is a key technology for making underwater acoustic equipment and weapon systems intelligent, and a prerequisite for prevailing in combat under modern high-technology conditions. It has long been an unsolved technical problem for navies, and has been highly valued by the relevant academic and application communities since the 1940s, yet progress has been slow because of the particularity and complexity of the field (a harsh environment and channel, low usable data rates, and so on). Over the last thirty years, the emergence of low-noise nuclear submarines has placed stronger demands on underwater target feature analysis and identification techniques, while emerging information processing technologies, microprocessor technology, and VLSI and VHSIC technology have made great progress. Pulled by military requirements and pushed by these emerging technologies, underwater target identification has developed considerably; theoretical exploration and laboratory simulation techniques are maturing, and the field is now advancing toward engineering application.
The purpose of preprocessing is to remove noise, increase signal gain, and recover from the degradation of the acoustic signal caused by various factors. Signal preprocessing methods commonly used abroad include signal correlation, adaptive noise cancellation, wavelet analysis, blind signal processing, and various nonlinear processing techniques.
In the field of target identification, traditional detection algorithms follow the pattern of cascade + HOG/DPM + Haar/SVM, with many improvements and optimizations of this framework. Among traditional detectors, the multi-scale Deformable Part Model (DPM) performed outstandingly, winning the VOC (Visual Object Classes) detection challenge from 2007 to 2009. DPM treats an object as a set of parts (for example, the nose and mouth of a face) and describes the object by the relationships among the parts, which matches the non-rigid character of many natural objects very well. DPM can be regarded as an extension of HOG + SVM and inherits its advantages, achieving good results on tasks such as face and pedestrian detection, but it is relatively complex and slow to run, which is why many improved methods have been proposed. Traditional target detection, however, has two main problems: first, region selection based on sliding windows is untargeted, has high time complexity, and produces redundant windows; second, hand-designed features are not robust to variations in appearance. Compared with ordinary targets, small targets carry less information and their training data are hard to annotate, so general detection methods perform poorly on small targets, while detectors designed specifically for small targets are often too complex or lack generality. In the prior art, images used for underwater target identification are blurred, and existing small-target identification approaches still suffer from a low recognition rate and low recognition accuracy.
Disclosure of Invention
In view of this, the invention provides an underwater moving target identification method and device, aiming to solve the technical problems of blurred underwater images, a low recognition rate and low recognition accuracy.
The technical scheme of the invention is as follows:
An underwater moving target identification method comprises the following steps:
acquiring an underwater target image to obtain an initial image;
converting the initial image into the YUV color space to obtain Y, U and V channel images respectively;
performing first image processing on the Y-channel image to obtain a first image; performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image;
the performing first image processing on the Y-channel image to obtain a first image comprises: obtaining the underwater illuminance A; extracting a high-frequency component M and the complementary component D from the Y-channel image; calculating M × A to obtain a Y-channel high-frequency enhanced image signal M0; and fusing M0 with D to obtain the Y-channel enhanced image M1;
and generating a background image based on the Y-channel enhanced image M1 for underwater target identification.
Preferably, the underwater illuminance A = T / (average gray value of the initial image).
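For illustration only, this enhancement step can be sketched in Python with OpenCV and NumPy as below. The patent does not fix how M and D are extracted, how M0 and D are fused, or the value of T, so the Gaussian split, the additive fusion and the default T here are assumptions.

```python
import cv2
import numpy as np

def enhance_y_channel(bgr_image, T=128.0, ksize=(5, 5)):
    """Sketch of the Y-channel enhancement step (assumptions noted above)."""
    # Convert to YUV and separate the channels.
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    y = y.astype(np.float32)

    # Underwater illuminance A = T / (average gray value of the initial image).
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    A = T / max(float(gray.mean()), 1e-6)

    # Assumed split: D is the low-frequency (blurred) part, M the residual.
    D = cv2.GaussianBlur(y, ksize, 0)
    M = y - D

    # M0 = M * A, then fuse M0 with D (assumed here to be a simple addition).
    M0 = M * A
    M1 = np.clip(M0 + D, 0, 255).astype(np.uint8)
    return M1, u, v
```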
Preferably, the generating a background image based on the Y-channel enhanced image M1 for underwater target identification comprises:
performing Gaussian decomposition on the Y-channel enhanced image M1 to obtain a third image and a fourth image; performing saliency detection on the Y-channel enhanced image M1, the third image and the fourth image respectively to obtain a Y-channel enhanced image saliency map, a third image saliency map and a fourth image saliency map; and performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image;
and performing fusion processing on the first background image, the third background image and the fourth background image to generate a fused background image for underwater target identification.
Preferably, a target area is detected based on the fused background image, a target area mask is generated, and the main color information of the target is obtained based on the target area mask, the Y-channel enhanced image, the second U-channel image and the second V-channel image.
Preferably, the performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image comprises:
selecting a corresponding local neighborhood template for each of the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map; for each pixel in each saliency map, computing the difference between the gray value of the pixel and the average gray value of the other points in its neighborhood, using the square of this difference as a weight coefficient, and multiplying the weight coefficient by the local entropy at that point to obtain the final local information entropy of the pixel;
and if the final local information entropy of a pixel is smaller than a given threshold, determining the pixel to be a background point; traversing all the points in this way yields the first background image, the third background image and the fourth background image respectively.
In addition, an underwater moving target identification device is also provided, which comprises:
the acquisition module acquires an underwater target image to obtain an initial image;
the conversion module is used for converting the initial image into the YUV color space and obtaining Y, U and V channel images respectively;
the processing module is used for performing first image processing on the Y-channel image to obtain a first image, and performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image; the performing first image processing on the Y-channel image to obtain a first image comprises: obtaining the underwater illuminance A, extracting a high-frequency component M and the complementary component D from the Y-channel image, calculating M × A to obtain a Y-channel high-frequency enhanced image signal M0, and fusing M0 with D to obtain the Y-channel enhanced image M1;
and the recognition module generates a background image based on the Y channel enhanced image M1 and identifies underwater targets.
Preferably, the underwater illuminance A = T / (average gray value of the initial image).
Preferably, the generating a background image based on the Y-channel enhanced image M1 for underwater target identification comprises:
performing Gaussian decomposition on the Y-channel enhanced image M1 to obtain a third image and a fourth image; performing saliency detection on the Y-channel enhanced image M1, the third image and the fourth image respectively to obtain a Y-channel enhanced image saliency map, a third image saliency map and a fourth image saliency map; and performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image;
and performing fusion processing on the first background image, the third background image and the fourth background image to generate a fused background image for underwater target identification.
Preferably, a target area is detected based on the fused background image, a target area mask is generated, and the main color information of the target is obtained based on the target area mask, the Y-channel enhanced image, the second U-channel image and the second V-channel image.
Preferably, the performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image comprises:
selecting a corresponding local neighborhood template for each of the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map; for each pixel in each saliency map, computing the difference between the gray value of the pixel and the average gray value of the other points in its neighborhood, using the square of this difference as a weight coefficient, and multiplying the weight coefficient by the local entropy at that point to obtain the final local information entropy of the pixel;
and if the final local information entropy of a pixel is smaller than a given threshold, determining the pixel to be a background point; traversing all the points in this way yields the first background image, the third background image and the fourth background image respectively.
In the scheme of the embodiments of the invention, the underwater moving target identification method and device comprise: acquiring an underwater target image to obtain an initial image; converting the initial image into the YUV color space to obtain Y, U and V channel images respectively; performing first image processing on the Y-channel image to obtain a first image, and performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image, the first image processing comprising obtaining the underwater illuminance A, extracting a high-frequency component M and the complementary component D from the Y-channel image, calculating M × A to obtain the high-frequency enhanced signal M0, and fusing M0 with D to obtain the Y-channel enhanced image M1; and generating a background image based on the Y-channel enhanced image M1 for underwater target identification. By preprocessing and enhancing the underwater image, the invention obtains more effective Y-channel image information; saliency maps at different layers are obtained, a background model is built from each saliency map, and the models of the different layers are then fused, so that the fused background model preserves the integrity of the background as well as possible and the accuracy of target identification is ensured.
Drawings
FIG. 1 is a flow chart of an underwater moving target identification method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an underwater moving target identification device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses an underwater moving target identification method, which comprises the following steps:
acquiring an underwater target image to obtain an initial image;
converting the initial image into the YUV color space to obtain Y, U and V channel images respectively;
performing first image processing on the Y-channel image to obtain a first image; performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image;
the performing first image processing on the Y-channel image to obtain a first image comprises: obtaining the underwater illuminance A; extracting a high-frequency component M and the complementary component D from the Y-channel image; calculating M × A to obtain a Y-channel high-frequency enhanced image signal M0; and fusing M0 with D to obtain the Y-channel enhanced image M1;
and generating a background image based on the Y-channel enhanced image M1 for underwater target identification.
Preferably, the underwater illuminance A = T / (average gray value of the initial image).
Preferably, the generating a background image based on the Y-channel enhanced image M1 for underwater target identification comprises:
performing Gaussian decomposition on the Y-channel enhanced image M1 to obtain a third image and a fourth image; performing saliency detection on the Y-channel enhanced image M1, the third image and the fourth image respectively to obtain a Y-channel enhanced image saliency map, a third image saliency map and a fourth image saliency map; and performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image;
and performing fusion processing on the first background image, the third background image and the fourth background image to generate a fused background image for underwater target identification.
Preferably, the performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image comprises:
selecting a corresponding local neighborhood template for each of the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map; for each pixel in each saliency map, computing the difference between the gray value of the pixel and the average gray value of the other points in its neighborhood, using the square of this difference as a weight coefficient, and multiplying the weight coefficient by the local entropy at that point to obtain the final local information entropy of the pixel;
and if the final local information entropy of a pixel is smaller than a given threshold, determining the pixel to be a background point; traversing all the points in this way yields the first background image, the third background image and the fourth background image respectively.
Specifically, in this embodiment, 3 × 3, 5 × 5, 7 × 7 and 9 × 9 templates may be used as candidate local neighborhood templates. For each point in the saliency map, the average gray value of the other points in the neighborhood is subtracted from the gray value of the point, the square of the difference is used as a weight coefficient, and the weight coefficient is multiplied by the local entropy at that point to obtain the final local information entropy of the point. The contrast of the small target in the saliency map produced by each template is then compared against the running time, and the template with the best effect and the least running time is selected; in practice a 5 × 5 or 7 × 7 template is chosen. The same procedure is applied to the third image saliency map: a corresponding local neighborhood template is selected, the difference between each pixel's gray value and the average gray value of the other points in its neighborhood is computed, its square is used as a weight coefficient and multiplied by the local entropy at each point to give the final local information entropy of the pixel. The image formed by the final local information entropies of all pixels is called the local entropy image.
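A minimal sketch of this weighted local-entropy computation, assuming SciPy and scikit-image are available, is given below; the template size ksize and the background threshold thresh are illustrative placeholders, since the embodiment chooses the template empirically and leaves the threshold open.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters.rank import entropy

def weighted_local_entropy(saliency_map, ksize=5, thresh=1.0):
    """Weighted local entropy of a uint8 saliency map, plus a background mask."""
    s = saliency_map.astype(np.float64)
    n = ksize * ksize

    # Mean of the *other* points in each neighborhood:
    # (neighborhood sum - centre pixel) / (n - 1).
    neigh_sum = uniform_filter(s, size=ksize) * n
    mean_others = (neigh_sum - s) / (n - 1)

    # The squared difference between the pixel and that mean is the weight.
    weight = (s - mean_others) ** 2

    # Local entropy over the same neighborhood, weighted per pixel.
    local_ent = entropy(saliency_map, np.ones((ksize, ksize), dtype=np.uint8))
    final_ent = weight * local_ent

    # Pixels whose final entropy falls below the threshold count as background.
    return final_ent, final_ent < thresh
```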
Specifically, in this embodiment, the positional relationship of the target candidate points in different layers is analyzed first. Among the three layers obtained by the Gaussian down-sampling decomposition, each layer is 1/4 the size of the layer above it, that is, its length and width are each 1/2 of those of the layer above. Because Gaussian filtering tends to produce distortion at the edge of an image, in this embodiment a border of width h is cropped from the periphery of each lower-level image to ensure that the obtained saliency map contains no distortion noise that would affect the judgment of the salient region; h is set to 10 pixels. According to this relationship, if the position of a candidate point in the lower layer is A2 = (x, y), then the corresponding candidate point A1 in the layer above appears at A1 = (2(x + h), 2(y + h)).
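Restated as code, the mapping from a candidate point in a cropped lower layer back to the layer above is a one-liner:

```python
def map_to_upper_layer(x, y, h=10):
    """Map candidate point A2 = (x, y) in the cropped lower layer to A1 above.

    The lower layer was cropped by h pixels on each side to discard Gaussian
    border distortion, and each layer is half the width and height of the one
    above, hence A1 = (2 * (x + h), 2 * (y + h)).
    """
    return 2 * (x + h), 2 * (y + h)
```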
Integral-image variance values x1, x2 and x3 are calculated for the Y-channel enhanced image M1, the third image and the fourth image respectively;
the weighted local entropies of the Y-channel enhanced image M1, the third image and the fourth image are then calculated, giving the corresponding weighted local entropy images M1, M2 and M3 (M1 here denoting the weighted local entropy image computed from the Y-channel enhanced image);
the fused background image is:
M=(1/x1)×M1+(1/x2)×M2+(1/x3)×M3;
and performing small target region segmentation and identification based on the fused background image.
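The fusion above can be sketched as follows. The text does not say how the entropy images of the three differently sized layers are brought to a common resolution, so the bilinear upsampling here is an assumption; np.var is used in place of an explicit integral-image computation, which would give the same variance values, only faster on large images.

```python
import cv2
import numpy as np

def fuse_entropy_maps(e1, e2, e3, src1, src2, src3):
    """Variance-weighted fusion M = (1/x1)*e1 + (1/x2)*e2 + (1/x3)*e3."""
    # Variance of each source layer (integral images would compute the same).
    x1, x2, x3 = (float(np.var(s)) for s in (src1, src2, src3))

    # Assumption: upsample the smaller entropy maps to the base resolution.
    h, w = e1.shape[:2]
    e2 = cv2.resize(e2, (w, h), interpolation=cv2.INTER_LINEAR)
    e3 = cv2.resize(e3, (w, h), interpolation=cv2.INTER_LINEAR)

    return e1 / x1 + e2 / x2 + e3 / x3
```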
Specifically, in this embodiment, a global threshold segmentation method is adopted. The image is first assessed: its maximum value, mean value and standard deviation are calculated, and if the maximum value is smaller than the mean value plus n times the standard deviation, the statistical rule indicates that the processed image contains no small target. For an image judged to contain a small target, the threshold is set to 0.65 of the maximum value, and points larger than this threshold are regarded as target points.
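This test translates directly into code; the multiple n is left open by the embodiment, so the default below is only a placeholder.

```python
import numpy as np

def segment_small_targets(fused, n=3.0, ratio=0.65):
    """Global-threshold small-target segmentation of the fused image."""
    peak, mu, sigma = fused.max(), fused.mean(), fused.std()

    # If the maximum does not exceed mean + n*std, assume no small target.
    if peak < mu + n * sigma:
        return np.zeros(fused.shape, dtype=bool)

    # Otherwise, points above 0.65 of the maximum are target points.
    return fused > ratio * peak
```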
Preferably, a target area is detected based on the fused background image, a target area mask is generated, and the main color information of the target is obtained based on the target area mask, the Y-channel enhanced image M1, the second U-channel image and the second V-channel image.
Specifically, in this embodiment, the coordinate information of the target can be delimited by the target area mask, so that the Y, U and V information of the target object is obtained, and the main color of the target is then extracted with the K-Means algorithm.
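A sketch of this dominant-color extraction, assuming scikit-learn's KMeans is used; the number of clusters k is an illustrative choice that the embodiment does not fix.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(mask, y_img, u_img, v_img, k=3):
    """Return the Y-U-V centre of the largest cluster inside the target mask."""
    # Collect the Y, U, V values of the pixels selected by the target mask.
    pts = np.stack([y_img[mask], u_img[mask], v_img[mask]], axis=1).astype(np.float32)

    # Cluster in Y-U-V space; the biggest cluster gives the main color.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts)
    counts = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_[counts.argmax()]
```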
In addition, the invention also provides an underwater moving target identification device, which comprises:
the acquisition module acquires an underwater target image to obtain an initial image;
the conversion module is used for converting the initial image into the YUV color space and obtaining Y, U and V channel images respectively;
the processing module is used for performing first image processing on the Y-channel image to obtain a first image, and performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image; the performing first image processing on the Y-channel image to obtain a first image comprises: obtaining the underwater illuminance A, extracting a high-frequency component M and the complementary component D from the Y-channel image, calculating M × A to obtain a Y-channel high-frequency enhanced image signal M0, and fusing M0 with D to obtain the Y-channel enhanced image M1;
and the recognition module generates a background image based on the Y channel enhanced image M1 and identifies underwater targets.
Preferably, the underwater illuminance A = T / (average gray value of the initial image).
Preferably, the generating a background image based on the Y-channel enhanced image M1 for underwater target identification comprises:
performing Gaussian decomposition on the Y-channel enhanced image M1 to obtain a third image and a fourth image; performing saliency detection on the Y-channel enhanced image M1, the third image and the fourth image respectively to obtain a Y-channel enhanced image saliency map, a third image saliency map and a fourth image saliency map; and performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image;
and performing fusion processing on the first background image, the third background image and the fourth background image to generate a fused background image for underwater target identification.
Preferably, a target area is detected based on the fused background image, a target area mask is generated, and the main color information of the target is obtained based on the target area mask, the Y-channel enhanced image, the second U-channel image and the second V-channel image.
Preferably, the performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image comprises:
selecting a corresponding local neighborhood template for each of the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map; for each pixel in each saliency map, computing the difference between the gray value of the pixel and the average gray value of the other points in its neighborhood, using the square of this difference as a weight coefficient, and multiplying the weight coefficient by the local entropy at that point to obtain the final local information entropy of the pixel;
and if the final local information entropy of a pixel is smaller than a given threshold, determining the pixel to be a background point; traversing all the points in this way yields the first background image, the third background image and the fourth background image respectively.
In the scheme of the embodiments of the invention, the underwater moving target identification method and device comprise: acquiring an underwater target image to obtain an initial image; converting the initial image into the YUV color space to obtain Y, U and V channel images respectively; performing first image processing on the Y-channel image to obtain a first image, and performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image, the first image processing comprising obtaining the underwater illuminance A, extracting a high-frequency component M and the complementary component D from the Y-channel image, calculating M × A to obtain the high-frequency enhanced signal M0, and fusing M0 with D to obtain the Y-channel enhanced image M1; and generating a background image based on the Y-channel enhanced image M1 for underwater target identification. By preprocessing and enhancing the underwater image, the invention obtains more effective Y-channel image information; saliency maps at different layers are obtained, a background model is built from each saliency map, and the models of the different layers are then fused, so that the fused background model preserves the integrity of the background as well as possible and the accuracy of target identification is ensured.
It should be noted that the division of the above apparatus into modules is only a division of logical functions; in an actual implementation the modules may be wholly or partially integrated into one physical entity, or physically separated. All of the modules may be implemented as software invoked by a processing element, or entirely in hardware; alternatively, some modules may be implemented as software invoked by a processing element and others in hardware.
In addition, an embodiment of the present invention further provides a readable storage medium storing computer-executable instructions which, when executed by a processor, implement the processing method described above.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed description is intended to be exemplary only and does not limit the present disclosure. Various modifications, improvements and amendments may occur to those skilled in the art, though not expressly stated herein. Such modifications, improvements and amendments are suggested in this specification and still fall within the spirit and scope of the exemplary embodiments of this specification.
Also, this specification uses specific words to describe its embodiments. Expressions such as "one possible implementation", "one possible example" and/or "exemplary" mean that a particular feature, structure or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "one possible implementation", "one possible example" and/or "exemplary" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, particular features, structures or characteristics may be combined as suitable in one or more embodiments of the specification.
Moreover, those skilled in the art will recognize that aspects of this specification may be illustrated and described in terms of several patentable classes or contexts, including any new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of this specification may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software, which may generally be referred to herein as a "data block", "module", "engine", "unit", "component" or "system". Furthermore, aspects of this specification may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
It is noted that if any descriptions, definitions and/or terms used in material appended to this specification are inconsistent or in conflict with this specification, the descriptions, definitions and/or terms of this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present description may be considered consistent with the teachings of the present description. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. An underwater moving target identification method is characterized by comprising the following steps:
acquiring an underwater target image to obtain an initial image;
converting the initial image into the YUV color space to obtain Y, U and V channel images respectively;
performing first image processing on the Y-channel image to obtain a first image; performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image;
wherein the performing first image processing on the Y-channel image to obtain a first image comprises: obtaining an underwater illuminance A, extracting a high-frequency component M and the complementary component D from the Y-channel image, calculating M × A to obtain a Y-channel high-frequency enhanced image signal M0, and fusing M0 with D to obtain a Y-channel enhanced image M1;
and generating a background image based on the Y-channel enhanced image M1 for underwater target identification.
2. The underwater moving target identification method according to claim 1, wherein the underwater illuminance A = T / (average gray value of the initial image).
3. The underwater moving target identification method according to claim 1 or 2, wherein the generating a background image based on the Y-channel enhanced image M1 for underwater target identification comprises:
carrying out Gaussian decomposition on the Y-channel enhanced image M1 to obtain a third image and a fourth image; respectively carrying out saliency detection on the Y-channel enhanced image M1, the third image and the fourth image to obtain a Y-channel enhanced image saliency map, a third image saliency map and a fourth image saliency map; respectively executing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map to obtain a first background image, a third background image and a fourth background image;
and performing fusion processing based on the first background image, the third background image and the fourth background image to generate a fusion background image, thereby performing underwater target identification.
4. The underwater moving target identification method according to claim 3, wherein a target area is detected based on the fused background image, a target area mask is generated, and the main color information of the target is obtained based on the target area mask, the Y-channel enhanced image M1, the second U-channel image and the second V-channel image.
5. The underwater moving target identification method according to claim 3, wherein the performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image comprises:
selecting a corresponding local neighborhood template for each of the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map; for each pixel in each saliency map, computing the difference between the gray value of the pixel and the average gray value of the other points in its neighborhood, using the square of this difference as a weight coefficient, and multiplying the weight coefficient by the local entropy at that point to obtain the final local information entropy of the pixel;
and if the final local information entropy of a pixel is smaller than a given threshold, determining the pixel to be a background point; traversing all the points in this way yields the first background image, the third background image and the fourth background image respectively.
6. An underwater moving target identification device, comprising:
the acquisition module acquires an underwater target image to obtain an initial image;
the conversion module, used for converting the initial image into the YUV color space and obtaining Y, U and V channel images respectively;
the processing module, used for performing first image processing on the Y-channel image to obtain a first image and performing second image processing on the U and V channel images to obtain a second U-channel image and a second V-channel image, wherein the performing first image processing on the Y-channel image to obtain a first image comprises: obtaining an underwater illuminance A, extracting a high-frequency component M and the complementary component D from the Y-channel image, calculating M × A to obtain a Y-channel high-frequency enhanced image signal M0, and fusing M0 with D to obtain a Y-channel enhanced image M1;
and the recognition module, used for generating a background image based on the Y-channel enhanced image M1 for underwater target identification.
7. The underwater moving target identification device according to claim 6, wherein the underwater illuminance A = T / (average gray value of the initial image).
8. The underwater moving target identification device according to claim 6 or 7, wherein the generating a background image based on the Y-channel enhanced image M1 for underwater target identification comprises:
performing Gaussian decomposition on the Y channel enhanced image M1 to obtain a third image and a fourth image; respectively carrying out saliency detection on the Y-channel enhanced image M1, the third image and the fourth image to obtain a Y-channel enhanced image saliency map, a third image saliency map and a fourth image saliency map; respectively executing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map to obtain a first background image, a third background image and a fourth background image;
and performing fusion processing based on the first background image, the third background image and the fourth background image to generate a fusion background image, thereby performing underwater target identification.
9. The underwater moving target identification device according to claim 8, wherein a target area is detected based on the fused background image, a target area mask is generated, and the main color information of the target is obtained based on the target area mask, the Y-channel enhanced image, the second U-channel image and the second V-channel image.
10. The underwater moving target identification device according to claim 8, wherein the performing background processing on the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map respectively to obtain a first background image, a third background image and a fourth background image comprises:
selecting a corresponding local neighborhood template for each of the Y-channel enhanced image saliency map, the third image saliency map and the fourth image saliency map; for each pixel in each saliency map, computing the difference between the gray value of the pixel and the average gray value of the other points in its neighborhood, using the square of this difference as a weight coefficient, and multiplying the weight coefficient by the local entropy at that point to obtain the final local information entropy of the pixel;
and if the final local information entropy of a pixel is smaller than a given threshold, determining the pixel to be a background point; traversing all the points in this way yields the first background image, the third background image and the fourth background image respectively.
CN202210578614.2A (priority date 2022-05-25, filing date 2022-05-25): Underwater moving target identification method and device. Status: Pending. Published as CN115035397A.

Priority Applications (1)

Application number: CN202210578614.2A. Priority date / filing date: 2022-05-25. Title: Underwater moving target identification method and device.

Publications (1)

Publication number: CN115035397A. Publication date: 2022-09-09.

Family

ID=83121609

Family Applications (1)

Application number: CN202210578614.2A (priority date and filing date 2022-05-25). Title: Underwater moving target identification method and device. Status: Pending.

Country Status (1)

CN: CN115035397A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393579A (en) * 2022-10-27 2022-11-25 长春理工大学 Infrared small target detection method based on weighted block contrast
CN115393579B (en) * 2022-10-27 2023-02-10 长春理工大学 Infrared small target detection method based on weighted block contrast


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2022-09-09)