CN116797777A - Target detection method and system for underwater in-situ image - Google Patents
- Publication number: CN116797777A
- Application number: CN202210271278.7A
- Authority
- CN
- China
- Prior art keywords
- image
- quantile
- sliding window
- gray
- rectangular frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis; G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation involving thresholding
- G06T7/187—Segmentation involving region growing; region merging; connected component labelling
- G06T7/194—Segmentation involving foreground-background segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; G06T2207/30181—Earth observation
Abstract
The application relates to a target detection method and system for underwater in-situ images. The method comprises the following steps: preprocessing an original image to obtain a grayscale image; traversing the grayscale image with a moving sliding window; at each position of the sliding window, computing a first quantile of the brightness values of the pixels in the image sub-block covered by the window, and collecting all first quantiles obtained during the traversal into a quantile list; computing a second quantile of the quantile list and adding a preset offset to it to form a quantile threshold; comparing each first quantile with the quantile threshold to judge whether the corresponding image sub-block is foreground, and constructing a binary image; computing the connected domains on the binary image with a connected-domain detection algorithm to obtain the coordinates of each connected domain's circumscribed rectangle, mapping those coordinates back to the grayscale image, and cropping to obtain ROI images.
Description
Technical Field
The application relates to the field of water environment monitoring, in particular to a target detection method and a target detection system for an underwater in-situ image.
Background
Plankton are widely distributed in the ocean and form a key link in the marine ecosystem and the marine food web; because their harmful outbreaks can cause huge losses, they have great research value and observational significance.
Plankton in situ observation techniques based on optical imaging generally comprise the steps of: an original image is shot underwater by an in-situ imager, then preprocessing including target detection is carried out, and finally, storage, transmission, identification, measurement and analysis are carried out on the target image, so that an observation result is obtained. The target detection obtains the region of the target on the image from the original image, and then cuts the region to obtain each small image (ROI) containing one target, so that the subsequent identification, processing and analysis are convenient. The accuracy of subsequent identification and analysis and the operation efficiency of the whole in-situ observation system are greatly affected by the effect and the operation efficiency of target detection.
In the related art, Campbell R et al. disclose an in-situ observation method for plankton and particles in which the original image is downsampled by a factor of four with nearest-neighbour interpolation and converted to an 8-bit image to reduce the computational load. Canny edge detection is then applied to locate targets in the image, and a morphological closing operation closes broken edge segments. Target contours are detected with OpenCV's findContours function to obtain each target's bounding box; circumscribed rectangles whose boundary exceeds 300 pixels are expanded by 50% from their centers. Finally, the ROI image of the target corresponding to each bounding box is segmented from the original image.
Yamazaki H et al. disclose a cabled observation system for comprehensive long-term, high-frequency biological, chemical, and physical measurement of the plankton system. It uses a blob-feature detection algorithm to find continuous regions whose brightness differs markedly from the surrounding pixels, sets a threshold on the number of connected pixels, judges a region to be a target if it exceeds that threshold, obtains the region's bounding box, enlarges the box by a preset factor, and crops the ROI subgraph.
Cheng K et al. disclose an enhanced convolutional neural network for plankton identification and enumeration, which first sets a brightness threshold and binarizes the image. A morphological opening operation is then performed on the image to eliminate background noise, the target positions are obtained from the target edges, and finally the target ROI subgraphs are cropped.
Geraldes P et al. disclose in-situ real-time zooplankton detection and classification using a deep neural network model for target detection. A training data set is first built by manually annotating targets in a large number of images, and the network is then trained. After training, the original image is input to the network, the target bounding-box information is obtained at the output, and the ROI subgraphs are cropped accordingly.
However, the detection methods in the prior art suffer from background-noise interference and therefore detect poorly, struggling to work in high-turbidity seawater; some methods need manually set parameters, which may have to be re-tuned whenever the image brightness changes; some methods easily miss transparent targets in the image; and the learning-based methods rely on a large number of annotated images as training sets, while their neural network models have long training times and high computational complexity, which hinders deploying the algorithm on a low-power, low-cost, low-compute platform.
Disclosure of Invention
The embodiment of the application provides a target detection method and system for underwater in-situ images to solve the problems that the related art is easily disturbed by background noise, adapts poorly, runs inefficiently, and is hard to deploy on a low-power, low-cost, low-compute platform.
In order to achieve the above purpose, the application is realized by adopting the following technical scheme:
the target detection method of the underwater in-situ image is characterized by comprising the following steps of:
s1, preprocessing an original image to obtain a gray image; traversing the gray scale image using a sliding window movement; when the sliding window moves each time, acquiring a first quantile of the brightness value of each pixel in the image sub-block corresponding to the current sliding window, and sequentially arranging all the first quantiles acquired by traversing to acquire a quantile list;
s2, calculating a second quantile of the quantile list, adding a preset offset value to the second quantile to serve as a quantile threshold, comparing the quantile threshold with the first quantile, judging whether the image sub-block is a foreground or not, and constructing a binary image;
s3, calculating a connected domain on the binary image by using a connected domain detection algorithm to obtain the coordinates of an external rectangular frame of the connected domain, mapping the coordinates of the external rectangular frame back to the gray level image, and cutting and obtaining the ROI image.
In some embodiments, the constructing the binary image comprises:
each pixel of the binary image represents one sliding window on the grayscale image: its ordinate is the row index of the sliding window on the grayscale image, its abscissa is the window's index within that row, and the pixel value is 1 or 0 according to whether the corresponding image sub-block belongs to the foreground or the background.
In some embodiments, determining whether the image sub-block is foreground in the step S2 includes:
comparing each first quantile in the list with the quantile threshold: if the first quantile is larger than the quantile threshold, the image sub-block corresponding to that first quantile is foreground; otherwise it is background.
In some embodiments, the step S3 includes:
s31, calculating a connected domain on the binary image by using a connected domain detection algorithm, and acquiring the coordinates of an external rectangular frame of the connected domain;
s32, converting the coordinates of the external rectangular frame to the gray level image to obtain target positioning information;
s33, cutting a required rectangular region from the original image according to the target positioning information to obtain an ROI image containing plankton targets.
In some embodiments, the original image preprocessing includes converting the original image into a grayscale image and downsizing it.
In some embodiments, the relationship between the movement step size of the sliding window and the width of the sliding window is:
Patch_step=Factor*Patch_size/2;
where Patch_step is the moving step of the sliding window, Factor is the scaling multiple of the grayscale image, and Patch_size is the width of the sliding window.
In some embodiments, when the sliding window is rectangular, the coordinates of the circumscribed rectangular frame are converted to the grayscale image using the formulas:
x1’=x1*Patch_step/Factor; y1’=y1*Patch_step/Factor;
x2’=x2*Patch_step/Factor; y2’=y2*Patch_step/Factor;
where x1, y1, x2, y2 are the coordinate values of the circumscribed rectangular frame and x1’, y1’, x2’, y2’ are the corresponding coordinate values on the grayscale image.
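A worked numeric check of the coordinate-mapping formulas above, using assumed values Patch_step = 4 and Factor = 0.5 (i.e. the scaled image is half the original size); the box coordinates are illustrative only:

```python
# Hypothetical worked example of the patent's coordinate mapping.
Patch_step, Factor = 4, 0.5
x1, y1, x2, y2 = 2, 1, 5, 3                 # bounding box on the binary image
x1p = x1 * Patch_step / Factor              # 16.0
y1p = y1 * Patch_step / Factor              # 8.0
x2p = x2 * Patch_step / Factor              # 40.0
y2p = y2 * Patch_step / Factor              # 24.0
print(x1p, y1p, x2p, y2p)
```

Each binary-image coordinate is first scaled by the window step to reach the reduced grayscale image, then divided by Factor to undo the shrink.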
In some embodiments, the sliding window shape is square, rectangular, circular, trapezoidal, or triangular.
In some embodiments, the first quantile and the second quantile range from 25% to 75%.
An object detection system for an underwater in-situ image, comprising: a computer readable storage medium and a processor;
the computer-readable storage medium is for storing executable instructions;
the processor is used for reading executable instructions stored in the computer readable storage medium and executing the target detection method of the underwater in-situ image.
The technical scheme provided by the application has the beneficial effects that:
(1) The application uses a sliding window and describes the image features within it by quantiles, giving strong self-adaptation and strong resistance to interference from background noise and brightness changes, so it can meet the target-detection accuracy requirements in seawater environments of different turbidities.
(2) Compared with existing non-learning-based techniques, the method adapts better, can follow changes in image brightness, has high algorithmic sensitivity, and detects low-brightness targets well; the algorithm needs no training, is easy to implement, contains few manually tuned parameters, and is easy to use.
(3) The application maps the sliding windows on the original image to a small binary image and performs target bounding-box detection on it, so this step processes a much smaller image and computes more efficiently; with this computational optimization, the workload is small enough to meet real-time processing requirements on the low-power, low-compute embedded platform built into a plankton in-situ imager, while post-processing on a cloud server is also possible, giving a wide range of applications.
The embodiment of the application provides a target detection method and system for underwater in-situ images: the image is first reduced in size and traversed with a sliding window; the quantile of the pixel brightness within each window is computed, a quantile of those window quantiles is then taken as a threshold, each window's image region is judged foreground or background, and a binary image is generated; the circumscribed rectangles of the foreground are computed on the binary image, their coordinates are mapped back to the original image, and the ROI images are cropped out. The method is therefore easy to implement, strongly self-adaptive, highly sensitive, computationally light, and widely applicable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting targets in-situ images under water in an embodiment of the application;
FIG. 2 is a sample of in situ image target detection of plankton in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides a target detection method and a target detection system for an underwater in-situ image, which are matched with a plankton imager to work, solve the problem of positioning and extracting a target foreground image from an original image acquired by the plankton imager, can be used for preparing for subsequent identification, measurement and analysis of data, and improve the accuracy of the data.
Referring to fig. 1, a target detection method for an underwater in-situ image includes the specific steps of:
step one: the original image is initialized, and the color image is converted into a gray image, so that the processing is convenient, the size of the reduced image is calculated in an acceleration way;
s11, averaging RGB three channels of an original image I_raw pixel by pixel to obtain a gray image I_raw_gray;
s12, scaling the length and width of the gray image I_raw_gray by Factor to obtain a scaled image I_gray.
Step two: compute the sliding-window quantiles. Traverse the image with a sliding window and, in this embodiment, compute the 50% quantile of the pixel values in the image sub-block corresponding to each window position, which gives good resistance to background-noise interference;
S21. In this embodiment, a sliding window of width Patch_size moves from left to right, row by row, with a step of Patch_step, traversing the scaled image I_gray, where Patch_step = Factor*Patch_size/2;
S22. After each move of the sliding window, compute the 50% quantile Q_0.5 (values ordered from small to large) of the brightness values of the pixels in the image sub-block corresponding to the current window, and record the quantiles sequentially in a list Qs_0.5.
In some embodiments, the quantile of the luminance values of the pixels in the image sub-block corresponding to each sliding window is in the range of 25%-75%.
In some embodiments, the sliding window may be in the shape of a closed figure of any two-dimensional plane, such as rectangle, circle, trapezoid, or triangle.
In some embodiments, the traversal order of the sliding windows may be a column-by-column top-down traversal, an out-of-order traversal, or writing a parallel computing program to compute all sliding windows simultaneously.
Step three: judge whether the image sub-block corresponding to each sliding window is a foreground region containing a target; this gives the method good self-adaptation to changes in image brightness;
S31. Compute the 75% quantile Q_0.75 of Qs_0.5 (values ordered from small to large) and add Q_bias (an empirical value of 2) to obtain the quantile threshold Q_thresh for separating foreground from background;
S32. Compare each element Q_0.5 of Qs_0.5 with Q_thresh. If Q_0.5 > Q_thresh, mark the image sub-block in the corresponding sliding-window range as a foreground region; otherwise mark it as a background region.
S33. Construct a binary image I_bin. Each pixel represents one sliding window on the grayscale image I_gray: its ordinate is the row index of the sliding window on I_gray, and its abscissa is the window's index within that row. The pixel value is 1 or 0 according to whether the window is foreground or background, which accelerates the subsequent target-localization and ROI-extraction steps.
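A sketch of S31-S33 under the same assumptions: threshold the window quantiles at the 75% quantile of Qs_0.5 plus Q_bias, then arrange the 0/1 results as a small binary image with one pixel per window (n_cols, the number of windows per row, is an assumed bookkeeping parameter):

```python
import numpy as np

def build_binary(Qs_05, n_cols: int, Q_bias: float = 2.0) -> np.ndarray:
    """Adaptive foreground/background split over the window-quantile list."""
    Qs = np.asarray(Qs_05, dtype=float)
    Q_thresh = np.percentile(Qs, 75) + Q_bias   # S31: quantile-of-quantiles threshold
    flags = (Qs > Q_thresh).astype(np.uint8)    # S32: 1 = foreground window
    return flags.reshape(-1, n_cols)            # S33: binary image I_bin
```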
Step four: target positioning and ROI extraction, namely separating a foreground target on a binary image I_bin, mapping coordinates to an original image I_raw, and extracting a target image;
s41, in the embodiment, a connected domain on a binary image I_bin is calculated by using a connected domain detection algorithm, and an upper left corner coordinate (x 1, y 1) and a lower right corner coordinate (x 2, y 2) of an external rectangular frame of the connected domain are obtained;
s42, converting coordinates of a target boundary box on the binary image I_bin to a gray image I_gray:
x1’=x1*Patch_step/Factor;y1’=y1*Patch_step/Factor
x2’=x2*Patch_step/Factor;;y2’=y2*Patch_step/Factor;
s43, cutting rectangular areas with the coordinates of the upper left corner and the lower right corner being (x 1', y 1') and (x 2', y 2') from the original image I_raw to obtain an ROI image containing plankton targets.
The results of plankton in-situ image target detection are shown in fig. 2. The left side is an original image, shot in high-turbidity seawater, containing two brown algae targets; the brown algae are highly transparent, their brightness is close to the background, and they are difficult to detect. The white rectangles on the left show the application's detection of the target positions in the image, and the right side shows the extracted ROI images.
The comparison of the meanings of the variable names involved in the target detection method of the underwater in-situ image is shown in table 1.
TABLE 1
The embodiment of the application also provides an object detection system of the underwater in-situ image, which comprises: a computer readable storage medium and a processor;
the computer-readable storage medium is for storing executable instructions;
the processor is used for reading executable instructions stored in the computer readable storage medium and executing the target detection method of the underwater in-situ image.
It should be noted that in the present application, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. The target detection method of the underwater in-situ image is characterized by comprising the following steps of:
s1, preprocessing an original image to obtain a gray image; traversing the gray scale image using a sliding window movement; when the sliding window moves each time, acquiring a first quantile of the brightness value of each pixel in the image sub-block corresponding to the current sliding window, and sequentially arranging all the first quantiles acquired by traversing to acquire a quantile list;
s2, calculating a second quantile of the quantile list, adding a preset offset value to the second quantile to serve as a quantile threshold, comparing the quantile threshold with the first quantile, judging whether the image sub-block is a foreground or not, and constructing a binary image;
s3, calculating a connected domain on the binary image by using a connected domain detection algorithm to obtain the coordinates of an external rectangular frame of the connected domain, mapping the coordinates of the external rectangular frame back to the gray level image, and cutting and obtaining the ROI image.
2. The method of claim 1, wherein constructing the binary image comprises:
each pixel of the binary image represents one sliding window on the grayscale image: its ordinate is the row index of the sliding window on the grayscale image, its abscissa is the window's index within that row, and the pixel value is 1 or 0 according to whether the corresponding image sub-block belongs to the foreground or the background.
3. The method according to claim 1, wherein determining whether the image sub-block is foreground in step S2 comprises:
comparing each first quantile in the list with the quantile threshold: if the first quantile is larger than the quantile threshold, the image sub-block corresponding to that first quantile is foreground; otherwise it is background.
4. The target detection method according to claim 1, wherein the step S3 includes:
s31, calculating a connected domain on the binary image by using a connected domain detection algorithm, and acquiring the coordinates of an external rectangular frame of the connected domain;
s32, converting the coordinates of the external rectangular frame to the gray level image to obtain target positioning information;
s33, cutting a required rectangular region from the original image according to the target positioning information to obtain an ROI image containing plankton targets.
5. The object detection method according to claim 1, wherein the original image preprocessing includes converting an original image into a gray scale image and downsizing the same.
6. The method of claim 5, wherein the relation between the moving step length of the sliding window and the width of the sliding window is:
Patch_step=Factor*Patch_size/2;
where Patch_step is the moving step of the sliding window, Factor is the scaling multiple of the grayscale image, and Patch_size is the width of the sliding window.
7. The method according to claim 4, wherein when the sliding window is rectangular, the coordinates of the circumscribed rectangular frame are converted to a gray-scale image by the following formula:
x1’=x1*Patch_step/Factor;y1’=y1*Patch_step/Factor;
x2’=x2*Patch_step/Factor;y2’=y2*Patch_step/Factor;
wherein x1, y1, x2, y2 are the coordinate values of the circumscribed rectangular frame, and x1’, y1’, x2’, y2’ are the corresponding coordinate values on the grayscale image.
8. The method of claim 1, wherein the sliding window is square, rectangular, circular, trapezoidal, or triangular in shape.
9. The method of claim 1, wherein the first and second quantiles range from 25% to 75%.
10. An object detection system for an underwater in-situ image, comprising: a computer readable storage medium and a processor;
the computer-readable storage medium is for storing executable instructions;
the processor is configured to read executable instructions stored in the computer readable storage medium and perform a method of object detection of an underwater in-situ image according to any of claims 1 to 9.
Priority Applications (1)
- CN202210271278.7A | Priority date: 2022-03-18 | Filing date: 2022-03-18 | Title: Target detection method and system for underwater in-situ image
Publications (1)
- CN116797777A | Publication date: 2023-09-22
Family
- ID: 88042507
Family Applications (1)
- CN202210271278.7A (filed 2022-03-18, CN) | Title: Target detection method and system for underwater in-situ image | Status: Pending
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination