CN107730526A - A kind of statistical method of the number of fish school - Google Patents
- Publication number
- CN107730526A (application CN201710874194.1A)
- Authority
- CN
- China
- Prior art keywords
- fish
- shoal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
Abstract
The invention discloses a statistical method for the number of fish in a school, comprising the following steps: step 1) acquire fish-school data by sonar detection; step 2) preprocess the acquired data to obtain a fish-school acoustic image sequence (I_1, I_2, …, I_Q), Q being the total number of frames; step 3) take N frames of the acoustic image sequence and quickly build a background model using the adjacent frames and interval frames before and after each frame; step 4) segment each frame against the background model to obtain the foreground targets and binarize the result; step 5) compute the centroid of the foreground target of each connected region in every frame and mark it on the frame; step 6) count the number of fish from the centroids marked in every frame. The method extracts the background directly from scene images that contain moving foreground, without building separate models of the background and targets in the scene; it prevents spurious multiple targets from being extracted when targets move slowly, and guarantees the quality of the foreground extraction.
Description
Technical field
The present invention relates to the technical field of fishery resource assessment, and in particular to a statistical method for the number of fish in a school.
Background technology
Marine acoustic detection is an important means of surveying and assessing fishery resources, and performing such assessment quickly, efficiently, and accurately is a key problem. In recent years, with advances in science and technology, high-frequency sonar has gradually become widely used in fishery resource surveys and assessments. At present, most resource surveys estimate stock by echo integration, but this estimate is rather coarse: it cannot determine whether an echo comes only from fish, so the error is large. The counting software bundled with some sonars likewise produces large errors when a fish target is small and its echo signal is weak.
Many background modeling methods exist at present, such as inter-frame differencing and symmetric differencing, but the targets they segment often contain gaps, and only partial contour information can be extracted; as a result, overlapping fish may be merged into one target or a single target may be split in two, so the error of these methods is large. Methods that build probability density functions or Gaussian mixture models can extract targets, but the algorithms are complex and computationally expensive.
For object counting, there are methods based on Kalman filtering, on sample blocks, and so on. Overlap and occlusion within a school can cause tracking to fail, giving poor adaptability; in sample-block methods the block size is fixed, which causes matching errors and is time-consuming. These methods are therefore not robust and are slow.
Summary of the invention
The object of the present invention is to overcome the complexity and heavy computation of existing fish-counting methods by proposing a new statistical method for the number of fish in a school. By computing the centroids of the foreground targets, the method only needs to mark the target centroids to obtain a count, and can detect and count the fish in a school more accurately and efficiently in a short time.
To achieve the above object, the present invention proposes a statistical method for the number of fish in a school, the method comprising:
Step 1) acquire fish-school data by sonar detection;
Step 2) preprocess the acquired sonar data to obtain the fish-school acoustic image sequence (I_1, I_2, …, I_Q), Q being the total number of frames;
Step 3) take N frames (I_1, I_2, …, I_N) of the acoustic image sequence and build a background model;
Step 4) perform image segmentation on every frame using the background model, obtaining the foreground targets of every frame, and binarize the result;
Step 5) compute the centroid of the foreground target of each connected region in every frame and mark it on the frame;
Step 6) count the number of fish from the centroids marked in every frame.
As an improvement of the above method, step 2) specifically comprises:
Step 201) read the collected high-frequency sonar data according to its storage format and convert it into a rectangular acoustic image;
Step 202) apply linear-interpolation preprocessing to the acoustic beams of every frame, inserting three new beams between each pair of adjacent beams.
The linear-interpolation preprocessing is:

$$B_x=\frac{4-x}{4}B_0+\frac{x}{4}B_0',\quad x=1,2,3$$

where B_x denotes the three beams inserted between the two adjacent beams B_0 and B_0';
Step 203) render the interpolated data as images to obtain the acoustic image sequence (I_1, I_2, …, I_Q), Q being the total number of frames.
As an improvement of the above method, step 3) specifically comprises:
Step 301) establish a gray-value statistics matrix M: each element M(j, x) records the total number of times gray level x, 0 ≤ x ≤ 255, has occurred at pixel j;
Step 302) taking the i-th frame, i = 4, 5, …, N−3, as reference in turn, choose the two interval frames before and after it (frames i−3, i−1, i+1, i+3) to form a five-frame group; compute BK_i(j) for every pixel of the i-th frame to determine the background pixels, and thereby update the gray-value statistics matrix M.
Let I_{i−3}(j), I_{i−1}(j), I_i(j), I_{i+1}(j), I_{i+3}(j) denote the gray value of pixel j in frames i−3, i−1, i, i+1, i+3, and let D_{i−1}(j), D_{i+1}(j) denote the forward and backward difference masks:

$$D_{i-1}(j)=\begin{cases}1,&|2I_i(j)-I_{i-1}(j)-I_{i-3}(j)|>T\\0,&|2I_i(j)-I_{i-1}(j)-I_{i-3}(j)|\le T\end{cases}$$

$$D_{i+1}(j)=\begin{cases}1,&|I_{i+3}(j)+I_{i+1}(j)-2I_i(j)|>T\\0,&|I_{i+3}(j)+I_{i+1}(j)-2I_i(j)|\le T\end{cases}$$

where T is a threshold obtained adaptively by Otsu's maximum between-class variance method, used to judge whether the gray value at pixel j has changed.
The difference masks determine whether the point is a foreground target point or a background point over the seven consecutive frames of acoustic images:

$$BK_i(j)=D_{i-1}(j)\cdot D_{i+1}(j)$$

If BK_i(j) = 1, i.e. D_{i−1}(j) and D_{i+1}(j) are both 1, the point is moving throughout the seven consecutive interval frames; otherwise BK_i(j) = 0 and the point is a background pixel.
The initial gray-value statistics matrix M is updated accordingly:

$$M(j,x)=\begin{cases}M(j,x)+1,&BK_i(j)=0\\M(j,x),&BK_i(j)=1\end{cases}$$

The process repeats until i = N−3;
Step 303) according to the gray-value statistics matrix M, take the gray value with the highest frequency of occurrence as the initial background gray value of each pixel, completing the background model:

$$B(j)=\arg\max_{0\le x\le 255}M(j,x)$$

where B(j) denotes the background model.
As an improvement of the above method, step 4) specifically comprises:
Step 401) difference each image frame against the background model in turn and binarize, obtaining the foreground detection image F_1(x, y) of every frame.
The background model B(j) is written B(x, y) and the current image frame I(x, y), where x and y denote the spatial position of the image, x the row and y the column; the foreground detection image is

$$F_1(x,y)=\begin{cases}1,&|I(x,y)-B(x,y)|>T_1\\0,&\text{otherwise}\end{cases}$$

where T_1 is a binarization threshold computed by Otsu's adaptive method; F_1(x, y) = 1 corresponds to the foreground targets plus some noise;
Step 402) remove isolated noise from the binarized F_1(x, y) with an area filter and perform blob analysis, labeling the result F(x, y).
As an improvement of the above method, step 5) specifically comprises:
first scan F(x, y) of every frame row by row and column by column; find the first point with F(x, y) = 1, mark it, and iteratively search its neighborhood, accumulating the pixel sum S_i of the connected region; then compute the centroid coordinates (X_i, Y_i) of the moving target:

$$X_i=\frac{\sum_{(x,y)\in F}xF(x,y)}{S_i},\qquad Y_i=\frac{\sum_{(x,y)\in F}yF(x,y)}{S_i}$$

Mark each coordinate position (X_i, Y_i) on the acoustic image until F(x, y) has been fully scanned.
The number of target centroids marked in each frame is equivalent to the number of fish; targets already marked in an image frame are not marked again, while any new target is marked with an additional symbol and color.
As an improvement of the above method, step 6) specifically comprises:
accumulate the marked points over the Q frames to obtain the number of fish in the acoustic image sequence; finally, displaying these marks on the original sequence frames shows the marked fish count in real time, completing the statistics of the number of fish in the school.
As an improvement of the above method, the method further comprises updating the background model, specifically:
as new fish-school acoustic image sequences are continually received, a threshold test decides whether the background model needs updating. The dynamic foreground of the current frame is first determined from the initial background model; if the percentage of changed pixels after differencing relative to all pixels of the image exceeds a threshold, taken as 80%, the background is judged to have changed, and if the background changes over several consecutive frames, the current acoustic image sequence is re-extracted and the background model rebuilt.
The advantages of the invention are:
1. Aimed at the inability of the software bundled with high-frequency imaging sonar to count small targets, the invention proposes a centroid-based method for counting the fish in a school that does not miss individual fish with weak echoes; the method is simple, fast, efficient, and more accurate, providing a new and faster statistical approach for fishery resource assessment;
2. The background modeling method of the invention effectively avoids mixing artifacts: it extracts the background directly from scene images containing moving foreground, without building separate models of the background and targets in the scene, prevents spurious multiple targets from being extracted when targets move slowly, and can extract a good background from relatively few frames, guaranteeing the quality of the foreground extraction.
Brief description of the drawings
Fig. 1 is a flow chart of the statistical method of the number of fish in a school according to the present invention;
Fig. 2 is a schematic diagram of the background modeling principle of the present invention.
Embodiment
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following embodiments illustrate the present invention but do not limit its scope.
This example uses a dual-frequency identification sonar transmitting high-frequency sound at 1.8 MHz underwater. Fig. 1 is a flow chart of the basic fish-school counting method of the invention; with reference to Fig. 1, the main steps are as follows:
First step: acquire high-frequency sonar data for locating fish.
Under natural conditions at sea, the dual-frequency identification sonar is fixed to a bracket placed 2 m below the sea surface; the bracket angle can be adjusted to the actual situation, and high-frequency acoustic beams are transmitted into the sea to acquire detection data.
Second step: preprocess the acquired sonar data to obtain the fish-school acoustic image sequence (I_1, I_2, …, I_Q), Q being the total number of frames.
The collected high-frequency sonar data are read according to their storage format and converted into rectangular acoustic images, and interpolation preprocessing is applied to the acoustic beams of every frame to make the images clearer and more complete, and easier to observe.
The simplest linear interpolation essentially meets the requirement, i.e. three new beams are inserted between each pair of adjacent beams:

$$B_x=\frac{4-x}{4}B_0+\frac{x}{4}B_0',\quad x=1,2,3$$

where B_x denotes the three beams inserted between the two adjacent beams B_0 and B_0'.
Because the sonar operates in high-frequency mode, each frame has 96 acoustic beams of 512 samples; linear interpolation expands the 96 beams to 381. The interpolated data are then rendered as images, yielding the acoustic image sequence (I_1, I_2, …, I_Q).
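The interpolation above (three inserted beams per adjacent pair takes 96 beams to 96 + 95 × 3 = 381) can be sketched as follows. This is a minimal illustration; the function name and array layout are the editor's assumptions, not from the patent:

```python
import numpy as np

def interpolate_beams(frame):
    """Insert three linearly interpolated beams between each adjacent beam pair.

    frame: array of shape (n_beams, n_samples), e.g. (96, 512).
    Returns shape (n_beams + 3*(n_beams - 1), n_samples): 96 -> 381 beams.
    Uses B_x = (4 - x)/4 * B0 + x/4 * B0', x = 1, 2, 3.
    """
    n_beams, n_samples = frame.shape
    out = np.empty((n_beams + 3 * (n_beams - 1), n_samples), dtype=float)
    for i in range(n_beams - 1):
        b0, b1 = frame[i].astype(float), frame[i + 1].astype(float)
        out[4 * i] = b0                       # original beam B0
        for x in (1, 2, 3):                   # three interpolated beams
            out[4 * i + x] = (4 - x) / 4 * b0 + x / 4 * b1
    out[-1] = frame[-1]                       # last original beam
    return out
```

For a 96-beam frame this yields exactly the 381 beams mentioned above.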
Third step: according to the fish-school acoustic image sequence, take N frames (I_1, I_2, …, I_N) of the sequence, perform background modeling based on statistical information, and segment the valid targets.
The specific implementation of this step is shown in Fig. 2.
A gray-value statistics matrix M is established: each element M(j, x) records the total number of times gray level x, 0 ≤ x ≤ 255, has occurred at pixel j.
Taking the i-th frame, i = 4, 5, …, N−3, as reference in turn, the two interval frames before and after it (frames i−3, i−1, i+1, i+3) are chosen to form a five-frame group; BK_i(j) is computed for every pixel of the i-th frame to determine the background pixels, and the gray-value statistics matrix M is updated accordingly.
I_{i−3}(j), I_{i−1}(j), I_i(j), I_{i+1}(j), I_{i+3}(j) denote the gray value of pixel j in frames i−3, i−1, i, i+1, i+3, and D_{i−1}(j), D_{i+1}(j) denote the forward and backward difference masks:

$$D_{i-1}(j)=\begin{cases}1,&|2I_i(j)-I_{i-1}(j)-I_{i-3}(j)|>T\\0,&|2I_i(j)-I_{i-1}(j)-I_{i-3}(j)|\le T\end{cases}$$

$$D_{i+1}(j)=\begin{cases}1,&|I_{i+3}(j)+I_{i+1}(j)-2I_i(j)|>T\\0,&|I_{i+3}(j)+I_{i+1}(j)-2I_i(j)|\le T\end{cases}$$

where T is a threshold obtained adaptively by Otsu's maximum between-class variance method, used to judge whether the gray value at pixel j has changed.
The masks determine whether the point is a foreground target point or a background point over the seven consecutive frames:

$$BK_i(j)=D_{i-1}(j)\cdot D_{i+1}(j)$$

If BK_i(j) = 1, i.e. D_{i−1}(j) and D_{i+1}(j) are both 1, the point is moving throughout the seven consecutive interval frames; otherwise BK_i(j) = 0 and the point is a background pixel.
The initial gray-value statistics matrix M is updated by the above processing:

$$M(j,x)=\begin{cases}M(j,x)+1,&BK_i(j)=0\\M(j,x),&BK_i(j)=1\end{cases}$$

The process repeats until i = N−3.
According to M, the gray value with the highest frequency of occurrence is taken as the initial background gray value of each pixel, completing the background modeling:

$$B(j)=\arg\max_{0\le x\le 255}M(j,x)$$

where B(j) denotes the background model.
The value of N in the above procedure can be chosen flexibly according to the actual situation, and it largely determines the acoustic image quality of the background model. Because the background in this sea trial is not especially complex, N is set to 16 here to improve efficiency.
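The modeling procedure above can be sketched as follows. This is a minimal illustration in which a fixed threshold T stands in for the Otsu-derived value, and the function name is the editor's own, not from the patent:

```python
import numpy as np

def build_background(frames, T=20):
    """Statistical background model from N frames via difference masks.

    frames: uint8 array of shape (N, H, W).
    For each reference frame i, pixels where both the forward and backward
    difference masks fire (BK = 1) are treated as moving and skipped;
    all other pixels contribute to a per-pixel gray-level histogram M,
    and the most frequent gray level becomes the background value.
    """
    N, H, W = frames.shape
    I = frames.astype(int)
    M = np.zeros((H, W, 256), dtype=int)      # gray-level occurrence counts
    for i in range(3, N - 3):                 # i = 4 .. N-3 in 1-based terms
        d_prev = np.abs(2 * I[i] - I[i - 1] - I[i - 3]) > T   # forward mask
        d_next = np.abs(I[i + 3] + I[i + 1] - 2 * I[i]) > T   # backward mask
        bk = d_prev & d_next                  # True = moving foreground pixel
        ys, xs = np.nonzero(~bk)              # count background pixels only
        np.add.at(M, (ys, xs, frames[i][ys, xs]), 1)
    return M.argmax(axis=2).astype(np.uint8)  # most frequent gray level
```

On a static scene every pixel is counted in M and the background equals the constant gray value; moving targets are excluded from the histogram.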
After the background model is initialized, a threshold test decides whether it needs updating. As new images arrive, the background gray values may change under the influence of factors such as sunlight; to reduce the false-alarm rate caused by background change, the background model must be updated and maintained dynamically in an adaptive manner, while the initial background model is always retained to guarantee reliability. The dynamic foreground of the current frame is first determined from the initial background model; if the percentage of changed pixels after differencing relative to all pixels exceeds a threshold (generally 80%), the background is considered changed, and if the ratio remains high over several consecutive frames, the image sequence is re-extracted and the background remodeled.
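The 80% update test can be sketched as a short check. This is a hedged illustration: the per-pixel difference threshold `diff_thresh` is an assumption, since the patent does not specify how "changed" pixels are detected:

```python
import numpy as np

def background_changed(frame, background, diff_thresh=30, ratio_thresh=0.8):
    """Return True when the fraction of changed pixels exceeds ratio_thresh,
    signalling that the background model should be rebuilt."""
    changed = np.abs(frame.astype(int) - background.astype(int)) > diff_thresh
    return changed.mean() > ratio_thresh
```

In practice one would require this to hold over several consecutive frames, as the text describes, before re-extracting the sequence.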
Fourth step: image segmentation, obtaining the foreground targets and binarizing. Based on the background model above, each image frame is differenced against the background model in turn, yielding the foreground targets of every frame.
Let the background model B(j) be written B(x, y) and the current frame I(x, y), where x and y denote the spatial position of the image, x the row and y the column; the resulting foreground detection image F_1(x, y) is

$$F_1(x,y)=\begin{cases}1,&|I(x,y)-B(x,y)|>T_1\\0,&\text{otherwise}\end{cases}$$

where T_1 is a binarization threshold computed by Otsu's adaptive method; F_1(x, y) = 1 corresponds to the foreground targets plus some noise.
Isolated noise is removed from the binarized F_1(x, y) with the area filter of mathematical morphology, i.e. regions whose area is below a threshold are removed; blob analysis is then performed and the result labeled F(x, y).
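A minimal sketch of this segmentation-plus-area-filtering step follows. A fixed `T1` replaces the Otsu threshold and `min_area` is an illustrative choice; neither value, nor the 4-connectivity, is specified by the patent:

```python
import numpy as np

def segment_foreground(frame, background, T1=30, min_area=3):
    """Background difference -> binary mask -> small-area noise removal."""
    f1 = (np.abs(frame.astype(int) - background.astype(int)) > T1).astype(np.uint8)
    H, W = f1.shape
    out = np.zeros_like(f1)
    seen = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            if f1[y, x] and not seen[y, x]:
                # flood fill one connected region (4-connectivity)
                stack, region = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < H and 0 <= nx < W and f1[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(region) >= min_area:    # keep only sufficiently large blobs
                    for cy, cx in region:
                        out[cy, cx] = 1
    return out
```

Regions smaller than `min_area` pixels are discarded as isolated noise, matching the area filter described above.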
Fifth step: compute the centroid of the foreground target of each connected region in every frame and mark it on the frame.
F(x, y) is first scanned row by row and column by column; the first point with F(x, y) = 1 is marked and its neighborhood searched iteratively, accumulating the pixel sum S_i of the connected region, after which the centroid of the moving target is computed:

$$X_i=\frac{\sum_{(x,y)\in F}xF(x,y)}{S_i},\qquad Y_i=\frac{\sum_{(x,y)\in F}yF(x,y)}{S_i}$$

The centroid coordinates (X_i, Y_i) are recorded and marked on the acoustic image until F(x, y) has been fully scanned; the algorithm detects all connected regions in a single pass.
The number of target centroids marked per frame is equivalent to the number of fish; targets already marked in an image frame are not marked again, while any new target is marked with an additional symbol and color.
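The scan-and-search centroid computation can be sketched as follows. This is an illustrative implementation using 4-connectivity flood fill for the iterative neighborhood search; the patent does not specify the connectivity:

```python
import numpy as np

def centroids(mask):
    """Scan the binary mask, flood-fill each connected region once, and return
    the centroid (X_i, Y_i) = (sum of x, sum of y) / S_i of every region."""
    H, W = mask.shape
    seen = np.zeros((H, W), dtype=bool)
    result = []
    for y in range(H):
        for x in range(W):
            if mask[y, x] and not seen[y, x]:
                stack = [(y, x)]
                seen[y, x] = True
                sx = sy = s = 0
                while stack:
                    cy, cx = stack.pop()
                    sx += cx; sy += cy; s += 1      # accumulate coords and area S_i
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                result.append((sx / s, sy / s))      # one centroid per region
    return result  # len(result) is the fish count for this frame
```

The length of the returned list is the per-frame centroid count, i.e. the fish count described above.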
Sixth step: count the number of fish from the centroids marked in every frame.
Accumulating the marked points over the Q frames gives the number of fish in the whole acoustic image sequence; finally, displaying these marks on the original sequence frames shows the marked fish count in real time, completing the statistics of the number of fish in the school.
The invention adopts a dual-frequency identification sonar system; the data can be stored in video format or DDF format, and either can serve as the acoustic image sequence information of the invention. Through simple background modeling and updating, an accurate fish count is obtained in a short time, improving accuracy and extending applicability to small targets, no longer limited, as the bundled software is, to counting targets larger than roughly ten centimeters.
Finally, it should be noted that the above embodiments merely illustrate, and do not restrict, the technical solution of the present invention. Although the invention has been described in detail with reference to embodiments, those skilled in the art will understand that modifications or equivalent substitutions of the technical solution which do not depart from its spirit and scope shall all be covered by the claims of the present invention.
Claims (7)
1. A statistical method for the number of fish in a school, the method comprising the following steps:
step 1) acquiring fish-school data by sonar detection;
step 2) preprocessing the acquired sonar data to obtain a fish-school acoustic image sequence (I_1, I_2, …, I_Q), Q being the total number of frames;
step 3) taking N frames (I_1, I_2, …, I_N) of the acoustic image sequence and building a background model;
step 4) performing image segmentation on every frame using the background model, obtaining the foreground targets of every frame and binarizing them;
step 5) computing the centroid of the foreground target of each connected region in every frame and marking it on the frame;
step 6) counting the number of fish from the centroids marked in every frame.
2. The statistical method of the number of fish in a school according to claim 1, characterized in that step 2) specifically comprises:
step 201) reading the collected high-frequency sonar data according to its storage format and converting it into a rectangular acoustic image;
step 202) applying linear-interpolation preprocessing to the acoustic beams of every frame, three new beams being inserted between each pair of adjacent beams;
the linear-interpolation preprocessing being:
$$B_x=\frac{4-x}{4}B_0+\frac{x}{4}B_0',\quad x=1,2,3$$
where B_x denotes the three beams inserted between the two adjacent beams B_0 and B_0';
step 203) rendering the interpolated data as images to obtain the acoustic image sequence (I_1, I_2, …, I_Q), Q being the total number of frames.
3. The statistical method of the number of fish in a school according to claim 2, characterized in that step 3) specifically comprises:
step 301) building a gray-value statistics matrix M, each element M(j, x) of which records the total number of times gray level x, 0 ≤ x ≤ 255, has occurred at pixel j;
step 302) taking the i-th frame, i = 4, 5, …, N−3, as reference in turn, choosing the two interval frames before and after it, i.e. frames i−3, i−1, i+1, i+3, to form a five-frame group, computing BK_i(j) for every pixel of the i-th frame to determine the background pixels, and thereby updating the gray-value statistics matrix M;
I_{i−3}(j), I_{i−1}(j), I_i(j), I_{i+1}(j), I_{i+3}(j) denoting the gray value of pixel j in frames i−3, i−1, i, i+1, i+3 respectively, and D_{i−1}(j), D_{i+1}(j) denoting the forward and backward difference masks:
$$D_{i-1}(j)=\begin{cases}1,&|2I_i(j)-I_{i-1}(j)-I_{i-3}(j)|>T\\0,&|2I_i(j)-I_{i-1}(j)-I_{i-3}(j)|\le T\end{cases},\quad i=4,5,\dots,N-3$$
$$D_{i+1}(j)=\begin{cases}1,&|I_{i+3}(j)+I_{i+1}(j)-2I_i(j)|>T\\0,&|I_{i+3}(j)+I_{i+1}(j)-2I_i(j)|\le T\end{cases},\quad i=4,5,\dots,N-3$$
where T is a threshold obtained adaptively by Otsu's maximum between-class variance method, used to judge whether the gray value at pixel j has changed;
the difference masks determining whether the point is a foreground target point or a background point over the seven consecutive frames of acoustic images:

$$BK_i(j)=D_{i-1}(j)\cdot D_{i+1}(j)$$

where, if BK_i(j) = 1, i.e. D_{i−1}(j) and D_{i+1}(j) are both 1, the point is moving throughout the seven consecutive interval frames; otherwise BK_i(j) = 0 and the point is a background pixel;
the initial gray-value statistics matrix M being updated accordingly:
$$M(j,x)=\begin{cases}M(j,x)+1,&BK_i(j)=0\\M(j,x),&BK_i(j)=1\end{cases}$$
the process being repeated until i = N−3;
step 303) taking, according to the gray-value statistics matrix M, the gray value with the highest frequency of occurrence as the initial background gray value of each pixel, thereby completing the background model:
$$B(j)=\arg\max_{0\le x\le 255}M(j,x)$$
where B(j) denotes the background model.
4. The statistical method of the number of fish in a school according to claim 3, characterized in that step 4) specifically comprises:
step 401) differencing each image frame against the background model in turn and binarizing, obtaining the foreground detection image F_1(x, y) of every frame;
the background model B(j) being written B(x, y) and the current image frame I(x, y), where x and y denote the spatial position of the image, x the row and y the column, the foreground detection image F_1(x, y) is obtained as

$$F_1(x,y)=\begin{cases}1,&|I(x,y)-B(x,y)|>T_1\\0,&\text{otherwise}\end{cases}$$

where T_1 is a binarization threshold computed by Otsu's adaptive method, and F_1(x, y) = 1 corresponds to the foreground targets plus some noise;
step 402) removing isolated noise from the binarized F_1(x, y) with an area filter and performing blob analysis, the result being labeled F(x, y).
5. The statistical method of the number of fish in a school according to claim 4, characterized in that step 5) specifically comprises:
first scanning F(x, y) of every frame row by row and column by column, finding the first point with F(x, y) = 1, marking it, and iteratively searching its neighborhood while accumulating the pixel sum S_i of the connected region; then computing the centroid coordinates (X_i, Y_i) of the moving target:
$$X_i=\frac{\sum_{(x,y)\in F}xF(x,y)}{S_i},\qquad Y_i=\frac{\sum_{(x,y)\in F}yF(x,y)}{S_i}$$
Each centroid is marked and framed in the acoustic image at its coordinate position (Xi,Yi) until F(x,y) has been fully scanned.
The number of target centroids marked in each image frame can be taken as the number of fish; targets already labeled in the image frame are not marked again, while any newly appearing target is given an additional character and color mark.
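The scan-mark-and-measure loop of step 5) can be sketched as a flood-fill connected-component pass that accumulates Si and the coordinate sums from the two centroid formulas (4-connectivity and the `min_area` guard are my assumptions, not fixed by the claim):

```python
import numpy as np
from collections import deque

def centroids(F, min_area=1):
    """Scan F(x, y) row by row and column by column; for each unvisited
    point with F = 1, flood-fill its connected region, accumulate the
    pixel-area sum S_i, and return the centroid
    (X_i, Y_i) = (sum of x / S_i, sum of y / S_i)."""
    h, w = F.shape
    seen = np.zeros((h, w), dtype=bool)
    out = []
    for x in range(h):          # x indexes rows, as in the claim
        for y in range(w):      # y indexes columns
            if F[x, y] == 1 and not seen[x, y]:
                q = deque([(x, y)])
                seen[x, y] = True
                sx = sy = s = 0
                while q:
                    cx, cy = q.popleft()
                    sx += cx; sy += cy; s += 1
                    for nx, ny in ((cx-1, cy), (cx+1, cy), (cx, cy-1), (cx, cy+1)):
                        if 0 <= nx < h and 0 <= ny < w \
                                and F[nx, ny] == 1 and not seen[nx, ny]:
                            seen[nx, ny] = True
                            q.append((nx, ny))
                if s >= min_area:
                    out.append((sx / s, sy / s))
    return out
```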
6. The fish school counting method according to claim 5, characterized in that step 6) specifically comprises:
The marked points are accumulated over Q image frames to obtain the number of fish in the acoustic image sequence; finally, these marked points are displayed in the original sequence frames, so that the acoustic image shows the fish marker count in real time, completing the fish count statistics.
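The accumulation of step 6) can be sketched as follows. The distance-based matching rule is hypothetical: the claims only say that already-labeled targets are not marked again, so some rule for deciding whether a centroid corresponds to an existing mark is needed, and a nearest-mark distance test is one plausible choice:

```python
def count_over_frames(per_frame_centroids, match_dist=5.0):
    """Accumulate centroids over Q frames; count a centroid as a new
    fish only if no previously marked centroid lies within match_dist
    pixels (hypothetical matching rule)."""
    marked = []
    for frame in per_frame_centroids:
        for (cx, cy) in frame:
            if all((cx - mx) ** 2 + (cy - my) ** 2 > match_dist ** 2
                   for (mx, my) in marked):
                marked.append((cx, cy))
    return len(marked)
```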
7. The fish school counting method according to claim 3, characterized in that the method further comprises updating the background model, specifically:
As new fish-school acoustic image sequences keep arriving, a threshold is used to decide whether the background model needs updating. First, the dynamic foreground of the current frame is judged against the initial background model; if the percentage of changed pixels after differencing, relative to all pixels, exceeds a threshold (the threshold is taken as 80%), the background is judged to have changed. If the background changes over several consecutive frames, the acoustic image sequence at that point is extracted and the background model is re-established.
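The update rule of claim 7 can be sketched as below. The 80% ratio and consecutive-frame check follow the claim; the median estimator for the rebuilt background is my assumption, since the patent does not fix how the model is re-established:

```python
import numpy as np

def background_changed(frame, background, t1, ratio=0.80):
    """Flag a background change when the fraction of changed pixels
    after differencing exceeds the threshold (80% per the claim)."""
    changed = np.abs(frame.astype(int) - background.astype(int)) > t1
    return changed.mean() > ratio

def maybe_rebuild(frames, background, t1, ratio=0.80, consecutive=3):
    """Rebuild the background model if the change persists over several
    consecutive frames; median of those frames is one common estimator
    (an assumption, not specified by the patent)."""
    recent = frames[-consecutive:]
    if all(background_changed(f, background, t1, ratio) for f in recent):
        return np.median(np.stack(recent), axis=0).astype(frames[0].dtype)
    return background
```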
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710874194.1A CN107730526A (en) | 2017-09-25 | 2017-09-25 | A kind of statistical method of the number of fish school |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710874194.1A CN107730526A (en) | 2017-09-25 | 2017-09-25 | A kind of statistical method of the number of fish school |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107730526A true CN107730526A (en) | 2018-02-23 |
Family
ID=61207837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710874194.1A Pending CN107730526A (en) | 2017-09-25 | 2017-09-25 | A kind of statistical method of the number of fish school |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107730526A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108392170A (en) * | 2018-02-09 | 2018-08-14 | 中北大学 | A kind of human eye follow-up mechanism and recognition positioning method for optometry unit |
CN108731795A (en) * | 2018-05-31 | 2018-11-02 | 中国科学院声学研究所 | A kind of field birds quantity survey method based on acoustic imaging technology |
CN108742159A (en) * | 2018-04-08 | 2018-11-06 | 浙江安精智能科技有限公司 | Intelligent control device of water dispenser based on RGB-D cameras and its control method |
CN110570361A (en) * | 2019-07-26 | 2019-12-13 | 武汉理工大学 | sonar image structured noise suppression method, system, device and storage medium |
CN110992389A (en) * | 2019-11-08 | 2020-04-10 | 浙江大华技术股份有限公司 | Termite monitoring method, termite monitoring device and termite monitoring storage device |
CN112819847A (en) * | 2021-02-02 | 2021-05-18 | 中国水利水电科学研究院 | Method and system for segmenting target fish image of fish passing channel |
KR102276669B1 (en) * | 2021-04-12 | 2021-07-13 | (주)한컴인텔리전스 | Fish-shoal ecosystem monitoring system apparatus for detecting the abnormality of fish-shoal ecosystem and the operating method thereof |
CN113327263A (en) * | 2021-05-18 | 2021-08-31 | 浙江工业大学 | Fish shoal liveness monitoring method based on image vision |
CN113658124A (en) * | 2021-08-11 | 2021-11-16 | 杭州费尔马科技有限责任公司 | Method for checking underwater culture assets |
CN114119662A (en) * | 2021-11-23 | 2022-03-01 | 广州市斯睿特智能科技有限公司 | Image processing method and system in fish detection visual system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102867349A (en) * | 2012-08-20 | 2013-01-09 | 无锡慧眼电子科技有限公司 | People counting method based on elliptical ring template matching |
US20150317797A1 (en) * | 2012-11-28 | 2015-11-05 | Zte Corporation | Pedestrian tracking and counting method and device for near-front top-view monitoring video |
CN106408575A (en) * | 2016-09-06 | 2017-02-15 | 东南大学 | Time-space image-based vehicle counting method applied to urban traffic scene |
CN106780502A (en) * | 2016-12-27 | 2017-05-31 | 江苏省无线电科学研究所有限公司 | Sugarcane seeding stage automatic testing method based on image |
CN106815819A (en) * | 2017-01-24 | 2017-06-09 | 河南工业大学 | Many strategy grain worm visible detection methods |
2017-09-25: CN application CN201710874194.1A filed (patent/CN107730526A/en); status: Pending
Non-Patent Citations (2)
Title |
---|
危自福 et al.: "Multiple Moving Object Segmentation Based on Background Reconstruction and Level Set", Opto-Electronic Engineering (《光电工程》) *
张进: "Quantitative Assessment of Fish Stocks Based on the Dual-Frequency Identification Sonar DIDSON", China Master's Theses Full-text Database, Information Science and Technology series (《中国优秀硕士学位论文全文数据库 信息科技辑》) *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108392170A (en) * | 2018-02-09 | 2018-08-14 | 中北大学 | A kind of human eye follow-up mechanism and recognition positioning method for optometry unit |
CN108742159A (en) * | 2018-04-08 | 2018-11-06 | 浙江安精智能科技有限公司 | Intelligent control device of water dispenser based on RGB-D cameras and its control method |
CN108731795A (en) * | 2018-05-31 | 2018-11-02 | 中国科学院声学研究所 | A kind of field birds quantity survey method based on acoustic imaging technology |
CN110570361B (en) * | 2019-07-26 | 2022-04-01 | 武汉理工大学 | Sonar image structured noise suppression method, system, device and storage medium |
CN110570361A (en) * | 2019-07-26 | 2019-12-13 | 武汉理工大学 | sonar image structured noise suppression method, system, device and storage medium |
CN110992389A (en) * | 2019-11-08 | 2020-04-10 | 浙江大华技术股份有限公司 | Termite monitoring method, termite monitoring device and termite monitoring storage device |
CN112819847A (en) * | 2021-02-02 | 2021-05-18 | 中国水利水电科学研究院 | Method and system for segmenting target fish image of fish passing channel |
KR102276669B1 (en) * | 2021-04-12 | 2021-07-13 | (주)한컴인텔리전스 | Fish-shoal ecosystem monitoring system apparatus for detecting the abnormality of fish-shoal ecosystem and the operating method thereof |
WO2022220354A1 (en) * | 2021-04-12 | 2022-10-20 | (주)한컴인텔리전스 | Fish shoal ecosystem monitoring system device for detecting abnormality in fish shoal ecosystem, and method for operation same |
CN113327263A (en) * | 2021-05-18 | 2021-08-31 | 浙江工业大学 | Fish shoal liveness monitoring method based on image vision |
CN113327263B (en) * | 2021-05-18 | 2024-03-01 | 浙江工业大学 | Image vision-based fish school activity monitoring method |
CN113658124A (en) * | 2021-08-11 | 2021-11-16 | 杭州费尔马科技有限责任公司 | Method for checking underwater culture assets |
CN113658124B (en) * | 2021-08-11 | 2024-04-09 | 杭州费尔马科技有限责任公司 | Method for checking underwater culture assets |
CN114119662A (en) * | 2021-11-23 | 2022-03-01 | 广州市斯睿特智能科技有限公司 | Image processing method and system in fish detection visual system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107730526A (en) | A kind of statistical method of the number of fish school | |
CN109460754B (en) | A kind of water surface foreign matter detecting method, device, equipment and storage medium | |
CN110084234B (en) | Sonar image target identification method based on example segmentation | |
CN104049245B (en) | Urban building change detection method based on LiDAR point cloud spatial difference analysis | |
CN110889844B (en) | Coral distribution and health condition assessment method based on deep clustering analysis | |
CN112052817B (en) | Improved YOLOv3 model side-scan sonar sunken ship target automatic identification method based on transfer learning | |
CN110516606A (en) | High-resolution satellite image any direction Ship Target Detection method | |
CN109376589A (en) | ROV deformation target and Small object recognition methods based on convolution kernel screening SSD network | |
CN109544694A (en) | A kind of augmented reality system actual situation hybrid modeling method based on deep learning | |
CN112017192A (en) | Glandular cell image segmentation method and system based on improved U-Net network | |
CN111968159A (en) | Simple and universal fish video image track tracking method | |
CN108537115A (en) | Image-recognizing method, device and electronic equipment | |
CN107545579A (en) | A kind of cardiac segmentation method, equipment and storage medium | |
CN110706177A (en) | Method and system for equalizing gray level of side-scan sonar image | |
CN115393734A (en) | SAR image ship contour extraction method based on fast R-CNN and CV model combined method | |
CN112907615B (en) | Submarine landform unit contour and detail identification method based on region growing | |
Chamberlain et al. | ImageCLEFcoral task: coral reef image annotation and localisation | |
CN115063428B (en) | Spatial dim small target detection method based on deep reinforcement learning | |
Xiong et al. | Artificial reef detection and recognition based on Faster-RCNN | |
CN116152649A (en) | Sonar target detection method and device based on improved YOLOv4 | |
CN115223033A (en) | Synthetic aperture sonar image target classification method and system | |
CN115240058A (en) | Side-scan sonar target detection method combining accurate image segmentation and target shadow information | |
CN115294322A (en) | Underwater ship bottom suspicious target detection method and device, electronic equipment and readable medium | |
CN112529072A (en) | Underwater buried object identification and positioning method based on sonar image processing | |
CN113284164A (en) | Shrimp swarm automatic counting method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-02-23 |