CN109784378A - Underwater fishing method based on machine vision - Google Patents
Underwater fishing method based on machine vision
- Publication number
- CN109784378A (application CN201811608004.2A)
- Authority
- CN
- China
- Prior art keywords
- fish
- pixel
- image
- weight
- underwater
- Prior art date
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K61/00—Culture of aquatic animals
- A01K61/90—Sorting, grading, counting or marking live aquatic animals, e.g. sex determination
- A01K61/95—Sorting, grading, counting or marking live aquatic animals, e.g. sex determination specially adapted for fish
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Abstract
The present invention relates to an underwater fishing method based on machine vision. An underwater fishing device, combined with the steps of fish identification and classification, fish length measurement, and weight prediction, monitors fish growth in real time and accurately determines the optimal fishing period. The fish identification and classification comprises image acquisition and preprocessing, wavelet feature extraction, and BP neural network fish image classification; the fish length measurement and weight prediction comprise acquisition of fish sample length and weight parameters, fish length information extraction, length error compensation, and weight prediction. The advantages of the underwater fishing method based on machine vision of the present invention are that it can capture fish of a particular species and size, estimate fish weight, monitor fish growth in real time, and accurately determine the optimal fishing period.
Description
Technical field
The present invention relates to underwater fish capturing technology, and in particular to an underwater fishing method based on machine vision.
Background technique
The species mainly targeted by marine fishery production are fish or economic animals engaged in migration, breeding, foraging, or overwintering. Reproductive populations in particular form schools of high and stable density, and most schools cluster by growth group or year class; this is especially evident in salmonids.
Therefore, if individuals below catchable size (such as juveniles or sexually immature fish) are caught excessively during fishing operations, the loss inevitably outweighs the gain: the stock of the following year is seriously affected, fishery resources may decline, and the long-term consequences are severe. To achieve sustainable, rational fishing, it is therefore necessary to fish selectively for fish of different species and sizes.
At present, however, small-scale inland fish farms typically drain the pond to harvest all the fish, while marine farms use fishing boats with seine or cast nets and then sort the catch manually, returning fish that do not meet the catch criteria. This leads to high labor intensity, heavy workload, and low efficiency. Underwater fishing robots therefore have great significance for both theoretical research and practical application, with good economic value and social benefit.
Summary of the invention
The technical problem to be solved by the present invention is to provide an underwater fishing method based on machine vision that can capture fish of a particular species and size, estimate fish weight, monitor fish growth in real time, and accurately determine the optimal fishing period.
To solve the above technical problem, the technical solution of the present invention is as follows: an underwater fishing method based on machine vision, whose innovation lies in that an underwater fishing device, combined with the steps of fish identification and classification, fish length measurement, and weight prediction, monitors fish growth in real time and accurately determines the optimal fishing period.
The fish identification and classification comprises image acquisition and preprocessing, wavelet feature extraction, and BP neural network fish image classification; the fish length measurement and weight prediction comprise acquisition of fish sample length and weight parameters, fish length information extraction, length error compensation, and weight prediction.
The underwater fishing device comprises an underwater camera for acquiring images, an underwater lamp for illumination in dark environments, a fish-luring device for attracting fish, a dedicated underwater fishing net, and an underwater robot for capturing fish; the fish-luring device comprises a tri-color LED and a variable-frequency sound generator.
Further, the fish identification and classification specifically comprises the following steps:
Step 1: image acquisition and preprocessing: the underwater camera collects a color image of the fish; the original image is filtered with an improved median filter; the filtered image is then segmented to remove the background; finally, grayscale, morphological, and binarization operations are applied to the segmented image to obtain the processed two-dimensional binary image of the fish body.
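As a minimal sketch of this preprocessing chain: the patent does not specify how its median filter is "improved", so a plain k×k median stands in for it, and a simple global threshold stands in for the full background-removal segmentation.

```python
import numpy as np

def median_filter(img, k=3):
    # Naive k x k median filter (stand-in for the patent's "improved" median filter).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def binarize(gray, thresh=128):
    # Simple global threshold producing the two-dimensional binary fish-body image.
    return (gray > thresh).astype(np.uint8)

noisy = np.full((8, 8), 200, dtype=np.uint8)
noisy[4, 4] = 0                      # a salt-and-pepper style outlier
clean = median_filter(noisy)         # the filter removes the isolated outlier
mask = binarize(clean)
```

In practice the morphological cleanup (opening/closing) would follow binarization to remove residual speckle before the fish-body mask is used downstream.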
Step 2: wavelet feature extraction:
1) Normalization: the preprocessed image from step 1 is normalized.
2) Polar coordinate transformation: let f(x, y) denote the two-dimensional binary image in rectangular coordinates; the regular moments are defined as M_pq = ∫∫ x^p·y^q·f(x, y) dx dy. Substituting x = r·cos(θ), y = r·sin(θ) converts the expression to polar coordinates and gives the general form of the moment features, F_pq = ∫∫ f(r, θ)·g_p(r)·e^(jqθ)·r dr dθ, where g_p(r) is the radial component of the transform kernel and e^(jqθ) is its angular component.
3) Rotation-invariant wavelet moment feature extraction: let s_q(r) = ∫ f(r, θ)·e^(jqθ) dθ; the above can then be written as F_pq = ∫ s_q(r)·g_p(r)·r dr, and it can be shown that the feature modulus ||F_pq|| remains unchanged after the image is rotated. An appropriate mother wavelet ψ(r) is selected, and a wavelet function family ψ_{m,n}(r) is generated by dilation and translation, with m and n the scale and translation variables; the wavelet moment invariants are then ||F_{m,n,q}|| = ||∫ s_q(r)·ψ_{m,n}(r)·r dr||.
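The wavelet moment invariants can be computed numerically on a discrete binary image. The sketch below assumes a Mexican-hat mother wavelet ψ(r) as one possible choice (the patent only asks for an "appropriate" wavelet) and a simple dilation/translation scheme for ψ_{m,n}(r); the θ and r integrals are combined into a single sum over pixels.

```python
import numpy as np

def wavelet_moment_invariants(img, q_max=2, scales=(1, 2)):
    # Rotation-invariant wavelet moments ||F_{m,n,q}|| of a binary image.
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2, (h - 1) / 2
    r = np.hypot(x - cx, y - cy)
    r = r / r.max()                           # normalize radius to [0, 1]
    theta = np.arctan2(y - cy, x - cx)
    feats = []
    for q in range(q_max + 1):
        s_q = img * np.exp(1j * q * theta)    # integrand of s_q(r)
        for m in scales:
            for n in range(m + 1):
                rr = m * r - 0.5 * n          # dilated/translated argument
                psi = (1 - rr**2) * np.exp(-rr**2 / 2)  # Mexican-hat psi(r)
                F = np.sum(s_q * psi * r)     # discrete integral over the image
                feats.append(abs(F))          # modulus is rotation invariant
    return np.array(feats)

img = np.zeros((16, 16))
img[4:8, 6:10] = 1.0                          # a small rectangular "fish body"
feats = wavelet_moment_invariants(img)
```

Rotating the image by 90° multiplies each F by a unit-modulus phase e^(jqπ/2), so the feature moduli are unchanged, which is easy to check with `np.rot90`.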
Step 3: BP neural network fish image classification:
1) Network initialization: the moment features of the target image obtained in the above steps serve as the input of the BP network, from which the target is identified. Suppose the input layer has n nodes, the hidden layer has l nodes, and the output layer has m nodes; the weights from the input layer to the hidden layer are ω_ij, the weights from the hidden layer to the output layer are ω_jk, the biases of the hidden layer are a_j, and the biases of the output layer are b_k; the learning rate is η and the activation function is g(x), taken as the Sigmoid function g(x) = 1/(1 + e^(−x)).
2) Hidden layer and output layer outputs: using a three-layer BP neural network, the hidden layer output is H_j = g(Σ_{i=1..n} ω_ij·x_i + a_j), and the output layer output is O_k = Σ_{j=1..l} H_j·ω_jk + b_k.
3) Error calculation: the error is taken as E = (1/2)·Σ_{k=1..m} (Y_k − O_k)², where Y_k is the desired output; writing Y_k − O_k = e_k, the error can be expressed as E = (1/2)·Σ_{k=1..m} e_k², with i = 1…n, j = 1…l, k = 1…m.
4) Weight and bias updates:
weight updates: ω_ij ← ω_ij + η·H_j·(1 − H_j)·x_i·Σ_{k=1..m} ω_jk·e_k and ω_jk ← ω_jk + η·H_j·e_k;
bias updates: a_j ← a_j + η·H_j·(1 − H_j)·Σ_{k=1..m} ω_jk·e_k and b_k ← b_k + η·e_k.
5) The activations produced by the output units are compared with the desired values to judge whether the algorithm has converged; if it has converged, the image recognition result is output, otherwise the algorithm returns to 2).
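A compact NumPy sketch of a three-layer BP network of this kind, with sigmoid hidden units, a linear output layer, and the standard gradient-descent update rules; the layer sizes, the training sample, and the random initialization are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

n, l, m = 4, 6, 2          # input, hidden, and output node counts (illustrative)
eta = 0.1                  # learning rate
W1 = rng.normal(scale=0.5, size=(n, l)); a = np.zeros(l)
W2 = rng.normal(scale=0.5, size=(l, m)); b = np.zeros(m)

def g(x):                  # Sigmoid activation g(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y):
    global W1, W2, a, b
    H = g(x @ W1 + a)      # hidden layer output H_j
    O = H @ W2 + b         # linear output layer O_k
    e = y - O              # e_k = Y_k - O_k
    # Gradient-descent updates for this sigmoid-hidden / linear-output network.
    W2 += eta * np.outer(H, e)
    b += eta * e
    delta = H * (1 - H) * (W2 @ e)
    W1 += eta * np.outer(x, delta)
    a += eta * delta
    return 0.5 * np.sum(e**2)   # error E

x = np.array([0.2, 0.8, 0.5, 0.1])    # stand-in for a moment-feature vector
y = np.array([1.0, 0.0])              # one-hot target class
errs = [train_step(x, y) for _ in range(200)]
```

Repeating the step until the error stops decreasing corresponds to the convergence test in 5); in a real classifier the loop would iterate over many labeled fish images rather than a single sample.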
Further, the image segmentation in step 1 first crops the filtered image to a rectangular image and then segments it using the Grab Cut algorithm.
Further, the Grab Cut algorithm works in the RGB color space and models the target and the background each with a full-covariance Gaussian mixture model (GMM) of K Gaussian components (K = 5 is usually taken). There is then an additional vector k = {k_1, ..., k_n, ..., k_N}, where k_n is the index of the Gaussian component corresponding to the n-th pixel, k_n ∈ {1, ..., K}: each pixel comes either from some Gaussian component of the target GMM or from some Gaussian component of the background GMM. The Gibbs energy of the whole image is then:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z);
where U is the region term, expressing the penalty for classifying a pixel as target or background, i.e. the negative logarithm of the probability that the pixel belongs to the target or the background; the Gaussian mixture density has the form P(z) = Σ_{i=1..K} π_i·N(z; μ_i, Σ_i), with Σ_{i=1..K} π_i = 1 and 0 ≤ π_i ≤ 1.
Grab Cut minimizes this energy iteratively: each iteration improves the parameters of the GMMs modeling the target and background, and thereby improves the segmentation. The specific steps are as follows:
Step 1: the user selects the target by drawing a box, yielding an initial trimap T: all pixels outside the box become background pixels T_B, and all pixels inside the box, T_U, become "possibly target" pixels.
Step 2: for each pixel n in T_B, initialize its label α_n = 0, i.e. a background pixel; for each pixel n in T_U, initialize its label α_n = 1, i.e. a "possibly target" pixel.
Step 3: steps 1 and 2 yield some pixels belonging to the target (α_n = 1), the rest belonging to the background (α_n = 0), from which the GMMs of the target and background are estimated. The pixels belonging to the target and to the background are each clustered into K classes by the k-means algorithm, i.e. the K Gaussian models of each GMM; each Gaussian model then has a set of pixel samples, from whose RGB values its mean and covariance are estimated, while the weight of each Gaussian component is determined by the ratio of the number of pixels belonging to that component to the total number of pixels.
Further, the specific steps of the iterative minimization are as follows:
Step 1: assign a Gaussian component of the GMM to each pixel: for an object pixel n, the RGB value of pixel n is substituted into each Gaussian component of the target GMM, and the component with the highest probability is the one most likely to have generated n, i.e. the k_n-th Gaussian component of pixel n.
Step 2: for the given image data Z, learn and optimize the parameters of the GMM.
Step 3: segmentation estimation: build a graph from the Gibbs energy terms analyzed through the GMM, determine the weights of the t-links and n-links, and then segment by the max-flow/min-cut algorithm.
Step 4: repeat steps 1 to 3 until convergence.
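Steps 1 and 2 of this iteration (component assignment and GMM parameter re-learning) can be sketched in NumPy as follows; the pixel samples and initial components are synthetic stand-ins, and the graph-cut of step 3 is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def assign_components(pixels, means, covs, weights):
    # Step 1: give each pixel the index k_n of the Gaussian component with
    # the highest (log-)probability; the shared Gaussian constant is dropped.
    scores = []
    for mu, cov, w in zip(means, covs, weights):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(np.log(w) - 0.5 * (mahal + np.log(np.linalg.det(cov))))
    return np.argmax(scores, axis=0)

def update_parameters(pixels, k, K):
    # Step 2: re-learn each component's weight, mean, and full covariance
    # from the pixels currently assigned to it.
    means, covs, weights = [], [], []
    for i in range(K):
        sel = pixels[k == i]
        weights.append(len(sel) / len(pixels))
        means.append(sel.mean(axis=0))
        covs.append(np.cov(sel.T) + 1e-6 * np.eye(3))  # regularized covariance
    return means, covs, weights

# Two synthetic RGB clusters standing in for one GMM's pixel samples.
pixels = np.vstack([rng.normal(50, 5, size=(100, 3)),
                    rng.normal(200, 5, size=(100, 3))])
means = [np.full(3, 60.0), np.full(3, 190.0)]
covs = [25 * np.eye(3), 25 * np.eye(3)]
weights = [0.5, 0.5]
k = assign_components(pixels, means, covs, weights)
means, covs, weights = update_parameters(pixels, k, K=2)
```

In OpenCV this whole loop, including the min-cut segmentation, is packaged as `cv2.grabCut`, which is the usual way to apply the algorithm in practice.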
Further, the fish length measurement and weight prediction specifically comprise the following steps:
Step 1: acquisition of fish sample length and weight parameters: the length and weight of a large number of fish of the same species are measured, the relationship between them is computed by linear regression, and the fish weight is estimated from the finally measured fish length; the growth state of the fish in the whole farm is thereby assessed, as well as whether the catching conditions of the underwater robot are met.
Step 2: fish length information extraction: a circle 5 cm in diameter is mounted at the end of the dedicated underwater fishing net, parallel to it, so that it is imaged in the upper-left corner of the whole picture. On the basis of the preprocessed image from category identification, the length of the fish is calculated as the ratio of the pixel count between the leftmost and rightmost points of the processed fish to the pixel count of the circle's diameter, multiplied by the circle's diameter.
Step 3: length error compensation: since the distance between the fish and the mouth of the dedicated underwater fishing net is 10-20 cm, a 5%-10% error compensation is added when calculating the fish length.
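A sketch of the pixel-ratio length calculation with the error compensation folded in; the pixel counts and the 7.5% midpoint compensation are illustrative assumptions.

```python
def fish_length_cm(fish_px, circle_px, circle_cm=5.0, compensation=0.075):
    # Pixel-ratio estimate: fish extent in pixels over the reference circle's
    # diameter in pixels, scaled by the circle's known 5 cm diameter.
    length = fish_px / circle_px * circle_cm
    # 5%-10% compensation for the fish sitting 10-20 cm behind the net mouth;
    # 7.5% is an assumed midpoint, not a value fixed by the patent.
    return length * (1.0 + compensation)

est = fish_length_cm(fish_px=480, circle_px=60)   # 40 cm raw, 43 cm compensated
```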
Step 4: weight prediction: the length information extracted from the preprocessed image is input into the linear regression prediction model, and the approximate weight of the fish is calculated.
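A sketch of the linear-regression weight prediction of steps 1 and 4; the length/weight samples are invented for illustration (real data would come from measuring many fish of the same species).

```python
import numpy as np

# Hypothetical length (cm) / weight (g) samples for one species.
lengths = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
weights = np.array([150.0, 290.0, 500.0, 800.0, 1200.0])

# Fit weight = slope * length + intercept by least-squares linear regression.
slope, intercept = np.polyfit(lengths, weights, 1)

def predict_weight(length_cm):
    # Linear regression prediction model of step 4.
    return slope * length_cm + intercept

w = predict_weight(30.0)
```

Fish weight in practice often follows the allometric relation W = a·L^b, so a common refinement is to fit the same linear regression in log-log space; the patent, however, only specifies linear regression.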
The advantages of the present invention are as follows: in the underwater fishing method based on machine vision, the underwater fishing device, combined with the steps of fish identification and classification, fish length measurement, and weight prediction, monitors fish growth in real time and accurately determines the optimal fishing period. That is, fish length is measured by image processing to achieve precise capture, and fish weight is estimated by image processing so that fish growth is monitored in real time and breeding income is maximized; automatic fishing and fish sorting can then be realized, simplifying a large amount of fishing work and improving efficiency. In addition, combining underwater robot technology with image recognition technology helps reduce the amount of human labor, improves production efficiency, and enhances the level of automation, with both theoretical and practical significance.
Brief description of the drawings
The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of the fish identification and classification in the underwater fishing method based on machine vision of the embodiment.
Fig. 2 is a detailed flow chart of the image acquisition and preprocessing in Fig. 1.
Fig. 3 is a flow chart of the fish length measurement and weight prediction in the underwater fishing method based on machine vision of the embodiment.
Fig. 4 is an intermediate image of the fish body length prediction processing in the underwater fishing method based on machine vision of the embodiment.
Specific embodiment
The following examples can make professional and technical personnel that the present invention be more fully understood, but therefore not send out this
It is bright to be limited among the embodiment described range.
Embodiment
In this embodiment, the underwater fishing device of the underwater fishing method based on machine vision includes a Cognex In-Sight 7000 industrial camera for acquiring images, an underwater lamp for illumination in dark environments, a fish-luring device for attracting fish, a dedicated underwater fishing net, and an underwater robot for capturing fish; the fish-luring device includes a tri-color LED and a variable-frequency sound generator. In addition, considering the blue-green shift of underwater light propagation, a thallium iodide lamp is chosen as the underwater lamp: the light it radiates is largely concentrated in the blue and green range, which water absorbs very little. Compared with an incandescent lamp at the same power, the efficiency of the underwater camera with this light source is six times or more.
In the underwater fishing method based on machine vision of this embodiment, the underwater fishing device, combined with the steps of fish identification and classification, fish length measurement, and weight prediction, monitors fish growth in real time and accurately determines the optimal fishing period.
In this embodiment, the fish identification and classification, as shown in Fig. 1, specifically comprises the following steps:
Step (1): image acquisition and preprocessing: as shown in Fig. 2, the underwater camera collects a color image of the fish, and the original image is filtered with an improved median filter; the filtered image is cropped to a rectangular image and then segmented with the Grab Cut algorithm; after the background is removed, grayscale, morphological, and binarization operations are applied to the segmented image to obtain the processed two-dimensional binary image of the fish body. The Grab Cut algorithm works in the RGB color space and models the target and the background each with a full-covariance Gaussian mixture model (GMM) of K Gaussian components (K = 5 is usually taken). There is then an additional vector k = {k_1, ..., k_n, ..., k_N}, where k_n is the index of the Gaussian component corresponding to the n-th pixel, k_n ∈ {1, ..., K}: each pixel comes either from some Gaussian component of the target GMM or from some Gaussian component of the background GMM. The Gibbs energy of the whole image is then: E(α, k, θ, z) = U(α, k, θ, z) + V(α, z), where U is the region term, expressing the penalty for classifying a pixel as target or background, i.e. the negative logarithm of the probability that the pixel belongs to the target or the background; the Gaussian mixture density has the form P(z) = Σ_{i=1..K} π_i·N(z; μ_i, Σ_i), with Σ_{i=1..K} π_i = 1 and 0 ≤ π_i ≤ 1. Grab Cut minimizes this energy iteratively: each iteration improves the parameters of the GMMs modeling the target and background, and thereby improves the segmentation. The specific steps are as follows:
Step 1: the user selects the target by drawing a box, yielding an initial trimap T: all pixels outside the box become background pixels T_B, and all pixels inside the box, T_U, become "possibly target" pixels.
Step 2: for each pixel n in T_B, initialize its label α_n = 0, i.e. a background pixel; for each pixel n in T_U, initialize its label α_n = 1, i.e. a "possibly target" pixel.
Step 3: steps 1 and 2 yield some pixels belonging to the target (α_n = 1), the rest belonging to the background (α_n = 0), from which the GMMs of the target and background are estimated. The pixels belonging to the target and to the background are each clustered into K classes by the k-means algorithm, i.e. the K Gaussian models of each GMM; each Gaussian model then has a set of pixel samples, from whose RGB values its mean and covariance are estimated, while the weight of each Gaussian component is determined by the ratio of the number of pixels belonging to that component to the total number of pixels.
As a more specific implementation, the iterative minimization proceeds as follows:
Step 1: assign a Gaussian component of the GMM to each pixel: for an object pixel n, the RGB value of pixel n is substituted into each Gaussian component of the target GMM, and the component with the highest probability is the one most likely to have generated n, i.e. the k_n-th Gaussian component of pixel n.
Step 2: for the given image data Z, learn and optimize the parameters of the GMM.
Step 3: segmentation estimation: build a graph from the Gibbs energy terms analyzed through the GMM, determine the weights of the t-links and n-links, and then segment by the max-flow/min-cut algorithm.
Step 4: repeat steps 1 to 3 until convergence.
Step (2): wavelet feature extraction:
1) Normalization: the preprocessed image from step (1) is normalized.
2) Polar coordinate transformation: let f(x, y) denote the two-dimensional binary image in rectangular coordinates; the regular moments are defined as M_pq = ∫∫ x^p·y^q·f(x, y) dx dy. Substituting x = r·cos(θ), y = r·sin(θ) converts the expression to polar coordinates and gives the general form of the moment features, F_pq = ∫∫ f(r, θ)·g_p(r)·e^(jqθ)·r dr dθ, where g_p(r) is the radial component of the transform kernel and e^(jqθ) is its angular component.
3) Rotation-invariant wavelet moment feature extraction: let s_q(r) = ∫ f(r, θ)·e^(jqθ) dθ; the above can then be written as F_pq = ∫ s_q(r)·g_p(r)·r dr, and it can be shown that the feature modulus ||F_pq|| remains unchanged after the image is rotated. An appropriate mother wavelet ψ(r) is selected, and a wavelet function family ψ_{m,n}(r) is generated by dilation and translation, with m and n the scale and translation variables; the wavelet moment invariants are then ||F_{m,n,q}|| = ||∫ s_q(r)·ψ_{m,n}(r)·r dr||.
Step (3): BP neural network fish image classification:
1) Network initialization: the moment features of the target image obtained in the above steps serve as the input of the BP network, from which the target is identified. Suppose the input layer has n nodes, the hidden layer has l nodes, and the output layer has m nodes; the weights from the input layer to the hidden layer are ω_ij, the weights from the hidden layer to the output layer are ω_jk, the biases of the hidden layer are a_j, and the biases of the output layer are b_k; the learning rate is η and the activation function is g(x), taken as the Sigmoid function g(x) = 1/(1 + e^(−x)).
2) Hidden layer and output layer outputs: using a three-layer BP neural network, the hidden layer output is H_j = g(Σ_{i=1..n} ω_ij·x_i + a_j), and the output layer output is O_k = Σ_{j=1..l} H_j·ω_jk + b_k.
3) Error calculation: the error is taken as E = (1/2)·Σ_{k=1..m} (Y_k − O_k)², where Y_k is the desired output; writing Y_k − O_k = e_k, the error can be expressed as E = (1/2)·Σ_{k=1..m} e_k², with i = 1…n, j = 1…l, k = 1…m.
4) Weight and bias updates:
weight updates: ω_ij ← ω_ij + η·H_j·(1 − H_j)·x_i·Σ_{k=1..m} ω_jk·e_k and ω_jk ← ω_jk + η·H_j·e_k;
bias updates: a_j ← a_j + η·H_j·(1 − H_j)·Σ_{k=1..m} ω_jk·e_k and b_k ← b_k + η·e_k.
5) The activations produced by the output units are compared with the desired values to judge whether the algorithm has converged; if it has converged, the image recognition result is output, otherwise the algorithm returns to 2).
In this embodiment, the fish length measurement and weight prediction, as shown in Fig. 3, specifically comprise the following steps:
Step 1: acquisition of fish sample length and weight parameters: the length and weight of a large number of fish of the same species are measured, the relationship between them is computed by linear regression, and the fish weight is estimated from the finally measured fish length; the growth state of the fish in the whole farm is thereby assessed, as well as whether the catching conditions of the underwater robot are met.
Step 2: fish length information extraction: as shown in Fig. 4, a circle 5 cm in diameter is mounted at the end of the dedicated underwater fishing net, parallel to it, so that it is imaged in the upper-left corner of the whole picture. On the basis of the preprocessed image from category identification, the length of the fish is calculated as the ratio of the pixel count between the leftmost and rightmost points of the processed fish to the pixel count of the circle's diameter, multiplied by the circle's diameter.
Step 3: length error compensation: since the distance between the fish and the mouth of the dedicated underwater fishing net is 10-20 cm, a 5%-10% error compensation is added when calculating the fish length.
Step 4: weight prediction: the length information extracted from the preprocessed image is input into the linear regression prediction model, and the approximate weight of the fish is calculated.
The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and the description only illustrate the principle of the present invention, and various changes and improvements may be made to the invention without departing from its spirit and scope, all of which fall within the claimed protection scope. The claimed scope of the invention is defined by the appended claims and their equivalents.
Claims (6)
1. An underwater fishing method based on machine vision, characterized in that: in the underwater fishing method, an underwater fishing device, combined with the steps of fish identification and classification, fish length measurement, and weight prediction, monitors fish growth in real time and accurately determines the optimal fishing period;
wherein the fish identification and classification comprises image acquisition and preprocessing, wavelet feature extraction, and BP neural network fish image classification; the fish length measurement and weight prediction comprise acquisition of fish sample length and weight parameters, fish length information extraction, length error compensation, and weight prediction;
the underwater fishing device comprises an underwater camera for acquiring images, an underwater lamp for illumination in dark environments, a fish-luring device for attracting fish, a dedicated underwater fishing net, and an underwater robot for capturing fish; wherein the fish-luring device comprises a tri-color LED and a variable-frequency sound generator.
2. The underwater fishing method based on machine vision according to claim 1, characterized in that the fish identification and classification specifically comprises the following steps:
Step 1: image acquisition and preprocessing: the underwater camera collects a color image of the fish; the original image is filtered with an improved median filter; the filtered image is then segmented to remove the background; finally, grayscale, morphological, and binarization operations are applied to the segmented image to obtain the processed two-dimensional binary image of the fish body;
Step 2: wavelet feature extraction:
1) normalization: the preprocessed image from step 1 is normalized;
2) polar coordinate transformation: let f(x, y) denote the two-dimensional binary image in rectangular coordinates; the regular moments are defined as M_pq = ∫∫ x^p·y^q·f(x, y) dx dy; substituting x = r·cos(θ), y = r·sin(θ) converts the expression to polar coordinates and gives the general form of the moment features, F_pq = ∫∫ f(r, θ)·g_p(r)·e^(jqθ)·r dr dθ, where g_p(r) is the radial component of the transform kernel and e^(jqθ) is its angular component;
3) rotation-invariant wavelet moment feature extraction: let s_q(r) = ∫ f(r, θ)·e^(jqθ) dθ; the above can then be written as F_pq = ∫ s_q(r)·g_p(r)·r dr, and it can be shown that the feature modulus ||F_pq|| remains unchanged after the image is rotated; an appropriate mother wavelet ψ(r) is selected, and a wavelet function family ψ_{m,n}(r) is generated by dilation and translation, with m and n the scale and translation variables; the wavelet moment invariants are then ||F_{m,n,q}|| = ||∫ s_q(r)·ψ_{m,n}(r)·r dr||;
Step 3: BP neural network fish image classification:
1) network initialization: the moment features of the target image obtained in the above steps serve as the input of the BP network, from which the target is identified; suppose the input layer has n nodes, the hidden layer has l nodes, and the output layer has m nodes; the weights from the input layer to the hidden layer are ω_ij, the weights from the hidden layer to the output layer are ω_jk, the biases of the hidden layer are a_j, and the biases of the output layer are b_k; the learning rate is η and the activation function is g(x), taken as the Sigmoid function g(x) = 1/(1 + e^(−x));
2) hidden layer and output layer outputs: using a three-layer BP neural network, the hidden layer output is H_j = g(Σ_{i=1..n} ω_ij·x_i + a_j), and the output layer output is O_k = Σ_{j=1..l} H_j·ω_jk + b_k;
3) error calculation: the error is taken as E = (1/2)·Σ_{k=1..m} (Y_k − O_k)², where Y_k is the desired output; writing Y_k − O_k = e_k, the error can be expressed as E = (1/2)·Σ_{k=1..m} e_k², with i = 1…n, j = 1…l, k = 1…m;
4) weight and bias updates:
weight updates: ω_ij ← ω_ij + η·H_j·(1 − H_j)·x_i·Σ_{k=1..m} ω_jk·e_k and ω_jk ← ω_jk + η·H_j·e_k;
bias updates: a_j ← a_j + η·H_j·(1 − H_j)·Σ_{k=1..m} ω_jk·e_k and b_k ← b_k + η·e_k;
5) the activations produced by the output units are compared with the desired values to judge whether the algorithm has converged; if it has converged, the image recognition result is output, otherwise the algorithm returns to 2).
3. The underwater fishing method based on machine vision according to claim 2, characterized in that: the image segmentation in step 1 first crops the filtered image to a rectangular image and then segments it using the Grab Cut algorithm.
4. The underwater fishing method based on machine vision according to claim 3, characterized in that: the Grab Cut algorithm works in the RGB color space and models the target and the background each with a full-covariance Gaussian mixture model (GMM) of K Gaussian components (K = 5 is usually taken); there is then an additional vector k = {k_1, ..., k_n, ..., k_N}, where k_n is the index of the Gaussian component corresponding to the n-th pixel, k_n ∈ {1, ..., K}: each pixel comes either from some Gaussian component of the target GMM or from some Gaussian component of the background GMM; the Gibbs energy of the whole image is then:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z);
where U is the region term, expressing the penalty for classifying a pixel as target or background, i.e. the negative logarithm of the probability that the pixel belongs to the target or the background; the Gaussian mixture density has the form P(z) = Σ_{i=1..K} π_i·N(z; μ_i, Σ_i), with Σ_{i=1..K} π_i = 1 and 0 ≤ π_i ≤ 1;
Grab Cut minimizes this energy iteratively: each iteration improves the parameters of the GMMs modeling the target and background, and thereby improves the segmentation; the specific steps are as follows:
Step 1: the user selects the target by drawing a box, yielding an initial trimap T: all pixels outside the box become background pixels T_B, and all pixels inside the box, T_U, become "possibly target" pixels;
Step 2: for each pixel n in T_B, initialize its label α_n = 0, i.e. a background pixel; for each pixel n in T_U, initialize its label α_n = 1, i.e. a "possibly target" pixel;
Step 3: steps 1 and 2 yield some pixels belonging to the target (α_n = 1), the rest belonging to the background (α_n = 0), from which the GMMs of the target and background are estimated; the pixels belonging to the target and to the background are each clustered into K classes by the k-means algorithm, i.e. the K Gaussian models of each GMM; each Gaussian model then has a set of pixel samples, from whose RGB values its mean and covariance are estimated, while the weight of each Gaussian component is determined by the ratio of the number of pixels belonging to that component to the total number of pixels.
5. The underwater fishing method based on machine vision according to claim 4, characterized in that the specific steps of the iterative minimization are as follows:
Step 1: assign a Gaussian component of the GMM to each pixel: for an object pixel n, the RGB value of pixel n is substituted into each Gaussian component of the target GMM, and the component with the highest probability is the one most likely to have generated n, i.e. the k_n-th Gaussian component of pixel n;
Step 2: for the given image data Z, learn and optimize the parameters of the GMM;
Step 3: segmentation estimation: build a graph from the Gibbs energy terms analyzed through the GMM, determine the weights of the t-links and n-links, and then segment by the max-flow/min-cut algorithm;
Step 4: repeat steps 1 to 3 until convergence.
6. The underwater fishing method based on machine vision according to claim 1, characterised in that the fish length measurement and weight prediction specifically comprise the following steps:
Step 1: Obtaining fish sample length and weight parameters: measure the length and weight of a large number of fish of the same species, and fit the relationship between the two by linear regression; the weight of a fish can then be estimated from its measured length, allowing the growing state of the fish in the entire fishery to be assessed and checked against the capturing condition of the underwater robot;
Step 2: Fish length extraction: a circle 5 cm in diameter is fixed at the end of the dedicated underwater fishing net, parallel to it, and positioned so that it appears in the upper-left corner of the whole picture; on the basis of the preprocessed image used for category identification, the fish length is computed as the ratio of the pixel count between the leftmost and rightmost points of the processed fish to the pixel count of the circle's diameter, multiplied by the circle diameter;
Step 3: Error compensation: since the fish is 10-20 cm away from the mouth of the dedicated underwater fishing net, a 5%-10% error compensation is added when computing the fish length;
Step 4: Weight prediction: the length extracted from the preprocessed image is input into the linear-regression prediction model to compute the approximate weight of the fish.
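Steps 1 to 4 above can be sketched as follows, assuming straightforward pixel counts and a plain linear length-weight fit (the claim does not specify the exact regression form; the function names and the 7.5% compensation value are illustrative):

```python
import numpy as np

REF_DIAMETER_CM = 5.0  # diameter of the reference circle fixed to the net (step 2)

def fish_length_cm(fish_span_px, circle_diameter_px, compensation=0.075):
    """Length = (fish pixel span / circle diameter pixel span) * 5 cm,
    plus the 5%-10% stand-off compensation of step 3 (7.5% chosen here)."""
    raw = fish_span_px / circle_diameter_px * REF_DIAMETER_CM
    return raw * (1.0 + compensation)

def fit_length_weight(lengths_cm, weights_g):
    """Step 1: least-squares line weight = a * length + b over the samples."""
    a, b = np.polyfit(lengths_cm, weights_g, 1)
    return a, b

def predict_weight_g(length_cm, a, b):
    """Step 4: feed an extracted length into the fitted model."""
    return a * length_cm + b
```

With `fish_span_px = 200` and `circle_diameter_px = 100`, the uncompensated length is 10 cm; the compensation then scales it by the chosen factor before the weight lookup.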
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811608004.2A CN109784378A (en) | 2018-12-27 | 2018-12-27 | A kind of underwater fishing method based on machine vision |
PCT/CN2019/108334 WO2020134255A1 (en) | 2018-12-27 | 2019-09-27 | Method for monitoring growth situations of fishes based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811608004.2A CN109784378A (en) | 2018-12-27 | 2018-12-27 | A kind of underwater fishing method based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109784378A true CN109784378A (en) | 2019-05-21 |
Family
ID=66498475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811608004.2A Withdrawn CN109784378A (en) | 2018-12-27 | 2018-12-27 | A kind of underwater fishing method based on machine vision |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109784378A (en) |
WO (1) | WO2020134255A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110347134A (en) * | 2019-07-29 | 2019-10-18 | 南京图玩智能科技有限公司 | A kind of AI intelligence aquaculture specimen discerning method and cultivating system |
CN110378241A (en) * | 2019-06-25 | 2019-10-25 | 北京百度网讯科技有限公司 | Crop growthing state monitoring method, device, computer equipment and storage medium |
WO2020134255A1 (en) * | 2018-12-27 | 2020-07-02 | 南京芊玥机器人科技有限公司 | Method for monitoring growth situations of fishes based on machine vision |
CN111406693A (en) * | 2020-04-23 | 2020-07-14 | 上海海洋大学 | Marine ranch fishery resource maintenance effect evaluation method based on bionic sea eels |
CN112287913A (en) * | 2020-12-25 | 2021-01-29 | 浙江渔生泰科技有限公司 | Intelligent supervisory system for fish video identification |
CN113205283A (en) * | 2021-06-02 | 2021-08-03 | 中国水产科学研究院南海水产研究所 | Open sea fishery fishing scheduling method and system |
CN114049577A (en) * | 2021-11-17 | 2022-02-15 | 中国水利水电科学研究院 | Fish specification measuring method and system |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NO347281B1 (en) * | 2020-10-05 | 2023-08-21 | Fishency Innovation As | Generating three dimensional skeleton representations of aquatic animals using machine learning |
CN112419392A (en) * | 2020-11-30 | 2021-02-26 | 广州博进信息技术有限公司 | Method, apparatus and medium for calculating actual size of moving object based on machine vision |
CN113239324B (en) * | 2021-04-13 | 2023-11-10 | 江苏农林职业技术学院 | Snakehead sexual maturity judging method and system |
CN113591671B (en) * | 2021-07-28 | 2023-10-24 | 常州大学 | Fish growth identification detection method based on Mask-Rcnn |
CN113628182B (en) * | 2021-08-03 | 2024-04-26 | 中国农业大学 | Automatic fish weight estimation method and device, electronic equipment and storage medium |
CN114742806A (en) * | 2022-04-21 | 2022-07-12 | 海南大学 | Fish body morphological feature measurement method based on key point coordinate regression |
CN117011795B (en) * | 2023-08-08 | 2024-02-13 | 南京农业大学 | River crab growth state nondestructive monitoring and evaluating platform and method based on Gaussian-like fuzzy support degree |
CN116843085B (en) * | 2023-08-29 | 2023-12-01 | 深圳市明心数智科技有限公司 | Freshwater fish growth monitoring method, device, equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103444613B (en) * | 2013-08-29 | 2014-11-19 | 北京农业信息技术研究中心 | Feeding control system and method for fish culture |
CN106561532B (en) * | 2016-11-08 | 2019-07-26 | 深圳技师学院 | A kind of movable method and apparatus of monitoring fish |
PH12016000469A1 (en) * | 2016-12-15 | 2018-06-25 | Univ Of The Philippines Diliman | Estimating fish size, population density, species distribution and biomass |
CN107909137A (en) * | 2017-11-27 | 2018-04-13 | 南瑞集团有限公司 | A kind of fish pass crosses fish counting and recognition methods |
CN109784378A (en) * | 2018-12-27 | 2019-05-21 | 南京芊玥机器人科技有限公司 | A kind of underwater fishing method based on machine vision |
- 2018-12-27: CN application CN201811608004.2A patent/CN109784378A/en (not active, Withdrawn)
- 2019-09-27: WO application PCT/CN2019/108334 patent/WO2020134255A1/en (active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2020134255A1 (en) | 2020-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109784378A (en) | A kind of underwater fishing method based on machine vision | |
CN111178197B (en) | Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method | |
CN107292298A (en) | Ox face recognition method based on convolutional neural networks and sorter model | |
CN109684906B (en) | Method for detecting red fat bark beetles based on deep learning | |
CN112598713A (en) | Offshore submarine fish detection and tracking statistical method based on deep learning | |
CN110781921A (en) | Depth residual error network and transfer learning-based muscarinic image identification method and device | |
Tamilselvi et al. | Unsupervised machine learning for clustering the infected leaves based on the leaf-colours | |
Lainez et al. | Automated fingerlings counting using convolutional neural network | |
CN111127423B (en) | Rice pest and disease identification method based on CNN-BP neural network algorithm | |
CN109242826B (en) | Mobile equipment end stick-shaped object root counting method and system based on target detection | |
CN110853070A (en) | Underwater sea cucumber image segmentation method based on significance and Grabcut | |
CN106872467A (en) | Chicken embryo fertility detection method and apparatus | |
CN109815973A (en) | A kind of deep learning method suitable for the identification of fish fine granularity | |
Zhang et al. | Robust image segmentation method for cotton leaf under natural conditions based on immune algorithm and PCNN algorithm | |
CN112270681A (en) | Method and system for detecting and counting yellow plate pests deeply | |
Loresco et al. | Segmentation of lettuce plants using super pixels and thresholding methods in smart farm hydroponics setup | |
Yang et al. | Automatic greenhouse pest recognition based on multiple color space features. | |
CN110532935A (en) | A kind of high-throughput reciprocity monitoring system of field crop phenotypic information and monitoring method | |
CN110428374A (en) | A kind of small size pest automatic testing method and system | |
Miao et al. | Research on Soybean Disease Identification Method Based on Deep Learning | |
CN107016401B (en) | Digital camera image-based rice canopy recognition method | |
Siripattanadilok et al. | Recognition of partially occluded soft-shell mud crabs using Faster R-CNN and Grad-CAM | |
Han et al. | A deep learning model for automatic plastic waste monitoring using unmanned aerial vehicle (UAV) data | |
Sosa-Trejo et al. | Vision-based techniques for automatic marine plankton classification | |
Aquino et al. | Detection of Rice Planthopper Using Image Processing Techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 2019-05-21