CN113724255A - Counting method for abalones in seedling raising period - Google Patents
- Publication number
- CN113724255A (application number CN202111237469.3A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- young
- abalones
- information
- abalone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Abstract
The invention relates to a counting method for abalones in the nursery (seedling-raising) stage, and belongs to the technical field of image processing. The area of young abalones to be counted is photographed to obtain an initial input image, and each pixel of the input image is read to obtain the initial input matrix. The initial matrix is converted into several successive two-dimensional feature matrices by convolutional layers. These feature matrices are then processed by matrix fusion to form a terminal fusion matrix, which refines the color-arrangement information and corresponding position information of the young abalones in the picture. However, capturing long-distance dependencies between information with a convolutional neural network requires stacking many convolutional modules, so acquiring those dependencies is inefficient; the method therefore uses a feature dependency relationship to compute the relationships between the position information of the young abalones directly from the long-distance dependencies between information.
Description
Technical Field
The invention relates to a counting method for abalones in a seedling raising period, and belongs to the technical field of image processing.
Background
Abalone has high nutritive value, and demand in domestic and foreign markets keeps growing, but its natural yield is low and cannot meet market demand. Industrial abalone culture in China has developed rapidly, and abalone fry culture is an important part of it. When the shell of an abalone fry grows to 1.8 mm and the first hole (respiratory pore) forms, the animal is counted as a young abalone (seedling). The quantity of young abalones must be strictly controlled, avoiding both slow growth caused by insufficient bait at high density and reduced production benefit at low density. When the shell grows to 3-5 mm, the young abalones are stripped and cultivated for another 3-4 months. When they reach 10-20 mm they become commodity abalone fries of varying sizes, which must be screened and graded before being sold or raised to adult abalone. At present in China, both stripping and counting of young abalones are done manually; the workload is large and errors occur easily. Applying a convolutional neural network to the detection and counting of abalone larvae, and improving its accuracy, is one of the key tasks of abalone fry culture; in particular, counting accuracy must be improved when abalones in the nursery stage are densely distributed and occlude one another.
Disclosure of Invention
Aiming at the problems in the prior art, a method is provided that extracts features from the input image with a feature dependency relationship matrix, establishing long-distance dependencies between information, highlighting the young-abalone regions in the input image, and extracting the shape and edge features of the young abalones within those regions. This addresses the low detection accuracy that occurs when young abalones are densely distributed and occlude one another. Compared with the traditional approach of stacking many convolutional layers in a convolutional neural network, the feature dependency relationship matrix computes the relationships between position information directly, reduces convolutional-layer stacking, improves computational efficiency, and is easily embedded in a network structure.
The invention solves the technical problems through the following technical scheme:
a counting method for abalones in a nursery stage is characterized in that a single young abalone tile to be counted is photographed to obtain an initial input image, each pixel in the input image is read through an asarray function and converted into a matrix, and then the initial matrix of image input is obtained; converting the initial matrix into a plurality of continuous two-dimensional feature matrices through a convolution layer convolution process, wherein the convolution process is to extract features by utilizing a convolution kernel, the convolution kernel can be continuously close to a real feature set (feature vector) of an input image in a back propagation process, and the feature extraction is carried out on the matrices through the feature vector to form a two-dimensional feature matrix; recording color arrangement information of the young abalones in the picture by the two-dimensional characteristic matrix, and preliminarily determining position information of the young abalones in the image; then, the two-dimensional feature matrix is processed by a matrix fusion means, two-dimensional feature matrices at the rear end (one two-dimensional feature matrix contains more color texture feature information and one two-dimensional feature matrix contains more position feature information) are converted by the conversion matrix in rows and columns to obtain two-dimensional feature conversion matrices with the same row number, column number, color texture feature and position feature information, so that the two-dimensional feature conversion matrices are added to fuse one two-dimensional feature matrix containing more color texture feature information and the other two-dimensional feature matrix containing more position feature information to form an initial fusion matrix, the initial fusion matrix contains more color texture information and more position information, and the initial fusion matrix is fused with the previous two-dimensional feature matrix 
in a rolling mode, forming a promotion level fusion matrix, obtaining a terminal fusion matrix, and refining color arrangement information and corresponding position information of the young abalones in the picture by the formed terminal fusion matrix; however, the efficiency of acquiring the dependency relationship is low because long-distance dependency between information is captured through a convolutional neural network and a plurality of convolutional modules are stacked; further processing the terminal fusion matrix to form a characteristic dependency relationship matrix so as to establish a long-distance dependency relationship between information and strengthen the dependency relationship between the information of the detected young abalone and the tile or the information of the rest young abalones, when the detected young abalone is shielded by the rest young abalones, detecting the shielding edges of the detected young abalone and the rest young abalones through the established dependency relationship between the information of the detected young abalone and the information of the rest young abalones, and being capable of more accurately identifying the boundary of the detected young abalones and marking the boundary frame of the young abalones so as to improve the accuracy of the position information of the detected young abalones; and finally, counting the obtained boundary frames of the young abalones, counting the number of the young abalones, selecting a plurality of tiles to average after obtaining the number of the young abalones on a single tile, and multiplying the number by the number of the tiles in the seedling raising pool to determine the total number of the young abalones in the seedling raising pool.
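The final estimation step, averaging per-tile counts over a few sampled tiles and scaling by the number of tiles in the pool, can be sketched as follows (function and variable names are illustrative, not from the patent):

```python
def estimate_pool_total(per_tile_counts, tiles_in_pool):
    """Estimate the total number of young abalones in a nursery pool.

    per_tile_counts: bounding-box counts from several randomly sampled tiles
    tiles_in_pool:   total number of tiles laid out in the pool
    """
    if not per_tile_counts:
        raise ValueError("need at least one sampled tile")
    average = sum(per_tile_counts) / len(per_tile_counts)
    return round(average * tiles_in_pool)

# e.g. three sampled tiles with 120, 130 and 125 detected abalones,
# in a pool holding 3200 tiles
print(estimate_pool_total([120, 130, 125], 3200))  # 400000
```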
The advantages of the above technical scheme are:
First, the invention develops a hierarchical fusion method for two-dimensional feature matrices: two feature matrices are fused into a new feature matrix, the fusion matrix, which contains more feature information of the young abalones such as color and texture. This helps extract the features of the young abalones more accurately and quickly, and improves the generalization ability of the young-abalone detection and counting model. A model trained with this hierarchical fusion method achieves higher detection and counting accuracy on the young-abalone data set.
Second, the invention provides a feature-extraction strategy based on establishing a feature dependency relationship matrix: long-distance dependencies between information are established in the feature map to highlight the young-abalone regions, and shape and edge features are extracted within those regions. This addresses the low detection accuracy when young abalones are densely distributed and occlude one another. Compared with stacking many convolutional layers in a traditional convolutional neural network, the feature dependency relationship matrix computes the relationships between position information directly, reduces convolutional-layer stacking, improves computational efficiency, and is easily embedded in a network structure.
On the basis of the above technical scheme, the invention further refines and perfects it as follows:
Further, the feature relationship is computed as follows: the terminal fusion matrix is processed by three convolutional layers to obtain three corresponding feature dependency relationship matrices. After deep processing of the three matrices, the dependency relationships are applied to every pixel in the feature map, strengthening the dependencies between the detected young abalones and the rest of the information and highlighting the regions where the young abalones are located, so that the network can obtain accurate prior frames for the young abalones. Each individual is detected and identified to give, for each prior frame, a young-abalone frame with a corresponding score, where the score represents the intersection-over-union between the prior frame detected by the network and the actual position of the young abalone. The prior frames are sorted by this intersection-over-union from large to small, the frames that best overlap the actual positions are screened out, and the network's trained weights fine-tune these prior frames into bounding boxes of the young abalones; the young-abalone image is then output with its bounding boxes.
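The screening step above ranks prior frames by their intersection-over-union (the "cross-comparison") with the actual abalone position and keeps the best ones. A minimal sketch of that ranking, with hypothetical box coordinates in (x, y, w, h) center format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def screen_priors(priors, ground_truth, keep=2):
    """Sort prior frames by IoU with the actual position, large to small."""
    return sorted(priors, key=lambda p: iou(p, ground_truth), reverse=True)[:keep]

truth = (50, 50, 20, 20)
priors = [(48, 51, 20, 20), (80, 80, 20, 20), (50, 50, 22, 18)]
best = screen_priors(priors, truth, keep=1)
print(best)  # the prior frame overlapping the actual position most
```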
Further, the initial matrix is X, of size (c, n); the input image size is denoted n, n = h × w, where the image height is h, the width is w, and the number of image channels is c.
Further, there are four two-dimensional feature matrices: A1, A2, A3 and A4; C1, C2, ..., C8 are convolution kernels;
the n×n-dimensional C2, n×(n/4)-dimensional C4, (n/4)×(n/16)-dimensional C6 and (n/16)×(n/64)-dimensional C8 are used to extract the position information of the young abalones; the numbers of columns of the four matrices C2, C4, C6 and C8 determine the numbers of columns of A1, A2, A3 and A4 respectively, giving n, n/4, n/16 and n/64 pieces of young-abalone position information;
all young abalones in the image are detected on the young-abalone feature map according to the color-texture information contained in the two-dimensional matrices A1, A2, A3 and A4, the regions of the young abalones are highlighted by displaying prior frames, and their approximate positions are determined; the x, y, w and h parameters of a prior frame correspond respectively to the abscissa of its center, the ordinate of its center, its width and its height, thereby determining the position of the young abalone.
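Since a prior frame is parameterized by its center (x, y) plus width w and height h, drawing or comparing it usually requires corner coordinates; a small illustrative conversion (the function name is not from the patent):

```python
def center_to_corners(x, y, w, h):
    """Convert a prior frame (center x, center y, width, height)
    to (left, top, right, bottom) corner coordinates."""
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

print(center_to_corners(100, 60, 40, 20))  # (80.0, 50.0, 120.0, 70.0)
```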
Further, the initial fusion matrix is formed as follows:
The two-dimensional matrices A3 and A4 are fused, where A4 contains relatively more color-texture feature information and A3 contains relatively more position feature information. The A3 and A4 matrices are processed with conversion matrices, which are user-defined. A4 is processed with M4(n/64, n/16), which changes the two-dimensional matrix A4 from (c, n/64) to (c, n/16); the conversion matrix M4 converts the numbers of rows and columns of A4, turning it into the two-dimensional matrix U4. A3 is processed with L3(c, 2c), which changes the two-dimensional matrix A3 from (2c, n/16) to (c, n/16); the conversion matrix L3 converts the numbers of rows and columns of A3, turning it into B3. The matrices B3 and U4 obtained from A3 and A4 have the same numbers of rows and columns and are added to obtain the combined feature matrix D3. D3 has n pieces of color-texture information and n/16 pieces of position information, whereas A3 has only n/4 pieces of color-texture information and n/16 pieces of position information, and A4 has only n pieces of color-texture information and n/64 pieces of position information.
M4 obtains more position information after processing A4: before processing, A4 contains n pieces of color-texture information and n/64 pieces of position information; after processing, n pieces of color-texture information and n/16 pieces of position information.
L3 obtains more color-texture information after processing A3: before processing, A3 contains n/4 pieces of color-texture information and n/16 pieces of position information; after processing, n pieces of color-texture information and n/16 pieces of position information.
Further, the promotion-level fusion matrix is formed as follows: the two-dimensional matrices A2 and D3 are fused. D3 is processed with M3(n/16, n/4), which changes the two-dimensional matrix D3 from (c, n/16) to (c, n/4); A2 is processed with L2(c, 4c), which changes the two-dimensional matrix A2 from (4c, n/4) to (c, n/4). The matrices B2 and U3 obtained from A2 and D3 have the same numbers of rows and columns and are added to obtain the combined feature matrix D2. D2 has n pieces of color-texture information and n/4 pieces of position information, whereas A2 has only n/16 pieces of color-texture information and n/4 pieces of position information, and D3 has only n pieces of color-texture information and n/16 pieces of position information.
Further, the terminal fusion matrix fuses the two-dimensional matrices A1 and D2. D2 is processed with M2(n/4, n), which changes the two-dimensional matrix D2 from (c, n/4) to (c, n); A1 is processed with L1(c, 2c), which changes the two-dimensional matrix A1 from (2c, n) to (c, n). The matrices B1 and U2 obtained from A1 and D2 have the same numbers of rows and columns and are added to obtain the combined feature matrix D1. D1 has n pieces of color-texture information and n pieces of position information, while A1 has only n/64 pieces of color-texture information and n pieces of position information, and D2 has only n pieces of color-texture information and n/4 pieces of position information. Thus taking D1, which contains n pieces of color-texture information, as the output instead of A1 is more helpful for extracting the feature information.
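Assuming one consistent reading of the shapes (A1: (2c, n), A2: (4c, n/4), A3: (2c, n/16), A4: (c, n/64), with L-matrices left-multiplying to align rows and M-matrices right-multiplying to align columns), the D3 → D2 → D1 fusion chain can be sketched in NumPy; the conversion matrices here are random stand-ins for the user-defined ones, so only the shape bookkeeping is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
c, n = 4, 64  # illustrative channel count and pixel count (n = h*w)

# feature matrices with the assumed shapes
A1 = rng.normal(size=(2 * c, n))
A2 = rng.normal(size=(4 * c, n // 4))
A3 = rng.normal(size=(2 * c, n // 16))
A4 = rng.normal(size=(c, n // 64))

# row-conversion (L) and column-conversion (M) matrices
L1 = rng.normal(size=(c, 2 * c))
L2 = rng.normal(size=(c, 4 * c))
L3 = rng.normal(size=(c, 2 * c))
M4 = rng.normal(size=(n // 64, n // 16))
M3 = rng.normal(size=(n // 16, n // 4))
M2 = rng.normal(size=(n // 4, n))

D3 = L3 @ A3 + A4 @ M4  # initial fusion matrix, shape (c, n/16)
D2 = L2 @ A2 + D3 @ M3  # promotion-level fusion matrix, shape (c, n/4)
D1 = L1 @ A1 + D2 @ M2  # terminal fusion matrix, shape (c, n)

print(D3.shape, D2.shape, D1.shape)  # (4, 4) (4, 16) (4, 64)
```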
further, the feature dependency matrix:
First, the terminal fusion matrix D1 obtained by hierarchical fusion of the two-dimensional feature matrices is used to establish the feature dependency relationship matrix. The matrix D1 is passed through three 1×1 convolutional layers, Conv_A, Conv_B and Conv_D, to obtain the convolved feature map matrices Conv_A1, Conv_B1 and Conv_D1.
Second, a matrix multiplication of Conv_A1 and Conv_B1 obtained after convolution gives the similarity between elements as the feature dependency relationship matrix H. H is normalized to obtain Z, which establishes a dependency between any two elements in the feature map; the formulas for H and Z are:

H = Conv_A1ᵀ · Conv_B1,  Z_ij = exp(H_ij) / Σ_j exp(H_ij)

where Z_ij represents the normalized dependency over all elements of the feature map, the feature dependency relationship matrix H represents the similarity between elements of Conv_A1 and Conv_B1, and i, j denote positions in the matrix H;
In each channel, Z is transposed and used as a weight matrix, a weighted sum is taken over the value at each pixel position in Conv_D1, and finally X is added to obtain the final output feature map:

Output = W · (Conv_D1 · Zᵀ) + X

where X is the input feature-map matrix, Conv_D1 is the result of the feature map after the convolutional layer, Z is the weight matrix obtained by multiplying and normalizing Conv_A1 and Conv_B1, and W is a weighting parameter initialized to 0 that continuously learns new weights during training.
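This dependency step is structurally a non-local (self-attention) block: the three 1×1 convolutions act on a (c, n) feature matrix as plain matrix projections, their similarity matrix is normalized, and the result reweights the value projection before a learnable residual. A NumPy sketch under that reading (the softmax normalization and the 1×1-conv-as-matrix simplification are assumptions, not claims about the patent's exact implementation):

```python
import numpy as np

def softmax(h, axis=-1):
    e = np.exp(h - h.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dependency_block(X, Wa, Wb, Wd, W=0.0):
    """Feature-dependency block over a (c, n) feature matrix X.

    Wa, Wb, Wd stand in for the 1x1 convolutions Conv_A, Conv_B, Conv_D.
    W is the learnable residual weight, initialized to 0 as in the text.
    """
    A, B, D = Wa @ X, Wb @ X, Wd @ X  # Conv_A1, Conv_B1, Conv_D1
    H = A.T @ B                       # feature dependency matrix, (n, n)
    Z = softmax(H, axis=1)            # normalized dependencies
    return W * (D @ Z.T) + X          # weighted sum plus residual

rng = np.random.default_rng(1)
c, n = 4, 16
X = rng.normal(size=(c, n))
Wa, Wb, Wd = (rng.normal(size=(c, c)) for _ in range(3))
out = dependency_block(X, Wa, Wb, Wd, W=0.0)
print(np.allclose(out, X))  # True: with W = 0 the block starts as identity
```

Initializing W to 0 lets the network begin as a plain feed-through and gradually learn how much of the dependency signal to mix in.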
The advantage of the application: aiming at the problems that abalone individuals in the nursery stage are small and similar in shape, color and texture, the invention develops a hierarchical fusion method for two-dimensional feature matrices, in which two feature matrices are fused into a new feature matrix. The two-dimensional feature matrix contains a number of prior frames representing the rough positions and sizes of the young abalones detected after the input image passes through the convolution operations; the new feature matrix contains a number of prediction frames, obtained by screening the prior frames and fine-tuning their positions and sizes through the hierarchical fusion method. The fused feature matrix contains more color and texture feature information of the young abalones and is more conducive to detecting them, so the prediction frames in the new feature matrix predict the sizes and positions of the young abalones more accurately than the prior frames of the pre-fusion feature matrices. Counting the prediction frames in each image gives the number of young abalones in the image. Applying the method to detecting and counting abalones in the nursery stage yields results of higher accuracy.
Drawings
Fig. 1 is a logic flow chart of a counting method for abalone in a nursery stage according to the present application;
fig. 2 is a processing flow chart of a counting method for abalone in a nursery stage according to the present application;
fig. 3 is a prior frame marking effect diagram in the actual processing of the counting method for the abalones in the seedling raising period.
Detailed Description
The following examples are only for illustrating the technical solutions described in the claims with reference to the drawings, and are not intended to limit the scope of the claims.
A counting method for abalones in the nursery stage: a single abalone tile to be counted is photographed to obtain an initial input image; each pixel of the input image is read with the asarray function and converted into a matrix, giving the initial input matrix. The initial matrix is converted into several successive two-dimensional feature matrices by convolutional layers; the convolution process extracts features with convolution kernels, which, during back-propagation, continuously approach the real feature set (feature vectors) of the input image, and feature extraction with these feature vectors forms the two-dimensional feature matrices. The two-dimensional feature matrices record the color-arrangement information of the young abalones in the picture and preliminarily determine their position information in the image. The feature matrices are then processed by matrix fusion: the two rear-end feature matrices (one containing more color-texture feature information, the other containing more position feature information) are converted in rows and columns by conversion matrices to obtain feature conversion matrices with the same numbers of rows and columns, so that they can be added, fusing the matrix containing more color-texture information with the matrix containing more position information into an initial fusion matrix that carries both. The initial fusion matrix is rolled and fused forward, forming promotion-level fusion matrices and finally a terminal fusion matrix, which refines the color-arrangement information and corresponding position information of the young abalones in the picture. However, because capturing long-distance dependencies between information with a convolutional neural network requires stacking many convolutional modules, acquiring the dependencies this way is inefficient. The terminal fusion matrix is therefore further processed into a feature dependency relationship matrix, establishing long-distance dependencies between information and strengthening the dependency between the information of a detected young abalone and that of the tile or of the other young abalones. When a detected young abalone is occluded by others, the occlusion edges are detected through these established dependencies, the boundary of the detected abalone is identified more accurately, and its bounding box is marked, improving the accuracy of the detected position information. Finally, the obtained bounding boxes are counted to give the number of young abalones; after the count on a single tile is obtained, several tiles are sampled and averaged, and the average is multiplied by the number of tiles in the nursery pool to determine the total number of young abalones in the pool.
Based on this design concept, the invention provides an abalone counting device for the nursery stage, comprising a camera, a brightness control module (a ring lamp and a brightness controller), a glass plate, an image data processing module, a young-abalone individual detection module, a counting switch button and a counting result display.
The technical solution of the present application is explained by a specific case as follows:
the method comprises the following steps of cultivating young abalones in a cultivation pond with the length of 10 meters, the width of 8 meters and the height of 0.4 meter, regularly arranging tiles with the length of 0.25 meter and the width of 0.1 meter in the cultivation pond, and cultivating 30-200 young abalones with the length of 5-13 mm on each tile, wherein in the cultivation stage of the young abalones, the cultivation density needs to be estimated according to the size and the number of the young abalones, so that the feeding amount is determined, and the situation that the development of the young abalones is influenced by too little bait is prevented; prevent the young abalone from being attacked due to the pollution of the breeding environment caused by excessive baits. The abalone individual in the seedling stage is small, the color and the texture of the abalone individual are similar, the individual bodies are shielded, and the detection and counting accuracy is low. To solve the problem, two technical schemes are provided: a hierarchical fusion method of a two-dimensional feature matrix and a feature dependency relationship matrix established. An embedded device comprising a camera, a brightness control module (an annular lamp and a brightness controller), a glass plate, an image data processing module, a counting switch button, a young abalone individual detection module and a counting result display is designed and used for acquiring the position and quantity information of young abalones in an image. 
The bounding box of each young abalone in the image is determined by the hierarchical fusion of two-dimensional feature matrices and the feature dependency relationship matrix. A bounding box (x, y, w, h) contains the size and position information of the young abalone, where x and y are the horizontal and vertical coordinates of the box center and w and h are its width and height; the number of bounding boxes is the number of young abalones in the image, and detection and counting accuracy is thereby improved.
The camera used by the device acquires nursery-stage abalone pictures of size 512 × 512; the image processing module cuts each picture to 256 × 256. The young-abalone individual detection module detects and counts the young abalones from the images provided by the image processing module, using the hierarchical fusion of two-dimensional feature matrices and the feature dependency relationship matrix, to obtain their position and quantity information, and finally sends the counting result to the counting result display.
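One way to read the cropping step (the patent does not specify whether the 512 × 512 frame is center-cropped or tiled) is to split it into four 256 × 256 patches; a hypothetical NumPy sketch under that assumption:

```python
import numpy as np

def split_into_patches(image, size=256):
    """Split an H x W (x C) image array into non-overlapping size x size patches."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h, size)
            for c in range(0, w, size)]

frame = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in camera frame
patches = split_into_patches(frame)
print(len(patches), patches[0].shape)  # 4 (256, 256, 3)
```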
With reference to figures 1, 2 and 3:
collecting and processing abalone images in a seedling stage:
Image acquisition of abalones in the nursery stage: using the nursery-stage abalone counting device, a tile is selected at random from the culture pond and placed on the glass plate. The ring lamp above the glass plate has its brightness and color temperature set by the brightness controller, which provides four brightness levels (55 lm low, 260 lm medium, 760 lm high, 1800 lm extremely bright) and three color temperatures (pure white light of 5800-6300 K, neutral light of 3000-5000 K, warm yellow light below 3000 K). The four brightness levels and three color temperatures combine into 12 lighting environments: low, medium, high and extremely bright pure white light; low, medium, high and extremely bright neutral light; and low, medium, high and extremely bright warm yellow light. The young abalones are placed on the glass plate, and a brightness/color-temperature combination is selected by adjusting the ring lamp to simulate the brightness and color temperature of the actual culture environment. When the counting switch button is pressed, the camera starts working and transmits the captured 512 × 512 young-abalone image to the nursery-stage abalone image processing module.
The nursery-stage abalone image processing module consists of a computer that receives and processes the young-abalone images transmitted by the camera. The computer first screens the images: the blurriness of each image is measured with a gray-variance algorithm, images whose gray variance is below 100 are deleted and reshot, and images above 100 are kept.
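The gray-variance screening can be sketched directly: convert to grayscale, compute the variance of pixel intensities, and discard frames whose variance falls below the threshold of 100 (synthetic arrays stand in for camera images here):

```python
import numpy as np

def gray_variance(image):
    """Blur measure: variance of the grayscale intensities.
    Low-contrast (blurry) frames have low variance."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return float(gray.var())

def keep_frame(image, threshold=100.0):
    return gray_variance(image) > threshold

sharp = np.zeros((64, 64), dtype=float)
sharp[::2] = 255.0                  # high-contrast stripes -> high variance
blurry = np.full((64, 64), 128.0)   # flat frame -> zero variance
print(keep_frame(sharp), keep_frame(blurry))  # True False
```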
Hierarchical fusion of two-dimensional feature matrices:
the method comprises the following steps of taking a young abalone image processed by an abalone image processing module in a nursery stage as an input image of the module, wherein the size of the input image is represented by n (n = h × w, the height of the image is h, the width of the image is w), and the number of image channels is c. Reading each pixel (total c × n pixels) in the input image by using an array function and converting the pixel into a matrix, namely acquiring an initial matrix of the image input. Next, performing convolution operation on the image initial matrix X by using C1 and C2 convolution layers to obtain a two-dimensional feature matrix a 1; performing convolution operation on the A1 by utilizing the convolution layers of C3 and C4 to obtain a two-dimensional feature matrix A2; then using C5, C6 convolution layer to make two-dimensional pairPerforming convolution operation on the feature matrix A2 to obtain A3; and performing convolution operation on the A3 by utilizing the convolution layers of C7 and C8 to obtain a two-dimensional feature matrix A4. In particular, the amount of the solvent to be used,
Here, the n×n-dimensional C2, n×(n/4)-dimensional C4, (n/4)×(n/16)-dimensional C6 and (n/16)×(n/64)-dimensional C8 are used to extract the position information of the young abalones, and their numbers of columns (n, n/4, n/16, n/64) determine the numbers of columns of A1, A2, A3 and A4. Thus A1, A2, A3 and A4 contain n, n/4, n/16 and n/64 pieces of young-abalone position information respectively. The network carries 2n pieces of useful information in total, comprising n pieces of color-texture and position information and n pieces of edge and shape information. In theory each of A1, A2, A3 and A4 would carry a correspondingly full share of color-texture information, but information is lost in the convolution process, and A2 and A3 lose part of their color-texture information; in practice A1, A2, A3 and A4 contain n/64, n/16, n/4 and n pieces of color-texture information respectively. At this point, all young abalones in the image are detected on the young-abalone feature map according to the color-texture information contained in the two-dimensional matrices A1, A2, A3 and A4, the regions of the young abalones are highlighted by displaying prior frames, and their approximate positions are determined; the x, y, w and h parameters of a prior frame correspond respectively to the abscissa of its center, the ordinate of its center, its width and its height, thereby determining the positions of the young abalones.
Second, to obtain more color texture and position information, the two-dimensional matrices A3 and A4 are fused. The matrix A4 is processed with M4(n/8, n/4), which changes A4 from (c, n/8) to (c, n/4); the matrix A3 is processed with L3(c, 2c), which changes A3 from (2c, n/4) to (c, n/4). The matrices B3 and U4 obtained after processing A3 and A4 have the same numbers of rows and columns and are added to obtain the combined feature matrix D3. D3 has n pieces of color texture information and n/4 pieces of position information, whereas A3 has fewer pieces of color texture information with n/4 pieces of position information, and A4 has n pieces of color texture information but only n/8 pieces of position information. Thus, taking D3, which contains n pieces of color texture information, as the output instead of A3 is more helpful for the extraction of feature information. Specifically:
M4 is a self-defined matrix introduced to fuse A3 and A4. Since A3 and A4 differ in their numbers of rows and columns, they cannot be fused directly, so M4 and L3 are used to process A4 and A3 respectively; after this processing, the two matrices have the same numbers of rows and columns and fusion becomes possible.
At this point, D3, which contains n pieces of color texture information, is taken as the output instead of A3, which contains fewer. All the young abalones in the image are detected on the young abalone feature map using the two-dimensional matrix D3, and the regions where they are located are highlighted by displaying prior frames (x, y, w, h) to further determine their approximate positions. The prior frame positions obtained this way are closer to the actual positions of the young abalones than those obtained by detecting with A3, improving detection precision when the colors and textures of the young abalones are similar.
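The fusion step just described can be sketched as plain matrix products. All the shapes below are illustrative assumptions (c = 8, with A3 having twice the rows and columns of A4); M4 right-multiplies A4 to align column counts and L3 left-multiplies A3 to align row counts:

```python
import numpy as np

# Hedged sketch of fusing A3 and A4 into D3 via transformation matrices.
c, cols_A3, cols_A4 = 8, 16, 8          # hypothetical channel/column counts

A4 = np.random.rand(c, cols_A4)         # deeper feature matrix (fewer columns)
A3 = np.random.rand(2 * c, cols_A3)     # shallower feature matrix (more rows)

M4 = np.random.rand(cols_A4, cols_A3)   # column-conversion matrix M4
L3 = np.random.rand(c, 2 * c)           # row (channel) conversion matrix L3

U4 = A4 @ M4                            # (c, cols_A3): columns now match A3's
B3 = L3 @ A3                            # (c, cols_A3): rows now match A4's
D3 = B3 + U4                            # combined feature matrix
assert D3.shape == (c, cols_A3)
```

With the row and column counts equalized, the element-wise addition is well defined, which is exactly why the self-defined matrices M4 and L3 are needed.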
Third, the two-dimensional matrices A2 and D3 are similarly fused. The matrix D3 is processed with M3(n/4, n/2), which changes D3 from (c, n/4) to (c, n/2); the matrix A2 is processed with L2(c, 4c), which changes A2 from (4c, n/2) to (c, n/2). The matrices B2 and U3 obtained after processing A2 and D3 have the same numbers of rows and columns and are added to obtain the combined feature matrix D2. D2 has n pieces of color texture information and n/2 pieces of position information, whereas A2 has fewer pieces of color texture information with n/2 pieces of position information, and D3 has n pieces of color texture information but only n/4 pieces of position information. Thus, taking D2, which contains n pieces of color texture information, as the output instead of A2 is more helpful for the extraction of feature information.
At this point, D2, which contains n pieces of color texture information, is taken as the output instead of A2, which contains fewer. All the young abalones in the image are detected on the young abalone feature map using the two-dimensional matrix D2, and the regions where they are located are highlighted by displaying prior frames (x, y, w, h) to further determine their approximate positions. The prior frame positions obtained this way are closer to the actual positions of the young abalones than those obtained by detecting with A2, improving detection precision when the colors and textures of the young abalones are similar.
Fourth, the two-dimensional matrices A1 and D2 are similarly fused. The matrix D2 is processed with M2(n/2, n), which changes D2 from (c, n/2) to (c, n); the matrix A1 is processed with L1(c, 2c), which changes A1 from (2c, n) to (c, n). The matrices B1 and U2 obtained after processing A1 and D2 have the same numbers of rows and columns and are added to obtain the combined feature matrix D1. D1 has n pieces of color texture information and n pieces of position information, whereas A1 has fewer pieces of color texture information with n pieces of position information, and D2 has n pieces of color texture information but only n/2 pieces of position information. Thus, taking D1, which contains n pieces of color texture information, as the output instead of A1 is more helpful for the extraction of feature information.
At this point, D1, which contains n pieces of color texture information, is taken as the output instead of A1, which contains fewer. All the young abalones in the image are detected on the young abalone feature map using the two-dimensional matrix D1, and the regions where they are located are highlighted by displaying prior frames (x, y, w, h) to further determine their approximate positions. The prior frame positions obtained this way are closer to the actual positions of the young abalones than those obtained by detecting with A1, improving detection precision when the colors and textures of the young abalones are similar.
Thus, the feature matrices A1, A2, A3 and A4 are hierarchically fused to obtain the feature matrices D1, D2 and D3: D3 combines the position information of A3 with the color texture information of A4; D2 combines the position information of A2 with the color texture information of D3; and D1 combines the position information of A1 with the color texture information of D2. The matrices D1, D2 and D3, each of which finally contains n pieces of color texture information, are output instead of the matrices A1, A2 and A3, which contain fewer. The network therefore obtains more color texture information, the prior frame positions obtained when detecting the young abalones with that information are more accurate and closer to the actual positions of the young abalones, and detection precision under similar colors and textures is improved.
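The whole three-level cascade can be sketched with one helper applied repeatedly. The channel counts (2c, 4c, 2c, c) and the halving column counts below are assumptions chosen so the stated shapes, such as L1(c, 2c) taking A1 from (2c, n) to (c, n), all line up:

```python
import numpy as np

def fuse(shallow, deep, L, M):
    """One fusion step: align the deep matrix's columns with M (right-multiply),
    the shallow matrix's rows with L (left-multiply), then add element-wise."""
    return L @ shallow + deep @ M

# Hypothetical pyramid: columns halve at each level (n, n/2, n/4, n/8 assumed).
c, n = 4, 32
A1 = np.random.rand(2 * c, n)
A2 = np.random.rand(4 * c, n // 2)
A3 = np.random.rand(2 * c, n // 4)
A4 = np.random.rand(c, n // 8)

D3 = fuse(A3, A4, L=np.random.rand(c, 2 * c), M=np.random.rand(n // 8, n // 4))
D2 = fuse(A2, D3, L=np.random.rand(c, 4 * c), M=np.random.rand(n // 4, n // 2))
D1 = fuse(A1, D2, L=np.random.rand(c, 2 * c), M=np.random.rand(n // 2, n))
assert D1.shape == (c, n)   # terminal fusion matrix at full resolution
```

Each step rolls the deeper, texture-rich matrix forward into the shallower, position-rich one, which is the "rolling forward fusion" the claims describe.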
Feature dependency relationship matrix:
At present, convolutional neural networks capture long-distance dependencies between pieces of information by stacking multiple convolution modules, so acquiring those dependencies is inefficient; moreover, traditional convolutional networks have deep structures, which makes modules difficult to design and embed. To address this, the long-distance dependencies between pieces of information are established through the feature dependency relationship matrix. The feature dependency relationship matrix proposed by the invention computes the relationships between position information directly, reduces the stacking of convolutional layers, and improves computational efficiency; it is also simple to compute and easy to embed in a network structure.
The feature dependency relationship matrix is established and used as follows:
First, the feature dependency relationship matrix is established from the terminal fusion matrix D1 obtained by hierarchical fusion of the two-dimensional feature matrices: D1 is passed through three 1×1 convolutional layers, Conv_A, Conv_B and Conv_D, to obtain the convolutional feature map matrices Conv_A1, Conv_B1 and Conv_D1.
Specifically, Conv_A1, Conv_B1 and Conv_D1 are obtained by multiplying D1 by the parameter matrices of the convolutional layers Conv_A, Conv_B and Conv_D respectively.
Second, a matrix multiplication of Conv_A1 and Conv_B1 obtained after convolution yields the similarity between elements, which serves as the feature dependency relationship matrix H. H is normalized to obtain Z, establishing a dependency relationship between any two elements in the feature map. H and Z are computed as follows:
where the elements of Z are the normalized values of the feature map, the feature dependency relationship matrix H represents the similarity between the elements of Conv_A1 and Conv_B1, and i and j index positions in the matrix H.
In each channel, the transpose of Z is used as a weight matrix to compute a weighted sum over the value at each pixel position in Conv_D1, and finally X is added to obtain the final output feature map Output:
where X denotes the input feature map matrix, Conv_D1 denotes the feature map after its convolutional layer, Z denotes the weight matrix obtained by multiplying Conv_A1 and Conv_B1 and normalizing, and W is a weighting parameter initialized to 0 whose value is continuously learned during training.
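This dependency step has the shape of a self-attention block, and can be sketched with plain matrix algebra. Everything below is an assumption-laden illustration: the 1×1 convolutions on a (c, n) map are written as products with random parameter matrices, and the orientation of Z in the weighted sum is one consistent reading of the text:

```python
import numpy as np

def softmax(x, axis=-1):
    """Row-wise softmax used to normalize H into Z."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

c, n = 4, 16
D1 = np.random.rand(c, n)                   # terminal fusion matrix (input map X)

# Assumed parameter matrices of the 1x1 layers Conv_A, Conv_B, Conv_D.
Wa, Wb, Wd = (np.random.rand(c, c) for _ in range(3))
Conv_A1, Conv_B1, Conv_D1 = Wa @ D1, Wb @ D1, Wd @ D1

H = Conv_A1.T @ Conv_B1                     # (n, n) similarity between positions
Z = softmax(H, axis=1)                      # normalized dependency matrix
W = 0.0                                     # weighting parameter, initialized to 0
Output = W * (Conv_D1 @ Z.T) + D1           # weighted sum plus the input map
assert Output.shape == D1.shape
```

Note that with W initialized to 0 the block starts as an identity mapping, so the dependency term is blended in gradually as W is learned.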
The module uses the matrices Conv_A1 and Conv_B1 to establish the feature dependency relationship matrix H, building a dependency relationship between the information of a detected young abalone and that of the tile or of the other young abalones. H is normalized and converted into a weight matrix used to weight each value in the feature map matrix Conv_D1, applying the dependency relationship to every pixel in the feature map and further strengthening the dependency between the detected young abalone's information and that of the tile or the other young abalones.
The processed young abalone image is transmitted to the young abalone individual detection module, and individual young abalones are detected and identified using the two-dimensional feature matrix hierarchical fusion and feature dependency relationship matrix methods. Hierarchical fusion of the two-dimensional feature matrices provides the network with more color texture information about the young abalones, so that it can extract more color texture features, further improving the accuracy of the young abalone prior frames and the detection precision when the colors and textures of the young abalones are similar. The feature dependency relationship matrix establishes the dependency relationships between each detected young abalone and the tile and between it and the other young abalones, and these relationships are used to highlight the regions where the young abalones are located, so that the network can acquire the prior frames still more accurately. Detecting and identifying individual young abalones yields the young abalone prior frames and a corresponding score on each prior frame. The score represents the intersection-over-union between the prior frame detected by the network and the actual position of the young abalone (the intersection-over-union is the ratio of the intersection of the prior frame and the actual position to their union). The prior frames are sorted by intersection-over-union from large to small, the prior frame with the highest intersection-over-union with the actual position is screened out and fine-tuned using the trained network weights to obtain the bounding box of the young abalone, and the young abalone image with bounding boxes is output.
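The intersection-over-union used to score each prior frame can be computed directly from the (x, y, w, h) parameterization defined earlier, where (x, y) is the frame's center. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h),
    where (x, y) is the center of the box, as in the prior frames above."""
    # Convert center/size parameterization to corner coordinates.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Overlap extents, clamped at zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

Sorting the prior frames by this score from large to small and keeping the top one is the screening step the text describes.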
This improves the detection precision of the young abalones when their colors and textures are similar and when individuals occlude one another. The x, y, w and h parameters of a bounding box represent the position of a young abalone, and the number of bounding boxes in an image represents the number of young abalones in that image. Therefore, individual young abalones are detected and identified by hierarchical fusion of the two-dimensional feature matrices together with the feature dependency relationship matrix, and the positions and number of the bounding boxes are counted as the position and count information of the young abalones in each image. Finally, the obtained information is stored and transmitted to the counting result display.
After the number of young abalones on a single tile is obtained, several tiles are selected and their counts averaged; multiplying the average by the number of tiles in the nursery pond determines the total number of young abalones in the pond.
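The final estimate is simple arithmetic; the per-tile counts and pond size below are hypothetical values for illustration:

```python
# Sketch of the final estimate: average per-tile count times number of tiles.
per_tile_counts = [52, 47, 55, 50]   # hypothetical counts from sampled tiles
tiles_in_pond = 200                  # hypothetical number of tiles in the pond

average = sum(per_tile_counts) / len(per_tile_counts)
total_estimate = round(average * tiles_in_pond)
print(total_estimate)  # 10200
```

Sampling several tiles rather than one smooths out uneven settlement of the young abalones across the pond.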
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A counting method for abalones in the nursery stage, characterized in that a single tile of young abalones to be counted is photographed to obtain an initial input image; each pixel in the input image is read with the asarray function and converted into a matrix, yielding the initial input matrix;
the initial matrix is converted into a plurality of successive two-dimensional feature matrices through convolutional layer convolution, the two-dimensional feature matrices recording the color arrangement information of the young abalones in the picture and preliminarily determining their position information;
then, the two-dimensional feature matrices are processed by matrix fusion: transformation matrices convert the numbers of rows and columns of the rear-end two-dimensional feature matrices to obtain two-dimensional feature conversion matrices that have identical numbers of rows and columns and carry color texture features and position feature information; the two conversion matrices are added to form an initial fusion matrix, which is fused forward level by level to form promotion fusion matrices and finally the terminal fusion matrix;
the terminal fusion matrix is further processed to form a feature dependency relationship matrix, which establishes long-distance dependency relationships between pieces of information and strengthens the dependency between the information of a detected young abalone and that of the tile or of the other young abalones; when a detected young abalone is occluded by other young abalones, the occlusion edges between them are detected through these established dependency relationships, the boundary of the detected young abalone is identified, and its bounding box is marked;
and finally, the obtained bounding boxes of the young abalones are counted to give the number of young abalones on a single tile; several tiles are then selected and their counts averaged, and the average is multiplied by the number of tiles in the nursery pond to determine the total number of young abalones in the pond.
2. The counting method for abalones in the nursery stage according to claim 1, characterized in that the feature dependency relationship matrix is obtained by processing the terminal fusion matrix through three convolutional layers; the three corresponding feature map matrices are further processed and the dependency relationship is applied to each pixel in the feature map, strengthening the dependency between the information of the detected young abalones and highlighting the regions where they are located, so that the network can acquire the young abalone prior frames more accurately; individual young abalones are detected and identified to obtain the prior frames and the corresponding score on each prior frame, the score representing the intersection-over-union between the prior frame detected by the network and the actual position of the young abalone; the prior frames are sorted by intersection-over-union from large to small, the prior frame with the highest intersection-over-union with the actual position is screened out and fine-tuned using the trained network weights to obtain the bounding box of the young abalone, and the young abalone image with bounding boxes is output.
4. The counting method for abalones in the nursery stage according to claim 3, characterized in that there are four two-dimensional feature matrices: the two-dimensional feature matrices A1, A2, A3 and A4;
the n×n-dimensional layer C2, the (n/2)×(n/2)-dimensional layer C4, the (n/4)×(n/4)-dimensional layer C6 and the (n/8)×(n/8)-dimensional layer C8 are used to extract the position information of the young abalones; the column numbers of C2, C4, C6 and C8 respectively determine the column numbers of A1, A2, A3 and A4, whereby A1, A2, A3 and A4 contain n, n/2, n/4 and n/8 pieces of young abalone position information respectively;
all young abalones in the image are detected on the young abalone feature map from the color texture information contained in the two-dimensional matrices A1, A2, A3 and A4, the regions where the young abalones are located are highlighted by displaying prior frames, and their approximate positions are determined; the x, y, w and h parameters of a prior frame correspond respectively to the abscissa of its center, the ordinate of its center, its width and its height, thereby fixing the position of the young abalone.
5. The counting method for abalones in the nursery stage according to claim 4, characterized in that the initial fusion matrix is obtained as follows:
the two-dimensional matrices A3 and A4 are fused: the matrix A4 is processed with the transformation matrix M4(n/8, n/4), which changes A4 from (c, n/8) to (c, n/4); the matrix A3 is processed with the transformation matrix L3(c, 2c), which changes A3 from (2c, n/4) to (c, n/4); the matrices B3 and U4 obtained after processing A3 and A4 have the same numbers of rows and columns and are added to obtain the combined feature matrix D3; D3 has n pieces of color texture information and n/4 pieces of position information, whereas A3 has fewer pieces of color texture information with n/4 pieces of position information, and A4 has n pieces of color texture information but only n/8 pieces of position information.
6. The counting method for abalones in the nursery stage according to claim 5, characterized in that, for the promotion fusion matrix, the two-dimensional matrices A2 and D3 are fused: the matrix D3 is processed with M3(n/4, n/2), which changes D3 from (c, n/4) to (c, n/2); the matrix A2 is processed with L2(c, 4c), which changes A2 from (4c, n/2) to (c, n/2); the matrices B2 and U3 obtained after processing A2 and D3 have the same numbers of rows and columns and are added to obtain the combined feature matrix D2; D2 has n pieces of color texture information and n/2 pieces of position information, whereas A2 has fewer pieces of color texture information with n/2 pieces of position information, and D3 has n pieces of color texture information but only n/4 pieces of position information.
7. The counting method for abalones in the nursery stage according to claim 6, characterized in that, for the terminal fusion matrix, the two-dimensional matrices A1 and D2 are fused: the matrix D2 is processed with M2(n/2, n), which changes D2 from (c, n/2) to (c, n); the matrix A1 is processed with L1(c, 2c), which changes A1 from (2c, n) to (c, n); the matrices B1 and U2 obtained after processing A1 and D2 have the same numbers of rows and columns and are added to obtain the combined feature matrix D1; D1 has n pieces of color texture information and n pieces of position information, whereas A1 has fewer pieces of color texture information with n pieces of position information, and D2 has n pieces of color texture information but only n/2 pieces of position information; thus, taking D1, which contains n pieces of color texture information, as the output instead of A1 is more helpful for the extraction of feature information.
8. The counting method for abalones in the nursery stage according to claim 7, characterized in that, for the feature dependency relationship matrix:
first, the feature dependency relationship matrix is established from the terminal fusion matrix D1 obtained by hierarchical fusion of the two-dimensional feature matrices: D1 is passed through three 1×1 convolutional layers, Conv_A, Conv_B and Conv_D, to obtain the convolutional feature map matrices Conv_A1, Conv_B1 and Conv_D1;
second, a matrix multiplication of Conv_A1 and Conv_B1 obtained after convolution yields the similarity between elements, which serves as the feature dependency relationship matrix H; H is normalized to obtain Z, establishing a dependency relationship between any two elements in the feature map; H and Z are computed as follows:
where the elements of Z are the normalized values of the feature map, the feature dependency relationship matrix H represents the similarity between the elements of Conv_A1 and Conv_B1, and i and j index positions in the matrix H;
in each channel, the transpose of Z is used as a weight matrix to compute a weighted sum over the value at each pixel position in Conv_D1, and finally D1 is added to obtain the final output feature map Output:
where D1 denotes the input feature map matrix, Conv_D1 denotes the feature map after its convolutional layer, Z denotes the weight matrix obtained by multiplying Conv_A1 and Conv_B1 and normalizing, and W is a weighting parameter initialized to 0 whose value is continuously learned during training.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111237469.3A CN113724255A (en) | 2021-10-25 | 2021-10-25 | Counting method for abalones in seedling raising period |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113724255A true CN113724255A (en) | 2021-11-30 |
Family
ID=78686099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111237469.3A Withdrawn CN113724255A (en) | 2021-10-25 | 2021-10-25 | Counting method for abalones in seedling raising period |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113724255A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115336549A (en) * | 2022-08-30 | 2022-11-15 | 四川农业大学 | Intelligent feeding system and method for fish culture |
CN115336549B (en) * | 2022-08-30 | 2023-06-20 | 四川农业大学 | Intelligent fish culture feeding system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20211130 |