CN111753775A - Fish growth assessment method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111753775A
CN111753775A (application CN202010608682.XA)
Authority
CN
China
Prior art keywords
fish
images
frames
same
continuous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010608682.XA
Other languages
Chinese (zh)
Other versions
CN111753775B (en)
Inventor
张为明
Current Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd filed Critical Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN202010608682.XA priority Critical patent/CN111753775B/en
Publication of CN111753775A publication Critical patent/CN111753775A/en
Application granted granted Critical
Publication of CN111753775B publication Critical patent/CN111753775B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a fish growth assessment method, device, equipment and storage medium, wherein the method comprises the following steps: acquiring N consecutive frames of fish images, wherein N is an integer greater than 1; processing each two consecutive frames of fish images as follows: extracting the features of the fish in the two consecutive frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish; obtaining the motion trajectory of each fish according to the identifiers of the fish in the N frames of fish images; and evaluating the growth trend of the fish according to the motion trajectory. The method and the device improve the accuracy of fish growth assessment.

Description

Fish growth assessment method, device, equipment and storage medium
Technical Field
The present application relates to the field of aquaculture, and in particular, to a method, an apparatus, a device, and a storage medium for evaluating growth of fish.
Background
Analysis of the aquatic product industry shows that world aquaculture is most developed in Asia, which accounts for nearly 90% of global output. China is one of the major aquaculture countries in Asia and one of the countries with the longest history of aquaculture in the world.
To achieve green, healthy and safe aquaculture, the growth of the fish needs to be observed, so that problems can be handled promptly and the healthy growth of the fish ensured. Generally, an underwater camera continuously captures fish images, an algorithm detects suitable fish and obtains their body-size information, and the body-size information from all the images is then analyzed statistically. Because this method evaluates growth from the information of the fish in a single image, it cannot comprehensively estimate the growth trend of the fish; it is one-sided and its assessment accuracy is poor.
The prior art also includes methods that segment the image to obtain targets, track the targets with traditional algorithms, and evaluate the growth of the fish from the tracking result. These methods neglect temporal continuity: the same target is matched across multiple frames only by hand-crafted rules, so matches are easily missed, and the effect is especially poor when the frame rate is low.
Disclosure of Invention
The application provides a fish growth assessment method, a fish growth assessment device, fish growth assessment equipment and a storage medium, which are used for improving the accuracy of fish growth assessment.
In a first aspect, the present application provides a method for assessing growth of fish, comprising:
acquiring N consecutive frames of fish images, wherein N is an integer greater than 1;
processing each two consecutive frames of fish images as follows: extracting the features of the fish in the two consecutive frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish;
obtaining the motion trajectory of each fish according to the identifiers of the fish in the N frames of fish images; and
evaluating the growth trend of the fish according to the motion trajectory of the fish.
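The last two steps above can be sketched as follows; the detection tuple format and the function name are illustrative assumptions for this sketch, not part of the application:

```python
from collections import defaultdict

def build_tracks(detections):
    """Group per-frame detections by fish identifier into motion trajectories.

    detections: iterable of (frame_idx, fish_id, cx, cy) tuples, where
    (cx, cy) is the fish's center point in that frame.
    """
    tracks = defaultdict(list)
    # Sort by frame index so each trajectory is in temporal order.
    for frame_idx, fish_id, cx, cy in sorted(detections):
        tracks[fish_id].append((cx, cy))
    return dict(tracks)

dets = [(0, 1, 10.0, 5.0), (0, 2, 30.0, 8.0),
        (1, 1, 12.0, 6.0), (1, 2, 29.0, 9.0)]
print(build_tracks(dets))
# {1: [(10.0, 5.0), (12.0, 6.0)], 2: [(30.0, 8.0), (29.0, 9.0)]}
```

Any growth-trend statistic (e.g. body-size change over time) would then be computed per trajectory rather than per single image.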
Optionally, evaluating the growth trend of the fish according to the motion trajectory of the fish comprises:
identifying fish with abnormal movement according to the motion trajectories, and filtering out the trajectories of the abnormally moving fish; and
evaluating the growth trend of the fish according to the filtered motion trajectories.
Optionally, extracting the features of the fish in the two consecutive frames of fish images and identifying the same fish across the two frames comprises:
extracting the features of the fish in the two consecutive frames, obtaining the position information of fish with similar features in the two frames, and determining the offset of the center points of the fish with similar features according to the position information; and
when the offset of the center point is within a preset range, judging that the fish with similar features are the same fish;
wherein the center point of a fish is the center point of its circumscribed rectangle.
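The center-point matching rule can be sketched as follows, assuming axis-aligned circumscribed rectangles given as (x_min, y_min, x_max, y_max); the function names are illustrative:

```python
def rect_center(box):
    # Center point of the fish = center of its circumscribed rectangle.
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def same_fish(box_a, box_b, max_offset):
    # Fish with similar features are judged to be the same fish when the
    # offset between their center points is within the preset range.
    (ax, ay), (bx, by) = rect_center(box_a), rect_center(box_b)
    offset = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return offset <= max_offset

print(same_fish((0, 0, 10, 10), (2, 1, 12, 11), max_offset=3.0))  # True
```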
Optionally, extracting features of the fish in the two consecutive frames of fish images, obtaining position information of the fish with similar features in the two consecutive frames of fish images, and determining an offset of a center point of the fish with similar features according to the position information, includes:
inputting the two continuous frames of fish images into a detection tracking model;
extracting the characteristics of the fish in the two continuous frames of fish images through the detection tracking model, acquiring the position information of the fish with similar characteristics in the two continuous frames of fish images, and determining the offset of the center point of the fish with similar characteristics according to the position information;
and acquiring the characteristics of the fish, the central point of the fish and the offset of the central point of the same fish output by the detection tracking model.
Optionally, the training process of the detection tracking model includes:
acquiring a sample image set, wherein the sample image set comprises Q consecutive sample images and preset label data, every S sample images form a group of sample images, S is less than or equal to Q, and the preset label data comprise: the real features of the fish, the real center point of the fish, and the offset of the real center point of the same fish;
performing the following training process on each group of sample images in the sample image set:
inputting the group of sample images into an initial detection tracking model and processing each two consecutive sample frames as follows: obtaining the features of the fish, the center point of the fish, and the offset of the center point of the same fish output by the initial detection tracking model; and
calculating a loss function value according to the output of the initial detection tracking model and the preset label data, back-propagating the gradient into the initial detection tracking model to optimize it, acquiring the next group of sample images from the sample image set, and repeating the training process until the loss value stabilizes, at which point the initial detection tracking model is taken as the final detection tracking model.
Optionally, obtaining the features of the fish output by the detection tracking model comprises:
downsampling each of the two consecutive sample frames and extracting the first features of the fish from the downsampled frames;
upsampling the downsampled frames and extracting the second features of the fish from the upsampled frames;
obtaining the hot spot map features of either one of the two consecutive sample frames; and
obtaining the features of the fish from the first features, the second features and the hot spot map features.
Optionally, after extracting the features of the fish in the two consecutive frames of fish images, the method further includes:
classifying the fish by target type according to their features, and assigning the same type identifier to fish of the same type.
Optionally, before inputting the group of sample images into the initial detection tracking model, the method further includes:
adding a circumscribed rectangle around each fish in the sample images, and attaching the preset label data to the circumscribed rectangle.
In a second aspect, the present application provides a fish growth assessment apparatus comprising:
the first acquisition module is used for acquiring continuous N frames of fish images, wherein N is an integer greater than 1;
the extraction module is used for processing each two consecutive frames of fish images as follows: extracting the features of the fish in the two consecutive frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish;
the second acquisition module is used for acquiring the motion trail of the fish according to the identification of the fish in the N frames of fish images;
and the evaluation module is used for evaluating the growth trend of the fish according to the motion trail of the fish.
In a third aspect, the present application provides an electronic device, comprising: a processor, a communication component, a memory and a communication bus, wherein the processor, the communication component and the memory communicate with one another through the communication bus; the memory stores a computer program; and the processor executes the program stored in the memory to implement the fish growth assessment method.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the method for assessing growth of a fish.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages. By acquiring N consecutive frames of fish images and, for every two consecutive frames, extracting the features of the fish and identifying the same fish across the two frames, the method ensures temporal continuity between images and therefore the accuracy of matching the same fish. The same identifier is assigned to the same fish; each fish is tracked by its identifier across the N frames, its motion trajectory is obtained, and the growth trend of the fish is evaluated from that trajectory. Compared with evaluating single images in isolation, evaluating the growth trend from a continuously tracked trajectory yields relatively comprehensive information about each fish, so the resulting assessment is more accurate. Moreover, because this evaluation considers temporal continuity and tracks each fish under a single identifier, it avoids the missed matches of rule-based target matching, improves the tracking effect, and makes the trajectory-based evaluation result more accurate.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic view of a process for evaluating the growth of fish in an embodiment of the present application;
FIG. 2 is a schematic diagram of a fish feature extraction operation in an embodiment of the present application;
FIG. 3 is a schematic diagram of a training process of a detection tracking model in an embodiment of the present application;
FIG. 4 is a schematic view of the fish growth assessment apparatus according to the embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a fish growth assessment method, which can be applied to a camera device that captures fish images, to an intelligent terminal without a camera function, or to a server. The specific implementation of the method is shown in fig. 1:
Step 101, acquiring N consecutive frames of fish images, wherein N is an integer greater than 1.
Specifically, the N consecutive frames of fish images are captured by a camera mounted at a fixed position; one or more cameras may be provided. The user sets the capture schedule according to actual needs so that the camera shoots at regular times; for the accuracy of the growth assessment, the time interval between two adjacent shots may be set not to exceed a set value.
Step 102, processing each two consecutive frames of fish images as follows: extracting the features of the fish in the two consecutive frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish.
In a specific embodiment, for the N consecutive frames of fish images, each two consecutive frames are processed as follows:
extracting the fish features in the two consecutive frames, the features including the width and height of the fish, the fish's morphology, its color, and so on; comparing the features extracted from the two frames to obtain the position information of fish with similar features in the two frames; determining the offset of the center points of the fish with similar features according to the position information; and, when the offset of the center point is within a preset range, judging the fish with similar features to be the same fish. The center point of a fish is the center point of its circumscribed rectangle, two sides of which are parallel to a horizontal axis established with the camera as reference.
In a specific embodiment, a detection tracking model is used to extract the features of the fish and obtain the fish's position information and center-point offsets. The process is as follows: the two consecutive frames of fish images are input into the detection tracking model; the model extracts the features of the fish in the two frames, compares the features extracted from each frame, obtains the position information of fish with similar features, determines the offset of their center points according to the position information, and judges fish with similar features to be the same fish when the offset is within the preset range. The detection tracking model outputs the features of the fish, the center point of the fish, and the offset of the center point of the same fish.
In one embodiment, in order to clearly illustrate the training process of the detection tracking model, the CenterNet target detection algorithm is first introduced, and the tracking process is then explained in combination with it, as follows:
as shown in fig. 2, the image features of a fish image when the fish image is downsampled are extracted by a Deep Layer Aggregation (DLA), and the downsampling is performed by 4 times, 8 times, 16 times, and 32 times, that is, the image features of the fish image downsampled by 4 times, the image features downsampled by 8 times, the image features downsampled by 16 times, and the image features downsampled by 32 times are respectively extracted by a Deep Layer feature fusion network. And repeatedly performing up-sampling processing on the fish image subjected to down-sampling based on the image features obtained by down-sampling, extracting the image features subjected to up-sampling, performing feature fusion operation, obtaining fusion features at the time of down-sampling by 4 times, and taking the fusion features as the features of the fish in the fish image. Wherein the upsampling operation is jointly formed by a deformable convolution and a transposed convolution to make the obtained image features more accurate.
The fused feature at 4x downsampling is taken as the feature of the fish because it combines the high-level semantic features with the low-level simple features of the fish image, and because the feature resolution at 4x downsampling is the highest. Of course, using the 4x fused feature is not absolute; the user can determine the optimal fused feature according to the resolution and the size of the image.
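A toy numpy sketch of this multi-scale scheme, with stride-k subsampling standing in for the DLA encoder and nearest-neighbour repetition standing in for the deformable/transposed-convolution upsampling; only the shapes are reproduced, not the learned operations:

```python
import numpy as np

def downsample(feat, k):
    # Stride-k subsampling stands in for a learned strided convolution.
    return feat[::k, ::k]

def upsample(feat, k):
    # Nearest-neighbour repeat stands in for transposed/deformable conv.
    return feat.repeat(k, axis=0).repeat(k, axis=1)

def fuse_features(img):
    scales = [4, 8, 16, 32]
    feats = [downsample(img, k) for k in scales]              # multi-scale features
    back = [upsample(f, k // 4) for f, k in zip(feats, scales)]  # all to 1/4 resolution
    return np.mean(np.stack(back), axis=0)                    # fused feature at 4x downsampling

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
print(fuse_features(img).shape)  # (8, 8)
```

The fused map lives at 1/4 of the input resolution, matching the description that the 4x fused feature is kept as the fish feature.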
Further, the CenterNet model determines from the fused features whether a target is a fish and, after determining that it is, outputs the center point and the features of the fish. When a frame of fish image is input into the CenterNet model, the input further comprises preset label data, which include the real features of the fish and the real center point of the fish. The output of the CenterNet model is compared with the real features and the real center point, the loss function value is calculated, the gradient is propagated back into the CenterNet model, the model is optimized, the next frame is input, and the training process is repeated until the loss value stabilizes; the model is then taken as the final CenterNet model. In the following, the width, height and pixel data of the fish are used as example features, but the features of the fish are not limited to these.
The loss function value comprises: a classification loss, a center-point offset loss, and target width and height losses. Specifically:
classification loss: and judging whether the pixels in the acquired fish image correspond to fish or not according to the pixel category. The following description will be made by taking a fish image subjected to downsampling by 4 times as an example: after the characteristics of the fish are extracted from the fish image after the down sampling is carried out for 4 times, the fish in the fish image is determined, the region where the fish is located is subjected to Gaussian fuzzy operation, and meanwhile, the CenterNet algorithm model predicts the category of each pixel. The extracted characteristic data of the fish comprises a large amount of pixel data, for example, some pixel corresponding categories are water, some pixel corresponding categories are fish, some pixel corresponding categories are sundries, and the like, the category of the pixel predicted according to the CenterNet algorithm model is compared with the real category corresponding to the pixel in the fish image, the success probability when the predicted pixel is the fish is determined, and the classification loss is calculated by utilizing a focal loss function. The focal loss function is specified as follows:
$$L = -\left(1 - y'\right)^{\gamma} \log\left(y'\right)$$
where γ is a hyperparameter, y' represents the probability that a pixel predicted as fish is correct, and a smaller L indicates a smaller classification loss.
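The focal loss above can be sketched numerically (positive-pixel branch only, matching the formula; γ = 2 is a common default, assumed here):

```python
import numpy as np

def focal_loss(y_pred, gamma=2.0, eps=1e-7):
    # L = -(1 - y')**gamma * log(y'): confident correct predictions are
    # down-weighted, so hard examples dominate the loss.
    y = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    return float(np.mean(-((1.0 - y) ** gamma) * np.log(y)))

# A confident fish prediction costs far less than an uncertain one.
print(focal_loss([0.9]) < focal_loss([0.2]))  # True
```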
Center-point offset loss: taking the 4x-downsampled fish image as an example: because the CenterNet model outputs the center point on the 4x-downsampled image, mapping that image back to the original resolution by 4x upsampling introduces a center-point offset error, which is compensated according to the computed offset loss. Specifically, the center-point offset loss is calculated with the smooth L1 loss function:
$$\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5\,x^{2}, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$
where x = f(x) − y, f(x) is the real center point of the fish, y is the center point output by the CenterNet model, and x is thus the difference between the real center point and the predicted center point.
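The smooth L1 loss above, written out directly (x is the gap between the true and predicted center point):

```python
def smooth_l1(x):
    # Quadratic near zero, linear for large errors: robust to outliers.
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5

print(smooth_l1(0.5))   # 0.125
print(smooth_l1(-3.0))  # 2.5
```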
Target width and height losses: taking the 4x-downsampled fish image as an example: because the CenterNet model outputs the fish width on the 4x-downsampled image, mapping that image back to the original resolution by 4x upsampling introduces a width error, which is compensated according to the width loss. Specifically, the width loss is calculated with the smooth L1 loss function (the width is used as the example):
$$\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5\,x^{2}, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}$$
where x = f(x) − y, f(x) is the real width of the fish, y is the width output by the CenterNet model, and x is their difference; the height loss is calculated in the same way, with width replaced by height.
The classification loss, the center-point offset loss, and the target width and height losses are computed and summed; when the sum tends to be stable, training of the CenterNet model is finished.
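A sketch of this stopping rule; the loss weights and the plateau window are illustrative assumptions (the patent only states that the terms are summed until the sum stabilizes):

```python
def total_loss(cls_l, off_l, w_l, h_l, weights=(1.0, 1.0, 0.1, 0.1)):
    # Weighted sum of the four loss terms; the 0.1 size weights follow a
    # common CenterNet convention and are assumed here, not taken from
    # the application.
    return sum(w * p for w, p in zip(weights, (cls_l, off_l, w_l, h_l)))

def has_stabilised(history, window=5, tol=1e-3):
    # "Tends to be stable": the last few loss values vary by less than tol.
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol

print(has_stabilised([1.0, 0.5, 0.30, 0.3001, 0.2999, 0.3002, 0.3001]))  # True
```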
CenterNet is introduced above in order to lead to CenterTrack: CenterTrack is implemented on the basis of CenterNet and improves upon it. The two differ as follows: the input of CenterNet is a single image and its output is the target width, height and center point; the input of CenterTrack is two adjacent images plus the hot spot map features of either frame, and its output is the target width and height, the offset of the target center point, and the center points of the same target. CenterNet only detects targets in the current frame and is discontinuous in time, whereas CenterTrack detects across two consecutive frames and thus resolves the temporal discontinuity. The invention trains the detection tracking model using an improved CenterTrack algorithm as the basic algorithm.
In one embodiment, the training process of the detection tracking model is as shown in fig. 3:
step 301, obtaining a sample image set, where the sample image set includes Q consecutive sample images and preset mark data, where S sample images form a group of sample images, S is less than or equal to Q, and the preset mark data includes: the real characteristic of the fish, the real center point of the fish and the offset of the real center point of the same fish.
The real features of the fish include the width and height of the fish; this embodiment takes width and height as examples, but the features of the fish are not limited to these and also include the contour, texture and shape of the fish.
In a specific embodiment, a camera captures consecutive fish images as sample images, and Labelme is used to annotate the fish in each sample image, specifically: a circumscribed rectangle is added around each fish in the sample image, and the preset label data are attached to the rectangle.
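The circumscribed rectangle can be derived from a Labelme-style polygon outline as follows; the annotation format (a list of vertex coordinates per fish) is assumed for illustration:

```python
def circumscribed_rect(points):
    # points: polygon vertices [(x, y), ...] outlining one fish.
    # The axis-aligned circumscribed rectangle is the min/max over vertices.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

print(circumscribed_rect([(3, 7), (10, 2), (6, 12)]))  # (3, 2, 10, 12)
```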
Step 302, performing the following training process on each group of sample images in the sample image set: inputting a group of sample images into an initial detection tracking model and processing each two consecutive sample frames as follows: obtaining the features of the fish, the center point of the fish, and the offset of the center point of the same fish output by the initial detection tracking model.
The output features of the fish likewise include the width and height of the fish; again, width and height serve as examples, and the features of the fish are not limited to these, also including the contour of the fish, the texture of the fish, the shape of the fish, and the like.
In a specific embodiment, N consecutive sample frames are input into the initial detection tracking model, which outputs the features of the fish; the fused features of two consecutive frames are extracted by DLA in the same way as in the CenterNet model. Taking downsampling factors of 4, 8, 16 and 32 as an example, the process is as follows: the two consecutive sample frames are each downsampled, and the first features of the fish are extracted from the downsampled frames; the downsampled frames are then upsampled, the second features of the fish are extracted from the upsampled frames, and a feature fusion operation yields the fused features.
In addition, the initial detection tracking model performs initialization processing on either frame of the two consecutive sample frames to obtain the hot spot map features of that frame, and obtains the features of the fish from the fusion features and the hot spot map features, where the hot spot map features are single-channel features generated with a Gaussian rendering function.
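The single-channel hot spot map can be pictured as a Gaussian peak rendered at each object center, as in CenterNet-style ground-truth heat maps. A minimal sketch, with `sigma` left as a free illustrative parameter rather than the radius rule an implementation would derive:

```python
import math

def render_gaussian(h, w, cx, cy, sigma):
    """Render a single-channel hot spot map of size h x w with a
    Gaussian peak (value 1.0) at centre (cx, cy)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

hm = render_gaussian(16, 16, 8, 8, 2.0)
```

Multiple fish would each contribute a peak, typically combined with an elementwise maximum so overlapping Gaussians do not exceed 1.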
Further, after the features of the fish are obtained, the features extracted from the two consecutive sample frames are compared, the position information of fish with similar features in the two frames is obtained, and the offset of the center point of each fish with similar features is determined from that position information; when the offset of the center point is within a preset range, the fish with similar features are judged to be the same fish. Finally, the initial detection tracking model outputs the width and height of the fish, the center point of the fish, and the offset of the center point of the same fish.
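The same-fish decision described above can be sketched as a two-part test: feature similarity (width and height here) plus a center-point offset inside a preset range. The thresholds and tuple layout are illustrative assumptions, not values from the patent:

```python
def same_fish(feat_a, feat_b, center_a, center_b, feat_tol, offset_max):
    """Return True when two detections in consecutive frames look like
    the same fish: each feature (width, height) differs by at most
    `feat_tol`, and the centre-point offset is within `offset_max`."""
    similar = all(abs(a - b) <= feat_tol for a, b in zip(feat_a, feat_b))
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    offset = (dx * dx + dy * dy) ** 0.5
    return similar and offset <= offset_max
```

For example, a fish of width 30 and height 12 near the same spot in both frames passes, while a fish twice the size, or one that has jumped far across the image, does not.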
Step 303, the calculation result of the loss function is obtained from the output result of the initial detection tracking model and the preset marking data, the gradient is back-propagated to the initial detection tracking model to optimize it, the next group of sample images is acquired from the sample image set, and the training process is repeated until the calculation result of the loss function stabilizes, at which point the initial detection tracking model is taken as the final detection tracking model.
In a specific embodiment, the calculation result of the loss function is obtained, the gradient is back-propagated to the initial detection tracking model to optimize it, and the final detection tracking model is determined, specifically as follows: calculate the target classification loss to obtain a first result; calculate the center-point offset loss to obtain a second result; calculate the target width and height loss to obtain a third result; and calculate the midpoint offset regression loss of the same target to obtain a fourth result. The four results are summed; when the summed result stabilizes, the detection tracking model has been trained successfully.
The target classification loss, center-point offset loss, target width loss, and target height loss are calculated in the same way as the loss functions of the CenterNet algorithm model. The midpoint offset regression loss of the same target is calculated with the smooth L1 loss function:
smoothL1(x) = 0.5 x^2 if |x| < 1, and |x| - 0.5 otherwise
where x = f(x) - y, f(x) is the real offset of the same fish, y is the offset of the same fish output by the detection tracking model, and x is the difference between the real offset of the same fish and the offset the model outputs.
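The smooth L1 term is the standard piecewise function, quadratic near zero and linear for large errors. A direct sketch, together with the four-term sum from the training step; the equal weighting of the terms is an assumption, since CenterNet-family models usually weight them:

```python
def smooth_l1(x):
    """Smooth L1 loss on x = f(x) - y, the gap between the real offset
    of the same fish and the offset the model outputs."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5

def total_loss(cls_l, center_offset_l, wh_l, track_offset_l):
    """Sum of the four described terms: classification, centre-point
    offset, width/height, and same-target midpoint-offset regression."""
    return cls_l + center_offset_l + wh_l + track_offset_l
```

Training stops when this summed loss stabilizes across groups of sample images.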
Specifically, during training, the contrast of the hot spot maps is enhanced, for example by deepening differences in color shade, so that differences between sample image features are clearer and the detection tracking model is more robust. Gaussian noise may also be added to each center point, so that the center point of the fish in the earlier of the two consecutive sample frames jitters locally, improving center-point localization and the matching of the same fish. In addition, with a certain probability a noise peak is randomly rendered into a sample image whose hot spot map features have been extracted, serving as a false positive sample, and with a certain probability such a sample image is randomly removed, serving as a false negative sample, so as to improve the accuracy, discrimination, and sensitivity of the detection tracking model.
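The jitter and false-positive/false-negative augmentations above can be sketched on center points directly. The shift range, probabilities, and function name are illustrative assumptions:

```python
import random

def jitter_centers(centers, max_shift, fp_prob, fn_prob, h, w, rng):
    """Augment a list of (x, y) centre points: locally shake each
    centre, drop some as false negatives, and occasionally add a
    spurious noise peak as a false positive. Results stay in-bounds."""
    out = []
    for (x, y) in centers:
        if rng.random() < fn_prob:          # remove as a false negative
            continue
        x += rng.randint(-max_shift, max_shift)
        y += rng.randint(-max_shift, max_shift)
        out.append((min(max(x, 0), w - 1), min(max(y, 0), h - 1)))
    if rng.random() < fp_prob:              # render a false-positive peak
        out.append((rng.randrange(w), rng.randrange(h)))
    return out

rng = random.Random(0)  # seeded for reproducibility
aug = jitter_centers([(10, 10), (40, 30)], 2, 0.1, 0.1, 64, 64, rng)
```

Each augmented center would then be rendered into the hot spot map with the Gaussian function, so the model learns to tolerate imperfect localization.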
In one embodiment, after the features of the fish in two consecutive frames of fish images are extracted, the fish may be classified by target according to their features, and the same type identifier may be assigned to fish of the same type. Classifying the fish by analyzing their features and assigning the same type identifier to fish of the same type makes it possible, in polyculture, to analyze the growth trends of different types of fish from their respective motion trajectories, to analyze all polycultured fish as a whole from those growth trends, and thus to provide effective reference data for the whole polyculture operation.
Step 103, the motion trajectory of the fish is obtained according to the identification of the fish in the N frames of fish images.
Specifically, after the detection tracking model outputs the offset of the center point of the same fish, the same fish is associated in time through a greedy matching algorithm. Assuming that the center point of a fish in the current frame is P and the output offset is d, the fish at position P can be matched with a fish in the previous frame lying within the range given by d relative to the current frame. If the matching succeeds, the same identifier is added to the successfully matched fish, and the motion trajectory of the fish is obtained from its identifier; otherwise, a new identifier is assigned to the fish newly appearing in the image, and a new motion trajectory is generated.
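The greedy association step can be sketched as follows: each current-frame center, shifted back by its predicted offset, claims the nearest unmatched previous-frame center within a matching radius; anything unclaimed becomes a new track. The radius `d_max` and the data layout are illustrative assumptions:

```python
def greedy_match(prev_centers, curr_centers, offsets, d_max):
    """Greedily associate current-frame fish with previous-frame fish.
    Returns {current index: previous index} for successful matches;
    unmatched current fish would get fresh IDs from the caller."""
    matches, used = {}, set()
    for i, ((cx, cy), (dx, dy)) in enumerate(zip(curr_centers, offsets)):
        px, py = cx - dx, cy - dy           # predicted previous position
        best, best_d = None, d_max
        for j, (qx, qy) in enumerate(prev_centers):
            if j in used:
                continue
            d = ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best               # same fish: reuse its identifier
            used.add(best)
    return matches

prev_centers = [(10, 10), (50, 50)]
curr_centers = [(12, 11), (80, 80)]         # second fish appears far away
offsets = [(2, 1), (0, 0)]
matches = greedy_match(prev_centers, curr_centers, offsets, 5.0)
```

Here the first fish matches its previous position exactly, while the second is outside the radius and would start a new trajectory.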
Step 104, the growth trend of the fish is evaluated according to the motion trajectory of the fish.
In one embodiment, fish with abnormal motion are identified from the motion trajectories and the trajectories of abnormally moving fish are filtered out; a fish that forms no motion trajectory, or whose trajectory is too short, is defined as abnormal, and the growth trend of the fish is evaluated from the filtered trajectories. In the ideal case where there are no abnormal fish, the growth trend is evaluated directly from the trajectories of all fish. Furthermore, aquaculture farmers determine the feeding conditions and growth environment of the fish according to the growth trend.
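The filtering rule above reduces to dropping trajectories shorter than a minimum length. A minimal sketch, with `min_len` as an assumed threshold:

```python
def filter_tracks(tracks, min_len):
    """Keep only fish whose trajectory (list of centre points) reached
    at least `min_len` points; shorter or empty tracks are treated as
    abnormal and removed before growth-trend evaluation."""
    return {fid: t for fid, t in tracks.items() if len(t) >= min_len}

tracks = {1: [(0, 0), (1, 1), (2, 2)],   # normal fish, 3 points
          2: [(5, 5)]}                   # abnormal: trajectory too short
kept = filter_tracks(tracks, 2)
```

The surviving trajectories are then the input to the growth-trend evaluation.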
In the method provided by this embodiment of the application, N consecutive frames of fish images are acquired, feature extraction is performed on each pair of consecutive frames, and the same fish in each pair is identified, which ensures temporal continuity of the images and therefore the accuracy of matching the same fish. The same identifier is assigned to the same fish, each fish is tracked by its identifier across the N frames to obtain its motion trajectory, and the growth trend of the fish is evaluated from that trajectory. Evaluating the growth trend from a continuously tracked trajectory yields more comprehensive fish information than analyzing a single image, so the evaluation result based on this information is more accurate. Moreover, because the evaluation takes temporal continuity into account and tracks each fish by a consistent identifier, it avoids the missed matches that rule-based target matching is prone to, improves the tracking effect, and makes the evaluation based on the tracked trajectories more accurate.
An embodiment of the present application further provides a fish growth assessment device; for details of its implementation, refer to the description in the method embodiments, which is not repeated here. As shown in fig. 4, the device mainly includes:
the first obtaining module 401 is configured to obtain N consecutive frames of fish images, where N is an integer greater than 1.
An extracting module 402, configured to perform the following processing on each two consecutive frames of fish images: extracting the characteristics of the fish in the two continuous frames of fish images, identifying the same fish in the two continuous frames of fish images, and distributing the same identification for the same fish.
A second obtaining module 403, configured to obtain a motion trajectory of the fish according to the identifier of the fish in the N frames of fish images.
And the evaluation module 404 is used for evaluating the growth trend of the fish according to the motion track of the fish.
Specifically, the evaluation module 404 is specifically configured to obtain a fish with abnormal motion according to the motion trajectory of the fish, and filter the motion trajectory of the fish with abnormal motion; and evaluating the growth trend of the fish according to the motion trail of the filtered fish.
Specifically, the extracting module 402 is configured to extract features of fish in two consecutive frames of fish images, obtain position information of the fish with similar features in the two consecutive frames of fish images, and determine an offset of a center point of the fish with similar features according to the position information; when the offset of the central point of the fish is within a preset range, judging the fish with similar characteristics as the same fish; wherein, the central point of the fish is the central point of the circumscribed rectangle of the fish.
In the device provided by this embodiment of the application, the first obtaining module 401 acquires N consecutive frames of fish images; the extracting module 402 extracts the features of the fish from each pair of consecutive frames and identifies the same fish in the pair, ensuring temporal continuity of the images and the accuracy of matching the same fish; the same identifier is assigned to the same fish, and the second obtaining module 403 tracks each fish by its identifier across the N frames to obtain its motion trajectory; finally, the evaluation module 404 evaluates the growth trend of the fish from its motion trajectory. Because the device evaluates the growth trend from continuously tracked trajectories, it obtains more comprehensive fish information than analysis of a single image, so the evaluation result based on this information is more accurate. Moreover, since the evaluation takes temporal continuity into account and tracks each fish by a consistent identifier, it avoids the missed matches that rule-based target matching is prone to, improves the tracking effect, and makes the trajectory-based evaluation more accurate.
Based on the same concept, an embodiment of the present application further provides an electronic device, as shown in fig. 5, the electronic device mainly includes: a processor 501, a communication component 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication component 502 and the memory 503 are in communication with each other through the communication bus 504. Wherein, the memory 503 stores the program that can be executed by the processor 501, and the processor 501 executes the program stored in the memory 503, implementing the following steps: acquiring continuous N frames of fish images, wherein N is an integer greater than 1; respectively processing each two continuous frames of fish images as follows: extracting the characteristics of the fish in the two continuous frames of fish images, identifying the same fish in the two continuous frames of fish images, and distributing the same identification for the same fish; obtaining the motion trail of the fish according to the identification of the fish in the N frames of fish images; and evaluating the growth trend of the fish according to the movement track of the fish.
The communication bus 504 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 504 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The communication component 502 is used for communication between the electronic device and other devices described above.
The Memory 503 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the aforementioned processor 501.
The Processor 501 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., and may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic devices, discrete gates or transistor logic devices, and discrete hardware components.
In a further embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform the method for assessing growth of fish described in the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, microwave). The computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A method for assessing growth of fish, comprising:
acquiring continuous N frames of fish images, wherein N is an integer greater than 1;
respectively processing each two continuous frames of fish images as follows: extracting the characteristics of the fish in the two continuous frames of fish images, identifying the same fish in the two continuous frames of fish images, and allocating the same identification to the same fish;
obtaining the motion trail of the fish according to the identification of the fish in the N frames of fish images;
and evaluating the growth trend of the fish according to the movement track of the fish.
2. The method of claim 1, wherein estimating the growth trend of the fish based on the motion trajectory of the fish comprises:
according to the movement track of the fish, obtaining the fish with abnormal movement, and filtering the movement track of the fish with abnormal movement;
and evaluating the growth trend of the fish according to the filtered motion track of the fish.
3. The method of claim 2, wherein extracting the fish feature from the two consecutive frames of fish images and identifying the same fish in the two consecutive frames of fish images comprises:
extracting the characteristics of the fish in the two continuous frames of fish images, acquiring the position information of the fish with similar characteristics in the two continuous frames of fish images, and determining the offset of the center point of the fish with similar characteristics according to the position information;
when the offset of the center point of the fish is within a preset range, judging that the fish with similar characteristics is the same fish;
wherein, the central point of the fish is the central point of the circumscribed rectangle of the fish.
4. The method of claim 3, wherein the step of extracting the fish features from the two consecutive frames of fish images, obtaining the position information of the fish with similar features in the two consecutive frames of fish images, and determining the offset of the center point of the fish with similar features according to the position information comprises:
inputting the two continuous frames of fish images into a detection tracking model;
extracting the characteristics of the fish in the two continuous frames of fish images through the detection tracking model, acquiring the position information of the fish with similar characteristics in the two continuous frames of fish images, and determining the offset of the center point of the fish with similar characteristics according to the position information;
and acquiring the characteristics of the fish, the central point of the fish and the offset of the central point of the same fish output by the detection tracking model.
5. The method of claim 4, wherein the training process of the detection and tracking model comprises:
acquiring a sample image set, wherein the sample image set comprises Q continuous sample images and preset mark data, S sample images form a group of sample images, S is smaller than or equal to Q, and the preset mark data comprises: the real characteristic of the fish, the real center point of the fish and the offset of the real center point of the same fish;
performing the following training process on each group of sample images in the sample image set respectively:
inputting the group of sample images into an initial detection tracking model, and respectively processing each two continuous frames of sample images as follows: obtaining the characteristics of the fish, the center point of the fish and the offset of the center point of the same fish output by the initial detection tracking model;
and calculating to obtain a calculation result of a loss function according to the output result of the initial detection tracking model and the preset mark data, propagating the gradient to the initial detection tracking model in a reverse direction, optimizing the initial detection tracking model, acquiring a next group of sample images from the sample image set, and repeatedly executing the training process until the calculation result of the loss function tends to be stable, wherein the initial detection tracking model is used as the final detection tracking model.
6. The method of claim 5, wherein obtaining the characteristics of the fish output by the detection and tracking model comprises:
respectively carrying out down-sampling on the two continuous frames of sample images, and extracting first features of respective fish of the two continuous frames of sample images after down-sampling;
up-sampling the two continuous frames of sample images after down-sampling, and extracting second features of fish of the two continuous frames of sample images after up-sampling;
acquiring the hot spot map characteristics of any one frame of sample image in the two continuous frames of sample images;
and obtaining the characteristics of the fish according to the first characteristics, the second characteristics and the hot spot diagram characteristics.
7. The method of any one of claims 1 to 6, further comprising, after extracting the features of the fish in the two consecutive frames of fish images:
and according to the characteristics of the fish, performing target classification on the fish, and allocating the same type of identification to the fish of the same type.
8. The method of claim 7, wherein the step of inputting the set of sample images into an initial detection and tracking model further comprises:
and adding a circumscribed rectangle to the fish in the sample image, and distributing the preset mark data to the circumscribed rectangle.
9. A fish growth assessment device, comprising:
the first acquisition module is used for acquiring continuous N frames of fish images, wherein N is an integer greater than 1;
the extraction module is used for respectively processing each continuous two frames of fish images as follows: extracting the characteristics of the fish in the two continuous frames of fish images, identifying the same fish in the two continuous frames of fish images, and allocating the same identification to the same fish;
the second acquisition module is used for acquiring the motion trail of the fish according to the identification of the fish in the N frames of fish images;
and the evaluation module is used for evaluating the growth trend of the fish according to the motion trail of the fish.
10. An electronic device, comprising: the system comprises a processor, a communication component, a memory and a communication bus, wherein the processor, the communication component and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor, executing a program stored in the memory, implements the method of assessing growth of a fish of any one of claims 1-8.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a method for assessing the growth of a fish as claimed in any one of the claims 1 to 8.
CN202010608682.XA 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium Active CN111753775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010608682.XA CN111753775B (en) 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010608682.XA CN111753775B (en) 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111753775A true CN111753775A (en) 2020-10-09
CN111753775B CN111753775B (en) 2023-09-26

Family

ID=72678163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010608682.XA Active CN111753775B (en) 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111753775B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232432A (en) * 2020-10-26 2021-01-15 西安交通大学 Security check X-ray image target detection and identification method based on improved central point detection
TWI801911B (en) * 2021-06-18 2023-05-11 國立臺灣海洋大學 Aquatic organism identification method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6160759A (en) * 1999-04-19 2000-12-12 Nestler; John Michael Method for determining probable response of aquatic species to selected components of water flow fields
CN106561532A (en) * 2016-11-08 2017-04-19 深圳技师学院 Method and device for monitoring activity of fish
CN108875647A (en) * 2018-06-22 2018-11-23 成都睿畜电子科技有限公司 A kind of motion track monitoring method and system based on livestock identity
CN109271694A (en) * 2018-09-06 2019-01-25 西安理工大学 Habitat recognition methods based on fish individual dynamic Simulation Techniques
TWI661770B (en) * 2018-05-31 2019-06-11 National Chin-Yi University Of Technology Intelligent deep learning agricultural and fishery training system
US20190228218A1 (en) * 2018-01-25 2019-07-25 X Development Llc Fish biomass, shape, and size determination
CN110476871A (en) * 2019-09-17 2019-11-22 浙江傲宋智能科技有限公司 A kind of cultured fishes growth monitoring system
CN110942045A (en) * 2019-12-05 2020-03-31 安徽信息工程学院 Intelligent fish tank feeding system based on machine vision
CN111325181A (en) * 2020-03-19 2020-06-23 北京海益同展信息科技有限公司 State monitoring method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Jianyu, CUI Shaorong, MIAO Xiangwen, LIU Ying: "Application and prospect of computer vision technology in aquaculture", Transactions of the Chinese Society of Agricultural Engineering, no. 08 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232432A (en) * 2020-10-26 2021-01-15 西安交通大学 Security check X-ray image target detection and identification method based on improved central point detection
CN112232432B (en) * 2020-10-26 2023-04-11 西安交通大学 Security check X-ray image target detection and identification method based on improved central point detection
TWI801911B (en) * 2021-06-18 2023-05-11 國立臺灣海洋大學 Aquatic organism identification method and system

Also Published As

Publication number Publication date
CN111753775B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
CN110598558B (en) Crowd density estimation method, device, electronic equipment and medium
EP3333768A1 (en) Method and apparatus for detecting target
KR101747220B1 (en) Adaptive image processing apparatus and method in image pyramid
CN108256404B (en) Pedestrian detection method and device
KR20180065889A (en) Method and apparatus for detecting target
CN111127508B (en) Target tracking method and device based on video
CN112633255B (en) Target detection method, device and equipment
CN115546705B (en) Target identification method, terminal device and storage medium
CN112417955A (en) Patrol video stream processing method and device
CN111027347A (en) Video identification method and device and computer equipment
CN111291646A (en) People flow statistical method, device, equipment and storage medium
CN110610123A (en) Multi-target vehicle detection method and device, electronic equipment and storage medium
CN111814690A (en) Target re-identification method and device and computer readable storage medium
CN111753775A (en) Fish growth assessment method, device, equipment and storage medium
US20170053172A1 (en) Image processing apparatus, and image processing method
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN110516731B (en) Visual odometer feature point detection method and system based on deep learning
CN111325181A (en) State monitoring method and device, electronic equipment and storage medium
CN114220087A (en) License plate detection method, license plate detector and related equipment
Fatemeh Razavi et al. Integration of colour and uniform interlaced derivative patterns for object tracking
CN111476132A (en) Video scene recognition method and device, electronic equipment and storage medium
Gonçalves et al. Using a convolutional neural network for fingerling counting: A multi-task learning approach
CN113239738B (en) Image blurring detection method and blurring detection device
CN114359332A (en) Target tracking method, device, equipment and medium based on depth image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant