CN110956198A - Visual weight measuring method for monocular camera

Visual weight measuring method for monocular camera

Info

Publication number
CN110956198A
Authority
CN
China
Prior art keywords
poultry
pictures
prediction
weight
picture
Prior art date
Legal status
Granted
Application number
CN201911050810.7A
Other languages
Chinese (zh)
Other versions
CN110956198B (en)
Inventor
赵玉良 (Zhao Yuliang)
陈若愚 (Chen Ruoyu)
沙晓鹏 (Sha Xiaopeng)
崔逸丰 (Cui Yifeng)
李文超 (Li Wenchao)
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN201911050810.7A
Publication of CN110956198A
Application granted
Publication of CN110956198B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a monocular camera visual weight measuring method, and relates to the technical field of artificial intelligence. The method comprises the following steps. Step 1: collect data. Step 2: preprocess the data, capture standard pictures of the poultry, and code all the standard pictures. Step 3: train a network so that a convolutional neural network learns the relationship between the pictures and the weights of the poultry, obtaining a prediction model. Step 4: acquire a prediction sample, sample n pictures of it, input all the pictures into the prediction model, and combine the n prediction results to obtain the weight of the poultry. The method has lower camera complexity and cost than binocular approaches, can be widely applied in farms, and automatically calculates the weight of the poultry through computer vision, making the system more automated.

Description

Visual weight measuring method for monocular camera
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a monocular camera visual weight measuring method.
Background
Every year, poultry slaughtering and processing enterprises handle roughly 10 billion birds (chickens and ducks) as raw material, mainly white-feather broilers and white-feather meat ducks. These raw materials have the advantages of a short growth cycle, stable genetics, and good meat quality. The slaughtering and processing process mainly uses a combined human-machine cutting mode. Unlike traditional manufacturing, it is a process of slaughtering live animals and dividing them up, going from one body to many parts, so the whole production process has an urgent need for process data acquisition. These data play an important role in raw-material settlement, production structure analysis, cost analysis, and so on.
The first link in poultry meat production and cutting is counting and weighing, that is, obtaining the number and weight specifications of the individual birds on the production line through line-side equipment. The traditional approach pairs an infrared counting device with an electronic weigher. As production enterprises impose increasingly lean requirements on the production process, the traditional approach can no longer meet the management requirements for data acquisition and analysis, so a new technical method is urgently needed to solve the problem.
Deep learning, by learning from large amounts of data, avoids explicit feature extraction and instead learns features implicitly from the training data. Deep learning techniques have achieved considerable success in the field of computer vision; through deep learning, computers can now solve tasks that used to be impossible.
Disclosure of Invention
The technical problem the invention aims to solve is to provide, against the defects of the prior art, a monocular camera visual weight measuring method. The method uses a convolutional neural network algorithm to regress the relationship between a large number of poultry pictures and their weights, and takes the average of the trained network's outputs for pictures of the same target at multiple viewing angles as the predicted weight of that target.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
The invention provides a monocular camera visual weight measuring method, which comprises the following steps:
Step 1: collect data. Acquire video of the poultry meat production line with a monocular camera; number all poultry on the line and record the weight corresponding to each number, ensuring that the collected weights cover a certain range and are distributed evenly; a baffle is placed behind the production line and covers the whole field of view shot by the monocular camera.
Step 2: preprocess the data, capture standard pictures of the poultry, and code all the standard pictures.
Step 3: train the network so that the convolutional neural network learns the relationship between poultry pictures and weights, obtaining a prediction model.
Step 3.1: acquire standard pictures of all poultry and import the pictures with their corresponding weight labels; divide the standard pictures into a training set and a test set by code, i.e. pictures with the same number but different viewing angles must all fall in the same set.
Step 3.2: initialize the parameters of the convolutional neural network, taking the standard pictures of the training set as input and the poultry weights as output; input the standard pictures into the network in sequence and train the convolutional neural network; when the prediction precision reaches a preset value, stop training and save the network parameters, obtaining the prediction model.
Step 3.3: evaluate the precision of the prediction model on the test set and judge whether it reaches the preset value; if so, output the prediction model; if not, return to step 3.1.
Step 4: acquire a prediction sample, sample n pictures of it, input all the pictures into the prediction model, and combine the n prediction results to obtain the weight of the poultry.
Step 4.1: sample n pictures of the sample to be predicted, each shot from a different viewing angle, and apply the data preprocessing of step 2 to the n pictures.
Step 4.2: predict the n preprocessed poultry pictures with the prediction model and take the average of the n predicted values as the final prediction of the poultry's weight.
Step 2 comprises the following specific steps.
Step 2.1: use a target tracking method to capture pictures of the poultry flowing along the production line; select a reference point when capturing, so that the poultry sits at the same position in every picture with no positional offset, and all output poultry pictures have the same size.
Step 2.2: split the captured poultry pictures into their R, G, B channels and keep the R-channel picture.
Step 2.3: set a threshold and remove the background of the R-channel picture obtained in step 2.2 by setting all pixel values below the threshold to zero; then divide every pixel value by 255 to normalize the picture, finally obtaining a standard picture containing only the poultry; label all standard pictures with their corresponding poultry codes.
The network structure parameters in step 3 are configured as follows. Convolutional part: the convolution kernel size is 3 x 3, each layer uses the softplus activation function, and each convolution layer is followed by max pooling after activation; after 3 convolution operations, the three-dimensional feature block is flattened into a feature vector and fed into the fully connected part. The fully connected part comprises two hidden layers, the first with 32 neurons and the second with 16 neurons; except for the output layer, all fully connected layers use the softplus activation function; both hidden layers use random deactivation (dropout) during training, with a dropout rate of 10%; the network finally outputs a single value. The training hyperparameters are set as follows: the learning rate uses a decay schedule, starting at 0.0001 with a decay rate of 0.99, decaying once every 1000 batches; the optimizer is Adam, and the loss function is the root-mean-square error.
The baffle in step 1 should be of a color that reflects little red light.
The beneficial effects of the above technical solution are as follows. The monocular camera visual weight measuring method provided by the invention has high accuracy and computing speed, and does not require manual feature extraction from the target picture; it effectively solves the multi-viewing-angle problem left open by current visual weight measuring methods; compared with a binocular camera, the camera complexity and cost are lower. The method can be widely applied in farms, automatically calculating the weight of the poultry through computer vision and making the system more automated. It has high accuracy, ports easily to other scenarios, and has good application prospects.
Drawings
FIG. 1 is a schematic diagram of the main structure provided by the embodiment of the present invention;
FIG. 2 is a flow chart of a method provided by an embodiment of the present invention;
FIG. 3 is a flow chart of a pre-processing stage provided by an embodiment of the present invention;
FIG. 4 is a flow chart of a predictive model training phase provided by an embodiment of the invention;
FIG. 5 is a block diagram of a convolutional neural network according to the present invention;
In the figures: 1 - baffle; 2 - moving direction of the line; 3 - iron frame; 4 - poultry; 5 - monocular camera; 6 - workstation; 7 - preprocessed data; 8 - deep-learning convolutional neural network algorithm; 9 - output data.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
The method of this example is as follows.
The hardware architecture of the method is shown in Figure 1 and comprises a baffle, a monocular camera, and a workstation. The baffle has a flat, matte, non-reflective surface that reflects little red light. The monocular camera is connected to the workstation and transmits the visual data; it should have a clear field of view and produce undistorted pictures. In this embodiment, the workstation is an industrial computer with high computing speed that can rapidly process the data.
The invention provides a monocular camera visual weight measuring method which, as shown in Figure 2, comprises the following steps:
Step 1: collect data. Acquire video of the poultry meat production line with a monocular camera. Number all birds on the line and record the weight corresponding to each number; the weights should be sampled in quantity and distributed evenly across a certain range, avoiding weight intervals with few samples. A baffle is placed behind the production line, covering the whole field of view shot by the monocular camera; its color should reflect little red light, such as blue or black. In this embodiment, the baffle is black.
In this embodiment, the baffle placed behind the line must cover the whole field of view shot by the monocular camera, and its color is chosen to reflect little red light, which simplifies the data preprocessing. The line should move slowly and uniformly, to prevent the suspended poultry from swinging widely and distorting the prediction result. The monocular camera should face the line squarely so that there is a fixed projection of the view in one direction; the camera collects the data and transmits them to the workstation in time.
Step 2: preprocess the data and capture standard pictures of the poultry, as shown in Fig. 3.
In this embodiment, the workstation preprocesses the video in a set way: frames are cropped to a fixed size, the R channel is extracted as a single-channel picture, and background clutter is removed.
Step 2.1: use a target tracking method to capture pictures of the poultry flowing along the production line; choose a suitable reference point when capturing, so that the poultry sits at the same position in every picture with no positional offset, and all output poultry pictures have the same size.
In this embodiment, the line moves at constant speed and the camera is fixed; the computer tracks the position of each bird in software so that every captured bird gets the correct number. A fixed picture position is chosen for the poultry and a reference point is selected, such as the two sides of the iron frame; the computer finds this reference point while tracking in real time, then cuts a region of fixed size around it.
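As a minimal sketch of this fixed-reference-point capture, assuming OpenCV, a pre-saved template image of the reference point (e.g. a patch of the iron frame), and illustrative crop offsets and sizes that the patent does not specify:

```python
import cv2

CROP_W, CROP_H = 128, 128  # assumed fixed output size; the patent only requires that it be fixed
template = cv2.imread("frame_ref.png", cv2.IMREAD_GRAYSCALE)  # hypothetical reference-point patch

def crop_bird(frame_bgr):
    """Locate the reference point by template matching, then cut a fixed-size window.

    Cropping at a fixed offset from the same reference point in every frame keeps the
    bird at the same picture position with no positional deviation.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)  # top-left corner of the best match
    x0, y0 = x + 40, y + 60                  # illustrative offsets from reference point to bird
    return frame_bgr[y0:y0 + CROP_H, x0:x0 + CROP_W]
```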
Step 2.2: because poultry flesh shows the strongest red reflectance, split the captured poultry picture into its R, G, B channels and keep the R-channel picture.
Step 2.3: set a threshold and remove the background of the R-channel picture obtained in step 2.2, preventing the background from influencing the training result; set all pixel values below the threshold to zero, then divide every pixel value by 255 to normalize the picture, finally obtaining a standard picture containing only the poultry. Label all standard pictures with their corresponding poultry codes: if the poultry in a standard picture is No. 4, the code of that picture is 4.
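A non-authoritative sketch of this R-channel split, background thresholding, and normalization, using OpenCV-style BGR arrays and NumPy; the threshold value 60 is an assumption, since the patent leaves the threshold unspecified:

```python
import numpy as np

def to_standard_picture(crop_bgr, threshold=60):
    """Keep only the red channel, zero out background pixels, and scale to [0, 1]."""
    r = crop_bgr[:, :, 2].astype(np.float32)  # OpenCV orders channels B, G, R
    r[r < threshold] = 0.0                    # pixel values below the threshold are set to zero
    return r / 255.0                          # divide by 255 for picture normalization
```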
Step 3: train the network. As shown in Fig. 4, the convolutional neural network learns the relationship between the pictures and the weights of the birds in this specific scene, yielding a prediction model.
Step 3.1: acquire standard pictures of all poultry and import the pictures with their corresponding weight labels; divide the standard pictures into a training set and a test set by code, i.e. pictures with the same number but different viewing angles must all fall in the same set.
In this embodiment, the ratio of the training set to the test set is 8:2; see the sketch below.
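Because every view of one bird must land in the same set, the 8:2 split is drawn over bird codes rather than over individual pictures. A minimal sketch of such a grouped split; the sample layout and seed are illustrative assumptions:

```python
import random

def split_by_bird(samples, train_ratio=0.8, seed=0):
    """samples: list of (bird_code, picture, weight); no bird may straddle both sets."""
    codes = sorted({code for code, _, _ in samples})
    random.Random(seed).shuffle(codes)
    cut = int(len(codes) * train_ratio)       # 0.8 gives the 8:2 ratio of this embodiment
    train_codes = set(codes[:cut])
    train = [s for s in samples if s[0] in train_codes]
    test = [s for s in samples if s[0] not in train_codes]
    return train, test
```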
Step 3.2: initialize the parameters of the convolutional neural network, taking the standard pictures of the training set as input and the poultry weights as output; input the standard pictures into the network in sequence, train the network with the chosen loss function and optimizer, and feed in test data at the same time to monitor the network's performance. When the prediction precision reaches a preset value (in this embodiment, a test-set MSE no higher than 0.007), stop training and save the network parameters, obtaining the prediction model used for prediction.
As shown in Fig. 5, the convolutional neural network is configured as follows. Convolutional part: the convolution kernel size is 3 x 3, each layer uses the softplus activation function, and each convolution layer is followed by max pooling after activation; after 3 convolution operations, the three-dimensional feature block is flattened into a feature vector and fed into the fully connected part. The fully connected part comprises two hidden layers, the first with 32 neurons and the second with 16 neurons; except for the output layer, all fully connected layers use the softplus activation function; both hidden layers use random deactivation (dropout) during training, with a dropout rate of 10%; the network finally outputs a single value, realizing the mapping from picture to weight. The training hyperparameters are set as follows: the learning rate uses a decay schedule, starting at 0.0001 with a decay rate of 0.99, decaying once every 1000 batches; the optimizer is Adam, and the loss function is the root-mean-square error. The training set and test set are fed into the convolutional neural network for parameter training.
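A sketch of this architecture and training setup in Keras. The kernel size, softplus activations, pooling, the 32/16 hidden layers, the 10% dropout, the Adam optimizer, the RMSE loss, and the 0.0001 learning rate decayed by 0.99 every 1000 batches follow the text above; the per-layer filter counts and the input resolution are assumptions, since the patent does not state them:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1)):  # input size assumed; single R channel
    m = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="softplus"), layers.MaxPooling2D(),  # filter counts assumed
        layers.Conv2D(32, 3, activation="softplus"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="softplus"), layers.MaxPooling2D(),
        layers.Flatten(),                            # 3-D feature block to feature vector
        layers.Dense(32, activation="softplus"), layers.Dropout(0.10),
        layers.Dense(16, activation="softplus"), layers.Dropout(0.10),
        layers.Dense(1),                             # single weight value, linear output
    ])
    # Learning rate 0.0001, decayed by 0.99 once every 1000 batches; Adam; RMSE loss
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-4, decay_steps=1000, decay_rate=0.99, staircase=True)
    rmse = lambda y, p: tf.sqrt(tf.reduce_mean(tf.square(y - p)))
    m.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss=rmse)
    return m
```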
Step 3.3: evaluate the precision of the prediction model on the test set and judge whether it reaches the preset value; if so, output the prediction model; if not, return to step 3.1 and continue training the convolutional neural network.
Step 4: acquire a prediction sample, sample n pictures of it, input all the pictures into the prediction model, and combine the n prediction results to obtain the weight of the poultry.
Step 4.1: sample n pictures of the sample to be predicted, each shot from a different viewing angle, and apply the data preprocessing of step 2 to the n pictures.
Step 4.2: predict the n preprocessed poultry pictures with the prediction model and take the average of the n predicted values as the final prediction of the poultry's weight.
In this algorithm, the convolutional neural network makes effective weight prediction possible for poultry seen under different views; the data preprocessing normalizes the data and reduces the influence of other factors (such as the picture background) on the prediction. The multi-view prediction mainly guards against chance error in any single prediction; using several pictures effectively avoids the accidental errors that come from predicting on one picture alone.
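The multi-view step computes w = (f(x1) + ... + f(xn)) / n, where f is the trained prediction model and x1, ..., xn are the n preprocessed views. A sketch, reusing the hypothetical to_standard_picture helper from the step 2.3 sketch:

```python
import numpy as np

def predict_weight(model, view_crops):
    """Preprocess the n views of one bird and average the n network outputs."""
    batch = np.stack([to_standard_picture(c) for c in view_crops])[..., np.newaxis]
    return float(model.predict(batch, verbose=0).mean())  # mean of the n predicted weights
```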
In this example we evaluated 50 ducks on a farm; the weight labels of the ducks are accurate to 0.05 kg. We ran prediction experiments with cross-validation, each time selecting 45 ducks by number for training and 5 for testing. We drew test sets repeatedly, ensuring that 23 different ducks were each drawn 3 times, and evaluated the results. The final precision of the model reaches 0.0588 kg, and the weight prediction accuracy is 97.85%.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.

Claims (4)

1. A monocular camera visual weight measuring method, characterized in that the method comprises the following steps:
step 1: collect data; acquire video of the poultry meat production line with a monocular camera; number all poultry on the line and record the weight corresponding to each number, ensuring that the collected weights cover a certain range and are distributed evenly; a baffle is placed behind the production line and covers the whole field of view shot by the monocular camera;
step 2: preprocess the data, capture standard pictures of the poultry, and code all the standard pictures;
step 3: train the network so that the convolutional neural network learns the relationship between poultry pictures and weights, obtaining a prediction model;
step 3.1: acquire standard pictures of all poultry and import the pictures with their corresponding weight labels; divide the standard pictures into a training set and a test set by code, i.e. pictures with the same number but different viewing angles must all fall in the same set;
step 3.2: initialize the parameters of the convolutional neural network, taking the standard pictures of the training set as input and the poultry weights as output; input the standard pictures into the network in sequence and train the convolutional neural network; when the prediction precision reaches a preset value, stop training and save the network parameters, obtaining the prediction model;
step 3.3: evaluate the precision of the prediction model on the test set and judge whether it reaches the preset value; if so, output the prediction model; if not, return to step 3.1;
step 4: acquire a prediction sample, sample n pictures of it, input all the pictures into the prediction model, and combine the n prediction results to obtain the weight of the poultry;
step 4.1: sample n pictures of the sample to be predicted, each shot from a different viewing angle, and apply the data preprocessing of step 2 to the n pictures;
step 4.2: predict the n preprocessed poultry pictures with the prediction model and take the average of the n predicted values as the final prediction of the poultry's weight.
2. The monocular camera visual weight measuring method according to claim 1, characterized in that step 2 comprises the following specific steps:
step 2.1: use a target tracking method to capture pictures of the poultry flowing along the production line; select a reference point when capturing, so that the poultry sits at the same position in every picture with no positional offset, and all output poultry pictures have the same size;
step 2.2: split the captured poultry pictures into their R, G, B channels and keep the R-channel picture;
step 2.3: set a threshold and remove the background of the R-channel picture obtained in step 2.2 by setting all pixel values below the threshold to zero; then divide every pixel value by 255 to normalize the picture, finally obtaining a standard picture containing only the poultry; label all standard pictures with their corresponding poultry codes.
3. The monocular camera visual weight measuring method according to claim 1, characterized in that the network structure parameters in step 3 are configured as follows: convolutional part: the convolution kernel size is 3 x 3, each layer uses the softplus activation function, and each convolution layer is followed by max pooling after activation; after 3 convolution operations, the three-dimensional feature block is flattened into a feature vector and fed into the fully connected part; the fully connected part comprises two hidden layers, the first with 32 neurons and the second with 16 neurons; except for the output layer, all fully connected layers use the softplus activation function; both hidden layers use random deactivation (dropout) during training, with a dropout rate of 10%; the network finally outputs a single value; the training hyperparameters are set as follows: the learning rate uses a decay schedule, starting at 0.0001 with a decay rate of 0.99, decaying once every 1000 batches; the optimizer is Adam, and the loss function is the root-mean-square error.
4. The monocular camera visual weight measuring method according to claim 1, characterized in that the baffle in step 1 should be of a color that reflects little red light.
CN201911050810.7A (priority date 2019-10-31, filing date 2019-10-31) Visual weight measurement method for monocular camera. Status: Active. Granted as CN110956198B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911050810.7A (granted as CN110956198B) | 2019-10-31 | 2019-10-31 | Visual weight measurement method for monocular camera

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201911050810.7A (granted as CN110956198B) | 2019-10-31 | 2019-10-31 | Visual weight measurement method for monocular camera

Publications (2)

Publication Number | Publication Date
CN110956198A (en) | 2020-04-03
CN110956198B (en) | 2023-07-14

Family

ID=69976536

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911050810.7A (Active, granted as CN110956198B) | Visual weight measurement method for monocular camera | 2019-10-31 | 2019-10-31

Country Status (1)

Country | Publications
CN | CN110956198B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
TWI282847B * | 2006-06-09 | 2007-06-21 | Univ Nat Taiwan Science Tech | Method to measure the weight based on machine vision
CN106044663A * | 2016-06-23 | 2016-10-26 | 福建工程学院 | Visual technology-based stone mine forklift with weight measuring function and weight measuring method of stone mine forklift
CN108537994A * | 2018-03-12 | 2018-09-14 | 深兰科技(上海)有限公司 | Intelligent commodity settlement system and method based on visual recognition and weight sensing technology
CN108961269A * | 2018-06-22 | 2018-12-07 | 深源恒际科技有限公司 | Pig weight measuring method and system based on image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王业琴 (Wang Yeqin): "Research on a computer-vision method for detecting duck egg weight" (计算机视觉鸭蛋重量检测方法研究), Anhui Agricultural Sciences (安徽农业科学), no. 07 *
聂卫东, 欧阳松, 李财莲 (Nie Weidong, Ouyang Song, Li Cailian): "Application of the BP neural network algorithm to predicting object weight from images" (BP神经网络算法在图像预测物体重量中的应用), Journal of Hunan Industry Polytechnic (湖南工业职业技术学院学报), no. 02 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113516635A * | 2021-06-15 | 2021-10-19 | 中国农业大学 | Fish-vegetable symbiotic system and vegetable nitrogen element demand estimation method based on fish behaviors
CN113516635B * | 2021-06-15 | 2024-02-27 | 中国农业大学 | Fish and vegetable symbiotic system and vegetable nitrogen element demand estimation method based on fish behaviors
CN115968813A * | 2021-10-15 | 2023-04-18 | 智逐科技股份有限公司 | Poultry health monitoring system and method thereof
CN114877979A * | 2022-03-23 | 2022-08-09 | 成都爱记科技有限公司 | Animal body weight measuring system based on artificial intelligence

Also Published As

Publication number | Publication date
CN110956198B (en) | 2023-07-14

Similar Documents

Publication Publication Date Title
CN111178197B Mask R-CNN and Soft-NMS fusion based instance segmentation method for group-housed adherent pigs
Shi et al. An automatic method of fish length estimation using underwater stereo system based on LabVIEW
CN110956198B (en) Visual weight measurement method for monocular camera
Zabawa et al. Detection of single grapevine berries in images using fully convolutional neural networks
CN114387520B Method and system for accurately detecting dense plums for robot picking
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN112232978B (en) Aquatic product length and weight detection method, terminal equipment and storage medium
Atienza-Vanacloig et al. Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model
CN112257564B (en) Aquatic product quantity statistical method, terminal equipment and storage medium
CN113592896B (en) Fish feeding method, system, equipment and storage medium based on image processing
Ji et al. Apple target recognition method in complex environment based on improved YOLOv4
Lainez et al. Automated fingerlings counting using convolutional neural network
CN112184699A (en) Aquatic product health detection method, terminal device and storage medium
CN112232977A (en) Aquatic product cultivation evaluation method, terminal device and storage medium
CN113435355A (en) Multi-target cow identity identification method and system
CN115861721B (en) Livestock and poultry breeding spraying equipment state identification method based on image data
Rahim et al. Deep learning-based accurate grapevine inflorescence and flower quantification in unstructured vineyard images acquired using a mobile sensing platform
CN114202563A (en) Fish multi-target tracking method based on balance joint network
CN115471871A (en) Sheldrake gender classification and identification method based on target detection and classification network
McLeay et al. Deep convolutional neural networks with transfer learning for waterline detection in mussel farms
CN113255549B (en) Intelligent recognition method and system for behavior state of wolf-swarm hunting
CN108921872B (en) Robust visual target tracking method suitable for long-range tracking
Muñoz-Benavent et al. Impact evaluation of deep learning on image segmentation for automatic bluefin tuna sizing
CN109684953A (en) The method and device of pig tracking is carried out based on target detection and particle filter algorithm
CN117333948A (en) End-to-end multi-target broiler behavior identification method integrating space-time attention mechanism

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant