CN109785337B - In-column mammal counting method based on example segmentation algorithm - Google Patents

In-column mammal counting method based on example segmentation algorithm

Info

Publication number
CN109785337B
Authority
CN
China
Prior art keywords
image
test
images
target
border
Prior art date
Legal status
Active
Application number
CN201811588576.9A
Other languages
Chinese (zh)
Other versions
CN109785337A (en)
Inventor
苍岩
陈婵
乔玉龙
陈春雨
何恒翔
李�诚
唐圣权
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201811588576.9A priority Critical patent/CN109785337B/en
Publication of CN109785337A publication Critical patent/CN109785337A/en
Application granted granted Critical
Publication of CN109785337B publication Critical patent/CN109785337B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an in-column mammal counting method based on an example segmentation algorithm, belonging to the field of computer vision. Images of the mammals in the fence are collected by a camera and sent to a workstation; images with clear target contours are then selected and divided into a training set, a verification set and a test set, which serve as the data set for model training. Next, a segmentation model for testing is generated by a deep-learning example segmentation algorithm. The test-set images are input in turn into the trained segmentation model for prediction, and the test results and test effect graphs are output; after testing is finished, the test effect graphs are saved automatically. Finally, the target bounding boxes in the test results are counted to obtain the number of targets in each image.

Description

In-column mammal counting method based on example segmentation algorithm
Technical Field
The invention relates to the field of computer vision, in particular to an in-column mammal counting method based on an example segmentation algorithm.
Background
With the development of intensive, large-scale farming, the number of large breeding farms keeps increasing, and counting the mammals in a breeding farm accurately and in real time has become one of the key problems that large-scale breeding enterprises need to solve. At present, counting in breeding farms is done mainly by hand. Manually counting thousands of mammals is time-consuming and labor-intensive, the accuracy is low, several recounts are required, and the labor cost is very high, far exceeding the cost of counting equipment; moreover, equipment is a one-time investment while labor is a continuous expense. A 2017 invention patent (CN206039601U) proposes counting laboratory mice with sensing equipment: an animal waiting box and an animal storage box are connected by an animal conveying pipe on which an infrared sensor is installed; the mice to be counted are transferred into the waiting box, driven through the conveying pipe, sensed as they pass the infrared sensor, and the sensing signals are transmitted to an automatic counter. The equipment is complicated, the animals are difficult to control, real-time counting is impossible, and the method is unsuitable for mammals of larger size. A 2018 Chinese invention patent (CN108509913A) counts people indoors by image processing: a camera installed above the room acquires an image and, after preprocessing, the Sobel operator is applied for edge detection to obtain the target edge contours, which are then post-processed to complete contour detection and count the targets. The image processing steps are cumbersome and slow, the real-time performance is poor, the algorithm is sensitive to noise, the processing effect is poor, and the counting result is inaccurate. A 2018 invention patent (CN108416250A) proposes a people counting model consisting of an image feature extraction sub-model and an SSD classification-regression sub-model built on a convolutional neural network. The model is complex, the real-time performance is poor, and because only the bounding box of each target is detected, mutually occluded targets are easily missed.
In summary, there is currently no simple and practical mammal counting method that can accurately detect the animals in each fence of a breeding farm and count them in real time. In view of this situation, the invention uses a wide-angle camera to collect images of the mammals in each fence of a breeding farm, detects the targets in the images accurately with an example segmentation technique, and counts the number of targets in real time.
Disclosure of Invention
The invention aims to provide an in-column mammal counting method based on an example segmentation algorithm, so that the mammals in the fence can be accurately segmented and counted in real time.
An in-column mammal counting method based on an example segmentation algorithm, comprising the steps of:
step one: the image acquisition part collects images of the mammals in the fence through a camera and sends them to the workstation;
step two: the image preprocessing part selects images with clear target contours, divides them into a training set, a verification set and a test set, and uses them as the data set for model training;
step three: the model training part generates a segmentation model for testing through a deep-learning example segmentation algorithm;
step four: the model testing part, in the testing stage, inputs the test-set images in turn into the segmentation model generated by training for prediction and outputs the test results and test effect graphs; after testing is finished, the test effect graphs are saved automatically for later review;
step five: the target counting part counts the target bounding boxes in the test results to obtain the number of targets in each image; after testing is finished, the image names and the corresponding target numbers are saved automatically.
The first step comprises the following steps:
step 1.1: installing a wide-angle camera above the mammal feeding fence, and adjusting the angle of the wide-angle camera to enable the wide-angle camera to shoot the whole fence;
step 1.2: the workstation reads the parameters of the wide-angle camera, remotely controls it to acquire images, and stores the acquired images on the workstation.
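By way of illustration, a minimal Python sketch of this acquisition step is given below, assuming the wide-angle camera exposes a video stream that OpenCV can read; the stream address, save directory and capture interval are placeholders rather than parameters taken from the patent.

```python
import time
from pathlib import Path

import cv2  # OpenCV, used here only to read the camera stream and write images

# Hypothetical stream address and output folder; in practice these come from
# the camera parameters read by the workstation.
STREAM_URL = "rtsp://192.168.1.64/stream1"
SAVE_DIR = Path("collected_images")

def capture_images(num_frames=10, interval_s=5.0):
    """Grab frames from the remote wide-angle camera and save them on the workstation."""
    SAVE_DIR.mkdir(exist_ok=True)
    cap = cv2.VideoCapture(STREAM_URL)
    saved = 0
    while saved < num_frames:
        ok, frame = cap.read()
        if not ok:
            break  # stream dropped, stop capturing
        cv2.imwrite(str(SAVE_DIR / f"pen_{saved:04d}.jpg"), frame)
        saved += 1
        time.sleep(interval_s)  # space out the captures of the whole fence
    cap.release()
    return saved
```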
The second step comprises the following steps:
step 2.1: selecting images with clear target contours and dividing them into a training set, a verification set and a test set at a ratio of 8:1:1;
step 2.2: annotating the training-set and verification-set images with the annotation tool VGG Image Annotator (VIA), drawing a closed polygon along each target contour in the image and labeling the polygon region with the mammal name;
step 2.3: after annotation of the training-set images is finished, generating the annotation file via_region_data.json.
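As an illustration of step 2.1, the following Python sketch performs the 8:1:1 split; the train and val folder names follow the detailed description below, while the test folder name and the fixed random seed are assumptions of the sketch.

```python
import random
import shutil
from pathlib import Path

def split_dataset(image_dir, out_dir, seed=0):
    """Split the selected clear-contour images into train/val/test folders at 8:1:1."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)          # reproducible shuffle before splitting
    n_train = int(0.8 * len(images))
    n_val = int(0.1 * len(images))
    splits = {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],
    }
    for name, files in splits.items():
        dst = Path(out_dir) / name
        dst.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dst / f.name)         # copy each image into its split folder
    return {name: len(files) for name, files in splits.items()}
```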
The third step comprises the following steps:
step 3.1: firstly, feature extraction of the targets in the image is completed using a deep residual network (ResNet101) and a feature pyramid network (FPN); the ResNet101 network is divided into five stages from bottom to top, Stage1-Stage5, which output feature maps of sizes 256×256, 128×128, 64×64, 32×32 and 16×16 respectively; the FPN network fuses the features of adjacent levels in turn from top to bottom, and all levels are fed together into the RPN network to generate candidate regions;
step 3.2: generation of candidate regions (proposals) is completed using the region proposal network (RPN); the RPN slides a 3×3 window over each level's feature map and, taking the center of the sliding window as the anchor point, generates rectangular boxes (anchors) with different aspect ratios on each feature map; the anchors are mapped back onto the original image through the mapping relation between each feature map and the original image;
step 3.3: classifying and regressing the anchors; the intersection over union (IoU) between each anchor and the target's real bounding box on the original image is calculated, regions with IoU greater than 0.7 are classified as foreground, and the corresponding anchors are taken as proposals; bounding-box regression is then performed on the proposals, i.e. a mapping is found that maps the original proposal P to a predicted bounding box Ĝ that is closer to the target's real bounding box G:

f(P_x, P_y, P_w, P_h) = (Ĝ_x, Ĝ_y, Ĝ_w, Ĝ_h) ≈ (G_x, G_y, G_w, G_h)    (1)

the proposal is first translated using formula (2) and formula (3), and then scaled using formula (4) and formula (5) to complete the mapping:

Ĝ_x = P_w·d_x(P) + P_x    (2)
Ĝ_y = P_h·d_y(P) + P_y    (3)
Ĝ_w = P_w·exp(d_w(P))    (4)
Ĝ_h = P_h·exp(d_h(P))    (5)
step 3.4: removing bounding boxes that cross the image border and applying non-maximum suppression to the remaining boxes: the scores of all bounding boxes are sorted in descending order, the highest-scoring box among the unprocessed boxes is selected, the IoU of each remaining box with this highest-scoring box is calculated, boxes whose IoU exceeds a given threshold are discarded, and the remaining boxes are traversed in the same way (a code sketch of this regression and suppression is given after step 3.5);
step 3.5: the targets are segmented and classified using a fully convolutional network and fully connected layers, respectively.
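The following NumPy sketch (referred to in step 3.4) illustrates the bounding-box regression of formulas (2)-(5) and the non-maximum suppression of step 3.4. The (x1, y1, x2, y2) box layout and the default suppression threshold of 0.7 are assumptions of the sketch; only the regression form and the 0.7 foreground IoU threshold follow the description above.

```python
import numpy as np

def decode_boxes(proposals, deltas):
    """Apply the regression of formulas (2)-(5): shift the center, then rescale."""
    px = (proposals[:, 0] + proposals[:, 2]) / 2
    py = (proposals[:, 1] + proposals[:, 3]) / 2
    pw = proposals[:, 2] - proposals[:, 0]
    ph = proposals[:, 3] - proposals[:, 1]
    dx, dy, dw, dh = deltas.T
    gx = pw * dx + px                 # formula (2): translation in x
    gy = ph * dy + py                 # formula (3): translation in y
    gw = pw * np.exp(dw)              # formula (4): scaling of the width
    gh = ph * np.exp(dh)              # formula (5): scaling of the height
    return np.stack([gx - gw / 2, gy - gh / 2, gx + gw / 2, gy + gh / 2], axis=1)

def iou(box, boxes):
    """Intersection over union of one box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.7):
    """Keep the highest-scoring box, drop overlapping ones, and repeat (step 3.4)."""
    order = np.argsort(scores)[::-1]   # descending order of scores
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) <= iou_thresh]
    return keep
```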
The invention has the beneficial effects that: a high-precision, real-time method for counting mammals in the fences of a breeding farm is provided. The workstation remotely controls the wide-angle camera to acquire images, so counting can be performed in real time and the stress caused to the mammals by photographing in the farm is avoided. A neural network model is trained instead of relying on complicated image preprocessing steps; precise target features are extracted by the deep residual network and the feature pyramid network, which improves the segmentation accuracy; target segmentation with the fully convolutional network effectively improves the recognition of mutually occluded targets and calibrates the target bounding boxes. The targets are counted from the number of target bounding boxes in the test result, and the counting is fast and accurate.
Drawings
FIG. 1 is a general block diagram of the present invention;
FIG. 2 is a schematic of the image acquisition of the present invention;
FIG. 3a is a schematic view of an annotated image of the present invention;
FIG. 3b is a schematic diagram of annotation data of an image according to the present invention;
FIG. 4 is a feature map output by the feature extraction network of the present invention;
FIG. 5a is a schematic view of anchors mapped from the feature map back to the original image according to the present invention;
FIG. 5b is a schematic diagram of the anchors after bounding-box regression and non-maximum suppression according to the present invention;
FIG. 6 is a schematic diagram of the output of the region proposal network according to the present invention;
FIG. 7 is a graph illustrating the segmentation results of the object of the present invention;
FIG. 8a is a schematic diagram of an image test result according to the present invention;
FIG. 8b is a schematic diagram of the coordinates of a target bounding box according to the present invention;
FIG. 8c is a schematic diagram of the confidence level of the segmented target according to the present invention;
FIG. 9a is a graphical illustration of the test effect of an image of the present invention;
FIG. 9b is a diagram showing the effect of the test of the present invention after the test is completed;
FIG. 10a is a schematic diagram of the counting results of the present invention;
FIG. 10b is a schematic diagram of the saved image names and corresponding counting results according to the present invention;
FIG. 11 is a schematic view of an image acquisition system according to the present invention;
FIG. 12 is a diagram of a feature extraction network architecture in accordance with the present invention;
FIG. 13 is a schematic diagram of bounding-box regression according to the present invention;
FIG. 14 is a flow chart of object segmentation in accordance with the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Example 1:
the technical solution of the invention mainly comprises the following parts:
1) image acquisition section
The image acquisition part completes the collection and saving of images of the mammals in each fence of the breeding farm. The invention uses a wide-angle camera to collect the images: the camera is installed above the fence, the workstation reads the camera parameters and remotely controls the camera to perform the acquisition, and the acquired images are saved on the workstation.
2) Image preprocessing section
Images with clear target contours are selected and divided into a training set, a verification set and a test set at a ratio of 8:1:1. The training-set and verification-set images are annotated with the annotation tool VGG Image Annotator (VIA): a closed polygon is drawn along each target contour in the image and the polygon region is labeled with the mammal name, for example "pig". After annotation of the training-set images is finished, the annotation file via_region_data.json is generated; the training-set images and their annotation file are placed in the train folder, and the verification set and its annotation file are placed in the val folder, forming the data set for model training.
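For illustration, the following sketch turns a via_region_data.json entry into one binary mask per annotated animal; the field names follow the VGG Image Annotator 2.x polygon export format, and the image height and width are assumed to be supplied by the caller.

```python
import json
import numpy as np
from skimage.draw import polygon  # rasterizes a closed polygon into pixel indices

def load_via_masks(json_path, image_name, height, width):
    """Return one binary mask per annotated animal in `image_name`."""
    with open(json_path) as f:
        via = json.load(f)
    masks = []
    for entry in via.values():
        if entry.get("filename") != image_name:
            continue
        for region in entry.get("regions", []):
            shape = region["shape_attributes"]       # VIA polygon: all_points_x / all_points_y
            rr, cc = polygon(shape["all_points_y"], shape["all_points_x"],
                             (height, width))
            mask = np.zeros((height, width), dtype=bool)
            mask[rr, cc] = True                      # fill the polygon interior
            masks.append(mask)
    return masks
```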
3) Model training part
The invention adopts an example segmentation algorithm based on deep learning and generates a segmentation model for testing through training.
First, feature extraction of the targets in the image is completed using a deep residual network (ResNet101) and a feature pyramid network (FPN).
The ResNet101 network is divided into five stages from bottom to top, Stage1-Stage5, which output feature maps of sizes 256×256, 128×128, 64×64, 32×32 and 16×16 respectively; the FPN network fuses the features of adjacent levels in turn from top to bottom, and all levels are fed together into the RPN network to generate candidate regions.
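A schematic PyTorch sketch of this top-down fusion is given below; the 256-channel lateral width, the 1×1 and 3×3 convolutions and the nearest-neighbor upsampling follow the standard FPN design and are not taken from the patent itself.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNTopDown(nn.Module):
    """Merge backbone stages C2-C5 into pyramid levels P2-P5 (FPN top-down path)."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 lateral convolutions project every backbone stage to a common width.
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        # 3x3 output convolutions smooth the merged maps.
        self.output = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, feats):            # feats = [C2, C3, C4, C5], fine to coarse
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # Top-down: upsample the coarser level and add it to the lateral map below it.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return [out(lat) for out, lat in zip(self.output, laterals)]  # [P2, P3, P4, P5]
```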
Next, generation of candidate regions (proposals) is completed using the region proposal network (RPN).
The first step: the RPN slides a 3×3 window over each level's feature map and, taking the center of the sliding window as the anchor point, generates rectangular boxes (anchors) with different aspect ratios on each feature map. The anchors are mapped back onto the original image through the mapping relation between each feature map and the original image.
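The anchor construction can be sketched as follows; the base size and the aspect-ratio set are illustrative values rather than the patent's settings, and the stride is the scale factor between the original image and the feature map.

```python
import numpy as np

def generate_anchors(feature_h, feature_w, stride, base_size=64,
                     ratios=(0.5, 1.0, 2.0)):
    """Return (N, 4) anchors as (x1, y1, x2, y2) in original-image coordinates."""
    anchors = []
    for y in range(feature_h):
        for x in range(feature_w):
            # Center of the sliding window mapped back onto the original image.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for r in ratios:
                w = base_size * np.sqrt(r)      # ratio r = width / height
                h = base_size / np.sqrt(r)
                anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors, dtype=np.float32)
```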
The second step: the anchors are classified and regressed. The IoU between each anchor and the target's real bounding box (generated from the annotation data) is calculated on the original image; regions with IoU greater than 0.7 are classified as foreground, and the corresponding anchors are taken as proposals. Bounding-box regression is then performed on the proposals, i.e. a mapping is found that maps the original proposal P to a predicted bounding box Ĝ that is closer to the target's real bounding box G:

f(P_x, P_y, P_w, P_h) = (Ĝ_x, Ĝ_y, Ĝ_w, Ĝ_h) ≈ (G_x, G_y, G_w, G_h)    (1)

The proposal is first translated using formulas (2) and (3), and then scaled using formulas (4) and (5) to complete the mapping:

Ĝ_x = P_w·d_x(P) + P_x    (2)
Ĝ_y = P_h·d_y(P) + P_y    (3)
Ĝ_w = P_w·exp(d_w(P))    (4)
Ĝ_h = P_h·exp(d_h(P))    (5)
the third step: removing boundary frames which are out of bounds, and carrying out non-maximum value suppression on the rest boundary frames, namely arranging scores of all boundary frames in a descending order, selecting the highest score and the corresponding boundary frames, selecting the boundary frame with the highest score from unprocessed boundary frames, calculating IoU of the boundary frame with the current (highest score), if IoU is greater than a given threshold value, discarding the boundary frame, and traversing the rest boundary frames according to the process.
Finally, the targets are segmented and classified using the fully convolutional network and the fully connected layers, respectively.
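As an illustration of the mask branch, a minimal PyTorch sketch is given below; the channel widths, the 14×14 RoI feature size, the 28×28 output mask and the two-class setting (background and "pig") are assumptions rather than the patent's exact configuration.

```python
import torch.nn as nn

class MaskHead(nn.Module):
    """Per-RoI mask branch: a small fully convolutional head predicting per-class masks."""

    def __init__(self, in_channels=256, num_classes=2):   # background + "pig"
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.upsample = nn.ConvTranspose2d(256, 256, 2, stride=2)  # 14x14 -> 28x28
        self.predict = nn.Conv2d(256, num_classes, 1)               # one mask per class

    def forward(self, roi_features):       # (N, 256, 14, 14) pooled RoI features
        x = self.convs(roi_features)
        x = self.upsample(x).relu()
        return self.predict(x).sigmoid()   # (N, num_classes, 28, 28) mask probabilities
```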
4) Model test section
In the testing stage, the test-set images are input in turn into the segmentation model generated by training for prediction, and the test results and test effect graphs are output. After testing is finished, the test effect graphs are saved automatically for later review.
5) Target counting part
The test result of the invention contains the bounding box of every target, so counting the target bounding boxes in the test result gives the number of targets in the image. Counting the targets in a single image takes about 0.2 seconds, so the real-time performance is good. After testing is finished, the image names and the corresponding target counts are saved automatically, which is convenient for the statistics of the breeding farm.
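A minimal sketch of this counting step is shown below: the count for each image is simply the number of predicted bounding boxes, and the image names with their counts are written to a file for the farm's statistics; the result-dictionary layout and the output file name are assumptions.

```python
import csv

def count_and_save(results, out_csv="counts.csv"):
    """results: {image_name: detection dict with a 'rois' array of bounding boxes}."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "count"])
        for name, det in results.items():
            count = len(det["rois"])          # one bounding box per detected animal
            writer.writerow([name, count])
```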
Example 2:
FIG. 1 is the general block diagram of the invention. The experiment of the invention takes live pigs as an example: the wide-angle camera collects images of the live pigs in each fence of the breeding farm, and an image collected on site is shown in FIG. 2. The annotated image is shown in FIG. 3a and its annotation data in FIG. 3b. The feature maps output by the feature extraction network are shown in FIG. 4. The anchors mapped from the feature map back to the original image are shown in FIG. 5a; the anchors after bounding-box regression and non-maximum suppression are shown in FIG. 5b, where the dashed boxes are the original bounding boxes and the solid boxes are the regressed boxes. The output of the region proposal network is shown in FIG. 6. The segmentation result of the targets is shown in FIG. 7, where the green contour is the annotated contour and the red contour is the predicted contour. The image test result is shown in FIG. 8a, the generated coordinates of the target bounding boxes in FIG. 8b, and the confidence of the segmented targets in FIG. 8c. The test effect graph of an image is shown in FIG. 9a and contains the confidence, bounding box and mask region of each target; after the test is completed the test effect graphs are saved to a designated folder, as shown in FIG. 9b. The counting results are shown in FIG. 10a, and the saved image names with the corresponding counting results in FIG. 10b. The image acquisition system is shown in FIG. 11, the structure of the feature extraction network in FIG. 12, the bounding-box regression in FIG. 13, and the target segmentation flow chart in FIG. 14.
The method uses a wide-angle camera to collect images of the mammals in each fence of the breeding farm, selects images with clear target contours to build the data set, trains with the example segmentation algorithm, inputs the test-set images into the model generated by training, outputs the test results and test effect graphs, and finishes counting the number of targets in each image according to the image test results.
The first step: install the wide-angle camera above the mammal feeding fence and adjust its angle so that the camera can photograph the whole fence.
The second step: the workstation reads the parameters of the wide-angle camera, remotely controls it to acquire images, and stores the acquired images on the workstation.
The third step: process the acquired images. Images with clear target contours are selected and divided into a training set, a verification set and a test set at a ratio of 8:1:1. The training-set and verification-set images are annotated with the annotation tool VIA: a closed polygon is drawn along each target contour in the image and the polygon region is labeled with the mammal name, for example "pig". After annotation of the training-set images is finished, the annotation file via_region_data.json is generated; the training-set images and their annotation file are placed in the train folder, and the verification set and its annotation file are placed in the val folder, forming the data set for model training.
The fourth step: model training. First, the feature extraction network (ResNet101+FPN) extracts the target features. Next, the region proposal network generates candidate regions: anchors with different aspect ratios are generated on each level's feature map and mapped back onto the original image according to the mapping relation between each feature map and the original image. The anchors are then classified and regressed: the IoU between each anchor and the target's real bounding box on the original image is calculated, regions with IoU greater than 0.7 are classified as foreground, the corresponding anchors are taken as proposals, and bounding-box regression is performed on the proposals so that the predicted bounding boxes come closer to the real bounding boxes. Bounding boxes that cross the image border are removed and non-maximum suppression is applied to the rest. Finally, the targets are segmented and classified using the fully convolutional network and the fully connected layers respectively, the mask region of each target is generated, and the target class is output. Training produces a model file used for testing images.
The fifth step: model testing. The test-set images are input in turn into the segmentation model generated by training for prediction, and the test results and test effect graphs are output; the test results include the image name, the original image information, and the confidence, bounding box and mask region of each segmented target, and the confidence, bounding box and mask region of the segmented targets are displayed in the test effect graph.
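For illustration, the following OpenCV sketch renders a test effect image of the kind described, overlaying the mask region, bounding box and confidence of each segmented target; the field names boxes, scores and masks and the "pig" label are assumptions about the test-result layout, not the patent's actual output format.

```python
import cv2
import numpy as np

def save_effect_image(image, det, out_path):
    """Overlay masks, bounding boxes and confidences, then save the effect image."""
    vis = image.copy()
    for box, score, mask in zip(det["boxes"], det["scores"], det["masks"]):
        color = np.random.randint(0, 255, size=3).tolist()          # one color per target
        vis[mask] = (0.5 * vis[mask] + 0.5 * np.array(color)).astype(np.uint8)  # mask region
        x1, y1, x2, y2 = [int(v) for v in box]
        cv2.rectangle(vis, (x1, y1), (x2, y2), color, 2)             # bounding box
        cv2.putText(vis, f"pig {score:.2f}", (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)         # confidence label
    cv2.imwrite(out_path, vis)
```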
The sixth step: after the test is finished, the test effect graphs of the images are saved.
The seventh step: target counting. The bounding boxes of the segmented targets produced during image testing are counted, which gives the number of targets in each image.
The eighth step: after counting is finished, the name of each test image and the corresponding counting result are saved, which is convenient for the statistics of the breeding farm.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. An in-column mammal counting method based on an example segmentation algorithm, comprising the steps of:
step one: the image acquisition part collects images of the mammals in the fence through a camera and sends them to the workstation;
step two: the image preprocessing part selects images with clear target contours, divides them into a training set, a verification set and a test set, annotates the training-set and verification-set images, and uses them as the data set for model training;
step three: the model training part generates a segmentation model for testing through a deep-learning example segmentation algorithm;
step four: the model testing part, in the testing stage, inputs the test-set images in turn into the segmentation model generated by training for prediction and outputs the test results and test effect graphs; after testing is finished, the test effect graphs are saved automatically for later review;
step five: the target counting part counts the target bounding boxes in the test results to obtain the number of targets in each image; after testing is finished, the image names and corresponding target numbers are saved automatically;
the third step comprises the following steps:
step 3.1: firstly, feature extraction of the targets in the image is completed using the deep residual network ResNet101 and the feature pyramid network FPN; the ResNet101 network is divided into five stages from bottom to top, Stage1-Stage5, which output feature maps of sizes 256×256, 128×128, 64×64, 32×32 and 16×16 respectively; the FPN network fuses the features of adjacent levels in turn from top to bottom, and all levels are fed together into the RPN network to generate candidate regions;
step 3.2: generation of candidate regions (proposals) is completed using the region proposal network (RPN); the RPN slides a 3×3 window over each level's feature map and, taking the center of the sliding window as the anchor point, generates rectangular boxes (anchors) with different aspect ratios on each feature map; the anchors are mapped back onto the original image through the mapping relation between each feature map and the original image;
step 3.3: classifying and regressing the anchors; the intersection over union (IoU) between each anchor and the target's real bounding box on the original image is calculated, regions with IoU greater than 0.7 are taken as foreground, and the anchors whose IoU is greater than 0.7 are taken as candidate regions (proposals); bounding-box regression is performed on the proposals, i.e. a mapping is found that maps the original proposal P to a predicted bounding box Ĝ that is closer to the target's real bounding box G:

f(P_x, P_y, P_w, P_h) = (Ĝ_x, Ĝ_y, Ĝ_w, Ĝ_h) ≈ (G_x, G_y, G_w, G_h)    (1)

the proposal is first translated using formula (2) and formula (3), and then scaled using formula (4) and formula (5) to complete the mapping:

Ĝ_x = P_w·d_x(P) + P_x    (2)
Ĝ_y = P_h·d_y(P) + P_y    (3)
Ĝ_w = P_w·exp(d_w(P))    (4)
Ĝ_h = P_h·exp(d_h(P))    (5)
step 3.4: removing bounding boxes that cross the image border and applying non-maximum suppression to the remaining boxes: the scores of all bounding boxes are sorted in descending order, the highest-scoring box among the unprocessed boxes is selected, the IoU of each remaining box with this highest-scoring box is calculated, boxes whose IoU exceeds a given threshold are discarded, and the remaining boxes are traversed in the same way;
step 3.5: the targets are segmented and classified using a fully convolutional network and fully connected layers, respectively.
2. The method for in-column mammal counting based on the example segmentation algorithm as claimed in claim 1, wherein the first step comprises the steps of:
step 1.1: installing a wide-angle camera above the mammal feeding fence, and adjusting the angle of the wide-angle camera to enable the wide-angle camera to shoot the whole fence;
step 1.2: the workstation reads the parameters of the wide-angle camera, remotely controls it to acquire images, and stores the acquired images on the workstation.
3. The method for in-column mammal counting based on the example segmentation algorithm as claimed in claim 1, wherein the second step comprises the steps of:
step 2.1: selecting images with clear target contours and dividing them into a training set, a verification set and a test set at a ratio of 8:1:1;
step 2.2: annotating the training-set and verification-set images with the annotation tool VGG Image Annotator (VIA), drawing a closed polygon along each target contour in the image and labeling the polygon region with the mammal name;
step 2.3: after annotation of the training-set images is finished, generating the annotation file via_region_data.json.
CN201811588576.9A 2018-12-25 2018-12-25 In-column mammal counting method based on example segmentation algorithm Active CN109785337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811588576.9A CN109785337B (en) 2018-12-25 2018-12-25 In-column mammal counting method based on example segmentation algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811588576.9A CN109785337B (en) 2018-12-25 2018-12-25 In-column mammal counting method based on example segmentation algorithm

Publications (2)

Publication Number Publication Date
CN109785337A CN109785337A (en) 2019-05-21
CN109785337B true CN109785337B (en) 2021-07-06

Family

ID=66498290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811588576.9A Active CN109785337B (en) 2018-12-25 2018-12-25 In-column mammal counting method based on example segmentation algorithm

Country Status (1)

Country Link
CN (1) CN109785337B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222664B (en) * 2019-06-13 2021-07-02 河南牧业经济学院 Intelligent pig housing monitoring system based on video activity analysis
CN110363769B (en) * 2019-06-19 2023-03-10 西南交通大学 Image segmentation method for cantilever system of high-speed rail contact net supporting device
CN110378231A (en) * 2019-06-19 2019-10-25 广东工业大学 Nut recognition positioning method based on deep learning
CN111161265A (en) * 2019-11-13 2020-05-15 北京海益同展信息科技有限公司 Animal counting and image processing method and device
CN110992325A (en) * 2019-11-27 2020-04-10 同济大学 Target counting method, device and equipment based on deep learning
CN111060518A (en) * 2019-12-20 2020-04-24 重庆大学 Stamping part defect identification method based on instance segmentation
CN111401386B (en) * 2020-03-30 2023-06-13 深圳前海微众银行股份有限公司 Livestock shed monitoring method and device, intelligent cruising robot and storage medium
CN111652142A (en) * 2020-06-03 2020-09-11 广东小天才科技有限公司 Topic segmentation method, device, equipment and medium based on deep learning
CN111652140A (en) * 2020-06-03 2020-09-11 广东小天才科技有限公司 Method, device, equipment and medium for accurately segmenting questions based on deep learning
CN111899245B (en) * 2020-07-30 2021-03-09 推想医疗科技股份有限公司 Image segmentation method, image segmentation device, model training method, model training device, electronic equipment and storage medium
CN112733622B (en) * 2020-12-25 2023-07-04 广西慧云信息技术有限公司 Gramineae plant tillering quantity counting method
US20220261593A1 (en) * 2021-02-16 2022-08-18 Nvidia Corporation Using neural networks to perform object detection, instance segmentation, and semantic correspondence from bounding box supervision
CN113095441A (en) * 2021-04-30 2021-07-09 河南牧原智能科技有限公司 Pig herd bundling detection method, device, equipment and readable storage medium
CN113255495A (en) * 2021-05-17 2021-08-13 开放智能机器(上海)有限公司 Pig farm live pig counting method and system
CN113256656A (en) * 2021-05-28 2021-08-13 北京达佳互联信息技术有限公司 Image segmentation method and device
CN113642406B (en) * 2021-07-14 2023-01-31 广州市玄武无线科技股份有限公司 System, method, device, equipment and storage medium for counting densely-suspended paper sheets
CN113469302A (en) * 2021-09-06 2021-10-01 南昌工学院 Multi-circular target identification method and system for video image
CN116229369A (en) * 2023-03-03 2023-06-06 嘉洋智慧安全科技(北京)股份有限公司 Method, device and equipment for detecting people flow and computer readable storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITUD20050152A1 (en) * 2005-09-23 2007-03-24 Neuricam Spa ELECTRO-OPTICAL DEVICE FOR THE COUNTING OF PEOPLE, OR OTHERWISE, BASED ON STEREOSCOPIC VISION, AND ITS PROCEDURE

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997459A (en) * 2017-04-28 2017-08-01 成都艾联科创科技有限公司 A kind of demographic method split based on neutral net and image congruencing and system
CN107527351A (en) * 2017-08-31 2017-12-29 华南农业大学 A kind of fusion FCN and Threshold segmentation milking sow image partition method
CN107680080A (en) * 2017-09-05 2018-02-09 翔创科技(北京)有限公司 The Sample Storehouse method for building up and counting method of livestock, storage medium and electronic equipment
CN108961330A (en) * 2018-06-22 2018-12-07 深源恒际科技有限公司 The long measuring method of pig body and system based on image
CN108921105A (en) * 2018-07-06 2018-11-30 北京京东金融科技控股有限公司 Identify the method, apparatus and computer readable storage medium of destination number

Also Published As

Publication number Publication date
CN109785337A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109785337B (en) In-column mammal counting method based on example segmentation algorithm
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN108898085B (en) Intelligent road disease detection method based on mobile phone video
CN110059558A (en) A kind of orchard barrier real-time detection method based on improvement SSD network
CN111724355B (en) Image measuring method for abalone body type parameters
CN109409365A (en) It is a kind of that method is identified and positioned to fruit-picking based on depth targets detection
CN108961330B (en) Pig body length measuring and calculating method and system based on image
CN107230203A (en) Casting defect recognition methods based on human eye vision attention mechanism
CN111582234B (en) Large-scale oil tea tree forest fruit intelligent detection and counting method based on UAV and deep learning
CN111105495A (en) Laser radar mapping method and system fusing visual semantic information
CN106910187B (en) Image data set artificial amplification method for bridge crack detection and positioning
CN111598098B (en) Water gauge water line detection and effectiveness identification method based on full convolution neural network
CN107833213A (en) A kind of Weakly supervised object detecting method based on pseudo- true value adaptive method
CN113674216A (en) Subway tunnel disease detection method based on deep learning
CN112069985A (en) High-resolution field image rice ear detection and counting method based on deep learning
CN114565675A (en) Method for removing dynamic feature points at front end of visual SLAM
CN112580671A (en) Automatic detection method and system for multiple development stages of rice ears based on deep learning
CN111797831A (en) BIM and artificial intelligence based parallel abnormality detection method for poultry feeding
CN109615610B (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN105404682B (en) A kind of book retrieval method based on digital image content
CN111627059A (en) Method for positioning center point position of cotton blade
CN109523509B (en) Method and device for detecting heading stage of wheat and electronic equipment
CN110991300A (en) Automatic identification method for abnormal swelling state of dorking abdomen
CN107563327B (en) Pedestrian re-identification method and system based on self-walking feedback
CN116188943A (en) Solar radio spectrum burst information detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant