CN111860652B - Method, device, equipment and medium for measuring animal body weight based on image detection - Google Patents

Method, device, equipment and medium for measuring animal body weight based on image detection

Info

Publication number
CN111860652B
CN111860652B (application CN202010710800.8A)
Authority
CN
China
Prior art keywords
animal
detected
weight
image
checkerboard
Prior art date
Legal status
Active
Application number
CN202010710800.8A
Other languages
Chinese (zh)
Other versions
CN111860652A (en)
Inventor
汤鑫
尹高
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202010710800.8A
Publication of CN111860652A
Application granted
Publication of CN111860652B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention relates to artificial intelligence and discloses a method, an apparatus, a device and a medium for measuring animal body weight based on image detection. The method comprises: selecting a checkerboard of a set specification as a calibration object; placing the calibration object at the middle position directly below the animal to be measured and acquiring an image containing the animal and the calibration object as a whole; detecting the animal in the image and correcting its scale according to the checkerboard to obtain an image containing only the animal; and inputting that image into a weight estimation model, which classifies the animal into weight categories, obtains the probability of the animal belonging to each weight category, and calculates the animal's weight from those probabilities. The method is convenient to operate, does not depend on special equipment, keeps errors controllable, and improves the efficiency of measuring animal body weight.

Description

Method, device, equipment and medium for measuring animal body weight based on image detection
Technical Field
The present invention relates to artificial intelligence, and in particular, to a method, an apparatus, an electronic device, and a computer-readable storage medium for measuring a body weight of an animal based on image detection.
Background
With population growth and the continuous improvement of living standards, the demand of urban and rural residents for pork keeps increasing, and the market scale of the pig-breeding industry runs into the trillions. In recent years, with the rapid development of deep learning, more and more enterprises have applied AI technology to pig breeding. Currently, live pig weight is mainly measured in the following ways. First, empirical formulas: many indexes of the pig, such as body height, chest depth and side body length, must be measured and the weight is then calculated with a formula; this information is difficult to obtain in actual production, the operation is inconvenient, and the accuracy is low. Second, vision-based projection ranging schemes: most require camera calibration, a fixed camera position and multiple pictures, which is unfriendly to operate. Third, schemes that use a depth sensor to compute point-cloud data, obtain volume information and calculate weight, which require additional expensive equipment. More importantly, a pig that is the subject of an insurance claim is usually dead and difficult to move, its three-dimensional information is hard to obtain, and the data required by the above methods are therefore hard to acquire.
Disclosure of Invention
The invention provides an animal body weight measurement method and device based on image detection, electronic equipment and a computer readable storage medium, and mainly aims to improve the convenience and efficiency of animal body weight measurement.
In order to achieve the above object, a first aspect of the present invention provides an animal body weight measurement method based on image detection, the method comprising:
selecting a checkerboard with a set specification as a calibration object; placing a calibration object at the middle position right below an animal to be tested, and acquiring an image of the animal to be tested and the calibration object as a whole; detecting an animal to be detected in the image, and correcting the scale of the animal to be detected according to the checkerboard to obtain an image only including the animal to be detected; inputting the image only including the animal to be detected into a weight estimation model, classifying the animal to be detected according to the weight category through the weight estimation model, acquiring the probability of the animal to be detected being classified into each weight category, and calculating the weight of the animal to be detected according to the probability of the animal to be detected being classified into each weight category.
In one embodiment, after the step of selecting the checkerboard with a set specification as the calibration object, the method further comprises: detecting and correcting the checkerboard, detecting the outlines of all the squares in the checkerboard, and correcting the area of the checkerboard according to the sizes of the detected square outlines.
In one embodiment, the step of detecting the animal to be measured in the image and correcting the scale of the animal according to the checkerboard comprises: extracting the target animal to be measured by using Mask R-CNN, and adjusting the posture and the position of the extracted target animal according to the checkerboard.
In one embodiment, the step of extracting the target animal to be measured by using Mask R-CNN comprises: extracting image features through a feature extraction network to obtain a feature map; selecting, through a region proposal network and using the feature map, candidate boxes capable of representing the positions of objects in the image; cutting out the features of the region corresponding to each candidate box from the feature map by adopting the ROIAlign algorithm; and respectively predicting the category of each object according to the features of the region corresponding to the candidate box, so as to obtain the position coordinates of the target animal in the image and the corresponding segmentation map.
In one embodiment, the step of adjusting the posture and the position of the extracted target animal to be tested according to the checkerboard comprises the following steps: placing an animal to be detected on a background plate with a set size; calculating the pixel area of the animal to be detected according to the position coordinates of the animal to be detected; calculating the pixel area of the background plate according to the pixel area of the animal to be detected, and obtaining a set picture frame according to the pixel area of the background plate; and placing the extracted target animal to be detected in the set picture frame, and aligning the posture and the position of the animal to be detected.
In one embodiment, before the step of inputting the image including only the animal to be measured into the weight estimation model, the method further includes: training the weight estimation model, wherein during training the two objectives are optimized jointly, as shown in the following formula:
min L_reg + λ·L_cls
where L_reg represents the error between the measured weight and the true weight, L_cls denotes the classification error, and λ is a hyperparameter.
In order to achieve the above object, a second aspect of the present invention provides an animal body weight measuring apparatus based on image detection, comprising:
the calibration object selection module is used for selecting the checkerboard with the set specification as the calibration object; the image acquisition module is used for acquiring an image of the to-be-detected animal and the calibration object as a whole, wherein the calibration object is placed in the middle position under the to-be-detected animal; the image detection module is used for detecting the animal to be detected in the image and correcting the dimension of the animal to be detected according to the checkerboard to obtain the image only comprising the animal to be detected; and the weight estimation module is used for inputting the image only comprising the animal to be detected into the weight estimation model, classifying the animal to be detected according to the weight category through the weight estimation model, acquiring the probability of the animal to be detected being classified into each weight category, and calculating the weight of the animal to be detected according to the probability of the animal to be detected being classified into each weight category.
In one embodiment, the apparatus further comprises a training module for training the weight estimation model, wherein the following objective function is optimized when the weight estimation model is trained:
min L_reg + λ·L_cls
where L_reg represents the error between the measured weight and the true weight, L_cls denotes the classification error, and λ is a hyperparameter.
In order to achieve the above object, a third aspect of the present invention provides an electronic apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of animal body weight measurement based on image detection as described above.
In order to achieve the above object, a fourth aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for measuring body weight based on image detection as described above.
According to the invention, the checkerboard eliminates the influence of different cameras and different shooting distances; to reduce the influence of animal posture and position, the animal to be measured is aligned; and because lighting conditions differ between shots, data enhancement is used to eliminate the influence of lighting. In addition, the invention combines artificial intelligence with image detection to measure animal weight; it is convenient to operate, does not depend on special equipment, and keeps errors controllable. Only an ordinary mobile phone is needed to measure an animal's weight with this method: shooting is convenient, the equipment is easy to obtain, robustness is high, and business requirements can be well met.
Drawings
FIG. 1 is a schematic flow chart of a method for measuring body weight of an animal according to an embodiment of the present invention;
fig. 2 is a schematic image of an animal to be tested and a calibration object as a whole according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an image including only an animal to be tested according to an embodiment of the present invention;
FIG. 4 is a block diagram of an apparatus for measuring body weight of an animal according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of an electronic device for implementing a method for measuring body weight of an animal according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an animal body weight measuring method based on image detection. Referring to fig. 1, a schematic flow chart of a method for measuring animal body weight according to an embodiment of the present invention is shown. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the method for measuring the body weight of the animal based on image detection includes:
step S1, selecting a checkerboard with a set specification as a calibration object, where the calibration object is a black-and-white checkerboard with a specification of nxn (for example, n is 8), that is, black and white checkerboards alternately appear, or other colored checkerboards are also possible, and the physical size of the whole checkerboard is 40cmx40 cm; each grid is the same size, and the size and the number of the whole checkerboard can be set arbitrarily;
and step S2, placing the calibration object at the middle position directly below the animal to be measured, and acquiring an image containing the animal and the calibration object as a whole. Fig. 2 shows a schematic image of a pig to be measured and the calibration object as a whole: taking a pig as an example, when the picture is taken the black-and-white checkerboard is placed at the middle position directly below the pig, and the whole pig is photographed facing the checkerboard, so as to obtain an image containing the pig and the whole checkerboard.
And step S3, detecting the animal to be detected in the image, and correcting the dimension of the animal to be detected according to the checkerboard to obtain the image only including the animal to be detected. Referring to fig. 3, which is a schematic diagram of an image including only an animal to be detected according to an embodiment of the present invention, taking the animal to be detected as a pig body as an example, the whole image shown in fig. 2 is detected, the pig body to be detected is extracted, and the image shown in fig. 3 is obtained through scale correction.
And step S4, inputting the image only including the animal to be detected into a weight estimation model, classifying the animal to be detected according to the weight category through the weight estimation model, acquiring the probability of the animal to be detected classified into each weight category, and calculating the weight of the animal to be detected according to the probability of the animal to be detected classified into each weight category.
In one embodiment, after the step of selecting the checkerboard with a set specification as the calibration object, the method further comprises: detecting and correcting the checkerboard, detecting the outlines of all the squares in the checkerboard, and correcting the area of the checkerboard according to the sizes of the detected outlines. Specifically, the outline features of the checkerboard are detected with OpenCV, and the outlines corresponding to all the small squares are screened out according to these features. For an actually manufactured checkerboard, production errors make the actual areas of the small squares unequal, so the detected outlines deviate slightly from one another. The small squares are therefore sorted by outline area from large to small, and the square whose area is the median (i.e. in the middle of the sorted order) is selected; its area s is taken as the corrected area of one square. Using this square as the basic cell and extending it to n×n cells of the same size gives the corrected checkerboard, whose total area is S_chess = s·n², where n² is the number of small squares. Considering the detection errors of the algorithm, selecting the outline with the median area after sorting gives the best-detected square; in this way, squares that are detected as too small or too large are avoided.
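As a concrete illustration of the checkerboard correction described above, the following Python sketch uses OpenCV to find the square contours, sorts them by area, takes the median-area square as the corrected cell area s, and extrapolates the whole-board area S_chess = s·n². The thresholding step, the contour filter and the function name are illustrative assumptions, not taken from the patent.

```python
import cv2

def corrected_checkerboard_area(image_bgr, n=8):
    """Corrected pixel area of an n x n checkerboard: find the square contours,
    take the median-area square as the corrected cell area s, and return
    S_chess = s * n**2."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    # Keep roughly square, four-cornered contours as candidate grid cells.
    areas = []
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.04 * peri, True)
        if len(approx) == 4 and cv2.contourArea(c) > 50:
            areas.append(cv2.contourArea(c))

    if not areas:
        raise ValueError("no checkerboard cells detected")

    areas.sort(reverse=True)            # sort from large to small, as in the text
    s = areas[len(areas) // 2]          # median area = best-detected cell
    return s * n * n                    # S_chess = s * n^2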
In one embodiment, before the step of detecting the animal to be measured in the image, the method further comprises: preprocessing the image. Suppose the size of the captured pig photograph is w × h. The resolutions of different cameras may differ; to eliminate the influence of resolution, all photographs are resized while keeping the original aspect ratio so that the long side is 800 pixels (matching the photographs taken in actual business), thereby aligning the different resolutions of different cameras.
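A minimal sketch of this resizing step, assuming OpenCV; the function name and the interpolation choice are illustrative.

```python
import cv2

def align_resolution(image, long_side=800):
    """Resize so the longer side equals long_side pixels, keeping the original
    aspect ratio, to align photos from cameras with different resolutions."""
    h, w = image.shape[:2]
    scale = long_side / float(max(h, w))
    return cv2.resize(image, (int(round(w * scale)), int(round(h * scale))),
                      interpolation=cv2.INTER_LINEAR)
```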
When an insurance claim is settled, the animal to be measured is usually in a rural area or on a farm, so the background of the photograph is often varied and cluttered, and sometimes several animals appear in one picture. To estimate the weight of the target animal, the specific animal must be detected clearly and the influence of the background eliminated. Because shooting distances differ, the proportion of the animal in the picture also differs; to eliminate the influence of shooting distance, the pig body is aligned using the checkerboard scale. In one embodiment, the step of detecting the animal to be measured in the image and correcting its scale according to the checkerboard comprises: extracting the target animal with a Mask Region-based Convolutional Neural Network (Mask R-CNN), and adjusting the posture and position of the extracted target animal according to the checkerboard to obtain an image containing only the animal to be measured.
Mask R-CNN is a deep neural network model based on Faster R-CNN (Faster Region-based Convolutional Neural Network); its superior performance on object recognition and segmentation of a single picture makes it one of the best current techniques. The image is input into the Mask R-CNN network, which detects the position coordinates of the animal to be measured in the image, namely the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2), and at the same time outputs the corresponding segmentation map. To eliminate the influence of the background, the animal is extracted directly from the image according to its segmentation map and the remaining pixels are filled with 0, giving an image that contains only the animal to be measured, as shown in fig. 3.
Mask R-CNN is composed of a feature extraction network, a Region Proposal Network (RPN), ROIAlign, and an output network layer; the outputs are the class of each object, the object's position box refined in the second stage, and the segmentation map corresponding to the object. The feature extraction network is the backbone of the segmentation network, and ResNet-50 can be selected for it.
In one embodiment, the step of extracting the target animal to be measured with Mask R-CNN comprises: extracting image features through the feature extraction network to obtain a feature map; selecting, through the region proposal network and using the feature map, candidate boxes that can represent the positions of objects in the image; cutting out the features of the region corresponding to each candidate box from the feature map with the ROIAlign algorithm; and predicting the category of each object from the features of the region corresponding to the candidate box, so as to obtain the position coordinates of the target animal in the image and the corresponding segmentation map.
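The following sketch illustrates this detection and background-removal step with the off-the-shelf Mask R-CNN (ResNet-50 FPN backbone) from torchvision. In practice the patent's network would be fine-tuned on annotated pig-body images rather than used with generic pre-trained weights; the score threshold and the helper name are assumptions.

```python
import torch
import torchvision
from torchvision.transforms import functional as F

# Off-the-shelf Mask R-CNN with a ResNet-50 FPN backbone; the patent's network
# would be fine-tuned on annotated pig-body images instead of generic classes.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_and_cut_out(image_rgb, score_thresh=0.7):
    """Return the box (x1, y1, x2, y2) of the best detection and the image with
    every pixel outside its predicted mask set to 0 (background removal)."""
    tensor = F.to_tensor(image_rgb)                 # HWC uint8 -> CHW float in [0, 1]
    with torch.no_grad():
        pred = model([tensor])[0]                   # dict: boxes, labels, scores, masks

    scores = pred["scores"]
    if scores.numel() == 0 or scores.max() < score_thresh:
        raise ValueError("no target detected")

    best = int(scores.argmax())
    box = pred["boxes"][best].tolist()              # [x1, y1, x2, y2]
    mask = (pred["masks"][best, 0] > 0.5).float()   # H x W soft mask -> binary
    cut_out = tensor * mask                         # fill everything else with 0
    return box, cut_out
```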
Further, the scale of the extracted target animal is corrected according to the checkerboard so that the scales of target animals in different images are aligned, and the posture and position of the extracted target animal are adjusted. Specifically, the method comprises: placing the animal to be measured on a background plate of a set size; calculating the pixel area of the animal according to its position coordinates; calculating the pixel area of the background plate from the pixel area of the animal, and obtaining a set picture frame from the pixel area of the background plate; and placing the extracted target animal in the set picture frame and aligning its posture and position. For example, if the animal is rotated it is rotated back to horizontal, and it is placed in the upper-left corner of the picture, so the influence of posture and position is eliminated.
Taking the pig body as an example, suppose the pig to be measured is placed on a 2.5 m × 2.5 m background plate; the plate may of course have another size, as long as it contains the whole pig body. The pixel area of the pig body is calculated from its detected position coordinates as S_pig = (y2 - y1)(x2 - x1). Although camera focal lengths and shooting distances differ, after alignment to the 2.5 m × 2.5 m background plate, the pixel area of the background plate corresponding to each picture can be calculated from the pixel area of the pig body:
[formula for the pixel area of the background plate, given as an image in the original]
The set picture frame is then obtained from the pixel area of the background plate:
[formula for the set picture frame, given as an image in the original]
The rectangular box containing the extracted target pig body is then placed into the set picture frame, and the posture and position of the pig body are aligned at the same time.
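The exact formulas for the plate pixel area and the picture frame appear only as images in the source, so the sketch below is one plausible reading of this alignment step: it derives the pixel scale from the checkerboard calibration (physical size 0.4 m × 0.4 m), renders the 2.5 m × 2.5 m plate as a square canvas, and pastes the segmented pig crop at the top-left corner. The function name and the NumPy HWC image convention are assumptions.

```python
import numpy as np

def place_on_reference_frame(pig_image, box, checkerboard_area_px,
                             board_size_m=0.4, plate_size_m=2.5):
    """Paste the segmented pig (background already zeroed, HWC numpy array)
    into a fixed square canvas standing for the 2.5 m x 2.5 m background plate.

    The pixel scale comes from the checkerboard: its physical size is
    board_size_m x board_size_m and its corrected pixel area is known, so
    pixels-per-metre follows directly.  Placing the pig crop in the top-left
    corner normalises scale and position before weight estimation.
    """
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    pig_area_px = (y2 - y1) * (x2 - x1)                 # S_pig = (y2 - y1)(x2 - x1)

    px_per_m = np.sqrt(checkerboard_area_px) / board_size_m
    side = int(round(plate_size_m * px_per_m))          # plate side length in pixels
    plate_area_px = side * side                         # pixel area of the plate

    canvas = np.zeros((side, side, 3), dtype=pig_image.dtype)
    crop = pig_image[y1:y2, x1:x2]
    h, w = min(crop.shape[0], side), min(crop.shape[1], side)
    canvas[:h, :w] = crop[:h, :w]                       # top-left placement
    return canvas, pig_area_px, plate_area_px
```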
In one embodiment, the weight estimation model uses the convolutional neural network ResNet-50, comprising a feature extraction layer for extracting image features and three fully connected layers, the last of which has 101 neurons. To align the scales of different pictures, all input pictures are resized to 448 × 448. Because the collected data are limited, directly using a regression method gives a poor estimate, so the weight estimation model processes the images with a combined classification and regression method, as follows:
animal weight categories are set by dividing the weight range into 2-kilogram segments: 0 to 2 kilograms is the first category, 2 to 4 kilograms the second, up to 198 to 200 kilograms as the 100th category and 200 to 202 kilograms as the 101st category. The actual weight value is thus converted into a 0-1 code: assuming the actual value falls in the j-th segment, i.e. the j-th category, the target code can be expressed as g = [0, …, 1, …, 0], where the 1 is at the j-th position;
the animal to be measured is classified according to these weight categories, and the predicted probability of the animal belonging to each weight category is p = (p1, p2, …, p101);
The classification error is estimated with the cross-entropy loss:
L_cls = -(1/N) · Σ_{i=1}^{N} Σ_{j=1}^{101} g_i^j · log(p_i^j)
where g_i^j = 1 if the target body weight of the i-th sample comes from the j-th class and 0 otherwise, N is the number of samples, L_cls is the classification error, j is the index of the weight category, and p_i^j is the predicted probability that the target weight of the i-th sample belongs to the j-th class;
according to the probabilities [p1, p2, …, p101] of the classes corresponding to the animal to be measured, the corresponding weight is calculated as the probability-weighted sum
ŵ = Σ_{j=1}^{101} p_j · w_j
where w_j denotes the representative weight of the j-th class.
To estimate the body weight accurately, the measured (estimated) weight ŵ_i must be as close as possible to the true weight w_i, i.e. the regression error
L_reg = (1/N) · Σ_{i=1}^{N} (ŵ_i - w_i)²
is minimized, where L_reg is the error between the measured weight and the true weight, i is the index of the sample, N is the number of samples, ŵ_i is the measured body weight of the i-th sample, and w_i is the true body weight of the i-th sample.
Preferably, before the step of inputting the image containing only the animal to be measured into the weight estimation model, the method further includes training the weight estimation model. During training, the following objective function is optimized:
min L_reg + λ·L_cls
where L_reg is the error between the measured weight and the true weight, L_cls is the classification error, and λ is a hyperparameter that must be given in advance.
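A sketch of the classification-plus-regression objective described above. The 2 kg binning and the joint objective min L_reg + λ·L_cls follow the text; taking the bin midpoints as the representative weights and using a squared-error regression term are assumptions, since the source shows those formulas only as images.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 101                       # 2 kg bins: [0, 2), [2, 4), ..., [200, 202)
BIN_WIDTH = 2.0
# Representative weight of each bin (its midpoint), used for the expected weight.
BIN_CENTERS = torch.arange(NUM_CLASSES, dtype=torch.float32) * BIN_WIDTH + BIN_WIDTH / 2

def weight_to_class(weight_kg: float) -> int:
    """Map a true weight to its 2 kg class index j (0-based)."""
    return int(min(weight_kg // BIN_WIDTH, NUM_CLASSES - 1))

def joint_loss(logits: torch.Tensor, true_weights: torch.Tensor, lam: float = 1.0):
    """Joint objective  L_reg + lambda * L_cls  for a batch.

    logits:       (N, 101) raw network outputs
    true_weights: (N,) true weights in kilograms
    """
    targets = torch.tensor([weight_to_class(w) for w in true_weights.tolist()])
    probs = torch.softmax(logits, dim=1)              # p = (p_1, ..., p_101)

    l_cls = F.cross_entropy(logits, targets)          # cross-entropy classification error
    w_hat = (probs * BIN_CENTERS).sum(dim=1)          # expected weight per sample
    l_reg = F.mse_loss(w_hat, true_weights)           # squared regression error (assumed form)

    return l_reg + lam * l_cls, w_hat
```

At inference time, the same expected-weight computation (probabilities times bin centers) yields the estimated body weight.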
Specifically, the training of the weight prediction network is divided into two stages. In the first stage, a Mask R-CNN network is trained to segment the picture of the pig body to be measured; in the second stage, features of the pig body are extracted from the segmented pig-body picture with a ResNet-50 network, followed by two fully connected layers for regression and classification.
The two stages are trained separately. First, photographs of pigs are taken on site and their weights are measured. Then, for each pig photograph, an annotator marks the outline of the pig body, from which the pig-body region is generated. About 2,000 pig-body pictures were collected to train the Mask R-CNN detection and segmentation network; to enhance its robustness, data enhancement such as random rotation, flipping, and brightness and contrast changes is applied to the pictures during training, and training uses stochastic gradient descent with an initial learning rate of 0.01. After the segmentation network is trained, it is used to segment the pig body from the live photographs, removing the influence of the background environment, and the pig body is scaled to a reasonable size according to the checkerboard scale, giving the input data for the weight estimation model.
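A sketch of the training configuration for this segmentation stage: the data enhancement named in the text (random rotation, flipping, brightness and contrast changes) and SGD with an initial learning rate of 0.01. The rotation range, momentum value and two-class setup (background plus pig) are illustrative assumptions, and for detection/segmentation the geometric transforms would also have to be applied to the masks and boxes, which is omitted here.

```python
import torch
import torchvision
from torchvision import transforms

# Photometric/geometric augmentation named in the text: random rotation,
# flipping, brightness and contrast changes (applied to the ~2000 training photos).
train_augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.ToTensor(),
])

# Segmentation network: Mask R-CNN with two classes (background + pig body).
seg_model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)

# Stochastic gradient descent with the initial learning rate 0.01 given in the text.
optimizer = torch.optim.SGD(seg_model.parameters(), lr=0.01, momentum=0.9)
```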
The weight estimation model mainly uses ResNet-50 to extract features from the pig-body picture, followed by two fully connected layers for classification. Assuming the maximum weight of a pig is 200 kg and one class covers every 2 kg, there are 101 classes in total, so the actual weight of the pig body is converted into an interval code. Because the weight estimation model is sensitive to rotation, it is also trained with data enhancement: during training, one or more of random rotation, horizontal flipping, random brightness change and contrast change are applied to each input training sample.
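A sketch of the weight estimation model as described: a ResNet-50 feature extractor followed by fully connected layers ending in 101 neurons, for 448 × 448 inputs. The text mentions both two and three fully connected layers; two are used here, and the hidden width of 512 is an assumption.

```python
import torch
import torch.nn as nn
import torchvision

class WeightEstimator(nn.Module):
    """ResNet-50 feature extractor followed by fully connected layers that
    output 101 weight-class scores (one per 2 kg bin) for 448 x 448 inputs."""

    def __init__(self, num_classes: int = 101):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        # Drop the original 1000-way ImageNet head, keep conv stages + global pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2048, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),      # final fully connected layer: 101 neurons
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, 448, 448) segmented, scale-aligned pig images
        return self.head(self.features(x))   # raw logits, fed to the joint loss above
```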
The invention is convenient to operate, does not depend on special equipment, and keeps errors controllable. The checkerboard eliminates the influence of different cameras and shooting distances; to reduce the influence of animal posture and position, the animal to be measured is aligned; and because lighting conditions differ between shots, data enhancement is used to eliminate the influence of lighting. Only an ordinary mobile phone is needed, shooting is convenient, the equipment is easy to obtain, robustness is high, and business requirements can be well met.
Fig. 4 is a functional block diagram of the body weight measuring apparatus of the present invention.
The animal body weight measuring apparatus 100 based on image detection according to the present invention may be installed in an electronic device. According to the realized function, the animal body weight measuring device can comprise a calibration object selecting module 101, an image acquiring module 102, an image detecting module 103 and a weight estimating module 104. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the calibration object selecting module 101 is configured to select a checkerboard of a set specification as the calibration object, wherein the calibration object is a black-and-white checkerboard of n×n squares (for example, n = 8), i.e. black and white squares appear alternately (checkerboards of other colors are also possible); the physical size of the whole checkerboard is 40 cm × 40 cm, each square has the same size, and the size and number of squares can be set arbitrarily;
the image acquisition module 102 is configured to acquire an image of the animal to be tested and the calibration object as a whole, where the calibration object is placed at a middle position right below the animal to be tested; taking an animal to be detected as a pig body as an example, when a picture is taken, the black and white checkerboard is placed at the middle position under the pig body to be detected, and the whole pig body is shot facing the checkerboard, so that a whole image including the pig body and the whole checkerboard is obtained;
the image detection module 103 is configured to detect an animal to be detected in the image, and correct the scale of the animal to be detected according to the checkerboard to obtain an image only including the animal to be detected; taking an animal to be detected as a pig body as an example, detecting the whole image shown in fig. 2, extracting the pig body to be detected, and obtaining the image shown in fig. 3 through scale correction;
the weight estimation module 104 is configured to input the image only including the animal to be detected into the weight estimation model, classify the animal to be detected according to the weight category through the weight estimation model, obtain the probability that the animal to be detected is classified into each weight category, and calculate the weight of the animal to be detected according to the probability that the animal to be detected is classified into each weight category.
In one embodiment, the animal body weight measuring apparatus further comprises a correcting module, configured to detect and correct the checkerboard after the checkerboard with a set specification is selected as the calibration object: the outlines of all the squares in the checkerboard are detected, and the area of the checkerboard is corrected according to the detected outline sizes. Specifically, the outline features of the checkerboard are detected with OpenCV, and the outlines corresponding to all the small squares are screened out according to these features. For an actually manufactured checkerboard, production errors make the actual areas of the small squares unequal, so the detected outlines deviate slightly from one another. The small squares are therefore sorted by outline area from large to small, and the square whose area is the median (i.e. in the middle of the sorted order) is selected; its area s is taken as the corrected area of one square. Using this square as the basic cell and extending it to n×n cells of the same size gives the corrected checkerboard, whose total area is S_chess = s·n², where n² is the number of small squares. Considering the detection errors of the algorithm, selecting the outline with the median area after sorting gives the best-detected square; in this way, squares that are detected as too small or too large are avoided.
In one embodiment, the animal body weight measuring apparatus further includes a preprocessing module for preprocessing the image before the animal to be measured is detected in it. Suppose the size of the captured pig photograph is w × h. The resolutions of different cameras may differ; to eliminate the influence of resolution, all photographs are resized while keeping the original aspect ratio so that the long side is 800 pixels (matching the photographs taken in actual business), thereby aligning the different resolutions of different cameras.
When an insurance claim is settled, the animal to be measured is usually in a rural area or on a farm, so the background of the photograph is often varied and cluttered, and sometimes several animals appear in one picture. To estimate the weight of the target animal, the specific animal must be detected clearly and the influence of the background eliminated. Because shooting distances differ, the proportion of the animal in the picture also differs; to eliminate the influence of shooting distance, the pig body is aligned using the checkerboard scale. In one embodiment, the image detection module 103 includes: an extracting unit for extracting the target animal with a Mask Region-based Convolutional Neural Network (Mask R-CNN), and an adjusting unit for adjusting the posture and position of the extracted target animal according to the checkerboard to obtain an image containing only the animal to be measured.
Here, Mask R-CNN is a deep neural network model based on Faster R-CNN (Faster Region-based Convolutional Neural Network); its superior performance on object recognition and segmentation of a single picture makes it one of the best current techniques. The image is input into the Mask R-CNN network, which detects the position coordinates of the animal to be measured in the image, namely the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2), and at the same time outputs the corresponding segmentation map. To eliminate the influence of the background, the animal is extracted directly from the image according to its segmentation map and the remaining pixels are filled with 0, giving an image that contains only the animal to be measured, as shown in fig. 3.
Mask R-CNN is composed of a feature extraction network, a Region Proposal Network (RPN), ROIAlign, and an output network layer; the outputs are the class of each object, the object's position box refined in the second stage, and the segmentation map corresponding to the object. The feature extraction network is the backbone of the segmentation network, and ResNet-50 can be selected for it.
In one embodiment, the extraction unit extracts the target animal to be measured with Mask R-CNN as follows: image features are extracted through the feature extraction network to obtain a feature map; candidate boxes that can represent the positions of objects in the image are selected from the feature map through the region proposal network; the features of the region corresponding to each candidate box are cut out of the feature map with the ROIAlign algorithm; and the category of each object is predicted from the features of the region corresponding to the candidate box, giving the position coordinates of the target animal in the image and the corresponding segmentation map.
Further, the scale of the extracted target animal is corrected according to the checkerboard so that the scales of target animals in different images are aligned, and the posture and position of the extracted target animal are adjusted. Specifically: the animal to be measured is placed on a background plate of a set size; the pixel area of the animal is calculated from its position coordinates; the pixel area of the background plate is calculated from the pixel area of the animal, and a set picture frame is obtained from the pixel area of the background plate; and the extracted target animal is placed in the set picture frame and its posture and position are aligned. For example, if the animal is rotated it is rotated back to horizontal, and it is placed in the upper-left corner of the picture, so the influence of posture and position is eliminated.
In one embodiment, the weight estimation model uses the convolutional neural network ResNet-50, comprising a feature extraction layer for extracting image features and three fully connected layers, the last of which has 101 neurons. To align the scales of different pictures, all input pictures are resized to 448 × 448. Because the collected data are limited, directly using a regression method gives a poor estimate, so the weight estimation model processes the images with a combined classification and regression method, as follows:
animal weight categories are set by dividing the weight range into 2-kilogram segments: 0 to 2 kilograms is the first category, 2 to 4 kilograms the second, up to 198 to 200 kilograms as the 100th category and 200 to 202 kilograms as the 101st category. The actual weight value is thus converted into a 0-1 code: assuming the actual value falls in the j-th segment, i.e. the j-th category, the target code can be expressed as g = [0, …, 1, …, 0], where the 1 is at the j-th position;
the animal to be measured is classified according to these weight categories, and the predicted probability of the animal belonging to each weight category is p = (p1, p2, …, p101);
The classification error is estimated with the cross-entropy loss:
L_cls = -(1/N) · Σ_{i=1}^{N} Σ_{j=1}^{101} g_i^j · log(p_i^j)
where g_i^j = 1 if the target body weight of the i-th sample comes from the j-th class and 0 otherwise, N is the number of samples, L_cls is the classification error, j is the index of the weight category, and p_i^j is the predicted probability that the target weight of the i-th sample belongs to the j-th class;
according to the probabilities [p1, p2, …, p101] of the classes corresponding to the animal to be measured, the corresponding weight is calculated as the probability-weighted sum
ŵ = Σ_{j=1}^{101} p_j · w_j
where w_j denotes the representative weight of the j-th class.
To estimate the body weight accurately, the measured (estimated) weight ŵ_i must be as close as possible to the true weight w_i, i.e. the regression error
L_reg = (1/N) · Σ_{i=1}^{N} (ŵ_i - w_i)²
is minimized, where L_reg is the error between the measured weight and the true weight, i is the index of the sample, N is the number of samples, ŵ_i is the measured body weight of the i-th sample, and w_i is the true body weight of the i-th sample.
In one embodiment, the animal body weight measuring apparatus further comprises a training module for training the weight estimation model, wherein the following objective function is optimized during training:
min L_reg + λ·L_cls
where L_reg is the error between the measured weight and the true weight, L_cls is the classification error, and λ is a hyperparameter.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the method for measuring body weight of an animal according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as an animal body weight measurement program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of an animal body weight measurement program, etc., but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., an animal body weight measuring program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 5 only shows an electronic device with certain components, and it will be understood by those skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than those shown, or combine some components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The body weight measurement program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, enable:
selecting a checkerboard with a set specification as a calibration object;
placing a calibration object at the middle position right below an animal to be tested, and acquiring an image of the animal to be tested and the calibration object as a whole;
detecting an animal to be detected in the image, and correcting the scale of the animal to be detected according to the checkerboard to obtain an image only including the animal to be detected;
inputting the image only including the animal to be detected into a weight estimation model, classifying the animal to be detected according to the weight category through the weight estimation model, acquiring the probability of the animal to be detected being classified into each weight category, and calculating the weight of the animal to be detected according to the probability of the animal to be detected being classified into each weight category.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (8)

1. An image detection based animal body weight measurement method, characterized in that the method comprises:
selecting a checkerboard with a set specification as a calibration object;
placing a calibration object at the middle position right below an animal to be tested, and acquiring an image of the animal to be tested and the calibration object as a whole;
detecting an animal to be detected in the image, and correcting the scale of the animal to be detected according to the checkerboard to obtain an image only including the animal to be detected; extracting a target animal to be detected by using Mask R-CNN, and adjusting the posture and the position of the extracted target animal to be detected according to the checkerboard;
inputting an image only including the animal to be detected into a weight estimation model, classifying the animal to be detected according to weight categories through the weight estimation model, acquiring the probability of the animal to be detected being classified into each weight category, and calculating the weight of the animal to be detected according to the probability of the animal to be detected being classified into each weight category; wherein,
the step of adjusting the posture and the position of the extracted target animal to be detected according to the checkerboard comprises the following steps:
placing an animal to be detected on a background plate with a set size;
calculating the pixel area of the animal to be detected according to the position coordinates of the animal to be detected;
calculating the pixel area of the background plate according to the pixel area of the animal to be detected, and obtaining a set picture frame according to the pixel area of the background plate;
and placing the extracted target animal to be detected in the set picture frame, and aligning the posture and the position of the animal to be detected.
2. The method for measuring animal body weight based on image detection according to claim 1, further comprising, after the step of selecting a checkerboard of a set specification as a calibration object: detecting and correcting the checkerboard, detecting the outlines of all the squares in the checkerboard, and correcting the area of the checkerboard according to the sizes of the detected square outlines.
3. The image detection-based animal body weight measurement method according to claim 1, wherein the step of extracting the target animal to be measured using Mask R-CNN comprises:
extracting image features through a feature extraction network to obtain a feature map;
selecting, through a region proposal network and using the feature map, candidate boxes capable of representing the positions of objects in the image;
cutting out the features of the region corresponding to the candidate box from the feature map by adopting a ROIAlign algorithm;
and respectively predicting the category of each object according to the characteristics of the area corresponding to the candidate frame to obtain the position coordinates of the target animal to be detected in the image and the corresponding segmentation map.
4. The method of claim 1, wherein the step of inputting an image including only the animal to be detected into the weight estimation model further comprises: training the weight estimation model in advance,
wherein, during training, the training samples are augmented through one or more of random rotation, horizontal flipping, random brightness change and contrast change;
and when the weight estimation model is trained, the following objective function is optimized:
min L_reg + λ·L_cls
wherein L_reg represents the error between the measured weight and the true weight, L_cls represents the classification error, and λ is a hyperparameter.
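
A minimal training fragment consistent with claim 4 is sketched below: training images are augmented by random rotation, horizontal flipping and brightness/contrast jitter, and a joint objective of the form L_reg + λ·L_cls is minimised. The concrete loss choices (smooth L1 for the regression term, cross-entropy for the classification term) and the value λ = 0.5 are assumptions; the claim fixes only the overall form of the objective and the kinds of augmentation.

import torch
import torch.nn.functional as F
from torchvision import transforms

LAMBDA = 0.5  # hyperparameter weighting the classification term (assumed value)

# Augmentation along the lines of claim 4; the exact parameters are illustrative.
augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def weight_estimation_loss(pred_weight, class_logits, true_weight, true_class):
    l_reg = F.smooth_l1_loss(pred_weight, true_weight)  # measured vs. true weight
    l_cls = F.cross_entropy(class_logits, true_class)   # weight-category error
    return l_reg + LAMBDA * l_cls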
5. An animal body weight measuring device based on image detection, characterized by comprising:
the calibration object selection module is used for selecting the checkerboard with the set specification as the calibration object;
the image acquisition module is used for acquiring an image of the to-be-detected animal and the calibration object as a whole, wherein the calibration object is placed in the middle position under the to-be-detected animal;
the image detection module is used for detecting the animal to be detected in the image and correcting the dimension of the animal to be detected according to the checkerboard to obtain the image only comprising the animal to be detected; extracting a target animal to be detected by using Mask R-CNN, and adjusting the posture and the position of the extracted target animal to be detected according to the checkerboard; wherein, the step of adjusting the posture and the position of the extracted target animal to be tested according to the checkerboard comprises the following steps: placing an animal to be detected on a background plate with a set size;
calculating the pixel area of the animal to be detected according to the position coordinates of the animal to be detected;
calculating the pixel area of the background plate according to the pixel area of the animal to be detected, and obtaining a set picture frame according to the pixel area of the background plate;
placing the extracted target animal to be detected in the set picture frame, and aligning the posture and the position of the animal to be detected; and
the weight estimation module is used for inputting the image only including the animal to be detected into the weight estimation model, classifying the animal to be detected according to weight categories through the weight estimation model, acquiring the probability of the animal to be detected being classified into each weight category, and calculating the weight of the animal to be detected according to the probability of the animal to be detected being classified into each weight category.
6. The image detection-based animal body weight measuring device of claim 5, further comprising a training module for training the weight estimation model, wherein, during training of the weight estimation model, the regression and classification objectives are jointly optimized as shown in the following formula:
min L_reg + λ·L_cls
wherein L_reg represents the error between the measured weight and the true weight, L_cls represents the classification error, and λ is a hyperparameter.
7. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image detection based animal body weight measurement method of any one of claims 1 to 4.
8. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the image detection-based animal body weight measurement method according to any one of claims 1 to 4.
CN202010710800.8A 2020-07-22 2020-07-22 Method, device, equipment and medium for measuring animal body weight based on image detection Active CN111860652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010710800.8A CN111860652B (en) 2020-07-22 2020-07-22 Method, device, equipment and medium for measuring animal body weight based on image detection

Publications (2)

Publication Number Publication Date
CN111860652A CN111860652A (en) 2020-10-30
CN111860652B true CN111860652B (en) 2022-03-29

Family

ID=73001560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010710800.8A Active CN111860652B (en) 2020-07-22 2020-07-22 Method, device, equipment and medium for measuring animal body weight based on image detection

Country Status (1)

Country Link
CN (1) CN111860652B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150498A (en) * 2020-11-09 2020-12-29 浙江大华技术股份有限公司 Method and device for determining posture information, storage medium and electronic device
CN112508968B (en) * 2020-12-10 2022-02-15 马鞍山市瀚海云星科技有限责任公司 Image segmentation method, device, system and storage medium
CN116830162A (en) * 2021-02-09 2023-09-29 水智有限公司 Systems, methods, and computer-executable code for organism quantification
CN113989361B (en) * 2021-10-22 2023-04-07 中国平安财产保险股份有限公司 Animal body length measuring method, device, equipment and medium based on artificial intelligence
CN114001810A (en) * 2021-11-08 2022-02-01 厦门熵基科技有限公司 Weight calculation method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204528A (en) * 2016-06-27 2016-12-07 重庆理工大学 A kind of size detecting method of part geometry quality
CN108109680A (en) * 2017-12-20 2018-06-01 南通艾思达智能科技有限公司 A kind of method of settlement of insurance claim image bag sorting
CN108122259A (en) * 2017-12-20 2018-06-05 厦门美图之家科技有限公司 Binocular camera scaling method, device, electronic equipment and readable storage medium storing program for executing
WO2018153322A1 (en) * 2017-02-23 2018-08-30 北京市商汤科技开发有限公司 Key point detection method, neural network training method, apparatus and electronic device
CN108898610A (en) * 2018-07-20 2018-11-27 电子科技大学 A kind of object contour extraction method based on mask-RCNN
CN109064511A (en) * 2018-08-22 2018-12-21 广东工业大学 A kind of gravity center of human body's height measurement method, device and relevant device
CN109559342A (en) * 2018-03-05 2019-04-02 北京佳格天地科技有限公司 The long measurement method of animal body and device
CN109800647A (en) * 2018-12-18 2019-05-24 陈韬文 A kind of chess manual automatic generation method, system, device and storage medium
CN110175503A (en) * 2019-04-04 2019-08-27 财付通支付科技有限公司 Length acquisition methods, device, settlement of insurance claim system, medium and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830293B (en) * 2018-05-08 2021-10-01 北京佳格天地科技有限公司 Animal weight identification method and device
CN108764210B (en) * 2018-06-12 2019-11-15 焦点科技股份有限公司 A kind of method and system that the pig based on object of reference identifies again
CN108961269B (en) * 2018-06-22 2022-04-08 深源恒际科技有限公司 Pig weight measuring and calculating method and system based on image
CN109141248B (en) * 2018-07-26 2020-09-08 深源恒际科技有限公司 Pig weight measuring and calculating method and system based on image
CN109785379B (en) * 2018-12-17 2021-06-15 中国科学院长春光学精密机械与物理研究所 Method and system for measuring size and weight of symmetrical object
CN111161265A (en) * 2019-11-13 2020-05-15 北京海益同展信息科技有限公司 Animal counting and image processing method and device
CN111387987A (en) * 2020-03-26 2020-07-10 苏州沃柯雷克智能系统有限公司 Height measuring method, device, equipment and storage medium based on image recognition

Also Published As

Publication number Publication date
CN111860652A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111860652B (en) Method, device, equipment and medium for measuring animal body weight based on image detection
US11144786B2 (en) Information processing apparatus, method for controlling information processing apparatus, and storage medium
CN108229509B (en) Method and device for identifying object class and electronic equipment
CN109657716B (en) Vehicle appearance damage identification method based on deep learning
CN108898047B (en) Pedestrian detection method and system based on blocking and shielding perception
CN108154102B (en) Road traffic sign identification method
CN108921057B (en) Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
KR20190068266A (en) System for measuring weight of livestocks using image analysis and method using the same
CN111724355B (en) Image measuring method for abalone body type parameters
WO2021139494A1 (en) Animal body online claim settlement method and apparatus based on monocular camera, and storage medium
CN114758249B (en) Target object monitoring method, device, equipment and medium based on field night environment
CN104463240B (en) A kind of instrument localization method and device
CN112528908A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114049325A (en) Construction method and application of lightweight face mask wearing detection model
CN111626241B (en) Face detection method and device
CN111144372A (en) Vehicle detection method, device, computer equipment and storage medium
CN103852034A (en) Elevator guide rail perpendicularity detection method
CN114241338A (en) Building measuring method, device, equipment and storage medium based on image recognition
CN109800616A (en) A kind of two dimensional code positioning identification system based on characteristics of image
CN115880260A (en) Method, device and equipment for detecting base station construction and computer readable storage medium
CN115265545A (en) Map matching navigation method, device, equipment and storage medium based on decision analysis
CN112686872B (en) Wood counting method based on deep learning
CN113283466A (en) Instrument reading identification method and device and readable storage medium
CN116740758A (en) Bird image recognition method and system for preventing misjudgment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant