CN111985477A - Monocular camera-based animal body online claims checking method and device and storage medium - Google Patents


Info

Publication number
CN111985477A
CN111985477A (application CN202010879333.1A; granted as CN111985477B)
Authority
CN
China
Prior art keywords
animal body
image
calibration plate
monocular camera
animal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010879333.1A
Other languages
Chinese (zh)
Other versions
CN111985477B (en)
Inventor
梅栋
汤鑫
余勇健
齐宪标
肖嵘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010879333.1A priority Critical patent/CN111985477B/en
Priority claimed from CN202010879333.1A external-priority patent/CN111985477B/en
Publication of CN111985477A publication Critical patent/CN111985477A/en
Priority to PCT/CN2020/136401 priority patent/WO2021139494A1/en
Application granted granted Critical
Publication of CN111985477B publication Critical patent/CN111985477B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The scheme relates to artificial intelligence, and provides a monocular camera-based online animal body claims checking method, device, and storage medium. The method comprises the following steps: the monocular camera focuses on an animal body and a calibration plate, with the animal body lying on its side on the ground and the calibration plate placed below its abdomen; an animal body region frame and a calibration plate region frame are displayed in the shooting picture, and the animal body is positioned within the animal body region frame. If the IOU between the current minimum enclosing rectangle of the animal body and the animal body region frame is larger than a preset intersection threshold, execution continues; otherwise the user is prompted to refocus. An image containing the animal body and the calibration plate is shot and input into a Cascade network model for animal body region identification, which outputs a mask of the minimum enclosing rectangle of the animal body. A pre-segmented image centered on the animal body is then cut out and sent into a weight recognition model, which outputs body length and weight information; the claim is checked by combining this with the category information obtained from facial image recognition of the animal body. The invention checks claims online through the model, upholds the fairness criterion, and reduces claim-checking costs.

Description

Monocular camera-based animal body online claims checking method and device and storage medium
Technical Field
The invention relates to artificial intelligence, and in particular to a monocular camera-based online claims checking method, device, and storage medium for an animal body.
Background
In the agricultural insurance industry, livestock insurance is a huge and stable line of business. Farmers usually insure each animal, such as pigs and cattle, individually while young, or insure livestock during transportation. When livestock die, the insurance company performs claim checking on each animal body; claim checking refers to the process in which the insurer verifies the information of the insured subject and determines whether to pay and how much to pay. This usually requires acquiring the facial, body size, and weight characteristics of the animal to assist.
Compared with human and vehicle insurance, animal claims involve great difficulty in identifying and distinguishing individuals. On-site visual estimation by business personnel places high demands on their experience and incurs high training costs; it also violates the insurance criterion of maintaining fairness among insured individuals and is not conducive to the development of company business.
For animal claim checking, the main current schemes in the industry are as follows:
1. Traditional on-site manual claim checking by a clerk:
Typically, the applicant contacts the company, which dispatches a clerk to check the claim on site. The clerk generally measures the animal's body size, weight, and other information with tools carried along, establishes an insurance file, and completes the claim-checking work.
(1) Advantages: on-site claim checking by a clerk is the most traditional method, with complete, well-established processes that rarely produce unexpected problems. Moreover, as a clerk accumulates processing experience, the same clerk's results get better and better until a proficient level is reached.
(2) Disadvantages: insuring farm animals is characterized by small single amounts relative to people and vehicles, on the order of hundreds of RMB, but very high frequency, and the labor and travel costs of on-site claim checking are much higher than for people and vehicles in cities. Fraud is also easy if the clerk's professional level is not high or the clerk has an agreement with the applicant.
2. Remote auxiliary claim checking of data collected on site by a clerk:
In view of the high demands scheme 1 places on the level of business personnel and its uncontrollable risks, companies in the industry have begun adding informatization measures: for example, pictures taken by the clerk and the measured weight are uploaded synchronously to a company database, and the company, on application or by spot check, has the claims checked remotely by senior professional staff.
(1) Advantages: highly experienced personnel can remotely assist field clerks, and online information raises the lower bound of claim-checking quality. Meanwhile, since the claim-checking standard no longer depends on many different clerks but mainly on a small number of senior specialists, fairness of claim checking can be ensured to a certain degree.
(2) Disadvantages: considerable manpower is still needed for on-site data collection, and even with a few experienced personnel cooperating with the majority of on-site staff, the claim standard still depends on people, which is not conducive to automation of the service.
In summary, the applicant finds that to date there has been no technique suitable for measuring insured animal bodies at a mobile terminal and conveniently applicable to online claim checking of animal bodies. With the development of deep learning technology and the maturing of mobile phone capabilities, animal identity recognition, body size recognition, and weight recognition with a mobile phone have become possible. In view of this, it is necessary to develop a method that obtains the body size and weight of an animal body on a mobile terminal to check claims online.
Disclosure of Invention
The invention provides a monocular camera-based animal body online claims checking method, device, and storage medium, and mainly aims to obtain the body size and weight information of an animal body by identifying a two-dimensional image of the animal body.
In order to achieve the above object, the present invention provides an on-line claims checking method for an animal body based on a monocular camera, comprising the following steps:
controlling the monocular camera to focus on an animal body and a calibration plate, wherein the animal body is placed on its side on the ground, the calibration plate is placed below the abdomen of the animal body, an animal body region frame and a calibration plate region frame are displayed in the shooting picture of the monocular camera, and the animal body is positioned within the animal body region frame;
judging whether the IOU between the minimum enclosing rectangle of the animal body in the current frame and the animal body region frame displayed in the shooting picture is larger than a preset intersection threshold; if so, continuing execution, otherwise prompting refocusing;
controlling the monocular camera to shoot an image containing the animal body and the calibration plate, inputting the image into a Cascade RCNN network model to identify the animal body region, and outputting a mask of the minimum enclosing rectangle of the animal body;
and segmenting a pre-segmented image of preset size centered on the animal body, sending it into a trained weight recognition model, outputting body length and weight information, and determining the claim-checking result by combining the category information obtained from facial image recognition of the animal body.
Optionally, before judging whether the IOU between the minimum enclosing rectangle of the animal body in the current frame and the animal body region frame displayed in the shot picture is greater than the preset intersection threshold, a shot-picture pre-determination is performed, including determining the resolution of the picture, the blur degree of the picture, and whether the animal body in the animal body region frame is complete.
The blur degree of the picture is detected with the Laplacian operator: each pixel of the picture is convolved with the operator and the variance of the output is computed, and the picture is regarded as blurred when the variances over 2 s are all smaller than a blur threshold.
Optionally, before shooting, the findChessboardCorners function of OpenCV is further used to find the corner points of the calibration plate, the area of the calibration plate is obtained to calculate the scale I, and refocusing is prompted when the scale I is smaller than a preset scale threshold, where the formula of the scale I is:

I = Area(S_b1 ∩ S_b2) / Area(S_b1 ∪ S_b2)

where S_b1 represents the calibration plate area obtained from the current frame, and S_b2 represents the area of the preset calibration plate region frame in the picture.
Optionally, the calculation formula of the IOU between the minimum enclosing rectangle of the animal body and the animal body region frame preset in the shot picture is:

IOU = Area(S_h1 ∩ S_h2) / Area(S_h1 ∪ S_h2)

where S_h1 is the minimum enclosing rectangle of the animal body in the current frame, S_h2 is the region of the animal body region frame preset in the picture, and the IOU is the ratio of the area of their intersection to the area of their union.
Optionally, segmenting the pre-segmented image centered on the animal body comprises obtaining the dimensions D_width and D_height of the pre-segmented image according to the scale I:

D_width = I × D_preset_width
D_height = I × D_preset_height

where D_preset_width and D_preset_height are the dimensions of the pre-segmented image when the scale I equals 1.
Optionally, the weight recognition model is a RESNET-50 network, and the method for recognizing weight with the weight recognition model comprises:
collecting a plurality of pre-segmented images annotated with the body length and weight of animal bodies, setting a reference body length and a reference weight, dividing the annotated body length of each animal body image by the reference body length to obtain a normalized body length label, and dividing the annotated weight of each animal body image by the reference weight to obtain a normalized weight label;
using one part of the pre-segmented images as training images and another part as verification images;
inputting the training images into the RESNET-50 network for training, the RESNET-50 network outputting a body length recognition branch and a body weight recognition branch, and inputting the verification images into the RESNET-50 network until the output reaches a preset accuracy threshold;
and inputting a pre-segmented image into the verified RESNET-50 network, outputting the relative indexes L_0 and W_0 of body length and weight, and multiplying L_0 and W_0 by the reference body length and reference weight respectively to obtain the recognized body length L and weight W.
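As a minimal illustration of the normalization and multi-task training described above, the step below regresses both branches against labels divided by the reference values. This is a sketch under assumed reference values; the patent specifies a RESNET-50, for which a tiny stand-in network is used here so the snippet stays self-contained:

```python
import torch
import torch.nn as nn

REF_LENGTH, REF_WEIGHT = 100.0, 100.0  # assumed reference body length / weight

def train_step(model, optimizer, images, lengths, weights):
    """One multi-task training step: both branches are regressed
    against labels normalized by the reference length / weight."""
    l_pred, w_pred = model(images)
    loss = nn.functional.mse_loss(l_pred.squeeze(1), lengths / REF_LENGTH) \
         + nn.functional.mse_loss(w_pred.squeeze(1), weights / REF_WEIGHT)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

class TinyNet(nn.Module):
    """Tiny stand-in for the RESNET-50 backbone, for illustration only."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(12, 8), nn.ReLU())
        self.l_head = nn.Linear(8, 1)  # body length recognition branch
        self.w_head = nn.Linear(8, 1)  # body weight recognition branch
    def forward(self, x):
        f = self.body(x)
        return self.l_head(f), self.w_head(f)

model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = train_step(model, opt, torch.randn(4, 12),
                  torch.tensor([80., 90., 100., 110.]),   # lengths
                  torch.tensor([50., 60., 70., 80.]))     # weights
```

At inference time the two outputs are multiplied back by the same reference values to recover L and W.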
Optionally, before controlling the monocular camera to focus on the animal body and the calibration plate, an AI interface provided by the monocular camera supplier is first called to obtain the geographic position, camera parameters, and IMU parameters; three-dimensional scene reconstruction is performed from the multi-angle pictures taken by the applicant together with the camera parameters and IMU parameters to obtain the actual size of the calibration plate; the shooting distance is obtained from the mobile phone's displacement sensor; and whether the calibration plate is the designated calibration plate is then judged from the obtained shooting distance and the actual size of the calibration plate.
The invention also provides an on-line claims checking device for an animal body based on the monocular camera, which comprises:
the focusing module, used for controlling the monocular camera to focus on an animal body and a calibration plate, wherein the animal body is placed on its side on the ground, the calibration plate is placed below the abdomen of the animal body, an animal body region frame and a calibration plate region frame are displayed in the shooting picture of the monocular camera, and the animal body is positioned within the animal body region frame;
the shooting compliance judging module, used for judging whether the IOU between the minimum enclosing rectangle of the animal body in the current frame and the animal body region frame displayed in the shooting picture is larger than a preset intersection threshold; if so, execution continues, otherwise refocusing is prompted;
the animal body segmentation module, used for controlling the monocular camera to shoot an image containing the animal body and the calibration plate, inputting the image into a Cascade RCNN network model to identify the animal body region, and outputting a mask of the minimum enclosing rectangle of the animal body;
and the weight recognition module, used for segmenting a pre-segmented image of preset size centered on the animal body, sending it into the trained weight recognition model, outputting body length and weight information, and determining the claim-checking result by combining the category information obtained from facial image recognition of the animal body.
The present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the monocular camera-based animal body online claims checking method as described above.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the monocular camera-based animal body online claims checking method as described above.
Aiming at the problem of remote animal claim checking, the scheme provides a remote end-to-end solution with the following beneficial technical effects:
1. The identification result of a unified model is used as the claim-checking index, ensuring fairness for all users, which is the most basic criterion of claim checking.
2. The smartphone, as a platform concentrating the latest technology, has very high credibility, and the indexes acquired through the smartphone's official AI interfaces can serve as anti-counterfeiting input.
3. Claim checking with deep-learning models is a trainable, iterative method: the data input at each claim check can also expand the training set, so the accuracy of claim-checking measurements keeps improving as the collected data grows. This saves the labor cost of highly skilled staff and the expense of false claims (a manual method or traditional algorithm has no capability for sustained iterative optimization; if a measured index is unfavorable to the policyholder, for example a weight measured lighter than the actual value, the policyholder will certainly complain, and if it is favorable, for example a weight measured heavier than the actual value, it causes extra claim payments for the company).
Drawings
The above features and technical advantages of the present invention will become more apparent and readily appreciated from the following description of the embodiments thereof taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic diagram illustrating steps of a monocular camera-based method for online claims verification of an animal body according to an embodiment of the present invention;
FIG. 2 is a schematic view of a calibration plate smaller than the frame of an animal body region;
FIG. 3 is a schematic view of the calibration plate coinciding with the animal region frame;
FIG. 4 is a schematic view of a calibration plate larger than the animal region frame;
FIG. 5 is a schematic diagram of an electronic device;
fig. 6 is a block configuration diagram of the monocular camera-based animal body online claiming device.
Detailed Description
Embodiments of a monocular camera-based animal body online claims checking method, apparatus and storage medium according to the present invention will be described below with reference to the accompanying drawings. Those of ordinary skill in the art will recognize that the described embodiments can be modified in various different ways, or combinations thereof, without departing from the spirit and scope of the present invention. Accordingly, the drawings and description are illustrative in nature and not intended to limit the scope of the claims. Furthermore, in the present description, the drawings are not to scale and like reference numerals refer to like parts.
The embodiment provides an animal body online claims checking method based on a monocular camera, which comprises the following steps:
step S1, controlling the monocular camera to focus on the animal and the calibration board, as shown in fig. 2, placing the animal 50 on the ground, placing the calibration board 20 below the abdomen of the animal 50, so that the side of the animal and the calibration board 20 are both in the horizontal direction, displaying an animal region frame 30 and a calibration board region frame 80 in the shot picture 60 of the monocular camera, the calibration board region frame 80 being located in the animal region frame, the animal 50 being placed in the animal region frame 30.
Preferably, the calibration plate 20 completely coincides with the calibration plate region frame 80 during shooting, so that the sizes of the photographed animal bodies are referenced to the same scale, which facilitates later weight recognition. As can be seen from figs. 4 and 5, with the same animal body region frame 30, the minimum enclosing rectangles obtained for the animal body differ because of differences in shooting distance. If the proportion of the photographed animal body were locked without the calibration plate 20, the same animal body could appear at different sizes due to different distances between the monocular camera and the animal body, causing errors in recognizing body size and weight. By fixing the calibration plate 20 as a marker, the sizes of the photographed animal bodies are all referenced to the calibration plate, preventing the weight recognition deviation that arises when the same animal body yields different apparent sizes at different shooting distances.
The size coordinates of the animal body region frame and the calibration plate region frame may be as follows: the upper-left, upper-right, lower-right, and lower-left corners of the animal body region frame are (0.2 x Width, 0.1 x Height), (0.8 x Width, 0.1 x Height), (0.8 x Width, 0.6 x Height), and (0.2 x Width, 0.6 x Height); the upper-left, upper-right, lower-right, and lower-left corners of the calibration plate region frame are (0.4 x Width, 0.65 x Height), (0.6 x Width, 0.65 x Height), (0.6 x Width, 0.8 x Height), and (0.4 x Width, 0.8 x Height), where Width is the width parameter and Height is the height parameter of the picture.
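The frame layout above can be expressed as a small helper. This is a sketch: `Width` and `Height` are the preview dimensions, and the lower-right corner of the body frame is inferred by symmetry since the original listing omits it:

```python
def region_frames(width: int, height: int):
    """Return the corner coordinates (UL, UR, LR, LL) of the animal body
    region frame and the calibration plate region frame, using the
    fractional layout given in the text."""
    body = [(0.2 * width, 0.1 * height), (0.8 * width, 0.1 * height),
            (0.8 * width, 0.6 * height), (0.2 * width, 0.6 * height)]
    plate = [(0.4 * width, 0.65 * height), (0.6 * width, 0.65 * height),
             (0.6 * width, 0.8 * height), (0.4 * width, 0.8 * height)]
    return body, plate

# e.g. for the 1280 x 960 minimum resolution mentioned later in the text
body, plate = region_frames(1280, 960)
```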
Step S2: image quality determination is performed on the shot picture, including the scale determination and the target animal body region determination.
The scale determination requires that the scale I between the area of the calibration plate detected in the shot picture and the area of the calibration plate region frame be larger than a set scale threshold, for example 0.8; if it is smaller than the set scale threshold, refocusing is prompted. The scale I is the ratio of the area of the intersection of S_b1 and S_b2 to the area of their union, i.e. their degree of coincidence, and is calculated as:

I = Area(S_b1 ∩ S_b2) / Area(S_b1 ∪ S_b2)

where S_b1 represents the calibration plate area obtained from the current frame, and S_b2 represents the area of the preset calibration plate region frame in the picture.
The target animal body region determination requires that the IOU (intersection over union) between the minimum enclosing rectangle 40 of the animal body in the current frame and the preset animal body region frame 30 in the picture be greater than an intersection threshold, here chosen as 0.75; if the IOU is below the threshold, refocusing is prompted. The IOU is calculated as:

IOU = Area(S_h1 ∩ S_h2) / Area(S_h1 ∪ S_h2)

where S_h1 is the minimum enclosing rectangle of the animal body in the current frame, S_h2 is the region of the animal body region frame preset in the picture, and the IOU is the ratio of the area of their intersection to the area of their union.
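Both the scale determination and the target-region check reduce to an intersection-over-union computation on axis-aligned rectangles, which can be sketched as:

```python
def rect_iou(a, b):
    """IoU of two axis-aligned rectangles given as (x1, y1, x2, y2)
    with (x1, y1) the top-left and (x2, y2) the bottom-right corner."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# the frame passes the target-region check when the IoU exceeds 0.75
iou = rect_iou((100, 100, 400, 300), (120, 110, 420, 310))
```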
Specifically, findChessboardCorners, a commonly used built-in method of the image processing library OpenCV, may be called to detect and validate the calibration plate. For example, with a 40 x 40 cm calibration plate composed of 8 x 8 squares, the threshold for the number of detected corner points (the points where black squares meet) is set to 49; when this requirement is met, the set of pixel regions occupied by the calibration plate is calculated from the detected 8 x 8 region.
Preferably, before the scale determination and the target animal body region determination, a shot-picture pre-determination is also performed, which covers general quality attributes of the image to be shot, including the resolution of the image, the blur degree of the image, and the animal body condition in the detection picture, as follows:
B. The resolution of the current shot picture is detected; if it is below the threshold (e.g., 1280 x 960), a prompt pops up reminding the user to adjust it.
C. Once condition B is met, the user is reminded to focus on the animal body, and the image blur degree is detected frame by frame using the Laplacian operator. OpenCV (a cross-platform computer vision library released under the BSD license) provides a wrapper for the Laplacian operator that can be called directly. The Laplacian measures the second derivative of the image and highlights regions where intensity changes rapidly (i.e., boundaries); each pixel of the image is convolved with the operator, and the variance of the output is computed. The smaller the variance, the blurrier the picture; when the variances over 2 consecutive seconds are all smaller than the blur threshold, the picture is regarded as blurred.
D. Whether a moving object exists in the picture is detected. If not, a prompt pops up indicating that detection of the corresponding target failed, reminding the user to aim the monocular camera at a qualified animal body.
Step S3: the monocular camera is controlled to shoot an image containing the animal body and the calibration plate, and animal body region and calibration plate identification is performed on the image. Specifically, the image that passed the quality determination is input into a Cascade RCNN network model for animal body region identification, which outputs a mask of the minimum enclosing rectangle of the animal body, yielding the coordinates of its four corner points (X1, Y1), (X2, Y2), (X3, Y3), (X4, Y4), where (X1, Y1), (X2, Y2), (X3, Y3), and (X4, Y4) are the upper-left, upper-right, lower-left, and lower-right corners of the minimum enclosing rectangle, respectively. The Cascade RCNN network model is a target detection model integrated in mmdetection (a PyTorch-based open-source object detection toolbox).
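In a practical implementation the mask would come from the Cascade RCNN model, e.g. through mmdetection's `init_detector`/`inference_detector` helpers (which require a config file and trained weights not given here). The snippet below is a small NumPy sketch of the subsequent step only: extracting the four corner points of the minimum enclosing rectangle from such a binary mask:

```python
import numpy as np

def min_enclosing_rect(mask: np.ndarray):
    """Given a binary animal-body mask, return the four corner points of
    its axis-aligned minimum enclosing rectangle, ordered upper-left,
    upper-right, lower-left, lower-right (y grows downward)."""
    ys, xs = np.nonzero(mask)
    x1, x2 = xs.min(), xs.max()
    y1, y2 = ys.min(), ys.max()
    return (x1, y1), (x2, y1), (x1, y2), (x2, y2)
```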
Step S4: a rectangular area of preset size centered on the animal body is cut out as the pre-segmented image, for convenient computation by the weight recognition model. The specific steps for obtaining the pre-segmented image containing the animal body are:
S41. In theory, if the area of the calibration plate exactly coincided with its region frame, the size standard of every photographed animal body would be the same; in practice, differences in shooting angle, shooting method, and eyesight inevitably introduce error in the coincidence between the calibration plate and its region frame. Therefore, the scale I between the calibration plate area and its region frame is detected, and the shooting error is compensated by I, giving the dimensions D_width and D_height of the pre-segmented image to be cut:

D_width = I × D_preset_width
D_height = I × D_preset_height

where D_preset_width and D_preset_height are the dimensions of the pre-segmented image when the scale I equals 1.
S42, the center coordinate P_h of the minimum enclosing rectangle of the animal body is calculated; the specific formula is:

P_h = (X_h, Y_h)

where:

X_h = 0.25 × (X1 + X2 + X3 + X4)
Y_h = 0.25 × (Y1 + Y2 + Y3 + Y4)
S43, the four corner coordinates of the pre-segmented image are calculated as follows:

1) Lower-left corner P_hgxs = (X_hgxs, Y_hgxs), where

X_hgxs = X_h − 0.5 × D_width
Y_hgxs = Y_h − 0.5 × D_height

X_hgxs and Y_hgxs are the horizontal and vertical coordinates of the lower-left corner of the pre-segmented image.

2) Lower-right corner P_hgyx = (X_hgyx, Y_hgyx), where

X_hgyx = X_h + 0.5 × D_width
Y_hgyx = Y_h − 0.5 × D_height

X_hgyx and Y_hgyx are the horizontal and vertical coordinates of the lower-right corner of the pre-segmented image.

3) Upper-right corner P_hgys = (X_hgys, Y_hgys), where

X_hgys = X_h + 0.5 × D_width
Y_hgys = Y_h + 0.5 × D_height

X_hgys and Y_hgys are the horizontal and vertical coordinates of the upper-right corner of the pre-segmented image.

4) Upper-left corner P_hgzs = (X_hgzs, Y_hgzs), where

X_hgzs = X_h − 0.5 × D_width
Y_hgzs = Y_h + 0.5 × D_height

X_hgzs and Y_hgzs are the horizontal and vertical coordinates of the upper-left corner of the pre-segmented image.
And S44, the image is cropped according to the four corner coordinates obtained in S43; the cropped image is the pre-segmented image.
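The crop described in S41 to S44 can be sketched in a few lines; this is a minimal illustration, assuming the detector's corner coordinates and the scale I are already available, that image rows are indexed top-down as in common image libraries, and that the function and variable names are invented for this sketch.

```python
import numpy as np

def presegment(image, corners, scale_i, preset_w, preset_h):
    """Crop a pre-segmented region centered on the animal body.

    image    : H x W (x C) array.
    corners  : four (x, y) corner points of the minimum enclosing rectangle.
    scale_i  : coincidence scale I between calibration plate and its frame.
    preset_w, preset_h : crop size when I == 1 (hypothetical presets).
    """
    # S41: compensate the shooting error with the scale I.
    d_w = scale_i * preset_w
    d_h = scale_i * preset_h
    # S42: center of the minimum enclosing rectangle.
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    x_h, y_h = 0.25 * sum(xs), 0.25 * sum(ys)
    # S43: corner coordinates of the pre-segmented image.
    x0 = int(round(x_h - 0.5 * d_w))
    x1 = int(round(x_h + 0.5 * d_w))
    y0 = int(round(y_h - 0.5 * d_h))
    y1 = int(round(y_h + 0.5 * d_h))
    # S44: clamp to the image bounds and crop.
    h, w = image.shape[:2]
    x0, x1 = max(x0, 0), min(x1, w)
    y0, y1 = max(y0, 0), min(y1, h)
    return image[y0:y1, x0:x1]
```

Clamping to the image bounds is a defensive addition of this sketch, for animals detected near the picture edge.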
Step S5, the obtained pre-segmented image is fed into the weight recognition model. The weight recognition model is a RESNET-50 network, a deep learning model that, after multi-task training and verification, outputs the body length and weight information of the animal body; combined with the species information of the animal body, a claim checking result is output. For example, different pig breeds and different body sizes and weights correspond to different payout prices, so the payout price can be determined from the breed information, body size and weight information of the animal body.
The specific steps of training, verifying and identifying the weight of the weight identification model are as follows:
S51, training stage:
A. A plurality of pre-segmented images are collected, each carrying annotation information that includes the body length and body weight of the animal; one part of the pre-segmented images is randomly split off for training and the other part for verification.
B. Normalization is performed. Specifically, a reference body length of 2 m is set and the annotated body length of each picture is divided by it; a reference weight of 300 kg is set and the annotated weight of each picture is divided by it. This yields the normalized annotation parameters.
C. The pre-segmented images are uniformly scaled to a reasonable size; in this embodiment they are scaled to 640×640.
D. During training, the network outputs two branches, a body-length recognition branch and a weight recognition branch, with the body-length branch serving as auxiliary input information for the weight branch; once training and verification reach a preset accuracy threshold, the model is ready for recognition.
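The two-branch wiring of step D, in which the body-length prediction feeds the weight branch as auxiliary input, can be illustrated with a deliberately tiny stand-in network. This is not the RESNET-50 of the embodiment, just a numpy sketch of the multi-task structure; all weights, shapes and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in backbone: a single linear layer over an 8-dim feature vector.
W_backbone = rng.normal(size=(8, 4))
w_len = rng.normal(size=4)     # body-length recognition branch
w_weight = rng.normal(size=5)  # weight branch sees features + length

def forward(features):
    """Return normalized (length, weight) predictions for one sample."""
    h = np.tanh(features @ W_backbone)
    length = float(h @ w_len)  # body-length branch output
    # Weight branch takes the length prediction as auxiliary input.
    weight = float(np.concatenate([h, [length]]) @ w_weight)
    return length, weight

def multitask_loss(pred, target):
    """Joint training objective: sum of squared errors of both branches."""
    (l, w), (lt, wt) = pred, target
    return (l - lt) ** 2 + (w - wt) ** 2
```

In the real model both branches would be heads on the shared RESNET-50 features and be optimized jointly against the normalized annotations.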
S52, use stage:
E. The image obtained through steps S1 to S4 is scaled to 640×640.
G. The scaled image is input into the RESNET-50 network, which outputs the relative indexes L_0 and W_0 of body length and body weight.
H. The reference body length and reference weight parameters set by the system are read (2 m and 300 kg, as set in the training step), and the relative indexes L_0 and W_0 are multiplied by the corresponding reference body length and reference weight respectively to obtain the identified body length L and body weight W.
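Steps B, G and H reduce to a simple normalize/denormalize pair around the network. A minimal sketch, assuming the 2 m / 300 kg references of the embodiment; the helper names are illustrative:

```python
REF_LENGTH_M = 2.0     # reference body length (step B)
REF_WEIGHT_KG = 300.0  # reference body weight (step B)

def normalize_label(length_m, weight_kg):
    """Training-time normalization of one annotated sample (step B)."""
    return length_m / REF_LENGTH_M, weight_kg / REF_WEIGHT_KG

def denormalize_prediction(l_0, w_0):
    """Inference-time recovery of body length L and weight W from the
    network's relative indexes L_0 and W_0 (step H)."""
    return l_0 * REF_LENGTH_M, w_0 * REF_WEIGHT_KG
```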
Further, step S0 is included before step S1: a face photograph of the insured animal is taken, and the category of the animal is determined with a fine-grained classification algorithm, which can distinguish pictures of pig and cattle breeds with very similar biological characteristics, for example to which pig breed an animal belongs (different breeds correspond to different payout prices). The fine-grained classification algorithm can be an MTCNN (Multi-task Cascaded Convolutional Network) algorithm or a B-CNN (Bilinear CNN) algorithm; the B-CNN algorithm achieves high classification accuracy on the CUB bird data set.
In an alternative embodiment, the authenticity of the claim-checking process is considered: a counterfeit calibration plate could make the finally acquired animal weight and body-size data untrustworthy. Normally the applicant photographs the animal with a calibration plate designated by the insurance company. To obtain a larger weight measurement, however, an applicant could imitate the calibration plate at a reduced size and shorten the shooting distance so that the smaller plate still fills the calibration plate reference frame on the phone; the scale reference that the weight and body-size model derives from a plate of the specified size is then distorted, producing a large prediction error.
The present embodiment can identify whether a counterfeit calibration plate is present in the following manner: the mobile phone calls an AI (artificial intelligence) interface provided by the phone vendor to obtain the geographic position, camera parameters and IMU (inertial measurement unit) data; a three-dimensional reconstruction of the scene is performed from the multi-angle pictures shot by the applicant together with the camera and IMU parameters to obtain the actual size of the calibration plate; the shooting distance is obtained from the phone's displacement sensor; and whether the plate is the designated calibration plate is judged from the obtained shooting distance and the actual plate size. Specifically, for the same calibration plate at a fixed shooting distance, the monocular camera always captures it at the same image size; for the calibration plate specified by the insurance company, its captured size is known for each shooting distance. If the plate size obtained from the three-dimensional scene is consistent with this data, the plate is genuine; otherwise it is not the designated calibration plate.
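The consistency check at the end of this paragraph can be sketched with a pinhole-camera relation: a genuine plate of known physical size should project to focal_px × size / distance pixels at the sensor-reported shooting distance, and an observed size that deviates beyond a tolerance flags a counterfeit. All names, the 40 cm plate size and the tolerance are illustrative assumptions, not taken verbatim from the patent.

```python
def expected_pixel_size(real_size_m, distance_m, focal_px):
    """Pinhole projection: pixel extent of an object of real_size_m metres
    seen at distance_m metres with a focal length of focal_px pixels."""
    return focal_px * real_size_m / distance_m

def is_designated_plate(observed_px, distance_m, focal_px,
                        specified_size_m=0.40, rel_tol=0.10):
    """Accept the plate only if its observed pixel size matches what the
    specified 40 cm plate should measure at this shooting distance."""
    expected = expected_pixel_size(specified_size_m, distance_m, focal_px)
    return abs(observed_px - expected) <= rel_tol * expected
```

Because the shooting distance comes from the phone's sensor rather than from the image itself, shrinking the plate and moving closer no longer compensates: at the true distance, the observed pixel size falls short of the expectation for a genuine plate.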
Obtaining a three-dimensional scene comprises the steps of:
(1) Camera calibration: regarding the camera intrinsics, many sensors such as the IMU, the camera and even a structured-light module are integrated inside the mobile phone, and each phone manufacturer or ecosystem company provides an AI interface platform (for example Huawei's HiAI or Apple's ARKit) whose AI interface an app can call; each application can call the AI interface to acquire the corresponding data as needed. The camera extrinsics are then obtained from the intrinsics combined with a pose estimation algorithm.
(2) A SIFT operator is used to compute features for the pixels of each picture, the pixels of multiple pictures are matched and put in correspondence, sparse point cloud information is obtained from the matched features combined with the camera parameters, and the sparse point cloud is generated with Bundler (which can reconstruct a 3D model from an unordered picture set) and VisualSFM (three-dimensional reconstruction software).
(3) PMVS (Patch-based Multi-View Stereo) is used to perform patch expansion and filtering on the sparse point cloud to obtain a dense point cloud, and the dense point cloud is meshed to obtain the three-dimensional scene.
Fig. 5 is a functional block diagram of the monocular camera-based animal body online claims checking device according to the present invention. The monocular camera-based online claims checking device 200 may be installed in an electronic device. According to the functions realized, the monocular camera-based animal body online claims checking device 200 may include a focusing module 201, a shooting compliance determination module 202, an animal body segmentation module 203, and a weight recognition module 204. A module here refers to a series of computer program segments that can be executed by a processor of the electronic device to perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions of the modules are as follows:
The focusing module 201 is configured to control the monocular camera to focus on the animal body and the calibration plate. As shown in fig. 2, the animal body 50 is placed on the ground and the calibration plate 20 is placed below the abdomen of the animal body 50, so that the side of the animal body and the calibration plate 20 both lie in the horizontal direction; an animal body region frame 30 and a calibration plate region frame 80 are displayed in the shooting picture 60 of the monocular camera, the calibration plate region frame 80 is located inside the animal body region frame, and the animal body 50 is placed inside the animal body region frame 30.
Preferably, the calibration plate 20 completely coincides with the calibration plate region frame 80 during shooting, so that the sizes of the photographed animal bodies all refer to the same scale, which facilitates the later weight identification. As can be seen from fig. 4 and 5, with the same animal body region frame 30, the minimum enclosing rectangles of the obtained animal bodies differ when the shooting distances differ. Without the calibration plate 20 to lock the proportion of the photographed animal body, the same animal body could appear at different sizes because of different distances between the monocular camera and the animal body, causing errors in recognizing the body size and weight. By fixing the calibration plate 20 as a marker, the sizes of the photographed animal bodies are all referenced to the calibration plate, which prevents the weight recognition deviation that would arise when the same animal body yields different image sizes at different shooting distances.
The shooting compliance judging module 202 is used for judging the image quality of the shot picture, including judging the scaling and judging the target animal body region.
The scale judgment means that the scale I between the calibration plate area detected in the shot picture and the area of the calibration plate region frame must be larger than a set scale threshold, for example 0.8; if it is smaller than the threshold, refocusing is prompted. The scale I is the degree of coincidence of S_b1 and S_b2, namely the ratio of the area of their intersection region to the area of their union region, and is calculated as:

I = area(S_b1 ∩ S_b2) / area(S_b1 ∪ S_b2)

where S_b1 denotes the calibration plate area obtained for the current frame, and S_b2 denotes the area of the calibration plate region frame preset in the picture.
The judgment condition for the target animal body region is that the IOU (intersection over union) between the minimum enclosing rectangular region 40 of the animal body in the current frame and the animal body region frame 30 preset in the picture is greater than an intersection threshold, selected here as 0.75; if the IOU is smaller than the threshold, refocusing is prompted. The IOU is calculated as:

IOU = area(S_h1 ∩ S_h2) / area(S_h1 ∪ S_h2)

where S_h1 is the minimum enclosing rectangular region of the animal body in the current frame, S_h2 is the animal body region frame preset in the picture, and the IOU is the ratio of the area of the intersection region of S_h1 and S_h2 to the area of their union region.
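Both the scale I and the IOU above are the same intersection-over-union computation on two axis-aligned rectangles; a minimal sketch (the (x0, y0, x1, y1) rectangle format is an assumption of this illustration):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned rectangles
    given as (x0, y0, x1, y1) with x0 < x1 and y0 < y1."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix1 - ix0), max(0.0, iy1 - iy0)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The same function serves both checks: compare against 0.8 for the calibration plate (scale I) and against 0.75 for the animal body region frame.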
Specifically, the findChessboardCorners function of OpenCV is called to detect the calibration plate (findChessboardCorners is a commonly used built-in method of the image processing library OpenCV). For example, a 40×40 cm calibration plate consisting of 8×8 squares is adopted, so the threshold for the number of detected corner points (a corner point is a point where black squares intersect) is set to 49; when this requirement is met, the calibration plate is considered detected, and the pixel region set occupied by the calibration plate is calculated from the corresponding 8×8 region.
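Given the 7×7 = 49 inner corners that a detector such as OpenCV's `findChessboardCorners` returns for an 8×8-square board, the plate's pixel area can be estimated from the quadrilateral spanned by the outermost inner corners, scaled up to account for the outer ring of squares. A sketch under those assumptions; the 8/6 scaling factor and helper names are illustrative, not from the patent:

```python
def quad_area(p0, p1, p2, p3):
    """Shoelace area of a quadrilateral given as (x, y) points in order."""
    pts = [p0, p1, p2, p3]
    s = 0.0
    for i in range(4):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % 4]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def plate_pixel_area(corners):
    """Estimate the full-plate pixel area from the 7x7 inner corners of an
    8x8-square board, with corners ordered row by row."""
    grid = [corners[i * 7:(i + 1) * 7] for i in range(7)]
    inner = quad_area(grid[0][0], grid[0][6], grid[6][6], grid[6][0])
    # The inner corners span 6 of the 8 square widths in each direction.
    return inner * (8.0 / 6.0) ** 2
```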
Preferably, before the scale judgment and the target animal body region judgment, a shot-picture pre-judgment is also performed. It covers the general quality attributes of the image to be shot, namely the resolution of the image, the degree of blurring of the image, and the condition of the animal body in the picture, and comprises the following steps:
B. The resolution of the current shot picture is detected; if it is smaller than a set threshold (such as 1280×960), a prompt that the resolution is below the threshold pops up to remind the user to change it.
C. After condition B is met, the user is reminded to focus on the animal body, and the image blurring degree is detected frame by frame with the Laplacian operator. OpenCV (a cross-platform computer vision library under the BSD license; BSD is a Unix-derived system) provides a wrapper for the Laplacian operator that can be called directly. The Laplacian operator measures the second derivative of the image and highlights regions of rapidly changing intensity, i.e. boundaries; each pixel of the image is convolved with the Laplacian operator and the variance of the output is calculated. The smaller the variance, the blurrier the picture; when the variances over 2 continuous seconds are all smaller than the blurring threshold, the picture is regarded as blurred.
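The sharpness measure of step C (variance of the Laplacian response) can be reproduced without OpenCV in a few lines of NumPy; a sketch using the common 4-neighbour Laplacian kernel, with an illustrative threshold value:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response of a 2-D grayscale image;
    low values indicate a blurred picture."""
    g = gray.astype(float)
    # 'valid' convolution with the 3x3 Laplacian kernel, written as shifts.
    resp = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
            - 4.0 * g[1:-1, 1:-1])
    return float(resp.var())

def is_blurred(gray, threshold=100.0):
    """Blur decision for a single frame (threshold is illustrative); the
    embodiment additionally requires the condition to hold for 2 seconds."""
    return laplacian_variance(gray) < threshold
```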
D. Whether a moving object exists in the picture is detected; if not, a prompt pops up indicating that detection of the corresponding target failed and reminding the user to aim the monocular camera at a qualified animal body.
The animal body segmentation module 203 controls the monocular camera to shoot the image containing the animal body and the calibration plate and performs animal body region and calibration plate identification on the image. Specifically, the image that has passed the image quality judgment is input into a Cascade RCNN network model for animal body region identification, and a mask of the minimum enclosing rectangular region of the animal body is output, together with the coordinates (X1, Y1), (X2, Y2), (X3, Y3), (X4, Y4) of its four corner points, where X1, X2, X3 and X4 are the abscissas of the upper-left, upper-right, lower-left and lower-right corners, and Y1, Y2, Y3 and Y4 are the ordinates of the upper-left, upper-right, lower-left and lower-right corners of the minimum enclosing rectangular region. The Cascade RCNN network model is a target detection model integrated in mmdetection (a PyTorch-based open-source object detection toolbox).
A rectangular area of a preset size centered on the animal body is then segmented out as a pre-segmented image, to simplify the computation of the weight recognition model. The specific steps for obtaining the pre-segmented image containing the animal body are as follows:
In S41, in theory, if the area of the calibration plate exactly coincides with the calibration plate region frame 80, the scale of all captured animal bodies would be identical; in practice, however, differences in shooting angle, shooting method and viewpoint inevitably cause the calibration plate to deviate from the calibration plate region frame. Therefore, the coincidence scale I between the detected calibration plate area and the calibration plate region frame is used to compensate this human shooting error, and the size D_width, D_height of the pre-segmented image to be cropped is calculated as:

D_width = I × D_presetWidth
D_height = I × D_presetHeight

where D_presetWidth, D_presetHeight are the dimensions of the pre-segmented image when the scale I equals 1.
S42, the center coordinate P_h of the minimum enclosing rectangle of the animal body is calculated; the specific formula is:

P_h = (X_h, Y_h)

where:

X_h = 0.25 × (X1 + X2 + X3 + X4)
Y_h = 0.25 × (Y1 + Y2 + Y3 + Y4)
S43, the four corner coordinates of the pre-segmented image are calculated as follows:

1) Lower-left corner P_hgxs = (X_hgxs, Y_hgxs), where

X_hgxs = X_h − 0.5 × D_width
Y_hgxs = Y_h − 0.5 × D_height

X_hgxs and Y_hgxs are the horizontal and vertical coordinates of the lower-left corner of the pre-segmented image.

2) Lower-right corner P_hgyx = (X_hgyx, Y_hgyx), where

X_hgyx = X_h + 0.5 × D_width
Y_hgyx = Y_h − 0.5 × D_height

X_hgyx and Y_hgyx are the horizontal and vertical coordinates of the lower-right corner of the pre-segmented image.

3) Upper-right corner P_hgys = (X_hgys, Y_hgys), where

X_hgys = X_h + 0.5 × D_width
Y_hgys = Y_h + 0.5 × D_height

X_hgys and Y_hgys are the horizontal and vertical coordinates of the upper-right corner of the pre-segmented image.

4) Upper-left corner P_hgzs = (X_hgzs, Y_hgzs), where

X_hgzs = X_h − 0.5 × D_width
Y_hgzs = Y_h + 0.5 × D_height

X_hgzs and Y_hgzs are the horizontal and vertical coordinates of the upper-left corner of the pre-segmented image.
And S44, cutting the image according to the four corner coordinates corresponding to the pre-segmentation image obtained in the S43, wherein the obtained cut image is the pre-segmentation image.
The weight recognition module 204 is used to send the obtained pre-segmented image into the weight recognition model. The weight recognition model is a RESNET-50 network, a deep learning model that, after multi-task training and verification, outputs the body length and weight information of the animal body; combined with the species information of the animal body, a claim checking result is output. For example, different pig breeds and different body sizes and weights correspond to different payout prices, so the payout price can be determined from the breed information, body size and weight information of the animal body.
The specific steps of training, verifying and identifying the weight of the weight identification model are as follows:
S51, training stage:
A. A plurality of pre-segmented images are collected, each carrying annotation information that includes the body length and body weight of the animal; one part of the pre-segmented images is randomly split off for training and the other part for verification.
B. Normalization is performed. Specifically, a reference body length of 2 m is set and the annotated body length of each picture is divided by it; a reference weight of 300 kg is set and the annotated weight of each picture is divided by it. This yields the normalized annotation parameters.
C. The pre-segmented images are uniformly scaled to a reasonable size; in this embodiment they are scaled to 640×640.
D. During training, the network outputs two branches, a body-length recognition branch and a weight recognition branch, with the body-length branch serving as auxiliary input information for the weight branch; once training and verification reach a preset accuracy threshold, the model is ready for recognition.
S52, use stage:
E. The image obtained through steps S1 to S4 is scaled to 640×640.
G. The scaled image is input into the RESNET-50 network, which outputs the relative indexes L_0 and W_0 of body length and body weight.
H. The reference body length and reference weight parameters set by the system are read (2 m and 300 kg, as set in the training step), and the relative indexes L_0 and W_0 are multiplied by the corresponding reference body length and reference weight respectively to obtain the identified body length L and body weight W.
Fig. 6 is a schematic diagram of a hardware architecture of an embodiment of the electronic device according to the present invention. In the present embodiment, the electronic device 2 is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction. For example, the mobile device may be an easily portable mobile device such as a smart mobile device, a tablet computer, a notebook computer, and the like. As shown in fig. 6, the electronic device 2 at least includes a memory 21 and a processor 22, which are communicatively connected with each other through a line, and the monocular camera is connected with the processor, wherein: the memory 21 may be an internal storage unit of the electronic device 2, such as a hard disk or a memory of the electronic device 2. In other embodiments, the memory 21 may also be an external storage device of the electronic device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the electronic device 2.
Of course, the memory 21 may also comprise both an internal memory unit and an external memory device of the electronic device 2. In this embodiment, the memory 21 is generally used to store an operating system installed in the electronic device 2 and various types of application software, such as codes of an on-line claims check program of an animal based on a monocular camera. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments, and is used to run the program code stored in the memory 21 or to process data, for example to run the monocular camera-based animal body online claims checking program.
It is noted that fig. 6 only shows the electronic device 2 with the memory 21, the processor 22, but it is to be understood that not all shown components are required to be implemented, and more or less components may be implemented instead.
The memory 21 containing the readable storage medium may include an operating system, a monocular camera-based on-line claims program, and the like. The processor 22 implements the steps of S1 to S5 when executing the monocular camera based animal online claims program in the memory 21, which will not be described herein again.
Furthermore, the embodiment of the present invention also provides a computer-readable storage medium, which may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, and the like. The computer readable storage medium includes a monocular camera based on-line claims program and the like, and when executed by the processor 22, the monocular camera based on-line claims program implements the following operations:
s1, controlling the monocular camera to focus on an animal body and a calibration plate, wherein the animal body is placed on the ground, the calibration plate is placed below the abdomen of the animal body, an animal body region frame and a calibration plate region frame are displayed in a shooting picture of the monocular camera, and the animal body is placed in the animal body region frame;
s2, judging the image quality of the shot picture, including judging the scaling degree and judging the target animal body region;
s3, controlling a monocular camera to shoot images containing the animal body and a calibration board, inputting the images into a Cascade RCNN network model to identify the animal body region, and outputting a minimum animal body outsourcing rectangular region mask;
and S4, segmenting the pre-segmented image with a preset size by taking the animal body as the center, sending the pre-segmented image into the trained weight recognition model, outputting body length and weight information, and determining a claim check result by combining the category information obtained by facial image recognition of the animal body.
The embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiment of the monocular camera-based animal online claims checking method and the electronic device 2, and will not be described herein again.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An on-line animal body claim checking method based on a monocular camera is characterized by comprising the following steps:
controlling the monocular camera to focus an animal body and a calibration plate, wherein the animal body is placed on the ground, the calibration plate is placed below the abdomen of the animal body, an animal body region frame and a calibration plate region frame are displayed in a shooting picture of the monocular camera, and the animal body is placed in the animal body region frame;
judging whether the minimum animal body outsourcing rectangular area of the current frame and the IOU of the animal body area frame displayed in the shooting picture are larger than a preset intersection threshold value or not, if so, continuing to execute the operation, otherwise, prompting refocusing;
controlling a monocular camera to shoot an image containing an animal body and a calibration plate, inputting the image into a Cascade RCNN network model to identify the animal body region, and outputting a minimum animal body outer rectangular region mask;
and (3) segmenting a pre-segmented image with a preset size by taking the animal body as a center, sending the pre-segmented image into a trained weight recognition model, outputting body length and weight information, and determining a claim checking result by combining category information obtained by facial image recognition of the animal body.
2. The monocular camera-based animal body online claims checking method according to claim 1, wherein before determining whether the animal body minimum outsourcing rectangular region of the current frame and the IOU of the animal body region frame displayed in the shot picture are greater than a preset intersection threshold, a shot picture pre-determination is performed, including determination of the resolution of the picture, the degree of blurring of the picture, and whether the animal body in the animal body region frame is complete,
and the blurring degree of the picture is detected with a Laplacian operator: each pixel of the picture is convolved with the Laplacian operator and the variance of the output is calculated, and the picture is regarded as blurred when the variances over 2 continuous seconds are all smaller than a blurring threshold.
3. The monocular camera-based animal body online claims checking method according to claim 1, further comprising, before shooting, finding the corner points of the calibration plate with the findChessboardCorners function of OpenCV, obtaining the calibration plate area to calculate the scale I, and prompting refocusing when the scale I is smaller than a preset scale threshold, wherein the scale I is given by:

I = area(S_b1 ∩ S_b2) / area(S_b1 ∪ S_b2)

wherein S_b1 represents the calibration plate area obtained for the current frame, and S_b2 represents the area of the calibration plate region frame preset in the picture.
4. The monocular camera-based animal body online claims method of claim 1,
the calculation formula of the IOU between the minimum enclosing rectangular region of the animal body and the animal body region frame preset in the shot picture is:

IOU = area(S_h1 ∩ S_h2) / area(S_h1 ∪ S_h2)

wherein S_h1 is the minimum enclosing rectangular region of the animal body in the current frame; S_h2 is the animal body region frame preset in the picture; and the IOU is the ratio of the area of the intersection region of S_h1 and S_h2 to the area of their union region.
5. The monocular camera-based animal body online claims checking method according to claim 1, wherein segmenting the pre-segmented image centered on the animal body comprises obtaining the size D_width, D_height of the pre-segmented image according to the scale I, wherein:

D_width = I × D_presetWidth
D_height = I × D_presetHeight

wherein D_presetWidth, D_presetHeight are the dimensions of the pre-segmented image when the scale I equals 1.
6. The monocular camera-based animal body online claims verification method of claim 1, wherein the weight recognition model is a ResNet-50 network, and the method for recognizing the weight by using the weight recognition model comprises:

collecting a plurality of pre-segmented images labeled with the body length and body weight of animal bodies, setting a reference body length and a reference body weight, dividing the body length labeled on each animal body image by the reference body length to obtain a normalized labeled body length, and dividing the body weight labeled on each animal body image by the reference body weight to obtain a normalized labeled body weight;

taking one part of the pre-segmented images as training images and another part as verification images;

inputting the training images into the ResNet-50 network for training, wherein the ResNet-50 network outputs a body length recognition branch and a body weight recognition branch, and inputting the verification images into the ResNet-50 network until the output reaches a preset accuracy threshold; and

inputting the pre-segmented image into the verified ResNet-50 network, outputting relative indexes L_0 and W_0 of the body length and the body weight, and multiplying L_0 and W_0 by the reference body length and the reference body weight respectively to obtain the recognized body length L and body weight W.
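The label normalization and output denormalization of claim 6 form an exact round trip; the reference values below are illustrative placeholders, not figures from the patent.

```python
# Sketch of the claim-6 normalization round trip; the reference values are
# illustrative placeholders, not taken from the patent.
REF_LENGTH = 100.0   # reference body length (assumed unit: cm)
REF_WEIGHT = 100.0   # reference body weight (assumed unit: kg)

def normalize_labels(length, weight):
    """Divide annotations by the references before training."""
    return length / REF_LENGTH, weight / REF_WEIGHT

def denormalize_outputs(l_0, w_0):
    """Multiply the network's relative indexes L_0, W_0 back to physical units."""
    return l_0 * REF_LENGTH, w_0 * REF_WEIGHT
```

Training on relative indexes keeps both regression branches on a comparable numeric scale regardless of the units chosen for length and weight.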
7. The monocular camera-based animal body online claims verification method of claim 1, wherein, before controlling the monocular camera to focus on the animal body and the calibration plate, an AI (artificial intelligence) interface provided by the monocular camera supplier is called to acquire the geographic position, camera parameters and IMU (inertial measurement unit) parameters; three-dimensional scene reconstruction is performed according to the multi-angle pictures shot by the applicant, the camera parameters and the IMU parameters to acquire the actual size of the calibration plate; the shooting distance is acquired from a displacement sensor of the mobile phone; and whether the calibration plate is the designated calibration plate is judged according to the acquired shooting distance and the actual size of the calibration plate.
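The final judgment in claim 7 amounts to comparing the reconstructed plate size with the size of the designated plate; the tolerance and function name below are assumptions for illustration only.

```python
# Sketch of the claim-7 plate check; the 5% tolerance and function name are
# illustrative assumptions, not specified by the patent.
def is_designated_plate(measured_size, specified_size, tolerance=0.05):
    """True when the reconstructed plate size matches the designated plate.

    measured_size / specified_size are (width, height) pairs in meters.
    """
    w, h = measured_size
    sw, sh = specified_size
    return abs(w - sw) <= tolerance * sw and abs(h - sh) <= tolerance * sh
```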
8. An online claims verification device for an animal body based on a monocular camera, comprising:

a focusing module for controlling the monocular camera to focus on the animal body and a calibration plate, wherein the animal body lies on its side on the ground, the calibration plate is placed below the abdomen of the animal body, an animal body region frame and a calibration plate region frame are displayed in the shooting picture of the monocular camera, and the animal body is placed in the animal body region frame;

a shooting compliance judging module for judging whether the IOU of the minimum bounding rectangle region of the animal body in the current frame and the animal body region frame displayed in the shooting picture is larger than a preset intersection threshold; if so, execution continues, otherwise refocusing is prompted;

an animal body segmentation module for controlling the monocular camera to shoot an image containing the animal body and the calibration plate, inputting the image into a Cascade RCNN network model to identify the animal body region, and outputting a mask of the minimum bounding rectangle region of the animal body; and

a weight recognition module for segmenting a pre-segmented image of a preset size centered on the animal body, sending the pre-segmented image into the trained weight recognition model, outputting body length and body weight information, and determining a claims verification result in combination with the category information obtained by facial image recognition of the animal body.
9. An electronic device, characterized in that the electronic device comprises:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the monocular camera-based animal body online claims verification method of any one of claims 1 to 7.

10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the monocular camera-based animal body online claims verification method of any one of claims 1 to 7.
CN202010879333.1A 2020-08-27 2020-08-27 Animal on-line core claim method, device and storage medium based on monocular camera Active CN111985477B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010879333.1A CN111985477B (en) 2020-08-27 Animal on-line core claim method, device and storage medium based on monocular camera
PCT/CN2020/136401 WO2021139494A1 (en) 2020-08-27 2020-12-15 Animal body online claim settlement method and apparatus based on monocular camera, and storage medium


Publications (2)

Publication Number Publication Date
CN111985477A true CN111985477A (en) 2020-11-24
CN111985477B CN111985477B (en) 2024-06-28



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361345A (en) * 2016-11-29 2017-02-01 公安部第三研究所 System and method for measuring height of human body in video image based on camera calibration
CN108871520A (en) * 2018-07-06 2018-11-23 平安科技(深圳)有限公司 Livestock body weight measurement and device
CN108921026A (en) * 2018-06-01 2018-11-30 平安科技(深圳)有限公司 Recognition methods, device, computer equipment and the storage medium of animal identification
CN108921057A (en) * 2018-06-19 2018-11-30 厦门大学 Prawn method for measuring shape of palaemon, medium, terminal device and device based on convolutional neural networks
CN109165645A (en) * 2018-08-01 2019-01-08 腾讯科技(深圳)有限公司 A kind of image processing method, device and relevant device


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021139494A1 (en) * 2020-08-27 2021-07-15 平安科技(深圳)有限公司 Animal body online claim settlement method and apparatus based on monocular camera, and storage medium
CN112749760A (en) * 2021-01-22 2021-05-04 淮阴师范学院 Waterfowl image recognition feature fusion model system and method based on deep convolutional network
CN112836904A (en) * 2021-04-07 2021-05-25 复旦大学附属中山医院 Body quality index prediction method based on face characteristic points
CN114399785A (en) * 2021-10-29 2022-04-26 平安科技(深圳)有限公司 Human height identification method and device, computer equipment and storage medium
CN114399785B (en) * 2021-10-29 2023-02-21 平安科技(深圳)有限公司 Human height identification method and device, computer equipment and storage medium
CN116229518A (en) * 2023-03-17 2023-06-06 百鸟数据科技(北京)有限责任公司 Bird species observation method and system based on machine learning
CN116229518B (en) * 2023-03-17 2024-01-16 百鸟数据科技(北京)有限责任公司 Bird species observation method and system based on machine learning
CN116416260A (en) * 2023-05-19 2023-07-11 四川智迅车联科技有限公司 Weighing precision optimization method and system based on image processing
CN116416260B (en) * 2023-05-19 2024-01-26 四川智迅车联科技有限公司 Weighing precision optimization method and system based on image processing

Also Published As

Publication number Publication date
WO2021139494A1 (en) 2021-07-15

Similar Documents

Publication Publication Date Title
CN110569878B (en) Photograph background similarity clustering method based on convolutional neural network and computer
CN112633144A (en) Face occlusion detection method, system, device and storage medium
CN109165645B (en) Image processing method and device and related equipment
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
KR101165415B1 (en) Method for recognizing human face and recognizing apparatus
CN112017231B (en) Monocular camera-based human body weight identification method, monocular camera-based human body weight identification device and storage medium
CN111259889A (en) Image text recognition method and device, computer equipment and computer storage medium
JP2018060296A (en) Image processing apparatus, image processing system, and image processing method
CN114758249B (en) Target object monitoring method, device, equipment and medium based on field night environment
CN111860652B (en) Method, device, equipment and medium for measuring animal body weight based on image detection
CN111144372A (en) Vehicle detection method, device, computer equipment and storage medium
CN111160169A (en) Face detection method, device, equipment and computer readable storage medium
CN111160395A (en) Image recognition method and device, electronic equipment and storage medium
CN110766650A (en) Biological detection early warning method, system, device, computer equipment and storage medium
CN110796709A (en) Method and device for acquiring size of frame number, computer equipment and storage medium
CN111738988A (en) Face depth image generation method and device, electronic equipment and storage medium
WO2021139494A1 (en) Animal body online claim settlement method and apparatus based on monocular camera, and storage medium
CN112329845B (en) Method and device for changing paper money, terminal equipment and computer readable storage medium
CN110210314B (en) Face detection method, device, computer equipment and storage medium
CN116563040A (en) Farm risk exploration method, device, equipment and storage medium based on livestock identification
CN111985477B (en) Animal on-line core claim method, device and storage medium based on monocular camera
CN115170471A (en) Part identification method and device based on image identification model
CN114168772A (en) Tablet identification method, readable storage medium, and electronic device
CN115222621A (en) Image correction method, electronic device, storage medium, and computer program product
KR20230104969A (en) System and method for nose-based companion animal identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant