CN115135973A - Weight estimation device and program - Google Patents

Weight estimation device and program

Info

Publication number
CN115135973A
CN115135973A
Authority
CN
China
Prior art keywords
image
animal
weight
information
estimation
Prior art date
Legal status
Granted
Application number
CN202180013781.6A
Other languages
Chinese (zh)
Other versions
CN115135973B (en)
Inventor
川末纪功仁
Current Assignee
University of Miyazaki NUC
Original Assignee
University of Miyazaki NUC
Priority date
Filing date
Publication date
Application filed by University of Miyazaki NUC filed Critical University of Miyazaki NUC
Publication of CN115135973A publication Critical patent/CN115135973A/en
Application granted granted Critical
Publication of CN115135973B publication Critical patent/CN115135973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01F MEASURING VOLUME, VOLUME FLOW, MASS FLOW OR LIQUID LEVEL; METERING BY VOLUME
    • G01F17/00 Methods or apparatus for determining the capacity of containers or cavities, or the volume of solid bodies
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/22 Measuring arrangements characterised by the use of optical techniques for measuring depth
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01G WEIGHING
    • G01G17/00 Apparatus for or methods of weighing material of special form or property
    • G01G17/08 Apparatus for or methods of weighing material of special form or property for weighing livestock
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention provides a weight estimation device that improves the degree of freedom of the imaging direction from which a weight related to an animal can be estimated. The device includes: an image acquisition unit that acquires an image of an animal; a shape specifying unit that specifies the shape of a predetermined portion of the animal based on the image; an information generation unit that generates estimation information for estimating the weight of the animal based on the shape of the predetermined portion; and a weight estimation unit that estimates the weight based on the estimation information. The information generation unit can generate the estimation information both when a first image (animal image GA) obtained by imaging the animal from a first direction (for example, from the left half-body side) is acquired and when a second image obtained by imaging the animal from a second direction (for example, from the right half-body side) different from the first direction is acquired.

Description

Weight estimation device and program
Technical Field
The present invention relates to a weight estimation device and a program.
Background
Conventionally, the body weight of animals such as livestock has been measured with a weighing machine. However, when the animal does not stay still on the scale, the weight cannot be measured accurately. To address this problem, Patent Document 1 describes a configuration in which an animal is photographed from a predetermined imaging direction and the weight of the animal is measured (estimated) from the captured image. With this configuration, the animal does not need to be kept at rest on a scale, so the above problem is mitigated.
Documents of the prior art
Patent document
Patent document 1: Japanese Patent Laid-Open Publication No. 2014-44078
Disclosure of Invention
Problems to be solved by the invention
However, the configuration of Patent Document 1 offers little freedom in the imaging direction from which the body weight of the animal can be estimated. Specifically, in the configuration of Patent Document 1, the weight of the animal can be estimated only when the animal is photographed from a first direction (directly above); it cannot be estimated when the animal is photographed from a second direction different from the first direction. In view of the above, an object of the present invention is to improve the degree of freedom of the imaging direction from which a weight related to an animal can be estimated.
Means for solving the problems
In order to solve the above problem, a weight estimation device according to the present invention includes: an image acquisition unit that acquires an image of an animal; a shape specifying unit that specifies the shape of a predetermined portion of the animal based on the image; an information generation unit that generates estimation information for estimating the weight of the animal based on the shape of the predetermined portion; and a weight estimation unit that estimates the weight based on the estimation information. The information generation unit can generate the estimation information both when a first image obtained by imaging the animal from a first direction is acquired and when a second image obtained by imaging the animal from a second direction different from the first direction is acquired.
According to the above configuration, the weight of the animal can be estimated both when a first image obtained by imaging the animal from a first direction is acquired and when a second image obtained by imaging the animal from a second direction different from the first direction is acquired. Therefore, compared with the configuration of Patent Document 1, the degree of freedom of the imaging direction from which the weight of the animal can be estimated is improved.
Effects of the invention
According to the present invention, the degree of freedom in the imaging direction in which the weight of the animal can be estimated is improved.
Drawings
Fig. 1 is a hardware configuration diagram of the weight estimation device.
Fig. 2 is a functional block diagram of the weight estimation device.
Fig. 3 is a diagram for explaining an animal image.
Fig. 4 is a diagram for explaining a configuration for specifying the back curve.
Fig. 5 is a diagram for explaining a configuration for generating the whole image.
Fig. 6 is a diagram for explaining a configuration for estimating the weight.
Fig. 7 is a diagram for explaining a screen displayed on the display unit.
Fig. 8 is a flowchart of the weight estimation control process.
Detailed Description
< first embodiment >
Fig. 1 is a hardware configuration diagram of the weight estimation device 1. As shown in fig. 1, the weight estimation device 1 includes a computer 10, a head-mounted display 20, and a depth (depth) camera 30. The above structures are communicably connected.
The computer 10 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, and an HDD (Hard Disk Drive) 14. The computer 10 of the present embodiment is a portable computer (e.g., a notebook computer). However, a desktop computer may also be employed as the computer 10.
The HDD14 of the computer 10 stores various data including the weight estimation program PG. The CPU11 executes the weight estimation program PG to realize various functions (the weight estimation unit 108 and the like) described later. The RAM13 temporarily stores various information referred to when the CPU11 executes programs, for example. In addition, the ROM12 stores various information in a nonvolatile manner. The weight estimation program PG may be stored in a location other than the HDD14.
The head-mounted display 20 can be fixed to the head of the user, and a known head-mounted display can be suitably used. For example, as the head mounted display 20, a head mounted display including a small liquid crystal screen and a half mirror can be employed. The above small liquid crystal display can display various images, and the image displayed on the small liquid crystal display is reflected by the half mirror and visually recognized by a user. In the above configuration, when the user views the scene through the half mirror, the image displayed on the small liquid crystal screen is visually recognized as being superimposed on the scene. However, the head-mounted display 20 is not limited to the above example.
The depth camera 30 generates a distance image (three-dimensional image) including depth information indicating the distance to a subject. For example, a point cloud image captured by LIDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging) technology is assumed as the distance image. In addition, the depth camera 30 is provided with a tilt sensor, which detects the magnitude of the tilt of the imaging direction with respect to the vertical direction.
As shown in fig. 1, the depth camera 30 is fixed on the head mounted display 20. Therefore, when the user wears the head mounted display 20 on the head, the depth camera 30 is fixed at a specific position from the user's view. Specifically, the depth camera 30 is fixed at a position where the shooting direction substantially coincides with the line-of-sight direction of the user.
In the above configuration, the scene visually recognized by the user is captured by the depth camera 30. Therefore, the user can photograph an animal by moving the line-of-sight direction (the direction of the face) so as to bring the animal into his or her own field of view. A configuration in which the user holds the depth camera 30 by hand to photograph the animal may also be adopted; in that configuration, however, the user cannot use both hands freely. The configuration of the present embodiment has the advantage that both hands remain free.
The image captured by the depth camera 30 is displayed on the head-mounted display 20 (the small liquid crystal screen) in real time, so the user can check the captured image in real time. However, a configuration in which the image captured by the depth camera 30 is not displayed in real time may also be adopted.
The image of the animal taken by the depth camera 30 is input to the computer 10. The computer 10 estimates an animal-related weight (for example, a weight of a carcass) from the input image by executing the weight estimation program PG. The above functions are described in detail below.
Fig. 2 is a functional block diagram of the weight estimation device 100. The weight estimation device 100 includes an image capturing unit 101, a display unit 102, an image acquisition unit 103, a shape specifying unit 104, a half-body selection unit 105, an information generation unit 106, a carcass model storage unit 107, and a weight estimation unit 108. These functions are realized by the CPU11 executing the weight estimation program PG.
The image capturing unit 101 can capture an image of an animal. Specifically, the image capturing unit 101 is fixed at a specific position as viewed from the user, and can capture an animal positioned in the line of sight of the user. For example, the depth camera 30 functions as the image capturing unit 101. The display unit 102 can display various images including the image captured by the image capturing unit 101 (see fig. 7(a) described later). For example, the head-mounted display 20 functions as the display unit 102.
The image acquiring unit 103 acquires an image of an animal (see fig. 3(b) described later). Specifically, the image captured by the image capturing unit 101 includes an image of a background or the like in addition to an image of an animal. The image acquisition unit 103 extracts an image representing one animal from the image captured by the image capturing unit 101. The image of the animal acquired by the image acquisition unit 103 is used to estimate the weight of the animal.
The image acquisition unit 103 of the present embodiment acquires the image of the animal from the image captured by the image capturing unit 101 using a region growing method. Specifically, the image acquisition unit 103 specifies one pixel in the captured image as a seed pixel. From among the images (objects) contained in the captured image, the image acquisition unit 103 then acquires the one whose constituent pixels include the seed pixel. With this configuration, when any pixel constituting the image of the animal is specified as the seed pixel, the image of the animal is extracted and acquired from the image captured by the image capturing unit 101.
More specifically, when the seed pixel is specified, the image acquisition unit 103 assigns a predetermined label to it. The image acquisition unit 103 then assigns the same label to each pixel in the neighborhood of the seed pixel that satisfies a predetermined condition. When the condition is satisfied, the label is further assigned to pixels neighboring the pixels already labeled. This process is repeated until no new pixel is labeled. The image acquisition unit 103 acquires the image composed of the pixels sharing the label as the image of the animal. The method of specifying the seed pixel is described in detail with reference to fig. 7(a) to 7(c) below.
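The labeling procedure just described can be sketched as follows. This is a minimal illustration only, assuming a 2-D depth image and a simple depth-similarity condition between 4-connected neighbors; the patent does not specify the actual labeling condition, and the function name is hypothetical:

```python
import numpy as np
from collections import deque

def region_grow(depth, seed, tol=0.05):
    """Label all pixels connected to `seed` whose depth differs from an
    already-labeled neighbor by less than `tol` (an assumed condition)."""
    h, w = depth.shape
    label = np.zeros((h, w), dtype=bool)
    label[seed] = True
    queue = deque([seed])
    while queue:  # repeat until no new pixel gains the label
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not label[ny, nx]:
                if abs(depth[ny, nx] - depth[y, x]) < tol:
                    label[ny, nx] = True
                    queue.append((ny, nx))
    return label
```

Seeding any pixel on the animal then yields the mask of the whole connected animal region, which corresponds to extracting the animal image from the captured image.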
Hereinafter, for the sake of explanation, the image of the animal acquired by the image acquiring unit 103 may be referred to as an "animal image". In addition, the magnitude of the inclination of the imaging direction with respect to the vertical direction detected by the inclination sensor may be referred to as "inclination information". The weight estimation device 100 stores inclination information at the time of capturing the animal image in association with the animal image. The above tilt information is used when adjusting (correcting) the orientation of the animal image.
The shape specifying unit 104 specifies the shape of a predetermined portion of the animal based on the animal image. In the present embodiment, the shape specifying unit 104 specifies the shape of the animal's back as the predetermined portion. Hereinafter, the shape of the animal's back is simply referred to as the "back curve". A specific method of specifying the back curve is described in detail with reference to fig. 4(a) and 4(b) below.
The half-body selection unit 105 selects, as the specific half body, one of the right half body (located on the right side of the back as viewed from the animal) and the left half body (located on the left side of the back). As described later, the entire animal may not be captured depending on the imaging direction; in that case, an animal image in which part of the body is missing is generated (see fig. 3(b)). For example, when the animal is photographed from the right half-body side, an animal image is generated in which part (or all) of the left half body is missing. The half-body selection unit 105 selects, from the two half bodies, the one captured over the wider range as the specific half body.
The information generation unit 106 generates estimation information for estimating the weight of the animal based on the back curve (the shape of the predetermined portion). The estimation information of the present embodiment includes the whole image GW shown in fig. 5(d) described later. The whole image GW is an image representing the whole animal, and is generated (estimated) from the animal image (in which part of the animal is missing) acquired by the image acquisition unit 103. Specifically, an image representing the specific half body (a half-body image, described later; see fig. 5(c)) is generated from the animal image, and the whole image GW is generated from that image.
The carcass model storage unit 107 stores a carcass model image GM (see fig. 6(a) described later). Like the whole image GW, the carcass model image GM is an image representing the entire animal. However, the carcass model image GM is an image of an animal from which the parts not included in the carcass (organs and the like) have been removed. The carcass model image GM is obtained by, for example, CT (Computed Tomography) imaging of an animal of standard body shape.
The weight estimation unit 108 estimates the weight of the animal based on the whole image GW (the estimation information). Specifically, the average density (kg/m³) of an animal carcass is stored in the weight estimation device 100 in advance. For example, by repeating experiments that actually measure the average density of animal carcasses, the stored average density can be determined from the measured values obtained in the experiments — for example, as the average of those measured values.
In addition, the carcass model image GM is fitted (enlarged or reduced) so that its outer edge coincides with the outer edge of the whole image GW. The weight estimation unit 108 estimates the product of the volume of the fitted carcass model image GM and the average carcass density as the weight related to the animal.
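The computation above (fitted model volume × average density) can be sketched as follows. The function name, the density value, and the use of a single linear scale factor for the fitting are illustrative assumptions, not values from the patent; only the cubic scaling of volume with a linear factor is a general geometric fact:

```python
def estimate_carcass_weight(model_volume_m3, scale_factor, density_kg_per_m3=1050.0):
    """Carcass weight = (volume of the fitted carcass model) x (average density).

    `scale_factor` is the linear factor by which the carcass model was
    enlarged/reduced to match the outer edge of the whole image GW;
    volume scales with its cube. The density value is illustrative only.
    """
    fitted_volume = model_volume_m3 * scale_factor ** 3
    return fitted_volume * density_kg_per_m3
```

For example, a 0.05 m³ model fitted at scale 1.0 gives 52.5 kg under the assumed density, and doubling the linear scale multiplies the estimate by eight.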
As described above, in the present embodiment, the "weight of the carcass of the animal" is estimated as the "weight related to the animal" of the present invention. However, a weight other than the carcass weight may be estimated as the "weight related to the animal". For example, a configuration is conceivable in which the "body weight of the animal including the viscera (live body weight)" is estimated as the "weight related to the animal".
Specifically, it is known that the weight of an animal (e.g., a pig) can be determined from a body-weight formula (see, for example, Japanese Patent Application Laid-Open No. 2019-45478). The body-weight formula expresses the experimentally obtained relationship between body weight, body length, and chest girth. Since the body length and chest girth of the animal can be specified from the whole image GW, the weight estimation unit 108 can calculate (estimate) the live body weight of the animal from the body-weight formula using the body length and chest girth specified from the whole image GW. Both the live body weight and the carcass weight of the animal may also be estimated.
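A body-weight formula of this kind could look like the sketch below. The functional form (weight proportional to chest girth squared times body length) is a commonly used approximation for pigs, but the constant `k` here is purely illustrative — the actual coefficients are the experimentally fitted ones from the cited reference, which are not reproduced in this document:

```python
def live_weight(body_length_m, chest_girth_m, k=87.5):
    """Live body weight from body length and chest girth.

    Assumes the common form  weight = k * girth^2 * length;
    the constant k is a placeholder, not the fitted coefficient
    from the cited reference.
    """
    return k * chest_girth_m ** 2 * body_length_m
```

With the placeholder constant, a pig of 1.2 m body length and 1.0 m chest girth would come out at 105 kg.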
A specific example of the operation of the weight estimation device 100 is described below with reference to fig. 3(a to c), fig. 4(a, b), fig. 5(a to d), fig. 6(a to c, d-1 to d-3), and fig. 7(a to c). In the specific examples below, the animal A whose weight is estimated is a pig, but the target animal is not limited to pigs. For example, the weight of an animal such as a cow or a dolphin may also be estimated.
Fig. 3(a) is a diagram for explaining a specific example of the imaging direction Sc. In fig. 3(a), the vertical direction Sv is indicated by an arrow. The imaging direction Sc in the specific example of fig. 3(a) is a direction intersecting the vertical direction Sv. Specifically, this example assumes that the animal A is photographed by the image capturing unit 101 (the depth camera 30) from the upper left as viewed from the animal A.
Fig. 3(b) is a diagram for explaining a specific example of the animal image GA. As shown in fig. 3(b), the animal image GA is a three-dimensional image displayed in an XYZ space. The animal image GA is obtained by performing curved-surface approximation processing a plurality of times on the image captured by the image capturing unit 101.
Based on the tilt information at the time of imaging, the animal image GA is rotated so that the vertical direction in real space coincides with the Z-axis direction in the XYZ space. The animal image GA is also rotated so that its longitudinal direction coincides with the Y-axis direction — specifically, so that the head of the animal faces the positive direction of the Y axis. Because the animal image GA is a point cloud image (point cloud data), the orientation of the animal's head in the animal image GA can be specified using, for example, principal component analysis. As a configuration for adjusting the orientation of the animal image GA, the configuration described in Japanese Patent Application Laid-Open No. 2014-44078 can be adopted, for example.
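The long-axis alignment step can be sketched with principal component analysis as follows. This is a minimal illustration assuming an (N, 3) NumPy point cloud; it aligns the dominant horizontal axis with Y but does not resolve which end is the head (which the patent leaves to further analysis), and the function name is hypothetical:

```python
import numpy as np

def align_long_axis_to_y(points):
    """Rotate a point cloud about the Z axis so that the direction of
    largest horizontal extent (first principal component of the XY
    coordinates) lies along the Y axis."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered[:, :2].T)           # covariance of the XY footprint
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]    # major principal axis (mx, my)
    angle = np.arctan2(major[0], major[1])    # angle from +Y toward +X
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return centered @ rot.T
```

After this rotation the body axis runs along Y; deciding head versus tail (the sign of the axis) would still require an additional cue.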
However, depending on the imaging direction, the entire animal may not be captured. In that case, the animal image GA represents an animal with a part of the body missing. For example, the specific example of fig. 3(b) assumes an animal image GA taken when the animal A is photographed from the upper left as viewed from the animal A (as in the specific example of fig. 3(a)). This animal image GA represents an animal whose right half body is missing on its lower side. Specifically, as shown in fig. 3(b), the animal image GA represents an animal that is missing on the far side of the boundary portion L as viewed from the imaging direction Sc.
Meanwhile, the carcass model image GM is an image representing the entire animal (excluding organs and the like; see fig. 6(a) described later). Assume a configuration in which the carcass weight is estimated by matching the outer edge of the carcass model image GM directly with the outer edge of the animal image GA (hereinafter, "comparative example X"). In comparative example X, when part of the animal's body is missing from the animal image GA, the outer edges of the two images cannot be aligned accurately, and it is difficult to estimate the carcass weight with high accuracy.
In view of the above, the weight estimation device 100 of the present embodiment is configured to be able to generate (estimate) the whole image GW from the animal image GA. Since the whole image GW represents the entire animal, estimating the carcass weight by matching the outer edge of the carcass model image GM with the outer edge of the whole image GW suppresses the above problem compared with comparative example X. This configuration is described in detail below.
Fig. 4(a) and 4(b) are diagrams for explaining the configuration for specifying the back curve S (the shape specifying unit 104). Fig. 4(a) is the same view as fig. 3(c): the animal image GA observed from the Z-axis direction. Fig. 4(a) shows a plurality of vertices P (including Pn) constituting the back curve S. Specifying the coordinates of each vertex P substantially specifies the back curve S.
Fig. 4(b) shows a cross section of the animal image GA parallel to the Y and Z axes (parallel to the YZ plane). Specifically, the cross section of fig. 4(b) is obtained by cutting the animal image GA at the position where the X coordinate is the value n (see also fig. 4(a)). Of this cross section, fig. 4(b) extracts the part (outer edge) of the animal image GA that represents the surface of the animal's back.
Each vertex P constituting the back curve S of the animal is generally the point with the largest Z coordinate (the top of the cross section) in a cross section of the animal image GA parallel to the YZ plane. Therefore, the weight estimation device 100 specifies the top of each cross section of the animal image GA as a vertex P constituting the back curve S.
For example, in the specific example of fig. 4(b), the coordinates of the top of the cross section parallel to the YZ plane at X = n are (n, m, Zmax). In this case, the coordinates (n, m, Zmax) are specified as the coordinates of the vertex P. The weight estimation device 100 likewise specifies the top of the cross section parallel to the YZ plane as a vertex P for the other positions on the X axis (positions other than X = n). In this way, the back curve S is specified.
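The vertex extraction just described — take the highest-Z point of each cross section at a fixed X position — can be sketched as follows, assuming an (N, 3) point cloud binned into slabs along X (the binning granularity and function name are illustrative assumptions):

```python
import numpy as np

def back_curve(points, n_bins=50):
    """Approximate the back curve S: for each slab of X values, keep the
    point with the largest Z coordinate (the top of the animal's back)."""
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    vertices = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        slab = points[(x >= lo) & (x <= hi)]
        if len(slab):
            vertices.append(slab[np.argmax(slab[:, 2])])  # highest point in slab
    return np.array(vertices)
```

On a synthetic half-cylindrical "back" the extracted vertices trace the ridge line, i.e. the points of maximum height.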
Fig. 5(a) to 5(d) are diagrams for explaining the configuration for generating the whole image GW (the estimation information) from the back curve S (the information generation unit 106). As in fig. 3(c) and 4(a), the animal image GA is observed from the Z-axis direction.
Fig. 5(a) is a diagram for explaining a specific example of the animal image GA used for generating the whole image GW. The weight estimation device 100 generates the whole image GW by performing linearization processing, selection processing, excision processing, and generation processing on the animal image GA.
Hereinafter, a planar image perpendicular to the XY plane and perpendicular to the back curve S projected onto the XY plane is referred to as a "slice image Gc". The animal image GA is divided into approximately as many slice images Gc as there are vertices P. Of these, fig. 5(a) shows the slice images Gc1 to Gc4.
As shown in fig. 5(a), the slice image Gc1 is an image including the vertex P1 of the back curve S. Likewise, the slice image Gc2 includes the vertex P2, the slice image Gc3 includes the vertex P3, and the slice image Gc4 includes the vertex P4.
In the specific example of fig. 5(a), the back curve S of the animal image GA is not a straight line when viewed from the Z-axis direction. As is clear from fig. 5(a), the slice images Gc constituting the animal image GA therefore include slice images that are not parallel to the YZ plane. The weight estimation device 100 of the present embodiment performs the linearization processing on such an animal image GA.
Fig. 5(b) is a diagram for explaining a specific example of the animal image GA after the linearization processing; it assumes that the linearization processing is performed on the animal image GA of fig. 5(a). In the linearization processing, the positions and orientations of the slice images Gc are adjusted so that all the slice images Gc in the animal image GA are parallel to the YZ plane and the back curve S viewed from the Z-axis direction is parallel to the X-axis direction. Any processing that straightens the back curve S as viewed from the Z-axis direction may serve as the linearization processing; its specific content is not limited to the above example.
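The linearization step can be sketched as follows. This is a minimal illustration under stated assumptions: each slice has already been expressed in a local 2-D frame relative to its back-curve vertex (a detail the patent leaves open), and the straightened slices are laid out along a single axis (taken here as X) at the cumulative arc length of the back curve, so that the curve becomes a straight line:

```python
import numpy as np

def linearize(vertices, local_slices):
    """Straighten the back curve.

    vertices:     (K, 3) back-curve vertices in order along the body.
    local_slices: list of K arrays of shape (N_i, 2); each slice's points
                  in its own (y, z) frame relative to its vertex.
    Returns the straightened point cloud: slice i is placed at
    x = arc length of the curve up to vertex i.
    """
    seg = np.linalg.norm(np.diff(vertices, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    out = []
    for x, sl in zip(arc, local_slices):
        n = len(sl)
        out.append(np.column_stack([np.full(n, x), sl[:, 0], sl[:, 1]]))
    return np.vstack(out)
```

Using arc length preserves the spacing between slices, so the straightened body keeps roughly the same length as the bent one.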
After the linearization processing, the weight estimation device 100 (the half-body selection unit 105) executes the selection processing, in which one of the right half body and the left half body of the animal A is selected as the specific half body. Specifically, assume that the animal image GA is cut in the Z-axis direction along the back curve S into two images, one representing the right half body and one representing the left half body. In the selection processing, the half body represented by the larger of the two images is selected as the specific half body.
For example, assume that the selection processing is performed on the animal image GA shown in fig. 5(b). This animal image GA represents the entire left half body of the animal A. On the other hand, as shown in fig. 5(b), the part of the right half body on the right side of the boundary portion L as viewed from the animal A is not represented (is missing). When the selection processing is performed on this animal image GA, the left half body is therefore selected as the specific half body. After the selection processing, the weight estimation device 100 executes the excision processing on the animal image GA.
Fig. 5(c) is a diagram for explaining a specific example of the animal image GA after the excision processing. In the excision processing, the part representing the half body not selected as the specific half body is cut away from the animal image GA. Hereinafter, the animal image GA after the excision processing is referred to as the "half-body image Gax" to distinguish it from the animal image GA before the processing.
The specific example of fig. 5(c) shows the half-body image Gax obtained by performing the excision processing on the animal image GA of fig. 5(b); it represents the left half body of the animal A. The cross section produced by the excision processing in the half-body image Gax is referred to as the "cross section Lx". As shown in fig. 5(c), the cross section Lx is substantially parallel to the XZ plane, and its outer edge includes the entire back curve S. After the excision processing, the weight estimation device 100 executes the generation processing, which generates the whole image GW.
Fig. 5(d) is a diagram for explaining the entire image GW generated by the generation processing. In the specific example of fig. 5(d), it is assumed that the whole image GW is generated from the half-body image Gax shown in fig. 5 (c). That is, it is assumed that the whole body image GW is generated from the half body image Gax representing the left half body of the animal a.
In animals such as pigs, the left half body and the right half body are approximately mirror-symmetric. Therefore, an image that is plane-symmetric to the half-body image Gax representing the specific half body can be presumed to represent the half body opposite to the specific half body. Accordingly, when generating the whole body image GW from the half-body image Gax representing the specific half body of the animal a, the weight estimation device 100 generates an image that is plane-symmetric to the half-body image Gax (hereinafter referred to as the "half-body image Gay") as an image representing the half body opposite to the specific half body. The image obtained by combining the half-body image Gax and the half-body image Gay is stored as the whole image GW.
For example, the half-body image Gax shown in fig. 5(c) represents the left half body of the animal a. When the whole body image GW is generated from this half-body image Gax, a half-body image Gay representing the right half body of the animal a is generated. The half-body image Gay is plane-symmetric to the half-body image Gax with respect to the cross section Lx (a plane substantially parallel to the XZ plane) of the half-body image Gax. As shown in fig. 5(c), the half-body image Gay includes a cross section Ly, which is substantially equal to the cross section Lx of the half-body image Gax. The half-body image Gay is generated at a position where its cross section Ly substantially coincides with the cross section Lx of the half-body image Gax.
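The generation of the half-body image Gay amounts to a reflection across the cut plane. A minimal sketch (not the patent's implementation), assuming the half-body image is held as an (N, 3) point cloud in XYZ coordinates and the cross section Lx lies in the plane Y = y0:

```python
import numpy as np

def mirror_half_body(points, y0):
    """Reflect a half-body point cloud across the plane Y = y0.

    points: (N, 3) array of XYZ coordinates of the specific half body.
    y0: Y coordinate of the cut plane (cross section Lx), which is
        substantially parallel to the XZ plane.
    Returns the mirrored points (the presumed opposite half body).
    """
    mirrored = points.copy()
    mirrored[:, 1] = 2.0 * y0 - mirrored[:, 1]  # reflect Y about y0
    return mirrored

def whole_body(points, y0):
    """Combine the specific half body and its mirror into one cloud."""
    return np.vstack([points, mirror_half_body(points, y0)])
```

In this sketch, combining the original and mirrored clouds corresponds to storing Gax and Gay together as the whole image GW.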
Fig. 5(d) shows a specific example of the whole image GW when the left half body of the animal a is selected as the specific half body. As described above, when the animal image GA taken from the left half body side of the animal a is acquired, the left half body of the animal a is selected as the specific half body. On the other hand, in the present embodiment, when the animal image GA taken from the right half of the animal a is acquired, the right half of the animal a can be selected as the specific half.
When the right half body of the animal a is selected as the specific half body, a half-body image Gax representing the right half body of the animal a is generated. In that case, a half-body image Gay representing the left half body is generated from the half-body image Gax. That is, the left half body is estimated from the half-body image Gax representing the right half body of the animal a, and the whole image GW representing the entire animal a is generated.
As can be understood from the above description, according to the present embodiment, the entire image GW is generated even when an animal is photographed from a second direction (for example, right half) different from the first direction, in addition to the case when an animal is photographed from a first direction (for example, left half). As described above, the weight related to the animal is estimated from the overall image GW. That is, according to the present embodiment, even when an animal is photographed from any one of the first direction and the second direction, the weight of the animal can be estimated. In the above configuration, for example, compared to a configuration in which the weight of an animal can be estimated from only an image of the animal taken from a specific one direction, there is an advantage in that the degree of freedom in the image taking direction is improved.
Further, according to the linearization processing of the present embodiment, the spine curve can be linearized as viewed from the Z-axis direction regardless of whether the acquired image shows an animal whose spine curve has a first shape (a first posture) or an animal whose spine curve has a second shape (a second posture). That is, the whole image GW (information for estimation) can be generated, and the weight of the animal can be estimated, regardless of the posture of the animal. Therefore, compared to a configuration in which, for example, the weight can be estimated from an image of the animal in the first posture but not from an image of the animal in the second posture, there is an advantage that the degree of freedom of the posture in which the weight of the animal can be estimated is improved.
Fig. 6(a) to 6(c) and 6(d-1) to 6(d-3) are diagrams for explaining a specific example of a configuration (weight estimating unit 108) for calculating the weight of the carcass of the animal a. The weight estimation device 100 calculates (estimates) the weight of the carcass of the animal a by executing the weight estimation process.
Fig. 6(a) is a conceptual diagram of the carcass model image GM. As described above, the carcass model image GM is an image representing an entire animal. However, the carcass model image GM is an image of an animal from which the parts not included in the carcass (organs and the like) have been removed. Like the whole image GW, the carcass model image GM is an image (normalized image) of an animal whose spine curve is a straight line. As shown in fig. 6(a), the carcass model image GM of the present embodiment includes model images Gm1 to Gm7. Each model image Gm corresponds to one portion of the whole image GW.
Fig. 6(b) and 6(c) are diagrams for explaining the portions of the whole image GW corresponding to the model images Gm. In the present embodiment, as shown in fig. 6(c), the portion of the whole image GW corresponding to the model image Gm1 is referred to as the "partial image Gw1". Similarly, the portions of the whole image GW corresponding to the model images Gm2 to Gm7 are referred to as the "partial images Gw2" to "Gw7", respectively.
In the weight estimation processing, each model image Gm is fitted (enlarged or reduced) to the partial image Gw corresponding to it. Specifically, the model image Gm is fitted so that the outer edge of the model image Gm coincides with the outer edge of the partial image Gw. In the present embodiment, the carcass model image GM is composed of seven model images Gm, but it may be composed of more or fewer than seven model images Gm.
Fig. 6(d-1) to 6(d-3) are diagrams for explaining a specific example of fitting the carcass model image GM to the whole image GW. In this specific example, a case is assumed where, among the model images Gm, the model image Gm4 is fitted.
As shown in fig. 6(d-1), let the height of the partial image Gw4 in the Z-axis direction be Zw and its width in the Y-axis direction be Yw. As shown in fig. 6(d-2), the model image Gm4 has a height Zm in the Z-axis direction and a width Ym in the Y-axis direction. In this case, as shown in fig. 6(d-3), the weight estimation device 100 enlarges (or, as appropriate, reduces) the model image Gm4 so that its height in the Z-axis direction becomes Zw and its width in the Y-axis direction becomes Yw. Specifically, the weight estimation device 100 specifies (calculates) the longitudinal magnification "Zw/Zm" and the lateral magnification "Yw/Ym" by pattern matching the shape of the partial image Gw4 against the cross section of the carcass model image GM. The weight estimation device 100 changes the height of the model image Gm4 according to the longitudinal magnification "Zw/Zm" and the width of the model image Gm4 according to the lateral magnification "Yw/Ym".
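The scaling step can be illustrated as follows. This is an assumed sketch in which the model cross-section is a set of (y, z) outline points and the two magnifications are derived from bounding boxes, rather than the pattern matching the device actually performs:

```python
import numpy as np

def fit_model_section(model_pts, Zw, Yw):
    """Scale a model cross-section so its bounding box matches the
    partial image's height Zw (Z axis) and width Yw (Y axis).

    model_pts: (N, 2) array of (y, z) points on the model outline.
    Returns the scaled points and the lateral/longitudinal magnifications.
    """
    Ym = model_pts[:, 0].max() - model_pts[:, 0].min()  # model width
    Zm = model_pts[:, 1].max() - model_pts[:, 1].min()  # model height
    ky, kz = Yw / Ym, Zw / Zm  # lateral "Yw/Ym", longitudinal "Zw/Zm"
    scaled = model_pts * np.array([ky, kz])  # anisotropic scaling
    return scaled, ky, kz
```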
After the fitting of the carcass model image GM (all the model images Gm) is completed, the weight estimation device 100 calculates the volume of the fitted carcass model image GM. The weight estimation device 100 estimates the product of the volume of the fitted carcass model image GM and the average density of animal carcasses as the weight of the carcass of the animal. The weight estimation device 100 of the present embodiment displays the estimated carcass weight on the display unit 102 (head-mounted display 20).
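The final computation reduces to volume times stored average density. A sketch under the assumption that the fitted model's volume is approximated by stacking cross-sectional slices along the body axis (the slice areas, thickness, and density value are placeholders, not values from the patent):

```python
def model_volume(slice_areas, slice_thickness):
    """Approximate the fitted model's volume by summing cross-sectional
    slice areas (m^2) stacked along the body axis at a fixed spacing (m)."""
    return sum(slice_areas) * slice_thickness

def carcass_weight(slice_areas, slice_thickness, avg_density):
    """Carcass weight = fitted model volume x stored average carcass
    density (kg/m^3)."""
    return model_volume(slice_areas, slice_thickness) * avg_density
```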
Fig. 7(a) to 7(c) are diagrams for explaining various images displayed on the display unit 102. As described above, the display unit 102 of the present embodiment is a head-mounted display, so the user can view the actual scene and the image on the display unit 102 at the same time.
Fig. 7(a) is a schematic diagram of the screen M1 displayed before the weight estimation processing is executed. As described above, the image captured by the image capturing unit 101 is displayed on the display unit 102 in real time. The imaging direction of the image capturing unit 101 substantially coincides with the user's line-of-sight direction. In the specific example of fig. 7(a), it is assumed that the animal a is positioned below the center of the user's visual field (hereinafter referred to as the "visual field center"). In this case, as shown in fig. 7(a), the animal image GA is displayed below the center of the screen M1. The animal b is positioned on the upper right of the visual field center, and an animal image GB is displayed on the upper right of the center of the screen M1.
As described above, in the present embodiment, when the weight of the animal a is estimated, the animal image GA is extracted from the image captured by the image capturing unit 101 (hereinafter referred to as the "scene image"). Specifically, the animal image GA is specified in the scene image by the region expansion method and is thereby acquired. As noted above, when the animal image GA is specified by the region expansion method, a pixel included in the animal image GA must be designated as the seed pixel. Hereinafter, a method of specifying the seed pixel will be described in detail.
As shown in fig. 7(a), the screen M1 includes a point image GP. The point image GP is fixedly displayed on the screen M1. That is, although the scene image displayed on the screen M1 changes (moves) according to the imaging direction (the direction of the user's visual field), the position of the point image GP on the screen M1 does not move with the imaging direction.
In the above-described configuration, the pixel of the scene image (including the animal image GA) at which the point image GP is located is specified as the seed pixel. Therefore, for example, when estimating the weight of the animal a, the imaging direction (the user's line-of-sight direction) is changed so that the point image GP is positioned within the animal image GA. In the specific example of fig. 7(a), the user moves the line of sight downward so that the animal image GA moves in the direction of the arrow Y, whereby the weight of the animal a is estimated.
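The region expansion (region growing) step, seeded by the pixel under the point image GP, can be sketched roughly as follows. The depth-map layout, tolerance value, and 4-connectivity are assumptions for illustration, not details given in the patent:

```python
from collections import deque

def region_grow(depth, seed, tol=0.05):
    """Extract the connected region around a seed pixel in a depth map.

    depth: 2D list of depth values (the distance image).
    seed: (row, col) of the seed pixel inside the animal image.
    tol: maximum depth difference between 4-connected neighbours
         for them to be merged into the same region (assumed value).
    Returns the set of (row, col) pixels in the grown region.
    """
    rows, cols = len(depth), len(depth[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(depth[nr][nc] - depth[r][c]) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

Pixels whose depth differs sharply from their neighbours (the background, or another animal) are never reached from the seed, so the grown region approximates the animal image containing the point image GP.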
Fig. 7(b) is a schematic diagram showing the screen M2 during execution of the weight estimation processing. The screen M2 is displayed, for example, immediately after the animal image GA is specified by the region expansion method. As shown in fig. 7(b), the screen M2, like the screen M1, includes a scene image containing the animal image GA, and the point image GP.
In addition, when the animal image of the animal whose weight is to be estimated is specified, that animal image may be displayed in a manner different from the other animal images. For example, in the specific example of fig. 7(b), the outer edge of the animal image GA of the animal a, whose weight is estimated, is emphasized (thickened) relative to the outer edges of the other animal images. This configuration has the advantage that the user can easily identify the animal image used for weight estimation.
In the present embodiment, the weight estimation processing is executed in response to a photographing operation by the user. Specifically, when a photographing operation is performed on the weight estimation device 100, the animal image at which the point image GP is located at that time point is acquired, and the weight estimation processing is executed. However, the trigger for executing the weight estimation processing may be set as appropriate. For example, the weight estimation processing may be executed automatically when the point image GP comes to be located on the animal image GA.
Fig. 7(c) is a schematic diagram of the screen M3 displayed immediately after the weight estimation processing is completed. As shown in fig. 7(c), the screen M3, like the screens M1 and M2, includes the animal image GA. The screen M3 also includes the weight image Gn, which shows the carcass weight calculated in the weight estimation processing. In the specific example of fig. 7(c), it is assumed that "75 kg" is estimated as the carcass weight.
This configuration has the advantage that the user can immediately grasp the weight estimated by the weight estimation device 100. As shown in fig. 7(c), the weight image Gn is superimposed on the animal image used for weight estimation (GA in the example of fig. 7(c)). Compared to a configuration in which, for example, the weight image Gn is displayed at a position unrelated to the position of the animal image, this makes it easy to identify the animal whose weight was estimated (the animal image used in the weight estimation processing). However, the position at which the weight image Gn is displayed can be changed as appropriate.
As described above, in the present invention, the weight of the entire animal (live body weight) may be estimated instead. In that configuration, the live body weight is displayed in the weight image Gn. Further, when a configuration is adopted in which both the live body weight and the carcass weight of the animal are estimated, it is preferable that both be displayed in the weight image Gn.
As shown in fig. 7(c), when the weight estimation processing ends, the point image GP is no longer displayed. That is, while the weight image Gn is displayed, the point image GP is not displayed. This configuration prevents the problem that the point image GP overlaps the weight image Gn and makes the weight image Gn difficult to view. However, the point image GP may continue to be displayed after the weight estimation processing ends.
Fig. 8(a) is a flowchart of the weight estimation control processing executed by the weight estimation device 100. The weight estimation device 100 executes the weight estimation control processing, for example, at predetermined time intervals (interrupt cycles). However, the trigger for executing the weight estimation control processing can be changed as appropriate.
When the weight estimation control process is started, the weight estimation device 100 executes an image acquisition process (S101). In the image acquisition process, an animal image is acquired from a distance image (a scene image including the animal image) photographed according to a photographing operation. As a method for specifying an animal image from a distance image, for example, the above-described region expansion method is used. In the image acquisition process, the animal image is converted into actual coordinates (XYZ coordinates).
After the image acquisition processing is executed, the weight estimation device 100 executes the curved surface approximation processing (S102). The surface of an animal such as a pig is generally smooth. In view of this, in the curved surface approximation processing, the surface of the animal image acquired in step S101 is approximated (fitted) by a smooth curved surface. The details of the curved surface approximation processing will be described later with reference to fig. 8(b).
After the curved surface approximation processing is executed, the weight estimation device 100 executes the rotation correction processing (S103). In the rotation correction processing, the orientation of the animal image in the Z-axis direction is adjusted (rotated) using the tilt information, and the orientation of the animal image in the XY plane (horizontal plane) is adjusted using principal component analysis.
After the rotation correction processing is performed, the weight estimation device 100 performs the ridge specifying processing (S104). In the ridge specifying process, a ridge curve in the animal image is specified (see fig. 4 (a)). After the spine specifying processing is performed, the weight estimation device 100 performs the linearization processing (S105). In the linearization process, the animal image is adjusted (deformed) so that the ridge curve becomes a straight line when viewed from the Z-axis direction (see fig. 5 (b)).
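The linearization step can be pictured as shifting each cross section laterally by the spine's offset at that position along the body. A minimal sketch, assuming the animal image is a point cloud with X running along the body and a callable giving the specified spine curve's Y offset (both assumptions for illustration):

```python
import numpy as np

def straighten(points, spine_y):
    """Shift each cross section laterally so the spine becomes a
    straight line when viewed from the Z-axis direction.

    points: (N, 3) float array of XYZ points; X runs along the body.
    spine_y: callable x -> lateral (Y) offset of the spine curve at x.
    Returns the deformed point cloud.
    """
    out = points.copy()
    out[:, 1] -= np.array([spine_y(x) for x in out[:, 0]])
    return out
```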
After the linearization processing is executed, the weight estimation device 100 executes the selection processing (S106). In the selection processing, assuming that the animal image GA is cut into two images in the Z-axis direction along the spine curve, the half body represented by the larger of the two images is selected as the specific half body.
After the selection processing is executed, the weight estimation device 100 executes the cutting-out processing (S107). In the cutting-out processing, the portion representing the half body not selected as the specific half body is cut away from the animal image (see fig. 5(c)). After the cutting-out processing is executed, the weight estimation device 100 executes the generation processing (S108). In the generation processing, the whole image GW is generated from the animal image (half-body image) (see fig. 5(d)).
After the generation processing is executed, the weight estimation device 100 executes weight estimation processing (S109). In the weight estimation process, the carcass weight of the animal is estimated (calculated) from the whole image GW generated by the above generation process. Specifically, the carcass model image GM is fitted so that the outer edge of the entire image GW coincides with the outer edge of the carcass model image GM (see fig. 6(d-1) to 6(d-3)), and the volume of the fitted carcass model image GM is obtained. In addition, the product of the average density stored in advance and the volume of the carcass model image GM is calculated as the weight of the carcass.
After the weight estimation process is performed, the weight estimation device 100 causes the display unit 102 to display the weight image Gn (see fig. 7(c)) (S110). After displaying the weight image Gn, the weight estimation device 100 returns the process to step S101.
Fig. 8(b) is a flowchart of the curved surface approximation processing (S102 in fig. 8 (a)). When the curved surface approximation processing is started, the weight estimation device 100 executes the first approximation processing (S201). In the first approximation process, polynomial approximation function surface fitting using the least square method is performed using each point constituting the surface of the animal image (point group image) as a sample point. The method of surface approximation is not limited to polynomial approximation function surface fitting, and an appropriate method can be employed.
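A polynomial function surface fit by the least squares method, as used in the first approximation processing, can be sketched as follows. The quadric form z = f(x, y) and the use of numpy's `lstsq` are illustrative assumptions; the patent does not fix the polynomial order or solver:

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to the sample points of the animal's surface (point group image).

    points: (N, 3) array of surface sample points.
    Returns the six coefficients.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def surface_z(coeffs, x, y):
    """Evaluate the fitted approximate curved surface at (x, y)."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x * x + e * x * y + f * y * y
```

The second approximation processing would use the same scheme with a higher-order design matrix.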
However, when an animal whose weight is to be estimated is imaged, another animal may be in contact with that animal. In such a case, an image representing the other animal (hereinafter referred to as a "noise image") may be included in the animal image of the animal whose weight is estimated (the original subject). If the animal image used for weight estimation contains a noise image, the weight may not be estimated accurately.
In view of this, after executing the first approximation processing, the weight estimation device 100 deletes, as a noise image, any image not included in the approximate curved surface representing the surface of the animal being imaged (S202). That is, a point group deviating from the approximate curved surface representing the surface of the imaged animal is deleted as a point group representing another animal or the like. This configuration suppresses the problem described above.
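The deletion of point groups deviating from the approximate curved surface can be sketched as a residual threshold test; the threshold value here is an assumed placeholder, not one stated in the patent:

```python
import numpy as np

def delete_noise(points, predict_z, threshold):
    """Keep only points lying close to the approximate curved surface.

    points: (N, 3) array; predict_z: callable (x, y) -> z on the fitted
    surface. Points whose vertical residual exceeds `threshold` are
    treated as other animals (noise) touching the subject and deleted.
    """
    residual = np.abs(points[:, 2]
                      - np.array([predict_z(x, y) for x, y in points[:, :2]]))
    return points[residual <= threshold]
```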
After the noise image outside the approximate curved surface representing the surface of the imaged animal is deleted, the weight estimation device 100 executes the second approximation processing (S203). In the second approximation processing, polynomial approximation function surface fitting is performed on the animal image as in the first approximation processing. However, in the second approximation processing, a higher-order polynomial than in the first approximation processing is used.
The second approximation processing can extract the surface of the imaged animal with higher accuracy than the first approximation processing. Therefore, even if the noise image was not completely deleted in step S202, it is extracted in the second approximation processing as an image distinct from the surface of the imaged animal.
After the second approximation processing is executed, the weight estimation device 100 deletes the remaining noise image (S204). With this configuration, compared to a configuration in which, for example, only the first approximation processing is executed, the noise image is deleted from the animal image with higher accuracy. There is therefore an advantage that the weight of the imaged animal can be estimated with high accuracy.
A configuration in which only the second approximation processing is executed, without the first, is also conceivable (hereinafter, "comparative example Y"). However, for the same image, the processing load of the second approximation processing tends to be larger than that of the first approximation processing. Moreover, the noise image is ultimately deleted. Accordingly, the smaller the noise image subjected to the second approximation processing, the better.
In the present embodiment, the first approximation processing is executed before the second approximation processing, and the noise image extracted in the first approximation processing is deleted. There is therefore an advantage that the noise image subjected to the second approximation processing can be made smaller than in comparative example Y.
< second embodiment >
When estimating the weight of an animal (e.g., a pig), an image representing the entire back of the animal may be necessary (see, for example, patent document 1). The first embodiment described above has the advantage that the weight of the animal can be estimated even from an image in which a large part (e.g., half) of the animal's back is missing.
In the first embodiment, when the weight of the animal is estimated, the shape of the part of the animal missing from the animal image is estimated, and the whole image GW is generated. However, the possibility of an error between the estimated shape and the actual shape cannot be completely excluded. That is, an error may arise between the shape of the animal represented by the whole image GW and the actual shape of the animal. Therefore, when an animal image representing the entire back of the animal can be captured, fitting the carcass model image GM based on that animal image (an image representing the actual shape of the animal) is expected to estimate the weight more easily and accurately than fitting it based on the whole image GW (an image representing an estimated shape of the animal).
In view of the above, the weight estimation device 100 according to the second embodiment fits the carcass model image GM based on the animal image when an animal image representing the entire back of the animal can be captured. Otherwise, it fits the carcass model image GM based on the whole image GW.
Specifically, the weight estimation device 100 according to the second embodiment determines, based on the tilt information at the time the image of the animal was acquired, whether the animal was imaged from the vertical direction. When the tilt information is within a predetermined range (0 degrees ± α), it is determined that the animal was imaged from the vertical direction; when the tilt information is outside that range, it is determined that it was not. When the animal is imaged from the vertical direction, the animal image is presumed to include an image representing the entire back.
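The vertical-direction decision is a simple range test on the tilt information. A sketch, with the tolerance α as an assumed placeholder since the patent leaves its value unspecified:

```python
def shot_from_above(tilt_deg, alpha_deg=10.0):
    """Decide whether the camera looked straight down at the animal.

    tilt_deg: tilt information recorded when the image was captured
              (0 degrees = vertically downward).
    alpha_deg: tolerance alpha; the patent only states a range of
               0 degrees +/- alpha, so this value is a placeholder.
    """
    return abs(tilt_deg) <= alpha_deg
```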
When determining that the animal was imaged from the vertical direction, the weight estimation device 100 fits the carcass model image GM to the animal image and estimates the weight of the animal. On the other hand, when determining that the animal was not imaged from the vertical direction, the weight estimation device 100 generates the whole image GW from the animal image (as in the first embodiment) and estimates the weight of the animal by fitting the carcass model image GM to the whole image GW.
In the second embodiment described above, the weight of the animal can be estimated as in the first embodiment. In addition, when it is determined that the animal was imaged from the vertical direction, the carcass model image GM is fitted based on the animal image, so the effect of estimating the weight of the animal with high accuracy is particularly pronounced. Further, the second embodiment adopts a configuration in which it is automatically selected whether the carcass model image GM is fitted from the animal image or from the whole image GW. However, the user may check the animal image before the weight estimation processing and select (manually) which fitting to use according to the animal image.
< summary of action and Effect of example of the present embodiment >
< first embodiment >
The weight estimation device (100) of the embodiment comprises: an image acquisition unit (101) that acquires an image of an animal; a shape specifying unit (104) that specifies the shape (dorsal curve) of a predetermined part of the animal from the image; an information generation unit (106) that generates estimation information (whole image GW) used for estimating the weight of the animal based on the shape of the predetermined part; and a weight estimation unit (108) that estimates the weight based on the estimation information, wherein the information generation unit can generate the estimation information when a first image (animal image GA) obtained by imaging the animal from a first direction (for example, from the left half body side) is acquired, and can also generate the estimation information when a second image obtained by imaging the animal from a second direction (for example, from the right half body side) different from the first direction is acquired. According to the present embodiment, the degree of freedom in the imaging direction in which the weight of the animal can be estimated is improved.
< second embodiment and third embodiment >
In a second embodiment, the predetermined part of the animal is the back of the animal, and the weight estimation device (100) includes a half body selection unit (105) that selects, as the specific half body, one of the right half body located on the right side of the back and the left half body located on the left side of the back as viewed from the animal. The information generation unit can estimate the shape of the half body not selected as the specific half body from the shape of the specific half body (see fig. 5(c)), and generate, as the estimation information, information indicating the shape of the entire animal from the estimated shape of the half body and the shape of the specific half body (see fig. 5(d)). According to the present embodiment, the same effects as those of the first embodiment can be obtained. The weight estimation device of the third embodiment further includes an information storage unit that stores carcass model information (carcass model image GM) indicating the shape of a carcass, and the weight estimation unit estimates the weight of the animal's carcass based on the shape of the carcass indicated by the carcass model information deformed according to the shape of the animal indicated by the estimation information (see fig. 6(d-1) to 6(d-3)).
< fourth embodiment >
The information generating unit of the weight estimating device (100) according to the present embodiment can generate the information for estimation even when a third image obtained by imaging an animal in a first posture (posture in which a ridge curve has a first shape) is acquired, and can generate the information for estimation even when a fourth image obtained by imaging an animal in a second posture (posture in which a ridge curve has a second shape) different from the first posture is acquired. According to the present embodiment described above, for example, compared to a configuration in which the weight of the animal can be estimated from the image of the animal in the first posture, but the weight of the animal cannot be estimated from the image of the animal in the second posture, there is an advantage in that the degree of freedom of the posture in which the weight of the animal can be estimated is improved.
< fifth and sixth embodiments >
A weight estimation device (100) of a fifth embodiment comprises: an image capturing unit (101) that can be fixed at a specific position as viewed from the user and that can image an animal positioned in the user's line-of-sight direction; and a display unit (102), being a head-mounted display, that can display the image captured by the image capturing unit, wherein the image acquisition unit acquires the image captured by the image capturing unit. According to the present embodiment, there is an advantage that the user does not need to hold the image capturing unit by hand when imaging an animal. In addition, the image acquisition unit of the weight estimation device of the sixth embodiment acquires a distance image including information indicating the distance to the animal.
< seventh embodiment >
The program (weight estimation program PG) of the present embodiment causes a computer (10) to execute: an image acquisition process (S101 in fig. 8(a)) of acquiring an image of an animal; a shape specifying process (S104 in fig. 8(a)) of specifying the shape of a predetermined part of the animal from the image; an information generation process (S108 in fig. 8(a)) of generating estimation information used for estimating the weight of the animal based on the shape of the predetermined part; and a weight estimation process (S109 in fig. 8(a)) of estimating the weight based on the estimation information. In the information generation process, the estimation information can be generated both when a first image obtained by imaging the animal from a first direction is acquired and when a second image obtained by imaging the animal from a second direction different from the first direction is acquired. The seventh embodiment described above provides the same effects as the first embodiment.
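The final weight estimation step (S109) can be pictured as integrating cross-section areas along the body to obtain a volume and scaling by a mean tissue density. The following toy version is only a sketch; the density figure is an illustrative placeholder and does not come from the patent:

```python
def estimate_weight_kg(section_areas_m2, spacing_m, density_kg_m3=1000.0):
    """Toy weight estimate: integrate cross-section areas along the
    spine with the trapezoidal rule to get body volume, then scale by
    an assumed mean density (placeholder value, not from the patent).
    """
    volume = 0.0
    for a0, a1 in zip(section_areas_m2, section_areas_m2[1:]):
        volume += 0.5 * (a0 + a1) * spacing_m  # trapezoid between stations
    return volume * density_kg_m3
```

In practice the patent's estimation information (the reconstructed whole-body shape) would supply the cross-sections, and the mapping from shape to weight would be calibrated on real animals.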
Description of the reference numerals
100: weight estimation device; 101: image capturing unit; 102: display unit; 103: image acquisition unit; 104: shape specifying unit; 105: half-body selection unit; 106: information generation unit; 107: carcass model storage unit; 108: weight estimation unit.

Claims (7)

1. A weight estimation device is characterized by comprising:
an image acquisition unit that acquires an image of an animal;
a shape specifying unit that specifies a shape of a predetermined part of the animal based on the image;
an information generating unit that generates estimation information used for estimating the weight of the animal based on the shape of the predetermined portion; and
a weight estimating unit that estimates the weight based on the estimation information,
the information generating unit may generate the information for estimation when a first image obtained by imaging the animal from a first direction is acquired, and may generate the information for estimation when a second image obtained by imaging the animal from a second direction different from the first direction is acquired.
2. The weight estimation device according to claim 1,
the prescribed location of the animal is a dorsal spine of the animal,
the weight estimation device includes a half-body selection unit that selects, as viewed from the animal, one of a right half body located on the right side of the dorsal spine and a left half body located on the left side of the dorsal spine as a specific half body,
the information generating unit may estimate a shape of the half body not selected as the specific half body from the shape of the specific half body, and may generate information indicating a shape of the entire animal as the estimation information from the estimated shape of the half body and the shape of the specific half body.
3. The weight estimation device according to claim 2,
the weight estimating device includes an information storage unit that stores carcass model information indicating a shape of a carcass,
the weight estimating unit estimates the weight of the animal carcass based on the shape of the carcass represented by the carcass model information deformed in accordance with the shape of the animal represented by the estimation information.
4. The weight estimation device according to any one of claims 1 to 3,
the information generating unit may generate the estimation information when a third image obtained by imaging the animal in a first posture is acquired, and may generate the estimation information when a fourth image obtained by imaging the animal in a second posture different from the first posture is acquired.
5. The weight estimation device according to any one of claims 1 to 4, comprising:
an image capturing unit that can be fixed at a specific position as viewed from a user and that can capture an image of the animal positioned in the direction of the user's line of sight; and
a display unit which is a head-mounted display capable of displaying the image captured by the image capturing unit,
the image acquisition unit acquires the image captured by the image capturing unit.
6. The weight estimation device according to any one of claims 1 to 5,
the image acquisition unit acquires a distance image including information indicating a distance to the animal.
7. A program for causing a computer to execute:
an image acquisition process of acquiring an image of an animal;
a shape specifying process of specifying a shape of a predetermined part of the animal from the image;
an information generation process of generating estimation information for estimating the weight of the animal based on the shape of the predetermined portion; and
a weight estimation process of estimating the weight based on the estimation information,
the information generation process may generate the information for estimation when a first image obtained by imaging the animal from a first direction is acquired, and may generate the information for estimation when a second image obtained by imaging the animal from a second direction different from the first direction is acquired.
CN202180013781.6A 2020-02-18 2021-02-16 Weight estimation device and program Active CN115135973B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020025139 2020-02-18
JP2020-025139 2020-02-18
PCT/JP2021/005666 WO2021166894A1 (en) 2020-02-18 2021-02-16 Weight estimation device and program

Publications (2)

Publication Number Publication Date
CN115135973A true CN115135973A (en) 2022-09-30
CN115135973B CN115135973B (en) 2024-04-23

Family

ID=77391257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180013781.6A Active CN115135973B (en) 2020-02-18 2021-02-16 Weight estimation device and program

Country Status (5)

Country Link
US (1) US20230154023A1 (en)
EP (1) EP4109057A4 (en)
JP (1) JP7210862B2 (en)
CN (1) CN115135973B (en)
WO (1) WO2021166894A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022150026A (en) * 2021-03-25 2022-10-07 学校法人東京理科大学 Growth condition evaluation system
WO2023190352A1 (en) * 2022-03-29 2023-10-05 学校法人東京理科大学 Growing condition evaluation system

Citations (12)

Publication number Priority date Publication date Assignee Title
US4570728A (en) * 1983-10-31 1986-02-18 Yamato Scale Company, Limited Combination weighing system
US5412420A (en) * 1992-10-26 1995-05-02 Pheno Imaging, Inc. Three-dimensional phenotypic measuring system for animals
JP2007175050A (en) * 2005-11-29 2007-07-12 Yoshimoto Pole Co Ltd Control system for rearing animal population
CN101144705A (en) * 2007-07-25 2008-03-19 中国农业大学 Method for monitoring pig growth using binocular vision technology
BRPI0806332A2 (en) * 2007-02-07 2011-09-06 Scanvaegt Int As that are supplied to a batch composition equipment and item processing system
CN102405394A (en) * 2009-02-27 2012-04-04 体表翻译有限公司 Estimating physical parameters using three dimensional representations
CN103118647A (en) * 2010-09-22 2013-05-22 松下电器产业株式会社 Exercise assistance system
CN103884280A (en) * 2014-03-14 2014-06-25 中国农业大学 Mobile system for monitoring body sizes and weights of pigs in multiple pigsties
CN106256596A (en) * 2015-06-17 2016-12-28 福特全球技术公司 Drive and passengers weight and height estimation
JP2019045478A (en) * 2017-09-06 2019-03-22 国立大学法人 宮崎大学 Body weight estimation device for livestock and body weight estimation method for livestock
JP2019211364A (en) * 2018-06-06 2019-12-12 全国農業協同組合連合会 Device and method for estimating weight of body of animal
CN110672189A (en) * 2019-09-27 2020-01-10 北京海益同展信息科技有限公司 Weight estimation method, device, system and storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US6377353B1 (en) 2000-03-07 2002-04-23 Pheno Imaging, Inc. Three-dimensional measuring system for animals using structured light
US8369566B2 (en) * 2009-05-01 2013-02-05 Texas Tech University System Remote contactless stereoscopic mass estimation system
JP6083638B2 (en) 2012-08-24 2017-02-22 国立大学法人 宮崎大学 Weight estimation apparatus for animal body and weight estimation method
WO2017208436A1 (en) 2016-06-03 2017-12-07 株式会社オプティム Animal weight estimation system, animal weight estimation method, and program
JP6559197B2 (en) 2017-09-01 2019-08-14 Nttテクノクロス株式会社 Weight output device, weight output method and program
JP2021063774A (en) * 2019-10-17 2021-04-22 株式会社ノア Body weight estimation device

Non-Patent Citations (1)

Title
LAN Tiancai; CHEN Jun; ZHANG Yichen; LI Cuihua: "Cervical vertebral body segmentation method based on level sets and maximally stable extremal regions", Journal of Xiamen University (Natural Science Edition), no. 02, 5 February 2018 (2018-02-05), pages 129-136 *

Also Published As

Publication number Publication date
WO2021166894A1 (en) 2021-08-26
JPWO2021166894A1 (en) 2021-08-26
EP4109057A1 (en) 2022-12-28
JP7210862B2 (en) 2023-01-24
US20230154023A1 (en) 2023-05-18
CN115135973B (en) 2024-04-23
EP4109057A4 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
JP7004094B2 (en) Fish length measurement system, fish length measurement method and fish length measurement program
WO2019188506A1 (en) Information processing device, object measuring system, object measuring method, and program storing medium
JP5538667B2 (en) Position / orientation measuring apparatus and control method thereof
US9275473B2 (en) Image processing apparatus, image processing method, and program
US7671875B2 (en) Information processing method and apparatus
JP6981531B2 (en) Object identification device, object identification system, object identification method and computer program
CN115135973B (en) Weight estimation device and program
JP7001145B2 (en) Information processing equipment, object measurement system, object measurement method and computer program
JP6503906B2 (en) Image processing apparatus, image processing method and image processing program
US20060072808A1 (en) Registration of first and second image data of an object
JP2008275341A (en) Information processor and processing method
JP7049657B2 (en) Livestock weight estimation device and livestock weight estimation method
US20160003612A1 (en) Rapid and accurate three dimensional scanner
US20020150288A1 (en) Method for processing image data and modeling device
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
KR20200056764A (en) Method for automatically set up joints to create facial animation of 3d face model and computer program
JP6816773B2 (en) Information processing equipment, information processing methods and computer programs
JPWO2019045089A1 (en) Information processing apparatus, length measuring system, length measuring method, and computer program
JP6608165B2 (en) Image processing apparatus and method, and computer program
JP2007267979A (en) Method and system of analyzing organ form of living creature
JP6702370B2 (en) Measuring device, measuring system, measuring method and computer program
JPWO2020121406A1 (en) 3D measuring device, mobile robot, push wheel type moving device and 3D measurement processing method
JP6545847B2 (en) Image processing apparatus, image processing method and program
JPH10214349A (en) Method and instrument for measuring photographic parameter and recording medium
KR20160073488A (en) Handheld 3D Scanner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant