WO2020078111A1 - Weight measurement method and device, and computer readable storage medium - Google Patents

Info

Publication number
WO2020078111A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
measured
weight
matrix
posture
Prior art date
Application number
PCT/CN2019/103274
Other languages
French (fr)
Chinese (zh)
Inventor
王博
李春华
Original Assignee
京东数字科技控股有限公司
Priority date
Filing date
Publication date
Application filed by 京东数字科技控股有限公司 filed Critical 京东数字科技控股有限公司
Publication of WO2020078111A1 publication Critical patent/WO2020078111A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01G WEIGHING
    • G01G 17/00 Apparatus for or methods of weighing material of special form or property
    • G01G 17/08 Apparatus for or methods of weighing material of special form or property for weighing livestock
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Definitions

  • the present invention relates to image processing technology in the field of communications, and in particular, to a weight measurement method, device, and computer-readable storage medium.
  • the existing pig weight measurement methods fall into two categories, contact and non-contact: contact methods are further divided into direct and indirect measurement; direct measurement mainly relies on tools such as scales and electronic scales; indirect measurement first measures body-size indicators such as body length, chest circumference, and hip circumference, and then estimates the pig's weight through empirical formulas; non-contact methods mainly rely on a camera to capture images of the pig and then estimate its weight using digital image processing techniques.
  • the embodiments of the present invention are expected to provide a weight measurement method, device, and computer-readable storage medium, which solve the problems of large measurement error and high cost when measuring pig weight in the prior art, achieve accurate measurement of pig weight, reduce operation difficulty and maintenance cost, and have universal applicability.
  • a method for measuring body weight includes: acquiring image information to be monitored for an object to be measured, and performing image recognition on the image information to be monitored to obtain a contour of the object to be measured; determining a posture of the object to be measured based on the contour; and calculating the weight of the object to be measured based on the posture and a weight mapping matrix, where the weight mapping matrix is obtained by pre-training.
  • a weight measurement device, where the device includes: a processor, a memory, and a communication bus;
  • the communication bus is used to implement a communication connection between the processor and the memory
  • the processor is used to execute a weight measurement program stored in the memory to achieve the following steps:
  • a computer-readable storage medium stores one or more programs, and the one or more programs may be executed by one or more processors to implement the steps of the weight measurement method described above.
  • the posture of the object to be measured is determined based on the contour of the object to be measured, and finally the weight of the object to be measured is calculated based on the posture and the weight mapping matrix.
  • in this way, without manual participation, only the contour of the object to be measured needs to be extracted from the image information to be monitored, and the weight of the object to be measured can be measured based on the contour and the pre-trained weight mapping matrix, which solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operation difficulty and maintenance cost, and has universal applicability.
  • FIG. 1 is a schematic flowchart of a weight measurement method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of another weight measurement method according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a standard posture of an object to be measured provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another weight measurement method according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a weight measurement device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a weight measurement method. Referring to FIG. 1, the method includes the following steps:
  • Step 101 Acquire image information to be monitored for the object to be measured, and perform image recognition on the image information to be monitored to obtain the contour of the object to be measured.
  • step 101, in which the image information to be monitored for the object to be measured is acquired and image recognition is performed on it to obtain the contour of the object to be measured, can be carried out by the weight measuring device; the image information to be monitored may include one image or multiple images, which can be determined according to the actual application scenario.
  • the contour of the object to be measured may be obtained by performing image recognition on the object to be measured included in the image information to be monitored.
  • Step 102 Determine the posture of the object to be measured based on the contour of the object to be measured.
  • determining the posture of the object to be measured based on the contour of the object to be measured can be implemented by the weight measuring device: the image corresponding to the contour of the object to be measured is matched against the image of the standard posture, and the posture of the object to be measured is determined according to the result of the matching.
  • Step 103 Based on the posture and weight mapping matrix of the object to be measured, calculate the weight of the object to be measured.
  • the weight mapping matrix is obtained by pre-training.
  • step 103, in which the weight of the object to be measured is calculated based on its posture and the weight mapping matrix, can be carried out by the weight measuring device; the weight measuring device can obtain the weight mapping matrix corresponding to the posture of the object to be measured, and calculate the weight of the object to be measured from its mask matrix and the weight mapping matrix.
  • with the weight measurement method provided by the embodiment of the present invention, the image information to be monitored is obtained and image recognition is performed on it to obtain the contour of the object to be measured, the posture of the object to be measured is then determined based on the contour, and finally the weight of the object to be measured is calculated based on the posture and the weight mapping matrix.
  • in this way, based on the contour and the weight mapping matrix obtained by pre-training, the weight of the object to be measured can be measured, which solves the problems of large measurement error and high cost in the prior art when measuring pig weight, achieves accurate measurement of pig weight, reduces operation difficulty and maintenance cost, and has universal applicability.
  • the embodiments of the present invention provide a weight measurement method. Referring to FIG. 2, the method includes the following steps:
  • Step 201 The weight measurement device acquires image information to be monitored for the object to be measured through the image collector.
  • the image information to be monitored may be collected through an image collector; the image collector may be, for example, a monocular camera.
  • the image information to be monitored may include a plurality of images to be monitored; of course, the image information to be monitored may also refer to the video to be monitored.
  • Step 202 The weight measuring device acquires a preset number of objects to be trained that meet a predetermined condition.
  • the posture of the object to be trained is a standard posture.
  • the preset number is an amount that can be adjusted according to actual application requirements and application scenarios, and it is related to the number of standard postures acquired; in a feasible implementation, the preset number can be an integer multiple of the number of standard postures obtained; if two standard postures are obtained, the preset number can be an integer multiple of 2; for example, the preset number can be 1000, where all objects to be trained have a standard posture, e.g. 500 objects to be trained in the first standard posture and 500 objects to be trained in the second standard posture.
  • the predetermined condition may mean that the objects to be trained cover various locations in the collected picture and cover various types of objects; that is, the objects to be trained can be obtained by monitoring the objects that need to be trained in advance through the camera, and then selecting, from a large number of candidates, a preset number of objects that cover various locations of the picture collected by the camera.
  • Step 203 The weight measuring device obtains the number of objects to be trained included in each pixel in the image information to be monitored, and generates a first target matrix based on the number of objects to be trained.
  • the number of objects to be trained included on each pixel in the image information to be monitored may refer to the number of objects to be trained that have appeared at that pixel in the picture of the image to be monitored.
  • the first target matrix may be generated based on the acquired number of objects to be trained and the first matrix set initially, and the number of rows and columns of the first matrix is the same as the value indicated by the size of the image information to be monitored.
  • the first target matrix may be used to record the number of objects to be trained that appear at each pixel.
  • Step 204 The weight measurement device generates a second target matrix based on the weight of the object to be trained and the pixel area of the covered pixels.
  • the weight of the object to be trained is obtained by the weight measuring device measuring the object to be trained in real time; meanwhile, the pixel area is the total area of the pixels occupied by the object to be trained in the image information to be monitored, as obtained by the weight measuring device.
  • the second target matrix may be generated based on the acquired weight, pixel area and initial setting of the second matrix of the object to be trained.
  • the number of rows and columns of the second matrix is the same as the value indicated by the size of the image information to be monitored.
  • the second target matrix may be used to record the cumulative weight value of the objects to be trained that have appeared at each pixel.
  • Step 205 The weight measuring device generates a weight mapping matrix based on the second target matrix and the first target matrix.
  • the second target matrix can be divided element-wise by the first target matrix, and the obtained quotient is the weight mapping matrix.
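  • As an illustration only (not part of the patent text), the following Python/NumPy sketch shows how such an element-wise division could be carried out while guarding against pixels that no training object ever covered; the array names are hypothetical:
```python
import numpy as np

def build_weight_mapping_matrix(second_target, first_target):
    """Element-wise quotient of the accumulated-weight matrix and the count matrix.

    second_target: M x N matrix of accumulated per-pixel weight values.
    first_target:  M x N matrix counting how many training objects covered each pixel.
    Pixels never covered by any training object keep a mapping value of 0.
    """
    weight_map = np.zeros_like(second_target, dtype=np.float64)
    covered = first_target > 0                       # avoid division by zero
    weight_map[covered] = second_target[covered] / first_target[covered]
    return weight_map
```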
  • Step 206 The weight measuring device uses a specific image segmentation algorithm to perform image recognition on the image information to be monitored, and determines the contour of the object to be measured from the image information to be monitored.
  • the specific image segmentation algorithm may be, for example, the Mask Region-based Convolutional Neural Network (Mask R-CNN).
  • the contour of the object to be measured can be obtained by segmenting and edge detecting the object in the image information to be monitored using the Mask R-CNN algorithm.
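  • The patent does not specify a particular Mask R-CNN implementation; as a non-authoritative sketch, an off-the-shelf torchvision model could stand in for it, with the contour then extracted from the predicted instance mask using OpenCV (the score threshold and helper name below are illustrative assumptions):
```python
import cv2
import numpy as np
import torch
import torchvision

# Off-the-shelf Mask R-CNN used here only as a stand-in for the segmentation model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

def extract_contour(image_bgr, score_threshold=0.7):
    """Return the largest contour of the highest-scoring detected instance, or None."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    if len(output["scores"]) == 0 or output["scores"][0] < score_threshold:
        return None
    mask = (output["masks"][0, 0] > 0.5).numpy().astype(np.uint8)  # binary instance mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```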
  • Step 207 The weight measuring device acquires a standard posture for the object to be measured.
  • the standard posture can be set according to the actual application requirements and application scenarios, in combination with the physical characteristics of the measurement object; the standard posture can be a posture that every object to be measured exhibits and that is generally applicable; the embodiment of the present invention does not limit the specific number of standard postures obtained.
  • if the object to be measured is a pig whose weight needs to be measured, the standard postures obtained in this embodiment of the present invention may include two types; as shown in FIG. 3, the standard postures may specifically include two postures: the first standard posture A shown in FIG. 3a and the second standard posture B shown in FIG. 3b, where the first standard posture may be a standing posture and the second standard posture may be a lateral lying posture.
  • Step 208 The weight measuring device determines the posture of the object to be measured based on the standard posture and the contour of the object to be measured.
  • the area of the same part shared by the image corresponding to the contour of the object to be measured and the image corresponding to the standard posture is obtained according to the contour of the object to be measured, the matching degree between the two images is calculated from this area, and the posture of the object to be measured is then determined according to the relationship between the matching degree and a preset threshold.
  • Step 209 The weight measuring device calculates the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix.
  • the object to be trained and the object to be measured in the embodiment of the present invention may be any object whose weight needs to be calculated; in a feasible implementation, the objects to be trained and to be measured may include different types of animals whose weight needs to be calculated, for example poultry.
  • the weight measurement method provided by the embodiment of the present invention only needs to extract, without manual participation, the contour of the object to be measured from the image information to be monitored, and the weight of the object to be measured can then be measured based on the contour and the weight mapping matrix obtained by pre-training.
  • furthermore, since the image to be monitored is collected with a monocular camera, the method solves the problems of large measurement error and high cost existing in the prior art when measuring pig weight, achieves accurate measurement of pig weight, reduces operation difficulty and maintenance cost, and has universal applicability.
  • an embodiment of the present invention provides a weight measurement method. Referring to FIG. 4, the method includes the following steps:
  • Step 301 The weight measuring device obtains image information to be monitored for the object to be measured through the image collector.
  • Step 302 The weight measurement device acquires a preset number of objects to be trained that meet a predetermined condition.
  • Step 303 The weight measuring device obtains the number of objects to be trained included in each pixel in the image information to be monitored, and generates a first target matrix based on the number of objects to be trained.
  • step 303 can be implemented in the following manner:
  • Step 303a The weight measuring device sets the first matrix of M * N.
  • M * N is the size of the image information to be monitored.
  • the value of M may be the length value corresponding to the size of the image information to be monitored, and the value of N may be the width value corresponding to the size of the image information to be monitored.
  • in a feasible implementation, the value of M may be 1280 and the value of N may be 1024, that is, the first matrix is a 1280 * 1024 matrix. It should be noted that the first matrix is an initialized matrix, and the values of its elements are all 0.
  • Step 303b The weight measuring device sequentially traverses the outline of the preset number of objects to be trained, and processes the outline of each object to be trained to obtain the target part of each object to be trained.
  • the contour of the object to be trained may be obtained by segmenting the acquired object to be trained and edge detection using the Mask R-CNN algorithm.
  • the image information to be monitored in the embodiment of the present invention may be a two-dimensional planar image
  • the object to be trained may be a pig body.
  • the weight of a pig is mainly distributed in the trunk of the body, while the legs, tail, and head account for relatively little weight.
  • image processing can therefore be used to retain the more prominent trunk portion of the pig contour (removing the legs, tail, and head); further, only the trunk region is used as the effective area for training the weight mapping matrix and estimating weight.
  • the target portion may be determined according to the actual object of the object specifically referred to by the object to be trained, and may be an effective portion that can represent the weight of the object to be trained.
  • the target part can be a pig body part.
  • Step 303c The weight measuring device obtains the quantity value of the target portion of the object to be trained included on each pixel in the image information to be monitored.
  • the quantity value of target portions of objects to be trained included on each pixel in the image information to be monitored may refer to, for each pixel in the picture of the image information to be monitored, the number of target portions of objects to be trained that have appeared at that pixel.
  • Step 303d The weight measuring device assigns the quantity value to the first matrix according to the correspondence between the pixel points and the first matrix to obtain the first target matrix.
  • since the specifications of the first matrix are the same as the size of the image information to be monitored, there is a one-to-one correspondence between each pixel in the image information to be monitored and the elements of the first matrix.
  • the first target matrix may be an M * N matrix obtained by updating the values of the elements in the first matrix using quantity values.
  • the first target matrix can be generated using the following formula (1):
  • first_matrix(x, y) = first_matrix(x, y) + 1, if pixel (x, y) is covered by the target part of an object to be trained; otherwise first_matrix(x, y) remains unchanged (1)
  • x and y respectively represent the horizontal and vertical coordinates of a pixel;
  • formula (1) indicates that each time the target part of an object to be trained appears on a pixel, the value of the element in the first matrix corresponding to that pixel is increased by 1; if the target part of the object to be trained does not appear on the pixel, the value of the corresponding element in the first matrix remains unchanged; the matrix obtained after all elements of the first matrix have been updated in this way is the first target matrix.
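  • A minimal NumPy sketch of this counting step, assuming each training sample is represented by a boolean mask of its target part (an illustrative assumption), could look as follows:
```python
import numpy as np

def accumulate_counts(first_matrix, target_part_mask):
    """Apply formula (1): add 1 wherever the target part of a training object appears.

    first_matrix:     M x N integer matrix, initialized to all zeros.
    target_part_mask: M x N boolean matrix, True where the target part covers the pixel.
    """
    first_matrix[target_part_mask] += 1
    return first_matrix
```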
  • Step 304 The weight measuring device generates a second target matrix based on the weight of the object to be trained and the pixel area of the covered pixels.
  • Step 304a The weight measuring device sets the second matrix of M * N.
  • the specifications of the second matrix are the same as the specifications of the first matrix, and the second matrix is also a matrix formed after initialization, and the values of the corresponding elements are all 0.
  • Step 304b The weight measuring device measures the weight of each object to be trained, and calculates the pixel area of the pixels covered by the target part of each object to be trained in the image information to be monitored.
  • the weight used for training can be obtained by weighing, with a scale, electronic scale, or other tool, the actual weight of each of the 500 pigs in the first standard posture (or the second standard posture) used for training.
  • the pixel area may refer to the area of the area formed by all pixels covered by the pig body part of each pig in the image information to be monitored.
  • Step 304c The weight measuring device calculates the weight value corresponding to each pixel in the pixels covered by each object to be trained based on the weight and pixel area of each object to be trained.
  • the weight value corresponding to each pixel covered by an object to be trained may be obtained by dividing the actual weight of that object to be trained by its pixel area.
  • Step 304d The weight measurement device adds the weight values corresponding to the same pixel among the pixels covered by the preset number of objects to be trained to obtain the weight value corresponding to each pixel in the image information to be monitored.
  • for example, if a certain pixel is covered by the target parts of 400 objects to be trained, and the weight values of these 400 objects to be trained at that pixel are a1, a2, a3 ... a400, then the weight value of the corresponding pixel in the image information to be monitored is a1 + a2 + a3 + ... + a400.
  • Step 304e The weight measurement device assigns the weight values to the second matrix according to the correspondence between pixels and the second matrix to obtain a second target matrix.
  • each pixel in the image information to be monitored has a one-to-one correspondence with the elements in the second matrix.
  • the second target matrix may be an M * N matrix obtained by updating the values of elements in the second matrix using the weight values. In a feasible implementation, if the weight value corresponding to a certain pixel in the image information to be monitored is 300, then the value of the element corresponding to this pixel in the second matrix is 300.
  • formula (2) indicates that, for each pixel covered by the target part of a training object, the value of the corresponding element in the second matrix is increased by the quotient obtained by dividing the actual weight of that training object by its pixel area; if a pixel is not covered by the target part of the training object, the value of the corresponding element in the second matrix remains unchanged; the matrix obtained after all elements of the second matrix have been updated is the second target matrix.
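  • By analogy, a sketch of the accumulation described by formula (2), again assuming a boolean target-part mask per training object together with its measured weight:
```python
import numpy as np

def accumulate_weights(second_matrix, target_part_mask, measured_weight):
    """Apply formula (2): spread a training object's weight evenly over its covered pixels.

    second_matrix:    M x N float matrix, initialized to all zeros.
    target_part_mask: M x N boolean matrix, True where the target part covers the pixel.
    measured_weight:  actual weight of this training object (e.g. in kilograms).
    """
    pixel_area = int(target_part_mask.sum())          # number of covered pixels
    if pixel_area > 0:
        second_matrix[target_part_mask] += measured_weight / pixel_area
    return second_matrix
```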
  • Step 305 The weight measuring device generates a weight mapping matrix based on the second target matrix and the first target matrix.
  • the objects to be trained are classified according to the pre-selected standard postures; if there are two standard postures, the objects to be trained are divided into two types: the first type consists of objects to be trained whose posture is the first standard posture, and the second type consists of objects to be trained whose posture is the second standard posture.
  • the first weight mapping matrix is trained according to the first type of object to be trained, and the second weight mapping matrix is obtained according to the second type of object to be trained.
  • one or more weight mapping matrices can be obtained, the number of which is determined by the number of standard postures acquired.
  • Step 306 The weight measuring device uses a specific image segmentation algorithm to perform image recognition on the image information to be monitored, and determines the contour of the object to be measured from the image information to be monitored.
  • Step 307 The weight measuring device acquires a standard posture for the object to be measured.
  • Step 308 The weight measuring device acquires an image with a standard posture, and converts the format of the image with a standard posture into a preset format to obtain a first image.
  • the preset format is a binarized format; binarization refers to setting the background part of the image to 1 and the foreground part to 0; the first image may be obtained by extracting an image whose posture is the standard posture, binarizing it, and then scaling its size.
  • Step 309 The weight measuring device acquires an image corresponding to the contour of the object to be measured, and converts the format of the image corresponding to the contour of the object to be measured into a preset format to obtain a second image.
  • the second image may be obtained by binarizing the image corresponding to the contour of the object to be measured and then scaling its size; it should be noted that if the image information to be monitored includes multiple objects to be measured, a corresponding number of second images can be obtained.
  • the size scaling used when acquiring the first image and the second image may be the same; for example, the image with the first standard posture may be scaled to a size of 200 * 80, and the image with the second standard posture may be scaled to a size of 200 * 100.
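  • For illustration only, the binarization and size scaling of a contour image could be done with OpenCV as sketched below; the threshold value and the assumption that the object is brighter than the background are not from the patent:
```python
import cv2
import numpy as np

def to_binary_template(contour_image, target_size=(200, 80)):
    """Binarize an image and scale it, following the convention described above
    (background pixels set to 1, foreground pixels set to 0).

    contour_image: grayscale or BGR image in which the object (foreground) is brighter
                   than the background -- an illustrative assumption.
    target_size:   (width, height) to scale to, e.g. 200 x 80 for the first standard posture.
    """
    gray = contour_image if contour_image.ndim == 2 else cv2.cvtColor(contour_image, cv2.COLOR_BGR2GRAY)
    _, foreground = cv2.threshold(gray, 127, 1, cv2.THRESH_BINARY)  # object pixels -> 1
    binary = 1 - foreground                                         # background 1, foreground 0
    return cv2.resize(binary.astype(np.uint8), target_size, interpolation=cv2.INTER_NEAREST)
```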
  • Step 310 The weight measuring device calculates the degree of matching between the second image and the first image.
  • the calculation of the matching degree between the second image and the first image in step 310 may be implemented in the following manner:
  • Step 310a The weight measuring device acquires a first area corresponding to the same area in the first image and the second image.
  • the first area may be obtained by comparing the first image with the second image and calculating the area of the area corresponding to the same part in the first image and the second image.
  • Step 310b The weight measuring device calculates the sum of the area of the first image and the area of the second image to obtain the second area.
  • the second area may be obtained by adding the area of the first image and the area of the second image.
  • Step 310c The weight measuring device calculates the ratio of the first area to the second area to obtain the matching degree of the second image and the first image.
  • the matching degree R between the second image and the first image can be calculated using formula (3): R = S_same / (S_first + S_second), where S_same is the first area (the area of the region shared by the two images), and S_first and S_second are the areas of the first image and the second image, respectively.
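  • A small sketch of this matching-degree computation on two binarized templates of equal size (using the convention above, where object pixels are 0; interpreting "area" as the number of object pixels is an assumption made for this sketch):
```python
import numpy as np

def matching_degree(first_image, second_image):
    """Compute R = S_same / (S_first + S_second) as in formula (3)."""
    obj1 = first_image == 0
    obj2 = second_image == 0
    overlap = np.logical_and(obj1, obj2).sum()        # first area: pixels shared by both objects
    total = obj1.sum() + obj2.sum()                   # second area: sum of the two object areas
    return overlap / total if total > 0 else 0.0
```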
  • Step 311 If the degree of matching between the second image and the first image is greater than or equal to a preset threshold, the weight measuring device determines that the posture of the object to be measured is a standard posture corresponding to the first image with a degree of matching greater than or equal to the preset threshold.
  • for example, if the standard postures include the first standard posture and the second standard posture, the first image includes two images; if the matching degree between the second image and the image of the first standard posture in the first image is greater than or equal to the preset threshold, the posture of the object to be measured is the first standard posture; if the matching degree between the second image and the image of the second standard posture in the first image is greater than or equal to the preset threshold, the posture of the object to be measured is the second standard posture.
  • the preset threshold can be set according to the actual application scenario and specific requirements.
  • Step 312 If the matching degree is less than the preset threshold, the weight measuring device rotates the second image according to the predetermined direction and the preset angle to obtain the third image, and calculates the matching degree of the third image and the first image.
  • the predetermined direction and the preset angle may be a preset direction and angle; in a feasible implementation manner, the predetermined direction may be clockwise or counterclockwise, and the preset angle may be 45 °.
  • Step 313 If the matching degree between the third image and the first image is less than a preset threshold, the weight measuring device rotates the third image according to a predetermined direction and a predetermined angle.
  • if the matching degree between the third image and the first image is greater than or equal to the preset threshold, the weight measuring device determines that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.
  • for example, if the matching degrees between the initially obtained second image and both images in the first image are less than the preset threshold, the second image is rotated clockwise (or counterclockwise) by 45 ° to obtain the third image; if the matching degree between the third image and the image in the first image whose posture is the first standard posture is greater than or equal to the preset threshold, the posture of the object to be measured is the first standard posture; if the matching degree between the third image and the image in the first image whose posture is the second standard posture is greater than or equal to the preset threshold, the posture of the object to be measured is the second standard posture.
  • Step 314 If, after rotating through one full revolution in the predetermined direction at the preset angle, the matching degree between every rotated image and the first image is less than the preset threshold, the weight measuring device determines that the posture of the object to be measured is an invalid posture.
  • for example, if the matching degrees between the third image and both images in the first image are less than the preset threshold, the image is rotated clockwise (or counterclockwise) by another 45 ° to obtain a new image; if the matching degree between the new image and any image in the first image is greater than or equal to the preset threshold, the posture of the object to be measured is the standard posture corresponding to the image whose matching degree is greater than or equal to the preset threshold; if the matching degree with every image in the first image is still less than the preset threshold, the rotation by 45 ° in the clockwise (or counterclockwise) direction continues; if, after the second image has been rotated through a full turn (360 °), the matching degrees between all images obtained during the rotation and the two images in the first image are less than the preset threshold, the posture of the object to be measured is determined to be an invalid posture, and the corresponding contour is marked as an invalid contour.
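  • For illustration, the rotate-and-match loop described above could be sketched as follows, assuming the matching_degree helper from the earlier sketch, a 45° step, and standard-posture templates of the same size as the object template (all of which are assumptions of this sketch):
```python
import cv2
import numpy as np

def classify_posture(second_image, templates, threshold, step_deg=45):
    """Rotate the measured-object template in 45-degree steps and match it against
    the standard-posture templates; return the matched posture name or "invalid".

    second_image: binary template of the object to be measured (background 1, object 0).
    templates:    dict mapping posture name -> binary template of the same size.
    threshold:    preset matching-degree threshold.
    """
    h, w = second_image.shape
    for angle in range(0, 360, step_deg):             # at most one full revolution
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(second_image, rot, (w, h),
                                 flags=cv2.INTER_NEAREST, borderValue=1)  # pad with background
        for name, template in templates.items():
            if matching_degree(template, rotated) >= threshold:  # helper from the earlier sketch
                return name
    return "invalid"
```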
  • Step 315 The weight measuring device calculates the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix.
  • Step 315a If the posture of the object to be measured is a standard posture, obtain the position of the pixel point covered by the target part of the object to be measured in the image information to be monitored.
  • Step 315b Generate an M * N mask matrix based on the positions of the pixels.
  • the mask matrix may be generated in the following manner: first, a third matrix of M * N is initialized, whose specifications are the same as those of the first matrix and whose element values are all 0 after initialization; then the positions of the pixels covered by the target part of the object to be measured in the image information to be monitored are determined, the values of the elements in the third matrix corresponding to those pixels are set to 1, and the values of the elements corresponding to pixels not covered by the target part of the object to be measured are set to 0, finally yielding the M * N mask matrix.
  • each pixel in the image information to be monitored has a one-to-one correspondence with the elements in the third matrix.
  • Step 315c Based on the mask matrix and the weight mapping matrix, calculate the weight of the object to be measured.
  • the mask matrix is multiplied element-wise by the corresponding weight mapping matrix, the values of all elements of the resulting matrix are then summed, and the sum is the weight of the object to be measured.
  • if the posture of the object to be measured is the first standard posture, the first weight mapping matrix may be multiplied element-wise by the mask matrix and the values of all elements of the resulting matrix summed to obtain the weight of the object to be measured; if the posture of the object to be measured is the second standard posture, the second weight mapping matrix may be multiplied element-wise by the mask matrix and the values of all elements of the resulting matrix summed to obtain the weight of the object to be measured.
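  • A compact sketch of this final weighing step, assuming a boolean mask of the pixels covered by the target part of the object to be measured and the posture-specific weight mapping matrix obtained during training:
```python
import numpy as np

def estimate_weight(target_part_mask, weight_mapping_matrix):
    """Build the M x N mask matrix (1 for covered pixels, 0 elsewhere), multiply it
    element-wise by the weight mapping matrix, and sum all elements to get the weight."""
    mask_matrix = target_part_mask.astype(np.float64)     # covered pixels -> 1, others -> 0
    return float((mask_matrix * weight_mapping_matrix).sum())
```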
  • the weight measurement method provided in the embodiment of the present invention may be used to measure the weight of each object to be measured.
  • the weight measurement method provided by the embodiment of the present invention only needs to extract, without manual participation, the contour of the object to be measured from the image information to be monitored; based on the contour and the weight mapping matrix obtained by pre-training, the weight of the object to be measured can be measured, which solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operation difficulty and maintenance cost, and has universal applicability.
  • the weight measuring device 4 may include: a processor 41, a memory 42, and a communication bus 43, wherein:
  • the communication bus 43 is used to realize the communication connection between the processor 41 and the memory 42;
  • the processor 41 is used to execute the weight measurement program stored in the memory 42 to achieve the following steps: acquiring image information to be monitored for the object to be measured, and performing image recognition on the image information to obtain the contour of the object to be measured; determining the posture of the object to be measured based on the contour; and calculating the weight of the object to be measured based on the posture and the weight mapping matrix, where the weight mapping matrix is obtained by pre-training.
  • the processor 41 is used to execute the acquisition of the image information to be monitored stored in the memory 42 and perform image recognition on the image information to be monitored to obtain the contour of the object to be measured, in order to achieve the following steps: through the image collector Obtain the image information to be monitored; use a specific image segmentation algorithm to perform image recognition on the image information to be monitored, and determine the contour of the object to be measured from the image information to be monitored.
  • the processor 41 is used to execute the determination of the posture of the object to be measured based on the contour of the object to be measured stored in the memory 42 to achieve the following steps: acquiring a standard posture for the object to be measured; based on the standard posture And the contour of the object to be measured, determine the posture of the object to be measured.
  • the processor 41 is used to execute the standard posture and the contour of the object to be measured stored in the memory 42 to determine the posture of the object to be measured, so as to achieve the following steps: acquiring an image with a standard posture, Convert the format of the image with the standard pose to the preset format to obtain the first image; obtain the image corresponding to the contour of the object to be measured, and convert the format of the image corresponding to the contour of the object to be measured to the preset format to obtain the second Image; calculate the degree of matching between the second image and the first image; if the degree of matching between the second image and the first image is greater than or equal to the preset threshold, determine the posture of the object to be measured is the first degree of matching greater than or equal to the preset threshold The standard pose corresponding to the image.
  • the processor 41 is used to execute the weight measurement program stored in the memory 42 and may also implement the following steps: if the matching degree is less than the preset threshold, rotating the second image according to the predetermined direction and the preset angle to obtain a third image, and calculating the matching degree between the third image and the first image; if the matching degree between the third image and the first image is greater than or equal to the preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold; if the matching degree between the third image and the first image is less than the preset threshold, rotating the third image according to the predetermined direction and the predetermined angle; and if, after one full revolution of rotation in the predetermined direction at the predetermined angle, the matching degree between every rotated image and the first image is less than the preset threshold, determining that the posture of the object to be measured is an invalid posture.
  • the processor 41 is used to execute the calculation of the matching degree of the second image and the first image stored in the memory 42 to achieve the following steps: acquiring the corresponding areas of the first image and the second image in the same area The first area; calculate the sum of the area of the first image and the area of the second image to obtain the second area; calculate the ratio of the first area and the second area to obtain the degree of matching between the second image and the first image.
  • the processor 41 is used to execute the acquisition of image information to be monitored stored in the memory 42 and perform image recognition on the image information to be monitored to obtain the contour of the object to be measured, so as to implement the following steps: obtaining a preset number of objects to be trained that meet predetermined conditions, where the posture of the objects to be trained is a standard posture; obtaining the number of objects to be trained included on each pixel in the image information to be monitored, and generating a first target matrix based on that number; generating a second target matrix based on the weights of the objects to be trained and the pixel areas of the covered pixels; and generating the weight mapping matrix based on the second target matrix and the first target matrix.
  • the processor 41 is configured to execute the step, stored in the memory 42, of obtaining the number of objects to be trained included on each pixel in the image information to be monitored and generating a first target matrix based on that number, so as to achieve the following steps: setting a first matrix of M * N, where M * N is the size of the image information to be monitored; sequentially traversing the contours of the preset number of objects to be trained, and processing the contour of each object to be trained to obtain the target part of each object to be trained; obtaining the quantity value of target parts of objects to be trained included on each pixel in the image information to be monitored; and assigning the quantity values to the first matrix according to the correspondence between pixels and the first matrix to obtain the first target matrix, where there is a correspondence between pixels and elements of the first matrix.
  • the processor 41 is used to execute the step, stored in the memory 42, of generating a second target matrix based on the weights of the objects to be trained and the pixel areas of the covered pixels, so as to achieve the following steps: setting a second matrix of M * N; measuring the weight of each object to be trained, and calculating the pixel area of the pixels covered by the target part of each object to be trained in the image information to be monitored; calculating, based on the weight and pixel area of each object to be trained, the weight value corresponding to each pixel covered by that object; adding the weight values corresponding to the same pixel among the pixels covered by the preset number of objects to be trained to obtain the weight value corresponding to each pixel in the image information to be monitored; and assigning the weight values to the second matrix according to the correspondence between pixels and the second matrix to obtain the second target matrix.
  • the processor 41 is used to execute the step, stored in the memory 42, of calculating the weight of the object to be measured based on its posture and the weight mapping matrix, so as to achieve the following steps: if the posture of the object to be measured is a standard posture, obtaining the positions of the pixels covered by the target part of the object to be measured in the image information to be monitored; generating an M * N mask matrix based on the positions of the pixels; and calculating the weight of the object to be measured based on the mask matrix and the weight mapping matrix.
  • the weight measuring device provided by the embodiment of the present invention only needs to extract, without human participation, the contour of the object to be measured from the image information to be monitored; based on the contour and the weight mapping matrix obtained by pre-training, the weight of the object to be measured can be measured, which solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operation difficulty and maintenance cost, and has universal applicability.
  • the embodiments of the present invention provide a computer-readable storage medium that stores one or more programs, and the one or more programs may be executed by one or more processors to implement the following steps: obtaining the image information to be monitored, and performing image recognition on it to obtain the contour of the object to be measured; determining the posture of the object to be measured based on the contour; and calculating the weight of the object to be measured based on the posture and the weight mapping matrix, where the weight mapping matrix is obtained by pre-training.
  • the one or more programs may be executed by one or more processors to achieve the following steps: acquiring image information to be monitored through an image collector; adopting a specific image segmentation algorithm to perform monitoring on the image information to be monitored Image recognition, determine the contour of the object to be measured from the image information to be monitored.
  • the one or more programs may be executed by one or more processors to achieve the following steps: acquiring a standard pose for the object to be measured; based on the standard pose and the contour of the object to be measured, determining The posture of the object to be measured.
  • the one or more programs may be executed by one or more processors to implement the following steps: obtaining an image with a standard posture, and converting the format of the image with the standard posture into the preset format to obtain the first image; obtaining the image corresponding to the contour of the object to be measured, and converting its format into the preset format to obtain the second image; calculating the matching degree between the second image and the first image; and, if the matching degree between the second image and the first image is greater than or equal to the preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.
  • the one or more programs may be executed by one or more processors to implement the following steps: if the matching degree is less than the preset threshold, rotating the second image according to the predetermined direction and the preset angle to obtain the third image, and calculating the matching degree between the third image and the first image; if the matching degree between the third image and the first image is greater than or equal to the preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold; if the matching degree between the third image and the first image is less than the preset threshold, rotating the third image according to the predetermined direction and the predetermined angle; and if, after one full revolution of rotation in the predetermined direction at the predetermined angle, the matching degree between every rotated image and the first image is less than the preset threshold, determining that the posture of the object to be measured is an invalid posture.
  • the one or more programs may be executed by one or more processors to achieve the following steps: acquiring the first area corresponding to the same area in the first image and the second image; calculating the first The sum of the area of the image and the area of the second image yields the second area; the ratio of the first area and the second area is calculated to obtain the degree of matching between the second image and the first image.
  • the one or more programs may be executed by one or more processors to achieve the following steps: acquiring a preset number of objects to be trained that meet predetermined conditions; acquiring image information to be monitored The number of objects to be trained is included on each pixel of, and a first target matrix is generated based on the number of objects to be trained; a second target matrix is generated based on the weight of the object to be trained and the pixel area of the pixels covered; The second target matrix and the first target matrix generate a weight mapping matrix.
  • the one or more programs may be executed by one or more processors to achieve the following steps: set a first matrix of M * N; where M * N is the image information to be monitored Size; traverse the outline of a preset number of objects to be trained in sequence, and process the outline of each object to be trained to obtain the target part of each object to be trained; obtain the object to be included on each pixel in the image information to be monitored The quantity value of the target part of the training object; according to the correspondence between the pixels and the first matrix, the quantity value is assigned to the first matrix to obtain the first target matrix; wherein, there is a correspondence between the pixels and the elements in the first matrix relationship.
  • the one or more programs may be executed by one or more processors to achieve the following steps: setting a second matrix of M * N; measuring the weight of each object to be trained, and calculating the pixel area of the pixels covered by the target part of each object to be trained in the image information to be monitored; calculating, based on the weight and pixel area of each object to be trained, the weight value corresponding to each pixel covered by that object; adding the weight values corresponding to the same pixel among the pixels covered by the preset number of objects to be trained to obtain the weight value corresponding to each pixel in the image information to be monitored; and assigning the weight values to the second matrix according to the correspondence between pixels and the second matrix to obtain the second target matrix.
  • the one or more programs may be executed by one or more processors to achieve the following steps: acquiring the position of the pixel covered by the object to be measured in the image information to be monitored; based on the pixel The position of the point generates an M * N mask matrix; based on the mask matrix and the weight mapping matrix, the weight of the object to be measured is calculated.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage and optical storage, etc.) containing computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Abstract

A weight measurement method and device (4), and a computer readable storage medium. The weight measurement method comprises: obtaining image information to be monitored for an object to be measured, and carrying out image recognition on said image information to obtain a contour of said object (101); determining a posture of said object on the basis of the contour of said object (102); and calculating the weight of said object on the basis of the posture of said object and a weight mapping matrix (103). The weight mapping matrix is obtained by means of pre-training.

Description

Weight measurement method, device, and computer-readable storage medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 201811210433.4 filed on October 17, 2018, the entire content of which is incorporated herein by reference.
Technical Field
The present invention relates to image processing technology in the field of communications, and in particular, to a weight measurement method, device, and computer-readable storage medium.
Background
China has become the world's largest pork producer, with pork production and consumption accounting for more than half of the global total. Against this background, with the continuous development of modern animal husbandry, some enterprises have begun to apply automatic control, computer vision, and other technologies to pig farming in order to strengthen the management and control of pig farms, improve the production environment, and increase staff efficiency. During breeding, regularly measuring changes in pig weight and then adjusting pen assignment, feeding amount, and so on is of great significance for improving breeding efficiency.
Existing pig weight measurement methods fall into two categories, contact and non-contact: contact methods are further divided into direct and indirect measurement; direct measurement mainly relies on tools such as scales and electronic scales; indirect measurement first measures body-size indicators such as body length, chest circumference, and hip circumference, and then estimates the pig's weight through empirical formulas; non-contact methods mainly rely on a camera to capture images of the pig and then estimate its weight using digital image processing techniques.
However, the inventors found that the traditional contact measurement methods in the prior art are time-consuming and labor-intensive and involve large errors; at the same time, the inventors found that the non-contact methods in the prior art have at least a limited range of use and rely on human-computer interaction, resulting in high maintenance costs and operational difficulty.
发明内容Summary of the invention
为解决上述技术问题,本发明实施例期望提供一种体重测量方法、设 备和计算机可读存储介质,解决了现有技术中对猪的体重进行测量时存在的测误差较大且成本较高的问题,实现了对猪的体重的准确测量,且降低了操作难度和维护成本;同时,具有普遍适用性。In order to solve the above technical problems, the embodiments of the present invention are expected to provide a weight measurement method, device, and computer-readable storage medium, which solves the problem of large measurement error and high cost in measuring pig weight in the prior art. The problem has realized the accurate measurement of the weight of the pig, and reduced the operation difficulty and maintenance cost; at the same time, it has universal applicability.
本发明的技术方案是这样实现的:The technical solution of the present invention is implemented as follows:
一种体重测量方法,所述方法包括:获取针对待测量对象的待监测图像信息,并对所述待监测图像信息进行图像识别得到所述待测量对象的轮廓;基于所述待测量对象的轮廓确定待测量对象的姿势;基于所述待测量对象的姿势和体重映射矩阵,计算所述待测量对象的体重;其中,所述体重映射矩阵是预先训练得到的。A method for measuring body weight, the method includes: acquiring image information to be monitored for an object to be measured, and performing image recognition on the image information to be monitored to obtain an outline of the object to be measured; based on the outline of the object to be measured Determine the posture of the object to be measured; calculate the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix; wherein the weight mapping matrix is obtained by pre-training.
A weight measurement device, the device comprising a processor, a memory and a communication bus;

the communication bus is configured to implement a communication connection between the processor and the memory;

the processor is configured to execute a weight measurement program stored in the memory to implement the following steps:

acquiring to-be-monitored image information of an object to be measured, and performing image recognition on the to-be-monitored image information to obtain a contour of the object to be measured;

determining a posture of the object to be measured based on the contour of the object to be measured;

calculating a weight of the object to be measured based on the posture of the object to be measured and a weight mapping matrix, wherein the weight mapping matrix is obtained by training in advance.

A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the above weight measurement method.
Because the to-be-monitored image information of the object to be measured is acquired and image recognition is performed on it to obtain the contour of the object to be measured, the posture of the object to be measured is then determined based on the contour, and finally the weight is calculated based on the posture and the weight mapping matrix, the weight of the object to be measured can be measured, without human participation, simply by extracting the contour of the object from the to-be-monitored image information and combining it with the pre-trained weight mapping matrix. This solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operational difficulty and maintenance cost, and is of general applicability.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a weight measurement method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of another weight measurement method according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of standard postures of an object to be measured according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of yet another weight measurement method according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a weight measurement device according to an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments.
An embodiment of the present invention provides a weight measurement method. Referring to FIG. 1, the method includes the following steps:

Step 101: acquire to-be-monitored image information of an object to be measured, and perform image recognition on the to-be-monitored image information to obtain a contour of the object to be measured.

Step 101 may be performed by a weight measurement device. The to-be-monitored image information is obtained by photographing the object to be measured with a camera; it may comprise one image or multiple images, depending on the actual application scenario. The contour of the object to be measured may be obtained by performing image recognition on the object to be measured contained in the to-be-monitored image information.

Step 102: determine a posture of the object to be measured based on the contour of the object to be measured.

Step 102 may be performed by the weight measurement device: the image corresponding to the contour of the object to be measured is matched against images of standard postures, and the posture of the object to be measured is determined from the matching result.

Step 103: calculate the weight of the object to be measured based on the posture of the object to be measured and a weight mapping matrix.

The weight mapping matrix is obtained by training in advance.

It should be noted that step 103 may be performed by the weight measurement device: the device obtains the weight mapping matrix corresponding to the posture of the object to be measured, and calculates the weight of the object to be measured from the mask matrix of the object to be measured and the weight mapping matrix.

With the weight measurement method provided by this embodiment of the present invention, the to-be-monitored image information is acquired, image recognition is performed on it to obtain the contour of the object to be measured, the posture of the object to be measured is determined based on the contour, and the weight is calculated based on the posture and the weight mapping matrix. In this way, without human participation, the weight of the object to be measured can be measured simply by extracting its contour from the to-be-monitored image information and combining it with the pre-trained weight mapping matrix, which solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operational difficulty and maintenance cost, and is of general applicability.
Based on the foregoing embodiments, an embodiment of the present invention provides a weight measurement method. Referring to FIG. 2, the method includes the following steps:

Step 201: the weight measurement device acquires to-be-monitored image information of the object to be measured through an image collector.

The to-be-monitored image information may be collected by an image collector, which may be a monocular camera. In a feasible implementation, the to-be-monitored image information may comprise multiple images to be monitored; it may also refer to a video to be monitored.

Step 202: the weight measurement device acquires a preset number of objects to be trained that satisfy a predetermined condition.

The postures of the objects to be trained are standard postures.

It should be noted that the preset number can be adjusted according to actual application requirements and scenarios, and is related to the number of standard postures acquired. In a feasible implementation, the preset number may be an integer multiple of the number of standard postures; if two standard postures are acquired (i.e. there are two kinds of standard posture), the preset number may be an integer multiple of 2. For example, the preset number may be 1000, where all objects to be trained have standard postures: the objects to be trained may include 500 objects whose posture is the first standard posture and 500 objects whose posture is the second standard posture.

In addition, the predetermined condition may mean that the objects to be trained cover all parts of the captured picture and cover various types of objects. That is, the objects to be trained may be obtained by monitoring candidate objects with the camera in advance and then selecting, from a large number of candidates, a preset number of objects that cover all parts of the picture captured by the camera.
Step 203: the weight measurement device obtains the number of objects to be trained appearing at each pixel of the to-be-monitored image information, and generates a first target matrix based on that number.

The number of objects to be trained at each pixel of the to-be-monitored image information may refer to the number of objects to be trained that have appeared at each of the pixels of the picture of the to-be-monitored image information.

The first target matrix may be generated from the acquired number of objects to be trained and an initialized first matrix whose numbers of rows and columns equal the dimensions of the to-be-monitored image information. In a feasible implementation, the first target matrix may be used to record the number of objects to be trained that have appeared at each pixel.

Step 204: the weight measurement device generates a second target matrix based on the weights of the objects to be trained and the pixel areas of the pixels they cover.

The weight of each object to be trained is obtained by the weight measurement device measuring the object to be trained in real time; the pixel area is the total pixel area of the pixels occupied by the object to be trained in the to-be-monitored image information.

The second target matrix may be generated from the acquired weights of the objects to be trained, the pixel areas and an initialized second matrix whose numbers of rows and columns equal the dimensions of the to-be-monitored image information. In a feasible implementation, the second target matrix may be used to record the accumulated weight values of the objects to be trained that have appeared at each pixel.

Step 205: the weight measurement device generates the weight mapping matrix based on the second target matrix and the first target matrix.

After the first target matrix and the second target matrix are obtained, the second target matrix may be divided by the first target matrix, and the resulting quotient is the weight mapping matrix.
Step 206: the weight measurement device performs image recognition on the to-be-monitored image information by using a specific image segmentation algorithm, and determines the contour of the object to be measured from the to-be-monitored image information.

The specific image segmentation algorithm may be Mask R-CNN (Mask Regions with Convolutional Neural Network). The contour of the object to be measured may be obtained by segmenting the objects in the to-be-monitored image information and performing edge detection with the Mask R-CNN algorithm.
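As a concrete illustration of this step, the sketch below uses the pretrained Mask R-CNN provided by torchvision to produce one binary mask per detected object. This is only one possible realization: the document names the algorithm but no framework, and the score threshold, the 0.5 mask cut-off and the use of a generic pretrained model (which in practice would be replaced by a model fine-tuned on annotated pig images) are assumptions.

```python
# Sketch only: the document names Mask R-CNN but no framework; torchvision's
# pretrained detector and the thresholds below are illustrative assumptions.
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # in practice a model fine-tuned on annotated pig images would be used

def extract_object_masks(image_rgb, score_threshold=0.5):
    """Return one binary mask (H x W, values 0/1) per detected object."""
    with torch.no_grad():
        prediction = model([to_tensor(image_rgb)])[0]
    masks = []
    for mask, score in zip(prediction["masks"], prediction["scores"]):
        if score >= score_threshold:
            # each mask has shape (1, H, W) with soft values in [0, 1]
            masks.append((mask[0] >= 0.5).cpu().numpy().astype(np.uint8))
    return masks
```

The object's contour can then be taken from the boundary of each mask, for example with a standard edge detector.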
Step 207: the weight measurement device acquires standard postures for the object to be measured.

The standard postures may be set according to actual application requirements and scenarios in combination with the physical form of the measured object; a standard posture is a posture that every object to be measured can assume and that is generally applicable, and the embodiments of the present invention do not limit the number of standard postures acquired. If the object to be measured is a pig to be weighed, two standard postures (i.e. two) may be acquired in this embodiment. As shown in FIG. 3, the standard postures may include two postures: the first standard posture A shown in FIG. 3a and the second standard posture B shown in FIG. 3b, where the first standard posture may be a standing posture and the second standard posture may be a side-lying posture.

Step 208: the weight measurement device determines the posture of the object to be measured based on the standard postures and the contour of the object to be measured.

The area of the part shared by the image corresponding to the contour of the object to be measured and the image corresponding to a standard posture is obtained from the contour, the matching degree between the two images is calculated from this area, and the posture of the object to be measured is then determined from the relationship between the matching degree and a preset threshold.

Step 209: the weight measurement device calculates the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix.

The objects to be trained and the objects to be measured in the embodiments of the present invention may be any objects whose weight needs to be calculated; in a feasible implementation, they may include different types of animals whose weight needs to be calculated, for example poultry.

It should be noted that for steps and content in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, which are not repeated here.

With the weight measurement method provided by this embodiment of the present invention, without human participation, the weight of the object to be measured can be measured simply by extracting its contour from the to-be-monitored image information and combining it with the pre-trained weight mapping matrix; and because the to-be-monitored image is captured with a monocular camera, the problems of large measurement error and high cost when measuring pig weight in the prior art are solved, accurate measurement of pig weight is achieved, operational difficulty and maintenance cost are reduced, and the method is of general applicability.
Based on the foregoing embodiments, an embodiment of the present invention provides a weight measurement method. Referring to FIG. 4, the method includes the following steps:

Step 301: the weight measurement device acquires to-be-monitored image information of the object to be measured through the image collector.

Step 302: the weight measurement device acquires a preset number of objects to be trained that satisfy the predetermined condition.

Step 303: the weight measurement device obtains the number of objects to be trained appearing at each pixel of the to-be-monitored image information, and generates the first target matrix based on that number.

Step 303 may be implemented as follows:

Step 303a: the weight measurement device sets up a first matrix of size M*N.

M*N is the size of the to-be-monitored image information.

In other embodiments of the present invention, M may be the length of the to-be-monitored image information and N may be its width. In a feasible implementation, if the size of the to-be-monitored image information is 1280*1024, M may be 1280 and N may be 1024, i.e. the first matrix is a 1280*1024 matrix. It should be noted that the first matrix is an initialized matrix whose elements are all 0.
Step 303b: the weight measurement device traverses the contours of the preset number of objects to be trained in turn, and processes the contour of each object to be trained to obtain the target part of each object to be trained.

The contour of an object to be trained may be obtained by segmenting the acquired object to be trained and performing edge detection with the Mask R-CNN algorithm.

The to-be-monitored image information in this embodiment of the present invention may be a two-dimensional planar image, and the object to be trained may be a pig. Considering that, in a two-dimensional planar image, most of the pig's weight is distributed over the trunk while the legs, tail and head contribute relatively little, image processing methods may be used, in order to improve the accuracy of the algorithm, to remove from the pig contour the parts protruding from the trunk, so that only the trunk area is used as the effective area for training the weight mapping matrix and for estimating weight. That is, the target part is determined according to the physical form of the object to be trained, and is the effective part that can represent its weight. In a feasible implementation, if the object to be trained is a pig, the target part may be the pig's trunk.

Step 303c: the weight measurement device obtains the number of target parts of the objects to be trained appearing at each pixel of the to-be-monitored image information.

The number of target parts of the objects to be trained at each pixel of the to-be-monitored image information may refer to the number of target parts of objects to be trained that have appeared at each of the pixels of the picture of the to-be-monitored image information.

Step 303d: the weight measurement device assigns the count values to the first matrix according to the correspondence between pixels and the first matrix, to obtain the first target matrix.

There is a correspondence between the pixels and the elements of the first matrix.

In other embodiments of the present invention, because the first matrix has the same dimensions as the to-be-monitored image information, there is a one-to-one correspondence between each pixel of the to-be-monitored image information and the elements of the first matrix. The first target matrix may be the M*N matrix obtained by updating the element values of the first matrix with the count values. In a feasible implementation, if 400 target parts of objects to be trained have appeared at a certain pixel of the to-be-monitored image information, the value of the corresponding element of the first matrix is 400.
It should be noted that if the first matrix is W_n(x, y) and the target part of an object to be trained is P'_i, the first target matrix may be generated using the following formula (1):
$$W_n(x,y)=\begin{cases}W_n(x,y)+1, & (x,y)\in P'_i\\[2pt] W_n(x,y), & (x,y)\notin P'_i\end{cases}\qquad(1)$$
Here x and y denote the horizontal and vertical coordinates of a pixel. Formula (1) means that whenever the target part of an object to be trained appears at a pixel, the value of the corresponding element of the first matrix is increased by 1; if the target part of an object to be trained does not appear at the pixel, the value of the corresponding element of the first matrix is unchanged. The matrix obtained after all elements of the first matrix have been assigned in this way is the first target matrix.
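A minimal sketch of this accumulation, assuming each training object's target part is available as an M*N binary mask (1 on covered pixels, 0 elsewhere); the function name and the use of NumPy are illustrative and not taken from the document.

```python
import numpy as np

def build_count_matrix(target_part_masks, image_shape):
    """First target matrix W_n: how many training objects' target parts
    have appeared at each pixel (formula (1))."""
    w_n = np.zeros(image_shape, dtype=np.float64)   # initialized first matrix, all zeros
    for mask in target_part_masks:                  # traverse the objects to be trained
        w_n += (mask > 0)                           # +1 wherever P'_i covers the pixel
    return w_n
```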
Step 304: the weight measurement device generates the second target matrix based on the weights of the objects to be trained and the pixel areas of the pixels they cover.

Step 304a: the weight measurement device sets up a second matrix of size M*N.

The second matrix has the same dimensions as the first matrix and is likewise an initialized matrix whose elements are all 0.

Step 304b: the weight measurement device measures the weight of each object to be trained, and calculates the pixel area of the pixels covered by the target part of each object to be trained in the to-be-monitored image information.

The weights of the objects to be trained may be obtained by weighing, with tools such as platform scales or electronic scales, the actual weights of the 500 pigs used for training whose posture is the first standard posture (or whose posture is the second standard posture). The pixel area may refer to the area of the region formed by all pixels covered by the trunk of each pig in the to-be-monitored image information.

Step 304c: the weight measurement device calculates, based on the weight and pixel area of each object to be trained, the weight value corresponding to each of the pixels covered by that object.

The weight value corresponding to each pixel covered by an object to be trained may be obtained by dividing the actual weight of that object by its pixel area.

Step 304d: the weight measurement device adds up the weight values corresponding to the same pixel over the preset number of objects to be trained, to obtain the weight value corresponding to each pixel of the to-be-monitored image information.

In a feasible implementation, if 400 objects to be trained have appeared at a certain pixel of the to-be-monitored image information and their weight values are a1, a2, a3, ..., a400, the weight value of that pixel in the to-be-monitored image information is a1 + a2 + a3 + ... + a400.

Step 304e: the weight measurement device assigns the weight values to the second matrix according to the correspondence between pixels and the second matrix, to obtain the second target matrix.

Because the second matrix has the same dimensions as the to-be-monitored image information, each pixel of the to-be-monitored image information has a one-to-one correspondence with the elements of the second matrix. The second target matrix may be the M*N matrix obtained by updating the element values of the second matrix with the weight values. In a feasible implementation, if the weight value corresponding to a certain pixel of the to-be-monitored image information is 300, the value of the corresponding element of the second matrix is 300.
It should be noted that if the second matrix is W_m(x, y), the actual weight of an object to be trained is M_i, and the pixel area corresponding to its target part is S_i, the second target matrix may be generated using the following formula (2):
$$W_m(x,y)=\begin{cases}W_m(x,y)+\dfrac{M_i}{S_i}, & (x,y)\in P'_i\\[2pt] W_m(x,y), & (x,y)\notin P'_i\end{cases}\qquad(2)$$
Formula (2) means that, for every pixel covered by the target part of an object to be trained, the value of the corresponding element of the second matrix is increased by the quotient of the actual weight of that object divided by its pixel area; if a pixel is not covered by the target part of the object to be trained, the value of the corresponding element of the second matrix is unchanged. The matrix obtained after all elements of the second matrix have been assigned in this way is the second target matrix.
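A matching sketch for the second target matrix, under the same assumptions as the sketch above; measured_weights[i] stands for the scale-measured weight M_i of training object i.

```python
import numpy as np

def build_weight_sum_matrix(target_part_masks, measured_weights, image_shape):
    """Second target matrix W_m: accumulated per-pixel weight values (formula (2))."""
    w_m = np.zeros(image_shape, dtype=np.float64)   # initialized second matrix, all zeros
    for mask, weight in zip(target_part_masks, measured_weights):
        covered = mask > 0
        pixel_area = covered.sum()                  # S_i: pixels covered by the target part
        if pixel_area > 0:
            w_m[covered] += weight / pixel_area     # add M_i / S_i on every covered pixel
    return w_m
```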
Step 305: the weight measurement device generates the weight mapping matrix based on the second target matrix and the first target matrix.

It should be noted that, in the process of generating the weight mapping matrix, the objects to be trained are classified according to the pre-selected standard postures. If there are two standard postures, the objects to be trained are divided into two classes: the first class contains the objects to be trained whose posture is the first standard posture, and the second class contains those whose posture is the second standard posture. When the weight mapping matrices are subsequently generated, a first weight mapping matrix is obtained by training on the first class and a second weight mapping matrix is obtained by training on the second class. Of course, in practical applications only one weight mapping matrix, or more weight mapping matrices, may be obtained, depending on the number of standard postures acquired.
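Putting the two accumulations together, a per-posture training driver might look like the sketch below, reusing the two helper sketches above. Treating pixels that were never covered during training as having a zero mapping value is an assumption; the document does not say how such pixels are handled.

```python
import numpy as np

def train_weight_mapping(masks_by_posture, weights_by_posture, image_shape):
    """One weight mapping matrix per standard posture: W = W_m / W_n element-wise."""
    mapping = {}
    for posture, masks in masks_by_posture.items():
        w_n = build_count_matrix(masks, image_shape)
        w_m = build_weight_sum_matrix(masks, weights_by_posture[posture], image_shape)
        # element-wise quotient; pixels never covered during training stay at zero
        mapping[posture] = np.divide(w_m, w_n, out=np.zeros_like(w_m), where=w_n > 0)
    return mapping
```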
Step 306: the weight measurement device performs image recognition on the to-be-monitored image information by using the specific image segmentation algorithm, and determines the contour of the object to be measured from the to-be-monitored image information.

Step 307: the weight measurement device acquires the standard postures for the object to be measured.

Step 308: the weight measurement device acquires images whose postures are the standard postures, and converts the format of these images into a preset format to obtain first images.

The preset format is a binarized format; binarization refers to the image obtained by setting the background part of the image to 1 and the foreground part to 0. The first image may be obtained by binarizing the extracted image of a standard posture and then scaling it.

Step 309: the weight measurement device acquires the image corresponding to the contour of the object to be measured, and converts its format into the preset format to obtain a second image.

The second image may be obtained by binarizing the image corresponding to the contour of the object to be measured and then scaling it. It should be noted that if the to-be-monitored image information contains multiple objects to be measured, a corresponding number of second images are obtained.

It should be noted that the scaled sizes used for the first image and the second image may be the same; for example, the scaled size of the image of the first standard posture may be 200*80, and the scaled size of the image of the second standard posture may be 200*100.
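A sketch of steps 308 and 309, assuming OpenCV for the resizing. The 200*80 and 200*100 target sizes come from the document; interpreting them as width*height, the posture labels, and the use of nearest-neighbour interpolation to keep the image binary are assumptions.

```python
import cv2
import numpy as np

# assumed (width, height) reading of the document's 200*80 / 200*100 scaled sizes
TARGET_SIZE = {"standing": (200, 80), "side_lying": (200, 100)}

def to_binary_template(object_mask, posture):
    """Binarize (background = 1, foreground = 0, per the convention above)
    and rescale so the image can be matched against a standard-posture image."""
    binary = np.where(object_mask > 0, 0, 1).astype(np.uint8)
    width, height = TARGET_SIZE[posture]
    return cv2.resize(binary, (width, height), interpolation=cv2.INTER_NEAREST)
```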
Step 310: the weight measurement device calculates the matching degree between the second image and the first image.

Step 310 of calculating the matching degree between the second image and the first image may be implemented as follows:

Step 310a: the weight measurement device obtains the first area, corresponding to the region that the first image and the second image have in common.

The first area may be obtained by comparing the first image with the second image and calculating the area of the region corresponding to the part the two images have in common.

Step 310b: the weight measurement device calculates the sum of the area of the first image and the area of the second image to obtain the second area.

The second area may be obtained by adding the area of the first image and the area of the second image.

Step 310c: the weight measurement device calculates the ratio of the first area to the second area to obtain the matching degree between the second image and the first image.

If the first image is denoted M_ref and the second image is denoted m, the matching degree R between the second image and the first image may be calculated using formula (3):
$$R=\frac{S_{M_{ref}\cap m}}{S_{M_{ref}}+S_{m}}\qquad(3)$$

where $S_{M_{ref}\cap m}$ is the first area (the area shared by the two images) and $S_{M_{ref}}+S_{m}$ is the second area (the sum of the areas of the two images).
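A direct transcription of steps 310a to 310c. Counting "area" as the number of foreground pixels (value 0 under the binarization convention above) and requiring the two images to have the same shape are assumptions, since the document does not fix either detail.

```python
import numpy as np

def matching_degree(first_image, second_image):
    """Matching degree R of formula (3): shared area divided by the sum of the areas."""
    fg_first = (first_image == 0)                 # foreground pixels of the first image
    fg_second = (second_image == 0)               # foreground pixels of the second image
    shared_area = np.logical_and(fg_first, fg_second).sum()    # first area
    total_area = fg_first.sum() + fg_second.sum()               # second area
    return shared_area / total_area if total_area > 0 else 0.0
```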
Step 311: if the matching degree between the second image and the first image is greater than or equal to a preset threshold, the weight measurement device determines that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.

If the standard postures include the first standard posture and the second standard posture, there are two first images. If the matching degree between the second image and the first image of the first standard posture is greater than or equal to the preset threshold, the posture of the object to be measured is the first standard posture; if the matching degree between the second image and the first image of the second standard posture is greater than or equal to the preset threshold, the posture of the object to be measured is the second standard posture. The preset threshold may be set according to the actual application scenario and specific requirements.

Step 312: if the matching degree is less than the preset threshold, the weight measurement device rotates the second image in a predetermined direction by a preset angle to obtain a third image, and calculates the matching degree between the third image and the first image.

The predetermined direction and preset angle may be set in advance; in a feasible implementation, the predetermined direction may be clockwise or counterclockwise, and the preset angle may be 45°.

Step 313: if the matching degree between the third image and the first image is less than the preset threshold, the weight measurement device rotates the third image in the predetermined direction by the predetermined angle.

If the matching degree between the third image and the first image is greater than or equal to the preset threshold, the weight measurement device determines that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.

If the matching degrees between the initially obtained second image and both first images are less than the preset threshold, the second image is rotated clockwise (or counterclockwise) by 45° to obtain the third image. If the matching degree between the third image and the first image of the first standard posture is greater than or equal to the preset threshold, the posture of the object to be measured is the first standard posture; if the matching degree between the third image and the first image of the second standard posture is greater than or equal to the preset threshold, the posture of the object to be measured is the second standard posture.

Step 314: if, after a full revolution in the predetermined direction by the predetermined angle, the matching degrees between every rotated image and the first image are all less than the preset threshold, the weight measurement device determines that the posture of the object to be measured is an invalid posture.

If the matching degrees between the third image and both first images are less than the preset threshold, the image continues to be rotated clockwise (or counterclockwise) by a further 45° to obtain a new image. If the matching degree between the new image and either first image is greater than or equal to the preset threshold, the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold; if the matching degrees between the new image and both first images are less than the preset threshold, the rotation clockwise (or counterclockwise) by 45° continues. If, after the second image has been rotated through a full revolution (360°), the matching degrees between all images obtained during the rotation and both first images are less than the preset threshold, the posture of the object to be measured is determined to be an invalid posture, its contour is marked as an invalid contour, and no subsequent weight estimation is run for that object.
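The rotation loop of steps 312 to 314 could be sketched as below, building on the to_binary_template and matching_degree sketches above. The use of scipy.ndimage.rotate, the re-binarization and resizing after every rotation, and the function names are assumptions; only the 45° step, the fixed rotation direction and the full-revolution stop condition come from the document.

```python
import cv2
import numpy as np
from scipy import ndimage

def classify_posture(object_mask, standard_images, threshold, step_deg=45):
    """Return the matched standard posture, or None for an invalid posture
    after a full revolution without any match."""
    for angle in range(0, 360, step_deg):
        rotated = object_mask if angle == 0 else ndimage.rotate(
            object_mask, angle, reshape=True, order=0)  # nearest-neighbour keeps it binary
        for posture, reference in standard_images.items():
            height, width = reference.shape
            candidate = cv2.resize(np.where(rotated > 0, 0, 1).astype(np.uint8),
                                   (width, height), interpolation=cv2.INTER_NEAREST)
            if matching_degree(reference, candidate) >= threshold:
                return posture
    return None  # invalid posture: mark the contour as invalid, skip weight estimation
```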
Step 315: the weight measurement device calculates the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix.

Step 315a: if the posture of the object to be measured is a standard posture, obtain the positions of the pixels covered by the target part of the object to be measured in the to-be-monitored image information.

Step 315b: generate an M*N mask matrix based on the positions of the pixels.

The mask matrix may be generated as follows: first initialize a third matrix of size M*N, with the same dimensions as the first matrix and all elements equal to 0; determine the positions of the pixels covered by the target part of the object to be measured in the to-be-monitored image information; mark the value of the element of the third matrix corresponding to each such pixel as 1, and leave the value of the elements corresponding to pixels not covered by the target part of the object to be measured as 0, finally producing the M*N mask matrix. Each pixel of the to-be-monitored image information has a one-to-one correspondence with the elements of the third matrix.

Step 315c: calculate the weight of the object to be measured based on the mask matrix and the weight mapping matrix.

The mask matrix is multiplied element-wise by the corresponding weight mapping matrix, and the values of all elements of the resulting matrix are added; the sum is the weight of the object to be measured. In other embodiments of the present invention, if the posture of the object to be measured is the first standard posture, the first weight mapping matrix of the weight mapping matrices may be multiplied by the mask matrix and the element values of the resulting matrix summed to obtain the weight of the object to be measured; if the posture of the object to be measured is the second standard posture, the second weight mapping matrix may be multiplied by the mask matrix and the element values of the resulting matrix summed to obtain the weight of the object to be measured.
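A minimal sketch of steps 315a to 315c, assuming the target part of the measured object is available as an M*N binary mask and that weight_mapping is the per-posture dictionary produced by the training sketch above.

```python
import numpy as np

def estimate_weight(target_part_mask, posture, weight_mapping):
    """Mask matrix (1 on covered pixels, 0 elsewhere) multiplied element-wise by
    the matched posture's weight mapping matrix, then summed over all elements."""
    mask_matrix = (target_part_mask > 0).astype(np.float64)
    return float(np.sum(mask_matrix * weight_mapping[posture]))
```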
If multiple objects to be measured are identified from the to-be-monitored image information, the weight measurement method provided in this embodiment of the present invention may be used to measure the weight of each of them.

It should be noted that for steps and content in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, which are not repeated here.

With the weight measurement method provided by this embodiment of the present invention, without human participation, the weight of the object to be measured can be measured simply by extracting its contour from the to-be-monitored image information and combining it with the pre-trained weight mapping matrix, which solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operational difficulty and maintenance cost, and is of general applicability.
Based on the foregoing embodiments, an embodiment of the present invention provides a weight measurement device, which may be applied in the weight measurement methods provided by the embodiments corresponding to FIGS. 1-2 and FIG. 4. Referring to FIG. 5, the weight measurement device 4 may include a processor 41, a memory 42 and a communication bus 43, wherein:

the communication bus 43 is configured to implement a communication connection between the processor 41 and the memory 42;

the processor 41 is configured to execute a weight measurement program stored in the memory 42 to implement the following steps: acquiring to-be-monitored image information of an object to be measured, and performing image recognition on the to-be-monitored image information to obtain a contour of the object to be measured; determining a posture of the object to be measured based on the contour; and calculating a weight of the object to be measured based on the posture and a weight mapping matrix, wherein the weight mapping matrix is obtained by training in advance.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of acquiring the to-be-monitored image information and performing image recognition on it to obtain the contour of the object to be measured, so as to implement the following steps: acquiring the to-be-monitored image information through an image collector; and performing image recognition on the to-be-monitored image information by using a specific image segmentation algorithm, and determining the contour of the object to be measured from the to-be-monitored image information.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of determining the posture of the object to be measured based on the contour of the object to be measured, so as to implement the following steps: acquiring standard postures for the object to be measured; and determining the posture of the object to be measured based on the standard postures and the contour of the object to be measured.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of determining the posture of the object to be measured based on the standard postures and the contour of the object to be measured, so as to implement the following steps: acquiring images whose postures are the standard postures, and converting their format into a preset format to obtain first images; acquiring the image corresponding to the contour of the object to be measured, and converting its format into the preset format to obtain a second image; calculating the matching degree between the second image and the first image; and, if the matching degree between the second image and the first image is greater than or equal to a preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.

In other embodiments of the present invention, the processor 41 is configured to execute the weight measurement program stored in the memory 42 and may further implement the following steps: if the matching degree is less than the preset threshold, rotating the second image in a predetermined direction by a preset angle to obtain a third image, and calculating the matching degree between the third image and the first image; if the matching degree between the third image and the first image is greater than or equal to the preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold; if the matching degree between the third image and the first image is less than the preset threshold, rotating the third image in the predetermined direction by the predetermined angle; and, if after a full revolution in the predetermined direction by the predetermined angle the matching degrees between every rotated image and the first image are all less than the preset threshold, determining that the posture of the object to be measured is an invalid posture.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of calculating the matching degree between the second image and the first image, so as to implement the following steps: obtaining the first area, corresponding to the region shared by the first image and the second image; calculating the sum of the area of the first image and the area of the second image to obtain the second area; and calculating the ratio of the first area to the second area to obtain the matching degree between the second image and the first image.
In other embodiments of the present invention, before executing the step, stored in the memory 42, of acquiring the to-be-monitored image information and performing image recognition on it to obtain the contour of the object to be measured, the processor 41 implements the following steps: acquiring a preset number of objects to be trained that satisfy a predetermined condition, the postures of the objects to be trained being standard postures; obtaining the number of objects to be trained appearing at each pixel of the to-be-monitored image information, and generating a first target matrix based on that number; generating a second target matrix based on the weights of the objects to be trained and the pixel areas of the pixels they cover; and generating the weight mapping matrix based on the second target matrix and the first target matrix.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of obtaining the number of objects to be trained appearing at each pixel of the to-be-monitored image information and generating the first target matrix based on that number, so as to implement the following steps: setting up a first matrix of size M*N, where M*N is the size of the to-be-monitored image information; traversing the contours of the preset number of objects to be trained in turn, and processing the contour of each object to be trained to obtain its target part; obtaining the number of target parts of the objects to be trained appearing at each pixel of the to-be-monitored image information; and assigning the count values to the first matrix according to the correspondence between pixels and the first matrix to obtain the first target matrix, where there is a correspondence between the pixels and the elements of the first matrix.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of generating the second target matrix based on the weights of the objects to be trained and the pixel areas of the covered pixels, so as to implement the following steps: setting up a second matrix of size M*N; measuring the weight of each object to be trained, and calculating the pixel area of the pixels covered by the target part of each object to be trained in the to-be-monitored image information; calculating, based on the weight and pixel area of each object to be trained, the weight value corresponding to each pixel covered by that object; adding up the weight values corresponding to the same pixel over the preset number of objects to be trained, to obtain the weight value corresponding to each pixel of the to-be-monitored image information; and assigning the weight values to the second matrix according to the correspondence between pixels and the second matrix to obtain the second target matrix.

In other embodiments of the present invention, the processor 41 is configured to execute the step, stored in the memory 42, of calculating the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix, so as to implement the following steps: if the posture of the object to be measured is a standard posture, obtaining the positions of the pixels covered by the target part of the object to be measured in the to-be-monitored image information; generating an M*N mask matrix based on the positions of the pixels; and calculating the weight of the object to be measured based on the mask matrix and the weight mapping matrix.

It should be noted that, for the specific implementation of the steps executed by the processor in this embodiment, reference may be made to the implementations in the weight measurement methods provided by the embodiments corresponding to FIGS. 1-2 and FIG. 4, which are not repeated here.

With the weight measurement device provided by this embodiment of the present invention, without human participation, the weight of the object to be measured can be measured simply by extracting its contour from the to-be-monitored image information and combining it with the pre-trained weight mapping matrix, which solves the problems of large measurement error and high cost when measuring pig weight in the prior art, achieves accurate measurement of pig weight, reduces operational difficulty and maintenance cost, and is of general applicability.
基于前述实施例本发明的实施例提供一种计算机可读存储介质,该计算机可读存储介质存储有一个或者多个程序,该一个或者多个程序可被一个或者多个处理器执行,以实现如下步骤:获取待监测图像信息,并对待监测图像信息进行图像识别得到待测量对象的轮廓;基于待测量对象的轮廓确定待测量对象的姿势;基于待测量对象的姿势和体重映射矩阵,计算待测量对象的体重;其中,体重映射矩阵是预先训练得到的。Based on the foregoing embodiments, the embodiments of the present invention provide a computer-readable storage medium that stores one or more programs, and the one or more programs may be executed by one or more processors to implement The following steps: Obtain the image information to be monitored, and perform image recognition on the image information to be monitored to obtain the contour of the object to be measured; determine the posture of the object to be measured based on the contour of the object to be measured; based on the posture of the object to be measured and the weight mapping matrix, calculate the Measure the weight of the subject; among them, the weight mapping matrix is obtained by pre-training.
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: obtaining the image information to be monitored through an image collector; and performing image recognition on the image information to be monitored using a specific image segmentation algorithm, to determine the contour of the object to be measured from the image information to be monitored.
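For illustration only: the embodiments above do not name the specific image segmentation algorithm, so the following Python sketch uses simple Otsu thresholding with OpenCV as a stand-in and keeps the largest foreground contour. The function name and parameters are assumptions made for this sketch, not part of the claimed method.

```python
import cv2

def extract_contour(frame_bgr):
    """Return the largest foreground contour of a monitored frame (illustrative)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding stands in for the unspecified segmentation algorithm.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Assume the object to be measured is the largest connected region.
    return max(contours, key=cv2.contourArea)
```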
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: obtaining a standard posture for the object to be measured; and determining the posture of the object to be measured based on the standard posture and the contour of the object to be measured.
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: obtaining an image whose posture is the standard posture, and converting the format of that image into a preset format to obtain a first image; obtaining an image corresponding to the contour of the object to be measured, and converting the format of that image into the preset format to obtain a second image; calculating the matching degree between the second image and the first image; and, if the matching degree between the second image and the first image is greater than or equal to a preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.
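For illustration only: a minimal sketch of the format conversion, assuming the "preset format" is a fixed-size binary mask. The 128x128 size, the OpenCV calls, and the function name are assumptions for this sketch.

```python
import cv2
import numpy as np

PRESET_SIZE = (128, 128)  # assumed preset format: fixed-size binary mask

def to_preset_format(image_gray):
    """Convert a posture or contour image into the assumed preset format."""
    resized = cv2.resize(image_gray, PRESET_SIZE, interpolation=cv2.INTER_NEAREST)
    return (resized > 0).astype(np.uint8)  # 1 where the object covers the pixel

# first_image  = to_preset_format(standard_posture_image)
# second_image = to_preset_format(contour_image)
# The posture is accepted when the matching degree of second_image against
# first_image reaches the preset threshold (see the matching-degree sketch below).
```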
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps (an illustrative sketch follows this list):
if the matching degree is less than the preset threshold, rotating the second image in a predetermined direction and by a preset angle to obtain a third image, and calculating the matching degree between the third image and the first image;
if the matching degree between the third image and the first image is greater than or equal to the preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold;
if the matching degree between the third image and the first image is less than the preset threshold, rotating the third image in the predetermined direction and by the predetermined angle;
if, after a full rotation in the predetermined direction and by the predetermined angle, the matching degree between every rotated image and the first image is less than the preset threshold, determining that the posture of the object to be measured is an invalid posture.
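For illustration only: a minimal sketch of this rotation-based fallback, assuming binary-mask images, a 15-degree preset angle, and the area-ratio matching degree described elsewhere in this document; all concrete values and names are assumptions.

```python
import numpy as np
from scipy import ndimage

def match_with_rotation(second, first, threshold, step_deg=15):
    """After the unrotated comparison fails, rotate the second image step by step
    for one full turn; return the first matching angle, or None (invalid posture)."""
    for k in range(1, int(360 / step_deg)):
        rotated = ndimage.rotate(second, k * step_deg, reshape=False, order=0)
        total = first.sum() + rotated.sum()
        degree = np.logical_and(rotated, first).sum() / total if total else 0.0
        if degree >= threshold:
            return k * step_deg  # standard posture found at this rotation
    return None  # no rotation reached the threshold: invalid posture
```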
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: obtaining a first area, corresponding to the region shared by the first image and the second image; calculating the sum of the area of the first image and the area of the second image to obtain a second area; and calculating the ratio of the first area to the second area to obtain the matching degree between the second image and the first image.
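For illustration only: a small numeric example of this matching degree using made-up 3x3 binary masks. Note that with this definition two identical masks score 0.5, so any preset threshold would in practice be at or below 0.5.

```python
import numpy as np

first = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]], dtype=np.uint8)   # area of the first image: 4
second = np.array([[1, 1, 0],
                   [0, 1, 1],
                   [0, 0, 0]], dtype=np.uint8)  # area of the second image: 4

first_area = np.logical_and(first, second).sum()  # shared region: 3 pixels
second_area = first.sum() + second.sum()          # 4 + 4 = 8
matching_degree = first_area / second_area        # 3 / 8 = 0.375
```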
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: obtaining a preset number of objects to be trained that satisfy a predetermined condition; obtaining the number of objects to be trained included at each pixel of the image information to be monitored, and generating a first target matrix based on the number of objects to be trained; generating a second target matrix based on the weight of the objects to be trained and the pixel area of the covered pixels; and generating the weight mapping matrix based on the second target matrix and the first target matrix.
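For illustration only: the passage does not state how the two target matrices are combined. A natural reading is an element-wise division of the accumulated weight values by the per-pixel object counts; the sketch below makes that assumption explicit and is not the stated formula.

```python
import numpy as np

def build_weight_mapping_matrix(second_target, first_target):
    """Assumed combination: per-pixel accumulated weight value divided by the
    per-pixel count of training objects; uncovered pixels stay at zero."""
    weight_map = np.zeros_like(second_target, dtype=np.float64)
    covered = first_target > 0
    weight_map[covered] = second_target[covered] / first_target[covered]
    return weight_map
```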
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: setting an M*N first matrix, wherein M*N is the size of the image information to be monitored; traversing the contours of the preset number of objects to be trained in sequence, and processing the contour of each object to be trained to obtain the target part of each object to be trained; obtaining the count of target parts of objects to be trained included at each pixel of the image information to be monitored; and assigning the count values to the first matrix according to the correspondence between pixels and the first matrix, to obtain the first target matrix, wherein there is a correspondence between the pixels and the elements of the first matrix.
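For illustration only: a minimal sketch of building the first target matrix, assuming each training object's target part is available as an M*N binary mask aligned with the monitored image (the mask representation is an assumption of this sketch).

```python
import numpy as np

def build_first_target_matrix(target_part_masks, shape):
    """Count, per pixel, how many training objects' target parts cover it."""
    first_target = np.zeros(shape, dtype=np.int32)  # the M*N first matrix
    for mask in target_part_masks:                  # one binary mask per object
        first_target += (mask > 0).astype(np.int32)
    return first_target
```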
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: setting an M*N second matrix; measuring the weight of each object to be trained, and calculating the pixel area of the pixels covered by the target part of each object to be trained in the image information to be monitored; calculating, based on the weight and the pixel area of each object to be trained, the weight value corresponding to each pixel covered by that object to be trained; adding up the weight values corresponding to the same pixel across the pixels covered by the preset number of objects to be trained, to obtain the weight value corresponding to each pixel in the image information to be monitored; and assigning the weight values to the second matrix according to the correspondence between pixels and the second matrix, to obtain the second target matrix.
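For illustration only: a minimal sketch of the corresponding second target matrix, under the same assumption of binary target-part masks. Each covered pixel of a training object receives a weight value of body weight divided by covered pixel area, and values landing on the same pixel across objects are summed.

```python
import numpy as np

def build_second_target_matrix(target_part_masks, body_weights, shape):
    """Accumulate per-pixel weight values: weight / covered pixel area per object."""
    second_target = np.zeros(shape, dtype=np.float64)  # the M*N second matrix
    for mask, weight in zip(target_part_masks, body_weights):
        covered = mask > 0
        area = covered.sum()                         # pixel area of the target part
        if area:
            second_target[covered] += weight / area  # same-pixel values add up
    return second_target
```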
In other embodiments of the present invention, the one or more programs can be executed by one or more processors to implement the following steps: obtaining the positions of the pixels covered by the object to be measured in the image information to be monitored; generating an M*N mask matrix based on the positions of the pixels; and calculating the weight of the object to be measured based on the mask matrix and the weight mapping matrix.
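For illustration only: the text says the weight is calculated "based on" the mask matrix and the weight mapping matrix; a natural reading is summing the weight mapping matrix over the masked pixels, which the sketch below assumes.

```python
import numpy as np

def estimate_weight(covered_pixel_positions, weight_map):
    """Build an M*N mask from the covered pixel positions and read off the weight."""
    if not covered_pixel_positions:
        return 0.0
    mask = np.zeros_like(weight_map, dtype=np.uint8)  # the M*N mask matrix
    idx = np.asarray(covered_pixel_positions)         # rows of (row, col) positions
    mask[idx[:, 0], idx[:, 1]] = 1
    return float((mask * weight_map).sum())           # assumed weight estimate
```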
It should be noted that, for the specific implementation of the steps performed by the processor in this embodiment, reference may be made to the implementation of the weight measurement method provided in the embodiments corresponding to FIGS. 1-2 and FIG. 4, which is not repeated here.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.
Industrial Applicability
In this embodiment, without human participation, it is only necessary to extract the contour of the object to be measured from the image information to be monitored for that object, and the weight of the object to be measured can then be measured based on the contour and a pre-trained weight mapping matrix. This solves the problems of large measurement error and high cost that exist in the prior art when measuring the weight of pigs, achieves accurate measurement of pig weight, reduces the difficulty of operation and the maintenance cost, and is universally applicable.

Claims (12)

  1. A weight measurement method, characterized in that the method comprises:
    obtaining image information to be monitored for an object to be measured, and performing image recognition on the image information to be monitored to obtain the contour of the object to be measured;
    determining the posture of the object to be measured based on the contour of the object to be measured;
    calculating the weight of the object to be measured based on the posture of the object to be measured and a weight mapping matrix, wherein the weight mapping matrix is obtained by pre-training.
  2. The method according to claim 1, characterized in that the obtaining image information to be monitored, and performing image recognition on the image information to be monitored to obtain the contour of the object to be measured, comprises:
    obtaining the image information to be monitored through an image collector;
    performing image recognition on the image information to be monitored using a specific image segmentation algorithm, and determining the contour of the object to be measured from the image information to be monitored.
  3. The method according to claim 1, characterized in that the determining the posture of the object to be measured based on the contour of the object to be measured comprises:
    obtaining a standard posture for the object to be measured;
    determining the posture of the object to be measured based on the standard posture and the contour of the object to be measured.
  4. The method according to claim 3, characterized in that the determining the posture of the object to be measured based on the standard posture and the contour of the object to be measured comprises:
    obtaining an image whose posture is the standard posture, and converting the format of the image whose posture is the standard posture into a preset format to obtain a first image;
    obtaining an image corresponding to the contour of the object to be measured, and converting the format of the image corresponding to the contour of the object to be measured into the preset format to obtain a second image;
    calculating the matching degree between the second image and the first image;
    if the matching degree between the second image and the first image is greater than or equal to a preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold.
  5. The method according to claim 4, characterized in that the method further comprises:
    if the matching degree is less than the preset threshold, rotating the second image in a predetermined direction and by a preset angle to obtain a third image, and calculating the matching degree between the third image and the first image;
    if the matching degree between the third image and the first image is greater than or equal to the preset threshold, determining that the posture of the object to be measured is the standard posture corresponding to the first image whose matching degree is greater than or equal to the preset threshold;
    if the matching degree between the third image and the first image is less than the preset threshold, rotating the third image in the predetermined direction and by the predetermined angle;
    if, after a full rotation in the predetermined direction and by the predetermined angle, the matching degree between every rotated image and the first image is less than the preset threshold, determining that the posture of the object to be measured is an invalid posture.
  6. The method according to claim 4 or 5, characterized in that the calculating the matching degree between the second image and the first image comprises:
    obtaining a first area, corresponding to the region shared by the first image and the second image;
    calculating the sum of the area of the first image and the area of the second image to obtain a second area;
    calculating the ratio of the first area to the second area to obtain the matching degree between the second image and the first image.
  7. The method according to claim 1, characterized in that, before the obtaining image information to be monitored and performing image recognition on the image information to be monitored to obtain the contour of the object to be measured, the method further comprises:
    obtaining a preset number of objects to be trained that satisfy a predetermined condition, wherein the posture of the objects to be trained is the standard posture;
    obtaining the number of objects to be trained included at each pixel of the image information to be monitored, and generating a first target matrix based on the number of objects to be trained;
    generating a second target matrix based on the weight of the objects to be trained and the pixel area of the covered pixels;
    generating the weight mapping matrix based on the second target matrix and the first target matrix.
  8. The method according to claim 7, characterized in that the obtaining the number of objects to be trained included at each pixel of the image information to be monitored, and generating a first target matrix based on the number of objects to be trained, comprises:
    setting an M*N first matrix, wherein M*N is the size of the image information to be monitored;
    traversing the contours of the preset number of objects to be trained in sequence, and processing the contour of each object to be trained to obtain the target part of each object to be trained;
    obtaining the count of target parts of the objects to be trained included at each pixel of the image information to be monitored;
    assigning the count values to the first matrix according to the correspondence between pixels and the first matrix, to obtain the first target matrix, wherein there is a correspondence between the pixels and the elements of the first matrix.
  9. The method according to claim 7, characterized in that the generating a second target matrix based on the weight of the objects to be trained and the pixel area of the covered pixels comprises:
    setting an M*N second matrix, wherein M*N is the size of the image information to be monitored;
    measuring the weight of each object to be trained, and calculating the pixel area of the pixels covered by the target part of each object to be trained in the image information to be monitored;
    calculating, based on the weight of each object to be trained and the pixel area, the weight value corresponding to each pixel covered by that object to be trained;
    adding up the weight values corresponding to the same pixel across the pixels covered by the preset number of objects to be trained, to obtain the weight value corresponding to each pixel in the image information to be monitored;
    assigning the weight values to the second matrix according to the correspondence between pixels and the second matrix, to obtain the second target matrix.
  10. The method according to any one of claims 7 to 9, characterized in that the calculating the weight of the object to be measured based on the posture of the object to be measured and the weight mapping matrix comprises:
    if the posture of the object to be measured is the standard posture, obtaining the positions of the pixels covered by the target part of the object to be measured in the image information to be monitored;
    generating an M*N mask matrix based on the positions of the pixels;
    calculating the weight of the object to be measured based on the mask matrix and the weight mapping matrix.
  11. A weight measuring device, characterized in that the device comprises: a processor, a memory, and a communication bus;
    the communication bus is configured to implement a communication connection between the processor and the memory;
    the processor is configured to execute a weight measurement program stored in the memory to implement the following steps:
    obtaining image information to be monitored for an object to be measured, and performing image recognition on the image information to be monitored to obtain the contour of the object to be measured;
    determining the posture of the object to be measured based on the contour of the object to be measured;
    calculating the weight of the object to be measured based on the posture of the object to be measured and a weight mapping matrix, wherein the weight mapping matrix is obtained by pre-training.
  12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the weight measurement method according to any one of claims 1 to 10.
PCT/CN2019/103274 2018-10-17 2019-08-29 Weight measurement method and device, and computer readable storage medium WO2020078111A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811210433.4A CN109459119B (en) 2018-10-17 2018-10-17 Weight measurement method, device and computer readable storage medium
CN201811210433.4 2018-10-17

Publications (1)

Publication Number Publication Date
WO2020078111A1 true WO2020078111A1 (en) 2020-04-23

Family

ID=65607887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103274 WO2020078111A1 (en) 2018-10-17 2019-08-29 Weight measurement method and device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109459119B (en)
WO (1) WO2020078111A1 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109459119B (en) * 2018-10-17 2020-06-05 京东数字科技控股有限公司 Weight measurement method, device and computer readable storage medium
CN110065855B (en) * 2019-04-21 2024-01-23 苏州科技大学 Multi-car elevator control method and control system
CN110426112B (en) * 2019-07-04 2022-05-13 平安科技(深圳)有限公司 Live pig weight measuring method and device
CN110672189A (en) * 2019-09-27 2020-01-10 北京海益同展信息科技有限公司 Weight estimation method, device, system and storage medium
CN111126636A (en) * 2019-12-31 2020-05-08 杭州铁哥们环保科技有限公司 Intelligent scrap steel recycling method
CN111401386B (en) * 2020-03-30 2023-06-13 深圳前海微众银行股份有限公司 Livestock shed monitoring method and device, intelligent cruising robot and storage medium
CN113532616A (en) * 2020-04-15 2021-10-22 阿里巴巴集团控股有限公司 Weight estimation method, device and system based on computer vision
CN111507432A (en) * 2020-07-01 2020-08-07 四川智迅车联科技有限公司 Intelligent weighing method and system for agricultural insurance claims, electronic equipment and storage medium
CN111862189B (en) * 2020-07-07 2023-12-05 京东科技信息技术有限公司 Body size information determining method, body size information determining device, electronic equipment and computer readable medium
CN112233144A (en) * 2020-09-24 2021-01-15 中国农业大学 Underwater fish body weight measuring method and device
CN112330677A (en) * 2021-01-05 2021-02-05 四川智迅车联科技有限公司 High-precision weighing method and system based on image, electronic equipment and storage medium
CN114001810A (en) * 2021-11-08 2022-02-01 厦门熵基科技有限公司 Weight calculation method and device


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6083638B2 (en) * 2012-08-24 2017-02-22 国立大学法人 宮崎大学 Weight estimation apparatus for animal body and weight estimation method
CN103983334B (en) * 2014-05-20 2017-01-11 联想(北京)有限公司 Information processing method and electronic equipment
CN105784083B (en) * 2016-04-05 2018-05-18 北京农业信息技术研究中心 Dairy cow's conformation measuring method and system based on stereovision technique
CN107194987B (en) * 2017-05-12 2021-12-10 西安蒜泥电子科技有限责任公司 Method for predicting human body measurement data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7899206B2 (en) * 2006-03-27 2011-03-01 Eyecue Vision Technologies Ltd. Device, system and method for determining compliance with a positioning instruction by a figure in an image
CN106537173A (en) * 2014-08-07 2017-03-22 谷歌公司 Radar-based gesture recognition
CN104778374A (en) * 2015-05-04 2015-07-15 哈尔滨理工大学 Automatic dietary estimation device based on image processing and recognizing method
CN106529400A (en) * 2016-09-26 2017-03-22 深圳奥比中光科技有限公司 Mobile terminal and human body monitoring method and device
CN106780530A (en) * 2016-12-15 2017-05-31 广州视源电子科技股份有限公司 A kind of build Forecasting Methodology and equipment
CN109459119A (en) * 2018-10-17 2019-03-12 北京京东金融科技控股有限公司 A kind of body weight measurement, equipment and computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112511767A (en) * 2020-10-30 2021-03-16 济南浪潮高新科技投资发展有限公司 Video splicing method and device, and storage medium
CN115620210A (en) * 2022-11-29 2023-01-17 广东祥利科技有限公司 Method and system for determining performance of electronic wire based on image processing
CN115620210B (en) * 2022-11-29 2023-03-21 广东祥利科技有限公司 Method and system for determining performance of electronic wire material based on image processing

Also Published As

Publication number Publication date
CN109459119A (en) 2019-03-12
CN109459119B (en) 2020-06-05

Similar Documents

Publication Publication Date Title
WO2020078111A1 (en) Weight measurement method and device, and computer readable storage medium
Wongsriworaphon et al. An approach based on digital image analysis to estimate the live weights of pigs in farm environments
Xiao et al. Behavior-induced health condition monitoring of caged chickens using binocular vision
US10262417B2 (en) Tooth axis estimation program, tooth axis estimation device and method of the same, tooth profile data creation program, tooth profile data creation device and method of the same
US10318839B2 (en) Method for automatic detection of anatomical landmarks in volumetric data
WO2017049677A1 (en) Facial key point marking method
WO2020182036A1 (en) Image processing method and apparatus, server, and storage medium
CN109544606B (en) Rapid automatic registration method and system based on multiple Kinects
CN106650701B (en) Binocular vision-based obstacle detection method and device in indoor shadow environment
CN107240117B (en) Method and device for tracking moving object in video
JP2018004310A (en) Information processing device, measurement system, information processing method and program
CN109740659B (en) Image matching method and device, electronic equipment and storage medium
CN113537175B (en) Same-fence swinery average weight estimation method based on computer vision
US20150356346A1 (en) Feature point position detecting appararus, feature point position detecting method and feature point position detecting program
CN103218809A (en) Image measuring method of pearl length parameter
Guo et al. 3D scanning of live pigs system and its application in body measurements
CA2844392A1 (en) Method for generating a three-dimensional representation of an object
CN111177811A (en) Automatic fire point location layout method applied to cloud platform
JP2012123631A (en) Attention area detection method, attention area detection device, and program
JP5976089B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, and program
CA3131590C (en) Golf ball set-top detection method, system and storage medium
CN107320118B (en) Method and system for calculating three-dimensional image space information of carbon nano C-shaped arm
CN110211200B (en) Dental arch wire generating method and system based on neural network technology
CN102201060A (en) Method for tracking and evaluating nonparametric outline based on shape semanteme
CN109509194B (en) Front human body image segmentation method and device under complex background

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19872460

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19872460

Country of ref document: EP

Kind code of ref document: A1