CN112116647B - Weight estimation method and weight estimation device - Google Patents

Weight estimation method and weight estimation device

Info

Publication number
CN112116647B
CN112116647B CN201910532898.XA CN201910532898A
Authority
CN
China
Prior art keywords
image
processed
estimated
feature space
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910532898.XA
Other languages
Chinese (zh)
Other versions
CN112116647A (en)
Inventor
徐法明
林建华
朱敏
王进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rainbow Software Co., Ltd.
Original Assignee
Rainbow Software Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rainbow Software Co., Ltd.
Priority to CN201910532898.XA
Publication of CN112116647A
Application granted
Publication of CN112116647B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation

Abstract

The invention discloses a weight estimation method and a weight estimation device. The method comprises the following steps: processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated; determining, based on the feature space, the weight interval in which the object to be estimated lies; determining a second neural network corresponding to the weight interval; and processing the image to be processed based on the second neural network to obtain a weight estimation result for the object to be estimated. The invention solves the technical problem that weighing with existing electronic scales consumes manpower, material resources and time.

Description

Weight estimation method and weight estimation device
Technical Field
The present invention relates to the field of image processing, and in particular to a weight estimation method and a weight estimation apparatus.
Background
In animal husbandry, farmers need to record the weight of their animals regularly; analysing these weights reveals how the animals are growing, so that the breeding process can be adjusted accordingly. In addition, in livestock insurance, claims for an animal raised by a farmer are usually settled according to indexes such as the animal's length and weight.
In the prior art, animals are typically weighed manually with electronic scales. However, when the animals to be weighed are large or heavy, or numerous, they must be lifted and moved, so more staff are required and weighing takes a long time. As the number and weight of the animals to be weighed increase, so do the labour and time costs.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a weight estimation method and a weight estimation device, which at least solve the technical problem that weighing with existing electronic scales consumes manpower, material resources and time.
According to one aspect of the embodiments of the present invention, a weight estimation method is provided, including: processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated; determining, based on the feature space, the weight interval in which the object to be estimated lies; determining a second neural network corresponding to the weight interval; and processing the image to be processed based on the second neural network to obtain a weight estimation result for the object to be estimated.
Further, the weight estimation method further includes: performing layer-by-layer feature extraction on the image to be processed based on the first neural network to obtain the feature space, wherein the feature space comprises low-level features and high-level features, the low-level features comprise at least one of the colour and texture of the object to be estimated, and the high-level features comprise at least one of the class and semantics of the object to be estimated.
Further, after the image to be processed is processed based on the first neural network to obtain the feature space, the method further comprises: fusing the low-level features and the high-level features to obtain a fusion result.
Further, the image to be processed further includes a reference object, and the weight estimation method further includes: before the image to be processed is processed based on the second neural network to obtain the weight estimation result for the object to be estimated, segmenting the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: a region of the object to be estimated, a reference object region and a background region.
Further, the weight estimation method further includes: determining initial information about the object to be estimated based on the feature space, wherein the initial information comprises at least one of the body length, chest circumference, waist circumference and hip circumference of the object to be estimated; and determining, according to the initial information, the weight interval in which the object to be estimated lies.
Further, after the weight interval in which the object to be estimated lies is determined based on the feature space, the weight estimation method further includes: merging the image to be processed and the segmented image to obtain a merged image.
Further, the weight estimation method further includes: acquiring a depth map of the object to be estimated; after the weight interval in which the object to be estimated lies is determined based on the feature space, performing a perspective transformation on the depth map, the segmented image and the image to be processed according to the reference object region to obtain a transformed depth map, segmented image and image to be processed; and merging the transformed depth map, segmented image and image to be processed to obtain a merged image.
Further, the weight estimation method further includes: extracting features from the merged image based on the second neural network to obtain a target feature space; and performing regression based on the target feature space to obtain the weight estimation result for the object to be estimated.
Further, in the image to be processed, the reference object and the object to be estimated do not overlap, and the shape of the reference object is rectangular.
According to another aspect of the embodiments of the present invention, there is also provided a weight estimation method, including: processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated and a reference object; segmenting the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: a region of the object to be estimated, a reference object region and a background region; and processing the image to be processed and the segmented image based on a second neural network to obtain a weight estimation result for the object to be estimated.
Further, the weight estimation method further includes: performing layer-by-layer feature extraction on the image to be processed based on the first neural network to obtain the feature space, wherein the feature space comprises low-level features and high-level features, the low-level features comprise at least one of the colour and texture of the object to be estimated, and the high-level features comprise at least one of the class and semantics of the object to be estimated.
Further, after the image to be processed is processed based on the first neural network to obtain the feature space, the weight estimation method further includes: fusing the low-level features and the high-level features to obtain a fusion result.
Further, the weight estimation method further includes: merging the image to be processed and the segmented image to obtain a merged image; and processing the merged image to obtain the weight estimation result.
Further, the weight estimation method further includes: acquiring a depth map of the object to be estimated; and processing the image to be processed and the segmented image based on the second neural network to obtain the weight estimation result includes: performing a perspective transformation on the depth map, the segmented image and the image to be processed according to the reference object region to obtain a transformed depth map, segmented image and image to be processed; merging the transformed depth map, segmented image and image to be processed to obtain a merged image; and processing the merged image to obtain the weight estimation result.
Further, the weight estimation method further includes: extracting features from the merged image based on the second neural network to obtain a target feature space; and performing regression based on the target feature space to obtain the weight estimation result for the object to be estimated.
Further, before the image to be processed is processed based on the first neural network to obtain the feature space, the weight estimation method further includes: acquiring the image to be processed collected by a mobile terminal.
Further, in the image to be processed, the reference object and the object to be estimated do not overlap, and the shape of the reference object is rectangular.
According to another aspect of the embodiments of the present invention, there is also provided a weight estimation apparatus, including: a first processing module, configured to process an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated; a first determining module, configured to determine, based on the feature space, the weight interval in which the object to be estimated lies; a first selection module, configured to determine a second neural network corresponding to the weight interval; and a second processing module, configured to process the image to be processed based on the second neural network to obtain a weight estimation result for the object to be estimated.
According to another aspect of the embodiments of the present invention, there is also provided a weight estimation apparatus, including: a third processing module, configured to process an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated and a reference object; a segmentation module, configured to segment the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: a region of the object to be estimated, a reference object region and a background region; and a fourth processing module, configured to process the image to be processed and the segmented image based on a second neural network to obtain a weight estimation result for the object to be estimated.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium comprising a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the above weight estimation method.
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein the program, when running, executes the above weight estimation method.
In the embodiments of the invention, weight is estimated in a non-contact manner: a feature space is obtained by processing, based on a first neural network, an image to be processed that contains the object to be estimated; the weight interval in which the object to be estimated lies is then determined based on the feature space, and a second neural network corresponding to the weight interval is determined; finally, the image to be processed is processed based on the second neural network to obtain a weight estimation result for the object to be estimated.
It is easy to see that, in estimating the weight of the object, the result is obtained simply by processing an image containing the object; no staff are needed to lift or move it, which saves labour and time. Moreover, the estimation process does not require an electronic scale: a single mobile terminal (for example, a mobile phone) can estimate the weight of the object anytime and anywhere, conveniently and quickly, with little restriction from site or equipment. In addition, when the image to be processed is processed, determining the weight interval in which the object to be estimated lies and selecting the corresponding neural network according to that interval yields a more accurate estimation result.
Therefore, the scheme provided by the present application achieves non-contact weight estimation and solves the technical problem that weighing with existing electronic scales consumes manpower, material resources and time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it. In the drawings:
FIG. 1 is a flowchart of a weight estimation method according to an embodiment of the invention;
FIG. 2 is a schematic illustration of an alternative image to be processed according to an embodiment of the invention;
FIG. 3 is a schematic illustration of an alternative weight estimation method according to an embodiment of the invention;
FIG. 4 is a schematic illustration of an alternative weight estimation method according to an embodiment of the invention;
FIG. 5 is a flowchart of a weight estimation method according to an embodiment of the invention;
FIG. 6 is a schematic illustration of an alternative weight estimation method according to an embodiment of the invention;
FIG. 7 is a schematic illustration of an alternative weight estimation method according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a weight estimation apparatus according to an embodiment of the invention; and
FIG. 9 is a schematic diagram of a weight estimation apparatus according to an embodiment of the invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish between similar objects and are not necessarily intended to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article or apparatus.
Example 1
In accordance with an embodiment of the present invention, an embodiment of a weight estimation method is provided. It should be noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that given here.
FIG. 1 is a flowchart of a weight estimation method according to an embodiment of the invention. As shown in FIG. 1, the method comprises the following steps:
step S102, processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated.
It should be noted that a weight estimation system may serve as the execution body of the weight estimation method in this embodiment. Optionally, the weight estimation system includes at least one of a mobile terminal and a server; the mobile terminal is provided with an image acquisition unit and can collect the image to be processed of the object to be estimated, and may for example be a mobile phone, a tablet computer or another device with a photographing function.
In an alternative embodiment, after the mobile terminal captures the image to be processed, it transmits the image to the server over a network; the server obtains the image to be processed collected by the mobile terminal and processes it to obtain a weight estimation result for the object to be estimated. There may be several objects to be estimated, that is, the server may estimate several objects at the same time. In addition, one image to be processed may correspond to one object to be estimated, i.e. contain only one object to be estimated, or it may correspond to several objects to be estimated, i.e. contain several of them. The object to be estimated may be an animal such as a pig, a cow or a sheep, or any other object; anything that needs to be weighed can serve as the object to be estimated in this application.
It should be noted that the mobile terminal need not transmit each image to be processed to the server over the network immediately after acquiring it; it may instead transmit all the images of the objects to be estimated to the server once they have all been collected. In addition, the mobile terminal and the server may communicate wirelessly (for example, over a network) or by wire; the specific communication method is not limited in this application.
In addition, it should be noted that, after obtaining the image to be processed, the mobile terminal may itself process the image to obtain the weight estimation result for the object to be estimated, without sending the image to the server for processing.
Optionally, in step S102, the first neural network is a deep convolutional neural network. After the server or the mobile terminal obtains the image to be processed, the image is input into the first neural network, and layer-by-layer feature extraction is then performed on the image based on the first neural network to obtain a feature space, wherein the feature space comprises low-level features and high-level features; the low-level features comprise at least one of the colour and texture of the object to be estimated, and the high-level features comprise at least one of the class and semantics of the object to be estimated.
It should be noted that the feature space is the effective feature information learned automatically by the deep neural network from the characteristics and distribution of the training samples, where the training samples are a plurality of images comprising a first type of image, which contains an object to be estimated, and a second type of image, which does not.
In addition, performing layer-by-layer feature extraction on the image to be processed with the deep convolutional neural network improves the network's capacity for processing the image.
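As an illustration of this layer-by-layer extraction, the following sketch shows how a small convolutional backbone can expose both shallow (low-level) and deep (high-level) feature maps. It is a minimal example in PyTorch; the layer counts, channel widths and input size are assumptions for illustration, not the network actually used in this application.

```python
import torch
import torch.nn as nn

class FirstNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # Early convolutions: low-level features such as colour and texture.
        self.low = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Deeper convolutions: high-level, more semantic features.
        self.high = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, image):
        low = self.low(image)    # low-level feature maps
        high = self.high(low)    # high-level feature maps
        return low, high         # together they make up the feature space

backbone = FirstNetwork()
low_feats, high_feats = backbone(torch.randn(1, 3, 512, 512))
```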
Step S104, determining a weight interval in which the object to be estimated is located based on the feature space.
Optionally, the server or the mobile terminal makes a rough estimate of the weight interval of the object to be estimated based on the feature space. First, a classifier can be trained on a large number of sample images (images of objects to be estimated whose weights, and hence intervals, are known), so that the weight range of the objects to be estimated is divided into several intervals. The server or the mobile terminal then determines, based on the feature space, the weight interval in which the current object to be estimated lies. For example, when the object to be estimated is a pig, the weight range of pigs can be divided in advance into three intervals: a first interval below 50 kg, a second interval from 50 kg to 80 kg, and a third interval above 80 kg.
Step S106, selecting a second neural network corresponding to the weight interval.
It should be noted that the second neural network is a deep convolutional neural network. Different weight intervals correspond to different second neural networks; for example, when there are three weight intervals, there are three second neural networks, each of which estimates the weight of objects in its own weight range.
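The interval classification and per-interval network selection described above can be sketched as follows. The three intervals (below 50 kg, 50 to 80 kg, above 80 kg) follow the pig example in the text; the pooling layer, feature dimension and placeholder entries for the second networks are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Rough interval classifier over the high-level feature maps (256 channels assumed).
interval_classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # pool the feature maps to a single vector
    nn.Flatten(),
    nn.Linear(256, 3),         # three logits, one per weight interval
)

# One second network per interval; plain placeholders here, each of which would
# in practice be a deep convolutional network trained on its own weight range.
second_networks = {
    0: "network for objects below 50 kg",
    1: "network for objects between 50 kg and 80 kg",
    2: "network for objects above 80 kg",
}

def select_second_network(high_feats: torch.Tensor):
    interval = interval_classifier(high_feats).argmax(dim=1).item()
    return interval, second_networks[interval]

interval, chosen = select_second_network(torch.randn(1, 256, 32, 32))
```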
Step S108, processing the image to be processed based on the second neural network to obtain an estimated result of the object to be estimated.
It should be noted that, after the second neural network corresponding to the object to be estimated has been determined, the client or the server processes the image to be processed with that network and thereby obtains a more accurate estimation result.
From the scheme described in steps S102 to S108 above, it can be seen that weight is estimated in a non-contact manner: a feature space is obtained by processing, based on the first neural network, the image to be processed containing the object to be estimated; the weight interval in which the object lies is then determined based on the feature space, and the second neural network corresponding to that interval is selected; finally, the image to be processed is processed based on the second neural network to obtain the weight estimation result for the object to be estimated.
It is easy to see that, in estimating the weight of the object, the result is obtained simply by processing an image containing the object; no staff are needed to lift or move it, which saves labour and time. Moreover, the estimation process does not require an electronic scale: a single mobile terminal (for example, a mobile phone) can estimate the weight of the object anytime and anywhere, conveniently and quickly, with little restriction from site or equipment. In addition, when the image to be processed is processed, the weight interval in which the object to be estimated lies is determined first, and the corresponding neural network is then selected according to that interval to process the image, so that a more accurate estimation result can be obtained.
Therefore, the scheme provided by the present application achieves non-contact weight estimation and solves the technical problem that weighing with existing electronic scales consumes manpower, material resources and time.
In an alternative embodiment, the image to be processed further comprises a reference object.
For example, when the object to be estimated is a pig, the reference object in the image to be processed may be a solid-colour rectangular board of A4 size, positioned so that it does not overlap the object to be estimated.
It should be noted that a reference object of suitable shape and size can be chosen according to the object to be estimated, so as to reduce the influence of image-segmentation errors on the estimation result; thus, when the object to be estimated is a pig, using a solid-colour rectangular board of A4 size as the reference object improves the accuracy of the estimation result. In addition, a solid-colour rectangular board is easier to locate and segment, and the A4 specification and size are easy to standardise.
In addition, it should be noted that, as shown in FIG. 2, the reference object is placed beside the object to be estimated and the two do not overlap, so that the reference object does not occlude the object to be estimated and the object region segmented from the image to be processed is unaffected. Moreover, because the object to be estimated stands at some height above the ground, if the reference object were placed on its body, the reference object in the photographed image would easily be tilted and deformed, which would affect the estimation result.
In an alternative embodiment, before processing the image to be processed based on the second neural network to obtain the weight estimation result for the object to be estimated, the server or the mobile terminal needs to segment the image to be processed. Specifically, the server or the mobile terminal segments the image based on the feature space to obtain a segmentation result. It may first fuse the low-level and high-level features to obtain a fusion result and then, within the feature space, segment the image based on that fusion result; that is, the low-level and high-level features are fused according to a preset fusion strategy, and the fusion result is then semantically segmented at pixel level into the object to be identified, the reference object and the background, yielding the final segmented image, which at least comprises: a region of the object to be estimated, a reference object region and a background region.
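A minimal sketch of this fusion-then-segmentation step is given below, assuming low- and high-level feature maps like those produced earlier. The upsample-and-concatenate fusion strategy and the channel sizes are assumptions; the description above only requires that the fused features be segmented at pixel level into object, reference and background.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSegmenter(nn.Module):
    """Fuses low- and high-level features and labels every pixel as
    object to be estimated, reference object or background."""

    def __init__(self, low_ch=64, high_ch=256, num_classes=3):
        super().__init__()
        self.fuse = nn.Conv2d(low_ch + high_ch, 128, 3, padding=1)
        self.classify = nn.Conv2d(128, num_classes, 1)

    def forward(self, low, high):
        # Upsample the high-level maps to the low-level resolution, then concatenate.
        high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                align_corners=False)
        fused = F.relu(self.fuse(torch.cat([low, high_up], dim=1)))
        logits = self.classify(fused)
        return logits.argmax(dim=1)   # per-pixel labels: 0 object, 1 reference, 2 background

segmenter = FusionSegmenter()
labels = segmenter(torch.randn(1, 64, 128, 128), torch.randn(1, 256, 32, 32))
```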
In addition, the server or the mobile terminal determines, based on the feature space, the weight interval in which the object to be estimated lies. Optionally, the estimation system may first determine initial information about the object to be estimated based on the feature space and then determine the weight interval from that information, where the initial information comprises at least one of the body length, chest circumference, waist circumference and hip circumference of the object to be estimated. For example, the server or the mobile terminal makes a preliminary estimate of the body length, chest circumference, waist circumference, hip circumference and other measurements of the object by analysing the feature space, and then makes a preliminary estimate of the weight interval by analysing those measurements.
Further, after determining the weight interval in which the object to be estimated lies based on the feature space, the server or the mobile terminal continues to process the image to be processed.
In an alternative embodiment, after determining the weight interval in which the object to be estimated lies based on the feature space, the server or the mobile terminal merges the image to be processed and the segmented image to obtain a merged image, as shown in the schematic diagram of the estimation method in FIG. 3. Specifically, after extracting features from the image to be processed to obtain the feature space, the server or the mobile terminal segments the image according to the feature space to obtain the segmented image and determines the weight interval of the object to be estimated. It then merges the image to be processed and the segmented image into a merged image; for example, when the image to be processed is a 3-channel colour image and the segmented image is a single-channel image, the merged image obtained is a 4-channel image. Finally, the second neural network corresponding to the weight interval is selected, and feature extraction and weight regression are performed to obtain the weight estimation result.
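The channel merge described for FIG. 3 can be illustrated as follows; the array shapes are assumptions, and in practice the colour image and the single-channel segmentation map come from the preceding steps.

```python
import numpy as np

rgb = np.zeros((512, 512, 3), dtype=np.float32)    # 3-channel image to be processed
seg = np.zeros((512, 512, 1), dtype=np.float32)    # single-channel segmented image

merged = np.concatenate([rgb, seg], axis=-1)        # 4-channel merged image
assert merged.shape == (512, 512, 4)
```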
In an alternative embodiment, the weight estimation method further includes acquiring a depth map of the object to be estimated. After the weight interval in which the object lies is determined based on the feature space, a perspective transformation is applied to the depth map, the segmented image and the image to be processed according to the reference object region, yielding a transformed depth map, segmented image and image to be processed, which are then merged into a merged image; for example, when the image to be processed is a 3-channel colour image and the depth map and the segmented image are both single-channel images, the merged image is a 5-channel image. As shown in the schematic diagram of the estimation method in FIG. 4, the mobile terminal may also collect a depth map of the object to be estimated. After extracting features from the image to be processed to obtain the feature space, the server or the mobile terminal segments the image according to the feature space to obtain the segmented image and determines the weight interval of the object. Using the depth map and the reference object region in the segmented image, it then applies a three-dimensional perspective transformation to the depth map, the segmented image and the image to be processed, which may have been captured from different viewpoints, bringing them all to a fixed viewpoint. It then merges the depth map, the segmented image and the image to be processed at the fixed viewpoint into a merged image, and finally selects the corresponding second neural network and performs feature extraction and weight regression to obtain the weight estimation result.
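The view transformation for FIG. 4 can be sketched as below. Because the reference object is a rectangle of known real size (A4 in the example), its four corners in the segmented image define a homography that warps the colour image, segmentation map and depth map to a common fixed viewpoint before they are stacked into a 5-channel input. The corner ordering, the canonical A4 geometry and the use of OpenCV are assumptions for illustration, not the method fixed by this application.

```python
import cv2
import numpy as np

def to_fixed_view(image, seg, depth, ref_corners_px):
    """Warp the image, segmentation map and depth map to a fixed viewpoint
    defined by the rectangular reference object, then stack them."""
    # ref_corners_px: 4x2 float32 array of the reference-object corners in the
    # segmented image, ordered top-left, top-right, bottom-right, bottom-left.
    a4_w, a4_h = 210, 297                          # canonical A4 size, 1 px per mm (assumed)
    dst = np.float32([[0, 0], [a4_w, 0], [a4_w, a4_h], [0, a4_h]])
    H = cv2.getPerspectiveTransform(np.float32(ref_corners_px), dst)

    size = (image.shape[1], image.shape[0])        # keep the original resolution
    warped_image = cv2.warpPerspective(image, H, size)
    warped_seg = cv2.warpPerspective(seg, H, size, flags=cv2.INTER_NEAREST)  # keep labels crisp
    warped_depth = cv2.warpPerspective(depth, H, size)

    # 3 (colour) + 1 (segmentation) + 1 (depth) = 5 channels
    return np.dstack([warped_image, warped_seg, warped_depth])
```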
It should be noted that the above embodiment incorporates the depth information of the image to be processed, which unifies different shooting viewpoints and reduces errors caused by viewpoint differences. At the same time, adding three-dimensional depth information compensates for the limitations of two-dimensional projection information, making the estimation result more accurate.
Further, as can be seen from FIG. 3 and FIG. 4, after obtaining the merged image, the server or the mobile terminal extracts features from it based on the second neural network to obtain a target feature space, and then performs regression on that feature space to obtain the weight estimation result for the object to be estimated. Specifically, feature extraction and weight regression are performed on objects in different weight intervals by different deep convolutional neural networks; that is, several different deep neural networks are trained and fitted to objects in different weight ranges to obtain the final estimation result, which may be displayed on the image to be processed or announced by voice.
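A minimal sketch of a second network, i.e. feature extraction over the merged image followed by weight regression, is given below; the channel count (4 for colour plus segmentation, 5 when depth is added) and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SecondNetwork(nn.Module):
    """Extracts a target feature space from the merged image and regresses the weight."""

    def __init__(self, in_channels=4):      # 4 for colour + segmentation, 5 with depth
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regress = nn.Linear(64, 1)      # single scalar: the estimated weight

    def forward(self, merged):
        return self.regress(self.features(merged))

weight = SecondNetwork(in_channels=4)(torch.randn(1, 4, 512, 512))
```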
The above scheme therefore provides a fast, non-contact, high-quality and efficient weight estimation method, and solves the problem that weighing objects is currently time-consuming and labour-intensive.
Example 2
There is further provided, in accordance with an embodiment of the present invention, an embodiment of a weight estimation method. FIG. 5 is a flowchart of the weight estimation method according to an embodiment of the invention; as shown in FIG. 5, the method comprises the following steps:
step S502, processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated and a reference object.
It should be noted that a weight estimation system may serve as the execution body of the weight estimation method in this embodiment. Optionally, the weight estimation system includes at least one of a mobile terminal and a server; the mobile terminal is provided with an image acquisition unit and can collect the image to be processed of the object to be estimated, and may for example be a mobile phone, a tablet computer or another device with a photographing function.
In an alternative embodiment, after the mobile terminal captures the image to be processed, it transmits the image to the server over a network; the server obtains the image to be processed collected by the mobile terminal and processes it to obtain a weight estimation result for the object to be estimated. There may be several objects to be estimated, that is, the server may estimate several objects at the same time. In addition, one image to be processed may correspond to one object to be estimated, i.e. contain only one object to be estimated, or it may correspond to several objects to be estimated, i.e. contain several of them. The object to be estimated may be an animal such as a pig, a cow or a sheep, or any other object; anything that needs to be weighed can serve as the object to be estimated in this application.
It should be noted that the mobile terminal need not transmit each image to be processed to the server over the network immediately after acquiring it; it may instead transmit all the images of the objects to be estimated to the server once they have all been collected. In addition, the mobile terminal and the server may communicate wirelessly (for example, over a network) or by wire; the specific communication method is not limited in this application.
In addition, it should be noted that, after obtaining the image to be processed, the mobile terminal may itself process the image to obtain the weight estimation result for the object to be estimated, without sending the image to the server for processing.
Optionally, in step S502, the first neural network is a deep convolutional neural network. After the server or the mobile terminal obtains the image to be processed, the image is input into the first neural network, and layer-by-layer feature extraction is then performed on the image based on the first neural network to obtain a feature space, wherein the feature space comprises low-level features and high-level features; the low-level features comprise at least one of the colour and texture of the object to be estimated, and the high-level features comprise at least one of the class and semantics of the object to be estimated.
It should be noted that the feature space is the effective feature information learned automatically by the deep neural network from the characteristics and distribution of the training samples, where the training samples are a plurality of images comprising a first type of image, which contains an object to be estimated, and a second type of image, which does not.
In addition, performing layer-by-layer feature extraction on the image to be processed with the deep convolutional neural network improves the network's capacity for processing the image.
Optionally, in the image to be processed, the reference object does not overlap the object to be estimated and is rectangular; for example, when the object to be estimated is a pig, the reference object in the image may be a solid-colour rectangular board of A4 size.
It should be noted that a reference object of suitable shape and size can be chosen according to the object to be estimated, so as to reduce the influence of image-segmentation errors on the estimation result; thus, when the object to be estimated is a pig, using a solid-colour rectangular board of A4 size as the reference object improves the accuracy of the estimation result. In addition, a solid-colour rectangular board is easier to locate and segment, and the A4 specification and size are easy to standardise.
In addition, it should be noted that, as shown in FIG. 2, the reference object is placed beside the object to be estimated and the two do not overlap, so that the reference object does not occlude the object to be estimated and the object region segmented from the image to be processed is unaffected. Moreover, because the object to be estimated stands at some height above the ground, if the reference object were placed on its body, the reference object in the photographed image would easily be tilted and deformed, which would affect the estimation result.
Step S504, segmenting the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: an object region to be estimated, a reference object region, and a background region.
Specifically, the server or the mobile terminal may segment the image to be processed. It may first fuse the low-level and high-level features to obtain a fusion result and then, within the feature space, segment the image based on that fusion result; that is, the low-level and high-level features are fused according to a preset fusion strategy, and the fusion result is then semantically segmented at pixel level into the object to be identified, the reference object and the background, yielding the final segmented image.
Step S506, processing the image to be processed and the segmentation image based on the second neural network to obtain an estimated result of the object to be estimated.
It should be noted that the second neural network may also be a deep convolutional neural network. It may be a preset network, or it may be selected according to the weight interval in which the object to be estimated lies. In the latter case, different weight intervals correspond to different second neural networks; for example, when there are three weight intervals, there are three second neural networks, each estimating the weight of objects in its own weight range. Selecting the network that matches the weight interval to process the image to be processed yields a more accurate estimation result.
From the scheme described in steps S502 to S506 above, it can be seen that weight is estimated in a non-contact manner: a feature space is obtained by processing, based on the first neural network, the image to be processed containing the object to be estimated; the image is then segmented based on the feature space to obtain a segmented image; finally, the image to be processed and the segmented image are processed based on the second neural network to obtain the weight estimation result for the object to be estimated.
It is easy to see that, in estimating the weight of the object, the result is obtained simply by processing an image containing the object; no staff are needed to lift or move it, which saves labour and time. Moreover, the estimation process does not require an electronic scale: a single mobile terminal (for example, a mobile phone) can estimate the weight of the object anytime and anywhere, conveniently and quickly, with little restriction from site or equipment.
Therefore, the scheme provided by the present application achieves non-contact weight estimation and solves the technical problem that weighing with existing electronic scales consumes manpower, material resources and time.
In an alternative embodiment, after obtaining the segmented image, the server or the mobile terminal merges the image to be processed and the segmented image into a merged image and processes the merged image to obtain the weight estimation result, as shown in the schematic diagram of the estimation method in FIG. 6. Specifically, after extracting features from the image to be processed to obtain the feature space, the server or the mobile terminal segments the image according to the feature space to obtain the segmented image. It then merges the image to be processed and the segmented image into a merged image; for example, when the image to be processed is a 3-channel colour image and the segmented image is a single-channel image, the merged image is a 4-channel image. Finally, the second neural network performs feature extraction and weight regression on the merged image to obtain the weight estimation result.
In another alternative embodiment, the weight estimation method further includes acquiring a depth map of the object to be estimated. After obtaining the depth map and the segmented image, the server or the mobile terminal applies a perspective transformation to the depth map, the segmented image and the image to be processed according to the reference object region, yielding a transformed depth map, segmented image and image to be processed, and then merges them into a merged image; for example, when the image to be processed is a 3-channel colour image and the depth map and the segmented image are both single-channel images, the merged image is a 5-channel image. Finally, the merged image is processed to obtain the weight estimation result. As shown in the schematic diagram of the estimation method in FIG. 7, the mobile terminal may also collect a depth map of the object to be estimated; after extracting features from the image to be processed to obtain the feature space, the server or the mobile terminal segments the image according to the feature space to obtain the segmented image. Using the depth map and the reference object region in the segmented image, it then applies a three-dimensional perspective transformation to the depth map, the segmented image and the image to be processed, which may have been captured from different viewpoints, bringing them all to a fixed viewpoint. It then merges the depth map, the segmented image and the image to be processed at the fixed viewpoint into a merged image, and finally the second neural network performs feature extraction and weight regression on the merged image to obtain the weight estimation result.
It should be noted that the above embodiment incorporates the depth information of the image to be processed, which unifies different shooting viewpoints and reduces errors caused by viewpoint differences. At the same time, adding three-dimensional depth information compensates for the limitations of two-dimensional projection information, making the estimation result more accurate.
Further, as can be seen from FIG. 6 and FIG. 7, after obtaining the merged image, the server or the mobile terminal extracts features from it based on the second neural network to obtain a target feature space, and then performs regression on that feature space to obtain the weight estimation result for the object to be estimated. Specifically, feature extraction and weight regression are performed on objects in different weight intervals by different deep convolutional neural networks; that is, several different deep neural networks are trained and fitted to objects in different weight ranges to obtain the final estimation result, which may be displayed on the image to be processed or announced by voice.
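The per-range training and fitting mentioned above can be illustrated by the following sketch, in which each second network is fitted only on samples whose ground-truth weight lies in its interval; the data loader, loss function and optimiser settings are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def train_interval_network(network, loader, epochs=10, lr=1e-4):
    """Fit one second network on samples from a single weight interval.
    `loader` yields (merged_image, weight_kg) pairs from that interval."""
    optimiser = torch.optim.Adam(network.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                    # mean absolute error, in kg
    for _ in range(epochs):
        for merged, weight_kg in loader:
            pred = network(merged).squeeze(1)
            loss = loss_fn(pred, weight_kg)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return network
```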
The above scheme therefore provides a fast, non-contact, high-quality and efficient weight estimation method, and solves the problem that weighing objects is currently time-consuming and labour-intensive.
Example 3
There is further provided, in accordance with an embodiment of the present invention, an embodiment of a weight estimation apparatus. FIG. 8 is a schematic diagram of the weight estimation apparatus according to an embodiment of the present invention; as shown in FIG. 8, the apparatus includes: a first processing module 801, a first determining module 803, a first selection module 805 and a second processing module 807.
The first processing module 801 is configured to process an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated; the first determining module 803 is configured to determine, based on the feature space, the weight interval in which the object to be estimated lies; the first selection module 805 is configured to determine a second neural network corresponding to the weight interval; and the second processing module 807 is configured to process the image to be processed based on the second neural network to obtain a weight estimation result for the object to be estimated.
Here, the first processing module 801, the first determining module 803, the first selection module 805 and the second processing module 807 correspond to steps S102 to S108 of the above embodiment; the examples and application scenarios implemented by the four modules are the same as those of the corresponding steps, but are not limited to what is disclosed in the above embodiment.
In an alternative embodiment, the first processing module comprises a first extraction module, configured to perform layer-by-layer feature extraction on the image to be processed based on the first neural network to obtain the feature space, wherein the feature space comprises low-level features and high-level features, the low-level features comprise at least one of the colour and texture of the object to be estimated, and the high-level features comprise at least one of the class and semantics of the object to be estimated.
In an alternative embodiment, the weight estimation apparatus further comprises a fusion module, configured to fuse the low-level features and the high-level features to obtain a fusion result before the image to be processed is processed based on the second neural network to obtain the weight estimation result for the object to be estimated.
In an alternative embodiment, the image to be processed further comprises a reference object, and the weight estimation apparatus further comprises a segmentation module, configured to segment the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: a region of the object to be estimated, a reference object region and a background region. When the apparatus comprises a fusion module, the segmentation module segments the image to be processed based on the fusion result to obtain the segmented image.
In an alternative embodiment, the first determining module comprises a detection module and an interval-determining module. The detection module is configured to obtain initial information about the object to be estimated based on the feature space, wherein the initial information comprises at least one of the body length, chest circumference, waist circumference and hip circumference of the object to be estimated; the interval-determining module is configured to determine, according to the initial information, the weight interval in which the object to be estimated lies.
In an alternative embodiment, the weight estimation apparatus further comprises a first merging module, configured to merge the image to be processed and the segmented image into a merged image after the weight interval in which the object to be estimated lies has been determined based on the feature space.
In an alternative embodiment, the weight estimation apparatus further comprises a depth-map acquisition module, a transformation module and a second merging module. The depth-map acquisition module is configured to acquire a depth map of the object to be estimated; the transformation module is configured, after the weight interval has been determined based on the feature space, to apply a perspective transformation to the depth map, the segmented image and the image to be processed according to the reference object region in the segmented image, yielding a transformed depth map, segmented image and image to be processed; and the second merging module is configured to merge the transformed depth map, segmented image and image to be processed into a merged image.
In an alternative embodiment, the second processing module comprises a second extraction module and a result-acquisition module. The second extraction module is configured to extract features from the merged image based on the second neural network to obtain a target feature space; the result-acquisition module is configured to perform regression based on the target feature space to obtain the weight estimation result for the object to be estimated.
In an alternative embodiment, the weight estimation apparatus further comprises an image acquisition module, configured to obtain the image to be processed collected by the mobile terminal before the image is processed based on the first neural network to obtain the feature space.
Optionally, the first neural network is a deep convolutional neural network, and the second neural network is a deep convolutional neural network.
Optionally, in the image to be processed, the reference object and the object to be estimated do not overlap, and the shape of the reference object is rectangular.
Example 4
There is further provided, in accordance with an embodiment of the present invention, an embodiment of a weight estimation apparatus. FIG. 9 is a schematic diagram of the weight estimation apparatus according to an embodiment of the present invention; as shown in FIG. 9, the apparatus includes: a third processing module 901, a segmentation module 903 and a fourth processing module 905.
The third processing module 901 is configured to process an image to be processed based on the first neural network to obtain a feature space, where the image to be processed at least includes an object to be estimated and a reference object.
The segmentation module 903 is configured to segment an image to be processed based on a feature space, to obtain a segmented image, where the segmented image at least includes: an object region to be estimated, a reference object region, and a background region.
A fourth processing module 905, configured to process the to-be-processed image and the segmented image based on the second neural network, to obtain an estimated result of the to-be-estimated object.
Here, the third processing module 901, the segmentation module 903 and the fourth processing module 905 correspond to steps S502 to S506 of the above embodiment; the examples and application scenarios implemented by the three modules are the same as those of the corresponding steps, but are not limited to the disclosure of the above embodiment.
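As an illustration of the segmentation module 903 described above, the sketch below shows one common way to turn a feature space into a three-class segmented image (object to be estimated, reference object, background): per-pixel logits from a 1x1 convolution followed by an argmax. The feature dimension, the upsampling factor and the class ordering are assumptions, not details from this disclosure.

```python
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 3  # assumed ordering: 0 background, 1 object to be estimated, 2 reference object

# 1x1 convolution producing per-pixel class logits from a 128-channel feature space.
seg_head = nn.Conv2d(128, NUM_CLASSES, kernel_size=1)

def segment(feature_space):
    logits = seg_head(feature_space)                        # (N, 3, h, w)
    upsampled = F.interpolate(logits, scale_factor=4,
                              mode="bilinear", align_corners=False)
    return upsampled.argmax(dim=1)                          # (N, H, W) label map
```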
In an alternative embodiment, the third processing module includes a first extraction module. The first extraction module is used for carrying out feature extraction on the image to be processed layer by layer based on the first neural network to obtain a feature space, wherein the feature space includes low-level features and high-level features; the low-level features include at least one of the following: the color and texture of the object to be estimated, and the high-level features include at least one of the following: the category and semantics of the object to be estimated.
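The layer-by-layer extraction can be pictured as below: earlier convolutional stages respond to color and texture, while deeper stages carry more semantic, category-level information. This is a minimal sketch under assumed channel counts and depths, not the first neural network of this disclosure.

```python
import torch.nn as nn

# Hypothetical sketch of layer-by-layer feature extraction by a "first neural network".
class FirstNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, image):
        low = self.stage1(image)    # low-level features: edges, color, texture
        mid = self.stage2(low)
        high = self.stage3(mid)     # high-level features: category, semantics
        return low, high            # together they form the feature space
```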
In an alternative embodiment, the weight estimation apparatus further comprises a fusion module, which is used for carrying out fusion processing on the low-level features and the high-level features, after the image to be processed is processed based on the first neural network to obtain the feature space, so as to obtain a fusion result.
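One common form of such fusion, shown here only as an assumed example and continuing the shapes of the sketch above, is to upsample the high-level features to the spatial size of the low-level features and concatenate the two along the channel axis.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the fusion module: upsample-and-concatenate fusion.
def fuse(low, high):
    high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                            align_corners=False)
    return torch.cat([low, high_up], dim=1)   # fusion result: one feature map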
In an alternative embodiment, the fourth processing module includes a first merging processing module. The first merging processing module is used for merging the image to be processed and the segmented image to obtain a merged image, and processing the merged image to obtain a weight estimation result.
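In the simpler variant without a depth map, the merge can be read as stacking the segmented image with the image to be processed as an extra channel; the sketch below assumes NumPy arrays and is illustrative only.

```python
import numpy as np

# Hypothetical sketch of the first merging processing module: image + label map.
def merge(image_rgb, seg_labels):
    seg_channel = seg_labels.astype(image_rgb.dtype)[..., None]
    return np.concatenate([image_rgb, seg_channel], axis=-1)   # (H, W, 4) merged image
```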
In an alternative embodiment, the weight estimation apparatus further comprises a depth map acquisition module, a transformation module and a second merging processing module. The depth map acquisition module is used for acquiring a depth map of the object to be estimated; the transformation module is used for carrying out view angle transformation on the depth map, the segmented image and the image to be processed according to the reference object region, to obtain a transformed depth map, segmented image and image to be processed; and the second merging processing module is used for merging the transformed depth map, segmented image and image to be processed to obtain a merged image, and processing the merged image to obtain a weight estimation result.
In an alternative embodiment, the fourth processing module includes a second extraction module and a result acquisition module. The second extraction module is used for extracting features from the merged image based on the second neural network to obtain a target feature space; and the result acquisition module is used for carrying out regression processing based on the target feature space to obtain the estimated result of the object to be estimated.
In an alternative embodiment, the weight estimation apparatus further comprises an image acquisition module. The image acquisition module is used for acquiring, before the image to be processed is processed based on the first neural network to obtain the feature space, the image to be processed collected by the mobile terminal.
Optionally, in the image to be processed, the reference object and the object to be estimated do not overlap, and the shape of the reference object is rectangular.
Example 5
According to another aspect of the embodiments of the present invention, there is also provided a storage medium, which includes a stored program, where, when the program runs, a device on which the storage medium is located is controlled to execute the above-described weight estimation method.
Example 6
According to another aspect of the embodiments of the present invention, there is also provided a processor for running a program, where the above-described weight estimation method is executed when the program runs.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of units may be a division of logical functions, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between the parts may be implemented through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several modifications and improvements may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and improvements shall also be regarded as falling within the protection scope of the present invention.

Claims (16)

1. A weight estimation method, comprising:
processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated;
determining a weight interval in which the object to be estimated is located based on the feature space;
selecting a second neural network corresponding to the weight interval;
combining the image to be processed and the segmented image to obtain a combined image; or, obtaining a depth map of the object to be estimated, and performing view angle transformation on the depth map, the segmented image and the image to be processed according to a reference object region in the image to be processed to obtain the transformed depth map, segmented image and image to be processed; and combining the transformed depth map, the segmented image and the image to be processed to obtain a combined image, wherein the segmented image is obtained by segmenting the image to be processed based on the feature space;
extracting features from the combined image based on the second neural network to obtain a target feature space; and carrying out regression processing based on the target feature space to obtain an estimated result of the object to be estimated.
2. The method of claim 1, wherein processing the image to be processed based on the first neural network to obtain the feature space comprises:
carrying out feature extraction on the image to be processed layer by layer based on the first neural network to obtain the feature space, wherein the feature space comprises low-level features and high-level features; the low-level features comprise at least one of the following: the color and texture of the object to be estimated, and the high-level features comprise at least one of the following: the category and semantics of the object to be estimated.
3. The method according to claim 2, wherein after processing the image to be processed based on the first neural network to obtain the feature space, the method further comprises: carrying out fusion processing on the low-level features and the high-level features to obtain a fusion result.
4. The method of claim 1, wherein the image to be processed further comprises a reference object, and wherein before processing the image to be processed based on the second neural network to obtain the estimated result of the object to be estimated, the method further comprises:
segmenting the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: an object region to be estimated, a reference object region, and a background region.
5. The method according to claim 1, wherein determining the weight interval in which the object to be estimated is located based on the feature space comprises:
determining initial information of the object to be estimated based on the feature space, wherein the initial information comprises at least one of the following: the body length, chest circumference, waist circumference and hip circumference of the object to be estimated;
and determining the weight interval of the object to be estimated according to the initial information.
6. The method of claim 1, wherein prior to processing the image to be processed based on the first neural network to obtain the feature space, the method further comprises:
acquiring the image to be processed collected by a mobile terminal.
7. The method according to claim 4, wherein in the image to be processed, the reference object does not overlap with the object to be estimated, and the reference object is rectangular in shape.
8. A weight estimation method, comprising:
processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated and a reference object;
dividing the image to be processed based on the feature space to obtain a divided image, wherein the divided image at least comprises: an object region to be estimated, a reference object region and a background region;
combining the image to be processed and the segmented image to obtain a combined image; or, obtaining a depth map of the object to be estimated, performing view angle transformation on the depth map, the segmented image and the image to be processed according to the reference object region to obtain the transformed depth map, the segmented image and the image to be processed, and combining the transformed depth map, the segmented image and the image to be processed to obtain a combined image;
extracting features from the combined image based on a second neural network to obtain a target feature space; and carrying out regression processing based on the target feature space to obtain an estimated result of the object to be estimated.
9. The method of claim 8, wherein processing the image to be processed based on the first neural network to obtain the feature space comprises:
carrying out feature extraction on the image to be processed layer by layer based on the first neural network to obtain the feature space, wherein the feature space comprises low-level features and high-level features; the low-level features comprise at least one of the following: the color and texture of the object to be estimated, and the high-level features comprise at least one of the following: the category and semantics of the object to be estimated.
10. The method of claim 9, wherein after processing the image to be processed based on the first neural network to obtain the feature space, the method further comprises: carrying out fusion processing on the low-level features and the high-level features to obtain a fusion result.
11. The method of claim 8, wherein prior to processing the image to be processed based on the first neural network to obtain the feature space, the method further comprises:
acquiring the image to be processed collected by a mobile terminal.
12. The method according to claim 8, wherein in the image to be processed, the reference object does not overlap with the object to be estimated, and the reference object is rectangular in shape.
13. A weight estimation device, comprising:
the first processing module is used for processing an image to be processed based on a first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated;
the first determining module is used for determining a weight interval in which the object to be estimated is located based on the feature space;
the first merging processing module is used for merging the image to be processed and the segmented image to obtain a merged image, or the depth map acquisition module is used for acquiring the depth map of the object to be estimated, the transformation module is used for performing view angle transformation on the depth map, the segmented image and the image to be processed according to a reference object region in the image to be processed to obtain the transformed depth map, the segmented image and the image to be processed, and the second merging processing module is used for merging the transformed depth map, the segmented image and the image to be processed to obtain a merged image, wherein the segmented image is obtained by segmenting the image to be processed based on the feature space;
a first selection module for selecting a second neural network corresponding to the weight interval;
the second processing module is used for extracting features from the merged image based on the second neural network to obtain a target feature space, and carrying out regression processing based on the target feature space to obtain an estimated result of the object to be estimated.
14. A weight estimation device, comprising:
the third processing module is used for processing the image to be processed based on the first neural network to obtain a feature space, wherein the image to be processed at least comprises an object to be estimated and a reference object;
the segmentation module is used for segmenting the image to be processed based on the feature space to obtain a segmented image, wherein the segmented image at least comprises: an object region to be estimated, a reference object region and a background region;
the fourth processing module is configured to process the image to be processed and the segmented image based on a second neural network to obtain an estimated result of the object to be estimated, and includes: the first merging processing module is used for merging the image to be processed and the segmented image to obtain a merged image, or the depth map acquisition module is used for acquiring the depth map of the object to be estimated, the transformation module is used for performing view angle transformation on the depth map, the segmented image and the image to be processed according to the reference object region to obtain the transformed depth map, the segmented image and the image to be processed, and the second merging processing module is used for merging the transformed depth map, the segmented image and the image to be processed to obtain a merged image;
the second extraction module is used for extracting features from the merged image based on the second neural network to obtain a target feature space; and the result acquisition module is used for carrying out regression processing based on the target feature space to obtain the estimated result of the object to be estimated.
15. A storage medium comprising a stored program, wherein the program, when run, controls a device on which the storage medium is located to perform the weight estimation method of any one of claims 1 to 12.
16. A processor for running a program, wherein the weight estimation method according to any one of claims 1 to 12 is performed when the program runs.
CN201910532898.XA 2019-06-19 2019-06-19 Weighting method and weighting device Active CN112116647B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910532898.XA CN112116647B (en) 2019-06-19 2019-06-19 Weighting method and weighting device


Publications (2)

Publication Number Publication Date
CN112116647A CN112116647A (en) 2020-12-22
CN112116647B true CN112116647B (en) 2024-04-05

Family

ID=73796629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910532898.XA Active CN112116647B (en) 2019-06-19 2019-06-19 Weighting method and weighting device

Country Status (1)

Country Link
CN (1) CN112116647B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330677A (en) * 2021-01-05 2021-02-05 四川智迅车联科技有限公司 High-precision weighing method and system based on image, electronic equipment and storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2957861A1 (en) * 2014-06-17 2015-12-23 Expert Ymaging, SL Device and method for automated parameters calculation of an object
JP2018027581A (en) * 2016-08-17 2018-02-22 株式会社安川電機 Picking system
US20190075756A1 (en) * 2017-09-11 2019-03-14 FarmIn Technologies Systems, methods, and apparatuses for animal weight monitoring and management

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824054A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded depth neural network-based face attribute recognition method
CN109508470A (en) * 2017-09-22 2019-03-22 广东工业大学 The method for establishing ship weight computation model based on deep neural network study
CN109726613A (en) * 2017-10-27 2019-05-07 虹软科技股份有限公司 A kind of method and apparatus for detection
CN109124606A (en) * 2018-06-14 2019-01-04 深圳小辣椒科技有限责任公司 A kind of blood pressure computing model construction method and building system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Jun et al. Method for predicting full-term fetal weight based on an artificial neural network. Journal of Biomedical Engineering, 2005, (05), full text. *
Yang Yan et al. A preliminary study of a new method for measuring breeding pig body weight. Livestock and Poultry Industry, 2005, (10), full text. *


Similar Documents

Publication Publication Date Title
Yukun et al. Automatic monitoring system for individual dairy cows based on a deep learning framework that provides identification via body parts and estimation of body condition score
US11627726B2 (en) System and method of estimating livestock weight
Fernandes et al. A novel automated system to acquire biometric and morphological measurements and predict body weight of pigs via 3D computer vision
KR102062609B1 (en) A portable weighting system for livestock using 3D images
CN110426112B (en) Live pig weight measuring method and device
Wang et al. ASAS-NANP SYMPOSIUM: Applications of machine learning for livestock body weight prediction from digital images
US8971586B2 (en) Apparatus and method for estimation of livestock weight
Salau et al. Feasibility of automated body trait determination using the SR4K time-of-flight camera in cow barns
Gomes et al. Estimating body weight and body composition of beef cattle trough digital image analysis
Halachmi et al. Cow body shape and automation of condition scoring
US11568541B2 (en) System for high performance, AI-based dairy herd management and disease detection
McPhee et al. Live animal assessments of rump fat and muscle score in Angus cows and steers using 3-dimensional imaging
CN106662437A (en) Method and device for automated parameters calculation of object
CN112116647B (en) Weighting method and weighting device
Shelley Incorporating machine vision in precision dairy farming technologies
CN115752683A (en) Weight estimation method, system and terminal based on depth camera
Xiong et al. Estimating body weight and body condition score of mature beef cows using depth images
Tao et al. Development and implementation of a training dataset to ensure clear boundary value of body condition score classification of dairy cows in automatic system
CN114358163A (en) Food intake monitoring method and system based on twin network and depth data
Prinsloo et al. How unique is unique? Quantifying geometric differences in stripe patterns of Cape mountain zebra, Equus zebra zebra (Perissodactyla: Equidae)
CN112784713A (en) Pig weight estimation method, system, equipment and storage medium based on image
CN113344001A (en) Organism weight estimation method, device, equipment and storage medium
CN108896490B (en) Meat block homologous relation verification method and device
Kadlec et al. Automated acquisition of top-view dairy cow depth image data using an RGB-D sensor camera
KR20210096448A (en) A contactless mobile weighting system for livestock using asymmetric stereo cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 19/F, Hongruan Building, 392 Binxing Road, Binjiang District, Hangzhou, Zhejiang, 310051

Applicant after: Rainbow Software Co.,Ltd.

Country or region after: China

Address before: 310012 22nd and 23rd Floors of Building A, Paradise Software Park, No. 3 Xidoumen Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Rainbow Software Co.,Ltd.

Country or region before: China

GR01 Patent grant