CN115731566A - Weight detection method, device and equipment based on image and storage medium


Info

Publication number: CN115731566A
Application number: CN202110977583.3A
Authority: CN (China)
Prior art keywords: preset, body part, weight, target, image
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 降小龙
Current Assignee: Soyoung Technology Beijing Co Ltd
Original Assignee: Soyoung Technology Beijing Co Ltd
Application filed by Soyoung Technology Beijing Co Ltd
Priority to CN202110977583.3A
Publication of CN115731566A

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides an image-based weight detection method, apparatus, device, and storage medium. The method comprises the following steps: identifying a plurality of human body key points of a target to be detected in an image to be processed; determining the real area of each preset body part of the target to be detected according to the plurality of human body key points; and determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part. Because the real area of each preset body part is determined from the human body key points and the weight is then calculated from those areas, the weight can be estimated from a single image, and multiple targets in one image can be weighed simultaneously, so the method is simple to operate, efficient, and works in real time. Estimating the weight from the real area of each preset body part also gives high accuracy. The method remains robust when the target is occluded, and by referring to the target's real height it avoids underestimating the weight of tall, thin people and overestimating the weight of short, stout people.

Description

Weight detection method, device and equipment based on image and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image-based weight detection method, apparatus, device, and storage medium.
Background
In many everyday scenarios a person's weight needs to be measured. A weighing scale can usually weigh only one person at a time, so when many people need to be measured, measuring with a scale takes a long time.
At present, the related art provides a weight detection method that identifies the height, width, and projected area of the body surface contour of an object to be detected from an image containing the object, and estimates the object's weight based on these contour features.
However, because the weight is estimated only from body surface contour features in the image, the accuracy is low. Moreover, when the body surface contour of the object to be detected is occluded, the estimated weight error is large.
Disclosure of Invention
The application provides an image-based weight detection method, apparatus, device, and storage medium, which determine the real area of each preset body part from human body key points and then calculate the weight of the target to be detected. The weight can be estimated from a single image, and multiple targets in the image can be weighed simultaneously, so the method is simple to operate, efficient, and works in real time. Because the weight is estimated from the real area of each preset body part, the accuracy is high, and the method remains robust when the target is occluded.
An embodiment of a first aspect of the present application provides a method for detecting a body weight based on an image, including:
identifying a plurality of human body key points of a target to be detected in an image to be processed;
determining the real area of each preset body part of the target to be detected according to the plurality of human body key points;
and determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part.
In some embodiments of the present application, the determining the real area of each preset body part of the target to be detected according to the plurality of human body key points includes:
determining the real height and the image height of the target to be detected and the pixel area of each preset body part according to the plurality of human body key points;
and respectively determining the real area of each preset body part according to the real height, the image height and the pixel area of each preset body part.
In some embodiments of the application, the determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part includes:
respectively calculating the product of the real area of each preset body part and a preset weight coefficient corresponding to each preset body part to obtain the weight of each preset body part;
and calculating the sum of the weight of each preset body part to obtain the weight of the target to be detected.
In some embodiments of the present application, before determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part, the method further includes:
acquiring the weights of a plurality of sample targets and the real area of each preset body part of each sample target;
fitting a fitting coefficient corresponding to each preset body part through a preset neural network model according to the weight of each sample target and the real area of each preset body part;
and configuring the fitting coefficient corresponding to each preset body part into a preset weight coefficient corresponding to each preset body part.
In some embodiments of the present application, fitting a fitting coefficient corresponding to each preset body part through a preset neural network model according to the weight of each sample target and the real area of each preset body part includes:
respectively establishing a first body weight equation corresponding to each sample target through a preset neural network model, wherein the first body weight equation sets the weight of the sample target equal to the sum, over all preset body parts of the sample target, of the product of the real area of each preset body part and the fitting coefficient corresponding to that body part;
and simultaneously solving the first body weight equations of all sample targets through the preset neural network model to obtain the fitting coefficient corresponding to each preset body part.
In some embodiments of the present application, the method further comprises:
dividing the height of a human body into a plurality of preset height intervals;
respectively determining an adjusting coefficient corresponding to each preset height interval;
according to the adjustment coefficient corresponding to each preset height interval, respectively correcting the fitting coefficient corresponding to each preset body part to obtain a preset weight coefficient of each preset body part corresponding to each preset height interval;
and storing the mapping relation among the preset height interval, the preset body part and the corresponding preset weight coefficient.
In some embodiments of the present application, the determining the adjustment coefficient corresponding to each preset height interval respectively includes:
determining all first sample targets with heights within a first height interval from each sample target, wherein the first height interval is any one of a plurality of preset height intervals;
acquiring a plurality of candidate coefficients from a preset numerical interval;
respectively calculating a theoretical weight value of the first sample target obtained by adopting each candidate coefficient through a preset neural network model according to the real area of each preset body part of the first sample target, the fitting coefficient corresponding to each preset body part and each candidate coefficient;
respectively determining a calculation error corresponding to each candidate coefficient according to the weight of the first sample target and the theoretical weight value corresponding to each candidate coefficient;
and determining the candidate coefficient with the minimum calculation error as the adjustment coefficient corresponding to the first height interval.
In some embodiments of the present application, before determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part, the method further includes:
determining the real height of the target to be detected according to the plurality of human body key points;
determining a preset height interval to which the real height belongs;
and acquiring the preset weight coefficient of each corresponding preset body part from the mapping relation according to the preset height interval to which the real height belongs.
In some embodiments of the present application, the determining the real height of the target to be detected according to the plurality of key points of the human body includes:
determining an image interpupillary distance corresponding to the target to be detected according to the left-eye key points and the right-eye key points in the plurality of human body key points;
determining the image height corresponding to the target to be detected according to the plurality of human body key points;
calculating a ratio of a preset standard interpupillary distance to the image interpupillary distance;
and determining the product of the ratio and the height of the image as the real height of the target to be detected.
In some embodiments of the present application, the identifying a plurality of human body key points of a target to be detected included in an image to be processed includes:
acquiring a training data set, wherein the training data set comprises a plurality of sample images marked with key points of a human body;
training a key point recognition model for recognizing key points of the human body according to the training data set;
and identifying a plurality of human body key points of the target to be detected in the image to be processed through the key point identification model.
In some embodiments of the present application, before the identifying, by the keypoint identification model, a plurality of human keypoints of an object to be detected included in an image to be processed, the method further includes:
detecting whether the image of the target to be detected in the image to be processed meets a preset posture condition or not through the key point identification model;
if yes, executing the operation of identifying a plurality of human body key points of the target to be detected in the image to be processed through the key point identification model;
if not, sending prompt information to the user, wherein the prompt information is used for prompting the user to provide the image to be processed which meets the preset posture condition.
In some embodiments of the present application, the obtaining a training data set includes:
acquiring a plurality of occlusion sample images and a plurality of non-occlusion sample images, wherein part of the body of a sample target in the occlusion sample images is occluded by an object, and all human key points of the sample target are marked in the occlusion sample images and the non-occlusion sample images;
the plurality of occlusion sample images and the plurality of non-occlusion sample images are combined into a training data set.
An embodiment of a second aspect of the present application provides an image-based weight detection apparatus, including:
the key point identification module is used for identifying a plurality of human body key points of the target to be detected in the image to be processed;
the real area determining module is used for determining the real area of each preset body part of the target to be detected according to the plurality of human body key points;
and the weight determining module is used for determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part.
Embodiments of the third aspect of the present application provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of the first aspect.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium having a computer program stored thereon, the program being executable by a processor to implement the method of the first aspect.
The technical scheme provided in the embodiment of the application at least has the following technical effects or advantages:
in the embodiment of the application, the human body key points of the target to be detected in the image to be processed are identified, the real area of each preset body part of the target is determined from those key points, and the weight of the target is then calculated from the real area of each preset body part. The weight of the target can therefore be estimated from a single image, the weights of multiple targets in one image can be detected simultaneously, and the method is simple to operate, efficient, and works in real time. Because the real area of each preset body part is determined from the target's human body key points before the weight is estimated, the accuracy of weight detection is improved. The method remains robust when the target to be detected is occluded, and because the target's real height is referenced during weight estimation, the accuracy is further improved and the weight of tall, thin people is not underestimated, nor the weight of short, stout people overestimated.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings.
In the drawings:
FIG. 1 is a diagram illustrating human keypoints identified by a keypoint identification model according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a preset neural network model for fitting the fitting coefficients corresponding to each preset body part according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating a method for image-based weight detection according to an embodiment of the present application;
FIG. 4 is another flow chart of a method for image-based weight detection provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image-based weight detection apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a storage medium according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
An image-based weight detection method, an image-based weight detection device, an image-based weight detection apparatus, and a storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
In daily life, weight is measured with a weighing scale, which can usually weigh only one person at a time, so measurement takes a long time when many people need to be weighed. The related art provides an image-based weight detection method that identifies the height, width, and projected area of the body surface contour of an object to be detected from an image containing the object and estimates the weight from these contour features. However, because the weight is estimated only from contour features in the image, the accuracy is low, and when the body surface contour of the object is occluded, the estimation error is very large.
Based on this, the embodiment of the application provides an image-based weight detection method: the human body key points of the target to be detected in the image to be processed are identified, the real area of each preset body part of the target is determined from those key points, and the weight of the target is then calculated from the real area of each preset body part. The weight can therefore be estimated from a single image, the weights of multiple targets in one image can be detected simultaneously, the operation is simple, the efficiency is high, and weight detection works in real time. Determining the real area of each preset body part from the target's human body key points before estimating the weight also improves the accuracy of weight detection.
The image to be processed contains one or more targets to be detected, and a target to be detected is a person whose weight needs to be detected. The embodiment of the application divides the human body into a plurality of preset body parts; the division may comprise 4 parts: head, arms, body, and legs. Alternatively, the preset body parts may include the head, arms, chest, abdomen, thighs, calves, and the like. The embodiment of the application does not limit the specific division of the preset body parts, which can be divided as required in practical applications.
Before determining the weight of a target to be detected by the method provided by the embodiment of the application, a key point identification model for identifying key points of a human body needs to be acquired. Specifically, the key point recognition model is trained through the following operations of steps A1 and A2, including:
a1: a training data set is obtained, and the training data set comprises a plurality of sample images marked with key points of a human body.
A plurality of sample images are acquired, each containing one or more person images. Since subsequent weight detection uses all of the human body key points of the whole body, each person image in a sample image is a whole-body image. Further, since the areas of several preset body parts are needed when detecting the weight, and the interpupillary distance is needed when calculating the person's real height, the whole-body image should be a front-facing, upright whole-body image, so that the eyes, the whole-body contour, and other parts are visible in the sample image.
After the sample images are obtained, all of the human body key points of each person are annotated on each portrait in each sample image; specifically, the index and coordinates of each human body key point can be annotated. In the embodiment of the present application, all of the human body key points include every key point on the whole-body contour and may further include all of the key points of the face. The total number of human body key points may be 77 or 106, etc.
And forming a training data set by all sample images marked with all the human body key points of each portrait.
Because the training data set includes sample images containing multiple portraits, the key point recognition model subsequently trained on it can recognize the human body key points of multiple targets to be detected in one image simultaneously, which improves the efficiency of human body key point recognition.
In other embodiments of the present application, considering that the portrait in an image may be occluded by an object, occluded sample images are introduced during training of the key point recognition model in order to improve the accuracy of weight detection under occlusion. Specifically, a plurality of occluded sample images and a plurality of non-occluded sample images are acquired. Part of the body of the sample target in an occluded sample image is occluded by an object, the body of the sample target in a non-occluded sample image is not occluded, and a sample target is a portrait in the image. All human body key points of every sample target are annotated in each occluded and each non-occluded sample image.
The whole body contour of the occluded sample target in the occluded sample image can be manually determined, and all the human body key points of the occluded sample target can be marked. Alternatively, the whole-body contour of the occluded sample target can be automatically restored through an image processing technology, and then all the human body key points are marked on the whole-body contour.
And after all human body key points of all sample targets in each occlusion sample image and each non-occlusion sample image are marked, forming a training data set by the multiple occlusion sample images and the multiple non-occlusion sample images.
Because occluded sample images are included in the training data set, the key point recognition model trained on it can recognize the human body key points of an occluded target to be detected. This improves the accuracy of determining the weight of an occluded target and gives the weight detection method provided by the embodiment of the application good robustness to occlusion.
The human body key points can be annotated with an annotation tool, such as the LabelMe annotation tool.
After the training data set is obtained through the above operation, the keypoint recognition model is trained through step A2.
A2: and training a key point recognition model for recognizing the key points of the human body according to the training data set.
In the embodiment of the application, the key point recognition model is a model constructed based on a preset detection algorithm. The preset detection algorithm uses CenterNet as the detection framework. On the basis of the original CenterNet, the backbone network is replaced with EfficientNet-B0, which achieves a good balance between accuracy and speed. The whole network is a combination of EfficientNet-B0 and a deconvolution module. The EfficientNet-B0 part consists of one ordinary convolution layer plus 7 MBConv convolution blocks. Each MBConv convolution block comprises a convolution layer (Conv), a normalization layer (BatchNorm), a channel-separated convolution layer (DepthwiseConv), an activation function layer (Swish), a deactivation layer (drop_connect), an average pooling layer (AveragePool), and an activation function layer (sigmoid). The drop_connect layer randomly deactivates neuron connections, which increases the generalization capability of the model.
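For illustration, a minimal PyTorch sketch of one such MBConv block is given below. It follows the layer list above (expansion convolution, BatchNorm, depthwise convolution, Swish/SiLU, squeeze-excitation via average pooling and sigmoid, and drop_connect); all channel counts, kernel sizes, and the drop rate are assumptions rather than values disclosed in this application.

```python
import torch
import torch.nn as nn

def drop_connect(x: torch.Tensor, rate: float, training: bool) -> torch.Tensor:
    """Randomly drop whole residual branches (stochastic depth) during training."""
    if not training or rate == 0.0:
        return x
    keep = 1.0 - rate
    mask = torch.floor(keep + torch.rand(x.shape[0], 1, 1, 1, device=x.device))
    return x / keep * mask

class MBConvBlock(nn.Module):
    """One MBConv block: expand conv -> BatchNorm -> depthwise conv -> Swish,
    squeeze-excitation (average pool + sigmoid), projection, drop_connect."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 4, drop_rate: float = 0.2):
        super().__init__()
        mid = in_ch * expand
        self.expand_conv = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU())
        self.depthwise = nn.Sequential(  # channel-separated convolution
            nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.SiLU())
        self.se = nn.Sequential(         # average pooling + sigmoid gate
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(mid, mid, 1), nn.Sigmoid())
        self.project = nn.Sequential(
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))
        self.drop_rate = drop_rate
        self.use_residual = in_ch == out_ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.expand_conv(x)
        y = self.depthwise(y)
        y = y * self.se(y)               # scale channels by the learned gate
        y = self.project(y)
        if self.use_residual:            # drop_connect applies on the residual path
            y = drop_connect(y, self.drop_rate, self.training) + x
        return y
```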
A preset number of sample images is drawn from the training data set obtained in step A1, where the preset number is the parallel processing count (batch size) of the key point recognition model. The batch of sample images is input into the key point recognition model, which performs human body key point detection on them and outputs a detection result. The loss value of the current training period is computed by the model's loss function from the detection result and the human body key points annotated in the input sample images. Training then proceeds to the next period in the same way, and stops after the number of iterations reaches a preset count. From all completed training periods, the period with the minimum loss value is determined, and the model parameters of that period are taken as the parameters of the final trained key point recognition model. The trained key point recognition model is then configured on the server or terminal device that runs the weight detection method provided by the embodiment of the application.
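A condensed sketch of this training loop follows. KeypointModel, keypoint_loss, and train_loader are hypothetical placeholders standing in for the CenterNet-style network, its corresponding loss function, and the annotated data set; the epoch count and learning rate are assumptions.

```python
import copy
import torch

num_epochs = 100                      # preset iteration count (assumed value)
model = KeypointModel()               # hypothetical CenterNet-style detector, EfficientNet-B0 backbone
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

best_loss, best_state = float("inf"), None
for epoch in range(num_epochs):
    epoch_loss = 0.0
    for images, labeled_keypoints in train_loader:     # one batch of annotated sample images
        pred = model(images)
        loss = keypoint_loss(pred, labeled_keypoints)  # model's corresponding loss function
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    if epoch_loss < best_loss:         # keep the parameters of the minimum-loss period
        best_loss = epoch_loss
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)      # final trained key point recognition model
```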
After the key point identification model is configured, the key point identification model can be used for identifying the human body key point of each target to be detected in the image to be processed.
In the embodiment of the application, in the process of detecting the weight of the target to be detected in the image to be processed, the weight needs to be calculated according to the real area of each preset body part of the target to be detected and the preset weight coefficient corresponding to each preset body part. Therefore, before the method is implemented, a preset weight coefficient corresponding to each preset body part is also determined. Specifically, the determination is performed by the following operations of steps B1 to B3, including:
b1: the weights of a plurality of sample targets and the real area of each preset body part of each sample target are obtained.
A number of sample images are first acquired; there may be 1000 or 2000 of them, etc. Each sample image contains one or more sample targets, where a sample target is a portrait. All human body key points of each sample target in each sample image are identified by the trained key point recognition model. Fig. 1 shows the 77 human body key points of a human body recognized by the key point recognition model.
For each sample target in each sample image, the real height of the sample target is determined from all of its human body key points. First, the left-eye key points and right-eye key points of the sample target are determined from its human body key points. If the left-eye key points include a left-eye center point and the right-eye key points include a right-eye center point, the distance between the two center points is taken as the image interpupillary distance of the sample target. If the left-eye key points do not include a left-eye center point and the right-eye key points do not include a right-eye center point, the coordinates of all left-eye key points are averaged and the resulting mean coordinate is taken as the left-eye center point; similarly, the coordinates of all right-eye key points are averaged to obtain the right-eye center point. The distance between the left-eye center point and the right-eye center point is then taken as the image interpupillary distance of the sample target.
And then determining the image height corresponding to the sample target according to the plurality of human body key points of the sample target. Specifically, key points with the highest positions on the forehead of the sample target are determined from the plurality of human body key points of the sample target, and key points with the lowest positions on the feet of the sample target are determined. And calculating the distance between the key point with the highest position and the key point with the lowest position, and determining the distance as the image height corresponding to the sample target.
After the image interpupillary distance of the sample target is obtained in the above manner, the ratio between a preset standard interpupillary distance and the image interpupillary distance of the sample target is calculated. The preset standard interpupillary distance can be 60 mm, 65 mm, 70 mm, or the like.
In the embodiment of the application, the standard interpupillary distance can also be preset by gender: the preset standard interpupillary distance for males can be set to any value between 60 mm and 73 mm, and for females to any value between 53 mm and 68 mm. The gender of the sample target is identified from characteristics such as hairstyle, clothes, and secondary sex characteristics, the corresponding preset standard interpupillary distance is selected according to the identified gender, and the ratio of that standard interpupillary distance to the image interpupillary distance of the sample target is then calculated.
After the ratio between the preset standard interpupillary distance and the image interpupillary distance of the sample target is calculated in either manner, the product of the ratio and the image height of the sample target is calculated and taken as the real height of the sample target.
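Putting the above steps together, the real height can be recovered roughly as in the following sketch. It assumes the key points arrive as a name-to-pixel-coordinate dictionary (the naming scheme is invented for illustration) and uses the 65 mm example value as the standard interpupillary distance.

```python
import math

def real_height_cm(keypoints: dict, standard_pd_mm: float = 65.0) -> float:
    """keypoints: {name: (x, y)} pixel coordinates for one whole-body target."""
    def eye_center(prefix: str):
        # Average the eye's keypoint coordinates (covers the case where no
        # dedicated center point exists among the annotated keypoints).
        pts = [p for name, p in keypoints.items() if name.startswith(prefix)]
        return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

    lx, ly = eye_center("left_eye")
    rx, ry = eye_center("right_eye")
    image_pd = math.hypot(lx - rx, ly - ry)            # image interpupillary distance (px)

    # Image height: highest forehead point to lowest foot point.
    top = min(keypoints.values(), key=lambda p: p[1])  # smallest y is highest in the image
    bottom = max(keypoints.values(), key=lambda p: p[1])
    image_height = math.hypot(top[0] - bottom[0], top[1] - bottom[1])

    # Real height = (standard pupil distance / image pupil distance) * image height.
    return (standard_pd_mm / image_pd) * image_height / 10.0  # mm -> cm
```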
The pixel area of each preset body part is then determined from all human body key points of the sample target. For any preset body part, the human body key points on the contour of that part are selected from all key points of the sample target and connected in order to obtain a polygonal region corresponding to the part. The area of the polygonal region is calculated from the coordinates of the human body key points serving as its vertices, and that area is taken as the pixel area of the preset body part. The pixel area of every other preset body part is determined in the same manner.
After the real height, the image height, and the pixel area of each preset body part of the sample target are determined as above, the ratio between the real height and the image height of the sample target is calculated for each preset body part, the product of that ratio and the part's pixel area is calculated, and the product is taken as the real area of the part. The real area of every other preset body part is determined in the same manner.
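As a sketch, the polygon area can be computed with the shoelace formula over the ordered contour key points, and the real area obtained by scaling with the height ratio exactly as stated above (the linear ratio, per this application's formula); the contour ordering is assumed.

```python
def polygon_pixel_area(contour_points: list) -> float:
    """Shoelace formula over contour keypoints connected in order."""
    n = len(contour_points)
    s = 0.0
    for i in range(n):
        x1, y1 = contour_points[i]
        x2, y2 = contour_points[(i + 1) % n]    # close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def real_area(pixel_area: float, real_height: float, image_height: float) -> float:
    """Real area = (real height / image height) * pixel area, as stated above."""
    return (real_height / image_height) * pixel_area
```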
For each sample target in each sample image, the real area corresponding to each preset body part of each sample target is respectively obtained according to the above mode.
And for each sample object in each sample image, collecting the true body weight of each sample object.
B2: and fitting a fitting coefficient corresponding to each preset body part through a preset neural network model according to the weight of each sample target and the real area of each preset body part.
The preset neural network model comprises a plurality of product calculation modules which are executed in parallel and a weight summation module connected with the product calculation modules, wherein each product calculation module is respectively used for calculating the product between the real area of one preset body part and the fitting coefficient corresponding to the preset body part, and the product is equivalent to the weight of the preset body part. The weight summation module is used for calculating the sum of the output results of each product calculation module, namely calculating the sum of the weights of all preset body parts.
The preset neural network model shown in fig. 2 includes four product calculation modules, which are a head weight calculation module, an arm weight calculation module, a body weight calculation module, and a leg weight calculation module, respectively, and a weight summation module. The head weight calculation module is used for calculating the product of the real area of the head and the fitting coefficient K1. The arm weight calculation module is used for calculating the product of the real area of the arm and the fitting coefficient K2. The body weight calculation module is used for calculating the product of the real area of the body and the fitting coefficient K3. The leg weight calculation module is used for calculating the product of the real area of the leg and the fitting coefficient K4.
A first body weight equation is established for each sample target through the preset neural network model; the equation sets the weight of the sample target equal to the sum, over all preset body parts, of the product of the real area of each part and the fitting coefficient corresponding to that part, i.e. weight = K1*S1 + K2*S2 + K3*S3 + K4*S4. The first body weight equations of all sample targets are then solved simultaneously through the preset neural network model to obtain the fitting coefficient corresponding to each preset body part.
After the weights of a large number of sample targets and the real areas of their preset body parts are obtained in step B1, they are input into the preset neural network model of the above structure, and the fitting coefficients corresponding to the preset body parts are solved using the model's fitting capability. Each fitted coefficient effectively approximates the product of the thickness and the density of the corresponding body part.
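Since the model described above computes weight = K1·S1 + K2·S2 + … with no bias term, fitting the coefficients amounts to solving a linear system. The following sketch does so with ordinary least squares instead of an iteratively trained network, which is a simplification, and the example numbers are invented.

```python
import numpy as np

def fit_coefficients(areas: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """areas: (n_samples, n_parts) real areas; weights: (n_samples,) true weights.
    Returns one fitting coefficient per preset body part (K1..Kn)."""
    coeffs, *_ = np.linalg.lstsq(areas, weights, rcond=None)
    return coeffs

# Invented numbers for the 4-part division (head, arms, body, legs);
# in practice many hundreds of sample targets would be used.
areas = np.array([[900.0, 700.0, 2600.0, 2100.0],
                  [800.0, 650.0, 2400.0, 1900.0]])
weights = np.array([72.0, 65.0])
K = fit_coefficients(areas, weights)    # [K1, K2, K3, K4]
```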
B3: and configuring the fitting coefficient corresponding to each preset body part into a preset weight coefficient corresponding to each preset body part.
And after the preset neural network model outputs the fitting coefficient corresponding to each preset body part, taking the fitting coefficient as a preset weight coefficient. On a server or a terminal device running the weight detection method of the embodiment of the application, a corresponding relation between the identification information of each preset body part and a corresponding preset weight coefficient is configured. The identification information of the preset body part may be a character string for identifying the preset body part.
In other embodiments of the present application, considering that weight is related to height (persons of great height will generally not weigh too much), height is also used as an influence factor on weight, and the influence of height is introduced when determining the preset weight coefficient corresponding to each preset body part. Specifically, human height is divided into a plurality of preset height intervals, for example 5 intervals: [180, +∞), [170, 180), [160, 170), [150, 160), and (0, 150). The embodiment of the application does not limit the specific division of the preset height intervals or their number; they can be divided as required in practical applications.
After the preset height intervals are divided as above, the adjustment coefficient corresponding to each preset height interval is determined. Specifically, all first sample targets whose heights fall within a first height interval are selected from the sample targets, where the first height interval is any one of the preset height intervals. A plurality of candidate coefficients is then taken from a preset numerical interval. The preset numerical interval may be [0.9, 1.1], and the candidate coefficients may be taken from it with a preset step value, which may be 0.01. For example, one value is selected every 0.01 as a candidate coefficient, giving candidate coefficients 0.9, 0.91, 0.92, …, 1.1.
A theoretical weight value of each first sample target under each candidate coefficient is then calculated through the preset neural network model from the real area of each preset body part of the first sample target, the fitting coefficient corresponding to each part, and each candidate coefficient. Specifically, for each first sample target, the sum of the products of the real area of each preset body part and the fitting coefficient obtained in step B2 is calculated through the preset neural network model, and the product of that sum and each candidate coefficient gives the theoretical weight value of the first sample target corresponding to each candidate coefficient.
The calculation error corresponding to each candidate coefficient is then determined from the weight of the first sample target and the theoretical weight value corresponding to each candidate coefficient; that is, the difference between the actual weight of the first sample target and the theoretical weight value under each candidate coefficient is calculated, and that difference is taken as the calculation error of the candidate coefficient. The candidate coefficient with the minimum calculation error is determined as the adjustment coefficient corresponding to the first height interval.
The adjustment coefficients corresponding to the other preset height intervals are determined in the same manner, as shown in the sketch below.
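A sketch of this per-interval search follows; using the mean absolute error as the "calculation error" is an assumption, since the text only specifies taking the difference between actual and theoretical weights.

```python
import numpy as np

def adjustment_coefficient(areas: np.ndarray, true_weights: np.ndarray,
                           K: np.ndarray) -> float:
    """areas/true_weights: the first sample targets whose real height lies in
    one preset height interval; K: fitting coefficients from step B2."""
    base = areas @ K                                  # sum of area_i * K_i per sample
    candidates = np.arange(0.90, 1.10 + 1e-9, 0.01)   # 0.90, 0.91, ..., 1.10
    errors = [np.mean(np.abs(c * base - true_weights)) for c in candidates]
    return float(candidates[int(np.argmin(errors))])
```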
In other embodiments of the present application, a rerank algorithm may also be employed: based on the heights and weights of the different sample targets, an adjustment coefficient is searched within a preset numerical interval, which may be [0.9, 1.1], for each preset height interval.
For example, assume the preset height intervals are [180, +∞), [170, 180), [160, 170), [150, 160), and (0, 150), and that the adjustment coefficients of these 5 intervals are denoted p180, p170-180, p160-170, p150-160, and p150 in turn. The finally determined adjustment coefficients may then be [p180, p170-180, p160-170, p150-160, p150] = [0.98, 1.0, 1.02, 1.01, 1.03].
In the embodiment of the application, specific division of the preset height intervals is not limited, and specific values of the adjustment coefficients corresponding to each preset height interval are also not limited, and the adjustment coefficients can be determined according to requirements in practical application.
The preset height intervals can also be divided manually by technicians rather than automatically by a program, and the adjustment coefficient corresponding to each preset height interval can likewise be specified by technicians rather than determined by the algorithm. Each divided preset height interval and its corresponding adjustment coefficient are configured on the server or terminal device running the method.
After the adjustment coefficient of each preset height interval is determined, the fitting coefficient of each preset body part output by the preset neural network model is corrected per interval. Specifically, the product of the adjustment coefficient of each preset height interval and the fitting coefficient of each preset body part is calculated, yielding the preset weight coefficient of each preset body part for each preset height interval. The mapping among preset height interval, preset body part, and corresponding preset weight coefficient is then stored.
For example, the stored mapping relationship between the preset height interval, the preset body part and the corresponding preset weight coefficient may be as shown in table 1.
TABLE 1

Real height of body (cm)   K1       K2        K3        K4
[180, +∞)                  0.0098   0.012054  0.0195    0.01274
[170, 180)                 0.01     0.0123    0.0199    0.013
[160, 170)                 0.0102   0.012546  0.020298  0.01326
[150, 160)                 0.0101   0.012423  0.02      0.01313
(0, 150)                   0.0103   0.012669  0.020497  0.013389
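One possible way to store and query this mapping is sketched below, with the interval bounds and coefficients taken directly from Table 1.

```python
HEIGHT_COEFFS = [  # (lower bound of height interval in cm, [K1, K2, K3, K4])
    (180.0, [0.0098, 0.012054, 0.0195,   0.01274]),
    (170.0, [0.01,   0.0123,   0.0199,   0.013]),
    (160.0, [0.0102, 0.012546, 0.020298, 0.01326]),
    (150.0, [0.0101, 0.012423, 0.02,     0.01313]),
    (0.0,   [0.0103, 0.012669, 0.020497, 0.013389]),
]

def coefficients_for_height(real_height_cm: float):
    """Return the preset weight coefficients of the interval containing the height."""
    for lower, coeffs in HEIGHT_COEFFS:   # ordered from the tallest interval down
        if real_height_cm >= lower:
            return coeffs
    return HEIGHT_COEFFS[-1][1]
```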
After the key point recognition model is obtained through the training in the above manner and the preset weight coefficient corresponding to each preset body part is determined, as shown in fig. 3, the weight of the target to be detected is determined through the following operations of steps 101 to 103, which specifically includes:
step 101: and identifying a plurality of human body key points of the target to be detected in the image to be processed.
When the weight of the target to be detected needs to be determined, the image to be processed containing the target is captured with a camera device, or obtained directly from a local folder or the network. The image to be processed is then uploaded to a server or terminal device for weight detection.
After the server or the terminal device obtains the image to be processed, the image to be processed is input into a pre-trained key point recognition model, and the key point recognition model outputs a plurality of human body key points of the target to be detected included in the image to be processed.
If the image to be processed contains multiple targets to be detected, their human body key points can all be detected simultaneously by the key point recognition model, so key point detection is efficient. If part of the body of a target in the image is occluded by an object, the key point recognition model can still accurately detect all of that target's human body key points; detection accuracy remains high, the model stays robust when the target is occluded, and the accuracy of the occluded target's weight detection is improved accordingly.
In other embodiments of the application, before the key point recognition model is used to recognize the human body key points of each target to be detected, the model first detects whether the image of the target in the image to be processed meets a preset posture condition. The preset posture condition requires the image of the target to be a front-facing, upright whole-body image. If the target's image meets the condition, the model proceeds to recognize its human body key points. If the model detects that the target is in a back-facing, side-facing, stooping, or similar posture, the target's image does not meet the preset posture condition, and prompt information is sent to the user, prompting the user to provide an image to be processed that meets the preset posture condition.
Step 102: and determining the real area of each preset body part of the target to be detected according to the plurality of human body key points.
After a plurality of human body key points of the target to be detected are obtained, the real height, the image height and the pixel area of each preset body part of the target to be detected are determined according to the human body key points. And respectively determining the real area of each preset body part according to the real height of the target to be detected, the image height and the pixel area of each preset body part.
The specific determination process of the actual height, the image height, the pixel area of each preset body part, and the determination process of the actual area of each preset body part are the same as the corresponding determination process in the step B1, and are not described herein again.
In the process of determining the weight of the target to be detected, the real area of each preset body part is determined from the target's human body key points, so the weight of each part can be estimated from its real area and the results summed to give the target's weight, which greatly improves the accuracy of weight detection.
Step 103: and determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part.
After the real area of each preset body part of the target to be detected is obtained through the operation of step 102, the preconfigured preset weight coefficient corresponding to each preset body part is obtained. In embodiments where the preset weight coefficients are corrected by height, the preset height interval containing the target's real height (determined in step 102) is identified, and the preset weight coefficient of each preset body part is then retrieved from the stored mapping among preset height interval, preset body part, and preset weight coefficient.
After the real area and the preset weight coefficient of each preset body part of the target to be detected are obtained, the product of the real area of each preset body part and the preset weight coefficient corresponding to each preset body part is calculated respectively, and the weight of each preset body part is obtained. And calculating the sum of the weight of each preset body part to obtain the weight of the target to be detected.
To facilitate understanding of the weight detection process provided in the embodiments of the present application, it is described below with reference to fig. 4.
S1: Identify a plurality of human body key points of the target to be detected in the image to be processed through the key point recognition model.
S2: Determine the image height and image interpupillary distance of the target from its human body key points.
S3: Calculate the ratio of the preset standard interpupillary distance to the target's image interpupillary distance, and multiply the ratio by the target's image height to obtain the target's real height.
S4: Determine the pixel area of each preset body part from all human body key points of the target.
S5: Calculate the ratio of the target's real height to its image height, then multiply the ratio by the pixel area of each preset body part to obtain the real area of each part.
S6: Multiply the real area of each preset body part by its corresponding preset weight coefficient and sum the products to obtain the weight of the target to be detected.
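Combining the sketches above, steps S1 to S6 can be strung together roughly as follows. The keypoint model, the keypoint naming, and the grouping of contour points into preset body parts are all assumptions, and part_contours must list the parts in the same order as the coefficients (e.g. head, arms, body, legs).

```python
def estimate_weight(image, keypoint_model, part_contours: dict) -> float:
    # S1: recognize all human body keypoints of the target ({name: (x, y)}).
    keypoints = keypoint_model(image)

    # S2-S3: image pupil distance and image height -> real height.
    height = real_height_cm(keypoints)

    # Vertical pixel extent as a simple stand-in for the image height.
    ys = [y for _, y in keypoints.values()]
    image_height = max(ys) - min(ys)

    # S4-S5: pixel area, then real area, of each preset body part.
    # part_contours maps a part name to its ordered contour keypoint names.
    part_areas = {}
    for part, point_names in part_contours.items():
        contour = [keypoints[n] for n in point_names]
        part_areas[part] = real_area(polygon_pixel_area(contour), height, image_height)

    # S6: weight = sum over parts of real area * preset weight coefficient.
    coeffs = coefficients_for_height(height)
    return sum(a * k for a, k in zip(part_areas.values(), coeffs))
```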
In the embodiment of the application, the human body key points of the target to be detected in the image to be processed are identified, the real area of each preset body part of the target is determined from those key points, and the weight of the target is then calculated from the real area of each preset body part. The weight of the target can therefore be estimated from a single image, the weights of multiple targets in one image can be detected simultaneously, and the method is simple to operate, efficient, and works in real time. Because the real area of each preset body part is determined from the target's human body key points before the weight is estimated, the accuracy of weight detection is improved. The method remains robust when the target to be detected is occluded, and because the target's real height is referenced during weight estimation, the accuracy is further improved and the weight of tall, thin people is not underestimated, nor the weight of short, stout people overestimated.
The embodiment of the application further provides an image-based weight detection device, which is used for executing the image-based weight detection method provided by any one of the embodiments. As shown in fig. 5, the apparatus includes:
a key point identification module 201, configured to identify a plurality of human key points of a target to be detected included in an image to be processed;
a real area determining module 202, configured to determine a real area of each preset body part of the target to be detected according to the plurality of human body key points;
the weight determining module 203 is configured to determine the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part.
The real area determining module 202 is configured to determine a real height of the target to be detected, an image height of the target to be detected, and a pixel area of each preset body part according to the plurality of human body key points; and respectively determining the real area of each preset body part according to the real height, the image height and the pixel area of each preset body part.
The weight determining module 203 is configured to calculate a product of the real area of each preset body part and a preset weight coefficient corresponding to each preset body part, respectively, to obtain a weight of each preset body part; and calculating the sum of the weight of each preset body part to obtain the weight of the target to be detected.
The device also includes: the weight coefficient determining module is used for acquiring the weights of a plurality of sample targets and the real area of each preset body part of each sample target; fitting a fitting coefficient corresponding to each preset body part through a preset neural network model according to the weight of each sample target and the real area of each preset body part; and configuring the fitting coefficient corresponding to each preset body part into a preset weight coefficient corresponding to each preset body part.
The weight coefficient determining module is configured to establish, through the preset neural network model, a first body weight equation corresponding to each sample target, wherein the first body weight equation sets the weight of the sample target equal to the sum, over all preset body parts of the sample target, of the product of the real area of each preset body part and the fitting coefficient corresponding to that body part; and to solve the first body weight equations of all sample targets simultaneously through the preset neural network model to obtain the fitting coefficient corresponding to each preset body part.
The weight coefficient determining module is also used for dividing the height of the human body into a plurality of preset height intervals; respectively determining an adjusting coefficient corresponding to each preset height interval; according to the adjustment coefficient corresponding to each preset height interval, respectively correcting the fitting coefficient corresponding to each preset body part to obtain a preset weight coefficient of each preset body part corresponding to each preset height interval; and storing the mapping relation among the preset height interval, the preset body part and the corresponding preset weight coefficient.
The weight coefficient determining module is further used for determining all first sample targets with heights within a first height interval from each sample target, and the first height interval is any one preset height interval in a plurality of preset height intervals; acquiring a plurality of candidate coefficients from a preset numerical interval; respectively calculating a theoretical weight value of the first sample target obtained by adopting each candidate coefficient through a preset neural network model according to the real area of each preset body part of the first sample target, the fitting coefficient corresponding to each preset body part and each candidate coefficient; respectively determining a calculation error corresponding to each candidate coefficient according to the weight of the first sample target and a theoretical weight value corresponding to each candidate coefficient; and determining the candidate coefficient with the minimum calculation error as an adjusting coefficient corresponding to the first preset height interval.
The weight coefficient determining module is also used for determining the real height of the target to be detected according to the plurality of human body key points; determining the preset height interval to which the real height belongs; and acquiring the corresponding preset weight coefficient of each preset body part from the mapping relation according to the preset height interval to which the real height belongs.
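The stored mapping can then be a simple interval-indexed table; a sketch with hypothetical interval bounds and coefficient values:

```python
HEIGHT_INTERVALS = [(0, 160), (160, 175), (175, 300)]  # cm, hypothetical bounds
COEF_TABLE = {  # preset weight coefficients per interval, hypothetical values
    0: {"torso": 0.017, "head": 0.011},
    1: {"torso": 0.018, "head": 0.012},
    2: {"torso": 0.019, "head": 0.013},
}

def coefficients_for_height(real_height_cm: float) -> dict:
    """Select the preset weight coefficients for the interval containing the height."""
    for idx, (lo, hi) in enumerate(HEIGHT_INTERVALS):
        if lo <= real_height_cm < hi:
            return COEF_TABLE[idx]
    raise ValueError("height outside configured intervals")
```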
The real area determining module 202 is configured to determine an image interpupillary distance corresponding to the target to be detected according to the left-eye key point and the right-eye key point included in the plurality of human body key points; determine the image height corresponding to the target to be detected according to the plurality of human body key points; calculate a ratio of a preset standard interpupillary distance to the image interpupillary distance; and determine the product of the ratio and the image height as the real height of the target to be detected.
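A sketch of the interpupillary-distance estimate: the ratio of a preset standard interpupillary distance to the measured image interpupillary distance gives a centimeters-per-pixel scale, which converts the image height into a real height. The 6.3 cm standard value is an assumed adult average, not a figure from the patent:

```python
import math

def real_height_from_pupils(left_eye, right_eye, image_height_px,
                            standard_ipd_cm=6.3):
    """left_eye/right_eye: (x, y) key points in pixels; returns real height in cm."""
    image_ipd_px = math.dist(left_eye, right_eye)  # image interpupillary distance
    return (standard_ipd_cm / image_ipd_px) * image_height_px
```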
A key point identification module 201, configured to obtain a training data set, where the training data set includes a plurality of sample images labeled with key points of a human body; training a key point identification model for identifying key points of a human body according to a training data set; and identifying a plurality of human body key points of the target to be detected in the image to be processed through the key point identification model.
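As a rough illustration of this training step only, a heavily simplified PyTorch regression sketch that maps an image to (x, y) coordinates for K labeled human body key points; the backbone, loss, and keypoint count are placeholders, not the patent's model:

```python
import torch
import torch.nn as nn

K = 17  # assumed number of human body key points

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2 * K),  # (x, y) per key point
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(images: torch.Tensor, keypoints: torch.Tensor) -> float:
    """images: (B, 3, H, W); keypoints: (B, 2*K) labeled coordinates."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), keypoints)
    loss.backward()
    optimizer.step()
    return loss.item()
```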
The device also includes: the target posture detection module is used for detecting whether the image of the target to be detected in the image to be processed meets a preset posture condition or not through the key point recognition model; if yes, executing the operation of identifying a plurality of human body key points of the target to be detected in the image to be processed through the key point identification model; if not, sending prompt information to the user, wherein the prompt information is used for prompting the user to provide the image to be processed which meets the preset posture condition.
The key point identification module 201 is configured to obtain a plurality of occlusion sample images and a plurality of non-occlusion sample images, where a part of a body of a sample target in the occlusion sample images is occluded by an object, and all human key points of the sample target are labeled in both the occlusion sample images and the non-occlusion sample images; and combining the plurality of occlusion sample images and the plurality of non-occlusion sample images into a training data set.
The image-based weight detection device provided by the above embodiment of the present application is based on the same inventive concept as the image-based weight detection method provided by the embodiments of the present application, and has the same beneficial effects as the method adopted, run or implemented by the application program stored in the device.
The embodiment of the present application also provides an electronic device for executing the image-based weight detection method. Please refer to fig. 6, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 6, the electronic device 8 includes: a processor 800, a memory 801, a bus 802 and a communication interface 803; the processor 800, the communication interface 803 and the memory 801 are connected by the bus 802. The memory 801 stores a computer program operable on the processor 800, and when the processor 800 runs the computer program, the image-based weight detection method provided in any one of the foregoing embodiments of the present application is performed.
The Memory 801 may include a high-speed Random Access Memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the apparatus and at least one other network element is implemented through at least one communication interface 803 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
Bus 802 can be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 801 is used for storing a program; the processor 800 executes the program after receiving an execution instruction. The image-based weight detection method disclosed in any embodiment of the present application may be applied to, or implemented by, the processor 800.
The processor 800 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by hardware integrated logic circuits or by instructions in the form of software in the processor 800. The Processor 800 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory 801, and the processor 800 reads the information in the memory 801 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiment of the present application is based on the same inventive concept as the image-based weight detection method provided by the embodiments of the present application, and has the same beneficial effects as the method adopted, run or implemented by the electronic device.
The embodiment of the present application further provides a computer-readable storage medium corresponding to the image-based weight detection method. Referring to fig. 7, the computer-readable storage medium is shown as an optical disc 30, on which a computer program (i.e., a program product) is stored; when the computer program is run by a processor, it performs the image-based weight detection method provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above embodiment of the present application is based on the same inventive concept as the image-based weight detection method provided by the embodiments of the present application, and has the same beneficial effects as the method adopted, run or implemented by the application program stored therein.
It should be noted that:
in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Moreover, those skilled in the art will understand that although some embodiments described herein include some features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the application and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for detecting body weight based on images is characterized by comprising the following steps:
identifying a plurality of human body key points of a target to be detected in an image to be processed;
determining the real area of each preset body part of the target to be detected according to the plurality of human body key points;
and determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part.
2. The method according to claim 1, wherein the determining the real area of each preset body part of the object to be detected according to the plurality of human body key points comprises:
determining the real height and the image height of the target to be detected and the pixel area of each preset body part according to the plurality of human body key points;
and respectively determining the real area of each preset body part according to the real height, the image height and the pixel area of each preset body part.
3. The method according to claim 1, wherein the determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part comprises:
respectively calculating the product of the real area of each preset body part and a preset weight coefficient corresponding to each preset body part to obtain the weight of each preset body part;
and calculating the sum of the weight of each preset body part to obtain the weight of the target to be detected.
4. The method according to claim 1, wherein before determining the weight of the object to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part, the method further comprises:
acquiring the weights of a plurality of sample targets and the real area of each preset body part of each sample target;
fitting a fitting coefficient corresponding to each preset body part through a preset neural network model according to the weight of each sample target and the real area of each preset body part;
and configuring the fitting coefficient corresponding to each preset body part as the preset weight coefficient corresponding to each preset body part.
5. The method according to claim 4, wherein the fitting a fitting coefficient corresponding to each preset body part through a preset neural network model according to the weight of each sample target and the real area of each preset body part comprises:
respectively establishing a first body weight equation corresponding to each sample target through a preset neural network model, wherein the first body weight equation states that the weight of the sample target is equal to the sum of the products of the real area of each preset body part of the sample target and the fitting coefficient corresponding to each preset body part;
and simultaneously solving the first body weight equations of all sample targets through the preset neural network model to obtain the fitting coefficient corresponding to each preset body part.
6. The method of claim 4, further comprising:
dividing the height of a human body into a plurality of preset height intervals;
respectively determining an adjustment coefficient corresponding to each preset height interval;
according to the adjustment coefficient corresponding to each preset height interval, respectively correcting the fitting coefficient corresponding to each preset body part to obtain a preset weight coefficient of each preset body part corresponding to each preset height interval;
and storing the mapping relation among the preset height interval, the preset body part and the corresponding preset weight coefficient.
7. The method according to claim 6, wherein the respectively determining an adjustment coefficient corresponding to each preset height interval comprises:
determining all first sample targets with heights within a first height interval from each sample target, wherein the first height interval is any one of a plurality of preset height intervals;
acquiring a plurality of candidate coefficients from a preset numerical interval;
respectively calculating a theoretical weight value of the first sample target obtained by adopting each candidate coefficient through a preset neural network model according to the real area of each preset body part of the first sample target, the fitting coefficient corresponding to each preset body part and each candidate coefficient;
respectively determining a calculation error corresponding to each candidate coefficient according to the weight of the first sample target and the theoretical weight value corresponding to each candidate coefficient;
and determining the candidate coefficient with the minimum calculation error as the adjustment coefficient corresponding to the first height interval.
8. The method according to claim 6, wherein before determining the weight of the object to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part, the method further comprises:
determining the real height of the target to be detected according to the plurality of human body key points;
determining a preset height interval to which the real height belongs;
and acquiring the preset weight coefficient of each corresponding preset body part from the mapping relation according to the preset height interval to which the real height belongs.
9. The method according to claim 2 or 8, wherein the determining the real height of the target to be detected according to the plurality of key points of the human body comprises:
determining an image interpupillary distance corresponding to the target to be detected according to the left-eye key points and the right-eye key points included in the plurality of human body key points;
determining the image height corresponding to the target to be detected according to the plurality of human body key points;
calculating a ratio of a preset standard interpupillary distance to the image interpupillary distance;
and determining the product of the ratio and the image height as the real height of the target to be detected.
10. The method according to any one of claims 1 to 8, wherein the identifying a plurality of human keypoints of an object to be detected included in the image to be processed comprises:
acquiring a training data set, wherein the training data set comprises a plurality of sample images marked with key points of a human body;
training a key point identification model for identifying key points of a human body according to the training data set;
and identifying a plurality of human body key points of the target to be detected in the image to be processed through the key point identification model.
11. The method according to claim 10, wherein before the identifying a plurality of human key points of the object to be detected included in the image to be processed by the key point identification model, the method further comprises:
detecting whether the image of the target to be detected in the image to be processed meets a preset posture condition or not through the key point identification model;
if yes, executing the operation of identifying a plurality of human body key points of the target to be detected in the image to be processed through the key point identification model;
and if not, sending prompt information to the user, wherein the prompt information is used for prompting the user to provide the image to be processed which meets the preset posture condition.
12. The method of claim 10, wherein the obtaining a training data set comprises:
acquiring a plurality of occlusion sample images and a plurality of non-occlusion sample images, wherein part of the body of a sample target in the occlusion sample images is occluded by an object, and all human key points of the sample target are marked in the occlusion sample images and the non-occlusion sample images;
the plurality of occlusion sample images and the plurality of non-occlusion sample images are combined into a training data set.
13. An image-based weight detection device, comprising:
the key point identification module is used for identifying a plurality of human body key points of the target to be detected in the image to be processed;
the real area determining module is used for determining the real area of each preset body part of the target to be detected according to the plurality of human body key points;
and the weight determining module is used for determining the weight of the target to be detected according to the real area of each preset body part and the preset weight coefficient corresponding to each preset body part.
14. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-12.
15. A computer-readable storage medium, on which a computer program is stored, which program is executed by a processor to implement the method according to any one of claims 1-12.
CN202110977583.3A 2021-08-24 2021-08-24 Weight detection method, device and equipment based on image and storage medium Pending CN115731566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110977583.3A CN115731566A (en) 2021-08-24 2021-08-24 Weight detection method, device and equipment based on image and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110977583.3A CN115731566A (en) 2021-08-24 2021-08-24 Weight detection method, device and equipment based on image and storage medium

Publications (1)

Publication Number Publication Date
CN115731566A true CN115731566A (en) 2023-03-03

Family

ID=85289541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110977583.3A Pending CN115731566A (en) 2021-08-24 2021-08-24 Weight detection method, device and equipment based on image and storage medium

Country Status (1)

Country Link
CN (1) CN115731566A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206342A (en) * 2023-04-27 2023-06-02 广东省农业科学院动物科学研究所 Pig weight detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110426112B (en) Live pig weight measuring method and device
CN104978549B (en) Three-dimensional face images feature extracting method and system
CN102236899B (en) Method and device for detecting objects
CN113272852A (en) Method for acquiring photograph for measuring body size, and body size measuring method, server, and program using same
WO2018133691A1 (en) Method and device for obtaining figure parameter of user
CN110472481B (en) Sleeping gesture detection method, device and equipment
CN105844276A (en) Face posture correction method and face posture correction device
CN107590460B (en) Face classification method, apparatus and intelligent terminal
CA2794659A1 (en) Apparatus and method for iris recognition using multiple iris templates
CN112668359A (en) Motion recognition method, motion recognition device and electronic equipment
JP6381368B2 (en) Image processing apparatus, image processing method, and program
CN114494347A (en) Single-camera multi-mode sight tracking method and device and electronic equipment
CN109272546A (en) A kind of fry length measurement method and system
CN115731566A (en) Weight detection method, device and equipment based on image and storage medium
CN110728754A (en) Rigid body mark point identification method, device, equipment and storage medium
KR101636171B1 (en) Skeleton tracking method and keleton tracking system using the method
CN117115922A (en) Seat body forward-bending evaluation method, system, electronic equipment and storage medium
US20220078339A1 (en) Method for obtaining picture for measuring body size and body size measurement method, server, and program using same
CN112244401A (en) Human body measurement error correction method and system based on human body sample library
CN114745985A (en) Bra sizing optimization from 3D shape of breast
CN111611928A (en) Height and body size measuring method based on monocular vision and key point identification
US20190357767A1 (en) Measuring a posterior corneal surface of an eye
US20220156977A1 (en) Calibration apparatus, calibration method, and non-transitory computer readable medium storing program
JP7136344B2 (en) Camera calibration method, camera and program
CN114694235A (en) Eye gaze tracking system, related method and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination