CN106529400A - Mobile terminal and human body monitoring method and device - Google Patents
- Publication number
- CN106529400A CN106529400A CN201610850221.7A CN201610850221A CN106529400A CN 106529400 A CN106529400 A CN 106529400A CN 201610850221 A CN201610850221 A CN 201610850221A CN 106529400 A CN106529400 A CN 106529400A
- Authority
- CN
- China
- Prior art keywords
- human body
- target body
- mobile terminal
- depth image
- height
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1072—Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
Abstract
The invention discloses a mobile terminal and a human body monitoring method and device. The mobile terminal comprises an image collector for collecting a depth image of a target human body, and a processor for obtaining human body information of the target human body based on the depth image. The human body information comprises at least one of height, body weight and sex. The mobile terminal can thereby monitor human body information accurately.
Description
Technical field
The present invention relates to the technical field of mobile terminals, and more particularly to a mobile terminal and its human body monitoring method and device.
Background technology
At present, mobile terminals such as mobile phones and tablet computers have become indispensable everyday devices. A typical mobile terminal provides functions such as communication, games and various life applications.
In addition, human body information collection has been widely applied in fields such as personal health management, targeted advertising and intelligent monitoring. However, existing mobile terminals only provide an ordinary camera that captures color images, and it is difficult to use color images to monitor information such as height, sex and body weight accurately and in real time.
Summary of the invention
The technical problem mainly solved by the invention is to provide a mobile terminal and its human body monitoring method and device, which enable accurate monitoring of human body information by the mobile terminal.
To solve the above technical problem, one aspect of the present invention provides a mobile terminal, including:
an image collector for collecting a depth image of a target human body; and
a processor for obtaining human body information of the target human body according to the depth image, wherein the human body information includes at least one of height, body weight and sex.
Wherein, the processor is specifically configured to extract image data related to the human body information from the depth image, and to obtain the human body information of the target human body using the extracted image data.
Wherein, when the human body information includes height, the processor is specifically configured to:
extract image data related to height from the depth image, the height-related image data including a first three-dimensional space coordinate corresponding to the head highest-point pixel of the target human body, a second three-dimensional space coordinate corresponding to the left-thigh highest-point pixel, a third three-dimensional space coordinate corresponding to the right-thigh highest-point pixel, a fourth three-dimensional space coordinate corresponding to the left-foot lowest-point pixel, and a fifth three-dimensional space coordinate corresponding to the right-foot lowest-point pixel;
compute, from the extracted three-dimensional space coordinates, a first vector pointing from the midpoint of the line between the left-thigh highest point and the right-thigh highest point to the head highest point, a second vector pointing from the left-thigh highest point to the left-foot lowest point, and a third vector pointing from the right-thigh highest point to the right-foot lowest point; and
calculate the height of the target human body using the first vector, the second vector and the third vector.
Wherein, the processor is further configured to substitute the first vector, the second vector and the third vector into formula 1 or formula 2 to obtain the height Height of the target human body.
Wherein, when the human body information includes sex, the processor is specifically configured to:
extract image data related to sex from the depth image, wherein the sex-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest and the contour feature of the buttocks of the target human body; and
input the extracted sex-related image data and the height of the target human body into a preset classifier for classification, and determine the sex of the target human body according to the classification result of the preset classifier.
Wherein, when the human body information includes body weight, the processor is specifically configured to:
extract image data related to body weight from the depth image, wherein the body-weight-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest, the contour feature of the buttocks and the contour feature of the belly of the target human body; and
input the extracted body-weight-related image data and the height and sex of the target human body into a preset regression model to estimate the body weight, and determine the body weight of the target human body according to the estimation result.
Wherein, the extracted image data related to sex/body weight further includes the angle α between the normal of the front of the target human body and the optical axis of the camera that obtains the depth image.
Wherein, the mobile terminal further includes a display for showing the human body information of the target human body and/or preset data matched with the human body information.
To solve the above technical problem, another technical solution adopted by the present invention provides a human body monitoring method for a mobile terminal, including:
collecting a depth image of a target human body using an image collector; and
obtaining human body information of the target human body according to the depth image;
wherein the human body information includes at least one of height, body weight and sex.
To solve the above technical problem, yet another technical solution adopted by the present invention provides a human body monitoring device for a mobile terminal, including:
an acquisition module for obtaining the depth image of the target human body collected by the image collector of the mobile terminal; and
a computing module for calculating the human body information of the target human body according to the depth image.
The beneficial effects of the invention are as follows: the mobile terminal can collect a depth image of a target human body and use the depth image to obtain the human body information of the target human body, realizing monitoring of human body information by the mobile terminal. Moreover, the mobile terminal collects human body information from depth images rather than color images. Since a depth image contains three-dimensional information, the monitored human body information is more accurate; and since a depth image characterizes depth information and is not affected by ambient light, the monitoring of human body information is not limited by ambient light. That is, no environment interferes with the monitoring of the human body information, which improves the stability of human body information collection.
Description of the drawings
Fig. 1 is a flow chart of an embodiment of the human body monitoring method of the mobile terminal of the present invention;
Fig. 2 is a structural schematic diagram of the mobile terminal in an application scenario of the human body monitoring method of the mobile terminal of the present invention;
Fig. 3 is a flow chart of the sub-steps included in step S12 in another embodiment of the human body monitoring method of the mobile terminal of the present invention;
Fig. 4 is a flow chart of another embodiment of the human body monitoring method of the mobile terminal of the present invention;
Fig. 5 is a schematic diagram of the depth image of a target human body in an application scenario of the human body monitoring method of the mobile terminal of the present invention;
Fig. 6 is a schematic diagram of the face in the depth image of the target human body in an application scenario of the human body monitoring method of the mobile terminal of the present invention;
Fig. 7 is a structural schematic diagram of an embodiment of the human body monitoring device of the mobile terminal of the present invention;
Fig. 8 is a structural schematic diagram of an embodiment of the mobile terminal of the present invention.
Specific embodiments
To better understand the technical solution of the present invention, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The terms used in the embodiments of the present invention are merely for the purpose of describing specific embodiments and are not intended to limit the present invention. The singular forms "a", "said" and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Referring to Fig. 1, Fig. 1 is a flow chart of an embodiment of the human body monitoring method of the mobile terminal of the present invention. In this embodiment, the method may be performed by a mobile terminal such as a mobile phone or a tablet computer. The method includes the following steps:
S11: Collect a depth image of a target human body using an image collector.
An image collector is provided in the mobile terminal to collect depth images. Specifically, the depth image of the target human body may be collected in any of the following three ways:
1) Binocular vision
Specifically, the image collector of the mobile terminal includes two cameras spaced roughly at the human interocular distance (equivalent to the left and right eyes), which synchronously capture the target human body to obtain left and right target images. The left and right target images are processed by a stereo-matching algorithm to obtain the depth image of the target human body. The two cameras may both be color cameras or both be invisible-light cameras; an invisible-light camera is, for example, an infrared camera or an ultraviolet camera, as in the two modes below.
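The binocular computation above rests on the standard stereo relation between disparity and depth, Z = f·B/d. A minimal sketch follows; the focal length and baseline values are hypothetical, not taken from the patent.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth Z = f * B / d.

    disparity_px: pixel disparity between matched left/right points
    focal_px:     focal length in pixels
    baseline_m:   distance between the two cameras, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical values: a phone-like module with a 600 px focal length
# and a 6.5 cm baseline (roughly the human interpupillary distance).
z = disparity_to_depth(disparity_px=26.0, focal_px=600.0, baseline_m=0.065)
print(round(z, 3))  # 1.5 (meters)
```

Running this per matched pixel pair yields the per-pixel depth map that the method calls the depth image.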
2) Structured light
Specifically, as shown in Fig. 2, the mobile terminal 20 may be provided with an invisible-light projection module 22, an invisible-light camera 21 and a human body monitoring device 24, the invisible-light camera 21 serving as the image collector described above. The invisible-light projection module 22 of the mobile terminal 20 projects an encoded structured-light pattern (such as an irregularly arranged speckle pattern) into the space where the target human body 23 is located, and the invisible-light camera 21 of the mobile terminal 20 captures the target human body 23 to obtain an invisible-light image of the target human body carrying the structured-light pattern. The invisible-light camera 21 may compute the depth image of the target human body from the invisible-light image according to triangulation and output it to the human body monitoring device 24 of the mobile terminal 20. Of course, in other embodiments, the invisible-light camera 21 may directly output the invisible-light image to the human body monitoring device 24, which then computes the depth image of the target human body from the invisible-light image according to triangulation.
The invisible-light projection module 22 typically consists of a light source and a diffractive optical element. The light source may be an edge-emitting laser or a vertical-cavity surface-emitting laser (VCSEL), and emits invisible light that can be recognized by the invisible-light camera 21. For example, the invisible light is infrared light, the invisible-light camera 21 is an infrared camera, and the invisible-light image is an infrared image; or the invisible light is ultraviolet light, the invisible-light camera 21 is an ultraviolet camera, and the invisible-light image is an ultraviolet image. The diffractive optical element is configured with functions such as collimation, beam splitting and diffusion according to the different structured-light patterns.
The structured-light pattern may be an irregularly distributed speckle pattern. The density of the speckle pattern affects the speed and precision of the depth-value computation: the more speckle particles, the slower the computation but the higher the precision. Therefore, the invisible-light projection module 22 may select a suitable speckle density according to the approximate depth of the target region being imaged, so as to retain high computational precision while guaranteeing computation speed. Of course, the speckle density may also be determined by the human body monitoring device 24 according to its own computation requirements, with the determined density information sent to the invisible-light projection module 22.
In this embodiment, the invisible-light projection module 22 projects the speckle pattern toward the region where the target human body is located at a certain divergence angle, but is not limited thereto.
It should be noted that the invisible-light projection module 22 may also be arranged inside the invisible-light camera 21.
3) Time of flight (TOF)
Specifically, the image collector of the mobile terminal is an invisible-light camera. Invisible light is projected toward the region where the target human body is located, and the invisible-light camera collects the returned invisible light. The depth information of the target human body is calculated from the flight time of the invisible light, forming the depth image of the target human body. The invisible light is, for example, infrared light; the projected invisible light may be emitted by the invisible-light camera itself or by another projection structure independent of the invisible-light camera.
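The TOF depth calculation reduces to halving the round-trip distance of the light pulse, Z = c·t/2. A one-line sketch:

```python
def tof_depth(round_trip_time_s, c=299_792_458.0):
    """Time-of-flight depth: the light travels out and back,
    so the one-way distance is c * t / 2."""
    return c * round_trip_time_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(round(tof_depth(10e-9), 3))  # 1.499
```

In practice the flight time is usually recovered indirectly (for example from the phase shift of a modulated signal), but the depth conversion is the same.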
It can be understood that in other embodiments the depth image of the target human body may also be collected in other ways, which are not limited here.
Moreover, after the depth image is obtained in the above ways, the mobile terminal may first recognize the human body in the depth image and segment the human body part out of the depth image to obtain a new depth image, and then perform the following step S12 on the segmented depth image.
S12: Obtain the human body information of the target human body according to the depth image.
The human body information includes at least one of height, body weight and sex.
Specifically, the mobile terminal calculates the human body information of the target human body using the information carried in the depth image, so as to realize human body monitoring. As shown in Fig. 3, S12 may include the following sub-steps:
S121: Extract image data related to the human body information from the depth image.
In this embodiment, the mobile terminal may determine in advance, according to the human body information to be collected, which related image data needs to be extracted, and then, after obtaining the depth image of the target human body, extract the predetermined image data from the depth image.
For example, the mobile terminal may extract the three-dimensional spatial information of preset human-body pixels in the depth image as the height-related image data, and may extract the contour features of certain parts of the human body in the depth image as the image data related to body weight and sex. The three-dimensional spatial information includes the depth information of the corresponding pixels of the depth image. A contour feature may include multiple pixel coordinates of the adjacent edges of the corresponding human body part, or the result of further computation on those pixel coordinates.
S122: Obtain the human body information of the target human body using the extracted image data.
For example, the height of the target human body is calculated using the three-dimensional spatial information of the preset human-body pixels in the depth image; the sex of the target human body is obtained by classification using the contour features of certain body parts in the depth image and/or the height of the target human body; and the body weight of the target human body is obtained by analysis using the contour features of certain body parts in the depth image and/or the height and sex of the target human body.
It can be understood that the above human body monitoring method may be performed continuously in real time, periodically, or upon receiving a user instruction. For example, the mobile terminal may continuously collect depth images of the human bodies in the current target region in real time and obtain, from the corresponding depth images, the human body information of the people located in the target region at different times, thereby realizing real-time monitoring of the human body.
In this embodiment, the mobile terminal can collect the depth image of a target human body and use the depth image to obtain the human body information of the target human body, realizing monitoring of human body information by the mobile terminal. Moreover, the mobile terminal collects human body information from depth images rather than color images. Since a depth image contains three-dimensional information, the monitored human body information is more accurate; and since a depth image characterizes depth information and is not affected by ambient light, the monitoring of human body information is not limited by ambient light. That is, no environment interferes with the monitoring of the human body information, which improves the stability of human body information collection.
Referring to Fig. 4, Fig. 4 is a flow chart of another embodiment of the human body monitoring method of the mobile terminal of the present invention. In this embodiment, the method may be performed by a mobile terminal and includes the following steps:
S41: Collect a depth image of a target human body using an image collector.
See the description of S11 above for details.
S42: Extract image data related to height, sex and body weight from the depth image.
In this embodiment, the height-related image data may include the first three-dimensional space coordinate P1 corresponding to the head highest-point pixel of the target human body, the second three-dimensional space coordinate P2 corresponding to the left-thigh highest-point pixel, the third three-dimensional space coordinate P3 corresponding to the right-thigh highest-point pixel, the fourth three-dimensional space coordinate P4 corresponding to the left-foot lowest-point pixel and the fifth three-dimensional space coordinate P5 corresponding to the right-foot lowest-point pixel, as shown in Fig. 5. The three-dimensional space coordinate of a pixel consists of the two-dimensional coordinate, in the plane where the target human body is located, of the body position corresponding to the pixel, together with the depth information of that pixel. In a specific application, that two-dimensional coordinate may be the world coordinate of the pixel with respect to the camera that collects the depth image; the world coordinate can be understood as the coordinate in a two-dimensional coordinate system that takes the optical center of the camera as its origin and is formed in the plane where the target human body is located, parallel to the camera lens.
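Converting a depth-image pixel into such camera-centered world coordinates is commonly done with the pinhole camera model, back-projecting the pixel through the intrinsics. The patent does not specify this computation; the sketch below, with hypothetical intrinsic parameters fx, fy, cx, cy, shows one standard realization.

```python
def pixel_to_camera(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into camera-frame
    coordinates using the pinhole model: X = (u - cx) * z / fx,
    Y = (v - cy) * z / fy. (fx, fy) are focal lengths in pixels
    and (cx, cy) is the principal point; all are assumed values."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A pixel 120 px to the right of the principal point, at 2 m depth,
# maps to a point 0.4 m to the right of the optical axis.
p = pixel_to_camera(u=440, v=240, z=2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(p)  # (0.4, 0.0, 2.0)
```

Applying this to the five key-point pixels yields the coordinates P1 through P5 used below.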
The sex-related image data may include: the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest and the contour feature of the buttocks of the target human body.
The body-weight-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest, the contour feature of the buttocks and the contour feature of the belly of the target human body.
Here, the contour feature of a certain part A of the target human body may be: the outline of part A of the target human body; the ratio between the length and width of the outline of part A in one cross-section parallel to the plane where the target human body is located; the mean of the length-to-width ratios of the outlines of part A in multiple cross-sections parallel to that plane; or the maximum of those ratios. The outline of part A is the line formed in the depth image by the pixels corresponding to the edges of part A; the plane where the target human body is located is the plane parallel to the camera lens; and the length and width of a cross-section outline are the maximum extents of the cross-section along two mutually perpendicular reference directions. As shown in Fig. 6, the outline of the face of the target human body is L. The outline of the cross-section through the nose, parallel to the plane where the target human body is located, has length a1 and width b1, so its ratio is a1/b1. Further, the outline of the cross-section through the left eyeball, parallel to that plane, has length a2 and width b2, giving the ratio a2/b2; and the outline of the cross-section through the right eyeball has length a3 and width b3, giving the ratio a3/b3. The mean or maximum of the ratios a1/b1, a2/b2 and a3/b3 obtained from these three cross-sections is taken as the contour feature of the head.
Further, since different parts have different characteristics, different image data may be selected as the contour feature of each part. For example, for the face, the mean of the length-to-width ratios of multiple cross-sections parallel to the plane where the target human body is located is selected as the contour feature of the face; for the shoulders, chest, belly and buttocks, the maximum of the length-to-width ratios of multiple cross-sections parallel to that plane is selected as the contour feature of the part.
In this embodiment, the extracted image data related to sex/body weight further includes the angle α between the normal of the front of the target human body and the optical axis of the camera that obtains the depth image, as shown in Fig. 5. Specifically, α may be obtained by extracting the second three-dimensional space coordinate P2 (x2, y2, z2) corresponding to the left-thigh highest-point pixel of the target human body and the third three-dimensional space coordinate P3 (x3, y3, z3) corresponding to the right-thigh highest-point pixel, and calculating according to formula 13.
Including this angle avoids the sex and body-weight estimation errors caused by changes in the extracted features when the human body leans during motion, so that real-time monitoring can be realized.
Of course, in other embodiments, the image data related to sex/body weight may not include the angle α, which is not limited here.
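Formula 13 itself is not reproduced in this text. Under the assumption that the hip line P2→P3 lies in the body's frontal plane, the tilt of that line out of the image plane equals the rotation of the frontal normal away from the optical axis, which gives one plausible reconstruction; treat it as a sketch, not the patent's exact formula.

```python
import math

def frontal_angle(p2, p3):
    """Assumed reconstruction of formula 13: the angle (radians)
    between the body's frontal normal and the camera optical axis,
    estimated from the depth difference along the hip line P2 -> P3.
    p2, p3 are (x, y, z) camera-frame coordinates; z is depth."""
    dx = p3[0] - p2[0]
    dz = p3[2] - p2[2]
    return math.atan2(abs(dz), abs(dx))

# Facing the camera squarely (equal depths) gives alpha = 0.
print(frontal_angle((-0.2, 0.0, 2.0), (0.2, 0.0, 2.0)))  # 0.0
# Equal depth offset and lateral span gives 45 degrees.
a = frontal_angle((-0.2, 0.0, 2.0), (0.2, 0.0, 2.4))
print(round(math.degrees(a), 1))  # 45.0
```

The angle could then be fed to the classifier/regressor alongside the contour features, as the text describes.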
S43: Obtain the height, sex and body weight of the target human body using the extracted image data.
The height of the target human body is calculated using the height-related image data extracted in S42. Specifically, obtaining the height may include the following sub-steps:
S431: From the extracted three-dimensional space coordinates, compute the first vector pointing from the midpoint of the line between the left-thigh highest point and the right-thigh highest point to the head highest point, the second vector pointing from the left-thigh highest point to the left-foot lowest point, and the third vector pointing from the right-thigh highest point to the right-foot lowest point.
For example, as shown in Fig. 5, the obtained first three-dimensional space coordinate P1 corresponding to the head highest-point pixel is (x1, y1, z1), the second coordinate P2 corresponding to the left-thigh highest-point pixel is (x2, y2, z2), the third coordinate P3 corresponding to the right-thigh highest-point pixel is (x3, y3, z3), the fourth coordinate P4 corresponding to the left-foot lowest-point pixel is (x4, y4, z4), and the fifth coordinate P5 corresponding to the right-foot lowest-point pixel is (x5, y5, z5). The x- and y-coordinates are world coordinates with respect to the camera collecting the depth image, and the z-coordinate is the depth value of the pixel in the depth image. From these space coordinates, the first vector V1 = P1 − (P2 + P3)/2, the second vector V2 = P4 − P2 and the third vector V3 = P5 − P3 can be computed.
S432: Calculate the height of the target human body using the first vector, the second vector and the third vector.
For example, the first vector V1, the second vector V2 and the third vector V3 are substituted into formula 11 or formula 12 to obtain the height Height of the target human body.
In this embodiment, the height is calculated using the three-dimensional space coordinates of the regions of interest of the target human body in the depth image, which allows the height to be calculated accurately. This algorithm does not require the human body to stand at a fixed anchor point; that is, the height can be measured accurately even while the person is in motion, realizing real-time monitoring.
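Formulas 11 and 12 are not reproduced in this text. A natural reading of the vector construction is to add the torso-plus-head length |V1| to the average leg length (|V2| + |V3|)/2; the sketch below uses that assumed combination, which should not be taken as the patent's exact formula.

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def estimate_height(p1, p2, p3, p4, p5):
    """Height estimate from the five key points of Fig. 5.
    V1: hip midpoint -> head top; V2: left thigh top -> left foot;
    V3: right thigh top -> right foot. The combination
    |V1| + (|V2| + |V3|) / 2 is an assumed stand-in for the
    patent's formulas 11/12."""
    mid = tuple((a + b) / 2 for a, b in zip(p2, p3))
    v1 = sub(p1, mid)
    v2 = sub(p4, p2)
    v3 = sub(p5, p3)
    return norm(v1) + (norm(v2) + norm(v3)) / 2

# Upright toy skeleton: head at 1.7 m, thigh tops at 0.8 m, feet at 0.
p1 = (0.0, 1.7, 2.0)
p2, p3 = (-0.1, 0.8, 2.0), (0.1, 0.8, 2.0)
p4, p5 = (-0.1, 0.0, 2.0), (0.1, 0.0, 2.0)
print(round(estimate_height(p1, p2, p3, p4, p5), 3))  # 1.7
```

Because the vectors follow the body segments, the sum is largely unchanged when the person bends or walks, which matches the text's claim that no fixed standing pose is required.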
The sex of the target human body is determined using the sex-related image data extracted in S42. Specifically, the extracted sex-related image data and the height of the target human body are input into a preset classifier for classification, and the sex of the target human body is determined according to the classification result of the preset classifier. The preset classifier may be a prior-art nearest-neighbor classifier, Bayes classifier, support vector machine, or the like. The height of the target human body may be obtained in the manner described above in S43, input by the user, or calculated in other ways.
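Of the classifiers the text lists, the nearest-neighbor variant is the simplest to sketch. The training feature vectors and labels below are invented for illustration; a real deployment would train on measured data as described further on.

```python
import math

def nearest_neighbor_sex(features, training_set):
    """Minimal 1-nearest-neighbor classifier over (feature_vector,
    label) pairs. Features here are four contour ratios plus height
    in meters; all numeric values are made up for illustration."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training_set, key=lambda pair: dist(features, pair[0]))[1]

training = [
    # (face, shoulder, chest, buttocks contour ratios, height m), label
    ((1.30, 1.45, 1.20, 1.10, 1.78), "male"),
    ((1.25, 1.30, 1.15, 1.25, 1.63), "female"),
]
print(nearest_neighbor_sex((1.29, 1.42, 1.19, 1.12, 1.75), training))  # male
```

A support vector machine or Bayes classifier would slot into the same interface: features in, predicted label out.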
The body weight of the target human body is determined using the body-weight-related image data extracted in S42. Specifically, the extracted body-weight-related image data and the height and sex of the target human body are input into a preset regression model to estimate the body weight, and the body weight of the target human body is determined according to the estimation result. The height and sex of the target human body may be obtained in the manner described above in S43, input by the user, or calculated in other ways. Since the input features are multivariate, a multiple linear regression model may be used as the preset regression model; for example, the body weight Weight of the target human body is obtained using formula 14:
Weight = ω1·f1 + ω2·f2 + … + ωn·fn,
where ωi is a weight coefficient, fi is an obtained image data value related to body weight, and n is the total number of extracted image data values related to body weight.
The above preset classifier is obtained by training, and the preset regression model is obtained by learning. For example, people of different heights, sexes and body weights are measured in advance with a depth camera, their sex, height and body weight information is accurately obtained, and the classifier and regression model are trained and learned with these image data.
It can be understood that in other embodiments, the related image data of only one or two of height, sex and body weight may be extracted, and then only the corresponding one or two of the height, sex and body weight of the target human body are calculated.
Referring to Fig. 7, Fig. 7 is a structural schematic diagram of an embodiment of the human body monitoring device of the mobile terminal of the present invention. In this embodiment, the human body monitoring device 70 is arranged in the above mobile terminal and includes an acquisition module 71 and a computing module 72.
The acquisition module 71 is used to obtain the depth image of the target human body collected by the image collector of the mobile terminal.
The computing module 72 is used to calculate the human body information of the target human body according to the depth image.
Optionally, the computing module 72 includes an extraction unit 721 and an obtaining unit 722.
The extraction unit 721 is used to extract image data related to the human body information from the depth image.
The obtaining unit 722 is used to obtain the human body information of the target human body using the extracted image data, wherein the human body information includes at least one of height, body weight and sex.
Optionally, when the human body information includes height, the extraction unit 721 is specifically used to extract the image data related to height from the depth image, the height-related image data including a first three-dimensional space coordinate corresponding to the head peak pixel of the target body, a second three-dimensional space coordinate corresponding to the left thigh peak pixel, a third three-dimensional space coordinate corresponding to the right thigh peak pixel, a fourth three-dimensional space coordinate corresponding to the left foot lowest-point pixel, and a fifth three-dimensional space coordinate corresponding to the right foot lowest-point pixel.
The obtaining unit 722 is specifically used to: calculate, according to the extracted three-dimensional space coordinates, a first vector pointing from the midpoint of the line between the left thigh peak and the right thigh peak to the head peak, a second vector pointing from the left thigh peak to the left foot lowest point, and a third vector pointing from the right thigh peak to the right foot lowest point; and calculate the height of the target body using the first vector, the second vector and the third vector.
Still further optionally, the obtaining unit 722 is further used to substitute the first vector, the second vector and the third vector into the above formula 11 or formula 12 to obtain the height Height of the target body.
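The three-vector height computation can be sketched as follows. Formulas 11 and 12 are not reproduced in this excerpt, so the combination step below is an assumption: the height is approximated as the length of the first vector plus the average length of the second and third vectors. The `estimate_height` helper and the landmark argument names are illustrative, not taken from the patent.

```python
import math

def vector_norm(v):
    """Euclidean length of a 3-D vector."""
    return math.sqrt(sum(c * c for c in v))

def sub(a, b):
    """Component-wise a - b for 3-D points."""
    return tuple(ai - bi for ai, bi in zip(a, b))

def estimate_height(p_head, p_lthigh, p_rthigh, p_lfoot, p_rfoot):
    """Estimate standing height from five body landmarks in camera space.

    Assumed combination (formulas 11/12 are not reproduced here):
    height ~ |v1| + (|v2| + |v3|) / 2, where
      v1: thigh-midpoint -> head peak,
      v2: left thigh peak -> left foot lowest point,
      v3: right thigh peak -> right foot lowest point.
    """
    mid = tuple((l + r) / 2.0 for l, r in zip(p_lthigh, p_rthigh))
    v1 = sub(p_head, mid)
    v2 = sub(p_lfoot, p_lthigh)
    v3 = sub(p_rfoot, p_rthigh)
    return vector_norm(v1) + (vector_norm(v2) + vector_norm(v3)) / 2.0
```

For an upright subject the first vector covers torso plus head and the averaged leg vectors cover the legs, so the sum approximates the full standing height.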
Optionally, when the human body information includes sex, the extraction unit 721 is specifically used to extract the image data related to sex from the depth image, wherein the sex-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest and the contour feature of the hips of the target body.
The obtaining unit 722 is specifically used to input the extracted sex-related image data and the height of the target body into a preset classifier for classification, and determine the sex of the target body according to the classification result of the preset classifier.
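The form of the preset classifier is not specified in this excerpt (it could be an SVM, a decision tree, a neural network, etc.). As a minimal stand-in, the sketch below scores a feature vector of contour ratios plus height with a hand-written logistic model; the weights and bias are illustrative placeholders, not trained parameters from the patent.

```python
import math

def classify_sex(features, weights, bias):
    """Minimal stand-in for the preset classifier: a logistic model over
    contour features plus height.

    features : list of numbers, e.g. [height_m, shoulder_to_hip_ratio, ...]
    weights, bias : illustrative placeholder parameters (a real system
    would learn these from labeled depth-image data).
    Returns "male" if the sigmoid score >= 0.5, else "female".
    """
    z = sum(w * x for w, x in zip(weights, features)) + bias
    score = 1.0 / (1.0 + math.exp(-z))
    return "male" if score >= 0.5 else "female"
```

Feeding height alongside the contour features, as the description requires, lets the classifier separate subjects whose contour ratios alone are ambiguous.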
Optionally, when the human body information includes body weight, the extraction unit 721 is specifically used to extract the image data related to body weight from the depth image, wherein the weight-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest, the contour feature of the hips and the contour feature of the belly of the target body.
The obtaining unit 722 is specifically used to input the extracted weight-related image data together with the height and sex of the target body into a preset regression model to estimate the body weight, and determine the body weight of the target body according to the estimation result.
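Likewise, the preset regression model is unspecified in this excerpt. A minimal sketch, assuming a linear model over height, the five contour ratios (face, shoulders, chest, hips, belly) and a sex flag; the coefficients below are illustrative placeholders, not a disclosed trained model.

```python
def estimate_weight(height_m, contour_ratios, is_male, coeffs):
    """Stand-in for the preset regression model: a linear model.

    height_m       : height in meters (from the height step above)
    contour_ratios : five contour-feature ratios (face, shoulders, chest,
                     hips, belly)
    is_male        : sex flag from the classification step
    coeffs         : (intercept, weight_list) -- illustrative placeholders
    """
    x = [height_m] + list(contour_ratios) + [1.0 if is_male else 0.0]
    intercept, ws = coeffs
    return intercept + sum(w * xi for w, xi in zip(ws, x))
```

Chaining the outputs this way mirrors the description: height feeds the sex classifier, and both feed the weight regressor.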
Still further optionally, the extracted sex-related/weight-related image data also includes the angle α between the frontal normal of the target body and the optical axis of the camera that captures the depth image.
The angle α can be calculated by substituting the three-dimensional space coordinate P2 (x2, y2, z2) and the three-dimensional space coordinate P3 (x3, y3, z3) into the above formula 13.
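Formula 13 is not reproduced in this excerpt, but given that P2 and P3 are the left and right thigh peaks, α can plausibly be recovered as follows: project P3 − P2 into the horizontal plane, take its in-plane perpendicular as the frontal normal, and measure that normal's angle to the camera's optical axis (the z-axis). This geometry is an assumption consistent with the description, not the patent's formula.

```python
import math

def frontal_angle(p2, p3):
    """Angle (degrees) between the body's assumed frontal normal and the
    camera optical axis, from the left/right thigh-peak coordinates P2, P3.

    Assumption: the body's left-right axis is (P3 - P2) projected to the
    horizontal xz-plane; the frontal normal is its in-plane perpendicular.
    """
    dx = p3[0] - p2[0]
    dz = p3[2] - p2[2]
    nx, nz = -dz, dx                 # in-plane perpendicular of (dx, dz)
    norm = math.hypot(nx, nz)
    if norm == 0.0:
        raise ValueError("P2 and P3 must be distinct points")
    cos_a = abs(nz) / norm           # |dot((nx, nz), (0, 1))| / |n|
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
```

A subject squarely facing the camera yields α = 0; a subject turned 45° in the horizontal plane yields α = 45.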
Optionally, the contour feature of a given part of the target body is one of the following: the outline of the corresponding part in a section parallel to the plane in which the target body lies; the ratio between the length and the width of that outline in one such section; the mean value of the length-to-width ratios of the outlines in multiple sections parallel to the plane in which the target body lies; or the maximum of the length-to-width ratios of the outlines over multiple such sections.
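The ratio-based variants of the contour feature can be sketched as below, assuming each cross-section is given as a list of (x, z) outline points and that "length" and "width" are the outline's extents along the two in-plane axes. That axis choice is a simplifying assumption; the patent does not fix it here.

```python
def section_ratio(points):
    """Length/width ratio of one cross-section, where length and width are
    taken as the outline's extents along the two in-plane axes (assumed)."""
    xs = [p[0] for p in points]
    zs = [p[1] for p in points]
    length = max(xs) - min(xs)
    width = max(zs) - min(zs)
    return length / width

def contour_feature(sections, mode="mean"):
    """Aggregate per-section ratios as the description allows: a single
    section's ratio, the mean over sections, or the maximum over sections."""
    ratios = [section_ratio(s) for s in sections]
    if mode == "mean":
        return sum(ratios) / len(ratios)
    if mode == "max":
        return max(ratios)
    return ratios[0]
```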
The above modules of the human body monitoring device of the mobile terminal are respectively used to perform the corresponding steps, after the depth image of the target body is collected, in the above method embodiments; the specific implementation process is as described in the above method embodiments and is not repeated here.
Referring to Fig. 8, Fig. 8 is a structural diagram of one embodiment of the mobile terminal of the present invention. In this embodiment, the mobile terminal 80 includes an image acquisition device 81, a processor 82 and a memory 83 that are connected to each other.
The image acquisition device 81 is used to collect the depth image of the target body and output it to the processor 82. Specifically, the image acquisition device 81 may include two cameras arranged to imitate human eyes, so as to collect the depth image in the binocular-vision manner of the above method embodiments; the image acquisition device 81 may also, as shown in Fig. 2, collect the depth image in the structured-light manner of the above method embodiments; the image acquisition device 81 may also include an invisible light source and an invisible-light camera, so as to collect the depth image in the TOF (time-of-flight) manner of the above method embodiments. Of course, the image acquisition device 81 may also have other structures, which are not limited here.
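For the binocular variant, the per-pixel depth follows the standard pinhole stereo relation Z = f·b/d (focal length in pixels, baseline in meters, disparity in pixels); structured-light and TOF collectors deliver the same kind of per-pixel depth map by other means. A minimal sketch of the relation:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of one pixel from a rectified stereo pair via Z = f * b / d.

    focal_px     : focal length in pixels
    baseline_m   : distance between the two cameras in meters
    disparity_px : horizontal pixel offset of the matched point
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a 700 px focal length and a 6 cm baseline, a 21 px disparity corresponds to a point 2 m from the cameras; larger disparities mean closer points.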
The memory 83 is used to store computer instructions and provide them to the processor 82, and can also store the depth image collected by the image acquisition device 81 and the data required during processing by the processor 82, such as the preset classifier and regression model used to calculate sex and body weight, and the formulas in the above method embodiments.
The processor 82 executes the computer instructions and is used to obtain the human body information of the target body according to the depth image, wherein the human body information includes at least one of height, body weight and sex.
Optionally, the processor 82 is specifically used to extract the image data related to the human body information from the depth image, and obtain the human body information of the target body using the extracted image data.
Optionally, when the human body information includes height, the processor 82 is specifically used to: extract the image data related to height from the depth image, the height-related image data including the first three-dimensional space coordinate corresponding to the head peak pixel of the target body, the second three-dimensional space coordinate corresponding to the left thigh peak pixel, the third three-dimensional space coordinate corresponding to the right thigh peak pixel, the fourth three-dimensional space coordinate corresponding to the left foot lowest-point pixel, and the fifth three-dimensional space coordinate corresponding to the right foot lowest-point pixel; calculate, according to the extracted three-dimensional space coordinates, a first vector pointing from the midpoint of the line between the left thigh peak and the right thigh peak to the head peak, a second vector pointing from the left thigh peak to the left foot lowest point, and a third vector pointing from the right thigh peak to the right foot lowest point; and calculate the height of the target body using the first vector, the second vector and the third vector.
Still further optionally, the processor 82 is further used to substitute the first vector, the second vector and the third vector into the above formula 11 or formula 12 to obtain the height Height of the target body.
Optionally, when the human body information includes sex, the processor 82 is specifically used to: extract the image data related to sex from the depth image, wherein the sex-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest and the contour feature of the hips of the target body; input the extracted sex-related image data and the height of the target body into a preset classifier for classification; and determine the sex of the target body according to the classification result of the preset classifier.
Optionally, when the human body information includes body weight, the processor 82 is specifically used to: extract the image data related to body weight from the depth image, wherein the weight-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest, the contour feature of the hips and the contour feature of the belly of the target body; input the extracted weight-related image data together with the height and sex of the target body into a preset regression model to estimate the body weight; and determine the body weight of the target body according to the estimation result.
Still further optionally, the extracted sex-related/weight-related image data also includes the angle α between the frontal normal of the target body and the optical axis of the camera that captures the depth image.
The angle α can be calculated by substituting the three-dimensional space coordinate P2 (x2, y2, z2) and the three-dimensional space coordinate P3 (x3, y3, z3) into the above formula 13.
Optionally, the contour feature of a given part of the target body is one of the following: the outline of the corresponding part in a section parallel to the plane in which the target body lies; the ratio between the length and the width of that outline in one such section; the mean value of the length-to-width ratios of the outlines in multiple sections parallel to the plane in which the target body lies; or the maximum of the length-to-width ratios of the outlines over multiple such sections.
The method disclosed in the above embodiments of the present invention can be applied in the processor 82 or implemented by the processor 82. The processor 82 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 82 or by instructions in the form of software. The above processor 82 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute each method, step and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 83, and the processor 82 reads the information in the memory and completes the steps of the above method in combination with its hardware.
In the above scheme, the mobile terminal is provided with an image acquisition device capable of collecting depth images; by collecting the depth image of the target body and obtaining the human body information of the target body from the depth image, the mobile terminal achieves monitoring of the human body information. Moreover, the mobile terminal collects the human body information from a depth image rather than a color image. Since the depth image contains three-dimensional information, the monitored human body information is more accurate; and since the depth image characterizes depth information and is not affected by ambient light, the monitoring of the human body information is not limited by ambient light, i.e. no environment interferes with the monitoring of the human body information, which improves the stability of human body information collection.
The above are only embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, and any direct or indirect application in other related technical fields, are likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A mobile terminal, characterized by comprising:
an image acquisition device for collecting a depth image of a target body;
a processor for obtaining human body information of the target body according to the depth image;
wherein the human body information includes at least one of height, body weight and sex.
2. The mobile terminal according to claim 1, characterized in that the processor is specifically used to extract image data related to the human body information from the depth image, and obtain the human body information of the target body using the extracted image data.
3. The mobile terminal according to claim 2, characterized in that, when the human body information includes height, the processor is specifically used to:
extract the image data related to the height from the depth image, the height-related image data including a first three-dimensional space coordinate corresponding to the head peak pixel of the target body, a second three-dimensional space coordinate corresponding to the left thigh peak pixel, a third three-dimensional space coordinate corresponding to the right thigh peak pixel, a fourth three-dimensional space coordinate corresponding to the left foot lowest-point pixel, and a fifth three-dimensional space coordinate corresponding to the right foot lowest-point pixel;
calculate, according to the extracted three-dimensional space coordinates, a first vector pointing from the midpoint of the line between the left thigh peak and the right thigh peak to the head peak, a second vector pointing from the left thigh peak to the left foot lowest point, and a third vector pointing from the right thigh peak to the right foot lowest point;
calculate the height of the target body using the first vector, the second vector and the third vector.
4. The mobile terminal according to claim 3, characterized in that the processor is further used to substitute the first vector, the second vector and the third vector into the following formula 1 or formula 2 to obtain the height Height of the target body,
5. The mobile terminal according to claim 2, characterized in that, when the human body information includes sex, the processor is specifically used to:
extract the image data related to sex from the depth image, wherein the sex-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest and the contour feature of the hips of the target body;
input the extracted sex-related image data and the height of the target body into a preset classifier for classification, and determine the sex of the target body according to the classification result of the preset classifier.
6. The mobile terminal according to claim 2, characterized in that, when the human body information includes body weight, the processor is specifically used to:
extract the image data related to body weight from the depth image, wherein the weight-related image data includes the contour feature of the face, the contour feature of the shoulders, the contour feature of the chest, the contour feature of the hips and the contour feature of the belly of the target body;
input the extracted weight-related image data together with the height and sex of the target body into a preset regression model to estimate the body weight, and determine the body weight of the target body according to the estimation result.
7. The mobile terminal according to claim 5 or 6, characterized in that the extracted sex-related/weight-related image data further includes the angle α between the frontal normal of the target body and the optical axis of the camera that captures the depth image.
8. The mobile terminal according to claim 1, characterized by further comprising a display for displaying the human body information of the target body and/or preset data matched with the human body information.
9. A human body monitoring method for a mobile terminal, characterized by comprising:
collecting a depth image of a target body using an image acquisition device;
obtaining human body information of the target body according to the depth image;
wherein the human body information includes at least one of height, body weight and sex.
10. A human body monitoring device for a mobile terminal, characterized by comprising:
an acquisition module for obtaining a depth image of a target body collected by an image acquisition device of the mobile terminal;
a computing module for calculating human body information of the target body according to the depth image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610850221.7A CN106529400A (en) | 2016-09-26 | 2016-09-26 | Mobile terminal and human body monitoring method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610850221.7A CN106529400A (en) | 2016-09-26 | 2016-09-26 | Mobile terminal and human body monitoring method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106529400A true CN106529400A (en) | 2017-03-22 |
Family
ID=58344264
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610850221.7A Pending CN106529400A (en) | 2016-09-26 | 2016-09-26 | Mobile terminal and human body monitoring method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106529400A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107256565A (en) * | 2017-05-19 | 2017-10-17 | 安徽信息工程学院 | The measuring method and system of human body predominant body types parameter based on Kinect |
CN107296611A (en) * | 2017-07-05 | 2017-10-27 | 深圳市泰衡诺科技有限公司 | A kind of height measurement method and device based on intelligent terminal |
CN107515844A (en) * | 2017-07-31 | 2017-12-26 | 广东欧珀移动通信有限公司 | Font method to set up, device and mobile device |
CN107727220A (en) * | 2017-10-11 | 2018-02-23 | 上海展扬通信技术有限公司 | A kind of human body measurement method and body measurement system based on intelligent terminal |
CN108416253A (en) * | 2018-01-17 | 2018-08-17 | 深圳天珑无线科技有限公司 | Avoirdupois monitoring method, system and mobile terminal based on facial image |
CN109141248A (en) * | 2018-07-26 | 2019-01-04 | 深源恒际科技有限公司 | Pig weight measuring method and system based on image |
WO2020078111A1 (en) * | 2018-10-17 | 2020-04-23 | 京东数字科技控股有限公司 | Weight measurement method and device, and computer readable storage medium |
CN115426432A (en) * | 2022-10-28 | 2022-12-02 | 荣耀终端有限公司 | Method and system for evaluating functional fitness, electronic device, and readable medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102106719A (en) * | 2009-12-24 | 2011-06-29 | 财团法人工业技术研究院 | Health information analysis method adopting image recognition technology and system utilizing method |
CN102657532A (en) * | 2012-05-04 | 2012-09-12 | 深圳泰山在线科技有限公司 | Height measuring method and device based on body posture identification |
CN104679831A (en) * | 2015-02-04 | 2015-06-03 | 腾讯科技(深圳)有限公司 | Method and device for matching human model |
CN104881642A (en) * | 2015-05-22 | 2015-09-02 | 海信集团有限公司 | Method and device for content pushing, and equipment |
CN104966284A (en) * | 2015-05-29 | 2015-10-07 | 北京旷视科技有限公司 | Method and equipment for acquiring object dimension information based on depth data |
CN105519102A (en) * | 2015-03-26 | 2016-04-20 | 北京旷视科技有限公司 | Video monitoring method, video monitoring system and computer program product |
CN105700488A (en) * | 2014-11-27 | 2016-06-22 | 中国移动通信集团公司 | Processing method and system of target human body activity information |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102106719A (en) * | 2009-12-24 | 2011-06-29 | 财团法人工业技术研究院 | Health information analysis method adopting image recognition technology and system utilizing method |
CN102657532A (en) * | 2012-05-04 | 2012-09-12 | 深圳泰山在线科技有限公司 | Height measuring method and device based on body posture identification |
CN105700488A (en) * | 2014-11-27 | 2016-06-22 | 中国移动通信集团公司 | Processing method and system of target human body activity information |
CN104679831A (en) * | 2015-02-04 | 2015-06-03 | 腾讯科技(深圳)有限公司 | Method and device for matching human model |
CN105519102A (en) * | 2015-03-26 | 2016-04-20 | 北京旷视科技有限公司 | Video monitoring method, video monitoring system and computer program product |
CN104881642A (en) * | 2015-05-22 | 2015-09-02 | 海信集团有限公司 | Method and device for content pushing, and equipment |
CN104966284A (en) * | 2015-05-29 | 2015-10-07 | 北京旷视科技有限公司 | Method and equipment for acquiring object dimension information based on depth data |
Non-Patent Citations (2)
Title |
---|
Xu Huanghao: "Research on Motion Capture and Human Body Model Measurement Algorithms Based on Kinect", China Master's Theses Full-text Database, Information Science and Technology Series * |
Li Hongbo et al.: "Human Body Recognition Analysis Based on Kinect Depth Images", Digital Communication * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107256565A (en) * | 2017-05-19 | 2017-10-17 | 安徽信息工程学院 | The measuring method and system of human body predominant body types parameter based on Kinect |
CN107296611A (en) * | 2017-07-05 | 2017-10-27 | 深圳市泰衡诺科技有限公司 | A kind of height measurement method and device based on intelligent terminal |
CN107515844A (en) * | 2017-07-31 | 2017-12-26 | 广东欧珀移动通信有限公司 | Font method to set up, device and mobile device |
CN107515844B (en) * | 2017-07-31 | 2021-03-16 | Oppo广东移动通信有限公司 | Font setting method and device and mobile device |
CN107727220A (en) * | 2017-10-11 | 2018-02-23 | 上海展扬通信技术有限公司 | A kind of human body measurement method and body measurement system based on intelligent terminal |
CN108416253A (en) * | 2018-01-17 | 2018-08-17 | 深圳天珑无线科技有限公司 | Avoirdupois monitoring method, system and mobile terminal based on facial image |
CN109141248A (en) * | 2018-07-26 | 2019-01-04 | 深源恒际科技有限公司 | Pig weight measuring method and system based on image |
CN109141248B (en) * | 2018-07-26 | 2020-09-08 | 深源恒际科技有限公司 | Pig weight measuring and calculating method and system based on image |
WO2020078111A1 (en) * | 2018-10-17 | 2020-04-23 | 京东数字科技控股有限公司 | Weight measurement method and device, and computer readable storage medium |
CN115426432A (en) * | 2022-10-28 | 2022-12-02 | 荣耀终端有限公司 | Method and system for evaluating functional fitness, electronic device, and readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106529400A (en) | Mobile terminal and human body monitoring method and device | |
CN106529399A (en) | Human body information acquisition method, device and system | |
CN107748869B (en) | 3D face identity authentication method and device | |
CN107633165B (en) | 3D face identity authentication method and device | |
CN111460875B (en) | Image processing method and apparatus, image device, and storage medium | |
CN101715581B (en) | Volume recognition method and system | |
CN101162524B (en) | Image-processing apparatus and method | |
CN106415445A (en) | Technologies for viewer attention area estimation | |
CN104036488B (en) | Binocular vision-based human body posture and action research method | |
CN103099602B (en) | Based on the physical examinations method and system of optical identification | |
CN108256504A (en) | A kind of Three-Dimensional Dynamic gesture identification method based on deep learning | |
CN106454287A (en) | Combined camera shooting system, mobile terminal and image processing method | |
CN106504283A (en) | Information broadcasting method, apparatus and system | |
CN104035557B (en) | Kinect action identification method based on joint activeness | |
CN105740780A (en) | Method and device for human face in-vivo detection | |
CN107016697B (en) | A kind of height measurement method and device | |
CN110074788B (en) | Body data acquisition method and device based on machine learning | |
CN105869166B (en) | A kind of human motion recognition method and system based on binocular vision | |
CN110188700B (en) | Human body three-dimensional joint point prediction method based on grouping regression model | |
CN107656619A (en) | A kind of intelligent projecting method, system and intelligent terminal | |
CN109938737A (en) | A kind of human body body type measurement method and device based on deep learning critical point detection | |
CN106256394A (en) | The training devices of mixing motion capture and system | |
CN113239797B (en) | Human body action recognition method, device and system | |
JP2019096113A (en) | Processing device, method and program relating to keypoint data | |
CN106843507A (en) | A kind of method and system of virtual reality multi-person interactive |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170322 |