CN108209926A - Human Height measuring system based on depth image - Google Patents
- Publication number
- CN108209926A CN108209926A CN201810015951.4A CN201810015951A CN108209926A CN 108209926 A CN108209926 A CN 108209926A CN 201810015951 A CN201810015951 A CN 201810015951A CN 108209926 A CN108209926 A CN 108209926A
- Authority
- CN
- China
- Prior art keywords
- cameras
- image
- measured
- human height
- ground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1072—Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1079—Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Abstract
The invention discloses a human height measuring system based on depth images, comprising: a Kinect 2.0 camera that tracks the skeleton of the measured person in real time and acquires depth images of the measured person; a gravity acceleration sensor embedded in the Kinect 2.0 camera to detect the position of the ground; a host computer that processes the depth images uploaded by the Kinect 2.0 camera, recognizes the gestures of the measured person, and calculates the human height information; a single-chip microprocessor that converts the human height information uploaded by the host computer into corresponding text and audio; a display device; a voice device; and an auxiliary circuit. The beneficial effects of the invention are: 1. The system is operated by gesture control and is extremely easy to use. 2. The Kinect 2.0 camera can extract a moving person's skeleton information; by collecting skeleton information at different heights and distances multiple times, taking the highest joint point in the skeleton, and performing curve fitting, the measurement result is made more accurate. 3. The gravity acceleration sensor is used for correction, further improving the accuracy of the measurement result.
Description
Technical field
The present invention relates to human height measuring systems, and in particular to a human height measuring system based on depth images, belonging to the technical field of measuring equipment.
Background art
At present, the height measurement method generally used at home and abroad is the traditional contact method, i.e., measurement with a stadiometer.
This traditional method is comparatively laborious, and it places relatively high demands on both the measured person and the measurer.
The measured person must take off shoes and hat and stand at attention on the base plate of the stadiometer, with the heels, the sacral region, and the area between the two shoulder blades pressed against the column of the stadiometer.
The measurer stands to the left or right of the measured person, first adjusts the head so that the upper edge of the tragus is level with the lowest point of the lower edge of the eye socket, then slides the horizontal board of the stadiometer down to the crown of the measured person's head with appropriate pressure, and only then can the height be read.
During measurement, if the measured person does not keep the body upright, or if the occiput, the spine between the two shoulder blades, and the sacral region are not all pressed against the column of the stadiometer as specified, or if any similar deviation from the standard posture occurs, the measurement result will be inaccurate.
Of course, besides the traditional contact method, there are also non-contact methods of measuring height, for example measurement from RGB images; however, such methods are easily affected by the external environment, e.g., by illumination, and measure inaccurately.
Summary of the invention
To remedy the deficiencies of the prior art, the purpose of the present invention is to provide a human height measuring system based on depth images that is easy to use and measures accurately.
In order to achieve the above goal, the present invention adopts the following technical scheme:
A human height measuring system based on depth images, characterized by comprising:
Kinect 2.0 camera: tracks the skeleton of the measured person in real time and acquires depth images of the measured person;
Gravity acceleration sensor: embedded in the Kinect 2.0 camera, detects the position of the ground;
Host computer: processes the depth images uploaded by the Kinect 2.0 camera, recognizes the gestures of the measured person, and calculates the human height information;
Single-chip microprocessor: converts the human height information uploaded by the host computer into corresponding text and audio;
Display device: displays the text recording the human height information output by the single-chip microprocessor;
Voice device: announces the human height information output by the single-chip microprocessor;
Auxiliary circuit: provides power for the single-chip microprocessor.
In the aforementioned human height measuring system based on depth images, the host computer is a PC with the following basic configuration: Windows 10, a GPU with CUDA Compute Capability 3.0 or above, Visual Studio 2015 and Python 3.5.2 installed, the Windows version of TensorFlow, and the OpenCV 2.4.9 library.
In the aforementioned human height measuring system based on depth images, the host computer uses the Faster R-CNN algorithm to recognize the gestures of the measured person.
In the aforementioned human height measuring system based on depth images, the method by which the host computer calculates the human height information is:
(1) denoise using Gaussian filtering;
(2) using the library functions provided by Microsoft for the Kinect 2.0 camera, obtain the three-dimensional information of the 20 skeleton joint points of the human body;
(3) select the three-dimensional image coordinate P(x_hd, y_hd, z_hd) of the head joint point as the highest-point three-dimensional image coordinate;
(4) calculate the distance L(d) from the highest-point three-dimensional image coordinate to the ground; this distance is the preliminary human height;
(5) obtain the ground equation using the value of the gravity acceleration sensor, specifically as follows:
1) calculate the depth information in the infrared image
The depth information inside the infrared image is expressed in the Kinect three-dimensional coordinate frame of the infrared image: the X-Y coordinates span the plane of the infrared image, and Z is the depth. The depth information satisfies the following equation:
d = b·f / ((1/8)·(doff − kd)) (1)
where d is the depth of each pixel in the infrared image, b is the horizontal baseline of the Kinect 2.0 camera, f is the focal length of the Kinect 2.0 camera, kd is the Kinect pixel disparity, and doff is the disparity offset of the Kinect 2.0 camera;
2) register the infrared image with the color image
The relationship between the geodetic coordinate system and the infrared image coordinate system is as follows:
x = (i − u0)·d/f,  y = (j − v0)·d/f,  z = d (2)
where (i, j) is the coordinate in the infrared image, (u0, v0) is the optical center of the infrared image, and (x, y, z) is the three-dimensional coordinate in the geodetic coordinate system, whose origin is the optical center of the infrared camera;
3) calculate the ground
When the Kinect 2.0 camera moves over the ground, assume o_g-(x_g, y_g, z_g) is the geodetic coordinate system, whose origin is the center of the Kinect 2.0 camera; the vector g is (0, 0, 1); when the camera moves, its vector perpendicular to the ground is g′-(a_g′, b_g′, c_g′), and this vector always stays consistent with the geodetic coordinate system; h is the vertical distance from the center of the Kinect 2.0 camera to the ground. Thus, from the vector g′-(a_g′, b_g′, c_g′) and the vertical distance h, the ground equation can be obtained:
a_g′·x + b_g′·y + c_g′·z + d = 0 (3)
4) remove abnormal points
Abnormal points are removed with the RANSAC algorithm, so that the ground is detected accurately;
(6) fit the height function curve
Suppose the actual height of a person is H; then there is an error function that varies with distance:
error(d) = L(d) − H (4)
Measure L(d) with the same person standing at arbitrary positions within the range of 1.5 m to 3.1 m from the Kinect 2.0 camera, and fit the error curve error(d) from multiple groups of d and L(d);
(7) obtain accurate height information
Using the fitted error curve error(d) in an actual measurement, the actual height H_c of the measured person is:
H_c = L(d) − error(d) (5).
In the aforementioned human height measuring system based on depth images, the single-chip microprocessor is an STC89C52 processor.
In the aforementioned human height measuring system based on depth images, the single-chip microprocessor sends the height information to the display device and the voice device via a serial-port-to-Bluetooth communication link.
The beneficial effects of the invention are:
1. The measured person stands 1.5 m to 4.5 m in front of the Kinect 2.0 camera, maintains the attention posture, and only needs to perform the palm-opening and fist-clenching actions, without any other excessive movements; the Kinect 2.0 camera acquires the depth images of the measured person, and after processing by the host computer, the height data of the measured person can be measured quickly, conveniently, and accurately, so the system is extremely easy to use;
2. Based on depth images, the Kinect 2.0 camera can extract the skeleton information of a moving person; skeleton information is collected multiple times at different heights and distances, the highest joint point in the skeleton is used, and curve fitting is performed, making the measurement result more accurate;
3. The normal of the Kinect 2.0 camera plane may not be perpendicular to the ground, which introduces an error into the measured height; we use the gravity acceleration sensor to adjust and correct the direction in which the Kinect 2.0 camera captures pictures, further improving the accuracy of the measurement result.
Description of the drawings
Fig. 1 is a schematic diagram of the composition of the human height measuring system based on depth images of the present invention;
Fig. 2 is a schematic diagram of the joint points that the Kinect 2.0 camera can capture;
Fig. 3 is a flow chart of height measurement by the human height measuring system based on depth images of the present invention;
Fig. 4 is a flow chart of the host computer detecting and recognizing the palm gesture;
Fig. 5 is a flow chart of the host computer detecting and recognizing the fist gesture;
Fig. 6 is a schematic diagram of the structure of the Faster R-CNN framework;
Fig. 7 is a schematic diagram of the structure of the RPN framework;
Fig. 8 is a schematic diagram of the anchors at 9 different positions;
Fig. 9 is the convolution flow chart of Faster R-CNN;
Fig. 10 is the training effect diagram obtained after training on Kinect 2.0 camera images with the Faster R-CNN algorithm;
Fig. 11 is the coordinate relation diagram of the Kinect 2.0 camera and the infrared camera based on the geodetic coordinate system;
Fig. 12 is the movement model diagram of the Kinect 2.0 camera;
Fig. 13 is the ground map detected before correction (white area);
Fig. 14 is the ground map detected after correction (black area).
Specific embodiments
The present invention is introduced in detail below with reference to the drawings and specific embodiments.
Part I: composition of the human height measuring system based on depth images
Referring to Fig. 1, the human height measuring system based on depth images of the present invention comprises: a Kinect 2.0 camera, a gravity acceleration sensor, a host computer, a single-chip microprocessor, a display device, a voice device, and an auxiliary circuit.
1. Kinect 2.0 camera
The Kinect 2.0 camera has skeleton tracking and face recognition functions; it can track the skeleton of the measured person in real time and is used to acquire the depth images of the measured person.
2. Gravity acceleration sensor
The Kinect 2.0 camera can be tilted up and down, so the pitch angle is not necessarily the same each time a person is measured. Different angles give the camera different shooting viewpoints and therefore different pictures. Moreover, for people of different heights, the Kinect 2.0 camera also needs its angle adjusted so that the captured picture is as accurate as possible and the error is minimized.
In order to keep the Kinect 2.0 camera aimed straight ahead, we embed a gravity acceleration sensor in the Kinect 2.0 camera and use it to detect the position of the ground, so that the angle of the Kinect 2.0 camera can be adjusted.
3. Host computer
The host computer processes the depth images uploaded by the Kinect 2.0 camera, recognizes the gestures of the measured person, and calculates accurate human height information.
The basic configuration of the host computer is: Windows 10, a GPU with CUDA Compute Capability 3.0 or above, Visual Studio 2015 and Python 3.5.2 installed, the Windows version of TensorFlow, and the OpenCV 2.4.9 library.
In the present invention, the host computer can detect and recognize the different gestures of the measured person (open palm and clenched fist). By changing gestures (open palm and clenched fist) we can control the Kinect 2.0 camera to start and stop acquiring depth images, and control the host computer and the lower computer (the single-chip microprocessor) to operate in turn.
The flow by which the host computer detects and recognizes the different gestures (open palm and clenched fist) is shown in Fig. 4 and Fig. 5.
When the host computer detects the open-palm gesture, information input starts; the host computer processes the acquired data stream and calculates accurate human height information. After several seconds, when the host computer detects the fist gesture, information input stops and the host computer stops working, but the lower computer starts working: the finally calculated human height information is displayed by the display device and announced by the voice broadcasting device. The entire control process is completed automatically.
The host computer detects and recognizes gestures using the Faster R-CNN (Faster Region-based Convolutional Neural Networks) algorithm.
We give a detailed introduction to the Faster R-CNN algorithm below.
Input image: the size of the image is unrestricted and does not affect the recognition ability.
Faster R-CNN is a deep learning method in which a region proposal network (RPN) and the detection network share full-image convolutional features, so that generating region proposals costs almost no time. The RPN is a fully convolutional network that simultaneously predicts object boundaries and scores at each position; it is trained end to end and generates high-quality region proposal boxes.
The Faster R-CNN framework structure is shown in Fig. 6. The implementation steps of the algorithm based on this framework are: input image → generate candidate regions → feature extraction → classification → position refinement.
In these steps, an arbitrary input image passes through multiple convolutional layers to obtain a feature map, and classification is then performed on the feature map by the RPN.
The core idea of the RPN: region proposals (RP) are generated directly with a CNN convolutional neural network (hereinafter referred to as the CNN network). The method used is essentially a sliding window (sliding only once over the last convolutional layer); thanks to the anchor mechanism and bounding-box regression, RPs of multiple scales and multiple aspect ratios can be obtained.
The RPN network is also a fully convolutional network trained end to end for the task of generating detection proposal boxes, and it can simultaneously predict the boundary and the score of an object. The RPN adds only 2 extra convolutional layers on top of the CNN network: the fully convolutional layers cls and reg, where the reg layer predicts the coordinates (x, y, w, h) corresponding to the candidate-region anchor, i.e., the top-left corner coordinates (x, y) and the width w and height h of the candidate region; the cls layer judges whether the candidate region is foreground (an object) or background (not an object).
The main ideas of the RPN network are as follows:
(1) Encode the position of each feature map into a feature vector: 256 dimensions for the ZF network, 512 dimensions for the VGG16 network. The feature-extraction part of the VGG16 network consists of 13 convolutional layers, i.e., 13 shareable convolutional layers; the feature-extraction part of the ZF network consists of 5 shareable convolutional layers.
(2) Output an objectness score and 9 RPs at each position, i.e., at each convolutional map position output the objectness scores and regressed boundaries of the k = 3*3 = 9 region proposals with multiple scales (3 kinds) and aspect ratios (3 kinds).
(3) Once the RPN network is built, its input can be a picture of arbitrary size, but there is a minimum resolution requirement, e.g., 228*228 for VGG16. If feature extraction is done with VGG16, the composition of the RPN network can be expressed as VGG16 + RPN. The built network is shown in Fig. 7: the center point of the convolution kernel corresponds to a position (point) in the original image, which is taken as the center point of the anchors, and anchors of multiple scales and multiple aspect ratios are drawn around it in the original image; the anchors therefore lie in the original image, not on the conv feature map. Referring to Fig. 8, three scales (areas) {128², 256², 512²} and three ratios {1:1, 1:2, 2:1} give 3*3 = 9 anchors at each position, i.e., 9 different anchors are used to detect objects.
The convolution flow of Faster R-CNN is shown in Fig. 9. Any image is normalized to a picture of size 600*1000; after CNN convolution, the last CNN layer (conv5) yields a feature map of size 40*60. If the feature map size is W*H, then W*H*k anchors are needed, i.e., 40*60*9 ≈ 20000 anchors.
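The 9 anchors described above (3 areas times 3 aspect ratios) can be sketched as follows; `make_anchors` is an illustrative helper, not code from the patent:

```python
import numpy as np

def make_anchors(scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Return 9 anchors as (w, h) pairs: 3 areas x 3 aspect ratios r = w/h."""
    anchors = []
    for s in scales:
        area = float(s * s)            # anchor area, e.g. 128^2
        for r in ratios:
            h = np.sqrt(area / r)      # solve w*h = area with w = r*h
            w = r * h
            anchors.append((w, h))
    return anchors

anchors = make_anchors()
# A 40x60 conv5 feature map then carries 40 * 60 * 9 = 21600 anchors in total.
total = 40 * 60 * len(anchors)
```

Each (w, h) pair is centered on a feature-map position mapped back into the original image, which is why the anchor count scales with the conv5 resolution.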
Once the network is built, to make it optimal a loss function must be established and the network optimized. The Faster R-CNN loss function follows the multi-task loss definition and minimizes the objective function. In the Faster R-CNN algorithm, the loss function for one image is defined as:
L({p_i}, {t_i}) = (1/N_cls)·Σ_i L_cls(p_i, p_i*) + λ·(1/N_reg)·Σ_i p_i*·L_reg(t_i, t_i*)
where:
i is the index of an anchor in a mini-batch;
p_i is the predicted probability that anchor i is a target;
p_i* is the ground-truth label;
λ is the balance weight;
N_cls is the mini-batch sample count;
N_reg is the number of anchor positions used for normalization;
L_cls(p_i, p_i*) is the log loss over the two classes (target vs. non-target);
L_reg(t_i, t_i*) is the regression loss, computed as L_reg(t_i, t_i*) = R(t_i − t_i*), where R is the smooth L1 function;
t_i = {t_x, t_y, t_w, t_h} is a vector representing the 4 parameterized coordinates of the predicted bounding box;
t_i* is the coordinate vector of the ground-truth bounding box corresponding to a positive anchor.
The factor p_i*·L_reg means that the regression loss is active only for foreground anchors (p_i* = 1) and vanishes otherwise (p_i* = 0).
The outputs of the cls and reg layers consist of {p_i} and {t_i} respectively; the two terms are normalized by N_cls and N_reg and weighted by the balance weight λ. In the early implementation and released code, λ = 10, the normalization value N_cls = 256 is the mini-batch size, and the normalization value N_reg ≈ 2400 (40*60) is the number of anchor positions, so the cls and reg terms carry approximately equal weight.
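The "approximately equal weight" claim can be checked numerically with the stated constants (λ = 10, N_cls = 256, N_reg ≈ 2400):

```python
# With lambda = 10, N_cls = 256 and N_reg = 40*60 = 2400, compare the
# per-anchor weights of the classification and regression loss terms.
lam = 10.0
n_cls = 256
n_reg = 40 * 60                     # anchor positions on the conv5 map

cls_weight = 1.0 / n_cls            # ~0.0039 per sample
reg_weight = lam / n_reg            # ~0.0042 per anchor position
ratio = reg_weight / cls_weight     # ~1.07, i.e. nearly balanced
```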
In the present invention, we use gestures as control signals: measurement starts when the palm is opened and ends when the palm is closed (fist). The training samples for the Faster R-CNN algorithm cover a variety of gesture states: palm open and palm closed.
We train on the RGB pictures acquired by the Kinect 2.0 camera using the Faster R-CNN algorithm; the training effect is shown in Fig. 10, where the probability of recognizing the palm-open state is 0.925. The recognition probability can reach up to 1.
4. Single-chip microprocessor
The single-chip microprocessor (STC89C52) is the lower computer; it receives the gesture recognition instructions and the height information produced by the host computer through the serial port, and then converts the human height information uploaded by the host computer into corresponding text and audio.
5. Display device
The display device displays the text recording the human height information output by the single-chip microprocessor.
The display device is signal-connected to the single-chip microprocessor; the height information recognized by the host computer is converted by the single-chip microprocessor and then shown on the display device via the serial-port-to-Bluetooth communication link.
6. Voice device
The voice device announces the human height information output by the single-chip microprocessor.
The voice device is signal-connected to the single-chip microprocessor; the height information recognized by the host computer is converted by the single-chip microprocessor and then announced by the voice device via the serial-port-to-Bluetooth communication link.
7. Auxiliary circuit
The auxiliary circuit provides power for the single-chip microprocessor and includes a power module, a microprocessor reset circuit, etc.
Part II: measuring principle of the human height measuring system based on depth images
The skeleton information extracted by the Kinect 2.0 camera includes the spatial three-dimensional coordinates (x, y, z) of each joint point in the skeleton, in meters (m). The coordinates are distributed as follows: viewed from the Kinect 2.0 camera plane, with the Kinect 2.0 camera at the center, the X axis is positive to the left and negative to the right; the Y axis is positive upward and negative downward; the Z axis is perpendicular to the XY plane and represents distance: the closer to the Kinect 2.0 camera, the smaller the value; the farther from the Kinect 2.0 camera, the larger the value.
After the Kinect 2.0 camera opens the skeleton data stream, if someone appears within the recognizable effective range of the Kinect 2.0 camera, the camera will quickly and automatically detect this person and then track the skeleton in real time. The specific steps are as follows:
(1) the Kinect 2.0 camera first recognizes the 20 joint points of the human model (see Fig. 2), then obtains the coordinate information of these joint points in the skeleton coordinate system and transforms this information into a two-dimensional coordinate system;
(2) the Kinect 2.0 camera transforms the skeleton three-dimensional coordinate system into the two-dimensional screen coordinate system according to a coordinate transformation;
(3) the Kinect 2.0 camera builds the human skeleton according to the skeleton topology and visualizes it.
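Step (2), mapping a skeleton-space joint into screen coordinates, can be sketched with a pinhole projection. The intrinsics below (fx, fy, cx, cy) are illustrative values, not calibrated Kinect 2.0 parameters, and the axis flips follow the sign convention stated above (X positive to the left, Y positive upward):

```python
def skeleton_to_screen(joint, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Project a skeleton-space joint (x, y, z) in meters onto 2D screen
    pixels with a pinhole model. X points left and Y up in the skeleton
    frame, so both axes are flipped for image coordinates."""
    x, y, z = joint
    u = cx - fx * x / z   # X positive to the left -> flip for pixels
    v = cy - fy * y / z   # Y positive upward -> flip for pixels
    return u, v

u, v = skeleton_to_screen((0.0, 0.0, 2.0))  # a point on the optical axis
```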
Part III: flow of measuring human height with the human height measuring system based on depth images
Referring to Fig. 3, the flow by which the human height measuring system based on depth images of the present invention measures human height is as follows:
First, the three-dimensional image information of the human model is obtained with the Kinect 2.0 camera.
Then, the three-dimensional image information of the human model is input into the host computer (PC), which processes the frontal image of the human body and applies a reasonable and accurate method for extracting the highest and lowest points of the human skeleton model; the height of the human model is then obtained through data processing. To obtain more accurate height information, the host computer also performs curve fitting on the obtained data, fitting from the initial Y-axis pixel values to the accurate height of the human model.
Referring to Fig. 3, the flow by which the host computer processes the depth images uploaded by the Kinect 2.0 camera is as follows:
(1) Denoising
Denoising is performed using Gaussian filtering.
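A minimal sketch of this denoising step, implemented here as a separable Gaussian blur in plain NumPy (equivalent in effect to `cv2.GaussianBlur` with a 5x5 kernel from the OpenCV 2.4.9 library named above); the frame contents are synthetic test data:

```python
import numpy as np

def gaussian_kernel1d(sigma=1.0, radius=2):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_denoise(depth, sigma=1.0, radius=2):
    """Separable Gaussian blur of a depth frame (millimeters):
    convolve each row with the 1D kernel, then each column."""
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(depth.astype(float), radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

depth = np.full((9, 9), 2000.0)   # flat synthetic depth patch, 2 m everywhere
depth[4, 4] = 4000.0              # one isolated noise spike
smooth = gaussian_denoise(depth)  # spike is strongly attenuated
```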
(2) Obtaining three-dimensional human skeleton information
Using the library functions provided by Microsoft for the Kinect 2.0 camera, the three-dimensional information of the 20 skeleton joint points of the human body is obtained.
(3) Selecting the highest-point three-dimensional image coordinate
The three-dimensional image coordinate P(x_hd, y_hd, z_hd) of the head joint point is selected as the highest-point three-dimensional image coordinate.
(4) Initial calculation of the human height
The distance L(d) from the highest-point three-dimensional image coordinate to the ground is calculated; this distance is the preliminary human height.
Since the selected head joint point is located at the forehead, it carries a certain error for height measurement: the distance from the three-dimensional coordinate of the head joint point to the ground is the height distance L(d), and if L(d) is calculated directly there is a certain error, so the distance L(d) is only the preliminary human height.
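Once the ground equation of formula (3) is available, L(d) is the point-to-plane distance from the head joint. A sketch with hypothetical numbers (camera 1.2 m above a horizontal floor, head joint 0.5 m above the camera center):

```python
import math

def height_above_plane(p, plane):
    """Distance L(d) from the highest joint P(x_hd, y_hd, z_hd) to the
    ground plane a*x + b*y + c*z + d = 0 of formula (3): this is the
    preliminary body height before error-curve correction."""
    a, b, c, d = plane
    x, y, z = p
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Floor with normal (0, 1, 0) passing 1.2 m below the camera origin.
L = height_above_plane((0.0, 0.5, 2.2), (0.0, 1.0, 0.0, 1.2))  # 1.7 m
```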
(5) Obtaining the ground equation using the value of the gravity acceleration sensor
In order to detect the position of the ground (obtain the ground equation), we first introduce the coordinate relationship between the Kinect 2.0 camera and the infrared camera in the geodetic coordinate system.
The depth information inside the infrared image is expressed in the Kinect three-dimensional coordinate frame of the infrared image: the X-Y coordinates span the plane of the infrared image, and Z is the depth. The depth d of each pixel in the infrared image is given by:
d = b·f / ((1/8)·(doff − kd)) (1)
where b is the horizontal baseline of the Kinect 2.0 camera, f is the focal length of the Kinect 2.0 camera, kd is the Kinect pixel disparity, and doff is the disparity offset of the Kinect 2.0 camera; kd is given in pixel units, the estimated value of b is about 7.5 cm, and the representative value of doff is 1090.
With the optical center (u0, v0) of the infrared camera as the coordinate origin, the relationship among the three-dimensional coordinate (x, y, z) in the geodetic coordinate system, the coordinate (i, j) in the infrared image, and the optical center (u0, v0) of the infrared camera is:
x = (i − u0)·d/f,  y = (j − v0)·d/f,  z = d (2)
The coordinate relationship between the Kinect 2.0 camera and the infrared camera based on the geodetic coordinate system is shown in Fig. 11.
As shown in Fig. 11, the coordinates of the Kinect 2.0 camera and the coordinates of the infrared camera correspond one to one in the geodetic coordinate system, but there is also a certain scale-transformation relationship between the two; through this scale transformation, the color image obtained by the Kinect 2.0 camera can be converted to the infrared image.
The geodetic coordinate system centered at the origin of the infrared camera built into the Kinect 2.0 camera is established, and the gravity acceleration vector from the sensor built into the Kinect 2.0 camera is taken as the normal vector of the ground plane. Next we introduce the specific method of obtaining the ground using the gravity sensor (based on gravitational acceleration).
When the Kinect 2.0 camera moves over the ground, the movement model of the Kinect 2.0 camera is as shown in Fig. 12, in which the region enclosed by the large rectangle represents the plane over which the Kinect 2.0 camera moves, and the small rectangle and the two circular regions represent the place, at a certain distance from the ground, where the Kinect 2.0 camera is mounted.
In the movement model of the Kinect 2.0 camera, o_g-(x_g, y_g, z_g) is the geodetic coordinate system, whose origin is the center of the Kinect 2.0 camera; the vector g is (0, 0, 1); when the Kinect 2.0 camera moves, it must obtain in real time its vector perpendicular to the ground, g′-(a_g′, b_g′, c_g′), and this vector always stays consistent with the geodetic coordinate system; h is the vertical distance between the center of the Kinect 2.0 camera and the ground.
Thus, from the vector g′-(a_g′, b_g′, c_g′) and the distance h, the ground equation can be obtained:
a_g′·x + b_g′·y + c_g′·z + d = 0 (3)
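Building formula (3) from the gravity direction g′ and the camera height h can be sketched as follows, under the assumption that g′ is the downward unit normal of the floor and the camera origin sits h above it:

```python
import math

def ground_plane(g_prime, h):
    """Ground equation a*x + b*y + c*z + d = 0 (formula (3)) from the
    gravity direction g' and the camera height h above the floor: every
    floor point lies at distance h from the camera origin along g'."""
    a, b, c = g_prime
    n = math.sqrt(a * a + b * b + c * c)
    a, b, c = a / n, b / n, c / n      # normalize the plane normal
    d = -h                             # camera origin is h above the plane
    return a, b, c, d

plane = ground_plane((0.0, -1.0, 0.0), 1.2)  # gravity along -Y, camera 1.2 m up
```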
In order to still detect the position of the ground accurately while the Kinect 2.0 camera is moving or the measured person is moving, we adopt the following processing algorithm:
(1) the depth information in the infrared image is calculated according to formula (1), so that the three-dimensional coordinate of each point in the infrared image can be calculated while the Kinect 2.0 camera moves;
(2) the three-dimensional coordinate of each point in the infrared image is calibrated to the color image;
(3) the ground equation is calculated from the gravity acceleration vector g′, the distance h, and formula (3);
(4) abnormal points are removed with the RANSAC algorithm, so that the ground is detected accurately.
The ground detected before correction is shown in Figure 13: the farther a point lies from the Kinect 2.0 camera, the larger its error. The ground detected after correction is shown in Figure 14, where part of the outliers have been removed.
It can be seen that when the Kinect 2.0 camera moves or its pitch angle changes, the above processing algorithm removes outliers in real time and avoids their interference, so that the ground is detected accurately.
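The outlier-removal step can be sketched with a basic RANSAC plane fit; this is an illustrative NumPy version, not the patent's code, and the point cloud, iteration count, and 2 cm inlier threshold are all assumed:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.02, seed=0):
    """Plane fit with RANSAC: sample 3 points, fit a candidate plane, and
    keep the candidate with the most points within `threshold` of it.
    Points outside the winning plane are the outliers to discard."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:        # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((pts - p0) @ n) < threshold
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Toy cloud: 200 points on the ground plane z = 0 plus 10 floating outliers.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.random(200), rng.random(200), np.zeros(200)])
outliers = np.column_stack([rng.random(10), rng.random(10), np.ones(10)])
mask = ransac_plane(np.vstack([ground, outliers]))   # True = ground point
```

The mask keeps the ground points and rejects the floating outliers, which mirrors the correction from Figure 13 to Figure 14.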
(6) Fitting the height function curve
Assume the actual height of a person is H; the error, which varies with distance, is then:
error(d) = L(d) - H (4)
L(d) is measured with the same person standing at arbitrary positions within 1.5 m to 3.1 m of the Kinect 2.0 camera, and the error curve error(d) is fitted from multiple groups of d and L(d).
(7) Obtaining accurate height information
The fitted error curve error(d) is applied during actual measurement; the actual height Hc of the measured subject is:
Hc = L(d) - error(d) (5)
Next, the single-chip microprocessor converts the height information processed by the host computer into the corresponding text and audio.
Finally, the obtained height information is shown on the display device while, at the same time, the voice device broadcasts it.
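Steps (6) and (7) can be sketched as a simple curve fit; the calibration readings below are hypothetical, and a quadratic shape for error(d) is an assumption:

```python
import numpy as np

# Hypothetical calibration readings: the same person (true height H) stands
# at several distances d in the 1.5 m-3.1 m range and L(d) is recorded.
H = 1.70
d_samples = np.array([1.5, 1.9, 2.3, 2.7, 3.1])
L_samples = np.array([1.72, 1.715, 1.71, 1.72, 1.73])

# Fit error(d) = L(d) - H, per equation (4); a quadratic is assumed here.
error = np.poly1d(np.polyfit(d_samples, L_samples - H, deg=2))

def corrected_height(L, d):
    """Equation (5): Hc = L(d) - error(d)."""
    return L - error(d)
```

At measurement time the raw reading L at distance d is corrected by the fitted curve; with the illustrative data above, the corrected values fall within about a centimetre of H.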
It can be seen that the depth-image-based human height measuring system of the present invention breaks through traditional measurement methods. A measurement only requires the measured subject to stand 1.5 m to 4.5 m in front of the Kinect 2.0 camera at attention; the subject only needs to open the palm and clench the fist, with no other actions required. The Kinect 2.0 camera acquires the depth image of the measured subject, from which the subject's height is calculated, and detects the subject's gestures; the host computer performs gesture recognition based on Faster-RCNN and turns different gestures into control instructions, so the subject's height data can be measured quickly, conveniently, and accurately.
It should be noted that the present invention is not limited by the above embodiment in any way; all technical solutions obtained by means of equivalent replacement or equivalent transformation fall within the protection scope of the present invention.
Claims (6)
1. A human height measuring system based on depth images, characterized by comprising:
a Kinect 2.0 camera: tracks the skeleton of the measured subject in real time and acquires the depth image of the measured subject;
a gravity accelerometer: embedded in the Kinect 2.0 camera, detects the position of the ground;
a host computer: processes the depth image uploaded by the Kinect 2.0 camera, recognizes the gestures of the measured subject, and calculates the human height information;
a single-chip microprocessor: converts the human height information uploaded by the host computer into the corresponding text and audio;
a display device: shows the text recording the human height information output by the single-chip microprocessor;
a voice device: broadcasts the human height information output by the single-chip microprocessor;
an auxiliary circuit: supplies power to the single-chip microprocessor.
2. The human height measuring system based on depth images according to claim 1, characterized in that the host computer is a PC whose basic configuration is: a Windows 10 system, a GPU with CUDA Capability 3.0 or above, Visual Studio 2015 and Python 3.5.2 installed, the Windows version of TensorFlow installed, and the OpenCV 2.4.9 library.
3. The human height measuring system based on depth images according to claim 2, characterized in that the host computer recognizes the gestures of the measured subject using the Faster-RCNN algorithm.
4. The human height measuring system based on depth images according to claim 2, characterized in that the method by which the host computer calculates the human height information is:
(1) denoising by means of Gaussian filtering;
(2) using the library functions that Microsoft provides for the Kinect 2.0 camera to acquire the three-dimensional information of the 20 human skeleton joint points;
(3) selecting the three-dimensional image coordinate P(xhd, yhd, zhd) of the head joint point as the three-dimensional image coordinate of the highest point;
(4) calculating the distance L(d) from the highest point to the ground, this distance being the preliminary human height;
(5) obtaining the ground equation using the values of the gravity accelerometer, specifically as follows:
1) calculating the depth information in the infrared image
The depth information inside the infrared image is the skeleton map of the Kinect three-dimensional coordinates for the infrared image; the X-Y coordinates span the plane of the infrared image and Z is the depth. The depth information satisfies the following equation:
d = b·f/((1/8)·(doff - kd)) (1)
where d is the depth of each pixel in the infrared image, b is the horizontal baseline of the Kinect 2.0 camera, f is the focal length of the Kinect 2.0 camera, kd is the Kinect pixel difference, and doff is the offset of the Kinect 2.0 camera;
2) registering the infrared image with the color image
The relationship between the earth coordinate system and the infrared image coordinate system is as follows:
x = (i - u0)·z/f, y = (j - v0)·z/f (2)
where (i, j) is the coordinate in the infrared image, (u0, v0) is the optical center of the infrared image, and (x, y, z) is the three-dimensional coordinate in the earth coordinate system, whose origin is the optical center of the infrared camera;
3) calculating the ground
When the Kinect 2.0 camera moves on the ground, assume og-(xg, yg, zg) is the earth coordinate system, whose origin is the center of the Kinect 2.0 camera, and the vector g is (0, 0, 1); the vector of the Kinect 2.0 camera perpendicular to the ground while it moves is g′ = (ag′, bg′, cg′), which always stays consistent with the earth coordinate system, and h is the vertical distance from the center of the Kinect 2.0 camera to the ground; therefore, from the vector g′ = (ag′, bg′, cg′) and the vertical distance h, the ground equation can be obtained:
ag′x + bg′y + cg′z + d = 0 (3)
4) removing outliers
Outliers are removed using the RANSAC algorithm, so that the ground is detected accurately;
(6) fitting the height function curve
Assume the actual height of the person is H; the error, which varies with distance, is then:
error(d) = L(d) - H (4)
L(d) is measured with the same person standing at arbitrary positions within 1.5 m to 3.1 m of the Kinect 2.0 camera, and the error curve error(d) is fitted from multiple groups of d and L(d);
(7) obtaining accurate height information
The fitted error curve error(d) is applied during actual measurement; the actual height Hc of the measured subject is:
Hc = L(d) - error(d) (5).
5. The human height measuring system based on depth images according to claim 1, characterized in that the single-chip microprocessor is an STC89C52 processor.
6. The human height measuring system based on depth images according to claim 5, characterized in that the single-chip microprocessor sends the height information to the display device and the voice device via a serial-to-Bluetooth communication mode.
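As a hedged illustration of the geometry used in claim 4 — back-projecting an infrared pixel consistent with equation (2), and taking the head joint's point-to-plane distance to the ground plane of equation (3) as L(d) — with hypothetical intrinsics f, u0, v0:

```python
import math

def backproject(i, j, z, f, u0, v0):
    """Equation (2): infrared pixel (i, j) at depth z -> camera-frame (x, y, z)."""
    return (i - u0) * z / f, (j - v0) * z / f, z

def height_above_ground(point, plane):
    """Step (4): distance L(d) from a 3-D point, e.g. the head joint
    P(xhd, yhd, zhd), to the ground plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Hypothetical intrinsics (f = 365, u0 = 256, v0 = 212), for illustration only.
p = backproject(621.0, 212.0, 2.0, 365.0, 256.0, 212.0)
# Ground plane z = 0 with a head point 1.75 m above it:
L = height_above_ground((0.3, 0.2, 1.75), (0.0, 0.0, 1.0, 0.0))
```

This preliminary L(d) is then corrected by the fitted error curve of equations (4) and (5).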
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810015951.4A CN108209926A (en) | 2018-01-08 | 2018-01-08 | Human Height measuring system based on depth image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810015951.4A CN108209926A (en) | 2018-01-08 | 2018-01-08 | Human Height measuring system based on depth image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108209926A true CN108209926A (en) | 2018-06-29 |
Family
ID=62643160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810015951.4A Pending CN108209926A (en) | 2018-01-08 | 2018-01-08 | Human Height measuring system based on depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108209926A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109064511A (en) * | 2018-08-22 | 2018-12-21 | 广东工业大学 | A kind of gravity center of human body's height measurement method, device and relevant device |
CN109463809A (en) * | 2018-11-27 | 2019-03-15 | 浙江理工大学 | A kind of production method of personalized fit brassiere |
CN109542218A (en) * | 2018-10-19 | 2019-03-29 | 深圳奥比中光科技有限公司 | A kind of mobile terminal, man-machine interactive system and method |
CN111460897A (en) * | 2020-03-03 | 2020-07-28 | 珠海格力电器股份有限公司 | Height identification method and device, storage medium and electric appliance |
CN112037158A (en) * | 2020-07-22 | 2020-12-04 | 四川长宁天然气开发有限责任公司 | Image enhancement labeling method based on shale gas field production equipment |
CN112183206A (en) * | 2020-08-27 | 2021-01-05 | 广州中国科学院软件应用技术研究所 | Traffic participant positioning method and system based on roadside monocular camera |
CN112739975A (en) * | 2018-09-28 | 2021-04-30 | 松下知识产权经营株式会社 | Dimension measuring device and dimension measuring method |
CN112758001A (en) * | 2021-01-27 | 2021-05-07 | 四川长虹电器股份有限公司 | TOF-based vehicle lamp follow-up control method |
CN113539467A (en) * | 2021-06-02 | 2021-10-22 | 陈昊 | Intelligent wristband capable of measuring height and weight and measuring method thereof |
US11928839B2 (en) | 2020-01-19 | 2024-03-12 | Udisense Inc. | Measurement calibration using patterned sheets |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130249786A1 (en) * | 2012-03-20 | 2013-09-26 | Robert Wang | Gesture-based control system |
CN104720817A (en) * | 2015-04-02 | 2015-06-24 | 吴爱好 | Height measuring instrument |
CN104771174A (en) * | 2015-03-13 | 2015-07-15 | 东莞捷荣技术股份有限公司 | Standing posture height measuring device and standing posture height measuring method |
CN105832336A (en) * | 2016-03-18 | 2016-08-10 | 京东方科技集团股份有限公司 | Body height measurement system and method |
CN105832340A (en) * | 2016-05-12 | 2016-08-10 | 董元忠 | Intelligent multifunctional handheld laser height measuring instrument |
CN106851937A (en) * | 2017-01-25 | 2017-06-13 | 触景无限科技(北京)有限公司 | A kind of method and device of gesture control desk lamp |
CN106959075A (en) * | 2017-02-10 | 2017-07-18 | 深圳奥比中光科技有限公司 | The method and system of accurate measurement is carried out using depth camera |
CN107016697A (en) * | 2017-04-11 | 2017-08-04 | 杭州光珀智能科技有限公司 | A kind of height measurement method and device |
CN107239731A (en) * | 2017-04-17 | 2017-10-10 | 浙江工业大学 | A kind of gestures detection and recognition methods based on Faster R CNN |
US20170351911A1 (en) * | 2014-02-04 | 2017-12-07 | Pointgrab Ltd. | System and method for control of a device based on user identification |
CN107491712A (en) * | 2016-06-09 | 2017-12-19 | 北京雷动云合智能技术有限公司 | A kind of human body recognition method based on RGB D images |
- 2018-01-08: CN CN201810015951.4A patent/CN108209926A/en active Pending
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130249786A1 (en) * | 2012-03-20 | 2013-09-26 | Robert Wang | Gesture-based control system |
US20170351911A1 (en) * | 2014-02-04 | 2017-12-07 | Pointgrab Ltd. | System and method for control of a device based on user identification |
CN104771174A (en) * | 2015-03-13 | 2015-07-15 | 东莞捷荣技术股份有限公司 | Standing posture height measuring device and standing posture height measuring method |
CN104720817A (en) * | 2015-04-02 | 2015-06-24 | 吴爱好 | Height measuring instrument |
CN105832336A (en) * | 2016-03-18 | 2016-08-10 | 京东方科技集团股份有限公司 | Body height measurement system and method |
CN105832340A (en) * | 2016-05-12 | 2016-08-10 | 董元忠 | Intelligent multifunctional handheld laser height measuring instrument |
CN107491712A (en) * | 2016-06-09 | 2017-12-19 | 北京雷动云合智能技术有限公司 | A kind of human body recognition method based on RGB D images |
CN106851937A (en) * | 2017-01-25 | 2017-06-13 | 触景无限科技(北京)有限公司 | A kind of method and device of gesture control desk lamp |
CN106959075A (en) * | 2017-02-10 | 2017-07-18 | 深圳奥比中光科技有限公司 | The method and system of accurate measurement is carried out using depth camera |
CN107016697A (en) * | 2017-04-11 | 2017-08-04 | 杭州光珀智能科技有限公司 | A kind of height measurement method and device |
CN107239731A (en) * | 2017-04-17 | 2017-10-10 | 浙江工业大学 | A kind of gestures detection and recognition methods based on Faster R CNN |
Non-Patent Citations (4)
Title |
---|
T. Hoang Ngan Le et al.: "Multiple Scale Faster R-CNN Approach to Driver's Cell-phone Usage and Hands on Steering Wheel Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops * |
Fu Lihua et al.: "Fast extraction method of human-detection windows based on depth information", Journal of Beijing University of Technology * |
Zhou Changshao et al.: "Design of a height measurement system based on depth-of-field images", Journal of Guilin University of Electronic Technology * |
Fan Kaixi: "Information Interaction Design", 31 March 2017 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109064511A (en) * | 2018-08-22 | 2018-12-21 | 广东工业大学 | A kind of gravity center of human body's height measurement method, device and relevant device |
CN109064511B (en) * | 2018-08-22 | 2022-02-15 | 广东工业大学 | Method and device for measuring height of center of gravity of human body and related equipment |
CN112739975A (en) * | 2018-09-28 | 2021-04-30 | 松下知识产权经营株式会社 | Dimension measuring device and dimension measuring method |
CN112739975B (en) * | 2018-09-28 | 2023-06-13 | 松下知识产权经营株式会社 | Dimension measuring device and dimension measuring method |
CN109542218A (en) * | 2018-10-19 | 2019-03-29 | 深圳奥比中光科技有限公司 | A kind of mobile terminal, man-machine interactive system and method |
CN109542218B (en) * | 2018-10-19 | 2022-05-24 | 奥比中光科技集团股份有限公司 | Mobile terminal, human-computer interaction system and method |
CN109463809A (en) * | 2018-11-27 | 2019-03-15 | 浙江理工大学 | A kind of production method of personalized fit brassiere |
US11928839B2 (en) | 2020-01-19 | 2024-03-12 | Udisense Inc. | Measurement calibration using patterned sheets |
CN111460897A (en) * | 2020-03-03 | 2020-07-28 | 珠海格力电器股份有限公司 | Height identification method and device, storage medium and electric appliance |
CN112037158A (en) * | 2020-07-22 | 2020-12-04 | 四川长宁天然气开发有限责任公司 | Image enhancement labeling method based on shale gas field production equipment |
CN112037158B (en) * | 2020-07-22 | 2023-09-15 | 四川长宁天然气开发有限责任公司 | Shale gas field production equipment-based image enhancement labeling method |
CN112183206A (en) * | 2020-08-27 | 2021-01-05 | 广州中国科学院软件应用技术研究所 | Traffic participant positioning method and system based on roadside monocular camera |
CN112183206B (en) * | 2020-08-27 | 2024-04-05 | 广州中国科学院软件应用技术研究所 | Traffic participant positioning method and system based on road side monocular camera |
CN112758001A (en) * | 2021-01-27 | 2021-05-07 | 四川长虹电器股份有限公司 | TOF-based vehicle lamp follow-up control method |
CN113539467A (en) * | 2021-06-02 | 2021-10-22 | 陈昊 | Intelligent wristband capable of measuring height and weight and measuring method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108209926A (en) | Human Height measuring system based on depth image | |
CN105956586B (en) | A kind of intelligent tracking system based on TOF 3D video camera | |
CN111104816B (en) | Object gesture recognition method and device and camera | |
CN106022213B (en) | A kind of human motion recognition method based on three-dimensional bone information | |
CN103035008B (en) | A kind of weighted demarcating method of multicamera system | |
US20070098250A1 (en) | Man-machine interface based on 3-D positions of the human body | |
CN107945268A (en) | A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light | |
CN107357427A (en) | A kind of gesture identification control method for virtual reality device | |
US9117138B2 (en) | Method and apparatus for object positioning by using depth images | |
CN109670396A (en) | A kind of interior Falls Among Old People detection method | |
CN106361345A (en) | System and method for measuring height of human body in video image based on camera calibration | |
CN108731587A (en) | A vision-based dynamic target tracking and positioning method for unmanned aerial vehicles | |
CN104035557B (en) | Kinect action identification method based on joint activeness | |
CN105286871A (en) | Video processing-based body height measurement method | |
CN109344690A (en) | A kind of demographic method based on depth camera | |
CN103839277A (en) | Mobile augmented reality registration method of outdoor wide-range natural scene | |
CN106981091A (en) | Human body three-dimensional modeling data processing method and processing device | |
CN103223236B (en) | Intelligent evaluation system for table tennis training machine | |
CN105631852B (en) | Indoor human body detection method based on depth image contour | |
CN103593641B (en) | Object detecting method and device based on stereo camera | |
CN106127205A (en) | A recognition method for digital instrument images suitable for indoor track robots | |
CN106403901A (en) | Measuring apparatus and method for attitude of buoy | |
CN203102374U (en) | Weighting calibration apparatus of multi-camera system | |
CN109646924A (en) | A kind of visualization distance measuring method and device | |
CN106651957A (en) | Monocular vision target space positioning method based on template |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180629 |