CN106778474A - 3D human body recognition methods and equipment - Google Patents

Info

Publication number
CN106778474A
CN106778474A (application number CN201611024504.2A)
Authority
CN
China
Prior art keywords
human body
information
distribution characteristics
space distribution
rgbd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611024504.2A
Other languages
Chinese (zh)
Inventor
黄源浩
肖振中
许宏淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd
Priority to CN201611024504.2A
Publication of CN106778474A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/60: Type of objects
    • G06V 20/64: Three-dimensional objects
    • G06V 20/653: Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a 3D human body recognition method and equipment. The method comprises the following steps: acquiring an RGBD human body image of a person to be identified; obtaining 3D spatial distribution feature information of human body feature points from the RGBD human body image; matching the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in a human body 3D feature identity database; and, if the matching succeeds, obtaining the identity information of the person to be identified. The equipment comprises a human body image acquisition module, a human body feature information acquisition module, a human body information matching module, and an identity information acquisition module. Because the invention performs human body recognition using both color information and depth information, it is not affected by different seasons, clothing, or ambient lighting changes, improving the accuracy of human body recognition.

Description

3D human body recognition methods and equipment
Technical field
The present invention relates to the technical field of 3D human body recognition, and in particular to a 3D human body recognition method and equipment.
Background technology
Information security has attracted wide attention from all sectors of society. The main way to ensure information security is to accurately verify the identity of the information user and to decide, from the verification result, whether the user is authorized to access the information, thereby preventing information leakage and protecting users' legitimate rights and interests. Reliable identity verification is therefore both important and necessary.
Human body recognition has broad application prospects and economic value in fields such as access control, security monitoring, human-computer interaction, and medical diagnosis. Traditional human body recognition is 2D recognition: it uses only color information (including color, texture, and shape) without depth information, which inevitably leaves pose ambiguity in the color information. Moreover, color information is unstable (not robust) across different seasons, clothing, and ambient lighting changes, so in complex environments the accuracy of human body recognition based on color information alone is low.
The content of the invention
The present invention provides a 3D human body recognition method and equipment, which can solve the low-accuracy problem of human body recognition in the prior art.
To solve the above technical problem, one aspect of the present invention provides a 3D human body recognition method comprising the following steps: acquiring an RGBD human body image of a person to be identified; obtaining 3D spatial distribution feature information of human body feature points from the RGBD human body image; matching the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in a human body 3D feature identity database; and, if the matching succeeds, obtaining the identity information of the person to be identified.
The step of obtaining the 3D spatial distribution feature information of the human body feature points from the RGBD human body image includes: collecting human body feature points from the RGBD human body image; building a human body 3D mesh from the human body feature points; and measuring feature values of the human body feature points on the human body 3D mesh and computing the 3D spatial distribution feature information of the human body feature points.
The step of matching the obtained 3D spatial distribution feature information of the human body feature points against that in the human body 3D feature identity database includes: computing the matching degree between the obtained 3D spatial distribution feature information and the 3D spatial distribution feature information of the human body feature points in the database, so as to obtain the highest matching degree; and comparing the highest matching degree with a preset matching threshold. If the highest matching degree reaches the preset threshold, the match is judged successful.
Optionally, in the acquisition step, the RGBD human body image is an RGBD human body image sequence. The feature extraction step then further includes obtaining human body dynamic feature information from the RGBD human body image sequence, and the matching step further includes matching the obtained human body dynamic feature information against the human body dynamic feature information in the human body 3D feature identity database.
Optionally, the acquisition step also includes acquiring an RGBD face image of the person to be identified; the feature extraction step also includes obtaining 3D spatial distribution feature information of face feature points from the RGBD face image; and the matching step also includes matching the obtained 3D spatial distribution feature information of the face feature points against that of the face feature points in the human body 3D feature identity database. In this case, the identity information of the person to be identified is obtained only when both the 3D spatial distribution feature information of the human body feature points and that of the face feature points match successfully.
To solve the above technical problem, another aspect of the present invention provides 3D human body recognition equipment comprising a human body image acquisition module, a human body feature information acquisition module, a human body information matching module, and an identity information acquisition module. The human body image acquisition module acquires the RGBD human body image of the person to be identified. The human body feature information acquisition module, connected to the human body image acquisition module, obtains the 3D spatial distribution feature information of the human body feature points from the RGBD human body image. The human body information matching module, connected to the human body feature information acquisition module, matches the obtained 3D spatial distribution feature information of the human body feature points against that of the human body feature points in the human body 3D feature identity database. The identity information acquisition module, connected to the human body information matching module, obtains the identity information of the person to be identified when the match succeeds.
The human body feature information acquisition module includes a collection module, a mesh building module, and a processing module. The collection module, connected to the human body image acquisition module, collects the human body feature points from the RGBD human body image. The mesh building module, connected to the collection module, builds the human body 3D mesh from the human body feature points. The processing module, connected to the mesh building module, measures the feature values of the human body feature points on the human body 3D mesh and computes the 3D spatial distribution feature information of the human body feature points.
The human body information matching module includes a computing module and a comparison module. The computing module, connected to the processing module, computes the matching degree between the obtained 3D spatial distribution feature information of the human body feature points and that of the human body feature points in the human body 3D feature identity database, so as to obtain the highest matching degree. The comparison module, connected to the computing module, compares the highest matching degree with the preset matching threshold; if the highest matching degree reaches the preset threshold, the match is judged successful.
Optionally, what the human body image acquisition module acquires is an RGBD human body image sequence, and the equipment further includes a dynamic feature information acquisition module, connected to the human body image acquisition module, for obtaining human body dynamic feature information from the RGBD human body image sequence. The human body information matching module is further configured to match the obtained human body dynamic feature information against the human body dynamic feature information in the human body 3D feature identity database.
Optionally, the equipment further includes a face image acquisition module, a face feature information acquisition module, and a face information matching module. The face image acquisition module acquires an RGBD face image of the person to be identified. The face feature information acquisition module, connected to the face image acquisition module, obtains the 3D spatial distribution feature information of face feature points from the RGBD face image. The face information matching module, connected to the face feature information acquisition module, matches the obtained 3D spatial distribution feature information of the face feature points against that of the face feature points in the human body 3D feature identity database. The identity information acquisition module is also connected to the face information matching module and obtains the identity information of the person to be identified only when both the 3D spatial distribution feature information of the human body feature points and that of the face feature points match successfully.
The beneficial effects of the invention are as follows. Unlike the prior art, the present invention obtains the 3D spatial distribution feature information of human body feature points from an acquired RGBD human body image and matches it against the 3D spatial distribution feature information of the human body feature points stored in a human body 3D feature identity database, thereby performing human body recognition. Because what is matched is 3D human body information, including both color information and depth information, the body information is more complete; moreover, a human skeleton can be built from the 3D spatial distribution feature information, so that recognition can be performed on the basis of the skeleton. Different seasons, clothing, and ambient lighting changes therefore do not affect the recognition, and the present invention can improve the accuracy of human body recognition.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a 3D human body recognition method provided by a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of step S12 in Fig. 1;
Fig. 3 is a schematic flowchart of step S13 in Fig. 1;
Fig. 4 is a schematic flowchart of a 3D human body recognition method provided by a second embodiment of the present invention;
Fig. 5 is a schematic flowchart of a 3D human body recognition method provided by a third embodiment of the present invention;
Fig. 6 is a schematic structural diagram of 3D human body recognition equipment provided by a first embodiment of the present invention;
Fig. 7 is a schematic structural diagram of 3D human body recognition equipment provided by a second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of 3D human body recognition equipment provided by a third embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a physical apparatus of the 3D human body recognition equipment provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments. The described embodiments are obviously only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a 3D human body recognition method provided by the first embodiment of the present invention.
The 3D human body recognition method of this embodiment comprises the following steps:
S11: Acquire an RGBD human body image of the person to be identified.
Specifically, the RGBD human body image contains the color information (RGB) and the depth information (Depth) of the human body, and can be captured, for example, with a Kinect sensor. The RGBD human body image may be an image set, for example comprising multiple RGBD human body images of the same person from multiple angles.
In some embodiments, when multiple people appear in the camera view, RGBD human body images of all of them are collected.
S12: Obtain the 3D spatial distribution feature information of human body feature points from the RGBD human body image.
As shown in Fig. 2, which is a schematic flowchart of step S12 in Fig. 1, step S12 specifically includes:
S121: Collect human body feature points from the RGBD human body image.
Specifically, this embodiment collects the human body feature points by collecting human body parts, where the human body parts include one or more of the torso, the four limbs, and the head.
The feature points can be acquired in various ways: for example, facial feature points such as the eyes, nose, mouth, cheeks, jaw, and their edges can be marked by hand; the feature points of the face can also be determined with a facial feature point labeling method compatible with RGB (2D); or the feature points can be marked automatically.
For example, automatic feature point marking takes three steps:
(1) Human body segmentation. This embodiment segments the moving human body with a method combining inter-frame differencing and background subtraction. A frame of the RGBD sequence is chosen in advance as the background frame and a Gaussian model is built for each pixel; inter-frame differencing is then applied to adjacent frames to distinguish background points from the changed region in the current frame (the changed region comprises the uncovered background and the moving object). The changed region is then fitted against the corresponding region of the background frame to separate the uncovered background from the moving object, and finally shadows are removed from the moving object, yielding a shadow-free moving mask. During background updating, a point judged as background by inter-frame differencing is updated according to a fixed rule; a point judged as uncovered background by background subtraction is updated with a larger update rate; the region corresponding to the moving object is not updated. This method yields a relatively ideal segmentation target.
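The combination of a per-pixel Gaussian background model with inter-frame differencing can be sketched roughly as follows. This is a minimal NumPy illustration, not the patent's exact procedure: the shadow-removal and model-fitting steps are omitted, and the thresholds, the update rate `alpha`, and the function name are assumptions.

```python
import numpy as np

def segment_moving_body(frames, k_sigma=2.5, diff_thresh=15, alpha=0.05):
    """Split each frame into background / moving foreground.

    frames: list of 2D uint8 grayscale arrays (e.g. the gray or depth channel).
    A per-pixel Gaussian model (mean, std) is initialized from the first frame
    and updated only at pixels judged as background.
    """
    mean = frames[0].astype(np.float64)
    std = np.full_like(mean, 10.0)  # initial per-pixel std guess (assumption)
    masks = []
    prev = frames[0].astype(np.float64)
    for f in frames[1:]:
        f = f.astype(np.float64)
        changed = np.abs(f - prev) > diff_thresh      # inter-frame difference
        deviates = np.abs(f - mean) > k_sigma * std   # background-model test
        moving = changed & deviates                   # moving object fails both tests
        # update the Gaussian model only at pixels judged as background
        bg = ~moving
        mean[bg] = (1 - alpha) * mean[bg] + alpha * f[bg]
        std[bg] = np.sqrt((1 - alpha) * std[bg] ** 2
                          + alpha * (f[bg] - mean[bg]) ** 2)
        masks.append(moving)
        prev = f
    return masks

# toy sequence: static background of value 50, a 2x2 "body" patch moving right
frames = []
for t in range(4):
    img = np.full((8, 8), 50, np.uint8)
    img[3:5, t:t + 2] = 200
    frames.append(img)
masks = segment_moving_body(frames)
```

The moving patch is flagged at its leading and trailing edges (where pixel values actually change between frames), while the static background stays unflagged.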
(2) Contour extraction and analysis. After the binarized image is obtained, a classical edge detection algorithm is used to extract the contour, for example the Canny algorithm. The Canny edge detection operator fully reflects the mathematical characteristics of an optimal edge detector: for different types of edges it provides a good signal-to-noise ratio and excellent localization, with a low probability of multiple responses to a single edge and maximal suppression of responses to false edges. After the segmentation algorithm produces the optical-flow segmentation field, all the moving targets of interest are contained in these segmented regions, so the Canny operator is applied only within them to extract edges; this both significantly limits background interference and effectively improves the speed of the computation.
(3) Automatic joint marking. After the moving target has been segmented by the differencing method and its contour extracted with the Canny edge detection operator, the human target is analyzed further with the 2D ribbon model (Ribbon Model) of Maylor K. Leung and Yee-Hong Yang. The model divides the front of the human body into different regions; for example, the body is constructed from five U-shaped regions, which represent the head and the four limbs respectively.
Thus, by finding the five U-shaped body end points, the approximate location of the body can be determined. On the basis of the extracted contour, vector contour compression is used to extract the needed information while retaining the most important limb features: the human contour is compressed to a fixed shape, for example a contour with a fixed 8 end points, comprising 5 U-shaped points and 3 inverted-U-shaped points, so that the contour can be computed conveniently from these salient features. A distance algorithm over adjacent end points on the contour can be used for the compression, iterating until the contour is compressed to 8 end points.
After the compressed contour is obtained, the feature points can be marked automatically with the following algorithm:
(1) Determine the U-shaped body end points. A reference length M is set: a vector longer than M can be considered part of the body contour, and shorter vectors are ignored. Starting from some point on the vectorized contour, find a vector longer than M and denote it Mi, find the next such vector and denote it Mj, and compare the angle from Mi to Mj. If the angle lies within a certain range (0 to 90 degrees; note that a positive angle here means the turn is convex), Mi and Mj are considered to form a U end point; the two vectors are recorded and one U end point has been found. This continues until all five U end points are found.
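The U-end-point search can be illustrated with a small sketch. Assumed details not fixed by the text: the vectors are consecutive contour segments, and the signed turn angle from Mi to Mj is computed from the 2D cross and dot products; the reference length M and the 0 to 90 degree window follow the description.

```python
import math

def find_u_endpoints(points, m_len, lo=0.0, hi=90.0):
    """Return indices of vertices where two successive contour vectors longer
    than m_len meet at a convex turn whose angle lies in (lo, hi) degrees."""
    n = len(points)
    vecs = [(points[(i + 1) % n][0] - points[i][0],
             points[(i + 1) % n][1] - points[i][1]) for i in range(n)]
    hits = []
    for i in range(n):
        mi, mj = vecs[i], vecs[(i + 1) % n]
        if math.hypot(*mi) <= m_len or math.hypot(*mj) <= m_len:
            continue  # shorter than the reference length M: ignore
        cross = mi[0] * mj[1] - mi[1] * mj[0]
        dot = mi[0] * mj[0] + mi[1] * mj[1]
        ang = math.degrees(math.atan2(cross, dot))  # signed turn angle
        if lo < ang < hi:                           # positive turn = convex
            hits.append((i + 1) % n)                # vertex between Mi and Mj
    return hits

# a square traversed counter-clockwise: every corner is a +90-degree turn,
# so with the upper bound slightly above 90 all four corners qualify
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
corners = find_u_endpoints(square, m_len=5, lo=0.0, hi=91.0)
```

Negating the angle window, as in step (2) below the original text, would pick out the concave (inverted-U) turns instead.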
(2) Determine the three inverted-U end points, in the same way as step (1), except that the angle condition is changed from positive to negative.
(3) The positions of the head, hands, and feet follow directly from the U and inverted-U end points. From the physiological shape of the body, each joint can then be determined: the width and length of the torso are determined from the arm-body angle regions and the head-leg angle regions respectively; the neck and waist are placed at 0.75 and 0.3 of the torso length respectively; the elbow lies at the midpoint of the shoulder and the hand; and the knee lies at the midpoint of the waist and the foot. In this way the approximate location of each feature point is defined.
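The proportional joint placement just described (neck at 0.75 and waist at 0.3 of the torso, elbow midway between shoulder and hand, knee midway between waist and foot) can be sketched as follows; the coordinate convention, the parameterization of the torso from bottom to top, and the function names are illustrative assumptions.

```python
def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def estimate_joints(shoulder, hand, foot, torso_bottom, torso_top):
    """Place inner joints from end points using fixed body proportions.

    torso_bottom / torso_top: (x, y) ends of the torso axis. The neck and
    waist sit at fractions 0.75 and 0.30 of the way up the torso, per the text.
    """
    def along(t):  # point a fraction t of the way from torso bottom to top
        return (torso_bottom[0] + t * (torso_top[0] - torso_bottom[0]),
                torso_bottom[1] + t * (torso_top[1] - torso_bottom[1]))
    neck = along(0.75)
    waist = along(0.30)
    elbow = midpoint(shoulder, hand)   # elbow at midpoint of shoulder and hand
    knee = midpoint(waist, foot)       # knee at midpoint of waist and foot
    return {"neck": neck, "waist": waist, "elbow": elbow, "knee": knee}

# toy stick figure with a vertical torso of length 8
joints = estimate_joints(shoulder=(4, 8), hand=(0, 4),
                         foot=(5, 0), torso_bottom=(5, 2), torso_top=(5, 10))
```

With the torso spanning y = 2 to y = 10, this places the neck at y = 8 and the waist at y = 4.4, matching the stated ratios.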
S122: Build a human body 3D mesh from the human body feature points.
S123: Measure the feature values of the human body feature points on the human body 3D mesh and compute the 3D spatial distribution feature information of the human body feature points.
The feature values in step S123 include one or more of height, arm length, shoulder width, hand size, and head size. The spatial position of each human body feature point can be computed from the human body 3D mesh, and from these positions the topological relations between the feature points can be computed, which yields the three-dimensional body shape information, i.e. the 3D spatial distribution feature information of the human body feature points.
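One simple way to capture the topological relations between the feature points is the set of pairwise 3D distances, from which measurements such as height or arm length fall out directly. This is a minimal stand-in sketch assuming the feature points are already available as 3D coordinates from the mesh; the point names and units are illustrative.

```python
import numpy as np

def spatial_distribution_features(points):
    """points: dict name -> (x, y, z) coordinates of human body feature points.

    Returns all pairwise Euclidean distances, a simple stand-in for the
    3D spatial distribution feature information described in the text."""
    names = sorted(points)
    feats = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pa = np.asarray(points[a], dtype=float)
            pb = np.asarray(points[b], dtype=float)
            feats[(a, b)] = float(np.linalg.norm(pa - pb))
    return feats

# four illustrative feature points, coordinates in centimetres (assumption)
pts = {"head_top": (0, 0, 175), "foot": (0, 0, 0),
       "shoulder": (20, 0, 145), "hand": (70, 0, 145)}
feats = spatial_distribution_features(pts)
height = feats[("foot", "head_top")]   # 175.0: a height-like feature value
arm = feats[("hand", "shoulder")]      # 50.0: an arm-length-like feature value
```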
S13: Match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database.
As shown in Fig. 3, which is a schematic flowchart of step S13 in Fig. 1, step S13 specifically includes:
S131: Compute the matching degree between the obtained 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database, so as to obtain the highest matching degree.
S132: Compare the highest matching degree with a preset matching threshold; if the highest matching degree reaches the preset threshold, the match is judged successful, and the method proceeds to step S14.
When the method of the present invention is used for the first time, a step of presetting the range of the matching threshold is also performed before step S12; this step may come before or after step S11, or be carried out simultaneously with step S11.
There are various matching algorithms for human body recognition: for example, the similarity of two spatial histograms can be computed with the Bhattacharyya distance, or the matching degree (similarity) can be computed with algorithms such as the Earth Mover's Distance (EMD).
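As a hedged illustration of the Bhattacharyya option, the Bhattacharyya coefficient of two normalized histograms can serve directly as the matching degree, with the preset threshold deciding success. The histogram contents, the 0.9 threshold, and the function names are assumptions, not values from the patent.

```python
import math

def bhattacharyya_match(hist_a, hist_b):
    """Bhattacharyya coefficient of two histograms (normalized to sum 1).
    1.0 means identical distributions, 0.0 means disjoint ones."""
    sa, sb = sum(hist_a), sum(hist_b)
    return sum(math.sqrt((a / sa) * (b / sb)) for a, b in zip(hist_a, hist_b))

def identify(query_hist, database, threshold=0.9):
    """database: dict identity -> stored histogram. Returns (identity, degree)
    for the highest matching degree, or (None, degree) below the threshold."""
    best_id, best = None, -1.0
    for ident, h in database.items():
        d = bhattacharyya_match(query_hist, h)
        if d > best:
            best_id, best = ident, d
    return (best_id, best) if best >= threshold else (None, best)

db = {"alice": [4, 3, 2, 1], "bob": [1, 1, 4, 4]}
who, degree = identify([8, 6, 4, 2], db)  # same shape as alice's histogram
```

Here the query histogram is proportional to alice's entry, so the coefficient is 1.0 and the match succeeds; a query unlike every stored entry would fall below the threshold and return no identity.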
S14: Obtain the identity information of the person to be identified.
After the match succeeds, the identity information of the entry with the highest matching degree in the human body 3D feature identity database is the identity information of the person to be identified.
The 3D human body recognition method of the present invention can be applied to identity verification at various security levels, such as mobile phone unlocking, access control, security monitoring, game accounts, login, and payment. After the identity information of the person to be identified is obtained, the person's authority can be determined; for example, when applied in an access control system, the identity information of the person to be identified determines whether that person is allowed to enter.
Unlike the prior art, the present invention obtains the 3D spatial distribution feature information of human body feature points from an acquired RGBD human body image and matches it against the 3D spatial distribution feature information of the human body feature points stored in a human body 3D feature identity database, thereby performing human body recognition. Because what is matched is 3D human body information, including both color information and depth information, the body information is more complete; moreover, a human skeleton can be built from the 3D spatial distribution feature information, so that recognition can be performed on the basis of the skeleton. Different seasons, clothing, and ambient lighting changes therefore do not affect the recognition, and the present invention can improve the accuracy of human body recognition.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of a 3D human body recognition method provided by the second embodiment of the present invention.
The 3D human body recognition method of this embodiment comprises the following steps:
S21: Acquire an RGBD human body image sequence of the person to be identified.
In step S21, what the Kinect sensor acquires is a dynamic, continuous RGBD human body image sequence over a certain period of time, from which the motion information of the person to be identified can be obtained.
S22: Obtain the 3D spatial distribution feature information of human body feature points from the RGBD human body images, and obtain human body dynamic feature information from the RGBD human body image sequence.
The 3D spatial distribution feature information of the human body feature points is obtained in the same way as in the first embodiment, which is not repeated here.
The human body dynamic feature information obtained from the RGBD human body image sequence can include the dynamic characteristics of behaviors such as standing, walking, and running, as well as specific dynamic actions, for example the process and result of crossing the fingers of both palms, or the process and result of crossing both arms.
Specifically, this embodiment can detect the movement posture of the human body using the dynamic, continuous RGBD image sequence, adding attribute items for feature recognition. For example, if the target is a rigid object such as a cup or a car, it appears continuously as a rigid body in the consecutive RGBD frames, by which it can be discriminated as a rigid body; if the target is a person or an animal such as a cat or a dog, the target is tracked across the continuous dynamic RGBD frames and its non-rigid parts are detected, after which accurate human body recognition is carried out with techniques such as human body feature recognition.
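The rigid-versus-non-rigid distinction above can be illustrated with a simple check: for a rigid body, the pairwise distances between tracked points stay constant from frame to frame, while a moving person's do not. The tolerance value and the assumption that per-frame 3D point tracks are already available are illustrative.

```python
import math

def is_rigid(tracks, tol=1e-6):
    """tracks: list of frames, each a list of (x, y, z) tracked points.

    The target is rigid iff every pairwise distance between its tracked
    points is (near-)constant across all frames."""
    def dists(pts):
        return [math.dist(pts[i], pts[j])
                for i in range(len(pts)) for j in range(i + 1, len(pts))]
    ref = dists(tracks[0])
    return all(all(abs(d - r) <= tol for d, r in zip(dists(f), ref))
               for f in tracks[1:])

# a translating cup keeps its shape; a person raising an arm changes a distance
cup = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
       [(5, 2, 0), (6, 2, 0), (5, 3, 0)]]        # pure translation: rigid
person = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          [(0, 0, 0), (1, 0, 0), (0, 2, 0)]]     # one "limb" stretched: non-rigid
```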
In some embodiments, features such as the voice or body temperature of the person or animal to be identified can also be collected for authentication, which prevents the recognition system from being defeated with images, recordings, and the like, and improves recognition accuracy.
To obtain the human body dynamic feature information, human motion detection must first be performed, i.e. the process of locating the moving human body in the acquired image sequence and determining its scale and pose. There are various human motion detection methods, for example the OGHMs (Orthogonal Gaussian-Hermite Moments) method, whose basic principle is to judge whether a pixel belongs to a foreground motion region by comparing the degree of change of the corresponding pixel value between temporally consecutive image frames.
An input image sequence is written {f(x, y, t) | t = 0, 1, 2, ...}, where f(x, y, t) is the image at time t and x, y are the coordinates of a pixel in the image. Let the Gaussian function be g(x, σ) and let Bn(t) be the product of g(x, σ) and a Hermite polynomial; the n-th order OGHMs is then expressed by formula (1) as a convolution built from Bn(t), where the coefficients ai are determined by the standard deviation of the Gaussian function. By the properties of the convolution operation, the n-th order OGHMs can be viewed as the convolution of the Gaussian function with the sum of the derivatives of all orders of the image sequence with respect to time. The larger the derivative at a point, the larger the pixel-value change over time at that position, indicating that the point belongs to a moving region block; this provides the theoretical basis on which the OGHMs method can detect moving objects. In addition, as can be seen from formula (1), the basis functions of OGHMs are formed from linear combinations of the different-order derivatives of the Gaussian function; since the Gaussian function itself smooths noise, OGHMs likewise filters out various kinds of noise effectively.
As another example, the temporal difference method (Temporal Difference) uses several temporally adjacent frames of an image sequence and extracts the motion regions in the image by thresholding pixel-wise differences over time. Early methods obtain the moving object from the difference of two adjacent frames. Let Fk be the gray-value data of the k-th frame of the image sequence and Fk+1 the gray-value data of the (k+1)-th frame; the difference image of the two temporally adjacent frames is then defined as:

Dk(x, y) = 1 if |Fk+1(x, y) − Fk(x, y)| > T, and Dk(x, y) = 0 otherwise,

where T is a threshold. If the difference exceeds T, the gray level in that region has changed markedly, i.e., it is a moving-target region to be detected.
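The thresholded two-frame difference just described can be sketched in a few lines of NumPy (a minimal illustration, not the patent's implementation; the array size and the threshold T are arbitrary choices):

```python
import numpy as np

def temporal_difference(frame_k, frame_k1, threshold):
    """Binary difference image D_k: 1 where |F_{k+1} - F_k| > T, else 0."""
    diff = np.abs(frame_k1.astype(np.int32) - frame_k.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# Toy example: a bright 2x2 "object" moves one pixel to the right.
f0 = np.zeros((5, 5), dtype=np.uint8)
f1 = np.zeros((5, 5), dtype=np.uint8)
f0[1:3, 1:3] = 200
f1[1:3, 2:4] = 200

mask = temporal_difference(f0, f1, threshold=50)
print(mask.sum())  # → 4 (the trailing and leading edges of the object)
```

Only the non-overlapping columns of the object are flagged, which is the well-known limitation of two-frame differencing: the interior of a slowly moving uniform region produces no difference.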
As another example, the optical flow method (Optical Flow) is based on the following assumption: changes in image gray level are caused entirely by the motion of the target or the background; that is, the gray levels of the target and the background themselves do not change over time. Motion detection based on optical flow exploits the fact that a moving object manifests itself in the image as a velocity field, and estimates the optical flow corresponding to the motion under certain constraints. Its advantage is that it places few restrictions on the inter-frame motion of the target and can handle large inter-frame displacements.
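The brightness-constancy assumption can be illustrated with a least-squares, Lucas-Kanade-style estimate of a single global translation (a toy sketch under that assumption, not the method of the embodiment; a windowed flow field would apply the same normal equations per neighborhood):

```python
import numpy as np

def global_lucas_kanade(f0, f1):
    """Least-squares estimate of one translational flow (u, v) from the
    brightness-constancy equation Ix*u + Iy*v + It = 0."""
    avg = (f0 + f1) / 2.0
    Iy, Ix = np.gradient(avg)          # gradients along rows (y) and columns (x)
    It = f1 - f0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Toy example: a Gaussian blob translated one pixel in x between frames.
yy, xx = np.mgrid[0:32, 0:32]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)
u, v = global_lucas_kanade(blob(15, 16), blob(16, 16))
print(round(u, 2), round(v, 2))  # u near 1, v near 0
```

Because the blob is smooth relative to the one-pixel shift, the linearized constraint recovers the displacement well; for large displacements, practical methods iterate this estimate over an image pyramid.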
As yet another example, background subtraction (Background Subtraction): its basic principle is first to build a background model image, then to take the difference between the current frame and the background frame, and to detect the moving target by thresholding the difference result. Let the background frame at time t be F0 and the corresponding current frame be Ft; the difference between the current frame and the background frame can then be expressed as:

Dt(x, y) = 1 if |Ft(x, y) − F0(x, y)| > T, and Dt(x, y) = 0 otherwise.

If the gray-value difference between corresponding pixels of the current frame and the background frame exceeds the threshold, the corresponding value in the resulting binary image is 1, i.e., the region is deemed to belong to the moving target.
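A sketch of background subtraction with a simple running-average background model (the update rule and the value of α are illustrative assumptions; the embodiment only requires some background model image F0):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: B <- (1 - alpha)*B + alpha*F."""
    return (1.0 - alpha) * bg + alpha * frame

def subtract_background(bg, frame, threshold):
    """Binary mask: 1 where |F_t - F_0| > T, else 0."""
    return (np.abs(frame - bg) > threshold).astype(np.uint8)

# Static scene with a foreground block appearing in the current frame.
bg = np.full((6, 6), 30.0)
frame = bg.copy()
frame[2:4, 2:4] = 180.0              # moving target
mask = subtract_background(bg, frame, threshold=40)
print(mask.sum())                    # → 4 foreground pixels
bg = update_background(bg, frame)    # slowly absorb gradual scene changes
```

Unlike two-frame differencing, the whole target region is flagged; the slow background update keeps gradual illumination changes from being misclassified as motion.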
After the human motion posture is detected, the human action posture is represented by a motion history image (MHI, motion history image) and a motion energy image (MEI, motion energy image).

The MEI reflects the region in which the human action posture occurs and its intensity, while the MHI reflects, to a certain extent, how the human action posture occurs and how it changes over time.
The binary image MEI is generated as follows:

Eτ(x, y, n) = B(x, y, n) ∪ B(x, y, n − 1) ∪ ... ∪ B(x, y, n − τ + 1),

where B(x, y, n) is the binary image sequence indicating the regions where the human action posture occurs, and the parameter τ is the duration of the human action posture. The MEI thus describes the region in which the whole human action posture occurs.
The MHI is generated as follows:

Hτ(x, y, n) = τ if B(x, y, n) = 1; otherwise Hτ(x, y, n) = max(0, Hτ(x, y, n − 1) − 1).

The motion history image MHI reflects not only the shape but also, through the distribution of brightness, the direction in which the action posture occurs. In the MHI, the brightness value of each pixel is proportional to how recently motion persisted at that position: the pixels of the most recently occurring posture have the maximum brightness, and the gradient of gray levels embodies the direction in which the posture occurs.
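The MEI/MHI construction above can be sketched as follows (here the MEI is obtained by thresholding the MHI, which over the last τ frames is equivalent to the union of the binary masks):

```python
import numpy as np

def update_mei_mhi(binary_seq, tau):
    """Build the MHI (tau where motion occurs now, otherwise the previous
    value minus one, floored at zero) and derive the MEI from it."""
    h, w = binary_seq[0].shape
    mhi = np.zeros((h, w))
    for b in binary_seq:
        mhi = np.where(b == 1, tau, np.maximum(0, mhi - 1))
    mei = (mhi > 0).astype(np.uint8)   # union of the last tau binary masks
    return mei, mhi

# A 1-pixel "object" sweeping left to right across a 1x5 strip.
frames = []
for t in range(4):
    b = np.zeros((1, 5), dtype=np.uint8)
    b[0, t] = 1
    frames.append(b)

mei, mhi = update_mei_mhi(frames, tau=4)
print(mhi[0].tolist())  # → [1.0, 2.0, 3.0, 4.0, 0.0]: newest motion brightest
```

The MHI row decays from the newest position back along the path of motion, which is exactly the brightness gradient that encodes the direction of the posture.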
A statistical description of the motion posture template is established using the invariant moment method. The invariant moments are M′k = lg|Mk|, where k = 1, 2, ..., 7, and the feature vector is denoted F = [M′1, M′2, ..., M′7]. Let F1, F2, ..., FM denote the M images of a certain human motion posture in the image library; for image Fi, the corresponding feature vector is Fi = [M′i1, M′i2, ..., M′i7]. From the image library of the motion posture, an M × 7 feature matrix F = (M′ij) is thus obtained, where M′ij is the j-th feature element of Fi. From this, the mean vector and the covariance matrix of the set of feature vectors of the M human action posture images can be obtained, establishing the statistical description of the motion posture template.
S23: Match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity information bank, and match the obtained human body dynamic feature information against the human body dynamic feature information in the human body 3D feature identity information bank.

If both the 3D spatial distribution feature information of the human body feature points and the human body dynamic feature information are matched successfully, proceed to step S24.
Specifically, the human body dynamic feature information may be matched as follows: the similarity between a newly input motion posture and the stored known motion posture templates is measured by the Mahalanobis distance. As long as the computed Mahalanobis distance falls within a prescribed threshold range, the match is considered successful; if more than one motion posture matches, the one with the smallest distance is taken as the successful match. The Mahalanobis distance is computed as:

γ² = (f − μx)ᵀ C⁻¹ (f − μx)

where γ is the Mahalanobis distance, f is the invariant-moment feature vector of the human action posture image, μx is the mean vector of the trained feature-vector set, and C is the covariance matrix of the trained feature-vector set.
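The template statistics and the Mahalanobis test above can be sketched as follows (the random 20 × 7 matrix stands in for the invariant-moment vectors of M = 20 training images, and the squared threshold of 9 is an arbitrary illustration):

```python
import numpy as np

def build_template(feature_matrix):
    """Mean vector and covariance matrix of an M x 7 moment-feature matrix."""
    mu = feature_matrix.mean(axis=0)
    cov = np.cov(feature_matrix, rowvar=False)
    return mu, cov

def mahalanobis_sq(f, mu, cov):
    """gamma^2 = (f - mu)^T C^{-1} (f - mu)."""
    d = f - mu
    return float(d @ np.linalg.inv(cov) @ d)

rng = np.random.default_rng(0)
template = rng.normal(size=(20, 7))   # stand-in for M = 20 moment vectors
mu, cov = build_template(template)

probe = mu.copy()                     # probe identical to the template mean
assert mahalanobis_sq(probe, mu, cov) == 0.0

far = mu + 5.0                        # clearly different posture
print(mahalanobis_sq(far, mu, cov) > 9.0)  # exceeds a squared threshold of 9
```

Because the covariance matrix whitens the feature space, the distance automatically downweights moment components that vary strongly within the training set, which a plain Euclidean distance would not.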
It should be appreciated that in some other embodiments other matching algorithms may also be used; the present invention is not limited in this respect.
S24: Obtain the identity information of the person to be measured.

The human body 3D feature information obtained in this embodiment from the RGBD human body image sequence includes not only the 3D spatial distribution feature information of the human body feature points but also the human body dynamic feature information, which increases the number of attribute items available for feature recognition. In this embodiment, human body recognition combines the 3D spatial distribution feature information of the human body feature points with the human body dynamic feature information, so that the attribute items compared during recognition are richer and recognition is more accurate.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of a 3D human body recognition method provided by a third embodiment of the present invention.

S31: Obtain the RGBD human body image and the RGBD face image of the person to be measured.

S32: Obtain the 3D spatial distribution feature information of the human body feature points from the RGBD human body image, and obtain the 3D spatial distribution feature information of the face feature points from the RGBD face image.

Obtaining the 3D spatial distribution feature information of the face feature points comprises the following steps:
(1) Collect the feature points of the face from the face RGBD image. In this step, feature points are usually collected by acquiring face elements, where the face elements include one or more of the eyebrows, eyes, nose, mouth, cheeks and chin. The feature points may be obtained, for example, by manually marking the eyes, nose, cheeks, chin and their edges in the face image.
(2) Build a colored 3D face mesh from the feature points.

(3) Measure the feature values of the feature points according to the colored 3D face mesh and compute the connection relations between the feature points.

Specifically, feature values associated with the feature points of the facial features may be measured from the color information. These feature values include one or more measurements of the facial features in the 2D plane, such as position, distance, shape, size, angle, arc and curvature, and additionally measurements of color, brightness, texture and so on. For example, extending outward from the central pixel of the iris yields all the pixel positions of the eye, the shape of the eye, the arc of inclination of the eye corner, the eye color, etc.
Combining the color information with the depth information, the connection relations between feature points can then be computed. A connection relation may be a topological connection relation and spatial geometric distance between feature points, or it may be dynamic connection relation information of various combinations of feature points, etc.

From the measurements and computations on the colored 3D face mesh, local information can be obtained, including the planar information of each face element itself and the spatial positional relations of the feature points on each element, as well as global information on the spatial positional relations between the elements. The local information and the global information respectively reflect, locally and as a whole, the information and structural relations implicit in the RGBD face image.
For example, the feature values, the topological connection relations between the feature points and the spatial geometric distances are analyzed with the finite element method to obtain the 3D spatial distribution feature information of the face feature points.

Specifically, the colored 3D face mesh can be deformed using finite element analysis. Finite element analysis (FEA, Finite Element Analysis) simulates an actual physical system (geometry and loading conditions) by a method of mathematical approximation: using simple, interacting elements (units), a real system with infinitely many unknowns is approximated with a finite number of unknowns.

For example, after the strain energy of each line element of the colored 3D face mesh is analyzed, the element stiffness equation of the line element can be established. Constraint elements are then introduced, such as point, line, tangent-vector and normal-vector constraint element types. During verification design, curves and surfaces must satisfy requirements on their shape, position and size as well as continuity with adjacent surfaces, all of which are realized by constraints. This embodiment handles these constraints by the penalty function method, finally obtaining the stiffness matrix and equivalent load array of the constraint elements.
The data structure of the deformable curve/surface is extended so that it contains both the geometric parameters, such as order, control vertices and knot vector, and parameters expressing physical characteristics and external loads. A deformable curve/surface can thus represent some complex shapes as a whole, greatly simplifying the geometric model of the face. Moreover, the physical parameters and constraint parameters in the data structure uniquely determine the geometric configuration parameters of the face.

The deformable curve/surface is solved by finite elements in software: for each type of constraint element, an element entry routine is provided that computes the element stiffness matrix and element load array of any constraint. Exploiting the symmetry, banded structure and sparsity of the global stiffness matrix, the global stiffness matrix is stored and computed with a variable-bandwidth one-dimensional array. During assembly, not only the line-element and surface-element stiffness matrices but also the constraint-element stiffness matrices are added into the global stiffness matrix by position, while the constraint-element equivalent load arrays are added into the global load array; finally the linear algebraic system is solved by Gaussian elimination.
For example, the shaping of face curves and surfaces can be described by the following mathematical model. The required deformation curve, with u ∈ Ω = [0, 1], or surface, with (u, v) ∈ Ω = [0, 1] × [0, 1], is the solution of an extreme-value problem: minimize the energy functional E of the curve or surface subject to (1) the edge interpolation constraints, (2) the boundary continuity constraints, (3) the constraints of characteristic curves in the surface, and (4) the point constraints on the curve or surface. Here E is the energy functional of the curve or surface, which reflects to a certain extent its deformation characteristics and endows it with physical properties; f1, f2, f3 and f4 are functions of the variables in parentheses; ∂Ω is the boundary of the parameter domain; Γ′ is a curve in the surface parameter domain; and (u0, v0) is a parameter value in the parameter domain. In this application, the energy functional E takes the following form:

Curve: E(c) = ∫Ω ( α|c′(u)|² + β|c″(u)|² + γ|c‴(u)|² ) du

Surface: E(s) = ∫∫Ω Σ(i,j = 1,2) ( αij (∂s/∂ui · ∂s/∂uj) + βij |∂²s/∂ui∂uj|² ) du dv, with (u1, u2) = (u, v),

where α, β, γ denote the stretching, bending and torsion coefficients of the curve, and αij and βij are respectively the stretching and bending coefficients of the surface at (u, v) along the u and v directions.
As can be seen from the mathematical model, the deformable curve/surface modeling method handles all kinds of constraints while both satisfying local control and guaranteeing overall fairness. By the variational principle, solving the above extreme-value problem can be converted into solving equation (5):

δE = 0,

where δ denotes the first variation. Equation (5) is a differential equation; because it is rather complicated, an exact analytical solution is difficult to obtain, so a numerical solution is used, for example the finite element method.

The finite element method can be regarded as first choosing a suitable interpolation form as required and then solving for the combination parameters; the solution obtained is therefore in continuous form, and the mesh produced during pre-processing also lays a foundation for the finite element analysis.
In the recognition stage, the similarity measure between the unknown face image and a known face template is given by a two-term cost function. In the formula, Ci and Xj are respectively features of the face to be recognized and of a face in the face database, and i1, i2, j1, j2, k1, k2 are 3D mesh vertex features. The first term of the formula measures the degree of similarity between the corresponding local features Xj and Ci in the two vector fields; the second term evaluates the local positional relations and the matching order. As can be seen, the best match is the one that minimizes this energy function.
The colored 3D face mesh is deformed by the above finite element method so that each point of the mesh continually approaches the feature points of the real face, thereby obtaining the three-dimensional face shape information and, in turn, the 3D spatial distribution feature information of the face feature points.
As another example, the feature values and the dynamic connection relations between feature points are analyzed by the wavelet-transform texture analysis method to obtain the 3D spatial distribution feature information of the face feature points.

Specifically, the dynamic connection relation is the dynamic connection relation of various feature-point combinations. The wavelet transform is a local transform in time and frequency; it has the property of multi-resolution analysis and the ability to characterize local features of a signal in both the time domain and the frequency domain. In this embodiment, wavelet-transform texture analysis extracts, classifies and analyzes texture features and combines them with the face feature values and the dynamic connection relation information, specifically including the color information and the depth information, finally obtaining the three-dimensional face shape information. From the face shape information, face shape information that is invariant under slight changes of facial expression is then extracted by analysis and encoded into face shape model parameters, which can serve as geometric features of the face, whereby the 3D spatial distribution feature information of the face feature points is obtained.
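The multi-resolution property of the wavelet transform described above can be illustrated with a one-level 2D Haar decomposition, which separates a low-frequency, low-resolution sub-image from three detail bands (a generic sketch; the embodiment does not prescribe a particular wavelet):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform: returns the low-frequency
    sub-image LL and the three detail sub-bands LH, HL, HH."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0       # low/low: local 2x2 averages
    lh = (a + b - c - d) / 4.0       # horizontal detail
    hl = (a - b + c - d) / 4.0       # vertical detail
    hh = (a - b - c + d) / 4.0       # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)   # smooth intensity ramp
ll, lh, hl, hh = haar2d_level(img)
print(ll.tolist())   # → [[2.5, 4.5], [10.5, 12.5]], a 2x2 approximation
```

For this linear ramp, the diagonal band HH is exactly zero; texture in a face image would show up in the detail bands while the LL sub-image keeps the coarse shape, and repeating the decomposition on LL gives the multi-resolution pyramid.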
For example, the basis of the 3D wavelet transform is as follows. AJ1 is the projection operator of the function f(x, y, z) onto the space V³J1, and Qn is a combination of Hx, Hy, Hz and Gx, Gy, Gz. Let H = (Hm,k) and G = (Gm,k) be the filter matrices, where Hx, Hy, Hz denote H applied to the three-dimensional signal along the x, y and z directions respectively, and Gx, Gy, Gz denote G applied to the three-dimensional signal along the x, y and z directions respectively.
In the recognition stage, after the wavelet transform of the unknown face image, its low-frequency, low-resolution sub-image is mapped onto the face space to obtain its feature coefficients. The Euclidean distance between the feature coefficients to be classified and the feature coefficients of each person can then be used, in combination with the PCA algorithm, according to the formula:

K = arg min(k = 1, ..., N) ||Y − Yk||

where K is the person that best matches the unknown face, N is the number of entries in the database, Y is the m-dimensional vector obtained by mapping the unknown face onto the subspace spanned by the eigenfaces, and Yk is the m-dimensional vector obtained by mapping a known face in the database onto the subspace spanned by the eigenfaces.
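The eigenface projection and nearest-neighbor rule can be sketched as follows (random vectors stand in for flattened face images, and keeping m = 4 eigenfaces is an arbitrary choice):

```python
import numpy as np

def eigenfaces(faces, m):
    """PCA on flattened face images: mean face plus the first m eigenfaces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:m]

def project(face, mean, basis):
    return basis @ (face - mean)

def best_match(probe_coeff, gallery_coeffs):
    """K = argmin_k ||Y - Y_k|| (Euclidean distance in eigenface space)."""
    d = np.linalg.norm(gallery_coeffs - probe_coeff, axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(1)
gallery = rng.normal(size=(5, 64))   # 5 known "faces", 8x8 images flattened
mean, basis = eigenfaces(gallery, m=4)
coeffs = np.array([project(f, mean, basis) for f in gallery])

probe = gallery[3] + 0.05 * rng.normal(size=64)   # noisy copy of face 3
print(best_match(project(probe, mean, basis), coeffs))  # → 3
```

With five centered samples the data has rank four, so four eigenfaces already capture all the gallery variance; in practice m is chosen to keep most of the variance of a much larger training set.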
It should be appreciated that in another embodiment a 3D face recognition method based on 2D wavelet features may also be used. First, 2D wavelet features must be extracted. The 2D wavelet basis function g(x, y) is dilated and rotated as

gmn(x, y) = a^(−m) g(x′, y′), a > 1, m, n ∈ Z,

where σ is the size of the Gaussian window; a family of self-similar filter functions is obtained from g(x, y) by appropriate dilation and rotation of gmn(x, y). Based on these functions, the wavelet features of an image I(x, y) can be defined.
The facial-image 2D wavelet feature extraction algorithm is implemented in the following steps:

(1) Obtain the wavelet characterization of the face by wavelet analysis, converting the corresponding features of the original image I(x, y) into a wavelet feature vector F (F ∈ Rm).

(2) Use the fractional power polynomial (FPP) kernel model k(x, y) = (x·y)^d (0 < d < 1) to project the m-dimensional wavelet feature space Rm into a higher n-dimensional space Rn.

(3) Based on the kernel Fisher discriminant analysis algorithm (KFDA), establish the between-class matrix Sb and the within-class matrix Sw in the space Rn, and compute the orthonormal eigenvectors α1, α2, ..., αn of Sw.

(4) Extract the significant discriminant feature vectors of the face image. Let P1 = (α1, α2, ..., αq), where α1, α2, ..., αq are the eigenvectors of Sw whose corresponding q eigenvalues are positive and q = rank(Sw). Compute the eigenvectors β1, β2, ..., βL (L ≤ c − 1) corresponding to the L largest eigenvalues, where c is the number of face classes. The significant discriminant feature vector is fregular = Bᵀ P1ᵀ y, where y ∈ Rn and B = (β1, β2, ..., βL).

(5) Extract the non-significant discriminant feature vectors of the face image. Compute the eigenvectors γ1, γ2, ..., γL (L ≤ c − 1) corresponding to the largest eigenvalues, and let P2 = (αq+1, αq+2, ..., αm); the non-significant discriminant feature vector is obtained accordingly.
The 3D face recognition stage comprises the following steps:

(1) Detect a frontal face: locate a frontal face in a face image together with its key facial feature points, such as the contour feature points of the face, the left and right eyes, the mouth and the nose.

(2) Reconstruct a three-dimensional face model from the two-dimensional Gabor feature vectors extracted above and a conventional 3D face database. To reconstruct a three-dimensional face model, the ORL (Olivetti Research Laboratory) single-face three-dimensional face database is used, comprising 100 detected face images; each face model in the database has nearly 70,000 vertices. A feature transformation matrix P is determined: in the original three-dimensional face recognition method, this matrix is typically the subspace-analysis projection matrix obtained by a subspace analysis method, composed of the eigenvectors of the sample covariance matrix corresponding to the m largest eigenvalues. The extracted wavelet discriminant feature vectors corresponding to the m largest eigenvalues form the principal feature transformation matrix P′, which is more robust than the original feature matrix P to factors such as illumination, pose and expression, i.e., the features it represents are more accurate and stable.

(3) Process the newly generated face model with template matching and Fisher linear discriminant analysis (FLDA), extracting the within-class differences and between-class differences of the model to further optimize the final recognition result.
S33: Match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity information bank, and match the obtained 3D spatial distribution feature information of the face feature points against the 3D spatial distribution feature information of the face feature points in the human body 3D feature identity information bank.

The matching process of the 3D spatial distribution feature information of the face feature points is as follows:
In one embodiment, the basic elements of the face-image distribution, i.e., the eigenvectors of the covariance matrix of the face-image sample set, are found and used to characterize face images approximately. These eigenvectors are called eigenfaces; they reflect the information implicit in the face sample set and the structural relations of the face. The eigenvectors of the sample covariance matrices of the eyes, cheeks and jaw are correspondingly called eigen-eyes, eigen-jaws and eigen-lips, collectively, feature sub-faces. The feature sub-faces span a subspace of the corresponding image space, called the sub-face space. The projection distance of a test image window in the sub-face space is computed; if the image window satisfies the threshold comparison condition, it is judged to be a face.

This method first determines attributes such as the size, position and distance of facial contour features such as the iris, the nose wing and the corners of the mouth, and then computes their geometric feature quantities, which together form a feature vector describing the face. This recognition based on the face as a whole retains not only the topological relations between the face elements but also the information of the individual elements themselves. The algorithm uses the organs and characteristic regions of the human face: identification parameters formed from data such as the geometric relations are compared with all the original parameters in the database, judged, and confirmed.
In another embodiment, recognition is performed with the finite element method. The object is described by a sparse graph whose vertices are labeled with multi-scale descriptions of the local energy spectrum and whose edges represent topological connection relations labeled with geometric distances; the nearest known graph is then found by an elastic graph matching method.

The face image is modeled as a deformable 3D surface mesh (x, y, I(x, y)), so that the face matching problem is converted into an elastic matching problem of deformable surfaces. The face is deformed by the method of finite element analysis, and whether two pictures show the same person is judged from the deformation. The characteristic of this method is that space (x, y) and gray level I(x, y) are considered simultaneously in one 3D space.
In yet another embodiment, the 3D face contour lines are filtered with wavelets to realize 3D face feature extraction and obtain feature data characterizing the face model. The feature data are matched against the existing models in the sample database with a classifier, and the degree of matching is computed. The classifier takes the support-vector-machine form

f(x) = sgn( Σi αi yi K(xi, x) + b )

where xi ∈ {support vectors obtained by training on the 3D face database}, yi is the class value corresponding to xi, αi is the trained coefficient of xi, b is the classification threshold, and x is the three-dimensional face feature data to be recognized.

From the matching computation, each classifier yields the sub-class to which the three-dimensional face feature data x belongs; all the SVM classifiers are traversed and vote, and x is finally judged to belong to the class that obtains the most votes.
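The classifier-voting scheme can be sketched with hand-chosen support vectors and a linear kernel (the "trained" parameters here are hypothetical stand-ins, not the output of an actual SVM training run on face data):

```python
import numpy as np

def svm_decision(x, sv, y, alpha, b, kernel=np.dot):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b for one trained classifier."""
    return sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, sv)) + b

def vote(x, classifiers):
    """One-vs-rest voting: each binary classifier votes for its class when
    its decision value is positive; the class with the most votes wins."""
    votes = {}
    for cls, (sv, y, alpha, b) in classifiers.items():
        if svm_decision(x, sv, y, alpha, b) > 0:
            votes[cls] = votes.get(cls, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Hypothetical "trained" classifiers separating two clusters on a line.
clf = {
    "A": ([np.array([-1.0]), np.array([1.0])], [1, -1], [1.0, 1.0], 0.0),
    "B": ([np.array([-1.0]), np.array([1.0])], [-1, 1], [1.0, 1.0], 0.0),
}
print(vote(np.array([-0.8]), clf))  # → A
```

Each class contributes one binary decision function; in a real deployment the support vectors, coefficients αi and threshold b would come from training, and ties between classes would need a tie-breaking rule such as the largest decision margin.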
If both the 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the face feature points are matched successfully, proceed to step S34.

S34: Obtain the identity information of the person to be measured.

The human body 3D feature information obtained in this embodiment includes the 3D spatial distribution feature information of the human body feature points as a whole and the 3D spatial distribution feature information of the local face feature points, so that human body recognition can proceed from both overall and local features, which increases the attribute items of human body recognition and improves its accuracy.
In some other embodiments, 2D information such as face complexion and texture obtained from an RGB face image may also be combined with the 3D spatial distribution feature information of the human body feature points and of the face feature points, further increasing the attribute items of recognition and improving recognition accuracy.

It will be understood that in some other embodiments the second and third embodiments described above may also be combined: the RGBD human body image sequence and the RGBD face image are both acquired, so as to obtain the 3D spatial distribution feature information of the human body feature points, the human body dynamic feature information and the 3D spatial distribution feature information of the face feature points, thereby increasing the attribute items of human body recognition and improving its accuracy.
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of a 3D human body recognition device provided by a first embodiment of the present invention.

The 3D human body recognition device of this embodiment includes a human body image acquisition module 10, a human body feature information acquisition module 11, a human body information matching module 12 and an identity information acquisition module 13.

The human body image acquisition module 10 is used to obtain the RGBD human body image of the person to be measured.

The human body feature information acquisition module 11 is connected to the human body image acquisition module 10 and is used to obtain the 3D spatial distribution feature information of the human body feature points from the RGBD human body image. The human body feature information acquisition module 11 includes a collection module 110, a mesh building module 111 and a processing module 112. The collection module 110 is connected to the human body image acquisition module 10 and is used to collect human body feature points from the RGBD human body image. The mesh building module 111 is connected to the collection module 110 and is used to build a human body 3D mesh from the human body feature points. The processing module 112 is connected to the mesh building module 111 and is used to measure the feature values of the human body feature points according to the human body 3D mesh and to compute the 3D spatial distribution feature information of the human body feature points.

The human body information matching module 12 is connected to the human body feature information acquisition module 11 and is used to match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity information bank.

The human body information matching module 12 includes a computation module 120 and a comparison module 121. The computation module 120 is connected to the processing module 112 and is used to compute the degree of matching between the obtained 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity information bank, so as to obtain the highest matching degree. The comparison module 121 is connected to the computation module 120 and is used to compare the highest matching degree with a preset matching-degree threshold; if the highest matching degree reaches the preset threshold range, the match is judged successful.

The identity information acquisition module 13 is connected to the human body information matching module 12 and is used to obtain the identity information of the person to be measured when the match is successful.
Referring to Fig. 7, Fig. 7 is a schematic structural diagram of a 3D human body recognition device provided by a second embodiment of the present invention.

The 3D human body recognition device of this embodiment includes a human body image acquisition module 20, a human body feature information acquisition module 21, a human body information matching module 22, an identity information acquisition module 23 and a dynamic feature information acquisition module 24.

The human body image acquisition module 20 is used to obtain the RGBD human body image of the person to be measured.

The human body feature information acquisition module 21 is connected to the human body image acquisition module 20 and is used to obtain the 3D spatial distribution feature information of the human body feature points from the RGBD human body image.

The dynamic feature information acquisition module 24 is connected to the human body image acquisition module 20 and is used to obtain the human body dynamic feature information from the RGBD human body image sequence.

The human body information matching module 22 is connected to the human body feature information acquisition module 21 and the dynamic feature information acquisition module 24, and is used to match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity information bank, and to match the obtained human body dynamic feature information against the human body dynamic feature information in the information bank.

The identity information acquisition module 23 is connected to the human body information matching module 22 and is used to obtain the identity information of the person to be measured when the match is successful.
Referring to Fig. 8, Fig. 8 is a schematic structural diagram of a 3D human body recognition device provided by a third embodiment of the present invention.
The 3D human body recognition device of this embodiment includes a human body image acquisition module 30, a human body feature information acquisition module 31, a human body information matching module 32, an identity information acquisition module 33, a face image acquisition module 34, a face feature information acquisition module 35, and a face information matching module 36.
The human body image acquisition module 30 is configured to obtain an RGBD human body image of a person to be identified.
The human body feature information acquisition module 31 is connected to the human body image acquisition module 30 and is configured to obtain 3D spatial distribution feature information of human body feature points from the RGBD human body image.
The face image acquisition module 34 is configured to obtain an RGBD face image of the person to be identified.
The face feature information acquisition module 35 is connected to the face image acquisition module 34 and is configured to obtain 3D spatial distribution feature information of face feature points from the RGBD face image.
The matching modules are connected to the human body feature information acquisition module 31 and the face feature information acquisition module 35: the human body information matching module 32 matches the obtained 3D spatial distribution feature information of the human body feature points against the corresponding information in the human body 3D feature identity database, and the face information matching module 36 matches the obtained 3D spatial distribution feature information of the face feature points against the 3D spatial distribution feature information of the face feature points in the same database.
The identity information acquisition module 33 is connected to the matching modules and is configured to obtain the identity information of the person to be identified when both the 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the face feature points are matched successfully.
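The conjunction this embodiment requires, namely that identity is confirmed only when both the body match and the face match succeed, can be stated in a few lines. The score inputs and thresholds below are hypothetical placeholders, not values from the patent.

```python
def combined_match(body_score, face_score,
                   body_threshold=0.9, face_threshold=0.9):
    """Accept an identity only if the body feature match AND the face
    feature match each reach their preset matching-degree threshold."""
    return body_score >= body_threshold and face_score >= face_threshold
```

A high body score alone is not enough; the face score must clear its threshold as well, and vice versa.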
Referring to Fig. 9, Fig. 9 is a schematic structural diagram of a physical apparatus of the 3D human body recognition device provided by an embodiment of the present invention. The apparatus of this embodiment can perform the steps of the methods described above; for related content, refer to the detailed descriptions in those methods, which are not repeated here.
The intelligent electronic device includes a processor 41 and a memory 42 coupled to the processor 41.
The memory 42 is configured to store an operating system and installed programs.
The processor 41 is configured to: obtain an RGBD human body image of a person to be identified; obtain 3D spatial distribution feature information of human body feature points from the RGBD human body image; match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database; and, if the match succeeds, obtain the identity information of the person to be identified.
The processor 41 is further configured to: collect human body feature points from the RGBD human body image; build a human body 3D mesh from the human body feature points; measure feature values of the human body feature points on the human body 3D mesh; and compute the 3D spatial distribution feature information of the human body feature points.
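One possible reading of this extraction step, collecting labelled feature points, connecting them into a mesh, and deriving a spatial-distribution feature vector, is sketched below. The keypoint format, the all-pairs "mesh", and the normalised-distance feature are assumptions for illustration only; the patent does not specify them.

```python
import itertools
import math

def extract_keypoints(rgbd_frame):
    """Stand-in for detection: assume the frame already carries
    labelled (x, y, depth) body landmarks."""
    return rgbd_frame["keypoints"]

def build_mesh_edges(keypoints):
    """Connect every pair of feature points; a real body mesh would
    follow a skeleton topology, but pairwise edges suffice here."""
    return list(itertools.combinations(sorted(keypoints), 2))

def spatial_distribution_features(keypoints):
    """Feature vector = scale-normalised pairwise 3D distances, one
    plausible form of '3D spatial distribution feature information'."""
    distances = [math.dist(keypoints[a], keypoints[b])
                 for a, b in build_mesh_edges(keypoints)]
    scale = max(distances) or 1.0  # guard against all-zero distances
    return [d / scale for d in distances]
```

Normalising by the largest pairwise distance makes the vector invariant to how far the person stands from the camera, which is one way such features could tolerate viewpoint changes.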
The processor 41 is further configured to: compute the matching degree between the obtained 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database, so as to obtain the highest matching degree; and compare the highest matching degree with a preset matching degree threshold, judging the match successful if the highest matching degree reaches the preset matching degree threshold.
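The highest-matching-degree search with a preset threshold could look like the following sketch; cosine similarity is an assumed stand-in for the patent's unspecified matching-degree metric, and the threshold value is arbitrary.

```python
import math

def matching_degree(query, candidate):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(q * c for q, c in zip(query, candidate))
    norm = (math.sqrt(sum(q * q for q in query)) *
            math.sqrt(sum(c * c for c in candidate)))
    return dot / norm if norm else 0.0

def identify(query_features, identity_db, threshold=0.95):
    """Scan the identity database for the highest matching degree and
    return the identity only if it reaches the preset threshold."""
    best_id, best_score = None, -1.0
    for person_id, stored_features in identity_db.items():
        score = matching_degree(query_features, stored_features)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```

Returning nothing when the best score is below the threshold is what distinguishes identification (who is this, if anyone?) from a plain nearest-neighbour lookup.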
The processor 41 is further configured to: obtain an RGBD human body image sequence; obtain human body dynamic feature information from the RGBD human body image sequence; and match the obtained human body dynamic feature information against the human body dynamic feature information in the human body 3D feature identity database.
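A minimal illustration of "dynamic feature information" derived from an image sequence: here the mean per-frame 3D displacement of each tracked feature point, which is an assumed gait-like measure, not the patent's definition.

```python
import math

def dynamic_features(keypoint_sequence):
    """keypoint_sequence: list of {name: (x, y, z)} dicts, one per
    frame of the RGBD sequence, with consistent point names."""
    features = {}
    for name in keypoint_sequence[0]:
        # Sum the 3D displacement of this point between adjacent frames.
        total = sum(math.dist(prev[name], cur[name])
                    for prev, cur in zip(keypoint_sequence,
                                         keypoint_sequence[1:]))
        features[name] = total / (len(keypoint_sequence) - 1)
    return features
```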
The processor 41 is further configured to: obtain an RGBD face image of the person to be identified; obtain 3D spatial distribution feature information of face feature points from the RGBD face image; and match the obtained 3D spatial distribution feature information of the face feature points against the 3D spatial distribution feature information of the face feature points in the human body 3D feature identity database. In the step of obtaining the identity information of the person to be identified if the match succeeds, the successful match means that both the 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the face feature points are matched successfully.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division into modules or units is only a division by logical function; other divisions are possible in actual implementations. For example, multiple units or components may be combined or integrated into another system, and some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In summary, the present invention performs human body recognition using both color information and depth information, and is therefore not affected by changes in season, clothing, or ambient lighting, which improves the accuracy of human body recognition.
The foregoing describes only embodiments of the present invention and does not thereby limit the scope of its claims. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, falls within the scope of protection of the present invention.

Claims (10)

1. A 3D human body recognition method, characterized by comprising the following steps:
obtaining an RGBD human body image of a person to be identified;
obtaining 3D spatial distribution feature information of human body feature points from the RGBD human body image;
matching the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of human body feature points in a human body 3D feature identity database;
if the match succeeds, obtaining the identity information of the person to be identified.
2. The method according to claim 1, characterized in that the step of obtaining the 3D spatial distribution feature information of human body feature points from the RGBD human body image comprises:
collecting human body feature points from the RGBD human body image;
building a human body 3D mesh from the human body feature points;
measuring feature values of the human body feature points on the human body 3D mesh and computing the 3D spatial distribution feature information of the human body feature points.
3. The method according to claim 2, characterized in that the step of matching the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of human body feature points in the human body 3D feature identity database comprises:
computing the matching degree between the obtained 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database, so as to obtain the highest matching degree;
comparing the highest matching degree with a preset matching degree threshold, and judging the match successful if the highest matching degree reaches the preset matching degree threshold.
4. The method according to claim 1, characterized in that, in the step of obtaining the RGBD human body image of the person to be identified, the RGBD human body image is an RGBD human body image sequence;
the step of obtaining the 3D spatial distribution feature information of human body feature points from the RGBD human body image further comprises: obtaining human body dynamic feature information from the RGBD human body image sequence;
and the step of matching the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database further comprises: matching the obtained human body dynamic feature information against the human body dynamic feature information in the human body 3D feature identity database.
5. The method according to claim 1, characterized in that the step of obtaining the RGBD human body image of the person to be identified further comprises: obtaining an RGBD face image of the person to be identified;
the step of obtaining the 3D spatial distribution feature information of human body feature points from the RGBD human body image further comprises: obtaining 3D spatial distribution feature information of face feature points from the RGBD face image;
the step of matching the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database further comprises: matching the obtained 3D spatial distribution feature information of the face feature points against the 3D spatial distribution feature information of face feature points in the human body 3D feature identity database;
and in the step of obtaining the identity information of the person to be identified if the match succeeds, the successful match means that both the 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the face feature points are matched successfully.
6. A 3D human body recognition device, characterized by comprising:
a human body image acquisition module, configured to obtain an RGBD human body image of a person to be identified;
a human body feature information acquisition module, connected to the human body image acquisition module and configured to obtain 3D spatial distribution feature information of human body feature points from the RGBD human body image;
a human body information matching module, connected to the human body feature information acquisition module and configured to match the obtained 3D spatial distribution feature information of the human body feature points against the 3D spatial distribution feature information of human body feature points in a human body 3D feature identity database;
an identity information acquisition module, connected to the human body information matching module and configured to obtain the identity information of the person to be identified when the match succeeds.
7. The device according to claim 6, characterized in that the human body feature information acquisition module comprises:
a collection module, connected to the human body image acquisition module and configured to collect human body feature points from the RGBD human body image;
a mesh building module, connected to the collection module and configured to build a human body 3D mesh from the human body feature points;
a processing module, connected to the mesh building module and configured to measure feature values of the human body feature points on the human body 3D mesh and to compute the 3D spatial distribution feature information of the human body feature points.
8. The device according to claim 7, characterized in that the human body information matching module comprises:
a computing module, connected to the processing module and configured to compute the matching degree between the obtained 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the human body feature points in the human body 3D feature identity database, so as to obtain the highest matching degree;
a comparison module, connected to the computing module and configured to compare the highest matching degree with a preset matching degree threshold and to judge the match successful if the highest matching degree reaches the preset matching degree threshold.
9. The device according to claim 6, characterized in that what the human body image acquisition module obtains is an RGBD human body image sequence;
the device further comprises a dynamic feature information acquisition module, connected to the human body image acquisition module and configured to obtain human body dynamic feature information from the RGBD human body image sequence;
and the human body information matching module is further configured to match the obtained human body dynamic feature information against the human body dynamic feature information in the human body 3D feature identity database.
10. The device according to claim 6, characterized in that the device further comprises:
a face image acquisition module, configured to obtain an RGBD face image of the person to be identified;
a face feature information acquisition module, connected to the face image acquisition module and configured to obtain 3D spatial distribution feature information of face feature points from the RGBD face image;
a face information matching module, connected to the face feature information acquisition module and configured to match the obtained 3D spatial distribution feature information of the face feature points against the 3D spatial distribution feature information of face feature points in the human body 3D feature identity database;
wherein the identity information acquisition module is also connected to the face information matching module and is configured to obtain the identity information of the person to be identified when both the 3D spatial distribution feature information of the human body feature points and the 3D spatial distribution feature information of the face feature points are matched successfully.
CN201611024504.2A 2016-11-14 2016-11-14 3D human body recognition methods and equipment Pending CN106778474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611024504.2A CN106778474A (en) 2016-11-14 2016-11-14 3D human body recognition methods and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611024504.2A CN106778474A (en) 2016-11-14 2016-11-14 3D human body recognition methods and equipment

Publications (1)

Publication Number Publication Date
CN106778474A true CN106778474A (en) 2017-05-31

Family

ID=58969859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611024504.2A Pending CN106778474A (en) 2016-11-14 2016-11-14 3D human body recognition methods and equipment

Country Status (1)

Country Link
CN (1) CN106778474A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334863A (en) * 2018-03-09 2018-07-27 百度在线网络技术(北京)有限公司 Identity identifying method, system, terminal and computer readable storage medium
CN108416312A (en) * 2018-03-14 2018-08-17 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification methods and system taken pictures based on visible light
CN108537236A (en) * 2018-04-04 2018-09-14 天目爱视(北京)科技有限公司 A kind of polyphaser data control system for identifying
CN108564017A (en) * 2018-04-04 2018-09-21 北京天目智联科技有限公司 A kind of biological characteristic 3D 4 D datas recognition methods and system based on grating camera
CN108830252A (en) * 2018-06-26 2018-11-16 哈尔滨工业大学 A kind of convolutional neural networks human motion recognition method of amalgamation of global space-time characteristic
CN109034344A (en) * 2018-06-28 2018-12-18 深圳市必发达科技有限公司 A kind of swimming pool number instant playback device
CN109426785A (en) * 2017-08-31 2019-03-05 杭州海康威视数字技术股份有限公司 A kind of human body target personal identification method and device
CN109919121A (en) * 2019-03-15 2019-06-21 百度在线网络技术(北京)有限公司 A kind of projecting method of manikin, device, electronic equipment and storage medium
CN110135442A (en) * 2019-05-20 2019-08-16 驭势科技(北京)有限公司 A kind of evaluation system and method for feature point extraction algorithm
CN110188616A (en) * 2019-05-05 2019-08-30 盎锐(上海)信息科技有限公司 Space modeling method and device based on 2D and 3D image
CN110598556A (en) * 2019-08-12 2019-12-20 深圳码隆科技有限公司 Human body shape and posture matching method and device
CN111554064A (en) * 2020-03-31 2020-08-18 苏州科腾软件开发有限公司 Remote household monitoring alarm system based on 5G network
CN112435414A (en) * 2020-11-23 2021-03-02 苏州卡创信息科技有限公司 Security monitoring system based on face recognition and monitoring method thereof
CN117994838A (en) * 2024-04-03 2024-05-07 精为技术(天津)有限公司 Real-time micro-expression recognition method and device based on incremental depth subspace network

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932842A (en) * 2006-08-10 2007-03-21 中山大学 Three-dimensional human face identification method based on grid
CN101339669A (en) * 2008-07-29 2009-01-07 上海师范大学 Three-dimensional human face modelling approach based on front side image
CN101347332A (en) * 2008-08-22 2009-01-21 深圳先进技术研究院 Measurement method and equipment of digitized measurement system of human face three-dimensional surface shape
CN102164113A (en) * 2010-02-22 2011-08-24 深圳市联通万达科技有限公司 Face recognition login method and system
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN103310440A (en) * 2013-05-16 2013-09-18 北京师范大学 Canonical correlation analysis-based skull identity authentication method
CN103871106A (en) * 2012-12-14 2014-06-18 韩国电子通信研究院 Method of fitting virtual item using human body model and system for providing fitting service of virtual item
CN104008571A (en) * 2014-06-12 2014-08-27 深圳奥比中光科技有限公司 Human body model obtaining method and network virtual fitting system based on depth camera
CN104077804A (en) * 2014-06-09 2014-10-01 广州嘉崎智能科技有限公司 Method for constructing three-dimensional human face model based on multi-frame video image
CN104573634A (en) * 2014-12-16 2015-04-29 苏州福丰科技有限公司 Three-dimensional face recognition method
CN104599367A (en) * 2014-12-31 2015-05-06 苏州福丰科技有限公司 Multi-user parallel access control recognition method based on three-dimensional face image recognition
CN104715493A (en) * 2015-03-23 2015-06-17 北京工业大学 Moving body posture estimating method
CN104915003A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Somatosensory control parameter adjusting method, somatosensory interaction system and electronic equipment
CN105022982A (en) * 2014-04-22 2015-11-04 北京邮电大学 Hand motion identifying method and apparatus
CN105184280A (en) * 2015-10-10 2015-12-23 东方网力科技股份有限公司 Human body identity identification method and apparatus
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user
CN105786016A (en) * 2016-03-31 2016-07-20 深圳奥比中光科技有限公司 Unmanned plane and RGBD image processing method
US20160220312A1 (en) * 2007-08-17 2016-08-04 Zimmer, Inc. Implant design analysis suite
CN105847684A (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle
CN205453893U (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle
CN105930766A (en) * 2016-03-31 2016-09-07 深圳奥比中光科技有限公司 Unmanned plane

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932842A (en) * 2006-08-10 2007-03-21 中山大学 Three-dimensional human face identification method based on grid
US20160220312A1 (en) * 2007-08-17 2016-08-04 Zimmer, Inc. Implant design analysis suite
CN101339669A (en) * 2008-07-29 2009-01-07 上海师范大学 Three-dimensional human face modelling approach based on front side image
CN101347332A (en) * 2008-08-22 2009-01-21 深圳先进技术研究院 Measurement method and equipment of digitized measurement system of human face three-dimensional surface shape
CN102164113A (en) * 2010-02-22 2011-08-24 深圳市联通万达科技有限公司 Face recognition login method and system
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN103871106A (en) * 2012-12-14 2014-06-18 韩国电子通信研究院 Method of fitting virtual item using human body model and system for providing fitting service of virtual item
CN103310440A (en) * 2013-05-16 2013-09-18 北京师范大学 Canonical correlation analysis-based skull identity authentication method
CN105022982A (en) * 2014-04-22 2015-11-04 北京邮电大学 Hand motion identifying method and apparatus
CN104077804A (en) * 2014-06-09 2014-10-01 广州嘉崎智能科技有限公司 Method for constructing three-dimensional human face model based on multi-frame video image
CN104008571A (en) * 2014-06-12 2014-08-27 深圳奥比中光科技有限公司 Human body model obtaining method and network virtual fitting system based on depth camera
CN104573634A (en) * 2014-12-16 2015-04-29 苏州福丰科技有限公司 Three-dimensional face recognition method
CN104599367A (en) * 2014-12-31 2015-05-06 苏州福丰科技有限公司 Multi-user parallel access control recognition method based on three-dimensional face image recognition
CN104715493A (en) * 2015-03-23 2015-06-17 北京工业大学 Moving body posture estimating method
CN104915003A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Somatosensory control parameter adjusting method, somatosensory interaction system and electronic equipment
CN105184280A (en) * 2015-10-10 2015-12-23 东方网力科技股份有限公司 Human body identity identification method and apparatus
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user
CN105786016A (en) * 2016-03-31 2016-07-20 深圳奥比中光科技有限公司 Unmanned plane and RGBD image processing method
CN105847684A (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle
CN205453893U (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle
CN105930766A (en) * 2016-03-31 2016-09-07 深圳奥比中光科技有限公司 Unmanned plane

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Fan Jingchao et al.: "Research on Kinect-based agricultural virtual teaching", Journal of Anhui Agricultural Sciences *
Mao Yanming et al.: "Research on a fully automatic PPT control method based on Kinect skeleton tracking", Natural Science Journal of Hainan University *
Hu Xun: "Research on motion control of virtual character models based on Kinect", China Masters' Theses Full-text Database, Information Science and Technology *
Xie Jianbin et al.: "Visual Perception and Intelligent Video Surveillance", 31 March 2012 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11126828B2 (en) 2017-08-31 2021-09-21 Hangzhou Hikvision Digital Technology Co., Ltd. Method and device for recognizing identity of human target
CN109426785B (en) * 2017-08-31 2021-09-10 杭州海康威视数字技术股份有限公司 Human body target identity recognition method and device
CN109426785A (en) * 2017-08-31 2019-03-05 杭州海康威视数字技术股份有限公司 A kind of human body target personal identification method and device
CN108334863B (en) * 2018-03-09 2020-09-04 百度在线网络技术(北京)有限公司 Identity authentication method, system, terminal and computer readable storage medium
CN108334863A (en) * 2018-03-09 2018-07-27 百度在线网络技术(北京)有限公司 Identity identifying method, system, terminal and computer readable storage medium
US10769423B2 (en) 2018-03-09 2020-09-08 Baidu Online Network Technology (Beijing) Co., Ltd. Method, system and terminal for identity authentication, and computer readable storage medium
CN108416312B (en) * 2018-03-14 2019-04-26 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification method taken pictures based on visible light
CN108416312A (en) * 2018-03-14 2018-08-17 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification methods and system taken pictures based on visible light
CN108537236A (en) * 2018-04-04 2018-09-14 天目爱视(北京)科技有限公司 A kind of polyphaser data control system for identifying
CN108564017A (en) * 2018-04-04 2018-09-21 北京天目智联科技有限公司 A kind of biological characteristic 3D 4 D datas recognition methods and system based on grating camera
CN108830252A (en) * 2018-06-26 2018-11-16 哈尔滨工业大学 A kind of convolutional neural networks human motion recognition method of amalgamation of global space-time characteristic
CN108830252B (en) * 2018-06-26 2021-09-10 哈尔滨工业大学 Convolutional neural network human body action recognition method fusing global space-time characteristics
CN109034344A (en) * 2018-06-28 2018-12-18 深圳市必发达科技有限公司 A kind of swimming pool number instant playback device
CN109919121A (en) * 2019-03-15 2019-06-21 百度在线网络技术(北京)有限公司 A kind of projecting method of manikin, device, electronic equipment and storage medium
CN110188616A (en) * 2019-05-05 2019-08-30 盎锐(上海)信息科技有限公司 Space modeling method and device based on 2D and 3D image
CN110188616B (en) * 2019-05-05 2023-02-28 上海盎维信息技术有限公司 Space modeling method and device based on 2D and 3D images
CN110135442A (en) * 2019-05-20 2019-08-16 驭势科技(北京)有限公司 A kind of evaluation system and method for feature point extraction algorithm
CN110135442B (en) * 2019-05-20 2021-12-14 驭势科技(北京)有限公司 Evaluation system and method of feature point extraction algorithm
CN110598556A (en) * 2019-08-12 2019-12-20 深圳码隆科技有限公司 Human body shape and posture matching method and device
CN111554064A (en) * 2020-03-31 2020-08-18 苏州科腾软件开发有限公司 Remote household monitoring alarm system based on 5G network
CN112435414A (en) * 2020-11-23 2021-03-02 苏州卡创信息科技有限公司 Security monitoring system based on face recognition and monitoring method thereof
CN117994838A (en) * 2024-04-03 2024-05-07 精为技术(天津)有限公司 Real-time micro-expression recognition method and device based on incremental depth subspace network

Similar Documents

Publication Publication Date Title
CN106778474A (en) 3D human body recognition methods and equipment
CN106778468B (en) 3D face identification method and equipment
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
CN103632132B (en) Face detection and recognition method based on skin color segmentation and template matching
CN106780906B (en) A kind of testimony of a witness unification recognition methods and system based on depth convolutional neural networks
CN102592136B (en) Three-dimensional human face recognition method based on intermediate frequency information in geometry image
CN102324025B (en) Human face detection and tracking method based on Gaussian skin color model and feature analysis
CN101558431B (en) Face authentication device
CN108182397B (en) Multi-pose multi-scale human face verification method
CN104616316B (en) Personage's Activity recognition method based on threshold matrix and Fusion Features vision word
CN107330371A (en) Acquisition methods, device and the storage device of the countenance of 3D facial models
CN112784736B (en) Character interaction behavior recognition method based on multi-modal feature fusion
CN106599785B (en) Method and equipment for establishing human body 3D characteristic identity information base
KR101433472B1 (en) Apparatus, method and computer readable recording medium for detecting, recognizing and tracking an object based on a situation recognition
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information
CN103049751A (en) Improved weighting region matching high-altitude video pedestrian recognizing method
CN106778489A (en) The method for building up and equipment of face 3D characteristic identity information banks
CN105320950A (en) A video human face living body detection method
CN105022982A (en) Hand motion identifying method and apparatus
CN105354555B (en) A kind of three-dimensional face identification method based on probability graph model
CN107392187A (en) A kind of human face in-vivo detection method based on gradient orientation histogram
Yu et al. Improvement of face recognition algorithm based on neural network
CN106778491B (en) The acquisition methods and equipment of face 3D characteristic information
CN105608710A (en) Non-rigid face detection and tracking positioning method
CN107784284B (en) Face recognition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531
