CN106372552B - Human body target recognition positioning method - Google Patents

Human body target recognition positioning method

Info

Publication number
CN106372552B
CN106372552B (Application CN201610755695.3A)
Authority
CN
China
Prior art keywords
human body
body target
positioning result
positioning
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610755695.3A
Other languages
Chinese (zh)
Other versions
CN106372552A (en)
Inventor
孙宁霄
杨继绕
吴琼之
孙林
苏昶仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201610755695.3A priority Critical patent/CN106372552B/en
Publication of CN106372552A publication Critical patent/CN106372552A/en
Application granted granted Critical
Publication of CN106372552B publication Critical patent/CN106372552B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10009 Methods or arrangements for sensing record carriers by electromagnetic radiation, sensing by radiation using wavelengths larger than 0.1 mm, e.g. radio-waves or microwaves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Toxicology (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a human body target recognition and positioning method, comprising: obtaining a first positioning result obtained by positioning a human body target with an ultra-high-frequency (UHF) radio-frequency identification (RFID) system; obtaining a second positioning result obtained by positioning the human body target with a computer vision system; fusing the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target; and optimizing the positioning accuracy of the third positioning result with a preset combined optimization algorithm to obtain the final positioning result for the human body target. The present invention can identify and locate human body targets with high efficiency and high accuracy.

Description

Human body target recognition positioning method
Technical field
The present invention relates to the field of security identification and positioning technology, and in particular to a human body target recognition and positioning method and apparatus.
Background art
The demand for security has always accompanied the development of human civilization, and so has the security industry. The application of the closed-circuit television (Closed Circuit Television, CCTV) monitoring system to the security of New York City in the 1960s marked the birth of the modern security industry. Since then, with the development of science and technology and increasingly serious social safety risks, many technologies have been applied to the security industry and a series of security devices have been derived from them. Traditionally, a complete security system contains three parts: a monitoring module, a risk evaluation module and a response module. Although the spread of intelligent technology has blurred the boundaries between the three, the monitoring module is unquestionably still the front end of the whole system. The sensitivity of the monitoring devices and their ability to collect information determine the precision and robustness of the entire security system.
Moreover, because the three common types of detection devices, the closed-circuit monitoring system (i.e. the computer vision system), the motion detector and the ultra-high-frequency (UHF) radio-frequency identification (RFID) system, each have their own technical weaknesses, no single monitoring system can satisfy the modern security industry's demands for efficiency, accuracy and automation. In recent years, besides the study of new monitoring modes, various schemes for integrating heterogeneous monitoring systems have been proposed. Most of these solutions focus on fusing the final results of the individual monitoring subsystems, and such high-level fusion generally fails to fully exploit the advantages of each subsystem. It is therefore necessary to propose a deeper fusion mode for multiple monitoring means that overcomes the limitations of existing schemes and improves the efficiency of the integrated system.
In view of this, how to provide an efficient and accurate human body target recognition and positioning method has become a technical problem that currently needs to be solved.
Summary of the invention
In order to solve the above technical problem, the present invention provides a human body target recognition and positioning method that can identify and locate human body targets with high efficiency and high accuracy.
In a first aspect, the present invention provides a human body target recognition and positioning method, comprising:
obtaining a first positioning result obtained by positioning a human body target with an ultra-high-frequency (UHF) radio-frequency identification (RFID) system;
obtaining a second positioning result obtained by positioning the human body target with a computer vision system;
fusing the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target;
optimizing the positioning accuracy of the third positioning result with a preset combined optimization algorithm to obtain the final positioning result for the human body target.
Optionally, obtaining the first positioning result obtained by positioning the human body target with the UHF RFID system comprises:
positioning the position of the tag column carried on the human body target in the UHF RFID system with an angle-of-arrival (AoA) localization algorithm based on a passive tag column, and thereby obtaining the first positioning result for the human body target;
wherein the UHF RFID system comprises the tag column on the human body target and at least three UHF RFID reading devices arranged at intervals along the same straight line; any two adjacent UHF RFID reading devices are separated by a preset first distance, and the tag column comprises two electronic tags separated by a preset second distance.
Optionally, the preset second distance is greater than zero and less than λ/4, where λ is the wavelength of the carrier wave transmitted by the UHF RFID reading devices.
Optionally, positioning the position of the tag column on the human body target in the UHF RFID system with the angle-of-arrival localization algorithm based on the passive tag column, and thereby obtaining the first positioning result for the human body target, comprises:
obtaining the angle of arrival of the echo signal returned by the tag column, as received by the antenna of each UHF RFID reading device;
performing two-dimensional localization of the tag column position according to the geometric relationship between the angles of arrival, the position of the tag column and the positions of the antennas of the UHF RFID reading devices, and thereby obtaining the first positioning result for the human body target.
Optionally, obtaining the second positioning result obtained by positioning the human body target with the computer vision system comprises:
obtaining an image captured by a camera placed at a preset position in the computer vision system when it photographs the human body target;
counting the variation of the pixel value of each pixel in the image with a Gaussian mixture model algorithm;
detecting the human body target in the image with a histogram of oriented gradients (HOG) algorithm;
performing a coordinate system conversion on the human body target detected in the image to obtain the real-world coordinates of the human body target, and thereby obtaining the second positioning result for the actual position of the human body target.
Optionally, performing the coordinate system conversion on the human body target detected in the image to obtain the real-world coordinates of the human body target, and thereby obtaining the second positioning result for the actual position of the human body target, comprises:
establishing a three-dimensional Cartesian coordinate system with the position of the camera as the origin, expressing the unit coordinate vectors of the world coordinate system with the three unit coordinate vectors of this Cartesian coordinate system, on this basis further deriving the transfer matrix that maps the pixel coordinates of the human body target to the physical plane coordinates of the human body target, calculating the real-world coordinates of the human body target according to the transfer matrix, and thereby obtaining the second positioning result for the actual position of the human body target.
Optionally, detecting the human body target in the image with the histogram of oriented gradients algorithm comprises:
converting the image into a grayscale image;
normalizing the pixel values in the grayscale image with a Gamma correction algorithm;
calculating the gradient direction and magnitude of each pixel;
dividing the normalized grayscale image into square cells of the same size and counting the gradient direction and magnitude of each pixel in them, so as to obtain the feature vector of each square cell;
combining multiple adjacent square cells into a rectangular block and normalizing the feature vectors within the rectangular block to obtain the feature descriptor of the rectangular block;
combining the feature descriptors of all blocks to obtain the histogram-of-gradients feature vector of the image, and thereby detecting the human body target in the image.
Optionally, fusing the first positioning result and the second positioning result with the preset fusion algorithm to obtain the third positioning result for the human body target comprises:
preliminarily fusing the first positioning result and the second positioning result with a variance-weighted averaging algorithm;
estimating the true value of the signal from the measurement data with a Kalman filtering algorithm, so as to further improve the precision of the positioning result obtained after the preliminary fusion and obtain the third positioning result for the human body target.
Optionally, the preset combined optimization algorithm comprises an initialization process, a cyclic tracking process and a display process;
the initialization process is used for initializing a new human body target that has entered the scene;
the cyclic tracking process is used for continuously updating and tracking the position of the human body target;
the display process is used for drawing the result of the cyclic tracking into every frame of the image;
wherein the initialization process and the cyclic tracking process comprise a preprocessing module, a data comparison module and a precision enhancement module, wherein:
the preprocessing module estimates the region in which the human body target may currently be present, according to the human body target position obtained from the first positioning result or the previous-frame position of the human body target;
the data comparison module compares the human body target position obtained from the first positioning result and the human body target position obtained from the second positioning result with the previous human body target position respectively, selects the most probable human body target position and sends it to the precision enhancement module;
the precision enhancement module performs precision optimization on the third positioning result according to the most probable human body target position selected by the data comparison module, and obtains the final positioning result for the human body target.
As can be seen from the above technical solution, the human body target recognition and positioning method of the present invention fuses, with a preset fusion algorithm, the first positioning result obtained by positioning the human body target with the UHF RFID system and the second positioning result obtained by positioning the human body target with the computer vision system, and optimizes the positioning accuracy of the fusion result with a preset combined optimization algorithm to obtain the final positioning result for the human body target; the method can thus identify and locate human body targets with high efficiency and high accuracy.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the human body target recognition and positioning method provided by one embodiment of the present invention;
Fig. 2 is a schematic diagram of the basic model of the angle-of-arrival localization algorithm based on a passive tag column provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the geometric relationship between the angles of arrival, the tag column position and the antenna positions in the angle-of-arrival localization algorithm based on a passive tag column provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the antenna placement used in the experiment, provided by an embodiment of the present invention, that verifies and assesses the positioning of the tag column on the human body target in the UHF RFID system with the angle-of-arrival localization algorithm based on a passive tag column and the first positioning result thus obtained;
Fig. 5a is a schematic diagram of a specific flow of the human body target recognition and positioning method provided by an embodiment of the present invention;
Fig. 5b is a schematic diagram of a specific flow of the human body target recognition and positioning method provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Table 1 shows a comparison of the strengths and weaknesses of the UHF RFID system, the computer vision system and the motion detector. As can be seen from Table 1, only the computer vision system and the UHF RFID system have the ability to locate human body targets over a large area. This common ground means that the two systems can achieve deep fusion by sharing the location information of the human body target.
Table 1
Fig. 1 shows a schematic flowchart of the human body target recognition and positioning method provided by one embodiment of the present invention. As shown in Fig. 1, the method of this embodiment includes the following steps 101-104.
101. Obtain the first positioning result obtained by positioning a human body target with a UHF RFID system.
102. Obtain the second positioning result obtained by positioning the human body target with a computer vision system.
103. Fuse the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target.
104. Optimize the positioning accuracy of the third positioning result with a preset combined optimization algorithm to obtain the final positioning result for the human body target.
The human body target recognition and positioning method of this embodiment fuses, with a preset fusion algorithm, the first positioning result obtained by positioning the human body target with the UHF RFID system and the second positioning result obtained by positioning the human body target with the computer vision system, and performs positioning accuracy optimization on the fusion result with a preset combined optimization algorithm to obtain the final positioning result for the human body target; it can thus identify and locate the human body target with high efficiency and high accuracy.
In a specific application, step 101 above may comprise:
positioning the position of the tag column carried on the human body target in the UHF RFID system with the angle-of-arrival localization algorithm based on a passive tag column, and thereby obtaining the first positioning result for the human body target;
wherein the UHF RFID system comprises the tag column on the human body target and at least three UHF RFID reading devices arranged at intervals along the same straight line; any two adjacent UHF RFID reading devices are separated by a preset first distance, and the tag column comprises two electronic tags separated by a preset second distance.
The preset second distance is greater than zero and less than λ/4, where λ is the wavelength of the carrier wave transmitted by the UHF RFID reading devices.
In a specific application, the UHF RFID reading device is a UHF RFID reader-writer.
Specifically, positioning the position of the tag column on the human body target in the UHF RFID system with the angle-of-arrival localization algorithm based on the passive tag column, and thereby obtaining the first positioning result for the human body target, may further comprise the following steps 101a and 101b (not shown in the figures):
101a. Obtain the angle of arrival of the echo signal returned by the tag column, as received by the antenna of each UHF RFID reading device.
Specifically, step 101a may comprise the following steps S1 and S2 (not shown in the figures):
S1. Obtain, for each UHF RFID reading device, the arrival phase offsets of the two echo signals returned respectively by the two electronic tags in the tag column, as received by its antenna.
S2. Obtain, from the arrival phase offsets of the two echo signals, the wavelength of the carrier wave transmitted by the UHF RFID reading device and the preset second distance, the angle of arrival of the echo signal returned by the tag column, as received by the antenna of each UHF RFID reading device.
Further, step S2 may specifically comprise:
obtaining, by a first formula, the angle of arrival θ of the echo signal returned by the tag column, as received by the antenna of each UHF RFID reading device, from the arrival phase offsets φ_A and φ_B of the two echo signals, the wavelength λ of the carrier wave transmitted by the UHF RFID reading device and the preset second distance d;
wherein the first formula is:
θ = arccos(λ(φ_A - φ_B) / (4πd))   (1)
101b. Perform two-dimensional localization of the tag column position according to the geometric relationship between the angles of arrival, the position of the tag column and the positions of the antennas of the UHF RFID reading devices, and thereby obtain the first positioning result for the human body target.
Specifically, the above formula (1) is derived as follows:
Referring to Fig. 2, which shows the basic model of the angle-of-arrival localization algorithm based on a passive tag column, the model includes the antenna C of any one UHF RFID reading device in the UHF RFID system and the tag column (the two electronic tags A and B separated by the preset second distance). Let the coordinates of electronic tag A be (0, 0), the coordinates of electronic tag B be (d, 0) and the coordinates of antenna C be (x, h); then the difference between the distances from electronic tags A and B to antenna C is:
r_A - r_B = sqrt(x^2 + h^2) - sqrt((x - d)^2 + h^2)   (2)
Since d is much smaller than the distance from antenna C to electronic tags A and B, the right side of formula (2) can be approximated to first order in d, giving:
r_A - r_B ≈ d·x / sqrt(x^2 + h^2) = d·cosθ   (3)
where θ is the angle of arrival, i.e. the angle between the tag-column direction AB and the line from tag A to antenna C.
Considering the linear relationship between the arrival phase offset of the echo signal received by each UHF RFID reading device through its antenna and the propagation distance of that signal, we obtain:
φ_A - φ_B = (4π/λ)(r_A - r_B)  (mod 2π)   (4)
where φ_A and φ_B are the arrival phase offsets of the two echo signals returned respectively by electronic tags A and B and received by antenna C of the UHF RFID reading device.
Let Δφ = φ_A - φ_B. Combining formulas (3) and (4) and transforming, we obtain:
θ = arccos(λ·Δφ / (4πd))   (5)
Because the tag column, as the positioning target, introduces a new degree of freedom (the direction of the tag column), step 101b above requires three or more antennas to achieve two-dimensional localization of the tag column position. The two-dimensional coordinates of the tag column can then be calculated from the obtained angles of arrival by simple geometric relationships; Fig. 3 is a schematic diagram of the geometric relationship between the angles of arrival, the tag column position and the antenna positions.
It should be noted that the preset second distance d is chosen as follows:
On the one hand, in formula (5) the value range of cosθ is [-1, 1] while the value range of Δφ is [0, 2π), so formula (5) holds for any θ if and only if d < λ/4. In a specific application, the wavelength of the carrier wave of a UHF RFID reading device is about 32 cm, so d should be less than 8 cm. On the other hand, an error analysis shows that the random positioning error is inversely proportional to the electronic tag spacing d, so d should be as large as possible to reduce the random error. Therefore, in this embodiment the value of d should preferably be 8 cm.
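For illustration only, formula (5) can be evaluated numerically as in the Python sketch below. The function and variable names are chosen for this example; the two-ray intersection helper assumes the bearing of the tag relative to the antenna baseline is known (for instance when the tag column is parallel to the baseline), whereas the embodiment itself uses three or more antennas so that the tag-column orientation can also be resolved.

```python
import numpy as np

def angle_of_arrival(phi_a, phi_b, d=0.08, lam=0.32):
    """Estimate the angle of arrival (radians) of the tag column from the
    backscatter phase offsets phi_a, phi_b (radians) of its two tags,
    separated by d metres; lam is the carrier wavelength in metres."""
    # Wrap the phase difference into (-pi, pi]; unambiguous only when d < lam/4.
    dphi = np.angle(np.exp(1j * (phi_a - phi_b)))
    # The round-trip backscatter path contributes 4*pi/lam radians per metre.
    cos_theta = lam * dphi / (4 * np.pi * d)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def locate_from_two_bearings(theta1, theta2, x1, x2):
    """Intersect two bearing rays cast from antennas located on the x-axis at
    x1 and x2 (angles measured from the baseline) to get a 2-D position."""
    t1, t2 = np.tan(theta1), np.tan(theta2)
    x = (t1 * x1 - t2 * x2) / (t1 - t2)
    y = t1 * (x - x1)
    return x, y
```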
In order to verify and assess the angle-of-arrival algorithm based on the passive tag column, this embodiment was tested with an IPJ-REV-R420-GX21M UHF RFID reader-writer. The selected antenna model is E911011PCR, with a directional gain of 11 dBic and a beam width of 40°; the antenna placement is shown in Fig. 4. A1 and A2 are antennas, and b and b' are the contour lines of the antennas. The tag column was placed in the rectangle D1D2D3D4, with the grid lines between tag column positions spaced 0.5 m apart. The experimental results show a maximum positioning deviation of 0.43 m and a minimum deviation of 0.07 m. This result meets the theoretical expectation, and the detection accuracy of the method of this embodiment is relatively high.
In a specific application, step 102 above may comprise the following steps 102a-102d (not shown in the figures):
102a. Obtain the image captured by a camera placed at a preset position in the computer vision system when it photographs the human body target.
102b. Count the variation of the pixel value of each pixel in the image with a Gaussian mixture model algorithm.
It should be noted that in the Gaussian mixture model algorithm, K Gaussian distribution models need to be established for each pixel, and these Gaussian distribution models are used to count the variation of the pixel value of each pixel in the image; that is, for every pixel in the image, the probability of its value in the time domain is represented by K Gaussian distribution models.
The probability of the current pixel value x_t = [x_1,t, x_2,t, ..., x_n,t] can then be expressed as:
P(x_t) = Σ_{k=1..K} w_k,t · η(x_t, μ_k,t, Σ_k,t)   (6)
where η(x_t, μ, Σ) is the Gaussian probability density function:
η(x_t, μ, Σ) = (2π)^(-n/2) |Σ|^(-1/2) exp(-(x_t - μ)^T Σ^(-1) (x_t - μ) / 2)   (7)
Here K is the number of Gaussian models, w_k,t is the weight of the k-th Gaussian distribution model of the current pixel at time t, μ_k,t is the mathematical expectation of the k-th Gaussian distribution model of the current pixel at time t, Σ_k,t is the covariance matrix of the k-th Gaussian distribution model of the current pixel at time t, and n is a positive integer (the number of colour channels). For simplicity, the three channels of the RGB or YUV colour space are generally considered mutually independent, i.e. Σ_k,t = σ_k,t^2 · I, where I is the three-dimensional identity matrix and σ_k,t is the standard deviation of each colour component of the k-th Gaussian distribution model of the current pixel at time t.
The Gaussian distribution models of each pixel are sorted by w_k,t/σ_k,t; this value characterizes the probability that the corresponding Gaussian distribution model is a background model. If a pixel value does not satisfy any existing Gaussian distribution model, a new Gaussian distribution model is established and the last Gaussian distribution model in the sorted order is eliminated.
In order to further simplify the computation, at time t a pixel value x_t is assigned to the k-th Gaussian distribution model if and only if it satisfies the following formula (8):
|x_t - μ_k,t-1| < D·σ_k,t-1   (8)
where D is a user-defined parameter, usually taking the value 2.5.
At this point, the parameters of the matched Gaussian distribution model are updated according to the following recursive formulas:
w_k,t = (1 - ρ)·w_k,t-1 + ρ   (9)
μ_k,t = (1 - ρ)·μ_k,t-1 + ρ·x_t   (10)
σ_k,t^2 = (1 - ρ)·σ_k,t-1^2 + ρ·(x_t - μ_k,t)^2   (11)
where ρ is the learning rate, which takes a value between 0 and 1; the value of ρ determines the rate at which the Gaussian distribution model is updated.
The weights of the unmatched Gaussian distribution models are updated according to the following formula (12):
w_i,t = (1 - α)·w_i,t-1,  i ≠ k   (12)
If the pixel value does not satisfy any Gaussian distribution model, a new Gaussian distribution model is established; the expectation of the new Gaussian distribution model is set to the pixel value x_t, and its standard deviation and weight are set to preset default values.
102c. Detect the human body target in the image with a histogram of oriented gradients algorithm.
Specifically, step 102c may comprise the following steps A1-A6 (not shown in the figures):
A1. Convert the image into a grayscale image.
A2. Normalize the pixel values in the grayscale image with a Gamma correction algorithm.
In a specific application, the expression of the Gamma correction algorithm is:
I'(x, y) = I(x, y)^Γ   (13)
where I(x, y) is the input pixel value, I'(x, y) is the output pixel value, and Γ is the gamma parameter; the value range of Γ is (0, 1), and Γ is normally set to 0.5.
It can be understood that step A2 reduces the influence of illumination variation and random noise.
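For example, formula (13) with Γ = 0.5 can be applied to an 8-bit grayscale image as in the sketch below; normalising to [0, 1] before exponentiation and scaling back to [0, 255] are implementation choices made for this example.

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    """Apply I'(x, y) = I(x, y)^gamma (formula (13)) to an 8-bit grayscale image."""
    normalised = gray.astype(np.float32) / 255.0
    return (np.power(normalised, gamma) * 255.0).astype(np.uint8)
```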
A3. Calculate the gradient direction and magnitude of each pixel.
In a specific application, the horizontal and vertical gradients G_x and G_y of each pixel are calculated as:
G_x(x, y) = H(x + 1, y) - H(x - 1, y),  G_y(x, y) = H(x, y + 1) - H(x, y - 1)   (14)
where G_x(x, y) is the gradient in the horizontal direction at the pixel with coordinates (x, y), G_y(x, y) is the gradient in the vertical direction at the pixel with coordinates (x, y), and H(x, y) is the pixel value of the pixel with coordinates (x, y).
The modulus of the gradient (i.e. the gradient magnitude) G(x, y) is:
G(x, y) = sqrt(G_x(x, y)^2 + G_y(x, y)^2)   (15)
and the gradient direction θ'(x, y) is:
θ'(x, y) = arctan(G_y(x, y) / G_x(x, y))   (16)
In general, this step can be implemented by a two-dimensional convolution between the image matrix and the derivative kernels used in Canny edge detection.
A4. Divide the normalized grayscale image into square cells of the same size and count the gradient direction and magnitude of each pixel within each cell, so as to obtain the feature vector of each square cell.
A5. Combine multiple adjacent square cells into a rectangular block and normalize the feature vectors within the rectangular block to obtain the feature descriptor of the rectangular block.
A6. Combine the feature descriptors of all blocks to obtain the histogram-of-gradients feature vector of the image, and thereby detect the human body target in the image.
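In practice, steps A1-A6 correspond to the standard HOG pedestrian detection pipeline. The sketch below uses OpenCV's built-in HOG descriptor with a pre-trained linear SVM as a practical illustration; it is a substitute implementation for this description, not the patent's own code, and the image path is a placeholder.

```python
import cv2

def detect_people(image_path):
    """Detect human targets with OpenCV's HOG + linear SVM pedestrian detector
    (the same cell/block/descriptor scheme as steps A1-A6)."""
    img = cv2.imread(image_path)
    hog = cv2.HOGDescriptor()  # default: 8x8 cells, 2x2-cell blocks, 9 orientation bins
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    # Scan the image with a sliding detection window at multiple scales.
    rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return rects  # (x, y, w, h) bounding boxes of detected people

if __name__ == "__main__":
    for (x, y, w, h) in detect_people("frame.jpg"):  # placeholder file name
        print("person at", x, y, w, h)
```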
102d. Perform a coordinate system conversion on the human body target detected in the image to obtain the real-world coordinates of the human body target, and thereby obtain the second positioning result for the actual position of the human body target.
Specifically, step 102d may comprise:
establishing a three-dimensional Cartesian coordinate system with the position of the camera as the origin, expressing the unit coordinate vectors of the world coordinate system with the three unit coordinate vectors of this Cartesian coordinate system, on this basis further deriving the transfer matrix that maps the pixel coordinates of the human body target to the physical plane coordinates of the human body target, calculating the real-world coordinates of the human body target according to the transfer matrix, and thereby obtaining the second positioning result for the actual position of the human body target.
It should be noted that, in this embodiment, the angle between the camera pointing direction and the horizontal plane can be regarded as determined by the camera position and the straight line from the camera to the point at infinity in the direction of the vector V. The distance L from an image point (also referred to as the vanishing point) to the image centre is proportional to the tangent of the angle between the camera pointing direction and the line through the imaged point and the vanishing point [4]; this relationship is expressed by formulas (17) and (18), where α is the horizontal angle formed between the imaging centre and the imaged point, β is the pitch angle formed between the imaging centre and the imaged point, and x' and z' are the horizontal and vertical pixel distances from the image point to the centre of the imaging plane.
As described above, the key to estimating the camera pointing direction is to find the pixel distance between the image centre and the "point at infinity" along a reference axis of the world coordinate system. The position of this point at infinity in the image can be obtained by calculating the "intersection", in the image, of two straight lines that are parallel in reality, either vertical or lying in the horizontal plane of the world coordinate system.
Usually, in indoor monitoring, the seams between floor tiles or ceiling panels can be regarded as good reference parallel lines, and the position of the "point at infinity" in the image can then be obtained via the following steps (see also the illustrative sketch after this list):
A) Find all straight lines in the image using the Hough transform;
B) select the straight lines whose lengths meet the requirement;
C) filter out all straight lines whose slopes do not meet the requirement, according to the prior attitude information from the pan-tilt unit;
D) merge straight lines that are too close to each other into one;
E) find the point that three or more of the remaining lines pass through jointly.
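The sketch below illustrates steps A), B) and E) with OpenCV's probabilistic Hough transform; the thresholds are assumed values, and the slope filtering of step C) and the line merging of step D) are omitted for brevity.

```python
import numpy as np
import cv2

def vanishing_point(gray):
    """Estimate the image position of the 'point at infinity' from roughly
    parallel scene lines: Hough line detection followed by a robust
    aggregation of the pairwise line intersections."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=120, maxLineGap=10)   # steps A), B)
    if lines is None:
        return None
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            x1, y1, x2, y2 = lines[i][0]
            x3, y3, x4, y4 = lines[j][0]
            # Intersection of the two (infinite) lines in homogeneous coordinates.
            l1 = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
            l2 = np.cross([x3, y3, 1.0], [x4, y4, 1.0])
            p = np.cross(l1, l2)
            if abs(p[2]) > 1e-6:
                pts.append(p[:2] / p[2])
    # Step E): take the point supported by many intersections (median is robust).
    return np.median(np.array(pts), axis=0) if pts else None
```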
A three-dimensional Cartesian coordinate system with the camera position as the origin, called the camera coordinate system, is established. Let the camera pointing direction be the coordinate axis V, let the axis pointing horizontally to the right be the coordinate axis U, and let the coordinate axis W be given by U × V. The three coordinate axes of the three-dimensional world coordinate system are X, Y and Z. According to the camera pointing information obtained in the previous section, the unit coordinate vectors of the world coordinate system can be expressed in terms of the three unit vectors of the camera coordinate system.
On this basis, the transfer matrix that maps the target pixel coordinates to the physical plane coordinates of the target is then derived.
Assume that the Z coordinate of the human body target is a fixed value h. Since the mapping from camera coordinates to world coordinates is linear, the horizontal plane z = h of the world coordinate system is necessarily mapped to a plane in the camera coordinate system. Assume that this plane is
v = a·u + b·w + c   (22)
From formula (20), the plane constraint can be rewritten in terms of the world coordinates; since the right-hand side of the resulting formula (24) must be independent of u and w, a system of equations is obtained whose solution determines the parameters a, b and c. The actual coordinates [x, y]^T can then be expressed in terms of the image coordinates [u, w]^T, and conversely the coordinates on the image can be expressed in terms of the actual coordinates.
Thus, the actual coordinates of the specific recognized person are calculated through this matrix.
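The transfer matrix above plays the same role as a ground-plane homography between image pixels and floor coordinates. As an illustrative alternative to the derivation above (not the patent's method), such a mapping can also be calibrated directly from four reference points whose floor positions are known; the point values below are placeholders.

```python
import numpy as np
import cv2

# Four image points (pixels) and their known floor positions (metres).
img_pts = np.float32([[420, 710], [1500, 705], [1280, 380], [640, 385]])
floor_pts = np.float32([[0.0, 0.0], [4.0, 0.0], [4.0, 6.0], [0.0, 6.0]])

H = cv2.getPerspectiveTransform(img_pts, floor_pts)   # 3x3 plane-to-plane mapping

def pixel_to_floor(u, v):
    """Map an image pixel (u, v), e.g. the foot point of a detected person,
    to floor coordinates (x, y) through the calibrated homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```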
In a specific application, step 103 above may comprise the following steps 103a and 103b (not shown in the figures):
103a. Preliminarily fuse the first positioning result and the second positioning result with a variance-weighted averaging algorithm.
In a specific application, the process of the variance-weighted averaging algorithm may be as follows:
Assume that the position measurement equations of the UHF RFID system and of the computer vision system are respectively:
B = X + U   (31)
C = X + V   (32)
where B is the position vector measured by the UHF RFID system, C is the position vector measured by the computer vision subsystem, X is the actual position vector, and U and V are zero-mean, mutually independent noise vectors whose covariance matrices are Q and R respectively, Q and R being positive definite matrices.
Let Z = K × B + (I - K) × C   (35)
where K is the coefficient matrix for which the covariance matrix of Z is minimal, called the optimal coefficient matrix.
The offset between Z and the actual position X is then
Z - X = K × U + (I - K) × V
and the covariance matrix of Z is
P = K Q K^T + (I - K) R (I - K)^T
Let δP denote the change of P when the optimal coefficient matrix K is perturbed by a deviation matrix δK; then
δP = δK·W + (δK·W)^T + δK(Q + R)δK^T   (39)
where:
W = Q K^T - R(I - K)^T   (40)
According to the foregoing assumptions, (Q + R) is positive definite, so whatever value δK takes, the third term of formula (39) is always positive semi-definite; and when W ≠ 0, the sign of the first two terms of formula (39) changes with δK. Therefore the variance P is minimal if and only if W ≡ 0, i.e.
K = R(Q + R)^(-1)   (41)
(I - K) = Q(Q + R)^(-1)   (42)
At this point, Z and its covariance matrix are:
Z = R(Q + R)^(-1) × B + Q(Q + R)^(-1) × C   (43)
P = (Q + R)^(-1)(QRQ + RQR)(Q + R)^(-1)   (44)
The determinant of this covariance matrix, |P|, is smaller than both |Q| and |R|.
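Formulas (41)-(44) can be evaluated directly, as in the Python sketch below; the 2-D covariance values in the example are placeholders chosen for illustration.

```python
import numpy as np

def variance_weighted_fusion(b, c, Q, R):
    """Fuse the RFID position b and the vision position c with the optimal
    weights K = R(Q+R)^-1 and I-K = Q(Q+R)^-1 (formulas (41)-(44))."""
    S_inv = np.linalg.inv(Q + R)
    K = R @ S_inv
    z = K @ b + (np.eye(len(b)) - K) @ c                 # formula (43)
    P = S_inv @ (Q @ R @ Q + R @ Q @ R) @ S_inv          # formula (44)
    return z, P

# Example with assumed measurement noise covariances (placeholder numbers).
Q = np.diag([0.05, 0.05])   # UHF RFID position noise covariance
R = np.diag([0.02, 0.02])   # computer vision position noise covariance
z, P = variance_weighted_fusion(np.array([1.2, 3.4]), np.array([1.1, 3.5]), Q, R)
```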
103b. Estimate the true value of the signal from the measurement data with a Kalman filtering algorithm, so as to further improve the precision of the positioning result obtained after the preliminary fusion and obtain the third positioning result for the human body target.
In a specific application, the process of the Kalman filtering algorithm may be as follows:
Define the state vector at time t_k as X_k. X_k is driven by the random noise sequence W_k-1, and the driving equation is:
X_k = Φ_k,k-1 X_k-1 + Γ_k-1 W_k-1   (46)
The measurement equation is:
Z_k = H_k X_k + V_k   (47)
where Φ_k,k-1 is the transfer matrix from time t_k-1 to time t_k, Γ_k-1 is the driving matrix, H_k is the measurement matrix and V_k is the random measurement noise.
The random noise sequences W_k and V_k are zero-mean, mutually uncorrelated white noise sequences with variance matrices Q_k and R_k respectively.
The estimate of the state vector X_k can then be obtained by the following recursion:
One-step state prediction: X̂_k/k-1 = Φ_k,k-1 · X̂_k-1
State estimate: X̂_k = X̂_k/k-1 + K_k (Z_k - H_k · X̂_k/k-1)
Filter gain: K_k = P_k/k-1 · H_k^T (H_k · P_k/k-1 · H_k^T + R_k)^(-1)
One-step prediction variance: P_k/k-1 = Φ_k,k-1 · P_k-1 · Φ_k,k-1^T + Γ_k-1 · Q_k-1 · Γ_k-1^T
State estimate variance: P_k = (I - K_k · H_k) · P_k/k-1
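The recursion above is the standard Kalman filter; a compact sketch of one predict/update cycle is, for example:

```python
import numpy as np

def kalman_step(x_prev, P_prev, z_k, Phi, Gamma, H, Q, R):
    """One predict/update cycle of the Kalman filter used to refine the
    preliminarily fused position (formulas (46)-(47) and the recursion above)."""
    # One-step state prediction and its variance.
    x_pred = Phi @ x_prev
    P_pred = Phi @ P_prev @ Phi.T + Gamma @ Q @ Gamma.T
    # Filter gain.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # State estimate and its variance.
    x_k = x_pred + K @ (z_k - H @ x_pred)
    P_k = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_k, P_k
```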
In a specific application, the preset combined optimization algorithm in step 104 above may comprise an initialization process, a cyclic tracking process and a display process;
the initialization process is used for initializing a new human body target that has entered the scene;
the cyclic tracking process is used for continuously updating and tracking the position of the human body target;
the display process is used for drawing the result of the cyclic tracking into every frame of the image;
wherein the initialization process and the cyclic tracking process comprise a preprocessing module, a data comparison module and a precision enhancement module, wherein:
the preprocessing module estimates the region in which the human body target may currently be present, according to the human body target position obtained from the first positioning result or the previous-frame position of the human body target;
the data comparison module compares the human body target position obtained from the first positioning result and the human body target position obtained from the second positioning result with the previous human body target position respectively, selects the most probable human body target position and sends it to the precision enhancement module;
the precision enhancement module performs precision optimization on the third positioning result according to the most probable human body target position selected by the data comparison module, and obtains the final positioning result for the human body target.
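As a rough illustration of how the preprocessing, data comparison and precision enhancement modules could interact in one frame of the cyclic tracking process, consider the sketch below; the gating radius, the candidate selection rule and the averaging weight are assumptions made for this example and are not specified by the embodiment.

```python
import numpy as np

def track_frame(last_pos, rfid_pos, vision_pos, fused_pos, gate=1.0):
    """One iteration of the cyclic tracking process (simplified interpretation)."""
    # Preprocessing: predict the region the target may occupy from the RFID
    # position, or from the previous-frame position when RFID is unavailable.
    centre = rfid_pos if rfid_pos is not None else last_pos
    # Data comparison: keep candidates inside the predicted region and pick
    # the one closest to the last known position.
    candidates = [p for p in (rfid_pos, vision_pos)
                  if p is not None and np.linalg.norm(p - centre) <= gate]
    if not candidates:
        return fused_pos                      # fall back to the fused estimate
    best = min(candidates, key=lambda p: np.linalg.norm(p - last_pos))
    # Precision enhancement: pull the fused result towards the best candidate.
    return 0.5 * (fused_pos + best)
```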
In order to assess the effectiveness of the preset combined optimization algorithm, this embodiment was tested on a surveillance video recorded by a commercial closed-circuit camera (DS-2DC2202-DE3/W, HIKVISION). Limited by the complexity of the electromagnetic environment of the experimental site, which made it difficult to meet the requirements of UHF RFID positioning, the UHF RFID location data here uses simulation results. The video is about 6 seconds long, 130 frames in total. The results show that missed detections are completely eliminated, while the positioning precision for the target improves by about an order of magnitude: the root-mean-square error decreases from 0.0673 m for the UHF RFID system and 0.1226 m for the computer vision system to 0.0107 m. The output of the Kalman filter shows that its absolute error is further improved compared with the preliminary fusion result, with a root-mean-square value of 0.0071 m, an improvement of about 30%, and the position of the target is precisely marked.
Fig. 5a and Fig. 5b respectively show specific flow diagrams of the human body target recognition and positioning method provided by an embodiment of the present invention.
The human body target recognition and positioning method of this embodiment fuses, with a preset fusion algorithm, the first positioning result obtained by positioning the human body target with the UHF RFID system and the second positioning result obtained by positioning the human body target with the computer vision system, and performs positioning accuracy optimization on the fusion result with a preset combined optimization algorithm to obtain the final positioning result for the human body target; it can thus identify and locate the human body target with high efficiency and high accuracy.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of a complete hardware embodiment, a complete software embodiment or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical memory) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, and the instruction device realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements that are not explicitly listed, or further includes elements inherent to such a process, method, article or device. In the absence of more restrictions, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element. Terms such as "on" and "under" indicate orientations or positional relationships based on those shown in the drawings; they are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be understood as limiting the present invention. Unless otherwise clearly specified and limited, the terms "installation", "connected" and "connection" shall be understood in a broad sense; for example, they may refer to a fixed connection, a detachable connection or an integral connection; to a mechanical connection or an electrical connection; or to a direct connection, an indirect connection through an intermediate medium, or an internal connection between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the specification of the present invention, numerous specific details are set forth. However, it is understood that the embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification. Similarly, it should be understood that, in order to simplify the present disclosure and help understand one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the present invention are sometimes grouped together into a single embodiment, figure or description thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed invention requires more features than those explicitly recited in each claim. Rather, as the following claims reflect, an inventive aspect lies in less than all features of a single embodiment disclosed above. Therefore, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention. It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with each other. The present invention is not limited to any single aspect, nor to any single embodiment, nor to any combination and/or permutation of these aspects and/or embodiments. Moreover, each aspect and/or embodiment of the present invention can be used alone or in combination with one or more other aspects and/or embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some or all of the technical features; and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention, and shall all be covered by the scope of the claims and the description of the present invention.

Claims (8)

1. A human body target recognition and positioning method, characterized by comprising:
obtaining a first positioning result obtained by positioning a human body target with an ultra-high-frequency RFID system;
obtaining a second positioning result obtained by positioning the human body target with a computer vision system;
fusing the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target;
optimizing the positioning accuracy of the third positioning result with a preset combined optimization algorithm to obtain a final positioning result for the human body target;
wherein the preset combined optimization algorithm comprises an initialization process, a cyclic tracking process and a display process;
the initialization process is used for initializing a new human body target that has entered the scene;
the cyclic tracking process is used for continuously updating and tracking the position of the human body target;
the display process is used for drawing the result of the cyclic tracking into every frame of the image;
wherein the initialization process and the cyclic tracking process comprise a preprocessing module, a data comparison module and a precision enhancement module, wherein:
the preprocessing module estimates the region in which the human body target may currently be present, according to the human body target position obtained from the first positioning result or the previous-frame position of the human body target;
the data comparison module compares the human body target position obtained from the first positioning result and the human body target position obtained from the second positioning result with the previous human body target position respectively, selects the most probable human body target position and sends it to the precision enhancement module;
the precision enhancement module performs precision optimization on the third positioning result according to the most probable human body target position selected by the data comparison module, and obtains the final positioning result for the human body target.
2. The method according to claim 1, wherein obtaining the first positioning result obtained by positioning the human body target with the ultra-high-frequency RFID system comprises:
positioning the position of the tag column on the human body target in the ultra-high-frequency RFID system with an angle-of-arrival localization algorithm based on a passive tag column, and thereby obtaining the first positioning result for the human body target;
wherein the ultra-high-frequency RFID system comprises the tag column on the human body target and at least three UHF RFID reading devices arranged at intervals along the same straight line; any two adjacent UHF RFID reading devices are separated by a preset first distance, and the tag column comprises two electronic tags separated by a preset second distance.
3. The method according to claim 2, wherein the preset second distance is greater than zero and less than λ/4, where λ is the wavelength of the carrier wave transmitted by the UHF RFID reading devices.
4. The method according to claim 3, wherein positioning the position of the tag column on the human body target in the ultra-high-frequency RFID system with the angle-of-arrival localization algorithm based on the passive tag column, and thereby obtaining the first positioning result for the human body target, comprises:
obtaining the angle of arrival of the echo signal returned by the tag column, as received by the antenna of each UHF RFID reading device;
performing two-dimensional localization of the tag column position according to the geometric relationship between the angles of arrival, the position of the tag column and the positions of the antennas of the UHF RFID reading devices, and thereby obtaining the first positioning result for the human body target.
5. The method according to claim 1, wherein obtaining the second positioning result obtained by positioning the human body target with the computer vision system comprises:
obtaining an image captured by a camera placed at a preset position in the computer vision system when it photographs the human body target;
counting the variation of the pixel value of each pixel in the image with a Gaussian mixture model algorithm;
detecting the human body target in the image with a histogram of oriented gradients algorithm;
performing a coordinate system conversion on the human body target detected in the image to obtain the real-world coordinates of the human body target, and thereby obtaining the second positioning result for the actual position of the human body target.
6. The method according to claim 5, wherein performing the coordinate system conversion on the human body target detected in the image to obtain the real-world coordinates of the human body target, and thereby obtaining the second positioning result for the actual position of the human body target, comprises:
establishing a three-dimensional Cartesian coordinate system with the position of the camera as the origin, expressing the unit coordinate vectors of the world coordinate system with the three unit coordinate vectors of this Cartesian coordinate system, on this basis further deriving the transfer matrix that maps the pixel coordinates of the human body target to the physical plane coordinates of the human body target, calculating the real-world coordinates of the human body target according to the transfer matrix, and thereby obtaining the second positioning result for the actual position of the human body target.
7. The method according to claim 5, wherein detecting the human body target in the image with the histogram of oriented gradients algorithm comprises:
converting the image into a grayscale image;
normalizing the pixel values in the grayscale image with a Gamma correction algorithm;
calculating the gradient direction and magnitude of each pixel;
dividing the normalized grayscale image into square cells of the same size and counting the gradient direction and magnitude of each pixel in them, so as to obtain the feature vector of each square cell;
combining multiple adjacent square cells into a rectangular block and normalizing the feature vectors within the rectangular block to obtain the feature descriptor of the rectangular block;
combining the feature descriptors of all blocks to obtain the histogram-of-gradients feature vector of the image, and thereby detecting the human body target in the image.
8. The method according to claim 1, wherein fusing the first positioning result and the second positioning result with the preset fusion algorithm to obtain the third positioning result for the human body target comprises:
preliminarily fusing the first positioning result and the second positioning result with a variance-weighted averaging algorithm;
estimating the true value of the signal from the measurement data with a Kalman filtering algorithm, so as to further improve the precision of the positioning result obtained after the preliminary fusion and obtain the third positioning result for the human body target.
CN201610755695.3A 2016-08-29 2016-08-29 Human body target recognition positioning method Expired - Fee Related CN106372552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610755695.3A CN106372552B (en) 2016-08-29 2016-08-29 Human body target recognition positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610755695.3A CN106372552B (en) 2016-08-29 2016-08-29 Human body target recognition positioning method

Publications (2)

Publication Number Publication Date
CN106372552A CN106372552A (en) 2017-02-01
CN106372552B true CN106372552B (en) 2019-03-26

Family

ID=57901811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610755695.3A Expired - Fee Related CN106372552B (en) 2016-08-29 2016-08-29 Human body target recognition positioning method

Country Status (1)

Country Link
CN (1) CN106372552B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106911915B (en) * 2017-02-28 2020-05-29 潍坊恩源信息科技有限公司 Commodity information acquisition system based on augmented reality technology
CN107273799A (en) * 2017-05-11 2017-10-20 上海斐讯数据通信技术有限公司 A kind of indoor orientation method and alignment system
CN107608541B (en) * 2017-10-17 2021-03-05 宁波视睿迪光电有限公司 Three-dimensional attitude positioning method and device and electronic equipment
CN107782304B (en) * 2017-10-26 2021-03-09 广州视源电子科技股份有限公司 Mobile robot positioning method and device, mobile robot and storage medium
SG10201913005YA (en) * 2019-12-23 2020-09-29 Sensetime Int Pte Ltd Method, apparatus, and system for recognizing target object
CN111833397B (en) * 2020-06-08 2022-11-29 西安电子科技大学 Data conversion method and device for orientation-finding target positioning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101559600A (en) * 2009-05-07 2009-10-21 上海交通大学 Service robot grasp guidance system and method thereof
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN102848388A (en) * 2012-04-05 2013-01-02 上海大学 Service robot locating and grabbing method based on multiple sensors
CN104330771A (en) * 2014-10-31 2015-02-04 富世惠智科技(上海)有限公司 Indoor RFID precise positioning method and device
CN105180943A (en) * 2015-09-17 2015-12-23 南京中大东博信息科技有限公司 Ship positioning system and ship positioning method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011005513A1 (en) * 2011-03-14 2012-09-20 Kuka Laboratories Gmbh Robot and method for operating a robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101559600A (en) * 2009-05-07 2009-10-21 上海交通大学 Service robot grasp guidance system and method thereof
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN102848388A (en) * 2012-04-05 2013-01-02 上海大学 Service robot locating and grabbing method based on multiple sensors
CN104330771A (en) * 2014-10-31 2015-02-04 富世惠智科技(上海)有限公司 Indoor RFID precise positioning method and device
CN105180943A (en) * 2015-09-17 2015-12-23 南京中大东博信息科技有限公司 Ship positioning system and ship positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Moving Object Detection and Tracking Algorithms in Video Sequence Images; Song Jiasheng; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2014-12-15; full text

Also Published As

Publication number Publication date
CN106372552A (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN106372552B (en) Human body target recognition positioning method
US11189044B2 (en) Method and device for detecting object stacking state and intelligent shelf
Krajník et al. A practical multirobot localization system
CN109166077B (en) Image alignment method and device, readable storage medium and computer equipment
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
CN105405154B (en) Target object tracking based on color-structure feature
US10237493B2 (en) Camera configuration method and apparatus
Lloyd et al. Recognition of 3D package shapes for single camera metrology
Alt et al. Rapid selection of reliable templates for visual tracking
EP3566172A1 (en) Systems and methods for lane-marker detection
CN108322724B (en) Image solid matching method and binocular vision equipment
CN102087703A (en) Method for determining frontal face pose
CN108537214B (en) Automatic construction method of indoor semantic map
CN102063607A (en) Method and system for acquiring human face image
JP2015002547A (en) Image processing apparatus, program, and image processing method
CN109493384A (en) Camera position and orientation estimation method, system, equipment and storage medium
CN109961501A (en) Method and apparatus for establishing three-dimensional stereo model
US20100246905A1 (en) Person identifying apparatus, program therefor, and method thereof
Sadeghi et al. Ocrapose: An indoor positioning system using smartphone/tablet cameras and OCR-aided stereo feature matching
CN103591953B (en) A kind of personnel positioning method based on single camera
CN107816990B (en) Positioning method and positioning device
CN108388854A (en) A kind of localization method based on improvement FAST-SURF algorithms
Shah et al. Performance evaluation of 3d local surface descriptors for low and high resolution range image registration
Babic et al. Indoor RFID localization improved by motion segmentation
EP4209826A1 (en) High-confidence optical head pose correspondence mapping with multiple markers for high-integrity headtracking on a headworn display (hwd)

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190326

Termination date: 20200829