CN106503605A - Human body target recognition methods based on stereovision technique - Google Patents

Human body target recognition methods based on stereovision technique

Info

Publication number
CN106503605A
CN106503605A (application CN201510552079.3A)
Authority
CN
China
Prior art keywords
point
camera
coordinate system
window
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510552079.3A
Other languages
Chinese (zh)
Inventor
吕芳
任侃
潘佳惠
韶阿俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201510552079.3A priority Critical patent/CN106503605A/en
Publication of CN106503605A publication Critical patent/CN106503605A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a human body target recognition method based on stereo vision technology, comprising: capturing images of the same scene from two different angles simultaneously with two cameras to form a stereo image pair; determining the intrinsic and extrinsic camera parameters by calibration and establishing an imaging model; using a window-based matching algorithm, in which a window is created centered on the point to be matched in one image, an identical sliding window is created in the other image and moved pixel by pixel along the epipolar line, and a window match measure is computed to find the optimal match point, after which the three-dimensional geometric information of the target is obtained by the parallax principle and a depth image is generated; and applying one-dimensional maximum-entropy thresholding combined with gray-level features to distinguish head and shoulder information and recognize the human body target. The method has a small computational load and can recognize human targets quickly and accurately from simple images.

Description

Human body target recognition methods based on stereovision technique
Technical field
The present invention relates to human body target recognition, and in particular to a human body target recognition method based on stereo vision technology.
Background technology
With rapid improvements in computer storage and computing performance, computers are increasingly used for complex tasks such as scene reconstruction, target recognition, and human-computer interaction. This has broadened the scope of computer applications and research directions and has driven the rapid development of related disciplines. As an active research field, the essence of computer vision is to use a camera in place of the human eye and a computer in place of the brain: targets are recognized and tracked, the corresponding image analysis and processing are performed, and images suitable for instrument detection or human observation are generated. Video can transmit images continuously over a period of time, contains rich detail, and is intuitive, concrete, and immediate. Recognition of video targets has therefore become an important topic in image processing, pattern recognition, and human-computer interaction, and is widely used in intelligent systems in manufacturing, medical diagnosis, the military, and other fields.
Traditional automatic passenger counting (APC) systems mainly use pressure-sensitive or infrared-interruption sensing. With the rapid development of laser and infrared technology, a suitable pyroelectric infrared probe can detect the signal emitted by the human body and perform identification and counting: as a target walks past, the infrared sensor detects the change in the human infrared spectrum and, through signal processing, discriminates the target. Such systems are cheap and simple to operate, but their counting accuracy is poor and the places where they can be deployed are limited.
Image processing methods can also be used for human recognition. Most of them, however, apply recognition algorithms only to two-dimensional images, for example choosing certain body parts as features and matching them in the image. Common approaches include: methods based on human body models and structural elements, which require extracting image information for the whole person, handle deforming moving objects poorly, and place high real-time demands on image acquisition; and methods based on wavelet transforms and support vector machines, which rely on wavelet templates, must search the whole image at multiple scales, and are computationally expensive.
Content of the invention
It is an object of the present invention to provide a human body target recognition method based on stereo vision technology.
The technical solution that achieves this object is a human body target recognition method based on stereo vision technology, comprising the following steps:
Step 1, two cameras simultaneously capture images of the same scene from two different angles, forming a stereo image pair;
Step 2, the intrinsic and extrinsic parameters of the cameras are determined by camera calibration, and an imaging model is established;
Step 3, a window-based matching algorithm is used: a window is created centered on the point to be matched in one image, an identical sliding window is created in the other image and moved pixel by pixel along the epipolar line, the window match measure is computed to find the optimal match point, the three-dimensional geometric information of the target is obtained by the parallax principle, and a depth image is generated;
Step 4, one-dimensional maximum-entropy thresholding is applied and combined with gray-level features to distinguish head and shoulder information and recognize the human body target.
Compared with the prior art, the present invention has notable advantages:
(1) the stereo-vision-based human body target recognition method has a small computational load and can recognize human targets quickly and accurately from simple images;
(2) in crowded scenes it can use the depth information of the image for recognition, effectively excluding interference and discriminating moving targets.
Description of the drawings
Fig. 1 is a flow chart of the human body target recognition method based on stereo vision technology of the present invention.
Fig. 2 is an original depth map in an embodiment of the present invention.
Fig. 3 is the human body target recognition result obtained in an embodiment of the present invention after processing with the method of the invention.
Specific embodiment
With reference to Fig. 1, the human body target recognition method based on stereo vision technology of the present invention comprises the following steps:
Step 1, acquisition of the stereo image pair:
Two MTV-1881EX-3 cameras are placed in parallel and simultaneously capture images of the same scene from two different angles, forming a stereo image pair;
Step 2, the intrinsic and extrinsic parameters of the cameras are determined by camera calibration and the imaging model is established, specifically:
Step 2-1, the camera coordinates are calibrated; the calibration target is a checkerboard, and the calibration principle is as follows:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[\,r_1\ \ r_2\ \ r_3\ \ t\,]\begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = K\,[\,r_1\ \ r_2\ \ t\,]\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$

Assume the world coordinate plane $z = 0$ is the template plane, $[r_1\ r_2\ r_3]$ is the rotation matrix of the camera coordinate system relative to the world coordinate system, $t$ is the translation vector of the camera coordinate system relative to the world coordinate system, $[X\ Y\ 1]^T$ are the homogeneous coordinates of a point on the template, $[u\ v\ 1]^T$ are the homogeneous coordinates of the projection of that template point onto the image plane, and $K$ is the camera intrinsic matrix;
Step 2-2, let the camera coordinate system $O x_c y_c z_c$ be the rectangular coordinate system fixed to the camera, with origin $O$ at the optical center of the camera; the $x_c$ and $y_c$ axes are parallel to the $x$ and $y$ axes of the image physical coordinate system, and the $z_c$ axis coincides with the optical axis, i.e. $z_c$ is perpendicular to the imaging plane of the camera; the distance $OO_1$ from the optical center to the image plane is the effective focal length $f$ of the camera;
Step 2-3, let $(x_w, y_w, z_w)$ be the three-dimensional coordinates of a point $P$ in the world coordinate system and $(x_c, y_c, z_c)$ the coordinates of the same point $P$ in the camera coordinate system; the transformation from the world coordinate system to the camera coordinate system is expressed by an orthogonal rotation matrix $R$ and a translation vector $T$:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T$$

where $R$ is a $3 \times 3$ rotation matrix and the translation vector is $T = [\,t_x\ \ t_y\ \ t_z\,]^T$;
The orthogonal matrix $R$ is composed of the direction cosines of the optical axis relative to the world coordinate axes and contains three independent angular variables (Euler angles): rotation by $\psi$ about the $x$ axis (yaw), by $\theta$ about the $y$ axis (pitch), and by $\varphi$ about the $z$ axis (roll); together with the three components of $T$ these six parameters are called the camera extrinsic parameters;
Step 2-4, the rigid transformation between the world and camera coordinate systems is written compactly in homogeneous coordinates and matrix form:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_2\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

Thus the relation between the world and camera coordinate systems can be represented by a single matrix $M_2$; as long as $M_2$ is known, coordinates can be converted between the two systems;
The transformation from the camera coordinate system to the ideal image physical coordinate system is the ideal perspective projection under the pinhole model:

$$x = f\,x_c/z_c, \qquad y = f\,y_c/z_c$$

where $x$ and $y$ are the abscissa and ordinate in the ideal image physical coordinate system;
Expressed in homogeneous coordinates and matrix form, the above becomes:

$$z_c\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$

The transformation from the ideal image coordinate system to the image pixel coordinate system, in homogeneous coordinates, is:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $(u_0, v_0)$ are the pixel coordinates of the principal point, i.e. the intersection of the optical axis with the image plane;
Its inverse is:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} d_y/s_x & 0 & -u_0\,d_y/s_x \\ 0 & d_y & -v_0\,d_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

Substituting the above relations gives the relation between the world coordinates of point $P$ and the coordinates $(u, v)$ of its projection $P'$:

$$z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & 0 & u_0 & 0 \\ 0 & \beta & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2\,\tilde{x}_w = M\,\tilde{x}_w$$

where $\alpha = f/d_x = f\,s_x/d_y$ and $\beta = f/d_y$; $M_1$ is the intrinsic parameter matrix, $M_2$ the extrinsic parameter matrix, and $M$ the $3 \times 4$ projection matrix, which characterizes the fundamental relation between two-dimensional image coordinates and three-dimensional world coordinates. Given the world coordinates of an object point, the corresponding ideal image coordinates can be obtained from $M$; conversely, if $M$ and the image coordinates of an image point are known, one obtains the spatial ray through the camera optical center on which the point lies;
Obtaining this fundamental relation between two-dimensional image coordinates and three-dimensional world coordinates completes the calibration of the cameras; the imaging model then converts the image coordinates acquired by the cameras into coordinates in the three-dimensional world coordinate system, i.e. it determines the imaging model relating the images obtained by the cameras to the three-dimensional world coordinate system.
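The calibration chain above (world coordinates to camera coordinates to ideal image coordinates to pixel coordinates, collapsed into the projection matrix $M = M_1 M_2$) can be sketched with NumPy. The numeric values of f, dx, dy, u0, v0 and the identity extrinsics below are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

# Illustrative intrinsics: focal length and pixel pitch in metres,
# principal point in pixels (not values from the patent).
f, dx, dy = 8e-3, 1e-5, 1e-5
u0, v0 = 320.0, 240.0
alpha, beta = f / dx, f / dy          # alpha = f/dx, beta = f/dy

M1 = np.array([[alpha, 0.0,  u0, 0.0],
               [0.0,   beta, v0, 0.0],
               [0.0,   0.0,  1.0, 0.0]])            # intrinsic matrix M1

R = np.eye(3)                                       # extrinsic rotation (identity for the sketch)
T = np.zeros((3, 1))                                # extrinsic translation
M2 = np.vstack([np.hstack([R, T]), [0, 0, 0, 1.0]]) # extrinsic matrix M2

M = M1 @ M2                                         # 3x4 projection matrix

def project(xw, yw, zw):
    """Project a world point to pixel coordinates via z_c [u v 1]^T = M x_w~."""
    p = M @ np.array([xw, yw, zw, 1.0])
    return p[0] / p[2], p[1] / p[2]

u, v = project(0.1, 0.05, 2.0)   # a point 2 m in front of the camera
```

With R = I and T = 0 the sketch reduces to the pinhole formulas x = f·x_c/z_c, y = f·y_c/z_c followed by the pixel-coordinate shift by (u0, v0).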
Step 3, a window-based matching algorithm is used: a window is created centered on the point to be matched in one image, an identical sliding window is created in the other image and moved pixel by pixel along the epipolar line, the window match measure is computed to find the optimal match point, the three-dimensional geometric information of the target is obtained by the parallax principle, and a depth image is generated; this specifically comprises the following steps:
Step 3-1, taking the right image as reference, the background is subtracted to obtain a foreground image;
Step 3-2, determination of the disparity:
First, in the foreground image, with the right image as reference, the gray difference between each pixel and the corresponding point in the left image is computed at a given disparity;
Second, at each disparity, a narrow strip window perpendicular to the baseline direction is used, and the window-based matching algorithm computes the accumulated gray difference of the window centered on each pixel:

$$\sum_{\gamma=-m/2}^{m/2}\;\sum_{\delta=-n/2}^{n/2}\left|\,I_{right}[x_e+\gamma,\ y_e+\delta] - I_{left}[x_e+\gamma+d,\ y_e+\delta]\,\right|$$

where $m*n$ is the size of the template window, $\gamma$ the unit length index and $\delta$ the unit width index of the template window, $I_{right}[x_e+\gamma,\ y_e+\delta]$ the gray value of the right view at coordinate $[x_e+\gamma,\ y_e+\delta]$, $I_{left}[x_e+\gamma+d,\ y_e+\delta]$ the gray value of the left view at coordinate $[x_e+\gamma+d,\ y_e+\delta]$, and $d$ the disparity;
Third, $d$ runs from the minimum disparity to the maximum disparity within the set disparity range; the values of the expression are compared in turn, the point giving the minimum value is the optimal match point, and the corresponding disparity is taken as the disparity value of that pixel;
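A minimal sketch of this window-matching step, under simplifying assumptions: grayscale images as NumPy arrays, a rectangular m×n window in place of the narrow strip window, and the right image as reference as in step 3-1 (function and variable names are illustrative):

```python
import numpy as np

def disparity_sad(right, left, x, y, m, n, d_min, d_max):
    """Slide a window along the epipolar line (same image row) and return the
    disparity d that minimises the accumulated absolute gray difference
    between the right-image window at x and the left-image window at x+d."""
    best_d, best_cost = d_min, float("inf")
    for d in range(d_min, d_max + 1):
        win_r = right[y - n//2 : y + n//2 + 1, x - m//2 : x + m//2 + 1]
        win_l = left[y - n//2 : y + n//2 + 1, x + d - m//2 : x + d + m//2 + 1]
        cost = np.abs(win_r.astype(int) - win_l.astype(int)).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Border handling and the strip-shaped window of the patent are omitted for brevity; a full implementation would also aggregate costs over the whole foreground image.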
Step 3-3, determination of the target's depth information:
Binocular ranging exploits the fact that the difference between the lateral image coordinates of a target point in the left and right views (i.e. the parallax) is inversely proportional to the distance $Z$ from the target point to the imaging plane. With the camera focal length known, the depth of any point, i.e. its $Z$ coordinate in the camera coordinate system, can be computed. Let $b$ be the distance between the optical centers of the two cameras, $H$ the vertical distance from target $Q$ to the cameras, $f$ the common focal length, $Q_1$ and $Q_2$ the image points of target $Q$ in the two cameras, and $d$ the parallax. Assuming the optical axes of the two cameras are parallel, similar triangles give:

$$H = (b \times f)/d$$

The vertical distance $H$ from target $Q$ to the cameras is the depth information of the target;
Thus stereo vision, using two or more cameras offset in position, obtains the depth of the scene by triangulation, provided every scene point has an image point in both the left and right images. The positions of the image points in the left and right views differ; this difference is the parallax. Points at different distances from the cameras have different parallax, and the parallax decreases as the distance from the cameras increases; binocular stereo vision uses this parallax and triangulation to determine the distance from an object to the cameras.
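The triangulation formula H = (b × f)/d maps directly to code; the baseline, focal length, and disparity values in the example are illustrative, not from the patent:

```python
def depth_from_disparity(b, f, d):
    """H = (b * f) / d: depth from baseline b, focal length f and disparity d.
    With b in metres and f, d both in pixels, H comes out in metres."""
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return (b * f) / d

# Example: baseline 0.12 m, focal length 800 px, disparity 40 px
H = depth_from_disparity(0.12, 800, 40)
```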
Step 4, one-dimensional maximum-entropy thresholding is applied and combined with gray-level features to distinguish head and shoulder information and recognize the human body target, specifically:
Step 4-1, the depth image is divided into sub-boxes of $L*L$ pixels, $L$ a positive integer. Taking a nine-square grid as the unit and moving from left to right and top to bottom, one comparison is made for each move of one $L*L$ sub-box: if the average gray of the middle cell is higher than the average gray of the eight surrounding neighbor cells, the middle cell is taken to be a head target region;
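Step 4-1 can be sketched as follows; the border policy (skipping the outermost row and column of cells) and the strict-inequality tie handling are assumptions where the patent is silent:

```python
import numpy as np

def head_candidates(depth, L):
    """Nine-grid scan over an L*L cell partition of the depth image: a cell is
    a head candidate when its mean gray exceeds the means of all eight
    neighbouring cells."""
    h, w = depth.shape
    rows, cols = h // L, w // L
    # mean gray of every L*L cell, cell (r, c) covering depth[rL:(r+1)L, cL:(c+1)L]
    means = depth[:rows*L, :cols*L].reshape(rows, L, cols, L).mean(axis=(1, 3))
    hits = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            nb = means[r-1:r+2, c-1:c+2].copy()
            centre = nb[1, 1]
            nb[1, 1] = -np.inf          # exclude the centre from the comparison
            if centre > nb.max():
                hits.append((r, c))
    return hits
```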
Step 4-2, a threshold is set for the head target region and it is binarized to segment the head target; specifically:
The head and non-head regions are segmented by one-dimensional maximum-entropy thresholding. Let $p_i$ be the proportion of pixels with gray value $i$ in the image ($0 \le i \le 255$). With gray level $t$ as the threshold for segmenting the head-shoulder region, pixels with gray level above $t$ form the head region and pixels with gray level below $t$ form the non-head region; the entropies of the two regions are then defined as:

$$H_B = -\sum_i \frac{p_i}{p_t}\,\lg\frac{p_i}{p_t}, \qquad H_O = -\sum_i \frac{p_i}{1-p_t}\,\lg\frac{p_i}{1-p_t}$$

where $p_t = \sum_{i=0}^{t} p_i$, $H_t = -\sum_{i \le t} p_i\,\lg p_i$ and $H_E = -\sum_i p_i\,\lg p_i$; when the entropy sum attains its maximum, the gray level $t$ is used as the threshold for segmenting the image:

$$t = \arg\max\{H_B + H_O\}$$
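A sketch of the one-dimensional maximum-entropy threshold search described above (a Kapur-style criterion); the log base is an assumption, since any base gives the same argmax:

```python
import numpy as np

def max_entropy_threshold(gray):
    """Return the gray level t maximising the entropy sum H_B + H_O of the
    below-threshold and above-threshold classes of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(256):
        p_t = p[: t + 1].sum()
        if p_t <= 0 or p_t >= 1:       # one class empty: skip
            continue
        lo = p[: t + 1][p[: t + 1] > 0] / p_t          # class at or below t
        hi = p[t + 1 :][p[t + 1 :] > 0] / (1 - p_t)    # class above t
        val = -(lo * np.log2(lo)).sum() - (hi * np.log2(hi)).sum()
        if val > best_val:
            best_val, best_t = val, t
    return best_t
```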
Step 4-3, the average gray and gray variance of the segmented head region are computed:

$$\bar{g} = \frac{1}{M \cdot N}\sum_{\varepsilon=0}^{M-1}\sum_{\eta=0}^{N-1} f(\varepsilon, \eta), \qquad \mathrm{var} = \frac{1}{M \cdot N}\sum_{\varepsilon=0}^{M-1}\sum_{\eta=0}^{N-1}\bigl(f(\varepsilon, \eta) - \bar{g}\bigr)^2$$

where $M$ and $N$ are the numbers of rows and columns of each region, $\varepsilon$ and $\eta$ the row and column indices, and $f(\varepsilon, \eta)$ the gray value at point $(\varepsilon, \eta)$; when the gray variance exceeds the set threshold, the pixel is filtered out;
Step 4-4, according to whether the ratio of the total pixel width of the human head to the field height, at different field heights, falls within the set range, pseudo-targets with long narrow profiles are filtered out.
The main geometric features of the head are its near-elliptical shape, area, and width-to-height ratio. Continuous simulation tests give the range of the head's total pixel width at a given field height, and simulation also gives the range of $w/h$ as [0.65, 1.5]; discriminating with this threshold effectively filters out long narrow pseudo-targets.
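The width-to-height test of step 4-4 reduces to a one-line predicate; the bounds default to the simulated range [0.65, 1.5] quoted above, and the function name is illustrative:

```python
def is_head_shape(w, h, lo=0.65, hi=1.5):
    """Keep a head candidate only if its width/height ratio w/h lies in the
    simulated range [lo, hi]; long narrow pseudo-targets fail this test."""
    return h > 0 and lo <= w / h <= hi
```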
The invention is further described below with reference to a specific embodiment.
Embodiment
With reference to the original depth image shown in Fig. 2: under visible light, when a person's hair is dark and the clothing is dark, or the hair color is light, the moving human target is easily confused with the background; such cases are processed with the stereo-vision-based algorithm.
Fig. 3 shows the segmented head-shoulder image of the human target after processing with the method of the invention. It can clearly be seen that, for moving targets under visible light, even when the hair is dark and the clothing is dark, or the hair color is light, so that the target is easily confused with the background, the depth image output by the stereo vision technique gives high recognition accuracy, is hardly affected by illumination and background, and clearly separates the target from the background.

Claims (4)

1. A human body target recognition method based on stereo vision technology, characterized by comprising the following steps:
Step 1, two cameras simultaneously capture images of the same scene from two different angles, forming a stereo image pair;
Step 2, the intrinsic and extrinsic parameters of the cameras are determined by camera calibration, and an imaging model is established;
Step 3, a window-based matching algorithm is used: a window is created centered on the point to be matched in one image, an identical sliding window is created in the other image and moved pixel by pixel along the epipolar line, the window match measure is computed to find the optimal match point, the three-dimensional geometric information of the target is obtained by the parallax principle, and a depth image is generated;
Step 4, one-dimensional maximum-entropy thresholding is applied and combined with gray-level features to distinguish head and shoulder information and recognize the human body target.
2. The human body target recognition method based on stereo vision technology according to claim 1, characterized in that step 2 is specifically:
Step 2-1, the camera coordinates are calibrated; the calibration target is a checkerboard, and the calibration principle is:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[\,r_1\ \ r_2\ \ r_3\ \ t\,]\begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = K\,[\,r_1\ \ r_2\ \ t\,]\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$

Assume the world coordinate plane $z = 0$ is the template plane, $[r_1\ r_2\ r_3]$ is the rotation matrix of the camera coordinate system relative to the world coordinate system, $t$ is the translation vector of the camera coordinate system relative to the world coordinate system, $[X\ Y\ 1]^T$ are the homogeneous coordinates of a point on the template, $[u\ v\ 1]^T$ are the homogeneous coordinates of the projection of that template point onto the image plane, and $K$ is the camera intrinsic matrix;
Step 2-2, let the camera coordinate system $O x_c y_c z_c$ be the rectangular coordinate system fixed to the camera, with origin $O$ at the optical center of the camera; the $x_c$ and $y_c$ axes are parallel to the $x$ and $y$ axes of the image physical coordinate system, and the $z_c$ axis coincides with the optical axis, i.e. $z_c$ is perpendicular to the imaging plane of the camera; the distance $OO_1$ from the optical center to the image plane is the effective focal length $f$ of the camera;
Step 2-3, let $(x_w, y_w, z_w)$ be the three-dimensional coordinates of a point $P$ in the world coordinate system and $(x_c, y_c, z_c)$ the coordinates of the same point $P$ in the camera coordinate system; the transformation from the world coordinate system to the camera coordinate system is expressed by an orthogonal rotation matrix $R$ and a translation vector $T$:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T$$

where $R$ is a $3 \times 3$ rotation matrix and the translation vector is $T = [\,t_x\ \ t_y\ \ t_z\,]^T$;
The orthogonal matrix $R$ is composed of the direction cosines of the optical axis relative to the world coordinate axes and contains three independent angular variables: rotation by $\psi$ about the $x$ axis, by $\theta$ about the $y$ axis, and by $\varphi$ about the $z$ axis; together with the three components of $T$ these are called the camera extrinsic parameters;
Step 2-4, the rigid transformation between the world and camera coordinate systems is written compactly in homogeneous coordinates and matrix form:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_2\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

$$\tilde{x}_c = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}\tilde{x}_w \;\Rightarrow\; \tilde{x}_c = M_2\,\tilde{x}_w \;\Rightarrow\; \tilde{x}_w = M_2^{-1}\,\tilde{x}_c$$
The transformation from the camera coordinate system to the ideal image physical coordinate system is the ideal perspective projection under the pinhole model:

$$x = f\,x_c/z_c, \qquad y = f\,y_c/z_c$$

where $x$ and $y$ are the abscissa and ordinate in the ideal image physical coordinate system;
Expressed in homogeneous coordinates and matrix form, the above becomes:

$$z_c\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$

The transformation from the ideal image coordinate system to the image pixel coordinate system, in homogeneous coordinates, is:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} s_x/d_y & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Its inverse is:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} d_y/s_x & 0 & -u_0\,d_y/s_x \\ 0 & d_y & -v_0\,d_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
Substituting the above relations gives the relation between the world coordinates of point $P$ and the coordinates $(u, v)$ of its projection $P'$:

$$z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} s_x/d_y & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & 0 & u_0 & 0 \\ 0 & \beta & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M_1 M_2\,\tilde{x}_w = M\,\tilde{x}_w$$

where $\alpha = f/d_x = f\,s_x/d_y$ and $\beta = f/d_y$; $M_1$ is the intrinsic parameter matrix, $M_2$ the extrinsic parameter matrix, and $M$ the $3 \times 4$ projection matrix, which characterizes the fundamental relation between two-dimensional image coordinates and three-dimensional world coordinates.
3. The human body target recognition method based on stereo vision technology according to claim 1, characterized in that step 3 is specifically:
Step 3-1, taking the right image as reference, the background is subtracted to obtain a foreground image;
Step 3-2, determination of the disparity:
First, in the foreground image, with the right image as reference, the gray difference between each pixel and the corresponding point in the left image is computed at a given disparity;
Second, at each disparity, a narrow strip window perpendicular to the baseline direction is used, and the window-based matching algorithm computes the accumulated gray difference of the window centered on each pixel:
$$\sum_{\gamma=-m/2}^{m/2}\;\sum_{\delta=-n/2}^{n/2}\left|\,I_{right}[x_e+\gamma,\ y_e+\delta] - I_{left}[x_e+\gamma+d,\ y_e+\delta]\,\right|$$

where $m*n$ is the size of the template window, $\gamma$ the unit length index and $\delta$ the unit width index of the template window, $I_{right}[x_e+\gamma,\ y_e+\delta]$ the gray value of the right view at coordinate $[x_e+\gamma,\ y_e+\delta]$, $I_{left}[x_e+\gamma+d,\ y_e+\delta]$ the gray value of the left view at coordinate $[x_e+\gamma+d,\ y_e+\delta]$, and $d$ the disparity;
Third, $d$ runs from the minimum disparity to the maximum disparity within the set disparity range; the values of the expression are compared in turn, the point giving the minimum value is the optimal match point, and the corresponding disparity is taken as the disparity value of that pixel;
Step 3-3, determination of the target's depth information:
With the camera focal length known, the depth of any point, i.e. its $Z$ coordinate in the camera coordinate system, can be computed. Let $b$ be the distance between the optical centers of the two cameras, $H$ the vertical distance from target $Q$ to the cameras, $f$ the common focal length, $Q_1$ and $Q_2$ the image points of target $Q$ in the two cameras, and $d$ the parallax. Assuming the optical axes of the two cameras are parallel, similar triangles give:

$$H = (b \times f)/d$$

The vertical distance $H$ from target $Q$ to the cameras is the depth information of the target.
4. The human body target recognition method based on stereo vision technology according to claim 1, characterized in that step 4 is specifically:
Step 4-1, the depth image is divided into sub-boxes of $L*L$ pixels, $L$ a positive integer. Taking a nine-square grid as the unit and moving from left to right and top to bottom, one comparison is made for each move of one $L*L$ sub-box: if the average gray of the middle cell is higher than the average gray of the eight surrounding neighbor cells, the middle cell is taken to be a head target region;
Step 4-2, a threshold is set for the head target region and it is binarized to segment the head target; specifically:
The head and non-head regions are segmented by one-dimensional maximum-entropy thresholding. Let $p_i$ be the proportion of pixels with gray value $i$ in the image. With gray level $t$ as the threshold for segmenting the head-shoulder region, pixels with gray level above $t$ form the head region and pixels with gray level below $t$ form the non-head region; the entropies of the two regions are then defined as:
$$H_B = -\sum_i \frac{p_i}{p_t}\,\lg\frac{p_i}{p_t}$$

$$H_O = -\sum_i \frac{p_i}{1-p_t}\,\lg\frac{p_i}{1-p_t}$$

where $p_t = \sum_{i=0}^{t} p_i$, $i$ denotes the pixel gray value ($0 \le i \le 255$), $H_t = -\sum_{i \le t} p_i\,\lg p_i$ and $H_E = -\sum_i p_i\,\lg p_i$; when the entropy sum attains its maximum, the gray level $t$ is used as the threshold for segmenting the image:

$$t = \arg\max\{H_B + H_O\}$$
Step 4-3, the average gray and gray variance of the segmented head region are computed:

$$\bar{g} = \frac{1}{M \cdot N}\sum_{\varepsilon=0}^{M-1}\sum_{\eta=0}^{N-1} f(\varepsilon, \eta)$$

$$\mathrm{var} = \frac{1}{M \cdot N}\sum_{\varepsilon=0}^{M-1}\sum_{\eta=0}^{N-1}\bigl(f(\varepsilon, \eta) - \bar{g}\bigr)^2$$

where $M$ and $N$ are the numbers of rows and columns of each region, $\varepsilon$ and $\eta$ the row and column indices, and $f(\varepsilon, \eta)$ the gray value at point $(\varepsilon, \eta)$; when the gray variance exceeds the set threshold, the pixel is filtered out;
Step 4-4, according to whether the ratio of the total pixel width of the human head to the field height, at different field heights, falls within the set range, pseudo-targets with long narrow profiles are filtered out, and the human body target is obtained.
CN201510552079.3A 2015-09-01 2015-09-01 Human body target recognition methods based on stereovision technique Pending CN106503605A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510552079.3A CN106503605A (en) 2015-09-01 2015-09-01 Human body target recognition methods based on stereovision technique

Publications (1)

Publication Number Publication Date
CN106503605A true CN106503605A (en) 2017-03-15

Family

ID=58286268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510552079.3A Pending CN106503605A (en) 2015-09-01 2015-09-01 Human body target recognition methods based on stereovision technique

Country Status (1)

Country Link
CN (1) CN106503605A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107360416A (en) * 2017-07-12 2017-11-17 天津大学 Stereo image quality evaluation method based on local multivariate Gaussian description
CN107481288A (en) * 2017-03-31 2017-12-15 触景无限科技(北京)有限公司 The inside and outside ginseng of binocular camera determines method and apparatus
CN107478227A (en) * 2017-07-11 2017-12-15 厦门博尔利信息技术有限公司 The location algorithm of interactive large space
CN107514745A (en) * 2017-08-03 2017-12-26 上海斐讯数据通信技术有限公司 A kind of method and system of intelligent air condition stereoscopic vision positioning
CN108920996A (en) * 2018-04-10 2018-11-30 泰州职业技术学院 A kind of small target detecting method based on robot vision
CN108960096A (en) * 2018-06-22 2018-12-07 杭州晶智能科技有限公司 Human body recognition method based on stereoscopic vision and infrared imaging
CN109274871A (en) * 2018-09-27 2019-01-25 维沃移动通信有限公司 A kind of image imaging method and device of mobile terminal
CN109961455A (en) * 2017-12-22 2019-07-02 杭州萤石软件有限公司 A kind of object detection method and device
CN110374045A (en) * 2019-07-29 2019-10-25 哈尔滨工业大学 A kind of intelligence de-icing method
CN110544302A (en) * 2019-09-06 2019-12-06 广东工业大学 Human body action reconstruction system and method based on multi-view vision and action training system
CN111091086A (en) * 2019-12-11 2020-05-01 安徽理工大学 Method for improving single-feature information recognition rate of logistics surface by using machine vision technology
CN111382773A (en) * 2018-12-31 2020-07-07 南京拓步智能科技有限公司 Image matching method based on nine-grid principle for monitoring inside of pipeline
CN112330726A (en) * 2020-10-27 2021-02-05 天津天瞳威势电子科技有限公司 Image processing method and device
CN113786229A (en) * 2021-09-15 2021-12-14 苏州朗润医疗系统有限公司 AR augmented reality-based auxiliary puncture navigation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130236058A1 (en) * 2007-07-03 2013-09-12 Shoppertrak Rct Corporation System And Process For Detecting, Tracking And Counting Human Objects Of Interest
CN102222265A (en) * 2010-04-13 2011-10-19 上海申腾三盛信息技术工程有限公司 Binocular vision and laterally mounted video camera-based passenger flow counting method
CN103455792A (en) * 2013-08-20 2013-12-18 深圳市飞瑞斯科技有限公司 Guest flow statistics method and system
CN104504688A (en) * 2014-12-10 2015-04-08 上海大学 Method and system based on binocular stereoscopic vision for passenger flow density estimation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Liu Xin: "Bus Passenger Flow Counting Method and Implementation Based on Stereo Vision", China Master's Theses Full-text Database, Engineering Science and Technology II *
Sun Zhongxu: "Research on Distance and Speed Measurement of Moving Objects Based on Binocular Vision", China Master's Theses Full-text Database, Information Science and Technology *
Yin Zhangqin: "Design of a Three-Dimensional Automatic Passenger Flow Counting System", China Master's Theses Full-text Database, Information Science and Technology *
Yang Fan: "Digital Image Processing and Analysis", 31 May 2015, Beihang University Press *
Cyganek, Siebert: "An Introduction to 3D Computer Vision Techniques and Algorithms", 1 October 2014, National Defense Industry Press *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481288A (en) * 2017-03-31 2017-12-15 触景无限科技(北京)有限公司 Method and apparatus for determining the intrinsic and extrinsic parameters of a binocular camera
CN107478227A (en) * 2017-07-11 2017-12-15 厦门博尔利信息技术有限公司 Positioning algorithm for large interactive spaces
CN107360416A (en) * 2017-07-12 2017-11-17 天津大学 Stereo image quality evaluation method based on local multivariate Gaussian description
CN107514745A (en) * 2017-08-03 2017-12-26 上海斐讯数据通信技术有限公司 Method and system for stereoscopic vision positioning of an intelligent air conditioner
CN109961455B (en) * 2017-12-22 2022-03-04 杭州萤石软件有限公司 Target detection method and device
CN109961455A (en) * 2017-12-22 2019-07-02 杭州萤石软件有限公司 Target detection method and device
US11367276B2 (en) 2017-12-22 2022-06-21 Hangzhou Ezviz Software Co., Ltd. Target detection method and apparatus
CN108920996A (en) * 2018-04-10 2018-11-30 泰州职业技术学院 Small target detection method based on robot vision
CN108960096A (en) * 2018-06-22 2018-12-07 杭州晶智能科技有限公司 Human body recognition method based on stereoscopic vision and infrared imaging
CN108960096B (en) * 2018-06-22 2021-08-17 深圳市恒天伟焱科技股份有限公司 Human body identification method based on stereoscopic vision and infrared imaging
CN109274871A (en) * 2018-09-27 2019-01-25 维沃移动通信有限公司 Image imaging method and device for a mobile terminal
CN111382773A (en) * 2018-12-31 2020-07-07 南京拓步智能科技有限公司 Image matching method based on nine-grid principle for monitoring inside of pipeline
CN110374045A (en) * 2019-07-29 2019-10-25 哈尔滨工业大学 Intelligent deicing method
CN110374045B (en) * 2019-07-29 2021-09-28 哈尔滨工业大学 Intelligent deicing method
CN110544302A (en) * 2019-09-06 2019-12-06 广东工业大学 Human body action reconstruction system and method based on multi-view vision and action training system
CN111091086A (en) * 2019-12-11 2020-05-01 安徽理工大学 Method for improving single-feature information recognition rate of logistics surface by using machine vision technology
CN111091086B (en) * 2019-12-11 2023-04-25 安徽理工大学 Method for improving identification rate of single characteristic information of logistics surface by utilizing machine vision technology
CN112330726A (en) * 2020-10-27 2021-02-05 天津天瞳威势电子科技有限公司 Image processing method and device
CN112330726B (en) * 2020-10-27 2022-09-09 天津天瞳威势电子科技有限公司 Image processing method and device
CN113786229A (en) * 2021-09-15 2021-12-14 苏州朗润医疗系统有限公司 AR augmented reality-based auxiliary puncture navigation method
CN113786229B (en) * 2021-09-15 2024-04-12 苏州朗润医疗系统有限公司 Auxiliary puncture navigation system based on AR augmented reality

Similar Documents

Publication Publication Date Title
CN106503605A (en) Human body target recognition method based on stereo vision technique
CN106485735A (en) Human body target recognition and tracking method based on stereo vision technique
CN102697508B (en) Method for performing gait recognition by adopting three-dimensional reconstruction of monocular vision
CN101398886B (en) Rapid three-dimensional face recognition method based on binocular passive stereo vision
CN104835175B (en) Object detection method in a nuclear environment based on a visual attention mechanism
CN101443817B (en) Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene
US20130136302A1 (en) Apparatus and method for calculating three dimensional (3d) positions of feature points
CN105913013A (en) Binocular vision face recognition algorithm
CN106361345A (en) System and method for measuring height of human body in video image based on camera calibration
CN108731587A (en) Vision-based dynamic target tracking and localization method for an unmanned aerial vehicle
CN105096307A (en) Method for detecting objects in paired stereo images
CA2812117A1 (en) A method for enhancing depth maps
CN104036488A (en) Binocular vision-based human body posture and action research method
CN102609724A (en) Method for prompting ambient environment information by using two cameras
CN103020988A (en) Method for generating motion vector of laser speckle image
Tian et al. Human Detection using HOG Features of Head and Shoulder Based on Depth Map.
Yang et al. IR stereo RealSense: Decreasing minimum range of navigational assistance for visually impaired individuals
CN108230351A (en) Sales counter evaluation method and system based on binocular stereo vision pedestrian detection
CN105354828B (en) Intelligent recognition of the three-dimensional coordinates of reading material in a reading and writing scene, and application thereof
CN105138979A (en) Method for detecting the head of a moving human body based on stereo vision
CN104063689A (en) Face image identification method based on binocular stereoscopic vision
Wang et al. LBP-based edge detection method for depth images with low resolutions
Ye et al. 3D Human behavior recognition based on binocular vision and face–hand feature
CN100375977C (en) Three-dimensional portrait imaging device and distinguishing method for three-dimensional human face
Meers et al. Face recognition using a time-of-flight camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20170315)