CN110020627A - A kind of pedestrian detection method based on depth map and Fusion Features - Google Patents
A kind of pedestrian detection method based on depth map and Fusion Features
- Publication number
- CN110020627A (application CN201910282728.0A)
- Authority
- CN
- China
- Prior art keywords
- feature
- clbc
- gaussian profile
- image
- depth map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
A pedestrian detection method based on depth maps and feature fusion, comprising the following steps: S1, obtain a color image and a depth map from video and pre-process the images; S2, perform shadow detection on top of mixed-Gaussian background modeling and extract foreground targets; S3, extract the HOG feature of the color image and the CLBC feature of the depth map, and fuse the two features in parallel to obtain a fused feature; S4, input the fused feature into a classifier for analysis and judge whether a pedestrian is present, completing the detection. The present invention blends the HOG feature of the color image with the CLBC feature of the depth map, effectively overcoming interference produced by factors such as pedestrian stature, posture, viewing angle, clothing, and illumination, so that pedestrians can be detected from video more accurately.
Description
Technical field
The present invention relates to the fields of computer vision and image processing, and in particular to a pedestrian detection method based on depth maps and feature fusion.
Background technique
In the field of computer vision, pedestrian detection is an important research topic within object detection: various sensors acquire data about pedestrians, and pedestrians are then detected from the image data by algorithms such as image processing and pattern recognition. It is closely tied to fields such as vehicle-assisted driving, intelligent video surveillance, human behavior analysis, and aerial imagery.
At present, the more classical pedestrian detection methods are the combination of HOG features with an SVM classifier and methods based on HOG-LBP features. The traditional HOG-LBP feature-fusion approach loses considerable spectral information and is rather sensitive to noise, and the original LBP operator is not robust to non-uniform illumination changes and has poor rotation invariance for texture features.
Compared with an ordinary gray-scale image, a depth image is free of the interference caused by illumination, shadows, and surface texture, and provides reliable three-dimensional geometric information about objects. Performing pedestrian detection and tracking with HOG features combined with image depth information effectively overcomes the interference caused by factors such as pedestrian stature, posture, viewing angle, clothing, and illumination, so that pedestrians can be detected from video more accurately.
Summary of the invention
In order to overcome the interference caused by complex backgrounds in existing pedestrian detection, the present invention provides a pedestrian detection method based on depth maps and feature fusion that gives pedestrians a stronger representation, so that detection accuracy under complex backgrounds is significantly improved.
The purpose of the present invention is mainly achieved through the following technical solutions:
A pedestrian detection method based on depth maps and feature fusion, comprising the following steps:
S1: obtain a color image and a depth map, and pre-process the images;
S2: perform shadow detection on top of mixed-Gaussian background modeling and extract foreground targets; the process is as follows:
S2-1: use a Gaussian mixture model to compute the mixture probability density of the pixel at observation point x_t; the formula is as follows:

P(x_t) = Σ_{i=1..k} w_{i,t} · η(x_t, μ_{i,t}, τ_{i,t}),  with τ_{i,t} = δ_{i,t}² · I

where k is the total number of distributions, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is a three-dimensional identity matrix, and w_{i,t} is the weight of the i-th Gaussian distribution at time t;
S2-2: select, via a specified distance threshold, the Gaussian distribution that best matches the new observation point, and mark its position as k_hit; the distance between the new value and a Gaussian distribution is quantified with the following formula:

d = sqrt( (x_t − μ_{t−1})ᵀ · Σ_{t−1}⁻¹ · (x_t − μ_{t−1}) )

where μ_{t−1} and Σ_{t−1} are respectively the mean and covariance matrix of each single Gaussian distribution, and x_t is the new observation at time t; then update the weight coefficient of each Gaussian distribution:

w_{k,t} = (1 − α) · w_{k,t−1} + α · M_{k,t}

where α is the learning rate and M_{k,t} equals 1 for the matched distribution and 0 otherwise;
S2-3: if no Gaussian distribution meets the condition, reinitialize a new Gaussian distribution to replace the one with the smallest weight; meanwhile the single Gaussian distributions are re-sorted in descending order of w_{k,t−1}/σ, and the core Gaussian distributions are selected by the following formula:

B = argmin_b ( Σ_{k=1..b} w_k > T )

where T is the threshold that delimits the core Gaussian distributions;
S2-4: compare the best-match label k_hit with the number B of core Gaussian distributions; when the best-match distribution belongs to the core Gaussian distributions, the current observation is classified as part of the background model;
S2-5: if the best-match distribution does not belong to the core Gaussian distributions, select a suitable number of Gaussian distributions for each pixel; from the scene changes caused by brightness variation, shadows can be detected and marked, so that the true foreground is obtained;
Through the above steps, shadows can be detected, which prevents foreground objects from taking on wrong shapes that would harm subsequent processing and greatly improves accuracy.
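As a minimal illustrative sketch (not the patent's own implementation), OpenCV's MOG2 background subtractor realizes mixed-Gaussian background modeling with built-in shadow marking along the lines of steps S2-1 to S2-5; the parameter values and the input file name below are assumptions:

```python
import cv2

# mixed-Gaussian background model with shadow detection (sketch; parameters assumed)
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,         # frames used to learn the background
    varThreshold=16,     # squared Mahalanobis distance threshold for matching
    detectShadows=True,  # mark shadow pixels instead of treating them as foreground
)

cap = cv2.VideoCapture("video.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255 = foreground, 127 = shadow, 0 = background
    # keep only confident foreground, discarding the marked shadows
    foreground = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
cap.release()
```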
S3: extract the HOG feature of the color image and the CLBC feature of the depth map, and fuse the two features in parallel to obtain the fused feature; the process is as follows:
S3-1: extract the HOG feature of the color image;
S3-2: extract the improved LBP feature, namely the CLBC feature. First decompose the local difference of the depth map into two complementary components, the sign s_p and the magnitude m_p, respectively:

s_p = s(g_p − g_c),  m_p = |g_p − g_c|,  where s(x) = 1 if x ≥ 0 and 0 otherwise

where g_p is the gray value of a neighboring pixel on the circle of radius R, g_c is the gray value of the center pixel, and P is the total number of neighboring pixels;
S3-3: extract the complete local texture information; the formulas are as follows:

S_CLBC(P, R) = Σ_{p=0..P−1} s(g_p − g_c)
M_CLBC(P, R) = Σ_{p=0..P−1} t(m_p, c)
C_CLBC(P, R) = t(g_c, c_I)

where t(x, c) = 1 if x ≥ c and 0 otherwise. The three operators S_CLBC, M_CLBC, and C_CLBC are computed separately: S_CLBC plays the role of the traditional LBP, M_CLBC measures the local magnitude variation, and the C_CLBC operator extracts the local center information; c is the mean of m_p over the whole image, and c_I is the average gray level of the whole image;
S3-4: combine the S_CLBC, M_CLBC, and C_CLBC operators to obtain the CLBC feature;
Through the above steps, both the edge features of the image texture and the local features of the image are retained; the feature has good rotation invariance with respect to texture and improved robustness to noise, which raises detection precision.
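A minimal numpy sketch of the three operators as defined above; the 8-neighbor radius-1 sampling and the return of per-pixel maps are assumptions of this sketch, not prescriptions from the patent:

```python
import numpy as np

def clbc(gray: np.ndarray, P: int = 8, R: int = 1):
    """Return per-pixel S_CLBC, M_CLBC, C_CLBC maps (sketch)."""
    g = gray.astype(np.float64)
    gc = g[R:-R, R:-R]                          # center pixels
    h, w = gc.shape
    # neighbors on the circle of radius R (nearest-pixel sampling)
    offsets = [(int(round(R * np.sin(2 * np.pi * p / P))),
                int(round(R * np.cos(2 * np.pi * p / P)))) for p in range(P)]
    s_count = np.zeros((h, w))                  # S_CLBC: count of s(g_p - g_c) = 1
    mags = []
    for dy, dx in offsets:
        gp = g[R + dy:R + dy + h, R + dx:R + dx + w]
        s_count += (gp >= gc)
        mags.append(np.abs(gp - gc))            # m_p
    c = np.mean(mags)                           # mean magnitude over the whole image
    m_count = sum((m >= c).astype(float) for m in mags)  # M_CLBC: count of t(m_p, c)
    c_map = (gc >= g.mean()).astype(float)      # C_CLBC: t(g_c, c_I)
    return s_count, m_count, c_map
```

In the full method these operator maps would typically be histogrammed and concatenated to form the CLBC feature vector.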
S3-5: fuse the feature vectors of the HOG feature and the CLBC feature into a single complex vector in a parallel manner, and then extract the feature, i.e. the fused feature, in the complex vector space.
This scheme concentrates the most discriminative information of the original features while eliminating the redundancy produced by correlation between the different feature sets.
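A sketch of the parallel fusion strategy in the sense of the cited Jian Yang et al., "Feature fusion: parallel strategy vs. serial strategy": the two real feature vectors become the real and imaginary parts of one complex vector; zero-padding the shorter vector and taking the modulus afterwards are assumptions of this sketch:

```python
import numpy as np

def parallel_fuse(hog_vec: np.ndarray, clbc_vec: np.ndarray) -> np.ndarray:
    """Fuse two real feature vectors into one complex vector (sketch)."""
    n = max(hog_vec.size, clbc_vec.size)
    a = np.zeros(n); a[:hog_vec.size] = hog_vec    # real part: HOG
    b = np.zeros(n); b[:clbc_vec.size] = clbc_vec  # imaginary part: CLBC
    return a + 1j * b

# a real-valued vector for a standard classifier can then be obtained,
# e.g. as the modulus of each complex component:
# fused_real = np.abs(parallel_fuse(hog_vec, clbc_vec))
```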
S4: input the fused feature into the classifier for analysis, judge whether a pedestrian is present, and complete the detection.
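The steps above do not name a specific classifier, so the following sketch assumes a linear SVM (scikit-learn's LinearSVC); the toy data and the feature length of 512 stand in for real fused training samples:

```python
import numpy as np
from sklearn.svm import LinearSVC

# toy stand-in data: 100 fused feature vectors of assumed length 512
X_train = np.random.rand(100, 512)
y_train = np.random.randint(0, 2, 100)   # 1 = pedestrian, 0 = background

clf = LinearSVC(C=1.0)                   # regularization strength assumed
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))          # per-window pedestrian decisions
```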
Further, the processing in step S1 is as follows:
S1-1: divide the original image into several sub-images with a certain overlap between them, and automatically set a threshold for each sub-image according to the statistical properties of its gradient histogram;
S1-2: based on the gradient histogram properties, adaptively distinguish edge regions from non-edge regions.
Further, the steps of S3-1 are as follows:
S3-1-1: gray-scale conversion, treating the image as a three-dimensional image in x, y, z;
S3-1-2: standardize the color space of the input image using Gamma correction; the Gamma compression formula is:
I(x, y) = I(x, y)^gamma
S3-1-3: compute the gradient of every pixel of the video image, including magnitude and direction, and from these compute the gradient direction value of each pixel position; the gradient at pixel (x, y) in the image is:
G_x(x, y) = H(x+1, y) − H(x−1, y)
G_y(x, y) = H(x, y+1) − H(x, y−1)
where G_x(x, y), G_y(x, y), and H(x, y) denote the horizontal gradient, vertical gradient, and pixel value at pixel (x, y) of the input image; the gradient magnitude and direction at pixel (x, y) are respectively:
G(x, y) = sqrt( G_x(x, y)² + G_y(x, y)² )
α(x, y) = arctan( G_y(x, y) / G_x(x, y) )
S3-1-4: divide the image into small cells; this provides an encoding of local image regions while remaining largely insensitive to the posture and appearance of the human figures in the image;
S3-1-5: count the occurrences of the different gradient orientations within each cell to obtain the feature of each cell;
S3-1-6: group every few cells into a block; concatenating the features of all cells within a block yields the HOG feature of that block; collect the HOG features of all overlapping blocks in the detection window and combine them into the final feature vector used for classification.
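A compact sketch of steps S3-1-1 to S3-1-6 using scikit-image's HOG implementation, which carries out the cell/block pipeline described above; the cell size, block size, and orientation-bin count follow common defaults and are assumptions rather than values from the patent:

```python
import numpy as np
from skimage import color, feature

def hog_feature(rgb: np.ndarray) -> np.ndarray:
    gray = color.rgb2gray(rgb)       # S3-1-1: gray-scale conversion
    gray = np.power(gray, 0.5)       # S3-1-2: gamma compression, gamma = 1/2
    return feature.hog(              # S3-1-3..S3-1-6
        gray,
        orientations=9,              # S3-1-5: orientation bins per cell
        pixels_per_cell=(8, 8),      # S3-1-4: small cells
        cells_per_block=(2, 2),      # S3-1-6: blocks of cells
        block_norm="L2-Hys",
    )
```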
Beneficial effects of the present invention are mainly manifested in the following: the method effectively overcomes the problem of complex background scenes formed by the stature, posture, viewing angle, clothing, and illumination of different pedestrians, so that pedestrians can be detected from video more accurately.
Description of the drawings
Fig. 1 is a flow chart of the pedestrian detection method based on depth maps and feature fusion according to the present invention.
Fig. 2 is a schematic diagram of the background modeling module.
Fig. 3 is a schematic diagram of HOG feature extraction.
Fig. 4 is a schematic diagram of CLBC feature extraction.
Specific embodiment
The invention will be further described below in conjunction with the accompanying drawings.
Referring to Figs. 1 to 4, a pedestrian detection method based on depth maps and feature fusion comprises the following steps:
S1: obtain a color image and a depth map, and pre-process the images; the process is as follows:
S1-1: divide the original image into several sub-images with a certain overlap between them, and automatically set a threshold for each sub-image according to the statistical properties of its gradient histogram;
S1-2: based on the gradient histogram properties, adaptively distinguish edge regions from non-edge regions.
Through the above steps, the noise in the depth image can be removed well while its edge information is preserved, which greatly facilitates subsequent processing.
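A hedged sketch of this adaptive pre-processing: split the depth map into overlapping tiles, derive a per-tile edge threshold from its gradient histogram, and smooth only non-edge pixels so that depth edges survive; the tile size, overlap, percentile rule, and median filter are all assumptions of this sketch:

```python
import numpy as np
from scipy import ndimage

def preprocess_depth(depth: np.ndarray, tile: int = 64, overlap: int = 16) -> np.ndarray:
    gy, gx = np.gradient(depth.astype(np.float64))
    grad = np.hypot(gx, gy)                          # gradient magnitude
    edge_mask = np.zeros(depth.shape, dtype=bool)
    step = tile - overlap                            # overlapping sub-images
    for y in range(0, depth.shape[0] - tile + 1, step):
        for x in range(0, depth.shape[1] - tile + 1, step):
            g = grad[y:y + tile, x:x + tile]
            thr = np.percentile(g, 90)               # per-tile threshold (assumed rule)
            edge_mask[y:y + tile, x:x + tile] |= g >= thr
    smoothed = ndimage.median_filter(depth, size=3)  # denoise non-edge regions
    return np.where(edge_mask, depth, smoothed)      # keep edges intact
```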
S2: referring to Fig. 2, a shadow detection module is added on top of mixed-Gaussian background modeling to extract foreground targets; the process is as follows:
S2-1: use a Gaussian mixture model to compute the mixture probability density of the pixel at observation point x_t; the formula is as follows:

P(x_t) = Σ_{i=1..k} w_{i,t} · η(x_t, μ_{i,t}, τ_{i,t}),  with τ_{i,t} = δ_{i,t}² · I

where k is the total number of distributions, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is a three-dimensional identity matrix, and w_{i,t} is the weight of the i-th Gaussian distribution at time t;
S2-2: select, via a specified distance threshold, the Gaussian distribution that best matches the new observation point, and mark its position as k_hit; the distance between the new value and a Gaussian distribution is quantified with the following formula:

d = sqrt( (x_t − μ_{t−1})ᵀ · Σ_{t−1}⁻¹ · (x_t − μ_{t−1}) )

where μ_{t−1} and Σ_{t−1} are respectively the mean and covariance matrix of each single Gaussian distribution, and x_t is the new observation at time t; then update the weight coefficient of each Gaussian distribution:

w_{k,t} = (1 − α) · w_{k,t−1} + α · M_{k,t}

where α is the learning rate and M_{k,t} equals 1 for the matched distribution and 0 otherwise;
S2-3: if no Gaussian distribution meets the condition, reinitialize a new Gaussian distribution to replace the one with the smallest weight; meanwhile the single Gaussian distributions are re-sorted in descending order of w_{k,t−1}/σ, and the core Gaussian distributions are selected by the following formula:

B = argmin_b ( Σ_{k=1..b} w_k > T )

where T is the threshold that delimits the core Gaussian distributions;
S2-4: compare the best-match label k_hit with the number B of core Gaussian distributions; when the best-match distribution belongs to the core Gaussian distributions, the current observation is classified as part of the background model;
S2-5: if the best-match distribution does not belong to the core Gaussian distributions, select a suitable number of Gaussian distributions for each pixel; from the scene changes caused by brightness variation, shadows can be detected and marked, so that the true foreground is obtained;
Through the above steps, the foreground target is obtained, preparing for the subsequent feature extraction.
S3: extract the HOG feature of the color image and the CLBC feature of the depth map, and fuse the two features in parallel to obtain the fused feature; the process is as follows:
S3-1: extract the HOG feature of the color image; referring to Fig. 3, the steps are as follows:
S3-1-1: gray-scale conversion, treating the image as a three-dimensional image in x, y, z (gray level);
S3-1-2: standardize the color space of the input image using Gamma correction.
The Gamma compression formula is:
I(x, y) = I(x, y)^gamma
For example, gamma = 1/2 can be used;
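As a worked one-liner (assuming an 8-bit image normalized to [0, 1] first):

```python
import numpy as np

img = np.random.randint(0, 256, (4, 4))   # stand-in 8-bit gray image
img_gamma = np.power(img / 255.0, 0.5)    # I(x, y)^gamma with gamma = 1/2
```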
S3-1-3: compute the gradient of every pixel of the video image, including magnitude and direction, and from these compute the gradient direction value of each pixel position; the gradient at pixel (x, y) in the image is:
G_x(x, y) = H(x+1, y) − H(x−1, y)
G_y(x, y) = H(x, y+1) − H(x, y−1)
where G_x(x, y), G_y(x, y), and H(x, y) denote the horizontal gradient, vertical gradient, and pixel value at pixel (x, y) of the input image; the gradient magnitude and direction at pixel (x, y) are respectively:
G(x, y) = sqrt( G_x(x, y)² + G_y(x, y)² )
α(x, y) = arctan( G_y(x, y) / G_x(x, y) )
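The centered differences above amount to filtering with the 1-D kernel [-1, 0, 1] along each axis; a brief numpy/scipy illustration (the random image is a stand-in):

```python
import numpy as np
from scipy import ndimage

H = np.random.rand(8, 8)                         # stand-in image
Gx = ndimage.correlate1d(H, [-1, 0, 1], axis=1)  # H(x+1, y) - H(x-1, y)
Gy = ndimage.correlate1d(H, [-1, 0, 1], axis=0)  # H(x, y+1) - H(x, y-1)
G = np.hypot(Gx, Gy)                             # gradient magnitude
alpha = np.arctan2(Gy, Gx)                       # gradient direction
```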
S3-1-4: divide the image into small cells; this provides an encoding of local image regions while remaining largely insensitive to the posture and appearance of the human figures in the image;
S3-1-5: count the occurrences of the different gradient orientations within each cell to obtain the feature of each cell;
S3-1-6: group every few cells into a block; concatenating the features of all cells within a block yields the HOG feature of that block; collect the HOG features of all overlapping blocks in the detection window and combine them into the final feature vector used for classification.
S3-2: referring to Fig. 4, extract the improved LBP feature, namely the CLBC feature. First decompose the local difference of the depth map into two complementary components, the sign s_p and the magnitude m_p, respectively:

s_p = s(g_p − g_c),  m_p = |g_p − g_c|,  where s(x) = 1 if x ≥ 0 and 0 otherwise

where g_p is the gray value of a neighboring pixel on the circle of radius R, g_c is the gray value of the center pixel, and P is the total number of neighboring pixels;
S3-3: extract the complete local texture information; the formulas are as follows:

S_CLBC(P, R) = Σ_{p=0..P−1} s(g_p − g_c)
M_CLBC(P, R) = Σ_{p=0..P−1} t(m_p, c)
C_CLBC(P, R) = t(g_c, c_I)

where t(x, c) = 1 if x ≥ c and 0 otherwise. The three operators S_CLBC, M_CLBC, and C_CLBC are computed separately: S_CLBC plays the role of the traditional LBP, M_CLBC measures the local magnitude variation, and the C_CLBC operator extracts the local center information; c is the mean of m_p over the whole image, and c_I is the average gray level of the whole image;
S3-4: combine the S_CLBC, M_CLBC, and C_CLBC operators to obtain the CLBC feature;
S3-5: fuse the feature vectors of the HOG feature and the CLBC feature into a single complex vector in a parallel manner, and then extract the feature, i.e. the fused feature, in the complex vector space;
S4: input the fused feature into the classifier for analysis, judge whether a pedestrian is present, and complete the detection.
Through the pedestrian detection method based on depth maps and feature fusion, the present embodiment can detect pedestrians from video more accurately.
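To show how the pieces fit together, a composed sketch of S1 to S4 using the hypothetical helpers from the earlier snippets (preprocess_depth, hog_feature, clbc, parallel_fuse, a MOG2 subtractor, and a trained clf); the CLBC histogramming rule is likewise an assumption:

```python
import numpy as np

def detect_pedestrian(color_frame, depth_frame, subtractor, clf) -> bool:
    depth = preprocess_depth(depth_frame)              # S1: edge-preserving denoising
    mask = subtractor.apply(color_frame)               # S2: MOG2 foreground + shadows
    if not np.any(mask == 255):
        return False                                   # no confident foreground
    hog_vec = hog_feature(color_frame)                 # S3-1: HOG of the color image
    s, m, c = clbc(depth)                              # S3-2..S3-4: CLBC of the depth map
    clbc_vec = np.concatenate([
        np.bincount(s.astype(int).ravel(), minlength=9),   # S_CLBC histogram (P = 8)
        np.bincount(m.astype(int).ravel(), minlength=9),   # M_CLBC histogram
        [c.mean()],                                        # C_CLBC summary
    ]).astype(float)
    fused = np.abs(parallel_fuse(hog_vec, clbc_vec))   # S3-5: parallel fusion
    return bool(clf.predict(fused.reshape(1, -1))[0])  # S4: classifier decision
```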
Claims (3)
1. A pedestrian detection method based on depth maps and feature fusion, characterized in that the method comprises the following steps:
S1: obtain a color image and a depth map, and pre-process the images;
S2: perform shadow detection on top of mixed-Gaussian background modeling, then extract foreground targets; the process is as follows:
S2-1: use a Gaussian mixture model to compute the mixture probability density of the pixel at observation point x_t; the formula is as follows:

P(x_t) = Σ_{i=1..k} w_{i,t} · η(x_t, μ_{i,t}, τ_{i,t}),  with τ_{i,t} = δ_{i,t}² · I

where k is the total number of distributions, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} is its covariance matrix, δ_{i,t} is its variance, I is a three-dimensional identity matrix, and w_{i,t} is the weight of the i-th Gaussian distribution at time t;
S2-2: select from the mixture model the Gaussian distribution that best matches the new observation point and mark its position as k_hit; the distance between the new value and a Gaussian distribution is quantified with the following formula:

d = sqrt( (x_t − μ_{t−1})ᵀ · Σ_{t−1}⁻¹ · (x_t − μ_{t−1}) )

where μ_{t−1} and Σ_{t−1} are respectively the mean and covariance matrix of each single Gaussian distribution, and x_t is the new observation at time t; then update the weight coefficient of each Gaussian distribution:

w_{k,t} = (1 − α) · w_{k,t−1} + α · M_{k,t}

where α is the learning rate and M_{k,t} equals 1 for the matched distribution and 0 otherwise;
S2-3: if no Gaussian distribution meets the condition, reinitialize a new Gaussian distribution to replace the one with the smallest weight; meanwhile the single Gaussian distributions are re-sorted in descending order of w_{k,t−1}/σ, and the core Gaussian distributions are selected by the following formula:

B = argmin_b ( Σ_{k=1..b} w_k > T )

where T is the threshold that delimits the core Gaussian distributions;
S2-4: compare the best-match label k_hit with the number B of core Gaussian distributions; when the best-match distribution belongs to the core Gaussian distributions, the current observation is classified as part of the background model;
S2-5: if the best-match distribution does not belong to the core Gaussian distributions, select a suitable number of Gaussian distributions for each pixel; from the scene changes caused by brightness variation, shadows can be detected and marked, so that the true foreground is obtained;
S3: extract the HOG feature of the color image and the CLBC feature of the depth map, and fuse the two features in parallel to obtain the fused feature; the process is as follows:
S3-1: extract the HOG feature of the color image;
S3-2: extract the improved LBP feature, namely the CLBC feature. First decompose the local difference of the depth map into two complementary components, the sign s_p and the magnitude m_p, respectively:

s_p = s(g_p − g_c),  m_p = |g_p − g_c|,  where s(x) = 1 if x ≥ 0 and 0 otherwise

where g_p is the gray value of a neighboring pixel on the circle of radius R, g_c is the gray value of the center pixel, and P is the total number of neighboring pixels;
S3-3: extract the complete local texture information; the formulas are as follows:

S_CLBC(P, R) = Σ_{p=0..P−1} s(g_p − g_c)
M_CLBC(P, R) = Σ_{p=0..P−1} t(m_p, c)
C_CLBC(P, R) = t(g_c, c_I)

where t(x, c) = 1 if x ≥ c and 0 otherwise. The three operators S_CLBC, M_CLBC, and C_CLBC are computed separately: S_CLBC plays the role of the traditional LBP, M_CLBC measures the local magnitude variation, and C_CLBC extracts the local center information; c is the mean of m_p over the whole image, and c_I is the average gray level of the whole image;
S3-4: combine the S_CLBC, M_CLBC, and C_CLBC operators to obtain the CLBC feature;
S3-5: fuse the feature vectors of the HOG feature and the CLBC feature into a single complex vector in a parallel manner, and then extract the feature, i.e. the fused feature, in the complex vector space;
S4: input the fused feature into the classifier for analysis, judge whether a pedestrian is present, and complete the detection.
2. The pedestrian detection method based on depth maps and feature fusion according to claim 1, characterized in that the processing in step S1 is as follows:
S1-1: divide the original image into several sub-images with a certain overlap between them, and automatically set a threshold for each sub-image according to the statistical properties of its gradient histogram;
S1-2: based on the gradient histogram properties, adaptively distinguish edge regions from non-edge regions.
3. The pedestrian detection method based on depth maps and feature fusion according to claim 1 or 2, characterized in that the steps of S3-1 are as follows:
S3-1-1: gray-scale conversion, treating the image as a three-dimensional image in x, y, z;
S3-1-2: standardize the color space of the input image using Gamma correction; the Gamma compression formula is:
I(x, y) = I(x, y)^gamma
S3-1-3: compute the gradient of every pixel of the video image, including magnitude and direction, and from these compute the gradient direction value of each pixel position; the gradient at pixel (x, y) in the image is:
G_x(x, y) = H(x+1, y) − H(x−1, y)
G_y(x, y) = H(x, y+1) − H(x, y−1)
where G_x(x, y), G_y(x, y), and H(x, y) denote the horizontal gradient, vertical gradient, and pixel value at pixel (x, y) of the input image; the gradient magnitude and direction at pixel (x, y) are respectively:
G(x, y) = sqrt( G_x(x, y)² + G_y(x, y)² )
α(x, y) = arctan( G_y(x, y) / G_x(x, y) )
S3-1-4: divide the image into small cells; this provides an encoding of local image regions while remaining largely insensitive to the posture and appearance of the human figures in the image;
S3-1-5: count the occurrences of the different gradient orientations within each cell to obtain the feature of each cell;
S3-1-6: group every few cells into a block; concatenating the features of all cells within a block yields the HOG feature of that block; collect the HOG features of all overlapping blocks in the detection window and combine them into the final feature vector used for classification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910282728.0A CN110020627A (en) | 2019-04-10 | 2019-04-10 | A kind of pedestrian detection method based on depth map and Fusion Features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910282728.0A CN110020627A (en) | 2019-04-10 | 2019-04-10 | A kind of pedestrian detection method based on depth map and Fusion Features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110020627A (en) | 2019-07-16 |
Family
ID=67190839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910282728.0A Pending CN110020627A (en) | 2019-04-10 | 2019-04-10 | A kind of pedestrian detection method based on depth map and Fusion Features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110020627A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145331A (en) * | 2020-01-09 | 2020-05-12 | 深圳市数字城市工程研究中心 | Cloud rendering image fusion method and system for massive urban space three-dimensional data |
CN111967531A (en) * | 2020-08-28 | 2020-11-20 | 南京邮电大学 | High-precision indoor image positioning method based on multi-feature fusion |
CN112750151A (en) * | 2020-12-30 | 2021-05-04 | 成都云盯科技有限公司 | Clothing color matching method, device and equipment based on mathematical statistics |
CN112784854A (en) * | 2020-12-30 | 2021-05-11 | 成都云盯科技有限公司 | Method, device and equipment for segmenting and extracting clothing color based on mathematical statistics |
CN114119753A (en) * | 2021-12-08 | 2022-03-01 | 北湾科技(武汉)有限公司 | Transparent object 6D attitude estimation method facing mechanical arm grabbing |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105740833A (en) * | 2016-02-03 | 2016-07-06 | 北京工业大学 | Human body behavior identification method based on depth sequence |
CN106504289A (en) * | 2016-11-02 | 2017-03-15 | 深圳乐行天下科技有限公司 | A kind of indoor objects detection method and device |
US20180165552A1 (en) * | 2016-12-12 | 2018-06-14 | National Chung Shan Institute Of Science And Technology | All-weather thermal-image pedestrian detection method |
CN109359549A (en) * | 2018-09-20 | 2019-02-19 | 广西师范大学 | A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP |
- 2019-04-10: application CN201910282728.0A filed in China; published as CN110020627A (en), status: Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105740833A (en) * | 2016-02-03 | 2016-07-06 | 北京工业大学 | Human body behavior identification method based on depth sequence |
CN106504289A (en) * | 2016-11-02 | 2017-03-15 | 深圳乐行天下科技有限公司 | A kind of indoor objects detection method and device |
US20180165552A1 (en) * | 2016-12-12 | 2018-06-14 | National Chung Shan Institute Of Science And Technology | All-weather thermal-image pedestrian detection method |
CN109359549A (en) * | 2018-09-20 | 2019-02-19 | 广西师范大学 | A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP |
Non-Patent Citations (5)
Title |
---|
JIAN YANG et al.: "Feature fusion: parallel strategy vs. serial strategy", Pattern Recognition |
MOFFIS: "Mixed Gaussian background modeling" (混合高斯背景建模), https://www.cnblogs.com/walccott/p/4956929.html |
WYU123: "Principle and implementation of mixed Gaussian background modeling" (混合高斯背景建模原理及实现), https://www.cnblogs.com/wyuzl/p/6868093.html |
搬砖小松鼠: "Notes on HOG, LBP and Haar global features for pedestrian detection" (行人检测全局特征中的HOG、LBP、Hear特征整理), https://blog.csdn.net/whu_zcj/article/details/50856533?locationnum=9 |
CHENG Deqiang et al.: "Improved HOG-CLBC pedestrian detection method" (改进的HOG-CLBC的行人检测方法), Opto-Electronic Engineering (光电工程) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145331A (en) * | 2020-01-09 | 2020-05-12 | 深圳市数字城市工程研究中心 | Cloud rendering image fusion method and system for massive urban space three-dimensional data |
CN111145331B (en) * | 2020-01-09 | 2023-04-07 | 深圳市数字城市工程研究中心 | Cloud rendering image fusion method and system for massive urban space three-dimensional data |
CN111967531A (en) * | 2020-08-28 | 2020-11-20 | 南京邮电大学 | High-precision indoor image positioning method based on multi-feature fusion |
CN111967531B (en) * | 2020-08-28 | 2022-09-16 | 南京邮电大学 | High-precision indoor image positioning method based on multi-feature fusion |
CN112750151A (en) * | 2020-12-30 | 2021-05-04 | 成都云盯科技有限公司 | Clothing color matching method, device and equipment based on mathematical statistics |
CN112784854A (en) * | 2020-12-30 | 2021-05-11 | 成都云盯科技有限公司 | Method, device and equipment for segmenting and extracting clothing color based on mathematical statistics |
CN112784854B (en) * | 2020-12-30 | 2023-07-14 | 成都云盯科技有限公司 | Clothing color segmentation extraction method, device and equipment based on mathematical statistics |
CN112750151B (en) * | 2020-12-30 | 2023-09-26 | 成都云盯科技有限公司 | Clothing color matching method, device and equipment based on mathematical statistics |
CN114119753A (en) * | 2021-12-08 | 2022-03-01 | 北湾科技(武汉)有限公司 | Transparent object 6D attitude estimation method facing mechanical arm grabbing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110020627A (en) | A kind of pedestrian detection method based on depth map and Fusion Features | |
CN108520226B (en) | Pedestrian re-identification method based on body decomposition and significance detection | |
CN103942577B (en) | Based on the personal identification method for establishing sample database and composite character certainly in video monitoring | |
CN104008370B (en) | A kind of video face identification method | |
Huang et al. | Morphological building/shadow index for building extraction from high-resolution imagery over urban areas | |
CN105631455B (en) | A kind of image subject extracting method and system | |
CN104063702B (en) | Three-dimensional gait recognition based on shielding recovery and partial similarity matching | |
CN107679503A (en) | A kind of crowd's counting algorithm based on deep learning | |
CN103035013A (en) | Accurate moving shadow detection method based on multi-feature fusion | |
CN103735269B (en) | A kind of height measurement method followed the tracks of based on video multi-target | |
CN104715238A (en) | Pedestrian detection method based on multi-feature fusion | |
CN104508704A (en) | Body measurement | |
CN106599785B (en) | Method and equipment for establishing human body 3D characteristic identity information base | |
Zang et al. | Road network extraction via aperiodic directional structure measurement | |
CN108268814A (en) | A kind of face identification method and device based on the fusion of global and local feature Fuzzy | |
Denman et al. | Determining operational measures from multi-camera surveillance systems using soft biometrics | |
CN109902565A (en) | The Human bodys' response method of multiple features fusion | |
CN110032932A (en) | A kind of human posture recognition method based on video processing and decision tree given threshold | |
CN107067037B (en) | Method for positioning image foreground by using LL C criterion | |
Chai | A probabilistic framework for building extraction from airborne color image and DSM | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
CN110910497B (en) | Method and system for realizing augmented reality map | |
Galiyawala et al. | Visual appearance based person retrieval in unconstrained environment videos | |
CN102163343B (en) | Three-dimensional model optimal viewpoint automatic obtaining method based on internet image | |
Li et al. | A hierarchical framework for image-based human age estimation by weighted and OHRanked sparse representation-based classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190716 |