CN103136511B - Behavior detection method and device - Google Patents

Behavior detection method and device

Info

Publication number
CN103136511B
Authority
CN
China
Prior art keywords
Gaussian
shoulder
Gaussian model
hand
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310021649.7A
Other languages
Chinese (zh)
Other versions
CN103136511A (en)
Inventor
王海峰 (Wang Haifeng)
王晓萌 (Wang Xiaomeng)
何小波 (He Xiaobo)
董博 (Dong Bo)
杨宇 (Yang Yu)
张凯歌 (Zhang Kaige)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Original Assignee
XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Priority to CN201310021649.7A
Publication of CN103136511A
Application granted
Publication of CN103136511B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a behavior detection method and device. The method includes: extracting a foreground image using Gaussian background modeling; extracting the positions of the human shoulders and hands from the foreground image; and judging whether running behavior occurs according to the positions of the hands and the shoulders. The invention improves the accuracy of behavior monitoring.

Description

Behavior detection method and device
Technical field
The present invention relates to the technical field of machine vision, and in particular to a behavior detection method and device.
Background art
With the development of society and the needs of smart cities, more and more public places are equipped with video surveillance systems. Some video surveillance systems have a functional requirement of this kind: determining whether anyone in the video is running quickly. This requirement is a behavior analysis problem in video surveillance and belongs to the higher-level processing goals of intelligent video surveillance.
Summary of the invention
To achieve the above object, the present invention proposes a behavior detection method and device.
According to one aspect of the present invention, a behavior detection method includes: extracting a foreground image using Gaussian background modeling; extracting the positions of the human shoulders and hands from the foreground image; and judging whether running behavior occurs according to the positions of the hands and the shoulders.
Preferably, extracting the foreground image using Gaussian background modeling includes: treating each image unit of the foreground image as a random variable sampled from a Gaussian mixture distribution; and estimating, according to preset values, the prior probability that each pixel is foreground or background.
Preferably, extracting the positions of the human shoulders and hands from the foreground image includes: training three detectors with different positive and negative samples: a human body detector, a head-shoulder detector, and a hand detector; detecting the moving region with the human body detector; and detecting the head-shoulder and the hands within the detected human region and a preset range, using the same method.
Preferably, the method of judging whether running behavior occurs further includes: for one person, extracting the hand centers hand1 and hand2, the lower-left and lower-right corners corner1 and corner2 of the head-shoulder detection, and the height height1 of the head-shoulder; when the distance from hand1 or hand2 to corner1 or corner2, divided by height1, is greater than a threshold thr1, judging that running behavior occurs.
According to another aspect of the present invention, a behavior detection device includes: a first extraction module, configured to extract a foreground image using Gaussian background modeling; a second extraction module, configured to extract the positions of the human shoulders and hands from the foreground image; and a judgment module, configured to judge whether running behavior occurs according to the positions of the hands and the shoulders.
Preferably, the first extraction module includes:
a first processing module, configured to treat each image unit of the foreground image as a random variable sampled from a Gaussian mixture distribution;
a second processing module, configured to estimate, according to preset values, the prior probability that each pixel is foreground or background.
Preferably, the second extraction module includes:
a training module, configured to train three detectors with different positive and negative samples: a human body detector, a head-shoulder detector, and a hand detector; a first detection module, configured to detect the moving region with the human body detector; and a second detection module, configured to detect the head-shoulder and the hands within the detected human region and a preset range, using the same method.
Preferably, the judgment module includes: an extraction module, configured to extract, for one person, the hand centers hand1 and hand2, the lower-left and lower-right corners corner1 and corner2 of the head-shoulder detection, and the height height1 of the head-shoulder; and a behavior determination module, configured to judge that running behavior occurs when the distance from hand1 or hand2 to corner1 or corner2, divided by height1, is greater than a threshold thr1.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of the behavior detection method according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of the behavior detection device according to an embodiment of the present invention;
Fig. 3 is a flow chart of running detection according to an embodiment of the present invention.
Detailed description of the invention
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
This embodiment provides a behavior detection method. Fig. 1 is a flow chart of the behavior detection method according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S102: extract a foreground image using Gaussian background modeling.
Step S104: extract the positions of the human shoulders and hands from the foreground image.
Step S106: judge whether running behavior occurs according to the positions of the hands and the shoulders.
Preferably, extracting the foreground image using Gaussian background modeling includes: treating each image unit of the foreground image as a random variable sampled from a Gaussian mixture distribution; and estimating, according to preset values, the prior probability that each pixel is foreground or background.
Preferably, extracting the positions of the human shoulders and hands from the foreground image includes: training three detectors with different positive and negative samples: a human body detector, a head-shoulder detector, and a hand detector; detecting the moving region with the human body detector; and detecting the head-shoulder and the hands within the detected human region and a preset range, using the same method.
Preferably, the method of judging whether running behavior occurs further includes: for one person, extracting the hand centers hand1 and hand2, the lower-left and lower-right corners corner1 and corner2 of the head-shoulder detection, and the height height1 of the head-shoulder; when the distance from hand1 or hand2 to corner1 or corner2, divided by height1, is greater than a threshold thr1, judging that running behavior occurs.
According to another aspect of the present invention, a behavior detection device is provided. Fig. 2 is a structural block diagram of the behavior detection device according to an embodiment of the present invention. As shown in Fig. 2, the device includes: a first extraction module 22, configured to extract a foreground image using Gaussian background modeling; a second extraction module 24, configured to extract the positions of the human shoulders and hands from the foreground image; and a judgment module 26, configured to judge whether running behavior occurs according to the positions of the hands and the shoulders.
Preferably, the first extraction module 22 includes:
a first processing module, configured to treat each image unit of the foreground image as a random variable sampled from a Gaussian mixture distribution;
a second processing module, configured to estimate, according to preset values, the prior probability that each pixel is foreground or background.
Preferably, the second extraction module 24 includes:
a training module, configured to train three detectors with different positive and negative samples: a human body detector, a head-shoulder detector, and a hand detector; a first detection module, configured to detect the moving region with the human body detector; and a second detection module, configured to detect the head-shoulder and the hands within the detected human region and a preset range, using the same method.
Preferably, the judgment module includes: an extraction module, configured to extract, for one person, the hand centers hand1 and hand2, the lower-left and lower-right corners corner1 and corner2 of the head-shoulder detection, and the height height1 of the head-shoulder; and a behavior determination module, configured to judge that running behavior occurs when the distance from hand1 or hand2 to corner1 or corner2, divided by height1, is greater than a threshold thr1.
Preferred embodiment one
This preferred embodiment proposes a running detection method based on measuring the arm angle. The mixture-of-Gaussians background modeling method is used to extract the moving foreground image. The human region is then obtained from the moving region, and the positions of the head-shoulder and the hands are extracted from the human region. Finally, whether running behavior occurs is judged according to the positions of the head-shoulder and the hands of the same human body.
Preferred embodiment two
This preferred embodiment provides a behavior monitoring method, which includes:
(1) Extracting the moving human body:
The mixture-of-Gaussians background modeling method is used to extract the moving human region in the scene.
A single Gaussian background is modeled as f(x; μ, σ) = φ * exp(-(x - μ)² / (2σ²)).
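Although the text only writes out the single-Gaussian density, the K weighted Gaussians initialized below combine into the standard mixture density. The following block makes that implied model explicit; it is a reconstruction for orientation, not a formula taken from the original:

```latex
p\bigl(I(x,y)\bigr) = \sum_{k=1}^{K} \omega_k(x,y)\,
  \mathcal{N}\bigl(I(x,y);\,\mu_k(x,y),\,\sigma_k^2(x,y)\bigr),
\qquad \sum_{k=1}^{K} \omega_k(x,y) = 1 .
```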
Mixture-of-Gaussians background modeling:
1) First initialize the mixture model parameters, including:
the weight of each Gaussian model;
the mean and standard deviation of each Gaussian model.
The initialization of the weights is an estimate of the prior probability of the background distribution. At initialization, the weight of the first Gaussian model is usually set relatively large, and the others correspondingly smaller, that is:
ω_k(x, y, 1) = W,                 k = 1
ω_k(x, y, 1) = (1 - W) / (K - 1), k ≠ 1
The mean of the first Gaussian model is set equal to the pixel value (or the average over the processing unit) of the first frame of the input video, that is:
μ_k(x, y, l, 1) = I(x, y, l, 1), k = 1
μ_k(x, y, l, 1) = 0,             k ≠ 1, 0 < k <= K
The variance of the K Gaussian models:
σ_k²(x, y, 1) = var, k = 1, 2, ..., K
The initial variances of all Gaussian models are equal, and the value of var is directly related to the dynamic characteristics of the video.
2) Update the Gaussian model parameters
Traverse each Gaussian model and evaluate the following inequality:
(I(x, y, l, f) - μ_k(x, y, l, f-1))² < c * σ_k²(x, y, f-1)
If it holds for all color components, the pixel is attributed to the B-th Gaussian model; otherwise it belongs to no Gaussian model, which is equivalent to the appearance of an outlier. Both situations require a corresponding update.
Update for situation 1:
Situation 1 means the value of the current pixel matches the B-th Gaussian distribution. The pixel does not necessarily belong to the background; it is necessary to judge whether the B-th Gaussian distribution satisfies the following condition:
Σ_{n=1}^{B} w_n(x, y, f) < Threshold
If the condition holds, the pixel belongs to the background; otherwise it belongs to the foreground.
If the pixel belongs to the background, the B-th background distribution has output a sample value, and all distributions then require a parameter update.
The parameters of the B-th Gaussian model are updated as follows:
w_B(x, y, f) = (1 - α) * w_B(x, y, f-1) + α
μ_B(x, y, l, f) = (1 - β) * μ_B(x, y, l, f-1) + β * I(x, y, l, f)
σ_B²(x, y, f) = (1 - β) * σ_B²(x, y, f-1) + β * (I(:) - μ_B(:))ᵀ * (I(:) - μ_B(:))
The remaining Gaussian models only have their weights changed; their means and variances remain unchanged, that is:
w_k(x, y, f) = (1 - α) * w_k(x, y, f-1), k ≠ B
β = α * η(I(x, y, :, f) | μ_B, σ_B)
An outlier means the pixel value does not match any Gaussian distribution. In this case the pixel is regarded as a new situation appearing in the video, and the K-th Gaussian distribution is replaced by this new situation. Its weight, mean, and variance are all determined following the initialization approach, namely by assigning a smaller weight and a larger variance, that is:
w_K(x, y, f) = (1 - W) / (K - 1)
μ_K(x, y, l, f) = I(x, y, l, f)
σ_K(x, y, l, f) = var
At the same time, the point is determined to be a foreground point.
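To make steps 1) and 2) concrete, here is a minimal per-pixel sketch in Python/NumPy, written for this description rather than taken from the patented implementation. It works on grayscale frames, treats β as the constant α instead of α * η(I | μ_B, σ_B), picks the first matching Gaussian as the B-th, and uses illustrative values for K, W, var, c, and Threshold:

```python
import numpy as np

class MoGBackground:
    """Per-pixel mixture-of-Gaussians background model (grayscale sketch)."""

    def __init__(self, first_frame, K=3, W=0.8, var=225.0,
                 alpha=0.01, c=6.25, threshold=0.7):
        I = first_frame.astype(np.float64)
        self.K, self.W, self.var0 = K, W, var
        self.alpha, self.c, self.threshold = alpha, c, threshold
        # Step 1) initialization: the first Gaussian gets the large weight W
        # and the first frame as its mean; the others share (1 - W)/(K - 1).
        self.w = np.full((K,) + I.shape, (1.0 - W) / (K - 1))
        self.w[0] = W
        self.mu = np.zeros((K,) + I.shape)
        self.mu[0] = I
        self.sig2 = np.full((K,) + I.shape, var)  # equal initial variances

    def apply(self, frame):
        """Step 2) update with one frame; returns a boolean foreground mask."""
        I = frame.astype(np.float64)

        # Matching test: (I - mu_k)^2 < c * sigma_k^2 for each Gaussian k.
        match = (I - self.mu) ** 2 < self.c * self.sig2
        matched = match.any(axis=0)
        B = np.argmax(match, axis=0)          # index of the first matching Gaussian

        # Background test: cumulative weight w_1 + ... + w_B below Threshold.
        cumw = np.cumsum(self.w, axis=0)
        cum_at_B = np.take_along_axis(cumw, B[None], axis=0)[0]
        background = matched & (cum_at_B < self.threshold)

        # Update the matched Gaussian; only decay the weights of the rest.
        upd = (np.arange(self.K)[:, None, None] == B[None]) & matched[None]
        beta = self.alpha                     # simplification of alpha * eta(...)
        d = I - self.mu
        self.w = (1 - self.alpha) * self.w + self.alpha * upd
        self.mu = np.where(upd, self.mu + beta * d, self.mu)
        self.sig2 = np.where(upd, (1 - beta) * self.sig2 + beta * d * d, self.sig2)

        # Outlier: no Gaussian matched -> replace the K-th Gaussian with the
        # new value, assigning a small weight and the large initial variance.
        out = ~matched
        self.w[-1][out] = (1.0 - self.W) / (self.K - 1)
        self.mu[-1][out] = I[out]
        self.sig2[-1][out] = self.var0

        return ~background                    # foreground = not background
```

A caller constructs the model from the first frame and then calls model.apply(frame) on each subsequent frame to obtain the foreground mask.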
(2) Detecting the human body, head-shoulder, and hands:
Human body detection is first carried out using the support vector machine method.
Training: choose a suitable kernel function k(x_i, x_j).
Minimize ||w|| subject to ω_i(w·x_i - b) ≥ 1 - ξ_i.
Only the non-zero α_i and the corresponding x_i (the support vectors) are stored.
The image is scaled proportionally to different scales, and at each scale the image is scanned with a window of size 8*16. The image patch under each window is then classified.
(3) Classification: for a pattern X, the decision function f(X) = Σ_i α_i ω_i k(x_i, X) - b is computed from the support vectors x_i and the corresponding weights α_i; the sign of this function determines whether the region is a human body.
The head-shoulder and the hands are then detected respectively within the detected human region in the aforementioned manner.
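As an illustration of the training and scanning procedure, the sketch below uses scikit-learn's SVC as a stand-in for "the support vector machine method". Raw flattened pixels as features, the RBF kernel, and the stride and scale factor are assumptions made here, not details from the original text; the same train_detector/scan pair would be used three times, once per target (human body, head-shoulder, hand):

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def train_detector(pos_patches, neg_patches):
    """Train one SVM detector from 8x16 positive/negative sample patches."""
    X = np.array([p.ravel() for p in pos_patches + neg_patches], dtype=np.float64)
    y = np.array([1] * len(pos_patches) + [-1] * len(neg_patches))
    clf = SVC(kernel="rbf")   # the kernel k(x_i, x_j) is a free choice
    clf.fit(X, y)             # only non-zero alpha_i / support vectors are kept
    return clf

def scan(image, clf, win_w=8, win_h=16, scale=1.25, stride=4):
    """Multi-scale 8x16 sliding-window scan; boxes in original coordinates."""
    detections, s = [], 1.0
    while True:
        w, h = int(image.shape[1] / s), int(image.shape[0] / s)
        if w < win_w or h < win_h:
            break
        img = cv2.resize(image, (w, h)).astype(np.float64)
        for y0 in range(0, h - win_h + 1, stride):
            for x0 in range(0, w - win_w + 1, stride):
                patch = img[y0:y0 + win_h, x0:x0 + win_w].ravel()
                # The sign of the SVM score (the discriminant
                # sum_i alpha_i w_i k(x_i, X) - b) decides the window.
                if clf.decision_function([patch])[0] > 0:
                    detections.append((int(x0 * s), int(y0 * s),
                                       int(win_w * s), int(win_h * s)))
        s *= scale
    return detections
```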
(4) Running detection:
For the same human body, determine the lower-left and lower-right corners corner1 and corner2 of the head-shoulder, the height height of the head-shoulder, and the centers of the hands, hand1 and hand2. Distance() is a function computing the Euclidean distance between two points. Thr1 is a manually adjustable threshold.
When Distance(hand1, corner1)/height > Thr1,
or Distance(hand2, corner2)/height > Thr1,
or Distance(hand1, corner2)/height > Thr1,
or Distance(hand2, corner1)/height > Thr1,
it is determined that running behavior occurs.
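The ratio test itself is a few lines. In the sketch below, the head-shoulder detection is assumed to be reported as an axis-aligned box (x, y, w, h) and the hands as center points; the box-to-corner conversion and the default thr1 are placeholders chosen here for illustration:

```python
import math

def distance(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_running(head_shoulder_box, hand1, hand2, thr1=1.0):
    """Apply the hand-to-shoulder-corner ratio test from the description.

    head_shoulder_box: (x, y, w, h) of the head-shoulder detection;
    hand1, hand2: (x, y) centers of the two hand detections;
    thr1: manually adjustable threshold.
    """
    x, y, w, h = head_shoulder_box
    corner1 = (x, y + h)          # lower-left corner of the head-shoulder box
    corner2 = (x + w, y + h)      # lower-right corner
    height = h                    # height of the head-shoulder detection
    return any(distance(hand, corner) / height > thr1
               for hand in (hand1, hand2)
               for corner in (corner1, corner2))
```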
Preferred embodiment three
This preferred embodiment provides a behavior monitoring method. Fig. 3 is a flow chart of running detection according to an embodiment of the present invention. As shown in Fig. 3, the method comprises steps S302 to S316.
Step S302: acquire an image.
Step S304: extract the moving region.
Step S306: extract the human region.
Step S308: locate the head-shoulder and the hands.
Step S310: calculate the hand-to-shoulder distance.
Step S312: determine whether the distance is greater than the threshold.
Step S314: running.
Step S316: not running.
It should be noted that the present invention is not affected by illumination changes and can quickly and accurately detect running events in video.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical discs.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of the technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (2)

1. A behavior detection method, characterized by comprising:
extracting a foreground image using Gaussian background modeling;
extracting the positions of the human shoulders and hands from the foreground image;
judging whether running behavior occurs according to the positions of the hands and the shoulders;
the method of judging whether running behavior occurs further comprising:
for one person, extracting the hand centers hand1 and hand2, the lower-left and lower-right corners corner1 and corner2 of the head-shoulder detection, and the height height1 of the head-shoulder;
when the distance from hand1 or hand2 to corner1 or corner2, divided by height1, is greater than a threshold thr1, judging that running behavior occurs;
in addition, the method further comprising:
extracting the moving human body: using the mixture-of-Gaussians background modeling method to extract the moving human region in the scene;
a single Gaussian background being modeled as f(x; μ, σ) = φ * exp(-(x - μ)² / (2σ²));
mixture-of-Gaussians background modeling:
first initializing the mixture model parameters, including:
the weight of each Gaussian model, and the mean and standard deviation of each Gaussian model;
wherein the initialization of the weights is an estimate of the prior probability of the background distribution; at initialization, the weight of the first Gaussian model is usually set relatively large, and the others correspondingly smaller, that is:
ω_k(x, y, 1) = W for k = 1, and ω_k(x, y, 1) = (1 - W) / (K - 1) for k ≠ 1;
wherein the mean of the first Gaussian model is set equal to the pixel value (or the average over the processing unit) of the first frame of the input video, that is:
μ_k(x, y, l, 1) = I(x, y, l, 1) for k = 1, and μ_k(x, y, l, 1) = 0 for k ≠ 1, 0 < k <= K;
the variance of the K Gaussian models:
σ_k²(x, y, 1) = var, k = 1, 2, ..., K;
the initial variances of all Gaussian models being equal, and the value of var being directly related to the dynamic characteristics of the video;
updating the Gaussian model parameters:
traversing each Gaussian model and evaluating the following inequality:
(I(x, y, l, f) - μ_k(x, y, l, f-1))² < c * σ_k²(x, y, f-1);
if it holds for all color components, attributing the pixel to the B-th Gaussian model; otherwise the pixel belongs to no Gaussian model, which is equivalent to the appearance of an outlier; both situations requiring a corresponding update;
update for situation 1:
situation 1 meaning the value of the current pixel matches the B-th Gaussian distribution; the pixel does not necessarily belong to the background, and it is necessary to judge whether the B-th Gaussian distribution satisfies the following condition:
Σ_{n=1}^{B} w_n(x, y, f) < Threshold;
if the condition holds, the pixel belongs to the background; otherwise it belongs to the foreground;
if the pixel belongs to the background, the B-th background distribution has output a sample value, and all distributions then require a parameter update;
the parameters of the B-th Gaussian model being updated as follows:
w_B(x, y, f) = (1 - α) * w_B(x, y, f-1) + α
μ_B(x, y, l, f) = (1 - β) * μ_B(x, y, l, f-1) + β * I(x, y, l, f)
σ_B²(x, y, f) = (1 - β) * σ_B²(x, y, f-1) + β * (I(:) - μ_B(:))ᵀ * (I(:) - μ_B(:));
the remaining Gaussian models only having their weights changed, with means and variances unchanged, that is:
w_k(x, y, f) = (1 - α) * w_k(x, y, f-1), k ≠ B;
β = α * η(I(x, y, :, f) | μ_B, σ_B);
an outlier meaning the pixel value does not match any Gaussian distribution; the pixel is regarded as a new situation appearing in the video, and the K-th Gaussian distribution is replaced by this new situation; its weight, mean, and variance are all determined following the initialization approach, namely by assigning a smaller weight and a larger variance, that is:
w_K(x, y, f) = (1 - W) / (K - 1)
μ_K(x, y, l, f) = I(x, y, l, f)
σ_K(x, y, l, f) = var;
at the same time, the point being determined to be a foreground point;
detecting the human body, head-shoulder, and hands:
carrying out human body detection first by using the support vector machine method;
training: choosing a suitable kernel function k(x_i, x_j);
minimizing ||w|| subject to ω_i(w·x_i - b) ≥ 1 - ξ_i;
storing only the non-zero α_i and the corresponding x_i;
scaling the image proportionally to different scales, scanning the image at each scale with a window of size 8*16, and then classifying the image patch under each window;
classification: for a pattern X, computing the decision function f(X) = Σ_i α_i ω_i k(x_i, X) - b from the support vectors x_i and the corresponding weights α_i, the sign of this function determining whether the region is a human body;
then detecting the head-shoulder and the hands respectively within the detected human region in the aforementioned manner.
2. A behavior detection device, characterized by comprising:
a first extraction module, configured to extract a foreground image using Gaussian background modeling;
a second extraction module, configured to extract the positions of the human shoulders and hands from the foreground image;
a judgment module, configured to judge whether running behavior occurs according to the positions of the hands and the shoulders;
the judgment module comprising:
an extraction module, configured to extract, for one person, the hand centers hand1 and hand2, the lower-left and lower-right corners corner1 and corner2 of the head-shoulder detection, and the height height1 of the head-shoulder;
a behavior determination module, configured to judge that running behavior occurs when the distance from hand1 or hand2 to corner1 or corner2, divided by height1, is greater than a threshold thr1;
wherein the moving human body is extracted by using the mixture-of-Gaussians background modeling method to extract the moving human region in the scene;
a single Gaussian background is modeled as f(x; μ, σ) = φ * exp(-(x - μ)² / (2σ²));
mixture-of-Gaussians background modeling:
first, the mixture model parameters are initialized, including:
the weight of each Gaussian model, and the mean and standard deviation of each Gaussian model;
wherein the initialization of the weights is an estimate of the prior probability of the background distribution; at initialization, the weight of the first Gaussian model is usually set relatively large, and the others correspondingly smaller, that is:
ω_k(x, y, 1) = W for k = 1, and ω_k(x, y, 1) = (1 - W) / (K - 1) for k ≠ 1;
wherein the mean of the first Gaussian model is set equal to the pixel value (or the average over the processing unit) of the first frame of the input video, that is:
μ_k(x, y, l, 1) = I(x, y, l, 1) for k = 1, and μ_k(x, y, l, 1) = 0 for k ≠ 1, 0 < k <= K;
the variance of the K Gaussian models:
σ_k²(x, y, 1) = var, k = 1, 2, ..., K;
the initial variances of all Gaussian models are equal, and the value of var is directly related to the dynamic characteristics of the video;
the Gaussian model parameters are updated:
each Gaussian model is traversed and the following inequality is evaluated:
(I(x, y, l, f) - μ_k(x, y, l, f-1))² < c * σ_k²(x, y, f-1);
if it holds for all color components, the pixel is attributed to the B-th Gaussian model; otherwise it belongs to no Gaussian model, which is equivalent to the appearance of an outlier; both situations require a corresponding update;
update for situation 1:
situation 1 means the value of the current pixel matches the B-th Gaussian distribution; the pixel does not necessarily belong to the background, and it is necessary to judge whether the B-th Gaussian distribution satisfies the following condition:
Σ_{n=1}^{B} w_n(x, y, f) < Threshold;
if the condition holds, the pixel belongs to the background; otherwise it belongs to the foreground;
if the pixel belongs to the background, the B-th background distribution has output a sample value, and all distributions then require a parameter update;
the parameters of the B-th Gaussian model are updated as follows:
w_B(x, y, f) = (1 - α) * w_B(x, y, f-1) + α
μ_B(x, y, l, f) = (1 - β) * μ_B(x, y, l, f-1) + β * I(x, y, l, f)
σ_B²(x, y, f) = (1 - β) * σ_B²(x, y, f-1) + β * (I(:) - μ_B(:))ᵀ * (I(:) - μ_B(:));
the remaining Gaussian models only have their weights changed; their means and variances remain unchanged, that is:
w_k(x, y, f) = (1 - α) * w_k(x, y, f-1), k ≠ B;
β = α * η(I(x, y, :, f) | μ_B, σ_B);
an outlier means the pixel value does not match any Gaussian distribution; the pixel is regarded as a new situation appearing in the video, and the K-th Gaussian distribution is replaced by this new situation; its weight, mean, and variance are all determined following the initialization approach, namely by assigning a smaller weight and a larger variance, that is:
w_K(x, y, f) = (1 - W) / (K - 1)
μ_K(x, y, l, f) = I(x, y, l, f)
σ_K(x, y, l, f) = var;
at the same time, the point is determined to be a foreground point;
the human body, head-shoulder, and hands are detected:
human body detection is first carried out using the support vector machine method;
training: a suitable kernel function k(x_i, x_j) is chosen;
||w|| is minimized subject to ω_i(w·x_i - b) ≥ 1 - ξ_i;
only the non-zero α_i and the corresponding x_i are stored;
the image is scaled proportionally to different scales, the image is scanned at each scale with a window of size 8*16, and the image patch under each window is then classified;
classification: for a pattern X, the decision function f(X) = Σ_i α_i ω_i k(x_i, X) - b is computed from the support vectors x_i and the corresponding weights α_i, and the sign of this function determines whether the region is a human body;
the head-shoulder and the hands are then detected respectively within the detected human region in the aforementioned manner.
CN201310021649.7A 2013-01-21 2013-01-21 Behavior detection method and device Expired - Fee Related CN103136511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310021649.7A CN103136511B (en) 2013-01-21 2013-01-21 Behavior detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310021649.7A CN103136511B (en) 2013-01-21 2013-01-21 Behavior detection method and device

Publications (2)

Publication Number Publication Date
CN103136511A CN103136511A (en) 2013-06-05
CN103136511B 2016-06-29 (granted)

Family

ID=48496319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310021649.7A Expired - Fee Related CN103136511B (en) 2013-01-21 2013-01-21 Behavioral value method and device

Country Status (1)

Country Link
CN (1) CN103136511B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866841B (en) * 2015-06-05 2018-03-09 中国人民解放军国防科学技术大学 A kind of human body target is run behavioral value method
WO2016199749A1 (en) * 2015-06-10 2016-12-15 コニカミノルタ株式会社 Image processing system, image processing device, image processing method, and image processing program
CN111461036B (en) * 2020-04-07 2022-07-05 武汉大学 Real-time pedestrian detection method using background modeling to enhance data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007028121A (en) * 2005-07-15 2007-02-01 Fujitsu Access Ltd Alarm monitor control system and alarm monitor control method
CN101355449A (en) * 2008-09-19 2009-01-28 武汉噢易科技有限公司 Intelligent analysis system and analysis method for computer user behaviors

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007028121A (en) * 2005-07-15 2007-02-01 Fujitsu Access Ltd Alarm monitor control system and alarm monitor control method
CN101355449A (en) * 2008-09-19 2009-01-28 武汉噢易科技有限公司 Intelligent analysis system and analysis method for computer user behaviors

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Gaussian-Mixture-Based Background Modeling and Updating Algorithm; Chen Xueying (陈雪莹); China Master's Theses Full-text Database, Information Science and Technology; 2012-06-15; pp. 13-28 *
Research on Human Limb Calibration Based on Video Sequences; Zhang Chuanjin (张传金); China Master's Theses Full-text Database, Information Science and Technology; 2010-07-15; pp. 16-17, 35-61 *

Also Published As

Publication number Publication date
CN103136511A (en) 2013-06-05

Similar Documents

Publication Publication Date Title
CN103065134B A fingerprint identification device and method with information
CN101404086B Target tracking method and device based on video
CN107688784A A character recognition method and storage medium based on fusing deep and shallow features
CN103473539B Gait recognition method and device
CN103400105B Method for identifying non-frontal facial expressions based on attitude normalization
CN107742099A A method for crowd density estimation and people counting based on fully convolutional networks
CN108224691A An air conditioning system control method and device
CN106650688A Eye feature detection method, device and recognition system based on a convolutional neural network
CN102270308B Facial feature location method based on an AAM (Active Appearance Model) correlated with the five sense organs
CN104318263A Real-time high-precision people stream counting method
CN106407911A Image-based eyeglass recognition method and device
CN104680144A Lip language recognition method and device based on a projection extreme learning machine
CN105224285A Eye open/closed state detection device and method
CN105046197A Multi-template pedestrian detection method based on clustering
CN103093212A Method and device for clipping facial images based on face detection and face tracking
CN106778687A Gaze point detection method based on local evaluation and global optimization
CN106846362A A target detection and tracking method and device
CN103310194A Method for detecting pedestrian head and shoulders in video based on overhead pixel gradient direction
CN103105924B Man-machine interaction method and device
CN103593672A Adaboost classifier online learning method and system
CN109460704A A fatigue detection method, system and computer equipment based on deep learning
CN103871081A Method for adaptive robust online target tracking
CN105046206A Pedestrian detection method and apparatus based on motion-associated prior information in videos
CN103020614A Human movement identification method based on spatio-temporal interest point detection
CN103136511B Behavior detection method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160629

Termination date: 20200121