CN102509070A - Video-based human face area tracking method for counting people paying close attention to advertisement - Google Patents
- Publication number
- CN102509070A (application CN201110308514XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a video-based face-region tracking method for counting the people paying close attention to an advertisement. The method comprises the following steps: detecting and locating face regions; marking the mutual-relationship (occlusion) state of the face regions; tracking faces in the different states; accumulating the dwell time of each successfully tracked face region; and outputting the count according to the statistics. By tracking each target accurately even when targets occlude one another, the invention identifies the people paying attention to the advertisement.
Description
Technical field
The invention belongs to the technical field of image-processing methods, and relates to a video-based face-region tracking method for counting the people paying attention to an advertisement.
Background art
With the development of computer and Internet technology, multimedia advertising has taken increasingly diverse forms and is played in increasingly diverse places. To measure the attention an advertisement receives, and to adjust its content and placement in time, the density of the crowd paying attention to the advertisement must be counted. An advertisement-attention analysis system based on video content analysis has a strong advantage here, because it does not interfere with the crowd being observed. In such a system, correctly tracking the face targets across the video frame sequence is the key to producing correct statistics.
Summary of the invention
The purpose of this invention is to provide a video-based face-region tracking method for counting the people paying attention to an advertisement, which identifies the number of people paying attention accurately by tracking each target even when targets occlude one another.
The technical scheme adopted by the present invention is a video-based face-region tracking method for counting the people paying attention to an advertisement, characterized by the following concrete steps:
Step 1, detection and location of face regions:
Step 1.1, a skin-color detection method is used to obtain the skin-color regions in the video picture, and each skin-color region obtained is defined as a face connected domain;
Step 1.2, frontal face detection:
Shape detection is performed on the face connected domains obtained in step 1.1. When a face connected domain is detected to contain three holes, two of which lie side by side with the third below them, the connected domain is judged to contain one frontal face region; when a face connected domain is detected to contain 3N-1 or 3N holes, where N is a positive integer and N ≥ 2, the connected domain is judged to contain N face regions;
Step 1.3, frontal face region identification:
Each frontal face region obtained in step 1.2 is identified by its minimum bounding rectangle: the upper-left corner of the k-th rectangle is [x0^(k), y0^(k)] and its lower-right corner is [x1^(k), y1^(k)], recorded as {[x0^(k), y0^(k)], [x1^(k), y1^(k)]}, k = 1, 2, ..., N;
The area of each frontal face region obtained in step 1.2 is then calculated; when the area does not fall within a preset range, the face-region target is discarded and is not kept as a continuing tracking object;
Step 2, the frontal-face detection results obtained in step 1.2 are marked with their mutual-relationship state (adhesion, i.e. whether regions are stuck together by occlusion):
When a face connected domain contains exactly one frontal face region, that face region is judged to have no adhesion with any other face region; when a face connected domain contains N face regions, where N is a positive integer and N ≥ 2, those N face regions are judged to adhere to one another;
Step 3, face tracking
Step 3.1, tracking of face regions with no adhesion
When steps 1 to 2 judge that a face region has no adhesion in both the current frame and the previous frame, its position in the current frame, {[x0^(t), y0^(t)], [x1^(t), y1^(t)]}, is compared with its position in the previous frame, {[x0^(t-1), y0^(t-1)], [x1^(t-1), y1^(t-1)]}; the face region is judged to be tracked successfully when any of the following holds: x1^(t) >= x0^(t-1) and y1^(t) >= y0^(t-1); or x1^(t) >= x1^(t-1) and y1^(t) >= y1^(t-1); or x1^(t) <= x0^(t-1) and y1^(t) <= y1^(t-1); or x0^(t) >= x1^(t-1) and y0^(t) >= y1^(t-1);
Step 3.2, tracking of multiple face regions whose adhesion state does not change
When steps 1 to 2 judge that face regions adhere in both the current frame and the previous frame, and the hole count and hole positions of the face connected domain containing those regions are unchanged between the two frames, the tracking of those faces is judged successful;
Step 3.3, tracking when face regions become adherent
When steps 1 to 2 judge that a face region has no adhesion in the previous frame but adheres in the current frame, the corner coordinates of the face connected domain it belongs to in the current frame are analysed to obtain the N face regions that the connected domain contains, where N is a positive integer and N ≥ 2; among these N face regions, the one with the largest overlapping area and the closest movement velocity is designated as the face region being tracked and its tracking is judged successful, and the other face regions among the N are defined as newly appeared face regions to be tracked;
Step 3.4, tracking when face regions lose adhesion
When steps 1 to 2 judge that a face region adheres in the previous frame but has no adhesion in the current frame, the several face regions obtained in the current frame are defined as newly appeared face regions to be tracked;
Step 4, the dwell time of each successfully tracked face region is accumulated; face-region targets whose dwell time is shorter than a threshold are excluded and not kept as continuing tracking objects.
In step 1.1, the skin-color detection method is:
The input color image is {R_{m×n}, G_{m×n}, B_{m×n}}, meaning that each of its red, green and blue color channels has size m × n. The skin-detection result is obtained from the following formulas:
Y(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j), i = 1, 2, ..., m, j = 1, 2, ..., n;
L(i,j) = R(i,j) + G(i,j) + B(i,j),
where each connected domain with Lab(i,j) = 1 is a skin-color region; Th_R1, Th_R2, Th_G1, Th_G2 and Th_y are the skin-color judgment thresholds, k is the adjustment factor, and R(i,j), G(i,j) and B(i,j) are the red, green and blue tristimulus values of the video image. The value ranges of the skin-color judgment thresholds are: 0.35 ≤ Th_R1 ≤ 0.37, 0.68 ≤ Th_R2 ≤ 0.69, 0.24 ≤ Th_G1 ≤ 0.25, 0.39 ≤ Th_G2 ≤ 0.40, 0.2 ≤ Th_y ≤ 0.3, 1.3 ≤ k ≤ 1.4.
Compared with existing tracking methods, the present invention has the following advantages:
1. Only the frontal faces of viewers facing the billboard are detected and tracked; pedestrians passing by in profile are automatically ignored.
2. A default range is provided for filtering by the effective distance at which a face counts as paying attention to the advertisement: a passer-by too far from the billboard is ignored even when facing it, because the distance to the advertisement is too great; likewise, a cleaner working right at the billboard is not counted as paying attention to its content, because the distance is too short.
3. For faces that occlude one another, each face detected in the current frame is matched to the corresponding face of the previous frame by judging the degree of overlap between its region and the previous frame's face regions.
4. The method adapts by itself to face regions that appear briefly (because of false detections, or passers-by who merely flash past facing the billboard) or disappear briefly (because of missed detections, or a face momentarily occluded by another moving object), and provides several adjustable parameters (such as the allowed duration of a brief appearance or disappearance) so that the continuity of tracking can be tuned to different statistical or tracking requirements.
Description of drawings
Fig. 1 is a schematic diagram of the positions of face regions with no adhesion in the present invention;
Fig. 2 is a schematic diagram of the positions of face regions with adhesion to each other in the present invention;
Fig. 3 is a schematic diagram of the positions of a face region with no adhesion in both the current frame and the previous frame in the present invention;
Fig. 4 is a schematic diagram of the positions of face regions with adhesion in both the current frame and the previous frame in the present invention;
Fig. 5 is a schematic diagram of the positions of face regions changing from no adhesion to adhesion in the present invention;
Fig. 6 is a schematic diagram of the positions of face regions changing from adhesion to no adhesion in the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and an embodiment.
The video-based face-region tracking method of the present invention for counting the people paying attention to an advertisement comprises the following concrete steps:
Step 1, detection and location of face regions:
Step 1.1, a skin-color detection method is used to obtain the skin-color regions in the video picture, and each skin-color region obtained is defined as a face connected domain.
The input monitor-video color image is {R_{m×n}, G_{m×n}, B_{m×n}}, meaning that each of its red, green and blue color channels has size m × n. The skin-color detection method is:
The skin-detection result is obtained from the following formulas:
Y(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j), i = 1, 2, ..., m, j = 1, 2, ..., n,
L(i,j) = R(i,j) + G(i,j) + B(i,j),
where each connected domain with Lab(i,j) = 1 is a skin-color region; Th_R1, Th_R2, Th_G1, Th_G2 and Th_y are the skin-color judgment thresholds and k is the adjustment factor.
Because this embodiment is mainly intended for domestic use, where the population is predominantly of East Asian skin tone, the value ranges of the skin-color judgment thresholds are: 0.35 ≤ Th_R1 ≤ 0.37, 0.68 ≤ Th_R2 ≤ 0.69, 0.24 ≤ Th_G1 ≤ 0.25, 0.39 ≤ Th_G2 ≤ 0.40, 0.2 ≤ Th_y ≤ 0.3, 1.3 ≤ k ≤ 1.4.
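As an illustration, the step-1.1 skin-color test can be sketched as follows. The luma formula Y and the channel sum L are taken from the text; however, the exact rule that combines the thresholds into the binary map Lab(i, j) is not recoverable from the source, so the normalized-rgb comparison below (and the single threshold values picked from the published ranges) is an assumption, not the patented formula; the adjustment factor k is left unused for the same reason.

```python
import numpy as np

# Single values picked from the published ranges (an assumption).
TH_R1, TH_R2 = 0.35, 0.68
TH_G1, TH_G2 = 0.24, 0.39
TH_Y = 0.2

def skin_mask(rgb):
    """rgb: float array of shape (m, n, 3), channel values in [0, 1].
    Returns a boolean (m, n) map; True marks an assumed skin-color pixel."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B      # luma, as in the patent
    L = R + G + B                              # channel sum, as in the patent
    # Assumed combination rule: threshold the normalized red/green components
    # and require a minimum luma. Guard against division by zero where L == 0.
    r = np.divide(R, L, out=np.zeros_like(R), where=L > 0)
    g = np.divide(G, L, out=np.zeros_like(G), where=L > 0)
    return (TH_R1 <= r) & (r <= TH_R2) & (TH_G1 <= g) & (g <= TH_G2) & (Y > TH_Y)
```

Connected components of the resulting mask would then serve as the face connected domains of step 1.1.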
Step 1.2, frontal face detection:
A person paying attention to the advertisement is defined as a person looking squarely at the advertisement; therefore, each detected frontal face region is presumed to belong to a person paying attention to the advertisement. Shape detection is performed on the face connected domains obtained in step 1.1. When a face connected domain is detected to contain three holes, two of which lie side by side with the third below them, the connected domain is judged to contain one frontal face region; when a face connected domain is detected to contain 3N-1 or 3N holes, where N is a positive integer and N ≥ 2, the connected domain is judged to contain N face regions.
According to the positional relationship between the watching targets and the advertisement, a connected domain containing five holes (allowing one eye of one person to be occluded) or six holes is confirmed to be two frontal faces occluding each other. More generally, a connected domain containing 3N-1 holes (in which case one eye of one person is occluded by another person) or 3N holes indicates N faces adhering to and occluding one another.
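The hole-count rule above can be sketched as a small lookup function. The geometric check (two holes side by side above the third) is omitted here for brevity; only the counting part of the rule is shown.

```python
def faces_in_component(num_holes):
    """Map the hole count of a face connected domain to the number of frontal
    faces it contains, per the 3 / 3N-1 / 3N rule of step 1.2.
    Hole counts the rule does not cover return 0 (no decision)."""
    if num_holes == 3:
        return 1                     # two eyes side by side, mouth below
    if num_holes >= 5 and num_holes % 3 in (0, 2):
        return (num_holes + 2) // 3  # 3N or 3N-1 holes -> N faces
    return 0
```

For example, five or six holes yield two faces (one eye possibly occluded), eight or nine holes yield three.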
Step 1.3, frontal face region identification:
Each frontal face region obtained in step 1.2 is identified by its minimum bounding rectangle: the upper-left corner of the k-th rectangle is [x0^(k), y0^(k)] and its lower-right corner is [x1^(k), y1^(k)], recorded as {[x0^(k), y0^(k)], [x1^(k), y1^(k)]}, k = 1, 2, ..., N.
The area of each frontal face region obtained in step 1.2 is then calculated; when the area does not fall within a preset range, the face-region target is discarded and is not kept as a continuing tracking object. The preset range is set in the system according to the actual conditions (such as the effective distance at which a face counts as paying attention to the advertisement). If the face area is larger than the preset range, the target is, for example, a cleaner wiping the advertisement; if the face area is smaller than the preset range, the target is outside the range within which attention to the advertisement is counted. In both cases the person is not a valid tracking object and is not tracked.
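The bounding rectangle and the area filter of step 1.3 can be sketched with plain tuples. The area limits below are placeholder values, since the patent leaves the preset range to be configured per installation.

```python
def bounding_box(pixels):
    """pixels: iterable of (x, y) coordinates of one frontal face region.
    Returns ((x0, y0), (x1, y1)): the upper-left and lower-right corners
    of the minimum axis-aligned bounding rectangle."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return (min(xs), min(ys)), (max(xs), max(ys))

def keep_for_tracking(box, area_min=400, area_max=40000):
    """Discard faces whose area falls outside the preset range: too large
    (someone right at the billboard) or too small (too far away to count).
    area_min / area_max are placeholder parameters."""
    (x0, y0), (x1, y1) = box
    area = (x1 - x0 + 1) * (y1 - y0 + 1)
    return area_min <= area <= area_max
```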
Step 2, the frontal-face detection results obtained in step 1.2 are marked with their mutual-relationship (adhesion) state:
When a face connected domain contains exactly one frontal face region, that face region is judged to have no adhesion with any other face region. The two face regions shown in Fig. 1 have no adhesion: the target A and target B they represent do not adhere to each other.
When a face connected domain contains N face regions, where N is a positive integer and N ≥ 2, those N face regions are judged to adhere to one another. The two face regions shown in Fig. 2 adhere: occlusion exists between the target A and target B they represent, and the one corner coordinate each of the two face regions loses to the occlusion can be inferred from its other three corner coordinates.
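The remark that a lost corner can be inferred from the other three has a neat realization for axis-aligned rectangles with integer pixel coordinates: each x value and each y value occurs exactly twice among the four corners, so XOR recovers the missing pair. This is an illustrative sketch of that inference, not a formula given in the patent.

```python
def missing_corner(c1, c2, c3):
    """Recover the occluded fourth corner of an axis-aligned face rectangle
    from its three visible corners (integer coordinates). Each x and each y
    appears exactly twice among the four corners, so XOR of the three known
    values yields the value that appears only once so far."""
    x = c1[0] ^ c2[0] ^ c3[0]
    y = c1[1] ^ c2[1] ^ c3[1]
    return (x, y)
```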
Step 3, face tracking
Step 3.1, tracking of face regions with no adhesion
As shown in Fig. 3, when steps 1 to 2 judge that a face connected domain has no adhesion in both the current frame (frame t) and the previous frame (frame t-1), the position of the face region in the current frame, {[x0^(t), y0^(t)], [x1^(t), y1^(t)]}, is compared with its position in the previous frame, {[x0^(t-1), y0^(t-1)], [x1^(t-1), y1^(t-1)]}. A_{t-1} denotes the face-region position in the previous frame, and A_t one of the four positions the face region may occupy in the current frame. The face region is judged to be tracked successfully when any of the following holds: x1^(t) >= x0^(t-1) and y1^(t) >= y0^(t-1); or x1^(t) >= x1^(t-1) and y1^(t) >= y1^(t-1); or x1^(t) <= x0^(t-1) and y1^(t) <= y1^(t-1); or x0^(t) >= x1^(t-1) and y0^(t) >= y1^(t-1).
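Transcribed literally, the four corner comparisons of step 3.1 amount to the following predicate (the variable names are ours; the four inequality pairs are exactly those listed in the text):

```python
def tracked_no_adhesion(cur, prev):
    """cur / prev: ((x0, y0), (x1, y1)) bounding boxes of the same candidate
    face region in the current frame (t) and the previous frame (t-1).
    Returns True when any one of the patent's four corner-comparison
    conditions holds, i.e. the track is declared successful."""
    (x0t, y0t), (x1t, y1t) = cur
    (x0p, y0p), (x1p, y1p) = prev
    return ((x1t >= x0p and y1t >= y0p) or
            (x1t >= x1p and y1t >= y1p) or
            (x1t <= x0p and y1t <= y1p) or
            (x0t >= x1p and y0t >= y1p))
```

In effect the disjunction accepts a current-frame box that has moved moderately in any of the four diagonal directions relative to the previous-frame box.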
Step 3.2, tracking of multiple face regions whose adhesion state does not change
When steps 1 to 2 judge that face regions adhere in both the current frame and the previous frame, and the hole count and hole positions of the face connected domain containing those regions are unchanged between the two frames, the tracking of those faces is judged successful.
As shown in Fig. 4, A_{t-1} and A_t denote the positions of one face region in the previous frame (frame t-1) and the current frame (frame t), and B_{t-1} and B_t denote the positions of another face region in the previous frame and the current frame. A_{t-1} and B_{t-1} stick together and belong to the same face connected domain in the previous frame, and neither the hole count nor the hole positions of this connected domain change in the current frame. The two targets contained in the connected domain are therefore judged to be tracked successfully.
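The unchanged-group check of step 3.2 can be sketched as a comparison of hole counts and hole positions. The small position tolerance is our addition (the patent says "unchanged"; tol=0 reproduces that literally).

```python
def tracked_adhesion_unchanged(holes_cur, holes_prev, tol=0):
    """holes_cur / holes_prev: lists of (x, y) hole positions of the same
    face connected domain in the current and previous frames. The group
    keeps its identity when the hole count and hole positions match;
    tol allows a small pixel jitter (an assumed extension)."""
    if len(holes_cur) != len(holes_prev):
        return False
    return all(abs(cx - px) <= tol and abs(cy - py) <= tol
               for (cx, cy), (px, py) in zip(sorted(holes_cur), sorted(holes_prev)))
```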
Step 3.3, tracking when face regions become adherent
When steps 1 to 2 judge that a face region has no adhesion in the previous frame but adheres in the current frame, the corner coordinates of the face connected domain it belongs to in the current frame are analysed to obtain the N face regions that the connected domain contains, where N is a positive integer and N ≥ 2. Among these N face regions, the one that best meets the criterion of largest overlapping area and closest movement velocity is designated as the face region being tracked and its tracking is judged successful; the other face regions among the N are then defined as newly appeared face regions to be tracked.
As shown in Fig. 5, A_{t-1} and A_t denote the positions of the same face region in the previous frame (frame t-1) and the current frame (frame t), and B_t denotes the position of another face region in the current frame. In the previous frame, A_{t-1} has no adhesion with any other face region; in the current frame, A_t and B_t adhere and belong to the same face connected domain. Within this connected domain, the region whose overlapping area with the previous-frame A_{t-1} is larger and whose movement-velocity direction is closer is taken as the original tracking target, and the other is labelled a newly appeared face region to be tracked. Likewise, when several targets occlude and adhere to one another, the closest velocity and the largest overlapping area are used as the criterion.
The movement velocity of a target is computed as the velocity of the upper-left corner coordinate of its region.
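The overlap half of the step-3.3 criterion can be sketched as a small assignment routine; the velocity tie-break the patent also mentions is omitted here for brevity, so this is a partial sketch rather than the full criterion.

```python
def match_prev_face(prev_box, candidates):
    """prev_box: ((x0, y0), (x1, y1)) of the face in the previous frame.
    candidates: list of N candidate boxes inside the current-frame connected
    domain. Returns the index of the candidate with the largest overlapping
    area with prev_box; the others would become new regions to be tracked."""
    def overlap(a, b):
        (ax0, ay0), (ax1, ay1) = a
        (bx0, by0), (bx1, by1) = b
        w = min(ax1, bx1) - max(ax0, bx0)   # horizontal intersection extent
        h = min(ay1, by1) - max(ay0, by0)   # vertical intersection extent
        return max(w, 0) * max(h, 0)        # zero when the boxes are disjoint
    return max(range(len(candidates)), key=lambda i: overlap(prev_box, candidates[i]))
```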
Step 3.4, tracking when face regions lose adhesion
When steps 1 to 2 judge that a face region adheres in the previous frame but has no adhesion in the current frame, the several face regions obtained in the current frame are defined as newly appeared face regions to be tracked.
As shown in Fig. 6, A_{t-1} and A_t denote the positions of face region A in the previous frame (frame t-1) and the current frame (frame t), and B_{t-1} and B_t denote the positions of another face region B in the previous frame and the current frame. In the current frame, using the closer movement-velocity direction as the criterion, the two separated targets are judged to be newly appeared face regions to be tracked: target A and target B.
Step 4, the dwell time of each successfully tracked face region is accumulated; face-region targets whose dwell time is shorter than a threshold are excluded and not kept as continuing tracking objects.
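The dwell-time statistic of step 4 reduces to counting tracked frames per face and thresholding. A minimal sketch, assuming a known frame rate; both the frame rate and the dwell threshold are placeholder parameters, since the patent leaves the threshold to the operator.

```python
def attentive_ids(track_frames, fps=25, min_seconds=2.0):
    """track_frames: {track_id: number of frames the face was successfully
    tracked}. Returns the ids whose dwell time (frames / fps) reaches the
    threshold; shorter-lived tracks are excluded from the attention count."""
    return [tid for tid, n in track_frames.items() if n / fps >= min_seconds]
```

The length of the returned list is then the number of people counted as paying attention to the advertisement.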
Claims (3)
1. A video-based face-region tracking method for counting the people paying attention to an advertisement, characterized by the following concrete steps:
Step 1, detection and location of face regions:
Step 1.1, a skin-color detection method is used to obtain the skin-color regions in the video picture, and each skin-color region obtained is defined as a face connected domain;
Step 1.2, frontal face detection:
Shape detection is performed on the face connected domains obtained in step 1.1. When a face connected domain is detected to contain three holes, two of which lie side by side with the third below them, the connected domain is judged to contain one frontal face region; when a face connected domain is detected to contain 3N-1 or 3N holes, where N is a positive integer and N ≥ 2, the connected domain is judged to contain N face regions;
Step 1.3, frontal face region identification:
Each frontal face region obtained in step 1.2 is identified by its minimum bounding rectangle: the upper-left corner of the k-th rectangle is [x0^(k), y0^(k)] and its lower-right corner is [x1^(k), y1^(k)], recorded as {[x0^(k), y0^(k)], [x1^(k), y1^(k)]}, k = 1, 2, ..., N;
The area of each frontal face region obtained in step 1.2 is then calculated; when the area does not fall within a preset range, the face-region target is discarded and is not kept as a continuing tracking object;
Step 2, the frontal-face detection results obtained in step 1.2 are marked with their mutual-relationship (adhesion) state:
When a face connected domain contains exactly one frontal face region, that face region is judged to have no adhesion with any other face region; when a face connected domain contains N face regions, where N is a positive integer and N ≥ 2, those N face regions are judged to adhere to one another;
Step 3, face tracking
Step 3.1, tracking of face regions with no adhesion
When steps 1 to 2 judge that a face region has no adhesion in both the current frame and the previous frame, its position in the current frame, {[x0^(t), y0^(t)], [x1^(t), y1^(t)]}, is compared with its position in the previous frame, {[x0^(t-1), y0^(t-1)], [x1^(t-1), y1^(t-1)]}; the face region is judged to be tracked successfully when any of the following holds: x1^(t) >= x0^(t-1) and y1^(t) >= y0^(t-1); or x1^(t) >= x1^(t-1) and y1^(t) >= y1^(t-1); or x1^(t) <= x0^(t-1) and y1^(t) <= y1^(t-1); or x0^(t) >= x1^(t-1) and y0^(t) >= y1^(t-1);
Step 3.2, tracking of multiple face regions whose adhesion state does not change
When steps 1 to 2 judge that face regions adhere in both the current frame and the previous frame, and the hole count and hole positions of the face connected domain containing those regions are unchanged between the two frames, the tracking of those faces is judged successful;
Step 3.3, tracking when face regions become adherent
When steps 1 to 2 judge that a face region has no adhesion in the previous frame but adheres in the current frame, the corner coordinates of the face connected domain it belongs to in the current frame are analysed to obtain the N face regions that the connected domain contains, where N is a positive integer and N ≥ 2; among these N face regions, the one with the largest overlapping area and the closest movement velocity is designated as the face region being tracked and its tracking is judged successful, and the other face regions among the N are defined as newly appeared face regions to be tracked;
Step 3.4, tracking when face regions lose adhesion
When steps 1 to 2 judge that a face region adheres in the previous frame but has no adhesion in the current frame, the several face regions obtained in the current frame are defined as newly appeared face regions to be tracked;
Step 4, the dwell time of each successfully tracked face region is accumulated; face-region targets whose dwell time is shorter than a threshold are excluded and not kept as continuing tracking objects.
2. The video-based face-region tracking method for counting the people paying attention to an advertisement according to claim 1, characterized in that the skin-color detection method in step 1.1 is:
The input color image is {R_{m×n}, G_{m×n}, B_{m×n}}, meaning that each of its red, green and blue color channels has size m × n. The skin-detection result is obtained from the following formulas:
Y(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j), i = 1, 2, ..., m, j = 1, 2, ..., n;
L(i,j) = R(i,j) + G(i,j) + B(i,j),
where each connected domain with Lab(i,j) = 1 is a skin-color region; Th_R1, Th_R2, Th_G1, Th_G2 and Th_y are the skin-color judgment thresholds, k is the adjustment factor, and R(i,j), G(i,j) and B(i,j) are the red, green and blue tristimulus values of the video image.
3. The video-based face-region tracking method for counting the people paying attention to an advertisement according to claim 2, characterized in that the value ranges of the skin-color judgment thresholds are: 0.35 ≤ Th_R1 ≤ 0.37, 0.68 ≤ Th_R2 ≤ 0.69, 0.24 ≤ Th_G1 ≤ 0.25, 0.39 ≤ Th_G2 ≤ 0.40, 0.2 ≤ Th_y ≤ 0.3, 1.3 ≤ k ≤ 1.4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110308514XA CN102509070A (en) | 2011-10-12 | 2011-10-12 | Video-based human face area tracking method for counting people paying close attention to advertisement |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102509070A (en) | 2012-06-20 |
Family
ID=46221151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110308514XA (status: Pending) | Video-based human face area tracking method for counting people paying close attention to advertisement | 2011-10-12 | 2011-10-12 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102509070A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103077380A (en) * | 2013-01-07 | 2013-05-01 | 信帧电子技术(北京)有限公司 | Method and device for carrying out statistics on number of people on basis of video |
CN104317860A (en) * | 2014-10-16 | 2015-01-28 | 中航华东光电(上海)有限公司 | Evaluation device of stereoscopic advertisement player and evaluation method of evaluation device |
CN104834896A (en) * | 2015-04-03 | 2015-08-12 | 惠州Tcl移动通信有限公司 | Method and terminal for information acquisition |
CN109034863A (en) * | 2018-06-08 | 2018-12-18 | 浙江新再灵科技股份有限公司 | The method and apparatus for launching advertising expenditure are determined based on vertical ladder demographics |
CN109064489A (en) * | 2018-07-17 | 2018-12-21 | 北京新唐思创教育科技有限公司 | Method, apparatus, equipment and medium for face tracking |
CN109558812A (en) * | 2018-11-13 | 2019-04-02 | 广州铁路职业技术学院(广州铁路机械学校) | The extracting method and device of facial image, experience system and storage medium |
CN110309710A (en) * | 2019-05-20 | 2019-10-08 | 特斯联(北京)科技有限公司 | Content based on recognition of face pays close attention to big data processing method, apparatus and system |
CN110351353A (en) * | 2019-07-03 | 2019-10-18 | 店掂智能科技(中山)有限公司 | Stream of people's testing and analysis system with advertising function |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1509082A (en) * | 2002-12-14 | 2004-06-30 | 三星电子株式会社 | Apparatus and method for reproducing flesh colour in video frequency signals |
CN102129690A (en) * | 2011-03-21 | 2011-07-20 | 西安理工大学 | Tracking method of human body moving object with environmental disturbance resistance |
Non-Patent Citations (1)
Title |
---|
李欢 (Li Huan), "Face Detection Based on an Adaptive Skin-Color Model and Geometric Features", China Master's Theses Full-text Database, 9 October 2011 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2012-06-20 | C06, PB01 | Publication | Application publication date: 20120620 |
| C10, SE01 | Entry into substantive examination | Entry into force of request for substantive examination |
| C12, RJ01 | Rejection | Rejection of invention patent application after publication |