CN105550663A - Cinema attendance statistical method and system - Google Patents

Cinema attendance statistical method and system

Info

Publication number
CN105550663A
CN105550663A (application CN201610009132.XA)
Authority
CN
China
Prior art keywords
image
seat
time cycle
target image
seating area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610009132.XA
Other languages
Chinese (zh)
Inventor
李亚鹏
柴智
肖军波
翟佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Environmental Features
Original Assignee
Beijing Institute of Environmental Features
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Environmental Features filed Critical Beijing Institute of Environmental Features
Priority to CN201610009132.XA priority Critical patent/CN105550663A/en
Publication of CN105550663A publication Critical patent/CN105550663A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30242: Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cinema attendance statistical method and system. The method comprises the following steps: collecting a background image; extracting the edges of every seat region in the background image to generate a seat grid; obtaining monitoring images over at least one time period and computing the difference image between each monitoring image of a period and the background image; accumulating all difference images within the period to generate that period's target image; overlaying the seat grid on the target image to obtain the target image's seat regions; computing the area of each seat region in the target image; deriving, for each seat region, the occupancy of each seat during the period from the region's pixel distribution and area; and computing the attendance rate from the occupancy information. The method and system compute the attendance rate of a cinema accurately and thus provide reference data for obtaining true box-office figures.

Description

Cinema attendance statistical method and system
Technical field
The present invention relates to the field of image processing, and in particular to a cinema attendance statistical method and system.
Background art
The rise of e-commerce has enriched the channels for film-ticket sales, but it has also made box-office statistics harder to verify. To ensure the correctness of box-office data, regulators have had to send supervisors to cinemas for on-site confirmation, but given the huge number of cinemas this approach falls far short. Box-office data could instead be obtained by directly accessing each cinema's ticketing system, but that would significantly increase the complexity of the authority's communication and software systems, and it would still be difficult to detect falsified figures reported by a cinema.
A cinema attendance statistical method and system are therefore urgently needed that derive box-office information from the attendance rate, so as to solve the above problems of the prior art.
Summary of the invention
The present invention provides a cinema attendance statistical method and system that use the video surveillance system installed in the cinema auditorium to obtain clear monitoring images in the dim auditorium and, by processing and analyzing those images, compute the attendance rate of the cinema accurately, providing reference data for obtaining true box-office figures.
One aspect of the present invention provides a cinema attendance statistical method comprising: S1. collecting a background image, extracting the edges of each seat region in the background image, and generating a seat grid; S2. obtaining monitoring images over at least one time period, and computing, for any one period, the difference image between each monitoring frame and the background image; S3. accumulating all difference images of the period to generate the period's target image, overlaying the seat grid on the target image to obtain the target image's seat regions, and computing the area of each seat region; S4. deriving, for each seat region of the target image, the occupancy of each seat during the period from the region's pixel distribution and area, and computing the attendance rate from the occupancy information.
Preferably, collecting the background image specifically comprises: capturing m frames with no audience present, and extracting the background image from them according to formula 1;

    FB(i, j) = (1/m) · Σ_{p=1}^{m} FOL_p(i, j)        (formula 1)

where m > 1 and m ∈ N; FB(i, j) is the pixel value at point (i, j) of the background image; FOL_p(i, j) is the pixel value at point (i, j) of the p-th audience-free frame; p = 1, 2, 3, …, m; and i and j are positive integers.
Preferably, step S2 specifically comprises: obtaining monitoring images over at least one time period, and computing, according to formula 2, the difference image between each of the n monitoring frames of any one period and the background image;

    BFD_k(i, j) = |F_k(i, j) − FB(i, j)|        (formula 2)

where n > 1 and n ∈ N; F_k(i, j) is the pixel value at point (i, j) of the k-th monitoring frame of the period; BFD_k(i, j) is the pixel value at point (i, j) of the k-th difference image of the period; and k = 1, 2, 3, …, n.
Preferably, step S2 further comprises: initializing an accumulation matrix as a zero matrix of the same shape as the pixel matrix of the monitoring images. Accumulating all difference images of the period to generate the period's target image then specifically comprises:
S31. determining the segmentation threshold of each difference image, and segmenting that image with its threshold according to formula 3 to generate the corresponding segmentation matrix;

    BFS_k(i, j) = 1   if BFD_k(i, j) ≥ Th_k
    BFS_k(i, j) = −1  if BFD_k(i, j) < Th_k        (formula 3)
S32. computing the accumulated image matrix from the initial accumulation matrix and the segmentation matrix of each difference image according to formula 4;

    FSM_k(i, j) = min( max( FSM_{k−1}(i, j) + BFS_k(i, j), 0 ), 255 )        (formula 4)
S33. binarizing the accumulated image matrix with a preset binarization threshold according to formula 5 to generate the period's target image;

    FSE(i, j) = 1  if FSM_n(i, j) ≥ th
    FSE(i, j) = 0  if FSM_n(i, j) < th        (formula 5)
where Th_k is the segmentation threshold of the k-th difference image;
BFS_k(i, j) is the pixel value at point (i, j) of the k-th segmentation matrix;
max is the maximum function and min is the minimum function;
FSM_{k−1}(i, j) is the pixel value at point (i, j) of the accumulation matrix corresponding to the (k−1)-th segmentation matrix;
for k = 1, FSM_{k−1}(i, j) = FSM_0(i, j), the pixel value at point (i, j) of the initial accumulation matrix;
FSM_k(i, j) is the pixel value at point (i, j) of the accumulation matrix corresponding to the k-th segmentation matrix;
for k = n, FSM_k(i, j) = FSM_n(i, j), the pixel value at point (i, j) of the accumulated image matrix;
th is the preset binarization threshold;
and FSE(i, j) is the pixel value at point (i, j) of the period's target image.
Preferably, determining the segmentation threshold of each difference image specifically comprises: computing the arithmetic mean and standard deviation of all pixel values of the difference image, and applying the maximum between-class variance criterion to the pixels whose values exceed the sum of that mean and standard deviation to obtain the threshold. Overlaying the seat grid on the target image to obtain the target image's seat regions specifically comprises: matching corresponding anchor points on the seat grid and on the target image, and overlaying the seat grid on the target image accordingly.
Preferably, step S33 further comprises, after binarizing the accumulated image matrix and before generating the period's target image: removing noise from the binarized matrix with a first square template, and filling holes with a second square template.
Preferably, deriving the occupancy of each seat during the period from the pixel distribution and area of each seat region of the target image specifically comprises:
computing the bright-spot area within each seat region of the period's target image, and marking a seat as occupied during the period when the bright-spot area of its region exceeds an area threshold, where a bright spot is a pixel of value 1 and the area threshold is the product of the seat-region area and a preset fixed coefficient.
Computing the attendance rate from the occupancy information specifically comprises: if the monitoring images obtained cover a single time period, computing the attendance rate from the number of seats marked occupied in the period's target image and the total number of seats according to formula 6;

    R = W_1 / W        (formula 6)

where R is the attendance rate, W_1 is the number of seats marked occupied in the period's target image, and W is the total number of seats.
Preferably, computing the attendance rate from the occupancy information further comprises: if the monitoring images obtained cover time periods T_1, T_2, …, T_a respectively, obtaining the attendance rate according to steps S43, S44 and S45:
S43. obtaining the occupancy of each seat in each of the periods T_1, T_2, …, T_a;
S44. counting, for each seat, the number b_x of periods in which it is marked occupied, and judging the seat occupied if b_x > 0.5a;
S45. computing the attendance rate from the number of seats judged occupied and the total number of seats according to formula 7;

    R = W_2 / W        (formula 7)

where a is the total number of time periods, a > 1 and a ∈ N; T_1 = T_2 = … = T_a; x is the seat number, x ∈ N; b_x ≤ a and b_x ∈ N; and W_2 is the number of seats judged occupied.
The present invention also provides a cinema attendance statistical system comprising an image acquisition unit, a seat-region acquisition unit, an image difference unit, an image accumulation unit and an attendance computation unit, wherein:
the image acquisition unit collects the background image and the monitoring images of at least one time period, sends the background image to the seat-region acquisition unit, and sends the monitoring images to the image difference unit;
the image difference unit receives the monitoring images, computes the difference image between each monitoring frame of any one period and the background image, and sends the difference images to the image accumulation unit;
the image accumulation unit receives the difference images, accumulates all difference images of the period to generate the period's target image, and sends the target image to the seat-region acquisition unit and the attendance computation unit;
the seat-region acquisition unit receives the background image, extracts the edges of each seat region in it to generate the seat grid, overlays the seat grid on the target image to obtain the target image's seat regions, computes the area of each seat region, and sends the seat regions and their areas to the attendance computation unit;
the attendance computation unit derives, for each seat region of the target image, the occupancy of each seat during the period from the region's pixel distribution and area, and computes the attendance rate from the occupancy information.
Preferably, the image accumulation unit comprises:
a difference-image receiving module, which receives the difference images and forwards them to the segmentation module;
a segmentation module, which segments the difference image corresponding to each segmentation threshold to generate the corresponding segmentation matrix, and sends the segmentation matrices to the accumulation module;
an accumulation module, which computes the accumulated image matrix from the segmentation matrices and sends it to the binarization module;
a binarization module, which binarizes the accumulated image matrix and sends the binarized matrix to the morphological processing module;
a morphological processing module, which removes noise from the binarized matrix with a first square template, fills holes with a second square template, generates the period's target image, and sends it to the seat-region acquisition unit and the attendance computation unit.
Without increasing the complexity of the authority's communication and software systems, the present invention uses the video surveillance system installed in the cinema auditorium to compute the attendance rate of the cinema accurately, providing reference data for obtaining true box-office figures.
Brief description of the drawings
Fig. 1 is a first flow chart of the cinema attendance statistical method of the present invention;
Fig. 2 is a schematic background image for the cinema attendance statistical method and system of the present invention;
Fig. 3 is a schematic monitoring image for the method and system;
Fig. 4 is a schematic difference image for the method and system;
Fig. 5 is a schematic target image for the method and system;
Fig. 6 is a schematic statistics chart for the method and system;
Fig. 7 is a second flow chart of the method;
Fig. 8 is a flow chart of the multi-period statistics of the method.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, preferred embodiments are described below in detail with reference to the accompanying drawings. It should be noted that the many details given in the description serve only to give the reader a thorough understanding of one or more aspects of the invention; those aspects of the invention can be realized even without these specific details.
Existing methods of obtaining box-office information require many supervisors or highly complex communication and software systems. The present invention provides a cinema attendance statistical method and system that use the video surveillance system installed in the cinema auditorium to obtain clear monitoring images in the dim auditorium and, by processing and analyzing those images, compute the attendance rate of the cinema accurately, providing reference data for obtaining true box-office figures.
One aspect of the present invention provides a cinema attendance statistical method, shown in Fig. 1, comprising:
S1. collecting a background image, extracting the edges of each seat region in the background image, and generating a seat grid. A seat region encloses the top, bottom, left and right borders of a seat; a background image is shown in Fig. 2.
In a preferred embodiment of the invention, collecting the background image specifically comprises: capturing m frames with no audience present, and extracting the background image from them according to formula 1;

    FB(i, j) = (1/m) · Σ_{p=1}^{m} FOL_p(i, j)        (formula 1)

where m > 1 and m ∈ N; FB(i, j) is the pixel value at point (i, j) of the background image; FOL_p(i, j) is the pixel value at point (i, j) of the p-th audience-free frame; p = 1, 2, 3, …, m; and i and j are positive integers.
In a preferred embodiment of the invention, m is 10.
This step performs background modeling by the multi-frame averaging of formula 1 over the m audience-free frames to obtain the background image; averaging effectively suppresses the influence of noise on the background image.
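The multi-frame averaging of formula 1 can be sketched as follows; this is a minimal illustration rather than the patented implementation, assuming the frames arrive as equally sized grayscale NumPy arrays (the function name is hypothetical):

```python
import numpy as np

def extract_background(frames):
    """Multi-frame mean of formula 1: FB = (1/m) * sum_p FOL_p."""
    # promote to float so the mean is not truncated by integer division
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# toy example: three 2x2 "audience-free" frames
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)]
background = extract_background(frames)
```

Averaging over m frames (m = 10 in the preferred embodiment) trades a slightly longer setup for a background largely free of per-frame sensor noise.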
Extracting the edges of each seat region in the background image uses existing pattern-recognition and edge-extraction techniques, which are not described further here.
S2. obtaining monitoring images over at least one time period, and computing the difference image between each monitoring frame of any one period and the background image. A monitoring image is shown in Fig. 3.
In a preferred embodiment of the invention, step S2 specifically comprises:
obtaining monitoring images over at least one time period, and computing, according to formula 2, the difference image between each of the n monitoring frames of any one period and the background image. A difference image is shown in Fig. 4;

    BFD_k(i, j) = |F_k(i, j) − FB(i, j)|        (formula 2)

where n > 1 and n ∈ N; F_k(i, j) is the pixel value at point (i, j) of the k-th monitoring frame; BFD_k(i, j) is the pixel value at point (i, j) of the k-th difference image; and k = 1, 2, 3, …, n.
In a preferred embodiment of the invention, the total number of monitoring frames is n = 500; n may be the same or different for different time periods, as circumstances require.
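Formula 2's per-frame differencing could look like this in outline (a hedged sketch; `difference_image` is an illustrative name, and real frames would come from the surveillance video):

```python
import numpy as np

def difference_image(frame, background):
    """Formula 2: BFD_k(i, j) = |F_k(i, j) - FB(i, j)|."""
    # widen to a signed type before subtracting to avoid uint8 wrap-around
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff.astype(np.uint8)

frame = np.array([[10, 200]], dtype=np.uint8)
background = np.array([[30, 50]], dtype=np.uint8)
bfd = difference_image(frame, background)
```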
In a preferred embodiment of the invention, step S2 further comprises: initializing an accumulation matrix as a zero matrix of the same shape as the pixel matrix of the monitoring images, for the accumulation of the difference images in step S3.
S3. accumulating all difference images of the period to generate the period's target image; overlaying the seat grid on the target image to obtain the target image's seat regions; and computing the area of each seat region in the target image.
In this step the difference images are segmented to obtain the moving-target regions, and the per-frame moving-region images are accumulated to obtain the target image.
In a preferred embodiment of the invention, accumulating all difference images of the period to generate the period's target image specifically comprises:
S31. determining the segmentation threshold of each difference image, and segmenting the difference image corresponding to that threshold according to formula 3 to generate the corresponding segmentation matrix;

    BFS_k(i, j) = 1   if BFD_k(i, j) ≥ Th_k
    BFS_k(i, j) = −1  if BFD_k(i, j) < Th_k        (formula 3)

where Th_k is the segmentation threshold corresponding to the k-th difference image, and BFS_k(i, j) is the pixel value at point (i, j) of the k-th segmentation matrix; the high-valued regions of a segmentation matrix are its moving regions.
In a preferred embodiment of the invention, determining the segmentation threshold of each difference image specifically comprises: computing the arithmetic mean and standard deviation of all pixel values of the difference image, and applying the maximum between-class variance criterion to the pixels whose values exceed the sum of that mean and standard deviation, which yields the segmentation threshold of the image.
Restricting the threshold search to pixels above the mean plus one standard deviation improves the accuracy of moving-region segmentation, and choosing the threshold by maximum between-class variance minimizes the probability of misclassification.
The computation of the arithmetic mean and standard deviation and the maximum between-class variance criterion are well-known techniques and are not described further here.
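As a sketch of the described thresholding, the following computes the maximum between-class variance (Otsu) threshold over only the pixels above mean + standard deviation; the histogram-based search is the standard textbook formulation, not code from the patent, and the function name is invented:

```python
import numpy as np

def segmentation_threshold(diff):
    """Otsu's criterion restricted to pixels above mean + std (as the text describes)."""
    mu, sigma = diff.mean(), diff.std()
    candidates = diff[diff > mu + sigma]
    if candidates.size == 0:
        return int(mu + sigma)
    hist = np.bincount(candidates.astype(np.int64).ravel(), minlength=256)
    total = hist.sum()
    cum = np.cumsum(hist)                      # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(hist.size))
    best_t, best_var = 0, -1.0
    for t in range(hist.size - 1):
        w0, w1 = cum[t], total - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0                  # class means
        m1 = (cum_mean[-1] - cum_mean[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy difference image: mostly background, two clusters of motion values
diff = np.array([0] * 100 + [100] * 10 + [200] * 10, dtype=np.float64)
th = segmentation_threshold(diff)
```

On the toy data the candidate set is the 100- and 200-valued pixels, and the between-class variance is maximized at the boundary of the lower cluster.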
S32. computing the accumulated image matrix from the initial accumulation matrix and the segmentation matrix of each difference image according to formula 4;

    FSM_k(i, j) = min( max( FSM_{k−1}(i, j) + BFS_k(i, j), 0 ), 255 )        (formula 4)

where max and min return the maximum and minimum of their arguments; together they clamp each intermediate result to the range [0, 255];
FSM_{k−1}(i, j) is the pixel value at point (i, j) of the accumulation matrix corresponding to the (k−1)-th segmentation matrix BFS_{k−1};
for k = 1, FSM_{k−1}(i, j) = FSM_0(i, j), the pixel value at point (i, j) of the initial accumulation matrix set up in step S2;
FSM_k(i, j) is the pixel value at point (i, j) of the accumulation matrix corresponding to the k-th segmentation matrix BFS_k;
and for k = n, FSM_k(i, j) = FSM_n(i, j), the pixel value at point (i, j) of the accumulated image matrix.
Step S32 accumulates the moving regions across the multi-frame difference images, providing the basis for accurate computation of the attendance rate.
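The clamped accumulation of formula 4 can be illustrated as follows (an assumption-laden sketch: the segmentation matrices are +1/−1 NumPy arrays, and `accumulate` is a hypothetical name):

```python
import numpy as np

def accumulate(seg_matrices, shape):
    """Formula 4: clamped running sum of the +1/-1 segmentation matrices."""
    acc = np.zeros(shape, dtype=np.int32)  # FSM_0: initial accumulation matrix
    for seg in seg_matrices:
        # add the +1/-1 matrix, then clamp each pixel to [0, 255]
        acc = np.clip(acc + seg, 0, 255)
    return acc

# toy sequence: one "no motion" frame (-1 everywhere), then two "motion" frames
segs = [np.full((2, 2), -1), np.full((2, 2), 1), np.full((2, 2), 1)]
fsm = accumulate(segs, (2, 2))
```

The lower clamp keeps a pixel that was never bright from going negative, so the initial −1 contributes nothing and the two +1 frames leave each pixel at 2.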
S33. binarizing the accumulated image matrix with a preset binarization threshold according to formula 5 to generate the period's target image, shown in Fig. 5;

    FSE(i, j) = 1  if FSM_n(i, j) ≥ th
    FSE(i, j) = 0  if FSM_n(i, j) < th        (formula 5)

where th is the preset binarization threshold, chosen as circumstances require, and FSE(i, j) is the pixel value at point (i, j) of the period's target image.
In a preferred embodiment of the invention, th = 200.
In a preferred embodiment of the invention, step S33 also performs morphological operations after binarizing the accumulated image matrix and before generating the period's target image: the binarized matrix is eroded with a first square template to remove noise, then dilated with a second square template to fill holes. This yields a more complete target image and improves the accuracy of the subsequent pixel statistics.
In a preferred embodiment of the invention, the first square template is 5 × 5 and the second square template is 7 × 7.
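Erosion and dilation with square templates can be sketched in plain NumPy as below; a real system would more likely use a library routine such as OpenCV's `erode`/`dilate`, and the 3 × 3 template in the toy example stands in for the 5 × 5 and 7 × 7 templates of the preferred embodiment:

```python
import numpy as np

def erode(img, k):
    """Binary erosion with a k x k square template: keep a pixel only if the
    whole k x k window around it is 1 (removes isolated noise pixels)."""
    pad = k // 2
    p = np.pad(img, pad)                      # zero padding outside the image
    out = np.ones_like(img)
    for di in range(k):
        for dj in range(k):
            out &= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def dilate(img, k):
    """Binary dilation with a k x k square template: set a pixel if any pixel
    in its k x k window is 1 (fills small holes and gaps)."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for di in range(k):
        for dj in range(k):
            out |= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

# toy binary image: a single isolated bright pixel
img = np.zeros((5, 5), dtype=np.int64)
img[2, 2] = 1
eroded = erode(img, 3)    # noise pixel removed
dilated = dilate(img, 3)  # pixel grown to a 3x3 block
```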
S34. overlaying the seat grid on the target image to obtain the target image's seat regions, and computing the area of each seat region.
Because the background image and the monitoring images are captured by the same video surveillance system fixed in the auditorium, and the target image is obtained from the monitoring images by differencing, segmentation and accumulation, the seat layout of the target image is identical to that of the background image. The seat grid extracted from the background image can therefore be overlaid on the target image in the same orientation to obtain the target image's seat regions. The area of each seat region in the target image is then computed, and each region and its area are recorded as the basis for obtaining the occupancy information in step S4.
In a preferred embodiment of the invention, overlaying the seat grid on the target image to obtain its seat regions specifically comprises: presetting corresponding anchor points on the seat grid and on the target image, and matching those anchor points to overlay the grid on the image; the anchor points may be the vertices of the seat grid and of the target image, or any other points.
S4. deriving, for each seat region of the target image, the occupancy of each seat during the period from the region's pixel distribution and area, and computing the attendance rate from the occupancy information. The pixel distribution refers to the area occupied by the different pixel values and their spatial arrangement.
In a preferred embodiment of the invention, step S4 specifically comprises:
S41. computing the bright-spot area within each seat region of the period's target image, and marking a seat as occupied during the period when the bright-spot area of its region exceeds an area threshold; a bright spot is a pixel of value 1, and the area threshold is the product of the seat-region area and a preset fixed coefficient.
The fixed coefficient is chosen as circumstances require; in a preferred embodiment of the invention it is 0.2.
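Step S41's area test might be sketched as follows, assuming each seat region is given as a boolean mask over the binary target image (the names and data layout are illustrative, not from the patent):

```python
import numpy as np

def seat_occupancy(target, seat_masks, coeff=0.2):
    """Mark a seat occupied when its bright-spot area (pixels of value 1)
    exceeds coeff * seat-region area; the preferred coefficient is 0.2."""
    occupied = {}
    for seat, mask in seat_masks.items():
        area = int(mask.sum())            # seat-region area in pixels
        bright = int(target[mask].sum())  # bright-spot area inside the region
        occupied[seat] = bright > coeff * area
    return occupied

# toy 4x4 target image with three bright pixels in the left seat region
target = np.zeros((4, 4), dtype=np.uint8)
target[0:3, 0] = 1
masks = {"A": np.zeros((4, 4), dtype=bool), "B": np.zeros((4, 4), dtype=bool)}
masks["A"][:, :2] = True   # left seat region, area 8
masks["B"][:, 2:] = True   # right seat region, area 8
flags = seat_occupancy(target, masks)
```

With coeff = 0.2 the threshold for an 8-pixel region is 1.6 pixels, so seat A (3 bright pixels) is marked occupied and seat B (0 bright pixels) is not.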
S42. if the monitoring images obtained cover a single time period, computing the attendance rate from the number of seats marked occupied in the period's target image and the total number of seats according to formula 6;

    R = W_1 / W        (formula 6)

where R is the attendance rate, W_1 is the number of seats marked occupied in the period's target image, and W is the total number of seats.
In a preferred embodiment of the invention, computing the attendance rate from the occupancy information further comprises:
if the monitoring images obtained cover time periods T_1, T_2, …, T_a respectively, obtaining the attendance rate according to steps S43, S44 and S45, where a is the total number of time periods, a > 1 and a ∈ N, and T_1 = T_2 = … = T_a.
S43. obtaining the occupancy marking of each seat in each of the periods T_1, T_2, …, T_a.
The length of a time period is chosen as circumstances require; in a preferred embodiment of the invention, T_1 = T_2 = … = T_a = 10 min.
Because the monitoring images of the different time periods are captured by the same video surveillance system fixed in the auditorium, the target images of the different periods have the same seat layout, so the occupancy of each seat can be obtained for each period.
S44. counting, for each seat, the number b_x of periods in which it is marked occupied, and judging the seat occupied if b_x > 0.5a, where x is the seat number, x ∈ N, b_x ≤ a and b_x ∈ N;
S45. computing the attendance rate from the number W_2 of seats judged occupied and the total number of seats according to formula 7;

    R = W_2 / W        (formula 7)

In this step the whole-show attendance rate of the cinema is obtained, which provides reference data for obtaining true box-office figures; the whole-show attendance statistics are shown in Fig. 6.
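The multi-period voting of steps S43 to S45 reduces to a few lines; this sketch assumes the per-period occupancy flags are dictionaries keyed by seat number (the seat labels and function name are invented for illustration):

```python
def attendance_rate(per_period_flags, total_seats):
    """A seat is judged occupied when it is flagged in more than half of the
    a periods (b_x > 0.5a); the rate is R = W_2 / W (formula 7)."""
    a = len(per_period_flags)
    counts = {}  # b_x: periods in which each seat was flagged occupied
    for flags in per_period_flags:
        for seat, occupied in flags.items():
            counts[seat] = counts.get(seat, 0) + int(occupied)
    w2 = sum(1 for b in counts.values() if b > 0.5 * a)
    return w2 / total_seats

# toy run: three 10-minute periods, seat 1A flagged twice, seat 2B once
periods = [
    {"1A": True,  "2B": False},
    {"1A": True,  "2B": True},
    {"1A": False, "2B": False},
]
rate = attendance_rate(periods, total_seats=10)
```

Majority voting across periods makes the result robust to a viewer briefly leaving a seat or a single period being mis-segmented.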
The present invention provides a kind of theatre occupancy rate statistical system on the other hand, comprising: image acquisition units, seating area acquiring unit, image difference unit, Image accumulate unit and attendance computing unit; Wherein,
The image acquisition unit is configured to collect the background image and the monitoring images of at least one time period, send the background image to the seating area acquisition unit, and send the monitoring images to the image difference unit;
The image difference unit is configured to receive the monitoring images, compute the difference image between each frame of monitoring image within any time period and the background image, and send the difference images to the image accumulation unit;
The image accumulation unit is configured to receive the difference images, accumulate all difference images within the time period to generate the target image of the time period, and send the target image to the seating area acquisition unit and the attendance calculation unit;
The seating area acquisition unit is configured to receive the background image, extract the edges of each seating area in the background image to generate a seat grid, overlay the seat grid on the target image to obtain the seating areas of the target image, compute the area of each seating area in the target image, and send the seating areas of the target image and their areas to the attendance calculation unit;
The attendance calculation unit is configured to obtain, for each seating area in the target image, the seating information of each seat within the time period based on the pixel distribution and area of the seating area, and to obtain the attendance rate from the seating information.
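The acquisition and differencing performed by the first two units (Formula 1, frame averaging, and Formula 2, absolute background difference) can be sketched as below. This is only an illustration under the assumption that images are flattened row-major arrays of 8-bit pixels; the function names are hypothetical.

```c
#include <stddef.h>

/* Formula 1: FB(i,j) = (1/m) * sum over p of FOL_p(i,j).
 * frames holds m audience-free frames of npix pixels each. */
void extract_background(const unsigned char **frames, size_t m,
                        size_t npix, unsigned char *fb)
{
    for (size_t i = 0; i < npix; ++i) {
        unsigned long sum = 0;
        for (size_t p = 0; p < m; ++p)
            sum += frames[p][i];
        fb[i] = (unsigned char)(sum / m);   /* integer mean per pixel */
    }
}

/* Formula 2: BFD_k(i,j) = |F_k(i,j) - FB(i,j)|. */
void background_difference(const unsigned char *frame,
                           const unsigned char *fb,
                           size_t npix, unsigned char *bfd)
{
    for (size_t i = 0; i < npix; ++i) {
        int d = (int)frame[i] - (int)fb[i];
        bfd[i] = (unsigned char)(d < 0 ? -d : d);
    }
}
```

Averaging audience-free frames suppresses sensor noise in the background model, so the per-frame difference images respond mainly to people entering the seats.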
In a preferred embodiment of the invention, the image acquisition unit is a video surveillance system fixed in the cinema auditorium; it uses a near-infrared light source for supplementary illumination together with a narrow band-pass filter, so that clear images can be obtained in the dark auditorium without being noticed by the audience.
In a preferred embodiment of the invention, the image accumulation unit comprises:
a difference image receiving module, configured to receive the difference images and send them to the segmentation module;
a segmentation module, configured to segment each difference image by its segmentation threshold, generate the segmentation matrix corresponding to that difference image, and send the segmentation matrices to the accumulation module;
an accumulation module, configured to compute the accumulated image matrix from the segmentation matrices and send it to the binarization module;
a binarization module, configured to binarize the accumulated image matrix and send the binarized matrix to the morphological processing module;
a morphological processing module, configured to remove noise from the binarized accumulated image matrix with a first square template, fill holes with a second square template, generate the target image of the time period, and send the target image to the seating area acquisition unit and the attendance calculation unit.
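A minimal sketch of what the segmentation, accumulation, and binarization modules compute (Formulas 3–5): each difference image votes +1 or −1 per pixel against its threshold Th_k, the votes accumulate clamped to [0, 255], and the final matrix is thresholded into the binary target image. The morphological cleanup with the square templates is omitted here, and the function names are hypothetical.

```c
#include <stddef.h>

/* Formulas 3 and 4 for one difference image bfd: add its +1/-1
 * segmentation to the accumulation matrix fsm, clamped to [0, 255]. */
void accumulate_difference(const unsigned char *bfd, size_t npix,
                           unsigned char thk, unsigned char *fsm)
{
    for (size_t i = 0; i < npix; ++i) {
        int vote = (bfd[i] >= thk) ? 1 : -1;   /* Formula 3 */
        int v = (int)fsm[i] + vote;            /* Formula 4, clamped */
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        fsm[i] = (unsigned char)v;
    }
}

/* Formula 5: binarize the accumulated matrix into the target image. */
void binarize(const unsigned char *fsm, size_t npix,
              unsigned char th, unsigned char *fse)
{
    for (size_t i = 0; i < npix; ++i)
        fse[i] = (fsm[i] >= th) ? 1 : 0;
}
```

Pixels that exceed their segmentation threshold in most frames of the period accumulate a high count and survive the binarization, which is how the target image suppresses transient motion while keeping seated viewers.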
Fig. 7 is the second flowchart of the cinema attendance statistical method of the present invention. As shown in the figure, the method comprises an offline part and an online part. The offline part comprises background-image extraction (background modeling) and seating-area extraction. The online part comprises image acquisition, background differencing, threshold segmentation, difference accumulation, binarization of the accumulated image, region-area statistics, and result output.
Fig. 8 is a schematic of the statistics of the cinema attendance statistical method of the present invention over multiple time periods. As shown in the figure, for each seat, the marks accumulated over the time periods T1, T2, …, Ta are put to a vote (i.e., step S44 judges whether the seat is occupied) to produce the statistical result.
In a preferred embodiment of the invention, the image-processing hardware platform is a high-performance computer, and the software is written in C and compiled and run under Microsoft Visual C++ 6.0.
The cinema attendance statistical method and system of the present invention can accurately calculate the attendance rate of a cinema, providing a basis for estimating real box-office figures.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware under the instruction of a program, and that the program may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disc.
The above is only a preferred embodiment of the present invention. It should be noted that those skilled in the art may make further improvements and modifications without departing from the principles of the invention, and such improvements and modifications shall also fall within the scope of protection of the present invention.

Claims (10)

1. A cinema attendance statistical method, characterized by comprising:
S1. collecting a background image, and extracting the edges of each seating area in the background image to generate a seat grid;
S2. obtaining monitoring images of at least one time period, and computing the difference image between each frame of monitoring image within any time period and the background image;
S3. accumulating all difference images within the time period to generate the target image of the time period, overlaying the seat grid on the target image to obtain the seating areas of the target image, and computing the area of each seating area in the target image;
S4. for each seating area in the target image, obtaining the seating information of each seat within the time period based on the pixel distribution and area of the seating area, and obtaining the attendance rate from the seating information.
2. The method of claim 1, characterized in that collecting the background image specifically comprises: collecting m frames of audience-free images, and extracting the background image from them according to Formula 1;

FB(i, j) = (1/m) · Σ_{p=1..m} FOL_p(i, j)   (Formula 1)

where m > 1 and m ∈ N; FB(i, j) is the pixel value at point (i, j) of the background image; FOL_p(i, j) is the pixel value at point (i, j) of the p-th audience-free frame; p = 1, 2, 3, …, m; and i and j are positive integers.
3. The method of claim 2, characterized in that step S2 specifically comprises: obtaining the monitoring images of at least one time period, and computing the difference image between each of the n frames of monitoring image within any time period and the background image according to Formula 2;

BFD_k(i, j) = |F_k(i, j) − FB(i, j)|   (Formula 2)

where n > 1 and n ∈ N; F_k(i, j) is the pixel value at point (i, j) of the k-th frame of monitoring image within the time period; BFD_k(i, j) is the pixel value at point (i, j) of the k-th difference image within the time period; and k = 1, 2, 3, …, n.
4. The method of claim 3, characterized in that step S2 further comprises: setting a zero matrix of the same size as the pixel matrix of the monitoring images as the initial accumulation matrix;
and in that accumulating all difference images within the time period to generate the target image of the time period specifically comprises:
S31. determining a segmentation threshold for each frame of difference image, and segmenting that difference image by its segmentation threshold according to Formula 3 to generate the segmentation matrix corresponding to that difference image;

BFS_k(i, j) = 1 if BFD_k(i, j) ≥ Th_k; BFS_k(i, j) = −1 if BFD_k(i, j) < Th_k   (Formula 3)
S32. computing the accumulated image matrix from the initial accumulation matrix and the segmentation matrix of each difference image according to Formula 4;

FSM_k(i, j) = min{ max{ FSM_{k−1}(i, j) + BFS_k(i, j), 0 }, 255 }   (Formula 4)
S33. binarizing the accumulated image matrix with a preset binarization threshold according to Formula 5 to generate the target image of the time period;

FSE(i, j) = 1 if FSM_n(i, j) ≥ th; FSE(i, j) = 0 if FSM_n(i, j) < th   (Formula 5)
where Th_k is the segmentation threshold of the k-th difference image;
BFS_k(i, j) is the pixel value at point (i, j) of the k-th segmentation matrix;
max is the maximum function and min is the minimum function;
FSM_{k−1}(i, j) is the pixel value at point (i, j) of the accumulation matrix corresponding to the (k−1)-th segmentation matrix;
when k = 1, FSM_{k−1}(i, j) = FSM_0(i, j), the pixel value at point (i, j) of the initial accumulation matrix;
FSM_k(i, j) is the pixel value at point (i, j) of the accumulation matrix corresponding to the k-th segmentation matrix;
when k = n, FSM_k(i, j) = FSM_n(i, j), the pixel value at point (i, j) of the accumulated image matrix;
th is the preset binarization threshold;
and FSE(i, j) is the pixel value at point (i, j) of the target image of the time period.
5. The method of claim 4, characterized in that determining the segmentation threshold of each frame of difference image specifically comprises: computing the arithmetic mean and the standard deviation of all pixel values in that difference image; and, over the pixel values greater than the sum of that mean and standard deviation, obtaining the segmentation threshold of that difference image by the maximum between-class variance (Otsu) criterion;
and in that overlaying the seat grid on the target image to obtain the seating areas of the target image specifically comprises:
overlaying the seat grid on the target image by matching the corresponding anchor points of the seat grid and the target image, thereby obtaining the seating areas of the target image.
6. The method of claim 5, characterized in that step S33 further comprises, after binarizing the accumulated image matrix and before generating the target image of the time period:
removing noise from the binarized accumulated image matrix with a first square template, and filling holes with a second square template.
7. The method of claim 6, characterized in that,
for each seating area in the target image, obtaining the seating information of each seat within the time period based on the pixel distribution and area of the seating area specifically comprises:
computing the bright-spot area within each seating area of the target image of the time period, and marking a seat whose bright-spot area within its seating area exceeds an area threshold as occupied in the time period, wherein a bright spot is a point with pixel value 1, and the area threshold is the product of the seating-area area and a preset fixed coefficient;
and obtaining the attendance rate from the seating information specifically comprises:
if the monitoring images of a single time period are obtained, calculating the attendance rate from the number of seats marked as occupied in the target image of the time period and the total number of seats according to Formula 6;

R = W1 / W   (Formula 6)

where R is the attendance rate, W1 is the number of seats marked as occupied in the target image of the time period, and W is the total number of seats.
8. The method of claim 7, characterized in that obtaining the attendance rate from the seating information further comprises:
if monitoring images are obtained for each of the time periods T1, T2, …, Ta, obtaining the attendance rate according to steps S43, S44, and S45:
S43. obtaining the seating information of each seat in each of the time periods T1, T2, …, Ta;
S44. for each seat, counting the number bx of time periods in which it is marked as occupied; if bx > 0.5a, judging the seat to be occupied;
S45. calculating the attendance rate from the number of seats judged to be occupied and the total number of seats according to Formula 7;

R = W2 / W   (Formula 7)

where a is the total number of time periods, a > 1 and a ∈ N; T1 = T2 = … = Ta; x is the seat index, x ∈ N; bx ≤ a and bx ∈ N; and W2 is the number of seats judged to be occupied.
9. A cinema attendance statistical system, characterized by comprising an image acquisition unit, a seating area acquisition unit, an image difference unit, an image accumulation unit, and an attendance calculation unit, wherein:
the image acquisition unit is configured to collect the background image and the monitoring images of at least one time period, send the background image to the seating area acquisition unit, and send the monitoring images to the image difference unit;
the image difference unit is configured to receive the monitoring images, compute the difference image between each frame of monitoring image within any time period and the background image, and send the difference images to the image accumulation unit;
the image accumulation unit is configured to receive the difference images, accumulate all difference images within the time period to generate the target image of the time period, and send the target image to the seating area acquisition unit and the attendance calculation unit;
the seating area acquisition unit is configured to receive the background image, extract the edges of each seating area in the background image to generate a seat grid, overlay the seat grid on the target image to obtain the seating areas of the target image, compute the area of each seating area in the target image, and send the seating areas of the target image and their areas to the attendance calculation unit;
the attendance calculation unit is configured to obtain, for each seating area in the target image, the seating information of each seat within the time period based on the pixel distribution and area of the seating area, and to obtain the attendance rate from the seating information.
10. The system of claim 9, characterized in that the image accumulation unit comprises:
a difference image receiving module, configured to receive the difference images and send them to the segmentation module;
a segmentation module, configured to segment each difference image by its segmentation threshold, generate the segmentation matrix corresponding to that difference image, and send the segmentation matrices to the accumulation module;
an accumulation module, configured to compute the accumulated image matrix from the segmentation matrices and send it to the binarization module;
a binarization module, configured to binarize the accumulated image matrix and send the binarized accumulated image matrix to the morphological processing module;
a morphological processing module, configured to remove noise from the binarized accumulated image matrix with a first square template, fill holes with a second square template, generate the target image of the time period, and send the target image to the seating area acquisition unit and the attendance calculation unit.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610009132.XA CN105550663A (en) 2016-01-07 2016-01-07 Cinema attendance statistical method and system


Publications (1)

Publication Number Publication Date
CN105550663A true CN105550663A (en) 2016-05-04

Family

ID=55829846



Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204346A (en) * 2016-06-30 2016-12-07 北京文安智能技术股份有限公司 A kind of movie seat sample automatic marking method based on video analysis, device and electronic equipment
CN106228126A (en) * 2016-07-18 2016-12-14 北京文安智能技术股份有限公司 A kind of theatre occupancy demographic method based on video analysis, device and electronic equipment
CN110059961A (en) * 2019-04-19 2019-07-26 河南应用技术职业学院 With attendance data analysis in teaching area in the period of imparting knowledge to students for the campus style of study analysis method of foundation and matched data acquisition equipment
CN111524401A (en) * 2020-07-03 2020-08-11 南京晓庄学院 Intelligent teaching classroom integrated system
CN113792674A (en) * 2021-09-17 2021-12-14 支付宝(杭州)信息技术有限公司 Method and device for determining unoccupied seat rate and electronic equipment
CN114663830A (en) * 2022-03-04 2022-06-24 山东巍然智能科技有限公司 Method for calculating number of people in multi-camera scene based on graph structure matching

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090030643A1 (en) * 2007-07-25 2009-01-29 White Timothy J Method for collecting statistics for movie theaters
CN101770642A (en) * 2008-12-26 2010-07-07 深圳先进技术研究院 Method and system for counting number of people in car
CN103679690A (en) * 2012-09-24 2014-03-26 中国航天科工集团第二研究院二O七所 Object detection method based on segmentation background learning
CN104217206A (en) * 2013-05-31 2014-12-17 上海亚视信息科技有限公司 Real-time attendance counting method based on high-definition videos
CN104658008A (en) * 2015-01-09 2015-05-27 北京环境特性研究所 Personnel gathering detection method based on video images




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2016-05-04)