CN106295790A - Method and device for counting targets with a camera - Google Patents

Method and device for counting targets with a camera

Info

Publication number
CN106295790A
CN106295790A CN201610733841.2A CN201610733841A
Authority
CN
China
Prior art keywords
number of targets
video camera
virtual
target
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610733841.2A
Other languages
Chinese (zh)
Other versions
CN106295790B (en)
Inventor
戴安娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201610733841.2A priority Critical patent/CN106295790B/en
Publication of CN106295790A publication Critical patent/CN106295790A/en
Application granted granted Critical
Publication of CN106295790B publication Critical patent/CN106295790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06MCOUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
    • G06M11/00Counting of objects distributed at random, e.g. on a surface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and device for counting targets with a camera. According to the positions of the doors in a virtual three-dimensional model, the corresponding door positions in the camera's monitoring image are obtained, and the monitoring image is divided into different regions according to those door positions. The camera's motion detection function then detects how people move between the regions, and the number of people in the statistical region is counted from that movement. This solves the problems of the prior art, in which dedicated people-counting cameras must be installed for detection, causing a considerable waste of resources, and in which the people-counting cameras require extensive manual configuration before use, making them inconvenient and costly in manpower and material resources.

Description

Method and device for counting targets with a camera
Technical field
The invention belongs to the field of video surveillance, and in particular relates to a method and device for counting targets with a camera.
Background technology
People counting provides indispensable data for management and decision-making in public places such as large stores, shopping centres, museums and railway stations. In retail, for example, foot traffic is directly proportional to sales volume and is one of the most basic indicators. Being able to count the number of people in a large public place promptly and accurately, while the place is already under video surveillance, has therefore become a management requirement for many large public places.
The prior art counts people in a large public place as follows: a dedicated people-counting camera is installed at the position of every door of the place, the direction of motion corresponding to a person entering or leaving the place is configured in each people-counting camera, and the motion detection function of the cameras then detects in real time the number of people entering and leaving. The total number of people entering the place detected by all people-counting cameras and the total number leaving it are computed, and subtracting the latter from the former gives the total number of people currently in the place.
Although the prior art can count people, a dedicated people-counting camera must be installed pointing vertically downwards at every door of the large public place, and such a camera serves no purpose other than people counting, which causes a considerable waste of resources. In addition, a people-counting camera requires extensive manual configuration before use, such as configuring the entry and exit directions of people and the region names, which is inconvenient and consumes a large amount of manpower and material resources.
Summary of the invention
The object of the invention is to provide a method and device for counting targets with a camera, to solve the problems of the prior art, in which a dedicated people-counting camera must be installed for detection, causing a considerable waste of resources, and in which the people-counting camera requires extensive manual configuration before use, making it inconvenient and costly in manpower and material resources.
To achieve this object, the technical solution of the invention is as follows:
A method for counting targets with a camera, the method comprising:
selecting the real cameras used for target counting according to the entrance/exit positions in a virtual three-dimensional model and the visible ranges of the virtual cameras;
obtaining the monitoring images of the selected real cameras, and projecting the monitoring images into the virtual three-dimensional model for video fusion calibration;
obtaining the corresponding entrance/exit positions in the monitoring image of a real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions;
detecting the movement of targets between the different regions, and counting the number of targets in the statistical region according to that movement.
Further, selecting the real cameras used for target counting according to the entrance/exit positions in the virtual three-dimensional model and the visible ranges of the virtual cameras comprises:
analysing the visible range of each virtual camera in the virtual three-dimensional model, and finding a camera combination that covers all entrances/exits in the virtual three-dimensional model with the fewest cameras;
selecting the real cameras corresponding to the virtual cameras in that combination as the cameras used for target counting.
Further, obtaining the corresponding entrance/exit positions in the monitoring image of the real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions, comprises:
converting the real-world three-dimensional coordinates of the key anchor points of the entrances/exits in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera, thereby obtaining the positions of the key anchor points of the entrances/exits in the monitoring image;
drawing the entrances/exits in the monitoring image of the real camera according to the positions of their key anchor points;
dividing the monitoring image of the real camera into a region inside the entrance/exit and a region outside the entrance/exit according to the entrance/exit positions in the monitoring image.
Further, detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement comprises:
judging the movement of a target between regions from the image formed by the target: when the image formed by the target moves from the region inside the entrance/exit to the region outside it, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target moves from the region outside the entrance/exit to the region inside it, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
Further, detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement also comprises:
judging the movement of a target between regions from whether the image formed by the target occludes part of an entrance/exit: when the image formed by the target changes from occluding no part of any entrance/exit to occluding part of an entrance/exit, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target changes from occluding part of an entrance/exit to occluding no part of any entrance/exit, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
The invention also proposes a device for counting targets with a camera, the device comprising:
a camera selection module, configured to select the real cameras used for target counting according to the entrance/exit positions in a virtual three-dimensional model and the visible ranges of the virtual cameras;
a video fusion calibration module, configured to obtain the monitoring images of the selected real cameras and project them into the virtual three-dimensional model for video fusion calibration;
a region division module, configured to obtain the corresponding entrance/exit positions in the monitoring image of a real camera according to the entrance/exit positions in the virtual three-dimensional model, and to divide the monitoring image of the real camera into different regions according to those entrance/exit positions;
a target counting module, configured to detect the movement of targets between the different regions and to count the number of targets in the statistical region according to that movement.
Further, when selecting the real cameras used for target counting according to the entrance/exit positions in the virtual three-dimensional model and the visible ranges of the virtual cameras, the camera selection module performs the following operations:
analysing the visible range of each virtual camera in the virtual three-dimensional model, and finding a camera combination that covers all entrances/exits in the virtual three-dimensional model with the fewest cameras;
selecting the real cameras corresponding to the virtual cameras in that combination as the cameras used for target counting.
Further, when obtaining the corresponding entrance/exit positions in the monitoring image of the real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions, the region division module performs the following operations:
converting the real-world three-dimensional coordinates of the key anchor points of the entrances/exits in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera, thereby obtaining the positions of the key anchor points of the entrances/exits in the monitoring image;
drawing the entrances/exits in the monitoring image of the real camera according to the positions of their key anchor points;
dividing the monitoring image of the real camera into a region inside the entrance/exit and a region outside the entrance/exit according to the entrance/exit positions in the monitoring image.
Further, when detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement, the target counting module performs the following operations:
judging the movement of a target between regions from the image formed by the target: when the image formed by the target moves from the region inside the entrance/exit to the region outside it, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target moves from the region outside the entrance/exit to the region inside it, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
Further, when detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement, the target counting module may also perform the following operations:
judging the movement of a target between regions from whether the image formed by the target occludes part of an entrance/exit: when the image formed by the target changes from occluding no part of any entrance/exit to occluding part of an entrance/exit, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target changes from occluding part of an entrance/exit to occluding no part of any entrance/exit, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
The invention proposes a method and device for counting targets with a camera. According to the virtual three-dimensional model and the visible ranges of the virtual cameras, the cameras used for people counting are selected from the CCTV cameras already monitoring the region to be counted, the monitoring image of each selected camera is divided into different regions according to the door positions in that image, the camera's motion detection function then detects how people move between the regions, and the people count for the region is completed automatically from that movement. This solves the problems of the prior art, in which a dedicated people-counting camera must be installed for detection, causing a considerable waste of resources, and in which the people-counting camera requires extensive manual configuration before use, making it inconvenient and costly in manpower and material resources.
Accompanying drawing explanation
Fig. 1 is a flowchart of the method of the invention for counting people with a camera;
Fig. 2 is a schematic diagram of the perspective projection of a door in this embodiment;
Fig. 3 is a schematic diagram of the three-dimensional coordinate conversion in this embodiment;
Fig. 4 is a structural diagram of the device of the invention for counting people with a camera.
Detailed description of the invention
The technical solution of the invention is described in further detail below with reference to the accompanying drawings and embodiments; the following embodiments do not limit the invention.
The overall scheme of the invention is as follows: provided that all entrances/exits of the region in which targets are to be counted can be covered by cameras, the correspondence between the entrances/exits of the region and the cameras is obtained and the cameras used for target counting are selected; the two-dimensional monitoring images of those cameras are then automatically segmented into regions by means of three-dimensional video fusion; and the motion detection function of the cameras captures the region changes of the target positions in the monitoring images, from which the number of targets in the statistical region is calculated automatically. In this embodiment, the description takes people as the targets and doors as the entrances/exits; it will be readily understood that the targets may also be other animals or objects, and the entrances/exits may be doors or other passages, railings and the like.
As shown in Fig. 1, the method for counting targets with a camera comprises:
Step S1: selecting the real cameras used for target counting according to the entrance/exit positions in a virtual three-dimensional model and the visible ranges of the virtual cameras.
In this embodiment, three-dimensional modelling is performed for the whole region in which people are to be counted, and a virtual three-dimensional model of the whole statistical region is established. Building the virtual three-dimensional model is the process of converting the real environment of the statistical region into point, line and surface information readable by a computer, and the workflow is similar to that of existing 3D monitoring software: after the CAD drawings of the real environment are obtained, a virtual three-dimensional model at a 1:1 scale is built with a 3D modelling package such as 3DMAX or Revit, and textures obtained by manual photography, aerial photography and similar means are then applied so that the model matches the real environment more closely. The monitored scene information and coordinate positions in the virtual three-dimensional model can then be applied directly to the real statistical region.
A virtual camera is a virtual device, used in the virtual environment and implemented by computer graphics algorithms, that can record the virtual environment and generate images; it is used here to observe the visible range of a single real camera. Because the virtual three-dimensional model of this embodiment is generated at a 1:1 scale from the real statistical region, once the parameters of a virtual camera are adjusted to match its real camera, the visible range of the virtual camera can be applied directly to the real camera.
In this embodiment, a virtual camera corresponding to each real camera is created in the virtual three-dimensional model, the visible range of each virtual camera is analysed in the model, a camera combination that covers all doors in the virtual three-dimensional model with the fewest cameras is found, and the real cameras corresponding to the virtual cameras in that combination are used as the cameras that perform people counting in this embodiment.
That is, on the premise that all doors in the monitored scene can be covered by cameras, the visible ranges of the virtual cameras in the virtual three-dimensional model are used to obtain the correspondence between doors and cameras, which may be one-to-one or one-to-many (one camera covering two or more doors), and a suitable combination of cameras is chosen so that people counting for the room is completed with the fewest cameras.
For example, suppose the statistical region in the virtual three-dimensional model has three entrances/exits: door 1, door 2 and door 3. Viewshed analysis of the virtual cameras finds that virtual camera 1 can cover door 3, virtual camera 2 can cover door 1 and door 2 simultaneously, and virtual camera 3 can cover door 2. Real camera 1 and real camera 2, corresponding to virtual camera 1 and virtual camera 2, are then chosen as the cameras that perform people counting for this statistical region.
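Purely as an illustration of this selection step (the patent does not prescribe a particular algorithm), the door/camera correspondence produced by the viewshed analysis can be fed to a greedy set-cover routine; greedy selection approximates, but does not always guarantee, the fewest cameras. The camera and door identifiers below are hypothetical.

```python
def select_cameras(coverage):
    """Greedy set cover: pick cameras until every door is covered.

    coverage maps a camera id to the set of doors visible in its viewshed,
    as obtained from the virtual-camera analysis in the 3D model.
    """
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Pick the camera that covers the most still-uncovered doors.
        cam = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        if not coverage[cam] & uncovered:
            raise ValueError("remaining doors cannot be covered by any camera")
        chosen.append(cam)
        uncovered -= coverage[cam]
    return chosen

# The example from the text: camera 1 sees door 3, camera 2 sees doors 1 and 2,
# camera 3 sees door 2 only -> cameras 2 and 1 are selected.
print(select_cameras({1: {3}, 2: {1, 2}, 3: {2}}))   # [2, 1]
```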
Step S2: obtaining the monitoring images of the selected real cameras, and projecting the monitoring images into the virtual three-dimensional model for video fusion calibration.
After the real cameras used for people counting have been chosen, this embodiment uses three-dimensional video fusion to make the installation position, viewing angle, CCD size, focal length and other parameters of each virtual camera in the virtual three-dimensional model fully consistent with the corresponding real camera.
The three-dimensional video fusion process obtains the monitoring image of the real camera and replaces the textures visible to the virtual camera in the virtual three-dimensional model with that monitoring image for display. Specifically, the monitoring image of the real camera is first obtained and projected into the virtual three-dimensional model through the corresponding virtual camera; the monitoring image is then matched and converted to the format of the three-dimensional environment; the objects visible to the virtual camera are selected and occluded objects are culled; and finally the textures of the visible objects are replaced. By adjusting the installation position, viewing angle, CCD size, focal length and other parameters of the virtual camera, this embodiment makes the monitoring image coincide with the virtual three-dimensional model.
In practice, because the imaging of a real camera has some distortion while the imaging of a virtual camera has none, the monitoring image may not coincide with the virtual three-dimensional model completely. Since the people counting in this embodiment segments the video image according to the positions of the doors, what matters is the accuracy of the door positions, so it is only necessary to adjust the parameters of the virtual camera until the doors in the camera's monitoring image coincide with the doors in the virtual three-dimensional model.
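One way to check, in code, that the doors coincide after adjusting the virtual camera is to measure the reprojection error of the door corners. The sketch below is only an illustration under stated assumptions: it presumes a project(point, params) callable (for example the perspective projection derived in step S3) and door corners picked manually from the monitoring image; none of these names comes from the patent.

```python
import math

def door_alignment_error(door_corners_3d, door_corners_2d, project, params):
    """Mean pixel distance between projected 3D door corners and their
    observed 2D positions in the monitoring image; a small error means the
    doors in the image and in the 3D model coincide well enough."""
    total = 0.0
    for p3, (u_obs, v_obs) in zip(door_corners_3d, door_corners_2d):
        u, v = project(p3, params)
        total += math.hypot(u - u_obs, v - v_obs)
    return total / len(door_corners_3d)
```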
Step S3: obtaining the corresponding entrance/exit positions in the monitoring image of a real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions.
This embodiment uses perspective projection to convert three-dimensional coordinate points in the virtual three-dimensional model into two-dimensional coordinate points in the camera's monitoring image: a bundle of projection rays emanating from the projection centre (the viewpoint of the camera) projects the three-dimensional object onto the projection plane. As shown in Fig. 2, A, B, C and D are respectively the upper-left, upper-right, lower-left and lower-right corners of a door in the virtual three-dimensional model, E, F, G and H are respectively the projections of A, B, C and D on the projection plane of the camera, i.e. their imaging points in the camera's monitoring image, P is the viewpoint of the camera, and the dashed lines are the projection rays.
The conversion of a three-dimensional coordinate point in the virtual three-dimensional model into a two-dimensional coordinate point in the camera's monitoring image is carried out in two steps. First, the three-dimensional coordinates of the point in the real-world coordinate system are converted into three-dimensional coordinates in the camera coordinate system, as follows:
As shown in Fig. 3, $OXYZ$ is the real-world coordinate system and $O'X'Y'Z'$ is the coordinate system of the camera, which is established on the projection plane of the camera with $O'Z'$ along the normal of the virtual camera's projection plane. Let the coordinates of the point $O'$ in the coordinate system $OXYZ$ be $(x_0, y_0, z_0)$, and let the unit direction vectors of the $O'X'$, $O'Y'$ and $O'Z'$ axes be $(a_{11}, a_{12}, a_{13})$, $(a_{21}, a_{22}, a_{23})$ and $(a_{31}, a_{32}, a_{33})$ respectively. The three-dimensional coordinate transformation from $OXYZ$ to $O'X'Y'Z'$ is then:
\[
\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
=
\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
\begin{pmatrix} x - x_0 \\ y - y_0 \\ z - z_0 \end{pmatrix}
\]
Since the world coordinates of the points A, B, C, D and P in the virtual three-dimensional model are known, this transformation matrix gives the three-dimensional coordinates of A in the coordinate system of the virtual camera, $(x'_A, y'_A, z'_A)$, and likewise $(x'_B, y'_B, z'_B)$ for B, $(x'_C, y'_C, z'_C)$ for C, $(x'_D, y'_D, z'_D)$ for D and $(x'_P, y'_P, z'_P)$ for P.
The three-dimensional coordinates in the camera coordinate system are then converted into two-dimensional coordinates in the camera's monitoring image, as follows:
From the equation of the spatial line through two points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$:
\[
\frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1}
\]
the equation of the spatial line through the viewpoint P of the camera and the upper-left corner A of the door is:
\[
\frac{x' - x'_A}{x'_P - x'_A} = \frac{y' - y'_A}{y'_P - y'_A} = \frac{z' - z'_A}{z'_P - z'_A}
\]
Since E is the projection of A on the projection plane of the camera, the coordinates of E satisfy this line equation; rearranging and solving for $x'_E$ and $y'_E$ gives:
\[
x'_E = x'_A + (x'_P - x'_A)\,\frac{z'_E - z'_A}{z'_P - z'_A}, \qquad
y'_E = y'_A + (y'_P - y'_A)\,\frac{z'_E - z'_A}{z'_P - z'_A}
\]
Since the coordinate system $O'X'Y'Z'$ is established on the projection plane of the camera, $z'_E = 0$, and the two-dimensional coordinates of E in the projection plane of the camera (i.e. in the camera's monitoring image) are:
\[
x'_E = x'_A - (x'_P - x'_A)\,\frac{z'_A}{z'_P - z'_A}, \qquad
y'_E = y'_A - (y'_P - y'_A)\,\frac{z'_A}{z'_P - z'_A}
\]
By this method, this embodiment converts the real-world three-dimensional coordinates of the upper-left corner A of the door in the virtual three-dimensional model into the two-dimensional coordinates of its projection point E in the camera's monitoring image, and the two-dimensional coordinates of F, G and H in the monitoring image are calculated in the same way. The segments EF, FG, GH and HE between E, F, G and H are then taken as the door frame, and the monitoring image of the camera is divided, according to the position of the door frame, into two different regions: inside the door frame and outside the door frame.
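The two-step conversion above can be sketched in Python/NumPy as a rotation into the camera frame followed by intersecting the ray towards the viewpoint P with the projection plane $z' = 0$. The rotation matrix, viewpoint and door-corner coordinates below are illustrative values only, not data from the patent.

```python
import numpy as np

def world_to_camera(p, origin, R):
    """Transform a world point into the camera coordinate system O'X'Y'Z'.
    R has the unit direction vectors of O'X', O'Y', O'Z' as its rows."""
    return R @ (np.asarray(p, float) - origin)

def project_to_image(p_cam, view_cam):
    """Intersect the line from the point towards the viewpoint P with the
    projection plane z' = 0 and return the 2D image coordinates."""
    t = -p_cam[2] / (view_cam[2] - p_cam[2])      # gives z'_E = 0 on the plane
    x = p_cam[0] + (view_cam[0] - p_cam[0]) * t
    y = p_cam[1] + (view_cam[1] - p_cam[1]) * t
    return np.array([x, y])

# Illustrative numbers: identity orientation, camera-plane origin at (0, 0, 0),
# viewpoint P one metre behind the plane, and the four door corners A, B, C, D.
R = np.eye(3)
origin = np.zeros(3)
P_world = np.array([0.0, 0.0, -1.0])
door_world = {"A": (1.0, 2.0, 4.0), "B": (2.0, 2.0, 4.0),
              "C": (1.0, 0.0, 4.0), "D": (2.0, 0.0, 4.0)}

P_cam = world_to_camera(P_world, origin, R)
door_frame = {k: project_to_image(world_to_camera(v, origin, R), P_cam)
              for k, v in door_world.items()}
print(door_frame)   # E, F, G, H: the door-frame corners in the monitoring image
```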
It should be noted that, after the video fusion calibration, the monitoring image of the real camera coincides with the imaging of the virtual camera, so the positions of the entrances/exits in the virtual three-dimensional model determine the positions of the doors in the monitoring image of the real camera.
By the above method, this embodiment divides the monitoring image of the camera into different regions.
Step S4: detecting the movement of targets between the different regions, and counting the number of targets in the statistical region according to that movement.
After the monitored area of the camera has been divided into different regions according to the position of the door frame in the monitoring image, this embodiment first detects the movement of people between the regions with the camera's motion detection function. Because the monitoring image of the camera is a two-dimensional image, different parts of a person's body may fall in different regions of the image, while a person's feet stay very close to the ground; this embodiment therefore judges the region a person is in from the position of the person's feet in the monitoring image, and judges the person's movement between regions from the movement of the feet. For example, when a person walks in from outside, the person's feet enter the region outside the door frame only after crossing the door line (the line between the lower-left corner G and the lower-right corner H of the door), while the images formed by the other parts of the body are still in the region inside the door frame; the person is then judged to be in the region outside the door frame. When a person walks from inside to outside, the images formed by the head and upper body enter the region inside the door frame first while the image formed by the feet is still in the region outside the door frame, so the person is judged to be in the region outside the door frame; only when the feet cross the door line and enter the region inside the door frame is the person judged to be in the region inside the door frame.
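As a minimal sketch of this foot-position judgment (the patent does not specify an implementation), the foot point reported by motion detection can be tested against the door-frame quadrilateral E, F, G, H drawn in the monitoring image with a standard ray-casting point-in-polygon test; the corner coordinates and the foot point below are illustrative values.

```python
def inside_door_frame(foot, quad):
    """Ray-casting point-in-polygon test: is the foot point inside the
    door-frame quadrilateral drawn in the monitoring image?"""
    x, y = foot
    inside = False
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Door-frame corners in image coordinates, listed in order around the
# quadrilateral (illustrative), and a foot point just below the door line GH.
quad = [(100, 50), (220, 50), (220, 300), (100, 300)]
print(inside_door_frame((160, 320), quad))   # False: outside the door frame
```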
It should be noted that this embodiment may also judge a person's movement between regions from whether the image formed by the person in the monitoring image occludes part of the door frame. For example, when a person walks in from outside and has not yet crossed the door line, the image formed by the person does not occlude any part of the door frame, and the person is judged to be in the region inside the door frame; once the person continues forward and crosses the door line, the image formed by the person is bound to occlude the door line or the sides of the door frame, and the person is then judged to be in the region outside the door frame. When a person walks from inside to outside, the images formed by the head and upper body enter the region inside the door frame first and occlude the door line or the sides of the door frame, so the person is judged to be in the region outside the door frame; only after the person crosses the door line and enters the region inside the door frame does the image formed by the person no longer occlude any part of the door frame, and the person is then judged to be in the region inside the door frame.
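As an illustration of the occlusion test only (the patent does not specify how occlusion is detected), the foreground mask produced by motion detection can be intersected with a rasterised mask of the door-frame segments EF, FG, GH and HE; any sizeable overlap means part of the frame is hidden. The mask shapes and the pixel threshold are assumptions.

```python
import numpy as np

def occludes_door_frame(person_mask, frame_mask, min_pixels=5):
    """True if the moving-object mask covers part of the door-frame mask.

    person_mask: boolean H x W array from the camera's motion detection.
    frame_mask:  boolean H x W array with the drawn segments EF, FG, GH, HE.
    """
    return int(np.count_nonzero(person_mask & frame_mask)) >= min_pixels
```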
This embodiment then counts people according to their movement between regions. Specifically, if it is detected that a person moves from the region inside the door frame to the region outside it, the number of people who have entered the statistical region is incremented by one; if it is detected that a person moves from the region outside the door frame to the region inside it, the number of people who have left the statistical region is incremented by one; and subtracting the number who have left from the number who have entered gives the number of people currently in the statistical region.
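A minimal sketch of this bookkeeping, assuming an upstream tracker that reports, per tracked person, whether the feet currently lie inside or outside the door-frame region of the image (the region labels and the tracker interface are assumptions, not part of the patent):

```python
class DoorCounter:
    """Counts entries and exits from per-person region transitions."""

    def __init__(self):
        self.entered = 0
        self.left = 0
        self._last_region = {}   # person id -> "inside_frame" / "outside_frame"

    def update(self, person_id, region):
        prev = self._last_region.get(person_id)
        if prev == "inside_frame" and region == "outside_frame":
            self.entered += 1        # crossed the door line into the room
        elif prev == "outside_frame" and region == "inside_frame":
            self.left += 1           # crossed the door line out of the room
        self._last_region[person_id] = region

    @property
    def current_count(self):
        return self.entered - self.left

# Example: one person walks in, another walks out.
counter = DoorCounter()
counter.update("p1", "inside_frame")
counter.update("p1", "outside_frame")
counter.update("p2", "outside_frame")
counter.update("p2", "inside_frame")
print(counter.entered, counter.left, counter.current_count)   # 1 1 0
```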
By the above method, this embodiment obtains the number of people in the statistical region counted by each camera, and the counts from all cameras are then added together to obtain the total number of people in the statistical region.
By the above method, this embodiment automatically counts the number of people in the statistical region.
Corresponding to the above method, this embodiment also proposes a device for counting targets with a camera which, as shown in Fig. 4, comprises:
a camera selection module, configured to select the real cameras used for target counting according to the entrance/exit positions in a virtual three-dimensional model and the visible ranges of the virtual cameras;
a video fusion calibration module, configured to obtain the monitoring images of the selected real cameras and project them into the virtual three-dimensional model for video fusion calibration;
a region division module, configured to obtain the corresponding entrance/exit positions in the monitoring image of a real camera according to the entrance/exit positions in the virtual three-dimensional model, and to divide the monitoring image of the real camera into different regions according to those entrance/exit positions;
a target counting module, configured to detect the movement of targets between the different regions and to count the number of targets in the statistical region according to that movement.
In this embodiment, when selecting the real cameras used for target counting according to the entrance/exit positions in the virtual three-dimensional model and the visible ranges of the virtual cameras, the camera selection module performs the following operations:
analysing the visible range of each virtual camera in the virtual three-dimensional model, and finding a camera combination that covers all entrances/exits in the virtual three-dimensional model with the fewest cameras;
selecting the real cameras corresponding to the virtual cameras in that combination as the cameras used for target counting.
In this embodiment, when obtaining the corresponding entrance/exit positions in the monitoring image of the real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions, the region division module performs the following operations:
converting the real-world three-dimensional coordinates of the key anchor points of the entrances/exits in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera, thereby obtaining the positions of the key anchor points of the entrances/exits in the monitoring image;
drawing the entrances/exits in the monitoring image of the real camera according to the positions of their key anchor points;
dividing the monitoring image of the real camera into a region inside the entrance/exit and a region outside the entrance/exit according to the entrance/exit positions in the monitoring image.
In this embodiment, when detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement, the target counting module performs the following operations:
judging the movement of a target between regions from the image formed by the target: when the image formed by the target moves from the region inside the entrance/exit to the region outside it, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target moves from the region outside the entrance/exit to the region inside it, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
In this embodiment, the target counting module may also detect the movement of targets between the different regions and count the number of targets in the statistical region according to that movement by performing the following operations:
judging the movement of a target between regions from whether the image formed by the target occludes part of an entrance/exit: when the image formed by the target changes from occluding no part of any entrance/exit to occluding part of an entrance/exit, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target changes from occluding part of an entrance/exit to occluding no part of any entrance/exit, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
The above embodiments are intended only to illustrate the technical solution of the invention and do not limit it. Those of ordinary skill in the art may make various corresponding changes and modifications according to the invention without departing from its spirit and essence, and all such changes and modifications shall fall within the protection scope of the appended claims of the invention.

Claims (10)

1. A method for counting targets with a camera, characterised in that the method comprises:
selecting the real cameras used for target counting according to the entrance/exit positions in a virtual three-dimensional model and the visible ranges of the virtual cameras;
obtaining the monitoring images of the selected real cameras, and projecting the monitoring images into the virtual three-dimensional model for video fusion calibration;
obtaining the corresponding entrance/exit positions in the monitoring image of a real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions;
detecting the movement of targets between the different regions, and counting the number of targets in the statistical region according to that movement.
2. The method for counting targets with a camera according to claim 1, characterised in that selecting the real cameras used for target counting according to the entrance/exit positions in the virtual three-dimensional model and the visible ranges of the virtual cameras comprises:
analysing the visible range of each virtual camera in the virtual three-dimensional model, and finding a camera combination that covers all entrances/exits in the virtual three-dimensional model with the fewest cameras;
selecting the real cameras corresponding to the virtual cameras in that combination as the cameras used for target counting.
3. The method for counting targets with a camera according to claim 1, characterised in that obtaining the corresponding entrance/exit positions in the monitoring image of the real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions, comprises:
converting the real-world three-dimensional coordinates of the key anchor points of the entrances/exits in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera, thereby obtaining the positions of the key anchor points of the entrances/exits in the monitoring image;
drawing the entrances/exits in the monitoring image of the real camera according to the positions of their key anchor points;
dividing the monitoring image of the real camera into a region inside the entrance/exit and a region outside the entrance/exit according to the entrance/exit positions in the monitoring image.
4. The method for counting targets with a camera according to claim 3, characterised in that detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement comprises:
judging the movement of a target between regions from the image formed by the target: when the image formed by the target moves from the region inside the entrance/exit to the region outside it, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target moves from the region outside the entrance/exit to the region inside it, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
5. The method for counting targets with a camera according to claim 3, characterised in that detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement also comprises:
judging the movement of a target between regions from whether the image formed by the target occludes part of an entrance/exit: when the image formed by the target changes from occluding no part of any entrance/exit to occluding part of an entrance/exit, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target changes from occluding part of an entrance/exit to occluding no part of any entrance/exit, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
6. A device for counting targets with a camera, characterised in that the device comprises:
a camera selection module, configured to select the real cameras used for target counting according to the entrance/exit positions in a virtual three-dimensional model and the visible ranges of the virtual cameras;
a video fusion calibration module, configured to obtain the monitoring images of the selected real cameras and project them into the virtual three-dimensional model for video fusion calibration;
a region division module, configured to obtain the corresponding entrance/exit positions in the monitoring image of a real camera according to the entrance/exit positions in the virtual three-dimensional model, and to divide the monitoring image of the real camera into different regions according to those entrance/exit positions;
a target counting module, configured to detect the movement of targets between the different regions and to count the number of targets in the statistical region according to that movement.
7. The device for counting targets with a camera according to claim 6, characterised in that, when selecting the real cameras used for target counting according to the entrance/exit positions in the virtual three-dimensional model and the visible ranges of the virtual cameras, the camera selection module performs the following operations:
analysing the visible range of each virtual camera in the virtual three-dimensional model, and finding a camera combination that covers all entrances/exits in the virtual three-dimensional model with the fewest cameras;
selecting the real cameras corresponding to the virtual cameras in that combination as the cameras used for target counting.
8. The device for counting targets with a camera according to claim 6, characterised in that, when obtaining the corresponding entrance/exit positions in the monitoring image of the real camera according to the entrance/exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different regions according to those entrance/exit positions, the region division module performs the following operations:
converting the real-world three-dimensional coordinates of the key anchor points of the entrances/exits in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera, thereby obtaining the positions of the key anchor points of the entrances/exits in the monitoring image;
drawing the entrances/exits in the monitoring image of the real camera according to the positions of their key anchor points;
dividing the monitoring image of the real camera into a region inside the entrance/exit and a region outside the entrance/exit according to the entrance/exit positions in the monitoring image.
9. The device for counting targets with a camera according to claim 8, characterised in that, when detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement, the target counting module performs the following operations:
judging the movement of a target between regions from the image formed by the target: when the image formed by the target moves from the region inside the entrance/exit to the region outside it, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target moves from the region outside the entrance/exit to the region inside it, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
10. The device for counting targets with a camera according to claim 8, characterised in that, when detecting the movement of targets between the different regions and counting the number of targets in the statistical region according to that movement, the target counting module performs the following operations:
judging the movement of a target between regions from whether the image formed by the target occludes part of an entrance/exit: when the image formed by the target changes from occluding no part of any entrance/exit to occluding part of an entrance/exit, incrementing the number of targets that have entered the statistical region by one; when the image formed by the target changes from occluding part of an entrance/exit to occluding no part of any entrance/exit, incrementing the number of targets that have left the statistical region by one;
subtracting the number of targets that have left the statistical region from the number that have entered it to obtain the current number of targets in the statistical region.
CN201610733841.2A 2016-08-25 2016-08-25 Method and device for counting target number through camera Active CN106295790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610733841.2A CN106295790B (en) 2016-08-25 2016-08-25 Method and device for counting target number through camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610733841.2A CN106295790B (en) 2016-08-25 2016-08-25 Method and device for counting target number through camera

Publications (2)

Publication Number Publication Date
CN106295790A true CN106295790A (en) 2017-01-04
CN106295790B CN106295790B (en) 2020-05-19

Family

ID=57676989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610733841.2A Active CN106295790B (en) 2016-08-25 2016-08-25 Method and device for counting target number through camera

Country Status (1)

Country Link
CN (1) CN106295790B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040200955A1 (en) * 2003-04-08 2004-10-14 Aleksandr Andzelevich Position detection of a light source
CN102036054A (en) * 2010-10-19 2011-04-27 北京硅盾安全技术有限公司 Intelligent video monitoring system based on three-dimensional virtual scene
CN103198353A (en) * 2011-08-31 2013-07-10 尼尔森(美国)有限公司 Methods and apparatus to count people in images
JP2014130397A (en) * 2012-12-28 2014-07-10 Chugoku Electric Power Co Inc:The Meter-reading handy terminal with movable camera
CN103986910A (en) * 2014-05-20 2014-08-13 中国科学院自动化研究所 Method and system for passenger flow statistics based on cameras with intelligent analysis function
CN103985182A (en) * 2014-05-30 2014-08-13 长安大学 Automatic public transport passenger flow counting method and system
CN105225230A (en) * 2015-09-11 2016-01-06 浙江宇视科技有限公司 A kind of method and device identifying foreground target object
CN105407259A (en) * 2015-11-26 2016-03-16 北京理工大学 Virtual camera shooting method
CN105635696A (en) * 2016-03-22 2016-06-01 南阳理工学院 Statistical method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110324572A (en) * 2018-03-28 2019-10-11 佳能株式会社 Monitoring system, monitoring method and non-transitory computer-readable storage media
CN110324572B (en) * 2018-03-28 2021-11-30 佳能株式会社 Monitoring system, monitoring method, and non-transitory computer-readable storage medium
CN112689131A (en) * 2021-03-12 2021-04-20 深圳市安软科技股份有限公司 Gridding-based moving target monitoring method and device and related equipment
CN112689131B (en) * 2021-03-12 2021-06-01 深圳市安软科技股份有限公司 Gridding-based moving target monitoring method and device and related equipment
CN113066214A (en) * 2021-03-26 2021-07-02 深圳市博盛科电子有限公司 Access control system based on 5G network remote monitoring
CN114821483A (en) * 2022-06-20 2022-07-29 武汉惠得多科技有限公司 Monitoring method and system capable of measuring temperature and applied to monitoring video

Also Published As

Publication number Publication date
CN106295790B (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN106295790A (en) A kind of method and device being carried out destination number statistics by video camera
CN107836012B (en) Projection image generation method and device, and mapping method between image pixel and depth value
Xie et al. Video crowd detection and abnormal behavior model detection based on machine learning method
CN103473554B (en) Artificial abortion's statistical system and method
Yang et al. Counting people in crowds with a real-time network of simple image sensors
Benezeth et al. Towards a sensor for detecting human presence and characterizing activity
CN110287519A (en) A kind of the building engineering construction progress monitoring method and system of integrated BIM
CN105898216B (en) A kind of number method of counting carried out using unmanned plane
CN105069429B (en) A kind of flow of the people analytic statistics methods and system based on big data platform
US20170178345A1 (en) Method, system and apparatus for matching moving targets between camera views
CN112216049A (en) Construction warning area monitoring and early warning system and method based on image recognition
CN106463032A (en) Intrusion detection with directional sensing
CN109816745A (en) Human body thermodynamic chart methods of exhibiting and Related product
CN106767819A (en) A kind of indoor navigation data construction method and navigation system based on BIM
US20210398318A1 (en) Auto Calibrating A Single Camera From Detectable Objects
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN109583366A (en) A kind of sports building evacuation crowd's orbit generation method positioned based on video image and WiFi
CN106600628A (en) Target object identification method and device based on infrared thermal imaging system
CN107396037A (en) Video frequency monitoring method and device
CN106504227A (en) Demographic method and its system based on depth image
CN108257182A (en) A kind of scaling method and device of three-dimensional camera module
Pan et al. Virtual-real fusion with dynamic scene from videos
Cao et al. Quantifying visual environment by semantic segmentation using deep learning
US20220398804A1 (en) System for generation of three dimensional scans and models
CN103903269B (en) The description method and system of ball machine monitor video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant