CN106295790B - Method and device for counting target number through camera

Publication number: CN106295790B
Authority: CN (China)
Prior art keywords: entrance, camera, exit, targets, target
Legal status: Active
Application number: CN201610733841.2A
Other languages: Chinese (zh)
Other versions: CN106295790A (en)
Inventor: 戴安娜
Current Assignee: Zhejiang Uniview Technologies Co Ltd
Original Assignee: Zhejiang Uniview Technologies Co Ltd
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN201610733841.2A
Publication of CN106295790A
Application granted
Publication of CN106295790B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06M - COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
    • G06M 11/00 - Counting of objects distributed at random, e.g. on a surface
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast


Abstract

The invention discloses a method and a device for counting the number of targets through a camera. The position of a door in the monitoring image of the camera is obtained from the position of that door in a virtual three-dimensional model, and the monitoring image is divided into different areas according to that position. The movement of people between the different areas is then detected through the motion detection function of the camera, and the number of people in the statistical area is counted from those movements. This solves the problems of the prior art that people counting requires installing dedicated people-counting cameras, which wastes considerable resources, and that such cameras require extensive manual configuration before use, which is inconvenient and consumes a great deal of manpower and material resources.

Description

Method and device for counting target number through camera
Technical Field
The invention belongs to the field of video monitoring, and particularly relates to a method and a device for counting the number of targets through a camera.
Background
People counts are indispensable data for the management and decision making of public places such as large shopping malls, shopping centers, museums and stations; for the retail industry, the direct proportional relationship between foot traffic and sales volume makes them a very basic data index. Counting the people in a large public place promptly and accurately while it is under video monitoring therefore meets the management requirements of many large public places.
The prior-art method for counting the people in a large public place is as follows: install a dedicated people-counting camera at every door of the place; configure in each camera the correspondence between the movement direction of people at its door and entering or leaving the place; detect the people entering and leaving in real time through the motion detection function of the cameras; compute the total numbers of people entering and leaving the place detected by all the people-counting cameras; and subtract the total number of people who left from the total number who entered to obtain the total number of people in the place.
Although the prior art can count people, dedicated people-counting cameras must be installed pointing vertically downward at every door of the large public place, and these cameras serve no purpose other than people counting, which wastes considerable resources. In addition, a people-counting camera requires extensive manual configuration before use, such as setting the entering and leaving directions of people and setting area names, making it inconvenient to use and costly in manpower and materials.
Disclosure of Invention
The invention aims to provide a method and a device for counting the number of targets by a camera, solving the problems of the prior art that dedicated people-counting cameras must be installed for detection, which wastes resources, and that such cameras require extensive manual configuration before use, which is inconvenient and consumes a great deal of manpower and material resources.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a method for counting a number of targets by a camera, the method for counting a number of targets by a camera comprising:
selecting a real camera for counting the number of targets according to the positions of an entrance and an exit in the virtual three-dimensional model and the visual field of the virtual camera;
acquiring a monitoring image of the selected real camera, and projecting the monitoring image into the virtual three-dimensional model for video fusion calibration;
acquiring a corresponding entrance position in a monitoring image of a real camera according to the entrance position in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance position in the monitoring image of the real camera;
and detecting the moving conditions of the target in different areas, and counting the number of the targets in the statistical area according to the moving conditions of the target in different areas.
Further, the selecting a real camera for counting the number of targets according to the entrance and exit position in the virtual three-dimensional model and the visual field of the virtual camera includes:
analyzing the visual field of each virtual camera in the virtual three-dimensional model, and searching a camera combination scheme for covering all entrances and exits in the virtual three-dimensional model by using as few cameras as possible;
and selecting a real camera corresponding to the virtual camera in the combination scheme as a camera for counting the target number.
Further, the obtaining, according to the entrance and exit position in the virtual three-dimensional model, a corresponding entrance and exit position in the monitoring image of the real camera, and dividing the monitoring image of the real camera into different areas according to the entrance and exit position in the monitoring image of the real camera, includes:
converting the real world three-dimensional coordinates of the key positioning points of the entrance and exit positions in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera to obtain the positions of the key positioning points of the entrance and exit positions in the monitoring image of the real camera;
drawing the position of the entrance and the exit in the real camera monitoring image according to the position of a key positioning point of the entrance and the exit in the real camera monitoring image;
according to the entrance and exit positions in the monitoring image of the real camera, the monitoring image of the real camera is divided into an entrance and exit inner area and an entrance and exit outer area.
Further, the detecting the moving conditions of the target in different areas and counting the number of the targets in the statistical area according to the moving conditions of the target in different areas includes:
judging the moving conditions of the target in different areas according to the moving conditions of the image formed by the target in different areas, adding one to the number of the targets entering the statistical area when the image formed by the target moves from the area inside the entrance and the exit to the area outside the entrance and the exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target moves from the area outside the entrance and the exit to the area inside the entrance and the exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
Further, the detecting the moving conditions of the target in different areas and counting the number of the targets in the statistical area according to the moving conditions of the target in different areas further includes:
judging the moving conditions of the target in different areas according to whether the image formed by the target covers part of an entrance or exit: when the image formed by the target changes from covering no part of any entrance or exit to covering part of an entrance or exit, adding one to the number of targets entering the statistical area; and when the image formed by the target changes from covering part of an entrance or exit to covering no part of any entrance or exit, adding one to the number of targets leaving the statistical area;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
The invention also provides a device for counting the number of targets by the camera, which comprises the following components:
the camera selection module is used for selecting a real camera for counting the number of targets according to the entrance and exit positions in the virtual three-dimensional model and the visual field of the virtual camera;
the video fusion calibration module is used for acquiring the monitoring image of the selected real camera and projecting the monitoring image into the virtual three-dimensional model for video fusion calibration;
the area dividing module is used for acquiring a corresponding entrance position in the monitoring image of the real camera according to the entrance position in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance position in the monitoring image of the real camera;
and the target quantity counting module is used for detecting the moving conditions of the target in different areas and counting the target quantity in the counting area according to the moving conditions of the target in different areas.
Further, the camera selection module selects a real camera for counting the number of targets according to the entrance and exit position in the virtual three-dimensional model and the visual field of the virtual camera, and executes the following operations:
analyzing the visual field of each virtual camera in the virtual three-dimensional model, and searching a camera combination scheme for covering all entrances and exits in the virtual three-dimensional model by using as few cameras as possible;
and selecting a real camera corresponding to the virtual camera in the combination scheme as a camera for counting the target number.
Further, the region dividing module acquires a corresponding entrance position in the monitoring image of the real camera according to the entrance position in the virtual three-dimensional model, divides the monitoring image of the real camera into different regions according to the entrance position in the monitoring image of the real camera, and executes the following operations:
converting the real world three-dimensional coordinates of the key positioning points of the entrance and exit positions in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera to obtain the positions of the key positioning points of the entrance and exit positions in the monitoring image of the real camera;
drawing the position of the entrance and the exit in the real camera monitoring image according to the position of a key positioning point of the entrance and the exit in the real camera monitoring image;
according to the entrance and exit positions in the monitoring image of the real camera, the monitoring image of the real camera is divided into an entrance and exit inner area and an entrance and exit outer area.
Further, the target number counting module detects the moving conditions of the target in different areas, and performs target number counting in the counting area according to the moving conditions of the target in different areas, and executes the following operations:
judging the moving conditions of the target in different areas according to the moving conditions of the image formed by the target in different areas, adding one to the number of the targets entering the statistical area when the image formed by the target moves from the area inside the entrance and the exit to the area outside the entrance and the exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target moves from the area outside the entrance and the exit to the area inside the entrance and the exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
Further, the target number counting module detects the moving conditions of the target in different areas, and performs target number counting in the counting area according to the moving conditions of the target in different areas, and executes the following operations:
judging the moving conditions of the target in different areas according to whether the image formed by the target covers part of an entrance or exit: when the image formed by the target changes from covering no part of any entrance or exit to covering part of an entrance or exit, adding one to the number of targets entering the statistical area; and when the image formed by the target changes from covering part of an entrance or exit to covering no part of any entrance or exit, adding one to the number of targets leaving the statistical area;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
The invention provides a method and a device for counting the number of targets by a camera. The cameras used for people counting are selected from the monitoring cameras of the area to be counted according to the virtual three-dimensional model and the visual fields of the virtual cameras; the monitoring image of each camera is divided into different areas according to the position of the door in that image; the movement of people between the areas is detected through the motion detection function of the camera; and people counting for the area is completed automatically from those movements. This solves the problems of the prior art that dedicated people-counting cameras must be installed for detection, which wastes considerable resources, and that such cameras require extensive manual configuration before use, which is inconvenient and consumes a great deal of manpower and material resources.
Drawings
FIG. 1 is a flow chart of a method of people counting by a camera according to the present invention;
FIG. 2 is a schematic view of a door in a perspective projection according to the present embodiment;
FIG. 3 is a schematic diagram illustrating three-dimensional coordinate transformation performed in the present embodiment;
fig. 4 is a block diagram of an apparatus for counting people by a camera according to the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the drawings and examples, which should not be construed as limiting the present invention.
The general scheme of the invention is as follows: provided that the cameras can cover all the entrances and exits of the area whose targets are to be counted, the correspondence between the entrances and exits of the statistical area and the cameras used for target counting is obtained and the counting cameras are selected; the two-dimensional monitoring image of each counting camera is automatically segmented into regions through a three-dimensional video fusion technique; the region changes of the target position in the monitoring image are then captured through the motion detection function of the camera; and the number of targets in the statistical area is calculated automatically. In this embodiment a person is taken as the target and a door as the entrance and exit by way of example; it is easy to understand that the target may also be an animal or an article, and the entrance and exit may be a door, another passageway, or a railing.
As shown in fig. 1, a method for counting the number of targets by a camera includes:
and step S1, selecting a real camera for target statistics according to the entrance and exit position in the virtual three-dimensional model and the visual field of the virtual camera.
This embodiment performs three-dimensional modeling of the whole region in which people are to be counted, establishing a virtual three-dimensional model of the whole statistical region. The virtual three-dimensional model converts the actual environment of the statistical region into computer-readable point, line and plane information; its construction process is similar to that of existing 3D monitoring software. After a CAD drawing of the actual environment is obtained, a 1:1 virtual three-dimensional model of it is established with three-dimensional modeling software such as 3ds Max or Revit, and the model is then textured using photographs taken manually or by aerial photography, bringing it closer to the actual environment of the statistical region. Monitoring scene information and coordinate positions in the virtual three-dimensional model can be applied directly to the actual environment of the statistical region.
A virtual camera is a virtual device, realized through computer graphics algorithms and applied in the virtual environment, that can image the virtual environment and generate a view of the area visible to a single real camera. Because the virtual three-dimensional model is built at 1:1 scale from the real environment of the statistical region, once the parameters of a virtual camera are adjusted to be consistent with those of the corresponding real camera, the visible area of the virtual camera can be applied directly to the real camera.
In this embodiment, a virtual camera corresponding to each real camera is established in a virtual three-dimensional model, a camera combination scheme in which all doors in the virtual three-dimensional model are covered with as few cameras as possible is found by analyzing the visual field of each virtual camera in the virtual three-dimensional model, and the real camera corresponding to the virtual camera in the combination scheme is used as the camera for performing people counting in this embodiment.
That is, on the premise that the cameras can cover all the doors in the monitored scene, the visual fields of the virtual cameras in the virtual three-dimensional model are used to obtain the correspondence between doors and cameras, which may be one-to-one or one-to-many (one camera corresponding to two or more doors), and suitable cameras are selected so that people counting for the room is completed with as few cameras as possible.
For example, suppose a statistical area in the virtual three-dimensional model has three entrances and exits: door 1, door 2 and door 3. Analysis of the virtual cameras' visual fields finds that door 3 can be covered by virtual camera 1, doors 1 and 2 by virtual camera 2, and door 2 by virtual camera 3. The real cameras 1 and 2 corresponding to virtual cameras 1 and 2 are then selected as the cameras for counting the number of people in the statistical area.
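The patent requires a combination scheme that covers all doors "with as few cameras as possible" but does not specify a selection algorithm. A greedy set-cover heuristic is one plausible minimal sketch; the camera and door names below are illustrative, matching the example above:

```python
def select_cameras(coverage):
    """Pick a small set of cameras covering every door.

    coverage maps a camera name to the set of doors its virtual
    camera's visual field covers. Greedy set cover: repeatedly take
    the camera that covers the most still-uncovered doors.
    """
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some doors cannot be covered by any camera")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# The example above: camera 2 covers doors 1 and 2, camera 1 covers door 3,
# camera 3 covers only door 2, so cameras 1 and 2 suffice.
cams = select_cameras({'camera1': {'door3'},
                       'camera2': {'door1', 'door2'},
                       'camera3': {'door2'}})
```

Greedy set cover does not always find the true minimum, but for the small numbers of doors and cameras in a monitored scene it is a reasonable approximation.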
And step S2, acquiring the monitoring image of the selected real camera, and projecting the monitoring image into the virtual three-dimensional model for video fusion calibration.
In this embodiment, after the real cameras for people counting are selected, the parameters of the virtual cameras in the virtual three-dimensional model, such as installation position, monitoring view angle, CCD size and focal length, are made completely consistent with those of the corresponding real cameras through a three-dimensional video fusion method.
The three-dimensional video fusion process acquires the monitoring image of a real camera and replaces the texture of the part of the virtual three-dimensional model visible to the corresponding virtual camera with that monitoring image for display. Specifically, the monitoring image of the real camera is acquired and projected into the virtual three-dimensional model through the corresponding virtual camera; format matching and conversion between the monitoring image and the three-dimensional environment are performed; the objects visible to the virtual camera are sorted and occluded objects are removed; and finally the textures of the visible objects are replaced. In this embodiment the monitoring image is made to coincide with the virtual three-dimensional model by adjusting parameters of the virtual camera such as installation position, monitoring view angle, CCD size and focal length.
In practical application the imaging of a real camera has some distortion while that of a virtual camera has none, so the monitoring image may not coincide completely with the virtual three-dimensional model. Since people counting in this embodiment segments the video image by the position of the door, the key point is the accuracy of the door position; it therefore suffices to adjust the parameters of the virtual camera until the door in the camera monitoring image coincides with the door in the virtual three-dimensional model.
Step S3, obtaining the corresponding entrance and exit position in the monitoring image of the real camera according to the entrance and exit position in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance and exit position in the monitoring image of the real camera.
This embodiment uses perspective projection to convert three-dimensional coordinate points in the virtual three-dimensional model into two-dimensional coordinate points in the camera monitoring image: a bundle of projection rays emanating from the projection center (the view point of the camera) projects a three-dimensional object onto the projection plane. As shown in fig. 2, A, B, C and D are respectively the upper left, upper right, lower left and lower right corners of a door in the virtual three-dimensional model; E, F, G and H are the projections of A, B, C and D on the projection plane of the camera, i.e. their imaging points in the camera monitoring image; point P is the view point of the camera; and the dotted lines are projection rays.
A three-dimensional coordinate point in the virtual three-dimensional model is converted into a two-dimensional coordinate point in the camera monitoring image in two steps. First, the point is converted from three-dimensional coordinates in the real-world coordinate system into three-dimensional coordinates in the camera coordinate system, as follows:
as shown in FIG. 3, OXYZ is the real world coordinate system, O ' X ' Y ' Z ' is the coordinate system of the camera, the coordinate system is established on the projection plane of the camera, the direction of O ' Z ' is the normal direction of the projection plane of the virtual camera, and the coordinate of the point O ' in the coordinate system OXYZ is (X X)o,yo,zo) The unit direction vectors in the axial directions of O 'X', O 'Y', O 'Z' are (a)11,a12,a13)、(a21,a22,a23)、(a31,a32,a33) Then the three-dimensional coordinate transformation matrix from the coordinate system xyz to O 'X' Y 'Z' is:
Figure BDA0001091088060000081
Since the world coordinates of the points A, B, C, D and P in the virtual three-dimensional model are known, the three-dimensional coordinate transformation yields the coordinates of A in the coordinate system of the virtual camera, $(x'_A, y'_A, z'_A)$, and likewise $(x'_B, y'_B, z'_B)$ for B, $(x'_C, y'_C, z'_C)$ for C, $(x'_D, y'_D, z'_D)$ for D, and $(x'_P, y'_P, z'_P)$ for the view point P.
Then the three-dimensional coordinates in the camera coordinate system are converted into two-dimensional coordinates in the camera monitoring image, as follows:
According to the equation of the straight line through two points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ in space:

$$\frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1}$$
the equation of the line through the view point P of the camera and the upper left corner A of the door is obtained as:

$$\frac{x - x'_P}{x'_A - x'_P} = \frac{y - y'_P}{y'_A - y'_P} = \frac{z - z'_P}{z'_A - z'_P}$$
Rearranging gives:

$$x = x'_P + \frac{(x'_A - x'_P)(z - z'_P)}{z'_A - z'_P}, \qquad y = y'_P + \frac{(y'_A - y'_P)(z - z'_P)}{z'_A - z'_P}$$
Because the point E is the projection of the point A on the projection plane of the camera, the coordinates of E satisfy the above line equation, giving:

$$x'_E = x'_P + \frac{(x'_A - x'_P)(z'_E - z'_P)}{z'_A - z'_P}, \qquad y'_E = y'_P + \frac{(y'_A - y'_P)(z'_E - z'_P)}{z'_A - z'_P}$$
Since the coordinate system O'X'Y'Z' is established on the projection plane of the camera, $z'_E = 0$, and the two-dimensional coordinates of the point E in the projection plane of the camera (i.e. the camera monitoring image) are:

$$x'_E = x'_P - \frac{(x'_A - x'_P)\,z'_P}{z'_A - z'_P}, \qquad y'_E = y'_P - \frac{(y'_A - y'_P)\,z'_P}{z'_A - z'_P}$$
In this embodiment the real-world three-dimensional coordinates of the upper left corner A of the door in the virtual three-dimensional model are converted by the above method into the two-dimensional coordinates of its projection point E in the camera monitoring image, and the two-dimensional coordinates of F, G and H are calculated in the same way. The lines EF, FH, HG and GE connecting E, F, G and H are then taken as the door frame, and the camera monitoring image is divided by the position of the door frame into two different areas, inside and outside the door frame.
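The two-step conversion above (world coordinates to camera coordinates, then intersection of the ray through the view point P with the projection plane $z' = 0$) can be sketched directly from the formulas. The helper names and the numeric check are illustrative, not part of the patent:

```python
def world_to_camera(p, origin, axes):
    """Transform a world point p = (x, y, z) into the camera coordinate system.

    origin: world coordinates (x_o, y_o, z_o) of O'.
    axes: rows are the unit direction vectors of the O'X', O'Y', O'Z'
    axes expressed in world coordinates (the matrix a_ij above).
    """
    d = [p[i] - origin[i] for i in range(3)]
    return tuple(sum(axes[r][i] * d[i] for i in range(3)) for r in range(3))

def project_to_image(a_cam, p_cam):
    """Project point A (camera coords) onto the plane z' = 0 along the
    line through the view point P, per the derivation above."""
    t = -p_cam[2] / (a_cam[2] - p_cam[2])
    return (p_cam[0] + (a_cam[0] - p_cam[0]) * t,
            p_cam[1] + (a_cam[1] - p_cam[1]) * t)
```

With an identity orientation and the view point at $(0, 0, -5)$, the world point $(1, 1, 5)$ projects onto the plane $z' = 0$ halfway along the ray, at $(0.5, 0.5)$, matching the closed-form expression for $x'_E$ and $y'_E$.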
It should be noted that, after video fusion calibration, the monitoring image of the real camera is overlapped with the imaging of the virtual camera, so that the position of the door in the monitoring image of the real camera can be determined by the position of the entrance in the virtual three-dimensional model.
By the above method, the present embodiment divides the monitoring image of the camera into different areas.
And step S4, detecting the moving condition of the target in different areas, and counting the number of the targets in the statistical area according to the moving condition of the target in different areas.
In this embodiment, after the monitoring image of the camera has been divided into different areas according to the position of the door frame, the movement of people between the areas is first detected through the motion detection function of the camera. Because the monitoring image is two-dimensional, different parts of a human body may fall into different areas of the image, while the feet stay close to the ground; this embodiment therefore determines the area a person is in by the position of the person's feet in the monitoring image, and judges the person's movement between areas by the movement of the feet. For example, when a person walks in through the door, once the feet cross the door line (the line between the lower left corner G and the lower right corner H of the door) and enter the area outside the door frame, the person is judged to be in the area outside the door frame even though the image of other parts of the body may still lie in the area inside the door frame. When a person walks out through the door, the image of the head and upper body enters the area inside the door frame first while the image of the feet is still in the area outside the door frame; the person is still judged to be in the area outside the door frame, and only when the feet cross the door line into the area inside the door frame is the person judged to be in the area inside the door frame.
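The foot-position test above reduces to asking which side of the door line GH the feet are on, which a 2-D cross-product sign test captures. The patent does not fix an orientation convention, so which sign corresponds to "inside the door frame" is an assumption here (the `inside_sign` parameter):

```python
def foot_region(foot, g, h, inside_sign=1):
    """Decide which side of the door line GH the feet are on.

    foot, g, h: 2-D image points; G and H are the lower corners of the
    door frame. The sign of the cross product of GH and G->foot tells
    the side; inside_sign is an assumed convention, since which sign is
    the area inside the door frame depends on camera placement.
    """
    cross = (h[0] - g[0]) * (foot[1] - g[1]) - (h[1] - g[1]) * (foot[0] - g[0])
    return 'inside' if cross * inside_sign > 0 else 'outside'
```

For a horizontal door line from G = (0, 0) to H = (4, 0) and the default convention, a foot point above the line is classified as inside the door frame and one below it as outside.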
It should be noted that in this embodiment the movement of a person between areas can also be determined from whether the image formed by the person occludes part of the door frame in the camera's monitoring image. For example, when a person walks from outside toward the door and has not yet crossed the door line, the person's image does not occlude any part of the door frame, and the person is judged to be in the area inside the door frame. When a person walks from inside to outside, the image formed by the head and upper body enters the area inside the door frame first and occludes the door line or a side of the door frame, so the person is judged to be in the area outside the door frame; only after the person crosses the door line and enters the area inside the door frame does the image no longer occlude any part of the door frame, at which point the person is judged to be in the area inside the door frame.
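The occlusion-based judgment can be sketched as a mask-overlap test. The boolean-grid representation and the region labels below are assumptions for illustration; in practice the masks would come from foreground segmentation of the monitoring image:

```python
def occludes_door_frame(person_mask, frame_mask):
    """True if the person's silhouette covers any door-frame pixel.

    Both masks are equally sized 2D boolean grids (lists of lists).
    """
    return any(p and f
               for p_row, f_row in zip(person_mask, frame_mask)
               for p, f in zip(p_row, f_row))


def region_from_occlusion(person_mask, frame_mask):
    # Per the text: occluding part of the frame means the person is on the
    # camera's side of the door (area outside the door frame); occluding
    # nothing means the person is seen through the doorway (area inside).
    if occludes_door_frame(person_mask, frame_mask):
        return "outside_frame"
    return "inside_frame"
```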
This embodiment counts people according to their movement between the different areas. Specifically, when a person is detected moving from the area inside the door frame to the area outside the door frame, the number of people entering the statistical area is increased by one; when a person is detected moving from the area outside the door frame to the area inside the door frame, the number of people leaving the statistical area is increased by one. The number of people currently in the statistical area is obtained by subtracting the number who have left from the number who have entered.
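The enter/leave bookkeeping amounts to a small state machine over region transitions. A sketch, with class and label names assumed:

```python
class OccupancyCounter:
    """Counts door-line crossings; occupancy = entered - left, as in the text."""

    def __init__(self):
        self.entered = 0  # people who moved inside-frame -> outside-frame
        self.left = 0     # people who moved outside-frame -> inside-frame

    def on_move(self, prev_region, new_region):
        if prev_region == "inside_frame" and new_region == "outside_frame":
            self.entered += 1  # crossed the door line into the room
        elif prev_region == "outside_frame" and new_region == "inside_frame":
            self.left += 1     # crossed the door line out of the room

    @property
    def occupancy(self):
        return self.entered - self.left
```

With one counter per camera, the per-camera `occupancy` values can simply be summed to obtain the total for the statistical area.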
In this embodiment, the number of people in the statistical area counted by each camera is obtained by the above method, and the per-camera counts are then added together to obtain the total number of people in the statistical area.
In this way, people counting in the statistical area is performed automatically.
This embodiment further provides an apparatus for counting the number of targets by a camera, corresponding to the above method. As shown in FIG. 4, the apparatus includes:
the camera selection module is used for selecting a real camera for counting the number of targets according to the entrance and exit positions in the virtual three-dimensional model and the visual field of the virtual camera;
the video fusion calibration module is used for acquiring the monitoring image of the selected real camera and projecting the monitoring image into the virtual three-dimensional model for video fusion calibration;
the area dividing module is used for acquiring a corresponding entrance position in the monitoring image of the real camera according to the entrance position in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance position in the monitoring image of the real camera;
and the target quantity counting module is used for detecting the moving conditions of the target in different areas and counting the target quantity in the counting area according to the moving conditions of the target in different areas.
The camera selection module of this embodiment selects the real cameras used for counting the number of targets according to the entrance and exit positions in the virtual three-dimensional model and the visual field of each virtual camera, and performs the following operations:
analyzing the visual field of each virtual camera in the virtual three-dimensional model, and searching a camera combination scheme for covering all entrances and exits in the virtual three-dimensional model by using as few cameras as possible;
and selecting a real camera corresponding to the virtual camera in the combination scheme as a camera for counting the target number.
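Covering all entrances and exits with as few cameras as possible is an instance of the set cover problem, for which greedy selection is a common heuristic. The sketch below assumes the greedy strategy and the data layout; the patent only states the goal, not an algorithm:

```python
def select_cameras(coverage):
    """Pick cameras until every entrance/exit is covered.

    `coverage` maps a camera id to the set of entrance/exit ids that lie
    within that virtual camera's visual field.
    """
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Greedy step: take the camera covering the most remaining doors.
        best = max(coverage, key=lambda cam: len(coverage[cam] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining entrances are visible to no camera
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen
```

The real cameras corresponding to the returned virtual-camera ids would then be used for counting.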
The region dividing module in this embodiment acquires a corresponding entrance position in the monitoring image of the real camera according to the entrance position in the virtual three-dimensional model, divides the monitoring image of the real camera into different regions according to the entrance position in the monitoring image of the real camera, and executes the following operations:
converting the real world three-dimensional coordinates of the key positioning points of the entrance and exit positions in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera to obtain the positions of the key positioning points of the entrance and exit positions in the monitoring image of the real camera;
drawing the position of the entrance and the exit in the real camera monitoring image according to the position of a key positioning point of the entrance and the exit in the real camera monitoring image;
according to the entrance and exit positions in the monitoring image of the real camera, the monitoring image of the real camera is divided into an entrance and exit inner area and an entrance and exit outer area.
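The first of the three operations above is the standard pinhole projection of a world point into the image, using the pose recovered during video-fusion calibration. A dependency-free sketch (a real system would typically use `cv2.projectPoints`; the plain-list matrices are for illustration only):

```python
def matvec(m, v):
    """Multiply a 3x3 matrix (nested lists) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]


def project_point(p_world, K, R, t):
    """Map a 3D world point (e.g. a key positioning point of an entrance)
    to 2D pixel coordinates in the real camera's monitoring image.

    K: 3x3 intrinsic matrix; R, t: rotation and translation from the
    world frame to the camera frame (the video-fusion calibration result).
    """
    p_cam = [a + b for a, b in zip(matvec(R, p_world), t)]  # camera coords
    u = matvec(K, p_cam)                                    # homogeneous pixels
    return u[0] / u[2], u[1] / u[2]                         # divide by depth
```

Projecting each key positioning point this way yields the entrance/exit polygon in the image, which then splits the image into the inner and outer areas.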
The target quantity counting module in this embodiment detects the movement of the target across the different areas and counts the number of targets in the statistical area accordingly, performing the following operations:
judging the moving conditions of the target in different areas according to the moving conditions of the image formed by the target in different areas, adding one to the number of the targets entering the statistical area when the image formed by the target moves from the area inside the entrance and the exit to the area outside the entrance and the exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target moves from the area outside the entrance and the exit to the area inside the entrance and the exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
The target quantity counting module in this embodiment detects the moving conditions of the target in different areas, and counts the target quantity in the counting area according to the moving conditions of the target in different areas, and can also be implemented by the following operations:
judging the movement of the target between the different areas according to whether the image formed by the target covers part of the entrance and exit: when the image formed by the target changes from covering no part of the entrance and exit to covering part of the entrance and exit, the number of targets entering the statistical area is increased by one; when the image changes from covering part of the entrance and exit to covering no part of the entrance and exit, the number of targets leaving the statistical area is increased by one;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it; those skilled in the art may make various corresponding changes and modifications according to the present invention without departing from its spirit and essence, and such changes and modifications shall fall within the protection scope of the appended claims.

Claims (8)

1. A method for counting the number of targets by a camera is characterized by comprising the following steps:
selecting a real camera for counting the number of targets according to the positions of an entrance and an exit in the virtual three-dimensional model and the visual field of the virtual camera;
acquiring a monitoring image of the selected real camera, and projecting the monitoring image into the virtual three-dimensional model for video fusion calibration;
acquiring a corresponding entrance and exit position in a monitoring image of a real camera according to the entrance and exit position in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance and exit position in the monitoring image of the real camera, wherein the different areas comprise an area inside the entrance and exit and an area outside the entrance and exit;
detecting the moving conditions of the target in different areas, and counting the number of the targets in the statistical area according to the moving conditions of the target in different areas;
the method includes the steps of obtaining corresponding entrance and exit positions in a monitoring image of a real camera according to the entrance and exit positions in a virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance and exit positions in the monitoring image of the real camera, and includes the following steps:
converting the real world three-dimensional coordinates of the key positioning points of the entrance and exit positions in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera to obtain the positions of the key positioning points of the entrance and exit positions in the monitoring image of the real camera;
drawing the position of the entrance and the exit in the real camera monitoring image according to the position of a key positioning point of the entrance and the exit in the real camera monitoring image;
according to the entrance and exit positions in the monitoring image of the real camera, the monitoring image of the real camera is divided into an entrance and exit inner area and an entrance and exit outer area.
2. The method for counting the number of targets by using the camera according to claim 1, wherein the selecting the real camera for counting the number of targets according to the entrance and exit position in the virtual three-dimensional model and the visual field of the virtual camera comprises:
analyzing the visual field of each virtual camera in the virtual three-dimensional model, and searching a camera combination scheme for covering all entrances and exits in the virtual three-dimensional model by using as few cameras as possible;
and selecting a real camera corresponding to the virtual camera in the combination scheme as a camera for counting the target number.
3. The method for counting the number of targets by using a camera according to claim 1, wherein the detecting the moving condition of the target in different areas and counting the number of targets in the counted area according to the moving condition of the target in different areas comprises:
judging the moving conditions of the target in different areas according to the moving conditions of the image formed by the target in different areas, adding one to the number of the targets entering the statistical area when the image formed by the target moves from the area inside the entrance and the exit to the area outside the entrance and the exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target moves from the area outside the entrance and the exit to the area inside the entrance and the exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
4. The method for counting the number of targets by using a camera according to claim 1, wherein the detecting of the movement of the target in different areas and the counting of the number of targets in the counted area according to the movement of the target in different areas further comprises:
judging the moving conditions of the target in different areas according to the change condition of whether the image formed by the target covers part of the entrance and exit, adding one to the number of the targets entering the statistical area when the image formed by the target changes from the part which does not cover any entrance and exit to the part which covers part of the entrance and exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target changes from the part which covers any entrance and exit to the part which does not cover any entrance and exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
5. An apparatus for counting a number of objects by a camera, the apparatus comprising:
the camera selection module is used for selecting a real camera for counting the number of targets according to the entrance and exit positions in the virtual three-dimensional model and the visual field of the virtual camera;
the video fusion calibration module is used for acquiring the monitoring image of the selected real camera and projecting the monitoring image into the virtual three-dimensional model for video fusion calibration;
the area dividing module is used for acquiring corresponding entrance and exit positions in the monitoring image of the real camera according to the entrance and exit positions in the virtual three-dimensional model, and dividing the monitoring image of the real camera into different areas according to the entrance and exit positions in the monitoring image of the real camera, wherein the different areas comprise an area inside the entrance and exit and an area outside the entrance and exit;
the target quantity counting module is used for detecting the moving conditions of the target in different areas and counting the target quantity in the counting area according to the moving conditions of the target in different areas;
the region division module acquires corresponding entrance and exit positions in the monitoring image of the real camera according to the entrance and exit positions in the virtual three-dimensional model, divides the monitoring image of the real camera into different regions according to the entrance and exit positions in the monitoring image of the real camera, and executes the following operations:
converting the real world three-dimensional coordinates of the key positioning points of the entrance and exit positions in the virtual three-dimensional model into two-dimensional coordinates in the monitoring image of the real camera to obtain the positions of the key positioning points of the entrance and exit positions in the monitoring image of the real camera;
drawing the position of the entrance and the exit in the real camera monitoring image according to the position of a key positioning point of the entrance and the exit in the real camera monitoring image;
according to the entrance and exit positions in the monitoring image of the real camera, the monitoring image of the real camera is divided into an entrance and exit inner area and an entrance and exit outer area.
6. The apparatus for counting the number of targets by the camera according to claim 5, wherein the camera selection module selects the real camera for counting the number of targets according to the entrance and exit position in the virtual three-dimensional model and the visual field of the virtual camera, and performs the following operations:
analyzing the visual field of each virtual camera in the virtual three-dimensional model, and searching a camera combination scheme for covering all entrances and exits in the virtual three-dimensional model by using as few cameras as possible;
and selecting a real camera corresponding to the virtual camera in the combination scheme as a camera for counting the target number.
7. The apparatus for counting the number of targets by a camera according to claim 5, wherein the target number counting module detects the movement of the target in different areas and counts the number of targets in the counted areas according to the movement of the target in different areas, and performs the following operations:
judging the moving conditions of the target in different areas according to the moving conditions of the image formed by the target in different areas, adding one to the number of the targets entering the statistical area when the image formed by the target moves from the area inside the entrance and the exit to the area outside the entrance and the exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target moves from the area outside the entrance and the exit to the area inside the entrance and the exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
8. The apparatus for counting the number of targets by a camera according to claim 5, wherein the target number counting module detects the movement of the target in different areas and counts the number of targets in the counted areas according to the movement of the target in different areas, and performs the following operations:
judging the moving conditions of the target in different areas according to the change condition of whether the image formed by the target covers part of the entrance and exit, adding one to the number of the targets entering the statistical area when the image formed by the target changes from the part which does not cover any entrance and exit to the part which covers part of the entrance and exit, and adding one to the number of the targets leaving the statistical area when the image formed by the target changes from the part which covers any entrance and exit to the part which does not cover any entrance and exit;
and subtracting the number of the targets leaving the statistical area from the number of the targets entering the statistical area to obtain the number of the targets in the current statistical area.
CN201610733841.2A 2016-08-25 2016-08-25 Method and device for counting target number through camera Active CN106295790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610733841.2A CN106295790B (en) 2016-08-25 2016-08-25 Method and device for counting target number through camera

Publications (2)

Publication Number Publication Date
CN106295790A CN106295790A (en) 2017-01-04
CN106295790B (en) 2020-05-19

Family

ID=57676989


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019176306A (en) * 2018-03-28 2019-10-10 キヤノン株式会社 Monitoring system and control method therefor, and program
CN112689131B (en) * 2021-03-12 2021-06-01 深圳市安软科技股份有限公司 Gridding-based moving target monitoring method and device and related equipment
CN113066214B (en) * 2021-03-26 2022-08-23 深圳市博盛科电子有限公司 Access control system based on 5G network remote monitoring
CN114821483B (en) * 2022-06-20 2022-09-30 武汉惠得多科技有限公司 Monitoring method and system capable of measuring temperature and applied to monitoring video

Citations (1)

Publication number Priority date Publication date Assignee Title
JP2014130397A (en) * 2012-12-28 2014-07-10 Chugoku Electric Power Co Inc:The Meter-reading handy terminal with movable camera

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US20040200955A1 (en) * 2003-04-08 2004-10-14 Aleksandr Andzelevich Position detection of a light source
CN102036054B (en) * 2010-10-19 2013-04-17 北京硅盾安全技术有限公司 Intelligent video monitoring system based on three-dimensional virtual scene
US8620088B2 (en) * 2011-08-31 2013-12-31 The Nielsen Company (Us), Llc Methods and apparatus to count people in images
CN103986910A (en) * 2014-05-20 2014-08-13 中国科学院自动化研究所 Method and system for passenger flow statistics based on cameras with intelligent analysis function
CN103985182B (en) * 2014-05-30 2016-04-20 长安大学 A kind of bus passenger flow automatic counting method and automatic counter system
CN105225230B (en) * 2015-09-11 2018-07-13 浙江宇视科技有限公司 A kind of method and device of identification foreground target object
CN105407259B (en) * 2015-11-26 2019-07-30 北京理工大学 Virtual image capture method
CN105635696A (en) * 2016-03-22 2016-06-01 南阳理工学院 Statistical method and device


Similar Documents

Publication Publication Date Title
Boltes et al. Collecting pedestrian trajectories
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
CN106295790B (en) Method and device for counting target number through camera
CN108700468A (en) Method for checking object, object detection terminal and computer-readable medium
JP6156665B1 (en) Facility activity analysis apparatus, facility activity analysis system, and facility activity analysis method
JP7205613B2 (en) Image processing device, image processing method and program
US7720257B2 (en) Object tracking system
JP6619927B2 (en) Calibration device
CN103473554B (en) Artificial abortion's statistical system and method
EP3518146A1 (en) Image processing apparatus and image processing method
US20100103266A1 (en) Method, device and computer program for the self-calibration of a surveillance camera
US10318817B2 (en) Method and apparatus for surveillance
CN105427345B (en) Three-dimensional stream of people's method of motion analysis based on camera projection matrix
CN108694741A (en) A kind of three-dimensional rebuilding method and device
EP2798611A1 (en) Camera calibration using feature identification
CA2466085A1 (en) Method and apparatus for providing immersive surveillance
CN107273799A (en) A kind of indoor orientation method and alignment system
JP5525495B2 (en) Image monitoring apparatus, image monitoring method and program
CN110796032A (en) Video fence based on human body posture assessment and early warning method
CN106504227B (en) Demographic method and its system based on depth image
CN107862713A (en) Video camera deflection for poll meeting-place detects method for early warning and module in real time
CN106600628A (en) Target object identification method and device based on infrared thermal imaging system
CN108362382A (en) A kind of thermal imaging monitoring method and its monitoring system
CN109670391B (en) Intelligent lighting device based on machine vision and dynamic identification data processing method
WO2009016624A2 (en) System and method employing thermal imaging for object detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant