CN112633096A - Passenger flow monitoring method and device, electronic equipment and storage medium

Publication number
CN112633096A
CN112633096A (application CN202011466662.XA)
Authority
CN
China
Prior art keywords
human head
sequence
dimensional
head frame
target
Prior art date
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Application number
CN202011466662.XA
Other languages
Chinese (zh)
Inventor
郝凯旋
黄哲
王孝宇
胡文泽
Current Assignee: Shenzhen Intellifusion Technologies Co Ltd
Original Assignee: Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202011466662.XA priority Critical patent/CN112633096A/en
Publication of CN112633096A publication Critical patent/CN112633096A/en
Priority to PCT/CN2021/114965 priority patent/WO2022127181A1/en

Classifications

    • G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 — Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V20/48 — Matching video sequences
    • G06V40/161 — Human faces: detection; localisation; normalisation
    • H04N13/239 — Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 — CCTV systems for receiving images from a plurality of remote sources

Abstract

The embodiment of the invention provides a passenger flow monitoring method and device, electronic equipment and a storage medium. The method comprises: acquiring a first target image sequence and a second target image sequence of a target area; performing human head detection on the two image sequences to obtain a first human head frame sequence and a second human head frame sequence; matching first human head frames with second human head frames according to the time-sequence relation of the two head frame sequences to obtain a paired human head frame sequence for each target person, where the paired sequence comprises the paired head frames of at least one target person; performing three-dimensional human head reconstruction in a preset three-dimensional space according to the paired head frame sequence to obtain a three-dimensional human head sequence of the target person; and monitoring passenger flow in the target area according to the three-dimensional head sequence. This improves the passenger flow monitoring effect.

Description

Passenger flow monitoring method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a passenger flow monitoring method and device, electronic equipment and a storage medium.
Background
With the development of artificial intelligence, businesses seeking precise marketing adopt image technologies to obtain passenger flow volume, conversion rate and in-store customer behaviour information, so that marketing strategies become more accurate. In conventional imaging technology, passenger flow information is generally acquired by drawing trip lines or bounding regions in a two-dimensional plane and counting whenever a person crosses the line or enters the region. However, this single counting logic cannot accommodate factors such as variable shop scenes, the varied ways pedestrians enter a shop, and differences in pedestrian height, so the counting accuracy is poor and the reliability of the other derived information is correspondingly poor. Existing passenger flow monitoring is therefore ineffective.
Disclosure of Invention
The embodiment of the invention provides a passenger flow monitoring method that can improve the counting accuracy of passenger flow and thereby improve the passenger flow monitoring effect.
In a first aspect, an embodiment of the present invention provides a method for monitoring passenger flow, where the method includes:
acquiring a first target image sequence and a second target image sequence of a target area, wherein the first target image sequence and the second target image sequence are acquired at the same time and at different angles;
respectively carrying out human head detection on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence, wherein the first human head frame sequence comprises a first human head frame of at least one target person, and the second human head frame sequence comprises a second human head frame of at least one target person;
matching the first human head frame with the second human head frame according to the time sequence relation of the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person, wherein the paired human head frame sequence comprises the paired human head frame of at least one target person;
according to the paired human head frame sequence, performing human head three-dimensional reconstruction in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person, wherein the three-dimensional human head sequence comprises the three-dimensional human head of the target person;
and monitoring passenger flow of the target area according to the three-dimensional head sequence.
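The five claimed steps can be sketched as a pipeline. This is an illustrative outline only; every name here (`monitor_passenger_flow` and the `detect`, `match`, `reconstruct`, `count` callables) is hypothetical rather than part of the patent:

```python
from typing import Callable, Sequence, Tuple

Box = Tuple[float, float, float, float]  # hypothetical (x1, y1, x2, y2) head box

def monitor_passenger_flow(
    seq1: Sequence,          # first target image sequence
    seq2: Sequence,          # second target image sequence
    detect: Callable,        # image -> list of head boxes
    match: Callable,         # (boxes per frame, boxes per frame) -> paired sequences
    reconstruct: Callable,   # paired head-box sequence -> 3-D head sequence
    count: Callable,         # 3-D head sequences -> passenger-flow statistic
):
    """Mirror of the five claimed steps; every callable is a stand-in."""
    heads1 = [detect(img) for img in seq1]        # first head-frame sequence
    heads2 = [detect(img) for img in seq2]        # second head-frame sequence
    paired = match(heads1, heads2)                # paired head-frame sequence per person
    tracks3d = [reconstruct(p) for p in paired]   # 3-D head sequence per person
    return count(tracks3d)                        # passenger-flow monitoring result
```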
Optionally, the first target image sequence is acquired by a first camera, and the second target image sequence is acquired by a second camera, and the method further includes:
carrying out ground calibration under the coordinate system of the first camera or the second camera to obtain a calibrated ground;
and constructing and obtaining the three-dimensional space based on the calibrated ground.
Optionally, the performing ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground includes:
obtaining calibration object information associated with the target area, wherein the calibration object information is calibration object information under a coordinate system of the first camera or the second camera;
and carrying out ground calibration according to the information of the calibration object to obtain a calibrated ground.
Optionally, the performing ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground includes:
calculating corresponding ground feature points between the first camera and the second camera, and triangulating the ground feature points to obtain the three-dimensional space points corresponding to the ground feature points;
and performing plane parameter fitting on the three-dimensional space points to obtain a calibrated ground.
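The plane-parameter fitting in the claim above can be illustrated with a standard total-least-squares fit. The function name and the SVD-based approach are assumptions; the patent does not specify a fitting method:

```python
import numpy as np

def fit_ground_plane(points):
    """Total-least-squares plane fit via SVD.

    points: (N, 3) array of triangulated ground points in camera coordinates.
    Returns (n, d) with |n| = 1 and n . p ~= d for every point p on the plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector for the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(n @ centroid)
```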
Optionally, the performing human head three-dimensional reconstruction on the paired human head frame sequence in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person includes:
calculating an effective disparity map of the first human head frame and the second human head frame of the paired human head frame in the current frame;
performing human head three-dimensional reconstruction in a preset three-dimensional space through the effective disparity map to obtain a current frame three-dimensional human head;
and obtaining a three-dimensional head sequence of the target person based on the current frame three-dimensional head.
Optionally, the calculating an effective disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame includes:
calculating to obtain an effective parallax interval according to a preset prior parallax;
calculating a disparity map of the first human head frame and the second human head frame of the paired human head frame in the current frame, and judging whether the disparity map falls into the effective disparity interval;
and if the disparity map falls into the effective disparity interval, judging that the disparity map is an effective disparity map.
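As a hedged illustration of the effective-disparity test above, the sketch below assumes the effective interval is a relative band around the prior disparity and that a disparity map is judged by its median; neither detail is stated in the patent:

```python
import numpy as np

def effective_interval(prior_disparity, rel_tol=0.5):
    """Hypothetical rule: the effective interval is the prior +/- rel_tol * prior."""
    return prior_disparity * (1.0 - rel_tol), prior_disparity * (1.0 + rel_tol)

def is_effective(disparity_map, interval):
    """Judge a head-region disparity map by whether its median falls in the interval."""
    lo, hi = interval
    med = float(np.median(np.asarray(disparity_map, dtype=float)))
    return lo <= med <= hi
```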
Optionally, the performing, through the effective disparity map, three-dimensional human head reconstruction in a preset three-dimensional space to obtain a current frame three-dimensional human head includes:
calculating a final parallax value according to the effective parallax map;
calculating the head depth information of the target person according to the preset internal parameters of the first camera or the second camera;
and according to the human head depth information, performing human head three-dimensional reconstruction in the preset three-dimensional space to obtain the current frame three-dimensional human head.
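The depth computation from the final disparity value and the camera intrinsics follows the standard rectified-stereo relation Z = f * B / d; the helper names below are hypothetical:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo relation: depth Z = focal * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at depth z into 3-D camera space."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```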
Optionally, the three-dimensional space includes a calibrated ground, and the monitoring of passenger flow to the target area according to the three-dimensional head sequence includes:
projecting the three-dimensional human head in the three-dimensional human head sequence to the calibrated ground to obtain a projection track of a target person;
and monitoring passenger flow of the target area according to the projection track.
Optionally, the step of monitoring passenger flow in the target area according to the projection trajectory includes:
calculating the state information of the projection track at each time point and the target calibration area to obtain a state sequence of the projection track and the target calibration area;
and monitoring passenger flow of the target area according to the state sequence.
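One plausible reading of the state sequence above is a per-time-point inside/outside flag for the target calibration area, from which entries can be counted. The patent does not fix this representation, so the sketch below is an assumption:

```python
def state_sequence(trajectory, inside):
    """Per-time-point state: is the projected track point inside the target area?"""
    return [inside(p) for p in trajectory]

def count_entries(states):
    """Passenger-flow count: number of outside -> inside transitions."""
    entries, prev = 0, False
    for s in states:
        if s and not prev:
            entries += 1
        prev = s
    return entries
```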
In a second aspect, an embodiment of the present invention further provides a device for monitoring passenger flow, where the device includes:
an acquisition module, used for acquiring a first target image sequence and a second target image sequence of a target area, wherein the first target image sequence and the second target image sequence are acquired at the same time and at different angles;
the processing module is used for respectively carrying out human head detection on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence, wherein the first human head frame sequence comprises a first human head frame of at least one target person, and the second human head frame sequence comprises a second human head frame of at least one target person;
the matching module is used for matching the first human head frame with the second human head frame according to the time sequence relation of the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person, wherein the paired human head frame sequence comprises the paired human head frame of at least one target person;
the three-dimensional reconstruction module is used for performing human head three-dimensional reconstruction in a preset three-dimensional space according to the paired human head frame sequence to obtain a three-dimensional human head sequence of the target person, wherein the three-dimensional human head sequence comprises the three-dimensional human head of the target person;
and the monitoring module is used for monitoring passenger flow of the target area according to the three-dimensional head sequence.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the passenger flow monitoring method provided by the embodiment of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the passenger flow monitoring method provided by the embodiment of the present invention.
In the embodiment of the invention, a first target image sequence and a second target image sequence of a target area are obtained, wherein the first target image sequence and the second target image sequence are acquired at the same time and at different angles; human head detection is performed on the two sequences to obtain a first human head frame sequence and a second human head frame sequence, the first comprising a first human head frame of at least one target person and the second comprising a second human head frame of at least one target person; the first human head frames are matched with the second human head frames according to the time-sequence relation of the two head frame sequences to obtain a paired human head frame sequence for each target person, the paired sequence comprising the paired head frames of at least one target person; three-dimensional human head reconstruction is performed in a preset three-dimensional space according to the paired head frame sequence to obtain a three-dimensional human head sequence of the target person; and passenger flow in the target area is monitored according to the three-dimensional head sequence. By extracting more accurate head information from head images of the target person captured at different angles and using it for three-dimensional reconstruction, the position of the three-dimensional head in the three-dimensional space is more accurate, which improves the counting accuracy of the passenger flow and thus the passenger flow monitoring effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a method for monitoring passenger flow according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for constructing a three-dimensional space according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a three-dimensional human head reconstruction method according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for monitoring passenger flow according to an embodiment of the present invention;
FIG. 4a is a diagram illustrating a relationship between a region and a state according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a passenger flow monitoring device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another passenger flow monitoring device provided in the embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a calibration module according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another calibration module provided in an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a three-dimensional reconstruction module according to an embodiment of the present invention;
FIG. 10 is a block diagram of a second computing submodule provided in an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a reconstruction submodule according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a monitoring module according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a monitoring submodule provided in an embodiment of the present invention;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of them. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for monitoring passenger flow according to an embodiment of the present invention. As shown in fig. 1, the method is used for monitoring passenger flow at regular intervals or in real time, and includes the following steps:
101. and acquiring a first target image sequence and a second target image sequence of the target area.
In an embodiment of the present invention, the first target image sequence and the second target image sequence are acquired at the same time and at different angles, and each includes at least one target person.
The two sequences can be collected by two cameras with different shooting angles; the cameras are calibrated and associated at installation time so that they shoot in the same coordinate system and at the same time. The sequences can also be acquired by a calibrated binocular camera. In the embodiment of the present invention, a calibrated binocular camera is preferred, in which case the first target image sequence and the second target image sequence may be the left-eye and right-eye image sequences respectively.
The first and second target image sequences may be continuous-frame image sequences (video stream images) acquired by the binocular camera in real time, or historical continuous-frame images previously acquired by the binocular camera.
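Synchronizing the two image streams can be illustrated by timestamp-based frame pairing. The function below and the 20 ms skew tolerance are assumptions, not part of the patent:

```python
def pair_synchronized(frames_a, frames_b, max_skew=0.02):
    """Pair frames from two cameras whose timestamps differ by at most max_skew s.

    frames_a / frames_b: lists of (timestamp, frame), sorted by timestamp.
    Returns a list of (frame_a, frame_b) pairs; unmatched frames are dropped.
    """
    pairs, i, j = [], 0, 0
    while i < len(frames_a) and j < len(frames_b):
        ta, fa = frames_a[i]
        tb, fb = frames_b[j]
        if abs(ta - tb) <= max_skew:
            pairs.append((fa, fb))
            i += 1
            j += 1
        elif ta < tb:          # camera A is behind: advance it
            i += 1
        else:                  # camera B is behind: advance it
            j += 1
    return pairs
```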
102. And respectively carrying out human head detection on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence.
In an embodiment of the present invention, the first person head box sequence includes a first person head box of at least one target person, and the second person head box sequence includes a second person head box of at least one target person. The first human head frame corresponds to a first target image sequence, and the second human head frame corresponds to a second target image sequence.
And respectively carrying out human head detection on the first target image sequence and the second target image sequence through a human head detection model to obtain a first human head frame sequence and a second human head frame sequence. Specifically, the human head detection model may perform human head detection on the first target image sequence and the second target image sequence frame by frame to obtain a human head frame corresponding to each frame of image.
Optionally, human head frame tracking may be performed on the first human head frame sequence by a human head tracking model, yielding a first human head frame sequence tracked from the first human head frame detected in each frame of the first target image; the first human head frame sequence then includes the first human head frame detected in every frame of the first target image. The same tracking may instead be applied to the second human head frame sequence, or to both sequences simultaneously, yielding tracked first and second human head frame sequences.
In one possible embodiment, the first human head box sequence may be a first human head box sequence corresponding to a plurality of target persons, and the second human head box sequence may be a second human head box sequence corresponding to a plurality of target persons. Further, the first head frame may be a plurality of first head frames corresponding to a plurality of target persons, and the second head frame may be a plurality of second head frames corresponding to a plurality of target persons.
Furthermore, an ID can be assigned to the human head frame of each target person by the head frame tracking algorithm: each target person's head frame corresponds to a head frame ID, the ID identifies whether head frames belong to the same person, and the same ID corresponds to head frames of the same target person. For example, when only the first head frame is tracked, different head frame IDs are assigned to different target persons, and the first head frame sequences of the different target persons are obtained from these IDs. The second head frame sequences of the different target persons can then be obtained from the similarity between the first and second head frames.
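A minimal sketch of IoU-based head-frame ID assignment, one common form of the tracking described above; the greedy matching scheme and the 0.3 overlap threshold are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def assign_ids(prev_tracks, detections, next_id, iou_thresh=0.3):
    """Greedy IoU matching: reuse the previous ID when boxes overlap enough.

    prev_tracks: {id: box} from the previous frame; detections: boxes in the
    current frame. Returns the updated {id: box} and the next free ID.
    """
    tracks, used = {}, set()
    for det in detections:
        best_id, best_iou = None, iou_thresh
        for tid, box in prev_tracks.items():
            if tid in used:
                continue
            v = iou(box, det)
            if v > best_iou:
                best_id, best_iou = tid, v
        if best_id is None:               # no overlap: a new target person
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        tracks[best_id] = det
    return tracks, next_id
```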
103. And matching the first human head frame with the second human head frame according to the time sequence relation of the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person.
In an embodiment of the present invention, the paired human head box sequence includes paired human head boxes of at least one target person.
The time-sequence relation between the first and second human head frame sequences can be determined from the time order of the image frames in the first and second target image sequences. Since head detection is performed frame by frame, the resulting head frame sequence has the same time order as its target image sequence.
The first target image sequence and the second target image sequence are acquired at the same time, and the frame images corresponding to the first target image sequence and the second target image sequence are synchronous, so that the first human head frame sequence and the second human head frame sequence extracted from the first target image sequence and the second target image sequence also have a synchronous property. Therefore, similarity matching can be carried out on the synchronized first human head frame and the second human head frame, and a paired human head frame sequence of each target person is obtained.
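For a rectified stereo pair, the cross-view similarity matching of synchronized head frames could exploit the fact that matching boxes lie on (almost) the same image row. This row-based greedy sketch is an assumption; a real system would also use appearance similarity, as the patent's "similarity matching" suggests:

```python
def match_across_views(boxes_left, boxes_right, max_row_diff=10.0):
    """Pair left/right head boxes of the same instant by box-centre row.

    Boxes are (x1, y1, x2, y2); in a rectified pair the same head appears on
    the same row, shifted horizontally by the disparity.
    """
    def row(b):
        return 0.5 * (b[1] + b[3])

    pairs, used = [], set()
    for lb in boxes_left:
        best, best_d = None, max_row_diff
        for j, rb in enumerate(boxes_right):
            if j in used:
                continue
            d = abs(row(lb) - row(rb))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((lb, boxes_right[best]))
    return pairs
```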
104. And according to the matched human head frame sequence, performing human head three-dimensional reconstruction in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person.
In the embodiment of the present invention, the paired human head frame sequence includes the paired head frames corresponding to each frame of the first and second target images, and each paired head frame includes the first and second human head frames of the same target person. From the first and second head frames of the same target person, the head depth information of that person can be calculated; three-dimensional head reconstruction is then performed in the preset three-dimensional space according to the depth information to obtain the three-dimensional head of the target person. Performing this reconstruction for every paired head frame in the paired head frame sequence yields the three-dimensional head sequence of the target person.
Furthermore, a corresponding first human head image can be extracted from the first target image according to the first head frame, and a second human head image from the second target image according to the second head frame; the two are head images of the same target person at the same moment. The head depth information of the target person can be calculated from these two images, and three-dimensional head reconstruction is performed in the preset three-dimensional space according to the depth information to obtain the three-dimensional head of the target person. Performing this for every paired head frame in the sequence again yields the three-dimensional head sequence of the target person.
105. And monitoring passenger flow in the target area according to the three-dimensional head sequence.
In the embodiment of the invention, because the target person is moving, the position of the reconstructed three-dimensional head in the three-dimensional space changes accordingly, forming a movement trajectory of the three-dimensional head in the three-dimensional space; the movement trajectory of the target person can be obtained from this trajectory. For example, the trajectory of the three-dimensional head may be mapped into real space to obtain the target person's movement trajectory there, so as to monitor passenger flow in the target area. Alternatively, the position of the target area can be mapped into the three-dimensional space, and passenger flow monitoring can then be performed from the head trajectory and the target area's position in that space.
The monitoring of the passenger flow in the target area may be a statistical monitoring of the number of passenger flows, for example, a statistical monitoring of the number of passenger flows entering and exiting the target area, or a monitoring of a passenger flow trajectory.
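Projecting the three-dimensional head sequence onto the calibrated ground (described in the optional claims above) reduces to orthogonal projection onto a plane n . p = d; the helper names below are hypothetical:

```python
import numpy as np

def project_to_ground(point, normal, d):
    """Orthogonally project a 3-D point onto the plane n . p = d, with |n| = 1."""
    p = np.asarray(point, dtype=float)
    n = np.asarray(normal, dtype=float)
    return p - (n @ p - d) * n

def ground_track(head_sequence, normal, d):
    """Project every 3-D head position; the result is the walking trajectory."""
    return [project_to_ground(p, normal, d) for p in head_sequence]
```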
In the embodiment of the invention, a first target image sequence and a second target image sequence of a target area are obtained, wherein the two sequences are acquired at the same time and at different angles; human head detection is performed on each sequence to obtain a first human head frame sequence and a second human head frame sequence, each comprising the head frame of at least one target person; the first head frames are matched with the second head frames according to the time-sequence relation of the two sequences to obtain a paired human head frame sequence for each target person; three-dimensional head reconstruction is performed in a preset three-dimensional space according to the paired head frame sequence to obtain a three-dimensional head sequence of the target person; and passenger flow in the target area is monitored according to the three-dimensional head sequence. By extracting more accurate head information from head images of the target person captured at different angles and using it for three-dimensional reconstruction, the position of the three-dimensional head in the three-dimensional space is more accurate, which improves the counting accuracy of the passenger flow and thus the passenger flow monitoring effect.
Optionally, in an embodiment of the present invention, the first target image sequence is acquired by a first camera, and the second target image sequence is acquired by a second camera, so that a three-dimensional space suitable for three-dimensional human head reconstruction may be constructed according to the arrangement and installation of the first camera and the second camera.
Specifically, the three-dimensional space includes a calibrated ground, please refer to fig. 2, fig. 2 is a flowchart of a method for constructing the three-dimensional space according to an embodiment of the present invention, and as shown in fig. 2, the method includes the following steps:
201. Perform ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground.
In the embodiment of the present invention, the first camera and the second camera may be the left and right cameras of a binocular camera pair. When the first camera and the second camera are initialized, their intrinsic and extrinsic parameters are calibrated, and the ground is calibrated to obtain the calibrated ground.
The intrinsic and extrinsic parameters of the first camera and the second camera can be calibrated with a checkerboard calibration method. Specifically, a checkerboard is used for single-camera calibration and dual-camera calibration respectively, yielding the intrinsic parameters of the first camera, the intrinsic parameters of the second camera, and the extrinsic parameters between the two cameras.
The calibrated ground may be obtained in the coordinate system of either the first camera or the second camera, and the image acquired by one camera can be transformed into the coordinate system of the other. Specifically, coordinates in the image acquired by the first camera can be transformed into the coordinate system of the second camera through the intrinsic parameters of the first camera, the intrinsic parameters of the second camera, and the extrinsic parameters between them. Therefore, the coordinate system of either camera may be selected for ground calibration.
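As a hedged illustration of the coordinate transformation described above, the sketch below maps a 3D point from the first camera's frame into the second camera's frame using assumed extrinsics (R, t); the identity rotation and 0.12 m baseline are purely illustrative values, not parameters from the patent.

```python
# Sketch: transform a 3D point from camera-1 coordinates into camera-2
# coordinates via the stereo extrinsics, p2 = R @ p1 + t.
# R (3x3 rotation as nested lists) and t (3-vector) are assumed values.

def transform_point(p, R, t):
    """Map point p from the first camera's frame to the second's."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Illustrative extrinsics: identity rotation, 0.12 m horizontal baseline.
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t_baseline = [-0.12, 0.0, 0.0]

p_cam1 = [0.5, 0.2, 3.0]          # a head point 3 m in front of camera 1
p_cam2 = transform_point(p_cam1, R_identity, t_baseline)
# p_cam2 is approximately [0.38, 0.2, 3.0]
```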
In one possible embodiment, when calibrating the ground, information on a calibration object associated with the target area may be obtained, the calibration object lying on the same plane as the ground and its information being expressed in the coordinate system of the first camera or the second camera. Ground calibration is then performed according to the calibration object information to obtain the calibrated ground. Specifically, the calibration object information includes corner point information, from which the camera pose can be calculated with a PnP (Perspective-n-Point) algorithm, yielding the ground parameters and hence the calibrated ground. For example, a two-dimensional code may be placed on the ground in or near the target area and associated with it as the calibration object; its 4 corner points are detected, refined to sub-pixel accuracy, and passed to the PnP algorithm to obtain the camera pose and thereby the calibrated ground. In the PnP algorithm, the camera pose is computed from the known camera intrinsic parameters and the coordinates of the 4 corner points in both the world coordinate system and the image coordinate system; the ground parameters follow from the pose, and the calibrated ground is obtained from the ground parameters.
In another possible embodiment, when calibrating the ground, ground feature points corresponding to the first camera or the second camera may be computed and triangulated to obtain the corresponding three-dimensional space points, and plane parameters are then fitted to these points to obtain the calibrated ground. Specifically, after the ground feature points are extracted, triangulation yields their point cloud coordinates in the world coordinate system; the plane parameters can be fitted with a 3D Hough transform point cloud plane detection algorithm or a RANSAC (random sample consensus) algorithm, giving the ground parameters and hence the calibrated ground.
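The plane-fitting step can be illustrated with a minimal pure-Python RANSAC sketch (rather than the 3D Hough transform also mentioned above); the synthetic point cloud, iteration count, and inlier tolerance below are assumptions for illustration only.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane n.x + d = 0 through three non-collinear points (n unit-length)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit the dominant plane of a 3D point cloud by RANSAC sampling."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol)
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best

# Synthetic triangulated ground: a grid on z = 2 plus two off-plane outliers.
ground = [[x * 0.1, y * 0.1, 2.0] for x in range(10) for y in range(10)]
cloud = ground + [[0.5, 0.5, 1.0], [0.2, 0.8, 3.0]]
(n, d) = ransac_plane(cloud)
# recovered normal is (0, 0, +-1) with |d| close to 2
```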
In another possible embodiment, the two ground calibration methods above may be combined: feature extraction is used to detect a ground calibration object lying on the same plane as the ground, the feature points of the calibration object are triangulated to obtain their point cloud coordinates in the world coordinate system, the plane parameters are fitted with a 3D Hough transform point cloud plane detection algorithm or a RANSAC algorithm to obtain the ground parameters, and the calibrated ground is obtained from the ground parameters.
202. Construct the three-dimensional space based on the calibrated ground.
In the embodiment of the invention, the calibrated ground is obtained in the coordinate system of the first camera or of the second camera, so the constructed three-dimensional space is likewise based on that camera's coordinate system. Because the three-dimensional space is established in a camera coordinate system, reconstructing the three-dimensional head in this space yields more accurate three-dimensional head information.
Optionally, referring to fig. 3, fig. 3 is a flowchart of a three-dimensional human head reconstruction method according to an embodiment of the present invention, as shown in fig. 3, including the following steps:
301. Calculate an effective disparity map of a first human head frame and a second human head frame in the current frame.
In the embodiment of the present invention, a disparity map records the pixel offset between corresponding points in the two images; points with the same disparity lie at the same distance from the camera.
Specifically, the effective parallax interval can be calculated according to a preset prior parallax; calculating a disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame, and judging whether the disparity map falls into an effective disparity interval or not; and if the disparity map falls into the effective disparity interval, judging the disparity map to be an effective disparity map. The effective disparity interval can be calculated by a preset disparity threshold, for example, assuming that the a priori disparity is 64 and the threshold is 10, the effective disparity interval is 54-74, that is, when the value corresponding to the disparity map of the image is within 54-74 of the effective disparity interval, the disparity map of the image is an effective disparity map.
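The interval check described above can be sketched as follows, using the prior disparity 64 and threshold 10 from the example; the function names are illustrative, not from the patent.

```python
def effective_disparity_interval(prior, threshold):
    """Valid disparity range [prior - threshold, prior + threshold]."""
    return prior - threshold, prior + threshold

def filter_valid(disparity, prior=64, threshold=10):
    """True if a head-frame disparity value falls in the valid interval."""
    lo, hi = effective_disparity_interval(prior, threshold)
    return lo <= disparity <= hi

lo, hi = effective_disparity_interval(64, 10)   # (54, 74), as in the example
inside, outside = filter_valid(60), filter_valid(80)   # True, False
```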
Optionally, the disparity map of the first human head frame and the second human head frame may be calculated with a Semi-Global Block Matching (SGBM) algorithm, whose adaptive parameter can be determined from the prior disparity, for example by dividing the prior disparity by 16 and rounding the result.
It should be noted that the first head frame includes a head image of the target person in the first target image, the second head frame includes a head image of the target person in the second target image, and the disparity map of the first head frame and the second head frame may be understood as a disparity between the head image of the target person in the first target image and the head image of the target person in the second target image.
302. Perform three-dimensional human head reconstruction in a preset three-dimensional space using the effective disparity map to obtain the three-dimensional human head of the current frame.
In the embodiment of the invention, the head depth information of the target person (that is, the distance from the person's head to the camera) can be calculated from the disparity map, and the three-dimensional reconstruction of the head is then carried out according to this depth information.
Further, a final disparity value can be calculated from the effective disparity map; the head depth information of the target person is then computed using the preset intrinsic parameters of the first camera or of the second camera, and three-dimensional human head reconstruction is performed in the preset three-dimensional space according to the head depth information to obtain the three-dimensional human head of the current frame. When the three-dimensional space is constructed in the coordinate system of the first camera, the head depth is calculated with the intrinsic parameters of the first camera; when it is constructed in the coordinate system of the second camera, the intrinsic parameters of the second camera are used. The final disparity value may be the average value over the effective disparity map. The parameters involved include the camera focal length and the baseline length. The head depth of the target person is computed from these parameters and the final disparity value by the formula Z = f·b/d, where Z is the depth, f the focal length, b the baseline length, and d the final disparity value.
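The depth formula can be sketched directly; the focal length, baseline, and disparity values below are assumed for illustration and are not taken from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a rectified stereo point: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed values: 800 px focal length, 0.12 m baseline, final disparity 32 px.
z = depth_from_disparity(800, 0.12, 32)   # about 3.0 metres
```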
Optionally, after obtaining the head depth information, coordinate transformation may be performed on the head depth information to obtain the head depth information in the first camera or the second camera coordinate system, so as to perform three-dimensional reconstruction of the head in a three-dimensional space.
303. Obtain a three-dimensional human head sequence of the target person based on the current-frame three-dimensional human head.
In the embodiment of the invention, the three-dimensional human head reconstruction is carried out on the target personnel in each frame of the first target image and the second target image, so that the three-dimensional human head sequence of the target personnel can be obtained.
Optionally, referring to fig. 4, fig. 4 is a flowchart of another passenger flow monitoring method provided in an embodiment of the present invention, a three-dimensional space includes a calibrated floor, and monitoring passenger flow in a target area may be statistical monitoring of passenger flow volume, for example, statistical monitoring of passenger flow volume in and out of the target area. As shown in fig. 4, the method comprises the following steps:
401. Project the three-dimensional human heads in the three-dimensional human head sequence onto the calibrated ground to obtain a projection trajectory of the target person.
If there are multiple target persons, the three-dimensional head of each can be projected onto the calibrated ground of the three-dimensional space, yielding a projection trajectory for each target person.
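The projection step can be sketched as an orthogonal projection onto the calibrated plane n·x + d = 0; the ground plane z = 0 and the head positions below are assumed illustrative values.

```python
def project_to_plane(p, n, d):
    """Orthogonal projection of point p onto the plane n.x + d = 0
    (n must be unit-length): p' = p - (n.p + d) * n."""
    dist = sum(n[i] * p[i] for i in range(3)) + d
    return [p[i] - dist * n[i] for i in range(3)]

# Assumed calibrated ground z = 0 in the chosen camera coordinate system:
n_ground, d_ground = [0.0, 0.0, 1.0], 0.0
head_track = [[0.1, 0.2, 1.7], [0.3, 0.2, 1.7], [0.5, 0.2, 1.7]]
footprint = [project_to_plane(p, n_ground, d_ground) for p in head_track]
# each footprint point keeps x, y and lands on z = 0
```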
402. Monitor the passenger flow of the target area according to the projection trajectory.
The monitoring of the passenger flow in the target area may be a statistical monitoring of the amount of passenger flow.
Optionally, because the ground calibrated in the three-dimensional space is constructed based on the first camera coordinate system or the second camera coordinate system, the camera coordinate system may also be converted into a world coordinate system, and the projection trajectory of the target person is further converted into a trajectory of the target person in the world coordinate system, so that the statistical monitoring of the amount of passenger flow in the target area is performed according to the trajectory of the target person in the world coordinate system.
In a possible embodiment, the calibrated ground of the three-dimensional space includes a target calibration area corresponding to the target area. The state information of the projection trajectory with respect to the target calibration area can be calculated at each time point to obtain a state sequence, and the passenger flow of the target area is then monitored according to the state sequence.
For example, take the entrance of a store, where the passenger volume entering and exiting the store is statistically monitored. The state sequence of the projection trajectory with respect to the target calibration area contains state information at each time point, which can be divided into valid area states and invalid states. Specifically, as shown in fig. 4a, the target calibration area may be a fixed area on the left and right sides of the store door on the calibrated ground, and the valid area states may further be divided into:
a: the outside of the store may indicate that the head of the person is projected to the outer side of the target calibration area at this time, and further indicate the outer side areas of the target person on both sides of the store area.
b: in the store, the head of the person may be projected on the inner side of the target calibration area at this time, and further, the inner areas of the target person on both sides of the store area may be shown.
c: the out-of-store state may indicate that the human head is projected outside the target calibration area at this time, and further, that the target person is outside the store-door outside area, and the inside of the store-door outside area is the inside area on both sides of the store-door area corresponding to the b state.
d: in the store, it can be shown that the head of a person is projected inside the target designation area at this time, and further that the outside of the inner area of the store door is the outside area on both sides of the store door area corresponding to the state of a.
The invalid states may be divided into:
t: id lost. The target person has moved out of the cameras' shooting range at this time.
n: uncertain region. This includes buffer zones near states a and b, and regions very close to or very far from the camera (usually caused by head-frame matching errors).
In this case, the state sequence is composed of a, b, c, d, t and n, and a rule-based search over the sequence counts the passenger volume. For example, if the state sequence of a target person is c … ca … ab … bd … dt, the person went from outside the store, through the outer and inner door zones, into the store, and then disappeared; the store-entering count is increased by 1. If the state sequence is d … db … ba … ac … ct, the person went from inside the store, through the door zones, out of the store, and then disappeared; the store-leaving count is increased by 1. If the state sequence is c … ca … ac … ca … ab … ba … ab … bd … dt, the person wandered back and forth at the door but finally entered the store, and the store-entering count is still increased by only 1. Therefore, a person who lingers at the store door is not counted multiple times, which improves the accuracy of passenger counting and thus of passenger flow monitoring.
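The counting rule above can be sketched with a simple first-zone/last-zone test, a simplification of the full rule search over the state sequence; the state strings below are illustrative and the helper names are not from the patent.

```python
def count_flow(state_seq):
    """Classify one person's state string as an entry, an exit, or neither.
    The first and last definite zones ('c' outside, 'd' inside) decide the
    direction, so wandering at the door in between still counts only once."""
    definite = [s for s in state_seq if s in 'cd']
    if not definite:
        return None
    if definite[0] == 'c' and definite[-1] == 'd':
        return 'enter'
    if definite[0] == 'd' and definite[-1] == 'c':
        return 'exit'
    return None

def tally(sequences):
    """Total store-entering and store-leaving counts over all persons."""
    entered = sum(1 for s in sequences if count_flow(s) == 'enter')
    exited = sum(1 for s in sequences if count_flow(s) == 'exit')
    return entered, exited

seqs = [
    "ccaabbddt",              # straight walk-in: one entry
    "ddbbaacct",              # straight walk-out: one exit
    "ccaaccaabbaabbddt",      # wanders at the door, finally enters: one entry
    "ccaacct",                # lingers outside and leaves: neither
]
# tally(seqs) gives 2 entries and 1 exit
```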
Of course, the projection trajectory may also be converted into a world coordinate system, the state information of the trajectory corresponding to the target person at each time point and the target area is calculated to obtain a state sequence of the target person and the target area, and then the passenger flow in the target area is monitored according to the state sequence.
It should be noted that the above-mentioned identifiers of the respective states may be set according to the needs of the user, and should not be considered as a limitation to the embodiment of the present invention. For example, a, b, c, d, t, n may be identified by 1, 2, 3, 4, 5, 6.
In the embodiment of the invention, the passenger flow statistics is carried out on the target personnel through the state sequence, so that the statistical accuracy can be increased.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a passenger flow monitoring device according to an embodiment of the present invention, and as shown in fig. 5, the device includes:
an obtaining module 501, configured to obtain a first target image sequence and a second target image sequence of a target area, where the first target image sequence and the second target image sequence are acquired at the same time and at different angles;
a processing module 502, configured to perform human head detection on the first target image sequence and the second target image sequence respectively to obtain a first human head frame sequence and a second human head frame sequence, where the first human head frame sequence includes a first human head frame of at least one target person, and the second human head frame sequence includes a second human head frame of at least one target person;
a matching module 503, configured to match the first human head frame with the second human head frame according to a time sequence relationship between the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person, where the paired human head frame sequence includes at least one paired human head frame of the target person;
a three-dimensional reconstruction module 504, configured to perform human head three-dimensional reconstruction in a preset three-dimensional space according to the paired human head frame sequence to obtain a three-dimensional human head sequence of the target person, where the three-dimensional human head sequence includes a three-dimensional human head of the target person;
and a monitoring module 505, configured to monitor passenger flow in the target area according to the three-dimensional head sequence.
Optionally, as shown in fig. 6, the first target image sequence is acquired by a first camera, and the second target image sequence is acquired by a second camera, where the apparatus further includes:
a calibration module 506, configured to perform ground calibration in a coordinate system of the first camera or the second camera to obtain a calibrated ground;
and a building module 507, configured to build the three-dimensional space based on the calibrated ground.
Optionally, as shown in fig. 7, the calibration module 506 includes:
the obtaining sub-module 5061 is configured to obtain calibration object information associated with the target area, where the calibration object information is calibration object information in a coordinate system of the first camera or the second camera;
and the first calibration submodule 5062 is used for performing ground calibration according to the calibration object information to obtain a calibrated ground.
Optionally, as shown in fig. 8, the calibration module 506 includes:
the first calculation submodule 5063 is used for calculating ground feature points corresponding to the first camera and the second camera, and triangulating the ground feature points to obtain three-dimensional space points corresponding to the ground feature points;
and the second calibration submodule 5064 is used for performing plane parameter fitting on the three-dimensional space points to obtain a calibrated ground.
Optionally, as shown in fig. 9, the three-dimensional reconstruction module 504 includes:
the second calculating submodule 5041 is configured to calculate an effective disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame;
the reconstruction submodule 5042 is configured to perform human head three-dimensional reconstruction in a preset three-dimensional space through the effective disparity map to obtain a current frame three-dimensional human head;
and the sequence sub-module 5043 is used for obtaining a three-dimensional human head sequence of the target person based on the current frame three-dimensional human head.
Optionally, as shown in fig. 10, the second calculating sub-module 5041 includes:
the first calculation unit 50411 is configured to calculate an effective disparity interval according to a preset prior disparity;
a second calculating unit 50412, configured to calculate a disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame, and determine whether the disparity map falls into the effective disparity interval;
the determining unit 50413 is configured to determine that the disparity map is an effective disparity map if the disparity map falls into the effective disparity interval.
Optionally, as shown in fig. 11, the reconstruction sub-module 5042 includes:
a third calculation unit 50421, configured to calculate a final disparity value according to the effective disparity map;
a fourth calculating unit 50422, configured to calculate, according to preset first camera internal reference or second camera internal reference, head depth information of the target person;
and the reconstruction unit 50423 is configured to perform human head three-dimensional reconstruction in the preset three-dimensional space according to the human head depth information to obtain a current frame three-dimensional human head.
Optionally, as shown in fig. 12, the monitoring module 505 includes:
the projection submodule 5051 is used for projecting the three-dimensional human head in the three-dimensional human head sequence to the calibrated ground to obtain a projection track of the target person;
and the monitoring submodule 5052 is configured to monitor passenger flow of the target area according to the projection trajectory.
Optionally, as shown in fig. 13, the calibrated ground includes a target calibration area corresponding to the target area, and the monitoring sub-module 5052 includes:
a fifth calculating unit 50521, configured to calculate state information of the projection trajectory at each time point and the target calibration area, so as to obtain a state sequence of the projection trajectory and the target calibration area;
a monitoring unit 50522 is configured to monitor the passenger flow of the target area according to the state sequence.
It should be noted that the passenger flow monitoring device provided in the embodiment of the present invention may be applied to devices capable of passenger flow monitoring, such as a mobile phone, a monitor, a computer, or a server.
The passenger flow monitoring device provided by the embodiment of the invention can realize each process realized by the passenger flow monitoring method in the method embodiment, and can achieve the same beneficial effect. To avoid repetition, further description is omitted here.
Referring to fig. 14, fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 14, including: a memory 1402, a processor 1401, and a computer program stored on the memory 1402 and executable on the processor 1401, wherein:
the processor 1401 is used for calling the computer program stored in the memory 1402, and executing the following steps:
acquiring a first target image sequence and a second target image sequence of a target area, wherein the first target image sequence and the second target image sequence are acquired at the same time and at different angles;
respectively carrying out human head detection on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence, wherein the first human head frame sequence comprises a first human head frame of at least one target person, and the second human head frame sequence comprises a second human head frame of at least one target person;
matching the first human head frame with the second human head frame according to the time sequence relation of the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person, wherein the paired human head frame sequence comprises the paired human head frame of at least one target person;
according to the paired human head frame sequence, performing human head three-dimensional reconstruction in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person, wherein the three-dimensional human head sequence comprises the three-dimensional human head of the target person;
and monitoring passenger flow of the target area according to the three-dimensional head sequence.
Optionally, the first target image sequence is acquired by a first camera, the second target image sequence is acquired by a second camera, and the processor 1401 further performs the steps including:
carrying out ground calibration under the coordinate system of the first camera or the second camera to obtain a calibrated ground;
and constructing and obtaining the three-dimensional space based on the calibrated ground.
Optionally, the performing, by the processor 1401, ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground includes:
obtaining calibration object information associated with the target area, wherein the calibration object information is calibration object information under a coordinate system of the first camera or the second camera;
and carrying out ground calibration according to the information of the calibration object to obtain a calibrated ground.
Optionally, the performing, by the processor 1401, ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground includes:
calculating ground feature points corresponding to a first camera and a second camera, and triangulating the ground feature points to obtain three-dimensional space points corresponding to the ground feature points;
and performing plane parameter fitting on the three-dimensional space points to obtain a calibrated ground.
Optionally, the performing, by the processor 1401, the human head three-dimensional reconstruction of the paired human head frame sequence in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person includes:
calculating an effective disparity map of a first human head frame and a second human head frame in the matched human head frame in the current frame;
performing human head three-dimensional reconstruction in a preset three-dimensional space through the effective disparity map to obtain a current frame three-dimensional human head;
and obtaining a three-dimensional head sequence of the target person based on the current frame three-dimensional head.
Optionally, the calculating, performed by the processor 1401, an effective disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame includes:
calculating to obtain an effective parallax interval according to a preset prior parallax;
calculating a disparity map of a first human head frame and a second human head frame in the matched human head frame in the current frame, and judging whether the disparity map falls into the effective disparity interval or not;
and if the disparity map falls into the effective disparity interval, judging that the disparity map is an effective disparity map.
Optionally, the performing, by the processor 1401, three-dimensional reconstruction of a human head in a preset three-dimensional space through the effective disparity map to obtain a current frame three-dimensional human head includes:
calculating a final parallax value according to the effective parallax map;
calculating to obtain the head depth information of the target person according to preset first camera internal reference or second camera internal reference;
and according to the human head depth information, performing human head three-dimensional reconstruction in the preset three-dimensional space to obtain the current frame three-dimensional human head.
Optionally, the three-dimensional space includes a calibrated ground, and the monitoring of the passenger flow in the target area according to the three-dimensional head sequence performed by the processor 1401 includes:
projecting the three-dimensional human head in the three-dimensional human head sequence to the calibrated ground to obtain a projection track of a target person;
and monitoring passenger flow of the target area according to the projection track.
Optionally, the step of monitoring passenger flow in the target area according to the projection trajectory includes:
calculating the state information of the projection track at each time point and the target calibration area to obtain a state sequence of the projection track and the target calibration area;
and monitoring passenger flow of the target area according to the state sequence.
The electronic device may be a mobile phone, a monitor, a computer, a server, or another device capable of monitoring passenger flow.
The electronic device provided by the embodiment of the invention can realize each process realized by the passenger flow monitoring method in the method embodiment, can achieve the same beneficial effects, and is not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the passenger flow monitoring method provided in the embodiment of the present invention, and can achieve the same technical effect, and is not described herein again to avoid repetition.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (12)

1. A method of monitoring passenger flow, comprising the steps of:
acquiring a first target image sequence and a second target image sequence of a target area, wherein the first target image sequence and the second target image sequence are acquired at the same time and at different angles;
respectively carrying out human head detection on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence, wherein the first human head frame sequence comprises a first human head frame of at least one target person, and the second human head frame sequence comprises a second human head frame of at least one target person;
matching the first human head frame with the second human head frame according to the time sequence relation of the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person, wherein the paired human head frame sequence comprises the paired human head frame of at least one target person;
according to the paired human head frame sequence, performing human head three-dimensional reconstruction in a preset three-dimensional space to obtain a three-dimensional human head sequence of the target person, wherein the three-dimensional human head sequence comprises the three-dimensional human head of the target person;
and monitoring passenger flow of the target area according to the three-dimensional head sequence.
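The matching step of claim 1 pairs head frames detected at the same time point in the two views. One simple realization (an illustrative sketch only; the patent does not specify the matching criterion, and the rectified-camera assumption, function name, and box format are all assumptions) exploits the fact that with rectified cameras a matched pair lies on roughly the same image row:

```python
def match_head_frames(boxes_view1, boxes_view2, max_row_gap=20.0):
    """Greedily pair head frames from two synchronized, rectified views.

    boxes_view1, boxes_view2: lists of (x, y, w, h) head boxes detected in
    the same frame of each view, with (x, y) the top-left corner.
    Returns a list of (index_in_view1, index_in_view2) paired head frames.
    """
    pairs, used = [], set()
    for i, (_, y1, _, _) in enumerate(boxes_view1):
        best_j, best_gap = None, max_row_gap
        for j, (_, y2, _, _) in enumerate(boxes_view2):
            if j in used:
                continue
            gap = abs(y1 - y2)          # matched heads share the image row
            if gap < best_gap:
                best_j, best_gap = j, gap
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

Repeating this per frame, in time order, yields the paired human head frame sequence of each target person.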
2. The method of claim 1, wherein the first sequence of target images is acquired by a first camera and the second sequence of target images is acquired by a second camera, the method further comprising:
carrying out ground calibration under the coordinate system of the first camera or the second camera to obtain a calibrated ground;
and constructing and obtaining the three-dimensional space based on the calibrated ground.
3. The method of claim 2, wherein performing ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground comprises:
obtaining calibration object information associated with the target area, wherein the calibration object information is calibration object information under a coordinate system of the first camera or the second camera;
and carrying out ground calibration according to the information of the calibration object to obtain a calibrated ground.
4. The method of claim 2, wherein performing ground calibration in the coordinate system of the first camera or the second camera to obtain a calibrated ground comprises:
calculating ground feature points corresponding to the first camera or the second camera, and triangulating the ground feature points to obtain three-dimensional space points corresponding to the ground feature points;
and performing plane parameter fitting on the three-dimensional space points to obtain a calibrated ground.
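The plane-parameter fitting of claim 4 can be realized, for example, as a least-squares fit of z = ax + by + c to the triangulated three-dimensional space points (an illustrative sketch; the patent does not name a fitting method, and a robust variant such as RANSAC may be preferable when the ground feature points are noisy):

```python
import numpy as np

def fit_ground_plane(space_points):
    """Least-squares fit of the plane z = a*x + b*y + c.

    space_points: (N, 3) triangulated ground points; returns (a, b, c).
    """
    pts = np.asarray(space_points, dtype=float)
    # Design matrix [x, y, 1] against target z.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs
```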
5. The method of claim 1, wherein the performing human head three-dimensional reconstruction on the paired human head frame sequence in a preset three-dimensional space to obtain a three-dimensional human head sequence of a target person comprises:
calculating an effective disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame;
performing human head three-dimensional reconstruction in a preset three-dimensional space through the effective disparity map to obtain a current frame three-dimensional human head;
and obtaining a three-dimensional head sequence of the target person based on the current frame three-dimensional head.
6. The method of claim 5, wherein said calculating the effective disparity map for a first human head frame and a second human head frame of the paired human head frames in the current frame comprises:
calculating to obtain an effective parallax interval according to a preset prior parallax;
calculating a disparity map of a first human head frame and a second human head frame in the paired human head frames in the current frame, and judging whether the disparity map falls into the effective disparity interval;
and if the disparity map falls into the effective disparity interval, judging that the disparity map is an effective disparity map.
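The validity check of claim 6 can be sketched as follows (illustrative only; the interval bounds, the use of the median, and the tolerance value are assumptions — the patent states only that an effective interval is derived from a preset prior parallax):

```python
from statistics import median

def effective_disparity_interval(prior_disparity, tolerance=0.5):
    """Interval of plausible head disparities around the preset prior."""
    return (prior_disparity * (1.0 - tolerance),
            prior_disparity * (1.0 + tolerance))

def is_effective_disparity(disparity_values, prior_disparity):
    """Keep a head's disparity map only if its median disparity falls
    inside the effective interval; implausible stereo matches are discarded."""
    low, high = effective_disparity_interval(prior_disparity)
    return low <= median(disparity_values) <= high
```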
7. The method as claimed in claim 6, wherein the obtaining of the current frame three-dimensional human head by performing human head three-dimensional reconstruction in a preset three-dimensional space through the effective disparity map comprises:
calculating a final parallax value according to the effective parallax map;
calculating the head depth information of the target person according to the final disparity value and preset intrinsic parameters of the first camera or the second camera;
and according to the human head depth information, performing human head three-dimensional reconstruction in the preset three-dimensional space to obtain the current frame three-dimensional human head.
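The depth computation of claim 7 presumably follows the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the final disparity value (an illustrative sketch; the patent does not spell out the formula):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo depth Z = f * B / d (metres, for a baseline in metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700-pixel focal length and a 0.10 m baseline, a 35-pixel disparity places the head 2.0 m from the cameras.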
8. The method of any one of claims 1 to 7, wherein the three-dimensional space includes a nominal floor, and wherein said monitoring of the target area for passenger flow according to the three-dimensional head sequence comprises:
projecting the three-dimensional human head in the three-dimensional human head sequence to the calibrated ground to obtain a projection track of a target person;
and monitoring passenger flow of the target area according to the projection track.
9. The method of claim 8, wherein the calibrated floor comprises a target calibration area corresponding to the target area, and wherein monitoring the target area for passenger flow based on the projected trajectory comprises:
calculating state information of the projection track relative to the target calibration area at each time point to obtain a state sequence of the projection track with respect to the target calibration area;
and monitoring passenger flow of the target area according to the state sequence.
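Claim 9's state sequence lends itself to passenger-flow counting by detecting transitions between "outside" and "inside" the target calibration area (an illustrative sketch; the patent does not define the state encoding, so a boolean inside/outside state per time point is assumed):

```python
def count_entries_and_exits(state_sequence):
    """Count area entries and exits from a per-time-point state sequence.

    state_sequence: iterable of booleans, True when the projection-track
    point lies inside the target calibration area at that time point.
    Returns (entries, exits).
    """
    entries = exits = 0
    previous = None
    for inside in state_sequence:
        if previous is not None:
            if inside and not previous:
                entries += 1            # outside -> inside transition
            elif previous and not inside:
                exits += 1              # inside -> outside transition
        previous = inside
    return entries, exits
```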
10. A device for monitoring passenger flow, the device comprising:
an acquisition module, used for acquiring a first target image sequence and a second target image sequence of a target area, wherein the first target image sequence and the second target image sequence are acquired at the same time and at different angles;
the processing module is used for respectively carrying out human head detection on the first target image sequence and the second target image sequence to obtain a first human head frame sequence and a second human head frame sequence, wherein the first human head frame sequence comprises a first human head frame of at least one target person, and the second human head frame sequence comprises a second human head frame of at least one target person;
the matching module is used for matching the first human head frame with the second human head frame according to the time sequence relation of the first human head frame sequence and the second human head frame sequence to obtain a paired human head frame sequence of each target person, wherein the paired human head frame sequence comprises the paired human head frame of at least one target person;
the three-dimensional reconstruction module is used for performing human head three-dimensional reconstruction in a preset three-dimensional space according to the paired human head frame sequence to obtain a three-dimensional human head sequence of the target person, wherein the three-dimensional human head sequence comprises the three-dimensional human head of the target person;
and the monitoring module is used for monitoring passenger flow of the target area according to the three-dimensional head sequence.
11. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the passenger flow monitoring method according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the passenger flow monitoring method according to any one of claims 1 to 9.
CN202011466662.XA 2020-12-14 2020-12-14 Passenger flow monitoring method and device, electronic equipment and storage medium Pending CN112633096A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011466662.XA CN112633096A (en) 2020-12-14 2020-12-14 Passenger flow monitoring method and device, electronic equipment and storage medium
PCT/CN2021/114965 WO2022127181A1 (en) 2020-12-14 2021-08-27 Passenger flow monitoring method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011466662.XA CN112633096A (en) 2020-12-14 2020-12-14 Passenger flow monitoring method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112633096A true CN112633096A (en) 2021-04-09

Family

ID=75312656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011466662.XA Pending CN112633096A (en) 2020-12-14 2020-12-14 Passenger flow monitoring method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112633096A (en)
WO (1) WO2022127181A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326830A (en) * 2021-08-04 2021-08-31 北京文安智能技术股份有限公司 Passenger flow statistical model training method and passenger flow statistical method based on overlook images
WO2022127181A1 (en) * 2020-12-14 2022-06-23 深圳云天励飞技术股份有限公司 Passenger flow monitoring method and apparatus, and electronic device and storage medium
CN114677651A (en) * 2022-05-30 2022-06-28 山东极视角科技有限公司 Passenger flow statistical method based on low-image-quality low-frame-rate video and related device
WO2022142413A1 (en) * 2020-12-31 2022-07-07 深圳云天励飞技术股份有限公司 Method and apparatus for predicting customer flow volume of mall, and electronic device and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2518661A2 (en) * 2011-04-29 2012-10-31 Tata Consultancy Services Limited System and method for human detection and counting using background modeling, hog and haar features
JP2013093013A (en) * 2011-10-06 2013-05-16 Ricoh Co Ltd Image processing device and vehicle
US20140063188A1 (en) * 2012-09-06 2014-03-06 Nokia Corporation Apparatus, a Method and a Computer Program for Image Processing
US20140241576A1 (en) * 2013-02-28 2014-08-28 Electronics And Telecommunications Research Institute Apparatus and method for camera tracking
CN104103077A (en) * 2014-07-29 2014-10-15 浙江宇视科技有限公司 Human head detecting method and human head detecting device
CN105160649A (en) * 2015-06-30 2015-12-16 上海交通大学 Multi-target tracking method and system based on kernel function unsupervised clustering
US20160379375A1 (en) * 2014-03-14 2016-12-29 Huawei Technologies Co., Ltd. Camera Tracking Method and Apparatus
CN106709432A (en) * 2016-12-06 2017-05-24 成都通甲优博科技有限责任公司 Binocular stereoscopic vision based head detecting and counting method
CN107133988A (en) * 2017-06-06 2017-09-05 科大讯飞股份有限公司 The scaling method and calibration system of camera in vehicle-mounted panoramic viewing system
CN109191504A (en) * 2018-08-01 2019-01-11 南京航空航天大学 A kind of unmanned plane target tracking
CN109785396A (en) * 2019-01-23 2019-05-21 中国科学院自动化研究所 Writing posture monitoring method based on binocular camera, system, device
CN110222673A (en) * 2019-06-21 2019-09-10 杭州宇泛智能科技有限公司 A kind of passenger flow statistical method based on head detection
CN111028271A (en) * 2019-12-06 2020-04-17 浩云科技股份有限公司 Multi-camera personnel three-dimensional positioning and tracking system based on human skeleton detection
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN111354077A (en) * 2020-03-02 2020-06-30 东南大学 Three-dimensional face reconstruction method based on binocular vision
CN111899282A (en) * 2020-07-30 2020-11-06 平安科技(深圳)有限公司 Pedestrian trajectory tracking method and device based on binocular camera calibration

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455792A (en) * 2013-08-20 2013-12-18 深圳市飞瑞斯科技有限公司 Guest flow statistics method and system
WO2018135510A1 (en) * 2017-01-19 2018-07-26 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional reconstruction method and three-dimensional reconstruction device
CN108446611A (en) * 2018-03-06 2018-08-24 深圳市图敏智能视频股份有限公司 A kind of associated binocular image bus passenger flow computational methods of vehicle door status
CN112633096A (en) * 2020-12-14 2021-04-09 深圳云天励飞技术股份有限公司 Passenger flow monitoring method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
WO2022127181A1 (en) 2022-06-23

Similar Documents

Publication Publication Date Title
CN112633096A (en) Passenger flow monitoring method and device, electronic equipment and storage medium
JP6295645B2 (en) Object detection method and object detection apparatus
US9792505B2 (en) Video monitoring method, video monitoring system and computer program product
JP5180733B2 (en) Moving object tracking device
CA2990758C (en) Methods circuits devices systems and associated computer executable code for multi factor image feature registration and tracking
CN104966062B (en) Video monitoring method and device
CN112102409B (en) Target detection method, device, equipment and storage medium
CN106033601A (en) Method and apparatus for detecting abnormal situation
CN110675426B (en) Human body tracking method, device, equipment and storage medium
Nair Camera-based object detection, identification and distance estimation
JP5027741B2 (en) Image monitoring device
CN109697444B (en) Object identification method and device based on depth image, equipment and storage medium
CN113313097B (en) Face recognition method, terminal and computer readable storage medium
Liem et al. Multi-person localization and track assignment in overlapping camera views
JP5027758B2 (en) Image monitoring device
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
Rougier et al. 3D head trajectory using a single camera
KR101117235B1 (en) Apparatus and method for recognizing traffic accident
JP6548306B2 (en) Image analysis apparatus, program and method for tracking a person appearing in a captured image of a camera
CN114694204A (en) Social distance detection method and device, electronic equipment and storage medium
JP2017182295A (en) Image processor
CN107802468B (en) Blind guiding method and blind guiding system
CN112686173A (en) Passenger flow counting method and device, electronic equipment and storage medium
KR102461980B1 (en) Method for producing three-dimensional map
Lee et al. Development of people counting algorithm using stereo camera on NVIDIA Jetson TX2

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination