CN112446355B - Pedestrian recognition method and people flow statistics system in public places - Google Patents

Pedestrian recognition method and people flow statistics system in public places

Info

Publication number
CN112446355B
CN112446355B (application CN202011477711.XA)
Authority
CN
China
Prior art keywords
pedestrian
pedestrians
matching
dimensional
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011477711.XA
Other languages
Chinese (zh)
Other versions
CN112446355A (en)
Inventor
舒元昊
张一杨
马小雯
刘倚剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETHIK Group Ltd
Original Assignee
CETHIK Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETHIK Group Ltd filed Critical CETHIK Group Ltd
Priority to CN202011477711.XA priority Critical patent/CN112446355B/en
Priority to PCT/CN2020/137803 priority patent/WO2022126668A1/en
Publication of CN112446355A publication Critical patent/CN112446355A/en
Application granted granted Critical
Publication of CN112446355B publication Critical patent/CN112446355B/en
Legal status: Active

Classifications

    • G06V 20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/20081: Training; learning
    • G06T 2207/30196: Human being; person
    • G06T 2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian recognition method for public places and a people flow statistics system. The method comprises the following steps: acquiring an optical image, detecting pedestrians in the optical image, and outputting a three-dimensional bounding box and a corresponding timestamp for each pedestrian; acquiring pedestrian features based on the optical image and the three-dimensional bounding box; performing pedestrian recognition based on the pedestrian features in a historical feature library; and, according to the current matching result and the historical matching results, marking the state of the pedestrian corresponding to the three-dimensional bounding box as first match successful, lost, re-matched after loss, continuously matched, or out of the shooting range. A people flow statistics system built on this method can accurately count the flow of people entering and leaving the statistical range per unit time.

Description

Pedestrian recognition method and people flow statistics system in public places
Technical Field
The invention belongs to the technical field of computer vision and in particular relates to a pedestrian recognition method for public places and a people flow statistics system.
Background
People flow statistics involves identifying pedestrians and measuring their residence time and entry/exit trajectories within a statistical area. Common statistical methods fall into three groups. Base-station methods (e.g. Bluetooth or 4G base stations) suffer from insufficient positioning accuracy. Methods based on non-optical imaging devices, such as infrared arrays and millimeter-wave radar, position targets relatively accurately but cannot reliably identify individual pedestrians, which easily leads to repeated counting. Methods based on optical imaging devices such as cameras combine high positioning accuracy with accurate pedestrian identification, but suffer from pedestrian occlusion; statistical methods based on pedestrian re-identification also count repeatedly when a pedestrian's motion deviates from the filter-predicted trajectory.
Disclosure of Invention
The invention aims to provide a pedestrian recognition method for public places and a people flow statistics system that recognize pedestrians accurately and achieve high people flow statistics accuracy.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a public place pedestrian recognition method, comprising:
step 1, acquiring an optical image, detecting pedestrians in the optical image, and outputting a three-dimensional bounding box of the pedestrians and corresponding time stamps;
Step 2, acquiring pedestrian characteristics based on the optical image and the three-dimensional bounding box, comprising:
step 2.1, extracting the human body shape and characteristics of pedestrians in the optical image as the apparent characteristics of pedestrians of each pedestrian, and storing the pedestrian appearance characteristics into a historical characteristic library;
step 2.2, extracting three-dimensional motion characteristics of pedestrians of each pedestrian based on the current three-dimensional bounding box of the pedestrian and the three-dimensional bounding boxes distributed according to time sequences in a historical characteristic library, and storing the three-dimensional motion characteristics of the pedestrians in the historical characteristic library;
step 2.3, predicting the three-dimensional motion characteristics of the pedestrian at the next moment based on the three-dimensional motion characteristics of the pedestrian and the three-dimensional motion characteristics of the pedestrian in the appointed time in the historical characteristic library, and storing the three-dimensional motion characteristics of the pedestrian in the historical characteristic library;
step 3, pedestrian recognition is carried out based on pedestrian features in the history feature library, and the method comprises the following steps:
step 3.1, calculating apparent feature distances one by one based on the present apparent features of pedestrians and the apparent features of pedestrians in each history in the history feature library, if the apparent feature distances are larger than the apparent threshold value, judging that the present apparent features of pedestrians and the apparent features of pedestrians in the history feature library belong to the same pedestrians, and determining the present apparent feature distances as the apparent feature distances of the pedestrians;
Step 3.2, calculating a spatial feature distance one by one based on the current three-dimensional motion feature of the pedestrian and the three-dimensional motion feature of the pedestrian at the next moment predicted at the last moment of each pedestrian in the historical feature library, judging that the current three-dimensional motion feature of the pedestrian and the three-dimensional motion feature of the pedestrian at the next moment predicted at the last moment in the historical feature library belong to the same pedestrian if the spatial feature distance is larger than a spatial threshold, and determining the current spatial feature distance as the spatial feature distance of the pedestrian;
step 3.3, judging whether the motion pattern of the same pedestrian is met or not based on the current three-dimensional motion characteristics, apparent characteristic distance and spatial characteristic distance of the pedestrian and the historical three-dimensional motion characteristics of each pedestrian in the historical characteristic library, and outputting the motion pattern matching degree as the motion pattern matching degree of the pedestrian;
step 3.4, weighting calculation is carried out on the apparent feature distance, the space feature distance and the motion pattern matching degree of the same pedestrian to obtain a matching result of the pedestrian in the current three-dimensional bounding box and the pedestrian in the historical feature library, wherein the matching result comprises matching success or matching failure and pedestrian information obtained by matching when the matching is successful;
And 4, marking the pedestrian state of the pedestrian corresponding to the three-dimensional bounding box as successful primary matching, successful re-matching after losing, successful continuous matching or leaving the shooting range according to the matching result of the time and the historical matching result.
Several alternatives are provided below; they are not additional limitations on the overall scheme above but only further additions or preferences. Provided there is no technical or logical contradiction, each alternative may be combined with the overall scheme individually, and multiple alternatives may be combined with one another.
Preferably, detecting pedestrians in the optical image and outputting the three-dimensional bounding box of each pedestrian includes:
calibrating the camera that acquires the optical image, to obtain a mapping relation between pixels in the optical image and distance to the camera;
detecting pedestrians in the optical image, and acquiring a two-dimensional bounding box of each pedestrian in the optical image;
and obtaining the pedestrian's three-dimensional bounding box based on the two-dimensional bounding box and the mapping relation.
Preferably, extracting each pedestrian's three-dimensional motion feature based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library includes:
step 2.2.1, direction vector extraction: extracting the pedestrian's movement direction in the horizontal direction and in the vertical direction from the current three-dimensional bounding box and the historical three-dimensional bounding boxes;
step 2.2.2, motion speed extraction: extracting the pedestrian's motion speed in the horizontal direction and in the vertical direction from the current three-dimensional bounding box and the historical three-dimensional bounding boxes;
step 2.2.3, relative position extraction: outputting the pedestrian's coordinates in a camera-centered three-dimensional coordinate system, based on the current three-dimensional bounding box and the historical three-dimensional bounding boxes and the mapping relation obtained by camera calibration;
step 2.2.4, taking the direction vector, motion speed and relative position extracted in steps 2.2.1-2.2.3 as the pedestrian's three-dimensional motion feature.
Preferably, marking the state of the pedestrian corresponding to the three-dimensional bounding box as first match successful, lost, re-matched after loss, continuously matched, or out of the shooting range according to the current matching result and the historical matching results includes:
if the pedestrian features are extracted successfully but the matching result is matching failure, marking the current pedestrian state as first match successful;
if the same pedestrian in the historical matching results fails to match M consecutive times, marking that pedestrian's state as lost;
if a pedestrian marked as lost then matches successfully, updating that pedestrian's state to re-matched after loss;
if the same pedestrian in the historical matching results matches successfully L consecutive times, updating that pedestrian's state to continuously matched;
if the same pedestrian in the historical matching results fails to match N consecutive times, marking that pedestrian's state as out of the shooting range, where M is smaller than N.
Preferably, if the current pedestrian state is marked as first match successful, new pedestrian information is allocated to the pedestrian in the historical feature library, and the pedestrian's features are associated with the newly allocated pedestrian information.
The invention also provides a people flow statistics system, comprising:
a pedestrian detection module, configured to acquire the optical image, detect pedestrians in the optical image, and output a three-dimensional bounding box and a corresponding timestamp for each pedestrian;
a feature extraction module, configured to acquire pedestrian features based on the optical image and the three-dimensional bounding box, specifically executing the following steps:
a. extracting the body shape and appearance attributes of each pedestrian in the optical image as that pedestrian's apparent feature, and storing the apparent feature in the historical feature library;
b. extracting each pedestrian's three-dimensional motion feature based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library, and storing it in the historical feature library;
c. predicting each pedestrian's three-dimensional motion feature at the next moment based on the pedestrian's current three-dimensional motion feature and the three-dimensional motion features within a specified time in the historical feature library, and storing the predicted feature in the historical feature library;
a pedestrian recognition module, configured to recognize pedestrians based on the pedestrian features in the historical feature library, specifically executing the following steps:
a. calculating apparent feature distances one by one between the current apparent feature and each historical apparent feature in the historical feature library; if an apparent feature distance is greater than the apparent threshold, judging that the current apparent feature and that historical apparent feature belong to the same pedestrian, and taking this distance as the pedestrian's apparent feature distance;
b. calculating spatial feature distances one by one between the current three-dimensional motion feature and the next-moment three-dimensional motion feature predicted at the previous moment for each pedestrian in the historical feature library; if a spatial feature distance is greater than the spatial threshold, judging that the two features belong to the same pedestrian, and taking this distance as the pedestrian's spatial feature distance;
c. judging whether the pedestrian conforms to the motion pattern of the same pedestrian based on the current three-dimensional motion feature, the apparent feature distance, the spatial feature distance, and the historical three-dimensional motion features of each pedestrian in the historical feature library, and outputting the resulting motion pattern matching degree as that pedestrian's motion pattern matching degree;
d. performing a weighted calculation over the apparent feature distance, the spatial feature distance and the motion pattern matching degree of the same pedestrian to obtain a matching result between the pedestrian in the current three-dimensional bounding box and the pedestrians in the historical feature library, the matching result being matching success or matching failure, plus the matched pedestrian information when matching succeeds;
a pedestrian marking module, configured to mark the state of the pedestrian corresponding to the three-dimensional bounding box as first match successful, lost, re-matched after loss, continuously matched, or out of the shooting range according to the current matching result and the historical matching results;
and a people flow statistics module, configured to count, according to the pedestrian states, the people flow within the statistical range corresponding to the optical image during a preset time.
Preferably, detecting pedestrians in the optical image and outputting the three-dimensional bounding box of each pedestrian performs the following operations:
calibrating the camera that acquires the optical image, to obtain a mapping relation between pixels in the optical image and distance to the camera;
detecting pedestrians in the optical image, and acquiring a two-dimensional bounding box of each pedestrian in the optical image;
and obtaining the pedestrian's three-dimensional bounding box based on the two-dimensional bounding box and the mapping relation.
Preferably, extracting each pedestrian's three-dimensional motion feature based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library performs the following operations:
direction vector extraction: extracting the pedestrian's movement direction in the horizontal direction and in the vertical direction from the current three-dimensional bounding box and the historical three-dimensional bounding boxes;
motion speed extraction: extracting the pedestrian's motion speed in the horizontal direction and in the vertical direction from the current three-dimensional bounding box and the historical three-dimensional bounding boxes;
relative position extraction: outputting the pedestrian's coordinates in a camera-centered three-dimensional coordinate system, based on the current three-dimensional bounding box and the historical three-dimensional bounding boxes and the mapping relation obtained by camera calibration;
feature integration: taking the extracted direction vector, motion speed and relative position as the pedestrian's three-dimensional motion feature.
Preferably, marking the state of the pedestrian corresponding to the three-dimensional bounding box as first match successful, lost, re-matched after loss, continuously matched, or out of the shooting range according to the current matching result and the historical matching results performs the following operations:
if the pedestrian features are extracted successfully but the matching result is matching failure, marking the current pedestrian state as first match successful;
if the same pedestrian in the historical matching results fails to match M consecutive times, marking that pedestrian's state as lost;
if a pedestrian marked as lost then matches successfully, updating that pedestrian's state to re-matched after loss;
if the same pedestrian in the historical matching results matches successfully L consecutive times, updating that pedestrian's state to continuously matched;
if the same pedestrian in the historical matching results fails to match N consecutive times, marking that pedestrian's state as out of the shooting range, where M is smaller than N.
Preferably, if the current pedestrian state is marked as first match successful, new pedestrian information is allocated to the pedestrian in the historical feature library, and the pedestrian's features are associated with the newly allocated pedestrian information.
The pedestrian recognition method for public places provided by the invention comprehensively considers pedestrians' apparent features, three-dimensional motion features and motion patterns; it recognizes pedestrians accurately and acquires the time and position at which pedestrians enter and leave the statistical range as well as their movement trajectories within it. A people flow statistics system is provided on the basis of this method and can accurately count the flow of people entering and leaving the statistical range per unit time.
Drawings
FIG. 1 is a flow chart of the method for identifying pedestrians in public places according to the present invention;
FIG. 2 is a flow chart of the present invention outputting a three-dimensional bounding box of a pedestrian;
FIG. 3 is a flow chart of the invention for acquiring pedestrian features based on an optical image and a three-dimensional bounding box;
FIG. 4 is a schematic diagram of the invention for extracting motion features according to human body structure in a right-hand coordinate system;
FIG. 5 is a flow chart of pedestrian recognition based on pedestrian features in a historical feature library in accordance with the present invention;
FIG. 6 is a flow chart of pedestrian status tagging in accordance with the present invention;
FIG. 7 is a block diagram of the people flow statistics system of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
In one embodiment, a pedestrian recognition method for public places is provided that recognizes pedestrians accurately. It can be used in scenarios such as urban planning based on pedestrian recognition statistics, adjusting business strategies from shopping mall people flow statistics, and adjusting subway schedules from subway station people flow statistics.
As shown in fig. 1, the public place pedestrian recognition method in the present embodiment includes the steps of:
and step 1, acquiring an optical image, detecting pedestrians in the optical image, and outputting a three-dimensional bounding box of the pedestrians and corresponding time stamps.
This embodiment obtains the optical image with a camera, and the obtained timestamp is the time at which the camera captured the optical image. It is readily understood that the optical image may be acquired by any image acquisition device; this embodiment takes a camera as an example.
Since a three-dimensional bounding box carries depth information while the optical image contains none, this embodiment forms the three-dimensional bounding box through the following steps, as shown in FIG. 2:
calibrating the camera that acquires the optical image, to obtain a mapping relation between pixels in the optical image and distance to the camera; detecting pedestrians in the optical image and acquiring a two-dimensional bounding box (BBox) of each pedestrian; and obtaining the pedestrian's three-dimensional bounding box (3D Bounding Box, 3D BBox) based on the two-dimensional bounding box and the mapping relation.
In this embodiment, camera calibration yields the mapping relation between pedestrian pixels in the optical image and their distance to the camera, and the corresponding depth information is obtained from this mapping. The depth information reflects the actual distance between the pedestrian and the camera and therefore captures the pedestrian's movement, which makes it convenient to extract the pedestrian's three-dimensional motion feature.
In this embodiment, a monocular fixed-focus camera captures the video, and the mapping relation is calibrated with a cube of 1 meter per side, each face of which is evenly divided into 100 alternating black and white squares; the camera's shooting range is the statistical range. A calibration sketch follows.
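For concreteness, here is a minimal calibration sketch in Python with OpenCV, assuming each 10x10 black-and-white cube face can be treated as a chessboard target with 9x9 inner corners; the image folder, pattern size and square size are illustrative assumptions, not details fixed by the patent:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 9)   # inner corners of a 10x10 checkered cube face (assumption)
SQUARE = 0.1       # 1 m face / 10 squares = 0.1 m per square

# Template of the 3D corner grid on one face (taking that face as the z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("calib/*.jpg"):        # hypothetical calibration shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics K and distortion coefficients define the pixel-to-ray mapping;
# with the cube's known size this yields the pixel-to-distance relation.
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
print("reprojection error:", err)
```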
It should be noted that calibration here uses conventional camera calibration techniques and is not limited to a specific procedure. Obtaining depth information from the calibrated mapping relation is the preferred approach of this embodiment but not the only means; for example, depth information could also be obtained by combining the camera with a depth camera.
In this embodiment, pedestrians in the optical image are detected with a pedestrian detection method that outputs a two-dimensional bounding box. The detection method is conventional in image recognition, for example a recognition network extended from YOLO and trained on a pedestrian dataset. To obtain the three-dimensional bounding box, the two-dimensional bounding box and the mapping relation are input to a three-dimensional estimation method, which outputs the three-dimensional bounding box.
The three-dimensional estimation method used in this embodiment is a monocular depth estimation method based on optical flow; it outputs inverse depth, from which the depth information is computed. Because the depth information has different error coefficients at different ranges from the camera, this embodiment applies an error matrix. A sketch of the lifting step follows.
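The patent does not specify the estimator's interface or the error matrix, so the sketch below assumes an inverse-depth map, a simple range-binned error table, and standard pinhole back-projection; all names and values are illustrative:

```python
import numpy as np

# Hypothetical error coefficients for depth ranges 0-5 m, 5-10 m, 10-20 m.
ERROR_COEFF = {(0, 5): 1.02, (5, 10): 1.05, (10, 20): 1.10}

def error_coeff(depth_m: float) -> float:
    for (lo, hi), c in ERROR_COEFF.items():
        if lo <= depth_m < hi:
            return c
    return 1.15  # fallback for far ranges

def lift_bbox(bbox_2d, inv_depth, K):
    """bbox_2d = (x1, y1, x2, y2) in pixels; inv_depth is an HxW inverse-depth
    map; K is the 3x3 intrinsic matrix obtained from calibration."""
    x1, y1, x2, y2 = bbox_2d
    patch = inv_depth[y1:y2, x1:x2]
    depth = 1.0 / np.median(patch)           # inverse depth -> depth
    depth *= error_coeff(depth)              # range-dependent correction
    # Back-project the box corners to camera coordinates: X = depth * K^-1 [u v 1]^T
    K_inv = np.linalg.inv(K)
    corners = [depth * K_inv @ np.array([u, v, 1.0])
               for u, v in [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]]
    return np.array(corners), depth

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
inv_depth = np.full((480, 640), 1 / 8.0)     # toy map: everything at 8 m
print(lift_bbox((100, 80, 180, 400), inv_depth, K))
```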
Pedestrian recognition based on the three-dimensional bounding box effectively mitigates pedestrian occlusion: because the human body structure obeys geometric constraints, a partially occluded two-dimensional bounding box can be restored to a complete three-dimensional bounding box whose spatial error stays within an allowable range.
Step 2, acquiring pedestrian features based on the optical image and the three-dimensional bounding box, as shown in FIG. 3, including:
Step 2.1, extracting the body shape and appearance attributes of each pedestrian in the optical image as that pedestrian's apparent feature, and storing the apparent feature in the historical feature library.
Since body shape and appearance attributes are key features for distinguishing different pedestrians, this embodiment adopts a pedestrian apparent feature extraction method that mainly extracts visually observable body shape and appearance attributes. For ease of reference, the pedestrian apparent feature is denoted F_appearance.
In this embodiment, the embedding structure of a recognition network extended from YOLO is used as the pedestrian apparent feature extraction method.
Step 2.2, extracting each pedestrian's three-dimensional motion feature based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library, and storing the three-dimensional motion feature in the historical feature library.
The pedestrian's three-dimensional motion feature is the pedestrian's position change feature in three-dimensional space; for ease of reference it is denoted F_displacement. It is an important feature for associating data over time.
As shown in FIG. 4, this embodiment establishes a right-hand coordinate system and divides the three-dimensional bounding box into three parts according to the human body structure. The three-dimensional bounding box is input to the pedestrian three-dimensional motion feature extraction method, which mainly extracts the pedestrian's position change feature in three-dimensional space and outputs the pedestrian's three-dimensional motion feature.
In this embodiment, the three-dimensional motion feature extraction method consists of the following steps:
Step 2.2.1, direction vector extraction: extracting the pedestrian's movement direction in the horizontal direction and in the vertical direction from the current three-dimensional bounding box and the historical three-dimensional bounding boxes.
Since each three-dimensional bounding box carries a timestamp, the pedestrian's direction vector can be obtained from the position change of the time-ordered three-dimensional bounding boxes. With multiple three-dimensional bounding boxes available, the direction vector may be determined from only the two most recent boxes, or from multiple pairs of boxes, taking the mean, median or another statistic of the resulting direction vectors as the final direction vector.
Step 2.2.2, motion speed extraction: extracting the pedestrian's motion speed in the horizontal direction and in the vertical direction from the current three-dimensional bounding box and the historical three-dimensional bounding boxes.
Similar to direction vector extraction, the pedestrian's motion speed is obtained from the time differences and position differences of the time-ordered three-dimensional bounding boxes. The speed may be computed from only the two most recent boxes, or from multiple pairs of boxes, taking the mean, median or another statistic of the resulting speeds as the final motion speed.
Step 2.2.3, relative position extraction: outputting the pedestrian's coordinates in a camera-centered three-dimensional coordinate system, based on the current and historical three-dimensional bounding boxes and the mapping relation obtained by camera calibration.
When determining the coordinates, each three-dimensional bounding box is reduced to a single point whose coordinates serve as the pedestrian's coordinates; the point may be the center, a vertex, or any other fixed point of the three-dimensional bounding box.
Step 2.2.4, taking the direction vector, motion speed and relative position extracted in steps 2.2.1-2.2.3 as the pedestrian's three-dimensional motion feature, as sketched below.
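A minimal sketch of steps 2.2.1-2.2.3, assuming camera-centered coordinates in the right-hand system with y as the vertical axis and each box reduced to its center point, computing from only the two most recent observations as the text permits:

```python
import numpy as np

def motion_features(boxes):
    """boxes: list of (timestamp_s, center_xyz) sorted by time, newest last.
    Returns direction (unit vector), speed components, and relative position."""
    (t0, p0), (t1, p1) = boxes[-2], boxes[-1]   # two most recent observations
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    disp = p1 - p0
    dt = t1 - t0
    norm = np.linalg.norm(disp)
    direction = disp / norm if norm > 0 else np.zeros(3)
    v_horizontal = np.linalg.norm(disp[[0, 2]]) / dt   # x-z ground plane
    v_vertical = abs(disp[1]) / dt                     # y axis (assumption)
    return {"direction": direction,
            "speed": (v_horizontal, v_vertical),
            "position": p1}                            # camera-centered coords

track = [(0.0, (0.0, 1.7, 5.0)), (0.2, (0.2, 1.7, 4.8))]
print(motion_features(track))
```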
Since the historical feature library generally contains three-dimensional bounding boxes of multiple pedestrians, extracting a pedestrian's three-dimensional motion feature first matches the current three-dimensional bounding box against the historical ones (for example with the Hungarian matching algorithm) and uses the unused historical three-dimensional bounding box with the highest matching degree.
If the current three-dimensional bounding box belongs to a pedestrian newly entering the statistical range, matching cannot produce a corresponding historical three-dimensional bounding box; in that case the new pedestrian's direction vector and motion speed are set to default values (for example, an empty direction vector and a motion speed of 0), and the coordinates of the current three-dimensional bounding box are taken as the pedestrian's three-dimensional motion feature. A matching sketch follows.
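A sketch of the box association described above, using SciPy's Hungarian solver; the center-distance cost and the gating threshold are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_boxes(current_centers, history_centers, max_dist=1.0):
    """Assign current box centers to historical ones with the Hungarian
    algorithm; pairs farther apart than max_dist are treated as unmatched."""
    cost = np.linalg.norm(
        np.asarray(current_centers, float)[:, None, :] -
        np.asarray(history_centers, float)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    unmatched = [r for r in range(len(current_centers))
                 if r not in {m[0] for m in matches}]   # new pedestrians
    return matches, unmatched

cur = [(0.2, 1.7, 4.8), (3.0, 1.6, 9.0)]
hist = [(0.0, 1.7, 5.0)]
print(match_boxes(cur, hist))   # ([(0, 0)], [1])
```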
Step 2.3, predicting the pedestrian's three-dimensional motion feature at the next moment based on the pedestrian's current three-dimensional motion feature and the three-dimensional motion features within a specified time in the historical feature library, and storing the predicted feature in the historical feature library.
The pedestrian's three-dimensional motion feature at the next moment is denoted F_predicted. It is predicted by a trajectory prediction algorithm and characterizes the pedestrian's continuity in space and time; using this feature further alleviates the target loss caused by pedestrian occlusion. In this embodiment, Kalman filtering is used to predict each pedestrian's three-dimensional motion feature at the next moment, as sketched below.
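The patent names Kalman filtering without fixing a model, so this is a minimal constant-velocity sketch for the position component of F_predicted; the state layout and noise magnitudes are assumptions:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over the 3D position; the
    state is [x, y, z, vx, vy, vz]. Noise magnitudes are illustrative."""
    def __init__(self, pos, dt=0.2):
        self.x = np.hstack([pos, np.zeros(3)])
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)        # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = 0.01 * np.eye(6)              # process noise (assumption)
        self.R = 0.05 * np.eye(3)              # measurement noise (assumption)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                      # predicted next-moment position

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = ConstantVelocityKF(pos=(0.0, 1.7, 5.0))
kf.update((0.2, 1.7, 4.8))
print(kf.predict())
```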
Similar to the extraction of three-dimensional motion features, since the historical feature library generally contains three-dimensional motion features of multiple pedestrians, prediction first matches the current three-dimensional motion feature against the historical ones (for example with the Hungarian matching algorithm) and predicts from the unused historical three-dimensional motion feature with the highest matching degree.
If the current three-dimensional motion feature belongs to a pedestrian newly entering the statistical range, matching cannot produce a corresponding historical three-dimensional motion feature, and prediction is performed directly from the current three-dimensional motion feature.
Step 3, performing pedestrian recognition based on the pedestrian features in the historical feature library, as shown in FIG. 5, including:
Step 3.1, calculating apparent feature distances one by one between the current apparent feature and each historical apparent feature in the historical feature library; if an apparent feature distance is greater than the apparent threshold, judging that the current apparent feature and that historical apparent feature belong to the same pedestrian, and taking this distance as the pedestrian's apparent feature distance.
This embodiment uses a pedestrian apparent feature matching method that computes, from the current apparent feature F_appearance^t and a historical apparent feature F_appearance^(t-1) in the historical feature library, whether the two belong to the same pedestrian; for example, the apparent feature distance is a weighted sum of the Mahalanobis distance and the cosine distance with coefficients 0.02 and 0.98 respectively. The historical apparent feature in this embodiment is mainly the apparent feature of the previous moment. In another embodiment, whenever apparent features are judged to belong to the same pedestrian, an apparent-similar-pedestrian list can be maintained, one list per pedestrian, and apparent feature distances can be further discriminated using these lists.
For example, suppose the current moment contains apparent features of two persons A and B: A is highly similar to A_(t-1) (A's previous-moment apparent feature), B is also highly similar to A_(t-1), and B is even similar to A over several periods of A's apparent-similar-pedestrian list; but B is more similar to B's own list over several periods, so B can still be correctly judged to be B. Because this search is time-consuming, it is generally used in scenes with high requirements on pedestrian recognition. A distance computation sketch follows.
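A sketch of the weighted apparent feature distance with the embodiment's coefficients (0.02 for Mahalanobis, 0.98 for cosine); the 128-dimensional embeddings and the identity inverse covariance are placeholders:

```python
import numpy as np
from scipy.spatial.distance import cosine, mahalanobis

def apparent_distance(f_now, f_hist, cov_inv, w_maha=0.02, w_cos=0.98):
    """Weighted apparent feature distance as in the embodiment: 0.02 x
    Mahalanobis + 0.98 x cosine distance. cov_inv is the inverse covariance
    of the embedding distribution (its estimation is not specified)."""
    return (w_maha * mahalanobis(f_now, f_hist, cov_inv)
            + w_cos * cosine(f_now, f_hist))

rng = np.random.default_rng(0)
f_a, f_b = rng.normal(size=128), rng.normal(size=128)
cov_inv = np.eye(128)            # placeholder inverse covariance
print(apparent_distance(f_a, f_b, cov_inv))
```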
Step 3.2, calculating spatial feature distances one by one between the current three-dimensional motion feature and the next-moment three-dimensional motion feature predicted at the previous moment for each pedestrian in the historical feature library; if a spatial feature distance is greater than the spatial threshold, judging that the two features belong to the same pedestrian, and taking this distance as the pedestrian's spatial feature distance.
Here the next-moment three-dimensional motion feature predicted at the previous moment, i.e. a prediction of the current three-dimensional motion feature, is matched against the actual current three-dimensional motion feature. Because the same pedestrian's three-dimensional motion feature cannot change excessively, this match is a meaningful basis for judging whether two observations belong to the same pedestrian. In this embodiment, the Hungarian algorithm is used as the pedestrian three-dimensional motion feature matching method to compute the distances and judge whether the features belong to the same pedestrian.
Similar to the apparent-similar-pedestrian list, in another embodiment a spatially-similar-pedestrian list may be maintained whenever three-dimensional motion features are judged to belong to the same pedestrian.
Step 3.3, judging whether the pedestrian conforms to the motion pattern of the same pedestrian based on the current three-dimensional motion feature, the apparent feature distance, the spatial feature distance, and the historical three-dimensional motion features of each pedestrian in the historical feature library, and outputting the resulting motion pattern matching degree as that pedestrian's motion pattern matching degree.
For the pedestrian motion pattern, this embodiment focuses on the pedestrian's speed changes over time and movement logic at spatial positions; the movement logic includes, but is not limited to, common behaviors such as turning back, staying in place, jogging and squatting. Considering normal pedestrian movement, this embodiment focuses on the target's speed change within 3 seconds and the target's movement logic within the shooting space. Since the camera acquires optical images at a preset interval, pedestrians whose speed changes are physically reasonable can be judged to be the same pedestrian.
The current pedestrian's three-dimensional motion feature, apparent feature distance and spatial feature distance are input into the motion pattern matching method, which judges whether the pedestrian's behavior conforms to the common motion patterns of pedestrians in public places; if the motion pattern matching degree is smaller than the motion threshold, the observations belong to the same pedestrian, and a motion-pattern-similar-pedestrian list is established.
The pedestrian motion pattern matching degree can be output directly by a pre-trained neural network or judged directly against preset matching rules. The neural-network approach is more flexible in its judgments but must be trained on a large number of samples; the rule-based approach can be generated and used directly and is easy to add to, delete from and modify, but is less flexible. A suitable approach can be chosen according to actual requirements.
In one embodiment, based on actual observation and statistics, a matching rule is established as shown in Table 1 (specific probability values omitted); it represents the probability of transitioning from the behavior pattern of the previous stage to the corresponding behavior pattern of the current stage.
Table 1. Probability of transitioning from the previous stage's behavior pattern to the current stage's behavior pattern (specific values omitted)
When computing the motion pattern matching degree from Table 1, the historical three-dimensional motion feature corresponding to the apparent feature distance and the historical three-dimensional motion feature corresponding to the spatial feature distance are both retrieved. If the pedestrians corresponding to these two historical features are not the same pedestrian, the match is abandoned. If they are the same pedestrian, the pedestrian's previous-stage behavior pattern and current-stage behavior pattern are determined from the retrieved historical three-dimensional motion features and the current three-dimensional motion feature, and the probability value looked up in the table serves as the motion pattern matching degree.
It should be noted that a stage's behavior pattern is determined from at least two of a pedestrian's three-dimensional motion features. Since each three-dimensional motion feature carries a direction vector, a motion speed and coordinates, the change between two direction vectors can classify the current stage as moving forward, turning back or turning; the change in coordinates further distinguishes moving forward from stopping; and the change in motion speed further distinguishes steady motion from acceleration.
Of course, the table above is the preferred matching rule of this embodiment and may be further refined in actual use, for example splitting turning into left and right turns; the probability values in the table may also be updated from statistics gathered in actual use to improve the pedestrian recognition rate. A rule-based sketch follows.
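A rule-based sketch standing in for Table 1: a stage's behavior pattern is classified from two motion features and a transition probability is looked up. Since the patent omits the concrete probabilities, the table entries and thresholds below are invented for illustration:

```python
import numpy as np

# Hypothetical transition probabilities standing in for Table 1.
TRANSITION = {("forward", "forward"): 0.7, ("forward", "turn"): 0.15,
              ("forward", "stop"): 0.1, ("forward", "turn_back"): 0.05,
              ("stop", "forward"): 0.5, ("stop", "stop"): 0.5}

def behavior_pattern(feat_prev, feat_now, stop_speed=0.1, turn_cos=0.7):
    """Classify a stage from two motion features (direction and speed)."""
    if feat_now["speed"] < stop_speed:
        return "stop"
    c = float(np.dot(feat_prev["direction"], feat_now["direction"]))
    if c < -turn_cos:
        return "turn_back"
    if c < turn_cos:
        return "turn"
    return "forward"

def pattern_match_degree(prev_stage, cur_stage):
    return TRANSITION.get((prev_stage, cur_stage), 0.01)  # rare transition

f1 = {"direction": np.array([0, 0, 1.0]), "speed": 1.2}
f2 = {"direction": np.array([0, 0, 1.0]), "speed": 1.3}
print(pattern_match_degree(behavior_pattern(f1, f1), behavior_pattern(f1, f2)))
```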
Step 3.4, performing a weighted calculation (i.e. feeding the values into a weighted calculator) over the apparent feature distance, the spatial feature distance and the motion pattern matching degree of the same pedestrian to obtain a matching result between the pedestrian in the current three-dimensional bounding box and the pedestrians in the historical feature library; the matching result is matching success or matching failure, plus the matched pedestrian information when matching succeeds.
In this embodiment, the weights of the apparent feature distance, the spatial feature distance and the motion pattern matching degree are 0.6, 0.2 and 0.2 respectively; because apparent features are the most intuitive cue for distinguishing different pedestrians, the apparent feature distance is given the highest weight. In actual use the weights can of course be adjusted, for example raising the weight of the motion pattern matching degree to avoid misjudging two persons with similar apparent features.
A matching failure in the final matching result indicates that the current pedestrian's features have no historical record, i.e. the pedestrian has newly entered the statistical range; a matching success indicates that the features do have a historical record, so the matched pedestrian information is output to associate the same pedestrian's new and historical features. The pedestrian information may be a unique identifier (e.g. an ID value), a spatial position, a time, and so on. A fusion sketch follows.
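A sketch of the fusion step with the embodiment's 0.6/0.2/0.2 weights; converting the two distances to similarities and the decision threshold are assumptions, as the patent fixes neither:

```python
def match_score(apparent_d, spatial_d, pattern_degree,
                w=(0.6, 0.2, 0.2), threshold=0.5):
    """Fuse the three cues with the embodiment's weights. Distances are first
    mapped to similarities in (0, 1]; the mapping and threshold are assumed."""
    s_app = 1.0 / (1.0 + apparent_d)     # smaller distance -> higher similarity
    s_spa = 1.0 / (1.0 + spatial_d)
    score = w[0] * s_app + w[1] * s_spa + w[2] * pattern_degree
    return score >= threshold, score     # (matching success?, fused score)

print(match_score(apparent_d=0.3, spatial_d=0.4, pattern_degree=0.7))
```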
Step 4, according to the current matching result and the historical matching results, marking the state of the pedestrian corresponding to the three-dimensional bounding box as first match successful, lost, re-matched after loss, continuously matched, or out of the shooting range.
As shown in FIG. 6, a specific marking procedure provided by this embodiment may be:
if the pedestrian features are extracted successfully (i.e. recognition succeeds at the current moment) but the matching result is matching failure (i.e. there is no historical record), marking the current pedestrian state as first match successful;
if the same pedestrian in the historical matching results fails to match M consecutive times (for example 50 times, i.e. 10 seconds at 5 times/second), marking that pedestrian's state as lost;
if a pedestrian marked as lost then matches successfully (for example, recognition succeeds at the current moment and a historical record exists, but matching failed at the previous moment), updating that pedestrian's state to re-matched after loss;
if the same pedestrian in the historical matching results matches successfully L consecutive times (for example 50 times, i.e. 10 seconds at 5 times/second; e.g. recognition succeeds at the current moment, a historical record exists, and matching succeeded at the previous moment), updating that pedestrian's state to continuously matched;
if the same pedestrian in the historical matching results fails to match N consecutive times (for example 150 times, i.e. 10 seconds at 15 times/second), marking that pedestrian's state as out of the shooting range, where M < N.
If the current pedestrian state is marked as first match successful, new pedestrian information is allocated to the pedestrian in the historical feature library and the pedestrian's features are associated with the newly allocated information, to serve as historical data for recognizing and tracking the pedestrian at the next moment. A sketch of the marking rules follows.
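A sketch of the state-marking rules with the example values M = 50, L = 50 and N = 150, assuming one matching attempt per frame and per-track hit/miss counters; the bookkeeping is an assumption:

```python
M_LOST, L_CONT, N_OUT = 50, 50, 150   # example thresholds from the text

def mark_state(track):
    """track: dict with 'miss_count', 'hit_count', 'state' (None for new)."""
    if track["state"] is None and track["hit_count"] == 1:
        return "first_match_success"      # recognized, but no history yet
    if track["miss_count"] >= N_OUT:
        return "out_of_range"             # left the shooting range
    if track["miss_count"] >= M_LOST:
        return "lost"
    if track["state"] == "lost" and track["hit_count"] > 0:
        return "rematched_after_loss"
    if track["hit_count"] >= L_CONT:
        return "continuously_matched"
    return track["state"] or "tracking"

print(mark_state({"state": None, "hit_count": 1, "miss_count": 0}))
print(mark_state({"state": "lost", "hit_count": 1, "miss_count": 0}))
```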
As shown in FIG. 7, in another implementation, a people flow statistics system is provided, comprising:
a pedestrian detection module, configured to acquire the optical image, detect pedestrians in the optical image, and output a three-dimensional bounding box and a corresponding timestamp for each pedestrian;
a feature extraction module, configured to acquire pedestrian features based on the optical image and the three-dimensional bounding box, specifically executing the following steps:
a. extracting the body shape and appearance attributes of each pedestrian in the optical image as that pedestrian's apparent feature, and storing the apparent feature in the historical feature library;
b. extracting each pedestrian's three-dimensional motion feature based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library, and storing it in the historical feature library;
c. predicting each pedestrian's three-dimensional motion feature at the next moment based on the pedestrian's current three-dimensional motion feature and the three-dimensional motion features within a specified time in the historical feature library, and storing the predicted feature in the historical feature library;
a pedestrian recognition module, configured to recognize pedestrians based on the pedestrian features in the historical feature library, specifically executing the following steps:
a. calculating apparent feature distances one by one between the current apparent feature and each historical apparent feature in the historical feature library; if an apparent feature distance is greater than the apparent threshold, judging that the current apparent feature and that historical apparent feature belong to the same pedestrian, and taking this distance as the pedestrian's apparent feature distance;
b. calculating spatial feature distances one by one between the current three-dimensional motion feature and the next-moment three-dimensional motion feature predicted at the previous moment for each pedestrian in the historical feature library; if a spatial feature distance is greater than the spatial threshold, judging that the two features belong to the same pedestrian, and taking this distance as the pedestrian's spatial feature distance;
c. judging whether the pedestrian conforms to the motion pattern of the same pedestrian based on the current three-dimensional motion feature, the apparent feature distance, the spatial feature distance, and the historical three-dimensional motion features of each pedestrian in the historical feature library, and outputting the resulting motion pattern matching degree as that pedestrian's motion pattern matching degree;
d. performing a weighted calculation over the apparent feature distance, the spatial feature distance and the motion pattern matching degree of the same pedestrian to obtain a matching result between the pedestrian in the current three-dimensional bounding box and the pedestrians in the historical feature library, the matching result being matching success or matching failure, plus the matched pedestrian information when matching succeeds;
a pedestrian marking module, configured to mark the state of the pedestrian corresponding to the three-dimensional bounding box as first match successful, lost, re-matched after loss, continuously matched, or out of the shooting range according to the current matching result and the historical matching results;
and a people flow statistics module, configured to count, according to the pedestrian states, the people flow within the statistical range corresponding to the optical image during a preset time. A counting sketch follows.
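A sketch of the counting performed by the people flow statistics module, assuming it consumes (timestamp, pedestrian id, state) events from the marking module and treats first successful matches as entries and out-of-range marks as exits within the preset window:

```python
def people_flow(events, window_start, window_end):
    """events: iterable of (timestamp_s, pedestrian_id, state) tuples.
    Counting conventions (entry = first match, exit = out of range) are an
    assumption about how the marked states feed the statistics."""
    entered, left = set(), set()
    for t, pid, state in events:
        if not (window_start <= t < window_end):
            continue
        if state == "first_match_success":
            entered.add(pid)
        elif state == "out_of_range":
            left.add(pid)
    return {"entered": len(entered), "left": len(left)}

events = [(1.0, "p1", "first_match_success"),
          (5.0, "p2", "first_match_success"),
          (9.0, "p1", "out_of_range")]
print(people_flow(events, 0, 10))   # {'entered': 2, 'left': 1}
```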
For specific limitations of the people flow statistics system, refer to the limitations of the public-place pedestrian recognition method above; they are not repeated here.
In a preferred embodiment, detecting pedestrians in the optical image and outputting the three-dimensional bounding box of each pedestrian performs the following operations:
calibrating the camera that acquires the optical image, to obtain a mapping relation between pixels in the optical image and distance to the camera;
detecting pedestrians in the optical image, and acquiring a two-dimensional bounding box of each pedestrian in the optical image;
and obtaining the pedestrian's three-dimensional bounding box based on the two-dimensional bounding box and the mapping relation.
In this embodiment, the pedestrian detection module provides both a camera calibration function and a parameter management function. In other embodiments, the camera calibration device may be kept separate from the people stream statistics system and send the calibrated device parameters and the mapping relation to the system's parameter management module.
It should be noted that, in this embodiment, people flow statistics is performed on optical images: the people stream statistics system further includes a video acquisition module connected to a peripheral video acquisition device, which captures real-time video within the statistical range and sends each frame, as an optical image, to the pedestrian detection module.
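A minimal sketch of the 2D-to-3D lifting step is shown below. The dense pixel-to-distance map, the pinhole intrinsics (fx, fy, cx, cy), the bottom-center ground-contact assumption and the fixed box thickness are all assumptions of this sketch; the patent requires only that the mapping relation from calibration be used.

```python
import numpy as np

def lift_bbox_to_3d(bbox_2d, pixel_to_distance, fx, fy, cx, cy, thickness_m=0.5):
    """Lift a 2D box (u1, v1, u2, v2) to a camera-centered 3D box.

    pixel_to_distance: H x W array mapping each pixel to its distance from
    the camera, as produced by calibration (assumed representation).
    """
    u1, v1, u2, v2 = bbox_2d
    # Read the pedestrian's distance at the bottom-center of the 2D box,
    # assumed to lie near the ground-contact point.
    foot_u, foot_v = (u1 + u2) // 2, v2
    z = float(pixel_to_distance[foot_v, foot_u])
    # Back-project the box corners through a pinhole model at that depth.
    x1, x2 = (u1 - cx) * z / fx, (u2 - cx) * z / fx
    y1, y2 = (v1 - cy) * z / fy, (v2 - cy) * z / fy
    # Give the box a fixed thickness along the viewing axis (size prior).
    return {"x": (x1, x2), "y": (y1, y2),
            "z": (z - thickness_m / 2, z + thickness_m / 2)}
```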
In another embodiment, extracting each pedestrian's three-dimensional motion features based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library comprises the following operations:
direction vector extraction: extracting the pedestrian's movement direction in the horizontal direction and in the vertical direction from the current and historical three-dimensional bounding boxes;
motion speed extraction: extracting the pedestrian's motion speed in the horizontal direction and in the vertical direction from the current and historical three-dimensional bounding boxes;
relative position extraction: outputting the pedestrian's coordinates in a camera-centered three-dimensional coordinate system, based on the current and historical three-dimensional bounding boxes and the mapping relation obtained from camera calibration;
feature integration: taking the extracted direction vector, motion speed and relative position as the pedestrian's three-dimensional motion features (a sketch of these operations follows).
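Reusing the box representation from the previous sketch, the operations above together with the next-moment prediction of step c can be sketched as follows. The finite-difference velocity estimate and the constant-velocity predictor are assumptions, since the description leaves the prediction model open.

```python
import numpy as np

def box_center(box3d):
    """Center of a 3D box given as {"x": (lo, hi), "y": (lo, hi), "z": (lo, hi)}."""
    return np.array([np.mean(box3d["x"]), np.mean(box3d["y"]), np.mean(box3d["z"])])

def extract_motion_features(current_box, history_boxes, dt=1.0):
    """history_boxes: the pedestrian's time-ordered historical 3D boxes."""
    pos_now = box_center(current_box)
    pos_prev = box_center(history_boxes[-1])
    velocity = (pos_now - pos_prev) / dt                # horizontal and vertical speed
    speed = np.linalg.norm(velocity)
    direction = velocity / speed if speed > 1e-8 else np.zeros(3)  # direction vector
    return {"direction": direction,
            "velocity": velocity,
            "relative_position": pos_now}               # camera-centered coordinates

def predict_next(motion, dt=1.0):
    """Constant-velocity prediction of the next-moment features (assumed model)."""
    return {**motion,
            "relative_position": motion["relative_position"] + motion["velocity"] * dt}
```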
In another embodiment, marking the pedestrian state of the pedestrian corresponding to the three-dimensional bounding box as first match succeeded, lost, re-matched after loss, continuously matched, or walked out of the shooting range, according to the current matching result and the historical matching results, comprises the following operations:
if the pedestrian features are successfully extracted but the matching result is a failed match, marking the current pedestrian state as first match succeeded;
if the same pedestrian in the historical matching results has gone unmatched M consecutive times, marking the pedestrian's state as lost;
if a pedestrian marked as lost is matched successfully this time, updating the pedestrian's state to re-matched after loss;
if the same pedestrian in the historical matching results has been matched L consecutive times, updating the pedestrian's state to continuously matched;
if the same pedestrian in the historical matching results has gone unmatched N consecutive times, marking the pedestrian as walked out of the shooting range, where M is smaller than N.
In another embodiment, if the current pedestrian's state is marked as first match succeeded, new pedestrian information is allocated for the pedestrian in the historical feature library, and the pedestrian's features are associated with the newly allocated pedestrian information (both the marking rules and this registration step are illustrated in the sketch below).
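These marking rules amount to a small per-pedestrian state machine. Below is a sketch under assumed threshold values: the description fixes only M < N, and the state names and counter-reset behavior are also assumptions of this sketch.

```python
from dataclasses import dataclass

M, L, N = 3, 5, 10  # illustrative thresholds; the text requires only M < N

@dataclass
class Track:
    state: str = "first_match_succeeded"
    misses: int = 0  # consecutive failed matches
    hits: int = 0    # consecutive successful matches

def update_state(track: Track, matched: bool) -> str:
    """Apply the marking rules to one pedestrian for the current frame."""
    if matched:
        track.hits += 1
        track.misses = 0
        if track.state == "lost":
            track.state = "rematched_after_loss"
        elif track.hits >= L:
            track.state = "continuously_matched"
    else:
        track.misses += 1
        track.hits = 0
        if track.misses >= N:
            track.state = "walked_out_of_shooting_range"
        elif track.misses >= M:
            track.state = "lost"
    return track.state

def register_new(history: dict, ped_id, features) -> None:
    """First match succeeded: allocate new pedestrian info and associate
    the pedestrian's features with it, as in the embodiment above."""
    history[ped_id] = {"features": features, "track": Track()}
```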
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, any combination of them that involves no contradiction should be considered within the scope of this description.
The above examples present only a few embodiments of the invention in detail, and they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the invention. Accordingly, the protection scope of the invention is determined by the appended claims.

Claims (10)

1. A method for recognizing pedestrians in public places, characterized in that the method comprises the following steps:
step 1, acquiring an optical image, detecting pedestrians in the optical image, and outputting three-dimensional bounding boxes of the pedestrians together with corresponding time stamps;
step 2, acquiring pedestrian features based on the optical image and the three-dimensional bounding boxes, comprising:
step 2.1, extracting the body shape and appearance characteristics of the pedestrians in the optical image as each pedestrian's apparent features, and storing the apparent features in a historical feature library;
step 2.2, extracting each pedestrian's three-dimensional motion features based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library, and storing the three-dimensional motion features in the historical feature library;
step 2.3, predicting each pedestrian's three-dimensional motion features at the next moment based on the pedestrian's current three-dimensional motion features and the three-dimensional motion features recorded within a specified time window in the historical feature library, and storing the predicted features in the historical feature library;
step 3, performing pedestrian recognition based on the pedestrian features in the historical feature library, comprising:
step 3.1, calculating apparent feature distances one by one between the pedestrian's current apparent features and the historical apparent features of each pedestrian in the historical feature library; if an apparent feature distance is larger than the apparent threshold, judging that the two sets of apparent features belong to the same pedestrian and taking the current apparent feature distance as that pedestrian's apparent feature distance;
step 3.2, calculating spatial feature distances one by one between the pedestrian's current three-dimensional motion features and the next-moment three-dimensional motion features predicted at the previous moment for each pedestrian in the historical feature library; if a spatial feature distance is larger than the spatial threshold, judging that the two belong to the same pedestrian and taking the current spatial feature distance as that pedestrian's spatial feature distance;
step 3.3, judging, based on the pedestrian's current three-dimensional motion features, apparent feature distance and spatial feature distance together with the historical three-dimensional motion features of each pedestrian in the historical feature library, whether the motion pattern of the same pedestrian is satisfied, and outputting the degree of match as that pedestrian's motion pattern matching degree;
step 3.4, performing a weighted calculation over the apparent feature distance, spatial feature distance and motion pattern matching degree of the same pedestrian to obtain the matching result between the pedestrian in the current three-dimensional bounding box and the pedestrians in the historical feature library, the matching result being either a successful or a failed match, with a successful match also carrying the matched pedestrian information;
and step 4, marking the pedestrian state of the pedestrian corresponding to the three-dimensional bounding box as first match succeeded, lost, re-matched after loss, continuously matched, or walked out of the shooting range, according to the current matching result and the historical matching results.
2. The method for recognizing pedestrians in public places according to claim 1, wherein detecting pedestrians in the optical image and outputting three-dimensional bounding boxes of the pedestrians comprises:
calibrating the camera used to acquire the optical image to obtain a mapping relation between pixels in the optical image and their distance from the camera;
detecting pedestrians in the optical image and acquiring two-dimensional bounding boxes of the pedestrians in the optical image;
and obtaining the three-dimensional bounding boxes of the pedestrians based on the two-dimensional bounding boxes and the mapping relation.
3. The method for recognizing pedestrians in public places according to claim 2, wherein extracting each pedestrian's three-dimensional motion features based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library comprises:
step 2.2.1, direction vector extraction: extracting the pedestrian's movement direction in the horizontal direction and in the vertical direction from the current and historical three-dimensional bounding boxes;
step 2.2.2, motion speed extraction: extracting the pedestrian's motion speed in the horizontal direction and in the vertical direction from the current and historical three-dimensional bounding boxes;
step 2.2.3, relative position extraction: outputting the pedestrian's coordinates in a camera-centered three-dimensional coordinate system, based on the current and historical three-dimensional bounding boxes and the mapping relation obtained from camera calibration;
and step 2.2.4, taking the direction vector, motion speed and relative position extracted in steps 2.2.1-2.2.3 as the pedestrian's three-dimensional motion features.
4. The method for recognizing pedestrians in public places according to claim 1, wherein marking the pedestrian state of the pedestrian corresponding to the three-dimensional bounding box as first match succeeded, lost, re-matched after loss, continuously matched, or walked out of the shooting range, according to the current matching result and the historical matching results, comprises:
if the pedestrian features are successfully extracted but the matching result is a failed match, marking the current pedestrian state as first match succeeded;
if the same pedestrian in the historical matching results has gone unmatched M consecutive times, marking the pedestrian's state as lost;
if a pedestrian marked as lost is matched successfully this time, updating the pedestrian's state to re-matched after loss;
if the same pedestrian in the historical matching results has been matched L consecutive times, updating the pedestrian's state to continuously matched;
and if the same pedestrian in the historical matching results has gone unmatched N consecutive times, marking the pedestrian as walked out of the shooting range, where M is smaller than N.
5. The method for recognizing pedestrians in public places according to claim 1, wherein if the current pedestrian's state is marked as first match succeeded, new pedestrian information is allocated for the pedestrian in the historical feature library, and the pedestrian's features are associated with the newly allocated pedestrian information.
6. A people stream statistics system, comprising:
a pedestrian detection module, used for acquiring an optical image, detecting pedestrians in the optical image, and outputting three-dimensional bounding boxes of the pedestrians together with corresponding time stamps;
a feature extraction module, used for acquiring pedestrian features based on the optical image and the three-dimensional bounding boxes, and specifically executing the following steps:
a. extracting the body shape and appearance characteristics of the pedestrians in the optical image as each pedestrian's apparent features, and storing the apparent features in a historical feature library;
b. extracting each pedestrian's three-dimensional motion features based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library, and storing the three-dimensional motion features in the historical feature library;
c. predicting each pedestrian's three-dimensional motion features at the next moment based on the pedestrian's current three-dimensional motion features and the three-dimensional motion features recorded within a specified time window in the historical feature library, and storing the predicted features in the historical feature library;
a pedestrian recognition module, used for recognizing pedestrians based on the pedestrian features in the historical feature library, and specifically executing the following steps:
a. calculating apparent feature distances one by one between the pedestrian's current apparent features and the historical apparent features of each pedestrian in the historical feature library; if an apparent feature distance is larger than the apparent threshold, judging that the two sets of apparent features belong to the same pedestrian and taking the current apparent feature distance as that pedestrian's apparent feature distance;
b. calculating spatial feature distances one by one between the pedestrian's current three-dimensional motion features and the next-moment three-dimensional motion features predicted at the previous moment for each pedestrian in the historical feature library; if a spatial feature distance is larger than the spatial threshold, judging that the two belong to the same pedestrian and taking the current spatial feature distance as that pedestrian's spatial feature distance;
c. judging, based on the pedestrian's current three-dimensional motion features, apparent feature distance and spatial feature distance together with the historical three-dimensional motion features of each pedestrian in the historical feature library, whether the motion pattern of the same pedestrian is satisfied, and outputting the degree of match as that pedestrian's motion pattern matching degree;
d. performing a weighted calculation over the apparent feature distance, spatial feature distance and motion pattern matching degree of the same pedestrian to obtain the matching result between the pedestrian in the current three-dimensional bounding box and the pedestrians in the historical feature library, the matching result being either a successful or a failed match, with a successful match also carrying the matched pedestrian information;
a pedestrian marking module, used for marking the pedestrian state of the pedestrian corresponding to the three-dimensional bounding box as first match succeeded, lost, re-matched after loss, continuously matched, or walked out of the shooting range, according to the current matching result and the historical matching results;
and a people flow statistics module, used for counting, according to the pedestrian states, the flow of people within a preset time in the statistical range corresponding to the optical image.
7. The people stream statistics system of claim 6, wherein detecting pedestrians in the optical image and outputting three-dimensional bounding boxes of the pedestrians comprises the following operations:
calibrating the camera used to acquire the optical image to obtain a mapping relation between pixels in the optical image and their distance from the camera;
detecting pedestrians in the optical image and acquiring two-dimensional bounding boxes of the pedestrians in the optical image;
and obtaining the three-dimensional bounding boxes of the pedestrians based on the two-dimensional bounding boxes and the mapping relation.
8. The people stream statistics system of claim 7, wherein extracting each pedestrian's three-dimensional motion features based on the pedestrian's current three-dimensional bounding box and the time-ordered three-dimensional bounding boxes in the historical feature library comprises the following operations:
direction vector extraction: extracting the pedestrian's movement direction in the horizontal direction and in the vertical direction from the current and historical three-dimensional bounding boxes;
motion speed extraction: extracting the pedestrian's motion speed in the horizontal direction and in the vertical direction from the current and historical three-dimensional bounding boxes;
relative position extraction: outputting the pedestrian's coordinates in a camera-centered three-dimensional coordinate system, based on the current and historical three-dimensional bounding boxes and the mapping relation obtained from camera calibration;
and feature integration: taking the extracted direction vector, motion speed and relative position as the pedestrian's three-dimensional motion features.
9. The people stream statistics system of claim 6, wherein marking the pedestrian state of the pedestrian corresponding to the three-dimensional bounding box as first match succeeded, lost, re-matched after loss, continuously matched, or walked out of the shooting range, according to the current matching result and the historical matching results, comprises the following operations:
if the pedestrian features are successfully extracted but the matching result is a failed match, marking the current pedestrian state as first match succeeded;
if the same pedestrian in the historical matching results has gone unmatched M consecutive times, marking the pedestrian's state as lost;
if a pedestrian marked as lost is matched successfully this time, updating the pedestrian's state to re-matched after loss;
if the same pedestrian in the historical matching results has been matched L consecutive times, updating the pedestrian's state to continuously matched;
and if the same pedestrian in the historical matching results has gone unmatched N consecutive times, marking the pedestrian as walked out of the shooting range, where M is smaller than N.
10. The people stream statistics system of claim 6, wherein if the current pedestrian's state is marked as first match succeeded, new pedestrian information is allocated for the pedestrian in the historical feature library, and the pedestrian's features are associated with the newly allocated pedestrian information.
CN202011477711.XA 2020-12-15 2020-12-15 Pedestrian recognition method and people stream statistics system in public place Active CN112446355B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011477711.XA CN112446355B (en) 2020-12-15 2020-12-15 Pedestrian recognition method and people stream statistics system in public place
PCT/CN2020/137803 WO2022126668A1 (en) 2020-12-15 2020-12-19 Method for pedestrian identification in public places and human flow statistics system

Publications (2)

Publication Number Publication Date
CN112446355A CN112446355A (en) 2021-03-05
CN112446355B (en) 2023-10-17

Family

ID=74739432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011477711.XA Active CN112446355B (en) 2020-12-15 2020-12-15 Pedestrian recognition method and people stream statistics system in public place

Country Status (2)

Country Link
CN (1) CN112446355B (en)
WO (1) WO2022126668A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011329B * 2021-03-19 2024-03-12 Shaanxi University of Science and Technology Dense crowd counting method based on multi-scale feature pyramid network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650562A (en) * 2016-06-14 2017-05-10 Xidian University Online continuous human behavior identification method based on Kinect
CN109829476A (en) * 2018-12-27 2019-05-31 Qingdao Zhongke Huichang Information Technology Co., Ltd. End-to-end three-dimensional object detection method based on YOLO
CN110490901A (en) * 2019-07-15 2019-11-22 Wuhan University Pedestrian detection and tracking method resistant to pose changes
US10839203B1 (en) * 2016-12-27 2020-11-17 Amazon Technologies, Inc. Recognizing and tracking poses using digital imagery captured from multiple fields of view
CN111968235A (en) * 2020-07-08 2020-11-20 Hangzhou Yixian Advanced Technology Co., Ltd. Object pose estimation method, device and system, and computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709974B * 2020-06-22 2022-08-02 Suning Cloud Computing Co., Ltd. Human body tracking method and device based on RGB-D image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Real-Time Dynamic Pedestrian Monitoring System; Li Suiwei; Li Gangzhu; Information Technology, No. 05; pp. 15-20 *

Also Published As

Publication number Publication date
WO2022126668A1 (en) 2022-06-23
CN112446355A (en) 2021-03-05

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant