CN111027462A - Pedestrian track identification method across multiple cameras

Pedestrian track identification method across multiple cameras

Info

Publication number
CN111027462A
Authority
CN
China
Prior art keywords
pedestrian
coordinate system
world coordinate
cameras
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911243955.9A
Other languages
Chinese (zh)
Inventor
蒋云翔
蔡晔
涂传亮
丁杰
李道坚
刘文
唐岳凌
田震华
刘彦
叶军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD
Original Assignee
CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD filed Critical CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD
Priority to CN201911243955.9A
Publication of CN111027462A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian track recognition method across multiple cameras, which comprises the steps of building a pedestrian track recognition system with multiple cameras; acquiring all pedestrian tracking tracks of each camera; transforming the position coordinates of the overlapping coverage area between two adjacent cameras into a unified world coordinate system; marking pedestrians at the same time and the same position in the unified world coordinate system; transforming the track data of the cameras associated with the marked pedestrian track into the unified world coordinate system; merging the pedestrian tracks in the unified world coordinate system; identifying and calibrating pedestrians in the selected area; and repeating these steps to complete tracking and identification of pedestrian tracks across multiple cameras. The method has low computational complexity and can identify and track pedestrian trajectories simply, reliably, efficiently and accurately.

Description

Pedestrian track identification method across multiple cameras
Technical Field
The invention belongs to the field of image processing, and particularly relates to a pedestrian track identification method across multiple cameras.
Background
With economic and technological development, identification of moving objects (such as pedestrians and vehicles) has come into wide use. In a multi-camera monitoring system, a basic task is to associate pedestrians across cameras at different times and places; this is the task of pedestrian re-identification. Specifically, re-identification is the process of visually matching one or more pedestrians across different scenes, based on data obtained at different times by cameras distributed over those scenes. Its main purpose is to determine whether a pedestrian seen by one camera appears in other cameras, that is, to compare the features of a given pedestrian with those of other observed pedestrians and decide whether they belong to the same person.
The main challenges of pedestrian re-identification are the influence of pedestrian pose and camera viewing angle, background clutter and occlusion, and illumination and image resolution, among others. These challenges make pedestrian feature matching very difficult, so current methods focus primarily on extracting robust discriminative features. In practical monitoring, effective facial information usually cannot be captured, and the whole body is therefore generally used for retrieval. During identification, under the combined influence of pose, illumination and camera angle, the features of different pedestrians may appear more similar than those of the same pedestrian, which makes pedestrian retrieval difficult.
Existing pedestrian re-identification technology focuses on improving the recognition rate of individual algorithms over a single camera or multiple cameras; work addressing the system-level recognition rate of a pedestrian identification system as a whole is not available for reference.
Disclosure of Invention
The invention aims to provide a simple, reliable and efficient pedestrian track identification method across multiple cameras.
The invention provides a pedestrian track identification method across multiple cameras, which comprises the following steps:
S1, building a pedestrian track recognition system with multiple cameras, and ensuring that an overlapping coverage area exists between every two adjacent cameras;
S2, acquiring all pedestrian tracking tracks of each camera;
S3, converting the position coordinates of the overlapped coverage area between two adjacent cameras into a unified world coordinate system;
S4, marking pedestrians at the same time and the same position in a unified world coordinate system;
S5, converting the track data of the camera associated with the pedestrian track marked in the step S4 into a unified world coordinate system;
S6, combining the pedestrian tracks in a unified world coordinate system;
S7, identifying and calibrating the pedestrians in the selected area;
S8, repeating the steps S3-S7 to complete the tracking and identification of the pedestrian track across multiple cameras.
The transformation into the unified world coordinate system is to transform the position coordinates in the camera into the unified world coordinate system by adopting the following formula:
$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/s_x & 0 & u_0 \\ 0 & f/s_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

wherein $(u, v)$ are the position coordinates in the camera image; $\lambda$ is a scale factor; $f$ is the distance from point $w$ to the center of projection (the camera aperture); $u_0$ is the quantization coefficient from the image plane to the $U$-axis direction of the discrete pixel values; $v_0$ is the quantization coefficient from the image plane to the $V$-axis direction of the discrete pixel values; $s_x$ is the component of the distance from point $w$ to the optical axis in the $x$-axis direction of the image plane; $s_y$ is the component of the distance from point $w$ to the optical axis in the $y$-axis direction of the image plane; $R_{3\times 3}$ is the rotation matrix; $T_{3\times 1}$ is the translation vector, whose entries are the translation amounts along the $X$, $Y$ and $Z$ coordinate axis directions respectively; and $(X_w, Y_w, Z_w)$ are the $X$, $Y$, $Z$ coordinates of point $w$ in the world coordinate system.
N points are marked in the field of view of the camera and their coordinates recorded; the coordinates of the corresponding N points in the unified world coordinate system are acquired; the travelable area of the pedestrian is defined as a plane; and a mapping matrix M is estimated using the following formula:
$$t_i \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \qquad i = 0, 1, \ldots, N-1$$

in the formula, $(x_i, y_i)$ are the coordinates of the $i$-th of the $N$ points in the camera image, $(x_i', y_i')$ are the coordinates of the corresponding $i$-th point in the unified world coordinate system, and $t_i$ is a scale factor arising from the correspondence between the scale of the world coordinate system and the scale of the discrete image coordinates.
The pedestrian track identification method across multiple cameras provided by the invention solves the problem of continuously tracking pedestrians across cameras by exploiting the uniqueness of time and space together with the overlapping areas of the multiple cameras, and improves the recognition rate of pedestrian identification by ensuring that pedestrians are tracked continuously within the system. The method therefore has low computational complexity and can identify and track pedestrian tracks simply, reliably, efficiently and accurately.
Drawings
FIG. 1 is a schematic flow diagram of the method of the present invention.
Detailed Description
As shown in FIG. 1, the invention provides a pedestrian track identification method across multiple cameras, which comprises the following steps:
S1, building a pedestrian track recognition system with multiple cameras, and ensuring that an overlapping coverage area exists between every two adjacent cameras;
S2, acquiring all pedestrian tracking tracks of each camera;
S3, converting the position coordinates of the overlapped coverage area between two adjacent cameras into a unified world coordinate system;
S4, marking pedestrians at the same time and the same position in a unified world coordinate system;
S5, converting the track data of the camera associated with the pedestrian track marked in the step S4 into a unified world coordinate system;
S6, combining the pedestrian tracks in a unified world coordinate system;
S7, identifying and calibrating the pedestrians in the selected area;
S8, repeating the steps S3-S7 to complete the tracking and identification of the pedestrian track across multiple cameras.
In the above steps, the position coordinates in the camera are transformed into the unified world coordinate system using the following formula:
$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/s_x & 0 & u_0 \\ 0 & f/s_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

wherein $(u, v)$ are the position coordinates in the camera image; $\lambda$ is a scale factor; $f$ is the distance from point $w$ to the center of projection (the camera aperture); $u_0$ is the quantization coefficient from the image plane to the $U$-axis direction of the discrete pixel values; $v_0$ is the quantization coefficient from the image plane to the $V$-axis direction of the discrete pixel values; $s_x$ is the component of the distance from point $w$ to the optical axis in the $x$-axis direction of the image plane; $s_y$ is the component of the distance from point $w$ to the optical axis in the $y$-axis direction of the image plane; $R_{3\times 3}$ is the rotation matrix; $T_{3\times 1}$ is the translation vector, whose entries are the translation amounts along the $X$, $Y$ and $Z$ coordinate axis directions respectively; and $(X_w, Y_w, Z_w)$ are the $X$, $Y$, $Z$ coordinates of point $w$ in the world coordinate system.
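As an illustration only (the sketch below is not part of the patent), this projection can be written out directly in numpy; every intrinsic and extrinsic value here is a hypothetical placeholder that would in practice come from camera calibration:

```python
import numpy as np

# Hypothetical intrinsics (placeholders; obtained by calibration in practice).
f = 0.004                  # distance to the projection center, in metres
s_x, s_y = 1e-5, 1e-5      # image-plane distance components per pixel
u0, v0 = 960.0, 540.0      # quantization offsets to the U and V pixel axes
K = np.array([[f / s_x, 0.0,     u0],
              [0.0,     f / s_y, v0],
              [0.0,     0.0,     1.0]])

# Hypothetical extrinsics: camera 5 m above the ground plane, looking down.
R = np.eye(3)                        # rotation matrix R_{3x3}
T = np.array([[0.0], [0.0], [5.0]])  # translation vector T_{3x1}

def world_to_image(Xw, Yw, Zw=0.0):
    """Map a world point (Xw, Yw, Zw) to image coordinates (u, v)."""
    Pw = np.array([[Xw], [Yw], [Zw], [1.0]])
    p = K @ np.hstack([R, T]) @ Pw   # homogeneous pixel, scaled by lambda
    return p[0, 0] / p[2, 0], p[1, 0] / p[2, 0]

print(world_to_image(1.0, 2.0))      # (1040.0, 700.0) with these placeholders
```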
In a specific implementation, for simplicity, the area in which pedestrians can travel may be defined as a plane, so that the Z-axis data are omitted. Meanwhile, before the system is formally put into operation, N points are marked in the field of view of the camera and their coordinates recorded, the coordinates of the corresponding N points in the unified world coordinate system are acquired, the travelable area of the pedestrian is defined as a plane, and the mapping matrix M is estimated using the following formula:
$$t_i \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \qquad i = 0, 1, \ldots, N-1$$

in the formula, $(x_i, y_i)$ are the coordinates of the $i$-th of the $N$ points in the camera image, $(x_i', y_i')$ are the coordinates of the corresponding $i$-th point in the unified world coordinate system, and $t_i$ is a scale factor arising from the correspondence between the scale of the world coordinate system and the scale of the discrete image coordinates.
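As a minimal sketch of this estimation step (assuming OpenCV, which the patent does not mention), M can be obtained from the N marked point pairs with cv2.findHomography; the point coordinates below are hypothetical calibration marks:

```python
import numpy as np
import cv2

# Hypothetical marks: N = 4 image positions (pixels) and the same physical
# marks measured in the unified world coordinate system (e.g. in metres).
img_pts = np.array([[100, 600], [1800, 620], [1750, 950], [150, 980]],
                   dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]],
                     dtype=np.float32)

# Least-squares estimate of the 3x3 planar mapping matrix M; with more than
# four marks, passing cv2.RANSAC would also reject a badly placed point.
M, _ = cv2.findHomography(img_pts, world_pts)
print(M)
```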
After the mapping matrix M has been obtained, during normal operation of the system a pedestrian located in the image can be mapped from the image coordinate system into the world coordinate system using the above formula and the mapping matrix M, yielding the pedestrian's track in the world coordinate system.
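Continuing the hypothetical sketch above (reusing its matrix M), tracked image positions can be mapped onto the world plane with cv2.perspectiveTransform, which divides out the scale factor $t_i$:

```python
import numpy as np
import cv2  # M is the mapping matrix estimated in the previous sketch

# Tracked foot positions of one pedestrian, in pixels, shape (N, 1, 2).
track_img = np.array([[[520, 710]], [[540, 715]], [[565, 722]]],
                     dtype=np.float32)
track_world = cv2.perspectiveTransform(track_img, M)
print(track_world.reshape(-1, 2))  # the pedestrian's track on the world plane
```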
In consideration of space-time uniqueness, a position (world coordinate point) where a pedestrian is located at a certain time has uniqueness; a certain point in the world coordinate system may be mapped to a point in the image coordinate system and vice versa. Under certain limiting conditions, for example, the spatial coordinate axis Z is not considered, the area where the pedestrian can travel is a plane, and the coordinate mapping has uniqueness. From this it can be deduced that the spatiotemporal uniqueness of the pedestrian trajectory in world coordinates can be mapped to the spatiotemporal uniqueness in the image coordinate system.
If the fields of view of two cameras overlap, then after calibration a pedestrian passing through the overlap region has a spatiotemporally unique track at that moment, and will therefore appear in the image coordinate systems of both cameras at the corresponding positions. Likewise, when a pedestrian passes through the overlap region of the two cameras, the pedestrian's position in each camera's image coordinate system can be computed with the mapping matrix M and the formula above, and the current time recorded at the same time, yielding a set of spatiotemporal data. Because space and time are unique, the spatial and temporal positions of the pedestrian in the overlap region computed by the two cameras must be consistent, and the pedestrian tracks of the two cameras can therefore be merged.
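The patent does not prescribe a concrete merging algorithm; the self-contained sketch below shows one plausible reading of the space-time consistency test, with hypothetical distance and time tolerances:

```python
import math

# Each track is a list of (t, x, y) samples in the unified world frame.
def tracks_coincide(track_a, track_b, d_tol=0.5, t_tol=0.2):
    """True if some pair of samples coincides in time (within t_tol seconds)
    and place (within d_tol metres): the spatiotemporal uniqueness test."""
    for ta, xa, ya in track_a:
        for tb, xb, yb in track_b:
            if abs(ta - tb) <= t_tol and math.hypot(xa - xb, ya - yb) <= d_tol:
                return True
    return False

def merge(track_a, track_b):
    """Merge two world-frame tracks of the same pedestrian, ordered by time
    (near-duplicate samples in the overlap region are kept for simplicity)."""
    return sorted(track_a + track_b)

# Hypothetical tracks of one pedestrian from two cameras whose views overlap.
cam1 = [(0.0, 1.0, 2.0), (1.0, 2.0, 2.1), (2.0, 3.0, 2.2)]
cam2 = [(2.0, 3.1, 2.2), (3.0, 4.0, 2.3), (4.0, 5.0, 2.4)]

if tracks_coincide(cam1, cam2):
    print(merge(cam1, cam2))  # one continuous track across both cameras
```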

Claims (3)

1. A pedestrian track identification method across multiple cameras comprises the following steps:
S1, building a pedestrian track recognition system with multiple cameras, and ensuring that an overlapping coverage area exists between every two adjacent cameras;
S2, acquiring all pedestrian tracking tracks of each camera;
S3, converting the position coordinates of the overlapped coverage area between two adjacent cameras into a unified world coordinate system;
S4, marking pedestrians at the same time and the same position in a unified world coordinate system;
S5, converting the track data of the camera associated with the pedestrian track marked in the step S4 into a unified world coordinate system;
S6, combining the pedestrian tracks in a unified world coordinate system;
S7, identifying and calibrating the pedestrians in the selected area;
S8, repeating the steps S3-S7 to complete the tracking and identification of the pedestrian track across multiple cameras.
2. The method for pedestrian trajectory recognition across multiple cameras of claim 1, wherein the transformation into a unified world coordinate system is specifically a transformation of position coordinates within the cameras into a unified world coordinate system using the following equations:
$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/s_x & 0 & u_0 \\ 0 & f/s_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

wherein $(u, v)$ are the position coordinates in the camera image; $\lambda$ is a scale factor; $f$ is the distance from point $w$ to the center of projection (the camera aperture); $u_0$ is the quantization coefficient from the image plane to the $U$-axis direction of the discrete pixel values; $v_0$ is the quantization coefficient from the image plane to the $V$-axis direction of the discrete pixel values; $s_x$ is the component of the distance from point $w$ to the optical axis in the $x$-axis direction of the image plane; $s_y$ is the component of the distance from point $w$ to the optical axis in the $y$-axis direction of the image plane; $R_{3\times 3}$ is the rotation matrix; $T_{3\times 1}$ is the translation vector, whose entries are the translation amounts along the $X$, $Y$ and $Z$ coordinate axis directions respectively; and $(X_w, Y_w, Z_w)$ are the $X$, $Y$, $Z$ coordinates of point $w$ in the world coordinate system.
3. The method for identifying pedestrian trajectories across multiple cameras according to claim 1 or 2, wherein N points are marked in the field of view of the cameras and coordinates are recorded, the coordinates of the corresponding N points are acquired in a unified world coordinate system, and meanwhile, the travelable area of the pedestrian is defined as a plane, and the mapping matrix M is estimated by adopting the following formula:
$$t_i \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \qquad i = 0, 1, \ldots, N-1$$

wherein $(x_i, y_i)$ are the coordinates of the $i$-th of the $N$ points in the camera image, $(x_i', y_i')$ are the coordinates of the corresponding $i$-th point in the unified world coordinate system, and $t_i$ is a scale factor arising from the correspondence between the scale of the world coordinate system and the scale of the discrete image coordinates.
CN201911243955.9A 2019-12-06 2019-12-06 Pedestrian track identification method across multiple cameras Pending CN111027462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911243955.9A CN111027462A (en) 2019-12-06 2019-12-06 Pedestrian track identification method across multiple cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911243955.9A CN111027462A (en) 2019-12-06 2019-12-06 Pedestrian track identification method across multiple cameras

Publications (1)

Publication Number Publication Date
CN111027462A (en) 2020-04-17

Family

ID=70204568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911243955.9A Pending CN111027462A (en) 2019-12-06 2019-12-06 Pedestrian track identification method across multiple cameras

Country Status (1)

Country Link
CN (1) CN111027462A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203260A (en) * 2016-06-27 2016-12-07 南京邮电大学 Pedestrian's recognition and tracking method based on multiple-camera monitoring network
CN107240124A (en) * 2017-05-19 2017-10-10 清华大学 Across camera lens multi-object tracking method and device based on space-time restriction
CN107704824A (en) * 2017-09-30 2018-02-16 北京正安维视科技股份有限公司 Pedestrian based on space constraint recognition methods and equipment again
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN108924507A (en) * 2018-08-02 2018-11-30 高新兴科技集团股份有限公司 A kind of personnel's system of path generator and method based on multi-cam scene
CN109446946A (en) * 2018-10-15 2019-03-08 浙江工业大学 A kind of multi-cam real-time detection method based on multithreading
CN109558831A (en) * 2018-11-27 2019-04-02 成都索贝数码科技股份有限公司 It is a kind of fusion space-time model across camera shooting head's localization method
CN110046277A (en) * 2019-04-09 2019-07-23 北京迈格威科技有限公司 More video merging mask methods and device
CN110245609A (en) * 2019-06-13 2019-09-17 深圳力维智联技术有限公司 Pedestrian track generation method, device and readable storage medium storing program for executing
CN110378931A (en) * 2019-07-10 2019-10-25 成都数之联科技有限公司 A kind of pedestrian target motion track acquisition methods and system based on multi-cam

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蒋志宏, 北京理工大学出版社 (Beijing Institute of Technology Press) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515982A (en) * 2020-05-22 2021-10-19 阿里巴巴集团控股有限公司 Track restoration method and equipment, equipment management method and management equipment
CN113515982B (en) * 2020-05-22 2022-06-14 阿里巴巴集团控股有限公司 Track restoration method and equipment, equipment management method and management equipment
CN111709974A (en) * 2020-06-22 2020-09-25 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111709974B (en) * 2020-06-22 2022-08-02 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
WO2022002151A1 (en) * 2020-06-30 2022-01-06 杭州海康威视数字技术股份有限公司 Implementation method and apparatus for behavior analysis of moving target, and electronic device
CN112380894A (en) * 2020-09-30 2021-02-19 北京智汇云舟科技有限公司 Video overlapping area target duplicate removal method and system based on three-dimensional geographic information system
CN112380894B (en) * 2020-09-30 2024-01-19 北京智汇云舟科技有限公司 Video overlapping region target deduplication method and system based on three-dimensional geographic information system
CN114445502A (en) * 2020-11-06 2022-05-06 财团法人工业技术研究院 Multi-camera positioning and scheduling system and method
CN112633282A (en) * 2021-01-07 2021-04-09 清华大学深圳国际研究生院 Vehicle real-time tracking method and computer readable storage medium
CN112633282B (en) * 2021-01-07 2023-06-20 清华大学深圳国际研究生院 Real-time tracking method for vehicle and computer readable storage medium
CN115223102A (en) * 2022-09-08 2022-10-21 枫树谷(成都)科技有限责任公司 Real-time crowd density fusion sensing method and model based on camera cluster
CN117173215A (en) * 2023-09-04 2023-12-05 东南大学 Inland navigation ship whole-course track identification method and system crossing cameras

Similar Documents

Publication Publication Date Title
CN111027462A (en) Pedestrian track identification method across multiple cameras
CN111079600A (en) Pedestrian identification method and system with multiple cameras
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
Ahmed et al. A robust features-based person tracker for overhead views in industrial environment
Keller et al. The benefits of dense stereo for pedestrian detection
CN111860352B (en) Multi-lens vehicle track full tracking system and method
KR101645959B1 (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
Nassu et al. A vision-based approach for rail extraction and its application in a camera pan–tilt control system
JP2008046903A (en) Apparatus and method for detecting number of objects
KR20150049529A (en) Apparatus and method for estimating the location of the vehicle
CN112381132A (en) Target object tracking method and system based on fusion of multiple cameras
CN106934338B (en) Long-term pedestrian tracking method based on correlation filter
CN113256731A (en) Target detection method and device based on monocular vision
Nath et al. On road vehicle/object detection and tracking using template
CN111932590B (en) Object tracking method and device, electronic equipment and readable storage medium
CN110706251B (en) Cross-lens tracking method for pedestrians
Nguyen et al. Optical flow-based moving-static separation in driving assistance systems
Fangfang et al. Real-time lane detection for intelligent vehicles based on monocular vision
Bravo et al. Outdoor vacant parking space detector for improving mobility in smart cities
CN115767424A (en) Video positioning method based on RSS and CSI fusion
CN114926508A (en) Method, device, equipment and storage medium for determining visual field boundary
Lookingbill et al. Learning activity-based ground models from a moving helicopter platform
CN103473787A (en) On-bridge-moving-object detection method based on space geometry relation
McCartney et al. Image registration for sequence of visual images captured by UAV
Micheal et al. Comparative analysis of SIFT and SURF on KLT tracker for UAV applications

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200417)