CN111079600A - Pedestrian identification method and system with multiple cameras - Google Patents

Pedestrian identification method and system with multiple cameras

Info

Publication number
CN111079600A
CN111079600A CN201911242483.5A
Authority
CN
China
Prior art keywords
pedestrian
camera
coordinate system
world coordinate
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911242483.5A
Other languages
Chinese (zh)
Inventor
蔡晔
蒋云翔
涂传亮
丁杰
李道坚
刘�文
唐岳凌
田震华
刘彦
叶军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD
Original Assignee
CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD filed Critical CHANGSHA HAIGE BEIDOU INFORMATION TECHNOLOGY CO LTD
Priority to CN201911242483.5A priority Critical patent/CN111079600A/en
Publication of CN111079600A publication Critical patent/CN111079600A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-camera pedestrian identification method, which comprises the steps of: arranging a plurality of cameras; each camera recording its clearly identifiable area; each camera detecting and tracking pedestrians' bodies and faces; each camera re-identifying pedestrians, postures and faces and obtaining identified characteristic values; collecting the data information of each camera; transforming the position coordinates of each camera's overlapping area into a unified world coordinate system; marking pedestrians at the same time and the same position in the unified world coordinate system; transforming the track data of the associated cameras into the unified world coordinate system; merging the pedestrian tracks in the unified world coordinate system; acquiring the characteristic values identified by the cameras; comparing the acquired identified characteristic values and thereby identifying and calibrating the pedestrians; and repeating the steps to complete the multi-camera pedestrian recognition. The invention also discloses a recognition system for implementing the multi-camera pedestrian identification method. The invention has high reliability, good practicability and high calculation efficiency.

Description

Pedestrian identification method and system with multiple cameras
Technical Field
The invention belongs to the field of image processing, and particularly relates to a pedestrian identification method and system with multiple cameras.
Background
Pedestrian identification is widely applied in long-term pedestrian tracking and criminal investigation. In a multi-camera surveillance system, a basic task is to associate pedestrians across cameras at different times and different places, which is the pedestrian re-identification technique. Specifically, re-identification is the process of visually matching a single pedestrian, or multiple pedestrians, across different scenes, using a series of data obtained at different times by cameras distributed over those scenes. The main purpose of pedestrian re-identification is to determine whether a pedestrian seen in one camera appears in other cameras, that is, to compare the features of one pedestrian with those of other pedestrians and judge whether they belong to the same person.
The main challenges in pedestrian re-identification are the influence of pedestrian posture and camera viewing angle; background clutter and occlusion; and illumination and image resolution, among others. These challenges make pedestrian feature matching very difficult, and current methods therefore focus primarily on extracting robust discriminative features. In actual surveillance, effective facial information of a pedestrian often cannot be captured, so the whole body is generally used for searching. During identification, owing to the combined influence of posture, illumination and camera angle, the features of different pedestrians may become more similar than those of the same pedestrian, which makes pedestrian search difficult. Meanwhile, most multi-camera pedestrian identification systems concentrate the computation at the cloud server, which places very high demands on transmission bandwidth and on the processing capacity of the cloud server. In addition, existing pedestrian re-identification techniques focus on how to enhance the single-algorithm identification rate of one camera or several cameras, while little attention is paid to the identification rate of the pedestrian identification system as a whole.
Disclosure of Invention
The invention aims to provide a multi-camera pedestrian recognition method with high reliability, good practicability and high calculation efficiency.
The invention also aims to provide a recognition system implementing the multi-camera pedestrian recognition method.
The invention provides a pedestrian identification method with multiple cameras, which comprises the following steps:
S1, arranging a plurality of cameras and ensuring that adjacent cameras have overlapping coverage areas;
S2, each camera recording the clearly identifiable area within its coverage under the current illumination and environmental conditions;
S3, each camera performing human-body detection and tracking and face detection and tracking on pedestrians in its own coverage area;
S4, each camera performing pedestrian re-identification, posture re-identification and face re-identification on pedestrians in the clearly identifiable area obtained in step S2, thereby obtaining identified characteristic values;
S5, collecting the data information of each camera;
S6, transforming the position coordinates of the overlapping area of each camera into a unified world coordinate system;
S7, marking pedestrians that appear at the same time and the same position in the unified world coordinate system;
S8, transforming the track data of the cameras associated with the pedestrian tracks marked in step S7 into the unified world coordinate system;
S9, merging the pedestrian tracks in the unified world coordinate system;
S10, acquiring the identified characteristic values of the associated cameras for pedestrians in their clearly identifiable areas;
S11, comparing the acquired identified characteristic values so as to identify and calibrate the pedestrians;
S12, repeating steps S2-S11 to complete the multi-camera pedestrian recognition.
The transformation into the unified world coordinate system is to transform the position coordinates in the camera into the unified world coordinate system by adopting the following formula:
$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f/S_x & 0 & u_0 \\ 0 & f/S_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
where $(u, v)$ are the position coordinates within the camera image; $s$ is a scale factor; $f$ is the distance from point $w$ to the center of projection (the camera aperture); $u_0$ is the quantization coefficient from the image plane to the discrete pixel values in the $U$-axis direction; $v_0$ is the quantization coefficient from the image plane to the discrete pixel values in the $V$-axis direction; $S_x$ is the component of the distance from point $w$ to the optical axis in the $x$-axis direction of the image plane; $S_y$ is the component of the distance from point $w$ to the optical axis in the $y$-axis direction of the image plane; $R_{3\times 3}$ is the rotation matrix; $T_{3\times 1}$ is the translation vector, whose entries are the translation amounts along the $X$, $Y$ and $Z$ coordinate axes respectively; and $(X_w, Y_w, Z_w)$ are the $X$, $Y$, $Z$ coordinate values of point $w$ in the world coordinate system.
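As a numerical sketch of this projection (the helper function and all parameter values below are illustrative assumptions, not taken from the patent), the intrinsic and extrinsic quantities can be assembled and applied as follows:

```python
import numpy as np

def project(point_w, f, sx, sy, u0, v0, R, T):
    """Pinhole projection s*[u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T,
    returning the pixel coordinates (u, v)."""
    K = np.array([[f / sx, 0.0, u0],
                  [0.0, f / sy, v0],
                  [0.0, 0.0, 1.0]])                 # intrinsic matrix
    RT = np.hstack([R, T.reshape(3, 1)])            # 3x4 extrinsic matrix [R | T]
    p = K @ RT @ np.append(np.asarray(point_w, float), 1.0)
    return p[:2] / p[2]                             # divide out the scale factor s

# A point on the optical axis, 5 m in front of an axis-aligned camera,
# projects to the principal point (u0, v0).
uv = project([0.0, 0.0, 5.0], f=0.008, sx=1e-5, sy=1e-5,
             u0=320.0, v0=240.0, R=np.eye(3), T=np.zeros(3))
```

Under this toy pose the world point lands exactly on (u0, v0), which is a quick sanity check for any calibration.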
N points are marked in the camera's field of view and their coordinates recorded; the coordinates of the corresponding N points are acquired in the unified world coordinate system; the area in which pedestrians travel is defined as a plane; and the mapping matrix M is estimated with the following formula:
$$
t_i \begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
$$
where $(x_i, y_i)$ are the coordinates of the $i$-th of the $N$ points in the camera image; $(x'_i, y'_i)$ are the coordinates of the corresponding $i$-th point in the unified world coordinate system; $t_i$ is the scale factor produced by the correspondence between the scale of the world coordinate system and the scale of the discrete image coordinates; and $i = 0, 1, 2, \ldots, N-1$.
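A sketch of how M could be estimated from the N marked point pairs via a direct linear transform (the point values below are hypothetical; any N ≥ 4 non-degenerate pairs work):

```python
import numpy as np

def estimate_mapping(src, dst):
    """Estimate the 3x3 mapping matrix M satisfying
    t_i * [x'_i, y'_i, 1]^T = M * [x_i, y_i, 1]^T
    from N >= 4 marked point pairs (direct linear transform)."""
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    # The singular vector of A with the smallest singular value holds the 9 entries of M.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    M = vt[-1].reshape(3, 3)
    return M / M[2, 2]        # remove the free overall scale

# Hypothetical calibration: image corners of a 10 m x 5 m ground rectangle.
src = [(100, 400), (540, 400), (620, 80), (20, 80)]   # image coordinates (pixels)
dst = [(0, 0), (10, 0), (10, 5), (0, 5)]              # world coordinates (metres)
M = estimate_mapping(src, dst)
```

In practice OpenCV's `findHomography` performs the same estimation with outlier rejection.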
The invention also provides an identification system for implementing the above multi-camera pedestrian identification method, comprising a plurality of camera modules and a server module; each camera module is used for recording its clearly identifiable region, performing human-body detection and tracking and face detection and tracking, performing pedestrian re-identification, posture re-identification and face re-identification on pedestrians in the clearly identifiable region, acquiring the identified characteristic values, and sending its data information to the server module; the server module is used for transforming the position coordinates of the cameras' overlapping areas into a unified world coordinate system, marking pedestrians at the same time and the same position, transforming the track data of the cameras associated with the marked pedestrian tracks into the unified world coordinate system, merging the pedestrian tracks in the unified world coordinate system, acquiring the cameras' identified characteristic values for pedestrians in their clearly identifiable areas, comparing the identified characteristic values, and calibrating the pedestrians.
The server module is a cloud server module.
The multi-camera pedestrian identification method and system provided by the invention adopt an approach that combines the edge end with the cloud end: the edge end detects and tracks pedestrians' bodies and faces, maps coordinates to collect tracks, simultaneously collects pedestrians' body images and face images, and sends the aggregated data to the server module; the server module gathers the data sent by the different cameras, processes them comprehensively, merges the tracks, extracts biological features, and compares the features to determine each pedestrian's identity ID. The invention can therefore greatly reduce both the complexity of cloud computing and the transmission bandwidth of the system, which is very favorable for engineering implementations that are sensitive to bandwidth and to server computing capability; the invention has high reliability, good practicability and high calculation efficiency.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a functional block diagram of the system of the present invention.
Detailed Description
FIG. 1 is a schematic flow chart of the method of the present invention: the invention provides a pedestrian identification method with multiple cameras, which comprises the following steps:
S1, arranging a plurality of cameras and ensuring that adjacent cameras have overlapping coverage areas;
S2, each camera recording the clearly identifiable area within its coverage under the current illumination and environmental conditions;
S3, each camera performing human-body detection and tracking and face detection and tracking on pedestrians in its own coverage area;
S4, each camera performing pedestrian re-identification, posture re-identification and face re-identification on pedestrians in the clearly identifiable area obtained in step S2, thereby obtaining identified characteristic values;
S5, collecting the data information of each camera;
S6, transforming the position coordinates of the overlapping area of each camera into a unified world coordinate system;
S7, marking pedestrians that appear at the same time and the same position in the unified world coordinate system;
S8, transforming the track data of the cameras associated with the pedestrian tracks marked in step S7 into the unified world coordinate system;
S9, merging the pedestrian tracks in the unified world coordinate system;
S10, acquiring the identified characteristic values of the associated cameras for pedestrians in their clearly identifiable areas;
S11, comparing the acquired identified characteristic values so as to identify and calibrate the pedestrians;
S12, repeating steps S2-S11 to complete the multi-camera pedestrian recognition.
In the above step, the transformation into the unified world coordinate system is specifically to transform the position coordinates in the camera into the unified world coordinate system by using the following formula:
$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f/S_x & 0 & u_0 \\ 0 & f/S_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
where $(u, v)$ are the position coordinates within the camera image; $s$ is a scale factor; $f$ is the distance from point $w$ to the center of projection (the camera aperture); $u_0$ is the quantization coefficient from the image plane to the discrete pixel values in the $U$-axis direction; $v_0$ is the quantization coefficient from the image plane to the discrete pixel values in the $V$-axis direction; $S_x$ is the component of the distance from point $w$ to the optical axis in the $x$-axis direction of the image plane; $S_y$ is the component of the distance from point $w$ to the optical axis in the $y$-axis direction of the image plane; $R_{3\times 3}$ is the rotation matrix; $T_{3\times 1}$ is the translation vector, whose entries are the translation amounts along the $X$, $Y$ and $Z$ coordinate axes respectively; and $(X_w, Y_w, Z_w)$ are the $X$, $Y$, $Z$ coordinate values of point $w$ in the world coordinate system.
In a specific implementation, for simplicity, the area in which pedestrians can travel may be defined as a plane, so that the Z-axis data can be omitted. Meanwhile, before the system is put into formal operation, N points are marked in the camera's field of view and their coordinates recorded, the coordinates of the corresponding N points are acquired in the unified world coordinate system, and the mapping matrix M is estimated with the following formula:
$$
t_i \begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
$$
where $(x_i, y_i)$ are the coordinates of the $i$-th of the $N$ points in the camera image; $(x'_i, y'_i)$ are the coordinates of the corresponding $i$-th point in the unified world coordinate system; $t_i$ is the scale factor produced by the correspondence between the scale of the world coordinate system and the scale of the discrete image coordinates; and $i = 0, 1, 2, \ldots, N-1$.
After the mapping matrix M is obtained, during normal operation of the system a pedestrian located in the image can be mapped from the image coordinate system into the world coordinate system using the above formula and the mapping matrix M, and the pedestrian's track in the world coordinate system is thereby obtained.
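Concretely, this mapping step can be sketched as follows (the matrix M used here is a toy scaling matrix, not a calibrated one, and the track format is an assumption):

```python
import numpy as np

def image_track_to_world(M, track):
    """Map an image-plane track [(t, u, v), ...] through the mapping matrix M
    into ground-plane world coordinates [(t, x, y), ...] (Z omitted)."""
    world = []
    for ts, u, v in track:
        p = M @ np.array([u, v, 1.0])
        world.append((ts, p[0] / p[2], p[1] / p[2]))  # divide out the scale t_i
    return world

# Toy mapping: 0.01 world units per pixel.
world = image_track_to_world(np.diag([0.01, 0.01, 1.0]),
                             [(0.0, 100.0, 200.0), (0.5, 150.0, 250.0)])
```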
Considering space-time uniqueness, the position (a world coordinate point) occupied by a pedestrian at a given time is unique; a point in the world coordinate system can be mapped to a point in the image coordinate system and vice versa. Under certain limiting conditions, for example when the spatial coordinate axis Z is ignored and the area in which pedestrians can travel is a plane, the coordinate mapping is one-to-one. It follows that the space-time uniqueness of a pedestrian's track in world coordinates maps to space-time uniqueness in the image coordinate system.
If the fields of view of two cameras overlap, then after calibration a pedestrian passing through the overlapping area has a track that is unique in space and time, so the pedestrian appears in the image coordinate systems of both cameras at the corresponding positions. Likewise, when a pedestrian passes through the overlapping area of the two cameras, the pedestrian's position in each camera's image coordinate system can be computed via the matrix M and the corresponding formula, and the current time is recorded at the same moment, yielding a set of space-time data. Because space and time are unique, the space-time positions of the pedestrian in the overlapping area computed by the two cameras must coincide, and therefore the pedestrian tracks of the two cameras can be merged.
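The space-time consistency test described above can be sketched as follows (the track format and the distance/time tolerances are assumptions for illustration):

```python
def tracks_match(track_a, track_b, dist_tol=0.5, time_tol=0.1):
    """Space-time consistency test: two world-coordinate tracks [(t, x, y), ...]
    belong to the same pedestrian only if, whenever their timestamps (nearly)
    coincide, their positions coincide too."""
    hits = 0
    for ta, xa, ya in track_a:
        for tb, xb, yb in track_b:
            if abs(ta - tb) <= time_tol:
                if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 > dist_tol:
                    return False       # same time, different place: different people
                hits += 1
    return hits > 0                    # require at least one coincident sample

def merge_tracks(track_a, track_b):
    """Merge two matching tracks into one time-ordered trajectory."""
    return sorted(track_a + track_b)

# Overlapping samples at t = 1.0 agree to within 5 cm, so the tracks merge.
a = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
b = [(1.0, 1.05, 0.0), (2.0, 2.0, 0.0)]
ok = tracks_match(a, b)
```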
FIG. 2 shows a functional block diagram of the system of the present invention: the invention also provides an identification system for implementing the above multi-camera pedestrian identification method, comprising a plurality of camera modules and a server module; each camera module is used for recording its clearly identifiable region, performing human-body detection and tracking and face detection and tracking, performing pedestrian re-identification, posture re-identification and face re-identification on pedestrians in the clearly identifiable region, acquiring the identified characteristic values, and sending its data information to the server module; the server module is used for transforming the position coordinates of the cameras' overlapping areas into a unified world coordinate system, marking pedestrians at the same time and the same position, transforming the track data of the cameras associated with the marked pedestrian tracks into the unified world coordinate system, merging the pedestrian tracks in the unified world coordinate system, acquiring the cameras' identified characteristic values for pedestrians in their clearly identifiable areas, comparing the identified characteristic values, and calibrating the pedestrians.
In particular, the server module may be a cloud server module.
Meanwhile, the camera module may include a general module and an edge-end module. The general module mainly provides tools and definitions for the edge module and the server module, including: a deep learning engine (EngineDL) for loading deep learning models; a math matrix memory pool (MatPool) for managing the memory that stores images and other matrices; a memory pool (MemoryPool) for managing frequently and heavily requested memory; a global macro definition (GlobalDefine) for defining global and tunable parameters; and a pedestrian data record structure (structPedestian) for recording a pedestrian's data, including parameters such as tracks, images, features and attributes. The edge-end module detects the pedestrian's body and face, tracks them in real time, finally obtains the pedestrian's track together with data such as body pictures and face pictures, and transmits the data to the server module.
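A possible shape for the pedestrian data record (the field names and types below are assumptions sketched from the description of structPedestian, not taken from the patent):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PedestrianRecord:
    """Sketch of the pedestrian data record described above."""
    pedestrian_id: int = -1                       # -1 until the server assigns an ID
    track: List[Tuple[float, float, float]] = field(default_factory=list)  # (t, x, y)
    body_images: list = field(default_factory=list)
    face_images: list = field(default_factory=list)
    features: list = field(default_factory=list)  # re-id / pose / face feature vectors
    attributes: Dict[str, str] = field(default_factory=dict)

rec = PedestrianRecord()
rec.track.append((0.0, 1.0, 2.0))
```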
The server module comprehensively processes the data from the edge-end modules: it merges pedestrian tracks from different cameras, extracts the pedestrians' biological features and attributes, compares them with a personnel database to determine each pedestrian's ID, clusters the pedestrian tracks, merges all tracks belonging to the same personnel ID, and maintains a pedestrian track database. Accordingly, the server module comprises submodules such as track merging, track clustering and the pedestrian track database.
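Merging all track segments belonging to the same personnel ID can be sketched as follows (the record format, a list of `(person_id, track)` pairs, is an assumption):

```python
from collections import defaultdict

def merge_by_person_id(records):
    """Group track segments by personnel ID and merge each group into one
    time-ordered trajectory; tracks are lists of (t, x, y) samples."""
    grouped = defaultdict(list)
    for person_id, track in records:
        grouped[person_id].extend(track)       # collect all segments of this ID
    return {pid: sorted(seg) for pid, seg in grouped.items()}  # order by timestamp

merged = merge_by_person_id([
    ("P1", [(0.0, 0.0, 0.0)]),
    ("P2", [(0.5, 3.0, 1.0)]),
    ("P1", [(1.0, 1.0, 0.0)]),
])
```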

Claims (5)

1. A pedestrian identification method with multiple cameras, comprising the following steps:
S1, arranging a plurality of cameras and ensuring that adjacent cameras have overlapping coverage areas;
S2, each camera recording the clearly identifiable area within its coverage under the current illumination and environmental conditions;
S3, each camera performing human-body detection and tracking and face detection and tracking on pedestrians in its own coverage area;
S4, each camera performing pedestrian re-identification, posture re-identification and face re-identification on pedestrians in the clearly identifiable area obtained in step S2, thereby obtaining identified characteristic values;
S5, collecting the data information of each camera;
S6, transforming the position coordinates of the overlapping area of each camera into a unified world coordinate system;
S7, marking pedestrians that appear at the same time and the same position in the unified world coordinate system;
S8, transforming the track data of the cameras associated with the pedestrian tracks marked in step S7 into the unified world coordinate system;
S9, merging the pedestrian tracks in the unified world coordinate system;
S10, acquiring the identified characteristic values of the associated cameras for pedestrians in their clearly identifiable areas;
S11, comparing the acquired identified characteristic values so as to identify and calibrate the pedestrians;
S12, repeating steps S2-S11 to complete the multi-camera pedestrian recognition.
2. The method of claim 1, wherein the transformation to the uniform world coordinate system is performed by transforming position coordinates within the camera to the uniform world coordinate system using the following equation:
$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f/S_x & 0 & u_0 \\ 0 & f/S_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
where $(u, v)$ are the position coordinates within the camera image; $s$ is a scale factor; $f$ is the distance from point $w$ to the center of projection (the camera aperture); $u_0$ is the quantization coefficient from the image plane to the discrete pixel values in the $U$-axis direction; $v_0$ is the quantization coefficient from the image plane to the discrete pixel values in the $V$-axis direction; $S_x$ is the component of the distance from point $w$ to the optical axis in the $x$-axis direction of the image plane; $S_y$ is the component of the distance from point $w$ to the optical axis in the $y$-axis direction of the image plane; $R_{3\times 3}$ is the rotation matrix; $T_{3\times 1}$ is the translation vector, whose entries are the translation amounts along the $X$, $Y$ and $Z$ coordinate axes respectively; and $(X_w, Y_w, Z_w)$ are the $X$, $Y$, $Z$ coordinate values of point $w$ in the world coordinate system.
3. The multi-camera pedestrian recognition method according to claim 1 or 2, wherein N points are marked in the field of view of the camera and coordinates are recorded, and the coordinates of the corresponding N points are acquired in a unified world coordinate system while specifying a travelable area of a pedestrian as a plane, and the mapping matrix M is estimated using the following equation:
$$
t_i \begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
$$
where $(x_i, y_i)$ are the coordinates of the $i$-th of the $N$ points in the camera image; $(x'_i, y'_i)$ are the coordinates of the corresponding $i$-th point in the unified world coordinate system; $t_i$ is the scale factor produced by the correspondence between the scale of the world coordinate system and the scale of the discrete image coordinates; and $i = 0, 1, 2, \ldots, N-1$.
4. A recognition system for implementing the multi-camera pedestrian recognition method according to any one of claims 1 to 3, characterized by comprising a plurality of camera modules and a server module; each camera module is used for recording its clearly identifiable region, performing human-body detection and tracking and face detection and tracking, performing pedestrian re-identification, posture re-identification and face re-identification on pedestrians in the clearly identifiable region, acquiring the identified characteristic values, and sending its data information to the server module; the server module is used for transforming the position coordinates of the cameras' overlapping areas into a unified world coordinate system, marking pedestrians at the same time and the same position, transforming the track data of the cameras associated with the marked pedestrian tracks into the unified world coordinate system, merging the pedestrian tracks in the unified world coordinate system, acquiring the cameras' identified characteristic values for pedestrians in their clearly identifiable areas, comparing the identified characteristic values, and calibrating the pedestrians.
5. The identification system of claim 4, wherein said server module is a cloud server module.
CN201911242483.5A 2019-12-06 2019-12-06 Pedestrian identification method and system with multiple cameras Pending CN111079600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911242483.5A CN111079600A (en) 2019-12-06 2019-12-06 Pedestrian identification method and system with multiple cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911242483.5A CN111079600A (en) 2019-12-06 2019-12-06 Pedestrian identification method and system with multiple cameras

Publications (1)

Publication Number Publication Date
CN111079600A true CN111079600A (en) 2020-04-28

Family

ID=70312983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911242483.5A Pending CN111079600A (en) 2019-12-06 2019-12-06 Pedestrian identification method and system with multiple cameras

Country Status (1)

Country Link
CN (1) CN111079600A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709974A (en) * 2020-06-22 2020-09-25 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111738125A (en) * 2020-06-16 2020-10-02 中国银行股份有限公司 Method and device for determining number of clients
CN111931638A (en) * 2020-08-07 2020-11-13 华南理工大学 Local complex area positioning system and method based on pedestrian re-identification
CN112163503A (en) * 2020-09-24 2021-01-01 高新兴科技集团股份有限公司 Method, system, storage medium and equipment for generating insensitive track of personnel in case handling area
CN112163537A (en) * 2020-09-30 2021-01-01 中国科学院深圳先进技术研究院 Pedestrian abnormal behavior detection method, system, terminal and storage medium
CN112200106A (en) * 2020-10-16 2021-01-08 中国计量大学 Cross-camera pedestrian re-identification and tracking method
CN112733719A (en) * 2021-01-11 2021-04-30 西南交通大学 Cross-border pedestrian track detection method integrating human face and human body features
CN112738725A (en) * 2020-12-18 2021-04-30 福建新大陆软件工程有限公司 Real-time identification method, device, equipment and medium for target crowd in semi-closed area
CN112950674A (en) * 2021-03-09 2021-06-11 厦门市公安局 Cross-camera track tracking method and device based on cooperation of multiple identification technologies and storage medium
CN113689475A (en) * 2021-08-27 2021-11-23 招商银行股份有限公司 Cross-border head trajectory tracking method, equipment and storage medium
WO2022067606A1 (en) * 2020-09-30 2022-04-07 中国科学院深圳先进技术研究院 Method and system for detecting abnormal behavior of pedestrian, and terminal and storage medium
CN116433709A (en) * 2023-04-14 2023-07-14 北京拙河科技有限公司 Tracking method and device for sports ground monitoring
CN116704448A (en) * 2023-08-09 2023-09-05 山东字节信息科技有限公司 Pedestrian recognition method and recognition system with multiple cameras

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150329048A1 (en) * 2014-05-16 2015-11-19 GM Global Technology Operations LLC Surround-view camera system (vpm) online calibration
CN106203260A (en) * 2016-06-27 2016-12-07 南京邮电大学 Pedestrian's recognition and tracking method based on multiple-camera monitoring network
CN107704824A (en) * 2017-09-30 2018-02-16 北京正安维视科技股份有限公司 Pedestrian based on space constraint recognition methods and equipment again
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN108924507A (en) * 2018-08-02 2018-11-30 高新兴科技集团股份有限公司 A kind of personnel's system of path generator and method based on multi-cam scene
CN109344787A (en) * 2018-10-15 2019-02-15 浙江工业大学 A kind of specific objective tracking identified again based on recognition of face and pedestrian
CN110046277A (en) * 2019-04-09 2019-07-23 北京迈格威科技有限公司 More video merging mask methods and device
CN110378931A (en) * 2019-07-10 2019-10-25 成都数之联科技有限公司 A kind of pedestrian target motion track acquisition methods and system based on multi-cam
CN110414441A (en) * 2019-07-31 2019-11-05 浙江大学 A kind of pedestrian's whereabouts analysis method and system

Non-Patent Citations (1)

Title
Jiang Zhihong, Beijing Institute of Technology Press *

Cited By (21)

Publication number Priority date Publication date Assignee Title
CN111738125A (en) * 2020-06-16 2020-10-02 中国银行股份有限公司 Method and device for determining the number of customers
CN111738125B (en) * 2020-06-16 2023-10-27 中国银行股份有限公司 Method and device for determining the number of customers
CN111709974A (en) * 2020-06-22 2020-09-25 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111709974B (en) * 2020-06-22 2022-08-02 苏宁云计算有限公司 Human body tracking method and device based on RGB-D image
CN111931638A (en) * 2020-08-07 2020-11-13 华南理工大学 Local complex area positioning system and method based on pedestrian re-identification
CN111931638B (en) * 2020-08-07 2023-06-20 华南理工大学 Pedestrian re-identification-based local complex area positioning system and method
CN112163503A (en) * 2020-09-24 2021-01-01 高新兴科技集团股份有限公司 Method, system, storage medium and device for generating non-intrusive personnel trajectories in a case-handling area
WO2022067606A1 (en) * 2020-09-30 2022-04-07 中国科学院深圳先进技术研究院 Method and system for detecting abnormal behavior of pedestrian, and terminal and storage medium
CN112163537A (en) * 2020-09-30 2021-01-01 中国科学院深圳先进技术研究院 Pedestrian abnormal behavior detection method, system, terminal and storage medium
CN112163537B (en) * 2020-09-30 2024-04-26 中国科学院深圳先进技术研究院 Pedestrian abnormal behavior detection method, system, terminal and storage medium
CN112200106A (en) * 2020-10-16 2021-01-08 中国计量大学 Cross-camera pedestrian re-identification and tracking method
CN112738725A (en) * 2020-12-18 2021-04-30 福建新大陆软件工程有限公司 Real-time identification method, device, equipment and medium for a target crowd in a semi-enclosed area
CN112738725B (en) * 2020-12-18 2022-09-23 福建新大陆软件工程有限公司 Real-time identification method, device, equipment and medium for a target crowd in a semi-enclosed area
CN112733719A (en) * 2021-01-11 2021-04-30 西南交通大学 Cross-camera pedestrian trajectory detection method fusing face and body features
CN112733719B (en) * 2021-01-11 2022-08-02 西南交通大学 Cross-camera pedestrian trajectory detection method fusing face and body features
CN112950674A (en) * 2021-03-09 2021-06-11 厦门市公安局 Cross-camera trajectory tracking method, device and storage medium based on cooperation of multiple recognition technologies
CN112950674B (en) * 2021-03-09 2024-03-05 厦门市公安局 Cross-camera trajectory tracking method, device and storage medium based on cooperation of multiple recognition technologies
CN113689475A (en) * 2021-08-27 2021-11-23 招商银行股份有限公司 Cross-camera head trajectory tracking method, device and storage medium
CN116433709A (en) * 2023-04-14 2023-07-14 北京拙河科技有限公司 Tracking method and device for sports ground monitoring
CN116704448A (en) * 2023-08-09 2023-09-05 山东字节信息科技有限公司 Pedestrian recognition method and recognition system with multiple cameras
CN116704448B (en) * 2023-08-09 2023-10-24 山东字节信息科技有限公司 Pedestrian recognition method and recognition system with multiple cameras

Similar Documents

Publication Publication Date Title
CN111079600A (en) Pedestrian identification method and system with multiple cameras
CN109887040B (en) Moving target active sensing method and system for video monitoring
Ahmed et al. Top view multiple people tracking by detection using deep SORT and YOLOv3 with transfer learning: within 5G infrastructure
Wheeler et al. Face recognition at a distance system for surveillance applications
CN111027462A (en) Pedestrian track identification method across multiple cameras
Hassanein et al. A survey on Hough transform, theory, techniques and applications
Zhuang et al. 3-D-laser-based scene measurement and place recognition for mobile robots in dynamic indoor environments
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
US20130163822A1 (en) Airborne Image Capture and Recognition System
CN109099929B (en) Intelligent vehicle positioning device and method based on scene fingerprints
CN105160321A (en) Vision-and-wireless-positioning-based mobile terminal identity verification method
KR101645959B1 (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
CN111860352B (en) Multi-camera full-trajectory vehicle tracking system and method
CN113962274B (en) Abnormality identification method and device, electronic device and storage medium
CN111242077A (en) Person tracking method, system and server
CN113256731A (en) Target detection method and device based on monocular vision
KR101579275B1 (en) Security system using real-time monitoring with location-trace for dangerous-object
CN114511592A (en) Personnel trajectory tracking method and system based on RGBD camera and BIM system
CN111932590A (en) Object tracking method and device, electronic equipment and readable storage medium
CN110728249A (en) Cross-camera identification method, device and system for target pedestrian
CN115767424A (en) Video positioning method based on RSS and CSI fusion
CN115994953A (en) Power field security monitoring and tracking method and system
CN113743380B (en) Active tracking method based on video image dynamic monitoring
CN113628251B (en) Smart hotel terminal monitoring method
CN116051876A (en) Camera-array target recognition method and system based on a three-dimensional digital model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination