CN112307912A - Method and system for determining personnel track based on camera - Google Patents
- Publication number: CN112307912A
- Application number: CN202011118416.5A
- Authority
- CN
- China
- Prior art keywords
- camera
- person
- coordinate system
- target
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Abstract
The invention discloses a method and a system for determining a person track based on a camera, belonging to the technical field of positioning and comprising the following steps: S1: deploying a camera and selecting a reference object; S2: establishing a three-dimensional coordinate system and selecting a coordinate origin; S3: identifying the target person and acquiring images of the target person at each moment; S4: performing coordinate conversion and tracing the person trajectory. The invention reduces unnecessary sensor equipment and saves cost by fully utilizing the cameras already installed on site, with no additional expenditure; person trajectories can be determined for all persons, including persons from outside the area; and the background can acquire more information, enabling richer analysis and timely adjustment measures, making the method well worth popularizing.
Description
Technical Field
The invention relates to the technical field of positioning, in particular to a method and a system for determining a person track based on a camera.
Background
The current method for determining a person's trajectory is for the person to carry a positioning sensor that sends positioning information to a background in real time; the background stores the received position information at different moments and draws the person's trajectory.
This method has certain defects: person trajectories can only be drawn when position information is returned to the background by a positioning sensor, which is costly; trajectories cannot be drawn for persons who do not carry sensors or who come from outside the area; and the background only receives position-change information, so it cannot analyse or effectively respond to sudden factors influencing a person's trajectory. Therefore, a method and a system for determining a person's trajectory based on a camera are provided.
Disclosure of Invention
The technical problem to be solved by the invention is how to overcome the dependence of existing person-trajectory determination methods on positioning sensors for positioning and trajectory determination; to this end, a method for determining a person track based on a camera is provided.
The invention solves the above technical problem through the following technical scheme, comprising the following steps:
S1: deploying a camera and selecting a reference object
Selecting an optimal deployment position to deploy a camera, fixing the camera firmly, and selecting a static object in the monitoring area as a reference object;
S2: establishing a three-dimensional coordinate system and selecting an origin of coordinates
Establishing a three-dimensional coordinate system of the monitoring area, and then selecting one point in the monitoring area as the coordinate origin of the three-dimensional coordinate system;
S3: identifying the target person and acquiring target person images at each moment
Monitoring the monitorable area in real time through the camera, identifying the target person in the monitored area, and acquiring images containing the target person at each moment after the target person is detected;
S4: coordinate transformation and person trajectory delineation
After the target person is identified, calculating the angle and the distance of the target person relative to the reference object, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person, and finally delineating the person trajectory in time order.
Further, in the step S1, the reference object is a camera.
Further, in the step S2, the three-dimensional coordinate system is required to conform to the right-hand rule.
Further, in the step S2, if the monitored area is a rectangular space, a rectangular vertex is selected as the origin of coordinates; and if the monitoring area is an irregular space body, selecting a camera as the origin of coordinates of the three-dimensional space.
Further, in the step S3, the MaskRCNN algorithm is used to identify the target person in the monitored area.
Further, the specific process of identifying the target person in the monitored area is as follows:
S31: establishing a database containing a large batch of face images;
S32: the camera calls a built-in face recognition function module to compare the features of the shot image with the image features of the face image database, so that the target person in the monitored area can be recognized;
S33: identifying the image shot by the camera using the MaskRCNN algorithm to obtain the pixel coordinates P_uv of the target person.
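The feature comparison in S32 can be illustrated as a nearest-neighbour search over stored face feature vectors. This is only a minimal sketch, not the patent's implementation: the `database` contents, the 4-dimensional vectors and the 0.9 similarity threshold are hypothetical stand-ins for the embeddings a real face recognition module would produce.

```python
import numpy as np

# Toy face-feature database. Real systems compare embeddings from a
# face recognition model; these 4-d vectors, names and the 0.9
# threshold are purely illustrative assumptions.
database = {
    "person_a": np.array([0.9, 0.1, 0.0, 0.4]),
    "person_b": np.array([0.1, 0.8, 0.5, 0.2]),
}

def identify(features, database, threshold=0.9):
    """Return the identity whose stored feature vector has the highest
    cosine similarity with the captured features, or None if no entry
    clears the threshold (i.e. the person is unknown)."""
    best_name, best_sim = None, -1.0
    for name, ref in database.items():
        sim = float(features @ ref /
                    (np.linalg.norm(features) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

match = identify(np.array([0.88, 0.12, 0.02, 0.41]), database)
# match -> "person_a" (high cosine similarity to that stored vector)
```

In practice the threshold trades off false matches against missed identifications and would be tuned on real embedding data.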
Further, in the step S4, the conversion of the two-dimensional coordinates into three-dimensional space coordinates refers to the conversion of pixel coordinates into world coordinates (actual point coordinates), as follows:

P_uv = K·T·P_w

wherein P_uv is the pixel coordinates of the target, P_w is the actual (world) coordinates of the target, K is the intrinsic parameter matrix, related to the camera and lens, and T is the extrinsic parameter matrix, i.e. the transformation from the camera coordinate system to the world coordinate system.
Further, the conversion formula is written in matrix form as follows:

s·[u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] × [[r_11, r_12, r_13, t_1], [r_21, r_22, r_23, t_2], [r_31, r_32, r_33, t_3]] × [x, y, z, 1]^T

wherein the first matrix on the right side is the camera intrinsic parameter matrix, corresponding to the parameter K; the second matrix is the camera extrinsic parameter matrix, corresponding to the parameter T; and s is an equation coefficient;

f_x and f_y are the focal length parameters of the camera and (u_0, v_0) is the principal point; r_ij are the rotation components of the camera coordinate system relative to the world coordinate system; t_i are the displacement offsets in the coordinate system conversion; u and v are pixel coordinate system coordinates; s is the pixel coordinate coefficient; and x, y and z are world coordinate system coordinates.
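As a numeric sanity check of the relation P_uv = K·T·P_w, the following sketch projects a world point to a pixel with numpy. The intrinsic values (focal length 800 px, principal point (320, 240)) and the camera pose are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative intrinsics: f_x = f_y = 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsics T = [R | t]: identity rotation, camera 5 m
# along the optical axis from the world origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
T = np.hstack([R, t.reshape(3, 1)])                 # 3x4 matrix

def world_to_pixel(P_w, K, T):
    """Evaluate s*[u, v, 1]^T = K T [x, y, z, 1]^T and divide out s."""
    uvs = K @ T @ np.append(P_w, 1.0)               # [s*u, s*v, s]
    return uvs[:2] / uvs[2], uvs[2]

pixel, s = world_to_pixel(np.array([1.0, 0.5, 0.0]), K, T)
# pixel -> (480, 320), s -> 5 for this configuration
```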
The invention also provides a system for determining the trajectory of a person based on a camera, which determines the trajectory of the target person using the above method and comprises:
the deployment module is used for selecting a deployment position to deploy the camera and firmly fixing the camera, and selecting a static object in the monitoring area as a reference object;
the coordinate system selection module is used for establishing a three-dimensional coordinate system of the monitoring area and then selecting one point in the monitoring area as a coordinate origin of the three-dimensional coordinate system;
the target person image acquisition module is used for monitoring a monitorable area in real time through the camera, identifying target persons in the monitoring area and acquiring images containing the target persons at all times after the target persons are detected;
the coordinate track processing module is used for calculating the angle and the distance of the target person relative to the reference object after the target person is identified, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person and finally drawing a person track according to the time sequence;
the control module is used for sending instructions to each module to complete related actions;
the deployment module, the coordinate system selection module, the target person image acquisition module and the coordinate track processing module are all electrically connected with the control module.
Compared with the prior art, the invention has the following advantages: the method for determining the person track based on the camera reduces unnecessary sensor equipment and saves cost by fully utilizing the cameras already installed on site, with no additional expenditure; person trajectories can be determined for all persons, including persons from outside the area; and the background can acquire more information, enabling richer analysis and timely adjustment measures, making the method well worth popularizing.
Drawings
Fig. 1 is a schematic flowchart of a method for determining a person trajectory based on a camera according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a two-dimensional coordinate transformation into a three-dimensional coordinate according to a first embodiment of the present invention;
FIG. 3 is a flowchart of an implementation of a method for determining a person trajectory based on a camera according to a second embodiment of the present invention;
fig. 4 is a structural diagram of a machine room in the third embodiment of the present invention;
FIG. 5a is a schematic picture without MaskRCNN processing in example III of the present invention;
FIG. 5b is a schematic picture processed by MaskRCNN in example III of the present invention;
fig. 6 is a schematic diagram of calibrating parameters of a camera by using a black and white checkerboard calibration board in the third embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
As shown in fig. 1, the present embodiment provides a technical solution: a method for determining a person track based on a camera comprises the following steps:
S1: deploying a camera and selecting a reference object
Selecting an optimal deployment position to deploy the camera, fixing the camera firmly and ensuring it is not occluded by foreign objects, and selecting a static object in the monitoring area as a reference object;
S2: establishing a three-dimensional coordinate system and selecting an origin of coordinates
Establishing a three-dimensional coordinate system of the monitoring area that conforms to the right-hand rule; if the monitoring area is a rectangular space, a vertex of the rectangle is selected as the coordinate origin, and if the monitoring area is an irregular space, the camera is selected as the coordinate origin of the three-dimensional space;
S3: identifying the target person and acquiring target person images at each moment
Monitoring the monitorable area in real time through the camera, identifying the target person in the monitored area using face recognition technology, and acquiring images containing the target person at each moment after the target person is detected;
S4: coordinate transformation and person trajectory delineation
Calculating the angle and the distance of the target person relative to the reference object through a target identification algorithm, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person, and then drawing the person trajectory in time order.
The reference object may also be the camera itself; in principle, any object in a stationary state within the monitored area can serve as the reference object.
As shown in fig. 2, a schematic diagram of two-dimensional coordinates converted into three-dimensional coordinates is shown.
There are four coordinate systems in the figure, world, camera, image, pixel respectively.
world is the world coordinate system; its Xw and Yw axes can be specified arbitrarily, and it is the coordinate system of the point P in Fig. 2.
camera is the camera coordinate system: its origin is located at the pinhole, its Zc axis coincides with the optical axis, and its Xc and Yc axes are parallel to the projection plane; it is the coordinate system XcYcZc in the figure, where the pinhole is at position Fc, the imaging focus of the camera;
image is the image coordinate system: its origin is located at the intersection of the optical axis and the projection plane, and its axes are parallel to the projection plane; it is the coordinate system XY in the figure;
pixel is the pixel coordinate system: viewed from the pinhole towards the projection plane, its origin is the upper-left corner of the projection plane and its u and v axes coincide with two sides of the projection plane; it lies in the same plane as the image coordinate system but has a different origin.
The following is the transformation formula between pixel coordinates (pixel) and world coordinates (world); the first matrix on the right side is the camera intrinsic parameter matrix and the second matrix is the camera extrinsic parameter matrix:

s·[u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] × [[r_11, r_12, r_13, t_1], [r_21, r_22, r_23, t_2], [r_31, r_32, r_33, t_3]] × [x, y, z, 1]^T

wherein f_x and f_y are the focal length parameters of the camera and (u_0, v_0) is the principal point; r_ij are the rotation components of the camera coordinate system relative to the world coordinate system; t_i are the displacement offsets in the coordinate system conversion; u and v are pixel coordinate system coordinates; s is the pixel coordinate coefficient; and x, y and z are world coordinate system coordinates.

That is, P_uv = K·T·P_w (1)

The right-hand side implies a conversion from homogeneous to non-homogeneous coordinates. K is the intrinsic parameter matrix, related to the camera and lens, and is measured by Zhang Zhengyou's calibration method; T is the extrinsic parameter matrix, the transformation from the camera coordinate system to the world coordinate system, obtained by PnP (Perspective-n-Point) estimation; s is an equation coefficient. The equation can thus transform two-dimensional coordinates in the picture into three-dimensional space coordinates.
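Going the other way, from a pixel back to a world point, needs one extra constraint, since a pixel only fixes a ray. A common choice, assumed here for illustration (the patent instead computes s from reference-object correspondences in the third embodiment), is that the person stands on the floor plane z = 0; then (x, y) and the coefficient s follow from a 3×3 linear system:

```python
import numpy as np

# Illustrative camera values (hypothetical, not from the patent):
# focal length 800 px, principal point (320, 240), camera 5 m away.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def pixel_to_world_on_floor(u, v, K, R, t):
    """Invert s*[u, v, 1]^T = K (R [x, y, 0]^T + t) for a point on the
    z = 0 plane: the unknowns x, y, s give a 3x3 linear system."""
    KR = K @ R
    A = np.column_stack([KR[:, 0], KR[:, 1], -np.array([u, v, 1.0])])
    b = -K @ t
    x, y, s = np.linalg.solve(A, b)
    return np.array([x, y, 0.0]), s

P_w, s = pixel_to_world_on_floor(480.0, 320.0, K, R, t)
# P_w -> (1.0, 0.5, 0.0), s -> 5 for this configuration
```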
The embodiment also provides a system for determining the trajectory of a person based on a camera, which determines the trajectory of the target person using the above method and comprises:
the deployment module is used for selecting a deployment position to deploy the camera and firmly fixing the camera, and selecting a static object in the monitoring area as a reference object;
the coordinate system selection module is used for establishing a three-dimensional coordinate system of the monitoring area and then selecting one point in the monitoring area as a coordinate origin of the three-dimensional coordinate system;
the target person image acquisition module is used for monitoring a monitorable area in real time through the camera, identifying target persons in the monitoring area and acquiring images containing the target persons at all times after the target persons are detected;
the coordinate track processing module is used for calculating the angle and the distance of the target person relative to the reference object after the target person is identified, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person and finally drawing a person track according to the time sequence;
the control module is used for sending instructions to each module to complete related actions;
the deployment module, the coordinate system selection module, the target person image acquisition module and the coordinate track processing module are all electrically connected with the control module.
Example two
As shown in fig. 3, the present embodiment provides a technical solution: a method for determining a person track based on a camera comprises the following steps:
S1: deploying a camera and selecting a reference object
Selecting an optimal deployment position to deploy the camera, fixing the camera firmly and ensuring it is not occluded by foreign objects, and selecting a static object in the monitoring area as a reference object;
S2: establishing a three-dimensional coordinate system and selecting an origin of coordinates
Establishing a three-dimensional coordinate system of the monitoring area that conforms to the right-hand rule; if the monitoring area is a rectangular space, a vertex of the rectangle is selected as the coordinate origin, and if the monitoring area is an irregular space, the camera is selected as the coordinate origin of the three-dimensional space;
S3: identifying the target person and acquiring target person images at each moment
Monitoring the monitorable area in real time through the camera, identifying the target person in the monitored area using face recognition technology, and acquiring images containing the target person at each moment after the target person is detected;
S4: coordinate transformation and person trajectory delineation
Calculating the angle and the distance of the target person relative to the reference object through a target identification algorithm, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person, and then drawing the person trajectory in time order.
Example three
The embodiment provides a technical scheme: a method for determining a person track based on a camera mainly comprises the following steps:
1. selecting reference object to establish coordinate system
As shown in fig. 4, there is a room in which a camera has been installed, the camera being installed at a position close to the Yw axis as shown on the left side of the figure. And selecting a vertex of the rectangular space as a coordinate origin to establish a three-dimensional space coordinate system XwYwZw, and selecting a leftmost cabinet in the graph as a reference object.
2. Obtaining pixel coordinates
As shown in fig. 5a and 5b, the camera detects a target person through a face recognition function, and tracks and positions a pedestrian in a picture by using a computer vision algorithm MaskRCNN to obtain pixel coordinates (u, v) of the target.
The processing procedure of the computer vision algorithm MaskRCNN is as follows:
Firstly, a picture is input and the corresponding preprocessing operations are performed (or a preprocessed picture is input directly); the picture is then fed into a pre-trained neural network backbone (ResNeXt or similar) to obtain the corresponding feature map;
then, a predetermined ROI is set for each point in this feature map, thereby obtaining a plurality of candidate ROIs;
the candidate ROIs are then sent to an RPN network for binary classification (foreground or background) and bounding-box regression, filtering out part of the candidate ROIs;
an ROIAlign operation is then performed on the remaining ROIs (i.e., pixels of the original image are matched to the feature map, and the feature map is in turn matched to fixed-size features);
finally, these ROIs undergo N-category classification and bounding-box regression, and the MASK branch generates the pixel coordinates (u, v).
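Once the MASK branch has produced a binary person mask, a single pixel coordinate (u, v) must be chosen to represent the person. A minimal sketch, assuming the bottom-centre of the mask (roughly where the feet meet the floor) is used; the toy mask below stands in for real Mask R-CNN output:

```python
import numpy as np

# Toy binary person mask (1 = person pixel), standing in for the MASK
# branch output of Mask R-CNN; axis 0 is v (rows), axis 1 is u (cols).
mask = np.zeros((10, 12), dtype=np.uint8)
mask[3:9, 5:8] = 1                      # a 6x3 blob of "person" pixels

def mask_to_pixel_coordinate(mask):
    """Reduce a segmentation mask to one (u, v) point: the bottom-centre
    of the mask approximates where the person's feet touch the floor."""
    vs, us = np.nonzero(mask)
    u = int(round(us.mean()))           # horizontal centre of the blob
    v = int(vs.max())                   # lowest (bottom-most) pixel row
    return u, v

u, v = mask_to_pixel_coordinate(mask)
# u -> 6, v -> 8 for this toy mask
```

Using the feet point rather than the mask centroid matters when the result is fed to a floor-plane back-projection, since only the feet lie on z = 0.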
3. Coordinate transformation
As shown in fig. 6, a black-and-white checkerboard calibration board with 15 cm squares is printed, and a number of checkerboard pictures are shot with the installed camera at different distances, in different directions and at different inclination angles, so that 2D/3D correspondences for a plurality of viewing angles are obtained.
Each small square of the checkerboard calibration board corresponds to coordinates (x_n, y_n, z_n) in the established 3D space coordinate system, and the picture shot by the camera gives the corresponding 2D pixel coordinate system coordinates (u_n, v_n); therefore, 2D/3D coordinate correspondences for a plurality of angles can be obtained by shooting at different distances, in different directions and at different inclination angles.
In OpenCV, the camera intrinsic parameters K can be solved using the calibrateCamera function, together with the extrinsic parameters T for each viewing angle; these are substituted into equation (1) of the first embodiment:
According to the contour points of the reference cabinet in the figure, a correspondence is formed between the pixel coordinates (u_n, v_n) of those points in the photo and their actual point coordinates (x_n, y_n, z_n) (the actual point coordinates (x_n, y_n, z_n) were determined when the three-dimensional coordinate system was established); s is calculated from this correspondence, and the two-dimensional coordinates (u, v) can then be converted into coordinates (x, y, z) of the established three-dimensional coordinate system.
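One way to recover the combined projection matrix K·T from 2D/3D correspondences like those above is the Direct Linear Transform (DLT). This is a standard alternative sketch, not the patent's exact calibrateCamera/PnP route, and the camera values used to synthesise the correspondences are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth camera, used only to synthesise pixel
# observations; in practice the (u_n, v_n) come from the photo and
# the (x_n, y_n, z_n) from the measured checkerboard/cabinet points.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
P_true = K @ T                                    # 3x4 projection matrix

pts_3d = rng.uniform([-1.0, -1.0, 0.0], [1.0, 1.0, 2.0], size=(8, 3))
pts_h = np.hstack([pts_3d, np.ones((8, 1))])
proj = (P_true @ pts_h.T).T
pts_2d = proj[:, :2] / proj[:, 2:3]

def estimate_projection_matrix(pts_3d, pts_2d):
    """Direct Linear Transform: each 2D/3D correspondence yields two
    homogeneous linear equations in the 12 entries of P; the solution
    is the right singular vector of the smallest singular value."""
    rows = []
    for (x, y, z), (u, v) in zip(pts_3d, pts_2d):
        X = np.array([x, y, z, 1.0])
        rows.append(np.concatenate([X, np.zeros(4), -u * X]))
        rows.append(np.concatenate([np.zeros(4), X, -v * X]))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[-1].reshape(3, 4)

P_est = estimate_projection_matrix(pts_3d, pts_2d)
P_est /= P_est[-1, -1]            # remove the arbitrary overall scale
P_norm = P_true / P_true[-1, -1]
```

With noisy real correspondences one would normalise the coordinates first and refine the estimate by minimising reprojection error, which is what OpenCV's calibration routines do internally; note the DLT needs non-coplanar 3D points, so a single flat checkerboard view alone is not enough.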
4. Depicting the trajectory of a person
The converted coordinates and the corresponding times are stored in the system, and the movement track of the target person is then depicted according to time.
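The storage-and-depiction step can be sketched as sorting the converted (time, position) records chronologically; the timestamps and positions below are illustrative:

```python
from datetime import datetime

# Converted positions with their capture times; values are illustrative.
observations = [
    ("2020-10-19 09:00:02", (1.0, 0.5, 0.0)),
    ("2020-10-19 09:00:00", (0.0, 0.0, 0.0)),
    ("2020-10-19 09:00:01", (0.5, 0.2, 0.0)),
]

def build_trajectory(observations):
    """Sort (timestamp, position) records chronologically; the ordered
    positions form the person's trajectory polyline."""
    parsed = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), pos)
              for ts, pos in observations]
    parsed.sort(key=lambda item: item[0])
    return [pos for _, pos in parsed]

trajectory = build_trajectory(observations)
# trajectory -> [(0.0, 0.0, 0.0), (0.5, 0.2, 0.0), (1.0, 0.5, 0.0)]
```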
In summary, the method for determining the person track based on the camera according to the embodiment reduces unnecessary sensor equipment and saves cost by fully utilizing the cameras already installed on site, with no additional expenditure; person trajectories can be determined for all persons, including persons from outside the area; and the background can acquire more information, enabling richer analysis and timely adjustment measures, making the method well worth popularizing.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (9)
1. A method for determining a person track based on a camera is characterized by comprising the following steps:
S1: deploying a camera and selecting a reference object
Selecting a deployment position to deploy a camera, fixing the camera firmly, and selecting a static object in the monitoring area as a reference object;
S2: establishing a three-dimensional coordinate system and selecting an origin of coordinates
Establishing a three-dimensional coordinate system of the monitoring area, and then selecting one point in the monitoring area as the coordinate origin of the three-dimensional coordinate system;
S3: identifying the target person and acquiring target person images at each moment
Monitoring the monitorable area in real time through the camera, identifying the target person in the monitored area, and acquiring images containing the target person at each moment after the target person is detected;
S4: coordinate transformation and person trajectory delineation
After the target person is identified, calculating the angle and the distance of the target person relative to the reference object, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person, and finally delineating the person trajectory in time order.
2. The camera-based method for determining a trajectory of a person according to claim 1, wherein: in step S1, the reference object is a camera.
3. The camera-based method for determining a trajectory of a person according to claim 2, wherein: in step S2, the three-dimensional coordinate system is subject to the right-hand rule.
4. The camera-based method for determining a trajectory of a person according to claim 3, wherein: in step S2, if the monitored area is a rectangular space, selecting a rectangular vertex as the origin of coordinates; and if the monitoring area is an irregular space body, selecting a camera as the origin of coordinates of the three-dimensional space.
5. The camera-based method for determining a trajectory of a person according to claim 4, wherein: in step S3, target persons in the monitored area are identified by using MaskRCNN algorithm.
6. The camera-based method for determining a trajectory of a person according to claim 5, wherein: the specific process of identifying the target person in the monitored area is as follows:
S31: establishing a database containing a large batch of face images;
S32: the camera calls a face recognition function module to compare the features of the shot image with the image features of the face image database, thereby recognizing the target person in the monitored area;
S33: identifying the image shot by the camera using the MaskRCNN algorithm to acquire the pixel coordinates P_uv of the target person.
7. The camera-based method for determining a trajectory of a person according to claim 6, wherein: in step S4, the conversion of the two-dimensional coordinates into three-dimensional space coordinates refers to the conversion of pixel coordinates into world coordinates, and the conversion formula is as follows:

P_uv = K·T·P_w

wherein P_uv is the pixel coordinates of the target, P_w is the actual (world) coordinates of the target, K is the intrinsic parameter matrix, related to the camera and lens, and T is the extrinsic parameter matrix, i.e. the transformation from the camera coordinate system to the world coordinate system.
8. The camera-based method for determining a trajectory of a person according to claim 7, wherein: the conversion formula is written in matrix form as follows:

s·[u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] × [[r_11, r_12, r_13, t_1], [r_21, r_22, r_23, t_2], [r_31, r_32, r_33, t_3]] × [x, y, z, 1]^T

wherein the first matrix on the right side is the camera intrinsic parameter matrix, corresponding to the parameter K; the second matrix is the camera extrinsic parameter matrix, corresponding to the parameter T; and s is an equation coefficient;

f_x and f_y are the focal length parameters of the camera and (u_0, v_0) is the principal point; r_ij are the rotation components of the camera coordinate system relative to the world coordinate system; t_i are the displacement offsets in the coordinate system conversion; u and v are pixel coordinate system coordinates; s is the pixel coordinate coefficient; and x, y and z are world coordinate system coordinates.
9. A system for determining a person track based on a camera is characterized in that: determining the trajectory of the target person using the method of any one of claims 1 to 8, comprising:
the deployment module is used for selecting a deployment position to deploy the camera and firmly fixing the camera, and selecting a static object in the monitoring area as a reference object;
the coordinate system selection module is used for establishing a three-dimensional coordinate system of the monitoring area and then selecting one point in the monitoring area as a coordinate origin of the three-dimensional coordinate system;
the target person image acquisition module is used for monitoring a monitorable area in real time through the camera, identifying target persons in the monitoring area and acquiring images containing the target persons at all times after the target persons are detected;
the coordinate track processing module is used for calculating the angle and the distance of the target person relative to the reference object after the target person is identified, converting the two-dimensional coordinates into three-dimensional space coordinates, determining the position of the target person and finally drawing a person track according to the time sequence;
the control module is used for sending instructions to each module to complete related actions;
the deployment module, the coordinate system selection module, the target person image acquisition module and the coordinate track processing module are all electrically connected with the control module.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011118416.5A | 2020-10-19 | 2020-10-19 | Method and system for determining personnel track based on camera |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN112307912A (en) | 2021-02-02 |
Family
ID=74327864
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011118416.5A | CN112307912A (pending) | 2020-10-19 | 2020-10-19 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN112307912A (en) |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113240707A (*) | 2021-04-16 | 2021-08-10 | | Method and device for tracking personnel moving path and terminal equipment |
| CN113688194A (*) | 2021-07-13 | 2021-11-23 | | Personnel movement track monitoring method and system and storage equipment |
| CN115187916A (*) | 2022-09-13 | 2022-10-14 | | Method, device, equipment and medium for preventing and controlling epidemic situation in building based on space-time correlation |
| CN117058743A (*) | 2023-10-12 | 2023-11-14 | | Data compatible transmission processing method and system based on DCIM platform |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109190508A (*) | 2018-08-13 | 2019-01-11 | | A kind of multi-cam data fusion method based on space coordinates |
| CN111080679A (*) | 2020-01-02 | 2020-04-28 | | Method for dynamically tracking and positioning indoor personnel in large-scale place |
| CN111754552A (*) | 2020-06-29 | 2020-10-09 | | Multi-camera cooperative target tracking method based on deep learning |
- 2020-10-19: CN application CN202011118416.5A filed (publication CN112307912A), status pending
Non-Patent Citations (2)
Title |
---|
WHITE_LEARNER: "Camera Calibration (2): Conversion Between Image Coordinates and World Coordinates", 《HTTPS://BLOG.CSDN.NET/KALENEE/ARTICLE/DETAILS/99207102》, 12 October 2019 (2019-10-12), pages 1 * |
Organizing Committee of the College Student Embedded System Invitational Competition: "Selected Outstanding Works from the College Student Embedded System Invitational Competition", Shanghai: Shanghai Jiao Tong University Press, pages: 194 * |
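The non-patent citation above covers the image-to-world coordinate conversion that underlies camera-based personnel localization. The following is a minimal sketch, not the patent's own implementation: it assumes a calibrated pinhole camera and a person standing on the ground plane Z = 0, so the projection reduces to an invertible 3x3 homography. All calibration values (K, R, t) are illustrative placeholders.

```python
import numpy as np

# Hypothetical calibration values for illustration only; a real system
# obtains K, R, t from a calibration procedure (e.g. a checkerboard).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsics
R = np.eye(3)                           # rotation, world -> camera
t = np.array([[0.0], [0.0], [5.0]])     # camera 5 m above the ground plane

def pixel_to_world_on_ground(u, v):
    """Back-project pixel (u, v) onto the world ground plane Z = 0.

    For points with Z = 0, the projection K [R | t] reduces to the
    homography H = K [r1 r2 t]; inverting H recovers (X, Y) on the plane.
    """
    H = K @ np.hstack([R[:, 0:1], R[:, 1:2], t])   # 3x3 plane homography
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return w[0] / w[2], w[1] / w[2]                # (X, Y) in world units

# The principal point maps to the point directly below the camera.
X, Y = pixel_to_world_on_ground(320.0, 240.0)      # -> (0.0, 0.0)
```

Repeating this conversion per camera and per video frame, then associating positions by person identity, yields the world-coordinate trajectory that multi-camera tracking systems like those cited here work with.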
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113240707A (en) * | 2021-04-16 | 2021-08-10 | 国网河北省电力有限公司沧州供电分公司 | Method and device for tracking personnel moving path and terminal equipment |
CN113688194A (en) * | 2021-07-13 | 2021-11-23 | 金钱猫科技股份有限公司 | Personnel movement track monitoring method and system and storage equipment |
CN115187916A (en) * | 2022-09-13 | 2022-10-14 | 太极计算机股份有限公司 | Method, device, equipment and medium for epidemic prevention and control in buildings based on spatio-temporal correlation |
CN117058743A (en) * | 2023-10-12 | 2023-11-14 | 广州尚航信息科技股份有限公司 | Data compatible transmission processing method and system based on DCIM platform |
CN117058743B (en) * | 2023-10-12 | 2024-02-20 | 广州尚航信息科技股份有限公司 | Data compatible transmission processing method and system based on DCIM platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112307912A (en) | Method and system for determining personnel track based on camera | |
US8600192B2 (en) | System and method for finding correspondence between cameras in a three-dimensional vision system | |
CN110142785A (en) | A patrol robot visual servoing method based on target detection | |
CN106485753B (en) | Method and apparatus for camera calibration of driverless vehicles | |
CN111345029B (en) | Target tracking method and device, movable platform and storage medium | |
US11488322B2 (en) | System and method for training a model in a plurality of non-perspective cameras and determining 3D pose of an object at runtime with the same | |
CN109887040A (en) | Active perception method and system for moving targets in video surveillance | |
CN110189322B (en) | Flatness detection method, device, equipment, storage medium and system | |
JP2011198349A (en) | Method and apparatus for processing information | |
JP6352208B2 (en) | 3D model processing apparatus and camera calibration system | |
JP5672112B2 (en) | Stereo image calibration method, stereo image calibration apparatus, and computer program for stereo image calibration | |
KR102199094B1 (en) | Method and Apparatus for Learning Region of Interest for Detecting Object of Interest | |
CN113313097B (en) | Face recognition method, terminal and computer readable storage medium | |
CN112950667A (en) | Video annotation method, device, equipment and computer readable storage medium | |
CN110910459A (en) | Camera device calibration method and device and calibration equipment | |
CN113129383A (en) | Hand-eye calibration method and device, communication equipment and storage medium | |
CN113329179A (en) | Shooting alignment method, device, equipment and storage medium | |
CN109308714A (en) | Camera and lidar information registration method based on classification penalty | |
CN112470189B (en) | Occlusion cancellation for light field systems | |
KR20120108256A (en) | Robot fish localization system using artificial markers and method of the same | |
JP2010217984A (en) | Image detector and image detection method | |
CN110909617B (en) | Living body face detection method and device based on binocular vision | |
CN109389367B (en) | Personnel attendance checking method, device and storage medium | |
JP2018201146A (en) | Image correction apparatus, image correction method, attention point recognition apparatus, attention point recognition method, and abnormality detection system | |
CN112995641B (en) | 3D module imaging device and method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||