WO2021196294A1 - Cross-video personnel positioning and tracking method, system and device - Google Patents
- Publication number
- WO2021196294A1 (application PCT/CN2020/085081, CN2020085081W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- geographic
- person
- tracked
- surveillance
- Prior art date
Classifications
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06F16/29—Geographical information databases
- G06N3/045—Combinations of networks
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016—Video; Image sequence
- G06T2207/20081—Training; Learning
- G06T2207/30196—Human being; Person
- G06T2207/30241—Trajectory
Definitions
- This application belongs to the technical field of pedestrian positioning and tracking, and in particular relates to a method, system and electronic equipment for cross-video personnel positioning and tracking.
- Existing video personnel positioning methods mainly adopt environmental-measurement positioning methods and camera-based positioning methods.
- The environmental-measurement positioning method refers to outdoor personnel positioning or ranging.
- For example, laser ranging or ultrasonic ranging can be used, with centimeter-level accuracy.
- Although this type of method has high accuracy and makes the surveyed person's geographic location information available, it is difficult to perform continuous, large-scale, multi-person measurements.
- Camera-based positioning methods are divided into binocular positioning and monocular positioning.
- Binocular positioning is mainly used in the field of precise robot positioning.
- The SLAM (simultaneous localization and mapping) method, which constructs three-dimensional space while performing real-time positioning and map building, is not suitable for public-area camera surveillance systems.
- Video-based monocular and binocular ranging methods suffer from problems such as the heavy workload of building local databases, the large amount of computation needed to extract feature information, and the severe impact of external factors on that feature information, all of which still restrict the accuracy and usability of visual positioning.
- The geographic information construction method determines the geographic location information of an image mainly from the image content.
- Image-based geographic location recognition must first extract visual features from the image in order to compare the similarity between different images.
- In many scenarios, however, such prior image data cannot be obtained in advance.
- Since the camera-based positioning method does not itself belong to the field of measurement, an image geographic information construction method must be introduced to obtain personnel geographic information.
- For tracking the same person, the prevailing approach introduces SIFT parameters: as many details of the person to be tracked as possible are stored in a database, and when the same person reappears, he or she is re-identified by matching against the database.
- A common cross-video tracking method is template matching: a given template is used to search the image area to be matched, and the matching result is obtained from the calculated matching degree.
- This kind of person matching method requires a template to be given in advance and imposes strict requirements on searching the area to be matched. Because of the matching and searching involved, its computational efficiency on continuous video is low and time-consuming, so it cannot be fully applied to a multi-video system.
- Existing person (pedestrian) tracking systems mainly adopt the approach of first finding and then locating the person.
- To track, they introduce ever more person characteristic parameters.
- Because personnel location information cannot be obtained, it is difficult for existing personnel tracking systems to obtain a continuous space-time trajectory of a person.
- the present application provides a cross-video personnel location tracking method, system, and electronic device, which are intended to solve at least one of the above-mentioned technical problems in the prior art to a certain extent.
- a cross-video personnel location tracking method including the following steps:
- Step a: Construct an object geographic coordinate database, perform reference point topology matching and video geographic registration on the surveillance video according to object recognition, and determine the geographic coordinates of the surveillance video pixels;
- Step b: Perform person detection and position calculation on the surveillance video to obtain the geographic location of the person to be tracked;
- Step c: Combine the geographic locations of the person to be tracked detected by multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multiple surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
- the technical solution adopted in the embodiment of the present application further includes: in the step a, the reference point topology matching and the video geographic registration of the surveillance video according to the object recognition specifically include:
- the world geographic coordinate system conversion method is adopted to perform geographic registration on the control points with the same name in the ground area in the surveillance video, so that the surveillance video has geographic location information.
- the technical solution adopted by the embodiment of the present application further includes: in the step a, the topological matching of reference points and the video geographic registration of the surveillance video according to the object recognition further include:
- a certain frame of a two-dimensional image in the pre-processed video image is intercepted, and edge detection and watershed segmentation methods are used to perform edge extraction on the two-dimensional image to obtain a ground area with GIS information in the two-dimensional image.
- the technical solution adopted in the embodiment of the present application further includes: in the step b, the calculation of the person detection position of the surveillance video is specifically:
- the frame difference method is used to detect the moving objects in the surveillance video, and the head detector is used to locate the position of the person to be tracked, and the head information of the person to be tracked is obtained.
- The technical solution adopted in the embodiment of the application further includes: the human head detector adopts a person detection method based on a convolutional neural network (CNN); the convolutional neural network includes an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer, in which multiple convolutional and pooling layers are combined to process the input data and the fully connected layer maps the result to the output target.
- the technical solution adopted in the embodiment of the present application further includes: in the step b, the obtaining the geographic location of the personnel further includes:
- the movement detection of the person to be tracked is performed based on the head information, and the pixel points of the feet of the person to be tracked are obtained, and the pixels of the feet are the geographic location information of the person to be tracked.
- the technical solution adopted in the embodiment of the present application further includes: in the step b, the obtaining the geographic location of the personnel further includes:
- the geographic location information of the person to be tracked is calibrated by suppressing the camera's own error.
- The technical solution adopted in the embodiment of the present application further includes: in step c, the use of the maximum likelihood estimation method to perform cross-video person re-identification analysis on multi-channel surveillance video further includes:
- triggering geographic area overlap determination; the geographic area overlap determination is specifically:
- locating the geographic information of the shooting scenes of the multi-channel surveillance videos, and dividing the surveillance area of each camera device according to the overlapping geographic location areas; the geographic information space coordinates of the person to be tracked in consecutive frames are connected to obtain the continuous trajectory of the person to be tracked;
- when the geographic information space coordinate of the person to be tracked exceeds the monitoring area of the current camera and moves into the monitoring area of the next camera, the next camera is triggered to continue tracking the person's trajectory.
- a cross-video personnel location tracking system including:
- Video geo-registration module used to build an object geographic coordinate database, perform reference point topology matching and video geo-registration on surveillance videos according to object recognition, and determine the geographic coordinates of the surveillance video pixels;
- Cross-video personnel positioning and tracking module used to perform personnel detection position calculation on the surveillance video and obtain the geographic location of the personnel to be tracked;
- Multi-video trajectory tracking module used to combine the geographic locations of the persons to be tracked detected by multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video re-identification and analysis of persons on multiple surveillance videos to obtain the continuous spatio-temporal trajectories of the persons to be tracked.
- an electronic device including:
- At least one processor; and
- a memory communicatively connected with the at least one processor; wherein,
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the following operations of the cross-video person location tracking method described above:
- Step a: Construct an object geographic coordinate database, perform reference point topology matching and video geographic registration on the surveillance video according to object recognition, and determine the geographic coordinates of the surveillance video pixels;
- Step b: Perform person detection and position calculation on the surveillance video to obtain the geographic location of the person to be tracked;
- Step c: Combine the geographic locations of the person to be tracked detected by multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multiple surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
- The beneficial effects produced by the embodiments of the present application are: the cross-video personnel location tracking method, system and electronic device of the embodiments obtain the location information of reference objects through object recognition in order to perform geographic registration of the surveillance video.
- The detection and acquisition of a person's geographic location information and space-time movement trajectory are computationally simple and carry geographic location information, giving good application value in multi-channel video system scenarios.
- This application introduces personnel geographic location information based on video geographic calibration and performs maximum likelihood estimation during cross-video person re-identification, reducing the difficulty of the visual person re-identification trajectory tracking algorithm and the complexity of the system, so it has good application value in multi-channel cross-video system scenarios.
- FIG. 1 is a flowchart of a method for cross-video personnel location tracking according to an embodiment of the present application
- FIG. 2 is a schematic diagram of a surveillance video geographic registration algorithm according to an embodiment of the present application
- Figure 3(a) is the video image before preprocessing
- Figure 3(b) is the video image after preprocessing
- Figure 4 is a schematic diagram of a camera imaging model
- Figures 5(a) and (b) are schematic diagrams of the spatial relationship between pixel coordinates and world geographic coordinates.
- Figure 5(a) is pixel coordinates
- Figure 5(b) is world geographic coordinates;
- Fig. 6 is a schematic diagram of a person detection algorithm according to an embodiment of the present application.
- FIG. 7 is a flowchart of a frame difference method according to an embodiment of the present application.
- Figure 8 is an example diagram of maximum likelihood estimation
- FIG. 9 is a flowchart of a cross-video person tracking algorithm based on geographic area overlap determination according to an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of a cross-video personnel positioning and tracking system according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of the hardware device structure of the cross-video personnel location tracking method provided by an embodiment of the present application.
- FIG. 1 is a flowchart of a cross-video personnel location tracking method according to an embodiment of the present application.
- the cross-video personnel location tracking method of the embodiment of the present application includes the following steps:
- Step 100 Obtain surveillance video of the person to be tracked
- Step 200 Construct a database of object geographic coordinates, perform reference point topology matching and video geographic registration on the surveillance video according to the object recognition, and determine the geographic coordinates of the surveillance video pixels;
- The object geographic coordinate database includes third-party geographic information databases such as Baidu, or existing BIM file data (construction-project data including building appearance and geographic location).
- The standard WGS84 coordinate system is used for reference point calibration; the BIM (Building Information Modeling) information of each identified object corresponds to coordinate points of the WGS84 coordinate system in the public area and serves as a reference point to improve the accuracy of tracked-person spatial calculations.
- the object geographic database is used for the correspondence between the world coordinate system and the image coordinate system.
- Existing object geographic information and images are used to identify objects, the pixel points in the surveillance video are registered with the actual coordinates, and the spatial position coordinates of the surveillance video are determined.
- FIG. 2 is a schematic diagram of a surveillance video geo-registration algorithm according to an embodiment of the present application.
- the surveillance video geographic registration algorithm of the embodiment of the application includes:
- Step 210 Perform object recognition and classification on the surveillance video by using an object recognition algorithm to obtain a reference point in the surveillance video. At the same time, perform image preprocessing on the surveillance video to obtain a fish-eye-calibrated video image;
- The object recognition algorithm includes methods such as R-CNN and YOLO; through object recognition and classification of the surveillance video, a reference point in the surveillance video is obtained, and the reference point is fuzzy-matched with objects in the object geographic database.
- This application preprocesses the surveillance video with the checkerboard correction method: the internal parameters and correction coefficients of the fisheye lens are calculated, then the surveillance video is corrected and trimmed, removing part of the edges, to obtain the fisheye-calibrated video image.
- The preprocessed video images can reduce errors in world coordinate conversion caused by nonlinear distortion, as shown in Figures 3(a) and (b), where Figure 3(a) is the video image before preprocessing and Figure 3(b) is the video image after preprocessing.
- Nonlinear distortion is mainly geometric distortion, which causes an offset (δu, δv) between the actual pixel coordinates and the ideal pixel coordinates.
- The first term of δu and δv is affected by the camera components, while the second and third terms are caused by inaccuracies in the camera's original imaging.
- s1 and s2 are nonlinear distortion parameters; by calculating the values of the nonlinear distortion parameters, the image distortion is corrected.
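As a hedged illustration of such a distortion model, the sketch below applies a generic radial plus thin-prism offset (δu, δv) parameterized by k1, s1 and s2, and inverts it by fixed-point iteration. The specific model terms and function names are assumptions for illustration, not the patent's exact formula.

```python
import numpy as np

def distort(pts, k1=0.0, s1=0.0, s2=0.0):
    # pts: (N, 2) normalized image coordinates relative to the principal point.
    # Offset = radial term (k1) + thin-prism terms (s1, s2) -- an assumed model.
    u, v = pts[:, 0], pts[:, 1]
    r2 = u**2 + v**2
    du = u * k1 * r2 + s1 * r2
    dv = v * k1 * r2 + s2 * r2
    return np.stack([u + du, v + dv], axis=1)

def undistort(pts, k1=0.0, s1=0.0, s2=0.0, iters=10):
    # Fixed-point iteration: refine x until distort(x) reproduces the observed pts.
    x = pts.copy()
    for _ in range(iters):
        x = x + (pts - distort(x, k1, s1, s2))
    return x
```

For small distortion coefficients the iteration converges in a handful of steps, so a full analytic inverse is unnecessary.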
- Step 220 Geographically register the identified reference point with the object in the object geographic database to obtain the geographic location information of the control point with the same name in the surveillance video;
- Step 230: Intercept one frame of two-dimensional image from the pre-processed video image, and use edge detection and watershed segmentation methods to perform edge extraction on the two-dimensional image to obtain the ground area with GIS information in the two-dimensional image;
- Step 230 exploits the special spatial structure of surveillance video: the lower part of the frame is the ground, and vertical objects occlude the ground. Therefore, this application uses edge detection technology and the watershed algorithm to separate the horizontal and vertical structures of the image.
- Step 240 Use the method of world geographic coordinate system conversion to perform geographic registration on the control points with the same name in the ground area in the surveillance video, so that the surveillance video has geographic location information;
- The offset matrix is calculated according to the world geographic coordinate system conversion, and each pixel in the ground area of the surveillance video is matched with geographic coordinates through methods such as image stretching, filling, and cropping, to obtain a geographic-coordinate information plane in the surveillance video. By comparison against the object geographic database, the actual geographic location of each identified object and the relative position of the observed object are obtained. Since matrix calculation generally uses four points to complete the conversion between the pixel coordinates and world coordinates of a plane, the result calculated by the world coordinate system conversion matrix is an estimated value rather than an exact one. To control the error of this estimate, this application performs a constrained calculation on the image when more control points with the same name are available.
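The four-point conversion between the pixel coordinates and world coordinates of a plane can be sketched as a planar homography estimated with the direct linear transform; with more than four control points with the same name, the same least-squares system constrains the error. A minimal sketch, with illustrative function names:

```python
import numpy as np

def homography_from_points(px, geo):
    """Solve the 3x3 projective transform mapping 4+ pixel points to planar
    geographic coordinates via the direct linear transform (DLT)."""
    A = []
    for (x, y), (X, Y) in zip(px, geo):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pixel_to_geo(H, pt):
    # Apply the homography in homogeneous coordinates and normalize.
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

With exactly four correspondences the solution is exact; extra control points turn the SVD into a least-squares fit, which is the error-constraining effect the text describes.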
- The coordinate transformation between the world coordinate system and the camera coordinate system is P_c = R·P_w + t, where:
- R is a 3×3 rotation transformation matrix;
- t is a 3×1 translation transformation matrix;
- R and t respectively represent the relative attitude and position between the world coordinate system and the camera coordinate system.
- The coordinate transformation between the camera coordinate system and the image coordinate system projects a three-dimensional point P(X, Y, Z) in the camera coordinate system onto the imaging plane to obtain the corresponding two-dimensional plane point p(x, y).
- The relationship between x, y and X, Y can be expressed as x = f·X/Z and y = f·Y/Z,
- where f is the focal length of the camera.
- The vertical distance of the recognized object or pixel area from the ground can then be computed;
- the height of the identified object can be determined from the Y value,
- so the ground area can be identified and its ground coordinates determined.
- Figures 5(a) and (b) are schematic diagrams of the spatial relationship between pixel coordinates and world geographic coordinates.
- Figure 5(a) is the pixel coordinates
- Figure 5(b) is the world geographic coordinates.
- This application proposes several error-suppression methods for calibrating the measured pixel position of a person. After an object is recognized, the center point of the bottom edge of the object's recognition frame is used as the object's geographic location and is taken as the reference point. First, the relationship between the two-dimensional plane point (x, y) in the image and its corresponding pixel point (u, v) is expressed by u = x/d_x + u_0 and v = y/d_y + v_0:
- d_x and d_y represent the physical size of a pixel along the u-axis and v-axis directions, and (u_0, v_0) are the coordinates of the camera's principal point in the pixel coordinate system.
- f_u and f_v respectively represent the focal length expressed in units of pixel width and pixel height.
- The parameters of the resulting matrix are called the camera's intrinsic parameters; they are affected only by the camera's internal structure and imaging characteristics.
- The parameters of the matrices R and t are called the camera's extrinsic parameters.
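Putting the extrinsic transform (R, t) and the intrinsic parameters together, a world point can be projected to pixel coordinates. This sketch assumes an ideal pinhole camera with no distortion; the function name and parameter choices are illustrative:

```python
import numpy as np

def project(P_world, R, t, fu, fv, u0, v0):
    """Project a 3-D world point to pixel coordinates: first the extrinsic
    transform P_c = R @ P_w + t, then the pinhole projection x = f X / Z."""
    K = np.array([[fu, 0.0, u0],
                  [0.0, fv, v0],
                  [0.0, 0.0, 1.0]])          # intrinsic matrix
    Pc = R @ np.asarray(P_world, dtype=float) + t   # camera-frame coordinates
    uvw = K @ Pc                              # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]                   # normalize by depth
```

Here f_u and f_v are the focal length in pixel-width and pixel-height units, and (u_0, v_0) is the principal point, matching the parameters defined above.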
- Step 300 Perform personnel detection location calculation on the surveillance video, and obtain the geographic location of the person to be tracked;
- FIG. 6 is a schematic diagram of a person detection algorithm according to an embodiment of the present application.
- the personnel detection algorithm in this embodiment of the application includes:
- Step 310 Detect the head of the person entering the surveillance video, and obtain the head information of the person to be tracked;
- human head detection is a method for quickly identifying a human head model, which is suitable for multi-channel surveillance videos.
- this application uses the frame difference method to detect moving objects in the surveillance video, and combines the human head detector to locate the human position.
- the head detector adopts a person detection method based on the convolutional neural network CNN.
- The convolutional neural network is composed of an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer; multiple convolutional and pooling layers are combined to process the input data, and the fully connected layer realizes the mapping to the person output.
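As an illustrative toy version of such a network (not the patent's actual detector; layer shapes and weights here are arbitrary assumptions), a forward pass through convolution, pooling, and a fully connected output can be sketched as:

```python
import numpy as np

def conv2d(img, kernel):
    # Valid 2-D cross-correlation followed by ReLU activation.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def max_pool(x, s=2):
    # Non-overlapping s-by-s max pooling (trailing rows/cols are cropped).
    h, w = x.shape[0] - x.shape[0] % s, x.shape[1] - x.shape[1] % s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def tiny_head_score(img, k1, k2, w_fc):
    # input -> conv+pool -> conv+pool -> fully connected -> sigmoid score
    x = max_pool(conv2d(img, k1))
    x = max_pool(conv2d(x, k2))
    return float(1.0 / (1.0 + np.exp(-(x.ravel() @ w_fc))))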
- The flow of the frame difference method is shown in Figure 7. Specifically: denote the images of the (n+1)-th, n-th, and (n-1)-th frames in the video sequence as fn+1, fn, and fn-1; the gray values of the corresponding pixels of the three frames are denoted fn+1(x,y), fn(x,y), and fn-1(x,y), from which the difference images Dn+1 and Dn are obtained.
- An AND operation is performed on Dn+1 and Dn, followed by threshold processing and connectivity analysis, and finally the moving objects are detected.
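The three-frame differencing just described can be sketched in a few lines; the threshold value and the AND-combination shown here are illustrative choices:

```python
import numpy as np

def three_frame_difference(f_prev, f_cur, f_next, thresh=25):
    """Three-frame differencing: compute the two absolute difference images,
    AND their thresholded masks, yielding a binary motion mask for f_cur."""
    d1 = np.abs(f_cur.astype(int) - f_prev.astype(int))   # D_n
    d2 = np.abs(f_next.astype(int) - f_cur.astype(int))   # D_{n+1}
    return (d1 > thresh) & (d2 > thresh)
```

In a full pipeline the mask would then go through connectivity analysis (e.g. connected-component labeling) to extract the moving regions, as the text describes.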
- Step 320: Perform movement detection (person position calculation) of the person to be tracked based on the head detection result to obtain the person's foot pixel points, which give the geographic location information of the person to be tracked;
- In step 320, since a person is a moving object, and by the characteristics of the video the bottom area of a moving object is where the feet stand, this application combines human head detection with movement detection to quickly find the under-foot pixel points corresponding to the movement area of the person to be tracked. Compared with methods such as general person-posture detection or SIFT feature tracking, this approach detects faster, is more robust in complex environments, and also performs well in person-recognition accuracy.
- Step 330: Calibrate the geographic location information of the person to be tracked;
- In step 330, the present application calibrates the geographic location information of the person to be tracked by suppressing the camera's own error, thereby reducing uncertain errors caused by blurring when the person moves and improving positioning accuracy.
- Step 400 Combining the geographic locations of the persons to be tracked detected by multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video re-identification and analysis of the persons to be tracked on multiple surveillance videos to obtain the continuous spatio-temporal trajectory of the persons to be tracked;
- In step 400, when a person moves across videos, although the geographic location information of the person to be tracked is available, it cannot be directly determined whether detections in different videos are the same person.
- This application applies maximum likelihood estimation to the movement trajectories of the person to be tracked in multiple videos, judging through probability calculation whether they are the same person. Specifically: given a probability distribution D, assume its probability density function (continuous distribution) or probability mass function (discrete distribution) is f_D with a distribution parameter θ; a sample of n values x1, x2, ..., xn can be drawn from D, and θ is then estimated by computing the sample's probability with f_D.
- Maximum likelihood estimation finds the most likely value of θ (that is, among all possible values of θ, the value that maximizes the "probability" of observing this sample). To realize the maximum likelihood estimation method mathematically, the likelihood is first defined as lik(θ) = f_D(x1, ..., xn | θ);
- the value of θ that maximizes this likelihood is the maximum likelihood estimate of θ.
- The maximum likelihood is evaluated and, after a threshold is set, it is judged whether two detections are the same person.
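As a minimal sketch of this idea, assume tracked positions scatter around a predicted location following a Gaussian (an assumption for illustration; `same_person` and its threshold are hypothetical names, not the patent's rule):

```python
import numpy as np

def gaussian_loglik(xs, mu, sigma):
    # Log-likelihood of the sample xs under a Gaussian N(mu, sigma^2).
    xs = np.asarray(xs, dtype=float)
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - (xs - mu)**2 / (2 * sigma**2)))

def mle_mean(xs):
    # For a Gaussian with known sigma, the sample mean maximizes the likelihood.
    return float(np.mean(xs))

def same_person(pred_pos, obs_pos, sigma=1.0, log_thresh=-5.0):
    # Hypothetical thresholded decision: accept the match if the observed
    # position is likely enough under the predicted distribution.
    return gaussian_loglik([obs_pos], pred_pos, sigma) > log_thresh
```

The threshold plays the role described in the text: once the likelihood of a candidate match falls below it, the detections are judged to be different persons.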
- Figure 8 is an example diagram of maximum likelihood estimation.
- Geographical area overlap determination is to locate the geographic information of the shooting scenes of multiple surveillance videos, and then divide the respective tracking surveillance areas according to the overlapping geographic locations.
- the continuous trajectory of the person to be tracked is obtained by connecting the geographic information space coordinates of the person to be tracked in consecutive frames.
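The overlap-based camera handoff can be sketched as follows, assuming each camera's surveillance area is simplified to an axis-aligned geographic rectangle (a simplification for illustration; real registered areas would be polygons, and the class and function names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Camera:
    cam_id: str
    min_x: float  # geographic bounds of this camera's surveillance area
    min_y: float
    max_x: float
    max_y: float

    def covers(self, x, y):
        return self.min_x <= x <= self.max_x and self.min_y <= y <= self.max_y

def active_camera(cameras, x, y, current=None):
    """Keep the current camera while it still covers the tracked position;
    otherwise hand off to the first camera whose area covers the position."""
    if current is not None and current.covers(x, y):
        return current
    for cam in cameras:
        if cam.covers(x, y):
            return cam
    return None
```

Because adjacent surveillance areas overlap, the tracked person is still covered by the current camera while entering the next one, so the handoff is triggered only once the person leaves the current camera's area.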
- FIG. 10 is a schematic structural diagram of a cross-video personnel positioning and tracking system according to an embodiment of the present application.
- the cross-video personnel location tracking system of the embodiment of the present application includes:
- Video geographic registration module: used to build an object geographic coordinate database, perform reference-point topology matching and video geographic registration on surveillance video based on object recognition, and determine the geographic coordinates of surveillance video pixels;
- Cross-video person positioning and tracking module: used to perform person detection position calculation on the surveillance video and obtain the geographic location of the person to be tracked;
- Multi-video trajectory tracking module: used to combine the geographic locations of the person to be tracked detected in multiple nearby videos, and to apply the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multi-channel surveillance videos, obtaining the continuous spatio-temporal trajectory of the person to be tracked.
- FIG. 11 is a schematic diagram of the hardware device structure of the cross-video personnel location tracking method provided by an embodiment of the present application.
- The device includes one or more processors and a memory; one processor is taken as an example. The device may also include an input system and an output system.
- the processor, the memory, the input system, and the output system may be connected through a bus or in other ways.
- the connection through a bus is taken as an example.
- the memory can be used to store non-transitory software programs, non-transitory computer executable programs, and modules.
- the processor executes various functional applications and data processing of the electronic device by running non-transitory software programs, instructions, and modules stored in the memory, that is, realizing the processing methods of the foregoing method embodiments.
- the memory may include a program storage area and a data storage area, where the program storage area can store an operating system and an application program required by at least one function; the data storage area can store data and the like.
- the memory may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid state storage devices.
- the memory may optionally include a memory remotely provided with respect to the processor, and these remote memories may be connected to the processing system through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- The input system can receive input numeric or character information and generate signal input.
- the output system may include display devices such as a display screen.
- The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following operations of any of the foregoing method embodiments:
- Step a: construct an object geographic coordinate database, perform reference-point topology matching and video geographic registration on the surveillance video according to object recognition, and determine the geographic coordinates of the surveillance video pixels;
- Step b: perform person detection position calculation on the surveillance video to obtain the geographic location of the person to be tracked;
- Step c: combine the geographic locations of the person to be tracked detected in multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multiple surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
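Steps a to c compose into a simple pipeline. The sketch below shows only that composition; every callable is an illustrative placeholder supplied by the caller, not an API defined in this application:

```python
def cross_video_tracking(videos, georegister, detect_and_locate, reidentify):
    """Compose steps a-c: geo-register each surveillance video, compute
    per-video person detections with geographic locations, then merge
    them across videos into one continuous spatio-temporal trajectory."""
    registered = [georegister(v) for v in videos]                   # step a
    per_video_tracks = [detect_and_locate(v) for v in registered]   # step b
    return reidentify(per_video_tracks)                             # step c
```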
- the embodiments of the present application provide a non-transitory (non-volatile) computer storage medium.
- the computer storage medium stores computer-executable instructions, and the computer-executable instructions can perform the following operations:
- Step a: build an object geographic coordinate database, perform reference-point topology matching and video geographic registration on the surveillance video according to object recognition, and determine the geographic coordinates of the surveillance video pixels;
- Step b: perform person detection position calculation on the surveillance video to obtain the geographic location of the person to be tracked;
- Step c: combine the geographic locations of the person to be tracked detected in multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multiple surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
- An embodiment of the present application provides a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions that, when executed by a computer, cause the computer to perform the following operations:
- Step a: construct an object geographic coordinate database, perform reference-point topology matching and video geographic registration on the surveillance video according to object recognition, and determine the geographic coordinates of the surveillance video pixels;
- Step b: perform person detection position calculation on the surveillance video to obtain the geographic location of the person to be tracked;
- Step c: combine the geographic locations of the person to be tracked detected in multiple nearby videos, and use the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multiple surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
- The cross-video person positioning and tracking method, system, and electronic device of the embodiments of the present application obtain the location information of reference objects through object recognition to geographically register the surveillance video, and obtain each person's geographic location and spatio-temporal movement trajectory through person detection. The calculation is simple, and because the result carries geographic location information, it has good application value in multi-channel video scenarios. At the same time, by introducing person geographic location information based on video geographic calibration and performing maximum likelihood estimation during cross-video person re-identification, this application reduces the difficulty and system complexity of visual person re-identification and trajectory tracking algorithms, which further improves its application value.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Claims (10)
- A cross-video person positioning and tracking method, characterized by comprising the following steps: step a: constructing an object geographic coordinate database, performing reference-point topology matching and video geographic registration on surveillance video according to object recognition, and determining the geographic coordinates of the surveillance video pixels; step b: performing person detection position calculation on the surveillance video to obtain the geographic location of the person to be tracked; step c: combining the geographic locations of the person to be tracked detected in multiple nearby videos, and using a maximum likelihood estimation method to perform cross-video person re-identification analysis on the multi-channel surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
- The cross-video person positioning and tracking method according to claim 1, characterized in that, in step a, performing reference-point topology matching and video geographic registration on the surveillance video according to object recognition specifically comprises: using an object recognition algorithm to recognize and classify objects in the surveillance video to obtain reference points in the surveillance video; matching the reference points with the objects in the object geographic database to obtain the geographic location information of same-name control points in the surveillance video; and using a world geographic coordinate system conversion method to geographically register the same-name control points of the ground area in the surveillance video, so that the surveillance video carries geographic location information.
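A common way to realize the world geographic coordinate system conversion in this claim is a pixel-to-ground homography fitted from at least four matched same-name control points. The direct linear transform (DLT) sketch below is one possible implementation, not the method fixed by the application:

```python
import numpy as np

def fit_homography(pixel_pts, geo_pts):
    """Estimate the 3x3 homography H mapping image pixels (u, v) to
    planar geographic coordinates (x, y) from >= 4 matched same-name
    control points, via the direct linear transform."""
    rows = []
    for (u, v), (x, y) in zip(pixel_pts, geo_pts):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # H is the right singular vector of the smallest singular value
    # (the null space of the stacked constraint equations).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_geo(H, u, v):
    """Convert one pixel to geographic coordinates using H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

Once H is fitted from the control points, every ground pixel of the surveillance video can be assigned geographic coordinates by `pixel_to_geo`.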
- The cross-video person positioning and tracking method according to claim 2, characterized in that, in step a, performing reference-point topology matching and video geographic registration on the surveillance video according to object recognition further comprises: performing image preprocessing on the surveillance video to obtain a fisheye-calibrated video image; and intercepting a frame of two-dimensional image from the preprocessed video image and performing edge extraction on the two-dimensional image by edge detection and watershed segmentation, to obtain the ground area with GIS information in the two-dimensional image.
- The cross-video person positioning and tracking method according to any one of claims 1 to 3, characterized in that, in step b, performing person detection position calculation on the surveillance video is specifically: detecting moving objects in the surveillance video by a frame difference method, and locating the position of the person to be tracked in combination with a head detector to obtain head information of the person to be tracked.
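The frame difference method in this claim can be sketched as follows, with grey-scale frames as NumPy arrays; the threshold value and function names are illustrative assumptions:

```python
import numpy as np

def frame_difference_mask(prev, curr, threshold=25):
    """Frame difference method: mark as moving every pixel whose
    absolute grey-level change between frames exceeds the threshold."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def moving_bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the moving pixels,
    or None when no motion was detected."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

The bounding box of the motion mask is a candidate region that the head detector of claim 5 would then examine.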
- The cross-video person positioning and tracking method according to claim 4, characterized in that the head detector adopts a person detection method based on a convolutional neural network (CNN), the convolutional neural network comprising an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer, wherein multiple convolutional layers and pooling layers are stacked to process the input data, and the mapping to the output target is performed through the fully connected layer.
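The layer structure named in this claim (input, convolution, pooling, fully connected, output) can be shown with a toy forward pass; it is a minimal stand-in illustration, not the detector architecture actually used by the application:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation) of one channel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    h2, w2 = x.shape[0] // s, x.shape[1] // s
    return x[:h2 * s, :w2 * s].reshape(h2, s, w2, s).max(axis=(1, 3))

def head_score(patch, kernel, w, b):
    """conv -> ReLU -> pool -> fully connected -> sigmoid score that a
    patch contains a head (a toy stand-in for the claimed CNN)."""
    feat = np.maximum(conv2d(patch, kernel), 0.0)  # convolutional layer
    feat = max_pool(feat).ravel()                  # pooling layer
    z = float(feat @ w + b)                        # fully connected layer
    return 1.0 / (1.0 + np.exp(-z))                # output layer
```

A trained detector would learn the kernel and the fully connected weights; here they are free parameters of the sketch.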
- The cross-video person positioning and tracking method according to claim 4, characterized in that, in step b, obtaining the person's geographic location further comprises: performing movement detection on the person to be tracked based on the head information to obtain the pixel at the feet of the person to be tracked, the pixel at the feet being the geographic location information of the person to be tracked.
- The cross-video person positioning and tracking method according to claim 6, characterized in that, in step b, obtaining the person's geographic location further comprises: calibrating the geographic location information of the person to be tracked by a method of suppressing the camera itself.
- The cross-video person positioning and tracking method according to claim 7, characterized in that, in step c, using the maximum likelihood estimation method to perform cross-video person re-identification analysis on the multi-channel surveillance videos further comprises: when a shooting scene within the multi-channel surveillance videos moves, triggering a geographic-area overlap determination, the geographic-area overlap determination being specifically: locating the geographic information of the shooting scenes of the multi-channel surveillance videos, and dividing the surveillance area of each camera according to the overlapping geographic regions; and obtaining the continuous trajectory of the person to be tracked by connecting the geographic space coordinates of the person to be tracked in consecutive frames, wherein when the geographic space coordinates of the person to be tracked leave the surveillance area of the current camera and move into the surveillance area of the next camera, the next camera is triggered to perform person trajectory tracking.
- A cross-video person positioning and tracking system, characterized by comprising: a video geographic registration module, used to build an object geographic coordinate database, perform reference-point topology matching and video geographic registration on surveillance video according to object recognition, and determine the geographic coordinates of the surveillance video pixels; a cross-video person positioning and tracking module, used to perform person detection position calculation on the surveillance video to obtain the geographic location of the person to be tracked; and a multi-video trajectory tracking module, used to combine the geographic locations of the person to be tracked detected in multiple nearby videos and to use a maximum likelihood estimation method to perform cross-video person re-identification analysis on the multi-channel surveillance videos, obtaining the continuous spatio-temporal trajectory of the person to be tracked.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the following operations of the cross-video person positioning and tracking method according to any one of claims 1 to 8: step a: constructing an object geographic coordinate database, performing reference-point topology matching and video geographic registration on surveillance video according to object recognition, and determining the geographic coordinates of the surveillance video pixels; step b: performing person detection position calculation on the surveillance video to obtain the geographic location of the person to be tracked; step c: combining the geographic locations of the person to be tracked detected in multiple nearby videos, and using a maximum likelihood estimation method to perform cross-video person re-identification analysis on the multi-channel surveillance videos to obtain the continuous spatio-temporal trajectory of the person to be tracked.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010259428.3A CN111462200B (zh) | 2020-04-03 | 2020-04-03 | 一种跨视频行人定位追踪方法、系统及设备 |
CN202010259428.3 | 2020-04-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021196294A1 true WO2021196294A1 (zh) | 2021-10-07 |
Family
ID=71680274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/085081 WO2021196294A1 (zh) | 2020-04-03 | 2020-04-16 | 一种跨视频人员定位追踪方法、系统及设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111462200B (zh) |
WO (1) | WO2021196294A1 (zh) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070003A (zh) * | 2020-09-07 | 2020-12-11 | 深延科技(北京)有限公司 | 基于深度学习的人员追踪方法和系统 |
CN112184814B (zh) * | 2020-09-24 | 2022-09-02 | 天津锋物科技有限公司 | 定位方法和定位系统 |
CN112163537B (zh) * | 2020-09-30 | 2024-04-26 | 中国科学院深圳先进技术研究院 | 一种行人异常行为检测方法、系统、终端以及存储介质 |
WO2022067606A1 (zh) * | 2020-09-30 | 2022-04-07 | 中国科学院深圳先进技术研究院 | 一种行人异常行为检测方法、系统、终端以及存储介质 |
CN112766210A (zh) * | 2021-01-29 | 2021-05-07 | 苏州思萃融合基建技术研究所有限公司 | 建筑施工的安全监控方法、装置及存储介质 |
CN113190711A (zh) * | 2021-03-26 | 2021-07-30 | 南京财经大学 | 地理场景中视频动态对象轨迹时空检索方法及系统 |
CN113435329B (zh) * | 2021-06-25 | 2022-06-21 | 湖南大学 | 一种基于视频轨迹特征关联学习的无监督行人重识别方法 |
CN113627497B (zh) * | 2021-07-27 | 2024-03-12 | 武汉大学 | 一种基于时空约束的跨摄像头行人轨迹匹配方法 |
CN113837023A (zh) * | 2021-09-02 | 2021-12-24 | 北京新橙智慧科技发展有限公司 | 一种跨摄像头行人自动追踪方法 |
CN117237418B (zh) * | 2023-11-15 | 2024-01-23 | 成都航空职业技术学院 | 一种基于深度学习的运动目标检测方法和系统 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105472332A (zh) * | 2015-12-01 | 2016-04-06 | 杨春光 | 基于定位技术与视频技术的分析方法及其分析系统 |
CN105913037A (zh) * | 2016-04-26 | 2016-08-31 | 广东技术师范学院 | 基于人脸识别与射频识别的监控跟踪系统 |
CN107547865A (zh) * | 2017-07-06 | 2018-01-05 | 王连圭 | 跨区域人体视频目标跟踪智能监控方法 |
WO2018087545A1 (en) * | 2016-11-08 | 2018-05-17 | Staffordshire University | Object location technique |
CN110147471A (zh) * | 2019-04-04 | 2019-08-20 | 平安科技(深圳)有限公司 | 基于视频的轨迹跟踪方法、装置、计算机设备及存储介质 |
CN110375739A (zh) * | 2019-06-26 | 2019-10-25 | 中国科学院深圳先进技术研究院 | 一种移动端视觉融合定位方法、系统及电子设备 |
CN110414441A (zh) * | 2019-07-31 | 2019-11-05 | 浙江大学 | 一种行人行踪分析方法及系统 |
WO2020055928A1 (en) * | 2018-09-10 | 2020-03-19 | Mapbox, Inc. | Calibration for vision in navigation systems |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101548639B1 (ko) * | 2014-12-10 | 2015-09-01 | 한국건설기술연구원 | 감시 카메라 시스템의 객체 추적장치 및 그 방법 |
US10372970B2 (en) * | 2016-09-15 | 2019-08-06 | Qualcomm Incorporated | Automatic scene calibration method for video analytics |
CN107153824A (zh) * | 2017-05-22 | 2017-09-12 | 中国人民解放军国防科学技术大学 | 基于图聚类的跨视频行人重识别方法 |
WO2019145018A1 (en) * | 2018-01-23 | 2019-08-01 | Siemens Aktiengesellschaft | System, device and method for detecting abnormal traffic events in a geographical location |
CN109461132B (zh) * | 2018-10-31 | 2021-04-27 | 中国人民解放军国防科技大学 | 基于特征点几何拓扑关系的sar图像自动配准方法 |
CN110717414B (zh) * | 2019-09-24 | 2023-01-03 | 青岛海信网络科技股份有限公司 | 一种目标检测追踪方法、装置及设备 |
CN110765903A (zh) * | 2019-10-10 | 2020-02-07 | 浙江大华技术股份有限公司 | 行人重识别方法、装置及存储介质 |
CN110706259B (zh) * | 2019-10-12 | 2022-11-29 | 四川航天神坤科技有限公司 | 一种基于空间约束的可疑人员跨镜头追踪方法及装置 |
- 2020-04-03 CN CN202010259428.3A patent/CN111462200B/zh active Active
- 2020-04-16 WO PCT/CN2020/085081 patent/WO2021196294A1/zh active Application Filing
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114240997A (zh) * | 2021-11-16 | 2022-03-25 | 南京云牛智能科技有限公司 | 一种智慧楼宇在线跨摄像头多目标追踪方法 |
CN114240997B (zh) * | 2021-11-16 | 2023-07-28 | 南京云牛智能科技有限公司 | 一种智慧楼宇在线跨摄像头多目标追踪方法 |
CN114040168A (zh) * | 2021-11-16 | 2022-02-11 | 西安热工研究院有限公司 | 一种用于火力电厂的智慧型电力网络监控机构 |
CN114550041A (zh) * | 2022-02-18 | 2022-05-27 | 中国科学技术大学 | 一种多摄像头拍摄视频的多目标标注方法 |
CN114550041B (zh) * | 2022-02-18 | 2024-03-29 | 中国科学技术大学 | 一种多摄像头拍摄视频的多目标标注方法 |
CN115527162A (zh) * | 2022-05-18 | 2022-12-27 | 湖北大学 | 一种基于三维空间的多行人重识别方法、系统 |
CN115033960B (zh) * | 2022-06-09 | 2023-04-07 | 中国公路工程咨询集团有限公司 | Bim模型和gis系统的自动融合方法及装置 |
CN115033960A (zh) * | 2022-06-09 | 2022-09-09 | 中国公路工程咨询集团有限公司 | Bim模型和gis系统的自动融合方法及装置 |
CN114862973A (zh) * | 2022-07-11 | 2022-08-05 | 中铁电气化局集团有限公司 | 基于固定点位的空间定位方法、装置、设备及存储介质 |
CN115731287A (zh) * | 2022-09-07 | 2023-03-03 | 滁州学院 | 基于集合与拓扑空间的运动目标检索方法 |
CN115578756A (zh) * | 2022-11-08 | 2023-01-06 | 杭州昊恒科技有限公司 | 基于精准定位和视频联动的人员精细化管理方法及系统 |
CN115457449B (zh) * | 2022-11-11 | 2023-03-24 | 深圳市马博士网络科技有限公司 | 一种基于ai视频分析和监控安防的预警系统 |
CN115457449A (zh) * | 2022-11-11 | 2022-12-09 | 深圳市马博士网络科技有限公司 | 一种基于ai视频分析和监控安防的预警系统 |
CN115856980A (zh) * | 2022-11-21 | 2023-03-28 | 中铁科学技术开发有限公司 | 一种编组站作业人员监控方法和系统 |
CN115808170B (zh) * | 2023-02-09 | 2023-06-06 | 宝略科技(浙江)有限公司 | 一种融合蓝牙与视频分析的室内实时定位方法 |
CN115808170A (zh) * | 2023-02-09 | 2023-03-17 | 宝略科技(浙江)有限公司 | 一种融合蓝牙与视频分析的室内实时定位方法 |
CN115979250A (zh) * | 2023-03-20 | 2023-04-18 | 山东上水环境科技集团有限公司 | 基于uwb模块、语义地图与视觉信息的定位方法 |
CN116189116A (zh) * | 2023-04-24 | 2023-05-30 | 江西方兴科技股份有限公司 | 一种交通状态感知方法及系统 |
CN116189116B (zh) * | 2023-04-24 | 2024-02-23 | 江西方兴科技股份有限公司 | 一种交通状态感知方法及系统 |
CN116631596A (zh) * | 2023-07-24 | 2023-08-22 | 深圳市微能信息科技有限公司 | 一种放射人员工作时长的监控管理系统及方法 |
CN116631596B (zh) * | 2023-07-24 | 2024-01-02 | 深圳市微能信息科技有限公司 | 一种放射人员工作时长的监控管理系统及方法 |
CN116740878A (zh) * | 2023-08-15 | 2023-09-12 | 广东威恒输变电工程有限公司 | 一种多摄像头协同的全局区域双向绘制的定位预警方法 |
CN116740878B (zh) * | 2023-08-15 | 2023-12-26 | 广东威恒输变电工程有限公司 | 一种多摄像头协同的全局区域双向绘制的定位预警方法 |
CN117185064A (zh) * | 2023-08-18 | 2023-12-08 | 山东五棵松电气科技有限公司 | 一种智慧社区管理系统、方法、计算机设备及存储介质 |
CN117185064B (zh) * | 2023-08-18 | 2024-03-05 | 山东五棵松电气科技有限公司 | 一种智慧社区管理系统、方法、计算机设备及存储介质 |
CN117058331B (zh) * | 2023-10-13 | 2023-12-19 | 山东建筑大学 | 基于单个监控摄像机的室内人员三维轨迹重建方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN111462200A (zh) | 2020-07-28 |
CN111462200B (zh) | 2023-09-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20928887 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20928887 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.02.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20928887 Country of ref document: EP Kind code of ref document: A1 |