CN114973147A - Distributed monitoring camera positioning method and system based on laser radar mapping


Info

Publication number
CN114973147A
Authority
CN
China
Prior art keywords
points
point cloud
map
laser radar
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210706570.7A
Other languages
Chinese (zh)
Inventor
巢建树
明瑞成
王新文
赵伟杰
窦光义
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quanzhou Institute of Equipment Manufacturing
Original Assignee
Quanzhou Institute of Equipment Manufacturing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quanzhou Institute of Equipment Manufacturing filed Critical Quanzhou Institute of Equipment Manufacturing
Priority to CN202210706570.7A
Publication of CN114973147A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a distributed monitoring camera positioning method and system based on laser radar mapping. The method comprises the following steps: acquiring point cloud data of each area through the laser radar positioning device and dividing it into ground points and other segmented points; calculating the smoothness of the points based on the segmentation result, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types; optimizing the radar odometry by constructing constraint relations from the point cloud types and obtaining the pose transformation matrix of adjacent frames through two rounds of LM (Levenberg-Marquardt) optimization; constructing a global map; emitting light beams to three different positions through the laser radar positioning device to be positioned to obtain a local map, matching the local map against the global map, and determining the position and direction of the camera; and displaying the number, position and direction of each camera in the global map in real time. The technical scheme of the invention achieves fast and accurate positioning of the distributed cameras and their pictures.

Description

Distributed monitoring camera positioning method and system based on laser radar mapping
Technical Field
The invention relates to the technical field of sensor positioning, in particular to a distributed monitoring camera positioning method and system based on laser radar mapping.
Background
In existing distributed video monitoring, one camera can only capture the picture of a specific direction in a certain area. When an accident occurs, the pictures of the monitoring cameras must therefore be checked one by one, the area where the accident occurred cannot be located quickly, and the real-time performance of other intelligent security measures such as visual object recognition and target tracking is poor, so the requirement of handling events efficiently cannot be met.
Disclosure of Invention
The invention aims to solve this technical problem by providing a distributed monitoring camera positioning method and system based on laser radar mapping, so as to achieve fast and accurate positioning of the distributed cameras and their pictures.
In a first aspect, the present invention provides a method for positioning distributed surveillance cameras based on laser radar mapping, wherein a laser radar sensor is installed on each camera to form an integrated laser radar positioning device, and the method comprises the following steps:
step 1, point cloud data of each area is obtained through the laser radar positioning device and divided into ground points and other segmented points;
step 2, calculating the smoothness of the points based on the segmentation result, sorting the points by smoothness, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types;
step 3, optimizing the radar odometry: constraint relations are constructed from the point cloud types, and the pose transformation matrix is obtained through two rounds of LM (Levenberg-Marquardt) optimization, giving the pose transformation between adjacent frames;
step 4, selecting a group of similar feature sets to construct the corresponding global map feature point cloud using a graph-optimization-based method, and constructing the global map with a loop-closure detection method and GTSAM pose-graph optimization;
step 5, emitting light beams to three different positions through a laser radar positioning device to be positioned to obtain a local map, matching the local map with a global map, and determining the position and the direction of a camera according to a matching result;
and 6, displaying the serial number, the position and the direction of the camera in real time in the global map.
Further, the step 1 specifically comprises:
step 11, acquiring point cloud data of each area through a laser radar sensor in each laser radar positioning device;
step 12, projecting the point cloud data of each frame into a distance image;
step 13, evaluating the distance image column by column and dividing it into ground points and other points to be segmented;
and step 14, segmenting the distance image with an image-based segmentation method: the points in the image are clustered and points that cannot be clustered are removed, giving a segmentation label for each class of point set, where the segmentation labels cover the segmented points other than the ground points.
Further, the step 3 specifically includes: constructing constraint relations from the point cloud types, namely a point-to-plane constraint on the ground points and a point-to-line constraint on the segmented points, and solving them with the LM (Levenberg-Marquardt) method: LM optimization of the point-to-plane constraint on the ground points yields the change in the vertical dimensions, LM optimization of the point-to-line constraint on the segmented points yields the change in the horizontal dimensions, and the pose transformation matrix between the two frames is obtained from this two-step LM optimization.
Further, the step 5 specifically includes:
emitting light beams to three different positions through a laser radar positioning device to be positioned, receiving a large number of points returned from the outer surface of an object, and obtaining a local point cloud map;
matching the local point cloud map with the global point cloud map, and if the matching degree between the local point cloud map and the global point cloud map is greater than a threshold value, determining the matching relation between the local point cloud map and the global point cloud map;
and obtaining the position and the direction of a camera in the laser radar positioning device according to the matching relation.
In a second aspect, the present invention provides a distributed monitoring camera positioning system based on lidar mapping, wherein a lidar sensor is mounted on each camera to form an integrated lidar positioning device, and the system comprises:
the dividing point module is used for acquiring point cloud data of each area through the laser radar positioning device and dividing the point cloud data into ground points and other segmented points;
the feature extraction module is used for calculating the smoothness of the points based on the segmentation result, sorting the points by smoothness, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types;
the pose optimization module is used for optimizing the radar odometry: constraint relations are constructed from the point cloud types, and the pose transformation matrix is obtained through two rounds of LM (Levenberg-Marquardt) optimization, giving the pose transformation between adjacent frames;
the map building module is used for selecting a group of similar feature sets to construct the corresponding global map feature point cloud using a graph-optimization-based method, and for constructing the global map with a loop-closure detection method and GTSAM pose-graph optimization;
the camera positioning module is used for emitting light beams to three different positions through a laser radar positioning device to be positioned to obtain a local map, matching the local map with a global map, and determining the position and the direction of the camera according to a matching result; and
and the display module is used for displaying the serial number, the position and the direction of the camera in real time in the global map.
Further, the dividing point module specifically includes:
acquiring point cloud data of each area through a laser radar sensor in each laser radar positioning device;
re-projecting the point cloud data of each frame into a distance image;
evaluating the distance image column by column and dividing it into ground points and other points to be segmented;
and segmenting the distance image with an image-based segmentation method: the points in the image are clustered and points that cannot be clustered are removed, giving a segmentation label for each class of point set, where the segmentation labels cover the segmented points other than the ground points.
Further, the pose optimization module specifically includes: constructing constraint relations from the point cloud types, namely a point-to-plane constraint on the ground points and a point-to-line constraint on the segmented points, and solving them with the LM (Levenberg-Marquardt) method: LM optimization of the point-to-plane constraint on the ground points yields the change in the vertical dimensions, LM optimization of the point-to-line constraint on the segmented points yields the change in the horizontal dimensions, and the pose transformation matrix between the two frames is obtained from this two-step LM optimization.
Further, the feature extraction module specifically includes: dividing the distance image horizontally into several sub-images based on the segmentation result, calculating the smoothness of the points, sorting the points by smoothness, and separating edge points from planar points, which yields four feature point sets and hence four point cloud types.
Further, the camera positioning module includes:
emitting light beams to three different positions through a laser radar positioning device to be positioned, receiving a large number of points returned from the outer surface of an object, and obtaining a local point cloud map;
matching the local point cloud map with the global point cloud map, and if the matching degree between the local point cloud map and the global point cloud map is greater than a threshold value, determining the matching relation between the local point cloud map and the global point cloud map;
and obtaining the position and the direction of a camera in the laser radar positioning device according to the matching relation.
The invention has the following advantages:
(1) the method not only constructs a global map with the laser radar SLAM technique, but also integrates the constructed laser radar SLAM map into a video monitoring system for use in video monitoring scenes: the SLAM global map of the monitored site is displayed in real time on the large display screen of the monitoring system, giving the user real-time and intuitive environment information inside and outside the video monitoring area, which makes it easier to manage the safety of the area and improves the user experience;
(2) the map is built by collecting data with the distributed monitoring cameras and then processing it, so it is relatively comprehensive; monitoring cameras subsequently added to the campus can be positioned accurately in the map through the device without rebuilding the map, so a map built once can be used for a long time, which effectively saves cost and gives high implementability and maintainability;
(3) the invention combines the laser radar SLAM map with the laser radar positioning system: the position and shooting direction of each camera can be displayed in the global SLAM map, the exact position of a monitoring picture in the map of the whole area can be judged from that picture and shown on the large screen of the monitoring center, and this provides technical support and a basic platform for other intelligent techniques such as visual object recognition and target tracking.
Drawings
The invention will be further described with reference to the following examples and the accompanying drawings.
Fig. 1 is a flow chart of an execution of a distributed monitoring camera positioning method based on laser radar mapping according to the present invention.
Fig. 2 is an execution flow chart of a distributed monitoring camera positioning system based on laser radar mapping according to the present invention.
FIG. 3 is a schematic structural diagram of a lidar positioning apparatus of the present invention.
Fig. 4 is a schematic diagram of the positioning effect of the distributed monitoring camera based on the laser radar mapping according to the present invention.
Detailed Description
As shown in fig. 1, in the distributed monitoring camera positioning method based on laser radar mapping provided by the present invention, a laser radar sensor is mounted on each camera to form an integrated laser radar positioning device. As shown in fig. 3, the laser radar positioning device includes a laser radar sensor A and a camera B, so that in three-dimensional space the position of the laser radar sensor is substantially equivalent to that of the distributed monitoring camera. The method includes the following steps:
step 1, point cloud data of each area is obtained through the laser radar positioning device and divided into ground points and other segmented points;
step 2, horizontally dividing the distance image into several sub-images based on the segmentation result and calculating the smoothness of the points (for example, the distance image of a 16-line sensor has 16 rows in the vertical direction; it is divided into sub-images along the horizontal direction, and within each sub-image the smoothness of every point is computed from its neighbouring points), sorting the points by smoothness, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types;
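The smoothness formula itself is not given in the source; the sketch below assumes the LOAM-style curvature measure, computed per point from the range differences to its neighbours within the same scan row (the function name and the neighbourhood size k are choices made here, not taken from the patent):

```python
import numpy as np

def point_smoothness(ranges: np.ndarray, k: int = 5) -> np.ndarray:
    """LOAM-style smoothness for one scan row of the distance image.

    ranges: range values of a single scan line (one image row).
    k:      neighbours used on each side of a point (assumed value).
    """
    n = len(ranges)
    c = np.full(n, np.nan)                      # border points get no score
    for i in range(k, n - k):
        neighbours = np.concatenate((ranges[i - k:i], ranges[i + 1:i + k + 1]))
        # Large values indicate an edge point, small values a planar point.
        c[i] = abs(np.sum(neighbours - ranges[i])) / (2 * k * max(ranges[i], 1e-6))
    return c

# Toy scan row: a flat wall with one sharp depth discontinuity (an edge).
row = np.array([10.0, 10.02, 10.01, 9.99, 10.0, 4.0, 4.01, 4.02, 4.0, 3.99, 4.01])
print(point_smoothness(row, k=2))
```

Points with large smoothness values would then be taken as edge points and points with small values as planar points, matching the sorting step described above.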
step 3, optimizing the radar odometry: constraint relations are constructed from the point cloud types, and the pose transformation matrix is obtained through two rounds of LM (Levenberg-Marquardt) optimization, giving the pose transformation between adjacent frames;
step 4, selecting a group of similar feature sets to construct the corresponding global map feature point cloud using a graph-optimization-based method, and constructing the global map with a loop-closure detection method and GTSAM pose-graph optimization. The similar feature sets are the feature point sets of the current frame and its adjacent frames. Radar mapping maps the feature point set of each frame to the corresponding global map feature point cloud; to obtain that global map feature point cloud, the feature sets of the current frame and its adjacent frames are selected and the corresponding global map feature point cloud is constructed with the graph-optimization-based method. Pose constraints between the feature point set of each frame and the corresponding global map feature point cloud are constructed with the LM optimization method, and the final global map is obtained by optimizing the pose relations with GTSAM together with loop-closure detection.
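The source names GTSAM and loop-closure detection but gives no code. The following minimal pose-graph sketch uses GTSAM's Python bindings, with every pose and noise value invented for illustration: odometry factors link consecutive frames, one loop-closure factor links frame 3 back to frame 0, and Levenberg-Marquardt optimization yields the corrected poses.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.1, 0.05, 0.05, 0.05]))

# Anchor the first frame at the origin.
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), noise))

# Odometry factors from the two-step LM scan matching (placeholder values).
odom = gtsam.Pose3(gtsam.Rot3.Yaw(0.1), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, odom, noise))

# Loop-closure factor: frame 3 re-observes frame 0 (placeholder measurement).
loop = gtsam.Pose3(gtsam.Rot3.Yaw(-0.3), gtsam.Point3(-2.9, -0.2, 0.0))
graph.add(gtsam.BetweenFactorPose3(3, 0, loop, noise))

# Initial guesses, then Levenberg-Marquardt optimization of the pose graph.
initial = gtsam.Values()
for i in range(4):
    initial.insert(i, gtsam.Pose3(gtsam.Rot3.Yaw(0.1 * i), gtsam.Point3(float(i), 0.0, 0.0)))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(3))
```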
Step 5, emitting light beams to three different positions through a laser radar positioning device to be positioned to obtain a local map, matching the local map with a global map, and determining the position and the direction of a camera according to a matching result;
and 6, displaying the serial number, the position and the direction of the camera in real time in the global map. As shown in fig. 4, the position, the shooting direction and the shooting area of each camera can be visually displayed to the user in real time.
Preferably, the step 1 specifically comprises:
step 11, acquiring point cloud data of each area through the laser radar sensor in each laser radar positioning device: each laser radar sensor emits laser beams into its area, the received signals reflected from targets are compared with the emitted signals, and after suitable processing, parameters such as the position, direction and height of the targets in the area are obtained, giving the point cloud data;
step 12, projecting the point cloud data of each frame into a distance image; this can be implemented with OpenCV;
step 13, evaluating the distance image column by column and dividing it into ground points and other points to be segmented; for example, the VLP-16 laser sensor has 16 scan lines in the vertical direction, so the distance image has 16 rows, each representing the vertical dimension of the original three-dimensional space, and ground points and non-ground points can be labeled well by evaluating this vertical dimension (a sketch follows step 14 below).
And step 14, segmenting the distance image with an image-based segmentation method: the points in the image are clustered and points that cannot be clustered are removed, giving a segmentation label for each class of point set, where the segmentation labels cover the segmented points other than the ground points.
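The sketch below is a minimal Python version of steps 12 and 13, assuming a VLP-16-style sensor (16 scan lines, here binned into 1800 columns; the 10 degree ground threshold is likewise an assumed value). Empty image cells are left at zero and would be masked out in a full implementation.

```python
import numpy as np

V_RES, H_RES = 16, 1800            # scan lines and horizontal bins (assumed)
V_MIN, V_MAX = -15.0, 15.0         # vertical field of view in degrees (VLP-16)

def to_range_image(points: np.ndarray):
    """Project an (N, 3) point cloud into a V_RES x H_RES distance image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    v_ang = np.degrees(np.arctan2(z, np.hypot(x, y)))
    h_ang = np.degrees(np.arctan2(y, x))
    row = np.round((v_ang - V_MIN) / (V_MAX - V_MIN) * (V_RES - 1)).astype(int)
    col = np.round((h_ang + 180.0) / 360.0 * (H_RES - 1)).astype(int)
    ok = (row >= 0) & (row < V_RES)
    img = np.zeros((V_RES, H_RES))
    xyz = np.zeros((V_RES, H_RES, 3))
    img[row[ok], col[ok]] = rng[ok]
    xyz[row[ok], col[ok]] = points[ok]
    return img, xyz

def ground_mask(xyz: np.ndarray, angle_thresh_deg: float = 10.0) -> np.ndarray:
    """Column-wise ground check: a point is marked as ground when the segment
    to the point one scan line above it is nearly horizontal."""
    d = xyz[1:] - xyz[:-1]                              # vertical neighbours
    ang = np.degrees(np.arctan2(d[..., 2], np.hypot(d[..., 0], d[..., 1])))
    mask = np.zeros(xyz.shape[:2], dtype=bool)
    mask[:-1] = np.abs(ang) < angle_thresh_deg
    return mask
```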
Preferably, the step 3 specifically includes: constructing constraint relations from the point cloud types, namely a point-to-plane constraint on the ground points and a point-to-line constraint on the segmented points, and solving them with the LM (Levenberg-Marquardt) method: LM optimization of the point-to-plane constraint on the ground points yields the change in the vertical dimensions, LM optimization of the point-to-line constraint on the segmented points yields the change in the horizontal dimensions, and the pose transformation matrix between the two frames is obtained from this two-step LM optimization.
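As a toy illustration of the two-step optimization, the sketch below uses SciPy's Levenberg-Marquardt solver. The residual forms follow the description (signed point-to-plane distances for ground points, point-to-line distances for segmented points), while the plane and line parameters, the random test points, and the function names are assumptions made here:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def transform(pts, tx, ty, tz, roll, pitch, yaw):
    R = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    return pts @ R.T + np.array([tx, ty, tz])

def step1_residuals(params, ground_pts, plane_n, plane_d):
    """Signed point-to-plane distances; params are [t_z, roll, pitch]."""
    tz, roll, pitch = params
    p = transform(ground_pts, 0.0, 0.0, tz, roll, pitch, 0.0)
    return p @ plane_n + plane_d

def step2_residuals(params, edge_pts, line_p, line_dir):
    """Point-to-line distances; params are [t_x, t_y, yaw].
    (A full implementation would substitute the step-1 estimates here.)"""
    tx, ty, yaw = params
    p = transform(edge_pts, tx, ty, 0.0, 0.0, 0.0, yaw)
    return np.linalg.norm(np.cross(p - line_p, line_dir), axis=1)

# Step 1: vertical variables from ground points against a horizontal plane.
ground = np.random.rand(50, 3) * [10.0, 10.0, 0.01]
r1 = least_squares(step1_residuals, x0=np.zeros(3), method="lm",
                   args=(ground, np.array([0.0, 0.0, 1.0]), 0.0))
# Step 2: horizontal variables from segmented edge points against a vertical line.
edges = np.random.rand(50, 3)
r2 = least_squares(step2_residuals, x0=np.zeros(3), method="lm",
                   args=(edges, np.zeros(3), np.array([0.0, 0.0, 1.0])))
print("vertical:", r1.x, "horizontal:", r2.x)
```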
Preferably, the step 5 specifically includes:
emitting light beams to three different positions through a laser radar positioning device to be positioned, receiving a large number of points returned from the outer surface of an object, and obtaining a local point cloud map;
matching the local point cloud map with the global point cloud map, and if the matching degree between the two is greater than a threshold value, establishing the matching relation between them, i.e. the 2D feature points and the 3D point cloud feature points can be brought into coincidence;
and obtaining the position and the direction of a camera in the laser radar positioning device according to the matching relation.
Knowing the 3D coordinates of the feature points in the world coordinate system and their 2D pixel coordinates in a given picture frame, the pose of that picture frame in the world coordinate system can be solved; this is therefore called 3D-2D matching. At least 3 pairs of matching points are needed to compute the pose, which is the P3P method. The P3P method uses the 3 pairs of matching points and triangle similarity relations to set up quadratic equations in two unknowns and solve the coordinates of the points in the camera coordinate system, after which the pose of the camera coordinate system relative to the world coordinate system is solved with an ICP pose estimation algorithm. The position is obtained directly from the pose, and the direction can be obtained from the field of view (FOV) of the laser emitted by the radar.
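The source describes the matching only as a "matching degree greater than a threshold" and does not name a library; the following sketch uses Open3D's ICP registration as one plausible realisation, with the correspondence distance and fitness threshold chosen here for illustration:

```python
import numpy as np
import open3d as o3d

def locate_in_global_map(local_pts: np.ndarray, global_pts: np.ndarray,
                         fitness_threshold: float = 0.6):
    """Match a local scan (N, 3) against the global map (M, 3); return the 4x4
    pose of the local map in the global frame, or None if the match is weak."""
    local_pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(local_pts))
    global_pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(global_pts))
    result = o3d.pipelines.registration.registration_icp(
        local_pcd, global_pcd,
        max_correspondence_distance=0.5,   # metres, assumed
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    if result.fitness < fitness_threshold:  # "matching degree" below threshold
        return None
    return result.transformation
```

The translation part of the returned matrix gives the camera position; combined with the laser FOV described above, the rotation part gives the shooting direction.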
As shown in fig. 2, in the distributed monitoring camera positioning system based on laser radar mapping provided by the present invention, a laser radar sensor is mounted on each camera to form an integrated laser radar positioning device. As shown in fig. 3, the laser radar positioning device includes a laser radar sensor A and a camera B, so that in three-dimensional space the position of the laser radar sensor is substantially equivalent to that of the distributed monitoring camera. The system includes:
the dividing point module is used for acquiring point cloud data of each area through the laser radar positioning device and dividing the point cloud data into ground points and other segmented points;
the feature extraction module is used for horizontally dividing the distance image into several sub-images based on the segmentation result and calculating the smoothness of the points (for example, the distance image of a 16-line sensor has 16 rows in the vertical direction; it is divided into sub-images along the horizontal direction, and within each sub-image the smoothness of every point is computed from its neighbouring points), sorting the points by smoothness, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types;
the pose optimization module is used for optimizing the radar odometry: constraint relations are constructed from the point cloud types, and the pose transformation matrix is obtained through two rounds of LM (Levenberg-Marquardt) optimization, giving the pose transformation between adjacent frames;
the map building module is used for selecting a group of similar feature sets to construct the corresponding global map feature point cloud using a graph-optimization-based method, and for constructing the global map with a loop-closure detection method and GTSAM pose-graph optimization. The similar feature sets are the feature point sets of the current frame and its adjacent frames. Radar mapping maps the feature point set of each frame to the corresponding global map feature point cloud; to obtain that global map feature point cloud, the feature sets of the current frame and its adjacent frames are selected and the corresponding global map feature point cloud is constructed with the graph-optimization-based method. Pose constraints between the feature point set of each frame and the corresponding global map feature point cloud are constructed with the LM optimization method, and the final global map is obtained by optimizing the pose relations with GTSAM together with loop-closure detection.
The camera positioning module is used for emitting light beams to three different positions through a laser radar positioning device to be positioned to obtain a local map, matching the local map with a global map, and determining the position and the direction of the camera according to a matching result; and
and the display module is used for displaying the serial number, the position and the direction of the camera in real time in the global map. As shown in fig. 4, the position, the shooting direction and the shooting area of each camera can be visually displayed to the user in real time.
Preferably, the dividing point module specifically includes:
acquiring point cloud data of each area through the laser radar sensor in each laser radar positioning device: each laser radar sensor emits laser beams into its area, the received signals reflected from targets are compared with the emitted signals, and after suitable processing, parameters such as the position, direction and height of the targets in the area are obtained, giving the point cloud data;
projecting the point cloud data of each frame into a distance image; this can be implemented with OpenCV;
evaluating the distance image column by column and dividing it into ground points and other points to be segmented; for example, the VLP-16 laser sensor has 16 scan lines in the vertical direction, so the distance image has 16 rows, each representing the vertical dimension of the original three-dimensional space, and ground points and non-ground points can be labeled well by evaluating this vertical dimension.
And segmenting the distance image with an image-based segmentation method: the points in the image are clustered and points that cannot be clustered are removed, giving a segmentation label for each class of point set, where the segmentation labels cover the segmented points other than the ground points.
Preferably, the pose optimization module specifically includes: constructing constraint relations from the point cloud types, namely a point-to-plane constraint on the ground points and a point-to-line constraint on the segmented points, and solving them with the LM (Levenberg-Marquardt) method: LM optimization of the point-to-plane constraint on the ground points yields the change in the vertical dimensions, LM optimization of the point-to-line constraint on the segmented points yields the change in the horizontal dimensions, and the pose transformation matrix between the two frames is obtained from this two-step LM optimization.
Preferably, the camera positioning module includes:
emitting light beams to three different positions through a laser radar positioning device to be positioned, receiving a large number of points returned from the outer surface of an object, and obtaining a local point cloud map;
matching the local point cloud map with the global point cloud map, and if the matching degree between the two is greater than a threshold value, establishing the matching relation between them, i.e. the 2D feature points and the 3D point cloud feature points can be brought into coincidence;
and obtaining the position and the direction of a camera in the laser radar positioning device according to the matching relation.
Knowing the 3D coordinates of the feature points in the world coordinate system and their 2D pixel coordinates in a given picture frame, the pose of that picture frame in the world coordinate system can be solved; this is therefore called 3D-2D matching. At least 3 pairs of matching points are needed to compute the pose, which is the P3P method. The P3P method uses the 3 pairs of matching points and triangle similarity relations to set up quadratic equations in two unknowns and solve the coordinates of the points in the camera coordinate system, after which the pose of the camera coordinate system relative to the world coordinate system is solved with an ICP pose estimation algorithm. The position is obtained directly from the pose, and the direction can be obtained from the field of view (FOV) of the laser emitted by the radar.
According to the method, the area map of the monitored site is built with the laser radar SLAM technique, and the distributed monitoring cameras are positioned within that area map by the laser radar positioning system. First, the area map is built with the laser radar SLAM mapping method; then the laser radar positioning device scans its local area to obtain local point cloud maps of at least three different positions; finally, the acquired local point cloud maps are matched against the pre-stored SLAM map to determine the position and shooting direction of the corresponding distributed monitoring camera. The positioning accuracy is high, and the device is simple and easy to operate.
With the technical scheme of the invention, the positions and shooting directions of the distributed monitoring cameras can be displayed on the SLAM map. A user can intuitively and quickly see the relation between each camera and its area, observe the environment information inside the video monitoring area in real time, and determine the exact position and shooting direction of each camera in the area map, so the relevant area can be located quickly when an accident appears in a monitoring picture. This also helps monitoring staff maintain and manage the monitoring equipment, improves the real-time performance and effectiveness of other intelligent security measures such as visual object recognition and target tracking, and improves the user experience. The user can judge the exact position of a monitoring picture in the global SLAM map from the picture acquired in real time; since the SLAM map integrates the area environment into the system as a three-dimensional map, it offers pictures with rich layers and a wide field of view, supports rotating the three-dimensional view and changing the viewing angle, and facilitates marking and path planning on the map.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that the described embodiments are illustrative only and do not limit the scope of the invention; equivalent modifications and variations made by those skilled in the art without departing from the spirit of the invention remain within the scope defined by the appended claims.

Claims (8)

1. A distributed monitoring camera positioning method based on laser radar mapping is characterized in that: installing a laser radar sensor on each camera to form an integrated laser radar positioning device, wherein the method comprises the following steps:
step 1, point cloud data of each area is obtained through the laser radar positioning device and divided into ground points and other segmented points;
step 2, calculating the smoothness of the points based on the segmentation result, sorting the points by smoothness, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types;
step 3, optimizing the radar odometry: constraint relations are constructed from the point cloud types, and the pose transformation matrix is obtained through two rounds of LM (Levenberg-Marquardt) optimization, giving the pose transformation between adjacent frames;
step 4, selecting a group of similar feature sets to construct the corresponding global map feature point cloud using a graph-optimization-based method, and constructing the global map with a loop-closure detection method and GTSAM pose-graph optimization;
step 5, emitting light beams to three different positions through a laser radar positioning device to be positioned to obtain a local map, matching the local map with a global map, and determining the position and the direction of a camera according to a matching result;
and 6, displaying the number, the position and the direction of the camera in real time in the global map.
2. The method of claim 1, wherein: the step 1 specifically comprises the following steps:
step 11, acquiring point cloud data of each area through a laser radar sensor in each laser radar positioning device;
step 12, projecting the point cloud data of each frame into a distance image;
step 13, evaluating the distance image column by column and dividing it into ground points and other points to be segmented;
and step 14, segmenting the distance image with an image-based segmentation method: the points in the image are clustered and points that cannot be clustered are removed, giving a segmentation label for each class of point set, where the segmentation labels cover the segmented points other than the ground points.
3. The method of claim 1, wherein: the step 3 specifically includes: constructing constraint relations from the point cloud types, namely a point-to-plane constraint on the ground points and a point-to-line constraint on the segmented points, and solving them with the LM (Levenberg-Marquardt) method: LM optimization of the point-to-plane constraint on the ground points yields the change in the vertical dimensions, LM optimization of the point-to-line constraint on the segmented points yields the change in the horizontal dimensions, and the pose transformation matrix between the two frames is obtained from this two-step LM optimization.
4. The method of claim 1, wherein: the step 5 specifically includes:
emitting light beams to three different positions through a laser radar positioning device to be positioned, receiving a large number of points returned from the outer surface of an object, and obtaining a local point cloud map;
matching the local point cloud map with the global point cloud map, and if the matching degree between the local point cloud map and the global point cloud map is greater than a threshold value, determining the matching relation between the local point cloud map and the global point cloud map;
and obtaining the position and the direction of a camera in the laser radar positioning device according to the matching relation.
5. A distributed monitoring camera positioning system based on laser radar mapping, characterized in that: a laser radar sensor is installed on each camera to form an integrated laser radar positioning device, the system comprising:
the dividing point module is used for acquiring point cloud data of each area through the laser radar positioning device and dividing the point cloud data into ground points and other segmented points;
the feature extraction module is used for calculating the smoothness of the points based on the segmentation result, sorting the points by smoothness, separating edge points from planar points, and screening different feature point sets out of the edge points and planar points to obtain the point cloud types;
the pose optimization module is used for optimizing the radar odometry: constraint relations are constructed from the point cloud types, and the pose transformation matrix is obtained through two rounds of LM (Levenberg-Marquardt) optimization, giving the pose transformation between adjacent frames;
the mapping module is used for selecting a group of similar feature sets to construct the corresponding global map feature point cloud using a graph-optimization-based method, and for constructing the global map with a loop-closure detection method and GTSAM pose-graph optimization;
the camera positioning module is used for emitting light beams to three different positions through a laser radar positioning device to be positioned to obtain a local map, matching the local map with a global map, and determining the position and the direction of the camera according to a matching result; and
and the display module is used for displaying the serial number, the position and the direction of the camera in real time in the global map.
6. The system of claim 5, wherein: the dividing point module specifically comprises:
acquiring point cloud data of each area through a laser radar sensor in each laser radar positioning device;
re-projecting the point cloud data of each frame into a distance image;
evaluating the distance image column by column and dividing it into ground points and other points to be segmented;
and segmenting the distance image with an image-based segmentation method: the points in the image are clustered and points that cannot be clustered are removed, giving a segmentation label for each class of point set, where the segmentation labels cover the segmented points other than the ground points.
7. The system of claim 5, wherein: the feature extraction module is specifically configured to divide the distance image horizontally into several sub-images based on the segmentation result, calculate the smoothness of the points, sort the points by smoothness, and separate edge points from planar points to obtain four feature point sets and four point cloud types; and the pose optimization module specifically includes: constructing constraint relations from the point cloud types, namely a point-to-plane constraint on the ground points and a point-to-line constraint on the segmented points, and solving them with the LM (Levenberg-Marquardt) method: LM optimization of the point-to-plane constraint on the ground points yields the change in the vertical dimensions, LM optimization of the point-to-line constraint on the segmented points yields the change in the horizontal dimensions, and the pose transformation matrix between the two frames is obtained from this two-step LM optimization.
8. The system of claim 5, wherein: the camera positioning module includes:
emitting light beams to three different positions through a laser radar positioning device to be positioned, receiving a large number of points returned from the outer surface of an object, and obtaining a local point cloud map;
matching the local point cloud map with the global point cloud map, and if the matching degree between the local point cloud map and the global point cloud map is greater than a threshold value, determining the matching relation between the local point cloud map and the global point cloud map;
and obtaining the position and the direction of a camera in the laser radar positioning device according to the matching relation.
CN202210706570.7A 2022-06-21 2022-06-21 Distributed monitoring camera positioning method and system based on laser radar mapping Pending CN114973147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210706570.7A CN114973147A (en) 2022-06-21 2022-06-21 Distributed monitoring camera positioning method and system based on laser radar mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210706570.7A CN114973147A (en) 2022-06-21 2022-06-21 Distributed monitoring camera positioning method and system based on laser radar mapping

Publications (1)

Publication Number Publication Date
CN114973147A (en) 2022-08-30

Family

ID=82965618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210706570.7A Pending CN114973147A (en) 2022-06-21 2022-06-21 Distributed monitoring camera positioning method and system based on laser radar mapping

Country Status (1)

Country Link
CN (1) CN114973147A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030212A (en) * 2023-03-28 2023-04-28 北京集度科技有限公司 Picture construction method, device, vehicle and program product
CN116030212B (en) * 2023-03-28 2023-06-02 北京集度科技有限公司 Picture construction method, equipment, vehicle and storage medium

Similar Documents

Publication Publication Date Title
CN111462200B (en) Cross-video pedestrian positioning and tracking method, system and equipment
Xiao et al. Building extraction from oblique airborne imagery based on robust façade detection
CN107067794B (en) Indoor vehicle positioning and navigation system and method based on video image processing
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
EP2798611B1 (en) Camera calibration using feature identification
EP2874097A2 (en) Automatic scene parsing
US10163256B2 (en) Method and system for generating a three-dimensional model
US20160154999A1 (en) Objection recognition in a 3d scene
Wen et al. Spatial-related traffic sign inspection for inventory purposes using mobile laser scanning data
US20160283774A1 (en) Cloud feature detection
US9373174B2 (en) Cloud based video detection and tracking system
Taneja et al. Geometric change detection in urban environments using images
Malinovskiy et al. Video-based vehicle detection and tracking using spatiotemporal maps
Józsa et al. Towards 4D virtual city reconstruction from Lidar point cloud sequences
US20220044558A1 (en) Method and device for generating a digital representation of traffic on a road
Gómez et al. Intelligent surveillance of indoor environments based on computer vision and 3D point cloud fusion
WO2020211593A1 (en) Digital reconstruction method, apparatus, and system for traffic road
CN114973147A (en) Distributed monitoring camera positioning method and system based on laser radar mapping
CN114511592A (en) Personnel trajectory tracking method and system based on RGBD camera and BIM system
CN116977806A (en) Airport target detection method and system based on millimeter wave radar, laser radar and high-definition array camera
Börcs et al. Dynamic 3D environment perception and reconstruction using a mobile rotating multi-beam Lidar scanner
CN116259001A (en) Multi-view fusion three-dimensional pedestrian posture estimation and tracking method
Shao et al. 3D crowd surveillance and analysis using laser range scanners
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN115249345A (en) Traffic jam detection method based on oblique photography three-dimensional live-action map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination