CN115641670A - Access control method based on fusion of camera and laser radar - Google Patents


Info

Publication number
CN115641670A
CN115641670A
Authority
CN
China
Prior art keywords
camera
laser
frame
target
personnel
Prior art date
Legal status
Pending
Application number
CN202211361533.3A
Other languages
Chinese (zh)
Inventor
袁行船
肖汉彪
汪玄
李娜
陆德波
周冬桥
余敏
焦红爱
张忠兵
詹文斌
Current Assignee
Csic Wuhan Lingjiu Hi Tech Co ltd
Original Assignee
Csic Wuhan Lingjiu Hi Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Csic Wuhan Lingjiu Hi Tech Co ltd filed Critical Csic Wuhan Lingjiu Hi Tech Co ltd
Priority to CN202211361533.3A priority Critical patent/CN115641670A/en
Publication of CN115641670A publication Critical patent/CN115641670A/en
Pending legal-status Critical Current

Abstract

The invention belongs to the security field and provides an access control method based on the fusion of a camera and a laser radar, comprising the following steps: calibrating the internal and external parameters of each camera; calibrating the external parameters between the camera and the laser radar; fusing and labeling the image data of the camera and the point cloud data of the laser radar; detecting target personnel from the fused data; tracking the target personnel; and analyzing the position, state and trajectory of the target personnel, and controlling the opening and closing of the access control system accordingly. The invention verifies a person's identity from the acquired identity information, determines the person's access authority, and infers from the person's trajectory which entrance the person is about to enter; if the system allows entry, the door lock or automatic door is opened, otherwise unlocking is refused. The scheme fuses the camera with the three-dimensional laser radar and completes access control accurately, efficiently and intelligently.

Description

Access control method based on fusion of camera and laser radar
Technical Field
The invention belongs to the technical field of security protection, and particularly relates to an access control method based on the fusion of a camera and a laser radar.
Background
At present, access control systems are becoming increasingly intelligent. A conventional access control system collects identity information in some mode (card swiping, biometric identification, etc.) and then verifies the authority attached to that identity; as long as the identity is legal, the system automatically opens the door lock. This approach has indeed solved many problems and facilitates the management of residential compounds and office buildings.
With the development of technology, new techniques are increasingly applied to access control systems. In particular, artificial intelligence (AI) encompasses target detection, target recognition, target tracking, trajectory calculation, behavior understanding and so on, where detection and recognition form the basis for the subsequent tracking, trajectory calculation and behavior understanding.
Three-dimensional lidar is a radar system that detects the spatial three-dimensional spherical coordinates of a target by emitting a laser beam. Using the principles of coherent angle measurement and frequency-modulated ranging, it transmits a detection signal toward the target, compares the received echo reflected from the target with the transmitted signal, and after suitable processing obtains the target's spherical coordinates. With the rapid development of three-dimensional imaging technology, lidar is now applied in many fields. People's movements at entrances and exits carry definite motion trajectories and spatial information in the actual environment, so a laser sensor is introduced to acquire three-dimensional information of targets in the scene for more accurate tracking, trajectory calculation and behavior understanding.
At present, three-dimensional lidars are rarely applied in access control systems. Application No. 201810259923.7 discloses an access control system based on lidar 3D imaging, but that patent adopts a low-cost lidar whose human-body acquisition module only collects three-dimensional information of an object, which a background system then checks against legal users; the lidar there is used solely to collect three-dimensional human-body information, not to track a person's position or compute a trajectory.
Existing access control is not intelligent enough: identity can only be verified by simple card swiping, biometric identification and similar means, and for multi-door linked access control in particular, a dedicated recognition device such as a card reader or fingerprint scanner must be installed at every door. Current access control systems therefore cannot perform effective, intelligent control.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide an access control method based on the fusion of a camera and a laser radar, so as to solve the technical problem that existing access control systems have a single control mode.
The invention adopts the following technical scheme:
the access control method based on the fusion of the camera and the laser radar comprises the following steps:
s1, calibrating internal and external parameters of each camera.
And S2, calibrating external parameters between the camera and the laser radar.
S3, fusing and labeling the image data of the camera and the point cloud data of the laser radar to obtain a conversion relation of points between a coordinate system of the laser radar and a two-dimensional plane coordinate system of the camera, and labeling the data by using a labeling tool;
s4, detecting target personnel according to the fusion data;
s5, tracking the target personnel;
and S6, analyzing the position, the state and the track of the target person, and correspondingly controlling the opening and closing of the access control system.
The invention has the following beneficial effects. Unlike ordinary access control, the invention combines a camera with a three-dimensional laser radar: personnel are effectively verified by face recognition, their behavior is tracked by the three-dimensional laser radar, and control is finally exercised according to their access authority. Specifically, a person's identity is verified from the acquired identity information, the person's access authority is determined, and the entrance the person intends to enter is inferred from the person's trajectory; if the system allows entry, the door lock or automatic door is opened, otherwise unlocking is refused. This scheme fuses the camera with the three-dimensional laser radar and completes access control accurately, efficiently and intelligently.
Drawings
Fig. 1 is a flowchart of an access control method based on the fusion of a camera and a laser radar according to a first embodiment of the present invention;
fig. 2 is a structural diagram of an access control system based on the fusion of a camera and a laser radar according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
The first embodiment is as follows:
Fig. 1 shows the flow of the access control method based on the fusion of a camera and a laser radar provided by the first embodiment of the present invention; for convenience of description, only the parts relevant to this embodiment are shown.
As shown in fig. 1, the access control method based on the fusion of the camera and the laser radar provided by this embodiment includes the following steps:
s1, calibrating internal and external parameters of each camera.
The camera in this embodiment is a wide-angle camera. The field of view of an ordinary camera is limited to about 60 degrees, which constrains it in intelligent monitoring scenes and leaves large blind areas. To overcome the limited field of view, a fisheye wide-angle camera is used instead of an ordinary pinhole camera.
First, parameters such as the focal length, principal point and nonlinear distortion coefficients of the camera are determined, and image correction is performed with the calibrated nonlinear distortion coefficients to generate image data with a normal viewing angle. Specifically, the camera's intrinsic matrix and distortion coefficients are calibrated with a checkerboard as the calibration board, and the extrinsic matrices between adjacent cameras are calculated. In this embodiment, a reference coordinate system is established on the checkerboard, and the position of each camera relative to the checkerboard is computed from the corrected image data, thereby determining the relative positions of the cameras.
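For illustration, a minimal sketch of this checkerboard calibration with OpenCV follows; the board dimensions, square size and image paths are assumptions, and for a true fisheye lens the cv2.fisheye calibration functions would be substituted for cv2.calibrateCamera.

```python
# Sketch of checkerboard intrinsic calibration; board size/paths are assumed.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners per row/column (assumed board)
SQUARE_SIZE = 0.025     # board square size in meters (assumed)

# 3D corner coordinates on the board plane (Z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix, dist the distortion coefficients; rvecs/tvecs give
# the board pose per view, from which relative camera extrinsics can be chained.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("calib/sample.png"), K, dist)
```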
S2, calibrating the external parameters between the camera and the laser radar.
The camera and the laser radar also use a checkerboard as the calibration board. The relative position of the checkerboard in the camera's visual imaging is calculated, and the plane containing the checkerboard is expressed in the camera coordinate system by its unit normal vector and distance. In the lidar coordinate system, the coordinates of the same plane are fitted from the three-dimensional radar points lying on it, and the plane is likewise characterized by a unit normal vector and a distance.
In order to extract accurate corner coordinates from the checkerboard point cloud and thus obtain higher joint calibration precision, this step provides a corner extraction method based on calibration-board edge refinement. The specific process is as follows:
s21, completing crude extraction of the point cloud of the calibration plate by utilizing a conditional filtering algorithm of the point cloud: and (3) a point pi with coordinates (xi, yi, zi) in the laser point cloud, if the point coordinates satisfy: x is the number of i ∈(X 1 ,X 2 )∩y i ∈(Y 1 ,Y 2 )∩z i ∈(Z 1 ,Z 2 ) If so, then the point is retained, otherwise the point is culled, wherein X 1 、X 2 A threshold value set along the X axis of the laser coordinate system; y is 1 、Y 2 A threshold value set along the Y axis of the laser coordinate system; z 1 、Z 2 Is a threshold set along the Z-axis of the laser coordinate system.
The calibration-board point cloud is thus extracted from the entire original point cloud; at this stage it mainly comprises the board and the person holding it, and applying the conditional filter above yields the point cloud in the vicinity of the calibration board.
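A minimal sketch of the conditional pass-through filter of step S21, assuming the scan is an (N, 3) NumPy array; the threshold ranges shown are illustrative:

```python
import numpy as np

def condition_filter(points, x_rng, y_rng, z_rng):
    """Keep points p_i = (x_i, y_i, z_i) with x_i in (X1, X2),
    y_i in (Y1, Y2) and z_i in (Z1, Z2); cull the rest."""
    m = ((points[:, 0] > x_rng[0]) & (points[:, 0] < x_rng[1]) &
         (points[:, 1] > y_rng[0]) & (points[:, 1] < y_rng[1]) &
         (points[:, 2] > z_rng[0]) & (points[:, 2] < z_rng[1]))
    return points[m]

# e.g. keep only the region around the hand-held board,
# assuming `cloud` is the raw (N, 3) lidar scan
board_region = condition_filter(cloud, (0.5, 3.0), (-1.0, 1.0), (-0.5, 1.5))
```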
S22, fitting the calibration-board plane and obtaining the board point cloud, filtering the non-board points out of the point cloud obtained in step S21: a parameterized model is repeatedly built from the board point data set; after each model is built, the deviation of each point from it is computed, points whose deviation is below a set threshold are called inliers, the number of inliers of each fitted model is recorded, and the model with the most inliers is taken as the optimal model.
In this step, 3 non-collinear points are randomly selected each time to construct a three-dimensional plane equation, the distances from the remaining points to that plane are computed, and the number of points whose distance is below a given threshold is counted; if the ratio of that number to the total size of the board point cloud exceeds a proportional threshold, the algorithm terminates. All resulting inliers form the calibration-board point cloud.
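The fitting loop described here is essentially RANSAC plane estimation; the following is a minimal sketch under assumed distance and stopping thresholds:

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, stop_ratio=0.8, iters=500, rng=None):
    """Fit a plane by repeated 3-point sampling; return the inlier cloud."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:        # collinear sample, resample
            continue
        n /= np.linalg.norm(n)
        d = np.abs((points - p0) @ n)       # point-to-plane distances
        inliers = d < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            if inliers.mean() > stop_ratio:  # inlier ratio reached, stop early
                break
    return points[best_inliers]              # the calibration-board cloud
```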
S23, point cloud edge fitting: edge points of the board point cloud are computed from the point distribution, the four edges of the board are obtained by fitting those edge points, each edge is processed separately, and the intersection of each pair of adjacent lines in three-dimensional space yields the four vertices of the board.
Because of lidar error, the two fitted lines rarely actually intersect, so the approximate intersection of the two skew lines is used directly as the board vertex.
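A minimal sketch of this vertex recovery, taking the midpoint of the shortest segment between two fitted skew edge lines as the approximate intersection (the line parameterizations are assumed):

```python
import numpy as np

def approx_intersection(p1, d1, p2, d2):
    """Lines are p + t*d with unit directions d1, d2; return the midpoint of
    the common perpendicular segment as the approximate intersection."""
    n = np.cross(d1, d2)
    denom = np.dot(n, n)
    if denom < 1e-12:                        # near-parallel edges: degenerate
        return (p1 + p2) / 2.0
    t1 = np.dot(np.cross(p2 - p1, d2), n) / denom
    t2 = np.dot(np.cross(p2 - p1, d1), n) / denom
    c1, c2 = p1 + t1 * d1, p2 + t2 * d2      # closest points on each line
    return (c1 + c2) / 2.0                   # board vertex estimate
```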
S3, fusing and labeling the image data of the camera and the point cloud data of the laser radar to obtain the conversion relation of points between the laser radar coordinate system and the two-dimensional camera plane coordinate system, and labeling the data with a labeling tool.
In projective geometry a plane and a point are dual elements, so a plane can be converted into a point by duality; the checkerboard calibration board only needs to be placed in three or more different poses, and finally the rotation matrix and translation vector between the two coordinate systems are computed by least squares. With this extrinsic calibration result, the three-dimensional point cloud produced by the lidar can be mapped completely onto the camera's panoramic image plane through the pinhole imaging model, realizing high-precision fusion of the laser point cloud and the image data. The fused panoramic data contains not only color information (the RGB three-channel values) but also the three-dimensional structure of the scene (coordinate values along X, Y and Z), giving a panoramic representation of the monitored scene.
The specific process is as follows:
s31, the calibration plate is placed under different postures, 3D coordinates of feature points of the calibration plate in the laser point cloud are obtained, the 3D coordinates are projected on 2D coordinates of the image, the pose transformation relation of a laser radar coordinate system and a camera coordinate system is obtained, and fusion of laser point cloud data and image data is achieved.
Each time the board is placed in a suitable pose, the 3D coordinates of its feature points in the three-dimensional laser point cloud are obtained, and the pixel coordinates of those feature points are obtained by extracting the board vertices from the image. Solving for the 2D projections of the N known 3D points yields the pose transformation between the lidar and the camera, realizing high-precision fusion of the laser point cloud and the image data.
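A minimal sketch of this pose solution using OpenCV's PnP solver; the vertex arrays, the intrinsics K and dist from step S1, and the lidar cloud are assumed inputs:

```python
import cv2
import numpy as np

# Assumed inputs: board vertices in lidar coordinates (from S23) and their
# pixel locations in the image, plus K/dist from the intrinsic calibration.
pts_3d = np.asarray(board_vertices_lidar, np.float64)   # (N, 3)
pts_2d = np.asarray(board_vertices_image, np.float64)   # (N, 2)

ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix; tvec is the translation

# Project any lidar point into the image to fuse point cloud and pixels
proj, _ = cv2.projectPoints(lidar_cloud.astype(np.float64), rvec, tvec, K, dist)
```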
S32, labeling the image data and the laser point cloud data.
Image data are labeled with the conventional tool Labelme, which supports polygon framing and label creation for multiple targets. For the laser point cloud, a 3D target detection algorithm first generates initial labels, and accurate bounding boxes are then obtained on that basis with a region-growing algorithm and a PCA algorithm. The region-growing algorithm segments the point cloud using the discontinuity and similarity of the data, enabling intelligent point-cloud box selection and eliminating interference from environmental factors such as the ground. The PCA algorithm computes the principal direction of the point cloud and derives the pedestrian's minimum bounding box from it; the bounding box expresses the target's position, size and orientation.
For the acquired image and laser point-cloud data, labeling must segment the image semantic information so that target personnel can be identified and classified: pedestrians, obstacles and other objects requiring labels are marked in the 3D laser point cloud, and their positions are mapped directly onto the 2D image, forming a rectangular laser frame there. When an image frame and a laser frame with a mapping relation are selected and activated, both are highlighted in red simultaneously, and the image region corresponding to the laser frame is highlighted.
S4, detecting target personnel from the fused data.
The specific process of this step is as follows: the three-dimensional laser point cloud is converted into bird's-eye-view form and, together with the image data, fed as input to a YOLO-v5-based detection network for target detection, yielding image frames and outputting target personnel information including the target's position information, ID and class number.
First, the YOLO-v5-based detection network is trained: a monitoring-scene target database is built, the data in it are labeled with IDs, categories and samples, and the collected data are fed into the YOLO-v5 detection network for training. The trained network then performs multi-target detection and recognition on the image and outputs the target's image region, ID, class number and other information.
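As a sketch of the bird's-eye-view conversion that feeds the detector, the following rasterizes the lidar points into a top-down height map; the grid extent and resolution are illustrative assumptions:

```python
import numpy as np

def to_bev(points, x_rng=(0, 40), y_rng=(-20, 20), res=0.1):
    """Project an (N, 3) lidar scan onto a top-down height map."""
    h = int((x_rng[1] - x_rng[0]) / res)
    w = int((y_rng[1] - y_rng[0]) / res)
    bev = np.zeros((h, w), np.float32)
    m = ((points[:, 0] >= x_rng[0]) & (points[:, 0] < x_rng[1]) &
         (points[:, 1] >= y_rng[0]) & (points[:, 1] < y_rng[1]))
    p = points[m]
    rows = ((p[:, 0] - x_rng[0]) / res).astype(int)
    cols = ((p[:, 1] - y_rng[0]) / res).astype(int)
    np.maximum.at(bev, (rows, cols), p[:, 2])   # keep max height per cell
    return bev   # stacked with the RGB image as detector input
```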
To effectively improve the accuracy of target detection after fusion, the invention defines a comprehensive similarity diff_ij with a distance constraint; when a laser frame and an image frame overlap, diff_ij serves as the basis for selecting the best matching box. Detection of laser frames and image frames falls into the following three cases:
Case one: the image frame and the laser frame do not overlap, and what is currently processed is an image frame with no laser frame; if the image frame's confidence is greater than 0.5, a distant target is considered to exist, otherwise the detection is considered a camera false positive and is discarded.
Case two: the image frame and the laser frame do not overlap, and a laser frame is currently processed; the laser frame is retained.
Case three: the image frame and the laser frame overlap, which divides into the following two modes:
1) The number of image frames C_i intersecting the laser frame L_j equals 1: if L_j and C_i satisfy the distance similarity ΔT(L_j, C_i) > δ_ΔT, the more accurately positioned image frame C_i is retained.
2) The number of image frames C_i intersecting the laser frame L_j is greater than 1: the comprehensive similarity diff_ij of all laser-frame/image-frame pairs is computed and the image frame most similar to the laser frame is retained, where diff_ij is the distance-constrained comprehensive similarity, diff_ij = ⌊d_ij + 0.5⌋ + ΔT(L_j, C_i); here d_ij is the intersection-over-union of the image frame and the laser frame, ⌊d_ij + 0.5⌋ denotes the largest integer not exceeding d_ij + 0.5, and ΔT(L_j, C_i) denotes the distance similarity between image frame C_i and laser frame L_j.
The invention has described the three possible detection-frame cases and given the corresponding processing strategies; these are now explained against the situations actually arising in experiments. Because a laser frame is the projection of a 3D point cluster onto the picture, the appearance of a laser frame means a real object necessarily exists at that position; if no laser frame appears, an object may be occluded and undetected, or there may be no object at all.
For case one, an image frame C_i is being processed but no laser frame L_j appears, which is most likely a distant target; experiments show that if the confidence of C_i is greater than 0.5 it can be taken as a distant target, otherwise it is considered a camera false positive and discarded.
For case two, a laser frame is being processed, which indicates that a real object must exist, so the laser frame is retained.
For case three, image frame C_i and laser frame L_j overlap. Since L_j appears, a real object must exist, and the true number of objects in L_j is at least 1. There are two sub-cases: 1) the number of image frames C_i intersecting L_j equals 1, and if L_j and C_i satisfy the distance similarity ΔT(L_j, C_i) > δ_ΔT, the more accurately positioned image frame C_i is retained; 2) the number of image frames C_i intersecting L_j is greater than 1, possibly because the laser detected several nearby objects of the same class, in which case the comprehensive similarity diff_ij of all frame pairs is computed and the image frame with the greatest similarity to the laser frame is retained.
Through this distance-constrained target-frame matching, the strengths of each sensor are exploited, and the target's position in the image and the laser point cloud, the target's ID, class number and other information are output.
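A minimal sketch of this distance-constrained matching; the distance-similarity function ΔT and its threshold δ_ΔT are not detailed in the text, so they appear here as assumed parameters (case one, the image-only confidence test, is handled separately upstream):

```python
import math

def iou(a, b):
    """Boxes as (x1, y1, x2, y2); return intersection-over-union d_ij."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_laser_to_image(laser_box, image_boxes, delta_t, delta_thresh=0.5):
    """Return the index of the best image box for one laser box, or None."""
    overlapping = [(i, b) for i, b in enumerate(image_boxes)
                   if iou(laser_box, b) > 0]
    if not overlapping:
        return None                       # case 2: keep the laser frame alone
    if len(overlapping) == 1:             # case 3-1: single intersecting box
        i, b = overlapping[0]
        return i if delta_t(laser_box, b) > delta_thresh else None
    # case 3-2: several candidates; maximize diff_ij = floor(d_ij + 0.5) + dT
    scores = [(math.floor(iou(laser_box, b) + 0.5) + delta_t(laser_box, b), i)
              for i, b in overlapping]
    return max(scores)[1]
```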
S5, tracking the target personnel.
S51, extracting an appearance characteristic vector of the target person, wherein the appearance characteristic vector comprises six-dimensional data of image (R, G, B) information and laser point cloud (x, y, z) information.
S52, describing the similarity between target persons with the cosine distance of their feature vectors and performing inter-frame matching. With the target feature-vector directions of the previous and current frames denoted u and v respectively, the cosine distance in three-dimensional space is defined as:
cos(u, v) = (u · v) / (|u| |v|)
The cosine of the included angle expresses the similarity of the two feature vectors: the smaller the angle (the closer to 0 degrees), the closer the cosine is to 1, the more closely the directions match and the more similar the vectors, thereby associating the targets of the two frames.
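A minimal sketch of this association over the six-dimensional (R, G, B, x, y, z) feature vectors; the sample vectors and the similarity threshold are illustrative:

```python
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

prev = np.array([120, 80, 60, 2.1, 0.4, 1.7])   # target in frame k-1 (assumed)
curr = np.array([118, 83, 61, 2.2, 0.5, 1.7])   # candidate in frame k (assumed)
if cosine_similarity(prev, curr) > 0.98:        # illustrative threshold
    print("same target: associate across frames")
```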
S53, target position prediction and update. To obtain the optimal estimate with minimum error variance, the measurements of the target model are recursed step by step and the state estimate at each new moment is obtained in real time. Assume the target state equation and observation equation are, respectively:
X(k)=AX(k-1)+BW(k-1)
Y(k)=HX(k)+V(k)
where k is the current frame, k-1 the previous frame, X(k) the target state in the current frame, Y(k) the observation signal, A the state transition matrix, B the noise coupling matrix, and W(k) and V(k) the process noise and observation noise respectively. The current frame's state estimate is predicted from the estimate at the previous frame and then corrected to obtain the updated value for the current frame, so that the target position is predicted and updated and the tracking task is completed.
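A minimal constant-velocity Kalman filter sketch matching the state and observation equations above; the two-dimensional state and the noise covariances are illustrative assumptions rather than values from the patent:

```python
import numpy as np

dt = 0.1                                    # frame interval, assumed
A = np.array([[1, dt], [0, 1]])             # state transition (position, velocity)
H = np.array([[1, 0]])                      # observe position only
Q = np.eye(2) * 1e-3                        # process noise covariance (W)
Rn = np.array([[1e-2]])                     # observation noise covariance (V)

x = np.zeros((2, 1))                        # initial state estimate
P = np.eye(2)                               # initial error covariance
for z in measurements:                      # assumed per-frame position readings
    x = A @ x                               # predict from frame k-1
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)   # Kalman gain
    x = x + K * (z - (H @ x).item())        # update with the frame-k observation
    P = (np.eye(2) - K @ H) @ P
```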
S6, analyzing the position, state and trajectory of the target personnel, and controlling the opening and closing of the access control system accordingly.
After a target enters the monitoring area, the face recognition program starts. A library of workers' face feature vectors is built from image information collected offline; a vector is generated from the face features acquired in real time, the distance between the two vectors is computed against the library, and the similarity of the two faces is judged. If the similarity exceeds a threshold, e.g. 95%, the person is judged to be a worker. Face recognition runs continuously within the monitoring range, and the acquired information is attached to the trajectory generated in real time. Real-time positioning analysis uses the target's path trajectory: positioning is determined from the target 3D frame and three-dimensional distance information obtained by the neural network together with the time t, i.e. T_traj = f(x, y, z, t), where x, y, z are given by the center of the moving target's 3D frame, and the distance between the moving target and the door is computed in real time. If the distance between the target and the door falls below a set value, e.g. 2 m, the face matching program starts: if the person is judged to be a worker, the corresponding door is opened; otherwise the system indicates that the person is not enrolled and notifies the administrator.
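A minimal sketch of this gating logic; the door coordinates and callback functions are assumptions, while the 2 m trigger distance and 95% similarity threshold follow the text:

```python
import numpy as np

DOORS = {"door_1": np.array([4.0, 2.5, 0.0])}   # lidar-frame coords, assumed
TRIGGER_DIST = 2.0                               # meters, per the text
FACE_THRESH = 0.95                               # similarity threshold, per the text

def on_track_update(track_center, face_similarity, open_door, notify_admin):
    """track_center: 3D box center; face_similarity: callable returning the
    best match score against the worker library (both assumed interfaces)."""
    for door, pos in DOORS.items():
        if np.linalg.norm(track_center - pos) < TRIGGER_DIST:
            if face_similarity() > FACE_THRESH:  # matched a registered worker
                open_door(door)
            else:
                notify_admin(f"unregistered person at {door}")
```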
Example two:
This embodiment provides an access control system fusing cameras and a laser radar. As shown in fig. 2, it comprises multiple cameras and a laser radar, together with a controller and an access-control lock mechanism, the controller comprising:
the first parameter calibration unit is used for calibrating the internal and external parameters of each camera;
the second parameter calibration unit is used for calibrating external parameters between the camera and the laser radar;
the fusion labeling unit is used for fusing and labeling the image data of the camera and the point cloud data of the laser radar to obtain the conversion relation of points between the laser radar coordinate system and the two-dimensional camera plane coordinate system, and labeling the data with a labeling tool;
the target detection unit is used for detecting target personnel according to the fusion data;
the target tracking unit is used for tracking the target personnel;
and the analysis control unit is used for analyzing the position, the state and the track of the target person and correspondingly controlling the opening and closing of the access control system.
Each functional unit of the controller in this embodiment implements steps S1 to S6 of the first embodiment. After the cameras and the laser radar are calibrated, the image data and the point cloud data are fused and labeled to obtain the conversion relation between the lidar coordinate system and the camera coordinate system; target detection, tracking and analysis are then performed, and finally the corresponding access control is opened or closed.
In this embodiment the camera acquires RGB video, realizing panoramic visual imaging. Under the control of the controller, personnel identity information and behavior trajectories on the panoramic image are obtained through target recognition and target tracking. The cameras must cover the scene panoramically so that a person can be photographed wherever they move within the controlled access area. Photographing proceeds from acquiring the person's head image until the face information is recognized; through identity comparison, the information of everyone in the area is stored and their trajectories are tracked. The entire video of a person from entering the area to leaving it is recorded, and the person's motion trajectory is obtained.
The laser radar directly acquires the position of each controlled door: under the controller it acquires each door's position from the site conditions and fuses it deeply with the camera's video imaging data; meanwhile, the coordinates of each person's behavior trajectory are acquired and likewise deeply fused with the video imaging data, yielding the person's walking-track position information. The three-dimensional lidar system acquires each person's three-dimensional coordinates: the 3D lidar obtains positions relative to itself in spherical coordinates, which can be converted into Cartesian three-dimensional coordinates by computation. When a person's coordinates approach those of a door, the system is activated to judge whether that identity may enter, and the access control system then decides. When the person leaves the door's coordinate zone, the system issues a door-close command.
The controller decides whether to open a controlled door by matching the behavior-trajectory coordinates with the door's coordinates and combining them with the person's access authority. It performs intelligent recognition, obtains the person's identity information, verifies the identity, judges the person's entry and exit authority, analyzes the person's trajectory, and infers from the trajectory which entrance the person intends to enter; if the system allows entry, the door lock or automatic door is opened, otherwise unlocking is refused.
In summary, the invention provides an access control method combining a three-dimensional laser radar with cameras: personnel are effectively verified by camera-based face recognition, their behavior is tracked by the three-dimensional lidar, and their identity and position information are acquired. Identity is obtained from the identity information and the authority is verified; the door a person approaches is found from the position information; entry authority is then enforced, and the door lock opens and closes automatically.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A door control method based on the fusion of a camera and a laser radar is characterized by comprising the following steps:
S1, calibrating the internal and external parameters of each camera;
S2, calibrating the external parameters between the camera and the laser radar;
S3, fusing and labeling the image data of the camera and the point cloud data of the laser radar to obtain the conversion relation of points between the laser radar coordinate system and the two-dimensional camera plane coordinate system, and labeling the data with a labeling tool;
S4, detecting target personnel from the fused data;
S5, tracking the target personnel;
and S6, analyzing the position, state and trajectory of the target personnel, and controlling the opening and closing of the access control system accordingly.
2. The door access control method based on the fusion of the camera and the lidar as claimed in claim 1, wherein in step S1, the camera adopts a wide-angle camera, the internal reference matrix and the distortion coefficient of the camera are calibrated by using the checkerboard as a calibration board, and the external reference matrix of the adjacent camera is calculated to determine the relative position relationship of the camera.
3. The door access control method based on the fusion of the camera and the laser radar as claimed in claim 2, wherein the step S2 specifically comprises the following steps:
S21, completing coarse extraction of the calibration-board point cloud with a conditional point-cloud filtering algorithm: for a point p_i with coordinates (x_i, y_i, z_i) in the laser point cloud, if the coordinates satisfy x_i ∈ (X_1, X_2) ∧ y_i ∈ (Y_1, Y_2) ∧ z_i ∈ (Z_1, Z_2), the point is retained, otherwise it is culled, where X_1 and X_2 are thresholds set along the X axis of the laser coordinate system, Y_1 and Y_2 are thresholds set along the Y axis, and Z_1 and Z_2 are thresholds set along the Z axis;
s22, fitting a calibration plate plane and obtaining calibration plate point clouds, and filtering out non-calibration plate point clouds in the point clouds obtained in the step S21: repeatedly establishing a parameterized model from a calibration board point data set, calculating the deviation degree between the point cloud and the established model after each model establishment, calling points with the deviation degree smaller than a set threshold value as interior points, recording the number of the interior points obtained by fitting the model each time, and taking the parameterized model obtained when the number of the interior points is the maximum as an optimal model;
s23, point cloud and edge fitting: calculating to obtain edge points of the point cloud of the calibration plate according to the distribution characteristics of the point cloud, obtaining four edges of the calibration plate through edge point fitting, respectively processing the four edges of the calibration plate, and obtaining the intersection point of two adjacent straight lines in a three-dimensional space to obtain four vertexes of the calibration plate.
4. The door access control method based on the fusion of the camera and the lidar as claimed in claim 3, wherein the step S3 comprises the following specific steps:
S31, placing the calibration board in different poses, obtaining the 3D coordinates of the board's feature points in the laser point cloud, and obtaining the pose transformation relation between the lidar coordinate system and the camera coordinate system from the projection of those 3D coordinates onto the 2D image coordinates, realizing the fusion of laser point cloud data and image data;
and S32, labeling the image data and the laser point cloud data.
5. The door access control method based on the fusion of the camera and the laser radar as claimed in claim 4, wherein the specific process of step S4 is as follows: converting the three-dimensional laser point cloud data into bird's-eye-view form and, together with the image data, using it as input for target detection by a YOLO-v5-based detection network, obtaining image frames and laser frames and outputting target personnel information comprising the target's position information, ID and class number;
the detection of the laser frame and the image frame has the following three conditions:
case one: the image frame and the laser frame do not overlap, and what is currently processed is an image frame with no laser frame; if the image frame's confidence is greater than 0.5, a distant target is considered to exist, otherwise the detection is considered a camera false positive and is discarded;
case two: the image frame and the laser frame do not overlap, and a laser frame is currently processed; the laser frame is retained;
case three: the image frame and the laser frame overlap, which divides into the following two modes:
1) the number of image frames C_i intersecting the laser frame L_j equals 1: if L_j and C_i satisfy the distance similarity ΔT(L_j, C_i) > δ_ΔT, the more accurately positioned image frame C_i is retained;
2) the number of image frames C_i intersecting the laser frame L_j is greater than 1: the comprehensive similarity diff_ij of all laser-frame/image-frame pairs is computed and the image frame most similar to the laser frame is retained, where diff_ij is the distance-constrained comprehensive similarity, diff_ij = ⌊d_ij + 0.5⌋ + ΔT(L_j, C_i), d_ij is the intersection-over-union of the image frame and the laser frame, ⌊d_ij + 0.5⌋ denotes the largest integer not exceeding d_ij + 0.5, and ΔT(L_j, C_i) denotes the distance similarity between image frame C_i and laser frame L_j.
6. The door access control method based on the fusion of the camera and the laser radar as claimed in claim 5, wherein the step S5 specifically comprises the following steps:
the method comprises the steps of extracting appearance characteristic vectors of target personnel, describing similarity among the target personnel by cosine distances of the characteristic vectors to carry out interframe matching, then obtaining position correlation of a moving target, and realizing tracking of the target through a Kalman filtering algorithm.
7. The door access control method based on the fusion of the camera and the laser radar as claimed in claim 6, wherein the specific process of the step S6 is as follows:
establishing a library of workers' face feature vectors from the image information, generating a vector from the face features of the target person acquired in real time, calculating the distance between the two vectors against the image library, and judging the similarity of the two faces; if the similarity exceeds a similarity threshold, the current target person is judged to be a worker;
predicting and analyzing the path track of the target person in real time, positioning the relative position of the target person and the entrance guard, and starting a face matching program if the distance between the target person and the entrance guard is less than a set distance value;
and if the judgment result is that the staff is the staff, opening the corresponding access control, otherwise, displaying that no staff is input and informing the administrator.
CN202211361533.3A 2022-11-02 2022-11-02 Access control method based on fusion of camera and laser radar Pending CN115641670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211361533.3A CN115641670A (en) 2022-11-02 2022-11-02 Access control method based on fusion of camera and laser radar

Publications (1)

Publication Number Publication Date
CN115641670A true CN115641670A (en) 2023-01-24

Family

ID=84946249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211361533.3A Pending CN115641670A (en) 2022-11-02 2022-11-02 Access control method based on fusion of camera and laser radar

Country Status (1)

Country Link
CN (1) CN115641670A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination