CN112581519B - Method and device for identifying and positioning radioactive waste bag - Google Patents


Info

Publication number: CN112581519B
Application number: CN202011542885.XA
Authority: CN (China)
Prior art keywords: shielding container, point cloud, pose, positioning, loading
Legal status: Active (granted)
Other versions: CN112581519A (application publication)
Other languages: Chinese (zh)
Inventors: 熊会元 (Xiong Huiyuan), 刘建勋 (Liu Jianxun), 刘羽 (Liu Yu), 李同同 (Li Tongtong)
Assignees: China Nuclear Power Engineering Co Ltd; Sun Yat Sen University
Application filed by China Nuclear Power Engineering Co Ltd and Sun Yat Sen University

Classifications

    • G06T7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06F18/2321 — Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10028 — Range image; Depth image; 3D point clouds (image acquisition modality)
    • G06T2207/20024 — Filtering details (special algorithmic details)

Abstract

The invention discloses a method and a device for identifying and positioning radioactive waste bags. The method comprises: manufacturing a waste bag point cloud template and a shielding container point cloud template; identifying the pose of the shielding container and the pose of the waste bag with a two-stage matching method based on environment-constrained feature vectors; and loading or unloading the waste bag according to the identification and positioning result. The invention identifies and accurately positions the pose and internal state of the shielding container and accurately identifies and positions the cargo package inside the shielded outer box, solving a problem that traditional techniques, being sensitive to ambient illumination, occlusion and the like, cannot: identifying and positioning a cargo package inside a shielded outer box.

Description

Method and device for identifying and positioning radioactive waste bag
Technical Field
The invention relates to the technical field of identification and positioning of goods, in particular to a method and a device for identifying and positioning a radioactive waste bag.
Background
Cargo-package identification and positioning is the most basic and critical technology in unmanned loading and unloading scenarios; the stability and accuracy of identification and positioning directly determine the reliability of the whole loading and unloading system. To complete an accurate loading and unloading process, the transfer container on the transfer trolley and the waste bag inside the container must each be identified and positioned.
For the scenario of a cargo package inside an outer box, no effective identification and positioning method currently exists. Traditional cargo-package positioning and identification methods follow three main approaches: positioning by position sensors, positioning by perception sensors, and a combination of the two. Position-sensor methods include positioning based on GPS/GNSS (global positioning system/global navigation satellite system), on distributed tracks or position sensors, on RFID (radio frequency identification) tags, and the like. Perception-sensor methods include positioning based on monocular vision, binocular vision, one or more two-dimensional laser radars, millimeter-wave radars, and the like. Combined methods include fusion of images with GPS, fusion of laser radar with position sensors, and the like. For example, Chinese patent publication No. CN107507167A (published 2017.12.22), a cargo tray detection method and system based on point-cloud plane contour matching, relies on monocular vision positioning: projection distortion easily arises when the object is not at the center of the field of view, a good identification and positioning effect is obtained only when the camera views the object head-on, the identification range is small and flexibility is low, and the method is sensitive to occlusion, making objects inside a container hard to identify and detect.
Among current mainstream methods, GPS-based methods suffer from poor precision and environmental influence: a GPS sensor works only outdoors, its precision is at metre level, and the error is large. Methods based on tracks or RFID sensors require cumbersome equipment layout and difficult calibration, need a short detection distance, and can hardly detect objects placed inside a container. Methods relying on a monocular vision sensor are sensitive to depth and observation angle and usually need a laser radar or millimeter-wave radar to assist in detecting depth information; projection distortion easily arises when the object is not at the center of the field of view, a good identification and positioning effect is obtained only when the camera views the object head-on, the identification range is small and flexibility is low, and sensitivity to occlusion makes objects inside a container hard to identify and detect. Methods relying on binocular vision are sensitive to features: binocular vision needs feature matching and obtains accurate three-dimensional coordinates only where features are distinct, so it is unsuitable for identifying large planar objects inside a container; binocular vision is also sensitive to occlusion, making identification and positioning of objects inside a container difficult. Among sensor-fusion schemes, current positioning methods mostly fuse a vision sensor with a positioning sensor, such as image with millimeter-wave radar or image with position sensor, but these too suffer from cumbersome equipment arrangement and calibration, calibration errors in data fusion, a narrow identification range, and difficulty coping with occlusion and sparse features of the identified object.
Disclosure of Invention
The invention provides a radioactive waste bag identification and positioning method and device that identify and accurately position the pose and internal state of a shielding container and accurately identify and position the cargo package inside the shielded outer box, solving a problem that traditional techniques, being sensitive to ambient illumination, occlusion and the like, cannot: identifying and positioning a cargo package inside the box.
The technical scheme of the invention is as follows:
A radioactive waste package identification and positioning method comprises: manufacturing a waste bag point cloud template and a shielding container point cloud template; identifying the pose of the shielding container and the pose of the waste bag with a two-stage matching algorithm; and loading or unloading the waste bag according to the identification and positioning result. The specific steps are as follows:
s1, collecting scene point clouds through a laser scanning sensor, extracting calibrated point clouds in an effective area of the lane, performing noise reduction and filtering to remove disordered random outliers, and extracting ground point clouds;
s2, removing ground point clouds, clustering object point clouds in the scene to obtain each object point cloud, and screening to obtain clustered vehicle point clouds and shielding container point clouds;
s3, performing two-stage pose matching on the acquired point cloud of the shielding container by applying the point cloud template of the shielding container, identifying the pose of the shielding container, if the pose is normal, guiding the loading and unloading device to unload the top cover of the shielding container according to the pose, and if the pose is abnormal, re-identifying the pose of the shielding container;
s4, analyzing the point cloud in the shielding container according to the pose of the shielding container obtained by identification, setting a minimum threshold value and a maximum threshold value of the vertical density of the point cloud, and detecting whether the waste bag is loaded in the shielding container and whether the internal state of the shielding container is normal;
s41, if the point cloud vertical density is larger than the maximum threshold, the shielding container holds a waste bag, and unloading is carried out; the unloading process is as follows:
s411, extracting a waste bag point cloud in the shielding container and extracting a boundary;
s412, performing two-stage pose matching on the extracted waste bag point cloud by using a waste bag point cloud template to obtain the pose of the waste bag and transmit the pose to an unloading system, and sending an operation instruction to a loading and unloading device by the unloading system to grab the waste bag for unloading;
s413, monitoring the pose of the waste bag in real time in the unloading process and feeding back the pose to an unloading system;
s42, if the point cloud vertical density is smaller than the minimum threshold, the shielding container is empty, and loading is carried out; the loading process is as follows:
s421, directly transmitting the internal space pose of the shielding container to a loading system, and sending an operation instruction to a loading and unloading device by the loading system to grab a waste bag for loading;
s422, detecting the pose of the waste bag after loading is finished and feeding back the pose to a loading system;
S43, if a foreign object exists in the shielding container, this is reported to the central control system, and the pose of the shielding container is identified again after the central control system checks and eliminates the abnormality.
In the application scenario of the invention, identification and positioning must support loading and unloading of a cargo package inside an outer box. The handled object, a radioactive waste bag, is a cuboid with clamping holes at its corners. The waste bag is placed in a purpose-built transport shielding container; its position inside the shielding container is not fixed but keeps a certain clearance from the inner wall, and the shielding container is fixed on a transport vehicle. When loading the cargo package, the transport vehicle is positioned below the loading and unloading device; the pose of the shielding container is identified by applying the shielding container point cloud template; whether that pose is normal is checked; the top cover of the shielding container is then removed; whether the inner space of the container is normal (free of foreign objects) is checked; the waste bag is then loaded into the shielding container; and finally whether the pose of the loaded waste bag is normal is checked. When unloading the cargo package, the transport vehicle is positioned below the unloading device; whether the pose of the shielding container is normal is checked; the top cover of the shielding container is removed; the pose of the waste bag inside the shielding container is then identified and positioned using the waste bag point cloud template, guiding the unloading device to unload the waste bag.
Further, the method for manufacturing the waste bag point cloud template comprises the following steps:
acquiring point clouds, either from a 1:1 three-dimensional model or by scanning with a laser radar and splicing the scans; because the laser-scanned point cloud of a waste bag is partially occluded in practice, only the top surface can be used to obtain an accurate pose, so only the top-surface point cloud of the waste bag is acquired;
aligning the collected top-surface point cloud to the origin and sampling it uniformly, which simplifies the subsequent solution of the accurate three-dimensional pose and avoids interference from uneven point cloud density; because the geometric features of the waste bag top are concentrated at its edges and the loading and unloading device grips four square holes, the boundary point cloud is extracted, preserving the geometric features of the waste bag to the greatest extent while eliminating redundant feature points.
Further, the manufacturing of the shielding container point cloud template comprises the following processes:
acquiring point clouds, either from a 1:1 three-dimensional model or by scanning with a laser radar and splicing the scans; because the laser-scanned point cloud of the shielding container is partially occluded in practice, only the inner and outer side walls and the top of the shielding container can be scanned, so only those point clouds are acquired;
aligning the collected point cloud to the origin and sampling it uniformly, so that the accurate three-dimensional pose can be solved subsequently without interference from uneven point cloud density.
Further, in step S1, the ground point cloud is extracted by a random sampling consistency algorithm.
Further, in step S1, the laser scanning sensor is mounted on a loading and unloading device several meters above the ground, so that the sensing range covers the loading and unloading area of the waste bag, allowing the whole scene point cloud to be collected.
Further, in step S2, each object point cloud is obtained by a Euclidean clustering or DBSCAN clustering method, and the vehicle point cloud and the shielding container point cloud to be matched are screened out according to cluster size or a computed feature distribution histogram.
Further, the two-stage matching algorithm is as follows: the pose is obtained by a two-step matching method of coarse registration using environment-constrained feature vectors followed by ICP fine registration.
Further, in step S3, when the pose of the shielding container is identified as abnormal, the pose is reported to the central control system, and the pose of the shielding container is identified again after the central control system checks and eliminates the abnormality. The central control system is a background control end used for unified scheduling control.
Further, in step S3, the shielding container point cloud template and the shielding container point cloud are downsampled to make the point cloud densities uniform.
The invention also provides a radioactive waste bag identification and positioning device, which applies the above radioactive waste bag identification and positioning method.
The invention has the beneficial effects that:
Firstly, identification and positioning precision is high. On the one hand, compared with a camera, a laser scanning sensor (laser radar) based on the TOF principle has higher spatial measurement accuracy; point cloud data are invariant under spatial rotation, and positioning accuracy from the spatial geometric features of a point cloud far exceeds that obtainable from an image. On the other hand, a single sensor has an inherent advantage in measurement precision over a multi-sensor fusion system: more accurate raw data are obtained while data calibration is avoided, yielding a more accurate three-dimensional pose.
Secondly, adaptability to objects and environment is good. The method is unaffected by lighting, material and the like, with a wide identification range and strong robustness. Whereas a two-dimensional vision sensor such as a camera achieves a good identification and positioning effect only at a head-on viewing angle, the spatial rotation invariance of point clouds lets the laser scanning sensor work normally anywhere in its field of view as long as there is no obvious occlusion; the identification range is greatly enlarged, improving the flexibility and robustness of unmanned operation.
Thirdly, a waste bag at a non-fixed position inside the shielding container can be identified. Because the waste bag and the shielding container are similar in appearance but spatially separated, spatial segmentation and clustering of point cloud data has an innate advantage over other methods; unloading and loading working conditions are distinguished adaptively, and the container's pose (whether the shielding container is tilted and thus unfit for loading) and internal state (whether foreign objects are present) are detected and judged accurately.
Fourthly, identification and positioning of the waste bag is simple. In hardware, the transmission units otherwise required for computation are dispensed with, and a single laser scanning pan-tilt unit serves as the only sensor; compared with conventional cargo-package identification devices, equipment selection and procurement, installation and calibration, and maintenance and overhaul are greatly simplified, and calibration errors between sensors are reduced.
Drawings
FIG. 1 is a schematic flow chart of the method for identifying and positioning the radioactive waste bag according to the present invention;
FIG. 2 is a schematic view of the identification of the waste bag during unloading;
FIG. 3 is a schematic view of the identification of the waste bag during loading;
in the figures: laser scanning sensor 1, handling device 2, transport vehicle 3, shielding container 4, waste bag 5.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent.
Example 1:
As shown in fig. 1, a radioactive waste bag identification and positioning method comprises: manufacturing a waste bag point cloud template and a shielding container point cloud template; identifying the pose of the shielding container and the pose of the waste bag with a two-stage matching algorithm; and loading or unloading the waste bag according to the identification and positioning result. The specific steps are as follows:
s1, collecting scene point clouds through a laser scanning sensor, extracting calibrated point clouds in an effective area of the lane, performing noise reduction and filtering to remove disordered random outliers, and extracting ground point clouds;
s2, removing ground point clouds, clustering object point clouds in the scene to obtain each object point cloud, and screening to obtain clustered vehicle point clouds and shielding container point clouds;
s3, performing two-stage pose matching on the acquired point cloud of the shielding container by applying the point cloud template of the shielding container, identifying the pose of the shielding container, if the pose is normal, guiding the loading and unloading device to unload the top cover of the shielding container according to the pose, and if the pose is abnormal, re-identifying the pose of the shielding container;
s4, analyzing the point cloud in the shielding container according to the pose of the shielding container obtained by identification, setting a minimum threshold value and a maximum threshold value of the vertical density of the point cloud, and detecting whether the waste bag is loaded in the shielding container and whether the internal state of the shielding container is normal;
s41, if the point cloud vertical density is larger than the maximum threshold, the shielding container holds a waste bag, and unloading is carried out; the unloading process is as follows:
s411, extracting a waste bag point cloud in the shielding container and extracting a boundary;
s412, performing two-stage pose matching on the extracted waste bag point cloud by using a waste bag point cloud template to obtain the pose of the waste bag and transmit the pose to an unloading system, and sending an operation instruction to a loading and unloading device by the unloading system to grab the waste bag for unloading;
s413, monitoring the pose of the waste bag in real time in the unloading process and feeding back the pose to an unloading system;
s42, if the point cloud vertical density is smaller than the minimum threshold, the shielding container is empty, and loading is carried out; the loading process is as follows:
s421, directly transmitting the internal space pose of the shielding container to a loading system, and sending an operation instruction to a loading and unloading device by the loading system to grab a waste bag for loading;
s422, detecting the pose of the waste bag after loading is finished and feeding back the pose to a loading system;
S43, if a foreign object exists in the shielding container, this is reported to the central control system, and the pose of the shielding container is identified again after the central control system checks and eliminates the abnormality.
In this embodiment, the process of making the waste bag point cloud template comprises the following steps:
acquiring point clouds, either from a 1:1 three-dimensional model or by scanning with a laser radar and splicing the scans; because the laser-scanned point cloud of a waste bag is partially occluded in practice, only the top surface can be used to obtain an accurate pose, so only the top-surface point cloud of the waste bag is acquired;
aligning the collected top-surface point cloud to the origin and sampling it uniformly, which simplifies the subsequent solution of the accurate three-dimensional pose and avoids interference from uneven point cloud density; because the geometric features of the waste bag top are concentrated at its edges and the loading and unloading device grips four square holes, the boundary point cloud is extracted, preserving the geometric features of the waste bag to the greatest extent while eliminating redundant feature points.
In this embodiment, the process of making the shielding container point cloud template comprises the following steps:
acquiring point clouds, either from a 1:1 three-dimensional model or by scanning with a laser radar and splicing the scans; because the laser-scanned point cloud of the shielding container is partially occluded in practice, only the inner and outer side walls and the top of the shielding container can be scanned, so only those point clouds are acquired;
aligning the collected point cloud to the origin and sampling it uniformly, so that the accurate three-dimensional pose can be solved subsequently without interference from uneven point cloud density.
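The align-to-origin and uniform-sampling steps above can be sketched in a few lines. This is a minimal NumPy sketch, not the patent's implementation: voxel-grid averaging is assumed as the uniform sampler, and the function names are illustrative.

```python
import numpy as np

def align_to_origin(points):
    """Translate a point cloud so its centroid sits at the coordinate origin."""
    return points - points.mean(axis=0)

def voxel_downsample(points, voxel=0.05):
    """Resample uniformly by keeping one centroid per occupied voxel,
    which evens out the density variations left by scanning."""
    keys = np.floor(points / voxel).astype(np.int64)
    # group points by voxel index and average each group
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out
```

Axis alignment (length, width, height onto x, y, z as described later in the embodiment) would be applied on top of this, e.g. via a principal-axis rotation.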
In step S1 of this embodiment, the laser scanning sensor is mounted on a loading and unloading device several meters above the ground, so that the sensing range covers the loading and unloading area of the waste bag, allowing the whole scene point cloud to be collected; the ground point cloud is extracted by a random sample consensus (RANSAC) algorithm.
In step S2 of this embodiment, each object point cloud is obtained by a Euclidean clustering method or a DBSCAN clustering method; because the transport vehicle and the shielding container differ markedly in volume from other scene objects, the vehicle point cloud and the shielding container point cloud to be matched can be quickly screened out according to cluster size or a computed feature distribution histogram.
In step S3 of this embodiment, the shielding container point cloud template and the shielding container point cloud are downsampled to make the point cloud densities uniform; and when the pose of the shielding container is identified abnormally, reporting to a central control system, and re-identifying the pose of the shielding container after the central control system checks and eliminates the abnormality, wherein the central control system is a background control end for unified scheduling control.
In this embodiment, the two-stage matching algorithm means: the pose is obtained by a two-step matching method of coarse registration using environment-constrained feature vectors followed by ICP fine registration.
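The ICP fine-registration stage of this two-step matching can be sketched as a generic point-to-point ICP. The sketch below (NumPy, brute-force nearest neighbours, Kabsch/SVD pose update) assumes the coarse registration, whose environment-constrained feature vectors the patent does not detail, has already provided a rough alignment.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30, tol=1e-7):
    """Point-to-point ICP: alternate nearest-neighbour matching and
    rigid-transform estimation until the residual stops improving."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t   # compose transforms
        err = np.sqrt(d2[np.arange(len(cur)), nn].mean())
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

The returned (R_total, t_total) is the refined pose of the template relative to the scanned cloud.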
In this embodiment, the handling device refers to a device having a gripping function and a shifting function.
In step S4 of this embodiment, the minimum threshold and the maximum threshold of the point cloud vertical density are set in advance, according to the density of a sample point cloud when the shielding container is empty and when a waste bag is loaded in the shielding container, respectively.
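As a sketch of how such pre-set thresholds might be applied, the snippet below classifies the container interior from a "points per metre of height" density. The density measure and function name are illustrative assumptions; the embodiment only states that the two thresholds come from sample scans of the empty and loaded container.

```python
import numpy as np

def container_state(inner_points, z_floor, min_thr, max_thr):
    """Classify the shielding container interior from the vertical density
    of points above the container floor (points per metre of height).
    Thresholds are assumed pre-calibrated from sample scans (step S4)."""
    heights = inner_points[:, 2] - z_floor
    heights = heights[heights > 0.0]
    if heights.size == 0:
        return "empty"
    density = heights.size / heights.max()
    if density > max_thr:
        return "loaded"     # waste bag present -> unload (s41)
    if density < min_thr:
        return "empty"      # -> load (s42)
    return "abnormal"       # foreign object -> report to central control (S43)
```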
This embodiment further provides a radioactive waste bag identification and positioning device, which applies the above radioactive waste bag identification and positioning method.
For further illustration, the application scenario of identification and positioning is described; the unloading process is shown in FIG. 2 and the loading process in FIG. 3. The laser scanning sensor 1 is mounted on the loading and unloading device 2 several meters above the ground, so that the sensing range covers the waste bag loading and unloading area; the arrangement is simple and maintenance is convenient.
During identification and positioning, poses are converted between coordinate systems. The whole process involves three coordinate systems: the world coordinate system W, the laser scanning sensor coordinate system L, and the point cloud template coordinate system P. Because the point cloud is aligned to the coordinate origin when the point cloud template is made, the computation is simplified: the transformation matrix M_LP from the laser scanning sensor coordinate system L to the point cloud template coordinate system P is an identity matrix. The transformation matrix M_LW from the laser scanning sensor coordinate system L to the world coordinate system W is obtained by jointly calibrating the external parameters of the laser scanning sensor and the ground parameters of the scanned point cloud. Assuming the identification and positioning process yields position information (x, y, z), its position (x′, y′, z′) in the world coordinate system can be expressed as:
(x′, y′, z′, 1)ᵀ = M_LW · (x, y, z, 1)ᵀ
so that the calculated pose is transformed into the world coordinate system in a unified way.
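Applying M_LW is then a single homogeneous multiply. In the sketch below the example matrix, with the sensor axes parallel to the world axes and mounted 9.5 m above the ground as in the embodiment, is illustrative.

```python
import numpy as np

def to_world(M_LW, xyz):
    """Map a position from the laser sensor frame L into the world frame W
    via the 4x4 homogeneous transform: (x',y',z',1)^T = M_LW (x,y,z,1)^T."""
    p = np.append(np.asarray(xyz, dtype=float), 1.0)
    return (M_LW @ p)[:3]

# Illustrative M_LW: no rotation, sensor origin 9.5 m above the world ground plane.
M_LW = np.eye(4)
M_LW[2, 3] = 9.5
```

For example, to_world(M_LW, (1.0, 2.0, -9.5)) maps a point 9.5 m below the sensor to ground level, (1.0, 2.0, 0.0), in world coordinates.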
The test was carried out in an open simulation environment; the specific layout is shown in fig. 2. The transport vehicle 3 measures 10.5 m × 2.7 m × 3.3 m (clear height 1.6 m). The shielding container 4 measures 4.2 m × 2.2 m × 1.7 m with a wall thickness of 20 mm and sits at the center of the vehicle top. The waste bag 5 measures 3.5 m × 2 m × 1.5 m and has clamping holes at its four corners; the hole arc diameter is 124 mm and the width 64 mm. The laser scanning sensor 1 has a field of view of 45° × 90° and an angular resolution of 0.3°, and is mounted on the handling device 2 at 9.5 m above the ground. The gripping mechanism on the handling device 2 that grips the waste bag 5 is not shown.
When the point cloud templates are made, each template is the union of the point clouds of the recognized object under all conditions. In practice an equal-scale three-dimensional model is uniformly sampled, point clouds of areas that cannot be scanned are removed, and the point clouds of the waste bag 5 and the shielding container 4 are each aligned to the coordinate origin: the point cloud center is aligned to the origin and the length, width and height directions to the x, y and z axes respectively, completing the templates. For the waste bag point cloud template, the point cloud is collected from the 1:1 three-dimensional model, the top-surface point cloud in the visible area is intercepted, the boundary point cloud is extracted to obtain the template, and the point cloud is aligned to the coordinate origin. For the shielding container point cloud template, the point cloud is sampled from the 1:1 three-dimensional model, invisible point cloud data are removed, and the remaining data, comprising part of the body of transport vehicle 3 and part of shielding container 4, are aligned to the coordinate origin.
Establish a full-scale simulation scene: place the waste bag 5 at a random position inside the shielding container 4, place the transport vehicle 3 anywhere in the working area, and acquire a scene point cloud with the laser scanning sensor 1. For example, with the top cover removed in the positional relationship shown in fig. 2, the scene point cloud at that moment is obtained. Denote the set of points by C:
C={P1,P2,…,PN}
where Pi (i = 1, 2, …, N) is a point in the cloud with three coordinate values x, y, and z, and N is the number of points in the cloud.
Extract the valid lane-area data according to pre-calibrated boundaries and delete the invalid point cloud data. Then extract the (ground) plane from the collected cloud with the RANSAC method: randomly select some points in C and fit a best-fit plane by least squares; compute the distance from each point in C to this plane and count a point as an inlier if its distance is below the 10 cm threshold; if the inlier ratio is too small, refit the plane from a new point set; once the inlier ratio exceeds the threshold, stop iterating. This yields the plane (ground) point cloud and the plane normal vector (XP, YP, ZP). Removing the plane points leaves the out-of-plane object point cloud C1:
C1={P1,P2,…,PN′}
where N' is the number of out-of-plane object point clouds.
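The ground-extraction step above can be sketched in Python with NumPy. Only the 10 cm inlier threshold comes from the text; the function name, the 80% inlier-ratio stop condition, the iteration count, and the SVD refit over the inliers are illustrative choices:

```python
import numpy as np

def ransac_ground_plane(C, dist_thresh=0.10, inlier_ratio=0.8, iters=100, seed=0):
    """Extract the dominant (ground) plane from an Nx3 cloud with RANSAC.
    Returns (normal, d, inliers) with normal . p + d = 0 on the plane."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(C), dtype=bool)
    for _ in range(iters):
        # Sample 3 points and fit a candidate plane through them.
        p0, p1, p2 = C[rng.choice(len(C), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:        # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        mask = np.abs((C - p0) @ n) < dist_thresh
        if mask.sum() > best.sum():
            best = mask
        if best.mean() >= inlier_ratio:     # enough inliers: stop iterating
            break
    # Least-squares refit over the inliers: the plane normal is the singular
    # vector of the centered inlier matrix with the smallest singular value.
    P = C[best]
    centroid = P.mean(axis=0)
    n = np.linalg.svd(P - centroid)[2][-1]
    d = -n @ centroid
    inliers = np.abs(C @ n + d) < dist_thresh
    return n, d, inliers
```

The returned `inliers` mask is the ground cloud; its complement is the out-of-plane object cloud C1.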
Cluster the out-of-plane object point cloud C1 by Euclidean clustering. Build an octree on C1, randomly pick a point Pi, and search for its nearest neighbor Pi0. If the Euclidean distance between Pi and Pi0 is below a threshold, keep Pi0 and continue the search from Pi0. Repeat until the distance between two points exceeds the threshold, which closes one cluster; then pick another point at random from the remaining ones and repeat the procedure until all clusters are obtained.
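The Euclidean clustering loop described above can be sketched as follows. This illustrative version uses a brute-force radius search in place of the octree mentioned in the text (an octree or k-d tree would be used at scale); the function name and the `min_size` noise filter are assumptions:

```python
import numpy as np

def euclidean_cluster(points, tol=0.3, min_size=10):
    """Greedy Euclidean clustering: grow each cluster by repeatedly pulling
    in every unvisited point within `tol` of a point already in it."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            # Brute-force radius search (an octree/k-d tree scales better).
            dist = np.linalg.norm(points - points[idx], axis=1)
            near = [j for j in np.nonzero(dist < tol)[0] if j in unvisited]
            for j in near:
                unvisited.remove(j)
            frontier.extend(near)
            cluster.extend(near)
        if len(cluster) >= min_size:        # drop tiny clusters as noise
            clusters.append(np.array(sorted(cluster)))
    return clusters
```

Each returned array holds the point indices of one cluster, from which cluster size is read off for the screening step that follows.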
Screen out the point cloud of the transport vehicle 3 and the point cloud of the shielding container 4 to be matched according to cluster size, and down-sample both the shielding container point cloud and the shielding container point cloud template so that their densities match. Down-sampling uses voxel filtering: choose a voxel size; the data matrix of the points within each voxel is
M = [ x1 x2 … xn ; y1 y2 … yn ; z1 z2 … zn ]
where xi, yi, zi (i = 1, 2, …, n) are the x, y, z coordinate values of point i and n is the number of points in the voxel.
The coordinates of the points after voxel filtering processing are:
(x̄, ȳ, z̄) = ( (1/n)·Σ xi, (1/n)·Σ yi, (1/n)·Σ zi ), the sums running over i = 1, …, n
Assume the down-sampled cloud is C2; its data matrix MC2 is:
MC2 = [ x1 x2 … xm ; y1 y2 … ym ; z1 z2 … zm ; 1 1 … 1 ]
where xi, yi, zi (i = 1, 2, …, m) are the x, y, z coordinate values of point i, m is the point cloud size, and the last row of ones is the scaling factor of the homogeneous coordinates.
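The voxel filter above (bucket the points into voxels, replace each bucket by the centroid formula just given) can be sketched as:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Voxel filter: bucket points into cubic voxels of edge `voxel` and
    replace each bucket by its centroid, per the averaging formula above."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group by voxel index; `inverse` maps each point to its voxel's row.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)           # guard against (n, 1) shapes
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)        # per-voxel coordinate sums
    return sums / counts[:, None]           # centroid of each voxel
```

Running the same filter on both the scene cluster and the template with the same voxel size equalizes their densities, as the text requires.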
The center of the shielding container point cloud C2 is calculated:
(CX, CY, CZ) = ( (xmin + xmax)/2, (ymin + ymax)/2, (zmin + zmax)/2 )
where xmin and xmax are the minimum and maximum of xi (i = 1, 2, …, m) in the matrix MC2, ymin and ymax are the minimum and maximum of yi, and zmin and zmax are the minimum and maximum of zi.
Environment-constrained principal-vector coarse registration: take the eigenvector corresponding to the largest eigenvalue of the covariance matrix of C2 as the first principal vector; since the vehicle normally stands on the road surface, take the ground normal (XP, YP, ZP) as the second principal vector; obtain the third principal vector as the cross product of the first two and normalize it. Align the center (0, 0, 0) of the shielding container point cloud template to the point cloud center, and align the x, y, and z axes to the first, second, and third principal vector directions, respectively. This completes the preliminary registration; the registered point cloud data matrix MC3 is:
MC3 = MT1 · MT, MT1 = [ MTR1 TTR1 ; 0 0 0 1 ], MT being the homogeneous data matrix of the template cloud
TTR1 = −MTR1 · [CX, CY, CZ]^T
where MT1 is the rigid-body transformation matrix of the first registration, MTR1 is its rotation matrix whose columns are respectively the three principal vectors obtained above, and TTR1 is the translation transformation.
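A minimal sketch of the coarse-registration step: the first principal vector comes from the largest covariance eigenvalue, the second is the ground normal, the third is their cross product, and the translation is the bounding-box center (CX, CY, CZ). The function name is an assumption, and the returned matrix here maps the origin-centered template into the scene; the sign convention depends on which cloud is taken as moving, so it may differ from the patent's TTR1 formula:

```python
import numpy as np

def coarse_registration(cluster_pts, ground_normal):
    """Environment-constrained principal-vector coarse registration.
    Axis 1: covariance eigenvector with the largest eigenvalue;
    axis 2: the ground normal (XP, YP, ZP); axis 3: their cross product.
    Rotation columns are the principal vectors; translation is the
    bounding-box center of the scene cluster."""
    centered = cluster_pts - cluster_pts.mean(axis=0)
    cov = centered.T @ centered / len(cluster_pts)
    w, v = np.linalg.eigh(cov)
    a1 = v[:, np.argmax(w)]                      # first principal vector
    a2 = np.asarray(ground_normal, dtype=float)
    a2 /= np.linalg.norm(a2)                     # second: ground normal
    a1 = a1 - (a1 @ a2) * a2                     # enforce orthogonality
    a1 /= np.linalg.norm(a1)
    a3 = np.cross(a1, a2)                        # third: cross product
    T = np.eye(4)
    T[:3, :3] = np.column_stack([a1, a2, a3])    # columns = principal vectors
    T[:3, 3] = (cluster_pts.min(axis=0) + cluster_pts.max(axis=0)) / 2
    return T
```

Because a1 is orthogonalized against a2 before the cross product, the three columns form a proper rotation (determinant +1), which the ICP stage can then refine.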
ICP fine registration: perform a second, fine registration with the point cloud of the shielding container 4 as the target and the coarsely registered shielding container template as the initial position. In each ICP iteration, for the initial cloud Q = {q1, q2, …, ql} (with data matrix MC3) and the target cloud P = {p1, p2, …, pl} formed by the corresponding point pairs, minimize the root mean square error E:
E = sqrt( (1/l) · Σ_{i=1..l} ‖ pi − (R·qi + t) ‖² )
where R is the rotation matrix, t is the translation matrix, pi and qi are corresponding points, and l is the point cloud size. Each iteration finds the R and t that minimize E, and the computation repeats until convergence, completing the second, fine registration. The rigid-body transformation matrix MT2 of the whole process is computed and applied to the template cloud to obtain the precise spatial pose of the shielding container 4. The point cloud data matrix after the second registration is
MC4 = MT2 · MC3
The spatial pose R of the shielding container is as follows:
R = (XL, YL, ZL, RX, RY, RZ)
where (XL, YL, ZL) is the position information and (RX, RY, RZ) is the attitude information.
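The ICP loop that minimizes E can be sketched as below. Each iteration matches points by nearest neighbor and solves the best rigid transform in closed form via SVD (the Kabsch solution); the brute-force neighbor search, iteration cap, and convergence tolerance are illustrative choices, not taken from the patent:

```python
import numpy as np

def icp(source, target, iters=50, tol=1e-8):
    """Point-to-point ICP. Per iteration: match every source point to its
    nearest target point, then solve the best rigid (R, t) for the matched
    pairs in closed form, driving the mean squared error down until it
    stops improving. Returns the accumulated (R, t) and final error."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = err = np.inf
    for _ in range(iters):
        # Nearest-neighbor correspondences (brute force, fine for a sketch).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        err = (d.min(axis=1) ** 2).mean()
        if prev_err - err < tol:            # converged: E no longer improves
            break
        prev_err = err
        # Closed-form rigid transform between the matched point sets.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T                  # proper rotation (det = +1)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, err
```

The accumulated (R_total, t_total) plays the role of MT2 above: applied after the coarse transform, it carries the template onto the scene cloud and yields the container pose.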
Determine the internal space constraint G of the shielding container 4 from its dimensions; the actual spatial position constraint of the shielding container 4 is then G' = MT2 · MT1 · G. Extract the interior point cloud of the shielding container 4 according to G', and project it, with the center as origin, onto the ground normal direction (XP, YP, ZP), computing the distribution set D = {d1, d2, …, dn}, each element of which is:
di = xi·XP + yi·YP + zi·ZP
where (xi, yi, zi) are the coordinates of a point of the interior point cloud. Analyzing the distribution of values in D tells whether the shielding container holds a waste bag or foreign matter. In this scene most elements of D fall within the interval (1300, 1600) (the maximum threshold being set to 1300), so the container is judged to hold a waste bag and no foreign matter.
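The loading check can be sketched as below. The projection di = xi·XP + yi·YP + zi·ZP and the 100/1300 thresholds (in mm) follow the text, while the function name, the string labels, and the 90% majority rule are assumptions:

```python
import numpy as np

def container_load_state(interior_pts, normal, min_thresh=100.0, max_thresh=1300.0):
    """Classify the shielding container interior from the heights of its
    interior points along the ground normal. The three-way outcome mirrors
    steps S41-S43: loaded -> unload, empty -> load, else report foreign
    matter to the central control system."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    d = interior_pts @ n                   # projection heights d_i
    if np.mean(d > max_thresh) > 0.9:
        return "loaded"                    # waste bag present: unload
    if np.mean(np.abs(d) < min_thresh) > 0.9:
        return "empty"                     # container empty: load
    return "foreign-matter"                # report to central control
```

In the scene above, heights clustered in (1300, 1600) yield "loaded"; in the loading scene of fig. 3, heights within (−100, 100) yield "empty".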
Referring to fig. 2, in the unloading case the interior point cloud of the shielding container 4 is registered in two stages following the same idea as the positioning of the shielding container 4: take the top-surface point cloud of the waste bag 5 inside the shielding container 4 as the target position, take the waste bag point cloud template as the initial position, and match them to obtain the rigid transformation matrices MT1′ and MT2′. The spatial pose of the waste bag 5 is then:
R′ = (XL′, YL′, ZL′, RX′, RY′, RZ′)
where (XL′, YL′, ZL′) is the position information and (RX′, RY′, RZ′) is the attitude information. The pose is monitored in real time during unloading.
Actual test results: the computed pose recognition error for the waste bag 5 and the shielding container 4 is about 10 mm. The shielding container 4 is identified and positioned reliably anywhere in the visible range of the laser scanning sensor 1, and the waste bag 5 inside the shielding container 4 is identified over a large area around the sensor.
In the loading case, with the scene arranged as shown in fig. 3, the identification and positioning of the shielding container 4 and the preceding steps are the same as above. The difference is that when the interior point cloud of the shielding container 4 is projected, with the center as origin, onto the ground normal (XP, YP, ZP), most elements of the distribution set D′ fall within (−100, 100) (the minimum threshold being set to 100), meaning the container interior is empty and free of foreign matter. The pose of the internal space of the shielding container 4 is fed back to the loading system and updated in real time; after loading finishes, the interior point cloud is extracted to identify the pose of the waste bag 5, which is fed back to the loading system for verification.
The radioactive waste bag identification and positioning method is intended mainly for yards or warehouses where radioactive waste bags must be loaded and unloaded in an unmanned scene. Through a simple identification process it obtains accurate spatial position information of the radioactive waste bag, guiding the loading and unloading device to grip and place with high efficiency and precision. Unlike ordinary loading and gripping scenes, the waste bag sits in a purpose-built transport shielding container and its position relative to the container is not fixed: the waste bag inside the container must be located precisely for gripping, and during loading the container's pose and internal state must be identified to judge whether loading conditions are met and to guide loading accurately. Compared with conventional positioning devices and techniques, the identification method of the invention is simple to deploy, covers a wide area, and identifies accurately; it can precisely locate the package inside the shielding container, guide and control the gripper to grip and load accurately, and avoid interference with the shielding container while aligning the waste bag.
It should be understood that the above-described embodiments are merely examples given to illustrate the present invention clearly and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims.

Claims (10)

1. A radioactive waste package identification and positioning method, characterized in that a waste package point cloud template and a shielding container point cloud template are made, the pose of the shielding container and the pose of the waste package are identified and positioned by a two-stage matching algorithm, and the waste package is loaded or unloaded according to the identification and positioning result, the method comprising the following steps:
S1, collecting a scene point cloud with a laser scanning sensor, extracting the calibrated points of the effective lane area, denoising and filtering to remove scattered random outliers, and extracting the ground point cloud;
S2, removing the ground point cloud, clustering the remaining object points in the scene to obtain each object point cloud, and screening the clusters to obtain the vehicle point cloud and the shielding container point cloud;
S3, performing two-stage pose matching on the acquired shielding container point cloud with the shielding container point cloud template to identify the pose of the shielding container; if the pose is normal, guiding the loading and unloading device to remove the top cover of the shielding container according to the pose; if the pose is abnormal, re-identifying the pose of the shielding container;
S4, analyzing the point cloud inside the shielding container according to the identified pose, setting a minimum threshold and a maximum threshold on the vertical density of the point cloud, and detecting whether the waste bag is loaded in the shielding container and whether the internal state of the shielding container is normal;
S41, if the point cloud vertical density is larger than the maximum threshold, the shielding container holds a waste bag, and unloading is carried out as follows:
S411, extracting the waste bag point cloud inside the shielding container and extracting its boundary;
S412, performing two-stage pose matching on the extracted waste bag point cloud with the waste bag point cloud template to obtain the pose of the waste bag and transmit it to the unloading system, which sends an operation instruction to the loading and unloading device to grip the waste bag for unloading;
S413, monitoring the pose of the waste bag in real time during unloading and feeding it back to the unloading system;
S42, if the point cloud vertical density is smaller than the minimum threshold, the shielding container is empty, and loading is carried out as follows:
S421, transmitting the internal space pose of the shielding container directly to the loading system, which sends an operation instruction to the loading and unloading device to grip a waste bag for loading;
S422, detecting the pose of the waste bag after loading finishes and feeding it back to the loading system;
S43, when foreign matter is present in the shielding container, reporting it to the central control system, and re-identifying the pose of the shielding container after the central control system checks and eliminates the abnormality.
2. The method for identifying and positioning the radioactive waste bag according to claim 1, wherein the step of making the waste bag point cloud template comprises the following steps:
acquiring a point cloud, either sampled from a 1:1 three-dimensional model or scanned with a laser radar and stitched; because the laser-scanned waste bag point cloud is partly occluded in practice and the top surface alone can be positioned to an accurate pose, only the top-surface point cloud of the waste bag is acquired;
aligning the acquired top-surface point cloud to the origin and sampling it uniformly; since the geometric features of the bag top are concentrated at the edge and the loading and unloading device grips four square holes, the boundary point cloud is also extracted, preserving the geometric features of the waste bag while removing redundant feature points.
3. The radioactive waste package identification and positioning method of claim 1, wherein the step of making the shielding container point cloud template comprises the following steps:
acquiring a point cloud, either sampled from a 1:1 three-dimensional model or scanned with a laser radar and stitched; because the laser-scanned shielding container point cloud is partly occluded in practice, only the inner and outer side walls and the top of the shielding container can be scanned, so only those point clouds are acquired;
and aligning the collected point cloud to an origin and uniformly sampling.
4. The method for identifying and locating radioactive waste packages of claim 1, wherein in step S1 the ground point cloud is extracted by a random sample consensus (RANSAC) algorithm.
5. The method for identifying and locating radioactive waste packages as claimed in claim 1, wherein in step S1 the laser scanning sensor is mounted on the loading and unloading device several meters above the ground so that its sensing range covers the waste package loading and unloading area, allowing the whole scene point cloud to be collected.
6. The method for identifying and locating the radioactive waste package according to claim 1, wherein in step S2 each object point cloud is obtained by a Euclidean clustering method or a DBSCAN clustering method, and the vehicle point cloud and the shielding container point cloud to be matched are screened out according to cluster size or a computed feature distribution histogram.
7. The radioactive waste package identification and positioning method according to claim 1, wherein the two-stage matching algorithm is as follows: the pose is solved by a two-step method of environment-constrained feature-vector coarse registration followed by ICP fine registration.
8. The method for identifying and positioning the radioactive waste package according to claim 1, wherein in step S3, when an abnormal pose of the shielding container is identified, it is reported to the central control system, and after the central control system checks and eliminates the abnormality, the pose of the shielding container is identified again.
9. The method for identifying and locating radioactive waste packages of claim 1, wherein in step S3, the shielding container point cloud template and the shielding container point cloud are downsampled to make the point cloud densities uniform.
10. An apparatus for identifying and positioning radioactive waste packages, configured to perform the radioactive waste package identification and positioning method according to any one of claims 1 to 9.
CN202011542885.XA 2020-12-21 2020-12-21 Method and device for identifying and positioning radioactive waste bag Active CN112581519B (en)


Publications (2)

Publication Number Publication Date
CN112581519A CN112581519A (en) 2021-03-30
CN112581519B true CN112581519B (en) 2022-03-22


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115641462B (en) * 2022-12-26 2023-03-17 电子科技大学 Radar image target identification method
CN116912312B (en) * 2023-09-15 2023-12-01 湖南大学 Three-dimensional hole positioning method for complex curved surface component

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102880147A (en) * 2012-09-28 2013-01-16 浙江核芯监测科技有限公司 Automatic system of radioactive-source waste vault
CN110085342A (en) * 2019-04-23 2019-08-02 中国核电工程有限公司 The retired method of Spent Radioactive liquid storage tank
CN110400345A (en) * 2019-07-24 2019-11-01 西南科技大学 Radioactive waste based on deeply study, which pushes away, grabs collaboration method for sorting
CN110689622A (en) * 2019-07-05 2020-01-14 电子科技大学 Synchronous positioning and composition algorithm based on point cloud segmentation matching closed-loop correction
CN111091062A (en) * 2019-11-21 2020-05-01 东南大学 Robot out-of-order target sorting method based on 3D visual clustering and matching


Non-Patent Citations (1)

Title
Huiyuan Xiong et al., "Numerical calculation model performance analysis for aluminum alloy mortise-and-tenon structural joints used in electric vehicles", Composites Part B: Engineering, vol. 161, 2019-03-15, pp. 77-86. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant