CN117333463A - Drainage pipe network detection method, system, computer equipment and medium - Google Patents

Drainage pipe network detection method, system, computer equipment and medium

Info

Publication number: CN117333463A
Application number: CN202311319998.7A
Authority: CN (China)
Prior art keywords: image, camera, change information, cloud data, pipe network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 王殿常, 李韦烨, 彭寿海, 王万琼, 米荣熙, 陈晓龙, 李佳颖, 李雅晴, 张驰
Current Assignee: China Three Gorges Corp
Original Assignee: China Three Gorges Corp
Application filed by China Three Gorges Corp
Priority: CN202311319998.7A
Publication: CN117333463A

Classifications

All classifications fall under G (Physics); G06 (Computing; calculating or counting); G06T (Image data processing or generation, in general):

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30108: Subject of image; industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of defect detection of drainage pipe networks, and provides a drainage pipe network detection method, system, computer equipment and medium. The drainage pipe network detection method comprises the following steps: acquiring a first image of the drainage pipe network at the current moment, first point cloud data, and a first relative positional relationship between a first camera and a second camera, wherein the first image is acquired through the first camera, and the first point cloud data is acquired through the second camera; generating a second image according to the first image, the first point cloud data and the first relative positional relationship, wherein the second image is a three-dimensional image; and detecting the drainage pipe network according to the second image to obtain a detection result of the drainage pipe network. The invention makes full use of the pixel information in the image and the three-dimensional space information in the point cloud data, improving the accuracy of detecting defects of the drainage pipe network.

Description

Drainage pipe network detection method, system, computer equipment and medium
Technical Field
The invention relates to the field of defect detection of a drainage pipe network, in particular to a drainage pipe network detection method, a drainage pipe network detection system, computer equipment and a medium.
Background
The basic data of the drainage pipe network is important information required for operation and maintenance management. These data are mainly obtained by on-site detection, and their precision is directly related to the detection technology adopted. Efficient and accurate pipeline detection techniques and methods are therefore important.
In the prior art, the drainage pipe network is mainly detected by a CCTV camera. However, the image obtained by a CCTV camera contains only the pixel information of the drainage pipe network, so the network can be inspected only through the pixel information captured by the camera; defect information in the three-dimensional space of the drainage pipe network can be neither detected nor quantified, so the defect detection accuracy for the drainage pipe network is not high.
Disclosure of Invention
In order to improve the detection precision of the defects of the drainage pipe network, the invention provides a drainage pipe network detection method, a drainage pipe network detection system, computer equipment and a medium.
In a first aspect, the present invention provides a method for detecting a drain pipe network, the method comprising:
acquiring a first image of the drainage pipe network at the current moment, first point cloud data, and a first relative positional relationship between a first camera and a second camera, wherein the first image is acquired through the first camera, and the first point cloud data is acquired through the second camera;
generating a second image according to the first image, the first point cloud data and the first relative positional relationship, wherein the second image is a three-dimensional image;
and detecting the drainage pipe network according to the second image to obtain a detection result of the drainage pipe network.
According to the method, the first camera obtains the pixel information in the first image of the drainage pipe network, and the second camera obtains the three-dimensional space information in the first point cloud data of the drainage pipe network. A second image of the drainage pipe network is generated from the first image and the first point cloud data; the second image contains both the pixel information and the three-dimensional space information. Detecting defects in the drainage pipe network with the second image therefore makes full use of the pixel information in the image and the three-dimensional space information in the point cloud data, improving the accuracy of detecting defects of the drainage pipe network.
In an alternative embodiment, the step of obtaining a first relative positional relationship between the first camera and the second camera comprises:
acquiring a third image and second point cloud data of the drainage pipe network at the previous moment, wherein the third image is acquired through a first camera, and the second point cloud data is acquired through a second camera;
determining first position change information of a first camera and second position change information of a second camera, wherein the first position change information is used for representing position and posture information of the first camera in a world coordinate system at different moments, and the second position change information is used for representing position and posture information of the second camera in the world coordinate system at different moments;
determining a second relative position relationship between the first camera and the second camera at the current moment according to the first position change information and the second position change information;
determining at least one target point in the drain pipe network;
determining a first conversion relation of each target point according to the second relative position relation, wherein the first conversion relation is used for converting first point cloud data of the target point into a first image of the target point;
acquiring a third relative position relation between the first camera and the second camera at the previous moment;
determining a second conversion relation of each target point according to the third relative position relation, wherein the second conversion relation is used for converting second point cloud data of the target point into a third image of the target point;
updating the first position change information according to the first conversion relations and the second conversion relations to obtain updated first position change information, and recalculating the second relative position relation between the first camera and the second camera until the second relative position relation meets the preset condition, and taking the second relative position relation meeting the preset condition as the first relative position relation.
Through the above embodiment, the first position change information of the first camera is updated through the first conversion relations and the second conversion relations, making the first position change information more accurate. The first relative positional relationship between the first camera and the second camera calculated from this information is in turn more accurate, so the second image generated according to it is more accurate, which improves the defect detection accuracy for the drainage pipe network.
In an alternative embodiment, the step of determining the first position change information of the first camera comprises:
respectively determining characteristic points in the third image and characteristic points in the first image;
and comparing the characteristic points of the third image with the characteristic points of the first image to obtain first position change information.
In an alternative embodiment, comparing the feature points of the third image with the feature points of the first image to obtain the first position change information includes:
matching the characteristic points of the third image with the characteristic points of the first image to obtain a plurality of groups of matched characteristic points;
and obtaining first position change information according to the matched multiple groups of characteristic points.
Through the above embodiment, the feature points of the third image and the feature points of the first image are matched to obtain a plurality of sets of matched feature points, and the first position change information is determined according to the plurality of sets of matched feature points.
In an alternative embodiment, the step of determining second position change information of the second camera comprises:
matching the first point cloud data with the second point cloud data to obtain a plurality of groups of matched point cloud data;
and determining second position change information according to the matched multiple groups of point cloud data.
In an alternative embodiment, updating the first location change information according to each first conversion relation and each second conversion relation to obtain updated first location change information includes:
inputting each first conversion relation into a pre-constructed projection function to obtain a position vector of each target point, wherein the position vector is a vector of the position of the target point in the first point cloud data pointing to the position of the target point in the first image;
and updating the first position change information according to the second conversion relations and the position vectors to obtain updated first position change information, wherein the updated first position change information enables the objective function value to be minimum, and the objective function value is calculated according to the updated first position change information, the second conversion relations and the position vectors.
Through the above embodiment, the projection function yields, for each target point, the position vector pointing from its position in the point cloud data to its position in the image; the objective function value is calculated from the second conversion relations and these position vectors, and the first position change information that minimizes the objective function value is taken as the updated first position change information.
In an alternative embodiment, the position change information includes a rotation matrix and a translation matrix, the conversion relationship includes a distance relationship and a direction relationship, and the objective function value is calculated by the following formula:

$$L = \sum_{j}\left\| v_1^{\,j} - \mathrm{Proj}\!\left( R_C \left( R_0 P_j + t_0^{\,j} \right) + t_C \right) \right\|^2$$

wherein $L$ is the objective function value; $v_1^{\,j}$ is the position vector of the $j$-th target point; $R_C$ and $t_C$ are respectively the rotation matrix and the translation matrix in the first position change information; $R_0 P_j$ and $t_0^{\,j}$ are respectively the distance relation and the direction relation in the second conversion relation; and $\mathrm{Proj}()$ is the projection function.
In an alternative embodiment, the detecting the drainage pipe network according to the second image to obtain a detection result of the drainage pipe network includes:
and inputting the second image of the drainage pipe network into a pre-constructed deep learning model, and analyzing and processing the second image by using the deep learning model to obtain a detection result.
In a second aspect, the present invention also provides a drainage pipe network detection system, which includes:
CCTV cameras, laser cameras, and computing devices;
a CCTV camera for acquiring a first image of a target object;
the laser camera is used for acquiring first point cloud data of the target object;
the computing device is used for computing a first relative position relation between the CCTV camera and the laser camera; generating a second image according to the first image, the first point cloud data and the first relative position relation; and determining the detection result of the drainage pipe network according to the second image.
Through the system, the CCTV camera acquires the pixel information in the first image of the drainage pipe network, and the laser camera acquires the three-dimensional space information in the first point cloud data of the drainage pipe network. A second image of the drainage pipe network is generated from the first image and the first point cloud data; the second image contains both the pixel information and the three-dimensional space information. Detecting defects in the drainage pipe network with the second image therefore makes full use of the pixel information in the image and the three-dimensional space information in the point cloud data, improving the accuracy of detecting defects of the drainage pipe network.
In an alternative embodiment, the system further comprises:
and the camera lifting device is used for lifting the CCTV camera and/or the laser camera.
Through the embodiment, the CCTV camera and the laser camera in the system are lifted by using the camera lifting equipment, so that the shooting visual field of the camera is continuously changed, and the comprehensive detection of different positions of the drainage pipe network in different directions is realized.
In an alternative embodiment, the system further comprises a travel device comprising a track stepping structure and a triangular track structure;
the crawler belt stepping structure is used for a moving system;
and the triangular crawler belt structure is used for performing overturning action so that the system can cross an obstacle.
Through the embodiment, the track stepping structure is utilized to realize the movement of the system, the mobile detection is carried out on the drainage pipe network, and when the obstacle in the drainage pipe network prevents the system from carrying out the mobile detection, the triangular track structure is utilized to span the obstacle, so that the movement performance of the system is improved.
In an alternative embodiment, the system further comprises a lighting device and a power supply device.
In an alternative embodiment, the system further comprises an inertial gyroscope positioning device for acquiring at least one of position information, speed information, heading information, and attitude angle information of the system.
Through the implementation mode, the position, the speed, the course and the attitude angle of the system are comprehensively monitored by using the inertial gyroscope positioning equipment.
In a third aspect, the present invention also provides a computer device, including a memory and a processor, where the memory and the processor are communicatively connected to each other, and the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the steps of the drainage pipe network detection method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the present invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the drainage network detection method of the first aspect or any implementation manner of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting a drain pipe network according to an exemplary embodiment;
FIG. 2 is a detailed operational diagram of a method of detecting a drain pipe network, in one example;
FIG. 3 is a three-dimensional space-two-dimensional image correspondence graph in one example;
FIG. 4 is a schematic diagram of a drainage network detection system according to an exemplary embodiment;
FIG. 5 is a schematic view of the overall structure of a drainage network inspection robot in one example;
FIG. 6 is a front view of a robotic end in one example;
FIG. 7 is a side view of a robotic end in one example;
FIG. 8 is a top view of a robotic end in one example;
FIG. 9 is a schematic view of a scenario in which monitoring control is performed on the ground through a remote control terminal in one example;
fig. 10 is a schematic diagram of a hardware structure of a computer device according to an exemplary embodiment.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
In order to improve the detection precision of the defects of the drainage pipe network, the invention provides a drainage pipe network detection method, a drainage pipe network detection system, computer equipment and a medium.
Fig. 1 is a flowchart of a method for detecting a drain pipe network according to an exemplary embodiment. As shown in fig. 1, the drainage pipe network detection method includes the following steps S101 to S103.
Step S101: acquiring a first image of the drainage pipe network at the current moment, first point cloud data and a first relative position relation between a first camera and a second camera, wherein the first image is acquired through the first camera, and the first point cloud data is acquired through the second camera.
In an alternative embodiment, the first camera may be a CCTV camera, and the first image collected by the first camera is a two-dimensional image, where the first image includes pixel information in the drainage pipe network image.
In an alternative embodiment, the second camera may be a laser industrial camera, which obtains point cloud data of the drainage pipe network by emitting laser (lidar) signals and receiving the reflections. The first point cloud data comprises the three-dimensional space information of the drainage pipe network image.
In an alternative embodiment, the first relative positional relationship refers to a relative positional relationship of the first camera and the second camera in a world coordinate system. The first relative positional relationship includes, but is not limited to, a rotation matrix and a translation matrix.
Step S102: generating a second image according to the first image, the first point cloud data and the first relative position relation, wherein the second image is a three-dimensional image.
In an alternative embodiment, the three-dimensional space information in the first point cloud data is converted and mapped into the pixel information of the first image through the first relative position relationship, and the second image is generated.
In an alternative embodiment, the second image includes both pixel information in the first image and three-dimensional space information in the first point cloud data.
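For illustration, a minimal Python sketch of this conversion and mapping follows. It assumes a pinhole model with a known 3x3 intrinsic matrix K for the first camera; K, the array shapes and the function name are assumptions of the sketch, not details taken from the invention.

```python
import numpy as np

def build_rgbd(image, points, R, t, K):
    """Sketch: project the first point cloud data into the first image and
    attach the resulting depth map, yielding the three-dimensional second image."""
    h, w = image.shape[:2]
    depth = np.zeros((h, w), dtype=np.float32)

    cam_pts = points @ R.T + t            # point cloud -> first-camera coordinates
    cam_pts = cam_pts[cam_pts[:, 2] > 0]  # keep points in front of the camera

    uv = cam_pts @ K.T                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[ok], u[ok]] = cam_pts[ok, 2]  # z value becomes the depth channel

    return np.dstack([image, depth])      # HxWx4 RGB-D "second image"
```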
Step S103: detecting the drainage pipe network according to the second image to obtain a detection result of the drainage pipe network.
In an alternative embodiment, the second image may be analyzed using an artificial intelligence model to identify defects in the drainage pipe network and generate a detection result.
In an alternative embodiment, the detection result includes defect information of the drainage pipe network, where the defect information includes but is not limited to structural defects and functional defects. A structural defect refers to damage such as a missing portion of the drain pipe; a functional defect refers to the drain pipe failing to drain properly.
According to the method, the first camera obtains the pixel information in the first image of the drainage pipe network, and the second camera obtains the three-dimensional space information in the first point cloud data of the drainage pipe network. A second image of the drainage pipe network is generated from the first image and the first point cloud data; the second image contains both the pixel information and the three-dimensional space information. Detecting defects in the drainage pipe network with the second image therefore makes full use of the pixel information in the image and the three-dimensional space information in the point cloud data, improving the accuracy of detecting defects of the drainage pipe network.
In an example, in the step S101, the first relative positional relationship between the first camera and the second camera is obtained by:
step a1: and acquiring a third image and second point cloud data of the drainage pipe network at the previous moment, wherein the third image is acquired through the first camera, and the second point cloud data is acquired through the second camera.
Step a2: determining first position change information of the first camera and second position change information of the second camera, wherein the first position change information is used for representing position and posture information of the first camera in the world coordinate system at different moments, and the second position change information is used for representing position and posture information of the second camera in the world coordinate system at different moments.
In an alternative embodiment, the world coordinate system refers to a coordinate system in the external real world where the first camera and the second camera are located. The first camera and the second camera both have position and orientation information in the world coordinate system.
In an alternative embodiment, the position and orientation information includes, but is not limited to, position coordinates and rotation angles, and the corresponding position change information is characterized by a translation matrix and a rotation matrix. Because the first camera and the second camera are in a moving state when inspecting the drainage pipe network, the position and posture information is not the same at different moments.
Step a3: and determining a second relative position relation between the first camera and the second camera at the current moment according to the first position change information and the second position change information.
In an alternative embodiment, the second relative positional relationship also includes, but is not limited to, a rotation matrix and a translation matrix.
In an alternative embodiment, the second relative positional relationship between the first camera and the second camera at the current time is determined by the following formula:

$$R_i^C\,R = R\,R_i^L, \qquad R_i^C\,t + t_i^C = R\,t_i^L + t$$

wherein $R_i^C$ and $t_i^C$ denote the rotation matrix and the translation matrix of the position change information of the first camera when the current moment is at position $i$, $R_i^L$ and $t_i^L$ denote the rotation matrix and the translation matrix of the position change information of the second camera when the current moment is at position $i$, and $R$ and $t$ denote the rotation matrix and the translation matrix in the second relative positional relationship of the first camera and the second camera.
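These per-position constraints have the form of the classical hand-eye calibration problem AX = XB, for which OpenCV provides a solver. The sketch below is one possible way to obtain a preliminary R, t; the gripper/target argument names are OpenCV's robot-arm terminology repurposed for the two cameras, and the pose lists are assumed to have been accumulated from the estimated motions.

```python
import cv2

# R_cam[i], t_cam[i]: pose of the first camera at position i,
#                     accumulated from the first position change information.
# R_lidar[i], t_lidar[i]: pose of the second camera at position i,
#                     accumulated from the second position change information.
# calibrateHandEye forms the inter-position motions A, B internally and solves
# A X = X B for the fixed sensor-to-sensor transform X = (R, t).
R, t = cv2.calibrateHandEye(
    R_gripper2base=R_cam, t_gripper2base=t_cam,
    R_target2cam=R_lidar, t_target2cam=t_lidar,
    method=cv2.CALIB_HAND_EYE_TSAI,
)
```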
Step a4: at least one target point is determined in the drainage pipe network. Each target point determined in the drainage pipe network is captured by both the first camera and the second camera. The number of target points may be set according to actual needs and is not specifically limited here.
Step a5: according to the second relative position relationship, a first conversion relationship of each target point is determined, and the first conversion relationship is used for converting first point cloud data of the target point into a first image of the target point.
In an alternative embodiment, the first conversion relationship is a conversion relationship between the pixel coordinate system of the first image from the first camera and the lidar coordinate system of the first point cloud data from the second camera.
In an alternative embodiment, the first transformation relationship is characterized by a distance relationship and a direction relationship of the target point P in the first image and the target point P in the first point cloud data.
Step a6: and acquiring a third relative position relation between the first camera and the second camera at the previous moment.
Step a7: and determining a second conversion relation of each target point according to the third relative position relation, wherein the second conversion relation is used for converting second point cloud data of the target point into a third image of the target point.
In an alternative embodiment, the second conversion relationship is a conversion relationship between a pixel coordinate system in the third image and a laser radar coordinate system in the second point cloud data.
Step a8: updating the first position change information according to the first conversion relations and the second conversion relations to obtain updated first position change information, and recalculating the second relative position relation between the first camera and the second camera until the second relative position relation meets the preset condition, and taking the second relative position relation meeting the preset condition as the first relative position relation. At this time, the obtained first relative positional relationship is a calibrated and updated relative positional relationship, and the second image can be generated according to the calibrated and updated first relative positional relationship, the first image and the first point cloud data.
In an alternative embodiment, the preset condition may be that the second relative positional relationship is not changed, or the change amplitude of the second relative positional relationship obtained by the current calculation with respect to the second relative positional relationship obtained by the last calculation is smaller than a preset value, which is not limited in detail herein.
In the embodiment of the invention, the first position change information of the first camera is updated through the first conversion relations and the second conversion relations, making the first position change information more accurate. The first relative positional relationship between the first camera and the second camera calculated from this information is in turn more accurate, so the second image generated according to it is more accurate, which improves the defect detection accuracy for the drainage pipe network.
In an example, in the above step a2, the first position change information of the first camera is determined by:
step b1: feature points in the third image and feature points in the first image are determined, respectively.
Step b2: and comparing the characteristic points of the third image with the characteristic points of the first image to obtain first position change information.
In an alternative embodiment, in the step b2, the feature points of the third image and the feature points of the first image are compared by:
firstly, matching the characteristic points of the third image with the characteristic points of the first image to obtain a plurality of groups of matched characteristic points.
And then, according to the matched multiple groups of characteristic points, obtaining first position change information.
In the embodiment of the invention, the feature points of the first image and the feature points of the third image are determined by the AKAZE feature algorithm and described with M-LDB feature descriptors. The feature points of the third image are then matched with the feature points of the first image, and the matched groups of feature points are analyzed and calculated with the 5-point algorithm and the RANdom SAmple Consensus (RANSAC) algorithm to determine the first position change information.
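A sketch of this step with OpenCV follows: its AKAZE implementation uses the M-LDB descriptor by default, and findEssentialMat implements the 5-point algorithm inside a RANSAC loop. The intrinsic matrix K and all variable names are assumptions of the sketch.

```python
import cv2
import numpy as np

def camera_motion(img_prev, img_curr, K):
    """Sketch: AKAZE/M-LDB matching plus 5-point RANSAC to estimate R_C, t_C."""
    akaze = cv2.AKAZE_create()                      # M-LDB descriptors by default
    kp1, des1 = akaze.detectAndCompute(img_prev, None)
    kp2, des2 = akaze.detectAndCompute(img_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)             # groups of matched feature points

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 5-point algorithm with RANSAC outlier rejection
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R_C, t_C, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R_C, t_C     # note: monocular translation is recovered only up to scale
```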
In an example, in the above step a2, the second position change information of the second camera is determined by:
first, matching the first point cloud data with the second point cloud data to obtain a plurality of groups of matched point cloud data.
And then, determining second position change information according to the matched multiple groups of point cloud data.
In the embodiment of the invention, the groups of point cloud data are matched and calculated by the Iterative Closest Point (ICP) method to obtain the second position change information.
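A sketch of the ICP registration with Open3D follows; the voxel size and correspondence distance are assumptions that would need tuning to the pipe geometry.

```python
import numpy as np
import open3d as o3d

def lidar_motion(cloud_prev, cloud_curr, voxel=0.05):
    """Sketch: ICP registration of two point cloud frames to estimate R_L, t_L."""
    src = cloud_curr.voxel_down_sample(voxel)   # current frame (source)
    dst = cloud_prev.voxel_down_sample(voxel)   # previous frame (target)

    result = o3d.pipelines.registration.registration_icp(
        src, dst,
        max_correspondence_distance=0.2,        # metres; assumed, tune to pipe scale
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    T = result.transformation                   # 4x4 rigid transform
    return T[:3, :3], T[:3, 3]                  # second position change info R_L, t_L
```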
In an example, in the above step a8, the first position change information is updated by:
firstly, each first conversion relation is input into a pre-constructed projection function to obtain a position vector of each target point, wherein the position vector is a vector of the position of the target point in first point cloud data pointing to the position of the target point in a first image.
In an alternative embodiment, the position vector includes a distance relationship and a direction relationship, and the position vector of a target point may be calculated as follows:

$$v_1^{\,j} = \mathrm{Proj}\left( R\,P_j + t_j \right)$$

wherein $v_1^{\,j}$ is the position vector of target point $j$; $R\,P_j$ is the distance relation in the position vector, characterizing the distance from the position of target point $j$ in the first point cloud data to its position in the first image; $t_j$ is the direction relation in the position vector, characterizing the direction in which the position of target point $j$ in the first point cloud data points to its position in the first image; and $\mathrm{Proj}()$ is the projection function.
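For concreteness, a minimal sketch of a pinhole Proj() follows; the 3x3 intrinsic matrix K is an assumption of the sketch, as the patent does not specify the projection model's parameters.

```python
import numpy as np

def proj(p_cam, K):
    """Sketch of Proj(): map a 3D point in camera coordinates to 2D pixel
    coordinates with a pinhole model (K is an assumed intrinsic matrix)."""
    uvw = K @ p_cam              # homogeneous image coordinates
    return uvw[:2] / uvw[2]      # dehomogenize -> (u, v)

# position vector of target point j, per the formula above:
# v1_j = proj(R @ P_j + t_j, K)
```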
Then, the first position change information is updated according to the second conversion relations and the position vectors, updated first position change information is obtained, the updated first position change information enables the objective function value to be minimum, and the objective function value is calculated according to the updated first position change information, the second conversion relations and the position vectors.
In the above embodiment, the projection function yields, for each target point, the position vector pointing from its position in the point cloud data to its position in the image; the objective function value is calculated from the second conversion relations and these position vectors, and the first position change information that minimizes the objective function value is used as the updated first position change information.
In an alternative embodiment, the position change information includes a rotation matrix and a translation matrix, the conversion relation includes a distance relation and a direction relation, and the objective function value is calculated by the following formula:

$$L = \sum_{j}\left\| v_1^{\,j} - \mathrm{Proj}\!\left( R_C \left( R_0 P_j + t_0^{\,j} \right) + t_C \right) \right\|^2$$

wherein $L$ is the objective function value; $v_1^{\,j}$ is the position vector of the $j$-th target point; $R_C$ and $t_C$ are respectively the rotation matrix and the translation matrix in the first position change information; $R_0 P_j$ and $t_0^{\,j}$ are respectively the distance relation and the direction relation in the second conversion relation; and $\mathrm{Proj}()$ is the projection function.
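One way to minimize L over the first position change information is nonlinear least squares, parameterizing the rotation as a Rodrigues vector. The sketch below reuses the proj() sketch above; the inputs v1 (observed position vectors), R0P (each $R_0 P_j$) and t0 (each $t_0^{\,j}$) are assumed to come from the preceding steps.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, v1, R0P, t0, K):
    """Stacked residuals v1_j - Proj(R_C (R_0 P_j + t_0^j) + t_C)."""
    R_C = Rotation.from_rotvec(x[:3]).as_matrix()
    t_C = x[3:]
    res = []
    for v, p, d in zip(v1, R0P, t0):
        q = R_C @ (p + d) + t_C        # second conversion result into current frame
        res.append(v - proj(q, K))     # reprojection error of target point j
    return np.concatenate(res)

# initial guess from the preliminary (un-optimized) first position change info
x0 = np.concatenate([Rotation.from_matrix(R_C0).as_rotvec(), t_C0])
sol = least_squares(residuals, x0, args=(v1, R0P, t0, K))
R_C_opt = Rotation.from_rotvec(sol.x[:3]).as_matrix()
t_C_opt = sol.x[3:]
```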
In an example, in the step S103, the second image of the drainage pipe network is input into a pre-constructed deep learning model, and the second image is analyzed and processed by using the deep learning model to obtain a detection result.
The operation of a method for detecting a drain pipe network is described below by way of a specific embodiment, as shown in fig. 2. The process comprises the following steps:
step c1: in the initialization stage, a CCTV camera (first camera) collects pipeline videos, performs feature description on two camera images (a third image at the previous moment and a first image at the current moment) at two adjacent moments in the videos, extracts feature points from the two images and performs matching, and calculates a rotation matrix R of the CCTV camera between the two images after the matching is completed C And a translation matrix t C (first position change information) to complete motion estimation of the CCTV camera, wherein the rotation matrix and the translation matrix represent a motion state change of the CCTV camera between the two images.
Step c2: in the initialization stage, the laser industrial camera (the second camera) collects point cloud data (radar data) of the internal structure of the pipeline, and the two frames of point cloud data are registered and calculated by the Iterative Closest Point (ICP) method to obtain the rotation matrix $R_L$ and translation matrix $t_L$ of the laser industrial camera between the two frames (the second position change information), completing motion estimation of the laser industrial camera; these represent the change of the motion state of the laser industrial camera between the two frames of point cloud data.
Step c3: after the external parameters (the first position change information and the second position change information) of the CCTV camera and the laser industrial camera in the world coordinate system are determined, external calibration is performed on the CCTV camera and the laser industrial camera, and the relative positional relationship R, t of the CCTV camera and the laser industrial camera, namely the second relative position relation, is calculated preliminarily through the following formula:

$$R_i^C\,R = R\,R_i^L, \qquad R_i^C\,t + t_i^C = R\,t_i^L + t$$

wherein $R_i^C$ and $t_i^C$ are the 3×3 rotation matrix and 3×1 translation component representing the change of the pose of the CCTV camera at position $i$, $R_i^L$ and $t_i^L$ are the 3×3 rotation matrix and 3×1 translation component representing the change of the pose of the laser industrial camera at position $i$, and $R$ and $t$ are the 3×3 rotation matrix and 3×1 translation component representing the relative positional relationship of the CCTV camera and the laser industrial camera.

R and t require nonlinear optimization during this calculation; the optimization calculation method is as follows:
Step c4: a three-dimensional space to two-dimensional image (3D-2D) correspondence is constructed, and a target point P in the three-dimensional space of the point cloud data is projected onto the first image of the CCTV camera with the projection function, calculated by the following equation:

$$v_1^{\,j} = \mathrm{Proj}\left( R\,P_j + t_j \right)$$

wherein $v_1^{\,j}$ is the position vector of target point $j$; $R\,P_j$ is the distance relation in the position vector, characterizing the distance from the position of target point $j$ in the first point cloud data to its position in the first image; $t_j$ is the direction relation in the position vector, characterizing the direction in which the position of target point $j$ in the first point cloud data points to its position in the first image; and $\mathrm{Proj}()$ is the projection function.
Step c5: the tracked target point P is projected from camera image 1 into camera image 2, with the three-dimensional to two-dimensional correspondence as shown in fig. 3. The motion state change of the CCTV camera between the two images (the first position change information) is optimized and recalculated by minimizing an objective function, with the calculation equation as follows:

$$L = \sum_{j}\left\| v_1^{\,j} - \mathrm{Proj}\!\left( R_C \left( R_0 P_j + t_0^{\,j} \right) + t_C \right) \right\|^2$$

wherein $L$ is the objective function value; $v_1^{\,j}$ is the position vector of the $j$-th target point; $R_C$ and $t_C$ are respectively the rotation matrix and the translation matrix in the first position change information; $R_0 P_j$ and $t_0^{\,j}$ are respectively the distance relation and the direction relation in the second conversion relation; and $\mathrm{Proj}()$ is the projection function. This process can be expressed by the following formula:

$$(R_C, t_C) = \arg\min_{R_C,\, t_C} L$$
Step c6: steps c4 to c5 are iterated until the second relative positional relationship no longer changes significantly, yielding the accurate first relative positional relationship R, t and completing the calibration of the position and posture relationship between the CCTV camera and the laser industrial camera.
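The outer iteration of steps c4 to c6 can be sketched as follows; relative_pose stands for the formula of step c3 and refine_motion for the minimization of step c5, both hypothetical helpers, and the stopping threshold eps is an assumed preset condition.

```python
import numpy as np

def calibrate(R_C, t_C, R_L, t_L, v1, R0P, t0, K, eps=1e-6, max_iter=50):
    """Sketch of the c4-c6 loop: alternate motion refinement and relative-pose
    recomputation until the second relative positional relationship stabilizes."""
    R_prev, t_prev = None, None
    for _ in range(max_iter):
        R, t = relative_pose(R_C, t_C, R_L, t_L)           # step c3 (hypothetical helper)
        if R_prev is not None and np.linalg.norm(R - R_prev) < eps \
                and np.linalg.norm(t - t_prev) < eps:
            return R, t                                    # preset condition met
        R_prev, t_prev = R, t
        R_C, t_C = refine_motion(R_C, t_C, v1, R0P, t0, K) # steps c4-c5 (hypothetical)
    return R, t
```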
Step c7: after the calibration is completed, an RGB-D depth image (second image) is generated according to the first image, the first point cloud data, and the first relative positional relationship.
Step c8: the RGB-D image is analyzed and processed with an artificial intelligence model, so that structural and functional defects of the pipeline can be identified more accurately, and the detected defects can be quantitatively marked.
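The patent does not fix a particular network; as one hedged illustration of step c8, a classification backbone can be widened to accept the 4-channel RGB-D input. The defect labels below are illustrative only.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

DEFECTS = ["crack", "deposit", "deformation", "infiltration"]   # illustrative labels

model = resnet18(num_classes=len(DEFECTS))
# widen the first convolution from 3 (RGB) to 4 (RGB-D) input channels
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.eval()

rgbd = torch.randn(1, 4, 224, 224)        # stand-in for a normalized RGB-D frame
with torch.no_grad():
    scores = model(rgbd).softmax(dim=1)
print(DEFECTS[int(scores.argmax())])
```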
Fig. 4 is a schematic diagram of a drainage network detection system according to an exemplary embodiment. As shown in fig. 4, the system includes: CCTV camera 401, laser camera 402, and computing device 403.
A CCTV camera 401 for acquiring a first image of a target object.
A laser camera 402 for acquiring first point cloud data of a target object.
A computing device 403 for computing a first relative positional relationship of the CCTV camera and the laser camera; generating a second image according to the first image, the first point cloud data and the first relative position relation; and determining the detection result of the drainage pipe network according to the second image.
Through the system, the CCTV camera acquires the pixel information in the first image of the drainage pipe network, and the laser camera acquires the three-dimensional space information in the first point cloud data of the drainage pipe network. A second image of the drainage pipe network is generated from the first image and the first point cloud data; the second image contains both the pixel information and the three-dimensional space information. Detecting defects in the drainage pipe network with the second image therefore makes full use of the pixel information in the image and the three-dimensional space information in the point cloud data, improving the accuracy of detecting defects of the drainage pipe network.
In an example, the system further comprises:
and the camera lifting device is used for lifting the CCTV camera and/or the laser camera.
Through the embodiment, the CCTV camera and the laser camera in the system are lifted by using the camera lifting equipment, so that the shooting visual field of the camera is continuously changed, and comprehensive detection of different positions of the drainage pipe network in different directions is realized.
In an example, the system further includes a travel device including a track stepping structure and a triangular track structure.
The track stepping structure is used for moving the system.
And the triangular crawler belt structure is used for performing overturning action so that the system can cross an obstacle.
Through the embodiment, the track stepping structure is utilized to realize the movement of the system, the mobile detection is carried out on the drainage pipe network, and when the obstacle in the drainage pipe network prevents the system from carrying out the mobile detection, the triangular track structure is utilized to span the obstacle, so that the movement performance of the system is improved.
In an example, the system further comprises a lighting device and a power supply device. The lighting device is used for starting lighting when the first camera works, so that the first camera can shoot and obtain image data conveniently. The power supply device may employ a lithium ion battery to provide electrical energy to the system.
In an example, the system further includes an inertial gyroscope positioning device for acquiring at least one of position information, speed information, heading information, and attitude angle information of the system. And the position, speed, course and attitude angle of the system are comprehensively monitored by using inertial gyroscope positioning equipment, and the working state of the system is known in real time.
Fig. 5 is a schematic diagram of the overall structure of a drainage pipe network detection robot. As shown in fig. 5, the drainage pipe network detection robot includes a remote control end 1 and a robot end 2. The remote control end comprises a display 11, a controller 12, a data storage module 13 and a wireless communication module 14, and the robot end comprises a control unit 21, a positioning unit 22, an action unit 23, a detection unit 24 and an energy supply unit 25.
Fig. 6 is a front view of the robot end 2, fig. 7 is a side view of the robot end 2, and fig. 8 is a top view of the robot end 2. The control unit 21 of the robot end 2 is internally provided with a wireless communication module 211, a microprocessor 212, a data storage module 213 and a data exchange interface 214, the positioning unit 22 is internally provided with an inertial gyroscope positioning module 221 and a GPS+Beidou positioning module 222, the action unit 23 is internally provided with a crawler stepping structure 231 and a turnover advancing structure 232, the detection unit 24 is internally provided with a laser 241, a laser industrial camera 242, a CCTV camera 243, an auxiliary lighting module 244 and a camera lifting module 245, and the energy supply unit 25 is internally provided with a high-density lithium ion battery 251.
During the detection operation, the energy supply unit 25 supplies power to the robot end 2. A worker performs monitoring and control on the ground through the remote control end 1, as shown in fig. 9: control signals are transmitted through the wireless communication module 14 to the wireless communication module 211 of the robot end, the robot end 2 is placed in the underground drainage pipeline to perform the operation, and detection information and positioning information are transmitted through the wireless communication module 211 to the wireless communication module 14 of the remote control end.
In fig. 5, when the controller 12 controls the travel action of the robot, the control signal is transmitted by the wireless communication module 211 to the data exchange interface 214, which forwards it to the action unit 23. When the interior of the front pipe section is relatively flat, the track stepping structure 231 is controlled to drive the robot forward, and the triangular track structure as a whole remains stationary; when there are many obstacles in the front pipe section, the overturning advancing structure 232 is controlled to rotate, and the triangular track structure as a whole begins to rotate, driving the robot over the obstacles.
In fig. 5, when the controller 12 controls the detection operation of the robot, the control signal is transmitted by the wireless communication module 211 to the data exchange interface 214, which forwards it to the detection unit 24. The camera lifting module realizes the lifting and horizontal 360-degree rotation of the CCTV camera 243 and the laser industrial camera 242, and the spherical structure of the camera realizes 360-degree rotation in the vertical direction. The CCTV camera 243 shoots color video images inside the pipeline, with the auxiliary illumination module 244 providing illumination while the CCTV camera works; the laser 241 emits point cloud laser, and the laser industrial camera 242 receives the laser signals reflected by objects to obtain three-dimensional coordinate information.
In fig. 5, the microprocessor 212 reads the correlation, stored in the data storage module 213, between the three-dimensional coordinates of the laser industrial camera 242, the CCTV camera 243 and the robot body; it controls the laser industrial camera 242 and the CCTV camera 243 to acquire synchronously at the same frame rate and resolution, performs joint calibration of the CCTV color video images and the laser point cloud data, and thereby acquires the RGB color information of each frame of CCTV video pixels together with the three-dimensional coordinate information corresponding to each pixel. The fused CCTV video and laser detection information obtained by the detection unit 24 is uploaded to the data exchange interface 214, stored in the data storage module 213, and transmitted to the wireless communication module 211.
The inertial gyroscope positioning module 221 acquires the position, speed, course and attitude angle information of the robot in real time, the GPS+Beidou positioning module 222 corrects the position of the robot in real time, and the positioning unit uploads the position, speed, course and attitude angle information of the robot to the data exchange interface 214, stores the information to the data storage module 213 and simultaneously transmits the information to the wireless communication module 211.
The display 11 of the remote control terminal 1 displays the defect condition in the pipeline uploaded by the robot terminal 2 in real time, and the position, speed, course and attitude angle information of the robot, so that the control is convenient, and meanwhile, the data storage module 13 stores the information.
Fig. 10 is a schematic diagram of a hardware structure of a computer device according to an exemplary embodiment. As shown in fig. 10, the device includes one or more processors 1010 and a memory 1020, the memory 1020 including persistent memory, volatile memory and a hard disk, one processor 1010 being illustrated in fig. 10. The apparatus may further include: an input device 1030 and an output device 1040.
The processor 1010, memory 1020, input device 1030, and output device 1040 may be connected by a bus or other means, for example in fig. 10.
The processor 1010 may be a central processing unit (Central Processing Unit, CPU). The processor 1010 may also be a chip such as another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 1020 is used as a non-transitory computer readable storage medium, including persistent memory, volatile memory, and hard disk, and may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the drainage network detection method in the embodiments of the present application. The processor 1010 executes various functional applications and data processing of the server by running non-transitory software programs, instructions, and modules stored in the memory 1020, i.e., implementing any of the drain pipe network detection methods described above.
Memory 1020 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data or the like used as needed. In addition, memory 1020 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 1020 may optionally include memory located remotely from processor 1010, which may be connected to the data processing apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 1030 may receive input numeric or character information and generate signal inputs related to user settings and function control. The output 1040 may include a display device such as a display screen.
One or more modules are stored in the memory 1020 that, when executed by the one or more processors 1010, perform the method as shown in fig. 1.
The product can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details which are not described in detail in the present embodiment can be found in the embodiment shown in fig. 1.
The present invention also provides a non-transitory computer storage medium storing computer executable instructions that can perform the method of any of the above-described method embodiments. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Flash Memory (Flash Memory), a Hard Disk (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of memories of the kind described above.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
The foregoing is merely exemplary of embodiments of the present invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (15)

1. A method for detecting a drainage pipe network, the method comprising:
acquiring a first image of the drainage pipe network at the current moment, first point cloud data, and a first relative positional relationship between a first camera and a second camera, wherein the first image is acquired through the first camera, and the first point cloud data is acquired through the second camera;
generating a second image according to the first image, the first point cloud data and the first relative position relation, wherein the second image is a three-dimensional image;
and detecting the drainage pipe network according to the second image to obtain a detection result of the drainage pipe network.
2. The method of claim 1, wherein the step of obtaining a first relative positional relationship between the first camera and the second camera comprises:
acquiring a third image and second point cloud data of the drainage pipe network at the previous moment, wherein the third image is acquired by the first camera, and the second point cloud data is acquired by the second camera;
determining first position change information of the first camera and second position change information of the second camera, wherein the first position change information is used for representing position and posture information of the first camera in a world coordinate system at different moments, and the second position change information is used for representing position and posture information of the second camera in the world coordinate system at different moments;
determining a second relative positional relationship between the first camera and the second camera at the current moment according to the first positional change information and the second positional change information;
determining at least one target point in the drain pipe network;
determining a first conversion relation of each target point according to the second relative position relation, wherein the first conversion relation is used for converting first point cloud data of the target point into a first image of the target point;
acquiring a third relative positional relationship between the first camera and the second camera at a previous time;
determining a second conversion relation of each target point according to the third relative position relation, wherein the second conversion relation is used for converting second point cloud data of the target point into a third image of the target point;
updating the first position change information according to the first conversion relations and the second conversion relations to obtain updated first position change information, and recalculating the second relative position relation between the first camera and the second camera until the second relative position relation meets the preset condition, and taking the second relative position relation meeting the preset condition as the first relative position relation.
3. The method of claim 2, wherein determining the first position change information for the first camera comprises:
determining feature points in the third image and feature points in the first image respectively;
and comparing the feature points of the third image with the feature points of the first image to obtain the first position change information.
4. The method of claim 3, wherein comparing the feature points of the third image with the feature points of the first image to obtain the first position change information comprises:
matching the feature points of the third image with the feature points of the first image to obtain a plurality of groups of matched feature points;
and obtaining the first position change information according to the plurality of groups of matched feature points.
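Claims 3-4 describe classic sparse feature matching. One conventional (assumed, not prescribed) realization uses OpenCV ORB features and essential-matrix decomposition to recover the camera's rotation and translation between the two frames:

```python
import cv2
import numpy as np

def camera_motion_from_images(third_image, first_image, K):
    """Estimate the first position change information (R, t) from two frames."""
    # ORB works on single-channel images; assume BGR input.
    g1 = cv2.cvtColor(third_image, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    # Hamming-distance brute-force matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC rejects mismatched pairs.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    # Rotation and (unit-scale) translation between the two moments.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Note that monocular decomposition recovers translation only up to scale, which is one reason the claims cross-check it against the LiDAR motion.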
5. The method of claim 2, wherein determining second position change information for the second camera comprises:
matching the first point cloud data with the second point cloud data to obtain a plurality of groups of matched point cloud data;
and determining the second position change information according to the plurality of groups of matched point cloud data.
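Claim 5 amounts to point cloud registration. A common (assumed) realization is point-to-point ICP, here sketched with Open3D; the returned 4x4 transform plays the role of the second position change information:

```python
import numpy as np
import open3d as o3d

def lidar_motion_from_clouds(second_points, first_points, threshold=0.05):
    """Register the previous scan onto the current one with ICP."""
    # Wrap raw Nx3 arrays as Open3D point clouds.
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(second_points)
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(first_points)
    # Point-to-point ICP; threshold is the max correspondence distance (m).
    result = o3d.pipelines.registration.registration_icp(
        src, dst, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # 4x4 homogeneous transform = rotation + translation.
    return result.transformation
```

In practice a coarse initial guess from odometry would replace `np.eye(4)` to keep ICP from stalling in a local minimum.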
6. The method of claim 2, wherein updating the first location change information based on each of the first conversion relationships and each of the second conversion relationships to obtain updated first location change information comprises:
inputting each first conversion relation into a pre-constructed projection function to obtain a position vector of each target point, wherein the position vector points from the position of the target point in the first point cloud data to the position of the target point in the first image;
and updating the first position change information according to each second conversion relation and each position vector to obtain updated first position change information, wherein the updated first position change information minimizes an objective function value, and the objective function value is calculated according to the updated first position change information, each second conversion relation, and each position vector.
7. The method of claim 6, wherein the position change information includes a rotation matrix and a translation matrix, the conversion relationship includes a distance relationship and a direction relationship, and the objective function value is calculated by the following formula:
$$L=\sum_{j}\left\|\vec{p}_{j}-\operatorname{Proj}\left(R_{C}\left(R_{0}P_{j}+t_{0}^{j}\right)+t_{C}\right)\right\|^{2}$$
wherein $L$ is the objective function value; $\vec{p}_{j}$ is the position vector of the $j$th target point; $R_{C}$ and $t_{C}$ are respectively the rotation matrix and the translation matrix in the first position change information; $R_{0}P_{j}$ and $t_{0}^{j}$ are respectively the distance relation and the direction relation in the second conversion relation; $\operatorname{Proj}(\cdot)$ is the projection function.
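Under the least-squares reading reconstructed above, the objective value can be evaluated as below; the helper names and the pinhole `proj` are assumptions:

```python
import numpy as np

def proj(point_3d, K):
    """Pinhole projection of a 3-D point to pixel coordinates."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]

def objective_value(R_c, t_c, second_conversions, position_vectors, K):
    """Sum of squared gaps between observed and predicted positions.

    second_conversions: per-target-point (R0Pj, t0j) pairs, i.e. the
    distance and direction relations of claim 7.
    position_vectors: the 2-D position vector of each target point.
    """
    L = 0.0
    for (R0Pj, t0j), p_j in zip(second_conversions, position_vectors):
        predicted = proj(R_c @ (R0Pj + t0j) + t_c, K)
        L += float(np.sum((p_j - predicted) ** 2))
    return L
```

Minimizing this value over $R_{C}$, $t_{C}$ (e.g. with a nonlinear least-squares solver) yields the updated first position change information.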
8. The method of claim 1, wherein detecting the drainage pipe network according to the second image to obtain the detection result of the drainage pipe network comprises:
inputting the second image of the drainage pipe network into a pre-constructed deep learning model, and analyzing the second image by using the deep learning model to obtain the detection result.
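A minimal inference sketch for claim 8, assuming a PyTorch classification network over defect types rendered from the second image; the patent fixes neither the architecture nor the label set:

```python
import torch

def detect(model, second_image_tensor, class_names):
    """Run the pre-constructed model on one fused image and report a defect."""
    # second_image_tensor: CxHxW tensor rendered from the fused 3-D image.
    model.eval()
    with torch.no_grad():
        logits = model(second_image_tensor.unsqueeze(0))   # add batch axis
        probs = torch.softmax(logits, dim=1).squeeze(0)
    best = int(torch.argmax(probs))
    # Detection result: predicted defect class and its confidence.
    return class_names[best], float(probs[best])
```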
9. A drainage pipe network detection system, the system comprising: a CCTV camera, a laser camera, and a computing device;
the CCTV camera is used for acquiring a first image of a target object;
the laser camera is used for acquiring first point cloud data of the target object;
the computing device is used for computing a first relative position relation between the CCTV camera and the laser camera; generating a second image according to the first image, the first point cloud data and the first relative position relation; and determining a detection result of the drainage pipe network according to the second image.
10. The system of claim 9, wherein the system further comprises:
a camera lifting device for lifting the CCTV camera and/or the laser camera.
11. The system of claim 9, further comprising a traveling device, wherein the traveling device comprises a crawler stepping structure and a triangular crawler structure;
the crawler stepping structure is used for moving the system;
the triangular crawler structure is used for performing an overturning action so that the system can cross an obstacle.
12. The system of claim 9, further comprising a lighting device and a power supply device.
13. The system of claim 9, further comprising an inertial gyroscope positioning device for acquiring at least one of position information, speed information, heading information, and attitude angle information of the system.
14. A computer device comprising a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, and the processor executing the computer instructions to perform the steps of the drainage pipe network detection method of any one of claims 1-8.
15. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the drainage pipe network detection method according to any one of claims 1-8.
CN202311319998.7A 2023-10-11 2023-10-11 Drainage pipe network detection method, system, computer equipment and medium Pending CN117333463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311319998.7A CN117333463A (en) 2023-10-11 2023-10-11 Drainage pipe network detection method, system, computer equipment and medium

Publications (1)

Publication Number Publication Date
CN117333463A 2024-01-02

Family

ID=89275159

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination