CN114554030A - Device detection system and device detection method


Info

Publication number
CN114554030A
Authority
CN
China
Prior art keywords
detection
positioning
image
pose
detected
Prior art date
Legal status
Granted
Application number
CN202011309798.XA
Other languages
Chinese (zh)
Other versions
CN114554030B (en)
Inventor
黄金明
Current Assignee
Airbus Beijing Engineering Technology Center Co Ltd
Original Assignee
Airbus Beijing Engineering Technology Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Airbus Beijing Engineering Technology Center Co Ltd
Priority to CN202011309798.XA
Publication of CN114554030A
Application granted
Publication of CN114554030B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N 5/2226 Determination of depth image, e.g. for foreground/background separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/958 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N 23/959 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a device detection system and a device detection method. The device detection system includes: a mounting platform; a positioning sensing device comprising a positioning shooting device and a positioning measuring device; a detection shooting device; and a processor configured to process the positioning images shot by the positioning shooting device, the measured values of the positioning measuring device, and the detection images shot by the detection shooting device in order to detect the object to be detected. The processor determines the pose of the mounting platform in real time from the positioning images, the measured values, and the design parameters of the object to be detected. The system and method optimize the depth calculation of the positioning images, construct a dense depth map of the surrounding environment, and combine the dense depth map with the design parameters of the object to be detected to correct the pose estimate of the mounting platform. This minimizes pose drift, improves positioning precision during detection, improves the quality of the detection images, and thereby improves detection accuracy.

Description

Device detection system and device detection method
Technical Field
The present invention relates to a device detection system and a device detection method, and more particularly, to a device detection system and a device detection method for detecting a device using a mobile apparatus (e.g., a drone or a robot).
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Many types of industrial equipment require inspection and maintenance during production, manufacturing, and use. With the development of technology, equipment detection based on unmanned aerial vehicles or robots has been widely adopted across industrial fields. In the field of aircraft, for example, drone- or robot-based detection techniques are widely used for digital manufacturing, pre-flight inspection, and maintenance and repair. Such techniques significantly reduce labor cost and improve detection efficiency, especially in facilities whose detection environment is harsh or inconvenient for manual inspection.
In drone- or robot-based device detection, the accuracy of the navigation and positioning of the drone or robot is critical to the accuracy of the detection performed: the more accurate the navigation fix, the more accurate the detection. To further improve detection accuracy, there is a need to further improve the navigation and positioning accuracy of the drone or robot while it performs device detection.
Disclosure of Invention
One object of the invention is to provide a device detection system that improves the positioning precision of the system's carrying platform during device detection and the shooting quality of the detection images, and thereby improves the accuracy of device detection. Another object of the invention is to provide a device detection method that likewise improves positioning accuracy and detection accuracy.
One aspect of the present invention provides a device detection system including: a carrying platform adapted to move along a detection motion path of an object to be detected; a positioning sensing device arranged on the carrying platform and comprising a positioning shooting device and a positioning measuring device, the positioning shooting device being configured to shoot positioning images during the movement of the carrying platform and the positioning measuring device being configured to measure the movement of the carrying platform in real time; a detection shooting device arranged on the carrying platform, movable relative to the carrying platform, and configured to shoot a detection image of the object to be detected; and a processor configured to process the positioning image, the measured values of the positioning measuring device, and the detection image in order to detect the object to be detected. The processor determines the pose of the carrying platform in real time from the positioning image, the measured values, and the design parameters of the object to be detected.
The processor includes a pose module and a detection module. The pose module includes a current pose determination unit configured to: construct a dense depth map from the positioning image and the measured values, correct the dense depth map using the design parameters of the object to be detected, and determine the pose of the carrying platform from the corrected dense depth map. The detection module includes a detection image acquisition unit configured to acquire the detection image.
The current pose determination unit is configured to: perform depth estimation of the positioning image based on the positioning image and the measured values; calculate the confidence of pixels across consecutive frames of positioning images; construct a spatial cost function of the positioning image; correct the depth information of the positioning image according to the spatial cost function; and construct the dense depth map from the corrected depth information.
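By way of illustration only, the five steps above can be sketched in Python as follows. This is a minimal toy sketch rather than the claimed implementation: the depth range, the variance-based confidence measure, the quadratic cost with 3x3 smoothing, and all function and parameter names are assumptions introduced here for clarity.

```python
import numpy as np
from scipy.ndimage import convolve

def refine_depth(rough_depth, frame_stack, d_min=0.5, d_max=20.0, n_labels=32):
    """Toy version of the described pipeline: rough depth -> per-pixel
    confidence -> spatial cost over discretized depth labels -> corrected
    depth (the dense depth map). Shapes: rough_depth (H, W); frame_stack
    (N, H, W) consecutive grayscale frames."""
    labels = np.linspace(d_min, d_max, n_labels)        # depth hypotheses

    # Confidence: pixels that stay consistent across consecutive frames
    # (low temporal variance here) are trusted more.
    conf = 1.0 / (1.0 + frame_stack.var(axis=0))        # values in (0, 1]

    # Spatial cost volume: deviation from the rough estimate, weighted by
    # confidence so unreliable pixels contribute a flatter cost.
    cost = conf[None] * (labels[:, None, None] - rough_depth[None]) ** 2

    # Simple spatial regularization: average each cost slice over a 3x3
    # neighborhood, standing in for the interpolation-based cost function.
    kernel = np.full((3, 3), 1.0 / 9.0)
    cost = np.stack([convolve(c, kernel, mode="nearest") for c in cost])

    # Corrected depth: per-pixel argmin over the depth labels.
    return labels[np.argmin(cost, axis=0)]

# Synthetic usage:
rng = np.random.default_rng(0)
dense = refine_depth(5.0 + rng.normal(0, 0.5, (48, 64)), rng.random((5, 48, 64)))
print(dense.shape)  # (48, 64)
```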
In one embodiment, the pose module further includes a first motion control unit that controls the motion of the mounting platform in real time according to the pose of the mounting platform so that the mounting platform enters the detection motion path and moves along the detection motion path.
The pose module further includes a target pose calculation unit configured to: determine the target detection shooting pose of the detection shooting device from the pose of the carrying platform, and send the calculated target detection shooting pose to the detection module.
The detection module further comprises a second motion control unit. The processor is further configured to control the movement of the carrying platform via the first motion control unit and/or control the movement of the detection shooting device relative to the carrying platform via the second motion control unit, so that the pose of the detection shooting device is adjusted to the target detection shooting pose. The detection image acquisition unit is configured to acquire a detection image shot by the detection shooting device at the target detection shooting pose.
The current pose determination unit is configured to: perform pixel-level segmentation and identification on the positioning image in combination with the design parameters of the object to be detected, so as to segment a target image from the positioning image, the target image being an image of the object to be detected or of a detection part of the object to be detected. The target pose calculation unit is configured to: determine the depth information of the target image from the corrected dense depth map, determine the detection position of the carrying platform from the position and depth information of the target image, and determine the current detection item and the corresponding target detection shooting pose.
The current pose determination unit is configured to: convert the target image pixel by pixel into a three-dimensional spatial coordinate system to form a three-dimensional point cloud; divide the three-dimensional point cloud into a plurality of sub-point clouds according to the design parameters of the object to be detected; match the points in each sub-point cloud with the design parameters of the corresponding points of the object to be detected, and correct the points in each sub-point cloud with those design parameters according to the matching result; and correct the dense depth map using the corrected sub-point clouds.
The equipment detection system further comprises a memory storing the design parameter information, detection item information, detection motion path information, and defect judgment criteria of the object to be detected; the processor communicates with the memory in a wired or wireless manner.
The equipment detection system also includes a ground control device configured to set a detection task and to start and stop detection.
The carrying platform is an unmanned aerial vehicle or a detection robot.
The object to be detected is located indoors or outdoors, and the device detection system detects the outer contour and/or the interior of the object to be detected.
Another aspect of the present invention provides a device detection method comprising the steps of: specifying an object to be detected and setting a detection task; moving a carrying platform to a detection motion path of the object to be detected and along the detection motion path, the carrying platform being provided with a positioning sensing device comprising a positioning shooting device and a positioning measuring device, the positioning shooting device shooting positioning images during the movement of the carrying platform and the positioning measuring device measuring the movement of the carrying platform in real time; shooting a detection image of the object to be detected with a detection shooting device, the detection shooting device being arranged on the carrying platform and movable relative to it; and processing the positioning image, the measured values of the positioning measuring device, and the detection image in order to detect the object to be detected. The device detection method further comprises: while the carrying platform moves to and along the detection motion path, determining the pose of the carrying platform in real time from the positioning image, the measured values, and the design parameters of the object to be detected.
Determining the pose of the carrying platform comprises: constructing a dense depth map from the positioning image and the measured values, correcting the dense depth map using the design parameters of the object to be detected, and determining the pose of the carrying platform from the corrected dense depth map.
Constructing the dense depth map includes: performing depth estimation of the positioning image based on the positioning image and the measured values; calculating the confidence of pixels across consecutive frames of positioning images; constructing a spatial cost function of the positioning image; correcting the depth information of the positioning image according to the spatial cost function; and constructing the dense depth map from the corrected depth information.
The device detection method further comprises: while the carrying platform moves to and along the detection motion path of the object to be detected, controlling the motion of the carrying platform in real time according to its pose so that it follows the detection motion path.
The device detection method further comprises: determining a target detection shooting pose of the detection shooting device and adjusting the pose of the detection shooting device to that target detection shooting pose.
Determining the target detection shooting pose of the detection shooting device includes: performing pixel-level segmentation and identification on the positioning image in combination with the design parameters of the object to be detected, so as to segment a target image from the positioning image, the target image being an image of the object to be detected or of a detection part of the object to be detected; and determining the depth information of the target image from the corrected dense depth map, determining the detection position of the carrying platform from the position and depth information of the target image, and determining the current detection item and the corresponding target detection shooting pose.
Correcting the dense depth map includes: converting the target image pixel by pixel into a three-dimensional spatial coordinate system to form a three-dimensional point cloud; dividing the three-dimensional point cloud into a plurality of sub-point clouds according to the design parameters of the object to be detected; matching the points in each sub-point cloud with the design parameters of the corresponding points of the object to be detected, and correcting the points in each sub-point cloud with those design parameters according to the matching result; and correcting the dense depth map using the corrected sub-point clouds.
Adjusting the pose of the detection shooting device to the target detection shooting pose includes: controlling the motion of the carrying platform; and/or controlling the detection shooting device to move relative to the carrying platform.
Detecting the object to be detected comprises: analyzing the state of the object to be detected based on the detection image shot by the detection shooting device at the target detection shooting pose.
The carrying platform is an unmanned aerial vehicle or a detection robot.
The invention provides an improved device detection system and device detection method. The system and method process consecutive frames of positioning images to construct a dense depth map of the surrounding environment, optimizing the depth calculation for the pixels of the positioning images and improving the precision of their depth information. The dense depth map is then combined with the digital design parameters of the detection object: those parameters are used to optimize the dense depth map and correct the pose estimate of the carrying platform, achieving more accurate vision-based simultaneous mapping and localization and avoiding the deviation caused by pixel inconsistency between frames. Pose drift of the positioning sensing device is thereby minimized, and the positioning accuracy during device detection no longer depends entirely on the quality of the captured environment images. Even when environmental influences such as illumination, shadow, or occlusion introduce noise into the captured images, the pose estimate can be corrected and the positioning precision improved, providing a more reliable basis for subsequent motion control and shooting pose control, improving the quality of the detection images, and improving the accuracy of detection and analysis.
Drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings. In the drawings, like features or components are designated with like reference numerals, and the drawings are not necessarily drawn to scale, and wherein:
FIG. 1 shows a schematic block diagram of a device detection system according to an embodiment of the present invention;
FIG. 2 shows a flow chart of a device detection method of the device detection system according to the invention; and
FIG. 3 shows a flow chart of the device detection system according to the present invention determining the current pose of the carrying platform.
Detailed Description
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, like reference numerals indicate like or similar parts and features. The drawings are only schematic representations of the concepts and principles of embodiments of the present invention, and do not necessarily show the specific dimensions and proportions of the various embodiments of the invention. Certain features that are part of a particular figure may be used in an exaggerated manner to illustrate relevant details or structures of embodiments of the present invention.
Fig. 1 shows a schematic block diagram of a device detection system 1 according to the present invention. As shown in fig. 1, the equipment detection system 1 includes a ground control device 10, a mounting platform 20, a positioning sensing device 30, a detection camera 40, a processor 50, and a memory 60.
The ground control device 10 is the control device of the equipment detection system 1 and may be a hand-held controller operated by an operator. The ground control device 10 is configured to start or stop the movement of the mounting platform 20 so as to start or stop the detection of a detection object. The detection object can be various kinds of industrial equipment, such as an airplane, a vehicle, or an electric tower, and can also be a specific space or facility, such as a workshop or a factory building. The ground control device 10 is provided with a user input interface that may include a detection task setting interface 11, a start detection interface 13, and a stop detection interface 15. The operator can input the type of the object to be detected, the position where it is located, and the part to be detected through the detection task setting interface 11 to set a detection task, and then start the movement of the mounting platform 20 by operating the start detection interface 13 to begin detection. While the mounting platform 20 moves to perform detection, the operator can terminate the detection at any time by operating the stop detection interface 15 of the ground control device 10.
The mounting platform 20 is a mobile device that can move around the outer contour of the inspection object or inside the inspection object to perform inspection, and may be, for example, an unmanned aerial vehicle or a robot for inspection. The mounting platform 20 may mount a plurality of devices of the device inspection system 1, for example, the positioning sensing device 30, the inspection photographing device 40, the processor 50, and the memory 60 may be disposed on the mounting platform 20. The mounting platform 20 moves according to the command of the ground control device 10 to perform the detection.
The positioning sensing device 30 is disposed on the mounting platform 20 and is configured to photograph the motion environment during the movement of the mounting platform 20 and to measure that movement, so as to implement vision-based simultaneous mapping and localization of the motion environment of the mounting platform 20. The positioning sensing device 30 includes a positioning camera 31 and a positioning measurement device 33. The positioning camera 31 is mounted on the mounting platform 20. In the present example, the positioning camera 31 is installed with a fixed posture relative to the mounting platform 20; however, the invention is not limited thereto. In other examples according to the invention, the positioning camera 31 may be installed such that its attitude relative to the mounting platform 20 is adjustable. For example, in one example, the positioning camera 31 may be mounted on the mounting platform 20 via a motion-controllable pan/tilt head and be capable of moving relative to the platform, such as horizontal rotation and vertical pitch, to adjust the shooting orientation and pitch angle. The positioning camera 31 is configured to take images of the motion environment in real time during the movement of the mounting platform 20 in order to determine and correct the motion path and posture of the mounting platform 20. The positioning camera 31 may comprise one or more first cameras; the first camera may be a high-speed camera capable of taking RGB images at, for example, 60 to 200 frames per second. The positioning measurement device 33 is disposed on the mounting platform 20 and senses the spatial six-degree-of-freedom motion of the mounting platform 20 in real time, in order to determine the pose of the platform and of each device it carries. In the present embodiment, the positioning measurement device 33 includes an inertial measurement unit (IMU) that provides six-degree-of-freedom measurements and may include three single-axis accelerometers and three single-axis gyroscopes, which detect the acceleration and angular velocity of the mounting platform 20 in three-dimensional space. From these, the spatial six-degree-of-freedom motion of the mounting platform 20 is solved, i.e., translation along the three axes of a spatial Cartesian coordinate system and rotation about those axes, which is used to determine the attitude of the mounting platform 20 and thus to correct its motion path. In other examples, the inertial measurement unit of the positioning measurement device 33 may provide nine-degree-of-freedom measurements; for example, three single-axis magnetic sensors may be included in addition to the accelerometers and gyroscopes to provide heading information.
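As a concrete illustration of how six-degree-of-freedom motion can be solved from such accelerometer and gyroscope readings, the following Python sketch performs one strapdown dead-reckoning step. It assumes ideal, bias-free sensors and a small-angle attitude update; it is a sketch introduced here, not taken from the patent.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in the world frame

def imu_step(pos, vel, R, accel_body, gyro_body, dt):
    """One 6-DOF dead-reckoning step. pos, vel: (3,) world-frame position
    and velocity; R: (3, 3) body-to-world rotation; accel_body: specific
    force [m/s^2]; gyro_body: angular rate [rad/s]."""
    # Attitude update: first-order (small-angle) exponential map.
    wx, wy, wz = gyro_body * dt
    skew = np.array([[0.0, -wz,  wy],
                     [wz,  0.0, -wx],
                     [-wy, wx,  0.0]])
    R = R @ (np.eye(3) + skew)
    u, _, vt = np.linalg.svd(R)      # re-orthonormalize to limit drift
    R = u @ vt
    # Rotate specific force to the world frame, add gravity, integrate twice.
    accel_world = R @ accel_body + G
    vel = vel + accel_world * dt
    pos = pos + vel * dt
    return pos, vel, R
```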
The detection camera 40 may be mounted on the mounting platform 20 via a motion-controllable pan/tilt head, for example, and may be movable relative to the mounting platform 20, such as horizontal movement and vertical pitch movement relative to the mounting platform 20, to adjust a shooting orientation and a pitch angle. The detection camera 40 is configured to take a whole image of the detection object or an image of a certain portion of the detection object when the mounting platform 20 moves along the detection movement path of the detection object, for detection and analysis of the detection object. The detection motion path of the detection object may be a path outside the detection object around the outer contour of the detection object or a path inside the detection object. The detection photographing device 40 may include one or more second cameras for photographing the detection object or a local portion thereof when the mounting platform 20 moves along the detection motion path of the detection object, for detection analysis of the detection object. The second camera may be a low speed camera that can take RGB images, for example, 3 to 10 frames per second. Herein, for convenience of description, RGB images taken by the positioning camera 31 and the detection camera 40 are collectively referred to as an environment image, where the RGB image taken by the positioning camera 31 is defined as a positioning image, the RGB image taken by the detection camera 40 is defined as a detection image, and a shooting pose of the positioning camera 31 is defined as a positioning shooting pose, and a shooting pose of the detection camera 40 is defined as a detection shooting pose.
The processor 50 is configured to communicate with the ground control device 10, the positioning sensing device 30, the detection camera 40, and the memory 60 in a wired or wireless manner and to perform various processing and calculations. For example, the processor 50 may acquire the detection task from the ground control device 10, acquire the corresponding detection motion path from the memory 60 according to the detection task, process and analyze the images captured by the positioning camera 31 and the detection camera 40, receive the motion information sensed by the positioning measurement device 33, calculate the poses of the mounting platform 20 and the mounted positioning camera 31 and detection camera 40, and perform detection and analysis of the detection object. In the present embodiment, the processor 50 is provided on the mounting platform 20; in other embodiments according to the invention, it may instead be provided on the ground control device 10.
The processor 50 mainly includes a pose module 51 and a detection module 55. The pose module 51 includes a current pose determination unit 511, a first motion control unit 513, and a target pose calculation unit 515. The current pose determination unit 511 is configured to receive the positioning image captured by the positioning camera 31 and the sensing signal of the positioning measurement device 33, and to process them to obtain the current pose of the mounting platform 20, which is used to calculate the target detection shooting pose (optimal shooting pose) of the detection camera 40. The pose of the mounting platform 20 refers to its position and attitude in the spatial three-axis coordinate system. The current pose determination unit 511 includes a plurality of sub-units, such as, but not limited to, an image segmentation unit 5111, a depth calculation unit 5113, and a current pose correction unit 5115.
The image segmentation unit 5111 is configured to perform segmentation recognition on the positioning image captured by the positioning capture device 31 to extract a target image in the positioning image, which may be, for example, an image of the detection object or an image of a detection portion of the detection object. The image segmentation unit 5111 is configured to perform pixel-level segmentation recognition on a positioning image (an image of a motion environment) including a detection object captured by the positioning capture device 31 during movement of the mounting platform 20 toward the detection object and along a detection motion path of the detection object, determine a pixel position of the detection object in the positioning image, and extract an image of the detection object from the positioning image for calculating a relative position between the mounting platform 20 and the detection object, so as to correct the motion of the mounting platform 20 to enable the mounting platform 20 to move along the detection motion path of the detection object. In addition, the image segmentation unit 5111 is further configured to perform pixel-level segmentation recognition on the positioning image captured by the positioning capture device 31 during the movement of the mounting platform 20 along the detection motion path of the detection object, determine the pixel position of the detection site of the detection object in the positioning image, extract an image of the detection site of the detection object (e.g., the nose or the wing of the airplane) from the positioning image, and allow the target pose calculation unit 515 to calculate the target detection capture pose that the detection capture device 40 should take when capturing the detection image of the detection site.
The depth calculating unit 5113 is configured to analyze the two-dimensional positioning image captured by the positioning imaging device 31, and acquire depth information of each pixel point in the positioning image. The depth information of each pixel in the positioning image is the distance between the shooting point of the image (i.e., the positioning shooting device 31) and the spatial position corresponding to each pixel. The depth calculating unit 5113 is further configured to calculate the confidence of the pixel points of the positioning image by comparing the continuous multi-frame positioning image with the corresponding key frame, construct a spatial cost function of the positioning image, correct the depth information of the positioning image based on the spatial cost function, and construct a dense depth map of the surrounding environment, thereby obtaining the three-dimensional data of the positioning image.
The current pose correction unit 5115 is configured to correct the dense depth map based on the surrounding environment constructed by the depth calculation unit 5113 according to the design parameters of the detection object, and correct the rough pose estimation of the mounting platform 20 and the positioning imaging device 31 (for example, the rough pose estimation based on the measurement result of the positioning measurement device 33) according to the corrected dense depth map, thereby acquiring the current poses of the mounting platform 20 and the positioning imaging device 31. The pose of the positioning camera 31 is determined based on the pose of the mounting platform 20 and its relative position to the mounting platform 20 (including the mounting position, azimuth, pitch angle, etc. with respect to the mounting platform 20).
The first motion control unit 513 is configured to calibrate the motion of the mounting platform 20 so that the mounting platform 20 enters the detection motion path of the detection object and moves along the detection motion path, based on the position of the detection object and the current pose of the mounting platform 20 calculated by the current pose correction unit 5115.
The target pose calculation unit 515 is configured to determine the current pose of the inspection camera 40, determine the inspection position at which the mounting platform 20 is currently located, and calculate the target inspection shooting pose that the inspection camera 40 for shooting the inspection image of the inspection object should take, based on the relative position of the inspection camera 40 to the mounting platform 20 (including the installation position, azimuth, pitch angle, and the like with respect to the mounting platform 20) and the current poses of the mounting platform 20 and the positioning camera 31 calculated by the current pose correction unit 5115, and output the calculated target inspection shooting pose to the inspection module 55 in real time.
The inspection module 55 is configured to acquire an inspection image of the inspection object photographed by the inspection photographing device 40 and analyze a state of the inspection object according to the acquired inspection image to determine whether a defect exists and to determine a location where the defect exists and a type of the defect. The detection module 55 includes a second motion control unit 551, a detected image acquisition unit 553, and an analysis unit 555. The second motion control unit 551 is configured to receive the current pose of the mounting platform 20 from the current pose determination unit 511 of the pose module 51 and the object detection shooting pose from the object pose calculation unit 515, and control the motion of the mounting platform 20 and/or control the motion of the detection shooting device 40 relative to the mounting platform 20 to bring the detection shooting device 40 into the object detection shooting pose, based on the current pose of the mounting platform 20, the calculated object detection shooting pose, and the relative position of the detection shooting device 40 and the mounting platform 20, to improve the quality of the detection image shot by the detection shooting device 40. The inspection image acquisition unit 553 is configured to acquire an inspection image of the inspection object photographed by the inspection photographing device 40 in the object inspection photographing pose. The analysis unit 555 is configured to process and analyze the inspection image acquired by the inspection image acquisition unit 553 to determine whether the inspection object has a defect and the location and type of the defect. Preferably, the analysis unit 555 is configured to process and analyze the detection image acquired by the detection image acquisition unit 553 after the mounting platform 20 has moved around the detection object to complete the shooting of the detection object and return, without performing the analysis and processing in real time during the movement of the mounting platform 20, so that the real-time calculation amount of the processor 50 can be reduced, and the configuration requirement of the processor 50 can be reduced.
The memory 60 is configured to store various information and algorithms, including: the types of detection objects, the design parameters of each type of detection object, the detection motion path information corresponding to each type of detection object, the detection item information corresponding to each detection part of a detection object, the defect judgment criteria corresponding to each detection item, and the like. The design parameters of each detection object include the design parameters of each part of the object, the assembly parameters between parts, and the overall assembly parameters of the object; they can be based on the digital mock-up (DMU) data of the detection object. The detection motion path information is the information related to the detection motion path preset for each detection object, and includes a detection start point, the detection motion path to be followed, and a safety distance, i.e., the minimum distance that the mounting platform 20 should keep from the detection object during movement. While the mounting platform 20 moves along the detection motion path to perform detection, its movement may be corrected according to the safety distance so that it follows the detection motion path. The detection item information of each detection site of the detection object corresponds to the service manual for that detection site. For example, when the detection site is a wing of an airplane, the detection items include wing tip detection, wing edge detection, and wing airfoil detection. The defect judgment criteria are the criteria corresponding to each detection item in the repair manual of the detection object. While the device detection system 1 detects a detection object, the information and algorithms stored in the memory 60 are accessed and called by the processor 50 in a wired or wireless manner for the corresponding processing.
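Purely as an illustrative sketch of how such stored records might be organized in software (the field names and types below are invented here and are not specified by the patent):

```python
from dataclasses import dataclass, field

@dataclass
class DetectionMotionPath:
    start_point: tuple[float, float, float]      # detection start point
    waypoints: list[tuple[float, float, float]]  # path to be followed
    safety_distance_m: float                     # min platform-to-object distance

@dataclass
class DetectionObjectRecord:
    object_type: str                 # e.g. "aircraft"
    design_parameters: dict          # per-part parameters, e.g. from DMU data
    motion_path: DetectionMotionPath
    detection_items: dict = field(default_factory=dict)  # part -> item list
    defect_criteria: dict = field(default_factory=dict)  # item -> criterion
```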
A schematic block diagram of the device detection system 1 has been introduced above. The device detection system 1 according to the present invention processes the environment images captured by the positioning camera 31 during the movement of the mounting platform 20, together with the measurement results of the positioning measurement device 33, to realize simultaneous mapping and localization of the mounting platform 20. It corrects the pose estimate of the platform based on the design parameters of the detection object (e.g., its digital mock-up data), and thereby corrects the movement of the mounting platform 20 and the detection shooting pose of the detection camera 40 when shooting detection images, reducing pose drift, so that the captured detection images carry more accurate pose information and enable more accurate detection analysis.
The following will describe a detection method for detecting an aircraft using the device detection system 1 according to the present invention with an aircraft as a detection object, with reference to the drawings. Fig. 2 shows a flowchart of the detection method, and fig. 3 shows a detailed flowchart of the step of determining the current pose of the mounting platform in fig. 2.
As shown in fig. 2, when the aircraft is detected by using the device detection system 1, first, in step S101, a detection task is set and detection is started. A user sets a detection task through a user input interface of the ground control device 10, inputs a detection object to be detected as an airplane in the detection task setting interface 11, inputs a position where the airplane to be detected is located, and specifies a part of the airplane to be detected, for example, a nose, a fuselage, a wing, etc. of the airplane to be detected. Then, the start item on the start detection interface 13 is operated. Once the detection task has been set and the start item is operated, in step S102, the equipment detecting system 1 acquires design data of the aircraft (i.e., the detection object) and detection motion path information for the aircraft, including a detection start point, a detection motion path to be followed by the mounting platform 20, and a safety distance to be maintained between the mounting platform 20 and the aircraft, from the memory 60 according to the set detection task, then activates a driver of the mounting platform 20, moves the mounting platform 20 toward the aircraft, and activates the positioning photographing device 31 of the positioning sensing device 30 to photograph the positioning image, and activates the positioning measuring device 33 to measure the motion of the mounting platform 20.
During the movement of the mounting platform 20 toward the aircraft, the current pose determination unit 511 of the processor 50 determines the current pose of the mounting platform 20 in real time at step S103. The current pose of the mounting platform 20 is used to calculate the current pose of the devices (e.g., the positioning camera 31, the detection camera 40) mounted on the mounting platform 20 in the subsequent steps. For example, the current posture of the apparatus mounted on the mounting platform 20 may be determined according to the relative position between the apparatus and the mounting platform 20 (including the mounting position of the apparatus on the mounting platform 20 and the pitch angle, azimuth angle, etc. with respect to the mounting platform 20) and the current posture of the mounting platform 20.
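The composition of a carried device's pose from the platform pose and the device's relative mounting can be written compactly with 4x4 homogeneous transforms. In the sketch below, the function names and the example mount geometry are illustrative assumptions:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def device_pose(T_world_platform, T_platform_device):
    """World pose of a mounted device = platform pose composed with the
    device's mount transform (installation position, azimuth, pitch)."""
    return T_world_platform @ T_platform_device

# Example: a camera 0.2 m ahead of the platform center, pitched 30 deg down.
pitch = np.deg2rad(-30.0)
R_mount = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                    [0.0,           1, 0.0          ],
                    [-np.sin(pitch), 0, np.cos(pitch)]])
T_mount = make_pose(R_mount, np.array([0.2, 0.0, 0.0]))
T_platform = make_pose(np.eye(3), np.array([5.0, 2.0, 10.0]))
print(device_pose(T_platform, T_mount)[:3, 3])  # camera position in the world
```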
In step S104, the movement of the mounting platform 20 is corrected so that the mounting platform 20 enters the detection movement path of the aircraft and moves along the detection movement path, in accordance with the current posture of the mounting platform 20 determined in step S103. The detection movement path of the aircraft is a path along which the mounting platform 20 is moved to perform detection of the aircraft. Along the detection movement path, the mounting platform 20 and the aircraft approach each other and maintain a suitable distance therebetween, for example, a safe distance therebetween. The first motion control unit 513 of the pose module 51 acquires the relative position between the mounting platform 20 and the aircraft according to the current pose of the mounting platform 20 and the position where the aircraft is located, and controls the motion of the mounting platform 20 according to the detection motion path information for the aircraft, so that the distance between the mounting platform 20 and the detection object is maintained as a safe distance, and the mounting platform 20 enters the detection motion path of the detection object and moves to the detection starting point.
Once the mounting platform 20 enters the detection motion path of the aircraft and moves to the detection starting point, in step S105, the processor 50 processes the positioning image captured by the positioning capture device 31 during the movement of the mounting platform 20 along the detection motion path, performs pixel-level segmentation recognition on the positioning image, and determines the detection position where the mounting platform 20 is currently located. First, the image segmentation unit 5111 determines the pixel position of the airplane (i.e., the detection object) in the positioning image, performs pixel-level segmentation recognition on the positioning image including the airplane, and separates the airplane image in the positioning image from the environment image. Next, based on design data of the aircraft (e.g., digitized sample model data of the aircraft), the image segmentation unit 5111 proceeds with pixel-level image segmentation of the aircraft image, dividing the aircraft image into a plurality of parts, e.g., a nose, a wing, a fuselage, a tail, etc. Then, based on the pixel-level segmentation recognition of the aircraft image and the depth of each pixel point, the target pose calculation unit 515 determines which detection position of the aircraft the mounting platform 20 is currently at, and determines the current detection item.
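Given the pixel-level segmentation mask of one part and the per-pixel depths, the quantities used above to decide the detection position (where the part sits in the image and how far away it is) can be computed as in this sketch; the function name and the choice of a median depth are assumptions made here:

```python
import numpy as np

def locate_part(mask, depth_map):
    """Return the image centroid and a robust (median) camera distance of a
    segmented part, e.g. a wing, from its boolean mask and the depth map."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # part not visible in this positioning image
    centroid = (float(xs.mean()), float(ys.mean()))
    distance = float(np.median(depth_map[ys, xs]))
    return centroid, distance
```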
In step S106, the target pose calculation unit 515 of the processor 50 calculates and outputs the target detection shooting pose in real time. The target pose calculation unit 515 acquires the detection items corresponding to the currently detected part and the corresponding design parameter information from the memory 60, calculates in real time the target detection shooting pose that the detection camera 40 needs to take for each detection item of that part based on the relative position between the mounting platform 20 and the part, and outputs the calculated target detection shooting pose to the detection module 55. For example, when the mounting platform 20 is at the detection position corresponding to the wing, the target pose calculation unit 515 acquires the detection items of the wing, calculates in real time the target detection shooting pose required for each of those items based on the relative position between the mounting platform 20 and the wing, and outputs it to the detection module 55 in real time.
In step S107, the pose of the detection camera 40 is adjusted based on the calculated object detection camera pose. The second motion control unit 551 of the detection module 55 receives the object detection shooting pose calculated by the object pose calculation unit 515 from the pose module 51, and controls the motion of the mounting platform 20 in real time and the motion of the detection shooting device 40 relative to the mounting platform 20, for example, the pan-tilt of the detection shooting device 40, based on the object detection shooting pose, the current pose of the mounting platform 20 obtained in step S103, and the relative position of the detection shooting device 40 and the mounting platform 20, so that the detection shooting device 40 is at the object detection shooting pose of the corresponding detection item of the detection part. For example, for the detection of a wing, the target shooting pose of the detected image of the wing tip is different from the target shooting pose of the detected image of the wing edge. When a detection image of the wing edge is taken after the detection image of the wing tip of the wing is taken by the detection taking device 40, the detection taking pose of the detection taking device 40 is adjusted from the target detection taking pose taken when the wing tip is taken to the target detection taking pose when the wing edge is taken.
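For instance, the pan and tilt that a gimbal must adopt to aim the detection camera at a given point follow from the relative position alone. The sketch below assumes an x-forward, y-left, z-up platform frame, a convention chosen here rather than stated in the patent:

```python
import numpy as np

def gimbal_angles(camera_pos, target_pos):
    """Pan (yaw) and tilt (pitch), in degrees, that point the camera from
    camera_pos at target_pos; frame: x forward, y left, z up."""
    d = np.asarray(target_pos, float) - np.asarray(camera_pos, float)
    pan = np.degrees(np.arctan2(d[1], d[0]))
    tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return pan, tilt

print(gimbal_angles((0, 0, 0), (10.0, 0.0, -3.0)))  # slightly downward view
```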
In step S108, the detection image acquisition unit 553 of the detection module 55 acquires the detection image that the detection camera 40 captures at the target detection shooting pose. For the detection of the wing, for example, the detection image acquisition unit 553 acquires the detection image of the wing edge and the detection image of the wing tip, each captured by the detection camera 40 at the corresponding target detection shooting pose. Next, in step S109, it is determined whether the detection task has been completed. If the detection task is completed and no further detection site of the aircraft (i.e., the detection object) needs to be detected, the mounting platform 20 returns in step S110 and the detection ends. If the detection task is not completed and a next detection part needs to be detected, the process returns to step S103, and the subsequent steps are repeated until all detections are completed. After the mounting platform 20 has returned, the analysis unit 555 of the detection module 55 performs defect analysis based on the detection images acquired by the detection image acquisition unit 553: it processes and analyzes the detection images captured at the target detection shooting poses, compares their information with the digitized design data of the detection object (e.g., its digital mock-up data) stored in the memory 60, and determines, based on the service manual of the detection object, whether a defect exists at the detected part and the position and type of the defect.
Fig. 3 shows the detailed flow of step S103, which illustrates a method for determining the current pose of the mounting platform according to an embodiment of the present invention.
As shown in fig. 3, first, in step S1031, the processor 50 acquires the positioning image captured by the positioning camera 31 and the motion information of the mounting platform 20 measured by the positioning measurement device 33 (e.g., the inertial measurement unit). Then, in step S1032, depth estimation of the positioning image is performed based on the received positioning image and motion information. The depth calculation unit 5113 calculates the distance between the positioning camera 31 and the spatial position corresponding to each pixel of the positioning image and uses it as a rough depth value for that pixel. The depth of the pixels of the positioning image may be calculated on stereoscopic-vision principles, using for example monocular or binocular vision.
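For the binocular case, the rough depth value follows from the standard rectified-stereo relation Z = f·B/d. Below is a minimal sketch; the OpenCV matcher mentioned in the comment is one possible disparity source, not one mandated by the patent:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, eps=1e-6):
    """Rough per-pixel depth for a rectified stereo pair: Z = f * B / d.
    disparity: (H, W) array in pixels; focal_px: focal length in pixels;
    baseline_m: camera separation in meters."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

# The disparity map could come, e.g., from OpenCV semi-global matching:
#   sgbm = cv2.StereoSGBM_create(numDisparities=64, blockSize=7)
#   disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
```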
In step S1033, the processor 50 calculates a matching confidence for each pixel of consecutive frames of the positioning image. The depth calculation unit 5113 constructs, from the rough depth value of each pixel, a depth range containing that value, builds a three-dimensional space with the depth direction as one coordinate axis, divides that space into N parts by depth labels along the depth direction, and extracts the information contained in each neighborhood space, such as shooting angle, position, and pixel shift. Consecutive frames of positioning images in the shooting sequence of the positioning camera 31 are projected onto the related key-frame positioning images, and the matching confidence is calculated from the consistency of the same pixel across the consecutive frames. For example, when the positioning camera 31 captures a positioning image, environmental influences (e.g., light, shadow, or occlusion) introduce noise into the image, so the same pixel point becomes inconsistent across consecutive frames, which results in deviation; the higher the consistency, the higher the matching confidence. Among the positioning images of the shooting sequence of the positioning camera 31, a key-frame positioning image needs to satisfy at least one of the following conditions: 1) the pose angle change between consecutive frames exceeds a predetermined angle change threshold, which in one example may be 5 degrees; 2) the spatial displacement between consecutive frames exceeds a predetermined displacement threshold, which in one example is 0.1 m; and 3) the ratio of valid pixels under the affine transformation between consecutive frames is less than a valid-pixel-ratio threshold, which in one example is 0.5.
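The three key-frame conditions translate directly into a predicate. The sketch below uses the example thresholds given above (5 degrees, 0.1 m, and 0.5):

```python
def is_key_frame(d_angle_deg, d_pos_m, valid_pixel_ratio,
                 angle_thresh=5.0, pos_thresh=0.1, ratio_thresh=0.5):
    """A frame qualifies as a key frame if the pose angle change exceeds the
    angle threshold, the spatial displacement exceeds the displacement
    threshold, or the valid-pixel ratio under the affine warp falls below
    the effective-pixel-ratio threshold."""
    return (d_angle_deg > angle_thresh
            or d_pos_m > pos_thresh
            or valid_pixel_ratio < ratio_thresh)
```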
In step S1034, a spatial cost function of the positioning image is constructed through image interpolation operation based on the matching confidence of each pixel of the positioning image calculated in step S1033.
Then, in step S1035, based on the spatial cost function of the positioning image constructed in step S1034, the depth calculation unit 5113 corrects the coarse depth value of the positioning image, calculates a depth correction value of the positioning image, and corrects the depth information of the positioning image. In step S1036, based on the corrected depth information, the depth calculation unit 5113 constructs a dense depth map of the surrounding environment, and implements three-dimensional construction of a two-dimensional positioning image.
Next, in step S1037, based on the dense depth map of the surrounding environment and according to the design data of the aircraft (for example, the digitized design model data of the aircraft) called from the memory 60, the current pose correction unit 5115 corrects the pose estimation of the mounting platform 20, and acquires the current pose of the mounting platform 20, thereby achieving the vision-based synchronized composition and positioning of the mounting platform 20. Specifically, a target image (an image of an airplane or an image of some detection portion of the airplane) identified by pixel-level segmentation from a positioning image is converted pixel by pixel into a three-dimensional space coordinate system to form a three-dimensional point cloud. The three-dimensional point clouds are then divided into a plurality of groups according to design parameters of the aircraft (e.g., DMU model data of the aircraft), forming a plurality of sub-point clouds. For example, a three-dimensional point cloud is divided into a plurality of sub-point clouds by a clustering algorithm or the like according to the appearance parameter relationship of each point of the same part of the airplane. And then, matching the points in each sub-point cloud with the design parameters of the corresponding points of the airplane, and correcting the points in each sub-point cloud by using the design parameters of the airplane according to the matching result. And then, optimizing the dense depth map by using each corrected sub-point cloud to obtain a corrected dense depth map. Through the processing, the dense depth map is corrected by using the design data of the airplane, so that the more accurate synchronous composition and positioning of the carrying platform 20 based on vision are realized, and the more accurate current pose of the carrying platform 20 is obtained, so that the positioning precision of the carrying platform 20 and the detection shooting device carried by the carrying platform 20 can be improved, the quality of the detection image can be improved, and the accuracy of detection and analysis can be improved.
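The back-projection, grouping, and design-model matching just described could look like the following sketch. The pinhole back-projection, nearest-neighbour matching via a k-d tree, and snap-to-model correction are simplifying stand-ins chosen here; the patent leaves the clustering and matching algorithms open:

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth_map, mask, fx, fy, cx, cy):
    """Convert the segmented target image pixel by pixel into a 3-D point
    cloud using the pinhole model with intrinsics (fx, fy, cx, cy)."""
    v, u = np.nonzero(mask)
    z = depth_map[v, u]
    return np.column_stack(((u - cx) * z / fx, (v - cy) * z / fy, z))

def correct_subcloud(sub_cloud, design_points, max_dist=0.05):
    """Match each measured point to the nearest design-model point and pull
    it onto the model when the residual is small; the corrected sub-point
    clouds are then used to optimize the dense depth map."""
    dist, idx = cKDTree(design_points).query(sub_cloud)
    corrected = sub_cloud.copy()
    close = dist < max_dist
    corrected[close] = design_points[idx[close]]
    return corrected
```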
According to the device detection system 1, consecutive frames of positioning images are processed, a spatial cost function of the positioning image is constructed based on the confidence of its pixels, the depth information of the positioning image is corrected based on that cost function, and a dense depth map of the surrounding environment is constructed; the depth calculation of the positioning images in the shooting sequence of the positioning camera 31 is thereby optimized, and the depth calculation precision of each pixel is improved. Furthermore, the device detection system 1 combines the dense depth map of the surrounding environment with the digital design mock-up data of the detection object and further corrects the constructed dense depth map based on those data, thereby correcting the pose estimate of the mounting platform 20, realizing more accurate vision-based simultaneous mapping and localization, avoiding the deviation caused by pixel inconsistency between frames, and minimizing the pose drift of the shooting device. The positioning accuracy during device detection need not depend entirely on the quality of the captured environment images: even when the captured image is noisy due to environmental influences (such as illumination, shadow, or occlusion), the pose estimate can be corrected and the positioning accuracy improved, providing a more reliable basis for subsequent motion control, shooting pose control, and defect analysis. The device detection system 1 and its detection method are therefore applicable regardless of whether the detection object is located indoors or outdoors, and regardless of whether the outer contour or the interior of the detection object is detected.
The device detection system 1 according to the present invention has been described above with reference to the drawings, and its device detection method has been described using an aircraft as an example of the object to be detected. However, the above examples should not be taken as limiting the device detection system according to the present invention; the system can equally be applied to the detection of other equipment.
Herein, exemplary embodiments of the present invention have been described in detail, but it should be understood that the present invention is not limited to the specific embodiments described and illustrated in detail above. Various modifications and alterations of this invention will become apparent to those skilled in the art without departing from the spirit and scope of this invention. All such variations and modifications are intended to be within the scope of the present invention. Moreover, all the components described herein may be replaced by other technically equivalent components.

Claims (20)

1. A device detection system comprising:
a mounting platform adapted to move along a detection motion path of an object to be detected;
a positioning sensing device arranged on the mounting platform and comprising a positioning photographing device and a positioning measuring device, the positioning photographing device being configured to capture positioning images while the mounting platform moves, and the positioning measuring device being configured to measure the motion of the mounting platform in real time;
a detection photographing device provided on the mounting platform, movable relative to the mounting platform, and configured to capture a detection image of the object to be detected; and
a processor configured to process the positioning images, the measured values of the positioning measuring device, and the detection image in order to detect the object to be detected,
characterized in that the processor determines the pose of the mounting platform in real time according to the positioning images, the measured values, and the design parameters of the object to be detected.
2. The device detection system of claim 1, wherein the processor comprises a pose module and a detection module,
wherein the pose module comprises a current pose determination unit configured to: construct a dense depth map according to the positioning images and the measured values, correct the dense depth map according to the design parameters of the object to be detected, and determine the pose of the mounting platform according to the corrected dense depth map, and
the detection module comprises a detection image acquisition unit configured to acquire the detection image.
3. The device detection system according to claim 2, wherein the current pose determination unit is configured to: perform depth estimation of the positioning images based on the positioning images and the measured values; calculate the confidence of the pixels of a plurality of consecutive frames of positioning images; construct a spatial cost function of the positioning images; correct the depth information of the positioning images according to the spatial cost function; and construct the dense depth map based on the corrected depth information.
4. The device detection system according to claim 2, wherein the pose module further comprises a first motion control unit that controls the motion of the mounting platform in real time according to the pose of the mounting platform, so as to cause the mounting platform to enter and move along the detection motion path.
5. The device detection system according to claim 2, wherein the pose module further comprises a target pose calculation unit configured to: determine a target detection photographing pose of the detection photographing device according to the pose of the mounting platform, and send the calculated target detection photographing pose to the detection module.
6. The device detection system of claim 5, wherein the detection module further comprises a second motion control unit,
the processor is further configured to: control the motion of the mounting platform via the first motion control unit and/or control the motion of the detection photographing device relative to the mounting platform via the second motion control unit, so as to adjust the pose of the detection photographing device to the target detection photographing pose, and
the detection image acquisition unit is configured to acquire a detection image captured by the detection photographing device at the target detection photographing pose.
7. The device detection system according to claim 5, wherein the current pose determination unit is configured to: perform pixel-level segmentation recognition on the positioning images in combination with the design parameters of the object to be detected, so as to segment a target image from the positioning images, the target image being an image of the object to be detected or an image of a detection portion of the object to be detected, and
the target pose calculation unit is configured to: determine the depth information of the target image according to the corrected dense depth map, determine the detection position of the mounting platform according to the position and the depth information of the target image, and determine the current detection item and the corresponding target detection photographing pose.
8. The device detection system according to claim 7, wherein the current pose determination unit is configured to:
convert the target image pixel by pixel into a three-dimensional space coordinate system to form a three-dimensional point cloud;
divide the three-dimensional point cloud into a plurality of sub-point clouds according to the design parameters of the object to be detected;
match the points in each sub-point cloud against the design parameters of the corresponding points of the object to be detected, and correct the points in each sub-point cloud with the design parameters of the object to be detected according to the matching result; and
correct the dense depth map with each corrected sub-point cloud.
9. The device detection system according to any one of claims 1 to 8, further comprising a memory storing design parameter information, detection item information, detection motion path information, and defect judgment criteria of the object to be detected, wherein the processor communicates with the memory in a wired or wireless manner.
10. The device detection system according to any one of claims 1 to 8, further comprising a ground controller configured to set a detection task, start detection, and stop detection.
11. The device detection system of any one of claims 1 to 8, wherein the mounting platform is a drone or a detection robot.
12. A device detection method, comprising the steps of:
designating an object to be detected and setting a detection task;
moving a mounting platform to a detection motion path of the object to be detected and along the detection motion path, wherein the mounting platform is provided with a positioning sensing device comprising a positioning photographing device and a positioning measuring device, the positioning photographing device being configured to capture positioning images while the mounting platform moves, and the positioning measuring device being configured to measure the motion of the mounting platform in real time;
capturing a detection image of the object to be detected with a detection photographing device, wherein the detection photographing device is arranged on the mounting platform and is movable relative to the mounting platform; and
processing the positioning images, the measured values of the positioning measuring device, and the detection image to detect the object to be detected,
characterized in that the device detection method further comprises: determining, while the mounting platform moves to the detection motion path of the object to be detected and moves along it, the pose of the mounting platform in real time according to the positioning images, the measured values, and the design parameters of the object to be detected.
13. The device detection method of claim 12, wherein determining the pose of the mounting platform comprises: constructing a dense depth map according to the positioning images and the measured values, correcting the dense depth map according to the design parameters of the object to be detected, and determining the pose of the mounting platform according to the corrected dense depth map.
14. The device detection method of claim 13, wherein constructing the dense depth map comprises: performing depth estimation of the positioning images based on the positioning images and the measured values; calculating the confidence of the pixels of a plurality of consecutive frames of positioning images; constructing a spatial cost function of the positioning images; correcting the depth information of the positioning images according to the spatial cost function; and constructing the dense depth map based on the corrected depth information.
15. The device detection method of claim 14, further comprising: controlling, while the mounting platform moves to the detection motion path of the object to be detected and moves along it, the motion of the mounting platform in real time according to the pose of the mounting platform, so that the mounting platform follows the detection motion path.
16. The device detection method of claim 13, further comprising: determining a target detection photographing pose of the detection photographing device, and adjusting the pose of the detection photographing device to the target detection photographing pose.
17. The device detection method according to claim 16, wherein determining the target detection photographing pose of the detection photographing device comprises: performing pixel-level segmentation recognition on the positioning images in combination with the design parameters of the object to be detected, so as to segment a target image from the positioning images, the target image being an image of the object to be detected or an image of a detection portion of the object to be detected; and determining the depth information of the target image according to the corrected dense depth map, determining the detection position of the mounting platform according to the position and the depth information of the target image, and determining the current detection item and the corresponding target detection photographing pose.
18. The device detection method of claim 17, wherein correcting the dense depth map comprises:
converting the target image pixel by pixel into a three-dimensional space coordinate system to form a three-dimensional point cloud;
dividing the three-dimensional point cloud into a plurality of sub-point clouds according to the design parameters of the object to be detected;
matching the points in each sub-point cloud against the design parameters of the corresponding points of the object to be detected, and correcting the points in each sub-point cloud with the design parameters of the object to be detected according to the matching result; and
correcting the dense depth map with each corrected sub-point cloud.
19. The device detection method according to claim 16, wherein adjusting the pose of the detection photographing device to the target detection photographing pose comprises: controlling the motion of the mounting platform; and/or controlling the motion of the detection photographing device relative to the mounting platform.
20. The device detection method according to claim 16, wherein detecting the object to be detected comprises: analyzing the state of the object to be detected based on a detection image captured by the detection photographing device at the target detection photographing pose.
CN202011309798.XA 2020-11-20 2020-11-20 Device detection system and device detection method Active CN114554030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309798.XA CN114554030B (en) 2020-11-20 2020-11-20 Device detection system and device detection method

Publications (2)

Publication Number Publication Date
CN114554030A (en) 2022-05-27
CN114554030B (en) 2023-04-07

Family

ID=81660470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309798.XA Active CN114554030B (en) 2020-11-20 2020-11-20 Device detection system and device detection method

Country Status (1)

Country Link
CN (1) CN114554030B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147411A * 2022-08-30 2022-10-04 Qidong Yingwei Data Information Technology Co., Ltd. Intelligent positioning method for a labelling machine based on artificial intelligence

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150172626A1 (en) * 2012-07-30 2015-06-18 Sony Computer Entertainment Europe Limited Localisation and mapping
CN105388905A * 2015-10-30 2016-03-09 Shenzhen AEE Aviation Technology Co., Ltd. Unmanned aerial vehicle flight control method and device
CN108416840A * 2018-03-14 2018-08-17 Dalian University of Technology Dense three-dimensional scene reconstruction method based on a monocular camera
CN109341694A * 2018-11-12 2019-02-15 Harbin University of Science and Technology Autonomous positioning and navigation method for a mobile detection robot
CN109596118A * 2018-11-22 2019-04-09 HiScene (Shanghai) Information Technology Co., Ltd. Method and device for obtaining spatial position information of a target object
CN110806411A * 2019-11-07 2020-02-18 Wuhan University of Technology Unmanned aerial vehicle track detection system based on line-structured light
CN111077907A * 2019-12-30 2020-04-28 Harbin University of Science and Technology Autonomous positioning method for an outdoor unmanned aerial vehicle
CN111354043A * 2020-02-21 2020-06-30 Jimei University Three-dimensional attitude estimation method and device based on multi-sensor fusion

Also Published As

Publication number Publication date
CN114554030B (en) 2023-04-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant