CN112465808A - Substation equipment state identification method and device, inspection robot and storage medium - Google Patents

Substation equipment state identification method and device, inspection robot and storage medium

Info

Publication number
CN112465808A
Authority
CN
China
Prior art keywords
reference image
image
target device
target equipment
scanning
Prior art date
Legal status
Granted
Application number
CN202011472721.4A
Other languages
Chinese (zh)
Other versions
CN112465808B (en)
Inventor
党晓婧
刘顺桂
严艺明
陈红强
Current Assignee
Shenzhen Power Supply Bureau Co Ltd
Original Assignee
Shenzhen Power Supply Bureau Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Power Supply Bureau Co Ltd
Priority to CN202011472721.4A
Publication of CN112465808A
Application granted
Publication of CN112465808B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T 7/001 Industrial image inspection using an image reference approach
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/174 Segmentation; Edge detection involving the use of two or more images
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/10008 Still image; Photographic image from scanner, fax or copier
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20032 Median filtering
    • G06T 2207/20068 Projection on vertical or horizontal image axis
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a substation equipment state identification method and device, an inspection robot and a storage medium. The method comprises the following steps: acquiring a static part reference image and a dynamic part reference image in a target equipment reference image, together with their position relation in the target equipment reference image; carrying out three-dimensional scanning on target equipment at a specified position in a transformer substation to generate a target equipment scanning image; determining a static part scanning image in the target equipment scanning image based on the static part reference image, so as to obtain the position of the static part scanning image in the target equipment scanning image; determining a dynamic part scanning image in the target equipment scanning image according to the position of the static part scanning image in the target equipment scanning image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image; and comparing the dynamic part scanning image with the dynamic part reference image to determine whether the target equipment is in a standard state. With this method, the equipment state can be determined rapidly.

Description

Substation equipment state identification method and device, inspection robot and storage medium
Technical Field
The application relates to the technical field of data processing, and in particular to a substation equipment state identification method and device, an inspection robot and a storage medium.
Background
The intelligent inspection robot for substations integrates core technologies such as uncooled focal plane detectors, trackless laser navigation and positioning, infrared temperature measurement, intelligent meter reading and image recognition. It performs all-weather inspection, data acquisition, video monitoring, temperature and humidity measurement, air pressure monitoring and the like on power transmission and transformation equipment, improving the safe operation of that equipment. When an abnormal emergency occurs, the intelligent inspection robot can serve as a mobile monitoring platform, replacing manual work to locate equipment faults in time and reducing safety risks to personnel.
During inspection, the inspection robot moves to a specified position according to an inspection map and performs three-dimensional scanning on the target equipment at that position to obtain point cloud data of the target equipment. By analyzing the point cloud data, the state of the target equipment can be determined.
However, the volume of the point cloud data is very large, which not only occupies a large amount of storage space but also makes processing inefficient. The inspection robot therefore needs a long time to determine the state of the target equipment, and an abnormal state may not be detected and alarmed in time, so that a certain safety risk exists in the transformer substation.
Disclosure of Invention
In view of the above, it is necessary to provide a substation equipment state identification method and apparatus, an inspection robot, and a storage medium, which can quickly determine the equipment state.
A substation equipment state identification method comprises the following steps:
acquiring a static part reference image and a dynamic part reference image in a target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image; the target equipment reference image is a projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation;
carrying out three-dimensional scanning on the target equipment at a specified position in the transformer substation to generate a target equipment scanning image; wherein the target device scanning image is a projection of the target device on the projection plane in a current state;
determining a static part scanning image in the target equipment scanning image based on the static part reference image to obtain the position of the static part scanning image in the target equipment scanning image;
determining a dynamic part scanning image in the target device scanning image according to the position of the static part scanning image in the target device scanning image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image;
comparing the dynamic partial scan image and the dynamic partial reference image to determine whether the target device is in the standard state.
In one embodiment, the obtaining of the static part reference image and the dynamic part reference image in the target device reference image and the position relationship of the static part reference image and the dynamic part reference image in the target device reference image includes:
three-dimensionally scanning the target equipment in a standard state at least one position in the transformer substation to generate a reference image of the target equipment;
displaying the target device reference image;
in response to touch operation on the target equipment reference image, intercepting and storing a static part reference image and a dynamic part reference image in the target equipment reference image;
obtaining and storing the position relationship of the static part reference image and the dynamic part reference image in the target equipment reference image according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image;
alternatively,
and acquiring the stored static part reference image, the dynamic part reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image.
In one embodiment, the three-dimensional scanning of the target device at the designated position in the substation to generate a target device scanning image includes:
three-dimensional scanning is carried out on the target equipment in the current state at a specified position in the transformer substation, and point cloud data of the target equipment in the current state are obtained;
establishing a three-dimensional model of the target equipment in the current state based on the point cloud data of the target equipment in the current state;
and projecting the three-dimensional model of the target equipment in the current state on the projection plane to form a scanning image of the target equipment.
In one embodiment, before the building the three-dimensional model of the target device based on the point cloud data of the target device, the method further comprises:
filtering the point cloud data of the target equipment in the current state; and/or,
sampling the point cloud data of the target equipment in the current state and deleting the point cloud data that is not sampled.
In one embodiment, the target device reference image and the target device scan image are both JPEG formatted image files.
In one embodiment, the determining a static partial scan image in the target device scan image based on the static partial reference image comprises:
intercepting a plurality of comparison images in the target device scan image, the comparison images having the same size as the static partial reference image;
selecting the comparison image with the highest similarity to the static part reference image as the static part scan image.
In one embodiment, the method further comprises:
and if the similarity between the static part scanning image and the static part reference image is smaller than a threshold value, triggering an alarm to remind inspection personnel to check.
A substation equipment status identification apparatus, the apparatus comprising:
an acquisition module, configured to acquire a static part reference image and a dynamic part reference image in a target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image; the target equipment reference image is a projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation;
the scanning module is used for carrying out three-dimensional scanning on the target equipment at a specified position in the transformer substation to generate a target equipment scanning image; wherein the target device scanning image is a projection of the target device on the projection plane in a current state;
a static determination module, configured to determine a static partial scan image in the scan image of the target device based on the static partial reference image, and obtain a position of the static partial scan image in the scan image of the target device;
a dynamic determination module, configured to determine a dynamic part scanned image in the target device scanned image according to a position of the static part scanned image in the target device scanned image and a positional relationship between the static part reference image and the dynamic part reference image in the target device reference image;
a state determination module for comparing the dynamic partial scan image and the dynamic partial reference image to determine whether the target device is in the standard state.
An inspection robot comprising a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring a static part reference image and a dynamic part reference image in a target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image; the target equipment reference image is a projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation;
carrying out three-dimensional scanning on the target equipment at a specified position in the transformer substation to generate a target equipment scanning image; wherein the target device scanning image is a projection of the target device on the projection plane in a current state;
determining a static part scanning image in the target equipment scanning image based on the static part reference image to obtain the position of the static part scanning image in the target equipment scanning image;
determining a dynamic part scanning image in the target device scanning image according to the position of the static part scanning image in the target device scanning image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image;
comparing the dynamic partial scan image and the dynamic partial reference image to determine whether the target device is in the standard state.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a static part reference image and a dynamic part reference image in a target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image; the target equipment reference image is a projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation;
carrying out three-dimensional scanning on the target equipment at a specified position in the transformer substation to generate a target equipment scanning image; wherein the target device scanning image is a projection of the target device on the projection plane in a current state;
determining a static part scanning image in the target equipment scanning image based on the static part reference image to obtain the position of the static part scanning image in the target equipment scanning image;
determining a dynamic part scanning image in the target device scanning image according to the position of the static part scanning image in the target device scanning image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image;
comparing the dynamic partial scan image and the dynamic partial reference image to determine whether the target device is in the standard state.
According to the substation equipment state identification method and device, the inspection robot and the storage medium described above, the static part reference image and the dynamic part reference image in the target device reference image, together with their position relation in the target device reference image, are obtained first. The target device reference image is the projection of the target device in the standard state on a projection plane, and can therefore be compared with the projection of the target device in its current state on the same projection plane to determine whether the target device is in the standard state. Since the target device is arranged in the transformer substation, the target device is three-dimensionally scanned at a specified position in the substation to generate a target device scanning image, which is the projection of the target device in its current state on the projection plane. Because the projection of the static part of the target device on the projection plane is the same in different states, a static part scanning image consistent with the static part reference image can be determined in the target device scanning image, which yields the position of the static part scanning image in the target device scanning image. Because the position relation between the static part and the dynamic part of the target device is likewise the same in different states, the position relation of the static part scanning image and the dynamic part scanning image in the target device scanning image can be obtained from the position relation of the static part reference image and the dynamic part reference image in the target device reference image. Combining this with the position of the static part scanning image in the target device scanning image gives the position of the dynamic part scanning image in the target device scanning image, so that the dynamic part scanning image can be determined. Finally, because the projection of the dynamic part of the target device on the projection plane differs between states, the dynamic part scanning image is compared with the dynamic part reference image to determine whether the target device is in the standard state. In this process the objects being processed are essentially images, and the data volume of the images is smaller than that of the point cloud data, so the occupied storage space is reduced and the processing efficiency is improved. Moreover, image processing is performed only on the static part or the dynamic part, which further reduces the amount of data to be processed and further improves the processing efficiency. In addition, because the static part of the target device is the same in different states while the dynamic part differs, the scanning image is first registered to the reference image with respect to the static part and then compared with the reference image with respect to the dynamic part, which ensures the accuracy of the state determination.
Drawings
Fig. 1 is an application environment diagram of a substation equipment state identification method in an embodiment;
fig. 2 is a schematic flow chart of a substation device state identification method in an embodiment;
FIG. 3 is a diagram illustrating a reference image of a target device in one embodiment;
FIG. 4 is a diagram of a static partial reference image in one embodiment;
FIG. 5 is a diagram of a dynamic partial reference image in one embodiment;
FIG. 6 is a schematic diagram of one embodiment before the static part scan image is registered with the static part reference image;
FIG. 7 is a schematic diagram of one embodiment after the static part scan image is registered with the static part reference image;
fig. 8 is a block diagram illustrating a configuration of a substation device status identification apparatus according to an embodiment;
fig. 9 is an internal structure view of the inspection robot in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The substation equipment state identification method provided by the application can be applied to the application environment shown in fig. 1. The inspection robot 102 performs three-dimensional scanning on target equipment, such as the disconnecting switch 104, at a specified position in the substation to generate a target device scan image, which is the projection of the target device in its current state on a projection plane. The target device scan image is then compared with a target device reference image, which is the projection of the target device in the standard state on the same projection plane, to determine whether the target device is in the standard state.
A transformer substation is a facility in a power system that converts voltage, receives and distributes electric energy, controls the direction of power flow and regulates voltage; it connects power grids of different voltage levels through transformers. Various types of equipment, such as transformer equipment, switchgear, the "four small devices", reactive power devices and auxiliary devices, are arranged in the substation. The target equipment is any one of the devices arranged in the substation.
The inspection robot performs all-weather inspection of each device in the substation. During inspection, the robot moves through the substation along an inspection path. When it reaches a detection point on the path, it stops to detect the equipment there. The specified position is therefore the position on the inspection path at which the inspection robot detects the target equipment.
In one embodiment, as shown in fig. 2, a substation equipment state identification method is provided, which is described by taking the method as an example of being applied to the inspection robot in fig. 1, and includes the following steps:
step S202, a static part reference image and a dynamic part reference image in the target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image are obtained.
The target device reference image is a projection of the target device in a standard state on a projection plane. The target device is arranged in the substation. The standard state is a state in which the target device is operating normally. For example, if the target device is a disconnector, the standard state is closing of the disconnector.
The target device is divided into a static part and a dynamic part according to whether the target device is the same in different states. The static part is the same in different states and the dynamic part is different in different states. For example, a disconnector comprises a base and a blade. The base is used for being fixed in a transformer substation, the states of the base during switching on and switching off are the same, and the base belongs to a static part. The knife switch is rotatably arranged on the base, and the states of the knife switch during opening and closing are different, and the knife switch belongs to a dynamic part. And the static part reference image is an image of a static part in the target device reference image, and the dynamic part reference image is an image of a dynamic part in the target device reference image.
The position relationship between the static part reference image and the dynamic part reference image in the target device reference image refers to the position relationship between the occupied area of the static part reference image in the target device reference image and the occupied area of the dynamic part reference image in the target device reference image. For example, the distance between the location of the center of the static partial reference image on the target device reference image and the location of the center of the dynamic partial reference image on the target device reference image, and the orientation of each other.
Specifically, the inspection robot is provided with a three-dimensional scanner, and can scan the target equipment three-dimensionally in advance when the target equipment is in the standard state to obtain point cloud data of the target equipment in the standard state. Based on this point cloud data, a target device reference image may be generated. The static part reference image and the dynamic part reference image are then intercepted from the target device reference image, so that the static part reference image, the dynamic part reference image and their position relation in the target device reference image can be obtained. The inspection robot is also provided with a memory; after the static part reference image, the dynamic part reference image and the position relation are obtained as above, they can be stored in the memory, so that they can later be read directly from the memory, which is simpler, more convenient and more efficient.
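The patent leaves the exact representation of the position relation open (distance and orientation between the image centers are given only as an example). The following C++ sketch, using OpenCV types, records it as the offset vector between the two rectangle centers; the function name and types are illustrative assumptions, not part of the patent.
// Sketch (assumed representation): the position relation between the static-part
// and dynamic-part reference images is stored as the offset between the centers
// of their bounding rectangles on the target device reference image.
#include <opencv2/core.hpp>
cv::Point2f relationOffset(const cv::Rect2f& staticRect, const cv::Rect2f& dynamicRect) {
    cv::Point2f staticCenter(staticRect.x + staticRect.width * 0.5f,
                             staticRect.y + staticRect.height * 0.5f);
    cv::Point2f dynamicCenter(dynamicRect.x + dynamicRect.width * 0.5f,
                              dynamicRect.y + dynamicRect.height * 0.5f);
    return dynamicCenter - staticCenter;  // distance and orientation in one vector
}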
In this embodiment, by obtaining the static part reference image and the dynamic part reference image in the target device reference image and the position relationship between the static part reference image and the dynamic part reference image in the target device reference image, the target device reference image is a projection of the target device in a standard state on a projection plane, and may be compared with a projection of the target device in a current state on the projection plane. If the projection of the target device in the current state on the projection plane is the same as that in the standard state, the target device can be determined to be in the standard state currently; if the projection of the target device in the current state on the projection plane is different from that in the standard state, it may be determined that the target device is not currently in the standard state. And the static part reference image and the dynamic part reference image are respectively obtained from the target equipment reference image, and can be respectively processed according to different characteristics of the static part and the dynamic part in the target equipment, so that the data volume of image processing can be reduced, and the accuracy of image identification can be improved.
In addition, the target device is a three-dimensional stereo structure, the projections of which on projection planes in different directions are different. The projection plane corresponding to the reference image of the target equipment can be selected according to the position of the inspection robot relative to the target equipment. For example, if the inspection robot detects the target device in the north or south direction of the target device, the projection plane extending in the east-west direction may be selected.
And step S204, carrying out three-dimensional scanning on the target equipment at the specified position in the transformer substation to generate a scanning image of the target equipment.
The scanning image of the target device is the projection of the target device in the current state on the projection plane. The projection plane corresponding to the target device scan image is the same as or parallel to the projection plane corresponding to the target device reference image, so that the target device scan image is comparable to the target device reference image.
In addition, the specified position is a position where the inspection robot detects the target device on an inspection path in the substation.
Specifically, the inspection robot is provided with a three-dimensional scanner. When the inspection robot moves to the specified position, it performs three-dimensional scanning on the target equipment through the three-dimensional scanner to obtain point cloud data of the target equipment in the current state. Based on the point cloud data of the target device in the current state, a target device scan image may be generated.
In this embodiment, the point cloud data of the target device in the current state can be obtained by three-dimensionally scanning the target device at the designated position in the substation. And further based on the point cloud data of the target device in the current state, a scanning image of the target device can be generated. The scanning image of the target device is the projection of the target device in the current state on the projection plane, and can be compared with the projection of the target device in the standard state on the same projection plane. If the projection of the target device in the current state on the projection plane is the same as that in the standard state, the target device can be determined to be in the standard state currently; if the projection of the target device in the current state on the projection plane is different from that in the standard state, it may be determined that the target device is not currently in the standard state.
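As an illustration of how a scan image might be produced from the point cloud data, the sketch below projects a PCL point cloud onto the Y-Z plane and rasterizes it at a fixed resolution. The choice of plane, the binary intensity encoding and the parameter values are assumptions; the patent only states that the three-dimensional model is projected onto a projection plane.
// Project a point cloud onto the Y-Z plane (assumed projection plane) and
// rasterize it into an 8-bit image with a fixed resolution in metres per pixel.
#include <opencv2/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
cv::Mat projectToImage(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                       float yMin, float yMax, float zMin, float zMax,
                       float resolution /* e.g. 0.02 m per pixel (assumed) */) {
    int cols = static_cast<int>((yMax - yMin) / resolution) + 1;
    int rows = static_cast<int>((zMax - zMin) / resolution) + 1;
    cv::Mat image = cv::Mat::zeros(rows, cols, CV_8UC1);
    for (const auto& p : cloud.points) {
        if (p.y < yMin || p.y > yMax || p.z < zMin || p.z > zMax) continue;
        int c = static_cast<int>((p.y - yMin) / resolution);
        int r = static_cast<int>((zMax - p.z) / resolution);  // image rows grow downwards
        image.at<uchar>(r, c) = 255;                          // mark occupied pixels
    }
    return image;
}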
Step S206, determining the static part scanned image in the scanned image of the target device based on the static part reference image, and obtaining the position of the static part scanned image in the scanned image of the target device.
Wherein, the static part scanning image is the image of the static part in the target scanning image. The position of the static partial scan image in the scan image of the target device refers to the corresponding position of the area occupied by the static partial scan image in the scan image of the target device.
Specifically, the static portion in the target device is the same in different states, and therefore the image of the static portion in the target device reference image (i.e., the static portion reference image) should be the same as the image of the static portion in the target device scanned image (i.e., the static portion scanned image). The inspection robot searches the image with the highest similarity with the static part reference image in the target equipment scanning image to obtain the static part scanning image and the position of the static part scanning image in the target equipment scanning image.
In this embodiment, because the static part of the target device is the same in different states, the static partial reference image and the static partial scan image are assumed by default to be identical, so that the static partial scan image can be determined in the target device scan image based on the static partial reference image, and the position of the static partial scan image in the target device scan image can be obtained.
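The patent does not specify how the "image with the highest similarity" is searched; one plausible sketch uses OpenCV template matching, shown below. The TM_CCOEFF_NORMED measure and the function name are illustrative choices, not taken from the patent.
// Find the position in the target-device scan image that best matches the
// static-part reference image (normalized cross-correlation score, higher is better).
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
cv::Rect locateStaticPart(const cv::Mat& scanImage, const cv::Mat& staticRef,
                          double* similarity /* best match score, optional */) {
    cv::Mat result;
    cv::matchTemplate(scanImage, staticRef, result, cv::TM_CCOEFF_NORMED);
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    if (similarity) *similarity = maxVal;
    return cv::Rect(maxLoc, staticRef.size());  // location of the static-part scan image
}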
Step S208, determining the dynamic part scanning image in the scanning image of the target device according to the position of the static part scanning image in the scanning image of the target device and the position relation of the static part reference image and the dynamic part reference image in the reference image of the target device.
The dynamic part scanning image is an image of a dynamic part in a scanning image of the target equipment.
Specifically, if the current state of the target device is the standard state, the positional relationship of the dynamic part with respect to the static part of the target device in the current state should be the same as that in the standard state. It is therefore assumed by default that the position relation of the static part reference image and the dynamic part reference image in the target device reference image is the same as the position relation of the static part scan image and the dynamic part scan image in the target device scan image. The inspection robot can thus obtain the position relation of the static part scan image and the dynamic part scan image in the target device scan image from the position relation of the static part reference image and the dynamic part reference image in the target device reference image. Combining this with the position of the static part scan image in the target device scan image gives the position of the dynamic part scan image in the target device scan image, from which the dynamic part scan image can be obtained.
In this embodiment, the position relationship between the static part reference image and the dynamic part reference image in the target device reference image is the same as the position relationship between the static part scanned image and the dynamic part scanned image in the target device scanned image, and the position relationship between the static part scanned image and the dynamic part scanned image in the target device scanned image is obtained according to the position relationship between the static part reference image and the dynamic part reference image in the target device reference image. And combining the position of the static part scanning image in the scanning image of the target equipment to obtain the position of the dynamic part scanning image in the scanning image of the target equipment, and further obtaining the dynamic part scanning image in the scanning image of the target equipment.
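Continuing the same illustrative sketch, the stored center offset can be applied to the located static-part rectangle to derive the dynamic-part region in the scan image; as before, the representation and names are assumptions made for illustration.
// Derive the dynamic-part region in the scan image from the located static-part
// region and the center offset stored for the reference image.
#include <opencv2/core.hpp>
cv::Rect locateDynamicPart(const cv::Rect& staticScanRect,
                           const cv::Point2f& referenceOffset,  // from relationOffset()
                           const cv::Size& dynamicRefSize) {
    cv::Point2f staticCenter(staticScanRect.x + staticScanRect.width * 0.5f,
                             staticScanRect.y + staticScanRect.height * 0.5f);
    cv::Point2f dynamicCenter = staticCenter + referenceOffset;
    return cv::Rect(static_cast<int>(dynamicCenter.x - dynamicRefSize.width * 0.5f),
                    static_cast<int>(dynamicCenter.y - dynamicRefSize.height * 0.5f),
                    dynamicRefSize.width, dynamicRefSize.height);
}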
Step S210, comparing the dynamic part scan image and the dynamic part reference image, and determining whether the target device is in a standard state.
Specifically, the inspection robot may first determine the similarity between the dynamic partial scan image and the dynamic partial reference image. If the similarity between the dynamic part scanning image and the dynamic part reference image is larger than or equal to the threshold value, the inspection robot determines that the target equipment is in a standard state; and if the similarity between the dynamic part scanning image and the dynamic part reference image is smaller than the threshold value, the inspection robot determines that the target equipment is not in the standard state.
Illustratively, Fast Point Feature Histograms (FPFH) may be used to determine the similarity of the dynamic partial scan image and the dynamic partial reference image.
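FPFH descriptors are defined on point clouds, so one reading is that they are computed on the point subsets corresponding to the dynamic-part regions (the "roi cloud" sizes that appear in the log excerpt later in this description). A minimal PCL sketch for one such cloud is given below; the search radii are assumptions, and how the two descriptor sets are finally compared (for example by averaging the 33-bin signatures) is likewise left open by the patent.
// Compute FPFH descriptors for one dynamic-part point cloud; the same is done
// for the reference cloud and the two descriptor sets are then compared.
#include <pcl/features/fpfh.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
pcl::PointCloud<pcl::FPFHSignature33>::Ptr computeFPFH(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.03);      // assumed normal-estimation radius (metres)
    ne.compute(*normals);
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr features(new pcl::PointCloud<pcl::FPFHSignature33>);
    pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
    fpfh.setInputCloud(cloud);
    fpfh.setInputNormals(normals);
    fpfh.setSearchMethod(tree);
    fpfh.setRadiusSearch(0.05);    // assumed; must exceed the normal-estimation radius
    fpfh.compute(*features);
    return features;
}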
In this embodiment, the dynamic partial scan image is obtained from the projection of the target device in the current state on the projection plane, and the dynamic partial reference image is obtained from the projection of the target device in the standard state on the same projection plane by comparing the dynamic partial scan image with the dynamic partial reference image. If the dynamic part scanning image is the same as the dynamic part reference image, the current state of the target equipment is a standard state; if the dynamic part scanning image and the dynamic part reference image are different, the current state of the target device is not the standard state. Therefore, comparing the dynamic partial scan image with the dynamic partial reference image, it can be determined whether the target device is in a normal state.
In the above substation equipment state identification method, the static part reference image and the dynamic part reference image in the target device reference image, together with their position relation in the target device reference image, are obtained first. The target device reference image is the projection of the target device in the standard state on a projection plane, and can therefore be compared with the projection of the target device in its current state on the same projection plane to determine whether the target device is in the standard state. Since the target device is arranged in the transformer substation, the target device is three-dimensionally scanned at a specified position in the substation to generate a target device scanning image, which is the projection of the target device in its current state on the projection plane. Because the projection of the static part of the target device on the projection plane is the same in different states, a static part scanning image consistent with the static part reference image can be determined in the target device scanning image, which yields the position of the static part scanning image in the target device scanning image. Because the position relation between the static part and the dynamic part of the target device is likewise the same in different states, the position relation of the static part scanning image and the dynamic part scanning image in the target device scanning image can be obtained from the position relation of the static part reference image and the dynamic part reference image in the target device reference image. Combining this with the position of the static part scanning image in the target device scanning image gives the position of the dynamic part scanning image in the target device scanning image, so that the dynamic part scanning image can be determined. Finally, because the projection of the dynamic part of the target device on the projection plane differs between states, the dynamic part scanning image is compared with the dynamic part reference image to determine whether the target device is in the standard state. In this process the objects being processed are essentially images, and the data volume of the images is smaller than that of the point cloud data, so the occupied storage space is reduced and the processing efficiency is improved. Moreover, image processing is performed only on the static part or the dynamic part, which further reduces the amount of data to be processed and further improves the processing efficiency. In addition, because the static part of the target device is the same in different states while the dynamic part differs, the scanning image is first registered to the reference image with respect to the static part and then compared with the reference image with respect to the dynamic part, which ensures the accuracy of the state determination.
In one embodiment, acquiring the static part reference image and the dynamic part reference image in the target device reference image and the position relationship of the static part reference image and the dynamic part reference image in the target device reference image comprises: three-dimensionally scanning target equipment in a standard state at least one position in a transformer substation to generate a reference image of the target equipment; displaying a target device reference image; in response to the touch operation on the target equipment reference image, intercepting and storing a static part reference image and a dynamic part reference image in the target equipment reference image; and obtaining and storing the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image.
At least one position in the substation is a position around the target device so as to perform three-dimensional scanning on the target device.
Illustratively, the at least one location within the substation may comprise a designated location within the substation.
Specifically, the inspection robot is provided with a three-dimensional scanner, so that when the target equipment is in a standard state, the target equipment can be scanned in a three-dimensional manner to obtain point cloud data of the target equipment in the standard state. Based on the point cloud data of the target device in the standard state, a target device reference image may be generated. And displaying the target equipment reference image to a user, wherein the user can set the area of the static part reference image and the area of the dynamic part reference image on the target equipment reference image through touch operation, so that the inspection robot responds to the touch operation on the target equipment reference image and intercepts the static part reference image and the dynamic part reference image from the target equipment reference image. And according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image, the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image can be obtained. In addition, a memory is arranged in the inspection robot, and the static part reference image, the dynamic part reference image, and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image can be stored in the memory, so that the inspection robot can be directly called from the memory when in subsequent acquisition.
For example, as shown in fig. 3, when the disconnecting switch 302 is in the standard state, the inspection robot performs three-dimensional scanning on the disconnecting switch 302 at a specified position to obtain point cloud data of the disconnecting switch 302 in the standard state. Based on this point cloud data, a three-dimensional model of the disconnector in the standard state can be established. If projection planes in different directions are selected, the resulting projection images of the same part differ. Fig. 3 shows the projection of the three-dimensional model onto one projection plane.
Illustratively, a coordinate system as shown in FIG. 3 is established; the horizontal range of the three-dimensional scan is 1.8675-2.0944, the vertical range of the three-dimensional scan is -0.1047-0.2618, and the precision of the three-dimensional scan is 1.0 mm/10 m. The point cloud data of the isolator in the standard state obtained at this time is 13076 KB, and the point cloud data of the isolator in the current state is 13492 KB. The resolution of the image is 2 cm.
The static partial reference image may be truncated from the target device reference image based on a range of coordinates of the static partial reference image on the target device reference image.
Illustratively, the coordinate range of the static partial reference image on the target device reference image is stored in an XML file:
<?xml version="1.0"?>
<opencv_storage>
<RegistRect>235.85611510791367 174.44146079484426 119.13669064748202
182.70676691729324</RegistRect>
<RegistRect>197.0071942446043 167.91621911922664 59.56834532374103
181.2567132116004</RegistRect>
</opencv_storage>
As shown in fig. 4, the image in the box 402 is cut out in the target device reference image as the static part reference image according to the coordinate range of the static part reference image on the target device reference image.
Accordingly, the dynamic partial reference image may be truncated from the target device reference image based on the coordinate range of the dynamic partial reference image on the target device reference image.
Illustratively, the coordinate range of the dynamic partial reference image on the target device reference image is stored in prior.
<?xml version="1.0"?>
<opencv_storage>
<PositiveRect>279.0215827338129 196.91729323308272 24.17266187050359
30.451127819548873</PositiveRect>
</opencv_storage>
As shown in fig. 5, the image in the box 502 is cut out from the reference image of the target device as the dynamic part reference image according to the coordinate range of the dynamic part reference image on the reference image of the target device.
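The snippets above use OpenCV's XML storage layout, so the rectangles can be read back with cv::FileStorage, as in the sketch below. The file names, the assumption that the four numbers are x, y, width and height, and the use of only the first RegistRect node are illustrative choices, not taken from the patent.
// Read one rectangle node (four space-separated numbers) from the XML file
// and crop that region from the target-device reference image.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>
cv::Rect readRect(const cv::FileNode& node) {
    std::vector<float> v;
    for (const auto& n : node) v.push_back(static_cast<float>(n.real()));
    CV_Assert(v.size() == 4);  // assumed order: x, y, width, height
    return cv::Rect(static_cast<int>(v[0]), static_cast<int>(v[1]),
                    static_cast<int>(v[2]), static_cast<int>(v[3]));
}
int main() {
    cv::FileStorage fs("rects.xml", cv::FileStorage::READ);       // assumed file name
    cv::Mat reference = cv::imread("target_reference.jpg");       // assumed file name
    cv::Mat staticRef  = reference(readRect(fs["RegistRect"]));    // static-part reference image
    cv::Mat dynamicRef = reference(readRect(fs["PositiveRect"]));  // dynamic-part reference image
    std::cout << staticRef.size() << " " << dynamicRef.size() << std::endl;
    return 0;
}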
For example, the interception of the static part reference image and the dynamic part reference image from the target device reference image by the inspection robot is reflected in the following log output:
I0703 11:28:54.746832 9336 faro_interface.cpp:235] The cloud size: before pass filter=2083549;
after pass filter=1115779
I0703 11:28:54.809198 9336 isolator_detection.cpp:78] The registration image limits:
x=197.007-256.576
y=235.856-354.993
z=174.441-357.148
I0703 11:28:54.824820 9336 isolator_detection.cpp:91] The detection image roi:
y=279.022-303.194
z=196.917-227.368
Illustratively, the size of the static partial reference image is 2194 KB and the size of the static partial scan image is 2407 KB. The size of the dynamic partial reference image is 144 KB and the size of the dynamic partial scan image is 138 KB. It can be seen that the amount of data processed is greatly reduced.
In another embodiment, acquiring the static part reference image and the dynamic part reference image in the target device reference image and the position relationship of the static part reference image and the dynamic part reference image in the target device reference image comprises: and acquiring the stored static part reference image, the dynamic part reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image.
Specifically, if the memory stores the static part reference image and the dynamic part reference image, and the position relationship between the static part reference image and the dynamic part reference image in the target device reference image, the static part reference image, the dynamic part reference image, and the position relationship between the static part reference image and the dynamic part reference image in the target device reference image are directly acquired from the memory, which is simple, convenient and efficient.
In one embodiment, before building the three-dimensional model of the target device based on the point cloud data of the target device, the method further comprises: and filtering the point cloud data of the target equipment in the current state.
Filtering is an operation of filtering out specific band frequencies in a signal, and is an important measure for suppressing and preventing interference.
In this embodiment, noise exists in the point cloud data obtained by scanning due to influences of factors such as scanning device defects, environmental interference, target device properties, and the like in the three-dimensional scanning process. By filtering the point cloud data, noise in the point cloud data can be filtered, the accuracy of the point cloud data is improved, and occupied storage space is reduced.
Specifically, the inspection robot can filter out random noise in the point cloud data by adopting median filtering. The inspection robot can filter discrete noise in the point cloud data by adopting K neighbor mean filtering.
In another embodiment, before building the three-dimensional model of the target device based on the point cloud data of the target device, the method further comprises: sampling is carried out in the point cloud data of the target equipment in the current state, and the point cloud data which is not sampled is deleted.
Here, sampling is to replace an original continuous signal in time with a sequence of signal samples at regular intervals, that is, to discretize an analog signal in time.
In this embodiment, the data volume of the point cloud data is large, so that the direct processing causes a large calculation load, and the feature extraction is inconvenient. Through sampling, the point cloud data can be simplified, redundant data are eliminated on the basis of keeping original characteristics of the point cloud data, processing efficiency is improved, and occupied storage space is reduced.
In particular, the point cloud data may be randomly sampled or curvature sampled.
In yet another embodiment, before building the three-dimensional model of the target device based on the point cloud data of the target device, the method further comprises: filtering point cloud data of target equipment in the current state; sampling is carried out in the point cloud data of the target equipment in the current state, and the point cloud data which is not sampled is deleted.
In this embodiment, the point cloud data is filtered first to filter the noise. And then, point cloud data is sampled, and only characteristic data are reserved, so that the data volume can be reduced to the greatest extent, the occupied storage space is reduced, and the processing efficiency is improved.
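A minimal PCL sketch of this filter-then-sample preprocessing is given below. Statistical outlier removal is used here as a stand-in for the median and K-neighbor mean filtering named above, random sampling stands in for the sampling step, and all parameter values are assumptions.
// Preprocess the scanned cloud: remove noisy points based on their K nearest
// neighbours, then keep only a random subset of the remaining points.
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/random_sample.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
pcl::PointCloud<pcl::PointXYZ>::Ptr preprocess(
        const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr denoised(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(raw);
    sor.setMeanK(50);               // assumed number of neighbours
    sor.setStddevMulThresh(1.0);    // assumed outlier threshold
    sor.filter(*denoised);
    pcl::PointCloud<pcl::PointXYZ>::Ptr sampled(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::RandomSample<pcl::PointXYZ> rs;
    rs.setInputCloud(denoised);
    rs.setSample(static_cast<unsigned int>(denoised->size() / 5));  // assumed sampling ratio
    rs.filter(*sampled);
    return sampled;
}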
For example, the number of individual processes may be as follows:
I0703 11:28:55.090415 9336 isolator_detection.cpp:227] Before filtered: target cloud size=1151279, source cloud size=1115779
I0703 11:28:55.090415 9336 isolator_detection.cpp:228] After filtered: target cloud size=231201, source cloud size=227820
I0703 11:28:55.528152 9336 isolator_detection.cpp:256] coarse aligned fitness score: 0.00167768
I0703 11:28:55.543776 9336 isolator_detection.cpp:257] NDT matching time=0.397936s
I0703 11:28:55.575017 9336 isolator_detection.cpp:267] fine aligned fitness score: 0.0015699
I0703 11:28:55.575017 9336 isolator_detection.cpp:268] GICP matching time=0.0338196s
I0703 11:28:55.590641 9336 isolator_detection.cpp:312] The target roi cloud size: 12784
I0703 11:28:55.590641 9336 isolator_detection.cpp:313] The source roi cloud size: 12349
I0703 11:28:55.590641 9336 CmdRelay.cpp:96] The detection result=97.6011%
For the scanned image, the size of the point cloud data before filtering is 1151279, and after filtering it is reduced to 231201. For the reference image, the size of the point cloud data before filtering is 1115779, and after filtering it is reduced to 227820. It follows that filtering and sampling can significantly reduce the data volume.
In one embodiment, the target device reference image and the target device scan image are both image files in JPEG (Joint Photographic Experts Group) format.
Compared with point cloud data formed by three-dimensional scanning, the image file in the JPEG format is much smaller, the occupied storage space can be reduced to one sixth of the original storage space, the data volume of processing is greatly reduced, and the processing efficiency is improved by at least 10 times.
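As a small illustration of this storage choice, a projection image can be written as a JPEG with OpenCV; the file name and quality setting below are assumptions.
// Store the projected image as a JPEG file instead of keeping the raw point cloud.
#include <opencv2/imgcodecs.hpp>
#include <vector>
bool saveAsJpeg(const cv::Mat& image) {
    std::vector<int> params = {cv::IMWRITE_JPEG_QUALITY, 90};   // assumed quality setting
    return cv::imwrite("target_reference.jpg", image, params);  // assumed file name
}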
Specifically, the inspection robot may store the target device reference image, the static part coordinate range (the registration range in Table 1) and the dynamic part coordinate range (the identification range in Table 1) as shown in Table 1 below:
Table 1: Storage files of the inspection robot
All of the data is stored in binary form in the file.
In one embodiment, determining a static partial scan image in a target device scan image based on a static partial reference image comprises: intercepting a plurality of comparison images with the same size as the static part reference image in a scanning image of the target equipment; and selecting the comparison image with the highest similarity with the static part reference image as the static part scanning image.
In this embodiment, in view of external factors such as weather and road conditions and internal factors such as limited navigation accuracy, there may be a deviation, for example ±10 mm, between the actual detection position of the inspection robot and the specified position, which may cause a change in the scale and/or the viewing angle of the target device scan image relative to the target device reference image. As shown in fig. 6, if the static partial scan image were simply cut out of the target device scan image at the position of the static partial reference image in the target device reference image, the static partial scan image 602 would not coincide with the static partial reference image 604. Therefore, it is necessary to search the target device scan image for the static partial scan image with the highest similarity to the static partial reference image. As shown in fig. 7, after this search the static part scan image and the static part reference image in block 702 substantially coincide. In this example, the registration takes 0.43 s and the registration accuracy is 0.033.
Specifically, the deviation range between the target device scanned image and the target device reference image is limited, and a plurality of comparison images with the same size as the static part reference image can be intercepted in the target device scanned image according to the position of the static part reference image in the target device reference image, wherein the deviation between the position of each comparison image in the target device scanned image and the position of the static part reference image in the target device reference image is within a set range. Therefore, under the condition of ensuring certain accuracy, the processing amount of data is greatly reduced, and the processing efficiency is improved.
For example, the position of the static partial scan image on the scan image of the target device may be initially estimated by using a Random Sample Consensus (RANSAC) algorithm, and then accurately determined by using an Iterative Closest Point (ICP) algorithm.
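One possible realisation of the coarse RANSAC estimate on the projected images is sketched below; ORB features and a RANSAC-fitted homography are assumptions (the patent does not fix the feature type), and the subsequent ICP refinement is only indicated in a comment:

```python
# Illustrative coarse estimate of the static part's position using feature
# matching and RANSAC; ORB and the homography model are assumptions.
import cv2
import numpy as np

scan = cv2.imread("target_device_scan.jpg", cv2.IMREAD_GRAYSCALE)
static_ref = cv2.imread("static_part_reference.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(static_ref, None)
kp_scan, des_scan = orb.detectAndCompute(scan, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ref, des_scan)
matches = sorted(matches, key=lambda m: m.distance)[:100]

src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_scan[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects mismatched feature pairs while fitting the transform.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("coarse transform estimate:\n", H)
# An ICP step (e.g. iterating nearest-neighbour correspondences on the inlier
# point sets) could then refine this initial estimate.
```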
Of course, every comparison image of the same size as the static part reference image may instead be cropped from the entire target device scan image, which ensures the accuracy of the static part scan image.
In one embodiment, the method further comprises: and if the similarity between the static part scanned image and the static part reference image is smaller than a threshold value, triggering an alarm to remind the inspection personnel to check.
In this embodiment, if the similarity between the static part scan image and the static part reference image is smaller than the threshold, the difference between the two is too large. In this case, the deviation may not be caused by external factors such as weather and road conditions or by internal factors such as limited navigation accuracy; instead, the inspection robot itself may have a fault, or the target device may have a serious problem such as foundation subsidence, overturning or deformation. Triggering an alarm reminds the inspection personnel to check, so that the problem can be handled in time and the target device and the inspection carried out by the inspection robot can operate normally.
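As an illustrative continuation of the restricted-search sketch above, the alarm decision could be a simple threshold check on the best similarity score; the threshold value and the trigger_alarm hook are hypothetical:

```python
# Hypothetical threshold check; SIMILARITY_THRESHOLD and trigger_alarm are
# illustrative names, not defined by the patent.
SIMILARITY_THRESHOLD = 0.8

def check_static_part(best_score, trigger_alarm):
    if best_score < SIMILARITY_THRESHOLD:
        # Large mismatch: possible robot fault, foundation subsidence,
        # overturning or deformation -- ask inspection personnel to check.
        trigger_alarm("static part mismatch, similarity %.3f" % best_score)
```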
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a substation equipment state identification device, including: an acquisition module 801, a scanning module 802, a static determination module 803, a dynamic determination module 804, and a status determination module 805, wherein:
an obtaining module 801, configured to obtain a static part reference image and a dynamic part reference image in a target device reference image, and a position relationship between the static part reference image and the dynamic part reference image in the target device reference image; the target device reference image is a projection of the target device in a standard state on a projection plane, and the target device is arranged in the substation.
The scanning module 802 is configured to perform three-dimensional scanning on a target device at a specified position in a substation to generate a target device scanning image; the scanning image of the target device is the projection of the target device in the current state on the projection plane.
A static determination module 803, configured to determine a static part scan image in the scan image of the target device based on the static part reference image, and obtain a position of the static part scan image in the scan image of the target device.
And the dynamic determination module 804 is configured to determine the dynamic part scanned image in the target device scanned image according to the position of the static part scanned image in the target device scanned image and the positional relationship between the static part reference image and the dynamic part reference image in the target device reference image.
A state determination module 805 for comparing the dynamic part scan image and the dynamic part reference image to determine whether the target device is in a standard state.
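For illustration only, the comparison performed by the state determination module could be realised as a single normalized-correlation score with a decision threshold; both the metric and the 0.9 threshold are assumptions, not prescribed by the patent:

```python
# Illustrative dynamic part comparison; the metric and threshold are assumptions.
import cv2

def is_in_standard_state(dyn_scan, dyn_ref, threshold=0.9):
    """Both inputs are equal-sized grayscale images of the dynamic part."""
    # With equal-sized images, matchTemplate returns a single correlation score.
    score = cv2.matchTemplate(dyn_scan, dyn_ref, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= threshold
```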
The substation state identification device firstly acquires a static part reference image and a dynamic part reference image in a target equipment reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image. The target device reference image is a projection of the target device in the standard state on a projection plane, and may be compared with a projection of the target device in the current state on the same projection plane to determine whether the target device is in the standard state. The target equipment is arranged in the transformer substation, so that the target equipment is subjected to three-dimensional scanning at a specified position in the transformer substation to generate a target equipment scanning image, and the target equipment scanning image is the projection of the target equipment in the current state on a projection plane. And determining a static part scanning image which is consistent with the static part reference image in the scanning image of the target device by using the condition that the projection of the static part in the target device in different states on the projection plane is the same, thereby obtaining the position of the static part scanning image in the scanning image of the target device. And then, obtaining the position relation of the static part scanning image and the dynamic part scanning image in the target equipment scanning image according to the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image by using the same position relation of the static part and the dynamic part in the target equipment in different states. And then combining the position of the static part scanning image in the scanning image of the target device to obtain the position of the dynamic part scanning image in the scanning image of the target device, thereby determining the dynamic part scanning image in the scanning image of the target device. And finally, comparing the dynamic part scanned image with the dynamic part reference image by using different projections of the dynamic part in the target equipment in different states on a projection plane, so as to determine whether the target equipment is in a standard state. In the process, the processing objects are basically images, and the data volume of the images is smaller than that of the point cloud data, so that the occupied storage space can be reduced, and the processing efficiency is improved. And the image processing is only carried out on the static part or the dynamic part, so that the data volume of the processing is further reduced, and the processing efficiency is improved. In addition, by using the fact that the static parts in the target equipment in different states are the same and the dynamic parts are different, the scanning image and the reference image are registered aiming at the static parts, and then the scanning image and the reference image are compared aiming at the dynamic parts, so that the accuracy of state determination can be ensured.
In one embodiment, the acquisition module 801 includes a generation unit, a display unit, a touch unit, and a determination unit, wherein: the generating unit is used for carrying out three-dimensional scanning on target equipment in a standard state at least one position in the transformer substation to generate a reference image of the target equipment; a display unit for displaying a target device reference image; the touch unit is used for intercepting and storing a static part reference image and a dynamic part reference image in the target equipment reference image in response to the touch operation on the target equipment reference image; and the determining unit is used for obtaining and storing the position relationship of the static part reference image and the dynamic part reference image in the target equipment reference image according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image.
In another embodiment, the obtaining module 801 is configured to obtain the stored static part reference image, the stored dynamic part reference image, and the position relationship between the static part reference image and the dynamic part reference image in the target device reference image.
In one embodiment, the scanning module 802 includes a scanning unit, an establishing unit and a forming unit, wherein: the scanning unit is configured to perform three-dimensional scanning on the target device in the current state at a specified position in the substation to obtain point cloud data of the target device in the current state; the establishing unit is configured to establish a three-dimensional model of the target device in the current state based on the point cloud data of the target device in the current state; and the forming unit is configured to project the three-dimensional model of the target device in the current state onto the projection plane to form the target device scan image.
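A minimal sketch of the projection step performed by the forming unit, assuming an orthographic projection onto the x-z plane and a fixed pixel size (both assumptions; the patent does not fix the projection plane or resolution):

```python
# Illustrative orthographic projection of a point cloud onto a plane to form a
# 2-D scan image; the plane choice, 5 mm pixel size and binary output are assumptions.
import numpy as np

def project_to_image(points, pixel_size=5.0):
    """points: (N, 3) array of x, y, z coordinates in millimetres."""
    xz = points[:, [0, 2]]                         # drop the depth (y) axis
    mins = xz.min(axis=0)
    cols_rows = np.floor((xz - mins) / pixel_size).astype(int)
    width, height = cols_rows.max(axis=0) + 1
    image = np.zeros((height, width), dtype=np.uint8)
    image[cols_rows[:, 1], cols_rows[:, 0]] = 255  # mark occupied pixels
    return image
```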
In one embodiment, the apparatus further comprises a filtering module and/or a sampling module, wherein: the filtering module is used for filtering the point cloud data of the target equipment in the current state before establishing a three-dimensional model of the target equipment based on the point cloud data of the target equipment; and the sampling module is used for sampling in the point cloud data of the target equipment in the current state and deleting the point cloud data which is not sampled before the three-dimensional model of the target equipment is established based on the point cloud data of the target equipment.
In one embodiment, the target device reference image and the target device scan image are both JPEG formatted image files.
In one embodiment, the static determination module 803 includes an intercepting unit and a selection unit, wherein: the intercepting unit is configured to intercept, in the target device scan image, a plurality of comparison images with the same size as the static part reference image; and the selection unit is configured to select the comparison image with the highest similarity to the static part reference image as the static part scan image.
In one embodiment, the apparatus further comprises an alert module, wherein: and the alarm module is used for triggering an alarm when the similarity between the static part scanning image and the static part reference image is smaller than a threshold value so as to remind the inspection personnel to check.
For specific limitations of the substation equipment state identification device, reference may be made to the above limitations of the substation equipment state identification method, which are not repeated here. All or part of the modules in the substation equipment state identification device may be implemented by software, hardware or a combination thereof. Each of the above modules may be embedded in hardware form in, or independent of, a processor in the inspection robot, or stored in software form in a memory of the inspection robot, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, an inspection robot is provided, the internal structure of which may be as shown in fig. 9. The inspection robot includes a processor, a memory, a communication interface, a display screen and an input device that are connected through a system bus. The processor of the inspection robot provides computing and control capabilities. The memory of the inspection robot includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the inspection robot is used for wired or wireless communication with an external terminal, and the wireless communication may be implemented through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements a substation equipment state identification method. The display screen of the inspection robot may be a liquid crystal display screen or an electronic ink display screen, and the input device of the inspection robot may be a touch layer covering the display screen, a button, a trackball or a touchpad provided on the housing of the inspection robot, or an external keyboard, touchpad, mouse or the like.
Those skilled in the art will appreciate that the structure shown in fig. 9 is a block diagram of only part of the structure related to the solution of the present application and does not constitute a limitation on the inspection robot to which the solution is applied; a particular inspection robot may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided an inspection robot comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program: acquiring a static part reference image and a dynamic part reference image in a target equipment reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image; the target equipment reference image is the projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation; carrying out three-dimensional scanning on target equipment at a specified position in a transformer substation to generate a target equipment scanning image; the scanning image of the target equipment is the projection of the target equipment in the current state on a projection plane; determining a static part scanning image in the target equipment scanning image based on the static part reference image to obtain the position of the static part scanning image in the target equipment scanning image; determining a dynamic part scanning image in the target equipment scanning image according to the position of the static part scanning image in the target equipment scanning image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image; and comparing the dynamic part scanning image with the dynamic part reference image to determine whether the target equipment is in a standard state.
In one embodiment, the processor, when executing the computer program, further performs the steps of: three-dimensionally scanning target equipment in a standard state at least one position in a transformer substation to generate a reference image of the target equipment; displaying a target device reference image; in response to the touch operation on the target equipment reference image, intercepting and storing a static part reference image and a dynamic part reference image in the target equipment reference image; obtaining and storing the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image; alternatively, the stored static part reference image, the dynamic part reference image, and the positional relationship of the static part reference image and the dynamic part reference image in the target device reference image are acquired.
In one embodiment, the processor, when executing the computer program, further performs the steps of: three-dimensional scanning is carried out on target equipment in the current state at a specified position in a transformer substation, and point cloud data of the target equipment in the current state are obtained; establishing a three-dimensional model of the target equipment in the current state based on the point cloud data of the target equipment in the current state; and projecting the three-dimensional model of the target equipment in the current state on a projection plane to form a scanning image of the target equipment.
In one embodiment, the processor, when executing the computer program, further performs the steps of: filtering point cloud data of target equipment in the current state; and/or sampling in the point cloud data of the target equipment in the current state, and deleting the point cloud data which is not sampled.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the target device reference image and the target device scan image are both image files in JPEG format.
In one embodiment, the processor, when executing the computer program, further performs the steps of: intercepting a plurality of comparison images with the same size as the static part reference image in a scanning image of the target equipment; and selecting the comparison image with the highest similarity with the static part reference image as the static part scanning image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and if the similarity between the static part scanned image and the static part reference image is smaller than a threshold value, triggering an alarm to remind the inspection personnel to check.
The inspection robot firstly acquires a static part reference image and a dynamic part reference image in a target equipment reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image. The target device reference image is a projection of the target device in the standard state on a projection plane, and may be compared with a projection of the target device in the current state on the same projection plane to determine whether the target device is in the standard state. The target equipment is arranged in the transformer substation, so that the target equipment is subjected to three-dimensional scanning at a specified position in the transformer substation to generate a target equipment scanning image, and the target equipment scanning image is the projection of the target equipment in the current state on a projection plane. And determining a static part scanning image which is consistent with the static part reference image in the scanning image of the target device by using the condition that the projection of the static part in the target device in different states on the projection plane is the same, thereby obtaining the position of the static part scanning image in the scanning image of the target device. And then, obtaining the position relation of the static part scanning image and the dynamic part scanning image in the target equipment scanning image according to the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image by using the same position relation of the static part and the dynamic part in the target equipment in different states. And then combining the position of the static part scanning image in the scanning image of the target device to obtain the position of the dynamic part scanning image in the scanning image of the target device, thereby determining the dynamic part scanning image in the scanning image of the target device. And finally, comparing the dynamic part scanned image with the dynamic part reference image by using different projections of the dynamic part in the target equipment in different states on a projection plane, so as to determine whether the target equipment is in a standard state. In the process, the processing objects are basically images, and the data volume of the images is smaller than that of the point cloud data, so that the occupied storage space can be reduced, and the processing efficiency is improved. And the image processing is only carried out on the static part or the dynamic part, so that the data volume of the processing is further reduced, and the processing efficiency is improved. In addition, by using the fact that the static parts in the target equipment in different states are the same and the dynamic parts are different, the scanning image and the reference image are registered aiming at the static parts, and then the scanning image and the reference image are compared aiming at the dynamic parts, so that the accuracy of state determination can be ensured.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a static part reference image and a dynamic part reference image in a target equipment reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image; the target equipment reference image is the projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation; carrying out three-dimensional scanning on target equipment at a specified position in a transformer substation to generate a target equipment scanning image; the scanning image of the target equipment is the projection of the target equipment in the current state on a projection plane; determining a static part scanning image in the target equipment scanning image based on the static part reference image to obtain the position of the static part scanning image in the target equipment scanning image; determining a dynamic part scanning image in the target equipment scanning image according to the position of the static part scanning image in the target equipment scanning image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image; and comparing the dynamic part scanning image with the dynamic part reference image to determine whether the target equipment is in a standard state.
In one embodiment, the computer program when executed by the processor further performs the steps of: three-dimensionally scanning target equipment in a standard state at least one position in a transformer substation to generate a reference image of the target equipment; displaying a target device reference image; in response to the touch operation on the target equipment reference image, intercepting and storing a static part reference image and a dynamic part reference image in the target equipment reference image; obtaining and storing the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image; alternatively, the stored static part reference image, the dynamic part reference image, and the positional relationship of the static part reference image and the dynamic part reference image in the target device reference image are acquired.
In one embodiment, the computer program when executed by the processor further performs the steps of: three-dimensional scanning is carried out on target equipment in the current state at a specified position in a transformer substation, and point cloud data of the target equipment in the current state are obtained; establishing a three-dimensional model of the target equipment in the current state based on the point cloud data of the target equipment in the current state; and projecting the three-dimensional model of the target equipment in the current state on a projection plane to form a scanning image of the target equipment.
In one embodiment, the computer program when executed by the processor further performs the steps of: filtering point cloud data of target equipment in the current state; and/or sampling in the point cloud data of the target equipment in the current state, and deleting the point cloud data which is not sampled.
In one embodiment, the computer program when executed by the processor further performs the steps of: the target device reference image and the target device scan image are both image files in JPEG format.
In one embodiment, the computer program when executed by the processor further performs the steps of: intercepting a plurality of comparison images with the same size as the static part reference image in a scanning image of the target equipment; and selecting the comparison image with the highest similarity with the static part reference image as the static part scanning image.
In one embodiment, the computer program when executed by the processor further performs the steps of: and if the similarity between the static part scanned image and the static part reference image is smaller than a threshold value, triggering an alarm to remind the inspection personnel to check.
The storage medium acquires a static part reference image and a dynamic part reference image in a target device reference image, and the position relation of the static part reference image and the dynamic part reference image in the target device reference image. The target device reference image is a projection of the target device in the standard state on a projection plane, and may be compared with a projection of the target device in the current state on the same projection plane to determine whether the target device is in the standard state. The target equipment is arranged in the transformer substation, so that the target equipment is subjected to three-dimensional scanning at a specified position in the transformer substation to generate a target equipment scanning image, and the target equipment scanning image is the projection of the target equipment in the current state on a projection plane. And determining a static part scanning image which is consistent with the static part reference image in the scanning image of the target device by using the condition that the projection of the static part in the target device in different states on the projection plane is the same, thereby obtaining the position of the static part scanning image in the scanning image of the target device. And then, obtaining the position relation of the static part scanning image and the dynamic part scanning image in the target equipment scanning image according to the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image by using the same position relation of the static part and the dynamic part in the target equipment in different states. And then combining the position of the static part scanning image in the scanning image of the target device to obtain the position of the dynamic part scanning image in the scanning image of the target device, thereby determining the dynamic part scanning image in the scanning image of the target device. And finally, comparing the dynamic part scanned image with the dynamic part reference image by using different projections of the dynamic part in the target equipment in different states on a projection plane, so as to determine whether the target equipment is in a standard state. In the process, the processing objects are basically images, and the data volume of the images is smaller than that of the point cloud data, so that the occupied storage space can be reduced, and the processing efficiency is improved. And the image processing is only carried out on the static part or the dynamic part, so that the data volume of the processing is further reduced, and the processing efficiency is improved. In addition, by using the fact that the static parts in the target equipment in different states are the same and the dynamic parts are different, the scanning image and the reference image are registered aiming at the static parts, and then the scanning image and the reference image are compared aiming at the dynamic parts, so that the accuracy of state determination can be ensured.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage or the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, such combinations should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that several variations and improvements can be made by those of ordinary skill in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A substation equipment state identification method, applied to an inspection robot, the method comprising:
acquiring a static part reference image and a dynamic part reference image in a target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image; the target equipment reference image is a projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation;
carrying out three-dimensional scanning on the target equipment at a specified position in the transformer substation to generate a target equipment scanning image; wherein the target device scanning image is a projection of the target device on the projection plane in a current state;
determining a static part scanning image in the target equipment scanning image based on the static part reference image to obtain the position of the static part scanning image in the target equipment scanning image;
determining a dynamic part scanning image in the target device scanning image according to the position of the static part scanning image in the target device scanning image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image;
comparing the dynamic partial scan image and the dynamic partial reference image to determine whether the target device is in the standard state.
2. The method according to claim 1, wherein the obtaining of the static part reference image and the dynamic part reference image in the target device reference image and the position relationship of the static part reference image and the dynamic part reference image in the target device reference image comprises:
three-dimensionally scanning the target equipment in a standard state at least one position in the transformer substation to generate a reference image of the target equipment;
displaying the target device reference image;
in response to touch operation on the target equipment reference image, intercepting and storing a static part reference image and a dynamic part reference image in the target equipment reference image;
obtaining and storing the position relationship of the static part reference image and the dynamic part reference image in the target equipment reference image according to the position of the static part reference image in the target equipment reference image and the position of the dynamic part reference image in the target equipment reference image;
alternatively,
and acquiring the stored static part reference image, the dynamic part reference image and the position relation of the static part reference image and the dynamic part reference image in the target equipment reference image.
3. The method of claim 1, wherein the three-dimensional scanning of the target device at the designated location within the substation to generate a target device scan image comprises:
three-dimensional scanning is carried out on the target equipment in the current state at a specified position in the transformer substation, and point cloud data of the target equipment in the current state are obtained;
establishing a three-dimensional model of the target equipment in the current state based on the point cloud data of the target equipment in the current state;
and projecting the three-dimensional model of the target equipment in the current state on the projection plane to form a scanning image of the target equipment.
4. The method of claim 3, wherein prior to the building of the three-dimensional model of the target device based on the point cloud data of the target device, the method further comprises:
filtering the point cloud data of the target equipment in the current state; and/or the presence of a gas in the gas,
and sampling in the point cloud data of the target equipment in the current state, and deleting the point cloud data which is not sampled.
5. The method according to any one of claims 1 to 4, wherein the target device reference image and the target device scan image are both image files in JPEG format.
6. The method of any one of claims 1 to 4, wherein determining a static partial scan image in the target device scan image based on the static partial reference image comprises:
intercepting a plurality of comparison images in the target device scan image, the comparison images having the same size as the static partial reference image;
selecting the comparison image with the highest similarity to the static part reference image as the static part scan image.
7. The method of any one of claims 1 to 4, further comprising:
and if the similarity between the static part scanning image and the static part reference image is smaller than a threshold value, triggering an alarm to remind inspection personnel to check.
8. A substation equipment state identification device, characterized in that the device includes:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a static part reference image and a dynamic part reference image in a target device reference image and the position relation of the static part reference image and the dynamic part reference image in the target device reference image; the target equipment reference image is a projection of target equipment in a standard state on a projection plane, and the target equipment is arranged in a transformer substation;
the scanning module is used for carrying out three-dimensional scanning on the target equipment at a specified position in the transformer substation to generate a target equipment scanning image; wherein the target device scanning image is a projection of the target device on the projection plane in a current state;
a static determination module, configured to determine a static partial scan image in the scan image of the target device based on the static partial reference image, and obtain a position of the static partial scan image in the scan image of the target device;
a dynamic determination module, configured to determine a dynamic part scanned image in the target device scanned image according to a position of the static part scanned image in the target device scanned image and a positional relationship between the static part reference image and the dynamic part reference image in the target device reference image;
a state determination module for comparing the dynamic partial scan image and the dynamic partial reference image to determine whether the target device is in the standard state.
9. An inspection robot comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011472721.4A 2020-12-15 2020-12-15 Substation equipment state identification method, device, inspection robot and storage medium Active CN112465808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011472721.4A CN112465808B (en) 2020-12-15 2020-12-15 Substation equipment state identification method, device, inspection robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011472721.4A CN112465808B (en) 2020-12-15 2020-12-15 Substation equipment state identification method, device, inspection robot and storage medium

Publications (2)

Publication Number Publication Date
CN112465808A true CN112465808A (en) 2021-03-09
CN112465808B CN112465808B (en) 2023-07-11

Family

ID=74804724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011472721.4A Active CN112465808B (en) 2020-12-15 2020-12-15 Substation equipment state identification method, device, inspection robot and storage medium

Country Status (1)

Country Link
CN (1) CN112465808B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743825A (en) * 2022-06-09 2022-07-12 武汉黉门电工科技有限公司 Isolating switch and method for monitoring on-off state of isolating switch

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105356619A (en) * 2015-12-10 2016-02-24 国网四川省电力公司天府新区供电公司 Anti-misoperation method for transformer station based image recognition
CN110045364A (en) * 2019-03-01 2019-07-23 上海大学 Dynamic target tracking and static object detection system and method based on the identification of gradual fault image feature
US20200104626A1 (en) * 2018-10-02 2020-04-02 International Business Machines Corporation Construction of an efficient representation for a three-dimensional (3d) compound object from raw video data
CN111353507A (en) * 2020-02-26 2020-06-30 国网江苏省电力有限公司电力科学研究院 Image recognition method and device for oil stains on surface of transformer substation device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105356619A (en) * 2015-12-10 2016-02-24 国网四川省电力公司天府新区供电公司 Anti-misoperation method for transformer station based image recognition
US20200104626A1 (en) * 2018-10-02 2020-04-02 International Business Machines Corporation Construction of an efficient representation for a three-dimensional (3d) compound object from raw video data
CN110045364A (en) * 2019-03-01 2019-07-23 上海大学 Dynamic target tracking and static object detection system and method based on the identification of gradual fault image feature
CN111353507A (en) * 2020-02-26 2020-06-30 国网江苏省电力有限公司电力科学研究院 Image recognition method and device for oil stains on surface of transformer substation device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHOU-YIN LU ET AL.: "Mobile Robot for Power Substation Inspection: A Survey", IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 4, pages 830-847 *
CHEN Xiangyu et al.: "Prospects for Big Data Analysis Applications in State Assessment of New-Generation Smart Substation Equipment", Electric Power Information and Communication Technology, vol. 14, no. 8, pages 46-51 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743825A (en) * 2022-06-09 2022-07-12 武汉黉门电工科技有限公司 Isolating switch and method for monitoring on-off state of isolating switch

Also Published As

Publication number Publication date
CN112465808B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN110799989A (en) Obstacle detection method, equipment, movable platform and storage medium
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
CN109190573B (en) Ground detection method applied to unmanned vehicle, electronic equipment and vehicle
JP2008134161A (en) Position attitude measuring method, and position attitude measuring device
JP2017102672A (en) Geographic position information specification system and geographic position information specification method
CN113971653A (en) Target detection method, device and equipment for remote sensing image and storage medium
CN112465808A (en) Substation equipment state identification method and device, inspection robot and storage medium
CN116152863A (en) Personnel information identification method and device, electronic equipment and storage medium
CN112418038A (en) Human body detection method, human body detection device, electronic equipment and medium
KR101874968B1 (en) Visibility measuring system base on image information and method for using the same
CN112598737A (en) Indoor robot positioning method and device, terminal equipment and storage medium
CN111524108B (en) Transformer substation equipment detection method and equipment
CN114943809A (en) Map model generation method and device and storage medium
KR20220018396A (en) Methods and apparatuses, systems, electronic devices, storage media and computer programs for associating objects
CN116704037B (en) Satellite lock-losing repositioning method and system based on image processing technology
CN110427911A (en) A kind of Approach for road detection, device, equipment and storage medium
CN115294204B (en) Outdoor target positioning method and system
CN115802004B (en) Laboratory construction monitoring method and system
CN117422754B (en) Method for calculating space distance of substation near-electricity operation personnel based on instance segmentation, readable storage medium and electronic equipment
CN116704138B (en) Method and device for establishing oblique photography three-dimensional model
KR102351117B1 (en) Method for providing sand loss information, server and system using the same
KR102493171B1 (en) Method and system for anomaly detection of IoT meteorological sensor based on integration of sensor data
CN114255264B (en) Multi-base-station registration method and device, computer equipment and storage medium
CN115019290A (en) Instrument reading identification method and device and storage medium
CN117557494A (en) Unmanned aerial vehicle photovoltaic inspection positioning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant