CN116338396A - Acoustic-optical combined imaging method for partial discharge source - Google Patents

Acoustic-optical combined imaging method for partial discharge source

Info

Publication number
CN116338396A
CN116338396A (application CN202310417944.8A)
Authority
CN
China
Prior art keywords
electrical equipment
partial discharge
image
information
imaging method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310417944.8A
Other languages
Chinese (zh)
Inventor
夏兆俊
赵成
倪楠
张忠
孙立成
赖玮
范洋洋
熊丽辉
尚宝
景阳
葛虎
朱永彬
胡文超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
MaAnshan Power Supply Co of State Grid Anhui Electric Power Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
MaAnshan Power Supply Co of State Grid Anhui Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd and MaAnshan Power Supply Co of State Grid Anhui Electric Power Co Ltd
Priority to CN202310417944.8A
Publication of CN116338396A
Legal status: Pending

Classifications

    • G01R 31/12, 31/1227 — Testing dielectric strength or breakdown voltage; testing or monitoring effectiveness or level of insulation, e.g. using partial discharge measurements; of components, parts or materials
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 — Optical measuring arrangements for measuring length, width or thickness
    • G01B 11/26 — Optical measuring arrangements for measuring angles or tapers; for testing the alignment of axes
    • G01N 21/88 — Investigating the presence of flaws or contamination by optical means
    • G01S 15/06 — Systems using reflection or reradiation of acoustic waves; determining the position data of a target
    • G01S 15/08 — Systems for measuring distance only
    • G01S 15/86 — Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • H04N 23/55 — Optical parts specially adapted for electronic image sensors; mounting thereof

Abstract

The invention discloses an acousto-optic combined imaging method for a partial discharge source, applied in the technical field of acousto-optic combination. The method comprises the following steps: S1, scanning the surface of electrical equipment with an optical camera to acquire image information of the equipment surface; S2, analyzing the image information obtained in S1 to identify and accurately locate defects and contamination on the equipment surface; S3, monitoring the interior of the electrical equipment with a partial discharge ultrasonic array sensor to acquire partial discharge signals; S4, integrating the surface image information acquired in S1 with the internal partial discharge signals acquired in S3 to obtain integrated data; and S5, displaying the integrated data on a visual interface to obtain the discharge position on the electrical equipment. The invention achieves comprehensive, accurate, efficient and real-time monitoring of electrical equipment.

Description

Acoustic-optical combined imaging method for partial discharge source
Technical Field
The invention relates to the technical field of acousto-optic combination, in particular to an acousto-optic combined imaging method for a partial discharge source.
Background
Partial discharge is a localized discharge phenomenon occurring inside electrical equipment, mainly caused by damage to or contamination of the dielectric within the equipment; sustained partial discharge can lead to equipment failure. To avoid such failures, electrical equipment must be monitored and any partial discharge located.
Ultrasonic detection techniques have been used for many years to detect partial discharge in electrical equipment. In recent years, with the development of ultrasonic array technology, partial discharge ultrasonic array positioning has become a more accurate, sensitive and efficient monitoring method. By exploiting the high resolution of the ultrasonic array sensor, the technique precisely locates partial discharge signals, improving both detection accuracy and positioning accuracy. Moreover, ultrasonic array positioning can monitor several devices or locations simultaneously, increasing monitoring efficiency and coverage.
At present the technique is widely applied in fields such as electric power, aerospace and metallurgy. Ultrasonic array positioning can detect and locate partial discharge in equipment such as high-voltage test transformers and generators, providing a reliable technical means for the safe operation of the power industry. It is also applied in the petroleum and natural gas industries to monitor partial discharge in pipelines, oil tanks and similar equipment, ensuring their safe operation.
Partial discharge ultrasonic array positioning can locate partial discharge inside electrical equipment with high precision. However, the technique only reports conditions inside the equipment; it cannot detect defects or contamination on the equipment surface. To improve the monitoring effect, ultrasonic array positioning can be combined with vision fusion technology to monitor both the surface and the interior of the equipment comprehensively.
Vision fusion technology is already widely applied in fields such as robotics, automation equipment and aviation. It fuses image information from multiple sources, improving detection accuracy and reliability. Combined with partial discharge ultrasonic array positioning, it enables comprehensive monitoring of both the interior and the surface of equipment.
Specifically, a camera-equipped mechanical arm can scan the electrical equipment to acquire image information of the equipment surface, and an image recognition algorithm can identify and locate surface defects, contamination and similar conditions. Combining ultrasonic array positioning with vision fusion then integrates the monitoring information from the interior and the surface of the equipment, improving the accuracy and reliability of partial discharge monitoring.
This fusion technique can be applied widely to the monitoring, repair and maintenance of electrical equipment. For example, in the monitoring of important electrical equipment such as high-voltage test transformers and generators, it can locate partial discharge with high precision while monitoring the surface condition of the equipment in real time, improving equipment safety and reliability.
Therefore, providing an acousto-optic combined imaging method for a partial discharge source that overcomes the difficulties of the prior art is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides an acousto-optic combined imaging method for a partial discharge source that enables maintenance personnel to find fault points quickly, locate them accurately and display them visually.
In order to achieve the above object, the present invention provides the following technical solution:
an acousto-optic combined imaging method for a partial discharge source, comprising the following steps:
S1, scanning the surface of electrical equipment with an optical camera to acquire image information of the equipment surface;
S2, analyzing the image information obtained in S1 to identify and accurately locate defects and contamination on the equipment surface;
S3, monitoring the interior of the electrical equipment with a partial discharge ultrasonic array sensor to acquire partial discharge signals;
S4, integrating the surface image information acquired in S1 with the internal partial discharge signals acquired in S3 to obtain integrated data;
S5, displaying the integrated data on a visual interface to obtain the discharge position on the electrical equipment.
Optionally, the visual interface provides fault early warning.
Optionally, the image information of the equipment surface acquired by the optical camera in S1 includes: position information and attitude information of the target object.
Optionally, in S1 the image information of the equipment surface is acquired by the optical camera, and a computer vision algorithm estimates the pose parameters of the target object, including the rotation angle and the translation vector.
Optionally, the monitoring information in S4 specifically includes: measuring the distance between the target object and the optical camera with the ultrasonic array sensor while calculating their relative position and angle.
Optionally, a Kalman filtering algorithm integrates the object pose information acquired by the optical camera with the distance information acquired by the ultrasonic array sensor, and the final object position and pose information is obtained by weighted averaging of the integrated data.
Compared with the prior art, the acousto-optic combined imaging method for a partial discharge source disclosed by the invention has the following beneficial effects:
1) Comprehensive monitoring: vision fusion acquires the surface information of the equipment while partial discharge ultrasonic array positioning acquires its internal information; fusing the two enables comprehensive monitoring of the electrical equipment;
2) Improved detection accuracy: vision fusion identifies surface defects, contamination and similar conditions, while ultrasonic array positioning accurately locates partial discharge signals inside the equipment; fusing the two greatly improves monitoring accuracy;
3) Higher monitoring efficiency: scanning with an optical camera greatly improves monitoring efficiency, and vision fusion can monitor several devices or locations at once, further increasing efficiency and coverage;
4) Real-time monitoring: fusing the surface and internal information in real time allows abnormal conditions such as partial discharge to be discovered and handled promptly, avoiding failure of the electrical equipment;
5) Reduced cost: compared with traditional monitoring methods, the combined application of ultrasonic array positioning and vision fusion greatly reduces monitoring cost.
In summary, combining partial discharge ultrasonic array positioning with vision fusion achieves comprehensive, accurate, efficient and real-time monitoring of electrical equipment, providing a reliable monitoring technique for fields such as electric power, aerospace and metallurgy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are clearly only some embodiments of the present invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the acousto-optic combined imaging method for a partial discharge source provided by the invention;
FIG. 2 is a schematic diagram of the overall system structure of the acousto-optic combined imaging method for a partial discharge source provided by the invention;
FIG. 3 is a schematic diagram of the ultrasonic array signal positioning process provided by the invention;
FIG. 4 is a schematic diagram of the noise-reduction and interference-resistance processing of signals by the ultrasonic array provided by the invention;
FIG. 5 is a diagram of the determination of the coordinate system provided by the invention;
FIG. 6 is a simplified side view of the image intersecting the sphere field of view provided by the invention;
FIG. 7 is a schematic diagram of the frontal sphere field-of-view matrix grid provided by the invention;
FIG. 8 is a simplified vertical view of the sphere;
FIG. 9 is a simplified horizontal view of the image intersecting the sphere field of view provided by the invention;
FIG. 10 is a vertical section of the simplified image intersecting the sphere field of view;
FIG. 11 is a cross-sectional view of the simplified image intersecting the sphere field of view provided by the invention;
FIG. 12 is a diagram of the horizontal and vertical distances of a detection point from the image center point provided by the invention;
FIG. 13 is a schematic view of a cross-section of the sphere field of view provided by the invention;
FIG. 14 is a comparison diagram of the coincidence ratio of matrix images provided by the invention;
FIG. 15 is a comparative illustration of matrix images provided by the invention;
FIG. 16 is a schematic view of a cross-section of the fields of view of coincident images provided by the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the invention discloses an acousto-optic combined imaging method for a partial discharge source, comprising the following steps:
S1, scanning the surface of electrical equipment with an optical camera to acquire image information of the equipment surface;
S2, analyzing the image information obtained in S1 to identify and accurately locate defects and contamination on the equipment surface;
S3, monitoring the interior of the electrical equipment with a partial discharge ultrasonic array sensor to acquire partial discharge signals;
S4, integrating the surface image information acquired in S1 with the internal partial discharge signals acquired in S3 to obtain integrated data;
S5, displaying the integrated data on a visual interface to obtain the discharge position on the electrical equipment.
Further, the method further comprises: performing fault early warning on the visual interface.
Referring to fig. 2, an overall schematic diagram of the system is shown, in which the ultrasonic array signal is integrated with the image signal acquired by the optical camera.
Further, the image information of the equipment surface acquired by the optical camera in S1 includes: position information and attitude information of the target object.
Further, in S1 the image information of the equipment surface is acquired by the optical camera, and a computer vision algorithm estimates the pose parameters of the target object, including the rotation angle and the translation vector.
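By way of illustration, the following is a minimal sketch of such a pose estimation using OpenCV's solvePnP, assuming a calibrated camera (known intrinsic matrix) and known 3D-2D point correspondences on the equipment surface; all numeric values and variable names here are hypothetical, not taken from this application.

```python
import numpy as np
import cv2

# Hypothetical 3D reference points on the equipment surface (metres)
# and their detected 2D pixel positions in the camera image.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [422.0, 338.0],
                         [318.0, 340.0]], dtype=np.float64)

# Assumed camera intrinsics (focal length, principal point), no lens distortion.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# solvePnP estimates the rotation (as a rotation vector) and the translation
# vector of the target object relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
rotation_matrix, _ = cv2.Rodrigues(rvec)  # rotation angle as a 3x3 matrix
print("rotation vector:", rvec.ravel())
print("translation vector:", tvec.ravel())
```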
Further, the monitoring information in S4 specifically includes: measuring the distance between the target object and the optical camera with the ultrasonic array sensor while calculating their relative position and angle.
Specifically, the simultaneous calculation of the relative position and angle in S4 proceeds as follows.
Basic conditions:
the camera and the detection probe share the same rotating machine; the camera is 5 cm from the rotation origin, between the detection probe and the origin; the detection probe samples continuously;
the minimum motor rotation angle is 0.36 degrees, and the camera takes pictures at a fixed angular interval.
The coordinate system determination is shown in fig. 5:
An angular coordinate system (a, b, r) based on the motor rotation angle is constructed on top of the spatial coordinate system:
origin: the motor rotation pivot is the origin of the coordinate system;
x-axis: the motor initially lies in a horizontal plane, and its initial pointing direction is taken as the x-axis;
y-axis: the vertical line through the origin is the y-axis;
a: the line joining the acquisition point to the origin intersects the horizontal plane; its angle in the vertical direction is a;
b: the projection of the acquisition point onto the horizontal plane, joined to the origin, lies in the horizontal plane; its angle with the x-axis is b;
r: the distance from the origin to the image center; this length is temporarily negligible, so the coordinates simplify to (a, b).
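For orientation, the helper below converts an angular coordinate (a, b) into a unit direction vector in the spatial coordinate system just defined (x along the initial motor direction, y vertical); this is plain spherical trigonometry offered as a sketch, not code from the application.

```python
import math

def direction_from_angles(a_deg: float, b_deg: float) -> tuple:
    # a: elevation above the horizontal plane, degrees.
    # b: azimuth from the x-axis within the horizontal plane, degrees.
    a = math.radians(a_deg)
    b = math.radians(b_deg)
    x = math.cos(a) * math.cos(b)  # along the initial motor pointing direction
    z = math.cos(a) * math.sin(b)  # horizontal component perpendicular to x
    y = math.sin(a)                # vertical component
    return (x, y, z)

print(direction_from_angles(0.0, 0.0))  # (1.0, 0.0, 0.0): the initial x-axis
```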
Image interval rotation angle:
Conditions:
The motor control precision is 0.36 degrees, i.e., the minimum motor rotation angle is 0.36 degrees, and every rotation angle is an integer multiple of 0.36. The plane images are captured in a longitude-and-latitude pattern, as shown in fig. 6, and fine-tuned as shown in fig. 7: the equator level lies at the center point of an image, the image cross-section at the equator is square, and moving toward the poles the images gradually become trapezoidal, until they are triangular at the poles.
1) Calculating the minimum rotation angle of the camera:
Given the minimum motor rotation angle of 0.36 degrees, and requiring that when the camera rotates to the position closest to the pole the image corners intersect at the pole, the following relation is obtained:
a = 0.36 × m, n = 90 / a − 0.5
with m an integer in the range (1, 500) and n a positive integer.
Iterating over m in code yields the selectable (m, n) values (4, 62) and (100, 2).
This gives two results for a:
m = 4, a = 1.44: a semicircle yields 125 image sections;
m = 100, a = 36: a semicircle yields 5 image sections.
Results:
The best option for a is therefore 1.44, i.e., the camera captures one image frame per 1.44-degree rotation. The detection track is a sphere; in the initial state the x-axis passes through the center point of the square image, and the camera rotates 1.44 degrees at a time in both the horizontal and the vertical direction.
Mapping detection points to images:
Assumed conditions:
positions of both the image acquisition and the temperature acquisition data are marked by coordinate rotation angles.
The angular coordinates are written (x, y): x is the rotation angle in the vertical direction, with upward rotation positive and downward negative; y is the rotation angle in the horizontal direction, positive to the right of the x-axis and negative to the left. Rotation angles lie in [−180, +180].
The image center point, the edge corner points and the temperature acquisition data can all be expressed in angular coordinates.
Beyond the angular coordinates, each image-acquisition position is marked by a two-dimensional vector; for example, the image centered on the initial positive x-axis direction is (0, 0). Moving horizontally to the right gives (1, 0), (2, 0), …, (124, 0); horizontally to the left (−1, 0), (−2, 0), …, (−124, 0); vertically up (0, 1), (0, 2), …, (0, 124); and vertically down (0, −1), (0, −2), …, (0, −124).
The problems to be solved are:
determining the two-dimensional vector coordinates of the image corresponding to a detection point at arbitrary coordinates, i.e., confirming which image the detection point falls in by confirming the vector coordinates;
after the image corresponding to the detection point is determined, confirming the pixel position within that image corresponding to the detection point;
after these two confirmations, determining the angle mapping from detection point to image in the same way as the motor rotation angle relation.
The calculation steps are as follows:
1) From the angular coordinates of the detection point, calculate the two-dimensional vector coordinates of its corresponding image; i.e., with ax the angle between the motor rotation position and the horizontal plane, and ay the angle from the vertical section through the x-axis, calculate the coordinate position represented by the two-dimensional vector of the image onto which the detection point maps.
First judge the quadrant from the signs of the angular coordinates: ax positive means above the origin's horizontal plane, negative below; ay positive means to the right of the vertical section through the x-axis, negative to the left. There are 4 cases in total; one is taken as an example:
assume angular coordinates (+ax, +ay): the detection point lies above the origin's horizontal plane and to the right of the vertical section through the x-axis, so the two-dimensional vector coordinates of the corresponding image are both positive.
The two-dimensional vector coordinates are obtained by rounding down:
m = ⌊ax / 1.44⌋, n = ⌊ay / 1.44⌋,
i.e., the two-dimensional vector coordinates are (m, n).
Taking actual data as an example, for a detection point with angular coordinates (100, 100) the two-dimensional vector calculation is:
m = ⌊100 / 1.44⌋ = ⌊69.4⌋ = 69, n = ⌊100 / 1.44⌋ = 69.
The two-dimensional vector coordinates of the image corresponding to the angular coordinates (100, 100) are therefore (69, 69), representing the picture taken after the camera has rotated upwards 69 times and to the right 69 times from its initial origin position.
2) Calculating the angular coordinates of the detection point within its corresponding image from the angular coordinates of the detection point.
The rotation angle at which the detection point maps into the image is calculated in the vertical and the horizontal direction, continuing with the angular coordinates of the previous step as the example.
Referring to fig. 8, the vertical direction: the motor rotates from the initial position to the detection point position through a number of fixed angular intervals.
Detection point position: let ax be the rotation angle, x the rotation angle within the image, and m the two-dimensional vector position of the image; the rotation angle of the detection point within the image is calculated as:
ax + 0.72 − 1.44 × m = x (0.72 degrees being half the 1.44-degree image interval, so x is measured from the image's bottom edge)
With ax = 100 and m = 69: x = 1.36.
That is, in the vertical direction of the image, the detection point lies 1.36 degrees up from the bottom edge.
Referring to fig. 9, the horizontal direction:
in the schematic of rotation in the horizontal section, rotation to the right of the x-axis is positive. Let y be the rotation angle within the image and n the two-dimensional vector position of the image; the rotation angle of the detection point within the image is calculated as:
ay + 0.72 − 1.44 × n = y
With ay = 100 and n = 69: y = 1.36.
That is, in the horizontal direction of the image, the detection point lies 1.36 degrees from the image edge.
From the above calculations, measured from the initial position direction, the rotation angle of the pixel corresponding to the detection point within its image is (1.36, 1.36); the same formulas give the result for pixels in the other quadrants.
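Pulling steps 1) and 2) together, the sketch below maps a detection point's angular coordinates (ax, ay) to its image index (m, n) and in-image angles (x, y) for the positive quadrant, reproducing the worked example above; the function name is ours, and the 0.72-degree offset is half the 1.44-degree image interval.

```python
import math

STEP = 1.44        # camera rotation per image, degrees
HALF = STEP / 2    # half the angular width of one image

def map_detection_point(ax: float, ay: float) -> tuple:
    # Image index (two-dimensional vector coordinates) by rounding down.
    m = math.floor(ax / STEP)
    n = math.floor(ay / STEP)
    # In-image rotation angles, measured from the image edge.
    x = ax + HALF - STEP * m
    y = ay + HALF - STEP * n
    return (m, n), (x, y)

mn, xy = map_detection_point(100.0, 100.0)
print(mn, xy)  # (69, 69) and approximately (1.36, 1.36), as in the worked example
```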
3) Calculating the position at which the detection point maps into the image.
The calculation steps are:
a: deriving the formula for the vertical section;
b: deriving the formula for the horizontal section;
c: obtaining the overlap position where adjacent actual images partially intersect.
The position of a detection point in one quadrant is taken as the example; the calculation proceeds as follows, starting with the vertical direction:
Referring to fig. 10, the vertical section:
b1 and b2 are projection interfaces of the captured image at different distances; the interface intersecting the spherical track is b1, and v1 is the intersection of the detection probe ray with the image.
Let the vertical segment length from the origin to b1 equal r, a0a2 = r1, and a0v1 = rx, where rx is the distance of the detection point from the center point.
The formula at this step (given as an image in the original publication) relates the detection point's corner in the vertical-direction image to the image center through rx, r and r1; the unknown variable r1 is obtained subsequently. The horizontal cross-section is calculated next, see fig. 11:
The distance a01v2 in fig. 11, i.e., the distance of the detection point from the image center point in the horizontal direction, is calculated as follows.
The length o2v1 is calculated from the right-hand diagram of fig. 11. Let ax be the rotation angle of the detection point in the horizontal direction, rx the distance from the detection point's mapping point to the image center in the horizontal direction (obtained in the previous step), and rv the distance from the origin to the intersection of the detection point ray with the image in the vertical direction; rv can be calculated from the cosine of the included angle together with oa0, where the length of oa0 is the distance measured by the detection probe.
rv = oa0 × cos(included angle) (given as formula images in the original publication)
o2v1 = rv × sin(90° − ax)
Referring again to fig. 11, let the angle of the included angle a01-o-v2 in the top view equal ax2; ax2 is obtained from the previous step's calculation. Combining this with the result already obtained, oa01 = o2v1, the tangent relation gives:
tan(ax2) = v2a01 / oa01
v2a01 = tan(ax2) × rv × sin(90° − ax)
The variables in the final formula are ax2, rv and ax; the angles ax2 and ax are obtained from the previous calculations, and rv from the cosine of the included angle. The two calculations above give the horizontal and the vertical distance between the detection point and the image center point, shown in fig. 12:
because the intersection of the image with the detection point's track forms a circle whose center is also the center of the cross-section, the mapping position of the detection point within the image is conveniently calculated with the center point as the datum. Referring to fig. 12, the detection point's mapping point t is located by the lengths ac and ab.
Combining the steps above locates the image into which the detection point maps, relative to the center point; but the basis of the calculation is the track length of the detection point, and the actual length and width of the image have not yet entered the calculation.
See fig. 13:
The calculated result lies in plane c1 while the actual image lies in plane c2; the ratio between distances in plane c1 and distances in the actual image must therefore be obtained, and the position in the actual image computed from that ratio.
The following describes how the image scale is acquired.
The positions of the coincident points are found by code-assisted alignment and synthesis of three adjacent pictures near the equator, see figs. 14, 15 and 16.
The cross-section width where adjacent images partially intersect at the equator, i.e., the length L in the upper figure, corresponds to p2p3 in the left figure. OO1 in the left figure is the length of the detection probe as mounted on the motor, obtainable by direct measurement; the motor angle is known to be 1.44 degrees, so the length of p0O1 can be calculated from the angle tangent formula.
Knowing the length of p0O1 and the length L, the ratio of the calculated value to the actual image length is obtained.
The center-point distances calculated in the earlier steps are scaled up by this ratio, correctly yielding the position on the actual image.
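A short sketch of this scale calculation follows, assuming p0O1 = OO1 × tan(1.44°) (the application states only that p0O1 follows from the angle tangent formula) and a hypothetical measured overlap width L:

```python
import math

def image_scale(oo1_mm: float, l_mm: float, step_deg: float = 1.44) -> float:
    # oo1_mm: measured probe length OO1 on the motor, millimetres.
    # l_mm:   measured overlap width L of adjacent equatorial images.
    p0o1 = oo1_mm * math.tan(math.radians(step_deg))  # calculated overlap width
    return l_mm / p0o1                                # actual / calculated ratio

# Hypothetical numbers, for illustration only:
scale = image_scale(oo1_mm=200.0, l_mm=7.0)
print(f"scale ratio = {scale:.3f}")  # calculated offsets are multiplied by this ratio
```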
The steps are summarized as follows:
determine the quadrant of the detection point from the signs of its angular coordinate data;
once the quadrant is determined, calculate the two-dimensional vector coordinates of the image from the angular coordinates, i.e., determine which image the detection point maps into;
once the image is confirmed, calculate the relative rotation angle of the detection point within the image (the vertical angle and the horizontal angle) to obtain its angular coordinate;
calculate the computed distance between the detection mapping point and the image center from the angle and the probe distance;
measure the actual overlap position of the images and the intersected cross-section width from adjacent equatorial images, and obtain the ratio of the calculated position to the actual image position;
obtain the mapping-point position of the detection point on the image from the actual distance to the image center point.
Further, a Kalman filtering algorithm integrates the object pose information acquired by the optical camera with the distance information acquired by the ultrasonic array sensor, and the final object position and pose information is obtained by weighted averaging of the integrated data.
Specifically, the positioning fusion of the optical camera and the ultrasonic array sensor fuses two different kinds of sensor data to improve positioning accuracy: the optical camera provides high-resolution, accurate visual information, while the ultrasonic array sensor provides distance, direction and depth information; once fused, the two complement each other and improve positioning accuracy.
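As a minimal sketch of the fusion just described: a one-dimensional Kalman-style update that blends the camera's position estimate with the ultrasonic array's distance measurement according to their variances, which is the weighted averaging referred to above; the measurement values and variances are illustrative assumptions, not values from the application.

```python
def fuse_estimates(cam_pos: float, cam_var: float,
                   us_pos: float, us_var: float) -> tuple:
    # Variance-weighted fusion of two estimates (the Kalman update form):
    # weighting by inverse variance is exactly a weighted average.
    k = cam_var / (cam_var + us_var)             # Kalman gain
    fused_pos = cam_pos + k * (us_pos - cam_pos)
    fused_var = (1.0 - k) * cam_var
    return fused_pos, fused_var

# Illustrative: camera says 1.02 m (variance 0.04), ultrasound 0.98 m (variance 0.01).
pos, var = fuse_estimates(1.02, 0.04, 0.98, 0.01)
print(f"fused position = {pos:.3f} m, variance = {var:.4f}")  # leans toward the ultrasonic value
```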
Referring to fig. 3, ultrasonic signals from different source positions travel different distances, so their phases and arrival times at the ultrasonic sensors differ.
Referring to fig. 4, the signals received by the sensors are appropriately delayed so that their phases coincide, forming a constructive superposition that yields a receiving gain and improves the detection sensitivity to ultrasonic signals. Ambient interference noise does not superpose constructively and obtains no receiving gain; where it superposes destructively it partially cancels, reducing noise interference.
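The processing just described is delay-and-sum beamforming. Below is a minimal NumPy sketch under assumed values (sampling rate, per-element delays, a synthetic 40 kHz burst) showing how aligning and summing the channels preserves the coherent signal while averaging down incoherent noise; none of the numbers come from the application.

```python
import numpy as np

fs = 1_000_000                      # assumed sampling rate, 1 MHz
t = np.arange(2000) / fs            # 2 ms window
rng = np.random.default_rng(0)

# Synthetic 40 kHz burst standing in for a partial discharge ultrasonic signal.
burst = np.sin(2 * np.pi * 40_000 * t) * np.exp(-((t - 1e-3) ** 2) / (2 * (1e-4) ** 2))

# Four array elements: each sees the burst with its own arrival delay plus noise.
delays = [0, 12, 25, 37]            # assumed per-element delays, in samples
channels = [np.roll(burst, d) + 0.5 * rng.standard_normal(t.size) for d in delays]

# Delay-and-sum: undo each element's delay so the bursts align in phase, then average.
aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays)]
beamformed = np.mean(aligned, axis=0)

# The coherent signal keeps its amplitude; incoherent noise power drops by ~1/N.
print("single-channel noise std ~", round(float(np.std(channels[0][:500])), 3))
print("beamformed noise std    ~", round(float(np.std(beamformed[:500])), 3))
```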
In this specification the embodiments are described progressively, each emphasizing its differences from the others; for identical or similar parts, the embodiments may be referred to one another. The device disclosed in the embodiments corresponds to the method disclosed in the embodiments, so its description is relatively brief; for relevant points, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. An acousto-optic combined imaging method for a partial discharge source, characterized by comprising the following steps:
S1, scanning the surface of electrical equipment with an optical camera to acquire image information of the equipment surface;
S2, analyzing the image information obtained in S1 to identify and accurately locate defects and contamination on the equipment surface;
S3, monitoring the interior of the electrical equipment with a partial discharge ultrasonic array sensor to acquire partial discharge signals;
S4, integrating the surface image information acquired in S1 with the internal partial discharge signals acquired in S3 to obtain integrated data;
S5, displaying the integrated data on a visual interface to obtain the discharge position on the electrical equipment.
2. The acousto-optic combined imaging method for a partial discharge source of claim 1, further comprising: performing fault early warning on the visual interface.
3. The acousto-optic combined imaging method for a partial discharge source of claim 1, wherein the image information of the equipment surface acquired by the optical camera in S1 comprises: position information and attitude information of the target object.
4. The acousto-optic combined imaging method for a partial discharge source of claim 3, wherein in S1 the image information of the equipment surface is acquired by the optical camera, and a computer vision algorithm estimates the pose parameters of the target object, including the rotation angle and the translation vector.
5. The acousto-optic combined imaging method for a partial discharge source of claim 1, wherein the monitoring information in S4 specifically comprises: measuring the distance between the target object and the optical camera with the ultrasonic array sensor while calculating their relative position and angle.
6. The acousto-optic combined imaging method for a partial discharge source of claim 3 or 5, wherein a Kalman filtering algorithm integrates the object pose information acquired by the optical camera with the distance information acquired by the ultrasonic array sensor, and the final object position and pose information is obtained by weighted averaging of the integrated data.
CN202310417944.8A 2023-04-19 Acoustic-optical combined imaging method for partial discharge source — Pending

Priority application: CN202310417944.8A, priority and filing date 2023-04-19.
Publication: CN116338396A, published 2023-06-27.
Family ID: 86895040. Country: China (CN).


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination