CN111738259A - Tower state detection method and device - Google Patents

Tower state detection method and device

Info

Publication number
CN111738259A
CN111738259A (application CN202010602973.8A)
Authority
CN
China
Prior art keywords
tower
training
detected
included angle
yolo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010602973.8A
Other languages
Chinese (zh)
Inventor
张壮领
郑松源
潘岐深
刘文松
莫一夫
陈彩娜
毕明利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd
Priority to CN202010602973.8A
Publication of CN111738259A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 Energy or water supply
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C 1/20 Checking timed patrols, e.g. of watchman

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a tower state detection method and device, comprising: acquiring an image to be detected that contains the tower to be detected; obtaining a tower detection box in the image to be detected using a YOLO-V3 object detection network; inputting the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower; and connecting the two end points, drawing through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measuring the included angle between the end-point connecting line and the vertical line. The method and device have low requirements on training data and can effectively improve detection accuracy.

Description

Tower state detection method and device
Technical Field
The application relates to the technical field of image-based object detection, and in particular to a tower state detection method and device.
Background
Power towers are an important component of the grid structure. In recent years, with the rapid development of grid construction, the scale of the power grid has gradually expanded and the mileage of transmission lines has grown rapidly. Most transmission-line corridors are distributed in suburban areas and are greatly affected by objective natural conditions such as climate and geography, and their operational reliability directly affects the stable operation of the grid. It is therefore necessary to inspect transmission lines in a timely manner and to discover power safety defects promptly.
At present, inspection of power towers is mainly manual. Even where unmanned aerial vehicle (UAV) technology has been introduced, the video and images captured by the UAV are still reviewed manually, which is time-consuming, labor-intensive, inefficient, and cannot guarantee accuracy.
Disclosure of Invention
The application provides a tower state detection method and device, which solve the technical problems that current inspection methods are time-consuming and labor-intensive, inefficient, and cannot guarantee accuracy.
In view of this, a first aspect of the present application provides a tower state detection method, including:
acquiring an image to be detected that contains the tower to be detected;
obtaining a tower detection box in the image to be detected using a YOLO-V3 object detection network;
inputting the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower; and
connecting the two end points, drawing through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measuring the included angle between the end-point connecting line and the vertical line.
Optionally, after measuring the included angle between the end-point connecting line and the vertical line, the method further includes:
if the included angle a satisfies 0° < a ≤ 5°, judging that the tower state is normal;
if 5° < a ≤ 20°, judging that the tower is tilted;
if a > 20°, judging that the tower has fallen;
where a denotes the included angle.
Optionally, before obtaining the tower detection box in the image to be detected using the YOLO-V3 object detection network, the method further includes:
training the YOLO-V3 object detection network.
Optionally, training the YOLO-V3 object detection network specifically includes:
acquiring a plurality of images in which towers have been annotated as a first training set;
inputting the first training set into the YOLO-V3 object detection network for training, wherein the YOLO-V3 object detection network uses GIoU as the loss function.
Optionally, before inputting the image within the tower detection box into the ResNet-50 feature extraction network to obtain the positions of the two end points of the tower, the method further includes:
training the ResNet-50 feature extraction network.
Optionally, training the ResNet-50 feature extraction network specifically includes:
acquiring a plurality of partial images containing towers, annotating the two end points of the tower in each partial image, and using the annotated partial images as a second training set;
inputting the second training set into the ResNet-50 feature extraction network for training.
A second aspect of the present application provides a tower state detection device, the device including:
an image acquisition unit, configured to acquire an image to be detected that contains the tower to be detected;
a detection box acquisition unit, configured to obtain a tower detection box in the image to be detected using a YOLO-V3 object detection network;
an end point detection unit, configured to input the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower; and
an included angle measuring unit, configured to connect the two end points, draw through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measure the included angle between the end-point connecting line and the vertical line.
Optionally, the device further includes:
a judging unit, configured to judge the tower state: if the included angle a satisfies 0° < a ≤ 5°, the tower state is judged to be normal; if 5° < a ≤ 20°, the tower is judged to be tilted; if a > 20°, the tower is judged to have fallen; where a denotes the included angle.
Optionally, the device further includes:
a first training unit, configured to train the YOLO-V3 object detection network.
Optionally, the device further includes:
a second training unit, configured to train the ResNet-50 feature extraction network.
According to the above technical solutions, the present application has the following advantages:
The application provides a tower state detection method and device. The method includes: acquiring an image to be detected that contains the tower to be detected; obtaining a tower detection box in the image to be detected using a YOLO-V3 object detection network; inputting the image within the detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower; and connecting the two end points, drawing through one end point a vertical line perpendicular to the upper edge of the detection box, and measuring the included angle between the end-point connecting line and the vertical line. The tower is detected with an improved YOLO algorithm, the two end points of the tower are detected from the partial image within the detection box, and the included angle between the end-point line segment and an absolute vertical line is calculated to judge the tower state. The method is efficient and fast and effectively improves detection accuracy.
Drawings
Fig. 1 is a flowchart of an embodiment of the tower state detection method of the present application;
Fig. 2 is a structural diagram of an embodiment of the tower state detection device of the present application;
Fig. 3 is a schematic diagram of measuring the included angle between the end-point connecting line and the vertical line in the present application;
Fig. 4 is a photograph showing detection of the two end points of a tower in the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of the tower state detection method of the present application. As shown in Fig. 1, the method includes:
101. Acquire an image to be detected that contains the tower to be detected.
It should be noted that in this application the inspection of power towers can be completed by an unmanned aerial vehicle (UAV), and the image to be detected that contains the tower is collected by the UAV; the collected image should contain the complete power tower.
102. Obtain a tower detection box in the image to be detected using a YOLO-V3 object detection network.
It should be noted that in this application the image containing the complete power tower is input into a trained YOLO-V3 object detection network, the tower in the image is detected, and a detection box that encloses only the complete tower is obtained. Besides the YOLO-V3 object detection network, common object detection algorithms such as SSD and Faster R-CNN can also be used.
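As an illustration only (the patent does not specify an implementation), the detection step could be run with a trained YOLO-V3 model loaded through OpenCV's DNN module; the file names yolov3-tower.cfg and yolov3-tower.weights, the input size, and the thresholds below are assumptions, not part of the disclosure.

```python
# Hypothetical sketch: obtain the tower detection box with a trained YOLO-V3 model.
# File names, input size, and thresholds are assumptions, not taken from the patent.
import cv2
import numpy as np

def detect_tower_box(image, cfg="yolov3-tower.cfg", weights="yolov3-tower.weights",
                     conf_thresh=0.5, nms_thresh=0.4):
    net = cv2.dnn.readNetFromDarknet(cfg, weights)
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores = [], []
    for output in outputs:                 # one output per YOLO detection scale
        for det in output:                 # det = [cx, cy, bw, bh, objectness, class scores...]
            score = float(det[4] * det[5:].max())
            if score < conf_thresh:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)

    idx = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    if len(idx) == 0:
        return None
    x, y, bw, bh = boxes[int(np.array(idx).flatten()[0])]
    return x, y, bw, bh                    # (x, y, width, height) of the tower box
```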
103. Input the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower.
It should be noted that in this application the partial image corresponding to the detection box that contains only the complete tower can be input into the ResNet-50 feature extraction network to identify the two end points of the tower. Besides ResNet-50, common feature extraction networks such as MobileNet and VGG can also be used.
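As a minimal sketch (PyTorch/torchvision assumed; the patent does not state the framework or the exact head design), ResNet-50 can be adapted into an end-point locator by replacing its classification layer with a 4-value regression head that outputs the normalized coordinates of the two tower end points:

```python
# Hypothetical sketch: ResNet-50 backbone with a 4-output regression head predicting
# the two tower end points (x1, y1, x2, y2), normalized to [0, 1]. Assumed design.
import torch
import torch.nn as nn
from torchvision import models

class TowerEndpointNet(nn.Module):
    def __init__(self, pretrained=True):
        super().__init__()
        backbone = models.resnet50(pretrained=pretrained)
        backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # 2 end points, 2 coords each
        self.model = backbone

    def forward(self, x):
        # x: (N, 3, 224, 224) crop of the tower detection box
        return torch.sigmoid(self.model(x))   # keep outputs in [0, 1]

# Example use: map the normalized prediction back to pixel coordinates of the crop.
# net = TowerEndpointNet()
# pred = net(crop_batch)[0]                  # tensor([x1, y1, x2, y2])
# top = (pred[0] * crop_w, pred[1] * crop_h); bottom = (pred[2] * crop_w, pred[3] * crop_h)
```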
104. Connect the two end points, draw through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measure the included angle between the end-point connecting line and the vertical line.
It should be noted that, after the partial image corresponding to the detection box containing only the complete tower and the positions of the two tower end points within it are obtained, the line connecting the two end points and the vertical line that passes through the end point at the bottom of the tower and is perpendicular to the upper edge of the detection box can be constructed, and the included angle between them can be calculated, as shown in Fig. 3. The obtained angle is then compared with empirical thresholds to determine the state of the power tower in the image to be detected.
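The angle computation itself is elementary; a sketch (coordinate names are assumed, with image y increasing downward) is:

```python
# Hypothetical sketch: included angle between the end-point connecting line and a
# vertical line through the bottom end point, in degrees.
import math

def tower_tilt_angle(top_xy, bottom_xy):
    dx = top_xy[0] - bottom_xy[0]
    dy = top_xy[1] - bottom_xy[1]      # image coordinates: y grows downward
    # Angle between the end-point line and the vertical direction.
    return math.degrees(math.atan2(abs(dx), abs(dy)))

# Example: a top end point offset by 0.5 units sideways over a height of 10 units
# gives atan2(0.5, 10), roughly 2.9 degrees, i.e. "normal" under the thresholds below.
```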
In a specific embodiment, comparing the obtained included angle with the empirical thresholds specifically includes:
if the included angle a satisfies 0° < a ≤ 5°, the tower state is judged to be normal; if 5° < a ≤ 20°, the tower is judged to be tilted; if a > 20°, the tower is judged to have fallen; where a denotes the included angle. It should be noted that the empirical thresholds should be set according to the actual situation; the thresholds given here are only an example.
In this application, the tower is detected with an improved YOLO algorithm, the two end points of the tower are detected from the partial image inside the detection box, and the included angle between the end-point line segment and an absolute vertical line is calculated to judge the state of the tower. The method is efficient and fast and effectively improves detection accuracy.
The application further provides another embodiment of the tower state detection method. On the basis of the first embodiment, the method further includes:
201. Train the YOLO-V3 object detection network.
Training the YOLO-V3 object detection network specifically includes:
2011. Acquire a plurality of images in which towers have been annotated as a first training set.
It should be noted that the images in the training set used in this application may be annotated original tower images, including tower images collected at different angles and different distances, so that the YOLO-V3 object detection network can better learn the characteristics of towers.
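The patent does not prescribe an annotation format. For illustration only, a common convention for YOLO-V3 training data is one Darknet-style label file per image, each line giving the class index followed by the normalized box center and size; the file name and values below are made up:

```text
# tower_0001.txt  (assumed Darknet/YOLO label format: class cx cy w h, normalized to [0, 1])
0 0.512 0.463 0.180 0.742
```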
2012. Input the first training set into the YOLO-V3 object detection network for training, where the YOLO-V3 network uses GIoU as the loss function.
It should be noted that when training YOLO-V3, GIoU (Generalized Intersection over Union) may be used as the loss function, which can effectively improve the positional accuracy of the tower detection box.
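For reference, a sketch of the GIoU computation for two axis-aligned boxes is given below (corner format (x1, y1, x2, y2) is assumed); the box-regression loss used during training is then typically 1 - GIoU:

```python
# Hypothetical sketch: Generalized IoU for two axis-aligned boxes in (x1, y1, x2, y2) form.
def giou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    # Smallest box enclosing both boxes.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    enclose = cw * ch

    return iou - (enclose - union) / enclose if enclose > 0 else iou

# GIoU loss for one predicted / ground-truth pair:
# loss = 1.0 - giou(pred_box, gt_box)
```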
202. Train the ResNet-50 feature extraction network.
Training the ResNet-50 feature extraction network specifically includes:
2021. Acquire a plurality of partial images containing towers, annotate the two end points of the tower in each partial image, and use the annotated partial images as a second training set.
It should be noted that in this application the partial images corresponding to detection boxes containing the complete tower are used; the two end points of the tower in each partial image are annotated, and these images serve as the training set for the ResNet-50 feature extraction network, so that it can better learn the end-point features of the tower.
2022. Input the second training set into the ResNet-50 feature extraction network for training.
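As an illustrative sketch only (framework, optimizer, and hyper-parameters are assumptions; the patent only states that the second training set is fed to ResNet-50), the end-point regressor could be trained with a mean-squared-error objective on the normalized end-point coordinates:

```python
# Hypothetical sketch: training the end-point regressor (TowerEndpointNet from the
# earlier sketch) on the second training set with an MSE loss. Hyper-parameters are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_endpoint_net(net, dataset, epochs=50, lr=1e-4, batch_size=16):
    # dataset yields (crop_tensor, target) with target = [x1, y1, x2, y2] normalized to [0, 1]
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    net = net.to(device)
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)

    for epoch in range(epochs):
        running = 0.0
        for crops, targets in loader:
            crops, targets = crops.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(net(crops), targets)
            loss.backward()
            optimizer.step()
            running += loss.item() * crops.size(0)
        print(f"epoch {epoch + 1}: loss {running / len(dataset):.4f}")
    return net
```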
The above are embodiments of the method of the present application. The present application further provides an embodiment of a tower state detection device. As shown in Fig. 2, the device includes:
an image acquisition unit 301, configured to acquire an image to be detected that contains the tower to be detected;
a detection box acquisition unit 302, configured to obtain a tower detection box in the image to be detected using a YOLO-V3 object detection network;
an end point detection unit 303, configured to input the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower;
an included angle measuring unit 304, configured to connect the two end points, draw through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measure the included angle between the end-point connecting line and the vertical line.
In a specific embodiment, the device further includes:
a judging unit, configured to judge the tower state: if the included angle a satisfies 0° < a ≤ 5°, the tower state is judged to be normal; if 5° < a ≤ 20°, the tower is judged to be tilted; if a > 20°, the tower is judged to have fallen; where a denotes the included angle (as before, the empirical thresholds should be set according to the actual situation and are given here only as an example);
a first training unit, configured to train the YOLO-V3 object detection network;
a second training unit, configured to train the ResNet-50 feature extraction network.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A tower state detection method, characterized by comprising:
acquiring an image to be detected that contains the tower to be detected;
obtaining a tower detection box in the image to be detected using a YOLO-V3 object detection network;
inputting the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower; and
connecting the two end points, drawing through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measuring the included angle between the end-point connecting line and the vertical line.
2. The tower state detection method according to claim 1, characterized in that, after measuring the included angle between the end-point connecting line and the vertical line, the method further comprises:
if the included angle a satisfies 0° < a ≤ 5°, judging that the tower state is normal;
if 5° < a ≤ 20°, judging that the tower is tilted;
if a > 20°, judging that the tower has fallen;
wherein a denotes the included angle.
3. The tower state detection method according to claim 1, characterized in that, before obtaining the tower detection box in the image to be detected using the YOLO-V3 object detection network, the method further comprises:
training the YOLO-V3 object detection network.
4. The tower state detection method according to claim 3, characterized in that training the YOLO-V3 object detection network specifically comprises:
acquiring a plurality of images in which towers have been annotated as a first training set;
inputting the first training set into the YOLO-V3 object detection network for training, wherein the YOLO-V3 object detection network uses GIoU as the loss function.
5. The tower state detection method according to claim 1, characterized in that, before inputting the image within the tower detection box into the ResNet-50 feature extraction network to obtain the positions of the two end points of the tower, the method further comprises:
training the ResNet-50 feature extraction network.
6. The tower state detection method according to claim 5, characterized in that training the ResNet-50 feature extraction network specifically comprises:
acquiring a plurality of partial images containing towers, annotating the two end points of the tower in each partial image, and using the annotated partial images as a second training set;
inputting the second training set into the ResNet-50 feature extraction network for training.
7. A tower state detection device, characterized by comprising:
an image acquisition unit, configured to acquire an image to be detected that contains the tower to be detected;
a detection box acquisition unit, configured to obtain a tower detection box in the image to be detected using a YOLO-V3 object detection network;
an end point detection unit, configured to input the image within the tower detection box into a ResNet-50 feature extraction network to obtain the positions of the two end points of the tower; and
an included angle measuring unit, configured to connect the two end points, draw through one end point a vertical line perpendicular to the upper edge of the tower detection box, and measure the included angle between the end-point connecting line and the vertical line.
8. The tower state detection device according to claim 7, characterized by further comprising:
a judging unit, configured to judge the tower state: if the included angle a satisfies 0° < a ≤ 5°, the tower state is judged to be normal; if 5° < a ≤ 20°, the tower is judged to be tilted; if a > 20°, the tower is judged to have fallen; wherein a denotes the included angle.
9. The tower state detection device according to claim 7, characterized by further comprising:
a first training unit, configured to train the YOLO-V3 object detection network.
10. The tower state detection device according to claim 7, characterized by further comprising:
a second training unit, configured to train the ResNet-50 feature extraction network.
CN202010602973.8A (filed 2020-06-29, priority 2020-06-29), Tower state detection method and device, status Pending, published as CN111738259A

Priority Applications (1)

Application Number: CN202010602973.8A; Priority Date: 2020-06-29; Filing Date: 2020-06-29; Title: Tower state detection method and device

Applications Claiming Priority (1)

Application Number: CN202010602973.8A; Priority Date: 2020-06-29; Filing Date: 2020-06-29; Title: Tower state detection method and device

Publications (1)

Publication Number: CN111738259A; Publication Date: 2020-10-02

Family

ID=72651620

Family Applications (1)

Application Number: CN202010602973.8A; Title: Tower state detection method and device; Priority Date: 2020-06-29; Filing Date: 2020-06-29; Status: Pending

Country Status (1)

Country: CN; Publication: CN111738259A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449885A (en) * 2021-06-30 2021-09-28 佛山市南海区广工大数控装备协同创新研究院 Concrete pole automatic state evaluation method based on deep learning technology
CN114898221A (en) * 2022-07-14 2022-08-12 灵图数据(杭州)有限公司 Tower inclination detection method and device, electronic equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296694A (en) * 2016-08-13 2017-01-04 哈尔滨理工大学 Shaft tower tilts intelligent image identification measuring method
CN109977943A (en) * 2019-02-14 2019-07-05 平安科技(深圳)有限公司 A kind of images steganalysis method, system and storage medium based on YOLO
CN110188611A (en) * 2019-04-26 2019-08-30 华中科技大学 A kind of pedestrian recognition methods and system again introducing visual attention mechanism
CN110889827A (en) * 2019-11-06 2020-03-17 国网山西省电力公司吕梁供电公司 Transmission line tower online identification and inclination detection method based on vision
CN111027413A (en) * 2019-11-20 2020-04-17 佛山缔乐视觉科技有限公司 Remote multi-station object detection method, system and storage medium
CN111104906A (en) * 2019-12-19 2020-05-05 南京工程学院 Transmission tower bird nest fault detection method based on YOLO



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination