CN112581402B - Road and bridge fault automatic detection method based on machine vision technology - Google Patents
- Publication number
- CN112581402B (application CN202011561812.5A, filed 2020-12-25)
- Authority
- CN
- China
- Prior art keywords
- bridge
- data
- model
- time
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000005516 engineering process Methods 0.000 title claims abstract description 28
- 238000001514 detection method Methods 0.000 title claims abstract description 16
- 230000000007 visual effect Effects 0.000 claims abstract description 27
- 201000010099 disease Diseases 0.000 claims abstract description 26
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 claims abstract description 26
- 238000000034 method Methods 0.000 claims abstract description 20
- 230000004927 fusion Effects 0.000 claims abstract description 9
- 238000012545 processing Methods 0.000 claims abstract description 4
- 238000012800 visualization Methods 0.000 claims description 13
- 238000013499 data model Methods 0.000 claims description 12
- 239000003086 colorant Substances 0.000 claims description 7
- 238000004140 cleaning Methods 0.000 claims description 6
- 230000002708 enhancing effect Effects 0.000 claims description 6
- 230000008569 process Effects 0.000 claims description 6
- 230000000694 effects Effects 0.000 claims description 4
- 238000010276 construction Methods 0.000 claims description 3
- 238000010586 diagram Methods 0.000 claims description 3
- 230000006870 function Effects 0.000 claims description 3
- 230000010354 integration Effects 0.000 claims description 3
- 210000001503 joint Anatomy 0.000 claims description 3
- 238000007726 management method Methods 0.000 claims description 3
- 230000007246 mechanism Effects 0.000 claims description 3
- 238000005065 mining Methods 0.000 claims description 3
- 238000007781 pre-processing Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims description 3
- 230000035807 sensation Effects 0.000 claims description 3
- 238000012163 sequencing technique Methods 0.000 claims description 3
- 230000009194 climbing Effects 0.000 abstract description 3
- 230000000875 corresponding effect Effects 0.000 description 7
- 230000006872 improvement Effects 0.000 description 6
- 230000007547 defect Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 238000011835 investigation Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000013024 troubleshooting Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30184—Infrastructure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides an automatic road and bridge disease detection method based on machine vision technology, comprising the following steps: obtaining bridge data, obtaining bridge images and original pictures, image processing, image modeling, data modeling, model fusion, model comparison, and determining disease locations. Pictures, distances and a frame of the bridge are first captured by the vision system of a wall-climbing robot, and a second set of picture samples is captured by the vision system of an unmanned aerial vehicle; collecting twice makes the data more comprehensive. Original picture data of the bridge is obtained as a comparison file. From the captured pictures and the comparison file, a visualized real-time bridge vector model and a visualized comparison bridge vector model are built respectively; the previously drawn frame is modeled into spatio-temporal coordinates and fused into the real-time model. Comparing the fused model with the visualized comparison bridge vector model then reveals the disease (defect) condition of the bridge.
Description
Technical Field
The invention relates to the technical field of road and bridge detection, and in particular to an automatic road and bridge disease detection method based on machine vision technology.
Background
At present, road and bridge inspection is mainly performed by manual field work, or by bringing personnel close to the bridge with equipment such as a bridge inspection vehicle, with defects and diseases identified by eye.
Such methods require a large amount of manpower, take several days to complete, and are inefficient, wasting human and material resources. Moreover, because of the limitations of the human eye, it is difficult to inspect the bridge comprehensively; positions are easily overlooked, which degrades the overall inspection result.
Disclosure of Invention
To address these problems, the invention provides an automatic road and bridge fault detection method based on machine vision technology. Pictures, distances and a frame of the bridge are captured by the vision system of a wall-climbing robot, and a second set of picture samples is captured by the vision system of an unmanned aerial vehicle, so that two rounds of collection yield more comprehensive data. Model comparison marks elements that differ in color and texture, and together with the pictures taken by the corresponding cameras, the disease condition of the bridge can be judged.
To achieve this purpose, the invention adopts the following technical scheme. An automatic road and bridge disease detection method based on machine vision technology comprises the following steps:
step one: obtaining bridge data
A wall-climbing robot is attached to a side wall of the bridge and set moving along the wall while connected to computer drawing software over a wireless network. As the robot moves, its high-definition camera photographs the bridge surface, its infrared rangefinder measures the size of and distance to each face of the bridge surface, and the drawing software progressively draws a frame of the bridge;
step two: obtaining bridge images and original pictures
An unmanned aerial vehicle is launched, and a second set of image samples of the bridge is acquired with its CCD camera and infrared light sensor. A municipal data website is then accessed, and original picture data of the bridge is obtained using an external source, a bridge picture library and a corresponding traditional knowledge base as data sources, establishing multi-source comparison data;
step three: image processing
The bridge-surface pictures taken by the high-definition camera in step one and the bridge image samples obtained by the unmanned aerial vehicle in step two are preprocessed and denoised. The color-texture distribution of the bridge layer is extracted from the images to form a color sample set; the set is de-duplicated, the junctions of color and texture are marked, and the result is integrated into a real-time image library. In parallel, the color-texture distribution of the bridge layer in the multi-source comparison data is extracted, de-duplicated and marked in the same way, and integrated into a comparison image library;
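The preprocessing and de-duplication of step three can be illustrated with a minimal sketch. The patent does not name a specific denoising filter or sample representation, so the 3x3 median filter, the grayscale image-as-nested-lists layout, and the RGB-tuple color samples below are all illustrative assumptions:

```python
def median_denoise(img):
    """Apply a 3x3 median filter to a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # edges are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 values
    return out

def dedup_color_samples(samples):
    """Remove duplicate (R, G, B) samples while keeping first-seen order."""
    seen, unique = set(), []
    for c in samples:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    return unique

# A 5x5 image with one impulse-noise pixel in the centre.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
clean = median_denoise(img)
print(clean[2][2])                         # noise pixel restored to 10
print(dedup_color_samples([(1, 2, 3), (4, 5, 6), (1, 2, 3)]))
```

A real pipeline would operate on camera frames with an image-processing library; the sketch only shows the two operations the step names: noise reduction and removal of duplicate color samples.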
step four: image modeling
The image data of the real-time image library and the comparison image library are turned into 3D visualized three-dimensional models, building a real-time bridge model and a comparison bridge model respectively. The numerical values of the corresponding color and texture features in the two models are then vectorized, completing a visualized real-time bridge vector model and a visualized comparison bridge vector model;
step five: data modeling
The bridge frame diagram obtained in step one is data-modeled. First, specific spatio-temporal value ranges of Path and Row are determined from the spatial position of the bridge and given a preliminary screening, so that each (Path, Row) pair corresponds to a definite position on the bridge, yielding a data-set model covering the bridge. The spatio-temporal data is then measured and discriminated: inaccurate data is classified and cleaned using data probability values, and near-duplicate records in the data-set model are cleaned with a multi-pass improved SNM (sorted neighborhood method) algorithm, producing an accurate, clear and visual data model. Finally, the spatio-temporal data is vectorized with SVG (scalable vector graphics) into points, lines and surfaces, which form specific spatio-temporal data coordinates and build a vectorized data model;
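The SNM referred to in step five is the sorted neighborhood method for duplicate detection; a minimal sketch of its sort-then-window pass is given below. The record layout (coordinate pairs), window size and similarity tolerance are illustrative assumptions, not taken from the patent:

```python
def snm_dedup(records, window=3, tol=0.5):
    """Sort records, slide a fixed-size window over the sorted list, and
    drop records whose coordinates lie within `tol` of a kept neighbour."""
    records = sorted(records)              # sort pass of the SNM
    kept = []
    for rec in records:
        duplicate = any(
            abs(rec[0] - k[0]) <= tol and abs(rec[1] - k[1]) <= tol
            for k in kept[-window:]        # compare inside the window only
        )
        if not duplicate:
            kept.append(rec)
    return kept

data = [(1.0, 2.0), (1.1, 2.2), (5.0, 6.0), (1.0, 2.0)]
print(snm_dedup(data))                     # near-duplicates collapsed
```

The windowed comparison is what makes SNM cheaper than all-pairs matching: after sorting, likely duplicates sit near each other, so each record is compared against only a handful of neighbours.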
step six: model fusion
The vectorized data model of step five is fused into the visualized real-time bridge vector model of step four. Based on the exchangeable image file (EXIF) principle, the fusion uses the real-time vector model as carrier and embeds the associated spatio-temporal coordinate information into the physical structure of image, color and texture, achieving associated multi-element data fusion and forming a complete bridge physical model;
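The EXIF-based fusion can be sketched at the data level only: each image record receives spatio-temporal tags, in the same spirit as writing GPS and time fields into an EXIF header. The field names and record layout below are hypothetical; a real implementation would write actual EXIF tags with an image-metadata library:

```python
def fuse(image_record, coord):
    """Embed an (x, y, timestamp) coordinate into an image record,
    mimicking EXIF-style metadata attached to a picture."""
    fused = dict(image_record)
    fused["exif"] = {"x": coord[0], "y": coord[1], "time": coord[2]}
    return fused

def query_by_coord(model, x, y):
    """Coordinate query over the fused model (cf. the query function of step six)."""
    return [r for r in model
            if r["exif"]["x"] == x and r["exif"]["y"] == y]

model = [fuse({"id": "img-001", "color": (128, 128, 120)},
              (12.5, 3.0, "2020-12-25T10:00"))]
print(query_by_coord(model, 12.5, 3.0)[0]["id"])   # -> img-001
```

Keeping the coordinates inside each image record is what makes the later marked-area lookup of step eight a simple metadata query rather than a separate spatial join.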
step seven: model comparison
The bridge physical model from step six and the visualized comparison bridge vector model from step four are entered into a metadata management system (MDMS) and compared, chiefly on color and texture features, mining associated and differing information. Elements with differing color and texture are marked in the bridge physical model, the pictures taken by the corresponding cameras in each marked area are extracted synchronously, and a live-action image of the area is displayed, from which an operator judges the bridge disease condition;
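The comparison of step seven reduces to measuring how far corresponding color/texture feature vectors drift between the two models and marking large deviations. The Euclidean distance and the threshold value below are assumed for illustration; the patent does not specify the measure:

```python
def mark_differences(realtime, reference, threshold=30.0):
    """Return ids of model elements whose feature vectors differ from the
    reference by more than `threshold` (Euclidean distance)."""
    marked = []
    for elem_id, feat in realtime.items():
        ref = reference.get(elem_id)
        if ref is None:                    # element absent from reference model
            continue
        dist = sum((a - b) ** 2 for a, b in zip(feat, ref)) ** 0.5
        if dist > threshold:
            marked.append(elem_id)
    return marked

# Hypothetical element ids with (R, G, B)-like feature triples.
realtime = {"deck-01": (120, 118, 110), "pier-02": (60, 58, 50)}
reference = {"deck-01": (122, 119, 111), "pier-02": (130, 128, 120)}
print(mark_differences(realtime, reference))   # only the changed element
```

The marked ids would then drive the synchronous extraction of camera pictures for those areas, as the step describes.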
step eight: determining disease location
The spatio-temporal coordinates associated with the marked areas in the bridge physical model are obtained, and the specific positions of the bridge diseases are determined from the bridge size, distance and scale measured by the infrared rangefinder and the infrared light sensor.
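Step eight maps a marked model coordinate back to a physical position on the bridge. Reading "size, distance and proportion" as a simple linear scale between model units and metres gives the following sketch; the scale factor and origin are assumptions:

```python
def model_to_real(model_xy, scale, origin=(0.0, 0.0)):
    """Convert a model coordinate to a real-world position (metres)
    via a uniform scale factor and a measured origin offset."""
    return (origin[0] + model_xy[0] * scale,
            origin[1] + model_xy[1] * scale)

# Marked region at model coordinate (12.5, 3.0); 1 model unit = 0.4 m,
# both values chosen purely for illustration.
print(model_to_real((12.5, 3.0), 0.4))     # position on the bridge in metres
```

A real system would derive `scale` from the rangefinder measurements rather than hard-coding it, and might use a full affine transform if the model axes are not aligned with the bridge.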
A further improvement is that: in step one, the wall-climbing robot is provided with a standard information-conversion interface for transmitting text, PDF and image files, and, combined with big-data technology, transmits information via distributed rapid exchange.
A further improvement is that: in step two, an external source, a bridge picture library and a corresponding traditional knowledge base are used as data sources, and the data is selected, combined, internalized/externalized and stripped of heterogeneous semantics, thereby yielding the original picture data of the bridge.
A further improvement is that: in step four, the 3D visualization of the three-dimensional models specifically comprises: using ContextCapture to build three-dimensional models from the image data of the real-time and comparison image libraries, visualizing the images in 3D, and enhancing the visualization effect through symbolization, in particular the color and texture features of the target model.
A further improvement is that: in step five, the spatio-temporal data is vectorized with SVG into points, lines and surfaces, which form specific spatio-temporal data coordinates, each of which includes a scene image.
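The SVG vectorization of points, lines and surfaces can be sketched by emitting the corresponding SVG elements directly; the attribute choices (circle radius, canvas size) are illustrative:

```python
def to_svg(points, lines, surfaces, size=(100, 100)):
    """Emit spatio-temporal coordinates as an SVG document: points as
    circles, lines as line elements, surfaces as polygons."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" '
             'width="%d" height="%d">' % size]
    for (x, y) in points:
        parts.append('<circle cx="%g" cy="%g" r="1"/>' % (x, y))
    for (x1, y1), (x2, y2) in lines:
        parts.append('<line x1="%g" y1="%g" x2="%g" y2="%g"/>'
                     % (x1, y1, x2, y2))
    for surface in surfaces:
        pts = " ".join("%g,%g" % p for p in surface)
        parts.append('<polygon points="%s"/>' % pts)
    parts.append("</svg>")
    return "\n".join(parts)

svg = to_svg(points=[(10, 10)],
             lines=[((0, 0), (50, 50))],
             surfaces=[[(0, 0), (20, 0), (10, 15)]])
print(svg.count("<circle"), svg.count("<line"), svg.count("<polygon"))
```

Attaching the per-coordinate scene image mentioned in the improvement could be done with an `id` or `data-*` attribute on each element; that linkage is left out of the sketch.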
A further improvement is that: in step six, the lightweight import mechanism provided by WebGIS application software is used to achieve seamless docking and lossless attribute integration of the spatio-temporal coordinate information with the physical structure of image, color and texture in the bridge physical model, providing the model with a coordinate query function.
A further improvement is that: in step seven, after the pictures taken by the corresponding cameras in a marked area are extracted, any unclear picture is sharpened a second time so that a high-definition live-action image of the area is displayed.
The beneficial effects of the invention are as follows. Pictures, distances and a frame of the bridge are captured by the vision system of a wall-climbing robot, and a second set of picture samples is captured by an unmanned aerial vehicle, so the twice-collected data is more comprehensive. Original picture data of the bridge, obtained from an external source, a bridge picture library and a corresponding traditional knowledge base, serves as a comparison file. Using three-dimensional 3D visualization, a visualized real-time bridge vector model and a visualized comparison bridge vector model are built; the previously drawn frame is modeled into spatio-temporal coordinates and fused into the real-time model, embedding the coordinate information into the physical structure of image, color and texture. Comparing the fused model with the comparison model marks the differing color and texture elements, and together with the pictures taken by the corresponding cameras, the bridge disease condition can be judged. The whole process is more convenient and efficient, the model comparison keeps errors small, and the spatio-temporal coordinates associated with the marked areas pinpoint the specific positions of bridge diseases for timely troubleshooting.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a schematic use view of the wall-climbing robot of the present invention.
Detailed Description
For a further understanding of the invention, it is described in detail below with reference to the following examples, which serve only to explain the invention and are not to be construed as limiting its scope.
As shown in fig. 1 and 2, the present embodiment provides an automatic road and bridge fault detection method based on machine vision technology, which comprises the following steps.
step one: obtaining bridge data
A wall-climbing robot is attached to a side wall of the bridge and set moving along the wall while connected to computer drawing software over a wireless network. As the robot moves, its high-definition camera photographs the bridge surface, its infrared rangefinder measures the size of and distance to each face of the bridge surface, and the drawing software progressively draws a frame of the bridge. Combined with big-data technology, text, PDF and image files are transmitted and information is exchanged via distributed rapid exchange;
step two: obtaining bridge images and original pictures
An unmanned aerial vehicle is launched, and a second set of image samples of the bridge is acquired with its CCD camera and infrared light sensor. A municipal data website is then accessed; using an external source, a bridge picture library and a corresponding traditional knowledge base as data sources, the data is selected, combined, internalized/externalized and stripped of heterogeneous semantics, yielding original picture data of the bridge, which is established as multi-source comparison data;
step three: image processing
The bridge-surface pictures taken by the high-definition camera in step one and the bridge image samples obtained by the unmanned aerial vehicle in step two are preprocessed and denoised. The color-texture distribution of the bridge layer is extracted from the images to form a color sample set; the set is de-duplicated, the junctions of color and texture are marked, and the result is integrated into a real-time image library. In parallel, the color-texture distribution of the bridge layer in the multi-source comparison data is extracted, de-duplicated and marked in the same way, and integrated into a comparison image library;
step four: image modeling
The image data of the real-time image library and the comparison image library are turned into 3D visualized three-dimensional models. Specifically, ContextCapture is used to build three-dimensional models from the image data of the two libraries, the images are visualized in 3D, and the visualization effect is enhanced through symbolization, in particular the color and texture features of the target model. A real-time bridge model and a comparison bridge model are thus built, and the numerical values of the corresponding color and texture features in the two models are vectorized, completing a visualized real-time bridge vector model and a visualized comparison bridge vector model;
step five: data modeling
The bridge frame diagram obtained in step one is data-modeled. First, specific spatio-temporal value ranges of Path and Row are determined from the spatial position of the bridge and given a preliminary screening, so that each (Path, Row) pair corresponds to a definite position on the bridge, yielding a data-set model covering the bridge. The spatio-temporal data is then measured and discriminated: inaccurate data is classified and cleaned using data probability values, and near-duplicate records in the data-set model are cleaned with a multi-pass improved SNM (sorted neighborhood method) algorithm, producing an accurate, clear and visual data model. Finally, the spatio-temporal data is vectorized with SVG (scalable vector graphics) into points, lines and surfaces, which form specific spatio-temporal data coordinates, each including a scene image, and build a vectorized data model;
step six: model fusion
The vectorized data model of step five is fused into the visualized real-time bridge vector model of step four. Based on the exchangeable image file (EXIF) principle, the fusion uses the real-time vector model as carrier and embeds the associated spatio-temporal coordinate information into the physical structure of image, color and texture, achieving associated multi-element data fusion and forming a complete bridge physical model. The lightweight import mechanism provided by WebGIS application software is then used to achieve seamless docking and lossless attribute integration of the spatio-temporal coordinate information with the physical structure of image, color and texture in the model, providing it with a coordinate query function;
step seven: model comparison
The bridge physical model from step six and the visualized comparison bridge vector model from step four are entered into a metadata management system (MDMS) and compared, chiefly on color and texture features, mining associated and differing information. Elements with differing color and texture are marked in the bridge physical model, the pictures taken by the corresponding cameras in each marked area are extracted synchronously and sharpened a second time, and a live-action image of the area is displayed, from which an operator judges the bridge disease condition;
step eight: determining disease location
The spatio-temporal coordinates associated with the marked areas in the bridge physical model are obtained, and the specific positions of the bridge diseases are determined from the bridge size, distance and scale measured by the infrared rangefinder and the infrared light sensor.
In this automatic road and bridge disease detection method based on machine vision technology, pictures, distances and a frame of the bridge are captured by the vision system of a wall-climbing robot, and a second set of picture samples is captured by an unmanned aerial vehicle, so the twice-collected data is more comprehensive. Original picture data of the bridge, obtained from an external source, a bridge picture library and a corresponding traditional knowledge base, serves as a comparison file. Using three-dimensional 3D visualization, a visualized real-time bridge vector model and a visualized comparison bridge vector model are built; the previously drawn frame is modeled into spatio-temporal coordinates and fused into the real-time model, embedding the coordinate information into the physical structure of image, color and texture. Comparing the fused model with the comparison model marks the differing color and texture elements, and together with the pictures taken by the corresponding cameras, the bridge disease condition can be judged. The whole process is more convenient and efficient, the model comparison keeps errors small, and the spatio-temporal coordinates associated with the marked areas pinpoint the specific positions of bridge diseases for timely investigation.
The foregoing illustrates and describes the principles, main features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
Claims (5)
1. An automatic road and bridge disease detection method based on machine vision technology, characterized by comprising the following steps:
step one: obtaining bridge data
A wall-climbing robot is attached to a side wall of the bridge and set moving along the wall while connected to computer drawing software over a wireless network. As the robot moves, its high-definition camera photographs the bridge surface, its infrared rangefinder measures the size of and distance to each face of the bridge surface, and the drawing software progressively draws a frame of the bridge. Combined with big-data technology, text, PDF and image files are transmitted and information is exchanged via distributed rapid exchange;
step two: obtaining bridge images and original pictures
An unmanned aerial vehicle is launched, and a second set of image samples of the bridge is acquired with its CCD camera and infrared light sensor. A municipal data website is then accessed; using an external source, a bridge picture library and a corresponding traditional knowledge base as data sources, the data is selected, combined, internalized/externalized and stripped of heterogeneous semantics, yielding original picture data of the bridge, which is established as multi-source comparison data;
step three: image processing
The bridge-surface pictures taken by the high-definition camera in step one and the bridge image samples obtained by the unmanned aerial vehicle in step two are preprocessed and denoised. The color-texture distribution of the bridge layer is extracted from the images to form a color sample set; the set is de-duplicated, the junctions of color and texture are marked, and the result is integrated into a real-time image library. In parallel, the color-texture distribution of the bridge layer in the multi-source comparison data is extracted, de-duplicated and marked in the same way, and integrated into a comparison image library;
step four: image modeling
The image data of the real-time image library and the comparison image library are turned into 3D visualized three-dimensional models, building a real-time bridge model and a comparison bridge model respectively. The numerical values of the corresponding color and texture features in the two models are then vectorized, completing a visualized real-time bridge vector model and a visualized comparison bridge vector model;
step five: data modeling
The bridge frame diagram obtained in step one is data-modeled. First, specific spatio-temporal value ranges of Path and Row are determined from the spatial position of the bridge and given a preliminary screening, so that each (Path, Row) pair corresponds to a definite position on the bridge, yielding a data-set model covering the bridge. The spatio-temporal data is then measured and discriminated: inaccurate data is classified and cleaned using data probability values, and near-duplicate records in the data-set model are cleaned with a multi-pass improved SNM (sorted neighborhood method) algorithm, producing an accurate, clear and visual data model. Finally, the spatio-temporal data is vectorized with SVG (scalable vector graphics) into points, lines and surfaces, which form specific spatio-temporal data coordinates and build a vectorized data model;
step six: model fusion
The vectorized data model of step five is fused into the visualized real-time bridge vector model of step four. Based on the exchangeable image file (EXIF) principle, the fusion uses the real-time vector model as carrier and embeds the associated spatio-temporal coordinate information into the physical structure of image, color and texture, achieving associated multi-element data fusion and forming a complete bridge physical model;
step seven: model comparison
The bridge physical model from step six and the visualized comparison bridge vector model from step four are entered into a metadata management system (MDMS) and compared, chiefly on color and texture features, mining associated and differing information. Elements with differing color and texture are marked in the bridge physical model, the pictures taken by the corresponding cameras in each marked area are extracted synchronously, and a live-action image of the area is displayed, from which an operator judges the bridge disease condition;
step eight: determining disease location
The spatio-temporal coordinates associated with the marked areas in the bridge physical model are obtained, and the specific positions of the bridge diseases are determined from the bridge size, distance and scale measured by the infrared rangefinder and the infrared light sensor.
2. The automatic road and bridge disease detection method based on machine vision technology according to claim 1, characterized in that: in step four, the 3D visualization of the three-dimensional models specifically comprises: using ContextCapture to build three-dimensional models from the image data of the real-time and comparison image libraries, visualizing the images in 3D, and enhancing the visualization effect through symbolization, in particular the color and texture features of the target model.
3. The automatic road and bridge disease detection method based on machine vision technology according to claim 1, characterized in that: in step five, the spatio-temporal data is vectorized with SVG into points, lines and surfaces, which form specific spatio-temporal data coordinates, each of which includes a scene image.
4. The method for automatically detecting road and bridge diseases based on machine vision technology as claimed in claim 1, wherein in the sixth step a portable import mechanism provided by the WebGIS application software is used to achieve seamless docking and lossless attribute integration of the space-time coordinate information, images, colors and texture physical structures in the bridge physical model, and a coordinate query function is provided for the model.
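Conceptually, the import-and-query step fuses several attribute streams under a shared spatio-temporal key. A toy stand-in for the WebGIS coordinate query (all class and field names are hypothetical, chosen only to mirror the attributes the claim lists):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class BridgeFeature:
    color: str
    texture: str
    image_path: str

@dataclass
class BridgeModelIndex:
    """Toy coordinate index standing in for the WebGIS import/query step:
    each spatio-temporal coordinate maps to its fused attributes."""
    features: Dict[Tuple[float, float, str], BridgeFeature] = field(
        default_factory=dict)

    def ingest(self, coord, color, texture, image_path):
        self.features[coord] = BridgeFeature(color, texture, image_path)

    def query(self, coord) -> Optional[BridgeFeature]:
        return self.features.get(coord)

idx = BridgeModelIndex()
idx.ingest((120.5, 31.2, "2020-12-25T08:00"), "gray", "cracked", "cam3/0815.jpg")
hit = idx.query((120.5, 31.2, "2020-12-25T08:00"))
print(hit.texture if hit else "not found")   # → cracked
```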
5. The method for automatically detecting road and bridge diseases based on machine vision technology as claimed in claim 1, wherein in the seventh step, after the pictures shot by the relevant cameras in the marked area are extracted, any picture that is not clear is sharpened a second time so that a high-definition live-action image of the area is displayed.
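The claim does not name a sharpening method; a common choice is a 3x3 Laplacian sharpening kernel, applied once and, per the claim, a second time if the picture is still unclear. A minimal sketch under that assumption:

```python
import numpy as np

def sharpen_once(img):
    """One pass of 3x3 Laplacian sharpening (border pixels replicated);
    apply a second pass only if the picture is still not clear."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float64)
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = (padded[r:r + 3, c:c + 3] * kernel).sum()
    return np.clip(out, 0, 255).astype(np.uint8)

# A faint bright spot on a flat background becomes strongly accentuated:
blurry = np.array([[100, 100, 100],
                   [100, 140, 100],
                   [100, 100, 100]], dtype=np.uint8)
print(sharpen_once(blurry))
```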
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011561812.5A CN112581402B (en) | 2020-12-25 | 2020-12-25 | Road and bridge fault automatic detection method based on machine vision technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112581402A CN112581402A (en) | 2021-03-30 |
CN112581402B true CN112581402B (en) | 2022-09-16 |
Family
ID=75139671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011561812.5A Active CN112581402B (en) | 2020-12-25 | 2020-12-25 | Road and bridge fault automatic detection method based on machine vision technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112581402B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554747B (en) * | 2021-07-28 | 2023-04-07 | 上海大风技术有限公司 | Unmanned aerial vehicle inspection data viewing method based on three-dimensional model |
CN113643432A (en) * | 2021-08-20 | 2021-11-12 | 北京市商汤科技开发有限公司 | Data editing method and device, computer equipment and storage medium |
CN114267003B (en) * | 2022-03-02 | 2022-06-10 | 城云科技(中国)有限公司 | Road damage detection method, device and application |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102842203B (en) * | 2012-09-04 | 2014-07-09 | 广东省公路管理局 | Method for monitoring bridge fault on basis of video image |
CN107967685A (en) * | 2017-12-11 | 2018-04-27 | 中交第二公路勘察设计研究院有限公司 | A kind of bridge pier and tower crack harmless quantitative detection method based on unmanned aerial vehicle remote sensing |
CN111709337B (en) * | 2020-06-08 | 2023-05-12 | 余姚市浙江大学机器人研究中心 | Remote positioning and map building device and method for wall climbing robot |
CN111762272B (en) * | 2020-07-17 | 2024-07-30 | 吉林大学 | Bridge detection device and method for automatically realizing detection surface conversion |
CN111967440B (en) * | 2020-09-04 | 2023-10-27 | 郑州轻工业大学 | Comprehensive identification treatment method for crop diseases |
- 2020-12-25 CN CN202011561812.5A patent/CN112581402B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109544679B (en) | Three-dimensional reconstruction method for inner wall of pipeline | |
JP6289564B2 (en) | Method, apparatus and computer readable medium for detecting changes to structures | |
Garilli et al. | Automatic detection of stone pavement's pattern based on UAV photogrammetry | |
CN116630394B (en) | Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint | |
CN105654732A (en) | Road monitoring system and method based on depth image | |
CN112800911A (en) | Pavement damage rapid detection and natural data set construction method | |
CN110136186B (en) | Detection target matching method for mobile robot target ranging | |
CN110298330B (en) | Monocular detection and positioning method for power transmission line inspection robot | |
CN107491071A (en) | A kind of Intelligent multi-robot collaboration mapping system and its method | |
CN110503637B (en) | Road crack automatic detection method based on convolutional neural network | |
CN109342423A (en) | A kind of urban discharging pipeline acceptance method based on the mapping of machine vision pipeline | |
CN114639115B (en) | Human body key point and laser radar fused 3D pedestrian detection method | |
CN115035251B (en) | Bridge deck vehicle real-time tracking method based on field enhanced synthetic data set | |
CN114972177A (en) | Road disease identification management method and device and intelligent terminal | |
CN110349209A (en) | Vibrating spear localization method based on binocular vision | |
CN114812403A (en) | Large-span steel structure hoisting deformation monitoring method based on unmanned aerial vehicle and machine vision | |
Gao et al. | Large-scale synthetic urban dataset for aerial scene understanding | |
CN112932910A (en) | Wearable intelligent sensing blind guiding system | |
CN116630267A (en) | Roadbed settlement monitoring method based on unmanned aerial vehicle and laser radar data fusion | |
CN116071747A (en) | 3D point cloud data and 2D image data fusion matching semantic segmentation method | |
Hsu et al. | Defect inspection of indoor components in buildings using deep learning object detection and augmented reality | |
CN113033386B (en) | High-resolution remote sensing image-based transmission line channel hidden danger identification method and system | |
Yin et al. | Promoting Automatic Detection of Road Damage: A High-Resolution Dataset, a New Approach, and a New Evaluation Criterion | |
CN116721300A (en) | Prefabricated part apparent disease target detection method based on improved YOLOv3 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||