CN116309672B - Night bridge dynamic deflection measuring method and device based on LED targets - Google Patents
- Publication number: CN116309672B (application CN202310581503.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- gray
- threshold
- frame image
- initial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/16—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
The invention provides a night bridge dynamic deflection measuring method and device based on LED targets. In the process of measuring the night bridge dynamic deflection, an LED target is first set up and recorded to obtain a bridge vibration video. The gray threshold of each frame of image is then solved iteratively: the gray threshold range is narrowed over several iterations until the correct threshold between the foreground light spot and the background is found, so that the foreground light spot and the background are segmented accurately, errors in light spot extraction caused by environmental factors are avoided, and the bridge dynamic deflection measurement result is more accurate.
Description
Technical Field
The invention relates to the technical field of bridge structure monitoring, in particular to a night bridge dynamic deflection measuring method and device based on an LED target.
Background
In vision-based sensing, a structural vibration video is first captured with image acquisition equipment such as an industrial camera or a digital camera; the pixel displacement of the structure is then obtained from each frame of the video, and a scale factor converts the pixel distance into an actual distance, yielding the actual displacement of the structure. Compared with measurement using equipment such as acceleration sensors, microwave radar, and lidar, vision sensing is low in measurement cost and can realize multi-point dynamic monitoring, so it has attracted wide attention from scholars at home and abroad.
In practical engineering, however, many bridge safety inspections are performed at night to avoid interfering with daily traffic. At night, one of the main challenges is the insufficient visibility and resolution of natural textures on a bridge, which makes accurate measurement with ordinary targets or target-free vision sensing difficult; an LED target is therefore used, and the bridge measuring point displacement is obtained by calculating the change of the light spot center. Common algorithms for locating the spot center include the gray centroid method, circle fitting, the Hough method, and Gaussian surface fitting. The gray centroid method is simple in principle and fast to compute, so it is widely applied, but it requires threshold segmentation of the spot image first. When the commonly used OTSU method (maximum inter-class variance method) is disturbed by strong light, fog, artifacts, and similar phenomena during LED target imaging, part of the background is misclassified as foreground, so the light spot extraction produces errors, the spot center computed by the gray centroid method is affected, and the bridge dynamic deflection measurement fails.
In view of this, overcoming the drawbacks of the prior art is a problem to be solved in the art.
Disclosure of Invention
The invention aims to solve the technical problem of accurately dividing the foreground and the background in the image of the LED target and reducing errors generated in the light spot extraction process.
The invention adopts the following technical scheme:
in a first aspect, a method for measuring night bridge dynamic deflection based on an LED target is provided, including:
setting an LED target at a bridge measuring point position to obtain a first preset image set, and obtaining each frame of image of the first preset image set;
iteratively solving the gray threshold of each frame of image, so as to narrow the gray threshold range of each frame of image, and determining the final threshold of each frame of image through the reduced threshold range of each frame of image;
dividing each frame of image according to its final threshold to obtain a final foreground;
and obtaining the bridge dynamic deflection change according to the final foreground.
Preferably, iteratively solving the gray threshold of each frame image to narrow its gray threshold range, and determining the final threshold of each frame image from the narrowed range, specifically includes:
acquiring the i-th frame image; when i = 1, the initial threshold th_i of the i-th frame image is set to 0; when i > 1, acquiring the gray histogram H_i of the i-th frame image and the gray histogram H_{i-1} of the (i-1)-th frame image, and judging whether the correlation coefficient between H_i and H_{i-1} is larger than a preset coefficient; when it is larger than the preset coefficient, the initial threshold th_i of the i-th frame image is set to the final threshold of the (i-1)-th frame image, and when it is smaller than or equal to the preset coefficient, the initial threshold th_i of the i-th frame image is set to 0;
in each iteration, the i-th frame image is thresholded with the initial threshold th_i to obtain an initial foreground, and whether the initial foreground satisfies a first judging condition and a second judging condition is checked. If one or both conditions are not satisfied, an exhaustive gray range OTSU_gray of the i-th frame image is obtained from its gray histogram H_i and the initial threshold th_i, an updated threshold th_new of the i-th frame image is obtained over the exhaustive gray range OTSU_gray, the initial threshold is updated to th_i = th_new, and the updated initial threshold th_i is fed into the next iteration. When the initial foreground satisfies the first and second judging conditions simultaneously, th_i is output as the final threshold of the i-th frame image.
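The iteration above can be sketched in numpy. This is a minimal illustration, not the patent's exact implementation: the function names, the 0.1 % sparsity fraction used to locate the low-population "valley" levels, and the default `accept` check are assumptions for the sake of a runnable example.

```python
import numpy as np

def otsu_in_range(hist, lo, hi):
    """Exhaustive Otsu-style search restricted to gray levels [lo, hi]."""
    total = hist.sum()
    levels = np.arange(256)
    best_th, best_var = lo, -1.0
    for th in range(lo, hi + 1):
        w0 = hist[:th].sum() / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:th] * hist[:th]).sum() / max(hist[:th].sum(), 1)
        mu1 = (levels[th:] * hist[th:]).sum() / max(hist[th:].sum(), 1)
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_th = var, th
    return best_th

def final_threshold(gray, th_init=0, accept=lambda fg: True, max_iter=20):
    """Iteratively narrow the gray-threshold range until the thresholded
    foreground passes the acceptance checks (circularity/area in the claims;
    here a caller-supplied callable)."""
    th = th_init
    for _ in range(max_iter):
        fg = gray >= th
        if th > 0 and accept(fg):
            return th                       # conditions met: final threshold
        hist = np.bincount(gray.ravel(), minlength=256)
        # levels holding under 0.1% of pixels form the sparse valley region
        sparse = np.where(hist < 0.001 * gray.size)[0]
        lo = int(max(sparse.min() if sparse.size else 0, th))
        hi = int(sparse.max()) if sparse.size else 255
        th_new = otsu_in_range(hist, lo, hi)
        if th_new == th:
            return th
        th = th_new
    return th
```

On a synthetic frame with a dark background and a small bright spot, the returned threshold separates the two populations.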
Preferably, the calculation formula of the correlation coefficient between the H_i and the H_{i-1} is as follows:

$$d(H_{i-1},H_i)=\frac{\sum_{I}\bigl(H_{i-1}(I)-\bar{H}_{i-1}\bigr)\bigl(H_{i}(I)-\bar{H}_{i}\bigr)}{\sqrt{\sum_{I}\bigl(H_{i-1}(I)-\bar{H}_{i-1}\bigr)^{2}\,\sum_{I}\bigl(H_{i}(I)-\bar{H}_{i}\bigr)^{2}}}$$

wherein d(H_{i-1}, H_i) is the correlation coefficient between H_i and H_{i-1}, \bar{H}_k=\frac{1}{N}\sum_{J}H_k(J), N is the number of histogram bins (gray levels), and I runs over all bins.
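A short numpy sketch of this correlation coefficient (the Pearson form; bin handling is an assumption consistent with comparing two 256-bin gray histograms):

```python
import numpy as np

def hist_correlation(h_prev, h_cur):
    """Correlation coefficient d(H_{i-1}, H_i) between two gray histograms."""
    h_prev = np.asarray(h_prev, dtype=float)
    h_cur = np.asarray(h_cur, dtype=float)
    dp = h_prev - h_prev.mean()   # H_{i-1}(I) - mean of H_{i-1}
    dc = h_cur - h_cur.mean()     # H_i(I)     - mean of H_i
    denom = np.sqrt((dp * dp).sum() * (dc * dc).sum())
    return float((dp * dc).sum() / denom) if denom > 0 else 1.0
```

The previous frame's final threshold would be reused when this value exceeds the preset coefficient (0.95 in the embodiment).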
Preferably, the judging whether the initial foreground meets the first judging condition and the second judging condition specifically includes:
the first judging condition: obtaining the light spot circularity from the initial foreground, and judging whether the circularity is smaller than a preset value;
the second judging condition: obtaining the light spot area A_i of the i-th frame image from its initial foreground and the light spot area A_{i-1} of the (i-1)-th frame image from its final foreground, and judging whether the light spot area A_i of the i-th frame image is smaller than a preset multiple of the light spot area A_{i-1} of the (i-1)-th frame image.
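The two checks can be sketched as follows. The circularity metric e = P²/(4πA), the boundary-pixel perimeter estimate, and the limits 1.8 and 1.5 are illustrative assumptions; the patent only states that preset values exist.

```python
import numpy as np

def circularity(mask):
    """Rough circularity e = P^2 / (4*pi*A): about 1 for a disc, larger for
    irregular blobs. Perimeter P is estimated by counting boundary pixels
    (mask pixels with at least one 4-neighbour outside the mask)."""
    area = mask.sum()
    if area == 0:
        return np.inf
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
    return boundary.sum() ** 2 / (4 * np.pi * area)

def accept_foreground(mask, prev_area, circ_limit=1.8, area_mult=1.5):
    """Both judging conditions: shape close to a circle, and area not
    exploding relative to the previous frame's final foreground."""
    ok_shape = circularity(mask) < circ_limit
    ok_area = prev_area is None or mask.sum() < area_mult * prev_area
    return ok_shape and ok_area
```

A filled disc passes the shape check while a thin streak (e.g. a glare artifact) fails it.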
Preferably, obtaining the exhaustive gray range OTSU_gray of the i-th frame image from its gray histogram H_i and initial threshold th_i specifically includes:
acquiring all gray values whose pixel counts in the gray histogram H_i of the i-th frame image are smaller than a preset proportion; among all the obtained gray values, the maximum is img_gray_max and the minimum is img_gray_min. The img_gray_min is compared with the initial threshold th_i of the i-th frame image: when img_gray_min is greater than th_i, img_gray_min is kept unchanged; when img_gray_min is less than or equal to th_i, set img_gray_min = th_i. The exhaustive gray range OTSU_gray of the i-th frame image is then:

$$OTSU_{gray}=\left[\,img\_gray_{min},\ img\_gray_{max}\,\right]$$
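A minimal sketch of this range construction (the 0.1 % default for the preset proportion is an illustrative assumption):

```python
import numpy as np

def exhaustive_range(hist, th, frac=0.001):
    """Gray levels whose pixel count is below `frac` of the total form the
    sparse valley between background and spot; the lower end of the search
    range is clipped so it never falls below the current threshold."""
    total = hist.sum()
    sparse = np.where(hist < frac * total)[0]
    if sparse.size == 0:
        return th, 255
    lo, hi = int(sparse.min()), int(sparse.max())
    if lo <= th:        # img_gray_min <= th_i  ->  img_gray_min = th_i
        lo = th
    return lo, hi
```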
Preferably, obtaining the updated threshold th_new of the i-th frame image from the exhaustive gray range specifically includes:
traversing every gray value th in the exhaustive gray range of the i-th frame image and substituting it into the objective function formula; the gray value th at which the objective function takes its maximum value is the updated threshold th_new of the i-th frame image.
The objective function formula is:

$$f(th)=\frac{\omega_0(th)\,\bigl[\mu_0(th)-\mu\bigr]^2+\omega_1(th)\,\bigl[\mu_1(th)-\mu\bigr]^2}{\sigma_1^2(th)},\qquad \omega_0(th)=\sum_{i=0}^{th-1}p_i,\quad \omega_1(th)=\sum_{i=th}^{L-1}p_i$$

wherein f(th) is the objective function; μ is the gray mean of the initial foreground pixels of the i-th frame image; L is the maximum gray value; p_i is the probability that a gray value in the initial foreground pixels of the i-th frame image equals i; ω_0(th) is the probability of pixels whose gray value is smaller than th in the initial foreground pixels of the i-th frame image; ω_1(th) is the probability of pixels whose gray value is greater than or equal to th; μ_0(th) is the gray mean of pixels whose gray value is smaller than th; μ_1(th) is the gray mean of pixels whose gray value is greater than or equal to th; and σ_1²(th) is the variance of pixels whose gray value is greater than or equal to th, all taken over the initial foreground pixels of the i-th frame image.
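One plausible numpy reading of this objective: between-class separation of the initial-foreground gray distribution, normalized by the bright-class variance. The exact weighting in the patent is not recoverable from the text, so treat this as an illustrative stand-in rather than the definitive formula.

```python
import numpy as np

def objective(p, th, L=256):
    """Objective over the foreground gray distribution p (p[k] = probability
    of gray level k): between-class separation divided by the variance of
    the class at/above th, falling back to the unnormalised value when the
    bright class is degenerate (zero variance)."""
    levels = np.arange(L, dtype=float)
    w0 = p[:th].sum()
    w1 = p[th:].sum()
    if w0 <= 0 or w1 <= 0:
        return 0.0
    mu = (levels * p).sum()                    # foreground gray mean
    mu0 = (levels[:th] * p[:th]).sum() / w0    # mean below th
    mu1 = (levels[th:] * p[th:]).sum() / w1    # mean at/above th
    num = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2
    s1 = (((levels[th:] - mu1) ** 2) * p[th:]).sum() / w1
    return num / s1 if s1 > 0 else num

def best_threshold(p, lo, hi):
    """th_new = argmax of the objective over the exhaustive gray range."""
    return max(range(lo, hi + 1), key=lambda th: objective(p, th))
```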
Preferably, after setting the LED target at the bridge measuring point position to obtain the first preset image set, the method further includes:
performing bilateral filtering noise reduction on the first preset image set, so that each frame of image in the first preset image set is denoised and sharpened and the interference between the foreground and the background in each frame of image is reduced;
The bilateral filtering noise reduction formula is as follows:

$$K(i,j)=\frac{1}{W(i,j)}\sum_{(x,y)\in\Omega(i,j)}W_s(x,y)\,W_r(x,y)\,I(x,y)$$

$$W(i,j)=\sum_{(x,y)\in\Omega(i,j)}W_s(x,y)\,W_r(x,y)$$

wherein K(i,j) is the image output after bilateral filtering noise reduction, I(i,j) is the original input image, W(i,j) is the normalization parameter, and Ω(i,j) is the 5×5 neighborhood range around the pixel point; W_s(x,y) is the spatial proximity weight and W_r(x,y) is the gray similarity weight, where:

$$W_s(x,y)=\exp\!\left(-\frac{(x-i)^2+(y-j)^2}{2\sigma_s^2}\right)$$

$$W_r(x,y)=\exp\!\left(-\frac{\bigl(I(x,y)-I(i,j)\bigr)^2}{2\sigma_r^2}\right)$$

wherein σ_s is the spatial-domain factor, characterized by the neighborhood radius of the pixel point, and σ_r is the gray-domain factor, characterized by the neighborhood gray standard deviation; I(i,j) is the gray value of the center pixel, and I(x,y) is the gray value of a point with coordinates (x,y) in the 5×5 neighborhood of (i,j).
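A direct numpy sketch of this 5×5 bilateral filter (the σ defaults are illustrative assumptions; a production pipeline would typically use an optimized library routine instead of the explicit double loop):

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=25.0, radius=2):
    """Brute-force bilateral filter over a (2*radius+1)^2 neighbourhood;
    radius=2 gives the 5x5 window from the claims."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # spatial weights Ws depend only on the offset, so compute them once
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    ws = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # gray similarity weights Wr against the centre pixel
            wr = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[i, j] = (wgt * patch).sum() / wgt.sum()  # K = sum/W
    return out
```

A constant image passes through unchanged, and filtered values always stay within the input range, which is what makes the filter edge-preserving around the spot boundary.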
Preferably, the obtaining the bridge dynamic deflection change according to the final foreground includes:
obtaining the light spot center of each frame of image from its final foreground, and obtaining the bridge measuring point pixel displacement from the displacement of the spot center;
obtaining the measuring point scale factor from the size of the LED target, and converting the bridge measuring point pixel displacement with the scale factor to obtain the actual displacement of the bridge measuring point, thereby obtaining the bridge dynamic deflection change.
Preferably, the light spot center of each frame of image is obtained from its final foreground by the formula:

$$x=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n} i\,I_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n} I_{ij}},\qquad y=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n} j\,I_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n} I_{ij}}$$

wherein i and j are pixel coordinates, I_{ij} is the gray value at pixel coordinate (i, j), m is the number of pixels in the horizontal direction of the image, n is the number of pixels in the vertical direction, and x and y are the coordinates of the light spot center;
the measuring point scale factor is obtained from the size of the LED target by the formula:

$$SF=\frac{d_{known}}{I_{known}}$$

wherein SF is the measuring point scale factor, d_{known} is the actual size of the LED target, and I_{known} is the pixel length in the image corresponding to that actual size of the target.
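The gray centroid and the scale-factor conversion can be sketched in a few lines (the parameter `l_known` stands for I_{known} above; the axis convention — rows as i, columns as j — is an assumption):

```python
import numpy as np

def spot_center(fg):
    """Gray-weighted centroid of the segmented spot (gray centroid method).
    fg holds gray values inside the final foreground and 0 elsewhere."""
    fg = fg.astype(float)
    total = fg.sum()
    ii, jj = np.mgrid[:fg.shape[0], :fg.shape[1]]
    return (ii * fg).sum() / total, (jj * fg).sum() / total

def to_actual_displacement(pixel_disp, d_known, l_known):
    """Convert pixel displacement to physical units with SF = d_known / l_known
    (actual target size over its image length in pixels)."""
    return pixel_disp * (d_known / l_known)
```

For example, a 50 mm target spanning 100 pixels gives SF = 0.5 mm/pixel, so a 4-pixel spot displacement corresponds to 2 mm of deflection.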
In a second aspect, an LED target-based night bridge dynamic deflection measuring device includes at least one processor, and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor for executing the night bridge dynamic deflection measuring method based on the LED targets.
The embodiment of the invention provides a night bridge dynamic deflection measuring method and device based on an LED target. In the process of measuring the night bridge dynamic deflection, the LED target is first set up and recorded, images are acquired frame by frame, and the gray threshold of each frame of image is solved iteratively: the gray threshold range is narrowed over several iterations to find a more accurate threshold between the foreground and the background in the image, so that the foreground and the background are segmented accurately, errors in light spot extraction caused by environmental factors are avoided, and the bridge dynamic deflection measurement result is more accurate.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a method flow chart of a night bridge dynamic deflection measuring method of an LED target according to an embodiment of the present invention;
fig. 2 is a non-interference light spot diagram and a non-interference light spot gray scale diagram of a night bridge dynamic deflection measuring method of an LED target according to an embodiment of the present invention;
fig. 3 is a fog interference light spot diagram and a fog interference light spot gray scale diagram of a night bridge dynamic deflection measuring method of an LED target according to an embodiment of the present invention;
fig. 4 is an iteration flowchart of a method for measuring night bridge dynamic deflection of an LED target according to an embodiment of the present invention;
fig. 5 is an imaging diagram of a light spot under environmental interference and the corresponding imaging diagram of the existing OTSU threshold method, in the night bridge dynamic deflection measurement method of an LED target according to an embodiment of the present invention;
fig. 6 is a comparison between the imaging diagram of the OTSU threshold method and that of the iterative threshold method of the night bridge dynamic deflection measurement method of an LED target according to an embodiment of the present invention;
fig. 7 is a flowchart of a method for measuring night bridge dynamic deflection of an LED target according to an embodiment of the present invention;
fig. 8 is a schematic diagram of the cantilever structure and the LED target positions in vibration measurement according to the night bridge dynamic deflection measurement method of an LED target provided by the embodiment of the present invention;
FIG. 9 shows the LED target dynamic deflection measurement result under working condition 1 of the night bridge dynamic deflection measurement method of an LED target provided by an embodiment of the invention;
FIG. 10 is a graph showing the measurement result of the dynamic deflection of the LED target under the working condition 2 of the night bridge dynamic deflection measuring method of the LED target according to the embodiment of the invention;
FIG. 11 is a graph showing the measurement result of the dynamic deflection of the LED target under the working condition 3 of the night bridge dynamic deflection measuring method of the LED target provided by the embodiment of the invention;
fig. 12 is a diagram of error calculation results of a method for measuring night bridge dynamic deflection of an LED target according to an embodiment of the present invention;
fig. 13 is a schematic diagram of a device for measuring night bridge dynamic deflection of an LED target according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, terms such as "inner", "outer", "longitudinal", "transverse", "upper", "lower", "top", "bottom", and the like refer to an orientation or positional relationship based on that shown in the drawings, and are merely for convenience in describing the present invention and do not require that the present invention must be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
The terms "first," "second," and the like herein are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the present application, unless explicitly specified and limited otherwise, the term "coupled" is to be construed broadly, and for example, "coupled" may be either fixedly coupled, detachably coupled, or integrally formed; can be directly connected or indirectly connected through an intermediate medium. Furthermore, the term "coupled" may be a means of electrical connection for achieving signal transmission.
In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Example 1:
the embodiment 1 of the invention provides a night bridge dynamic deflection measuring method based on an LED target, which comprises the following steps:
in step 101, setting an LED target at a bridge measurement point position to obtain a first preset image set, and obtaining each frame of image of the first preset image set.
In this embodiment, the LED target is an LED lamp. In actual engineering, to avoid interfering with daily traffic, safety inspection of many bridges is performed at night. During night measurement, LED targets are set at preset measuring points of the bridge, the targets are then recorded on video, and the displacement of the LED target light spots is calculated frame by frame from the video, so that the dynamic deflection at the bridge measuring point positions is measured. The recorded video can be regarded as a collection of superimposed frames, so the first preset image set can be regarded as the recorded video, including every frame in the video.
In step 102, the gray threshold of each frame of image is iteratively solved, so as to narrow the gray threshold range of each frame of image, and the final threshold of each frame of image is determined through the reduced threshold range of each frame of image.
In step 103, each frame of image is segmented according to the final threshold value of each frame of image to obtain a final background and a final foreground.
In step 104, the bridge dynamic deflection change is obtained according to the final prospect.
For each frame of image, the pixel points of the light spot itself need to be identified separately, and the displacement of the LED target is then obtained from the displacement of the spot center across different frames. The light spot in each frame must therefore be separated from everything else in the image so that its pixel points are found accurately. In this embodiment, the light spot is called the foreground, and the region outside the light spot is called the background.
The threshold value is a critical gray value between the foreground and the background, the pixel points corresponding to the gray value larger than or equal to the threshold value are classified as the foreground, and the pixel points corresponding to the gray value smaller than the threshold value are classified as the background; the gray threshold range is a range interval in which a threshold exists.
As shown in fig. 2, the X-axis of the interference-free spot gray histogram is the gray value and the Y-axis is the number of pixels at that gray value. The histogram shows a peak concentrated at low gray values, with the pixel count exceeding 3000; this represents the mostly black background of the image. A lower peak exists at high gray values, representing the light spot, which occupies a small part of the interference-free spot image. Because the peaks are concentrated, the vast majority of pixels of the whole image belong to only two gray clusters, foreground and background, the contour between them is clear, and the spot can essentially be identified accurately. The gray threshold range of the image is the range of gray values between the two peaks, where the number of pixels is very small; the iteration of this embodiment narrows this range to obtain an accurate final threshold.
As shown in fig. 3, the X-axis of the fog-interference spot gray histogram is the gray value and the Y-axis is the number of pixels at that gray value. The pixel count of the highest peak at low gray values only slightly exceeds 1200, lower than the corresponding peak in the interference-free histogram. The rightmost peak represents the foreground spot, while several other peaks to the right of the highest background peak correspond to the interference regions around the spot in the fog-interference image: these regions have a certain brightness, with gray values below the spot gray but above the background gray, so they must not be included in the gray threshold range of the image, i.e., the gray value range with an extremely small pixel count toward the middle right. The iteration of this embodiment narrows the gray threshold range accordingly to obtain an accurate final threshold.
In the prior art, the OTSU method, often paired with the gray centroid method, is considered one of the best threshold selection algorithms in image segmentation; it is simple to compute and unaffected by image brightness and contrast, so it is widely applied in digital image processing. However, when facing artifacts, strong light interference, or fog, the spot image cannot be extracted correctly by the OTSU method. Artifacts, strong light, and fog in the imaging of an LED target generate intermediate gray values between the darker background and the brighter spot, and most coefficients in the inter-class variance formula used by OTSU thresholding are related to gray value probability: the more pixels a gray range contains, the larger its influence on the inter-class variance, while the spot has relatively few pixels. When the spot is affected by intermediate grays, a single threshold calculation can misclassify part of the intermediate-gray background as the foreground spot. Moreover, in the prior art the gray threshold range is generally set to the whole gray interval, i.e., 0 to 255, and the threshold is computed over that whole interval; the calculation is therefore less efficient, and the gray values of the interference regions are also included in the calculation range, so the final threshold is wrong, the foreground and background are segmented incorrectly, and the final measurement result is affected.
In the process of measuring the bridge dynamic deflection at night, the LED target is first set up and recorded, images are acquired frame by frame, and the gray threshold of each frame of image is solved iteratively: the gray threshold range is narrowed over multiple iterations, the foreground pixels are thresholded repeatedly to approach the true threshold, and a more accurate threshold between the foreground and the background is found, so that the foreground and the background are segmented accurately, errors in light spot extraction caused by environmental factors are avoided, and the bridge dynamic deflection measurement result is more accurate.
Because the light spots on adjacent frames do not differ greatly except when the spot is displaced or external interference appears or disappears, computation during iteration can be saved by treating the initial thresholds of adjacent frames with small changes as identical and iterating from there to narrow the gray threshold range. This embodiment therefore adopts the following design:
The iteration flow, shown in fig. 4, is as follows:
In step 401, the i-th frame image and the (i-1)-th frame image are acquired, together with the gray histogram H_i of the i-th frame image and the gray histogram H_{i-1} of the (i-1)-th frame image.
In step 402, it is determined whether the relation coefficient d(H_{i-1}, H_i) between H_i and H_{i-1} is larger than a preset coefficient; if yes, go to step 403; if not, go to step 404.
In step 403, when the relation coefficient is greater than the preset coefficient, the initial threshold th_i of the i-th frame image is set to the final threshold of the (i-1)-th frame image, and the flow jumps to step 405.
In step 404, when the relation coefficient is less than or equal to the preset coefficient, the initial threshold th_i of the i-th frame image is set to 0, and the flow jumps to step 409.
When the i-th frame image is acquired and i = 1, the initial threshold th_i of the i-th frame image is set to 0; when i > 1, the gray histogram H_i of the i-th frame image and the gray histogram H_{i-1} of the (i-1)-th frame image are acquired.
In this embodiment, the preset coefficient is 0.95. When the relation coefficient of two adjacent frames is greater than 0.95, the spots in the two frames are essentially unchanged, and the initial threshold th_i of the current frame can reuse the final threshold of the previous frame; this avoids iterating the initial threshold from 0 and improves iteration efficiency, and further iteration on that basis narrows the gray threshold range to obtain a more accurate threshold. When the relation coefficient of two adjacent frames is less than or equal to 0.95, the spots have changed substantially, and the initial threshold must be reset to 0 before iteration begins.
The relation coefficient between H_i and H_{i-1} is calculated as:

$$d(H_{i-1},H_i)=\frac{\sum_{J\in I}\left(H_{i-1}(J)-\bar{H}_{i-1}\right)\left(H_i(J)-\bar{H}_i\right)}{\sqrt{\sum_{J\in I}\left(H_{i-1}(J)-\bar{H}_{i-1}\right)^2\sum_{J\in I}\left(H_i(J)-\bar{H}_i\right)^2}}$$

wherein d(H_{i-1}, H_i) is the relation coefficient between H_i and H_{i-1}, $\bar{H}_k=\frac{1}{N}\sum_{J\in I}H_k(J)$, N is the number of image pixels, and I is the set of coordinates of all image pixels.
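As a non-authoritative sketch, the relation coefficient above can be computed from two gray histograms as follows; this is the standard correlation form (also used by OpenCV's HISTCMP_CORREL), and the sample histograms and the 0.95 preset coefficient are illustrative:

```python
import numpy as np

def hist_correlation(h1, h2):
    """Relation coefficient d(H_{i-1}, H_i) between two gray histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    d1 = h1 - h1.mean()
    d2 = h2 - h2.mean()
    denom = np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())
    # Two flat (zero-variance) histograms are treated as perfectly correlated.
    return float((d1 * d2).sum() / denom) if denom > 0 else 1.0

# Two nearly identical histograms correlate above the 0.95 preset coefficient.
h_prev = np.array([100, 50, 10, 5, 200], dtype=float)
h_curr = np.array([ 98, 52, 11, 5, 199], dtype=float)
```

When the returned value exceeds the preset coefficient, the previous frame's final threshold is reused as th_i; otherwise th_i is reset to 0.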
Each round of iterative flow is as follows:
In step 405, the i-th frame image is threshold-segmented according to the initial threshold th_i to obtain an initial foreground.
In step 406, it is determined whether the initial foreground meets a first judgment condition; if yes, go to step 407; if not, go to step 409.
In step 407, it is determined whether the initial foreground meets a second judgment condition; if yes, go to step 408; if not, go to step 409.
In step 408, the initial threshold th_i is output as the final threshold.
In step 409, the exhaustive gray range OTSU_gray of the i-th frame image is obtained from the gray histogram H_i of the i-th frame image and the initial threshold th_i.
In step 410, the updated threshold th_new of the i-th frame image is obtained from the exhaustive gray range OTSU_gray.
In step 411, the initial threshold is updated to th_i = th_new, and the flow jumps to step 405.
The first judgment condition is: obtain the spot circularity from the initial foreground, and judge whether the spot circularity is smaller than a preset value. Since the spot is mostly circular or elliptical, the circularity of the thresholded initial foreground serves as an iteration-stop condition; when the foreground is circular or elliptical, the correct foreground image has been obtained.
The spot circularity formula is:

$$\mathrm{Roundness}=\frac{4\pi A}{F^2}$$

wherein F is the initial foreground perimeter and A is the initial foreground area; when the foreground is an ideal circle, Roundness = 1.
In addition, to prevent artifact, strong-light and fog interference pixels distributed around the spot from causing misjudgment, an inter-frame area-consistency constraint is added, i.e. the spots of consecutive frames are assumed not to change greatly. If the circularity of the image meets the requirement but the foreground area is much larger than that of the previous frame, a misjudgment is considered to have occurred, threshold iteration continues on the foreground pixels, and the iteration-termination condition becomes that the foreground area is similar to that of the previous frame.
The second judgment condition obtains the spot area A_i of the i-th frame image from its initial foreground, and judges whether A_i is smaller than a preset multiple of the spot area A_{i-1} of the (i-1)-th frame image.
And when the first judging condition and the second judging condition are met, the correct facula image threshold value is considered to be obtained.
As can be seen from the description of fig. 2 and 3 above, very few pixels correspond to the critical gray values between foreground and background. Gray values whose pixel share in the whole image is below a certain proportion can therefore be collected to form a gray threshold range, and the final threshold necessarily lies within that range. This embodiment thus involves the following design:
Obtaining the exhaustive gray range OTSU_gray of the i-th frame image from its gray histogram H_i and initial threshold th_i specifically comprises:

acquiring all gray values in the gray histogram H_i of the i-th frame image whose pixel counts are smaller than a preset proportion, wherein the maximum of these gray values is img_gray_max and the minimum is img_gray_min; comparing img_gray_min with the initial threshold th_i of the i-th frame image: when img_gray_min is greater than th_i, img_gray_min remains unchanged, and when img_gray_min is less than or equal to th_i, img_gray_min = th_i; the exhaustive gray range OTSU_gray of the i-th frame image is then:

$$\mathrm{OTSU}_{gray}=\left[\mathrm{img\_gray}_{min},\ \mathrm{img\_gray}_{max}\right]$$
In this embodiment, the preset proportion is 0.2%: gray values whose pixel count is below 0.2% are candidates for the final threshold of the image, and the largest of them is the upper limit img_gray_max of the exhaustive gray range. The smallest of these gray values, however, may be below the initial threshold th_i. Since th_i is used to segment foreground and background before the condition judgment, the minimum of the gray threshold range in subsequent iterations need only exceed th_i; therefore, when the minimum gray value selected by the preset proportion is smaller than th_i, that minimum is not adopted, and th_i is taken instead as the lower limit img_gray_min of the exhaustive gray range.
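A sketch of the exhaustive-range construction described above, assuming the histogram is a 256-bin pixel-count array; gray levels absent from the image are ignored, and the sample histogram is illustrative:

```python
import numpy as np

def exhaustive_gray_range(hist, th_init, ratio=0.002):
    # Gray levels present in the image whose pixel share is below `ratio`
    # (0.2% in the embodiment) bound the threshold search range.
    hist = np.asarray(hist, dtype=float)
    total = hist.sum()
    rare = np.where((hist > 0) & (hist < ratio * total))[0]
    lo, hi = int(rare.min()), int(rare.max())
    # The lower bound may not fall below the current initial threshold th_i.
    if lo <= th_init:
        lo = th_init
    return lo, hi

hist = np.zeros(256)
hist[10], hist[200] = 600, 398   # background and spot peaks
hist[50], hist[150] = 1, 1       # rare critical gray levels
```

With a nonzero initial threshold, the lower limit is clipped to th_i, matching the rule above.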
After the exhaustive gray range is obtained, the updated threshold th_new is found by traversing all gray levels within the range, so this embodiment involves the following design:
Obtaining the updated threshold th_new of the i-th frame image from the exhaustive gray range specifically comprises:

traversing the exhaustive gray range OTSU_gray of the i-th frame image and substituting each gray value th into the objective-function formula; the gray value th at which the objective function takes its maximum is the updated threshold th_new of the i-th frame image.

The objective-function formula is:

$$g(th)=\omega_0\left(\mu_0-\mu\right)^2+\omega_1\left(\mu_1-\mu\right)^2-\sigma_1^2$$

wherein g(th) is the objective function; μ is the gray mean of the initial foreground pixels of the i-th frame image; L is the maximum gray value; p_i is the probability that a gray value in the initial foreground pixels equals i, with $\omega_0=\sum_{i=0}^{th-1}p_i$ the probability of pixels whose gray value is smaller than th and $\omega_1=\sum_{i=th}^{L}p_i$ the probability of pixels whose gray value is larger than th; μ_0 is the gray mean of pixels whose gray value is smaller than th; σ_1² is the variance of pixels whose gray value is larger than th; and μ_1 is the gray mean of pixels whose gray value is larger than th.
The objective-function formula is obtained by improving the between-class variance formula of the OTSU method. Analysis of the spot gray histogram shows that correctly segmented spot pixels have uniform brightness and small gray-level differences, i.e. a small within-class variance in the foreground; combining a minimum foreground within-class variance assumption with the maximum between-class variance criterion makes the objective function better suited to spot images.
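A sketch of one threshold update under this objective, interpreted as the between-class variance of the initial-foreground pixels minus the within-class variance of the brighter class; the sample pixel values are illustrative:

```python
import numpy as np

def objective(pixels, th):
    # Between-class variance minus the brighter class's within-class variance,
    # evaluated over the initial-foreground pixels only.
    pixels = np.asarray(pixels, dtype=float)
    mu = pixels.mean()
    lo, hi = pixels[pixels < th], pixels[pixels >= th]
    if lo.size == 0 or hi.size == 0:
        return -np.inf
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * (lo.mean() - mu) ** 2 + w1 * (hi.mean() - mu) ** 2 - hi.var()

def update_threshold(pixels, gray_range):
    # th_new is the gray value maximizing the objective over OTSU_gray.
    lo, hi = gray_range
    return max(range(lo, hi + 1), key=lambda th: objective(pixels, th))

# Mixed intermediate-gray background (10 and 40) and bright spot (200).
pixels = [10] * 50 + [40] * 10 + [200] * 50
th_new = update_threshold(pixels, (1, 255))
```

The update drives the threshold above the intermediate-gray values, so the bright spot alone survives as foreground.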
After the updated threshold th_new is obtained, the initial threshold th_i is set to th_new and the next iteration begins, until the foreground spot segmented by th_i meets the first and second judgment conditions simultaneously, at which point th_i is the final threshold.
Because directly captured video images tend to contain considerable noise, the sharpness between foreground and background is insufficient and their boundary is unclear, which interferes with subsequent measurement; this embodiment therefore adopts the following design:
After the LED target is set at the preset bridge measuring-point position to obtain the first preset image set, the method further comprises:
performing bilateral filtering noise reduction on the first preset image set, so that each frame of image in the first preset image set is subjected to noise reduction sharpening, and interference between a foreground and a background in each frame of image is reduced;
The bilateral filtering noise-reduction formulas are:

$$K(i,j)=\frac{1}{W(i,j)}\sum_{(x,y)\in S}W_s(i,j)\,W_r(i,j)\,I(x,y)$$

$$W(i,j)=\sum_{(x,y)\in S}W_s(i,j)\,W_r(i,j)$$

wherein K(i,j) is the image output after bilateral filtering noise reduction, I(i,j) is the original input image, W(i,j) is the normalization parameter, and S is the 5x5 neighborhood range around the pixel point; W_s(i,j) is the spatial-proximity weight and W_r(i,j) is the gray-similarity weight, where:

$$W_s(i,j)=\exp\left(-\frac{(x-i)^2+(y-j)^2}{2\sigma_s^2}\right)$$

$$W_r(i,j)=\exp\left(-\frac{\left(f(x,y)-f(i,j)\right)^2}{2\sigma_r^2}\right)$$

wherein σ_s is the spatial-domain factor, characterized by the pixel neighborhood radius, and σ_r is the gray-domain factor, characterized by the neighborhood gray standard deviation; f(i,j) is the gray value at image point (i,j), with i and j the image coordinates; f(x,y) is the gray value of a point in the 5x5 neighborhood of image point f(i,j), with x and y the image coordinates of that neighborhood point.
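A plain-NumPy sketch of the 5x5 bilateral filter described above (in practice OpenCV's `cv2.bilateralFilter` serves the same purpose); the sigma values are illustrative:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=25.0, half=2):
    # half=2 gives the 5x5 neighborhood S used in the text; sigma_s and
    # sigma_r are the spatial-domain and gray-domain factors.
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            acc = norm = 0.0
            for x in range(max(0, i - half), min(rows, i + half + 1)):
                for y in range(max(0, j - half), min(cols, j + half + 1)):
                    ws = np.exp(-((x - i) ** 2 + (y - j) ** 2) / (2 * sigma_s ** 2))
                    wr = np.exp(-((img[x, y] - img[i, j]) ** 2) / (2 * sigma_r ** 2))
                    acc += ws * wr * img[x, y]
                    norm += ws * wr
            out[i, j] = acc / norm
    return out

# A constant image passes through unchanged: the weights normalize out.
flat = bilateral(np.full((6, 6), 100.0))
```

The gray-similarity weight suppresses averaging across the spot edge, which is why this filter denoises while keeping the foreground/background boundary sharp.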
After each frame of image is divided by the final threshold into a final foreground and a final background, the spot center must be located in the final foreground so that the displacement of the whole spot can be represented by the displacement of its center; this embodiment therefore also involves the following design:
and obtaining the light spot center of each frame of image according to the final foreground of each frame of image, and obtaining the pixel displacement of the bridge measuring point according to the displacement of the light spot center.
In this embodiment, the spot center of the foreground after final threshold segmentation is obtained by the gray centroid method, with the formula:

$$x=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}i\,I_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n}I_{ij}},\qquad y=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}j\,I_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n}I_{ij}}$$

wherein I_ij is the gray value at pixel coordinate (i, j), m is the number of pixels in the horizontal direction of the image, n is the number of pixels in the vertical direction, and x and y are the coordinates of the spot center.
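The gray centroid above, sketched with NumPy; the test image is illustrative:

```python
import numpy as np

def spot_centroid(gray):
    # Intensity-weighted mean of the pixel coordinates over the foreground.
    gray = np.asarray(gray, dtype=float)
    ii, jj = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    total = gray.sum()
    return (ii * gray).sum() / total, (jj * gray).sum() / total

spot = np.zeros((8, 8))
spot[3, 4] = 255.0           # a single bright pixel
x, y = spot_centroid(spot)
```

Frame-to-frame differences of (x, y) give the spot-center pixel displacement used below.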
After the spot center is obtained, its displacement is computed; but since all previous calculations are in image (pixel) units rather than the actual dimensions of the bridge, the result must be restored by a scale, so this embodiment also involves the following design:
and obtaining a measuring point scale factor according to the size of the LED target, and calculating the bridge measuring point pixel displacement according to the scale factor to obtain the actual displacement of the bridge measuring point, thereby obtaining the bridge dynamic deflection change.
The measuring-point scale factor is obtained from the size of the LED target as:

$$SF=\frac{d_{known}}{I_{known}}$$

wherein d_known is the actual size of the LED target, and I_known is the pixel length in the image corresponding to that actual size.
And restoring the displacement of the center of the light spot according to the measuring point scale factor, so as to obtain the actual dynamic deflection of the bridge.
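The pixel-to-physical conversion via the scale factor, as a sketch; the target size and pixel length are illustrative:

```python
def scale_factor(d_known, l_known):
    # SF = d_known / I_known: physical units per pixel.
    return d_known / l_known

def to_actual_displacement(pixel_disp, sf):
    # Restore spot-center pixel displacement to actual bridge displacement.
    return pixel_disp * sf

sf = scale_factor(50.0, 100.0)            # e.g. a 50 mm target spanning 100 px
actual = to_actual_displacement(4.0, sf)  # 4 px of spot-center motion
```

Applying this per frame turns the spot-center track into the bridge's dynamic deflection curve in physical units.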
Fig. 5 and 6 show, respectively, the erroneous threshold segmentation of the conventional OTSU method and the correct threshold segmentation of the iterative OTSU method for spot images with no interference, strong-light interference and fog interference.
Example 2:
the embodiment 2 of the invention provides a night bridge dynamic deflection measuring method based on an LED target, which is applied to the vibration measuring process in actual situations on the basis of the embodiment 1.
The flow chart of the LED-target-based structural-vibration vision measurement method is shown in fig. 7: bilateral filtering noise reduction is first applied to the vibration video; a region of interest (Region of Interest, abbreviated ROI) is then selected according to the position of the LED target; the pixel displacement of the LED target is calculated over the ROI using the proposed iterative OTSU method combined with the gray centroid method; a measuring-point scale factor is then calculated from the known size of the LED target; finally, the pixel displacement of the LED target is converted into actual displacement through the scale factor, yielding the bridge measuring-point dynamic deflection change.
The application of the method provided by the invention in bridge dynamic deflection measurement is described below by taking vibration measurement of a simply supported beam model as an example.
External excitation is applied by a centrifugal motor fixed at the free end of a cantilever beam with one end fixed, one end free, a cantilever length of 96 cm and a section height of 2 cm; an LED target is fixed 28 cm from the free end, and a vibration video of the cantilever structure is then shot at an object distance of 2 m. The schematic diagram of the cantilever structure and the LED target position is shown in fig. 8; the video frame rate is 60 fps and the image size is 1280x720.
Three working conditions are set in the experiment:
Working condition one: no interference; working condition two: strong-light interference; working condition three: fog interference. The vibration video is thresholded with the OTSU method and the iterative OTSU method respectively, displacement-extraction results are calculated and compared with laser-displacement-sensor data, and the mean error (AFSE) and normalized root-mean-square error (NRMSE) are calculated as:

$$\mathrm{AFSE}=\frac{1}{n}\sum_{i=1}^{n}\left|x_i-y_i\right|$$

$$\mathrm{NRMSE}=\frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i-y_i\right)^2}}{y_{max}-y_{min}}$$

wherein x_i is the vision-sensing data, y_i is the laser-displacement-sensor data, y_max and y_min are the maximum and minimum of the laser-displacement-sensor data, and n is the number of data points.
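The two error metrics, sketched under the assumption that AFSE is the mean absolute error between the two signals; the sample data are illustrative:

```python
import numpy as np

def afse(x, y):
    # Mean absolute error between vision data x and laser-sensor data y.
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean(np.abs(x - y)))

def nrmse(x, y):
    # RMSE normalized by the laser sensor's data range (y_max - y_min).
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x - y) ** 2)) / (y.max() - y.min()))

x = [1.0, 2.0, 3.0]   # vision measurements
y = [1.0, 2.0, 5.0]   # laser reference
```

Normalizing by the sensor's range makes NRMSE comparable across working conditions with different vibration amplitudes.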
The dynamic deflection measurement results of the LED targets under working conditions one, two and three are shown in figs. 9, 10 and 11 respectively; the error calculation results are shown in fig. 12. The results show that after the adverse conditions of strong light and fog are introduced, the OTSU-method dynamic deflection results gradually deviate from the actual values, while the iterative OTSU method proposed in this embodiment accurately extracts the displacement of the LED target, with results consistent with the laser displacement sensor.
The night bridge dynamic deflection measuring method based on the LED targets has the beneficial effects that:
In extracting the spot displacement, the influence of adverse conditions such as strong-light interference, artifacts and fog interference on the spot image is eliminated, the correct threshold is obtained by calculation, and the displacement of the LED target is successfully tracked. The method provided by the invention is computationally simple and convenient to apply, and accurately yields the night bridge dynamic deflection change curve.
Example 3:
fig. 13 is a schematic diagram of a device for measuring night bridge dynamic deflection based on an LED target according to an embodiment of the present invention. The night bridge dynamic deflection measuring device based on the LED targets of the embodiment comprises one or more processors 81 and a memory 82. In fig. 13, a processor 81 is taken as an example.
The processor 81 and the memory 82 may be connected by a bus or otherwise; connection by a bus is taken as an example in fig. 13.
The memory 82 is used as a non-volatile computer readable storage medium for storing non-volatile software programs and non-volatile computer executable programs, such as the night bridge dynamic deflection measurement method based on LED targets in the above embodiments. The processor 81 executes the night bridge dynamic deflection measurement method based on the LED targets by running non-volatile software programs and instructions stored in the memory 82.
The memory 82 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 82 may optionally include memory located remotely from processor 81, such remote memory being connectable to processor 81 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 82, which when executed by the one or more processors 81, perform the LED target-based night bridge dynamic deflection measurement method of the above-described embodiments, for example, performing the various steps shown in fig. 1-12 described above.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (9)
1. The night bridge dynamic deflection measuring method based on the LED targets is characterized by comprising the following steps of:
setting an LED target at a bridge measuring point position to obtain a first preset image set, and obtaining each frame of image of the first preset image set;
iteratively solving the gray threshold of each frame of image, so as to narrow the gray threshold range of each frame of image, and determining the final threshold of each frame of image through the reduced threshold range of each frame of image;
acquiring an i-th frame image; when i = 1, the initial threshold th_i of the i-th frame image is set to 0; when i > 1, the gray histogram H_i of the i-th frame image and the gray histogram H_{i-1} of the (i-1)-th frame image are acquired, and it is judged whether the relation coefficient between H_i and H_{i-1} is larger than a preset coefficient; when the relation coefficient is larger than the preset coefficient, the initial threshold th_i of the i-th frame image is set to the final threshold of the (i-1)-th frame image, and when the relation coefficient is smaller than or equal to the preset coefficient, the initial threshold th_i of the i-th frame image is set to 0;
in each iteration, thresholding the i-th frame image according to the initial threshold th_i to obtain an initial foreground, and judging whether the initial foreground meets a first judgment condition and a second judgment condition; when one or more of the first and second judgment conditions are not met, obtaining the exhaustive gray range OTSU_gray of the i-th frame image from its gray histogram H_i and initial threshold th_i, obtaining the updated threshold th_new of the i-th frame image from the exhaustive gray range OTSU_gray, updating the initial threshold to th_i = th_new, and inputting the updated initial threshold th_i into the next iteration; when the initial foreground meets both the first and second judgment conditions, outputting the initial threshold th_i as the final threshold of the i-th frame image;
dividing each frame of image according to its final threshold to obtain a final foreground;
and obtaining the bridge dynamic deflection change according to the final foreground.
2. The LED-target-based night bridge dynamic deflection measurement method of claim 1, wherein the relation coefficient between H_i and H_{i-1} is calculated as:

$$d(H_{i-1},H_i)=\frac{\sum_{J\in I}\left(H_{i-1}(J)-\bar{H}_{i-1}\right)\left(H_i(J)-\bar{H}_i\right)}{\sqrt{\sum_{J\in I}\left(H_{i-1}(J)-\bar{H}_{i-1}\right)^2\sum_{J\in I}\left(H_i(J)-\bar{H}_i\right)^2}}$$

wherein d(H_{i-1}, H_i) is the relation coefficient between H_i and H_{i-1}, $\bar{H}_k=\frac{1}{N}\sum_{J\in I}H_k(J)$, N is the number of image pixels, and I is the set of coordinates of all image pixels.
3. The LED-target-based night bridge dynamic deflection measurement method of claim 1, wherein judging whether the initial foreground meets a first judgment condition and a second judgment condition specifically comprises:
the first judgment condition: obtaining the spot circularity from the initial foreground, and judging whether the spot circularity is smaller than a preset value;
the second judgment condition: obtaining the spot area A_i of the i-th frame image from the initial foreground of the i-th frame image, obtaining the spot area A_{i-1} of the (i-1)-th frame image from the final foreground of the (i-1)-th frame image, and judging whether the spot area A_i of the i-th frame image is smaller than a preset multiple of the spot area A_{i-1} of the (i-1)-th frame image.
4. The LED-target-based night bridge dynamic deflection measurement method of claim 1, wherein obtaining the exhaustive gray range OTSU_gray of the i-th frame image from its gray histogram H_i and initial threshold th_i specifically comprises:
acquiring all gray values in the gray histogram H_i of the i-th frame image whose pixel counts are smaller than a preset proportion, wherein the maximum of these gray values is img_gray_max and the minimum is img_gray_min; comparing img_gray_min with the initial threshold th_i of the i-th frame image: when img_gray_min is greater than th_i, img_gray_min remains unchanged, and when img_gray_min is less than or equal to th_i, img_gray_min = th_i; the exhaustive gray range OTSU_gray of the i-th frame image is then:

$$\mathrm{OTSU}_{gray}=\left[\mathrm{img\_gray}_{min},\ \mathrm{img\_gray}_{max}\right]$$
5. The LED-target-based night bridge dynamic deflection measurement method of claim 4, wherein obtaining the updated threshold th_new of the i-th frame image from the exhaustive gray range specifically comprises:
traversing the exhaustive gray range OTSU_gray of the i-th frame image and substituting each gray value th into the objective-function formula; the gray value th at which the objective function takes its maximum is the updated threshold th_new of the i-th frame image;
the objective-function formula is:

$$g(th)=\omega_0\left(\mu_0-\mu\right)^2+\omega_1\left(\mu_1-\mu\right)^2-\sigma_1^2$$

wherein g(th) is the objective function; μ is the gray mean of the initial foreground pixels of the i-th frame image; L is the maximum gray value; p_i is the probability that a gray value in the initial foreground pixels equals i, with $\omega_0=\sum_{i=0}^{th-1}p_i$ the probability of pixels whose gray value is smaller than th and $\omega_1=\sum_{i=th}^{L}p_i$ the probability of pixels whose gray value is larger than th; μ_0 is the gray mean of pixels whose gray value is smaller than th; σ_1² is the variance of pixels whose gray value is larger than th; and μ_1 is the gray mean of pixels whose gray value is larger than th.
6. The method for measuring night bridge dynamic deflection based on the LED targets according to claim 1, wherein the step of setting the LED targets at the bridge measuring point positions to obtain a first preset image set, further comprises:
performing bilateral filtering noise reduction on the first preset image set, so that each frame of image in the first preset image set is subjected to noise reduction sharpening, and interference between a foreground and a background in each frame of image is reduced;
the bilateral filtering noise-reduction formulas are:

$$K(i,j)=\frac{1}{W(i,j)}\sum_{(x,y)\in S}W_s(i,j)\,W_r(i,j)\,I(x,y)$$

$$W(i,j)=\sum_{(x,y)\in S}W_s(i,j)\,W_r(i,j)$$

wherein K(i,j) is the image output after bilateral filtering noise reduction, I(i,j) is the original input image, W(i,j) is the normalization parameter, and S is the 5x5 neighborhood range around the pixel point; W_s(i,j) is the spatial-proximity weight and W_r(i,j) is the gray-similarity weight, where:

$$W_s(i,j)=\exp\left(-\frac{(x-i)^2+(y-j)^2}{2\sigma_s^2}\right)$$

$$W_r(i,j)=\exp\left(-\frac{\left(f(x,y)-f(i,j)\right)^2}{2\sigma_r^2}\right)$$

wherein σ_s is the spatial-domain factor, characterized by the pixel neighborhood radius, and σ_r is the gray-domain factor, characterized by the neighborhood gray standard deviation; f(i,j) is the gray value at image point (i,j), with i and j the image coordinates; f(x,y) is the gray value of a point in the 5x5 neighborhood of image point f(i,j), with x and y the image coordinates of that neighborhood point.
7. The LED-target-based night bridge dynamic deflection measurement method of claim 1, wherein obtaining the bridge dynamic deflection change according to the final foreground comprises the steps of:
obtaining a light spot center of each frame of image according to the final foreground of each frame of image, and obtaining bridge measuring point pixel displacement according to the displacement of the light spot center;
and obtaining a measuring point scale factor according to the size of the LED target, and calculating the bridge measuring point pixel displacement according to the scale factor to obtain the actual displacement of the bridge measuring point, thereby obtaining the bridge dynamic deflection change.
8. The LED-target-based night bridge dynamic deflection measurement method of claim 7, wherein the spot center of each frame image is obtained from its final foreground by the formula:

$$x=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}i\,I_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n}I_{ij}},\qquad y=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n}j\,I_{ij}}{\sum_{i=1}^{m}\sum_{j=1}^{n}I_{ij}}$$

wherein i and j are pixel coordinates, I_ij is the gray value at pixel coordinate (i, j), m is the number of pixels in the horizontal direction of the image, n is the number of pixels in the vertical direction, and x and y are the coordinates of the spot center;
the measuring-point scale factor is obtained from the size of the LED target by the formula:

$$SF=\frac{d_{known}}{I_{known}}$$

wherein SF is the measuring-point scale factor, d_known is the actual size of the LED target, and I_known is the pixel length in the image corresponding to that actual size.
9. A night bridge dynamic deflection measuring device based on an LED target, which is characterized by comprising at least one processor and a memory in communication connection with the at least one processor; wherein the memory stores instructions executable by the at least one processor for performing the LED target-based night bridge dynamic deflection measurement method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310581503.1A CN116309672B (en) | 2023-05-23 | 2023-05-23 | Night bridge dynamic deflection measuring method and device based on LED targets |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116309672A CN116309672A (en) | 2023-06-23 |
CN116309672B true CN116309672B (en) | 2023-08-01 |
Family
ID=86820785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310581503.1A Active CN116309672B (en) | 2023-05-23 | 2023-05-23 | Night bridge dynamic deflection measuring method and device based on LED targets |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116309672B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118408482B (en) * | 2024-07-02 | 2024-08-27 | 大连徕特光电精密仪器有限公司 | Large-scale place displacement monitoring method, device and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101446483A (en) * | 2008-12-30 | 2009-06-03 | 重庆大学 | Photoelectric tracking macro-pixel iterative centroid method |
WO2017113146A1 (en) * | 2015-12-28 | 2017-07-06 | 苏州中启维盛机器人科技有限公司 | Speckle imaging device |
EP3712841A1 (en) * | 2019-03-22 | 2020-09-23 | Ricoh Company, Ltd. | Image processing method, image processing apparatus, and computer-readable recording medium |
WO2022141178A1 (en) * | 2020-12-30 | 2022-07-07 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus |
CN115205317A (en) * | 2022-09-15 | 2022-10-18 | 山东高速集团有限公司创新研究院 | Bridge monitoring photoelectric target image light spot center point extraction method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6693646B1 (en) * | 1999-04-28 | 2004-02-17 | Microsoft Corporation | Method and system for iterative morphing |
JP2016084579A (en) * | 2014-10-23 | 2016-05-19 | 国立研究開発法人産業技術総合研究所 | Monitoring method and monitoring device for deflection amount distribution of structure |
CN108286948A (en) * | 2017-01-09 | 2018-07-17 | 南京理工大学 | A kind of deflection of bridge span detection method based on image procossing |
CN106952269B (en) * | 2017-02-24 | 2019-09-20 | 北京航空航天大学 | The reversible video foreground object sequence detection dividing method of neighbour and system |
KR101917619B1 (en) * | 2018-02-23 | 2018-11-13 | (주)카이센 | System for measuring bridge deflection |
CN108775872A (en) * | 2018-06-26 | 2018-11-09 | 南京理工大学 | Bridge deflection detection method based on auto-zoom scanned images |
CN115797411B (en) * | 2023-01-17 | 2023-05-26 | 长江勘测规划设计研究有限责任公司 | Method for online recognition of hydropower station cable bridge deformation by utilizing machine vision |
CN116124393A (en) * | 2023-02-16 | 2023-05-16 | 武汉地震工程研究院有限公司 | Bridge multipoint dynamic deflection measuring method and device during off-axis measurement |
Also Published As
Publication number | Publication date |
---|---|
CN116309672A (en) | 2023-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109272489B (en) | Infrared weak and small target detection method based on background suppression and multi-scale local entropy | |
KR101609303B1 (en) | Method to calibrate camera and apparatus therefor | |
CN109086724B (en) | Accelerated human face detection method and storage medium | |
CN116309672B (en) | Night bridge dynamic deflection measuring method and device based on LED targets | |
EP2079054A1 (en) | Detection of blobs in images | |
CN111354047B (en) | Computer vision-based camera module positioning method and system | |
US20020158636A1 (en) | Model-based localization and measurement of miniature surface mount components | |
CN113313107B (en) | Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge | |
CN115171218A (en) | Material sample feeding abnormal behavior recognition system based on image recognition technology | |
CN106815851B (en) | Automatic reading method for grid circular oil level indicators based on visual measurement | |
CN117789198B (en) | Method for realizing point cloud degradation detection based on 4D millimeter wave imaging radar | |
CN116563262A (en) | Building crack detection algorithm based on multiple modes | |
CN118261821A (en) | Infrared image acquisition and early warning system for animal epidemic disease monitoring | |
Zhixin et al. | Adaptive centre extraction method for structured light stripes | |
CN115690190B (en) | Moving target detection and positioning method based on optical flow image and pinhole imaging | |
CN113705433A (en) | Power line detection method based on visible light aerial image | |
CN116818778B (en) | Rapid and intelligent detection method and system for automobile parts | |
KR101129220B1 (en) | Apparatus and method for noise reduction of range images | |
CN109784229B (en) | Composite identification method for ground building data fusion | |
CN112084957A (en) | Mobile target retention detection method and system | |
CN113834447B (en) | High-dynamic laser light bar self-adaptive imaging processing method in outdoor complex environment | |
CN111473944B (en) | PIV data correction method and device for observing complex wall surface in flow field | |
CN117115174B (en) | Automatic detection method and system for appearance of pliers | |
CN113487502B (en) | Shadow removing method for hollow image | |
CN118279300B (en) | Engine part identification method based on three-dimensional point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||