CN109801216B - Rapid splicing method for tunnel detection images - Google Patents

Rapid splicing method for tunnel detection images

Info

Publication number
CN109801216B
CN109801216B (application CN201811561093.XA)
Authority
CN
China
Prior art keywords
tunnel
images
image
splicing
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811561093.XA
Other languages
Chinese (zh)
Other versions
CN109801216A (en)
Inventor
曹民
张德津
周瑾
卢毅
池桂梅
王新林
徐泽鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Optical Valley Excellence Technology Co ltd
Original Assignee
Wuhan Optical Valley Excellence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Optical Valley Excellence Technology Co ltd filed Critical Wuhan Optical Valley Excellence Technology Co ltd
Priority to CN201811561093.XA priority Critical patent/CN109801216B/en
Publication of CN109801216A publication Critical patent/CN109801216A/en
Application granted granted Critical
Publication of CN109801216B publication Critical patent/CN109801216B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of tunnel detection, and in particular to the rapid stitching of tunnel images acquired by a tunnel detection vehicle, for use in subsequent tunnel defect discovery. The disclosed rapid splicing method for tunnel detection images mainly comprises calibrating the tunnel model images, unifying the resolution of the tunnel images, enhancing image contrast, dodging (light-homogenizing) the images, stitching the images, and cropping the section images. The method stitches tunnel section images quickly and with high accuracy, the stitched section images have high contrast, and the section images can be composited into a large-field-of-view tunnel image for automatic defect extraction. The invention greatly saves labor and substantially improves the efficiency of tunnel defect detection.

Description

Rapid splicing method for tunnel detection images
Technical Field
The invention relates to the technical field of tunnel detection, and in particular to the rapid stitching of tunnel images acquired by a tunnel detection vehicle, for use in subsequent tunnel defect discovery.
Background
At present, infrastructure maintenance requires defect detection of, for example, road pavements, road and railway tunnels, and bridges. The traditional means of detection is mainly manual, for example in highway tunnels. Manual inspection not only disrupts traffic but also requires considerable manpower and time, and the severity of a defect is often judged only from the inspector's experience.
Tunnel detection vehicles that replace manual inspection are equipped with cameras. Tunnel lining surface images are acquired group by group as the vehicle moves, each group is stitched into a tunnel section image, and the sections are assembled into a large-field-of-view tunnel image. Tunnel defects can then be recognized and detected automatically with image recognition and detection software. This greatly reduces manual intervention and achieves high detection efficiency; compared with traditional manual inspection it saves substantial workload and personnel, and markedly improves the efficiency of tunnel defect detection.
The large-field-of-view tunnel image is composed of tunnel sections. To increase the speed at which the detection vehicle acquires section images, several cameras are usually mounted on the vehicle and operated simultaneously, each camera acquiring a local image of the same section from a different angle. A tunnel section is therefore formed by stitching the multiple images acquired by the multiple cameras, and the large-field-of-view tunnel image by continuously stitching the sections. Since the stitching quality affects subsequent processing such as defect detection, accurate stitching of the tunnel images is particularly important.
Disclosure of Invention
The technical problem addressed by the invention is to provide a rapid splicing method for tunnel detection images that accurately and quickly stitches the tunnel lining images acquired by each CCD area-array camera on a tunnel detection vehicle.
To solve this technical problem, the invention provides a rapid splicing method for tunnel detection images, comprising the following steps:
(1) Calibrating the tunnel model, which comprises: calculating the offset parameters between tunnel model images; determining the correspondence between a plurality of CCD cameras and the laser points; and obtaining the distance from the projection point of the laser scanner on the camera-support plane to each camera.
(2) The tunnel detection vehicle collects data from an actual tunnel, comprising tunnel lining surface images and the laser data corresponding to each group of images. The object distance of each real tunnel image is computed from the laser data, the resolution of each image is then computed from the object distance and the focal length, and the resolutions of the images are unified.
(3) Stitching the tunnel images: from the overlap distances between the calibration-model images, the offset in pixels between the tunnel images at the actual object distance is computed, and a group of tunnel images is stitched into a tunnel section.
(4) Cropping the stitched tunnel section image: the section cropping amount is first computed from the camera trigger interval and the image pixel resolution, and the overlapping part between tunnel section images is then cropped away according to this amount, ensuring that the large-field-of-view tunnel image assembled later contains no repeated regions and yielding a group of stitched tunnel section images.
Further, the rapid splicing method for tunnel detection images comprises: performing contrast enhancement on each image after the resolutions are unified, improving the contrast and making the images visually clearer.
Preferably, the rapid splicing method for tunnel detection images further comprises: after the contrast enhancement, performing dodging on each tunnel image, the dodging being based on a gray-level transformation of the overlap region, so that the brightness of adjacent images becomes consistent.
Furthermore, during the tunnel image stitching, feature matching in the image overlap region is introduced to constrain the offset pixels between images, guaranteeing the stitching accuracy.
The rapid splicing method for tunnel detection images further comprises the following step:
traversing the valid tunnel data acquired by the tunnel detection vehicle, and sequentially applying resolution unification, contrast enhancement, dodging, stitching, and cropping to the data until all valid tunnel images have been stitched.
The advantages of the method are that tunnel section images can be stitched quickly and accurately, the stitched section images have high contrast, and the section images can be composited into a large-field-of-view tunnel image for automatic defect extraction. Manpower is greatly saved and tunnel defect detection efficiency is substantially improved.
Drawings
The technical solution of the present invention will be further specifically described with reference to the accompanying drawings and the detailed description.
FIG. 1 is a flowchart illustrating the calibration of offset parameters between model images.
Fig. 2 is a geometric relationship diagram of a laser scanner and a camera.
FIG. 3 is a diagram of the mapping between image pixel resolution and the camera's CCD pixels.
Fig. 4 is a geometric diagram of an actual tunnel object distance and a tunnel model.
Fig. 5 is a cross-sectional image with overlapping pixels.
FIG. 6 is a flowchart of a tunnel image fast stitching algorithm.
Detailed Description
This embodiment acquires tunnel images with the tunnel-detection multi-sensor integrated platform invented by the applicant (see the applicant's earlier granted patent "A tunnel detection multi-sensor integrated platform", patent No. 201510870169.7). Sixteen CCD cameras and one laser scanner are mounted on the platform.
With reference to figs. 1 to 6, the fast tunnel image stitching algorithm first calibrates the tunnel model. It then collects actual tunnel lining surface data and unifies the image resolution according to the laser data; applies contrast enhancement and dodging so that details of the tunnel images are clear and brightness is consistent between images; converts the image overlap parameters calibrated under the tunnel model to the object distance of the actual lining surface to determine the number of offset pixels between the actual tunnel images; stitches the tunnel images into sections; and crops the sections so that no repeated region appears between them. Each part of the embodiment is described in detail below.
Before the tunnel lining surface images are stitched, the tunnel model is calibrated to obtain the offset parameters between the images acquired by the 16 CCD cameras under the model; the offset parameters between the actual tunnel images are then computed from these during actual stitching. Calibration consists of three parts: calculating the offset parameters between tunnel model images, determining the correspondence between cameras and laser points, and obtaining the distance from the projection point of the laser scanner on the camera-support plane to each camera.
To calculate the offset parameters between the tunnel model images, the model images are first brought to a unified resolution according to the model analysis documents of the tunnel detection vehicle. The 16 tunnel model images are then imported into Photoshop, same-name points between images are selected, and the offset pixels between images are obtained from these points; the overlapped pixels between images are conveniently read off in Photoshop. The overlap is converted to an actual length by the formula

L_i = overlap_i × Basic_resolution

where L_i is the corresponding actual length between images, overlap_i is the number of overlapped pixels between images, and Basic_resolution is the base resolution. The process is shown in fig. 1.
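The pixel-to-length conversion above is a one-line computation; as an illustrative sketch (the function name, units, and numeric values are assumptions, only the formula L_i = overlap_i × Basic_resolution is from the text):

```python
def overlap_length_mm(overlap_px, basic_resolution_mm_per_px):
    """Convert the overlapped pixels between two model images into an
    actual length: L_i = overlap_i * Basic_resolution."""
    return overlap_px * basic_resolution_mm_per_px

# e.g. 120 overlapped pixels at a 2.5 mm/pixel base resolution
length = overlap_length_mm(120, 2.5)
```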
Next, the correspondence between the laser and the images is acquired. Each camera in turn is blocked with a baffle, the collected laser data are converted to distances, and the laser distance data are displayed in MATLAB. Because the baffle is a distinct plane, the points that appear as a plane in MATLAB are the laser point numbers corresponding to that camera. After the laser point numbers within the range of each of the 16 CCD cameras have been determined, the point numbers are written to txt files by camera serial number.
The last calibration step is measuring the distance between the projection point of the laser scanner on the camera-support plane and each camera. Fig. 2 shows the geometric relationship between the laser scanner and a camera: the projection point of the laser scanner on the camera-support plane is O, one of the cameras is A, and the laser scanner is B. To obtain the object distance SA′, A′B, i.e. AO, must be subtracted from the laser-to-tunnel-surface distance BS; AO is obtained from AB and OB, which are measured with a total station. This completes the three calibration tasks: calculating the offset parameters between tunnel model images, determining the camera-to-laser-point correspondence, and determining the distance from the laser scanner's projection point on the camera-support plane to each camera.
After the tunnel model has been calibrated, image data of the actual tunnel lining surface are acquired, and the distance from the laser scanner to the tunnel surface is obtained from the laser data collected with them. The laser scanner on the tunnel detection vehicle outputs distance data for 541 points, numbered in sequence; since the correspondence between images and laser points was determined during calibration, the laser point data belonging to each image are easily extracted from the 541 points. For one tunnel image, the mean of the laser point distances within the image's range is taken as the distance between the tunnel surface at that image and the laser scanner, i.e. BS in fig. 2. Abnormal data within the image range can be identified here: in practice a faulty laser point has the value 0, so points with value 0 are excluded when the mean is computed. The calibration process already determined the distance between the laser scanner's projection point on the camera-support plane and each camera, i.e. A′B in fig. 2, so by the formula

d_i = BS_i − A′B_i

the object distance d_i of each tunnel image is obtained. The pixel resolution of each image is then calculated from the lens focal length and the CCD pixel size: in fig. 3, f is the focal length, d the object distance, and ccd the camera's CCD pixel size, so the pixel resolution of the image is calculated as
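The laser averaging and object-distance computation can be sketched as follows (a minimal illustration; the function name, units, and the example values are assumptions, while the zero-exclusion rule and d_i = BS_i − A′B_i follow the text):

```python
def object_distance(laser_points_mm, a_prime_b_mm):
    """Object distance of one tunnel image: the mean of its laser point
    distances (points with value 0 are faulty and excluded) gives BS,
    from which the calibrated mount offset A'B is subtracted."""
    valid = [p for p in laser_points_mm if p != 0]  # drop abnormal points
    bs = sum(valid) / len(valid)                    # scanner -> tunnel surface
    return bs - a_prime_b_mm                        # d_i = BS_i - A'B_i

# e.g. four laser points for one image, one faulty, A'B = 300 mm
d = object_distance([5000, 0, 5020, 4980], 300)
```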
resolution_i = ccd × d_i / f

which gives the pixel resolution resolution_i of each of the 16 tunnel images. A reference resolution is then selected and each tunnel image is resampled until its pixel resolution equals the reference resolution, completing the unification of image resolutions.
Contrast enhancement is then applied to the resolution-unified tunnel images using piecewise affine equalization. Next, each image undergoes dodging so that the brightness of adjacent images becomes consistent. Dodging is performed pairwise between two images: taking the left image as the reference image, the right image is processed so that its brightness approaches that of the left. Gray-level affine transformation based on partitioned histogram matching applies piecewise affine equalization to the right image, bringing its brightness into agreement with the reference image while also enhancing its detail.
To keep the gray levels of the same ground features maximally consistent after dodging, the cumulative gray histograms of the overlap regions of the left and right images are first computed. Because the overlap regions contain identical ground features, the gray levels at equal probabilities on the two cumulative histograms should belong to the same features. At two chosen probability positions, gray levels L1 and L2 are read from the left image and R1 and R2 from the right image; L1 and R1 are same-name gray levels, as are L2 and R2. The right image is then gray-transformed based on these same-name gray levels so that its gray range matches the left image's: R1 is mapped to L1 and R2 to L2.
The right image, after the gray-level transformation based on the same-name gray levels, is divided into the same gray intervals as the left image, so that the endpoints of every interval are gray levels of same-name ground features; histogram matching within each interval is then a matching of features whose gray levels lie between those same-name endpoints. That is, after the stretch based on the same-name gray levels, a right-image gray interval [b_k, b_k+1] has been mapped to [a_k, a_k+1], the same interval as in the left image. A normalized cumulative histogram is then computed over the interval [a_k, a_k+1] of both the left image and the stretched right image, with the gray value on the horizontal axis and, on the vertical axis, the fraction of the interval's total pixels at each gray level; the gray stretch of the right image thereby ensures that the horizontal gray ranges correspond. If the normalized cumulative histogram distribution of the left-image gray interval [a_k, a_k+1] is:
s=T(r)
and the normalized cumulative histogram distribution of the stretched right-image gray interval [a_k, a_k+1] is:
v=G(z)
where r and z are the original gray levels, then the right-image gray level z can be expressed as:
z = G⁻¹(v)
passing interval [ a k ,a k+1 ]We can find that for an s and v equal, then:
z = G⁻¹(T(r))
Thus every right-image gray level z has, within the same left-image gray interval [a_k, a_k+1], a corresponding gray level r. By finding the transformation sub-interval [x_k, x_k+1] containing r in the left image's piecewise affine equalization, the transformation slope m_k used when affinely equalizing the stretched right-image pixels of gray level z is determined and the affine transformation is applied, completing the affine equalization of the right-image interval [a_k, a_k+1]. The same procedure is applied to the gray levels of the remaining intervals in turn until the whole image has been equalized. The piecewise affine equalization of the right image therefore looks up the corresponding left-image gray level through per-interval histogram matching and then equalizes the right-image gray level according to the left image's affine transformation relation; this avoids the large gray-level differences of the same ground feature in the two images that would arise from directly applying piecewise affine equalization to the stretched right image. This completes the dodging between one pair of adjacent images; the dodging between every other pair of images proceeds in the same way.
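As a simplified sketch of the dodging idea above (global histogram matching over the whole overlap region rather than the patent's per-interval affine equalization, with flat lists of gray values standing in for images; all names are illustrative), each right-image gray level z is mapped to the left-image gray level r with T(r) ≈ G(z):

```python
def match_histogram(right, left, levels=256):
    """Map each right-image gray level to the left-image gray level with
    the same normalized cumulative-histogram value: z -> r = T^-1(G(z)).
    `right` and `left` are flat lists of overlap-region gray values."""
    def cdf(vals):
        hist = [0] * levels
        for v in vals:
            hist[v] += 1
        total, run, out = len(vals), 0, []
        for h in hist:
            run += h
            out.append(run / total)   # normalized cumulative histogram
        return out

    t, g = cdf(left), cdf(right)
    # look-up table: for each possible right gray level z, the smallest
    # left gray level r whose cumulative probability reaches G(z)
    lut = [next(i for i, p in enumerate(t) if p >= g[z])
           for z in range(levels)]
    return [lut[v] for v in right]

# a dark right strip is brightened to match the left strip's distribution
out = match_histogram([0, 0, 1, 1], [2, 2, 3, 3], levels=4)
```

A full implementation would apply this per gray interval, with the interval endpoints fixed at the same-name gray levels as described in the text.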
The next step is stitching the enhanced and dodged images. During stitching, the horizontal and vertical offset pixels between each pair of actual tunnel images must be computed. Using the geometric relationship between the actual object distance, the object distance of the tunnel model image, and the camera field angles, the calibrated horizontal and vertical offset distances of the tunnel model are converted to the object distance of the actual tunnel image, and the offset pixels are then obtained from the image's pixel resolution. Fig. 4 shows the geometry of the actual tunnel object distance and the tunnel model; because adjacent cameras on the tunnel detection vehicle are close together, the planes they project onto the tunnel surface are treated as approximately parallel. In fig. 4, θ_A and θ_B are half the field angles of cameras A and B, f_A and f_B their focal lengths, and D_1 and D_2 are, respectively, the object distance of the tunnel model image at camera B's calibration and the object distance of the actual tunnel image obtained by camera B. From the geometry, the overlap length of the actual tunnel images is approximately:
overlap = (D_2 − D_1)(tan θ_A + tan θ_B) + L
according to the relation, the overlapping length between every two tunnel images can be obtained, then the overlapping length is converted into the number of pixels according to the resolution of the images, and the number of pixels overlapped between every two images is obtained. According to the obtained number of overlapped pixels, translation parameters in Homography between images during splicing can be calculated, then feature matching of an overlapped area between the images is introduced, the translation parameters in Homography are corrected according to an accurate matching result, the splicing precision between the images is further improved, and a Laplacian Pyramid method is adopted to eliminate seam processing during pairwise splicing between the images, so that smooth transition of gray level at the junction of the images is realized. And splicing the tunnel images to obtain a tunnel section image. In practical application, splicing can be performed in a mode without introducing matching so as to meet the application requirement under the lower precision requirement.
The final step is cropping the stitched tunnel section image. The trigger distance of the tunnel detection vehicle's CCD cameras is 460 mm: every 460 mm of travel, one exposure yields a group of 16 tunnel images. Converting 460 mm into a number of image pixels using the pixel resolution gives the relationship to the section height shown in fig. 5, where Rows is the height of the section image, r1 the number of pixels corresponding to the 460 mm trigger distance, and r2 the overlapping pixels between two sections:
r2 = Rows − r1
As with image stitching, matching between sections can be introduced to constrain the sections and guarantee the accuracy of the cropping amount r2; in practice, whether to use inter-section matching to obtain the cropping amount can be chosen according to requirements.
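The cropping amount r2 = Rows − r1 is a direct computation (a minimal sketch; names, example values, and the rounding choice are assumptions, while the 460 mm trigger distance and the formula are from the text):

```python
def crop_overlap(section_rows, trigger_mm, resolution_mm_per_px):
    """Rows to crop from a stitched section: r2 = Rows - r1, where r1 is
    the trigger distance (e.g. 460 mm) expressed in pixels."""
    r1 = round(trigger_mm / resolution_mm_per_px)
    return section_rows - r1

# a 200-row section at 3 mm/px with the 460 mm trigger distance
r2 = crop_overlap(200, 460, 3.0)
```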
Finally, all valid tunnel data acquired by the tunnel detection vehicle are traversed, and resolution unification, contrast enhancement, dodging, stitching, and cropping are applied in sequence until all valid tunnel images have been stitched.
Finally, it should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (5)

1. A rapid splicing method for tunnel detection images, characterized by comprising the following steps:
(1) calibrating the tunnel model, which comprises: calculating the offset parameters between tunnel model images; determining the correspondence between a plurality of CCD cameras and the laser points; and obtaining the distance from the projection point of the laser scanner on the camera-support plane to each camera;
(2) the tunnel detection vehicle collecting data from an actual tunnel, the data comprising tunnel lining surface images and the laser data corresponding to each group of images; calculating the object distance of each real tunnel image from the laser data, then calculating the resolution of each image from the object distance and the focal length, and unifying the resolutions of the images;
(3) stitching the tunnel images: calculating, from the overlap distances between the calibration-model images, the offset in pixels between the tunnel images at the actual object distance, and stitching a group of tunnel images into a tunnel section;
(4) cropping the stitched tunnel section image: first calculating the section cropping amount from the camera trigger interval and the image pixel resolution, and then cropping away the overlapping part between tunnel section images according to the cropping amount to obtain a group of stitched tunnel section images.
2. The rapid splicing method for tunnel detection images according to claim 1, characterized by further comprising: performing contrast enhancement on each image after the resolutions are unified.
3. The rapid splicing method for tunnel detection images according to claim 2, characterized by further comprising: after the contrast enhancement, performing dodging on each tunnel image, the dodging being based on a gray-level transformation of the overlap region so that the brightness of adjacent images becomes consistent.
4. The method according to claim 3, characterized in that, during the tunnel image stitching, feature matching in the image overlap region is introduced to constrain the offset pixels between images.
5. The rapid splicing method for tunnel detection images according to claim 4, characterized by further comprising: traversing the valid tunnel data acquired by the tunnel detection vehicle, and sequentially applying resolution unification, contrast enhancement, dodging, stitching, and cropping until all valid tunnel images have been stitched.
CN201811561093.XA 2018-12-20 2018-12-20 Rapid splicing method for tunnel detection images Active CN109801216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811561093.XA CN109801216B (en) 2018-12-20 2018-12-20 Rapid splicing method for tunnel detection images


Publications (2)

Publication Number Publication Date
CN109801216A CN109801216A (en) 2019-05-24
CN109801216B true CN109801216B (en) 2023-01-03

Family

ID=66557194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811561093.XA Active CN109801216B (en) 2018-12-20 2018-12-20 Rapid splicing method for tunnel detection images

Country Status (1)

Country Link
CN (1) CN109801216B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378856B (en) * 2019-07-22 2023-03-14 福建农林大学 Tunnel surface two-dimensional laser image enhancement processing method
CN110764106A (en) * 2019-10-09 2020-02-07 中交一公局集团有限公司 Construction method for assisting shield interval slope and line adjustment measurement by adopting laser radar
CN113310987B (en) * 2020-02-26 2023-04-11 保定市天河电子技术有限公司 Tunnel lining surface detection system and method
CN111583108B (en) * 2020-04-20 2020-12-18 北京新桥技术发展有限公司 Tunnel lining surface linear array image TOF fusion splicing method and device and storage medium
CN111707668B (en) * 2020-05-28 2023-11-17 武汉光谷卓越科技股份有限公司 Tunnel detection and image processing method based on sequence images
CN112435170B (en) * 2020-12-04 2023-11-03 安徽圭目机器人有限公司 Tunnel vault image splicing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2971982B2 (en) * 1991-05-30 1999-11-08 日立建機株式会社 Segment attitude detection method and automatic assembly device
JP2015049765A (en) * 2013-09-03 2015-03-16 公益財団法人鉄道総合技術研究所 Method of correcting distortion of tunnel lining surface image
CN105550995B (en) * 2016-01-27 2019-01-11 武汉武大卓越科技有限责任公司 tunnel image splicing method and system

Also Published As

Publication number Publication date
CN109801216A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN109801216B (en) Rapid splicing method for tunnel detection images
US11551341B2 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
CN108764257B (en) Multi-view pointer instrument identification method
KR100817656B1 (en) Image processing method, 3-dimension position measuring method, and image processing device
CN111855664B (en) Adjustable three-dimensional tunnel defect detection system
EP3327198B1 (en) Crack analyzer, crack analysis method, and crack analysis program
CN111260615B (en) Laser and machine vision fusion-based method for detecting apparent diseases of unmanned aerial vehicle bridge
CN110033407B (en) Shield tunnel surface image calibration method, splicing method and splicing system
CN112740267A (en) Learning data collection device, learning data collection method, and program
US20110103655A1 (en) Fundus information processing apparatus and fundus information processing method
CN103902953B (en) A kind of screen detecting system and method
JP6333307B2 (en) Degraded site detection device, degraded site detection method and program
CN113989353A (en) Pig backfat thickness measuring method and system
JP2020038132A (en) Crack on concrete surface specification method, crack specification device, and crack specification system, and program
CN113435420A (en) Pavement defect size detection method and device and storage medium
WO2020158726A1 (en) Image processing device, image processing method, and program
CN117036326A (en) Defect detection method based on multi-mode fusion
JP2001174227A (en) Method and device for measuring diameter distribution of fiber
JP2011033428A (en) Pantograph height measuring device
Ziqiang et al. Research of the algorithm calculating the length of bridge crack based on stereo vision
US20220076428A1 (en) Product positioning method
CN109084721B (en) Method and apparatus for determining a topographical parameter of a target structure in a semiconductor device
JPH11190611A (en) Three-dimensional measuring method and three-dimensional measuring processor using this method
CN107860933B (en) Digital image-based automatic detection method and device for fiber content in textile
CN111696047B (en) Imaging quality determining method and system of medical imaging equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 430223 Hubei science and Technology Park, East Lake Development Zone, Wuhan, China

Applicant after: Wuhan Optical Valley excellence Technology Co.,Ltd.

Address before: 430223 No.6, 4th Road, Wuda Science Park, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Applicant before: WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant