CN116934833A - Binocular vision-based underwater structure disease detection method, equipment and medium - Google Patents

Binocular vision-based underwater structure disease detection method, equipment and medium

Info

Publication number
CN116934833A
CN116934833A (application number CN202310886273.XA)
Authority
CN
China
Prior art keywords
underwater
image
binocular camera
binocular
camera
Prior art date
Legal status
Pending
Application number
CN202310886273.XA
Other languages
Chinese (zh)
Inventor
饶瑞
吴源
刘爱荣
毛吉化
陈炳聪
叶茂
黄永辉
陈立弘
Current Assignee
Guangdong Wengu Inspection And Identification Co ltd
Guangzhou Guangjian Construction Engineering Testing Center Co ltd
Guangzhou University
Original Assignee
Guangzhou Guangjian Construction Engineering Testing Center Co ltd
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou Guangjian Construction Engineering Testing Center Co ltd, Guangzhou University filed Critical Guangzhou Guangjian Construction Engineering Testing Center Co ltd
Priority to CN202310886273.XA
Publication of CN116934833A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The application provides a binocular vision-based underwater structure disease detection method, device and medium, comprising the following steps: calibrating an underwater binocular camera with a calibration method, establishing the mapping relation between the coordinate system of the underwater binocular camera and the real-world coordinate system, and acquiring the parameters of the underwater binocular camera; enhancing the acquired underwater images, then performing feature point detection, performing stereo matching through binocular feature extraction to obtain a disparity map containing depth information, and obtaining the three-dimensional coordinates of the feature points; calculating the included angle α between the plane of the underwater binocular camera and the plane of the detected structure; and correcting the three-dimensional coordinates of the feature points according to the included angle α to obtain the three-dimensional size information of the disease of the underwater detected structure.

Description

Binocular vision-based underwater structure disease detection method, equipment and medium
Technical Field
The present application relates to the technical field of visual inspection, and in particular to a binocular vision-based underwater structure disease detection method, device and medium.
Background
Underwater binocular vision detection is a technology based on computer vision and optical principles; it combines underwater optical sensors with computer vision and can be used in fields such as underwater environment monitoring, underwater search and rescue, and marine organism monitoring. Owing to the complexity and diversity of the underwater environment, underwater images can be unclear because of factors such as water turbidity and low visibility. Moreover, when a camera is used to capture structural diseases, the camera and the measured object cannot be kept perfectly parallel, which causes trapezoidal (keystone) distortion and leads to large errors.
Disclosure of Invention
The application provides a binocular vision-based underwater structure disease detection method, an electronic device, and a storage medium, and aims to solve the above problems.
The application provides a binocular vision-based underwater structure disease detection method, which comprises the following steps:
s1, calibrating an underwater binocular camera by adopting a calibration method, establishing a mapping relation between a coordinate system of the underwater binocular camera and a real world coordinate system, and acquiring parameters of the underwater binocular camera;
s2, image acquisition is carried out on an underwater environment through an underwater binocular camera, and enhancement processing is carried out on acquired underwater pictures;
s3, performing feature point detection on the enhanced underwater picture, performing stereo matching through binocular feature extraction, obtaining a parallax image containing depth information, and calculating three-dimensional coordinates of feature points according to the parallax image;
s4, calculating an included angle alpha between a plane where the underwater binocular camera is positioned and a plane where the detected structure is positioned by respectively selecting three-dimensional coordinates of two characteristic points of the upper edge and the lower edge of the underwater structure;
and S5, correcting the three-dimensional coordinates of the characteristic points according to the included angle alpha to obtain the three-dimensional size information of the underwater detected structural disease.
The application provides an electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the steps of the binocular vision-based underwater structure disease detection method described above.
By adopting the embodiments of the application, three-dimensional coordinates are obtained with the binocular camera so that the trapezoidal error is corrected, and the image quality is improved through underwater image enhancement. Owing to the angular difference in its imaging principle, the binocular camera can calculate the three-dimensional coordinates of an object, thereby correcting the influence of trapezoidal distortion and improving measurement accuracy. Meanwhile, image enhancement makes the image clearer and further improves measurement precision.
The present application provides a storage medium for storing computer-executable instructions which, when executed, implement the steps of the binocular vision-based underwater structure disease detection method described above.
Drawings
For a clearer description of one or more embodiments of the present description or of the solutions of the prior art, the drawings that are necessary for the description of the embodiments or of the prior art will be briefly described, it being apparent that the drawings in the description that follow are only some of the embodiments described in the description, from which, for a person skilled in the art, other drawings can be obtained without inventive faculty.
FIG. 1 is a flow chart of a binocular vision-based underwater structure disease detection method according to an embodiment of the present application;
fig. 2 is a schematic diagram of calculating an included angle α between a plane of a camera and a plane of a structure under test according to an embodiment of the present application.
Detailed Description
In order to enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one or more embodiments of the present disclosure without inventive faculty, are intended to be within the scope of the present disclosure.
Method embodiment
According to the embodiment of the application, a binocular vision-based underwater structure disease detection method is provided, fig. 1 is a schematic diagram of the binocular vision-based underwater structure disease detection method according to the embodiment of the application, and according to the embodiment shown in fig. 1, the binocular vision-based underwater structure disease detection method specifically comprises the following steps:
s1, calibrating an underwater binocular camera by adopting a calibration method, establishing a mapping relation between a coordinate system of the underwater binocular camera and a real world coordinate system, and acquiring parameters of the underwater binocular camera, wherein the S1 specifically comprises the following steps:
and S11, calibrating the binocular camera by using a Zhang Zhengyou calibration method to obtain internal and external parameters of the camera, thereby obtaining the conversion relation among the world coordinate system, the camera coordinate system and the image coordinate system.
The mapping relation from a three-dimensional space point to a two-dimensional image point is as follows:
s·[u, v, 1]^T = N·W·[X, Y, Z, 1]^T
wherein s is a scale factor; N is the camera intrinsic matrix; W = [R | t] is the extrinsic matrix; f_x, f_y are the focal lengths in the x and y directions; R is a 3×3 rotation matrix; and t is a 3×1 translation vector.
S12, the Zhang Zhengyou calibration method selects a checkerboard for calibration, and the calibration plate lies on the plane Z = 0, so that the following formula can be obtained:
s·[u, v, 1]^T = N·[r_1, r_2, r_3, t]·[X, Y, 0, 1]^T = N·[r_1, r_2, t]·[X, Y, 1]^T
Letting:
H = N·[r_1, r_2, t]
where H is the mapping matrix from the three-dimensional space coordinates to the two-dimensional pixel coordinates. The rotation matrix R satisfies r_1^T·r_2 = 0 and ||r_1|| = ||r_2|| = 1.
Each image therefore imposes the following two constraints on the camera intrinsic parameters:
h_1^T·N^(−T)·N^(−1)·h_2 = 0
h_1^T·N^(−T)·N^(−1)·h_1 = h_2^T·N^(−T)·N^(−1)·h_2
s13, shooting more than 3 pictures to obtain an image matrix H of the camera, and calculating specific camera internal parameters and external parameters.
S2, image acquisition is carried out on an underwater environment through an underwater binocular camera, enhancement processing is carried out on acquired underwater pictures, and the S2 specifically comprises the following steps:
s21, removing turbidity from the image according to a foggy-day imaging model through a dark channel prior algorithm, and obtaining a color correction image;
s22, obtaining a contrast enhancement image through a self-adaptive gamma correction algorithm based on weighted distribution;
s23, adjusting the brightness distribution of the image through a histogram equalization algorithm to obtain a brightness equalization image;
s24, carrying out multi-scale fusion on the color correction image, the contrast enhancement image and the brightness balance image to obtain a fused image.
Specifically, S21 includes:
the foggy day imaging model is as follows:
I(x) = J(x)·t(x) + [1 − t(x)]·A_c
t(x) = e^(−β·d(x))
wherein I represents the underwater captured image, J represents the clear ideal image, A_c represents the background light, t(x) is the transmittance, β is the attenuation coefficient, and d(x) is the depth of field.
According to the dark channel principle, among the three RGB channels at least one channel has an intensity value close to 0, i.e.:
J_dark(x) = min_{y∈Ω(x)}( min_{C∈{R,G,B}} J^C(y) ) → 0
wherein J_dark represents the dark channel image, J^C represents the three color channels, and Ω(x) represents a local region centered on pixel x;
According to the dark channel prior, the transmittance is calculated as:
t(x) = 1 − ω·min_{y∈Ω(x)}( min_C I^C(y)/A_c )
where ω is an adjustment parameter.
Solve the gray-level histogram h of the candidate background-light points and accumulate it from right to left to obtain h_sum; when h_sum > 0.05%·N, where N is the total number of pixels, the corresponding histogram abscissa z is obtained, and the water background light value is then taken within the interval [z, 255].
Color correction is performed on the degraded image, and the dehazed image is calculated from the transmittance and the water background light value:
J(x) = (I(x) − A_c)/max(t(x), t_0) + A_c
where t_0 is a threshold set to avoid t(x) becoming too small; it effectively prevents over-bright pixels or regions in the restored image, and is taken as 0.1.
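The dehazing chain of S21 can be sketched as below. This is a simplified illustration: the patch size, ω, and t_0 are the usual dark-channel defaults, and the histogram search for the background light is approximated by averaging the pixels with the brightest dark-channel values (an assumption, not the patent's exact procedure):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over RGB, then a local min filter of size patch x patch."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Dark-channel-prior restoration: J = (I - A) / max(t, t0) + A."""
    dc = dark_channel(img)
    # Background light A: mean colour of the 0.05% brightest dark-channel pixels
    n = max(1, int(dc.size * 0.0005))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / max(A.max(), 1e-6))
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# A synthetic image with a zero red channel has a zero dark channel,
# so t = 1 everywhere and the image should pass through unchanged.
img = np.zeros((8, 8, 3))
img[..., 1], img[..., 2] = 0.5, 0.3
J = dehaze(img)
```

The pass-through on a zero-dark-channel image is a useful sanity check: haze-free content is left untouched, exactly as the prior predicts.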
Specifically, S22 includes:
A high-contrast image is obtained through the weighted-distribution adaptive gamma correction algorithm. The probability density of each intensity level l in the image is:
P(l) = n_l/(MN)
wherein n_l is the number of pixels with intensity l and MN is the total number of pixels. Based on the probability density P(l), the weighted distribution function is expressed as:
P_w(l) = P_max(l)·[(P(l) − P_min)/(P_max(l) − P_min)]^α
wherein P_max(l) is the maximum probability density in the statistical histogram, P_min is the minimum probability density, and α is an adjustment parameter.
The gamma parameter γ is optimized into an adaptive gamma parameter γ_α through the cumulative distribution, so that the intensity transformation adapts to the underwater image:
γ_α = 1 − P_c(l)
wherein γ_α is the adaptive gamma parameter and P_c(l) is the cumulative distribution function; the corrected intensity is
T(l) = l_max·(l/l_max)^γ_α
wherein l_max is the maximum input intensity and T(l) represents the corrected intensity.
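The weighted-distribution adaptive gamma correction of S22 can be sketched as a lookup-table transform; α here is the weighting's adjustment parameter (chosen arbitrarily), and the small epsilons guard against division by zero rather than being part of the formulas above:

```python
import numpy as np

def agcwd(img, alpha=0.5):
    """Adaptive gamma correction with weighted distribution on a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    pdf = hist / hist.sum()                            # P(l) = n_l / (MN)
    pmax, pmin = pdf.max(), pdf.min()
    pdf_w = pmax * ((pdf - pmin) / (pmax - pmin + 1e-12)) ** alpha
    cdf_w = np.cumsum(pdf_w) / (pdf_w.sum() + 1e-12)   # P_c(l)
    gamma = 1.0 - cdf_w                                # per-level gamma_alpha
    levels = np.arange(256) / 255.0
    lut = np.round(255.0 * levels ** gamma).astype(np.uint8)  # T(l)
    return lut[img]

# A perfectly uniform histogram gives a weighted pdf of zero everywhere,
# hence gamma = 1 for all levels and the transform degenerates to identity.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
out = agcwd(img)
```

On real underwater frames, where dark levels dominate the histogram, the per-level gamma drops below 1 for the populated levels and brightens them.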
Specifically, S23 includes:
the brightness distribution of the image is adjusted through a histogram equalization algorithm, so that the brightness distribution of the image is more uniform.
ln f(x, y) = ln f_i(x, y) + ln f_r(x, y)
wherein f_i(x, y) is the illumination component, f_r(x, y) is the reflection component, and f(x, y) is the input image. Applying the Fourier transform to the above gives:
F(u, v) = F_i(u, v) + F_r(u, v)
Multiplying the above by the homomorphic filter function H(u, v) and applying the inverse Fourier transform gives:
c(x, y) = g_i(x, y) + g_r(x, y)
The homomorphic-filtered image is then obtained through the exponential transform:
g(x, y) = exp[g_i(x, y)·g_r(x, y)]
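The log → FFT → filter → IFFT → exp chain above can be sketched as follows. The patent does not specify H(u, v), so a standard Gaussian high-frequency-emphasis filter with illustrative constants γ_L, γ_H, D_0 is assumed here:

```python
import numpy as np

def homomorphic(img, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=10.0):
    """Homomorphic filtering of a single-channel image with values in (0, 1]."""
    logf = np.log1p(img)                         # ln f = ln f_i + ln f_r
    F = np.fft.fft2(logf)                        # F(u, v)
    h, w = img.shape
    u = (np.fft.fftfreq(h) * h)[:, None]
    v = (np.fft.fftfreq(w) * w)[None, :]
    D2 = u ** 2 + v ** 2                         # squared distance from DC
    # High-frequency emphasis: attenuate illumination (low frequencies),
    # boost reflectance (high frequencies)
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    g = np.real(np.fft.ifft2(H * F))             # c(x, y)
    return np.expm1(g).clip(0.0, None)           # exponential transform

# A flat image carries only the DC term, which H scales by gamma_l,
# so the output is the constant expm1(gamma_l * log1p(1)) = sqrt(2) - 1.
flat = np.ones((8, 8))
out = homomorphic(flat)
```

Because illumination varies slowly across the frame, shrinking the DC/low-frequency content while amplifying high frequencies evens out the brightness distribution, which is the goal of S23.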
s3, performing feature point detection on the enhanced underwater picture, performing stereo matching through binocular feature extraction, obtaining a parallax image containing depth information, and calculating three-dimensional coordinates of feature points according to the parallax image;
specifically, S3 in the embodiment of the present application includes:
S31, feature points of the enhanced binocular images are detected with the SIFT algorithm, the feature points of the left and right images are matched through binocular stereo matching, and the disparity of the feature points is obtained.
S32, the disparity d of the matched feature points is calculated and the depth information of the feature points is computed:
z = f·B/d = f·B/(x_l − x_r)
wherein f is the camera focal length, B is the baseline distance between the binocular cameras, and x_l, x_r are the abscissas of the feature point in the left and right images, respectively.
S33, combining the camera parameters to calculate the three-dimensional coordinates of the feature points.
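Combining S32 and S33, a minimal triangulation sketch for a rectified stereo pair (the principal point (c_x, c_y) and all numeric values are illustrative assumptions): the depth follows z = f·B/(x_l − x_r), and the x and y coordinates are back-projected through the intrinsics:

```python
import numpy as np

def triangulate(xl, xr, y, f, B, cx, cy):
    """Recover camera-frame 3-D coordinates from a rectified stereo match."""
    d = xl - xr              # disparity of the feature point
    z = f * B / d            # depth from the disparity
    x = (xl - cx) * z / f    # back-project through the intrinsics
    Y = (y - cy) * z / f
    return np.array([x, Y, z])

# f = 800 px, baseline B = 0.1, disparity 40 px  ->  depth 2.0
P = triangulate(xl=420.0, xr=380.0, y=240.0, f=800.0, B=0.1, cx=320.0, cy=240.0)
print(P)
```

Here the 40-pixel disparity gives z = 800·0.1/40 = 2.0, and the 100-pixel offset from the principal point gives x = 0.25 in the same length unit as the baseline.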
S4, calculating an included angle alpha between a plane where the underwater binocular camera is positioned and a plane where the detected structure is positioned by respectively selecting three-dimensional coordinates of two characteristic points of the upper edge and the lower edge of the underwater structure;
Fig. 2 is a schematic diagram of calculating the included angle α between the plane of the underwater binocular camera and the plane of the structure under test according to an embodiment of the present application. According to the embodiment shown in Fig. 2, the step of calculating the included angle α comprises:
S41, selecting four corner feature points on the upper and lower edges of the image: A_1(x_1, y_1, z_1), A_2(x_2, y_2, z_2), B_1(x_3, y_3, z_3), B_2(x_4, y_4, z_4);
S42, respectively calculating the mean coordinate of the upper-edge corner points A_t(x_t, y_t, z_t) and the mean coordinate of the lower-edge corner points B_l(x_l, y_l, z_l), where x_t = (x_1 + x_2)/2, y_t = (y_1 + y_2)/2, z_t = (z_1 + z_2)/2, and likewise for B_l;
S43, calculating the included angle between the surface of the detected structure and the camera plane:
α = arctan(|z_t − z_l| / |y_t − y_l|)
And S5, correcting the three-dimensional coordinates of the characteristic points according to the included angle alpha to obtain the three-dimensional size information of the underwater detected structural disease.
Specifically, S5 in the embodiment of the present application specifically includes:
The three-dimensional coordinates P_i(x_i, y_i, z_i) obtained in S3 are depth-corrected, z′_i = tan α·|z_i − z_t|, giving the corrected coordinates P′_i(x_i, y_i, z′_i);
the corrected three-dimensional coordinates are then differenced with the upper-edge corner mean coordinate A_t(x_t, y_t, z_t) to obtain the accurate final three-dimensional size information of the underwater structure disease.
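S4 and S5 together can be sketched as below. The explicit angle formula α = arctan(|z_t − z_l| / |y_t − y_l|) is inferred from Fig. 2 and from the correction z′_i = tan α·|z_i − z_t|, so it should be read as an assumption about the patent's geometry rather than a quoted formula:

```python
import numpy as np

def tilt_corrected_sizes(points, A1, A2, B1, B2):
    """Correct feature depths for the camera/structure tilt, then difference
    against the upper-edge mean point to obtain size information."""
    At = (np.asarray(A1, float) + np.asarray(A2, float)) / 2.0  # upper-edge mean
    Bl = (np.asarray(B1, float) + np.asarray(B2, float)) / 2.0  # lower-edge mean
    # Tilt angle between camera plane and structure plane (assumed formula)
    alpha = np.arctan2(abs(At[2] - Bl[2]), abs(At[1] - Bl[1]))
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 2] = np.tan(alpha) * np.abs(pts[:, 2] - At[2])  # z' = tan(a)*|z - z_t|
    return pts - At          # coordinates relative to the upper-edge mean point

# A structure parallel to the camera plane (all corner depths equal) gives
# alpha = 0, so the corrected depth collapses to zero.
out = tilt_corrected_sizes([[1.0, 1.0, 2.0]],
                           (0, 0, 2), (2, 0, 2), (0, 2, 2), (2, 2, 2))
```

The zero-tilt case is the natural sanity check: with no keystone distortion, the correction leaves only the in-plane offsets relative to the upper-edge mean.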
By adopting the embodiment of the application, the method has the following beneficial effects:
the binocular camera is used for obtaining three-dimensional coordinates, so that trapezoidal errors are corrected, and the image quality is improved through an underwater image enhancement technology. The binocular camera can calculate the three-dimensional coordinates of the object through the angle difference in the imaging principle, so that the influence of the trapezoidal distortion is corrected, and the measurement accuracy is improved. Meanwhile, the image enhancement technology can improve the quality of the image, so that the image is clearer, and the measurement precision is improved.
Device embodiment 1
The embodiment of the application provides electronic equipment, which comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform steps as in the method embodiments described above.
Device example two
Embodiments of the present application provide a storage medium storing computer-executable instructions that, when executed, implement steps as in the method embodiments described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (10)

1. The binocular vision-based underwater structure disease detection method is characterized by comprising the following steps of:
s1, calibrating an underwater binocular camera by adopting a calibration method, establishing a mapping relation between a coordinate system of the underwater binocular camera and a real world coordinate system, and acquiring parameters of the underwater binocular camera;
s2, image acquisition is carried out on an underwater environment through an underwater binocular camera, and enhancement processing is carried out on acquired underwater pictures;
s3, performing feature point detection on the enhanced underwater picture, performing stereo matching through binocular feature extraction, obtaining a parallax image containing depth information, and calculating three-dimensional coordinates of feature points according to the parallax image;
s4, calculating an included angle alpha between a plane where the underwater binocular camera is positioned and a plane where the detected structure is positioned by respectively selecting three-dimensional coordinates of two characteristic points of the upper edge and the lower edge of the underwater structure;
and S5, correcting the three-dimensional coordinates of the characteristic points according to the included angle alpha to obtain three-dimensional size information of the underwater detected structural disease.
2. The method according to claim 1, wherein S1 specifically comprises:
S11, calibrating the underwater binocular camera by using a calibration method to obtain internal parameters and external parameters of the underwater binocular camera, so as to obtain a conversion relation among a world coordinate system, a camera coordinate system and an image coordinate system;
the mapping relation from a three-dimensional space point to a two-dimensional image point is as follows:
s·[u, v, 1]^T = N·W·[X, Y, Z, 1]^T
wherein s is a scale factor, N is the camera intrinsic matrix, W is the extrinsic matrix, f_x, f_y are the focal lengths in the x and y directions, R is a 3×3 rotation matrix, and t is a 3×1 translation vector;
S12, selecting a checkerboard for calibration, the calibration plate lying on the plane Z = 0, so that the following formula can be obtained:
s·[u, v, 1]^T = N·[r_1, r_2, t]·[X, Y, 1]^T
and letting:
H = N·[r_1, r_2, t]
where H is the mapping matrix from the three-dimensional space coordinates to the two-dimensional pixel coordinates,
and S13, acquiring the mapping matrix H by shooting a preset number of images, and calculating the internal parameters and external parameters of the underwater binocular camera.
3. The method according to claim 1, wherein S2 specifically comprises:
s21, removing turbidity from the image according to a foggy-day imaging model through a dark channel prior algorithm, and obtaining a color correction image;
s22, obtaining a contrast enhancement image through a self-adaptive gamma correction algorithm based on weighted distribution;
s23, adjusting the brightness distribution of the image through a histogram equalization algorithm to obtain a brightness equalization image;
s24, carrying out multi-scale fusion on the color correction image, the contrast enhancement image and the brightness balance image to obtain a fused image.
4. A method according to claim 3, wherein S21 specifically comprises:
acquiring the foggy-day imaging model through equation 4:
I(x) = J(x)·t(x) + [1 − t(x)]·A_c, t(x) = e^(−β·d(x)) (equation 4)
wherein I represents the underwater captured image, J represents the clear ideal image, A_c represents the background light, t(x) represents the transmittance, β is the attenuation coefficient, and d(x) is the depth of field;
acquiring a dark channel image according to the dark channel principle through equation 5:
J_dark(x) = min_{y∈Ω(x)}( min_{C∈{R,G,B}} J^C(y) ) (equation 5)
wherein J_dark represents the dark channel image, J^C represents the three color channels, Ω(x) represents a local region centered on pixel x, and y denotes a pixel within the region Ω(x);
according to the dark channel prior, calculating the transmittance through equation 6:
t(x) = 1 − ω·min_{y∈Ω(x)}( min_C I^C(y)/A_c ) (equation 6)
wherein ω is an adjustment parameter;
obtaining the water background light value through equation 7, by accumulating the gray-level histogram of the candidate background-light points from right to left until the accumulated count exceeds 0.05%·N, wherein N is the total number of pixels;
performing color correction on the degraded image, and calculating the dehazed image from the transmittance and the water background light value:
J(x) = (I(x) − A_c)/max(t(x), t_0) + A_c (equation 8)
wherein t_0 is a threshold set to avoid t(x) being too small, and I^r represents the underwater image of the r color channel.
5. A method according to claim 3, wherein S22 specifically comprises:
obtaining the probability density of each intensity level l in the image through equation 9:
P(l) = n_l/(MN) (equation 9)
wherein n_l is the number of pixels with intensity l and MN is the total number of pixels; based on the probability density P(l), the weighted distribution function P_w(l) is expressed as:
P_w(l) = P_max(l)·[(P(l) − P_min)/(P_max(l) − P_min)]^α (equation 10)
wherein P_max(l) is the maximum probability density in the statistical histogram, P_min is the minimum probability density, and α is an adjustment parameter;
optimizing the gamma parameter γ into an adaptive gamma parameter γ_α through the cumulative distribution, so that the intensity transformation adapts to the underwater image:
γ_α = 1 − P_c(l) (equation 11);
wherein γ_α is the adaptive gamma parameter and P_c(l) is the cumulative distribution function; the corrected intensity is
T(l) = l_max·(l/l_max)^γ_α (equation 12);
wherein l_max is the maximum input intensity and T(l) represents the corrected intensity of the element.
6. A method according to claim 3, wherein S23 specifically comprises:
ln f(x, y) = ln f_i(x, y) + ln f_r(x, y) (equation 13);
wherein f_i(x, y) is the illumination component, f_r(x, y) is the reflection component, and f(x, y) represents the input image; applying the Fourier transform to the above gives:
F(u, v) = F_i(u, v) + F_r(u, v) (equation 14);
multiplying the above by the homomorphic filter function H(u, v) and applying the inverse Fourier transform gives:
c(x, y) = g_i(x, y) + g_r(x, y) (equation 15);
obtaining the homomorphic-filtered image g(x, y) through the exponential transform:
g(x, y) = exp[g_i(x, y)·g_r(x, y)] (equation 16).
7. The method according to claim 1, wherein S3 specifically comprises:
S31, detecting feature points of the enhanced underwater images through the SIFT algorithm, matching the feature points of the left and right images acquired by the underwater binocular camera through binocular stereo matching, and obtaining the disparity of the feature points;
S32, calculating the depth information of the feature points according to the disparity d:
z = f·B/d = f·B/(x_l − x_r)
wherein f is the camera focal length, B is the baseline distance between the left and right cameras of the binocular camera, and x_l, x_r are the abscissas of the feature point in the left and right images acquired by the underwater binocular camera;
S33, calculating the three-dimensional coordinates of the feature points by combining the internal parameters and external parameters of the underwater binocular camera.
8. The method according to claim 1, wherein S5 specifically comprises:
s51, performing depth correction on the three-dimensional coordinates of the feature points to obtain corrected coordinates;
s52, the corrected three-dimensional coordinates and the average value coordinates of the upper edge corner points are subjected to difference to obtain the accurate three-dimensional size information of the final underwater structure diseases.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which when executed cause the processor to perform the steps of the binocular vision-based underwater structure disease detection method of any of claims 1 to 8.
10. A storage medium storing computer executable instructions which when executed implement the steps of the binocular vision-based underwater structure disease detection method of any of claims 1 to 8.
CN202310886273.XA 2023-07-18 2023-07-18 Binocular vision-based underwater structure disease detection method, equipment and medium Pending CN116934833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310886273.XA CN116934833A (en) 2023-07-18 2023-07-18 Binocular vision-based underwater structure disease detection method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310886273.XA CN116934833A (en) 2023-07-18 2023-07-18 Binocular vision-based underwater structure disease detection method, equipment and medium

Publications (1)

Publication Number Publication Date
CN116934833A true CN116934833A (en) 2023-10-24

Family

ID=88385716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310886273.XA Pending CN116934833A (en) 2023-07-18 2023-07-18 Binocular vision-based underwater structure disease detection method, equipment and medium

Country Status (1)

Country Link
CN (1) CN116934833A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523431A (en) * 2023-11-17 2024-02-06 中国科学技术大学 Firework detection method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240410

Address after: 510006 No. 230, Waihuan West Road, University Town, Guangzhou, Guangdong

Applicant after: Guangzhou University

Country or region after: China

Applicant after: Guangzhou Guangjian Construction Engineering Testing Center Co.,Ltd.

Applicant after: GUANGDONG WENGU INSPECTION AND IDENTIFICATION CO.,LTD.

Address before: 510006 No. 230, Waihuan West Road, University Town, Xiaoguwei street, Panyu District, Guangzhou City, Guangdong Province

Applicant before: Guangzhou University

Country or region before: China

Applicant before: Guangzhou Guangjian Construction Engineering Testing Center Co.,Ltd.
