CN113516007A - Underwater marker identification and splicing method for multi-group binocular camera networking - Google Patents

Underwater marker identification and splicing method for multi-group binocular camera networking

Info

Publication number: CN113516007A (application CN202110358465.4A); granted as CN113516007B
Authority: CN (China)
Legal status: Active (granted)
Inventors: 董军宇, 范浩, 宋德豪, 王晓璇, 胡业琦, 解志杰
Assignee: Ocean University of China
Prior art keywords: circle, center, marker, underwater, coordinates

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Abstract

An identification and splicing method for underwater markers using a network of multiple groups of binocular cameras. CCT code markers are pasted on an underwater target object; the target is photographed, identified, and decoded to resolve the semantic attributes of the markers and to locate their center points. The underwater markers are then positioned and their positioning information is spliced: the coordinates of the different center points in the world coordinate system established by each left-eye camera are obtained and converted into the world coordinate system established by the left eye of the first binocular camera, yielding the coordinates of all observed marker center points in a single world coordinate system and achieving three-dimensional splicing of the observed objects. Each group of underwater binocular cameras independently captures, identifies, and positions the underwater markers. A potential application of the invention is fusing the three-dimensional positioning information of the tracked markers under the viewing angles of multiple groups of binocular cameras into the same world coordinate system for display, realizing underwater panoramic motion capture.

Description

Underwater marker identification and splicing method for multi-group binocular camera networking
Technical Field
The invention belongs to the field of underwater vision, and relates to an underwater marker identification and splicing method for networking of multiple groups of binocular cameras.
Background
Motion capture is a research hotspot in computer vision. It refers to acquiring image data of objects in a scene through cameras and auxiliary equipment, then recording the displacement of the photographed objects and recovering their posture from the acquired data. Motion capture has a wide range of applications, including virtual reality, gaming, ergonomics research, simulation training, and biomechanics research.
Underwater motion capture imposes higher requirements, and few current technologies can perform it: refraction and other underwater effects make markers difficult to identify and position, and no existing method constructs a panorama from the three-dimensional positioning information of markers under the viewing angles of multiple groups of binocular cameras. The existing motion capture technology therefore performs poorly in underwater environments, and an underwater panoramic-imaging motion capture method based on a network of multiple groups of binocular cameras is needed.
Disclosure of Invention
Aiming at the underwater environment, the invention provides an underwater panoramic-imaging motion capture method in which multiple groups of binocular cameras are networked. With the device and method, markers on objects in an underwater environment can be identified and positioned, and the three-dimensional positioning information of the tracked markers under the viewing angles of multiple groups of binocular cameras is fused into the same world coordinate system for display, achieving panoramic motion capture of underwater objects.
The main body of the device used by the invention is a supporting frame with two parallel brackets; a row of underwater binocular cameras is fixed to each bracket, and several light source lamps are also mounted on the frame. The binocular cameras and light source lamps are perpendicular to the supporting frame, with all binocular camera lenses facing the same direction.
The underwater marker identification and splicing method of the multi-group binocular camera networking adopts the following technical scheme:
0) pasting a plurality of CCT code markers on an underwater target object (shown in the attached drawing);
1) pointing the lens of every binocular camera at the underwater target object, and having all binocular cameras photograph it;
2) identifying and decoding the underwater target to resolve the semantic attributes of the markers (the values represented by the CCT code markers) and to locate the marker center points:
2.1) for the image shot by each camera, extracting contours to form a binary image. Many kinds of contours are extracted from the shot image, but only the contour of the central circle of the CCT code marker is wanted, so non-circular contours that do not meet the requirements are removed by applying the following filters in order:
2.1.1) a contour must not contain too few points: since an ellipse is fitted in the following steps, the contour must contain at least 5 points, and contours with fewer than 5 points are screened out. This threshold is hard to tune, and in whole-image recognition its setting is affected by the shooting distance; if the marker region is first cropped out of the image and then enlarged, the condition becomes easier to set;
2.1.2) for a circle of radius R, the squared circumference is (2πR)² = 4π²R² and the area is πR², so the ratio of squared circumference to area is about 4π. Contours whose ratio deviates too far from 4π, i.e. whose ratio lies outside [3π, 5π], are removed;
2.2) fitting an ellipse to the points contained in each remaining contour (at least 5 points are required to fit an ellipse, which is why the minimum in the screening above is 5). The photographed CCT code is a standard circle, and even when deformed by various factors it cannot change too much; nevertheless, the ratio of the major axis to the minor axis of the fitted ellipse is required to be no more than 1.5, and contours whose fitted ellipse violates this ratio are screened out;
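The two contour filters of steps 2.1.1 and 2.1.2 can be sketched as follows. This is a minimal illustration using the thresholds stated in the text; the function and parameter names are illustrative, not from the patent, and the perimeter/area values would come from a contour-extraction routine.

```python
import math

def is_candidate_circle(n_points: int, perimeter: float, area: float) -> bool:
    """Filter a contour per steps 2.1.1 and 2.1.2:
    at least 5 points (needed for ellipse fitting), and
    perimeter^2 / area within [3*pi, 5*pi] (an ideal circle gives 4*pi)."""
    if n_points < 5:              # step 2.1.1: too few points to fit an ellipse
        return False
    if area <= 0:
        return False
    ratio = perimeter ** 2 / area  # step 2.1.2: ~4*pi for a true circle
    return 3 * math.pi <= ratio <= 5 * math.pi

# An ideal circle of radius R: perimeter = 2*pi*R, area = pi*R^2 -> ratio = 4*pi.
R = 10.0
print(is_candidate_circle(36, 2 * math.pi * R, math.pi * R ** 2))  # True
# A square of side 10: ratio = 40^2 / 100 = 16 > 5*pi, so it is rejected.
print(is_candidate_circle(36, 40.0, 100.0))  # False
```

Note that a square only barely exceeds the upper bound (16 vs. 5π ≈ 15.7), which is why the patent adds the major/minor-axis ratio test of step 2.2 as a second line of defense.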
2.3) finding the position of the circle center from the fitted ellipse, and locating the two surrounding ring ellipses through the geometric relations built into the CCT code marker, namely that the radius of the inner circle equals the width of the black inner ring and also the width of the white outer ring.
Let the center coordinates of the fitted ellipse be (x, y), with major axis a and minor axis b. Then:
inner-ring ellipse: center (x, y), major axis 2a, minor axis 2b;
outer-ring ellipse: center (x, y), major axis 3a, minor axis 3b;
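The geometry of step 2.3 — the ring ellipses are concentric with the fitted center ellipse and scaled by 2 and 3 — can be sketched as follows (the tuple layout and function name are illustrative):

```python
def ring_ellipses(x, y, a, b):
    """Given the fitted center ellipse (center (x, y), major axis a, minor
    axis b), return the concentric inner-ring and outer-ring ellipses implied
    by the CCT marker geometry (same center, axes scaled by 2 and 3)."""
    center = (x, y, a, b)
    inner = (x, y, 2 * a, 2 * b)   # boundary of the black inner ring
    outer = (x, y, 3 * a, 3 * b)   # boundary of the outer coding ring
    return center, inner, outer

c, i, o = ring_ellipses(100.0, 80.0, 12.0, 10.0)
print(i)  # (100.0, 80.0, 24.0, 20.0)
print(o)  # (100.0, 80.0, 36.0, 30.0)
```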
2.4) applying an affine transformation to the three regions bounded by the central ellipse, the inner-ring ellipse, and the outer-ring ellipse, correcting them into perfect circles, and then judging whether the three corrected circles form a CCT marker:
2.4.1) denote the circle center coordinates by (x0, y0); the radius r of the central perfect circle is close to a/2. The corrected center circle, inner-ring circle, and outer-ring circle are sampled with N = 36 sampling points, i.e. one point every 360°/N = 10°. The sampling proceeds as follows:
2.4.2) a sampling circle is placed inside the central perfect circle and its 36 points are traversed; if this really is the center circle of a CCT code marker, all 36 points are white, i.e. the sum of the pixel values of the 36 points in the contour map (binary image) equals 36. The sampling circle may be placed at 0.5r from the circle center, which confines it to the inside of the center circle;
2.4.3) a sampling circle is placed inside the inner-ring circle and its 36 points are traversed; if the inner ring of a CCT code marker is present, all 36 points are black, i.e. the sum of the pixel values of the 36 points in the contour map (binary image) equals 0. The sampling circle may be placed 1.2–1.5r from the circle center, which confines it to the inner ring;
2.4.4) a sampling circle is placed inside the outer-ring circle and its 36 points are traversed. The CCT code marker uses 12-bit coding, so one code bit occupies 30°; since sampling is performed every 10°, any white sector spanning 30° of arc yields at least 3 white sampling points, i.e. the sum of the pixel values of the 36 points in the binary image is greater than 2. The sampling circle may be placed 2.2–2.5r from the circle center, which confines it to the outer ring;
2.4.5) if the three perfect circles pass the judgments above, they are taken to be a CCT code marker and decoding proceeds; otherwise the next contour is judged;
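The three ring tests of steps 2.4.2–2.4.4 can be sketched on a synthetic corrected marker. This is a hedged illustration: the synthetic image, the sampling radii (0.5r, 1.35r, 2.35r, chosen inside the ranges given above), and all names are assumptions for demonstration, not the patent's implementation.

```python
import numpy as np

def sample_ring(img, cx, cy, radius, n=36):
    """Sample n points on a circle of the given radius around (cx, cy) and
    return the sum of their binary pixel values (steps 2.4.2-2.4.4)."""
    angles = np.deg2rad(np.arange(n) * 360.0 / n)       # one point every 10 deg
    xs = np.round(cx + radius * np.cos(angles)).astype(int)
    ys = np.round(cy + radius * np.sin(angles)).astype(int)
    return int(img[ys, xs].sum())

def looks_like_cct(img, cx, cy, r):
    """Apply the three ring tests: all-white center, all-black inner ring,
    outer coding ring with at least one white 30-degree sector."""
    center_ok = sample_ring(img, cx, cy, 0.5 * r) == 36   # step 2.4.2
    inner_ok = sample_ring(img, cx, cy, 1.35 * r) == 0    # step 2.4.3
    outer_ok = sample_ring(img, cx, cy, 2.35 * r) > 2     # step 2.4.4
    return center_ok and inner_ok and outer_ok

# Synthetic corrected marker: white center disk (radius r), black inner ring
# (r..2r), outer coding ring (2r..3r) with alternating 30-degree white sectors.
r, size = 20, 160
cx = cy = size // 2
yy, xx = np.mgrid[0:size, 0:size]
dist = np.hypot(xx - cx, yy - cy)
theta = np.degrees(np.arctan2(yy - cy, xx - cx)) % 360
img = np.zeros((size, size), dtype=np.uint8)
img[dist <= r] = 1                                  # white center circle
sector = (theta // 30).astype(int) % 2 == 0         # every other 30-deg bit white
img[(dist > 2 * r) & (dist <= 3 * r) & sector] = 1
print(looks_like_cct(img, cx, cy, r))  # True
```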
2.5) the specific steps for decoding the semantic attributes of a marker are as follows:
2.5.1) since the CCT code marker uses 12-bit coding, the annular coding belt of the outer-ring perfect circle is sampled once every 30°, with the sampling points 2.2–2.5r from the circle center;
2.5.2) since the sampling direction does not necessarily start at a bit boundary (the next code value is entered only every 30° of rotation), the starting point of sampling affects the decoding result. Therefore 30 sampling passes are made, from starting angles 0°, 1°, 2°, ..., 29°:
0, 30, 60, 90, ......, 330;
1, 31, 61, 91, ......, 331;
......;
29, 59, 89, 119, ......, 359;
For the binary codes obtained in the 30 passes, the average of each bit position is taken over the passes, in order from the 1st sampling to the 30th: an average greater than 0.5 is taken as 1, otherwise as 0. This yields a 12-bit binary sequence, which is decoded cyclically: there are 12 possible rotations, each of the 12 rotated binary codes is converted to decimal, and the binary code corresponding to the smallest decimal value is taken as the coding of the CCT code marker;
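The averaging and rotation-invariant (cyclic) decoding of step 2.5.2 can be sketched as follows; the function name and input layout are illustrative, not from the patent.

```python
def decode_cct(samples_30x12):
    """Decode a CCT marker per step 2.5.2.

    samples_30x12: 30 sampling passes (start angles 0..29 degrees), each a
    list of 12 binary values read every 30 degrees around the coding ring.
    """
    # Average each of the 12 bit positions over the 30 passes; > 0.5 -> 1.
    bits = [1 if sum(p[i] for p in samples_30x12) / len(samples_30x12) > 0.5 else 0
            for i in range(12)]
    # Cyclic decoding: try all 12 rotations and keep the smallest decimal value.
    rotations = [bits[k:] + bits[:k] for k in range(12)]
    values = [int("".join(map(str, rot)), 2) for rot in rotations]
    return min(values)

# The same physical marker read from two different start bits decodes identically.
pattern = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
rotated = pattern[5:] + pattern[:5]
print(decode_cct([pattern] * 30))  # 11  (minimal rotation is 000000001011)
print(decode_cct([rotated] * 30))  # 11
```

Taking the minimum over all rotations is what makes the code value independent of where sampling happened to start on the ring.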
3) positioning of the underwater markers: each underwater binocular camera comprises a left-eye camera and a right-eye camera. The left-eye and right-eye cameras simultaneously observe and recognize the same marker through the method of step 2) (markers with the same CCT code value are regarded as the same marker); the center point of that marker is mapped onto the left-eye and right-eye images as a pair of matching points, and the center point of the marker observed by the left-eye camera is recorded in a coordinate system established by the left-eye camera;
4) splicing three-dimensional positioning information:
after the underwater positioning is finished, the individual images need to be spliced so that the measured object can be monitored in real time; the three-dimensional positioning information of the CCT code markers under the viewing angles of the multiple groups of binocular cameras is fused into the same world coordinate system for display, specifically as follows:
4.1) the identification of the underwater CCT code markers in step 2) gives the circle center coordinates of each marker: (x_l, y_l) and (x_r, y_r) are the circle center coordinates of the marker in the image coordinate systems of the left-eye and right-eye cameras.
The circle center coordinates are normalized by the focal length f:

x_l′ = x_l / f,  y_l′ = y_l / f,  x_r′ = x_r / f,  y_r′ = y_r / f

giving the center coordinates (x_l′, y_l′) and (x_r′, y_r′) on the normalized plane.
4.2) the three-dimensional coordinates of the normalized circle center coordinates in a world coordinate system are obtained through an underwater camera refraction model:
let (X_L, Y_L, Z_L) denote the three-dimensional coordinates of the normalized circle center coordinates in the world coordinate system established by the left-eye camera, and (X_R, Y_R, Z_R) those in the world coordinate system established by the right-eye camera. (X_L, Y_L, Z_L) and (X_R, Y_R, Z_R) are each related to the corresponding normalized image coordinates by the refraction model [the two refraction-model equations appear only as images in the original], where h is the distance from the camera center to the protective-shell glass and n is the refractive index of light in the water medium.
Let the rotation matrix between (X_L, Y_L, Z_L) and (X_R, Y_R, Z_R) be R (a 3 × 3 matrix) and the translation matrix be T (a 3 × 1 matrix); then:

[X_R, Y_R, Z_R]^T = R [X_L, Y_L, Z_L]^T + T

Taking the left-eye frame as the world frame, (X_L, Y_L, Z_L) = (X_W, Y_W, Z_W); expanding with R = (r_ij) and T = (t_1, t_2, t_3)^T gives

X_R = r_11 X_W + r_12 Y_W + r_13 Z_W + t_1
Y_R = r_21 X_W + r_22 Y_W + r_23 Z_W + t_2
Z_R = r_31 X_W + r_32 Y_W + r_33 Z_W + t_3

Combining these with the normalized image coordinates of the two cameras yields an equation system from which (X_W, Y_W, Z_W) can be solved, giving the coordinates of the marker center point in the world coordinate system established by the left-eye camera;
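The solve in step 4.2 can be sketched as a standard linear triangulation. Important caveat: this sketch uses plain pinhole constraints (x = X/Z, y = Y/Z) and ignores the patent's refraction model (glass distance h, refractive index n), which would bend the rays; all names are illustrative.

```python
import numpy as np

def triangulate(xl, yl, xr, yr, R, T):
    """Least-squares triangulation of a marker center from one binocular pair.

    (xl, yl), (xr, yr): normalized image coordinates of the matched center.
    R (3x3), T (3,): transform from the left-eye (world) frame to the
    right-eye frame, P_R = R @ P_W + T. Refraction is NOT modeled here.
    """
    # Left eye: X_W - xl*Z_W = 0 and Y_W - yl*Z_W = 0.
    # Right eye: X_R - xr*Z_R = 0 and Y_R - yr*Z_R = 0, with the rows of R, T
    # expressing (X_R, Y_R, Z_R) in terms of (X_W, Y_W, Z_W).
    A = np.vstack([
        [1.0, 0.0, -xl],
        [0.0, 1.0, -yl],
        R[0] - xr * R[2],
        R[1] - yr * R[2],
    ])
    b = np.array([0.0, 0.0, xr * T[2] - T[0], yr * T[2] - T[1]])
    Xw, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Xw

# Round trip: project a known point into both eyes, then recover it.
R = np.eye(3)
T = np.array([-0.2, 0.0, 0.0])            # right eye shifted along the baseline
P = np.array([0.3, -0.1, 2.0])
xl, yl = P[0] / P[2], P[1] / P[2]
Pr = R @ P + T
xr, yr = Pr[0] / Pr[2], Pr[1] / Pr[2]
print(np.round(triangulate(xl, yl, xr, yr, R, T), 6))  # ~ [0.3, -0.1, 2.0]
```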
Binocular cameras at different positions shoot and obtain different marker center points, and the coordinates of these center points in the world coordinate system established by each camera pair's own left-eye camera are obtained. These are converted by [R T] into the world coordinate system established by the left eye of the first binocular camera, giving the coordinates of all observed marker center points in one world coordinate system and achieving three-dimensional splicing of the observed objects.
Each group of underwater binocular cameras independently captures the underwater markers and independently identifies and positions them.
A potential application of the invention is fusing the three-dimensional positioning information of the tracked markers under the viewing angles of the multiple groups of binocular cameras into the same world coordinate system for display, realizing underwater panoramic motion capture.
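The final splicing step — mapping every pair's marker points into the first pair's world frame via its [R T] — can be sketched as follows. The extrinsics (R_k, T_k) of each pair relative to the first are assumed to be known from calibration; the data layout and names are illustrative.

```python
import numpy as np

def fuse_to_first_frame(points_per_camera, extrinsics):
    """Map marker center points from each binocular pair's left-eye frame into
    the world frame of the first pair's left-eye camera (step 4 splicing).

    points_per_camera: {pair_index: (N_i, 3) array of marker centers}
    extrinsics: {pair_index: (R_k, T_k)} with P_world = R_k @ P_k + T_k;
    pair 0 is the reference, so R_0 = I and T_0 = 0.
    """
    fused = []
    for k, pts in points_per_camera.items():
        Rk, Tk = extrinsics[k]
        fused.append(pts @ Rk.T + Tk)   # rotate then translate each row
    return np.vstack(fused)

extrinsics = {
    0: (np.eye(3), np.zeros(3)),
    1: (np.eye(3), np.array([1.0, 0.0, 0.0])),   # pair 1 sits 1 m along x
}
points = {
    0: np.array([[0.0, 0.0, 2.0]]),
    1: np.array([[-1.0, 0.0, 2.0]]),   # same physical point seen by pair 1
}
print(fuse_to_first_frame(points, extrinsics))   # both rows -> [0, 0, 2]
```

The same physical marker, seen by two different pairs, lands on one point in the shared frame — which is exactly what makes the panoramic splicing consistent.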
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a diagram of an apparatus used in the present invention.
In the figure: 1–10 are the underwater binocular cameras one to ten; 11–18 are the halogen light sources one to eight.
Fig. 3 is a CCT code marker diagram.
Detailed Description
As shown in fig. 2, the main body of the device used by the invention is a supporting frame with two parallel brackets; a row of underwater binocular cameras is fixed to each bracket, and several light source lamps are also mounted on the frame. The binocular cameras and light source lamps are perpendicular to the supporting frame, with all binocular camera lenses facing the same direction.
As shown in fig. 1, the underwater marker identification and splicing method of the multi-group binocular camera networking proceeds through steps 0) to 4) exactly as set out in the Disclosure of the Invention above, with the CCT code markers pasted on the underwater target object shown in fig. 3.

Claims (1)

1. The underwater marker identification and splicing method of the multi-group binocular camera networking is characterized by comprising the following steps:
0) pasting a plurality of CCT code markers on an underwater target object;
1) the method comprises the following steps that the lens of each binocular camera faces an underwater target object, and all the binocular cameras shoot the underwater target object;
2) identifying and decoding the underwater target to analyze semantic attributes of the markers and locate the center points of the markers:
2.1) for the image shot by each camera, extracting contours to form a binary image; many kinds of contours are extracted from the shot image, but only the contour of the central circle of the CCT code marker is wanted, so non-circular contours that do not meet the requirements are removed by applying the following filters in order:
2.1.1) screening out contours with fewer than 5 points;
2.1.2) removing contours whose ratio of squared perimeter to area is not in [3π, 5π];
2.2) fitting an ellipse to the points contained in each contour; the ratio of the major axis to the minor axis of the fitted ellipse is required to be no more than 1.5, and contours whose fitted ellipse violates this ratio are screened out;
2.3) finding the position of the circle center from the fitted ellipse, and locating the two surrounding ring ellipses through the geometric relations built into the CCT code marker, namely that the radius of the inner circle equals the width of the black inner ring and also the width of the white outer ring;
let the center coordinates of the fitted ellipse be (x, y), with major axis a and minor axis b; then:
inner-ring ellipse: center (x, y), major axis 2a, minor axis 2b;
outer-ring ellipse: center (x, y), major axis 3a, minor axis 3b;
2.4) applying an affine transformation to the three regions bounded by the central ellipse, the inner-ring ellipse, and the outer-ring ellipse, correcting them into perfect circles, and judging whether the three corrected circles form a CCT marker:
2.4.1) denoting the circle center coordinates by (x0, y0), the radius r of the central perfect circle being close to a/2; sampling the corrected center circle, inner-ring circle, and outer-ring circle with N = 36 sampling points, i.e. one point every 360°/N = 10°; the sampling proceeds as follows:
2.4.2) a sampling circle placed inside the central perfect circle is traversed at its 36 points; if this is the center circle of a CCT code marker, all 36 points are white, i.e. the sum of the pixel values of the 36 points in the contour map equals 36; the sampling circle is set at 0.5r from the circle center, confining it to the inside of the center circle;
2.4.3) a sampling circle placed inside the inner-ring circle is traversed at its 36 points; if the inner ring of a CCT code marker is present, all 36 points are black, i.e. the sum of the pixel values of the 36 points in the contour map equals 0; the sampling circle is set 1.2–1.5r from the circle center, confining it to the inner ring;
2.4.4) a sampling circle placed inside the outer-ring circle is traversed at its 36 points; the CCT code marker uses 12-bit coding, so one code bit occupies 30°, and since sampling is performed every 10°, any white sector spanning 30° of arc yields at least 3 white sampling points, i.e. the sum of the pixel values of the 36 points in the binary image is greater than 2; the sampling circle is set 2.2–2.5r from the circle center, confining it to the outer ring;
2.4.5) if the three perfect circles pass the judgments above, they are taken to be a CCT code marker and decoding proceeds; otherwise the next contour is judged;
2.5) the specific steps for decoding the semantic attributes of the markers are as follows:
2.5.1) because the CCT code marker uses 12-bit coding, the outer coding ring is sampled once every 30°, with the sampling points at 2.2-2.5r from the circle center;
2.5.2) since a 30° step does not necessarily land on a code-bit boundary, the starting point of sampling affects the decoding result; therefore 30 sampling passes are taken, with starting points 0°, 1°, 2°, ..., 29°:
0°, 30°, 60°, 90°, ..., 330°;
1°, 31°, 61°, 91°, ..., 331°;
......
29°, 59°, 89°, 119°, ..., 359°;
For the binary codes obtained in the 30 sampling passes, each of the 12 code positions is averaged over the passes, in order from the 1st sampling to the 30th: an average greater than 0.5 is taken as 1, otherwise as 0, yielding a 12-bit binary sequence. Cyclic decoding is applied to this sequence (12 rotations in total); the 12 binary codes are converted to decimal, and the binary code corresponding to the minimum decimal value is taken as the code of the CCT marker;
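A minimal sketch of this decoding procedure, assuming the outer coding ring has already been sampled at one-degree steps into a 0/1 array of length 360 (the function name is hypothetical):

```python
import numpy as np

def decode_cct(samples_360):
    """Decode a 12-bit CCT code from 360 one-degree samples of the outer
    coding ring: 30 phase-shifted passes (one sample every 30 degrees,
    starting at 0..29 degrees), per-position averaging with a 0.5
    threshold, then the cyclic rotation with the minimum decimal value
    gives the canonical code."""
    s = np.asarray(samples_360)
    # 30 passes of 12 samples each: start at 0..29 deg, step 30 deg
    passes = np.stack([s[start::30] for start in range(30)])   # (30, 12)
    # per-bit average over the 30 passes, thresholded at 0.5
    bits = (passes.mean(axis=0) > 0.5).astype(int)             # 12 bits
    # try all 12 cyclic rotations, keep the smallest decimal value
    values = [int("".join(map(str, np.roll(bits, -k))), 2) for k in range(12)]
    return min(values)
```

Because the minimum over all 12 rotations is taken, the result does not depend on which code bit the sampling happened to start on.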
3) positioning of underwater markers: the underwater binocular camera comprises a left-eye camera and a right-eye camera; the two cameras simultaneously observe and identify the same marker by the method of step 2) (markers with the same CCT code value are regarded as the same marker); the center point of that marker maps onto the left and right images as a pair of matching points, and the marker center observed by the left-eye camera is recorded in a coordinate system established at the left-eye camera;
4) splicing three-dimensional positioning information:
after the underwater positioning is completed, the individual images need to be stitched so as to monitor the measured object in real time; the three-dimensional positioning information of the CCT code markers under the viewing angles of the multiple groups of binocular cameras is fused into the same world coordinate system for display, specifically:
4.1) obtaining the circle-center coordinates of each marker from the identification of the underwater CCT code markers in step 2); (xl, yl) and (xr, yr) denote the coordinates of the marker's circle center in the image coordinate systems of the left and right cameras;
the circle-center coordinates are normalized by the focal length f:

x'l = xl / f, y'l = yl / f; x'r = xr / f, y'r = yr / f

giving the circle-center coordinates (x'l, y'l) and (x'r, y'r) on the normalized plane;
4.2) obtaining the three-dimensional coordinates of the normalized circle-center coordinates in a world coordinate system through the underwater camera refraction model:
with (XL, YL, ZL) denoting the three-dimensional coordinates of the normalized circle center in the world coordinate system established at the left-eye camera, and (XR, YR, ZR) the corresponding coordinates in the coordinate system of the right-eye camera,
(XL, YL, ZL) and (XR, YR, ZR) satisfy the following relationships:
(refraction-model equations relating (XL, YL, ZL) and (XR, YR, ZR) to the normalized coordinates; given in the original as equation images FDA0003004565330000022 and FDA0003004565330000023)
the distance from the center of the camera to the protective shell glass is h, and the refractive index of light in the water medium is n
(XL, YL, ZL) and (XR, YR, ZR) are related by a rotation matrix R (3 × 3) and a translation matrix T (3 × 1) as follows:

(XR, YR, ZR)ᵀ = R·(XL, YL, ZL)ᵀ + T
namely, writing R = (rij) (i, j = 1, 2, 3) and T = (t1, t2, t3)ᵀ, and taking the left-eye camera frame as the world frame, so that (XL, YL, ZL) = (XW, YW, ZW), the expansion gives

XR = r11·XW + r12·YW + r13·ZW + t1
YR = r21·XW + r22·YW + r23·ZW + t2
ZR = r31·XW + r32·YW + r33·ZW + t3
combining this expansion with the refraction-model relations yields an equation system in (XW, YW, ZW) (given in the original as equation image FDA0003004565330000032); solving it gives the coordinates of the marker center point in the world coordinate system established at the left-eye camera;
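The actual solve uses the patent's refraction-model equations, which are not reproduced here. Purely as an illustration of recovering (XW, YW, ZW) from the normalized coordinates and the extrinsics (R, T), the refraction-free limit (h = 0, n = 1) reduces to standard linear triangulation:

```python
import numpy as np

def triangulate(xl, yl, xr, yr, R, T):
    """Linear (DLT) triangulation of one matched marker centre from
    normalized image coordinates, ignoring refraction: the left camera
    is [I | 0] and the right camera is [R | T]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, T.reshape(3, 1)])
    # each view contributes two linear constraints on the homogeneous
    # point X: x * (P[2] @ X) - P[0] @ X = 0 and y * (P[2] @ X) - P[1] @ X = 0
    A = np.vstack([
        xl * P1[2] - P1[0],
        yl * P1[2] - P1[1],
        xr * P2[2] - P2[0],
        yr * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null vector of A, up to scale
    return X[:3] / X[3]
```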
binocular cameras at different positions capture different marker center points, giving the coordinates of those center points in the world coordinate system established at each unit's own left-eye camera; using the rotation-translation transform (R, T) between units, these coordinates are converted into the world coordinate system established at the left eye of the first binocular camera, thereby obtaining the coordinates of all observed marker center points in a single world coordinate system and realizing the three-dimensional stitching of the observed objects.
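The fusion step above amounts to applying each unit's rotation-translation pair to its locally measured center points; a sketch follows (the per-unit extrinsics relative to the first unit's left-eye frame are assumed pre-calibrated, and the function name is invented):

```python
import numpy as np

def fuse_to_first_frame(points_per_camera, extrinsics):
    """Map marker centre points from each binocular unit's left-eye
    frame into the first unit's world frame: p_world = R @ p + T,
    where (R, T) is that unit's pose relative to the first unit."""
    fused = []
    for pts, (R, T) in zip(points_per_camera, extrinsics):
        for p in pts:
            fused.append(R @ np.asarray(p, dtype=float) + T)
    return np.array(fused)
```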
CN202110358465.4A 2021-04-02 2021-04-02 Underwater marker identification and splicing method for networking of multiple groups of binocular cameras Active CN113516007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110358465.4A CN113516007B (en) 2021-04-02 2021-04-02 Underwater marker identification and splicing method for networking of multiple groups of binocular cameras


Publications (2)

Publication Number Publication Date
CN113516007A true CN113516007A (en) 2021-10-19
CN113516007B CN113516007B (en) 2023-12-22

Family

ID=78062195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110358465.4A Active CN113516007B (en) 2021-04-02 2021-04-02 Underwater marker identification and splicing method for networking of multiple groups of binocular cameras

Country Status (1)

Country Link
CN (1) CN113516007B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175261A (en) * 2011-01-10 2011-09-07 深圳大学 Visual measuring system based on self-adapting targets and calibrating method thereof
CN104007760A (en) * 2014-04-22 2014-08-27 济南大学 Self-positioning method in visual navigation of autonomous robot
CN104299261A (en) * 2014-09-10 2015-01-21 深圳大学 Three-dimensional imaging method and system for human body
CN105469418A (en) * 2016-01-04 2016-04-06 中车青岛四方机车车辆股份有限公司 Photogrammetry-based wide-field binocular vision calibration device and calibration method
CN108734744A (en) * 2018-04-28 2018-11-02 国网山西省电力公司电力科学研究院 A kind of remote big field-of-view binocular scaling method based on total powerstation
CN112509125A (en) * 2020-12-14 2021-03-16 广州迦恩科技有限公司 Three-dimensional reconstruction method based on artificial markers and stereoscopic vision




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant