CN117474993B - Underwater image feature point sub-pixel position estimation method and device - Google Patents

Underwater image feature point sub-pixel position estimation method and device

Info

Publication number: CN117474993B (grant of application CN202311407550.0A)
Authority: CN (China)
Legal status: Active (granted)
Inventors: 卞鑫宇, 黄海, 石健, 胡敬伟, 李凌宇, 孙溢泽, 张宗羽
Assignee (original and current): Harbin Engineering University
Application filed by Harbin Engineering University
Priority to CN202311407550.0A
Publication of application CN117474993A; granted as CN117474993B
Other languages: Chinese (zh)

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis → G06T7/70 Determining position or orientation of objects or cameras → G06T7/73 using feature-based methods → G06T7/75 involving models
    • G06T7/00 Image analysis → G06T7/20 Analysis of motion → G06T7/246 using feature-based methods, e.g. the tracking of corners or segments → G06T7/251 involving models
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/30 Subject of image; Context of image processing → G06T2207/30248 Vehicle exterior or interior → G06T2207/30252 Vehicle exterior; Vicinity of vehicle


Abstract

A method and a device for estimating the sub-pixel positions of feature points in underwater images, in the technical field of underwater image processing. Aimed at the high image-processing complexity and high configuration requirements of prior-art image sub-pixel localization methods, the invention provides the following scheme: collect a preset number of feature points on a moving target; define a time-ordinal matrix, an integer-point feedback matrix of the image feature points, and a nonlinear-fit parameter matrix to be solved; solve the nonlinear-fit parameter matrix; obtain the estimate of the next sub-pixel from that solution; define a rate-of-change matrix of the feature points' integer feedback positions; collect the sub-pixel linear prediction values of the feature points in the later, slowly changing stage of the visual-servo control operation; and obtain the next image-plane feature point sub-pixel estimate. The method is suited to work on the localization of image sub-pixels.

Description

Underwater image feature point sub-pixel position estimation method and device
Technical Field
The invention relates to the technical field of underwater image processing, and in particular to sub-pixel position estimation of underwater image feature points.
Background
The performance of a visual servoing method depends closely on the visual information feedback provided by computer vision. During UVMS (underwater vehicle-manipulator system) visual servoing operations, however, the pixel positions fed back by the camera are discretized to integer values by the photosensitive elements, which introduces significant distortion. The visual feedback information then becomes step-like, which can in turn cause undesirable chattering of the controller.
In conventional computer vision, researchers have studied the localization of image sub-pixels extensively. Among the different sub-pixel solving methods, four classical ones are the Newton-Raphson algorithm, the gradient algorithm, the gray-gradient iteration method, and correlation-coefficient surface fitting. All of these perform image processing on the digital image, which on the one hand increases the complexity of the image processing and raises the configuration requirements of the UVMS vision system, and on the other hand makes the computed result depend on the imaging quality of the image.
Disclosure of Invention
Aimed at the prior-art problem that image sub-pixel localization methods perform image processing on the digital computer-vision image, which on the one hand increases the complexity of the image processing and raises the configuration requirements of the UVMS vision system, and on the other hand makes the computed result depend on the imaging quality of the image, the invention provides the following technical scheme:
An underwater image feature point sub-pixel position estimation method, comprising the following steps:
collecting a preset number of feature points on a moving target;
defining a time-ordinal matrix, an integer-point feedback matrix of the image feature points, and a nonlinear-fit parameter matrix to be solved;
solving the nonlinear-fit parameter matrix according to the feature points, the time-ordinal matrix, and the image feature point integer-point feedback matrix;
obtaining the estimate of the next sub-pixel from the solved nonlinear-fit parameter matrix;
defining a rate-of-change matrix of the feature points' integer feedback positions;
collecting the sub-pixel linear prediction values of the feature points in the later, slowly changing stage of the visual-servo control operation;
and obtaining the next image-plane feature point sub-pixel estimate from the estimate of the next sub-pixel, the rate-of-change matrix of the feature points' integer feedback positions, and the prediction values.
Further, a preferred embodiment is provided, wherein the image feature point integer-point feedback matrix is built from four consecutive integer-point feedbacks of the image feature points.
Further, a preferred embodiment is provided, wherein the nonlinear-fit parameter matrix to be solved is the parameter matrix of the fit over those four consecutive feedbacks.
Further, a preferred embodiment is provided, wherein, when the rank of the time-ordinal matrix is less than 4, the nonlinear-fit parameter matrix is solved by redefining the matrix of four consecutive integer-point feedbacks of the image feature points.
Further, a preferred embodiment is provided, wherein the estimate of the next sub-pixel is obtained after solving the parameter matrix of the cubic nonlinear fit.
Further, a preferred embodiment is provided, wherein the next image-plane feature point sub-pixel estimate is further obtained through an estimation-error matrix, an estimation-error accumulation matrix, and an estimation-error change matrix.
Further, a preferred embodiment is provided, wherein the preset number of feature points locally satisfy a cubic nonlinear function model.
Based on the same inventive concept, the invention also provides an underwater image feature point sub-pixel position estimation device, comprising:
a module for collecting a preset number of feature points on a moving target;
a module for defining a time-ordinal matrix, an integer-point feedback matrix of the image feature points, and a nonlinear-fit parameter matrix to be solved;
a module for solving the nonlinear-fit parameter matrix according to the feature points, the time-ordinal matrix, and the image feature point integer-point feedback matrix;
a module for obtaining the estimate of the next sub-pixel from the solved nonlinear-fit parameter matrix;
a module for defining a rate-of-change matrix of the feature points' integer feedback positions;
a module for collecting the sub-pixel linear prediction values of the feature points in the later, slowly changing stage of the visual-servo control operation;
and a module for obtaining the next image-plane feature point sub-pixel estimate from the estimate of the next sub-pixel, the rate-of-change matrix of the feature points' integer feedback positions, and the prediction values.
Based on the same inventive concept, the present invention also provides a computer storage medium for storing a computer program, which when read by a computer performs the method.
Based on the same inventive concept, the present invention also provides a computer comprising a processor and a storage medium, the computer performing the method when the processor reads a computer program stored in the storage medium.
Compared with the prior art, the technical scheme provided by the invention has the following advantages:
The invention provides an underwater image feature point sub-pixel position estimation method that supplies a linear prediction algorithm and a nonlinear prediction algorithm for different motion conditions and stages.
By combining them in an estimation model, the method achieves real-time sub-pixel-accurate feedback of the target in the camera view, addressing the poor operation accuracy caused by the underwater low-light environment, low camera frame rates, and the limited size of the camera photosensitive element with which the underwater robot-manipulator system observes its target.
The method breaks the absolute dependence of underwater visual perception on the camera hardware, extracting real underwater environment information and operation-target information under limited visual-perception hardware conditions and limited visual-processing computing resources.
The method is easy to implement, has low hardware computing-resource requirements, and offers high estimation speed, high precision, and strong robustness.
The method is suitable for application to work on the localization of image sub-pixels.
Drawings
Fig. 1 is the camera imaging model mentioned in the eleventh embodiment;
Fig. 2 shows the position change of feature point 1 in the three feedback cases of the eleventh embodiment;
Fig. 3 shows the position change of feature point 2 in the three feedback cases of the eleventh embodiment;
Fig. 4 shows the position change of feature point 3 in the three feedback cases of the eleventh embodiment;
Fig. 5 shows the position change of feature point 4 in the three feedback cases of the eleventh embodiment;
Fig. 6 compares the error variation in the image plane for the three feedback cases of the eleventh embodiment;
Fig. 7 is a logic diagram of the sub-pixel estimation algorithm of the eleventh embodiment.
Detailed Description
In order to make the advantages and benefits of the technical solution provided by the present invention more apparent, the technical solution provided by the present invention will now be described in further detail with reference to the accompanying drawings, in which:
A first embodiment provides a method for estimating the sub-pixel positions of feature points in an underwater image, the method comprising:
collecting a preset number of feature points on a moving target;
defining a time-ordinal matrix, an integer-point feedback matrix of the image feature points, and a nonlinear-fit parameter matrix to be solved;
solving the nonlinear-fit parameter matrix according to the feature points, the time-ordinal matrix, and the image feature point integer-point feedback matrix;
obtaining the estimate of the next sub-pixel from the solved nonlinear-fit parameter matrix;
defining a rate-of-change matrix of the feature points' integer feedback positions;
collecting the sub-pixel linear prediction values of the feature points in the later, slowly changing stage of the visual-servo control operation;
and obtaining the next image-plane feature point sub-pixel estimate from the estimate of the next sub-pixel, the rate-of-change matrix of the feature points' integer feedback positions, and the prediction values.
In particular, the steps of the technical scheme provided by this embodiment are as follows:
step one: for m feature points existing on a moving object, the m feature points locally meet a three-time nonlinear function model, and then we can obtain:
wherein, And/>Is the current integer feedback characteristic point, the nonlinear parameter to be solved and the number m of the characteristic points.
Step two: thus, for any four consecutive feature point integer feedback, we can get:
Wherein the method comprises the steps of Is four continuous feature point integer feedbacks from the time t,Is the parameter of the t+4 th nonlinear fit.
Step three: defining a time ordinal matrix T t+4 as:
Defining a continuous four-time image characteristic point integral point feedback matrix X t+4 and an i+4th nonlinear fitting parameter matrix A t+4 to be solved as follows:
And/>
Step four: when rank (T t+4) =4, the solution of the to-be-solved parameter matrix a t+4 of the nonlinear fit is:
At+4=Tt+4 -1Xt+4
When rank (T t+4) < 4, we can manually change the time ordinal matrix T t+4 and the continuous four-time image feature point full-point feedback matrix X t+4 as:
And
Where n=3-rank (T),And feeding back information for the n+1 times of integral points before the t+1 time for history storage. Then, a nonlinear fitting parameter matrix to be solved a t+4 can be found:
At+4=Tt+4 -1Xt+4
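Steps three to five can be sketched in NumPy as follows. This is a minimal illustration (function names are ours), taking the time-ordinal matrix to be the Vandermonde matrix of the four sample instants, consistent with the cubic model above:

```python
import numpy as np

def fit_cubic(times, feedback):
    """Solve A_{t+4} = T_{t+4}^{-1} X_{t+4} for the cubic model
    x(t) = a0 + a1*t + a2*t^2 + a3*t^3.

    times    : the 4 sample instants t, t+1, t+2, t+3
    feedback : (4, m) integer-point feedback, one column per feature coordinate
    """
    # time-ordinal matrix: rows [1, t, t^2, t^3]
    T = np.vander(np.asarray(times, float), N=4, increasing=True)
    return np.linalg.solve(T, np.asarray(feedback, float))  # A, shape (4, m)

def next_subpixel(A, t_next):
    """Step five: evaluate the fitted cubic at the next instant."""
    row = np.array([1.0, t_next, t_next**2, t_next**3])
    return row @ A  # (m,) sub-pixel estimate
```

For a rank-deficient time-ordinal matrix, older stored feedbacks would be appended to `times` and `feedback` before solving, as step four describes.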
Step five: then, after obtaining the parameters that need to be calculated for the three-time nonlinear fitting model, the next subpixel estimation is:
step six: the change rate matrix dX defining the characteristic point integer feedback position change is:
Step seven: the sub-pixel linear prediction estimation for obtaining the characteristic points at the later stage of the slow motion change of the visual servo control operation is as follows:
wherein, Is a weighted coefficient vector of the feature point motion rate.
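A sketch of the fallback in steps six and seven. The exact composition of dX is an assumption here (successive differences of the four stored feedbacks), and `weights` stands in for the weighted coefficient vector ρ:

```python
import numpy as np

def linear_prediction(feedback, weights):
    """Steps six and seven (sketched): predict the next sub-pixel position
    from the rate of change of the integer feedback.

    feedback : (4, m) last four integer-point feedbacks
    weights  : (3,) weighting of the three successive differences (rho)
    """
    X = np.asarray(feedback, float)
    dX = np.diff(X, axis=0)  # (3, m) rate-of-change matrix
    return X[-1] + np.asarray(weights, float) @ dX
```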
Step eight: define the estimation-error matrix E_{t+4}, the estimation-error accumulation matrix I_{t+4}, and the estimation-error change matrix D_{t+4} as

E_{t+4} = X_{t+4} − X̂_{t+4},

I_{t+4} = I_{t+3} + E_{t+4},

D_{t+4} = E_{t+4} − E_{t+3},

where X̂_{t+4} is the matrix of the corresponding sub-pixel estimates.
Step nine: the image-plane feature point sub-pixel estimate at time t+5 is then obtained by correcting the prediction with closed-loop feedback on the error terms,

x̂(t+5) = x̂_pred(t+5) + k_P E_{t+4} + k_I I_{t+4} + k_D D_{t+4},

where k = [k_P, k_I, k_D] is the closed-loop feedback parameter vector.
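Steps eight and nine amount to a PID-style closed-loop correction of the prediction. The combination below (proportional on E, integral on I, derivative on D) is our reading of the closed-loop feedback parameter vector, and the gains are illustrative:

```python
import numpy as np

class SubpixelCorrector:
    """Closed-loop correction of the sub-pixel prediction (steps eight
    and nine, sketched). Gains (kp, ki, kd) play the role of the
    closed-loop feedback parameter vector and are illustrative."""

    def __init__(self, m, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.I = np.zeros(m)       # estimation-error accumulation matrix
        self.E_prev = np.zeros(m)  # previous estimation-error matrix

    def correct(self, prediction, integer_feedback):
        E = np.asarray(integer_feedback, float) - np.asarray(prediction, float)
        self.I = self.I + E        # I_{t+4} = I_{t+3} + E_{t+4}
        D = E - self.E_prev        # D_{t+4} = E_{t+4} - E_{t+3}
        self.E_prev = E
        return prediction + self.kp * E + self.ki * self.I + self.kd * D
```

The closed loop keeps the deviation between the sub-pixel estimate and the integer feedback from accumulating past a pixel, as the description in section C below requires.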
The implementation steps of the present embodiment further include some specific implementation principles, as follows:
a) Local nonlinear fitting model for feature point position change in image plane used in step one
According to the principle of pinhole imaging, an imaging model of the camera can be built, as shown in fig. 1. Suppose there is a target feature point P_1 in the geodetic frame {I}; its description in {I} is ^I P_1 = [^I x_1, ^I y_1, ^I z_1]^T. To obtain the projection of this feature point on the camera image plane, its position must first be converted from the geodetic frame {I} to the camera frame {C}.
Let the homogeneous transformation from the geodetic frame {I} to the camera frame {C} be ^C_I T. Defining the description of the target feature point P_1 in the camera frame {C} as ^C P_1 = [^C x_1, ^C y_1, ^C z_1]^T, the coordinates of the target feature point in the camera frame are obtained as

^C P_1 = ^C_I T · ^I P_1.

Define the projection coordinates in the corresponding image plane as ^C p_1 = [u_1, v_1]^T. If the focal length of the camera is f, the perspective projection relation gives

u_1 = f · ^C x_1 / ^C z_1,  v_1 = f · ^C y_1 / ^C z_1.

Written in homogeneous-transformation form, this is

^C z_1 [u_1, v_1, 1]^T = [ f 0 0 0; 0 f 0 0; 0 0 1 0 ] [^C x_1, ^C y_1, ^C z_1, 1]^T.    (1)
In practice, however, the relation between the image-plane projection coordinates of the target feature point and its camera-frame coordinates is not computed exactly as in equation (1), for two main reasons. On the one hand, the origin of the image plane is generally chosen not at the optical center but at the upper-left or lower-left corner of the image plane. On the other hand, the projection coordinates ^C p_1 = [u_1, v_1]^T are defined in units such as meters, whereas the actual image-plane coordinates (u_C, v_C) are generally in pixels. Let the pixel density be ρ_x × ρ_y, with ρ_x and ρ_y in pixel/m, and define (u_0, v_0) as the principal point, where the optical axis crosses the image plane. Then

u_C = u_0 + α · ^C x_1 / ^C z_1,  v_C = v_0 + β · ^C y_1 / ^C z_1,

where α = ρ_x f and β = ρ_y f.
In addition, for manufacturing reasons the two coordinate axes of the camera image plane are not necessarily exactly perpendicular. If the angle between them is γ, the above may be further expressed as

u_C = u_0 + α · ^C x_1 / ^C z_1 − α cot γ · ^C y_1 / ^C z_1,  v_C = v_0 + (β / sin γ) · ^C y_1 / ^C z_1.

Therefore, the homogeneous relationship between the projection position of the target feature point in the image plane and its position in the camera frame {C} is

^C z_1 [u_C, v_C, 1]^T = [ α  −α cot γ  u_0  0; 0  β/sin γ  v_0  0; 0  0  1  0 ] [^C x_1, ^C y_1, ^C z_1, 1]^T.    (2)
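The projection chain above (camera-frame conversion, then pixel intrinsics with possibly non-perpendicular axes) can be sketched as follows; the default intrinsics are the 'eye in hand' values quoted later in the eleventh embodiment:

```python
import numpy as np

def project(P_I, T_CI, u0=512.0, v0=512.0, alpha=800.0, beta=800.0,
            gamma=np.pi / 2):
    """Project a point from the geodetic frame {I} onto the image plane.

    P_I   : (3,) target feature point in {I}
    T_CI  : (4, 4) homogeneous transform taking {I} coordinates into {C}
    gamma : angle between the image axes (pi/2 means perpendicular axes)
    """
    Pc = T_CI @ np.append(P_I, 1.0)  # ^C P_1 in homogeneous coordinates
    x, y, z = Pc[0], Pc[1], Pc[2]
    u = u0 + alpha * x / z - (alpha / np.tan(gamma)) * y / z
    v = v0 + (beta / np.sin(gamma)) * y / z
    return np.array([u, v])
```

With `gamma = pi/2` the skew term vanishes and the model reduces to the usual perpendicular-axis intrinsics.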
Now, as long as the homogeneous transformation of the camera frame {C} with respect to the geodetic frame {I} is known, the projection position ^C p_1 = [u_1, v_1]^T on the camera image plane of a target feature point ^I P_1 = [^I x_1, ^I y_1, ^I z_1]^T in the geodetic frame {I} can be obtained.
Assuming the camera is rigidly attached to the coordinate frame {T}, then for a target with coordinates ^I P_1 = [^I x_1, ^I y_1, ^I z_1]^T in frame {I}, and considering equation (2), the complete perspective projection model can be written accordingly.
Assume that γ = π/2, i.e. the two coordinate axes of the camera are exactly perpendicular. The motion velocity of the image-plane feature point can then be related to the motion velocity of the camera:

[u̇_1, v̇_1]^T = J_img · v_C,

where v_C is the velocity of the camera. J_img is called the image Jacobian matrix (or interaction matrix); it describes the link between the motion velocity of the feature point in the image plane and the motion velocity of the camera in the geodetic frame {I}, and its value depends on the camera intrinsic parameters and the depth of the feature point. Note that the image Jacobian matrix is derived for a single image-plane feature point; if n feature points exist, the image Jacobian for the whole set of image-plane feature points is obtained by stacking rows of the Jacobian.
The motion of an underwater vehicle is generally considered continuous and twice differentiable. Furthermore, the motion mapping between the underwater vehicle and the hand-mounted camera can be represented linearly by a Jacobian matrix. Hence, setting aside the pixelation limit of the camera image feedback, the motion of the projection of a feature point on the camera image plane is continuously twice differentiable. The image Jacobian J_img is a linear mapping matrix, and since the camera velocity v_C is a continuous variable, the position change of the feature point at the image-plane projection ^C p_1 = [u_1, v_1]^T is also a continuous variable. A cubic nonlinear continuous function is therefore used to fit the local position-change process of each feature point.
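The patent text does not reproduce its image Jacobian explicitly; the standard closed form for a point feature from the visual-servoing literature (Chaumette's convention, in normalized image coordinates x, y at depth Z) is shown below as a reference sketch:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction (image Jacobian) matrix of a point feature,
    in normalized image coordinates (x, y) at depth Z, relating image-plane
    velocity to the camera's translational and angular velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
```

For n feature points, the 2n×6 Jacobian of the whole set is obtained by stacking these 2×6 blocks, as the text describes.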
B) Model solving is carried out on the artificial expansion matrix contained in the fourth step
The solving equation for the model's undetermined parameter matrix in step four is A_{t+4} = T_{t+4}^{-1} X_{t+4}. When the rank of the time-ordinal matrix satisfies rank(T) < 4 while rank(T|X) = 4, the linear system is inconsistent and has no solution, so solving for the parameter matrix A_{t+4} fails. A full-rank matrix is formed by additionally extracting the previous n+1 integer-point feedbacks and expanding the original time-ordinal matrix into a new time-ordinal matrix, so that rank(T) = rank(T|X) and A_{t+4} = T_{t+4}^{-1} X_{t+4} is solvable.
C) Model fitting feedback closed-loop correction algorithm contained in steps seven, eight and nine
In some cases the nonlinear fit degenerates, leaving only trivial solutions for the parameter matrix A to be solved. The cause is that the integer feedbacks of the feature points overlap in the u or v direction of the image plane, i.e. the feature points are no longer moving in that direction; in particular, four consecutive samples may return the same integer feedback value in the u or v direction, and a local cubic nonlinear fit is then no longer suitable. Note, however, that the motion of the feature points on the image plane must be continuous, and its rate of change is continuous. Therefore, when the integer feedback of the feature points shows heavy repetition and distortion, we estimate the rate of change of the feature points at a given moment and use it to estimate their positions, based on the analysis above.
The rate-of-change matrix dX_{t+4} of the feature points' integer feedback positions is defined as the successive differences of the stored feedbacks,

dX_{t+4} = [ x(t+1) − x(t);  x(t+2) − x(t+1);  x(t+3) − x(t+2) ].

The sub-pixel linear prediction of the feature points in the later, slowly changing stage of the visual-servo control job is then

x̂_pred(t+4) = x(t+3) + ρ^T dX_{t+4},

where ρ is the weighted coefficient vector of the feature point motion rate. As the estimation error accumulates, the deviation between the estimated sub-pixel and the integer pixel value can exceed 1, which leads to erroneous estimates. We therefore use the estimation error to add closed-loop feedback to the sub-pixel prediction, so that the sub-pixel estimation error converges to no more than 1 pixel. Define the estimation-error matrix E_{t+4}, the estimation-error accumulation matrix I_{t+4}, and the estimation-error change matrix D_{t+4} as

E_{t+4} = X_{t+4} − X̂_{t+4},

I_{t+4} = I_{t+3} + E_{t+4},

D_{t+4} = E_{t+4} − E_{t+3},

where X̂_{t+4} is the matrix of the corresponding sub-pixel estimates.
Then the image-plane feature point sub-pixel estimate at time t+5 is

x̂(t+5) = x̂_pred(t+5) + k_P E_{t+4} + k_I I_{t+4} + k_D D_{t+4},

where k = [k_P, k_I, k_D] is the closed-loop feedback parameter vector. Figs. 2 to 5 show the sub-pixel prediction and estimation results for the four feature points during visual servoing.
A second embodiment further limits the method of the first embodiment: the image feature point integer-point feedback matrix is built from four consecutive integer-point feedbacks of the image feature points.
A third embodiment further limits the method of the second embodiment: the nonlinear-fit parameter matrix to be solved is the parameter matrix of the fit over those four consecutive feedbacks.
A fourth embodiment further limits the method of the third embodiment: when the rank of the time-ordinal matrix is less than 4, the nonlinear-fit parameter matrix is solved by redefining the matrix of four consecutive integer-point feedbacks of the image feature points.
A fifth embodiment further limits the method of the first embodiment: the estimate of the next sub-pixel is obtained after solving the parameter matrix of the cubic nonlinear fit.
A sixth embodiment further limits the method of the first embodiment: the next image-plane feature point sub-pixel estimate is further obtained through an estimation-error matrix, an estimation-error accumulation matrix, and an estimation-error change matrix.
A seventh embodiment further limits the method of the first embodiment: the preset number of feature points locally satisfy a cubic nonlinear function model.
An eighth embodiment provides an underwater image feature point sub-pixel position estimation device, comprising:
a module for collecting a preset number of feature points on a moving target;
a module for defining a time-ordinal matrix, an integer-point feedback matrix of the image feature points, and a nonlinear-fit parameter matrix to be solved;
a module for solving the nonlinear-fit parameter matrix according to the feature points, the time-ordinal matrix, and the image feature point integer-point feedback matrix;
a module for obtaining the estimate of the next sub-pixel from the solved nonlinear-fit parameter matrix;
a module for defining a rate-of-change matrix of the feature points' integer feedback positions;
a module for collecting the sub-pixel linear prediction values of the feature points in the later, slowly changing stage of the visual-servo control operation;
and a module for obtaining the next image-plane feature point sub-pixel estimate from the estimate of the next sub-pixel, the rate-of-change matrix of the feature points' integer feedback positions, and the prediction values.
The ninth embodiment provides a computer storage medium storing a computer program, which when read by a computer performs the method provided in the first embodiment.
A tenth embodiment provides a computer comprising a processor and a storage medium; when the processor reads a computer program stored in the storage medium, the computer performs the method provided in the first embodiment.
An eleventh embodiment, described with reference to figs. 1 to 7, illustrates the foregoing embodiments with a specific example, as follows:
The simulation has the UVMS carry out a one-point stabilization control procedure under the image-based visual servoing control method, with the imaging interval of the image set to 0.05 s. Using the proposed image sub-pixel observer algorithm, the integer-discretized imaging results obtained from the camera model are plotted against the actual image feedback positions, as shown in figs. 2, 3, 4 and 5. Figs. 2 to 5 show, for target feature points 1 to 4 respectively, the discrete integer projection position changes fed back in the camera image plane under the limit of the camera's photosensitive element, the sub-pixel position changes from the image sub-pixel observer algorithm, and the ideal feedback position changes that ignore the photosensitive element. The intrinsic parameters of the UVMS 'eye in hand' camera are: (u_0, v_0) = (512, 512), α = 800, β = 800. The homogeneous transformation matrix of the 'eye in hand' camera to the end effector is:
Step one: define the time-ordinal matrix T_{t+4} as

T_{t+4} = [ 1   t     t^2      t^3
            1   t+1   (t+1)^2  (t+1)^3
            1   t+2   (t+2)^2  (t+2)^3
            1   t+3   (t+3)^2  (t+3)^3 ],

and define the matrix X_{t+4} of four consecutive integer-point feedbacks of the image feature points and the (t+4)-th nonlinear-fit parameter matrix A_{t+4} to be solved, with one row of X_{t+4} per sample and one row of A_{t+4} per coefficient order.
Step two: when rank(T_{t+4}) = 4, the solution of the nonlinear-fit parameter matrix A_{t+4} is

A_{t+4} = T_{t+4}^{-1} X_{t+4}.

When rank(T_{t+4}) < 4, we can manually augment the time-ordinal matrix T_{t+4} and the matrix X_{t+4} of four consecutive integer-point feedbacks with the n+1 historically stored integer-point feedbacks taken before time t+1, where n = 3 − rank(T). The nonlinear-fit parameter matrix A_{t+4} can then be found as

A_{t+4} = T_{t+4}^{-1} X_{t+4}.
Step three: then, with the parameters of the cubic nonlinear fitting model in hand, the next sub-pixel estimate is

x̂(t+4) = a_0 + a_1(t+4) + a_2(t+4)^2 + a_3(t+4)^3.
step four: the change rate matrix dX defining the characteristic point integer feedback position change is:
Step five: the sub-pixel linear prediction estimation for obtaining the characteristic points at the later stage of the slow motion change of the visual servo control operation is as follows:
/>
wherein, Is a weighted coefficient vector of the feature point motion rate.
Step six: define the estimation-error matrix E_{t+4}, the estimation-error accumulation matrix I_{t+4}, and the estimation-error change matrix D_{t+4} as

E_{t+4} = X_{t+4} − X̂_{t+4},

I_{t+4} = I_{t+3} + E_{t+4},

D_{t+4} = E_{t+4} − E_{t+3},

where X̂_{t+4} is the matrix of the corresponding sub-pixel estimates.
Step seven: the image-plane feature point sub-pixel estimate at time t+5 is obtained as

x̂(t+5) = x̂_pred(t+5) + k_P E_{t+4} + k_I I_{t+4} + k_D D_{t+4},

where k = [k_P, k_I, k_D] is the closed-loop feedback parameter vector.
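The mechanism can be exercised on a synthetic trajectory: a smooth sub-pixel motion is quantized to integers, emulating the photosensitive-element limit, and the cubic fit over each window of four consecutive feedbacks is extrapolated one step ahead. This is an illustrative sketch, not the patent's simulation:

```python
import numpy as np

def true_position(t):
    # smooth sub-pixel trajectory of one image-plane coordinate
    return 512.0 + 3.0 * np.sin(0.05 * np.asarray(t, float))

def observe(t):
    # integer feedback limited by the photosensitive element
    return np.round(true_position(t))

def cubic_estimate(t):
    """Cubic-fit estimate at time t from the four previous integer feedbacks."""
    ts = np.arange(t - 4, t, dtype=float)
    T = np.vander(ts, N=4, increasing=True)  # time-ordinal matrix
    A = np.linalg.solve(T, observe(ts))      # A = T^-1 X
    return np.array([1.0, t, t**2, t**3]) @ A

err_int = max(abs(observe(t) - true_position(t)) for t in range(4, 40))
err_fit = max(abs(cubic_estimate(t) - true_position(t)) for t in range(4, 40))
```

In this run the integer feedback error is bounded by half-pixel quantization, while the raw cubic extrapolation of quantized data can overshoot (the one-step extrapolation weights [-1, 4, -6, 4] can amplify half-pixel errors to several pixels in the worst case); this is precisely why the scheme adds the linear fallback and the closed-loop error correction described above.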
In the simulation process, as can be seen from the partial enlarged view at the left of fig. 2, in the u-axis direction of the image plane of the camera, the movement speed of the feature point 1 is very slow, and the feedback display is discrete and integral limited by the photosensitive element of the camera, so that the position feedback of the feature point 1 in the image plane has a stepped phenomenon. It can be seen from a combination of the results of ideal feedback positions of feedback displays without consideration of camera photosensitive elements that discrete integer projection positions of feedback displays limited by camera photosensitive elements produce large feedback distortions. Especially in the control process of visual servo, the image feedback phenomenon of the stepwise distortion further causes the output of the controller to buffeting, so that the final control accuracy is reduced.
In contrast, when the position of feature point 1 changes relatively quickly, the predicted point of feature point 1 given by the sub-pixel observer algorithm is closer to the ideal feedback position, which does not consider the camera photosensitive element, than the discrete integer projection position limited by the photosensitive element. This shows, first, that the sub-pixel observer algorithm can effectively improve the observation precision beyond the observation quality limited by the camera photosensitive element. Second, because the predicted point is closer to the ideal feedback position than the discrete integer projection position, the stepped feedback distortion is eliminated to a certain extent. More realistic and smoother information is therefore fed back to the controller, buffeting of the controller output is avoided, and the control precision and stability of the controller are improved.
Furthermore, as shown in the upper-right enlarged view of fig. 2, in the final fine adjustment stage of the UVMS visual servoing operation, the position of feature point 1 moves very slowly in the image plane because the motion amplitude of the UVMS system decreases. The discrete integer projection positions fed back under the camera photosensitive element limitation are severely stepped and distorted, and they are the only information reference available to the image sub-pixel observer algorithm, so the algorithm is inevitably affected by this distortion and stepping. Nevertheless, the final comparison of the feedback results shows that the predicted point of feature point 1 based on the sub-pixel observer algorithm is still closer to the ideal feedback position, which does not consider the camera photosensitive element, than the discrete integer projection position limited by the photosensitive element.
The behavior of feature point 2, feature point 3 and feature point 4 in the image plane, shown in figs. 3, 4 and 5, is similar to that of feature point 1. When the position of a feature point changes rapidly, the feedback precision and accuracy of the predicted point based on the image sub-pixel observer algorithm are markedly improved compared with the discretized integer feedback limited by the camera photosensitive element. At the final stage of the simulation, however, the result of the algorithm is still affected by the discretized integer feedback values and shows some instability. Comparing figs. 2 to 5, the final precision of feature point 1 and feature point 3 is improved significantly, essentially reaching the ideal feedback situation, while the precision improvement of feature point 2 and feature point 4 remains limited by the discretized integer feedback of the photosensitive element.
Fig. 6 shows the error variation in the image plane for the three feedback cases. As shown in the left partial enlarged view of fig. 6, during the error convergence of the feature point positions in the image plane, the stepped phenomenon of the error convergence process based on the image sub-pixel observer algorithm is weakened compared with that of discrete integer-point feedback, and the buffeting during convergence is greatly reduced. In the figure, feature point 1 and feature point 3 exhibit a certain degree of buffeting under discrete integer-point feedback near 30 s, 34 s and 36 s of the simulation, whereas the error convergence based on the image sub-pixel observer algorithm is very stable. In addition, at the final stage of the whole simulation, the prediction error of all feature points relative to the ideal feedback, based on the image sub-pixel observer algorithm, is smaller than 1 pixel, demonstrating excellent and stable performance.
The technical solution provided by the present invention is described in further detail through several specific embodiments, so as to highlight the advantages and benefits of the technical solution provided by the present invention, however, the above specific embodiments are not intended to be limiting, and any reasonable modification and improvement, combination of embodiments, equivalent substitution, etc. of the present invention based on the spirit and principle of the present invention should be included in the scope of protection of the present invention.
In the description of the present invention, only the preferred embodiments of the present invention are described, and the scope of the claims of the present invention should not be limited thereby; furthermore, the descriptions of the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise. 
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or N executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present invention. Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. As with the other embodiments, if implemented in hardware, the steps may be implemented using any one or a combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.

Claims (10)

1. An underwater image feature point sub-pixel position estimation method is characterized by comprising the following steps:
Collecting a preset number of feature points on a moving target;
for the m feature points existing on a moving object, which locally satisfy a cubic nonlinear function model, obtaining:
wherein the coefficients shown are the nonlinear parameters to be solved, and m is the number of current integer feedback feature points;
for any four consecutive feature point integer feedbacks, obtaining:
wherein the former are the four consecutive feature point integer feedbacks from time t, and the latter are the parameters of the t+4th nonlinear fitting;
defining a time ordinal matrix, an integral point feedback matrix of image feature points and a nonlinear fitting parameter matrix to be solved;
Defining a time ordinal matrix T t+4 as:
Defining a continuous four-time image characteristic point integral point feedback matrix X t+4 and a t+4th nonlinear fitting parameter matrix A t+4 to be solved as follows:
and
According to the feature points, the time ordinal matrix, the image feature point integral point feedback matrix and the nonlinear fitting parameter matrix to be solved, solving the nonlinear fitting parameter matrix to be solved;
When rank (T t+4) =4, the solution of the to-be-solved parameter matrix a t+4 of the nonlinear fit is:
At+4=Tt+4 -1Xt+4
When rank (T t+4) < 4, the manually modified time ordinal matrix T t+4 and the continuous four-time image feature point integral point feedback matrix X t+4 are:
And
where n=3-rank(T), and the substituted entries are the n+1 historically stored integer-point feedback values before time t+1; then solving the nonlinear fitting parameter matrix A t+4 to be solved:
At+4=Tt+4 -1Xt+4
obtaining an estimated value of the next sub-pixel according to the solving of the nonlinear fitting parameter matrix to be solved;
after obtaining the parameters to be calculated for the cubic nonlinear fitting model, the next sub-pixel estimate is:
Defining a change rate matrix of the characteristic point integer feedback position change;
the change rate matrix dX defining the characteristic point integer feedback position change is:
Collecting sub-pixel linear prediction values of characteristic points at a later stage of slow motion change of visual servo control operation;
The sub-pixel linear prediction estimation for obtaining the characteristic points at the later stage of the slow motion change of the visual servo control operation is as follows:
wherein the vector above is a weighting coefficient vector of the feature point motion rate;
obtaining the next image plane characteristic point sub-pixel estimated value according to the estimated value of the next sub-pixel, the change rate matrix of the characteristic point integer feedback position change and the predicted value;
Defining an estimation error matrix E t+4, an estimation error accumulation matrix I t+4, and an estimation error change matrix D t+4 as follows:
And
It+4=It+3+Et+4
Dt+4=Et+4-Et+3
The image plane feature point sub-pixel estimation at time t+5 is obtained as follows:
wherein the vector above is a closed-loop feedback parameter vector.
2. The method for estimating a sub-pixel position of an underwater image feature point according to claim 1, wherein the image feature point integral point feedback matrix is a continuous four-time image feature point integral point feedback matrix.
3. The method for estimating the sub-pixel position of the characteristic point of the underwater image according to claim 2, wherein the nonlinear fitting parameter matrix to be solved is a continuous four-time nonlinear fitting parameter matrix to be solved.
4. A method for estimating a sub-pixel position of an underwater image feature point according to claim 3, wherein when the rank of the time series matrix is less than 4, the non-linear fitting parameter matrix to be solved is solved by defining a continuous four-time image feature point integral point feedback matrix.
5. The method for estimating the position of the sub-pixel of the characteristic point of the underwater image according to claim 1, wherein the estimated value of the next sub-pixel is obtained after solving the parameter matrix to be solved for nonlinear fitting is obtained three times.
6. The method of estimating a sub-pixel position of an underwater image feature point according to claim 1, wherein the sub-pixel estimation value of the feature point of the next image plane is further obtained by an estimation error matrix, an estimation error accumulation matrix and an estimation error variation matrix.
7. The method for estimating the sub-pixel position of the characteristic point of the underwater image according to claim 1, wherein the method comprises the following steps of.
8. An underwater image feature point sub-pixel position estimation device, comprising:
The module is used for collecting the preset number of feature points on the moving target;
for the m feature points existing on a moving object, which locally satisfy a cubic nonlinear function model, obtaining:
wherein the coefficients shown are the nonlinear parameters to be solved, and m is the number of current integer feedback feature points;
for any four consecutive feature point integer feedbacks, obtaining:
wherein the former are the four consecutive feature point integer feedbacks from time t, and the latter are the parameters of the t+4th nonlinear fitting; the module is used for defining a time ordinal matrix, an integral point feedback matrix of the image feature points and a nonlinear fitting parameter matrix to be solved;
Defining a time ordinal matrix T t+4 as:
Defining a continuous four-time image characteristic point integral point feedback matrix X t+4 and a t+4th nonlinear fitting parameter matrix A t+4 to be solved as follows:
and
The module is used for solving the nonlinear fitting parameter matrix to be solved according to the feature points, the time ordinal matrix, the image feature point integral point feedback matrix and the nonlinear fitting parameter matrix to be solved;
When rank (T t+4) =4, the solution of the to-be-solved parameter matrix a t+4 of the nonlinear fit is:
At+4=Tt+4 -1Xt+4
When rank (T t+4) < 4, the manually modified time ordinal matrix T t+4 and the continuous four-time image feature point integral point feedback matrix X t+4 are:
And
where n=3-rank(T), and the substituted entries are the n+1 historically stored integer-point feedback values before time t+1; then solving the nonlinear fitting parameter matrix A t+4 to be solved:
At+4=Tt+4 -1Xt+4
a module for obtaining the estimated value of the next sub-pixel according to the solution of the nonlinear fitting parameter matrix to be solved;
after obtaining the parameters to be calculated for the cubic nonlinear fitting model, the next sub-pixel estimate is:
A module for defining a change rate matrix of the characteristic point integer feedback position change;
the change rate matrix dX defining the characteristic point integer feedback position change is:
the module is used for collecting the sub-pixel linear prediction value of the characteristic point at the later stage of slow motion change of the visual servo control operation;
The sub-pixel linear prediction estimation for obtaining the characteristic points at the later stage of the slow motion change of the visual servo control operation is as follows:
wherein the vector above is a weighting coefficient vector of the feature point motion rate;
obtaining a next image plane characteristic point sub-pixel estimated value according to the estimated value of the next sub-pixel, the change rate matrix of the characteristic point integer feedback position change and the predicted value;
Defining an estimation error matrix E t+4, an estimation error accumulation matrix I t+4, and an estimation error change matrix D t+4 as follows:
And
It+4=It+3+Et+4
Dt+4=Et+4-Et+3
The image plane feature point sub-pixel estimation at time t+5 is obtained as follows:
wherein the vector above is a closed-loop feedback parameter vector.
9. A computer storage medium storing a computer program, wherein when the computer program is read by a computer, the computer performs a method for estimating a sub-pixel position of a feature point of an underwater image as claimed in claim 1.
10. A computer comprising a processor and a storage medium, wherein the computer performs a method of estimating sub-pixel positions of feature points of an underwater image as claimed in claim 1 when the processor reads a computer program stored in the storage medium.
CN202311407550.0A 2023-10-27 2023-10-27 Underwater image feature point sub-pixel position estimation method and device Active CN117474993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311407550.0A CN117474993B (en) 2023-10-27 2023-10-27 Underwater image feature point sub-pixel position estimation method and device


Publications (2)

Publication Number Publication Date
CN117474993A CN117474993A (en) 2024-01-30
CN117474993B true CN117474993B (en) 2024-05-24

Family

ID=89638993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311407550.0A Active CN117474993B (en) 2023-10-27 2023-10-27 Underwater image feature point sub-pixel position estimation method and device

Country Status (1)

Country Link
CN (1) CN117474993B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1434171A2 (en) * 1995-08-04 2004-06-30 Microsoft Corporation Method and system for texture mapping a source image to a destination image
JP2007219704A (en) * 2006-02-15 2007-08-30 Fujitsu Ltd Image position measuring method, image position measuring device, and image position measuring program
CN101068357A (en) * 2007-05-24 2007-11-07 北京航空航天大学 Frequency domain fast sub picture element global motion estimating method for image stability
CN101137003A (en) * 2007-10-15 2008-03-05 北京航空航天大学 Gray associated analysis based sub-pixel fringe extracting method
CN104331897A (en) * 2014-11-21 2015-02-04 天津工业大学 Polar correction based sub-pixel level phase three-dimensional matching method
CN108921893A (en) * 2018-04-24 2018-11-30 华南理工大学 A kind of image cloud computing method and system based on online deep learning SLAM
CN112700501A (en) * 2020-12-12 2021-04-23 西北工业大学 Underwater monocular sub-pixel relative pose estimation method
CN113592907A (en) * 2021-07-22 2021-11-02 广东工业大学 Visual servo tracking method and device based on optical flow
CN115143877A (en) * 2022-05-30 2022-10-04 中南大学 SAR offset tracking method, device, equipment and medium based on strong scattering amplitude suppression and abnormal value identification
WO2022247296A1 (en) * 2021-05-27 2022-12-01 广州柏视医疗科技有限公司 Mark point-based image registration method
WO2023050723A1 (en) * 2021-09-29 2023-04-06 深圳市慧鲤科技有限公司 Video frame interpolation method and apparatus, and electronic device, storage medium, program and program product
US11763485B1 (en) * 2022-04-20 2023-09-19 Anhui University of Engineering Deep learning based robot target recognition and motion detection method, storage medium and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914874B (en) * 2014-04-08 2017-02-01 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Adaptive Subpixel Estimation of Land Cover in a Remotely Sensed Multispectral Image;Senya Kiyasu, Kazunori Terashima, Seiji Hotta and Sueharu Miyahara;SICE-ICASE International Joint Conference 2006;20060830;全文 *
Color Difference-Based Subpixel Rendering for Matrix Displays;Suk-Ju Kang, Member, IEEE;JOURNAL OF DISPLAY TECHNOLOGY;20160426;全文 *
何壮.运动目标检测与跟踪及多像面协同成像技术研究.中国优秀博硕士论文库.全文. *

Also Published As

Publication number Publication date
CN117474993A (en) 2024-01-30

Similar Documents

Publication Publication Date Title
Arsigny et al. A fast and log-euclidean polyaffine framework for locally linear registration
US8538192B2 (en) Image processing device, image processing method and storage medium
US5727093A (en) Image processing method and apparatus therefor
EP1800245B1 (en) System and method for representing a general two dimensional spatial transformation
JP2009134509A (en) Device for and method of generating mosaic image
CN110246161B (en) Method for seamless splicing of 360-degree panoramic images
JPH076232A (en) System and method for execution of hybrid forward diffrence process so as to describe bezier spline
CN112767486B (en) Monocular 6D attitude estimation method and device based on deep convolutional neural network
CA2875426A1 (en) Resizing an image
CN111275621A (en) Panoramic image generation method and system in driving all-round system and storage medium
CN117474993B (en) Underwater image feature point sub-pixel position estimation method and device
EP3101622B1 (en) An image acquisition system
CN112633248B (en) Deep learning full-in-focus microscopic image acquisition method
US20210327021A1 (en) Image scaling method based on linear extension/contraction mode
WO2009104218A1 (en) Map display device
US20040156556A1 (en) Image processing method
Kim et al. A quad edge-based grid encoding model for content-aware image retargeting
US6421049B1 (en) Parameter selection for approximate solutions to photogrammetric problems in interactive applications
Singh et al. Single image super-resolution using adaptive domain transformation
CN116824070A (en) Real-time three-dimensional reconstruction method and system based on depth image
KR100908084B1 (en) Recording medium recording method of 3-dimensional coordinates of subject and program for executing the method in computer
CN116883565A (en) Digital twin scene implicit and explicit model fusion rendering method and application
Saux B-spline functions and wavelets for cartographic line generalization
CN115239559A (en) Depth map super-resolution method and system for fusion view synthesis
CN106056586A (en) Sub-pixel location method and sub-pixel location device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant